Similar documents
20 similar documents found (search time: 15 ms)
1.
We consider causal inference in randomized studies for survival data with a cure fraction and all-or-none treatment noncompliance. To describe the causal effects, we consider the complier average causal effect (CACE) and the complier effect on survival probability beyond time t (CESP), where CACE and CESP are defined as the difference in cure rate and in non-cured subjects' survival probability between treatment and control groups within the complier class. These estimands depend on the distributions of survival times in the treatment and control groups. Given covariates and latent compliance type, we model these distributions with a transformation promotion time cure model whose parameters are estimated by maximum likelihood. Both the infinite-dimensional parameter in the model and the mixture structure of the problem create computational difficulties, which are overcome by an expectation-maximization (EM) algorithm. We show the estimators are consistent and asymptotically normal. Simulation studies are conducted to assess the finite-sample performance of the proposed approach. We also illustrate our method by analyzing real data from the Health Insurance Plan of Greater New York.

2.
Procedures are developed for estimating the parameters of the general class of semiparametric models for recurrent events proposed by Peña and Hollander [(2004). Models for recurrent events in reliability and survival analysis. In: Soyer R., Mazzuchi T., Singpurwalla N. (Eds.), Mathematical Reliability: An Expository Perspective. Kluwer Academic Publishers, Dordrecht, pp. 105–123 (Chapter 6)]. This class of models incorporates an effective age function encoding the effect of changes after each event occurrence, such as the impact of an intervention; it models the impact of accumulating event occurrences on the unit; it admits a link function through which the effects of possibly time-dependent covariates are incorporated; and it allows the incorporation of unobservable frailty components, which induce dependencies among the inter-event times for each unit. The estimation procedures are semiparametric in that the baseline hazard function is nonparametrically specified. The sampling distribution properties of the estimators are examined through a simulation study, and the consequences of mis-specifying the model are analyzed. The results indicate that the flexibility of this general class of models provides a safeguard for analyzing recurrent event data, even data possibly arising from a frailty-less mechanism. The estimation procedures are applied to real data sets arising in biomedical and public health settings, as well as in reliability and engineering situations. In particular, the procedures are applied to a data set pertaining to times to recurrence of bladder cancer, and the results of the analysis are compared to those obtained using three methods of analyzing recurrent event data.

3.
The counting process with the Cox-type intensity function has been commonly used to analyse recurrent event data. This model essentially assumes that the underlying counting process is a time-transformed Poisson process and that the covariates have multiplicative effects on the mean and rate function of the counting process. Recently, Pepe and Cai, and Lawless and co-workers have proposed semiparametric procedures for making inferences about the mean and rate function of the counting process without the Poisson-type assumption. In this paper, we provide a rigorous justification of such robust procedures through modern empirical process theory. Furthermore, we present an approach to constructing simultaneous confidence bands for the mean function and describe a class of graphical and numerical techniques for checking the adequacy of the fitted mean–rate model. The advantages of the robust procedures are demonstrated through simulation studies. An illustration with multiple-infection data taken from a clinical study on chronic granulomatous disease is also provided.
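In the uncensored special case, the robust (Poisson-assumption-free) estimator of the mean function reduces to the sample average of the subjects' counting processes; a minimal stdlib sketch of that special case (the risk-set weighting needed under censoring, and the confidence bands, are not reproduced here):

```python
import bisect

def mean_cumulative_function(event_times, t_grid):
    """Estimate the mean function mu(t) = E[N(t)] of a recurrent-event
    process by averaging the subjects' counting processes. No censoring
    is handled in this sketch.

    event_times: list of per-subject sorted lists of event times.
    t_grid: time points at which to evaluate the estimate.
    """
    n = len(event_times)
    out = []
    for t in t_grid:
        # N_i(t) = number of subject i's events occurring by time t
        total = sum(bisect.bisect_right(times, t) for times in event_times)
        out.append(total / n)
    return out
```

For example, with two subjects observed at event times [1, 3] and [2], the estimate at t = 2 is (1 + 1)/2 = 1.0.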

4.
Missing data are common in many experiments, including surveys, clinical trials, epidemiological studies, and environmental studies. Unconstrained likelihood inference for generalized linear models (GLMs) with nonignorable missing covariates has been studied extensively in the literature. However, parameter orderings or constraints may occur naturally in practice, and thus the efficiency of a statistical method may be improved by incorporating parameter constraints into the likelihood function. In this paper, we consider constrained inference for analysing GLMs with nonignorable missing covariates under linear inequality constraints on the model parameters. Specifically, constrained maximum likelihood (ML) estimation is based on the gradient projection expectation maximization approach. Further, we investigate the asymptotic null distribution of the constrained likelihood ratio test (LRT). Simulation studies examine the empirical properties of the constrained ML estimators and LRTs and demonstrate the improved precision of these constrained techniques. An application to contaminant levels in an environmental study is also presented.

5.
The lognormal distribution is quite commonly used as a lifetime distribution. Data arising from life-testing and reliability studies are often left truncated and right censored. Here, the EM algorithm is used to estimate the parameters of the lognormal model based on left-truncated and right-censored data. The maximization step of the algorithm is carried out by two alternative methods, one involving approximation using a Taylor series expansion (leading to an approximate maximum likelihood estimate) and the other based on the EM gradient algorithm (Lange, 1995). These two methods are compared based on Monte Carlo simulations. The Fisher scoring method for obtaining the maximum likelihood estimates shows convergence problems in this setup, except when the truncation percentage is small. The asymptotic variance-covariance matrix of the MLEs is derived using the missing information principle (Louis, 1982), and asymptotic confidence intervals for the scale and shape parameters are then obtained and compared with corresponding bootstrap confidence intervals. Finally, some numerical examples are given to illustrate all the methods of inference developed here.
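As an illustrative sketch of the kind of E-step computation involved, the following implements EM for normal log-lifetimes with right censoring only (left truncation, the Taylor-series and EM gradient variants, and the information-matrix derivations of the article are not reproduced); the conditional moments of a censored value use the inverse Mills ratio:

```python
import math

def _phi(z):  # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _surv(z):  # standard normal survival function, 1 - Phi(z)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def em_censored_normal(obs, cens, n_iter=200):
    """EM for N(mu, sigma^2) from exact observations `obs` and
    right-censoring points `cens` (value known only to exceed c).
    For lognormal lifetimes, pass log-lifetimes and log censoring times."""
    data = obs + cens
    n = len(data)
    mu = sum(data) / n
    sigma = (sum((x - mu) ** 2 for x in data) / n) ** 0.5
    for _ in range(n_iter):
        # E-step: first two conditional moments of censored values given Z > c
        m1 = list(obs)
        m2 = [x * x for x in obs]
        for c in cens:
            a = (c - mu) / sigma
            lam = _phi(a) / _surv(a)  # inverse Mills ratio
            m1.append(mu + sigma * lam)                                   # E[Z | Z > c]
            m2.append(sigma**2 * (1 + a * lam) + 2 * mu * sigma * lam + mu**2)  # E[Z^2 | Z > c]
        # M-step: complete-data Gaussian MLE from the imputed moments
        mu = sum(m1) / n
        sigma = max(1e-8, sum(m2) / n - mu * mu) ** 0.5
    return mu, sigma
```

On simulated N(2, 1) data censored at 2.5, the iteration recovers the parameters to within sampling error.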

6.
It is well known that the normal mixture with unequal variances has an unbounded likelihood, and thus the corresponding global maximum likelihood estimator (MLE) is undefined. One commonly used solution is to put a constraint on the parameter space so that the likelihood is bounded; one can then run the EM algorithm on this constrained parameter space to find the constrained global MLE. However, choosing the constraint parameter is a difficult issue, and in many cases different choices may give different constrained global MLEs. In this article, we propose a profile log-likelihood method and a graphical way to find the maximum interior mode. Based on our proposed method, we can also see how the constraint parameter used in the constrained EM algorithm affects the constrained global MLE. Using two simulation examples and a real data application, we demonstrate the success of our new method in resolving the unboundedness of the mixture likelihood and locating the maximum interior mode.
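One common way to impose the boundedness constraint the abstract discusses is a floor on the component variances inside EM; a minimal two-component sketch under that assumption (the article's profile log-likelihood and graphical devices are not reproduced):

```python
import math

def em_mixture2(x, eps=1e-3, n_iter=200):
    """EM for a two-component normal mixture with the constraint
    sigma_k^2 >= eps, which keeps the likelihood bounded."""
    n = len(x)
    x_sorted = sorted(x)
    # crude initialization: split the sorted sample at the median
    mu = [sum(x_sorted[: n // 2]) / (n // 2),
          sum(x_sorted[n // 2:]) / (n - n // 2)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior component responsibilities
        resp = []
        for xi in x:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-0.5 * (xi - mu[k]) ** 2 / var[k]) for k in range(2)]
            s = w[0] + w[1]
            resp.append([w[0] / s, w[1] / s])
        # M-step, applying the variance floor eps
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / n
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = max(eps, sum(r[k] * (xi - mu[k]) ** 2
                                  for r, xi in zip(resp, x)) / nk)
    return pi, mu, var
```

The `max(eps, ...)` step is exactly where a degenerate component (variance collapsing to zero at a data point) is blocked; different choices of `eps` can lead to different constrained solutions, which is the sensitivity the article studies.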

7.
8.
Recurrent event data often arise in biomedical studies, with examples including hospitalizations, infections, and treatment failures. In observational studies, it is often of interest to estimate the effects of covariates on the marginal recurrent event rate. The majority of existing rate regression methods assume multiplicative covariate effects. We propose a semiparametric model for the marginal recurrent event rate, wherein the covariates are assumed to add to the unspecified baseline rate. Covariate effects are summarized by rate differences, meaning that the absolute effect on the rate function can be determined from the regression coefficient alone. We describe modifications of the proposed method to accommodate a terminating event (e.g., death). Proposed estimators of the regression parameters and baseline rate are shown to be consistent and asymptotically Gaussian. Simulation studies demonstrate that the asymptotic approximations are accurate in finite samples. The proposed methods are applied to a state-wide kidney transplant data set.

9.
Estimation of mixtures of regression models is usually based on a normality assumption for the components, and maximum likelihood estimation of the normal components is sensitive to noise, outliers, or high-leverage points. Missing values are inevitable in many situations, and parameter estimates can be biased if the missing values are not handled properly. In this article, we propose mixtures of regression models for contaminated incomplete heterogeneous data. The proposed models provide robust estimates of regression coefficients varying across latent subgroups even in the presence of missing values. The methodology is illustrated through simulation studies and a real data analysis.

10.
We review five packages for estimating finite mixtures: BINOMIX, C.A.MAN, MIX, and the maximum likelihood routines of BMDP and STATA. The focus of the review is on numerical issues rather than matters such as user interface, because the success or failure of an algorithm in fitting a mixture model is likely to be the most important issue facing a researcher. The problem of suitable initial values is discussed throughout.

11.
We developed methods for estimating the causal risk difference and causal risk ratio in randomized trials with noncompliance. The developed estimator is unbiased under the assumption that the biases due to noncompliance are identical in both treatment arms. The biases are defined as the difference or ratio between the expectations of potential outcomes for the group that received the test treatment and for the control group within each randomly assigned group. The instrumental variable estimator yields an unbiased estimate under a sharp null hypothesis but may yield a biased estimate under a non-null hypothesis; the bias of the developed estimator, in contrast, does not depend on whether this hypothesis holds. The estimate of the causal effect from the developed estimator may therefore have a smaller bias than that from the instrumental variable estimator when a treatment effect exists. There is not yet a standard method for coping with noncompliance, and thus it is important to evaluate estimates under different assumptions. The developed estimator can serve this purpose. Its application to a field trial for coronary heart disease is provided.
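For reference, the intention-to-treat contrast and the instrumental-variable (Wald) estimator that the abstract compares against can be sketched as follows (the developed estimator itself is not reproduced):

```python
def itt_and_iv_risk_difference(assigned, received, outcome):
    """Intention-to-treat and instrumental-variable (Wald) estimates of a
    causal risk difference in a randomized trial with noncompliance.

    assigned, received, outcome: equal-length 0/1 lists giving arm
    assignment, treatment actually received, and the binary outcome.
    """
    def mean(v):
        return sum(v) / len(v)

    y1 = mean([y for z, y in zip(assigned, outcome) if z == 1])
    y0 = mean([y for z, y in zip(assigned, outcome) if z == 0])
    d1 = mean([d for z, d in zip(assigned, received) if z == 1])
    d0 = mean([d for z, d in zip(assigned, received) if z == 0])
    itt = y1 - y0           # effect of assignment, diluted by noncompliance
    iv = itt / (d1 - d0)    # Wald estimator: ITT rescaled by the compliance gap
    return itt, iv
```

Under a sharp null the IV estimate is unbiased, but under a non-null hypothesis it can be biased, which is the contrast the abstract draws with the developed estimator.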

12.
In this paper, we focus on exact inference for the exponential distribution under multiple Type-I censoring, a general form of Type-I censoring in which the units are terminated at different times. The maximum likelihood estimate of the mean parameter is calculated. Further, the distribution of the maximum likelihood estimate is derived, yielding an exact lower confidence limit for the mean parameter. Based on a simulation study, we conclude that the exact limit outperforms the bootstrap limit in terms of coverage probability and average limit. Finally, a real dataset is analyzed for illustration.
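The MLE of the exponential mean under (multiple) Type-I censoring is the classical total-time-on-test statistic divided by the number of observed failures; a minimal sketch (the exact lower confidence limit derived in the paper is not reproduced):

```python
def exp_mean_mle(failure_times, censor_times):
    """MLE of the exponential mean under multiple Type-I censoring:
    total time on test divided by the number of observed failures.

    failure_times: observed failure times.
    censor_times: termination times of the units still running,
    possibly different per unit (the 'multiple' in multiple Type-I).
    """
    d = len(failure_times)
    if d == 0:
        raise ValueError("MLE undefined with no observed failures")
    ttt = sum(failure_times) + sum(censor_times)  # total time on test
    return ttt / d
```

For example, failures at 1, 2, 3 with two units censored at 4 give (6 + 8) / 3 = 14/3.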

13.
Hidden semi-Markov models (HSMMs) were introduced to overcome the constraint of a geometric sojourn time distribution for the different hidden states in classical hidden Markov models. Several variations of HSMMs have been proposed that model the sojourn times by a parametric or a nonparametric family of distributions. In this article, we concentrate on the nonparametric case, where the duration distributions are attached to transitions and not to states as in most published papers on HSMMs; we therefore treat the underlying hidden semi-Markov chain in its general probabilistic structure. For that case, Barbu and Limnios [(2008). Semi-Markov Chains and Hidden Semi-Markov Models Toward Applications: Their Use in Reliability and DNA Analysis. Springer, New York] proposed an Expectation–Maximization (EM) algorithm to estimate the semi-Markov kernel and the emission probabilities that characterize the dynamics of the model. In this article, we consider an improved version of Barbu and Limnios' EM algorithm which is faster than the original one. Moreover, we propose a stochastic version of the EM algorithm that achieves comparable estimates in less execution time. Some numerical examples are provided to illustrate the efficient performance of the proposed algorithms.

14.
We propose a method for estimating parameters in generalized linear models with missing covariates and a non-ignorable missing data mechanism. We use a multinomial model for the missing data indicators and propose a joint distribution for them which can be written as a sequence of one-dimensional conditional distributions, with each one-dimensional conditional distribution consisting of a logistic regression. We allow the covariates to be either categorical or continuous. The joint covariate distribution is also modelled via a sequence of one-dimensional conditional distributions, and the response variable is assumed to be completely observed. We derive the E- and M-steps of the EM algorithm with non-ignorable missing covariate data. For categorical covariates, we derive a closed form expression for the E- and M-steps of the EM algorithm for obtaining the maximum likelihood estimates (MLEs). For continuous covariates, we use a Monte Carlo version of the EM algorithm to obtain the MLEs via the Gibbs sampler. Computational techniques for Gibbs sampling are proposed and implemented. The parametric form of the assumed missing data mechanism itself is not "testable" from the data, and thus the non-ignorable modelling considered here can be viewed as a sensitivity analysis concerning a more complicated model. Therefore, although a model may have "passed" the tests for a certain missing data mechanism, this does not mean that we have captured, even approximately, the correct missing data mechanism. Hence, model checking for the missing data mechanism and sensitivity analyses play an important role in this problem and are discussed in detail. Several simulations are given to demonstrate the methodology. In addition, a real data set from a melanoma cancer clinical trial is presented to illustrate the methods proposed.

15.
Non-Gaussian spatial responses are usually modeled using a spatial generalized linear mixed model with spatial random effects. The likelihood function of this model usually cannot be given in closed form, so the maximum likelihood approach is very challenging. There are numerical ways to maximize the likelihood function, such as the Monte Carlo Expectation Maximization and Quadrature Pairwise Expectation Maximization algorithms, but these can be computationally very slow or even prohibitive. The Gauss–Hermite quadrature approximation is only suitable for low-dimensional latent variables, and its accuracy depends on the number of quadrature points. Here, we propose a new approximate pairwise maximum likelihood method for inference in the spatial generalized linear mixed model. This approximate method is fast and deterministic, using no sampling-based strategies. The performance of the proposed method is illustrated through two simulation examples, and practical aspects are investigated through a case study on a rainfall data set.

16.
Finite mixture models have provided a reasonable tool for modeling various types of observed phenomena, especially those which are random in nature. In this article, a finite mixture of Weibull and Pareto (IV) distributions is considered and studied. Some structural properties of the resulting model are discussed, including estimation of the model parameters via the expectation maximization (EM) algorithm. A real-life data application shows that in certain situations this mixture model can be a better alternative than rival popular models.

17.
Likelihood inference for discretely observed Markov jump processes with finite state space is investigated. The existence and uniqueness of the maximum likelihood estimator of the intensity matrix are studied; this topic is closely related to the imbedding problem for Markov chains. It is demonstrated that the maximum likelihood estimator can be found either by the EM algorithm or by a Markov chain Monte Carlo procedure. When the maximum likelihood estimator does not exist, an estimator can be obtained by using a penalized likelihood function or by the Markov chain Monte Carlo procedure with a suitable prior. The methodology and its implementation are illustrated by examples and simulation studies.
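When the path is continuously observed, the MLE of the intensity matrix has the closed form q_ij = N_ij / R_i (transition counts over time spent in each state); this is the complete-data quantity the EM algorithm targets after imputing the unobserved path between discrete observation times. A minimal sketch of that complete-data computation:

```python
def intensity_mle(path, n_states):
    """Complete-data MLE of a Markov jump process intensity matrix:
    q_ij = N_ij / R_i, where N_ij counts observed i -> j jumps and R_i
    is the total time spent in state i.

    path: list of (state, sojourn_time) pairs in chronological order.
    """
    N = [[0] * n_states for _ in range(n_states)]  # jump counts
    R = [0.0] * n_states                           # occupation times
    for k, (s, dt) in enumerate(path):
        R[s] += dt
        if k + 1 < len(path):
            N[s][path[k + 1][0]] += 1
    Q = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        if R[i] > 0:
            for j in range(n_states):
                if j != i:
                    Q[i][j] = N[i][j] / R[i]
            Q[i][i] = -sum(Q[i][j] for j in range(n_states) if j != i)
    return Q
```

For a two-state path spending 4 time units in state 0 with one 0→1 jump, the estimated rate q_01 is 1/4.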

18.
Owing to the nature of the problems and the design of questionnaires, discrete polytomous data are very common in behavioural, medical and social research. Analysing the relationships between the manifest and the latent variables based on mixed polytomous and continuous data has proven to be difficult. A general structural equation model is investigated for these mixed outcomes. Maximum likelihood (ML) estimates of the unknown thresholds and the structural parameters in the covariance structure are obtained. A Monte Carlo–EM algorithm is implemented to produce the ML estimates. It is shown that closed form solutions can be obtained for the M-step, and estimates of the latent variables are produced as a by-product of the analysis. The method is illustrated with a real example.

19.
Progressive multi-state models provide a convenient framework for characterizing chronic disease processes in which the states represent the degree of damage resulting from the disease. Incomplete data often arise in studies of such processes, and standard methods of analysis can lead to biased parameter estimates when observation of data is response-dependent. This paper describes a joint analysis useful for fitting progressive multi-state models to data arising in longitudinal studies in such settings. Likelihood-based methods are described and the parameters are shown to be identifiable. An EM algorithm is described for parameter estimation, and variance estimation is carried out using Louis's method. Simulation studies demonstrate that the proposed method works well in practice under a variety of settings. An application to data from a smoking prevention study illustrates the utility of the method.

20.
It is well known that the nonparametric maximum likelihood estimator (NPMLE) of a survival function may severely underestimate the survival probabilities at very early times for left-truncated data. This problem might be overcome by instead computing a smoothed nonparametric estimator (SNE) via the EMS algorithm. The close connection between the SNE and the maximum penalized likelihood estimator is also established. Extensive Monte Carlo simulations demonstrate the superior performance of the SNE over the NPMLE, in terms of either bias or variance, even for moderately large samples. The methodology is illustrated with an application to the Massachusetts Health Care Panel Study dataset to estimate the probability of being functionally independent for non-poor male and female groups respectively.
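The NPMLE whose early-time underestimation motivates the smoothing is the product-limit (Lynden-Bell) estimator for left-truncated data; a minimal sketch without censoring (tie and boundary conventions vary across treatments):

```python
def lynden_bell_survival(entry, times, t):
    """Product-limit (NPMLE) estimate of S(t) for left-truncated data,
    no censoring in this sketch: subject i enters observation at
    entry[i] and fails at times[i], and is observed only if
    times[i] >= entry[i]. The risk set at u contains subjects with
    entry <= u <= failure time, so early risk sets can be tiny, which
    is the source of the instability the abstract describes."""
    S = 1.0
    for u in sorted(set(times)):
        if u > t:
            break
        d = sum(1 for x in times if x == u)                      # deaths at u
        n = sum(1 for e, x in zip(entry, times) if e <= u <= x)  # at risk at u
        S *= 1 - d / n
    return S
```

With no truncation (all entry times zero) this reduces to the ordinary empirical survival function.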


Copyright©北京勤云科技发展有限公司  京ICP备09084417号