Similar Documents
20 similar documents retrieved (search time: 109 ms)
1.
Independent censoring is commonly assumed in survival analysis. However, it may be questionable when censoring is related to the event time. We model the event and censoring times marginally through accelerated failure time models, and model their association by a known copula. An iterative algorithm is proposed to estimate the regression parameters. Simulation results show that the proposed method improves on the naive method that assumes independent censoring. A sensitivity analysis provides evidence that the proposed method can obtain reasonable estimates even when the form of the copula is misspecified. We illustrate its application by analyzing prostate cancer data.
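A minimal sketch of the dependent-censoring setup in the abstract above, assuming exponential margins and a Clayton copula (the paper's AFT margins and copula choice are more general; all parameter names here are illustrative):

```python
import math
import random

def simulate_dependent_censoring(n, theta, rate_t, rate_c, seed=0):
    """Draw (event time, censoring time) pairs whose dependence follows a
    Clayton copula with parameter theta > 0; margins are exponential.
    Illustrative only -- the paper's AFT margins are more general."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        u = rng.random()
        w = rng.random()
        # Conditional inverse of the Clayton copula given U = u.
        v = ((w ** (-theta / (theta + 1)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
        t = -math.log(1.0 - u) / rate_t   # inverse-CDF transform of the margin
        c = -math.log(1.0 - v) / rate_c
        pairs.append((t, c))
    return pairs

pairs = simulate_dependent_censoring(1000, theta=2.0, rate_t=1.0, rate_c=0.5)
```

With theta = 2 the Clayton copula induces strong positive dependence (Kendall's tau = theta / (theta + 2) = 0.5), so the naive independence assumption would be badly violated for data generated this way.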

2.
The product limit or Kaplan-Meier (KM) estimator is commonly used to estimate the survival function in the presence of incomplete time-to-event data. Application of this method inherently assumes that the occurrence of an event is known with certainty. However, the clinical diagnosis of an event is often subject to misclassification due to assay error or adjudication error, by which the event is assessed with some uncertainty. In the presence of such errors, the true distribution of the time to first event would not be estimated accurately using the KM method. We develop a method to estimate the true survival distribution by incorporating negative and positive predictive values into a KM-like method of estimation. This allows us to quantify the bias in the KM survival estimates due to the presence of misclassified events in the observed data. We present an unbiased estimator of the true survival function and its variance. Asymptotic properties of the proposed estimators are provided, and these properties are examined through simulations. We demonstrate our methods using data from the Viral Resistance to Antiviral Therapy of Hepatitis C study.
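For reference, the classical product-limit computation that the proposed method modifies can be sketched as follows (plain KM only; the PPV/NPV adjustment described above is not included):

```python
def kaplan_meier(times, events):
    """Product-limit estimate of S(t). events[i] is 1 for an observed
    event, 0 for censoring. Returns [(event time, S(t))] at each distinct
    event time."""
    data = sorted(zip(times, events))
    n = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        while i < n and data[i][0] == t:
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
    return curve

curve = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])
# S(1) = 3/4, S(2) = 1/2, S(4) = 0
```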

3.
Despite decades of research in the medical literature, assessment of the attributable mortality due to nosocomial infections in the intensive care unit (ICU) remains controversial, with different studies describing effect estimates ranging from neutral to extremely risk increasing. Interpretation of study results is further hindered by inappropriate adjustment (a) for censoring of the survival time by discharge from the ICU, and (b) for time-dependent confounders on the causal path from infection to mortality. In previous work (Vansteelandt et al., Biostatistics 10:46–59), we accommodated this through inverse probability of treatment and censoring weighting. Because censoring due to discharge from the ICU is so intimately connected with a patient’s health condition, the ensuing inverse weighting analyses suffer from influential weights and rely heavily on the assumption that one has measured all common risk factors of ICU discharge and mortality. In this paper, we consider ICU discharge as a competing risk in the sense that we aim to infer the risk of ‘ICU mortality’ over time that would be observed if nosocomial infections could be prevented for the entire study population. For this purpose we develop marginal structural subdistribution hazard models with accompanying estimation methods. In contrast to subdistribution hazard models with time-varying covariates, the proposed approach (a) can accommodate high-dimensional confounders, (b) avoids regression adjustment for post-infection measurements and thereby so-called collider-stratification bias, and (c) results in a well-defined model for the cumulative incidence function. The methods are used to quantify the causal effect of nosocomial pneumonia on ICU mortality using data from the National Surveillance Study of Nosocomial Infections in ICUs (Belgium).
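The cumulative incidence function that subdistribution hazard models target has a simple nonparametric (Aalen-Johansen) estimate, sketched here without the inverse-probability weighting the paper adds:

```python
def cumulative_incidence(times, causes, cause):
    """Aalen-Johansen estimate of the cumulative incidence function for
    `cause` under competing risks (cause 0 = censored). Returns
    [(time, CIF)] at each distinct event time. Nonparametric counterpart
    of the quantity the subdistribution hazard targets; the paper's IPT
    weighting is not included."""
    data = sorted(zip(times, causes))
    n = len(data)
    surv, cif, out = 1.0, 0.0, []
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i
        d_cause = sum(1 for tt, c in data if tt == t and c == cause)
        d_any = sum(1 for tt, c in data if tt == t and c != 0)
        while i < n and data[i][0] == t:
            i += 1
        if d_any:
            cif += surv * d_cause / at_risk   # overall survival enters here
            surv *= 1.0 - d_any / at_risk
            out.append((t, cif))
    return out
```

With no censoring, the cause-specific CIFs at the last event time sum to one, which is exactly the "well-defined model for the cumulative incidence function" property mentioned above.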

4.
We consider the analysis of spell durations observed in event history studies where members of the study panel are seen intermittently. Challenges for analysis arise because losses to follow-up are frequently related to previous event history, and spells typically overlap more than one observation period. We provide methods of estimation based on inverse probability of censoring weighting for parametric and semiparametric Cox regression models. Selection of panel members through a complex survey design is also addressed, and the methods are illustrated in an analysis of jobless spell durations based on data from the Statistics Canada Survey of Labour and Income Dynamics. The Canadian Journal of Statistics 40: 1–21; 2012 © 2012 Statistical Society of Canada
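The inverse-probability-of-censoring weighting idea can be sketched in its simplest, covariate-free form: fit Kaplan-Meier to the censoring distribution and weight each observed event by the inverse censoring-survival probability. The paper's weights additionally depend on event history and the survey design:

```python
def ipcw_weights(times, events):
    """IPCW in its simplest form: Kaplan-Meier on the *censoring*
    distribution (flipped event indicator), then each observed event
    gets weight 1/G(t-). Ties: events at t use G just before t."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n = len(times)
    g_left = 1.0                       # censoring survival just before t
    weights = [0.0] * n                # censored subjects keep weight 0
    j = 0
    while j < n:
        t = times[order[j]]
        at_risk = n - j
        block = [i for i in order[j:] if times[i] == t]
        for i in block:
            if events[i] == 1:
                weights[i] = 1.0 / g_left
        n_cens = sum(1 for i in block if events[i] == 0)
        g_left *= 1.0 - n_cens / at_risk
        j += len(block)
    return weights

w = ipcw_weights([1, 2, 3, 4], [1, 0, 1, 1])
```

In this toy example the censored subject at time 2 transfers its mass to the events observed later, so the weights of the observed events sum back to the sample size.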

5.
Frailty models can be fitted as mixed-effects Poisson models after transforming time-to-event data to the Poisson model framework. We assess, through simulations, the robustness of Poisson likelihood estimation for Cox proportional hazards models with log-normal frailties under a misspecified frailty distribution. The log-gamma and Laplace distributions were used as the true frailty distributions on the natural log scale. Factors such as the magnitude of heterogeneity, the censoring rate, and the number and sizes of groups were explored. In the simulations, the Poisson modeling approach that assumes log-normally distributed frailties provided accurate estimates of within- and between-group fixed effects even under a misspecified frailty distribution. Non-robust estimation of variance components was observed in situations of substantial heterogeneity, large event rates, or high data dimensions.
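The survival-to-Poisson transformation rests on the fact that, for an exponential model, the survival log-likelihood equals a Poisson log-likelihood with offset log(t) up to a constant in the rate, so both give the same MLE. A single-interval sketch (the full frailty approach splits follow-up into intervals and adds random effects, which is not reproduced here):

```python
import math

def exp_loglik(lam, times, events):
    """Exponential survival log-likelihood: sum of d*log(lam) - lam*t."""
    return sum(d * math.log(lam) - lam * t for t, d in zip(times, events))

def poisson_loglik(lam, times, events):
    """Poisson log-likelihood for d ~ Poisson(lam * t), i.e. offset
    log(t); differs from exp_loglik only by terms free of lam."""
    return sum(d * math.log(lam * t) - lam * t for t, d in zip(times, events))

times = [1.2, 0.7, 3.1, 2.2, 0.4]   # made-up follow-up times
events = [1, 0, 1, 1, 0]
lam_hat = sum(events) / sum(times)  # MLE under either formulation
```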

6.
Papers dealing with measures of predictive power in survival analysis have treated independence of censoring, or unbiasedness of estimates under censoring, as the most important property. We argue that this property has been wrongly understood. Discussing the so-called measure of information gain, we point out that we cannot have unbiased estimates if all values greater than a given time τ are censored. This is because censoring before τ has a different effect than censoring after τ. Such a τ is often introduced by the design of a study. Independence can only be achieved under the assumption that the model remains valid after τ, which is impossible to verify. But if one is willing to make such an assumption, we suggest using multiple imputation to obtain a consistent estimate. We further show that censoring has different effects on the estimation of the measure for the Cox model than for parametric models, and we discuss them separately. We also give some warnings about the use of the measure, especially when it comes to comparing essentially different models.
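The suggested multiple-imputation fix can be sketched under an assumed exponential model, where memorylessness makes drawing a residual lifetime beyond τ straightforward (the exponential choice and function name are illustrative assumptions, not the paper's implementation):

```python
import random

def impute_beyond_tau(times, events, tau, lam, seed=0):
    """One imputation pass: for observations censored at tau, draw a
    full event time from the assumed model beyond tau. For an
    exponential model, memorylessness gives T = tau + Exp(lam)."""
    rng = random.Random(seed)
    out = []
    for t, d in zip(times, events):
        if d == 0 and t >= tau:
            out.append((tau + rng.expovariate(lam), 1))  # imputed event
        else:
            out.append((t, d))
    return out
```

Repeating this draw several times and pooling the resulting estimates is the usual multiple-imputation recipe; a single pass is shown only to make the mechanics concrete.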

7.
In industrial life tests, reliability analysis, and clinical trials, the Type-II progressive censoring methodology, which allows for random removals of the remaining surviving units at each failure time, has become quite popular for analyzing lifetime data. Parameter estimation under progressively Type-II censored samples has been investigated extensively for many common lifetime distributions. However, how to estimate the unknown parameters of mixed distribution models under progressive Type-II censoring schemes is still a challenging and interesting problem. Based on progressively Type-II censored samples, this paper addresses the estimation problem for mixed generalized exponential distributions. The maximum-likelihood estimates (MLEs) cannot be obtained in closed form due to the complexity of the likelihood function, so we make use of the expectation-maximization (EM) algorithm to obtain the MLEs. Finally, simulations are carried out to show the performance of the proposed method under finite samples, and a case analysis is presented for illustration.
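The sampling scheme itself can be sketched as follows (generic progressive Type-II censoring; the mixed generalized exponential model and EM steps of the paper are not reproduced):

```python
import random

def progressive_type2_sample(lifetimes, m, removals, seed=0):
    """Simulate progressive Type-II censoring: observe m failures; after
    the i-th failure, removals[i] surviving units are withdrawn at
    random. Requires m + sum(removals) == len(lifetimes). Returns the m
    observed failure times in increasing order."""
    assert len(removals) == m and m + sum(removals) == len(lifetimes)
    rng = random.Random(seed)
    alive = sorted(lifetimes)
    observed = []
    for r in removals:
        observed.append(alive.pop(0))        # next failure among survivors
        for _ in range(r):                   # withdraw r survivors at random
            alive.pop(rng.randrange(len(alive)))
    return observed
```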

8.
The recurrent-event setting, where subjects experience multiple occurrences of the event of interest, is encountered in many biomedical applications. In analyzing recurrent event data, noninformative censoring is often assumed for the implementation of statistical methods. However, when a terminating event such as death serves as part of the censoring mechanism, the validity of the censoring assumption may be violated because recurrence can be a powerful risk factor for death. We consider joint modeling of the recurrent event process and the terminating event under a Bayesian framework in which a shared frailty models the association between the intensity of the recurrent event process and the hazard of the terminating event. Our proposed model is applied to data from a well-known cancer study.
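A minimal simulation of the shared-frailty joint model: a gamma frailty multiplies both the recurrent-event intensity and the terminating-event hazard, so subjects who recur often also die sooner. The parametrization and names are illustrative assumptions, not the paper's Bayesian implementation:

```python
import random

def simulate_joint(n, theta, base_rate, death_rate, horizon, seed=0):
    """Simulate n subjects: frailty w ~ Gamma(1/theta, scale=theta)
    (mean 1, variance theta); recurrences follow a Poisson process with
    rate w*base_rate, stopped at death D ~ Exp(w*death_rate) or at the
    administrative horizon. Returns (count, follow-up, died) triples."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        w = rng.gammavariate(1.0 / theta, theta)
        death = rng.expovariate(w * death_rate)
        stop = min(death, horizon)
        t, count = 0.0, 0
        while True:                       # Poisson process on [0, stop]
            t += rng.expovariate(w * base_rate)
            if t > stop:
                break
            count += 1
        out.append((count, stop, death <= horizon))
    return out
```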

9.
Four distribution-free tests are developed for use in matched pair experiments when data may be censored: a bootstrap based on estimates of the median difference, and three rerandomization tests. The latter include a globally almost most powerful (GAMP) test which uses the original data and two modified Gilbert-Gehan tests which use the ranks. Computation time is reduced by using a binary count to generate subsamples and by restricting subsampling to the uncensored pairs. In Monte Carlo simulations against normal alternatives, mixed normal alternatives, and exponential alternatives, the GAMP test is most powerful with light censoring, the rank test is most powerful with heavy censoring. The bootstrap degenerates to the sign test and is least powerful.

10.
Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003, Multilevel Statistical Models, 3rd ed., London: Edward Arnold) proposed four definitions of variance partitioning coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derive formulae for the multilevel logistic regression model and subsequently study the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrate associations between the different VPC definitions, the importance of the method used to estimate VPCs (by comparing VPCs obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider application of VPCs in scientific data analysis.
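One of Goldstein's four VPC definitions, the latent-response one, has a closed form: the level-2 variance over itself plus the standard-logistic residual variance π²/3. A one-line sketch (the other definitions generally require simulation):

```python
import math

def vpc_latent(sigma2_u):
    """Latent-variable VPC for a two-level logistic model: the level-2
    variance as a share of level-2 variance plus the logistic residual
    variance pi^2/3."""
    return sigma2_u / (sigma2_u + math.pi ** 2 / 3.0)
```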

11.
Panel count data often occur in long-term studies where the primary end point is the time to a specific event and each subject may experience multiple recurrences of this event. Furthermore, suppose that it is not feasible to keep subjects under continuous observation and the number of recurrences for each subject is only recorded at several distinct time points over the study period. Moreover, the set of observation times may vary from subject to subject. In this paper, regression methods derived under simple semiparametric models are proposed for the analysis of such longitudinal count data. In particular, we consider the situation where both observation and censoring times may depend on covariates. The new procedures are illustrated with data from a well-known cancer study.

12.
Variable screening for censored survival data is most challenging when both survival and censoring times are correlated with an ultrahigh-dimensional vector of covariates. Existing approaches to handling censoring often make use of inverse probability weighting, assuming that censoring is independent of both the survival time and the covariates. This is a convenient but rather restrictive assumption that may be unmet in real applications, especially when the censoring mechanism is complex and the number of covariates is large. To accommodate heterogeneous (covariate-dependent) censoring that is often present in high-dimensional survival data, we propose a Gehan-type rank screening method to select features that are relevant to the survival time. The method is invariant to monotone transformations of the response and of the predictors, and works robustly for a general class of survival models. We establish the sure screening property of the proposed methodology. Simulation studies and a lymphoma data analysis demonstrate its favorable performance and practical utility.
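A Gehan-type pairwise scoring for one covariate might be sketched as below; the exact screening statistic and normalization in the paper may differ, so treat this as conveying the rank-based, censoring-aware logic only:

```python
def gehan_utility(x, times, events):
    """Sketch of a Gehan-type screening utility for one covariate:
    score an ordered pair only when one survival time is *unambiguously*
    larger despite censoring, and pair that with the sign of the
    covariate difference. Normalization is illustrative."""
    n = len(x)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # T_i is known to exceed T_j only when j's event was observed
            # at or before i's (possibly censored) time
            if events[j] == 1 and times[i] >= times[j]:
                if x[i] > x[j]:
                    total += 1.0
                elif x[i] < x[j]:
                    total -= 1.0
    return abs(total) / (n * (n - 1))
```

A covariate perfectly concordant with survival scores high, while a constant (irrelevant) covariate scores zero, which is the ranking property screening exploits.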

13.
In biomedical studies where the event of interest is recurrent (e.g., hospitalization), it is often the case that the recurrent event sequence is subject to being stopped by a terminating event (e.g., death). In comparing treatment options, the marginal recurrent event mean is frequently of interest. One major complication in the recurrent/terminal event setting is that censoring times are not known for subjects observed to die, which renders standard risk set based methods of estimation inapplicable. We propose two semiparametric methods for estimating the difference or ratio of treatment-specific marginal mean numbers of events. The first method involves imputing unobserved censoring times, while the second uses inverse probability of censoring weighting. In each case, imbalances in the treatment-specific covariate distributions are adjusted out through inverse probability of treatment weighting. After the imputation and/or weighting, the treatment-specific means (then their difference or ratio) are estimated nonparametrically. Large-sample properties are derived for each of the proposed estimators, with finite sample properties assessed through simulation. The proposed methods are applied to kidney transplant data.

14.
Recurrent events models have received considerable attention recently. The majority of approaches show the consistency of parameter estimates under the assumption that censoring is independent of the recurrent events process of interest conditional on the covariates that are included in the model. We provide an overview of available recurrent events analysis methods and present an inverse probability of censoring weighted estimator for the regression parameters in the Andersen–Gill model that is commonly used for recurrent event analysis. This estimator remains consistent under informative censoring if the censoring mechanism is estimated consistently, and it generally improves on the naïve estimator for the Andersen–Gill model in the case of independent censoring. We illustrate the bias of ad hoc estimators in the presence of informative censoring with a simulation study and provide a data analysis of recurrent lung exacerbations in cystic fibrosis patients when some patients are lost to follow-up.

15.
Type I and Type II censored data arise frequently in controlled laboratory studies concerning time to a particular event (e.g., death of an animal or failure of a physical device). Log-location-scale distributions (e.g., Weibull, lognormal, and loglogistic) are commonly used to model the resulting data. Maximum likelihood (ML) is generally used to obtain parameter estimates when the data are censored. The Fisher information matrix can be used to obtain large-sample approximate variances and covariances of the ML estimates or to estimate these variances and covariances from data. The derivations of the Fisher information matrix proceed differently for Type I (time censoring) and Type II (failure censoring) because the number of failures is random in Type I censoring, whereas the length of the data collection period is random in Type II censoring. Under regularity conditions (met by the above-mentioned log-location-scale distributions), we outline the different derivations and show that the Fisher information matrices for Type I and Type II censoring are asymptotically equivalent.

16.
In survival data analysis, a significant amount of right censoring frequently occurs, indicating that there may be a proportion of individuals in the study for whom the event of interest will never happen. This fact is not accounted for by ordinary survival theory. Consequently, survival models with a cure fraction have received much attention in recent years. In this article, we consider the standard mixture cure rate model, in which a fraction p0 of the population consists of cured or immune individuals and the remaining 1 − p0 are not cured. We assume an exponential distribution for the survival time and a uniform-exponential distribution for the censoring time. In a simulation study, the impact of the informative uniform-exponential censoring on the coverage probabilities and lengths of asymptotic confidence intervals is analyzed using the Fisher information and observed information matrices.
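The standard mixture cure rate survival function described above has a simple closed form under the exponential assumption, with the cured fraction appearing as the plateau of the survival curve:

```python
import math

def cure_survival(t, p0, lam):
    """Standard mixture cure rate survival: a fraction p0 is cured
    (never fails), the rest follows an exponential with rate lam, so
    S(t) = p0 + (1 - p0) * exp(-lam * t)."""
    return p0 + (1.0 - p0) * math.exp(-lam * t)
```

Note that S(t) tends to p0 rather than 0 as t grows, which is exactly why ordinary survival theory, which forces S(t) to 0, misfits data with a cured subpopulation.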

17.
Parametric models for interval-censored data can now easily be fitted with minimal programming in certain standard statistical software packages. Regression equations can be introduced, both for the location and for the dispersion parameters. Finite mixture models can also be fitted, with a point mass on right (or left) censored observations, to allow for individuals who cannot have the event (or already have it). This mixing probability can also be allowed to follow a regression equation. Here, models based on nine different distributions are compared for three examples of heavily censored data as well as a set of simulated data. We find that, for parametric models, interval censoring can often be ignored and the density at the centres of the intervals used instead in the likelihood function, although the approximation is not always reliable. In the context of heavily interval-censored data, the conclusions from parametric models are remarkably robust to changing distributional assumptions and generally more informative than the corresponding non-parametric models.
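The "ignore the interval censoring" approximation discussed above can be illustrated for an exponential model: compare the exact interval likelihood contribution F(R) − F(L) with the density evaluated at interval centres (the interval bounds below are made-up example data):

```python
import math

def interval_loglik_exact(intervals, lam):
    """Exact interval-censored log-likelihood for an exponential model:
    each (L, R) observation contributes log(F(R) - F(L))."""
    return sum(math.log(math.exp(-lam * l) - math.exp(-lam * r))
               for l, r in intervals)

def interval_loglik_midpoint(intervals, lam):
    """Midpoint approximation: density at the interval centre (the
    interval width is a constant in lam and is dropped)."""
    return sum(math.log(lam) - lam * (l + r) / 2.0 for l, r in intervals)
```

For narrow intervals the two likelihoods are maximized at nearly the same rate, which is the behaviour the abstract reports; with wide or asymmetric intervals the approximation can drift, hence the caveat that it "is not always reliable".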

18.
The hybrid censoring scheme is a mixture of Type-I and Type-II censoring schemes. Based on hybrid censored samples, we first derive the maximum likelihood estimators of the unknown parameters and the expected Fisher’s information matrix of the generalized inverted exponential distribution (GIED). Monte Carlo simulations are performed to study the performance of the maximum likelihood estimators. Next we consider Bayes estimation under the squared error loss function. These Bayes estimates are evaluated by applying Lindley’s approximation method, the importance sampling procedure and Metropolis–Hastings algorithm. The importance sampling technique is used to compute the highest posterior density credible intervals. Two data sets are analyzed for illustrative purposes. Finally, we discuss a method of obtaining the optimum hybrid censoring scheme.
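The hybrid stopping rule itself is easy to sketch: the experiment ends at the m-th failure or at a fixed time T, whichever comes first (Type-I hybrid form; the paper's GIED inference is not reproduced):

```python
def hybrid_censor(lifetimes, m, T):
    """Type-I hybrid censoring: stop at T* = min(x_(m), T), i.e. at the
    m-th failure or at fixed time T, whichever comes first. Returns
    (observed failure times, stopping time)."""
    x = sorted(lifetimes)
    t_star = min(x[m - 1], T)
    observed = [t for t in x if t <= t_star]
    return observed, t_star
```

So the number of observed failures is random (as in Type-I censoring) but never exceeds m (as in Type-II censoring), which is why the scheme is described as a mixture of the two.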

19.
In this paper, we extend a parallel-system survival model based on the bivariate exponential distribution to incorporate a time-varying covariate. We calculate the bias, standard error, and RMSE of the parameter estimates of this model at different censoring levels using simulated data. We then compare the difference in total error when a fixed-covariate model is used instead of the true time-varying covariate model. Following that, we study three methods of constructing confidence intervals for such models and draw conclusions based on the results of a coverage probability study. Finally, the results obtained by fitting the diabetic retinopathy study data to the model are analysed.

20.
The Type-II progressive censoring scheme has become very popular for analyzing lifetime data in reliability and survival analysis. However, no published papers address parameter estimation under progressive Type-II censoring for the mixed exponential distribution (MED), which is an important model for reliability and survival analysis. This is the problem that we address in this paper. The maximum likelihood estimates of the unknown parameters cannot be obtained in closed form due to the complicated log-likelihood function, so we solve this problem using the EM algorithm, whose iterations yield closed-form updates. The proposed methods are illustrated by simulations and a case analysis.
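For uncensored data, the EM iterations for a two-component mixed exponential density have closed-form updates; a simplified sketch is below (the paper's E-step additionally handles the progressively Type-II censored observations, which is omitted here):

```python
import math
import random

def em_exponential_mixture(data, iters=200):
    """EM for the two-component mixed exponential density
    p*lam1*exp(-lam1*x) + (1-p)*lam2*exp(-lam2*x), complete data only.
    Initialization deliberately orders the rates (lam1 > lam2)."""
    p, lam1, lam2 = 0.5, 1.0 / min(data), 1.0 / max(data)
    for _ in range(iters):
        # E-step: posterior probability each point came from component 1
        resp = []
        for x in data:
            a = p * lam1 * math.exp(-lam1 * x)
            b = (1.0 - p) * lam2 * math.exp(-lam2 * x)
            resp.append(a / (a + b))
        # M-step: closed-form weighted MLE updates
        s = sum(resp)
        p = s / len(data)
        lam1 = s / sum(r * x for r, x in zip(resp, data))
        lam2 = (len(data) - s) / sum((1.0 - r) * x for r, x in zip(resp, data))
    return p, lam1, lam2

rng = random.Random(3)
data = ([rng.expovariate(5.0) for _ in range(250)]
        + [rng.expovariate(0.5) for _ in range(250)])
p, lam1, lam2 = em_exponential_mixture(data)
```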
