Similar documents
20 similar documents found (search time: 15 ms)
1.
Several observational studies give rise to randomly left-truncated data. In a nonparametric model for such data, X denotes a variable of interest, T denotes the truncation variable, and the distributions of both X and T are left unspecified. For this model, the product-limit estimator, which is also the maximum likelihood estimator of the survival curve, has been widely discussed. In this article, a nonparametric Bayes estimator of the survival function based on randomly left-truncated data and a Dirichlet process prior is presented. Some new results on mixtures of Dirichlet processes in the context of truncated data are obtained. These results are then used to derive the Bayes estimator of the survival function under squared error loss. The weak convergence of the Bayes estimator is studied. An example using the transfusion-related AIDS data quoted in Kalbfleisch and Lawless (1989) is considered.
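For contrast with the Bayes estimator above, the product-limit estimator for randomly left-truncated data (often called the Lynden-Bell estimator) is easy to sketch: the risk set at a failure time x contains exactly the subjects j with T_j &lt;= x &lt;= X_j, i.e. those already recruited and not yet failed. A minimal sketch, with function names of our own (not from the paper):

```python
def product_limit_left_truncated(x, t):
    """Product-limit survival estimate from left-truncated pairs (x_i, t_i),
    where subject i enters the sample only because x_i >= t_i."""
    assert all(xi >= ti for xi, ti in zip(x, t))
    surv, s = {}, 1.0
    for xi in sorted(set(x)):
        d = sum(1 for xj in x if xj == xi)                     # failures at xi
        r = sum(1 for xj, tj in zip(x, t) if tj <= xi <= xj)   # risk set size
        s *= 1.0 - d / r
        surv[xi] = s
    return surv

# With no truncation (all t = 0) this reduces to the empirical
# survival function of an uncensored sample.
est = product_limit_left_truncated([1, 2, 3, 4], [0, 0, 0, 0])
```

With truncation present, the risk sets shrink at early times, which is exactly where the under-estimation problems discussed in later abstracts arise.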

2.
The lognormal distribution is quite commonly used as a lifetime distribution. Data arising from life-testing and reliability studies are often left truncated and right censored. Here, the EM algorithm is used to estimate the parameters of the lognormal model based on left truncated and right censored data. The maximization step of the algorithm is carried out by two alternative methods, with one involving approximation using a Taylor series expansion (leading to an approximate maximum likelihood estimate) and the other based on the EM gradient algorithm (Lange, 1995). These two methods are compared based on Monte Carlo simulations. The Fisher scoring method for obtaining the maximum likelihood estimates shows a problem of convergence under this setup, except when the truncation percentage is small. The asymptotic variance-covariance matrix of the MLEs is derived by using the missing information principle (Louis, 1982), and the asymptotic confidence intervals for the scale and shape parameters are then obtained and compared with the corresponding bootstrap confidence intervals. Finally, some numerical examples are given to illustrate all the methods of inference developed here.
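The abstract's EM machinery is specific to the paper, but the likelihood it targets is easy to write down. The sketch below is our own illustration, using direct numerical optimization rather than EM: each record (y, tau, delta) contributes a lognormal density term (event) or survival term (censored case), both renormalized by the probability of exceeding the left-truncation time tau.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, y, tau, delta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                  # keep sigma > 0
    z = (np.log(y) - mu) / sigma
    zt = (np.log(tau) - mu) / sigma
    # events: log-density of the lognormal; censored: log-survival;
    # both divided by P(X > tau) because of left truncation
    ll = np.where(delta == 1,
                  norm.logpdf(z) - np.log(sigma * y),
                  norm.logsf(z))
    ll = ll - norm.logsf(zt)
    return -ll.sum()

rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=0.5, size=4000)   # true mu=1, sigma=0.5
tau = np.full_like(x, 0.8)                          # truncation times
keep = x >= tau                                     # only these are observed
y = np.minimum(x[keep], 6.0)                        # right censoring at 6.0
delta = (x[keep] <= 6.0).astype(int)
fit = minimize(neg_loglik, x0=[0.0, 0.0],
               args=(y, tau[keep], delta), method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```

On simulated data like this, the direct MLE recovers the generating parameters closely; the paper's EM and EM-gradient methods maximize the same likelihood.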

3.
It is well known that the nonparametric maximum likelihood estimator (NPMLE) may severely under-estimate the survival function with left truncated data. Based on the Nelson estimator (for right censored data) and self-consistency, we suggest a nonparametric estimator of the survival function, the iterative Nelson estimator (INE), for arbitrarily truncated and censored data, where only a few nonparametric estimators are available. By simulation we show that the INE does well in overcoming the under-estimation of the survival function from the NPMLE for left-truncated and interval-censored data. An interesting application of the INE is as a diagnostic tool for other estimators, such as the monotone MLE or parametric MLEs. The methodology is illustrated by application to two real-world problems: the Channing House and the Massachusetts Health Care Panel Study data sets.

4.
In this note, we consider estimating the bivariate survival function when both survival times are subject to random left truncation and one of the survival times is subject to random right censoring. Motivated by Satten and Datta [2001. The Kaplan–Meier estimator as an inverse-probability-of-censoring weighted average. Amer. Statist. 55, 207–210], we propose an inverse-probability-weighted (IPW) estimator. It involves simultaneous estimation of the bivariate survival function of the truncation variables and that of the censoring variable and the truncation variable of the uncensored components. We prove that (i) when there is no censoring, the IPW estimator reduces to the NPMLE of van der Laan [1996a. Nonparametric estimation of the bivariate survival function with truncated data. J. Multivariate Anal. 58, 107–131] and (ii) when there is random left truncation and right censoring on only one of the components and the other component is always observed, the IPW estimator reduces to the estimator of Gijbels and Gürler [1998. Covariance function of a bivariate distribution function estimator for left truncated and right censored data. Statist. Sin. 1219–1232]. Based on Theorem 3.1 of van der Laan [1996a; 1996b. Efficient estimation of the bivariate censoring model and repairing NPMLE. Ann. Statist. 24, 596–627], we prove that the IPW estimator is consistent under certain conditions. Finally, we examine the finite sample performance of the IPW estimator in some simulation studies. For the special case that the censoring time is independent of the truncation time, a simulation study is conducted to compare the performance of the IPW estimator against that of the estimator proposed by van der Laan [1996a, 1996b]. For the special case (i), a simulation study is conducted to compare the performance of the IPW estimator against that of the estimator proposed by Huang et al. [2001. Nonparametric estimation of marginal distributions under bivariate truncation with application to testing for age-of-onset anticipation. Statist. Sin. 11, 1047–1068].

5.
Peto and Peto (1972) have studied rank invariant tests to compare two survival curves for right censored data. We apply their tests, including the logrank test and the generalized Wilcoxon test, to left truncated and interval censored data. The significance levels of the tests are approximated by Monte Carlo permutation tests. Simulation studies are conducted to show their size and power under different distributional differences. In particular, the logrank test works well under the Cox proportional hazards alternatives, as for the usual right censored data. The methods are illustrated by the analysis of the Massachusetts Health Care Panel Study dataset.
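The abstract applies these tests to left-truncated, interval-censored data; as a simpler hedged illustration of the Monte Carlo permutation idea, here is the ordinary two-sample log-rank statistic for right-censored data with its significance level approximated by shuffling group labels (function names are ours, not the paper's):

```python
import random

def logrank_stat(time, event, group):
    """Two-sample log-rank statistic; group is coded 0/1."""
    u = 0.0
    for tk in sorted({t for t, e in zip(time, event) if e}):
        at_risk = [g for t, g in zip(time, group) if t >= tk]
        deaths = [g for t, e, g in zip(time, event, group) if e and t == tk]
        # observed minus expected events in group 1 at this event time
        u += sum(deaths) - len(deaths) * sum(at_risk) / len(at_risk)
    return u

def permutation_pvalue(time, event, group, n_perm=2000, seed=1):
    """Monte Carlo permutation p-value for |log-rank statistic|."""
    rng = random.Random(seed)
    obs = abs(logrank_stat(time, event, group))
    g, count = list(group), 0
    for _ in range(n_perm):
        rng.shuffle(g)                 # fresh uniform relabelling each time
        if abs(logrank_stat(time, event, g)) >= obs:
            count += 1
    return count / n_perm

# Clearly separated groups: group 1 fails uniformly later than group 0.
p = permutation_pvalue([1, 2, 3, 4, 5, 6, 7, 8], [1] * 8,
                       [0, 0, 0, 0, 1, 1, 1, 1])
```

Because the reference distribution is generated by permuting labels rather than by a normal approximation, the same recipe carries over to statistics built for truncated or interval-censored data.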

6.
In many practical situations, complete data are not available in lifetime studies. Many of the available observations are right censored, giving survival information up to a noted time and not the exact failure times. This constitutes randomly censored data. In this paper, we consider the Maxwell distribution as a survival time model. The censoring time is also assumed to follow a Maxwell distribution with a different parameter. Maximum likelihood estimators and confidence intervals for the parameters are derived with randomly censored data. Bayes estimators are also developed with inverted gamma priors and a generalized entropy loss function. A Monte Carlo simulation study is performed to compare the developed estimation procedures. A real data example is given at the end of the study.
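As a hedged sketch of the frequentist part of this setup: with a Maxwell lifetime censored by an independent Maxwell censoring time, the likelihood for the lifetime parameter factors, so only density terms (events) and survival terms (censored cases) enter. scipy's `maxwell` distribution uses a scale parameterization; the helper below is our own illustration, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import maxwell

def fit_maxwell_censored(y, delta):
    """MLE of the Maxwell scale from censored data (delta=1 marks events)."""
    def nll(log_a):
        a = np.exp(log_a)              # optimize on the log scale, a > 0
        return -(maxwell.logpdf(y[delta == 1], scale=a).sum()
                 + maxwell.logsf(y[delta == 0], scale=a).sum())
    res = minimize_scalar(nll, bounds=(-5.0, 5.0), method="bounded")
    return float(np.exp(res.x))

rng = np.random.default_rng(42)
t = maxwell.rvs(scale=2.0, size=3000, random_state=rng)  # lifetimes
c = maxwell.rvs(scale=3.0, size=3000, random_state=rng)  # censoring times
y, delta = np.minimum(t, c), (t <= c).astype(int)        # observed data
a_hat = fit_maxwell_censored(y, delta)
```

The censoring parameter can be estimated the same way with the roles of `delta == 1` and `delta == 0` exchanged, since the full likelihood factors into the two pieces.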

7.
In many randomized clinical trials, the primary response variable, for example, the survival time, is not observed directly after the patients enroll in the study but rather observed after some period of time (lag time). It is often the case that such a response variable is missing for some patients due to censoring that occurs when the study ends before the patient's response is observed or when the patients drop out of the study. It is often assumed that censoring occurs at random, which is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimator of or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators for the treatment effect parameter which are applicable when the response variable is right censored. Baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than estimators which do not consider the auxiliary covariates.

8.
Tree-based methods are frequently used in studies with censored survival time. Their structure and ease of interpretability make them useful to identify prognostic factors and to predict conditional survival probabilities given an individual's covariates. The existing methods are tailor-made to deal with a survival time variable that is measured continuously. However, survival variables measured on a discrete scale are often encountered in practice. The authors propose a new tree construction method specifically adapted to such discrete-time survival variables. The splitting procedure can be seen as an extension, to the case of right-censored data, of the entropy criterion for a categorical outcome. The selection of the final tree is made through a pruning algorithm combined with a bootstrap correction. The authors also present a simple way of potentially improving the predictive performance of a single tree through bagging. A simulation study shows that single trees and bagged trees perform well compared to a parametric model. A real data example investigating the usefulness of personality dimensions in predicting early onset of cigarette smoking is presented. The Canadian Journal of Statistics 37: 17-32; 2009 © 2009 Statistical Society of Canada

9.
The randomization design used to collect the data provides the basis for the exact distributions of the permutation tests. The truncated binomial design is one of the designs commonly used to force balance in clinical trials and thereby eliminate experimental bias. In this article, we consider the exact distribution of the weighted log-rank class of tests for censored data under the truncated binomial design. A double saddlepoint approximation for p-values of this class is derived under the truncated binomial design. The speed and accuracy of the saddlepoint approximation over the normal asymptotic approximation facilitate the inversion of the weighted log-rank tests to determine nominal 95% confidence intervals for the treatment effect with right censored data.

10.
11.
Missing covariate data with censored outcomes pose a challenge in the analysis of clinical data, especially in small-sample settings. Multiple imputation (MI) techniques are popularly used to impute missing covariates, and the data are then analyzed through methods that can handle censoring. Techniques based on MI are also available to impute censored data, but they are not widely used in practice. In the present study, we applied a method based on multiple imputation by chained equations to impute missing values of covariates and also to impute censored outcomes using restricted survival time in small-sample settings. The completed data were then analyzed using linear regression models. Simulation studies and a real example of CHD data show that the present method produced better estimates and lower standard errors when applied to data having missing covariate values and censored outcomes than either the analysis of the data having censored outcomes but excluding cases with missing covariates, or the complete case analysis in which cases with missing covariate values and censored outcomes were excluded.
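The chained-equations idea the abstract builds on can be sketched in a few lines: each incomplete column is regressed on the others and its missing entries are refilled, cycling until the fills stabilize. This deterministic regression fill is a simplification of MICE (a full implementation adds a random draw to each fill and produces multiple imputed data sets); all names below are our own.

```python
import numpy as np

def chained_impute(X, n_iter=20):
    """Fill NaNs by cycling least-squares regressions of each
    incomplete column on all the others (deterministic MICE sketch)."""
    X, miss = X.copy(), np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):                     # start from column means
        X[miss[:, j], j] = col_mean[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(A[~miss[:, j]], X[~miss[:, j], j],
                                       rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta  # refill from the fit
    return X

rng = np.random.default_rng(7)
z = rng.normal(size=(200, 1))
X_full = np.column_stack([z + 0.1 * rng.normal(size=(200, 1)),
                          2 * z + 0.1 * rng.normal(size=(200, 1))])
X_miss = X_full.copy()
X_miss[:30, 0] = np.nan                              # knock out some values
X_imp = chained_impute(X_miss)
```

The abstract's method additionally imputes censored outcomes (via restricted survival time) as just another incomplete column in the same cycle, which is what lets ordinary linear regression be run on the completed data.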

12.

It is well known that the nonparametric maximum likelihood estimator (NPMLE) can severely underestimate the survival probabilities at early times for left-truncated and interval-censored (LT-IC) data. For arbitrarily truncated and censored data, Pan and Chappell (Stat Probab Lett 38:49–57, 1998a; Biometrics 54:1053–1060, 1998b) proposed a nonparametric estimator of the survival function, called the iterative Nelson estimator (INE). Their simulation study showed that the INE performed well in overcoming the under-estimation of the survival function from the NPMLE for LT-IC data. In this article, we revisit the problem of inconsistency of the NPMLE. We point out that the inconsistency is caused by the likelihood function of the left-censored observations, where the left-truncated variables are used as the left endpoints of censoring intervals. This can lead to severe underestimation of the survival function if the NPMLE is obtained using Turnbull's (J R Stat Soc B 38:290–295, 1976) EM algorithm. To overcome this problem, we propose a modified maximum likelihood estimator (MMLE) based on a modified likelihood function, where the left endpoints of censoring intervals for left-censored observations are the maximum of the left-truncated variables and the estimated left endpoint of the support of the left-censored times. Simulation studies show that the MMLE performs well for finite samples and outperforms both the INE and the NPMLE.


13.
This article presents a nonparametric Bayesian procedure for estimating a survival curve in a proportional hazard model when some of the data are censored on the left and some are censored on the right. The method works under the assumption of a Dirichlet process prior on the distribution of the observable variable. Strong consistency of the estimator is proved and an example is given. Finally, a simulation study is presented to analyze the behavior of the estimator.

14.
In oncology, progression-free survival time, which is defined as the minimum of the times to disease progression or death, often is used to characterize treatment and covariate effects. We are motivated by the desire to estimate the progression time distribution on the basis of data from 780 paediatric patients with choroid plexus tumours, which are a rare brain cancer where disease progression always precedes death. In retrospective data on 674 patients, the times to death or censoring were recorded but progression times were missing. In a prospective study of 106 patients, both times were recorded but there were only 20 non-censored progression times and 10 non-censored survival times. Consequently, estimating the progression time distribution is complicated by the problems that, for most of the patients, either the survival time is known but the progression time is not known, or the survival time is right censored and it is not known whether the patient's disease progressed before censoring. For data with these missingness structures, we formulate a family of Bayesian parametric likelihoods and present methods for estimating the progression time distribution. The underlying idea is that estimating the association between the time to progression and subsequent survival time from patients having complete data provides a basis for utilizing covariates and partial event time data of other patients to infer their missing progression times. We illustrate the methodology by analysing the brain tumour data, and we also present a simulation study.

15.
The analysis of time-to-event data typically makes the censoring at random assumption, ie, that, conditional on covariates in the model, the distribution of event times is the same, whether they are observed or unobserved (ie, right censored). When patients who remain in follow-up stay on their assigned treatment, then analysis under this assumption broadly addresses the de jure, or "while on treatment strategy" estimand. In such cases, we may well wish to explore the robustness of our inference to more pragmatic, de facto or "treatment policy strategy," assumptions about the behaviour of patients post-censoring. This is particularly the case when censoring occurs because patients change, or revert, to the usual (ie, reference) standard of care. Recent work has shown how such questions can be addressed for trials with continuous outcome data and longitudinal follow-up, using reference-based multiple imputation. For example, patients in the active arm may have their missing data imputed assuming they reverted to the control (ie, reference) intervention on withdrawal. Reference-based imputation has two advantages: (a) it avoids the user specifying numerous parameters describing the distribution of patients' postwithdrawal data and (b) it is, to a good approximation, information anchored, so that the proportion of information lost due to missing data under the primary analysis is held constant across the sensitivity analyses. In this article, we build on recent work in the survival context, proposing a class of reference-based assumptions appropriate for time-to-event data. We report a simulation study exploring the extent to which the multiple imputation estimator (using Rubin's variance formula) is information anchored in this setting and then illustrate the approach by reanalysing data from a randomized trial, which compared medical therapy with angioplasty for patients presenting with angina.

16.
This paper introduces a nonparametric approach for testing the equality of two or more survival distributions based on right censored failure times with missing population marks for the censored observations. The standard log-rank test is not applicable here because population membership information is not available for the right censored individuals. We propose to use imputed population marks for the censored observations, leading to fractional at-risk sets that can be used in a two sample censored data log-rank test. We demonstrate with a simple example that there can be a gain in power by imputing population marks for the censored observations (the proposed method) compared to simply removing them (which would also maintain the correct size). The performance of the imputed log-rank tests obtained this way is studied through simulation. We also obtain an asymptotic linear representation of our test statistic. Our testing methodology is illustrated using a real data set.

17.
We give chi-squared goodness-of-fit tests for parametric regression models such as accelerated failure time, proportional hazards, generalized proportional hazards, frailty models, transformation models, and models with cross-effects of survival functions. Random right censored data are used. The choice of random grouping intervals as functions of the data is considered.

18.
Income and wealth data are typically modelled by some variant of the classical Pareto distribution. Often, in practice, the observed data are truncated with respect to some unobserved covariate. In this paper, a hidden truncation formulation of this scenario is proposed and analysed. For this purpose, a bivariate Pareto (IV) distribution is assumed for the variable of interest and the unobserved covariate. Some important distributional properties of the resulting model, as well as associated inferential methods, are studied. Finally, an example is used to illustrate the results developed here. In this case, it is noted that hidden truncation on the left does not result in any new model, whereas hidden truncation on the right does. The properties and fit of such a model pose a challenging problem, and they are the focus of this work.

19.
The purpose of this paper is to present a semi-parametric estimation of a survival function when analyzing incomplete and doubly censored data. Under the assumption that the chance of censoring is not related to the individual's survivorship, we propose a consistent estimation of survival. The derived estimator treats the uncensored observations nonparametrically and uses parametric models for both right and left censored data. Some asymptotic properties and simulation studies are also presented in order to analyze the behavior of the proposed estimator.

20.
The linear regression model for right censored data, also known as the accelerated failure time model using the logarithm of survival time as the response variable, is a useful alternative to the Cox proportional hazards model. Empirical likelihood as a non-parametric approach has been demonstrated to have many desirable merits thanks to its robustness against model misspecification. However, the linear regression model with right censored data cannot directly benefit from the empirical likelihood for inferences mainly because of dependent elements in estimating equations of the conventional approach. In this paper, we propose an empirical likelihood approach with a new estimating equation for linear regression with right censored data. A nested coordinate algorithm with majorization is used for solving the optimization problems with non-differentiable objective function. We show that the Wilks' theorem holds for the new empirical likelihood. We also consider the variable selection problem with empirical likelihood when the number of predictors can be large. Because the new estimating equation is non-differentiable, a quadratic approximation is applied to study the asymptotic properties of penalized empirical likelihood. We prove the oracle properties and evaluate the properties with simulated data. We apply our method to a Surveillance, Epidemiology, and End Results small intestine cancer dataset.
