Similar Articles
20 similar articles found (search time: 31 ms)
1.
Oller, Gómez & Calle (2004) give a constant-sum condition for processes that generate interval-censored lifetime data. They show that in models satisfying this condition, the lifetime distribution can be estimated non-parametrically based on a well-known simplified likelihood. The author shows that this constant-sum condition is equivalent to the existence of an observation process, independent of the lifetimes, that yields the same probability distribution for the observed data as the underlying true process.

2.
The authors consider the empirical likelihood method for the regression model of mean quality-adjusted lifetime with right censoring. They show that an empirical log-likelihood ratio for the vector of the regression parameters is asymptotically a weighted sum of independent chi-squared random variables. They adjust this empirical log-likelihood ratio so that the limiting distribution is a standard chi-square and construct corresponding confidence regions. Simulation studies lead them to conclude that empirical likelihood methods outperform the normal approximation methods in terms of coverage probability. They illustrate their methods with a data example from a breast cancer clinical trial study.

3.
Many methods have been developed for regression analysis of current status data with noninformative censoring, and some approaches have also been proposed for semiparametric regression analysis of current status data with informative censoring. However, the existing approaches for the latter situation are mainly restricted to specific models such as the proportional hazards model and the additive hazards model. In this paper, we instead consider a general class of semiparametric linear transformation models and develop a sieve maximum likelihood estimation approach for inference. In the method, a copula model is employed to describe the informative censoring, that is, the relationship between the failure time of interest and the censoring time, and Bernstein polynomials are used to approximate the nonparametric functions involved. The asymptotic consistency and normality of the proposed estimators are established, and an extensive simulation study indicates that the proposed approach works well in practical situations. An illustrative example is also provided.

4.
Empirical Likelihood for Censored Linear Regression
In this paper we investigate the empirical likelihood method in a linear regression model when the observations are subject to random censoring. An empirical likelihood ratio for the slope parameter vector is defined, and its limiting distribution is shown to be a weighted sum of independent chi-square distributions. When no censoring is present, this reduces to the empirical likelihood for the linear regression model first studied by Owen (1991). Some simulation studies are presented to compare the empirical likelihood method with the normal approximation based method proposed in Lai et al. (1995); the empirical likelihood method was found to perform much better.
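As an illustrative aside, the empirical likelihood machinery these papers build on can be sketched in its simplest uncensored form: Owen's profile empirical likelihood ratio for a mean. The function name `el_log_ratio` and the Lagrange-multiplier root-finding bounds are this sketch's own choices, not anything from the paper; the censored-regression version adds weighting that is omitted here.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean of x at mu (a sketch).

    Profile weights are w_i = 1 / (n * (1 + lam*(x_i - mu))), with lam
    chosen so that sum_i w_i * (x_i - mu) = 0; the constraint sum w_i = 1
    then holds automatically.
    """
    x = np.asarray(x, dtype=float)
    z = x - mu
    if mu <= x.min() or mu >= x.max():
        return np.inf  # mu outside the convex hull of the data

    # lam must keep every 1 + lam*z_i positive
    lo, hi = -1.0 / z.max(), -1.0 / z.min()
    eps = 1e-8 * (hi - lo)
    g = lambda lam: np.sum(z / (1.0 + lam * z))  # monotone decreasing in lam
    lam = brentq(g, lo + eps, hi - eps)
    return 2.0 * np.sum(np.log1p(lam * z))
```

At the sample mean the multiplier is zero and the statistic vanishes; away from it the statistic grows, which is what makes chi-square calibration of confidence regions possible.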

5.
The authors study the empirical likelihood method for linear regression models. They show that when missing responses are imputed using least squares predictors, the empirical log-likelihood ratio is asymptotically a weighted sum of chi-square variables with unknown weights. They obtain an adjusted empirical log-likelihood ratio which is asymptotically standard chi-square and hence can be used to construct confidence regions. They also obtain a bootstrap empirical log-likelihood ratio and use its distribution to approximate that of the empirical log-likelihood ratio. A simulation study indicates that the proposed methods are comparable in terms of coverage probabilities and average lengths of confidence intervals, and perform better than a normal approximation based method.

6.
Mean survival time is often of inherent interest in medical and epidemiologic studies. In the presence of censoring and when covariate effects are of interest, Cox regression is the strong default, but mostly due to convenience and familiarity. When survival times are uncensored, covariate effects can be estimated as differences in mean survival through linear regression. Tobit regression can validly be performed through maximum likelihood when the censoring times are fixed (i.e., known for each subject, even in cases where the outcome is observed). However, Tobit regression is generally inapplicable when the response is subject to random right censoring. We propose Tobit regression methods based on weighted maximum likelihood which are applicable to survival times subject to both fixed and random censoring times. Under the proposed approach, known right censoring is handled naturally through the Tobit model, with inverse probability of censoring weighting used to overcome random censoring. Essentially, the re-weighted data are intended to represent what would have been observed in the absence of random censoring. We develop methods for estimating the Tobit regression parameter, then the population mean survival time. A closed form large-sample variance estimator is proposed for the regression parameter estimator, with a semiparametric bootstrap standard error estimator derived for the population mean. The proposed methods are easily implementable using standard software. Finite-sample properties are assessed through simulation. The methods are applied to a large cohort of patients wait-listed for kidney transplantation.
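Inverse probability of censoring weighting, the general device this abstract relies on, can be sketched independently of the Tobit model: estimate the censoring survivor function G by Kaplan-Meier applied to the flipped indicators, then weight each observed event by 1/G(t−). The function name `ipcw_weights` and the tie convention are this sketch's assumptions, not the authors' implementation.

```python
import numpy as np

def ipcw_weights(t, delta):
    """Inverse-probability-of-censoring weights (a minimal sketch).

    t: observed times; delta: 1 if the event was observed, 0 if censored.
    Estimates the censoring survivor function G by Kaplan-Meier on the
    censoring indicators (1 - delta), then weights each event by 1/G(t-);
    censored observations get weight 0. Ties are broken by input order.
    """
    t = np.asarray(t, float)
    delta = np.asarray(delta, int)
    n = len(t)
    order = np.argsort(t, kind="stable")
    G_left = np.empty(n)            # G(t_i-) for each subject
    G, at_risk = 1.0, n
    for i in order:                 # walk through subjects in time order
        G_left[i] = G
        if delta[i] == 0:           # a "censoring event" updates G
            G *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return np.where(delta == 1, 1.0 / G_left, 0.0)
```

A useful sanity check is that the weights of the observed events sum to approximately the sample size, reflecting that the re-weighted events stand in for the full, uncensored sample.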

7.
This paper considers the design of accelerated life test (ALT) sampling plans under Type I progressive interval censoring with random removals. We assume that the lifetime of products follows a Weibull distribution. Two levels of constant stress higher than the use condition are used. The sample size and the acceptability constant that satisfy given levels of producer's risk and consumer's risk are found. In particular, the optimal stress level and the allocation proportion are obtained by minimizing the generalized asymptotic variance of the maximum likelihood estimators of the model parameters. Furthermore, for validation purposes, a Monte Carlo simulation is conducted to assess the true probability of acceptance for the derived sampling plans.
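The sampling-plan calculations above rest on censored Weibull maximum likelihood. As a much-simplified sketch (plain Type I time censoring at a single stress level, not the paper's progressive interval scheme with removals), the censored log-likelihood sums log-density terms for failures and log-survivor terms for censored units; the function name and optimizer choice here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_mle_censored(t, delta):
    """MLE of Weibull (shape k, scale lam) under right censoring (a sketch).

    Log-likelihood: sum over events of log f(t; k, lam)
                  + sum over censored of log S(t; k, lam).
    """
    t = np.asarray(t, float)
    delta = np.asarray(delta, int)

    def nll(theta):
        k, lam = np.exp(theta)               # log scale keeps both positive
        z = (t / lam) ** k
        logf = np.log(k / lam) + (k - 1) * np.log(t / lam) - z
        logS = -z
        return -np.sum(np.where(delta == 1, logf, logS))

    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return np.exp(res.x)                     # (shape, scale)

# simulated Type I censoring at time c = 1.5
rng = np.random.default_rng(42)
true_k, true_lam, c = 2.0, 1.0, 1.5
x = true_lam * rng.weibull(true_k, size=2000)
t = np.minimum(x, c)
delta = (x <= c).astype(int)
k_hat, lam_hat = weibull_mle_censored(t, delta)
```

With about 10% of units censored at c = 1.5, the estimates land close to the true shape 2.0 and scale 1.0, which is the behavior the asymptotic variance calculations in the paper quantify.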

8.
This paper develops an objective Bayesian analysis method for estimating unknown parameters of the half-logistic distribution when a sample is available from the progressively Type-II censoring scheme. Noninformative priors such as Jeffreys and reference priors are derived. In addition, the derived priors are checked to determine whether they satisfy probability-matching criteria. The Metropolis–Hastings algorithm is applied to generate Markov chain Monte Carlo samples from the posterior density functions, because the marginal posterior density function of each parameter cannot be expressed in explicit form. Monte Carlo simulations are conducted to investigate frequentist properties of the estimators under the noninformative priors. For illustration purposes, a real data set is presented, and the quality of the models under the noninformative priors is evaluated through posterior predictive checking.
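A random-walk Metropolis–Hastings sampler of the kind the abstract invokes can be sketched generically; the toy target below is a Gamma posterior with known mean, chosen purely so the output is checkable, and the function name `rw_metropolis` is this sketch's own.

```python
import numpy as np

def rw_metropolis(logpost, x0, n_iter, step, rng):
    """Random-walk Metropolis sampler for a 1-D log-posterior (a sketch)."""
    x, lp = x0, logpost(x0)
    out = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.normal()
        lp_prop = logpost(prop)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x
    return out

# toy target: Gamma(shape=3, rate=2), so the posterior mean is 3/2
def logpost(x):
    return -np.inf if x <= 0 else 2.0 * np.log(x) - 2.0 * x

rng = np.random.default_rng(0)
draws = rw_metropolis(logpost, x0=1.0, n_iter=20000, step=1.0, rng=rng)
post_mean = draws[2000:].mean()    # discard burn-in
```

For the half-logistic model the only change is the log-posterior: the noninformative prior plus the progressively censored likelihood replace the toy Gamma density.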

9.
10.
The authors derive the asymptotic mean and bias of Kendall's tau and Spearman's rho in the presence of left censoring in the bivariate Gaussian copula model. They show that tie corrections for left censoring bring the values of these coefficients closer to zero. They also present a bias reduction method and illustrate it through two applications.
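The bias analysis above is relative to the classical Gaussian-copula identities τ = (2/π) arcsin(ρ) and ρ_S = (6/π) arcsin(ρ/2), which hold without censoring; a quick uncensored simulation check of the tau identity:

```python
import numpy as np
from scipy import stats

def tau_from_rho(rho):
    """Kendall's tau implied by a Gaussian copula with correlation rho."""
    return 2.0 / np.pi * np.arcsin(rho)

def rho_s_from_rho(rho):
    """Spearman's rho implied by a Gaussian copula with correlation rho."""
    return 6.0 / np.pi * np.arcsin(rho / 2.0)

# simulate uncensored bivariate normal data and compare the sample tau
rng = np.random.default_rng(1)
rho = 0.6
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=4000)
tau_hat, _ = stats.kendalltau(xy[:, 0], xy[:, 1])
```

Left censoring introduces ties at the detection limit, and the paper's point is that the tie-corrected sample coefficients then sit systematically closer to zero than these identities predict.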

11.
In many randomized clinical trials, the primary response variable, for example the survival time, is not observed directly after the patients enroll in the study but rather after some period of time (lag time). It is often the case that such a response variable is missing for some patients due to censoring, which occurs when the study ends before the patient's response is observed or when the patient drops out of the study. It is often assumed that censoring occurs at random, which is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators for the treatment effect parameter which are applicable when the response variable is right censored. Baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than estimators that do not use the auxiliary covariates.

12.
For the past several decades, nonparametric and semiparametric modeling for conventional right-censored survival data has been investigated intensively under a noninformative censoring mechanism. However, these methods may not be applicable for analyzing right-censored survival data that arise from prevalent cohorts when the failure times are subject to length-biased sampling. This review article is intended to provide a summary of some newly developed methods as well as established methods for analyzing length-biased data.

13.
This paper considers the Bayesian model selection problem in lifetime models using Type-II censored data. In particular, the intrinsic Bayes factors are calculated for log-normal, exponential, and Weibull lifetime models using noninformative priors under Type-II censoring. Numerical examples are given to illustrate the results.

14.
For the lifetime (or negative) exponential distribution, the trimmed likelihood estimator has been shown to be explicit, taking the form of a β-trimmed mean which is representable as an estimating functional that is both weakly continuous and Fréchet differentiable, and hence qualitatively robust at the parametric model. It also has high efficiency at the model. This robustness is in contrast to the maximum likelihood estimator (MLE), which involves the usual mean and is not robust to contamination in the upper tail of the distribution. When there is known right censoring, it may be perceived that the MLE, the most asymptotically efficient estimator, is protected from the effects of 'outliers' due to censoring. We demonstrate that this is not the case generally and, based on the functional form of the estimators, suggest a hybrid estimator that incorporates the best features of both the MLE and the β-trimmed mean. Additionally, we study the pure trimmed likelihood estimator for censored data and show that it can be easily calculated and that the censored observations are not always trimmed. The different trimmed estimators are compared in a modest simulation study.
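The robustness contrast the abstract describes can be illustrated numerically. The sketch below uses one common explicit variant of a trimmed estimator for an exponential mean: keep the smallest (1 − β) fraction of observations and treat the trimmed ones as right-censored at the largest retained value, which yields the closed-form censored-exponential MLE. This is an assumption of the sketch; the paper's exact β-trimmed likelihood estimator may differ in detail.

```python
import numpy as np

def trimmed_exp_mean(x, beta=0.1):
    """Robust estimate of an exponential mean (a sketch).

    Keeps the k = ceil((1 - beta) * n) smallest observations and treats
    the trimmed ones as right-censored at the k-th order statistic,
    giving the explicit censored-exponential MLE:
        (sum of k smallest + (n - k) * x_(k)) / k.
    """
    x = np.sort(np.asarray(x, float))
    n = len(x)
    k = int(np.ceil((1.0 - beta) * n))
    return (x[:k].sum() + (n - k) * x[k - 1]) / k

rng = np.random.default_rng(7)
clean = rng.exponential(scale=1.0, size=1000)            # true mean 1.0
contaminated = np.concatenate([clean, np.full(50, 50.0)])  # upper-tail outliers
mle = contaminated.mean()                  # sample mean, badly inflated
robust = trimmed_exp_mean(contaminated, beta=0.1)
```

Here the sample mean is dragged above 3 by 5% contamination, while the trimmed estimate stays close to the true mean of 1, the bounded-influence behavior the abstract attributes to the β-trimmed mean.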

15.
We consider a regression analysis of longitudinal data in the presence of outcome-dependent observation times and informative censoring. Existing approaches commonly require a correct specification of the joint distribution of longitudinal measurements, the observation time process, and the informative censoring time under the joint modeling framework, and can be computationally cumbersome due to the complex form of the likelihood function. In view of these issues, we propose a semiparametric joint regression model and construct a composite likelihood function based on a conditional order statistics argument. As a major feature of our proposed methods, the aforementioned joint distribution is not required to be specified, and the random effect in the proposed joint model is treated as a nuisance parameter. Consequently, the derived composite likelihood bypasses the need to integrate over the random effect and offers the advantage of easy computation. We show that the resulting estimators are consistent and asymptotically normal. We use simulation studies to evaluate the finite-sample performance of the proposed method and apply it to a study of weight loss data that motivated our investigation.

16.
Bayesian alternatives to classical tests for several testing problems are considered. One-sided and two-sided sets of hypotheses are tested concerning an exponential parameter, a binomial proportion, and a normal mean. Hierarchical Bayes and noninformative Bayes procedures are compared with the appropriate classical procedure, either the uniformly most powerful test or the likelihood ratio test, in the different situations. The hierarchical prior employed is the conjugate prior at the first stage, with the mean being the test parameter, and a noninformative prior at the second stage for the hyperparameter(s) of the first-stage prior. Fair comparisons are attempted, where 'fair' means that the probability of making a type I error is approximately the same for the different testing procedures; once this condition is satisfied, the powers of the different tests are compared: the larger the power, the better the test. This comparison is difficult in the two-sided case due to the unsurprising discrepancy between Bayesian and classical measures of evidence that has been discussed for years. The hierarchical Bayes tests appear to compete well with the typical classical test in the one-sided cases.

17.
Markers, which are prognostic longitudinal variables, can be used to replace some of the information lost due to right censoring. They may also be used to remove or reduce bias due to informative censoring. In this paper, the authors propose novel methods for using markers to increase the efficiency of log-rank tests and hazard ratio estimation, as well as parametric estimation. They propose a "plug-in" methodology that consists of writing the test statistic or estimate of interest as a functional of Kaplan–Meier estimators. The latter are then replaced by an efficient estimator of the survival curve that incorporates information from markers. Using simulations, the authors show that the resulting estimators and tests can be up to 30% more efficient than the usual procedures, provided that the marker is highly prognostic and that the frequency of censoring is high.
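The plug-in methodology starts from the ordinary Kaplan–Meier estimator, which the authors' marker-based survival estimator then replaces. For reference, a minimal Kaplan–Meier sketch (without the marker-based efficiency gain; the function name is this sketch's own):

```python
import numpy as np

def kaplan_meier(t, delta):
    """Kaplan-Meier survivor estimate (a minimal sketch).

    t: observed times; delta: 1 = event, 0 = censored.
    Returns the distinct event times and S(t) just after each of them.
    """
    t = np.asarray(t, float)
    delta = np.asarray(delta, int)
    event_times = np.unique(t[delta == 1])
    surv, s = [], 1.0
    for u in event_times:
        d = np.sum((t == u) & (delta == 1))   # events at u
        r = np.sum(t >= u)                    # at risk just before u
        s *= 1.0 - d / r                      # multiply in the survival factor
        surv.append(s)
    return event_times, np.array(surv)
```

Any statistic written as a functional of this curve, a log-rank numerator, a restricted mean, a hazard ratio estimate, can then have the curve swapped for the marker-augmented estimator, which is exactly the plug-in idea.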

18.
Length-biased sampling data are often encountered in studies of economics, industrial reliability, epidemiology, genetics and cancer screening. The complication of this type of data is due to the fact that the observed lifetimes suffer from left truncation and right censoring, where the left truncation variable has a uniform distribution. For the Cox proportional hazards model, Huang & Qin (Journal of the American Statistical Association, 107, 2012, p. 107) proposed a composite partial likelihood method which not only has the simplicity of the popular partial likelihood estimator but can also be easily implemented in standard statistical software. The accelerated failure time model has become a useful alternative to the Cox proportional hazards model. In this paper, using the composite partial likelihood technique, we study this model with length-biased sampling data. The proposed method has a very simple form and is robust when the assumption that the censoring time is independent of the covariate is violated. To ease the difficulty of calculations when solving the non-smooth estimating equation, we use a kernel smoothed estimation method (Heller, Journal of the American Statistical Association, 102, 2007, p. 552). Large sample results and a resampling method for the variance estimation are discussed. Some simulation studies are conducted to compare the performance of the proposed method with other existing methods. A real data set is used for illustration.

19.
The censoring principle is shown to imply a principle similar to the likelihood principle. Further, it is demonstrated that the censoring principle and a weak form of the conditionality principle jointly imply the likelihood principle.

20.
Exact confidence interval estimation for accelerated life regression models with censored smallest extreme value (or Weibull) data is often impractical. This paper evaluates the accuracy of approximate confidence intervals based on the asymptotic normality of the maximum likelihood estimator, the asymptotic χ2 distribution of the likelihood ratio statistic, mean and variance correction to the likelihood ratio statistic, and the so-called Bartlett correction to the likelihood ratio statistic. The Monte Carlo evaluations under various degrees of time censoring show that uncorrected likelihood ratio intervals are very accurate in situations with heavy censoring. The benefits of mean and variance correction to the likelihood ratio statistic are only realized with light or no censoring. Bartlett correction tends to result in conservative intervals. Intervals based on the asymptotic normality of maximum likelihood estimators are anticonservative and should be used with much caution.
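The Wald-versus-likelihood-ratio contrast evaluated above can be sketched in the simplest censored-lifetime setting: an exponential rate under time censoring, where the log-likelihood is l(λ) = d·log λ − λT with d events and total time on test T. This is a toy stand-in for the paper's Weibull regression setup, and the function name is this sketch's own.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, norm

def exp_intervals(d, T, level=0.95):
    """Wald and likelihood-ratio intervals for an exponential rate (a sketch).

    d: number of observed events; T: total time on test.
    l(lam) = d*log(lam) - lam*T, so lam_hat = d / T.
    """
    lam_hat = d / T

    # Wald: symmetric, lam_hat +/- z * se with se = lam_hat / sqrt(d)
    z = norm.ppf(0.5 + level / 2.0)
    se = lam_hat / np.sqrt(d)
    wald = (lam_hat - z * se, lam_hat + z * se)

    # LR: solve 2*(l(lam_hat) - l(lam)) = chi2 cutoff on each side of lam_hat
    cut = chi2.ppf(level, df=1)
    dev = lambda lam: 2.0 * (d * np.log(lam_hat / lam)
                             - (lam_hat - lam) * T) - cut
    lo = brentq(dev, 1e-10 * lam_hat, lam_hat)
    hi = brentq(dev, lam_hat, 100.0 * lam_hat)
    return wald, (lo, hi)
```

Unlike the symmetric Wald interval, the likelihood-ratio interval is asymmetric about the estimate and respects the parameter's positivity, one reason the simulations above find it more accurate under heavy censoring.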

