Similar Documents
A total of 20 similar documents were found.
1.
In biostatistical applications interest often focuses on estimating the distribution of the time T between two consecutive events. If the initial event time is observed and the subsequent event time is only known to be larger or smaller than an observed point in time, then the data are described by the well-understood singly censored current status model, also known as interval censored data, case I. Jewell et al. (1994) extended this current status model by allowing the initial time to be unobserved, but with its distribution over an observed interval [A, B] known to be uniform; the data are referred to as doubly censored current status data. These authors used this model to handle applications in AIDS partner studies, focusing on the NPMLE of the distribution G of T. The model is a submodel of the current status model, but the distribution G is essentially the derivative of the distribution of interest F in the current status model. In this paper we establish that the NPMLE of G is uniformly consistent and that the resulting estimators of the n^{1/2}-estimable parameters are efficient. We propose an iterative weighted pool-adjacent-violators algorithm to compute the estimator. It is also shown that, without smoothness assumptions, the NPMLE of F converges at rate n^{-2/5} in L_2-norm, while the NPMLE of F in the nonparametric current status data model converges at rate n^{-1/3} in L_2-norm, which shows that there is a substantial gain in using the submodel information.
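For orientation, the sketch below (Python, all names hypothetical) shows the pool-adjacent-violators step on which such computations rest: for plain singly censored current status data, the NPMLE of F at the ordered monitoring times is the isotonic regression of the censoring indicators. It is only an illustrative building block under that simpler model, not the paper's full iterative weighted scheme for the doubly censored submodel.

```python
import numpy as np

def pava(y, w):
    """Weighted pool-adjacent-violators: isotonic (non-decreasing) regression of y."""
    vals, wts, counts = [], [], []
    for yi, wi in zip(map(float, y), map(float, w)):
        vals.append(yi); wts.append(wi); counts.append(1)
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            tot = wts[-2] + wts[-1]
            vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / tot
            wts[-2] = tot
            counts[-2] += counts[-1]
            vals.pop(); wts.pop(); counts.pop()
    return np.repeat(vals, counts)

def npmle_current_status(c, delta):
    """NPMLE of F for singly censored current status data:
    c = monitoring times, delta = 1{T <= c}; returns F-hat at the sorted c."""
    c, delta = np.asarray(c, float), np.asarray(delta, float)
    order = np.argsort(c)
    return c[order], pava(delta[order], np.ones(len(c)))
```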

2.
In biostatistical applications interest often focuses on estimating the distribution of the time T between two consecutive events. If the initial event time is observed and the subsequent event time is only known to be larger or smaller than an observed monitoring time C, then the data conform to the well-understood singly censored current status model, also known as interval censored data, case I. Additional covariates can be used to allow for dependent censoring and to improve estimation of the marginal distribution of T. Assuming a wrong model for the conditional distribution of T, given the covariates, will lead to an inconsistent estimator of the marginal distribution. On the other hand, the nonparametric maximum likelihood estimator of F_T requires splitting the sample into several subsamples, each corresponding to a particular value of the covariates, computing the NPMLE for every subsample, and then taking an average. With a few continuous covariates the performance of the resulting estimator is typically miserable. In van der Laan and Robins (1996) a locally efficient one-step estimator is proposed for smooth functionals of the distribution of T, assuming nothing about the conditional distribution of T, given the covariates, but assuming a model for censoring, given the covariates. The estimators are asymptotically linear if the censoring mechanism is estimated correctly. The estimator also uses an estimator of the conditional distribution of T, given the covariates. If this estimate is consistent, then the estimator is efficient; if it is inconsistent, then the estimator is still consistent and asymptotically normal. In this paper we show that the estimators can also be used to estimate the distribution function in a locally optimal way. Moreover, we show that the proposed estimator can be used to estimate the distribution based on interval censored data (T is now known to lie between two observed points) in the presence of covariates. The resulting estimator also has a known influence curve, so that asymptotic confidence intervals are directly available. In particular, one can apply our proposal to interval censored data without covariates. In Geskus (1992) the information bound for interval censored data with two uniformly distributed monitoring times was computed at the uniform distribution for T. We show that the relative efficiency of our proposal with respect to this optimal bound equals 0.994, which is also reflected in finite sample simulations. Finally, the good practical performance of the estimator is shown in a simulation study. This revised version was published online in July 2006 with corrections to the Cover Date.

3.
In this article we discuss Bayesian estimation for the Kumaraswamy distribution based on three different types of censored samples. We obtain Bayes estimates of the model parameters under two different loss functions (LINEX and quadratic) for each censoring scheme (left censoring, singly Type-II censoring, and doubly Type-II censoring), using a Monte Carlo simulation study with posterior risk plots for different choices of the model parameters. Elicitation of the hyperparameters under the dependent prior setup is also discussed in detail. If one of the shape parameters is known, then closed-form expressions for the Bayes estimates and the corresponding posterior risk are available under both loss functions. To demonstrate the efficacy of the proposed method, a simulation study is conducted, and the performance of the estimators is quite interesting. For illustrative purposes, real-life data are considered.
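As a small illustration of the two loss functions named above (not the paper's specific Kumaraswamy computations), recall that under quadratic loss the Bayes estimate is the posterior mean, while under LINEX loss with shape c it is -(1/c) log E[exp(-c*theta) | data]; both can be read off directly from posterior draws. The sketch below (Python, hypothetical names and toy draws) does exactly that.

```python
import numpy as np

def bayes_estimates(posterior_draws, c=1.0):
    """Bayes estimates of a scalar parameter from posterior draws:
    quadratic loss       -> posterior mean,
    LINEX loss (shape c) -> -(1/c) * log E[exp(-c * theta) | data]."""
    theta = np.asarray(posterior_draws, dtype=float)
    est_quadratic = theta.mean()
    est_linex = -np.log(np.mean(np.exp(-c * theta))) / c
    return est_quadratic, est_linex

# toy usage with hypothetical posterior draws of a shape parameter
draws = np.random.default_rng(0).gamma(shape=2.0, scale=1.0, size=5000)
print(bayes_estimates(draws, c=0.5))
```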

4.
We derive an identity for nonparametric maximum likelihood estimators (NPMLE) and regularized MLEs in censored data models, which expresses the standardized maximum likelihood estimator in terms of the standardized empirical process. This identity provides an effective starting point for proving both consistency and efficiency of the NPMLE and regularized MLE. The identity and the corresponding method for proving efficiency are illustrated for the NPMLE in the univariate right-censored data model, the regularized MLE in the current status data model, and an implicit NPMLE based on a mixture of right-censored and current status data. Furthermore, a general algorithm for estimating the limiting variance of the NPMLE is provided. This revised version was published online in July 2006 with corrections to the Cover Date.

5.
Double censoring often occurs in registry studies when left censoring is present in addition to right censoring. In this work, we examine estimation of Aalen's nonparametric regression coefficients based on doubly censored data. We propose two estimation techniques. The first type of estimator, which includes the ordinary least squares (OLS) and weighted least squares (WLS) estimators, is obtained using martingale arguments. The second type of estimator, the maximum likelihood estimator (MLE), is obtained via an expectation-maximization (EM) algorithm that treats the survival times of left-censored observations as missing. Asymptotic properties, including uniform consistency and weak convergence, are established for the MLE. Simulation results demonstrate that the MLE is more efficient than the OLS and WLS estimators.
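For reference, the OLS estimator in Aalen's additive hazards model cumulates least-squares increments over the observed event times; the sketch below (Python, hypothetical function name) implements the standard right-censored version, which is the building block that the doubly censored estimators in this paper extend.

```python
import numpy as np

def aalen_ols(time, event, Z):
    """OLS estimator of the cumulative regression functions B(t) in Aalen's
    additive hazards model for right-censored data.
    time : (n,) observed times; event : (n,) 1 = event, 0 = censored;
    Z : (n, p) covariates (an intercept column is added internally)."""
    time, event, Z = np.asarray(time, float), np.asarray(event, int), np.asarray(Z, float)
    n, p = Z.shape
    X = np.column_stack([np.ones(n), Z])            # design matrix with intercept
    event_times = np.sort(np.unique(time[event == 1]))
    B = np.zeros((len(event_times), p + 1))
    cum = np.zeros(p + 1)
    for k, t in enumerate(event_times):
        at_risk = (time >= t).astype(float)
        Xt = X * at_risk[:, None]                   # zero out subjects not at risk
        dN = ((time == t) & (event == 1)).astype(float)
        XtX = Xt.T @ Xt
        if np.linalg.matrix_rank(XtX) == p + 1:     # skip singular risk-set designs
            cum = cum + np.linalg.solve(XtX, Xt.T @ dN)
        B[k] = cum
    return event_times, B                           # step-function values of B-hat
```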

6.
The scaled (two-parameter) Type I generalized logistic distribution (GLD) is considered with a known shape parameter. The ML method does not yield an explicit estimator for the scale parameter, even in complete samples. In this article, we therefore construct a new linear estimator for the scale parameter, based on complete and doubly Type-II censored samples, by making linear approximations to the intractable terms of the likelihood equation using the least-squares (LS) method, a new approach to linearization. We call this the linear approximate maximum likelihood estimator (LAMLE). We also construct a LAMLE based on the Taylor series method of linear approximation and find that this estimator is slightly more biased than the one based on the LS method. A Monte Carlo simulation is used to investigate the performance of the LAMLE; it is found to be almost as efficient as the MLE, though more biased. We also compare the unbiased LAMLE with the BLUE based on the exact variances of the estimators, and interestingly this new unbiased LAMLE is found to be just as efficient as the BLUE in both complete and Type-II censored samples. Since the MLE is known to be asymptotically unbiased, in large samples we compare the unbiased LAMLE with the MLE and find that it is almost as efficient as the MLE. We also discuss interval estimation of the scale parameter from complete and Type-II censored samples. Finally, we present some numerical examples to illustrate the construction of the new estimators developed here.

7.
In this paper we introduce a new three-parameter exponential-type distribution. The new distribution is quite flexible and can be used effectively in modeling survival data and reliability problems. It can have constant, decreasing, increasing, upside-down bathtub and bathtub-shaped hazard rate functions. It also generalizes some well-known distributions. We discuss maximum likelihood estimation of the model parameters for complete and censored samples. Additionally, we formulate a new cure rate survival model by assuming that the number of competing causes of the event of interest has a Poisson distribution and that the time to this event follows the proposed distribution. Maximum likelihood estimation of the parameters of the new cure rate survival model is discussed for complete and censored samples. Two applications to real data are provided to illustrate the flexibility of the new model in practice.
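For context, this is the standard promotion-time (Poisson) cure rate construction: if N ~ Poisson(theta) latent competing causes have i.i.d. event times with distribution F, the population survival function is (generic notation, not specific to this paper's parametrization)

S_pop(t) = E[(1 - F(t))^N] = exp{-theta F(t)},

so the cured proportion is lim_{t -> infinity} S_pop(t) = exp(-theta).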

8.
Generalizations of current status data with applications
In estimation of a survival function, current status data arise when the only information available on individuals is their survival status at a single monitoring time. Here, we briefly review extensions of this data structure in two directions: (i) doubly censored current status data, where there is incomplete information on the origin of the failure time random variable, and (ii) current status information on more complicated stochastic processes. Simple examples of these data forms are presented for motivation.

9.
The maximum likelihood (ML) estimation of the location and scale parameters of an exponential distribution based on singly and doubly censored samples is given. When the sample is multiply censored (some middle observations being censored), however, the ML method does not admit explicit solutions. In this case we present a simple approximation to the likelihood equation and derive explicit estimators which are linear functions of order statistics. Finally, we present some examples to illustrate this method of estimation.
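To illustrate the kind of explicit ML solution referred to above, the sketch below (Python, hypothetical name) gives the well-known closed-form MLE of the one-parameter exponential mean under Type-II right censoring, i.e. when only the r smallest of n order statistics are observed; the paper itself treats the two-parameter location-scale model and multiply censored samples, for which this simple formula does not apply.

```python
import numpy as np

def exp_mean_mle_type2(x_observed, n):
    """Closed-form MLE of the exponential mean from a Type-II right-censored
    sample: x_observed holds the r smallest order statistics out of n items.
    theta_hat = [sum of observed lifetimes + (n - r) * largest observed] / r."""
    x = np.sort(np.asarray(x_observed, dtype=float))
    r = len(x)
    total_time_on_test = x.sum() + (n - r) * x[-1]
    return total_time_on_test / r
```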

10.
This paper deals with the estimation of the parameters of doubly truncated and singly truncated normal distributions when the truncation points are known. We derive, for these families, a necessary and sufficient condition for the maximum likelihood estimator (MLE) to be finite. Furthermore, the probability of the MLE being infinite is positive. A simulation study for single truncation is carried out to compare the modified maximum likelihood estimator and the mixed estimator.
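As a companion illustration (not the estimators compared in the paper), fitting a doubly truncated normal with known truncation points can be done by direct numerical maximization of the truncated-normal log-likelihood; the sketch below uses SciPy, with the truncation points A and B as hypothetical known values.

```python
import numpy as np
from scipy import stats, optimize

A, B = 0.0, 5.0   # known truncation points (hypothetical values)

def negloglik(params, x):
    """Negative log-likelihood of N(mu, sigma^2) doubly truncated to [A, B]."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                      # keep sigma positive
    a, b = (A - mu) / sigma, (B - mu) / sigma      # standardized truncation points
    return -np.sum(stats.truncnorm.logpdf(x, a, b, loc=mu, scale=sigma))

# simulate a doubly truncated sample and maximize the likelihood numerically
x = stats.truncnorm.rvs(-2.0, 3.0, loc=2.0, scale=1.0, size=300, random_state=1)
fit = optimize.minimize(negloglik, x0=[x.mean(), np.log(x.std())], args=(x,))
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
```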

11.
The existing estimation methods and goodness-of-fit tests for the Cox model mainly deal with right censored data, and they do not extend directly to other, more complicated types of censored data, such as doubly censored data, interval censored data, partly interval-censored data, bivariate right censored data, etc. In this article, we apply the empirical likelihood approach to the Cox model with complete samples, derive the semiparametric maximum likelihood estimators (SPMLE) of the Cox regression parameter and the baseline distribution function, and establish the asymptotic consistency of the SPMLE. Via the functional plug-in method, these results are extended in a unified approach to doubly censored data, partly interval-censored data, and bivariate data under univariate or bivariate right censoring. For the types of censored data mentioned, the estimation procedures developed here naturally lead to Kolmogorov-Smirnov goodness-of-fit tests for the Cox model. Some simulation results are presented.

12.
This paper proposes a semi-parametric modelling and estimating method for analysing censored survival data. The proposed method uses the empirical likelihood function to describe the information in the data, and formulates estimating equations to incorporate knowledge of the underlying distribution and regression structure. The method is more flexible than traditional methods such as parametric maximum likelihood estimation (MLE), Cox's (1972) proportional hazards model, the accelerated life test model, quasi-likelihood (Wedderburn, 1974) and generalized estimating equations (Liang & Zeger, 1986). This paper shows the existence and uniqueness of the proposed semi-parametric maximum likelihood estimates (SMLE) obtained from the estimating equations. The method is validated with known cases studied in the literature. Several finite-sample simulation and large-sample efficiency studies indicate that when the sample size is larger than 100 the SMLE is comparable with the parametric MLE; and in all case studies the SMLE is about 15% better than a parametric MLE with a mis-specified underlying distribution.

13.
The paper considers goodness-of-fit tests with right censored data or doubly censored data. The Fredholm Integral Equation (FIE) method proposed by Ren (1993) is implemented in simulation studies to estimate the null distribution of the Cramér-von Mises test statistics and the asymptotic covariance function of the self-consistent estimator of the lifetime distribution with right censored data or doubly censored data. We show that for fixed alternatives, the bootstrap method does not estimate the null distribution consistently for doubly censored data. For the right censored case, a comparison between the performance of FIE and the n out of n bootstrap shows that FIE gives a better estimate of the null distribution. The application of FIE to a set of right censored Channing House data and to a set of doubly censored breast cancer data is presented.

14.
Censoring of a longitudinal outcome often occurs when data are collected in a biomedical study where the interest is in the survival and/or longitudinal experiences of a study population. In the setting considered here, we encountered upper and lower censored data as the result of restrictions imposed on measurements from a kinetic model producing “biologically implausible” kidney clearances. The goal of this paper is to outline the use of a joint model to determine the association between a censored longitudinal outcome and a time-to-event endpoint. This paper extends Guo and Carlin's [6] paper to accommodate censored longitudinal data, in a commercially available software platform, by linking a mixed effects Tobit model to a suitable parametric survival distribution. Our simulation results showed that our joint Tobit model outperforms a joint model built on the more naïve or “fill-in” method for the longitudinal component, in which the upper and/or lower limits of censoring are replaced by the limit of detection. We illustrate the use of this approach with example data from the hemodialysis (HEMO) study [3] and examine the association between doubly censored kidney clearance values and survival.

15.
We formulate a new cure rate survival model by assuming that the number of competing causes of the event of interest has a Poisson distribution and that the time to this event has the generalized linear failure rate distribution. A new distribution for analyzing lifetime data is defined from the proposed cure rate model, and its quantile function as well as a general expansion for the moments are derived. We estimate the parameters of the model with cure rate in the presence of covariates for censored observations using maximum likelihood and derive the observed information matrix. We obtain the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and present some ways to perform global influence analysis. The usefulness of the proposed cure rate survival model is illustrated in an application to real data.
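For concreteness, a minimal sketch of the population survival function implied by this construction is given below (Python); the generalized linear failure rate CDF is written in the commonly used parametrization F(t) = [1 - exp(-(a t + b t^2/2))]^alpha, which is an assumption here and may differ from the paper's notation.

```python
import numpy as np

def glfr_cdf(t, a, b, alpha):
    """CDF of the generalized linear failure rate distribution,
    assumed parametrization: F(t) = [1 - exp(-(a*t + b*t**2/2))]**alpha, t > 0."""
    return (1.0 - np.exp(-(a * t + 0.5 * b * t ** 2))) ** alpha

def cure_rate_survival(t, theta, a, b, alpha):
    """Population survival of the Poisson (promotion-time) cure rate model:
    S_pop(t) = exp(-theta * F(t)); the cured fraction is exp(-theta)."""
    return np.exp(-theta * glfr_cdf(t, a, b, alpha))
```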

16.
Various types of failure-censored and accelerated life tests are commonly employed for life testing in some manufacturing industries and for highly reliable products. In this article, we consider the tampered failure rate model as one such model, which relates the distribution under the use condition to the distribution under the accelerated condition. It is assumed that the lifetimes of products under the use condition follow a generalized Pareto distribution. Estimation methods for the parameters, such as the graphical, moments, probability weighted moments, and maximum likelihood methods, are discussed based on progressively Type-I censored data. The determination of the optimal stress change time is discussed under two different criteria of optimality. Finally, a Monte Carlo simulation study is carried out to examine the performance of the estimation methods and the optimality criteria.
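For reference, the tampered failure rate model in its usual step-stress form assumes that changing the stress at time tau multiplies the hazard by an unknown tampering factor beta > 0 (generic notation; the exact setup under progressive Type-I censoring follows the paper):

lambda(t) = lambda_0(t) for 0 < t < tau,   lambda(t) = beta * lambda_0(t) for t >= tau,

where lambda_0 is the hazard under the use condition, here coming from the generalized Pareto lifetime model, and tau is the stress change time whose optimal choice is studied.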

17.
In the present article we propose the modified lambda family (MLF), namely the Freimer, Mudholkar, Kollia, and Lin (FMKL) parametrization of the generalized lambda distribution (GLD), as a model for censored data. Expressions for the probability weighted moments of the MLF are derived and used to estimate the parameters of the distribution, and the probability-weighted-moments estimation technique is modified accordingly. It is shown that the distribution provides a reasonable fit to a real censored data set.
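For reference, the FMKL parametrization is defined through its quantile function; the sketch below (Python, hypothetical name) evaluates it and can be used, e.g., to generate samples by the inverse-CDF method.

```python
import numpy as np

def fmkl_gld_quantile(u, lam1, lam2, lam3, lam4):
    """Quantile function of the FMKL generalized lambda distribution:
    Q(u) = lam1 + [ (u**lam3 - 1)/lam3 - ((1 - u)**lam4 - 1)/lam4 ] / lam2,
    for 0 < u < 1, lam2 > 0 and lam3, lam4 != 0."""
    u = np.asarray(u, dtype=float)
    return lam1 + ((u ** lam3 - 1.0) / lam3 - ((1.0 - u) ** lam4 - 1.0) / lam4) / lam2

# example: random variates via the inverse-CDF method
rng = np.random.default_rng(0)
sample = fmkl_gld_quantile(rng.uniform(size=1000), 0.0, 1.0, 0.2, 0.2)
```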

18.
Various solutions to the parameter estimation problem of a recently introduced multivariate Pareto distribution are developed and exemplified numerically. Namely, a density of the aforementioned multivariate Pareto distribution with respect to a dominating measure, rather than the corresponding Lebesgue measure, is specified and then employed to investigate the maximum likelihood estimation (MLE) approach. Also, in an attempt to fully enjoy the common shock origins of the multivariate model of interest, an adapted variant of the expectation-maximization (EM) algorithm is formulated and studied. The method of moments is discussed as a convenient way to obtain starting values for the numerical optimization procedures associated with the MLE and EM methods.

19.
For models with random effects or missing data, the likelihood function is sometimes intractable analytically but amenable to Monte Carlo approximation. To get a good approximation, the parameter value that drives the simulations should be sufficiently close to the maximum likelihood estimate (MLE), which unfortunately is unknown. Introducing a working prior distribution, we express the likelihood function as a posterior expectation and approximate it using posterior simulations. If the sample size is large, the sample information is likely to outweigh the prior specification and the posterior simulations will be concentrated around the MLE automatically, leading to a good approximation of the likelihood near the MLE. For smaller samples, we propose to use the current posterior as the next prior distribution to bring the posterior simulations closer to the MLE and hence improve the likelihood approximation. By using the technique of data duplication, we can simulate from the sharpened posterior distribution without actually updating the prior distribution. The suggested method works well in several test cases. A more complex example involving censored spatial data is also discussed.

20.
In this paper we address the problem of estimating a vector of regression parameters in the Weibull censored regression model. Our main objective is to provide natural adaptive estimators that significantly improve upon the classical procedures in situations where some of the predictors may or may not be associated with the response. In the context of two competing Weibull censored regression models (a full model and a candidate submodel), we consider an adaptive shrinkage estimation strategy that shrinks the full model maximum likelihood estimate in the direction of the submodel maximum likelihood estimate. We develop the properties of these estimators using the notion of asymptotic distributional risk. The shrinkage estimators are shown to have higher efficiency than the classical estimators for a wide class of models. Further, we consider a LASSO-type estimation strategy and compare its relative performance with the shrinkage estimators. Monte Carlo simulations reveal that when the true model is close to the candidate submodel, the shrinkage strategy performs better than the LASSO strategy when, and only when, there are many inactive predictors in the model. The shrinkage and LASSO strategies are applied to a real data set from the Veterans' Administration (VA) lung cancer study to illustrate the usefulness of the procedures in practice.
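For orientation, Stein-type shrinkage estimators of this kind are usually written in the generic form

beta_hat^S = beta_hat_SM + {1 - (p_2 - 2)/T_n} (beta_hat_FM - beta_hat_SM),

where beta_hat_FM and beta_hat_SM are the full-model and submodel maximum likelihood estimates, T_n is a test statistic for the restriction defining the submodel, and p_2 is the number of restricted coefficients; the paper's exact estimator, including any positive-part correction, may differ from this illustrative form.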
