Similar Articles
20 similar articles found (search time: 31 ms).
1.
ABSTRACT

In the context of failure time data observed over the long run, dependent observations that may be censored are commonly encountered in practice. The main objective of this paper is to make inference about the common marginal distribution of the failure times. To this end, a nonparametric estimator, namely the Nelson-Aalen estimator, is modified to incorporate the dependence among the observations. The modified estimator is a weighted moving average (WMA) version of the existing estimator used for independent data. It is shown that the new version is better in the sense of minimizing the one-step-ahead forecast errors. The new estimator can also be used as a crude check of independence among observations.
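For reference, the classical Nelson-Aalen estimator for independent right-censored data (the starting point that the paper modifies with a weighted moving average) can be written in a few lines. The sketch below is a minimal illustration of that classical estimator only, not the modified WMA version; the function name and inputs are chosen for this example.

```python
import numpy as np

def nelson_aalen(times, events):
    """Classical Nelson-Aalen cumulative hazard estimate for
    independent right-censored data: H(t) = sum of d_i / n_i over
    distinct failure times up to t."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)       # 1 = failure, 0 = censored
    fail_times = np.unique(times[events == 1])
    n_at_risk = np.array([(times >= t).sum() for t in fail_times])
    d = np.array([((times == t) & (events == 1)).sum() for t in fail_times])
    return fail_times, np.cumsum(d / n_at_risk)

# Example: the estimate jumps by d_i / n_i at each distinct failure time.
t, H = nelson_aalen([2, 3, 3, 5, 8], [1, 0, 1, 1, 0])
```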

2.
A sequential procedure is constructed to provide a fixed-accuracy estimator for the number of faults in a system. This paper focuses on the case when faults are homogeneous. However, the method can be adapted to other models by choosing a more robust estimator. The accuracy of the estimator depends on the failure intensity, the length of the testing period, and the total number of faults in the system. Simulations illustrate the performance of the proposed procedure. The method is applied to an information system failure dataset.

3.
A monotonic, pointwise unbiased, and uniformly consistent estimator for the survival function of failure time under the random censorship model is proposed. This estimator is closely related to the Kaplan-Meier, the Nelson-Aalen, and the reduced-sample estimators. Large-sample properties of the new estimator are discussed.

4.
We propose correcting for non-compliance in randomized trials by estimating the parameters of a class of semiparametric failure time models, the rank-preserving structural failure time models, using a class of rank estimators. These models are the structural or strong version of the “accelerated failure time model with time-dependent covariates” of Cox and Oakes (1984). In this paper we develop a large sample theory for these estimators, derive the optimal estimator within this class, and briefly consider the construction of “partially adaptive” estimators whose efficiency may approach that of the optimal estimator. We show that in the absence of censoring the optimal estimator attains the semiparametric efficiency bound for the model.

5.
Bagai and Prakasa Rao [Analysis of survival data with two dependent competing risks. Biometr J. 1992;7:801–814] considered a competing risks model with two dependent risks. The two risks are initially independent, but dependence arises because of the additive effect of an independent risk on the two initially independent risks. They showed that the ratio of the failure rates is identifiable in the nonparametric set-up. In this paper, we consider it as a measurement error/deconvolution problem and suggest a nonparametric kernel-type estimator for the ratio of the two failure rates. The local error properties of the proposed estimator are studied. Simulation studies show the efficacy of the proposed estimator.

6.
In many clinical studies where time to failure is of primary interest, patients may fail or die from one of many causes, and the failure time can be right censored. In some circumstances, it might also be the case that patients are known to die but the cause of death information is not available for some patients. Under the assumption that cause of death is missing at random, we compare the Goetghebeur and Ryan (1995, Biometrika, 82, 821–833) partial likelihood approach with the Dewanji (1992, Biometrika, 79, 855–857) partial likelihood approach. We show that the estimator for the regression coefficients based on the Dewanji partial likelihood is not only consistent and asymptotically normal, but also semiparametric efficient. While the Goetghebeur and Ryan estimator is more robust than the Dewanji partial likelihood estimator against misspecification of proportional baseline hazards, the Dewanji partial likelihood estimator allows the probability of missing cause of failure to depend on covariate information without the need to model the missingness mechanism. Tests for proportional baseline hazards are also suggested and a robust variance estimator is derived.

7.
This article concerns nonparametric estimation of association between bivariate failure times. In the presence of independent right censoring, the support for failure time variates may be restricted and measures of dependence over a finite failure time region may be of particular interest. To this end, the reciprocal cross ratio function, weighted by the bivariate failure time density, is proposed as a summary measure of dependence over a failure time region. This 'relative risk' estimator is shown to be consistent and asymptotically normally distributed, with a consistent bootstrap variance estimator. A finite-region version of Kendall's tau, which is suitable for censored failure time data, is also proposed, and corresponding asymptotic distribution theory is noted. The accuracy of these asymptotic approximations is studied in simulations and an illustration is provided.

8.
In some applications, the failure time of interest is the time from an originating event to a failure event, while both event times are interval censored. We propose fitting Cox proportional hazards models to this type of data using a spline-based sieve maximum marginal likelihood, where the time to the originating event is integrated out in the empirical likelihood function of the failure time of interest. This greatly reduces the complexity of the objective function compared with the fully semiparametric likelihood. The dependence of the time of interest on time to the originating event is induced by including the latter as a covariate in the proportional hazards model for the failure time of interest. The use of splines results in a higher rate of convergence of the estimator of the baseline hazard function compared with the usual non-parametric estimator. The computation of the estimator is facilitated by a multiple imputation approach. Asymptotic theory is established and a simulation study is conducted to assess its finite sample performance. It is also applied to analyzing a real data set on AIDS incubation time.

9.
Failure times are often right-censored and left-truncated. In this paper we give a mass redistribution algorithm for right-censored and/or left-truncated failure time data. We show that this algorithm yields the Kaplan-Meier estimator of the survival probability. One application of this algorithm in modeling the subdistribution hazard for competing risks data is studied. We give a product-limit estimator of the cumulative incidence function via modeling the subdistribution hazard. We show by induction that this product-limit estimator is identical to the left-truncated version of the Aalen-Johansen (1978) estimator for the cumulative incidence function.
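As a point of reference, the product-limit (Kaplan-Meier) estimator for left-truncated, right-censored data that the redistribution algorithm reproduces can be sketched as follows. This is a minimal illustration of the standard estimator, not the paper's mass-redistribution algorithm or its competing-risks extension; the function name and inputs are illustrative.

```python
import numpy as np

def kaplan_meier_lt(entry, times, events):
    """Product-limit estimate of S(t) for left-truncated, right-censored
    data.  `entry` is the delayed-entry (truncation) time, `times` the
    observed exit time, `events` 1 = failure, 0 = censored."""
    entry = np.asarray(entry, dtype=float)
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    fail_times = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in fail_times:
        n_t = np.sum((entry < t) & (times >= t))       # number at risk at t
        d_t = np.sum((times == t) & (events == 1))     # failures at t
        s *= 1.0 - d_t / n_t
        surv.append(s)
    return fail_times, np.array(surv)
```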

10.
In this paper we study an extension of the Kaplan-Meier estimator to the case of dependent failure times. We assume that the failure times follow a Harris recurrent Markov chain. We prove strong convergence of the estimator and weak convergence to a Gaussian process.

11.
For type I censoring, in addition to the failure times, the number of failures is also observed as part of the data. Using this feature of type I singly right-censored data, a simple estimator is obtained for the scale parameter of the two-parameter Weibull distribution. The exact mean and variance of the estimator are derived and computed for finite sample sizes. Its limiting properties, such as asymptotic normality and asymptotic relative efficiency, are obtained. The estimator has high efficiency for moderate and heavy censoring. Its use is illustrated by means of an example.
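The abstract does not state the paper's simple estimator explicitly, so the sketch below shows only the familiar maximum-likelihood benchmark for the Weibull scale parameter under type I right censoring when the shape is treated as known; it is an assumed point of comparison, not the estimator proposed in the paper, and the function name and example numbers are illustrative.

```python
import numpy as np

def weibull_scale_mle_type1(failure_times, n, c, shape):
    """MLE of the Weibull scale under type I right censoring at time c,
    with the shape parameter treated as known:
        scale^shape = (sum(t_i^shape) + (n - r) * c^shape) / r,
    where r is the number of observed failures."""
    t = np.asarray(failure_times, dtype=float)
    r = len(t)
    return ((np.sum(t ** shape) + (n - r) * c ** shape) / r) ** (1.0 / shape)

# Example: 7 of n = 10 units failed before the censoring time c = 100.
scale_hat = weibull_scale_mle_type1([12, 35, 40, 58, 62, 80, 95], n=10, c=100, shape=1.5)
```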

12.
This paper describes the properties of a two-stage estimator of the dependence parameter in the Clayton-Oakes multivariate failure time model. The parameter is estimated from a likelihood function in which the marginal hazard functions are replaced by estimates. The method extends the approach of Shih and Louis (1995) and Genest, Ghoudi and Rivest (1995) to allow the marginal hazard for failure times to follow a stratified Cox (1972) model. The method is computationally simple and under mild regularity conditions produces a consistent, asymptotically normal estimator.
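To illustrate the two-stage idea in its simplest form, the sketch below plugs empirical margins into the Clayton copula density and maximizes over the dependence parameter, i.e., the pseudo-likelihood route of Genest, Ghoudi and Rivest (1995) for complete data. It is a simplified illustration under the assumption of no censoring; the paper's estimator instead plugs in stratified Cox marginal hazards and handles censored failure times, and the function name is illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def clayton_two_stage(t1, t2):
    """Stage 1: replace margins by empirical distribution functions.
    Stage 2: maximize the Clayton copula log-density in theta."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    n = len(t1)
    u = rankdata(t1) / (n + 1.0)          # pseudo-observations
    v = rankdata(t2) / (n + 1.0)

    def neg_loglik(theta):
        core = u ** (-theta) + v ** (-theta) - 1.0
        ll = (np.log1p(theta)
              - (theta + 1.0) * (np.log(u) + np.log(v))
              - (2.0 + 1.0 / theta) * np.log(core))
        return -np.sum(ll)

    res = minimize_scalar(neg_loglik, bounds=(1e-6, 20.0), method="bounded")
    return res.x                          # estimated Clayton dependence parameter
```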

13.
In the literature of point estimation, the Cauchy distribution with an unknown location parameter is often cited as an example of the failure of the maximum-likelihood method, and hence of the failure of the likelihood principle in general. Contrary to this notion, we prove that, even in this case where the likelihood equation has multiple roots, the maximum-likelihood estimator (the global maximum) remains an asymptotically optimal estimator in the Bahadur sense.
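The multiple-root phenomenon is easy to reproduce numerically: for a well-separated Cauchy sample, the score equation for the location parameter has several sign changes, and the MLE is the root with the largest log-likelihood. The sample and grid below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cauchy_score(theta, x):
    """Derivative of the Cauchy (scale 1) log-likelihood in the location
    parameter: sum of 2*(x_i - theta) / (1 + (x_i - theta)^2)."""
    return np.sum(2.0 * (x - theta) / (1.0 + (x - theta) ** 2))

def cauchy_loglik(theta, x):
    return -np.sum(np.log(1.0 + (x - theta) ** 2))

# A sample with two well-separated clusters gives a multimodal likelihood.
x = np.array([-5.0, -4.5, 4.0, 4.6, 5.1])
grid = np.linspace(-10.0, 10.0, 4001)
scores = np.array([cauchy_score(t, x) for t in grid])
roots = grid[:-1][np.sign(scores[:-1]) != np.sign(scores[1:])]   # approximate roots
mle = max(roots, key=lambda t: cauchy_loglik(t, x))              # global maximum
```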

14.
Estimation in the presence of censoring is an important problem. In the linear model, the Buckley-James method proceeds iteratively by estimating the censored values and then re-estimating the regression coefficients. A large-scale Monte Carlo simulation technique has been developed to test the performance of the Buckley-James (denoted B-J) estimator. One hundred and seventy-two randomly generated data sets, each with three thousand replications, based on four failure distributions, four censoring patterns, three sample sizes, and four censoring rates have been investigated, and the results are presented. It is found that, except for Type II censoring, the B-J estimator is essentially unbiased, even when data sets with small sample sizes are subjected to a high censoring rate. The variance formula suggested by Buckley and James (1979) is shown to be sensitive to the failure distribution. If the censoring rate is kept constant along the covariate line, the sample variance of the estimator appears to be insensitive to the censoring pattern for a selected failure distribution. Oscillation of the convergence values associated with the B-J estimator is illustrated and thoroughly discussed.
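A minimal sketch of the Buckley-James iteration for a single covariate is given below: censored responses are replaced by their conditional expectations computed from the Kaplan-Meier estimate of the residual distribution, and least squares is then re-fitted until the coefficients stabilize (or begin to oscillate, as the abstract notes). All names, the single-covariate setup, and the simple stopping rule are illustrative assumptions, not the authors' simulation code.

```python
import numpy as np

def buckley_james(x, y, delta, n_iter=50, tol=1e-6):
    """Buckley-James estimator for y = b0 + b1*x with right-censored y.
    `delta` is 1 for an observed response, 0 for a censored one."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    delta = np.asarray(delta, int)
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # initial OLS fit
    for _ in range(n_iter):
        e = y - X @ beta                                  # current residuals
        order = np.argsort(e)
        r, d = e[order], delta[order].copy()
        d[-1] = 1                # convention: treat the largest residual as observed
        n = len(r)
        # Kaplan-Meier jump masses of the residual distribution
        mass, s = np.zeros(n), 1.0
        for i in range(n):
            if d[i] == 1:
                mass[i] = s / (n - i)
                s -= mass[i]
        # impute each censored response by its conditional expectation
        y_star = y.copy()
        for i in np.where(delta == 0)[0]:
            tail = r > e[i]
            if tail.any():
                y_star[i] = X[i] @ beta + np.sum(r[tail] * mass[tail]) / np.sum(mass[tail])
        new_beta = np.linalg.lstsq(X, y_star, rcond=None)[0]
        if np.max(np.abs(new_beta - beta)) < tol:
            beta = new_beta
            break                                         # converged (may oscillate instead)
        beta = new_beta
    return beta
```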

15.
The semiparametric accelerated failure time (AFT) model is not as widely used as the Cox relative risk model due to computational difficulties. Recent developments in least squares estimation and induced smoothing estimating equations for censored data provide promising tools to make the AFT models more attractive in practice. For multivariate AFT models, we propose a generalized estimating equations (GEE) approach, extending the GEE to censored data. The consistency of the regression coefficient estimator is robust to misspecification of the working covariance, and the efficiency is higher when the working covariance structure is closer to the truth. The marginal error distributions and regression coefficients are allowed to be unique for each margin or partially shared across margins as needed. The initial estimator is a rank-based estimator with Gehan’s weight, but obtained from an induced smoothing approach with computational ease. The resulting estimator is consistent and asymptotically normal, with variance estimated through a multiplier resampling method. In a large-scale simulation study, our estimator was up to three times as efficient as the estimator that ignores the within-cluster dependence, especially when the within-cluster dependence was strong. The methods were applied to the bivariate failure time data from a diabetic retinopathy study.

16.
This paper gives the results of a new simulation study for the familiar calibration problem and the less familiar inverse median estimation problem. The latter arises when one wishes to estimate from a linear regression analysis the value of the independent variable corresponding to a specified value of the median of the dependent variable. For example, from the results of a regression analysis between stress and time to failure, one might wish to estimate the stress at which the median time to failure is 10,000 hours. In the study, the mean square error, Pitman closeness, and probability of overestimation are compared for both the calibration problem and the inverse median estimation problem for (1) the classical estimator, (2) the inverse estimator, and (3) a modified version of an estimator proposed by Naszodi (1978), in both a small-sample and a moderately large sample situation.
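For concreteness, the two best-known competitors in this comparison can be written down directly: the classical estimator inverts the fitted regression of y on x, while the inverse estimator regresses x on y and predicts at y0. This sketch covers only those two; the modified Naszodi (1978) estimator studied in the paper is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def calibration_estimates(x, y, y0):
    """Classical and inverse estimators of the x value at which the
    fitted (median) response equals y0, from simple linear regression."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    b1 = sxy / np.var(x)                       # slope of the regression of y on x
    b0 = y.mean() - b1 * x.mean()
    classical = (y0 - b0) / b1                 # invert the fitted line
    c1 = sxy / np.var(y)                       # slope of the regression of x on y
    c0 = x.mean() - c1 * y.mean()
    inverse = c0 + c1 * y0                     # predict x directly from y0
    return classical, inverse
```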

17.
Summary.  A representation is developed that expresses the bivariate survivor function as a function of the hazard function for truncated failure time variables. This leads to a class of nonparametric survivor function estimators that avoid negative mass. The transformation from hazard function to survivor function is weakly continuous and compactly differentiable, so that such properties as strong consistency, weak convergence to a Gaussian process and bootstrap applicability for a hazard function estimator are inherited by the corresponding survivor function estimator. The set of point mass assignments for a survivor function estimator is readily obtained by using a simple matrix calculation on the set of hazard rate estimators. Special cases arise from a simple empirical hazard rate estimator, and from an empirical hazard rate estimator following the redistribution of singly censored observations within strips. The latter is shown to equal van der Laan's repaired nonparametric maximum likelihood estimator, for which a Greenwood-like variance estimator is given. Simulation studies are presented to compare the moderate sample performance of various nonparametric survivor function estimators.

18.
ABSTRACT

The paper deals with an improvement of the well-known Kaplan–Meier estimator of the survival function when the censoring mechanism is random and independent of the failure times. Small-sample properties of the new estimator, as well as of the original Kaplan–Meier estimator, are inspected by means of Monte Carlo simulations. The simulations indicate that the proposed estimator is preferable with respect to some basic statistical characteristics.

19.
Abstract.  The Nelson–Aalen estimator is well known to be an asymptotically efficient estimator of the cumulative hazard function; see Andersen et al. (Statistical models based on counting processes, Springer-Verlag, New York, 1993), among many others. In this paper, we show that the efficiency of the Nelson–Aalen estimator can be considerably improved by using more information in the estimation process than the traditional Nelson–Aalen estimator uses. While our approach results in a biased estimator, the variance improvement is substantial. By optimizing the balance between the bias loss and the variance improvement, we obtain results on the efficiency gain. Several examples for known failure time distributions are used to illustrate these ideas.

20.
This article introduces a novel nonparametric penalized likelihood approach to hazard estimation when the censoring time is dependent on the failure time for each subject under observation. More specifically, we model this dependence using a copula, and the method of maximum penalized likelihood (MPL) is adopted to estimate the hazard function. We do not consider covariates in this article. The non-negatively constrained MPL hazard estimate is obtained using a multiplicative iterative algorithm. The consistency results and the asymptotic properties of the proposed hazard estimator are derived. The simulation studies show that our MPL estimator under dependent censoring with an assumed copula model provides better accuracy than the MPL estimator under independent censoring if the sign of dependence is correctly specified in the copula function. The proposed method is applied to a real dataset, with a sensitivity analysis performed over various values of correlation between failure and censoring times.
