Similar Literature
 20 similar documents found (search time: 31 ms)
1.
A nested case–control (NCC) study is an efficient cohort-sampling design in which a subset of controls is sampled from the risk set at each event time. Since covariate measurements are taken only for the sampled subjects, the time and effort of conducting a full-scale cohort study can be saved. In this paper, we consider fitting a semiparametric accelerated failure time model to failure time data from an NCC study. We propose to employ an efficient induced smoothing procedure for the rank-based estimating method for regression parameter estimation. For variance estimation, we propose to use an efficient resampling method that utilizes the robust sandwich form. We extend our proposed methods to a generalized NCC study that allows a sampling of cases. Finite sample properties of the proposed estimators are investigated via an extensive simulation study. An application to a tumor study illustrates the utility of the proposed method in routine data analysis.
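The risk-set sampling step that defines an NCC design can be sketched in a few lines. This is an illustrative sketch only; the function name and the 1:m matching parameter are ours, not the paper's:

```python
import random

def nested_case_control(event_times, status, m=1, seed=0):
    """For each observed event, sample m controls (without replacement)
    from the risk set, i.e. the other subjects still at risk at that
    event time. Returns a list of (case_index, [control_indices]) pairs.
    """
    rng = random.Random(seed)
    sampled = []
    for i, (t, d) in enumerate(zip(event_times, status)):
        if d != 1:
            continue  # only observed events define sampling times
        risk_set = [j for j, tj in enumerate(event_times) if tj >= t and j != i]
        k = min(m, len(risk_set))  # late events may have few controls left
        sampled.append((i, rng.sample(risk_set, k)))
    return sampled
```

Covariates would then be ascertained only for the sampled case–control sets, which is where the design's cost saving comes from.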

2.
ABSTRACT

In the present paper, we aim at providing plug-in-type empirical estimators that enable us to quantify the contribution of each operational and/or non-functioning state to the failures of a system described by a semi-Markov model. In the discrete-time and finite-state-space semi-Markov framework, we study different conditional versions of an important reliability measure for random repairable systems, the failure occurrence rate, which is based on counting processes. The identification of potential failure contributors through the conditional counterparts of the failure occurrence rate is of paramount importance, since it could lead to corrective actions that minimize the occurrence of the most important failure modes and therefore improve the reliability of the system. The aforementioned estimators are characterized by appealing asymptotic properties such as strong consistency and asymptotic normality. We further obtain detailed analytical expressions for the covariance matrices of the random vectors describing the conditional failure occurrence rates. As particular cases, we present the failure occurrence rates for hidden (semi-)Markov models. We illustrate our results by means of a simulation study. Different applications are presented based on wind, earthquake and vibration data.

3.
Correlated survival data arise frequently in biomedical and epidemiologic research, because each patient may experience multiple events or because there exists clustering of patients or subjects, such that failure times within a cluster are correlated. In this paper, we investigate the appropriateness of the semi-parametric Cox regression and of the generalized estimating equations as models for clustered failure time data that arise from an epidemiologic study in veterinary medicine. The semi-parametric approach is compared with a proposed fully parametric frailty model. The frailty component is assumed to follow a gamma distribution. Estimates of the fixed covariate effects were obtained by maximizing the likelihood function, while an estimate of the variance component (frailty parameter) was obtained from a profile likelihood construction.
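The gamma frailty model above can be illustrated by a small simulation: a cluster-level frailty with mean 1 and variance theta multiplies a common baseline rate, inducing positive within-cluster correlation. A minimal sketch under a conditional exponential hazard; all names and parameter values are hypothetical, not the paper's:

```python
import random

def simulate_gamma_frailty_clusters(n_clusters, cluster_size, theta, base_rate, seed=0):
    """Simulate clustered failure times under a shared gamma frailty model:
    frailty w ~ Gamma(shape=1/theta, scale=theta), so E[w] = 1, Var[w] = theta;
    given w, each cluster member's time is Exponential(rate = w * base_rate).
    """
    rng = random.Random(seed)
    data = []
    for _ in range(n_clusters):
        w = rng.gammavariate(1.0 / theta, theta)  # shared within the cluster
        times = [rng.expovariate(w * base_rate) for _ in range(cluster_size)]
        data.append(times)
    return data
```

Larger theta means more heterogeneity across clusters and hence stronger within-cluster correlation; theta -> 0 recovers independent exponential lifetimes.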

4.
In the prospective study of a finely stratified population, one individual from each stratum is chosen at random for the “treatment” group and one for the “non-treatment” group. For each individual the probability of failure is a logistic function of parameters designating the stratum, the treatment and a covariate. Uniformly most powerful unbiased tests for the treatment effect are given. These tests are generally cumbersome but, if the covariate is dichotomous, the tests and confidence intervals are simple. Readily usable (but non-optimal) tests are also proposed for polytomous covariates and factorial designs. These are then adapted to retrospective studies (in which one “success” and one “failure” per stratum are sampled). Tests for retrospective studies with a continuous “treatment” score are also proposed.
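For binary outcomes in 1:1 matched strata, the conditional test of the treatment effect discards concordant pairs and reduces to an exact binomial test on the discordant pairs. A hedged sketch of that reduction (the function name and the folding of the two-sided tail are ours):

```python
from math import comb

def exact_discordant_test(n10, n01):
    """Exact two-sided conditional test for a treatment effect in 1:1
    matched pairs with binary outcomes. Under the null, the number of
    pairs in which only the treated member fails is Binomial(n, 1/2)
    given n = n10 + n01 discordant pairs; concordant pairs carry no
    information about the within-stratum effect.
    """
    n = n10 + n01
    k = min(n10, n01)
    # two-sided p-value: double the smaller tail of Binomial(n, 1/2)
    tail = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

With perfectly balanced discordant counts the p-value is 1; a very lopsided split yields a small p-value.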

5.
This paper examines modeling and inference questions for experiments in which different subsets of a set of k possibly dependent components are tested in r different environments. In each environment, the failure times of the set of components on test are assumed to be governed by a particular type of multivariate exponential (MVE) distribution. For any given component tested in several environments, it is assumed that its marginal failure rate varies from one environment to another via a change of scale between the environments, resulting in a joint MVE model which links in a natural way the applicable MVE distributions describing component behavior in each fixed environment. This study thus extends the work of Proschan and Sullo (1976) to multiple environments and the work of Kvam and Samaniego (1993) to dependent data. The problem of estimating model parameters via the method of maximum likelihood is examined in detail. First, necessary and sufficient conditions for the identifiability of model parameters are established. We then treat the derivation of the MLE via a numerically augmented application of the EM algorithm. The feasibility of the estimation method is demonstrated in an example in which the likelihood ratio test of the hypothesis of equal component failure rates within any given environment is carried out.

6.
When the subjects in a study possess different demographic and disease characteristics and are exposed to more than one type of failure, a practical problem is to assess the covariate effects on each type of failure as well as on all-cause failure. The most widely used method is to employ the Cox models on each cause-specific hazard and the all-cause hazard. It has been pointed out that this method causes the problem of internal inconsistency. To solve such a problem, additive hazard models have been advocated. In this paper, we model each cause-specific hazard with an additive hazard model that includes both constant and time-varying covariate effects. We illustrate that the covariate effect on all-cause failure can be estimated by the sum of the effects on all competing risks. Using data from a longitudinal study on breast cancer patients, we show that the proposed method gives a simple interpretation of the final results when the primary covariate effect is constant in the additive manner on each cause-specific hazard. Based on the given additive models for the cause-specific hazards, we derive inferences for the adjusted survival and cumulative incidence functions.
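The internal-consistency property claimed above — that under additive cause-specific hazards the covariate effect on all-cause failure is the sum of the cause-specific effects — can be checked with simple arithmetic. A toy illustration with two hypothetical risks (the baselines and coefficients are invented for the example):

```python
def cause_specific_hazard(t, z, baseline, beta):
    # additive hazard: baseline plus a linear covariate effect
    return baseline(t) + beta * z

# hypothetical baselines and covariate effects for two competing risks
b1 = lambda t: 0.10       # constant baseline hazard for risk 1
b2 = lambda t: 0.02 * t   # time-varying baseline hazard for risk 2
beta1, beta2 = 0.5, -0.2

def all_cause_hazard(t, z):
    # the all-cause hazard is the sum of the cause-specific hazards
    return (cause_specific_hazard(t, z, b1, beta1)
            + cause_specific_hazard(t, z, b2, beta2))

# the covariate effect on all-cause failure is beta1 + beta2,
# at every time point, because the additive structure is preserved
t, z = 2.0, 1.0
effect = all_cause_hazard(t, z) - all_cause_hazard(t, 0.0)
# effect == beta1 + beta2 == 0.3
```

Under proportional (multiplicative) hazards no such exact summation holds, which is the internal inconsistency the abstract refers to.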

7.
This paper applies the methodology of Finkelstein and Schoenfeld [Stat. Med. 13 (1994) 1747] to consider new treatment strategies in a synthetic clinical trial. The methodology is an approach for estimating survival functions as a composite of subdistributions defined by an auxiliary event which is intermediate to the failure. The subdistributions are usually calculated utilizing all subjects in a study, by taking into account the path determined by each individual's auxiliary event. However, the method can be used to get a composite estimate of failure from different subpopulations of patients. We utilize this application of the methodology to test a new treatment strategy, one that changes therapy at later stages of disease, by combining subdistributions from different treatment arms of a clinical trial that was conducted to test therapies for prevention of Pneumocystis carinii pneumonia.

8.
We consider a life testing situation in which systems are subject to failure from independent competing risks. Following a failure, immediate (stage-1) procedures are used in an attempt to reach a definitive diagnosis. If these procedures fail to result in a diagnosis, this phenomenon is called masking. Stage-2 procedures, such as failure analysis or autopsy, provide a definitive diagnosis for a sample of the masked cases. We show how stage-1 and stage-2 information can be combined to provide statistical inference about (a) the survival functions of the individual risks, (b) the proportions of failures associated with individual risks and (c) the probability, for a specified masked case, that each of the masked competing risks is responsible for the failure. Our development is based on parametric distributional assumptions, and the special case in which the failure times for the competing risks have Weibull distributions is discussed in detail.
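For item (c), under independent competing risks the probability that a given masked risk caused a failure at time t is proportional to its cause-specific hazard at t. A sketch for the Weibull special case; risk labels and parameter values are hypothetical, and this is only the diagnostic-probability step, not the paper's full inference:

```python
def weibull_hazard(t, shape, scale):
    # hazard of a Weibull(shape, scale) lifetime at time t > 0
    return (shape / scale) * (t / scale) ** (shape - 1.0)

def masked_cause_probs(t, masked_set, params):
    """Probability that each risk in the masked set caused the failure
    observed at time t, assuming independent Weibull competing risks:
    proportional to the cause-specific hazards at t.
    params maps a risk label to its (shape, scale) pair.
    """
    h = {j: weibull_hazard(t, *params[j]) for j in masked_set}
    total = sum(h.values())
    return {j: hj / total for j, hj in h.items()}
```

With two exponential risks (shape 1) of scales 1 and 2, the faster risk is assigned two-thirds of the posterior mass at any failure time.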

9.

In this paper, we extend the vertical modeling approach for the analysis of survival data with competing risks to incorporate a cure fraction in the population, that is, a proportion of the population for which none of the competing events can occur. The proposed method has three components: the proportion of cure, the risk of failure, irrespective of the cause, and the relative risk of a certain cause of failure, given a failure occurred. Covariates may affect each of these components. An appealing aspect of the method is that it is a natural extension to competing risks of the semi-parametric mixture cure model in ordinary survival analysis; thus, causes of failure are assigned only if a failure occurs. This contrasts with the existing mixture cure model for competing risks of Larson and Dinse, which conditions at the onset on the future status presumably attained. Regression parameter estimates are obtained using an EM-algorithm. The performance of the estimators is evaluated in a simulation study. The method is illustrated using a melanoma cancer data set.

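The "cause assigned only if a failure occurs" structure of vertical modeling with a cure fraction can be mimicked in a toy data generator. This is a simplified sketch with exponential cause-specific rates; all names and parameters are ours, not the paper's:

```python
import math
import random

def simulate_cure_competing_risks(n, p_cure, rate1, rate2, seed=0):
    """Simulate a population with a cured fraction p_cure, for whom none
    of the competing events can occur, and a susceptible fraction subject
    to two competing exponential causes. The cause label is assigned only
    when a failure occurs. Returns (time, cause) pairs; cause 0 marks a
    cured subject (time set to inf here; in practice such subjects would
    simply appear as censored).
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        if rng.random() < p_cure:
            out.append((math.inf, 0))
        else:
            t1 = rng.expovariate(rate1)
            t2 = rng.expovariate(rate2)
            out.append((min(t1, t2), 1 if t1 <= t2 else 2))
    return out
```

This mirrors the three components of the method: the cure proportion, the overall failure time for the susceptible, and the cause given failure.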

10.
Abstract

Recently, the study of the lifetime of systems in reliability and survival analysis in the presence of several causes of failure (competing risks) has attracted attention in the literature. In this paper, series and parallel systems with exponential lifetimes for each item of the system are considered. Several causes of failure independently affect the lifetime distributions, and observations of failure times of the systems are made under a progressive Type-II censoring scheme. For series systems, the maximum likelihood estimates of the parameters are computed, and confidence intervals for the parameters of the model are obtained using the Fisher information matrix. For parallel systems, a generalized EM algorithm, which uses the Newton–Raphson algorithm inside the EM algorithm, is used to compute the maximum likelihood estimates of the parameters. Also, the standard errors of the maximum likelihood estimates are computed by using the supplemented EM algorithm. The simulation study confirms the good performance of the introduced approach.
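Progressive Type-II censored samples from an exponential lifetime can be generated directly from the spacings representation, which is convenient for simulation studies like the one described above. A hedged sketch (the function name is ours and the censoring scheme R is only an example):

```python
import random

def progressive_type2_exponential(n, R, rate, seed=0):
    """Simulate m = len(R) progressively Type-II censored failure times
    from Exponential(rate): after the i-th observed failure, R[i] of the
    surviving units are withdrawn from the test. Requires n == m + sum(R).
    Uses the standard spacings representation: by memorylessness, the gap
    before the i-th failure is Exponential(rate * n_i), where n_i units
    are still on test.
    """
    m = len(R)
    assert n == m + sum(R), "censoring scheme must exhaust the sample"
    rng = random.Random(seed)
    times, t, on_test = [], 0.0, n
    for i in range(m):
        t += rng.expovariate(rate * on_test)
        times.append(t)
        on_test -= 1 + R[i]  # the failed unit plus R[i] withdrawals
    return times
```

Setting R = [0, ..., 0, n - m] recovers ordinary Type-II censoring, and R = [0, ..., 0] with m = n recovers a complete sample.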

11.
Multivariate failure time data arise when the sample consists of clusters and each cluster contains several possibly dependent failure times. The Clayton–Oakes model (Clayton, 1978; Oakes, 1982) for multivariate failure times characterizes the intracluster dependence parametrically but allows arbitrary specification of the marginal distributions. In this paper, we discuss estimation in the Clayton–Oakes model when the marginal distributions are modeled to follow the Cox (1972) proportional hazards regression model. Parameter estimation is based on an approximate generalized maximum likelihood estimator. We illustrate the model's application with example datasets.

12.
This article presents a generalization of the imperfect sequential preventive maintenance (PM) policy with minimal repair. As failures occur, the system experiences one of two types of failure: a Type-I failure (minor), rectified by a minimal repair, or a Type-II failure (catastrophic) that calls for an unplanned maintenance. In each maintenance period, the system is maintained following the occurrence of a Type-II failure or at a given age, whichever takes place first. At the Nth maintenance, the system is replaced rather than maintained. The imperfect PM model adopted in this study incorporates improvement factors in the hazard-rate function. Taking age-dependent minimal repair costs into consideration, the objective consists of finding the optimal PM and replacement schedule that minimizes the expected cost per unit time over an infinite time horizon.
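A much-simplified special case of such cost-rate optimization — periodic replacement with minimal repairs in between, under a Weibull hazard, without improvement factors or the two failure types — already shows how an optimal schedule is found. A sketch under these simplifying assumptions (all names and the closed form are ours, not the article's model):

```python
def cost_rate(T, c_replace, c_repair, shape, scale):
    """Long-run expected cost per unit time for replacement at age T with
    minimal repairs in between: the expected number of minimal repairs on
    [0, T] equals the Weibull cumulative hazard (T / scale) ** shape.
    """
    expected_repairs = (T / scale) ** shape
    return (c_replace + c_repair * expected_repairs) / T

def optimal_T(c_replace, c_repair, shape, scale):
    # closed form for shape > 1, obtained by setting dC/dT = 0
    return scale * (c_replace / ((shape - 1.0) * c_repair)) ** (1.0 / shape)
```

The full sequential imperfect-PM problem in the article replaces this single interval with a schedule of N intervals and modified hazards, but the cost-per-unit-time criterion is the same.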

13.
ABSTRACT

This paper proposes preventive replacement policies for an operating system which works continuously for N jobs with random working times and is imperfectly maintained upon failure. When a failure occurs, the system suffers one of two types of failure according to some random mechanism: a type-I (repairable, or minor) failure is rectified by a minimal repair, whereas a type-II (non-repairable, or catastrophic) failure is removed by a corrective replacement. A preventive replacement last model is considered, in which the system is replaced before any type-II failure at an operating time T or at the Nth working time, whichever occurs last. Comparisons between such a preventive replacement last policy and the conventional replacement first policy are discussed in detail. For each model, the optimal preventive replacement schedule that minimizes the mean cost rate is presented theoretically and determined numerically. Because the framework and analysis are general, the proposed models extend several existing results.

14.
This paper discusses regression analysis of clustered interval-censored failure time data, which often occur in medical follow-up studies among other areas. For such data, sometimes the failure time may be related to the cluster size, the number of subjects within each cluster; in other words, the cluster sizes may be informative. For this problem, we present a within-cluster resampling method for the situation where the failure time of interest can be described by a class of linear transformation models. In addition to establishing the asymptotic properties of the proposed estimators of the regression parameters, an extensive simulation study is conducted for the assessment of the finite sample properties of the proposed method; it suggests that the method works well in practical situations. An application to the example that motivated this study is also provided.

15.
In this paper, two new statistics based on a comparison of the theoretical and empirical distribution functions are proposed to test exponentiality. Critical values are determined by means of Monte Carlo simulations for various sample sizes and different significance levels. Through an extensive simulation study, 50 selected exponentiality tests are studied for a wide collection of alternative distributions. From the empirical power study, it is concluded that, firstly, one of our proposals is preferable for IFR (increasing failure rate) and UFR (unimodal failure rate) alternatives, whereas the other one is preferable for DFR (decreasing failure rate) and BFR (bathtub failure rate) alternatives and, secondly, the new tests can be considered serious and powerful competitors to other existing proposals, since they perform at least as well as the best tests in the statistical literature.
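Monte Carlo determination of critical values, as used above, can be sketched with a classical EDF-based statistic; here we use the Kolmogorov–Smirnov distance with an estimated rate as a stand-in, since the paper's new statistics are not reproduced in the abstract:

```python
import math
import random

def ks_exp_statistic(sample):
    """Kolmogorov-Smirnov distance between the empirical CDF and the
    exponential CDF fitted by maximum likelihood (rate = 1 / sample mean)."""
    n = len(sample)
    rate = n / sum(sample)
    d = 0.0
    for i, x in enumerate(sorted(sample)):
        f = 1.0 - math.exp(-rate * x)
        # compare against the ECDF just before and just after each point
        d = max(d, abs(f - i / n), abs(f - (i + 1) / n))
    return d

def mc_critical_value(n, alpha=0.05, reps=2000, seed=0):
    """Monte Carlo critical value: with the rate estimated, the statistic
    is scale-free within the exponential family, so it suffices to
    simulate under rate 1 and take the (1 - alpha) empirical quantile."""
    rng = random.Random(seed)
    stats = sorted(ks_exp_statistic([rng.expovariate(1.0) for _ in range(n)])
                   for _ in range(reps))
    return stats[int((1.0 - alpha) * reps)]
```

The observed statistic is then compared with the simulated critical value for the same n and alpha, exactly the scheme the abstract describes.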

16.
Multivariate failure time data, also referred to as correlated or clustered failure time data, often arise in survival studies when each study subject may experience multiple events. Statistical analysis of such data needs to account for intracluster dependence. In this article, we consider a bivariate proportional hazards model using a vector hazard rate, in which the covariates under study have different effects on the two components of the vector hazard rate function. Estimation of the parameters as well as the baseline hazard function is discussed. Properties of the estimators are investigated. We illustrate the method using two real-life data sets. A simulation study is reported to assess the performance of the estimator.

17.
In this paper, a class of tests is developed for comparing the cause-specific hazard rates of m competing risks simultaneously in K (≥ 2) groups. The data available for a unit are the failure time of the unit along with the identifier of the risk claiming the failure. In practice, the failure time data are generally right censored. The tests are based on the differences between the weighted averages of the cause-specific hazard rates corresponding to each risk. No assumption regarding the dependence of the competing risks is made. It is shown that the proposed test statistic has an asymptotically chi-squared distribution. The proposed test is shown to be optimal for a specific type of local alternatives. The choice of weight function is also discussed. A simulation study is carried out using the multivariate Gumbel distribution to compare the optimal weight function with a proposed weight function which is to be used in practice. Also, the proposed test is applied to real data on the termination of an intrauterine device. An erratum to this article is available.

18.
In dental implant research studies, events such as implant complications, including pain or infection, may be observed recurrently before failure events, i.e. the death of implants. It is natural to assume that recurrent events and failure events are correlated with each other, since they happen on the same implant (subject) and complication times have strong effects on the implant survival time. On the other hand, each patient may have more than one implant. Therefore, these recurrent events or failure events are clustered, since implant complication times or failure times within the same patient (cluster) are likely to be correlated. The overall implant survival times and recurrent complication times are both of interest to us. In this paper, a joint modelling approach is proposed for modelling complication events and dental implant survival times simultaneously. The proposed method uses a frailty process to model the correlation within a cluster and the correlation within subjects. We use Bayesian methods to obtain estimates of the parameters. Performance of the joint models is shown via simulation studies and a data analysis.

19.
A popular model for competing risks postulates the existence of a latent unobserved failure time for each risk. Assuming that these underlying failure times are independent is attractive since it allows standard statistical tools for right-censored lifetime data to be used in the analysis. This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions. It assumes that covariates are available, making the model identifiable. The score tests are derived for alternatives that specify that copulas are responsible for a possible dependency between the competing risks. The test statistics are constructed by adding to the partial likelihoods for the individual risks an explanatory variable for the dependency between the risks. A variance estimator is derived by writing the score function and the Fisher information matrix for the marginal models as stochastic integrals. Pitman efficiencies are used to compare test statistics. A simulation study and a numerical example illustrate the methodology proposed in this paper.
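The copula alternatives considered above can be illustrated by generating dependent latent failure times from a Clayton copula via conditional inversion. This is a toy data generator with exponential margins, not the paper's score test; all names and parameter choices are ours:

```python
import math
import random

def clayton_pair(theta, rng):
    """One (u, v) pair from a Clayton copula with parameter theta > 0,
    drawn by conditional inversion: u is uniform, and v solves
    C(v | u) = w for an independent uniform w."""
    u = 1.0 - rng.random()  # uniform in (0, 1], avoids u == 0
    w = 1.0 - rng.random()
    v = (u ** (-theta) * (w ** (-theta / (theta + 1.0)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u, v

def dependent_latent_times(n, theta, rate1, rate2, seed=0):
    """Latent failure times for two competing risks whose dependence is a
    Clayton copula, with exponential margins obtained by inverting the
    survival functions (rates are hypothetical example values)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        u, v = clayton_pair(theta, rng)
        t1 = -math.log(u) / rate1
        t2 = -math.log(v) / rate2
        pairs.append((t1, t2))
    return pairs
```

Under the independence null only min(t1, t2) and the cause label are observed; the score tests probe whether a nonzero copula parameter such as theta fits the data better.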

20.
In applications, multivariate failure time data appear when each study subject may potentially experience several types of failures or recurrences of a certain phenomenon, or when failure times may be clustered. Three types of marginal accelerated failure time models, dealing with multiple events data, recurrent events data and clustered events data, are considered. We propose a unified empirical likelihood inferential procedure for the three types of models based on a rank estimation method. The resulting log-empirical likelihood ratios are shown to possess chi-squared limiting distributions. These properties can be applied to perform tests and construct confidence regions without the need to solve the rank estimating equations or to estimate the limiting variance-covariance matrices. The related computation is easy to implement. The proposed method is illustrated by extensive simulation studies and a real example.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号