Similar Documents
20 similar documents found (search time: 0 ms)
1.
Yu  Jichang  Zhou  Haibo  Cai  Jianwen 《Lifetime data analysis》2021,27(1):15-37
Lifetime Data Analysis - Outcome-dependent sampling designs such as the case–control or case–cohort design are widely used in epidemiological studies for their outstanding...

2.
Zhou  Qingning  Cai  Jianwen  Zhou  Haibo 《Lifetime data analysis》2020,26(1):85-108
Lifetime Data Analysis - We propose a two-stage outcome-dependent sampling design and inference procedure for studies that concern interval-censored failure time outcomes. This design enhances the...

3.
Summary.  We consider estimation of the causal effect of a treatment on an outcome from observational data collected in two phases. In the first phase, a simple random sample of individuals is drawn from a population. On these individuals, information is obtained on treatment, outcome and a few low dimensional covariates. These individuals are then stratified according to these factors. In the second phase, a random subsample of individuals is drawn from each stratum, with known stratum-specific selection probabilities. On these individuals, a rich set of covariates is collected. In this setting, we introduce five estimators: simple inverse weighted; simple doubly robust; enriched inverse weighted; enriched doubly robust; locally efficient. We evaluate the finite sample performance of these estimators in a simulation study. We also use our methodology to estimate the causal effect of trauma care on in-hospital mortality by using data from the National Study of Cost and Outcomes of Trauma.
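The "simple inverse weighted" estimator above can be sketched in a few lines: each phase-two subject is weighted by the inverse of its known stratum-specific selection probability, and a treatment propensity model fitted to the rich phase-two covariates supplies the usual causal weights. Below is a minimal Python sketch under these assumptions (variable names are hypothetical, and this is a generic combined-weights IPW estimator, not the authors' exact formulation):

import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_causal_effect(y, treat, x, stratum, pi_by_stratum):
    # Known stratum-specific phase-two selection probabilities -> design weights.
    sw = np.array([1.0 / pi_by_stratum[s] for s in stratum])
    # Propensity of treatment given the rich phase-two covariates, fitted with
    # design weights so the model targets the full population.
    ps = LogisticRegression(max_iter=1000).fit(x, treat, sample_weight=sw)
    ps = ps.predict_proba(x)[:, 1]
    t = np.asarray(treat, dtype=float)
    w1 = sw * t / ps                  # combined weights, treated
    w0 = sw * (1 - t) / (1 - ps)      # combined weights, untreated
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

The doubly robust variants add an outcome regression so that consistency holds if either the propensity model or the outcome model is correctly specified.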

4.
Summary.  A frequent problem in longitudinal studies is that subjects may miss scheduled visits or be assessed at self-selected points in time. As a result, observed outcome data may be highly unbalanced and the availability of the data may be directly related to the outcome measure and/or some auxiliary factors that are associated with the outcome. If the follow-up visit and outcome processes are correlated, then marginal regression analyses will produce biased estimates. Building on the work of Robins, Rotnitzky and Zhao, we propose a class of inverse intensity-of-visit process-weighted estimators in marginal regression models for longitudinal responses that may be observed in continuous time. This allows us to handle arbitrary patterns of missing data as embedded in a subject's visit process. We derive the large sample distribution for our inverse visit-intensity-weighted estimators and investigate their finite sample behaviour by simulation. Our approach is illustrated with a data set from a health services research study in which homeless people with mental illness were randomized to three different treatments and measures of homelessness (as percentage days homeless in the past 3 months) and other auxiliary factors were recorded at follow-up times that are not fixed by design.
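The weighting idea is simple to state: once a visit-intensity model has been fitted (for example an Andersen–Gill-type model, a step omitted here), each observed visit enters the marginal regression with weight equal to the inverse of its estimated intensity. A minimal sketch for a linear marginal mean model, with hypothetical inputs:

import numpy as np

def iiv_weighted_fit(x, y, lam_hat):
    # Inverse intensity-of-visit weights from a previously fitted
    # visit-process model; lam_hat[i] is the estimated intensity at visit i.
    w = 1.0 / np.asarray(lam_hat)
    wx = x * w[:, None]
    # Solves sum_i w_i * x_i * (y_i - x_i' beta) = 0, i.e. weighted least squares.
    return np.linalg.solve(x.T @ wx, wx.T @ y)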

5.
Many existing approaches to analysing interval-censored data lack flexibility or efficiency. In this paper, we propose an efficient, easy-to-implement approach based on the accelerated failure time model, with a logarithmic transformation of the failure time and a flexible specification of the error distribution. We use exact inference for the Dirichlet process, without approximation, in the imputation step. Our algorithm can be implemented with simple Gibbs sampling, which produces exact posterior distributions for the features of interest. Simulation and real data analyses demonstrate the advantage of our method over some existing methods.

6.
In this article, we analyze interval-censored failure time data with competing risks. A new estimator of the cumulative incidence function is derived using an approximate likelihood, and a test statistic for comparing two samples is then obtained by extending Sun's test statistic. Small-sample properties of the proposed methods are examined through simulations, and a cohort dataset of AIDS patients is analyzed as a real example.

7.
Practitioners very often face the problem of selecting appropriate predictors for right-censored time-to-event data. We consider ℓ1-penalized regression, the “least absolute shrinkage and selection operator” (lasso), as a tool for predictor selection in conjunction with the accelerated failure time model. The choice of the penalty parameter λ is crucial for identifying the correct set of covariates. In this paper, we propose an information-theoretic method for choosing λ under the log-normal distribution. Furthermore, an efficient algorithm is discussed in the same context. The performance of the proposed λ and of the algorithm is illustrated through simulation studies and a real data analysis. The convergence of the algorithm is also discussed.
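As a rough illustration of an information-criterion choice of λ, the sketch below scores each point on the lasso path with a BIC-type criterion under a normal working model for the log failure times. It is deliberately simplified: censoring (which the paper's method handles) is ignored, the predictors and response are assumed centered, and the function name is hypothetical.

import numpy as np
from sklearn.linear_model import lasso_path

def choose_lambda_bic(x, log_t):
    n = len(log_t)
    alphas, coefs, _ = lasso_path(x, log_t)   # coefs has shape (p, n_alphas)
    best_bic, best_alpha = np.inf, alphas[0]
    for j, a in enumerate(alphas):
        resid = log_t - x @ coefs[:, j]
        df = np.count_nonzero(coefs[:, j])    # active-set size as degrees of freedom
        bic = n * np.log(np.mean(resid ** 2)) + df * np.log(n)
        if bic < best_bic:
            best_bic, best_alpha = bic, a
    return best_alpha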

8.
In many biomedical studies, it is common that, due to budget constraints, the primary covariate is collected only in a randomly selected subset of the full study cohort. Often, an inexpensive auxiliary covariate for the primary exposure variable is readily available for all cohort subjects. Valid statistical methods that make use of the auxiliary information to improve study efficiency need to be developed. To this end, we develop an estimated partial likelihood approach for correlated failure time data with auxiliary information. We assume a marginal hazard model with a common baseline hazard function. The asymptotic properties of the proposed estimators are developed. The proof of the asymptotic results is nontrivial, since the moments used in the estimating equation are not martingale-based and classical martingale theory does not suffice; instead, our proofs rely on modern empirical process theory. The proposed estimator is evaluated through simulation studies and is shown to have increased efficiency compared to existing methods. The method is illustrated with a data set from the Framingham study.

9.
Failure time data occur in many areas and in various censoring forms, and many models have been proposed for their regression analysis, such as the proportional hazards model and the proportional odds model. Another choice that has been discussed in the literature is a general class of semiparametric transformation models, which includes the two models above and many others as special cases. In this paper, we consider this class of models when one faces a general type of censored data, case K informatively interval-censored data, for which there does not seem to exist an established inference procedure. For the problem, we present a two-step estimation procedure that is quite flexible and can be easily implemented, and the consistency and asymptotic normality of the proposed estimators of regression parameters are established. In addition, an extensive simulation study is conducted and suggests that the proposed procedure works well for practical situations. An application is also provided.

10.
Length-biased sampling appears in many observational studies, including epidemiological studies, labor economics and cancer screening trials. To accommodate sampling bias, which can lead to substantial estimation bias if ignored, we propose a class of doubly-weighted rank-based estimating equations under the accelerated failure time model. The general weighting structures considered in our estimating equations allow great flexibility and include many existing methods as special cases. Different approaches for constructing estimating equations are investigated, and the estimators are shown to be consistent and asymptotically normal. Moreover, we propose efficient computational procedures to solve the estimating equations and to estimate the variances of the estimators. Simulation studies show that the proposed estimators outperform the existing estimators. Moreover, real data from a dementia study and a Spanish unemployment duration study are analyzed to illustrate the proposed method.

11.
Sometimes it is appropriate to model survival and failure time data by a distribution with a non-monotonic failure rate. This may be desirable when the course of a disease is such that mortality reaches a peak after some finite period and then slowly declines. In this paper we study the Burr type XII model, whose failure rate exhibits this behavior. The locations of the critical points (at which the monotonicity changes) of both the failure rate and the mean residual life function (MRLF) are studied, and a procedure is described for estimating these critical points. Necessary and sufficient conditions for the existence and uniqueness of the maximum likelihood estimates are provided, and it is shown that the conditions provided by Wingo (1993) are not sufficient. A data set pertaining to fibre failure strengths is analyzed and the maximum likelihood estimates of the critical points are obtained.
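For the Burr type XII distribution with survival function S(t) = (1 + t^c)^(-k), the failure rate is h(t) = c k t^(c-1) / (1 + t^c); setting the derivative of log h to zero gives the critical point t* = (c - 1)^(1/c) for c > 1, which notably does not depend on k. A quick numerical check of this closed form (parameter values are illustrative):

import numpy as np
from scipy.optimize import minimize_scalar

c, k = 3.0, 2.0  # c > 1 gives the peaked (non-monotonic) failure rate

def hazard(t):
    return c * k * t ** (c - 1) / (1 + t ** c)

res = minimize_scalar(lambda t: -hazard(t), bounds=(1e-6, 10.0), method="bounded")
print(res.x, (c - 1) ** (1 / c))  # both approximately 1.2599 when c = 3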

12.
Joint modeling of degradation and failure time data
This paper surveys some approaches to modeling the relationship between failure time data and covariate data such as internal degradation and external environmental processes. These models, which reflect the dependency between system state and system reliability, include threshold models and hazard-based models. In particular, we consider the class of degradation–threshold–shock models (DTS models), in which failure is due to the competing causes of degradation and trauma. For this class of reliability models we express the failure time in terms of degradation and covariates. We compute the survival function of the resulting failure time and derive the likelihood function for the joint observation of failure times and degradation data at discrete times. We consider a special class of DTS models in which degradation is modeled by a process with stationary independent increments and related to external covariates through a random time scale, and we extend this model class to repairable items by a marked point process approach. The proposed model class provides a rich conceptual framework for the study of degradation–failure issues.
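A DTS failure time can be simulated directly from its definition as the minimum of a degradation hitting time and a traumatic shock time. The sketch below uses a Wiener process with drift for the degradation and a homogeneous Poisson process for the shocks; these are illustrative choices within the model class, not a specific model from the paper.

import numpy as np

rng = np.random.default_rng(0)

def simulate_dts(mu=1.0, sigma=0.5, threshold=5.0, shock_rate=0.1,
                 dt=0.01, t_max=100.0):
    # Failure time = min(first threshold crossing of degradation, first shock).
    deg, t = 0.0, 0.0
    t_shock = rng.exponential(1.0 / shock_rate)  # first traumatic shock
    while t < t_max:
        if deg >= threshold:
            return t, "degradation"
        if t >= t_shock:
            return t, "trauma"
        deg += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t_max, "censored"

times = [simulate_dts()[0] for _ in range(1000)]
print(np.mean(times))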

13.
We propose a class of additive transformation risk models for clustered failure time data. Our models are motivated by the usual additive risk model for independent failure times incorporating a frailty with mean one and constant variability, which is a natural generalization of the additive risk model from univariate to multivariate failure times. An estimating equation approach based on the marginal hazard function is proposed. Under the assumption that cluster sizes are completely random, we show that the resulting estimators of the regression coefficients are consistent and asymptotically normal. We also provide goodness-of-fit test statistics for choosing the transformation. Simulation studies and a real data analysis are conducted to examine the finite-sample performance of our estimators.

14.
Semiparametric accelerated failure time (AFT) models directly relate expected failure times to covariates and are a useful alternative to models built on the hazard function or the survival function. For case-cohort data, much less development has been done with AFT models. In addition to the covariates missing for controls outside the subcohort, the challenges of AFT model inference with a full cohort remain: the regression parameter estimator is hard to compute because the most widely used rank-based estimating equations are not smooth, and its variance depends on the unspecified error distribution, so most methods rely on a computationally intensive bootstrap to estimate it. We propose fast rank-based inference procedures for AFT models, applying recent methodological advances to the context of case-cohort data. Parameters are estimated with an induced smoothing approach that smooths the estimating functions and facilitates their numerical solution. Variance estimators are obtained through efficient resampling methods for nonsmooth estimating functions that avoid a full-blown bootstrap. Simulation studies suggest that the recommended procedure provides fast and valid inference among several competing procedures. An application to a tumor study demonstrates the utility of the proposed method in routine data analysis.
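The induced smoothing idea replaces the indicator in the nonsmooth Gehan-type estimating function with a normal distribution function, so standard root-finders apply. The following schematic sketch is the full-cohort version with a simple fixed bandwidth; the case–cohort weights and the data-driven smoothing covariance of the actual method are omitted, and the bandwidth h is an assumption for illustration.

import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

def smoothed_gehan_score(beta, x, log_t, delta, h=0.1):
    # Residuals e_i = log T_i - x_i' beta; the indicator I(e_j >= e_i) in the
    # Gehan estimating function is replaced by Phi((e_j - e_i) / h).
    e = log_t - x @ beta
    s = norm.cdf((e[None, :] - e[:, None]) / h)   # s[i, j] ~ I(e_j >= e_i)
    u = np.zeros_like(beta)
    for i in range(len(e)):
        if delta[i]:                               # only uncensored i contribute
            u += ((x[i] - x) * s[i][:, None]).sum(axis=0)
    return u / len(e) ** 2

# beta_hat = fsolve(smoothed_gehan_score, np.zeros(x.shape[1]),
#                   args=(x, log_t, delta))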

15.
Interval-censored failure time data and panel count data are two types of incomplete data that commonly occur in event history studies, and many methods have been developed for their separate analysis (Sun, The Statistical Analysis of Interval-Censored Failure Time Data, Springer, New York, 2006; Sun and Zhao, The Statistical Analysis of Panel Count Data, Springer, New York, 2013). Sometimes one may be interested in, or need to conduct, their joint analysis, such as in clinical trials with composite endpoints, for which there does not seem to exist an established approach in the literature. In this paper, a sieve maximum likelihood approach is developed for the joint analysis; in the proposed method, Bernstein polynomials are used to approximate the unknown functions. The asymptotic properties of the resulting estimators are established and, in particular, the proposed estimators of regression parameters are shown to be semiparametrically efficient. In addition, an extensive simulation study was conducted, and the proposed method is applied to a set of real data arising from a skin cancer study.
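To see how the Bernstein-polynomial sieve works, note that a nondecreasing function such as a cumulative hazard on [0, τ] can be approximated by a Bernstein polynomial whose coefficients are constrained to be nondecreasing; writing the coefficient increments as exponentials makes the optimization unconstrained. A small sketch of the basis construction (degree and parameterization are illustrative, not the paper's exact setup):

import numpy as np
from scipy.special import comb

def bernstein_basis(t, m, tau=1.0):
    # Basis B_{k,m}(t/tau) for k = 0..m, evaluated at points t in [0, tau].
    u = np.asarray(t) / tau
    return np.stack([comb(m, k) * u ** k * (1 - u) ** (m - k)
                     for k in range(m + 1)], axis=-1)

def monotone_curve(t, gamma, tau=1.0):
    # Coefficients phi_k = cumsum(exp(gamma_k)) are nondecreasing by
    # construction, so the resulting Bernstein polynomial is nondecreasing too.
    phi = np.cumsum(np.exp(gamma))
    return bernstein_basis(t, len(gamma) - 1, tau) @ phi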

16.
Two- or multi-phase study designs are often used in settings involving failure times. In most studies, whether or not certain covariates are measured on an individual depends on their failure time and status. For example, when failures are rare, case–cohort or case–control designs are used to increase the number of failures relative to a random sample of the same size. Another scenario is where certain covariates are expensive to measure, so they are obtained only for selected individuals in a cohort. This paper considers such situations and focuses on cases where we wish to test hypotheses of no association between failure time and expensive covariates. Efficient score tests based on maximum likelihood are developed and shown to have a simple form for a wide class of models and sampling designs. Some numerical comparisons of study designs are presented.

17.
The purpose of this paper is to account for informative sampling in fitting time series models, and in particular an autoregressive model of order one, for longitudinal survey data. The idea behind the proposed approach is to extract the model holding for the sample data as a function of the model in the population and the first-order inclusion probabilities, and then fit the sample model using maximum-likelihood, pseudo-maximum-likelihood and estimating equations methods. A new test for sampling ignorability is proposed based on the Kullback–Leibler information measure. Also, we investigate the issue of the sensitivity of the sample model to incorrect specification of the conditional expectations of the sample inclusion probabilities. The simulation study carried out shows that the sample-likelihood-based method produces better estimators than the pseudo-maximum-likelihood method, and that sensitivity to departures from the assumed model is low. Also, we find that both the conventional t-statistic and the Kullback–Leibler information statistic for testing of sampling ignorability perform well under both informative and noninformative sampling designs.

18.
This article discusses regression analysis of right-censored failure time data where there may exist a cured subgroup and covariate effects may vary with time, a phenomenon that often occurs in many medical studies. To address the problem, we discuss a class of varying coefficient transformation models along with a logistic model for the cured subgroup. For inference, a sieve maximum likelihood approach is developed with the use of spline functions, and the asymptotic properties of the proposed estimators are established. The proposed method can be easily implemented, and the conducted simulation study suggests that it works well in practical situations. An illustrative example is provided.

19.
Proportional hazards frailty models use a random effect, the so-called frailty, to induce association in clustered failure time data. It is customary to assume that the random frailty follows a gamma distribution. In this paper, we propose a graphical method for assessing the adequacy of proportional hazards frailty models. In particular, we focus on assessing the gamma distribution assumption for the frailties. We calculate the average of the posterior expected frailties at several follow-up time points and compare it at these time points to 1, the known frailty mean. Large discrepancies indicate lack of fit. To aid in assessing the goodness of fit, we derive and estimate the standard error of the mean of the posterior expected frailties at each time point examined. We give an example to illustrate the proposed methodology and perform a sensitivity analysis by simulations.
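Under a gamma frailty with mean 1 and variance θ, conjugacy gives a closed form for the posterior expected frailty of cluster i at follow-up time t: E[w_i | data] = (1/θ + N_i(t)) / (1/θ + H_i(t)), where N_i(t) counts the cluster's events by time t and H_i(t) is its accumulated cumulative hazard. The diagnostic averages these quantities across clusters and compares the average to 1. A minimal sketch, assuming the event counts and cumulative hazards have already been computed from a fitted model:

import numpy as np

def mean_posterior_frailty(theta, n_events, cum_hazard):
    # theta: gamma frailty variance; n_events[i], cum_hazard[i]: events and
    # accumulated cumulative hazard for cluster i up to the time point.
    post = (1.0 / theta + np.asarray(n_events)) / \
           (1.0 / theta + np.asarray(cum_hazard))
    return post.mean()  # values far from 1 suggest lack of fit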

20.
Bivariate failure time data arise widely in survival analysis, for example in twin studies. This article presents a class of χ²-type tests for independence between pairs of failure times after adjusting for covariates. A bivariate accelerated failure time model is proposed for the joint distribution of the bivariate failure times, while the dependence structures of the related failure times are left completely unspecified. Theoretical properties of the proposed tests are derived, and variance estimates of the test statistics are obtained using a resampling technique. Simulation studies show that the proposed tests are appropriate for practical use. Two examples, including a study of infection in catheters for patients on dialysis and the diabetic retinopathy study, are given to illustrate the methodology.
