Similar Articles
1.
Competing risks are common in clinical cancer research, as patients are subject to multiple potential failure outcomes, such as death from the cancer itself or from complications arising from the disease. In the analysis of competing risks, several regression methods are available for evaluating the relationship between covariates and cause-specific failures, many of which are based on Cox’s proportional hazards model. Although a great deal of research has been conducted on estimating competing risks, less attention has been devoted to linear regression modeling, which is often referred to as the accelerated failure time (AFT) model in the survival literature. In this article, we address the use and interpretation of linear regression analysis for the competing risks problem. We introduce two AFT modeling frameworks, in which the influence of a covariate can be evaluated in relation to either a cause-specific hazard function, referred to as cause-specific AFT (CS-AFT) modeling in this study, or the cumulative incidence function of a particular failure type, referred to as crude-risk AFT (CR-AFT) modeling. Simulation studies illustrate that, as in hazard-based competing risks analysis, these two models can produce substantially different effects, depending on the relationship between the covariates and both the failure type of principal interest and the competing failure types. We apply the AFT methods to data from non-Hodgkin lymphoma patients, where the dataset is characterized by two competing events, disease relapse and death without relapse, and by non-proportionality. We demonstrate how the data can be analyzed and interpreted using linear competing risks regression models.
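As a rough illustration of the cause-specific AFT idea, one can simulate two latent log-linear failure times and record whichever occurs first; the coefficients and error distribution below are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.binomial(1, 0.5, n)                     # a single binary covariate

# Hypothetical cause-specific AFT specification: log T_k = beta_k * x + eps_k
beta_relapse, beta_death = -0.5, 0.2            # illustrative coefficients
t_relapse = np.exp(beta_relapse * x + rng.normal(0, 1, n))
t_death = np.exp(beta_death * x + rng.normal(0, 1, n))

t = np.minimum(t_relapse, t_death)              # observed failure time
cause = np.where(t_relapse <= t_death, 1, 2)    # 1 = relapse, 2 = death

# beta_relapse < 0 accelerates relapse, so the crude incidence of relapse
# should be visibly higher in the x = 1 group than in the x = 0 group.
p1 = (cause[x == 1] == 1).mean()
p0 = (cause[x == 0] == 1).mean()
print(p1, p0)
```

Note how a covariate that accelerates one failure type reshapes the crude incidence of the other, which is exactly why the CS-AFT and CR-AFT effects can differ.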

2.
Smoothed Gehan rank estimation methods are widely used in accelerated failure time (AFT) models with or without clusters. However, most methods are sensitive to outliers in the covariates. To address this problem, we propose robust approaches based on the smoothed Gehan rank estimation methods for the AFT model, allowing for clusters by employing two different weight functions. Simulation studies show that the proposed methods outperform existing smoothed rank estimation methods in terms of bias and standard deviation when there are outliers in the covariates. The proposed methods are also applied to a real dataset from the “Major cardiovascular interventions” study.

3.
In stratified case-cohort designs, the case-cohort sample is drawn by stratified random sampling based on covariate information available for the entire cohort. In this paper, we extend the work of Kang & Cai (2009) to a generalized stratified case-cohort study design for failure time data with multiple disease outcomes. Under this study design, we develop weighted estimating procedures for the model parameters in marginal multiplicative intensity models and for the cumulative baseline hazard function. The asymptotic properties of the estimators are studied using martingales, modern empirical process theory, and results for finite population sampling.

4.
We introduce a general class of semiparametric hazard regression models, called extended hazard (EH) models, designed to accommodate various survival schemes with time-dependent covariates. The EH model contains both the Cox model and the accelerated failure time (AFT) model as subclasses, so this nested structure can be used to perform model selection between the Cox and AFT models. A class of estimating equations using counting process and martingale techniques is developed to estimate the regression parameters of the proposed model. The performance of the estimating procedure and the impact of model misspecification are assessed through simulation studies. Two data examples, the Stanford heart transplant data and the Mediterranean fruit fly egg-laying data, are used to demonstrate the usefulness of the EH model.

5.
The case-cohort design brings cost reduction in large cohort studies. In this paper, we consider a nonlinear quantile regression model for censored competing risks under the case-cohort design. Two different estimating equations are constructed, with and without the covariate information of the other risks included. The large sample properties of the estimators are obtained. The asymptotic covariances are estimated using a fast resampling method, which is useful for further inference. The finite sample performance of the proposed estimators is assessed by simulation studies, and a real example demonstrates the application of the proposed methods.

6.
Under the case-cohort design introduced by Prentice (Biometrika 73:1–11, 1986), the covariate histories are ascertained only for the subjects who experience the event of interest (i.e., the cases) during the follow-up period and for a relatively small random sample from the original cohort (i.e., the subcohort). The case-cohort design has been widely used in clinical and epidemiological studies to assess the effects of covariates on failure times. Most statistical methods developed for the case-cohort design use the proportional hazards model, and few methods allow for time-varying regression coefficients. In addition, most methods disregard data from subjects outside of the subcohort, which can result in inefficient inference. Addressing these issues, this paper proposes an estimation procedure for the semiparametric additive hazards model with case-cohort/two-phase sampling data, allowing the covariates of interest to be missing for cases as well as for non-cases. A more flexible form of the additive model is considered that allows the effects of some covariates to be time-varying while specifying the effects of others to be constant. An augmented inverse probability weighted estimation procedure is proposed. The proposed method allows utilizing auxiliary information that correlates with the phase-two covariates to improve efficiency. The asymptotic properties of the proposed estimators are established. An extensive simulation study shows that the augmented inverse probability weighted estimation is more efficient than the widely adopted inverse probability weighted complete-case estimation method. The method is applied to analyze data from a preventive HIV vaccine efficacy trial.
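The inverse probability weighting principle behind such estimators can be sketched in its simplest form, recovering a full-cohort mean of an expensively measured covariate from a case-cohort sample; all numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20000
x = rng.normal(0.0, 1.0, N)                      # expensive phase-two covariate
case = rng.random(N) < 1 / (1 + np.exp(3 - x))   # events more likely at high x

# Case-cohort sampling: keep every case plus a 10% subcohort of non-cases
pi = np.where(case, 1.0, 0.10)                   # known inclusion probabilities
sampled = rng.random(N) < pi

# The unweighted phase-two mean is biased because cases (high x) are
# oversampled; weighting by 1/pi restores the full-cohort target.
naive = x[sampled].mean()
w = 1.0 / pi[sampled]
ipw = np.sum(w * x[sampled]) / np.sum(w)
print(naive, ipw, x.mean())
```

The augmented estimators in the paper additionally exploit cheap cohort-wide covariates to reduce the variance of this weighted estimate; the sketch shows only the basic weighting step.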

7.
In competing risks analysis, most inferences have been developed for continuous failure time data. However, failure times are sometimes observed on a discrete scale. We propose nonparametric inferences for the cumulative incidence function for purely discrete data with competing risks. When covariate information is available, we propose semiparametric inferences for direct regression modelling of the cumulative incidence function for grouped discrete failure time data with competing risks. Simulation studies show that the procedures perform well. The proposed methods are illustrated with a study of contraceptive use in Indonesia.

8.
Nested case-control and case-cohort studies are useful for studying associations between covariates and time-to-event when some covariates are expensive to measure. Full covariate information is collected in the nested case-control or case-cohort sample only, while cheaply measured covariates are often observed for the full cohort. Standard analysis of such case-control samples ignores any full cohort data. Previous work has shown how data for the full cohort can be used efficiently by multiple imputation of the expensive covariate(s), followed by a full-cohort analysis. For large cohorts this is computationally expensive or even infeasible. An alternative is to supplement the case-control samples with additional controls on which cheaply measured covariates are observed. We show how multiple imputation can be used for analysis of such supersampled data. Simulations show that this brings efficiency gains relative to a traditional analysis and that the efficiency loss relative to using the full cohort data is not substantial.

9.
The accelerated failure time (AFT) model is an important regression tool to study the association between failure time and covariates. In this paper, we propose a robust weighted generalized M (GM) estimation for the AFT model with right-censored data by appropriately using the Kaplan–Meier weights in the GM-type objective function to estimate the regression coefficients and scale parameter simultaneously. This estimation method is computationally simple and can be implemented with existing software. Asymptotic properties including the root-n consistency and asymptotic normality are established for the resulting estimator under suitable conditions. We further show that the method can be readily extended to handle a class of nonlinear AFT models. Simulation results demonstrate satisfactory finite sample performance of the proposed estimator. The practical utility of the method is illustrated by a real data example.
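The Kaplan–Meier weights used in such objective functions are a standard device; a minimal sketch of how they can be computed, assuming no tied observation times:

```python
import numpy as np

def km_weights(time, delta):
    """Kaplan-Meier weights for right-censored data (no ties assumed).

    Censored observations get weight 0; each uncensored observation gets
    the jump of the Kaplan-Meier estimator of the failure-time
    distribution at that point.
    """
    order = np.argsort(time)
    t, d = time[order], delta[order]
    n = len(t)
    w = np.zeros(n)
    surv = 1.0                      # running KM survival just before t[i]
    for i in range(n):
        if d[i] == 1:
            jump = surv / (n - i)   # mass the KM curve drops at this event
            w[i] = jump
            surv -= jump
        # censored points contribute no mass
    out = np.zeros(n)
    out[order] = w                  # return weights in the original order
    return out
```

With no censoring every weight reduces to 1/n, so a weighted objective built from these collapses to the ordinary complete-data one.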

10.
Recognizing that the efficiency in relative risk estimation for the Cox proportional hazards model is largely constrained by the total number of cases, Prentice (1986) proposed the case-cohort design, in which covariates are measured on all cases and on a random sample of the cohort. Subsequent to Prentice, other methods of estimation and sampling have been proposed for these designs. We formalize an approach to variance estimation suggested by Barlow (1994), and derive a robust variance estimator based on the influence function. We consider the applicability of the variance estimator to all the proposed case-cohort estimators, and derive the influence function when known sampling probabilities in the estimators are replaced by observed sampling fractions. We discuss the modifications required when cases are missing covariate information. The missingness may occur by chance, and be completely at random; or may occur as part of the sampling design, and depend upon other observed covariates. We provide an adaptation of S-plus code that allows estimating influence function variances in the presence of such missing covariates. Using examples from our current case-cohort studies on esophageal and gastric cancer, we illustrate how our results are useful in solving design and analytic issues that arise in practice.

11.
The problem of estimating standard errors for diagnostic accuracy measures can be challenging for many complicated models. Bootstrap methods can address this problem by replacing intractable derivations with resampled empirical distributions. We consider two cases where bootstrap methods successfully improve our knowledge of the sampling variability of diagnostic accuracy estimators. The first application is inference for the area under the ROC curve (AUC) resulting from a functional logistic regression model, a sophisticated modelling device for describing the relationship between a dichotomous response and multiple covariates. We consider using this regression method to model the predictive effects of multiple independent variables on the occurrence of a disease. Accuracy measures, such as the AUC, are derived from the functional regression, and asymptotic results for the empirical estimators are provided to facilitate inference. The second application is testing the difference between two weighted areas under the ROC curve (WAUC) from a paired two-sample study. The correlation between the two WAUCs complicates the asymptotic distribution of the test statistic, so we employ bootstrap methods to obtain satisfactory inference results. Simulations and examples are supplied in this article to confirm the merits of the bootstrap methods.
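For the simpler scalar-score case, the empirical AUC and a percentile-bootstrap assessment of its variability can be sketched as follows; this is not the functional-regression machinery of the paper, and all parameters are illustrative.

```python
import numpy as np

def auc(y, s):
    """Empirical AUC: P(score of a positive > score of a negative),
    counting ties as 1/2."""
    pos, neg = s[y == 1], s[y == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

rng = np.random.default_rng(4)
n = 400
y = rng.binomial(1, 0.4, n)             # disease indicator
s = rng.normal(y * 1.0, 1.0)            # scores shifted up for the diseased

a_hat = auc(y, s)

# Percentile bootstrap for the sampling variability of the AUC
B = 1000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)         # resample subjects with replacement
    boot[b] = auc(y[idx], s[idx])
lo, hi = np.quantile(boot, [0.025, 0.975])
```

Resampling whole subjects (rather than cases and controls separately) keeps the case fraction random, mimicking a cohort sampling scheme.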

12.
This paper deals with the regression analysis of failure time data when there are censoring and multiple types of failures. We propose a semiparametric generalization of a parametric mixture model of Larson & Dinse (1985), for which the marginal probabilities of the various failure types are logistic functions of the covariates. Given the type of failure, the conditional distribution of the time to failure follows a proportional hazards model. A marginal likelihood approach to estimating the regression parameters is suggested, whereby the baseline hazard functions are eliminated as nuisance parameters. The Monte Carlo method is used to approximate the marginal likelihood; the resulting function is maximized easily using existing software. Some guidelines for choosing the number of Monte Carlo replications are given. Fixing the regression parameters at their estimated values, the full likelihood is maximized via an EM algorithm to estimate the baseline survivor functions. The methods suggested are illustrated using the Stanford heart transplant data.

13.
A flexible Bayesian semiparametric accelerated failure time (AFT) model is proposed for analyzing arbitrarily censored survival data with covariates subject to measurement error. Specifically, the baseline error distribution in the AFT model is nonparametrically modeled as a Dirichlet process mixture of normals. Classical measurement error models are imposed for covariates subject to measurement error. An efficient and easy-to-implement Gibbs sampler, based on the stick-breaking formulation of the Dirichlet process combined with the techniques of retrospective and slice sampling, is developed for the posterior calculation. An extensive simulation study is conducted to illustrate the advantages of our approach.

14.
Motivated by an application, we consider statistical inference for varying-coefficient regression models in which some covariates are not observed, but ancillary variables are available as surrogates for them. Because of the resulting attenuation, the usual local polynomial estimator of the coefficient functions is not consistent. We propose a corrected local polynomial estimator of the unknown coefficient functions that calibrates the error-prone covariates. The resulting estimators are shown to be consistent and asymptotically normal. In addition, we develop a wild bootstrap test for the goodness of fit of the model. Simulations are conducted to demonstrate the finite sample performance of the proposed estimation and test procedures. An application to real data from a Duchenne muscular dystrophy study is also presented.

15.
The semiparametric accelerated failure time (AFT) model is not as widely used as the Cox relative risk model, owing to computational difficulties. Recent developments in least squares estimation and induced smoothing estimating equations for censored data provide promising tools to make AFT models more attractive in practice. For multivariate AFT models, we propose a generalized estimating equations (GEE) approach, extending the GEE to censored data. The consistency of the regression coefficient estimator is robust to misspecification of the working covariance, and the efficiency is higher when the working covariance structure is closer to the truth. The marginal error distributions and regression coefficients are allowed to be unique for each margin or partially shared across margins as needed. The initial estimator is a rank-based estimator with Gehan’s weight, obtained through an induced smoothing approach for computational ease. The resulting estimator is consistent and asymptotically normal, with variance estimated through a multiplier resampling method. In a large-scale simulation study, our estimator was up to three times as efficient as the estimator that ignores the within-cluster dependence, especially when the within-cluster dependence was strong. The methods were applied to bivariate failure time data from a diabetic retinopathy study.
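The Gehan rank objective behind such initial estimators can be sketched in its non-smooth form; the induced-smoothing refinement is omitted here, and the simulation settings are illustrative.

```python
import numpy as np

def gehan_loss(beta, logt, delta, x):
    """Gehan rank loss for a one-covariate AFT model.

    With residuals e_i = log t_i - beta * x_i, the loss sums, over pairs
    in which observation i is an uncensored failure, the amount by which
    e_i falls below e_j; its minimizer is the Gehan rank estimator.
    """
    e = logt - beta * x
    diff = e[:, None] - e[None, :]              # e_i - e_j for all pairs
    return np.sum(delta[:, None] * np.maximum(-diff, 0.0))

# Recover the true coefficient on simulated uncensored data via grid search.
rng = np.random.default_rng(0)
n = 400
x = rng.normal(0.0, 1.0, n)
logt = 1.0 * x + rng.normal(0.0, 0.5, n)        # true beta = 1
delta = np.ones(n)                              # no censoring in this sketch
grid = np.linspace(0.0, 2.0, 81)
best = grid[np.argmin([gehan_loss(b, logt, delta, x) for b in grid])]
```

The loss is piecewise linear and convex, which is why rank estimation is robust but non-smooth; induced smoothing replaces the kinks with a differentiable surrogate so standard solvers apply.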

16.
Conventional bootstrap-t intervals for density functions based on kernel density estimators exhibit poor coverage due to the failure of the bootstrap to estimate the bias correctly. The problem can be resolved either by estimating the bias explicitly or by undersmoothing the kernel density estimate so that its bias becomes asymptotically negligible. The resulting bias-corrected intervals have an optimal coverage error of order arbitrarily close to second order for a sufficiently smooth density function. We investigate the effects on coverage error of both types of bias-corrected intervals when the nominal coverage level is calibrated by the iterated bootstrap. In either case, an asymptotic reduction of coverage error is possible provided that the bias terms are handled using an extra round of smoothed bootstrapping. Under appropriate smoothness conditions, the optimal coverage error of the iterated bootstrap-t intervals has order arbitrarily close to third order. Examples of both simulated and real data are reported to illustrate the iterated bootstrap procedures.
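A toy version of the undersmoothing device, using a Gaussian kernel and a plug-in standard error; the rates and constants below are common textbook choices, not prescriptions from the paper.

```python
import numpy as np

def kde_at(x0, data, h):
    """Gaussian kernel density estimate at the point x0."""
    u = (x0 - data) / h
    return np.mean(np.exp(-0.5 * u * u)) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
n = 300
data = rng.normal(0.0, 1.0, n)

# Undersmoothing: shrink a rule-of-thumb bandwidth by an extra factor so
# that bias is asymptotically negligible relative to the standard error.
h = 1.06 * data.std() * n ** (-1 / 5) * n ** (-1 / 10)

x0 = 0.0
fhat = kde_at(x0, data, h)

# Plug-in standard error: sd(fhat) ~ sqrt(f(x0) * R(K) / (n h)), with
# kernel roughness R(K) = 1 / (2 sqrt(pi)) for the Gaussian kernel.
RK = 1.0 / (2.0 * np.sqrt(np.pi))

def se(f):
    return np.sqrt(f * RK / (n * h))

# Bootstrap-t: studentize each resampled estimate, then invert quantiles.
B = 500
t = np.empty(B)
for b in range(B):
    boot = rng.choice(data, n, replace=True)
    fb = kde_at(x0, boot, h)
    t[b] = (fb - fhat) / se(fb)
qlo, qhi = np.quantile(t, [0.025, 0.975])
ci = (fhat - qhi * se(fhat), fhat - qlo * se(fhat))
```

Because the interval is built from studentized quantities, its endpoints need not be symmetric about the point estimate, which is the usual advantage of bootstrap-t over normal-theory intervals.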

17.
Nonparametric estimation and inference for conditional distribution functions with longitudinal data have important applications in biomedical studies, such as epidemiological studies and longitudinal clinical trials. Estimation approaches without any structural assumptions may lead to inadequate and numerically unstable estimators in practice. We propose in this paper a nonparametric approach based on time-varying parametric models for estimating the conditional distribution functions with a longitudinal sample. Our model assumes that the conditional distribution of the outcome variable at each given time point can be approximated by a parametric model after a local Box–Cox transformation. Our estimation is based on a two-step smoothing method, in which we first obtain the raw estimators of the conditional distribution functions at a set of disjoint time points, and then compute the final estimators at any time by smoothing the raw estimators. Applications of our two-step estimation method are demonstrated through a large epidemiological study of childhood growth and blood pressure, and finite sample properties of our procedures are investigated through a simulation study. Application and simulation results show that smoothing estimation from time-varying parametric models outperforms the existing kernel smoothing estimator by producing narrower pointwise bootstrap confidence bands and smaller root mean squared errors.

18.
Several survival regression models have been developed to assess the effects of covariates on failure times. In various settings, including surveys, clinical trials and epidemiological studies, missing data may often occur due to incomplete covariate information. Most existing methods for lifetime data are based on the assumption of missing at random (MAR) covariates. However, in many substantive applications, it is important to assess the sensitivity of key model inferences to the MAR assumption. The index of sensitivity to non-ignorability (ISNI) is a local sensitivity tool that measures the potential sensitivity of key model parameters to small departures from the ignorability assumption, without the need to fit a complicated non-ignorable model. We extend this sensitivity index to evaluate the impact of a covariate that is potentially missing not at random in survival analysis, using parametric survival models. The approach is applied to investigate the impact of missing tumor grade on post-surgical mortality outcomes in individuals with pancreas-head cancer in the Surveillance, Epidemiology, and End Results data set. For patients suffering from cancer, tumor grade is an important risk factor, and many individuals in these data with pancreas-head cancer have missing tumor grade information. Our ISNI analysis shows that the estimated effects of most covariates with significant effects on the survival time distribution, notably surgery and tumor grade, depend strongly on the assumed missingness mechanism for tumor grade. A simulation study is also conducted to evaluate the performance of the proposed index in detecting the sensitivity of key model parameters.

19.
Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study the statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. We find that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks occurring at the censoring times, whereas there is no such problem with the least squares method. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.

20.
Alternative methods of estimating properties of unknown distributions include the bootstrap and the smoothed bootstrap. In the standard bootstrap setting, Johns (1988) introduced an importance resampling procedure that yields more accurate approximations to the bootstrap estimate of a distribution function or a quantile. With a suitable “exponential tilting” similar to that used by Johns, we derive a smoothed version of importance resampling in the framework of the smoothed bootstrap. Smoothed importance resampling procedures are developed for the estimation of the distribution functions of the Studentized mean, the Studentized variance, and the correlation coefficient. Implementation of these procedures is presented via simulation results, which concentrate on estimating the distribution functions of the Studentized mean and Studentized variance for different sample sizes and various pre-specified smoothing bandwidths for normal data; additional simulations were conducted for the estimation of quantiles of the distribution of the Studentized mean under an optimal smoothing bandwidth when the original data were simulated from three different parent populations: lognormal, t(3) and t(10). These results suggest that, in cases where it is advantageous to use the smoothed bootstrap rather than the standard bootstrap, the amount of resampling necessary might be substantially reduced by importance resampling, with efficiency gains that depend on the bandwidth used in the kernel density estimation.
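The exponential-tilting idea can be sketched in the simplest setting, estimating a bootstrap tail probability of the sample mean; the tilt below centres the resampling distribution at the tail point, a standard heuristic rather than Johns's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
data = rng.normal(0.0, 1.0, n)
xbar = data.mean()
q = xbar - 0.3                    # tail point for the bootstrap distribution

B = 4000
# Plain bootstrap estimate of P*(resample mean <= q)
plain = np.mean([rng.choice(data, n, replace=True).mean() <= q
                 for _ in range(B)])

# Importance resampling: tilt the resampling probabilities p_i ~ exp(lam*x_i)
# so that resamples concentrate near the tail event being estimated.
lam = (q - xbar) / data.var()     # tilt centring the resample mean at q
p = np.exp(lam * data)
p /= p.sum()
est = np.empty(B)
for b in range(B):
    idx = rng.choice(n, n, replace=True, p=p)
    logw = -np.sum(np.log(n * p[idx]))          # log importance weight
    est[b] = np.exp(logw) * (data[idx].mean() <= q)
tilted = est.mean()
print(plain, tilted)
```

Both estimators target the same bootstrap probability; the tilted version spends its resamples where the event actually occurs, which is the source of the variance reduction the abstract describes.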
