Similar articles

20 similar articles found (search time: 62 ms)
1.
ABSTRACT

In this article, causal inference in randomized studies with recurrent events data and all-or-none compliance is considered. We use the counting process to analyze the recurrent events data and propose a causal proportional intensity model. The maximum likelihood approach is adopted to estimate the parameters of the proposed causal model. To overcome the computational difficulties created by the mixture structure of the problem, we develop an expectation-maximization (EM) algorithm. The resulting estimators are shown to be consistent and asymptotically normal. We further estimate the complier average causal effect (CACE), which is defined as the difference of the average numbers of recurrence between treatment and control groups within the complier class. The corresponding inferential procedures are established. Some simulation studies are conducted to assess the finite sample performance of the proposed approach.

2.
Data analysis for randomized trials including multi-treatment arms is often complicated by subjects who do not comply with their treatment assignment. We discuss here methods of estimating treatment efficacy for randomized trials involving multi-treatment arms subject to non-compliance. One treatment effect of interest in the presence of non-compliance is the complier average causal effect (CACE) (Angrist et al. 1996), which is defined as the treatment effect for subjects who would comply regardless of the assigned treatment. Following the idea of principal stratification (Frangakis & Rubin 2002), we define principal compliance (Little et al. 2009) in trials with three treatment arms, extend CACE and define causal estimands of interest in this setting. In addition, we discuss structural assumptions needed for estimation of causal effects and the identifiability problem inherent in this setting from both a Bayesian and a classical statistical perspective. We propose a likelihood-based framework that models potential outcomes in this setting and a Bayes procedure for statistical inference. We compare our method with a method of moments approach proposed by Cheng & Small (2006) using a hypothetical data set, and further illustrate our approach with an application to a behavioral intervention study (Janevic et al. 2003).

3.
Motivated by a potential-outcomes perspective, the idea of principal stratification has been widely recognized for its relevance in settings susceptible to posttreatment selection bias such as randomized clinical trials where treatment received can differ from treatment assigned. In one such setting, we address subtleties involved in inference for causal effects when using a key covariate to predict membership in latent principal strata. We show that when treatment received can differ from treatment assigned in both study arms, incorporating a stratum-predictive covariate can make estimates of the "complier average causal effect" (CACE) derive from observations in the two treatment arms with different covariate distributions. Adopting a Bayesian perspective and using Markov chain Monte Carlo for computation, we develop posterior checks that characterize the extent to which incorporating the pretreatment covariate endangers estimation of the CACE. We apply the method to analyze a clinical trial comparing two treatments for jaw fractures in which the study protocol allowed surgeons to overrule both possible randomized treatment assignments based on their clinical judgment and the data contained a key covariate (injury severity) predictive of treatment received.

4.
In this article, we consider the inclusion of random effects in both the survival function for at-risk subjects and the cure probability, assuming a bivariate normal distribution for those effects in each cluster. For parameter estimation, we implement the restricted maximum likelihood (REML) approach. We consider Weibull and piecewise exponential distributions to model the survival function for non-cured individuals. Simulation studies are performed, and the performance of the proposed model is evaluated on a real database. The effects of different follow-up times, and of assuming independent rather than bivariate random effects, are also studied.

5.
The last decade saw enormous progress in the development of causal inference tools to account for noncompliance in randomized clinical trials. With survival outcomes, structural accelerated failure time (SAFT) models enable causal estimation of effects of observed treatments without making direct assumptions on the compliance selection mechanism. The traditional proportional hazards model has however rarely been used for causal inference. The estimator proposed by Loeys and Goetghebeur (2003, Biometrics vol. 59 pp. 100–105) is limited to the setting of all-or-nothing exposure. In this paper, we propose an estimation procedure for more general causal proportional hazards models linking the distribution of potential treatment-free survival times to the distribution of observed survival times via observed (time-constant) exposures. Specifically, we first build models for observed exposure-specific survival times. Next, using the proposed causal proportional hazards model, the exposure-specific survival distributions are backtransformed to their treatment-free counterparts, to obtain – after proper mixing – the unconditional treatment-free survival distribution. Estimation of the parameter(s) in the causal model is then based on minimizing a test statistic for equality in backtransformed survival distributions between randomized arms.

6.
Abstract.  We propose a Bayesian semiparametric model for survival data with a cure fraction. We explicitly consider a finite cure time in the model, which allows us to separate the cured and the uncured populations. We take a mixture prior of a Markov gamma process and a point mass at zero to model the baseline hazard rate function of the entire population. We focus on estimating the cure threshold after which subjects are considered cured. We can incorporate covariates through a structure similar to the proportional hazards model and allow the cure threshold also to depend on the covariates. For illustration, we undertake simulation studies and a full Bayesian analysis of a bone marrow transplant data set.

7.
The semiparametric transformation model has been extensively investigated in the literature, but it has rarely been applied to survival data with a cure fraction. In this article, we consider a class of semiparametric transformation models in which an unknown transformation of the survival times with cure fraction is assumed to be linearly related to the covariates, and the error distributions are parametrically specified as an extreme value distribution with unknown parameters. Estimators for the covariate coefficients are obtained from pseudo Z-estimator procedures that allow censored observations. We show that the estimators are consistent and asymptotically normal. Bootstrap estimation of the variances of the estimators is also investigated.

8.
Survival data with nonnegligible cure fractions are commonly encountered in cancer clinical research. Recently, several authors (e.g. Kuk and Chen, Biometrika 79 (1992) 531; Maller and Zhou, Journal of Applied Probability 30 (1993) 602; Peng and Dear, Biometrics 56 (2000) 237; Sy and Taylor, Biometrics 56 (2000) 227) have proposed semiparametric cure models to analyze such data. Much of the existing work has emphasized cure detection and regression techniques. In contrast, this work focuses on hypothesis testing in the presence of a cure fraction. Specifically, our interest lies in detecting whether survival differences exist among noncured patients between treatment arms. For this purpose, we investigate the use of a modified Cramér-von Mises statistic for two-sample survival comparisons within the framework of cure models. Such a test has been studied by Tamura et al. (Statistics in Medicine 19, 2000, 2169) using a bootstrap procedure. Here we focus on developing asymptotic theory and convergent algorithms. We show that the limiting distribution of the Cramér-von Mises statistic under the null hypothesis can be represented by stochastic integrals and by a weighted sum of noncentral chi-squares. Both representations lead to concrete numerical schemes for computing the limiting distribution. The algorithms can be easily implemented for data analysis and significantly reduce computing time compared to the bootstrap approach. For illustrative purposes, we apply the proposed test to a published clinical trial.
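The two-sample comparison underlying this test can be illustrated with the classical Cramér-von Mises statistic in its uncensored form; the cure-model version in the abstract additionally handles censoring and restricts attention to noncured patients, both of which this minimal sketch omits. The function names are illustrative, not from the paper.

```python
import bisect

def ecdf(sample):
    """Return the empirical CDF of a sample as a callable."""
    s = sorted(sample)
    n = len(s)
    return lambda x: bisect.bisect_right(s, x) / n

def cramer_von_mises(x, y):
    """Classical two-sample Cramér-von Mises statistic:
    T = n*m/N^2 * sum over all pooled sample points of (F_x - F_y)^2."""
    f_x, f_y = ecdf(x), ecdf(y)
    n, m = len(x), len(y)
    big_n = n + m
    pooled = list(x) + list(y)
    return n * m / big_n ** 2 * sum((f_x(t) - f_y(t)) ** 2 for t in pooled)
```

Identical samples yield T = 0; larger values indicate a stronger distributional difference between the two arms.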

9.
In survival data analysis, a significant amount of right censoring frequently occurs, indicating that there may be a proportion of individuals in the study for whom the event of interest will never happen. This fact is not accounted for by ordinary survival theory; consequently, survival models with a cure fraction have received much attention in recent years. In this article, we consider the standard mixture cure rate model, where a fraction p0 of the population consists of cured or immune individuals and the remaining 1 − p0 are not cured. We assume an exponential distribution for the survival time and a uniform-exponential distribution for the censoring time. In a simulation study, the impact of the informative uniform-exponential censoring on the coverage probabilities and lengths of asymptotic confidence intervals is analyzed using the Fisher information and observed information matrices.
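The information-based intervals studied here can be sketched for the exponential piece of the model. This sketch uses plain uniform censoring as a simplified stand-in for the paper's uniform-exponential censoring scheme, and all names are illustrative.

```python
import math
import random
from statistics import NormalDist

def exp_rate_ci(times, events, level=0.95):
    """Wald interval for the exponential rate from right-censored data.
    The log-likelihood d*log(lam) - lam*T gives MLE d/T and observed
    information d/lam^2, hence se(lam_hat) = lam_hat / sqrt(d)."""
    d = sum(events)                      # number of observed events
    total = sum(times)                   # total time at risk
    lam = d / total
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    se = lam / math.sqrt(d)
    return lam, (lam - z * se, lam + z * se)

# simulate Exp(0.5) lifetimes with illustrative uniform censoring on (0, 4)
gen = random.Random(1)
obs, ev = [], []
for _ in range(2000):
    t = gen.expovariate(0.5)
    c = gen.uniform(0.0, 4.0)
    obs.append(min(t, c))
    ev.append(int(t <= c))
lam_hat, (lo, hi) = exp_rate_ci(obs, ev)
```

Repeating the simulation many times and recording how often (lo, hi) covers the true rate is exactly the kind of coverage study the abstract describes.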

10.
Non-mixture cure models (NMCMs) are derived from a simplified representation of the biological process that takes place after treatment for cancer. These models are intended to represent the time from the end of treatment to the time of first recurrence of cancer in studies when a proportion of those treated are completely cured. However, for many studies overall survival is also of interest. A two-stage NMCM that estimates the overall survival from a combination of two cure models, one from end of treatment to first recurrence and one from first recurrence to death, is proposed. The model is applied to two studies of Ewing's tumor in young patients. Caution needs to be exercised when extrapolating from cure models fitted to short follow-up times, but these data and associated simulations show how, when follow-up is limited, a two-stage model can give more stable estimates of the cure fraction than a one-stage model applied directly to overall survival.

11.
As the treatments of cancer progress, a certain number of cancers are curable if diagnosed early. In population‐based cancer survival studies, cure is said to occur when mortality rate of the cancer patients returns to the same level as that expected for the general cancer‐free population. The estimates of cure fraction are of interest to both cancer patients and health policy makers. Mixture cure models have been widely used because the model is easy to interpret by separating the patients into two distinct groups. Usually parametric models are assumed for the latent distribution for the uncured patients. The estimation of cure fraction from the mixture cure model may be sensitive to misspecification of latent distribution. We propose a Bayesian approach to mixture cure model for population‐based cancer survival data, which can be extended to county‐level cancer survival data. Instead of modeling the latent distribution by a fixed parametric distribution, we use a finite mixture of the union of the lognormal, loglogistic, and Weibull distributions. The parameters are estimated using the Markov chain Monte Carlo method. Simulation study shows that the Bayesian method using a finite mixture latent distribution provides robust inference of parameter estimates. The proposed Bayesian method is applied to relative survival data for colon cancer patients from the Surveillance, Epidemiology, and End Results (SEER) Program to estimate the cure fractions. The Canadian Journal of Statistics 40: 40–54; 2012 © 2012 Statistical Society of Canada

12.
Clustered survival data often arise in clinical trial design, where correlated subunits from the same cluster are randomized to different treatment groups. Under such a design, we consider the problem of constructing a confidence interval for the difference of two median survival times given the covariates. We use the Cox gamma frailty model to account for the within-cluster correlation. Based on the conditional confidence intervals, we can identify the range of covariates over which the two groups would provide different median survival times. The coverage probability and expected length of the proposed interval are investigated via a simulation study. The implementation of the confidence intervals is illustrated using a real data set.

13.
We propose a prior probability model for two distributions that are ordered according to a stochastic precedence constraint, a weaker restriction than the more commonly utilized stochastic order constraint. The modeling approach is based on structured Dirichlet process mixtures of normal distributions. Full inference for functionals of the stochastic precedence constrained mixture distributions is obtained through a Markov chain Monte Carlo posterior simulation method. A motivating application involves study of the discriminatory ability of continuous diagnostic tests in epidemiologic research. Here, stochastic precedence provides a natural restriction for the distributions of test scores corresponding to the non-infected and infected groups. Inference under the model is illustrated with data from a diagnostic test for Johne’s disease in dairy cattle. We also apply the methodology to the comparison of survival distributions associated with two distinct conditions, and illustrate with analysis of data on survival time after bone marrow transplantation for treatment of leukemia.
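The distinction the abstract draws between stochastic precedence and stochastic order can be made concrete for independent normal distributions, where P(X ≤ Y) has a closed form. The parameter values below are illustrative only.

```python
import math
from statistics import NormalDist

def prob_x_le_y(mu_x, sd_x, mu_y, sd_y):
    """P(X <= Y) for independent normals X and Y:
    X - Y ~ N(mu_x - mu_y, sd_x^2 + sd_y^2)."""
    return NormalDist().cdf((mu_y - mu_x) / math.hypot(sd_x, sd_y))

def stochastically_precedes(mu_x, sd_x, mu_y, sd_y):
    """Stochastic precedence X <=sp Y holds iff P(X <= Y) >= 1/2."""
    return prob_x_le_y(mu_x, sd_x, mu_y, sd_y) >= 0.5
```

For example, X ~ N(0, 3) and Y ~ N(0.5, 1) satisfy stochastic precedence even though their CDFs cross (the wider X has more mass in both tails), so they are not stochastically ordered; this is the sense in which precedence is the weaker restriction.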

14.
In this paper, we consider two well-known parametric long-term survival models, namely, the Bernoulli cure rate model and the promotion time (or Poisson) cure rate model. Assuming the long-term survival probability to depend on a set of risk factors, the main contribution is the development of the stochastic expectation maximization (SEM) algorithm to determine the maximum likelihood estimates of the model parameters. We carry out a detailed simulation study to demonstrate the performance of the proposed SEM algorithm, assuming the lifetimes due to each competing cause to follow a two-parameter generalized exponential distribution. We also compare the results obtained from the SEM algorithm with those obtained from the well-known expectation maximization (EM) algorithm. Furthermore, we investigate a simplified estimation procedure for both the SEM and EM algorithms that allows the objective function being maximized to split into simpler lower-dimensional functions of the model parameters. Moreover, we present examples where the EM algorithm fails to converge but the SEM algorithm still works. For illustrative purposes, we analyze breast cancer survival data. Finally, we use a graphical method to assess the goodness of fit of the model with generalized exponential lifetimes.
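The stochastic EM idea can be sketched for the simplest Bernoulli (mixture) cure model with exponential lifetimes for the uncured; the paper's generalized exponential lifetimes and covariate-dependent cure probabilities are omitted, and all names and parameter values are illustrative. In the S-step, each censored subject's cure status is imputed by a Bernoulli draw from its posterior probability; the M-step then maximizes the complete-data likelihood.

```python
import math
import random

def sem_cure_exponential(times, events, iters=400, burn=100, seed=0):
    """Stochastic EM for a Bernoulli mixture cure model with exponential
    lifetimes for the uncured. Returns averaged post-burn-in estimates of
    (P(uncured), exponential rate)."""
    rng = random.Random(seed)
    pi, lam = 0.5, 1.0                     # P(uncured), exponential rate
    draws = []
    for it in range(iters):
        uncured = []
        for t, d in zip(times, events):
            if d:                          # observed event => certainly uncured
                uncured.append(True)
            else:                          # censored: impute from posterior
                s = math.exp(-lam * t)     # P(uncured | survived past t)
                p = pi * s / (1.0 - pi + pi * s)
                uncured.append(rng.random() < p)
        pi = sum(uncured) / len(times)     # complete-data MLE of P(uncured)
        total_time = sum(t for t, u in zip(times, uncured) if u)
        lam = sum(events) / total_time     # events / time at risk among uncured
        if it >= burn:
            draws.append((pi, lam))
    n = len(draws)
    return sum(p for p, _ in draws) / n, sum(l for _, l in draws) / n

# simulated data: cure fraction 0.3, uncured lifetimes Exp(1),
# uniform censoring on (2, 6); cured subjects are always censored
gen = random.Random(42)
times, events = [], []
for _ in range(1500):
    c = gen.uniform(2.0, 6.0)
    if gen.random() < 0.3:
        times.append(c); events.append(0)
    else:
        t = gen.expovariate(1.0)
        times.append(min(t, c)); events.append(int(t < c))
pi_hat, lam_hat = sem_cure_exponential(times, events)
```

Averaging the post-burn-in draws smooths out the Monte Carlo noise of the S-step, which is what lets SEM keep moving when a deterministic EM iteration stalls.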

15.
Sample size calculation is a critical issue in clinical trials because a small sample size leads to biased inference and a large sample size increases cost. With the development of advanced medical technology, some patients can be cured of certain chronic diseases, and the proportional hazards mixture cure model has been developed to handle survival data with potential cure information. Given the needs of survival trials with potential cure proportions, a sample size formula based on the log-rank test statistic for binary covariates has been proposed by Wang et al. [25]. However, a sample size formula based on continuous variables has not been developed. Herein, we present sample size and power calculations for the mixture cure model with continuous variables based on the log-rank method and further modify them using Ewell's method. The proposed approaches are evaluated using simulation studies with synthetic data from exponential and Weibull distributions. A program for calculating the necessary sample size for continuous covariates in a mixture cure model is implemented in R.
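The log-rank sample size machinery this proposal builds on can be illustrated with the classical Schoenfeld events formula; the cure-model formula of Wang et al. and the Ewell modification are specific to the paper and not reproduced here. Converting events to subjects uses the overall event probability, which in a mixture cure model cannot exceed one minus the cure fraction. Function names are illustrative.

```python
import math
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Required number of events for a two-sided log-rank test to detect
    hazard ratio hr (Schoenfeld's formula); alloc is the allocation
    fraction of one group."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) ** 2 / (
        alloc * (1 - alloc) * math.log(hr) ** 2)

def subjects_needed(n_events, event_prob):
    """Convert required events into subjects; with a cure fraction,
    event_prob is bounded above by 1 - cure fraction."""
    return math.ceil(n_events / event_prob)
```

For a hazard ratio of 0.7 with 1:1 allocation, about 247 events are required; a cure fraction therefore inflates the subject count by shrinking the achievable event probability.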

16.
In this paper, we present a Bayesian analysis of double seasonal autoregressive moving average models. We first consider the problem of estimating unknown lagged errors in the moving average part using the nonlinear least squares method; then, using natural conjugate and Jeffreys' priors, we approximate the marginal posterior distributions by multivariate t and gamma distributions for the model coefficients and precision, respectively. We evaluate the proposed Bayesian methodology using a simulation study and apply it to real-world hourly electricity load data sets.

17.
In this paper, we investigate different procedures for testing the equality of two mean survival times in paired lifetime studies. We consider Owen’s M-test and Q-test, a likelihood ratio test, the paired t-test, the Wilcoxon signed rank test and a permutation test based on log-transformed survival times in the comparative study. We also consider the paired t-test, the Wilcoxon signed rank test and a permutation test based on original survival times for the sake of comparison. The size and power characteristics of these tests are studied by means of Monte Carlo simulations under a frailty Weibull model. For less skewed marginal distributions, the Wilcoxon signed rank test based on original survival times is found to be desirable. Otherwise, the M-test and the likelihood ratio test are the best choices in terms of power. In general, one can choose a test procedure based on information about the correlation between the two survival times and the skewness of the marginal survival distributions.
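A permutation test on log-transformed survival times of the kind compared here can be sketched as a sign-flip test on the paired log differences; this is a generic version under standard assumptions, not necessarily the exact procedure in the study, and the function name is illustrative.

```python
import math
import random

def paired_log_permutation_test(x, y, n_perm=5000, seed=0):
    """Two-sided sign-flip permutation test for paired samples, applied
    to log-transformed survival times. Under the null, each paired log
    difference is symmetric about zero, so its sign is exchangeable."""
    rng = random.Random(seed)
    d = [math.log(a) - math.log(b) for a, b in zip(x, y)]
    observed = abs(sum(d))
    hits = sum(
        abs(sum(v if rng.random() < 0.5 else -v for v in d)) >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)   # add-one correction avoids p = 0
```

With only n pairs there are 2^n distinct sign patterns, so for small n the test can also be run exactly by enumerating all patterns instead of sampling.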

18.
Summary.  The cure fraction (the proportion of patients who are cured of disease) is of interest to both patients and clinicians and is a useful measure to monitor trends in survival of curable disease. The paper extends the non-mixture and mixture cure fraction models to estimate the proportion cured of disease in population-based cancer studies by incorporating a finite mixture of two Weibull distributions to provide more flexibility in the shape of the estimated relative survival or excess mortality functions. The methods are illustrated by using public use data from England and Wales on survival following diagnosis of cancer of the colon where interest lies in differences between age and deprivation groups. We show that the finite mixture approach leads to improved model fit and estimates of the cure fraction that are closer to the empirical estimates. This is particularly so in the oldest age group where the cure fraction is notably lower. The cure fraction is broadly similar in each deprivation group, but the median survival of the 'uncured' is lower in the more deprived groups. The finite mixture approach overcomes some of the limitations of the more simplistic cure models and has the potential to model the complex excess hazard functions that are seen in real data.

19.
Abstract

We propose a cure rate survival model by assuming that the number of competing causes of the event of interest follows the negative binomial distribution and the time to the event of interest has the Birnbaum-Saunders distribution. Further, the new model includes as special cases some well-known cure rate models published recently. We consider a frequentist analysis for parameter estimation of the negative binomial Birnbaum-Saunders model with cure rate. Then, we derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes. We illustrate the usefulness of the proposed model in the analysis of a real data set from the medical area.

20.
In survival analysis, treatment effects are commonly evaluated based on survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one of the approaches utilized to adjust for confounding factors between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log‐rank test based on the generalized propensity score for survival data. This makes it possible to compare the group differences of IPTW Kaplan–Meier estimators of survival curves using an IPTW log‐rank test for multi‐valued treatments. As causal treatment effects, the hazard ratio can be estimated using the IPTW approach. If the treatments correspond to ordered levels of a treatment, the proposed method can be easily extended to the analysis of treatment effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results of the proposed method suggested that pravastatin treatment reduces the risk of CHD and that compliance to pravastatin treatment is important for the prevention of CHD. Copyright © 2009 John Wiley & Sons, Ltd.
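An IPTW Kaplan-Meier estimator of the kind compared here can be sketched as a weighted product-limit calculation; the weights w_i = 1/P(Z = z_i | X_i) would come from a fitted (generalized) propensity score model, which is assumed given, and the function name is illustrative.

```python
def iptw_kaplan_meier(times, events, weights):
    """Weighted Kaplan-Meier curve: at each distinct event time, multiply
    the survival estimate by 1 - (weighted events) / (weighted at risk).
    Returns the step function as a list of (time, survival) pairs."""
    data = sorted(zip(times, events, weights))
    at_risk = sum(weights)                 # weighted risk set at time 0
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d_w = removed = 0.0
        while i < len(data) and data[i][0] == t:
            _, e, w = data[i]
            if e:
                d_w += w                   # weighted events at time t
            removed += w                   # all subjects leaving the risk set
            i += 1
        if d_w > 0:
            surv *= 1.0 - d_w / at_risk
            curve.append((t, surv))
        at_risk -= removed
    return curve
```

With all weights equal to 1 this reduces to the ordinary Kaplan-Meier estimator; upweighting an under-represented treatment group is what removes the confounding in the weighted comparison.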
