Similar Articles (20 results)
1.
Graphical representation of survival curves is often used to illustrate associations between exposures and time-to-event outcomes. However, when exposures are time-dependent, calculation of survival probabilities is not straightforward. Our aim was to develop a method to estimate time-dependent survival probabilities and represent them graphically. Cox models with time-dependent indicators to represent state changes were fitted, and survival probabilities were plotted using pre-specified times of state changes. Time-varying hazard ratios for the state change were also explored. The method was applied to data from the Adult-to-Adult Living Donor Liver Transplantation Cohort Study (A2ALL). Survival curves showing a ‘split’ at a pre-specified time t allow for the qualitative comparison of survival probabilities between patients with similar baseline covariates who do and do not experience a state change at time t. Time-since-state-change interactions can be visually represented to reflect changing hazard ratios over time. A2ALL study results showed differences in survival probabilities among those who did not receive a transplant, received a living donor transplant, and received a deceased donor transplant. These graphical representations of survival curves with time-dependent indicators improve upon previous methods and allow for clinically meaningful interpretation.
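The ‘split’ survival curves described in this abstract can be illustrated with a minimal piecewise-exponential sketch (an assumed toy model, not the A2ALL analysis itself): a constant baseline hazard is multiplied by a hazard ratio after a pre-specified state-change time t, so the two curves coincide up to t and diverge afterwards. The hazard value and hazard ratio below are illustrative assumptions.

```python
import math

def survival_prob(t, base_hazard, change_time=None, hazard_ratio=1.0):
    """Survival probability S(t) under a constant baseline hazard, with the
    hazard multiplied by hazard_ratio after a state change (e.g. transplant)
    at change_time -- a piecewise-exponential sketch of a 'split' curve."""
    if change_time is None or t <= change_time:
        return math.exp(-base_hazard * t)
    # cumulative hazard up to the change, plus the post-change portion
    cum = base_hazard * change_time + base_hazard * hazard_ratio * (t - change_time)
    return math.exp(-cum)

# Curves 'split' at t = 2: identical before, diverging after (HR 0.5 is protective)
no_change = [survival_prob(t, 0.1) for t in (1, 2, 3, 4)]
with_change = [survival_prob(t, 0.1, change_time=2, hazard_ratio=0.5) for t in (1, 2, 3, 4)]
```

Plotting both lists against time reproduces the qualitative comparison the abstract describes: equal survival before the state change, higher survival afterwards when the hazard ratio is below one.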

2.
In an attempt to identify similarities between methods for estimating a mean function with different types of response or observation processes, we explore a general theoretical framework for nonparametric estimation of the mean function of a response process subject to incomplete observations. Special cases of the response process include quantitative responses and discrete state processes such as survival processes, counting processes and alternating binary processes. The incomplete data are assumed to arise from a general response-independent observation process, which includes right-censoring, interval censoring, periodic observation, and mixtures of these as special cases. We explore two criteria for defining nonparametric estimators, one based on the sample mean of available data and the other inspired by the construction of the Kaplan-Meier (or product-limit) estimator [J. Am. Statist. Assoc. 53 (1958) 457] for right-censored survival data. We show that under regularity conditions the estimated mean functions resulting from both criteria are consistent and converge weakly to Gaussian processes, and provide consistent estimators of their covariance functions. We then evaluate these general criteria for specific responses and observation processes, and show how they lead to familiar estimators for some response and observation processes and new estimators for others. We illustrate the latter with data from a recently completed AIDS clinical trial.

3.
There has been much recent work on Bayesian approaches to survival analysis, incorporating features such as flexible baseline hazards, time-dependent covariate effects, and random effects. Some of the proposed methods are quite complicated to implement, and we argue that as good or better results can be obtained via simpler methods. In particular, the normal approximation to the log-gamma distribution yields easy and efficient computational methods in the face of simple multivariate normal priors for baseline log-hazards and time-dependent covariate effects. While the basic method applies to piecewise-constant hazards and covariate effects, it is easy to apply importance sampling to consider smoother functions.

4.
Kaplan–Meier graphs of survival and other timed events are an extremely effective way to summarize outcome data from some types of clinical studies, but information on dates and the interaction among events is lost. A new mode for presenting such data is proposed, an Eventchart, that aids in looking at the study as a whole by simultaneously displaying the timing of 2–3 different types of events, including censoring and time-dependent covariate events. The chart helps to monitor accrual and can be used to “guesstimate” the amount of information resulting from additional enrollment and/or follow-up. Eventcharts are designed to supplement, not replace, quantitative methods of analysis or conventional survival graphs. Data from bone marrow transplantation and AIDS studies are used to illustrate the method. Terminology is suggested for describing the concepts and distinguishing among the complex of events studied in modern clinical trials, including timed events, loss and date censoring, and time-braided events.

5.
Current survival techniques do not provide a good method for handling clinical trials with a large percentage of censored observations. This research proposes using time-dependent surrogates of survival as outcome variables, in conjunction with observed survival time, to improve the precision in comparing the relative effects of two treatments on the distribution of survival time. This is in contrast to the standard method used today, which uses the marginal density of survival time, T, only, or the marginal density of a surrogate, X, only, therefore ignoring some available information. The surrogate measure, X, may be a fixed value or a time-dependent variable, X(t). X is a summary measure of some of the covariates measured throughout the trial that provide additional information on a subject's survival time. It is possible to model these time-dependent covariate values and relate the parameters in the model to the parameters in the distribution of T given X. The result is that three new models are available for the analysis of clinical trials. All three models use the joint density of survival time and a surrogate measure. Given one of three different assumed mechanisms of the potential treatment effect, each of the three methods improves the precision of the treatment estimate.

6.
This paper presents a Bayesian non-parametric approach to survival analysis based on arbitrarily right censored data. The analysis is based on posterior predictive probabilities using a Polya tree prior distribution on the space of probability measures on [0, ∞). In particular we show that the estimate generalizes the classical Kaplan–Meier non-parametric estimator, which is obtained in the limiting case as the weight of prior information tends to zero.
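The classical Kaplan–Meier product-limit estimator that this Bayesian estimate generalizes (in the limit of vanishing prior weight) can be sketched in a few lines. This is the standard textbook construction, not the Polya tree procedure of the paper:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of the survival function S(t).
    times: observed follow-up times; events: 1 = event, 0 = right-censored.
    Returns a list of (event_time, survival_probability) steps."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        # group all observations (events and censorings) at this time
        while i < len(data) and data[i][0] == t:
            at_t += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            s *= 1 - deaths / n_at_risk  # product-limit update
            curve.append((t, s))
        n_at_risk -= at_t
    return curve

# five subjects, two right-censored (at t = 2 and t = 4)
km = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
```

For this toy sample the estimate steps down to 0.8, 0.6, and 0.3 at the three event times; censored observations reduce the risk set without producing a step.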

7.
The rapid increase in the number of AIDS cases during the 1980s and the spread of the disease from the high-risk groups into the general population has created widespread concern. In particular, assessing the accuracy of the screening tests used to detect antibodies to the HIV (AIDS) virus in donated blood and determining the prevalence of the disease in the population are fundamental statistical problems. Because the prevalence of AIDS varies widely by geographic region and data on the number of infected blood donors are published regularly, Bayesian methods, which utilize prior results and update them as new data become available, are quite useful. In this paper we develop a Bayesian procedure for estimating the prevalence of a rare disease, the sensitivity and specificity of the screening tests, and the predictive value of a positive or negative screening test. We apply the procedure to data on blood donors in the United States and in Canada. Our results augment those described in Gastwirth (1987) using classical methods. Indeed, we show that the inclusion of sound prior knowledge into the statistical analysis does not yield sufficiently precise estimates of the predictive value of a positive test. Hence confirmatory testing is needed to obtain reliable estimates. The emphasis of the Bayesian predictive paradigm on prediction intervals for future data yields a valuable insight. We demonstrate that using them might have detected a decline in the specificity of the most frequently used screening test earlier than it apparently was detected.
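The core of the predictive-value calculation is Bayes' rule. A minimal sketch, using illustrative assumed values for prevalence, sensitivity, and specificity rather than the paper's blood-donor estimates, shows why a positive screen for a rare disease has low positive predictive value and thus why confirmatory testing is needed:

```python
def predictive_values(prevalence, sensitivity, specificity):
    """Positive and negative predictive values of a screening test
    via Bayes' rule on the four cell probabilities."""
    tp = sensitivity * prevalence            # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# rare disease: even a 99%-sensitive, 99%-specific test has modest PPV
ppv, npv = predictive_values(prevalence=0.001, sensitivity=0.99, specificity=0.99)
```

With these assumed inputs the PPV is under 10% even though the test is 99% specific, while the NPV is essentially one; this is the arithmetic behind the abstract's case for confirmatory testing.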

8.
It is one of the important issues in survival analysis to compare two hazard rate functions to evaluate a treatment effect. It is quite common that the two hazard rate functions cross each other at one or more unknown time points, representing temporal changes of the treatment effect. In certain applications, besides survival data, we also have related longitudinal data available regarding some time-dependent covariates. In such cases, a joint model that accommodates both types of data can allow us to infer the association between the survival and longitudinal data and to assess the treatment effect better. In this paper, we propose a modelling approach for comparing two crossing hazard rate functions by jointly modelling survival and longitudinal data. The parameters of the proposed joint model are estimated by maximum likelihood using the EM algorithm. Asymptotic properties of the maximum likelihood estimators are studied. To illustrate the virtues of the proposed method, we compare the performance of the proposed method with several existing methods in a simulation study. Our proposed method is also demonstrated using a real dataset obtained from an HIV clinical trial.

9.
This paper considers generalized partially linear models. We propose empirical likelihood-based statistics to construct confidence regions for the parametric and non-parametric components. The resulting statistics are shown to be asymptotically chi-square distributed. Finite-sample performance of the proposed statistics is assessed by simulation experiments. The proposed methods are applied to a data set from an AIDS clinical trial.

10.
CD4 and viral load play important roles in HIV/AIDS studies, and the study of their relationship has received much attention with well-known results. However, AIDS datasets are often highly complex in the sense that they typically contain outliers, measurement errors, and missing data. These data complications can greatly affect statistical analysis results, but much of the literature fails to address these issues in data analysis. In this paper, we revisit the important relationship between CD4 and viral load and propose methods which simultaneously address outliers, measurement errors, and missing data. We find that the strength of the relationship may be severely mis-estimated if measurement errors and outliers are ignored. The proposed methods are general and can be used in other settings, where jointly modelling several different types of longitudinal data is required in the presence of data complications.

11.
Cox regression is widely used to analyze discrete survival time data. Differential endpoint follow-up across sub-cohorts where the distribution of a covariate varies may cause typical estimators to be biased or inefficient. We demonstrate this with Cardiovascular Health Study data on incident type 2 diabetes. Two cohorts with extremely different race distributions have differential follow-up for fasting glucose levels. We study various scenarios of Cox regression. We suggest an alternative approach, Poisson generalized estimating equations with an offset to accommodate the differential follow-up. We use simulations to contrast the methods.

12.
We compare the commonly used two-step methods and the joint likelihood method for joint models of longitudinal and survival data via extensive simulations. The longitudinal models include LME, GLMM, and NLME models, and the survival models include Cox models and AFT models. We find that the full likelihood method outperforms the two-step methods for various joint models, but it can be computationally challenging when the dimension of the random effects in the longitudinal model is not small. We thus propose an approximate joint likelihood method which is computationally efficient. We find that the proposed approximation method performs well in the joint model context, and it performs better for more “continuous” longitudinal data. Finally, a real AIDS data example shows that patients with higher initial viral load or lower initial CD4 are more likely to drop out earlier during an anti-HIV treatment.

13.
In survival analysis, time-dependent covariates are usually present as longitudinal data collected periodically and measured with error. The longitudinal data can be assumed to follow a linear mixed effect model, and Cox regression models may be used for modelling of survival events. The hazard rate of survival times depends on the underlying time-dependent covariate measured with error, which may be described by random effects. Most existing methods proposed for such models assume a parametric distribution for the random effects and specify a normally distributed error term for the linear mixed effect model. These assumptions may not always be valid in practice. In this article, we propose a new likelihood method for Cox regression models with error-contaminated time-dependent covariates. The proposed method does not require any parametric distribution assumption on random effects and random errors. Asymptotic properties for parameter estimators are provided. Simulation results show that under certain situations the proposed methods are more efficient than the existing methods.

14.
Murray and Tsiatis (1996) described a weighted survival estimate that incorporates prognostic time-dependent covariate information to increase the efficiency of estimation. We propose a test statistic based on the statistic of Pepe and Fleming (1989, 1991) that incorporates these weighted survival estimates. As in Pepe and Fleming, the test is an integrated weighted difference of two estimated survival curves. This test has been shown to be effective at detecting survival differences in crossing hazards settings where the logrank test performs poorly. This method uses stratified longitudinal covariate information to get more precise estimates of the underlying survival curves when there is censored information, and this leads to more powerful tests. Another important feature of the test is that it remains valid when informative censoring is captured by the incorporated covariate. In this case, the Pepe-Fleming statistic is known to be biased and should not be used. These methods could be useful in clinical trials with heavy censoring that include collection over time of covariates, such as laboratory measurements, that are prognostic of subsequent survival or capture information related to censoring.

15.
Covariate measurement error problems have been extensively studied in the context of right-censored data but less so for interval-censored data. Motivated by the AIDS Clinical Trial Group 175 study, where the occurrence time of AIDS was examined only at intermittent clinic visits and the baseline covariate CD4 count was measured with error, we describe a semiparametric maximum likelihood method for analyzing mixed case interval-censored data with mismeasured covariates under the proportional hazards model. We show that the estimator of the regression coefficient is asymptotically normal and efficient and provide a very stable and efficient algorithm for computing the estimators. We evaluate the method through simulation studies and illustrate it with AIDS data.

16.
Imputation methods for missing data on a time-dependent variable within time-dependent Cox models are investigated in a simulation study. Quality of life (QoL) assessments were removed from the complete simulated datasets, which had positive relationships between QoL and disease-free survival (DFS) and between delayed chemotherapy and DFS, under missing at random (MAR) and missing not at random (MNAR) mechanisms. Standard imputation methods were applied before analysis. Method performance was influenced by the missing data mechanism, with one exception for simple imputation. The greatest bias occurred under MNAR and large effect sizes. It is important to carefully investigate the missing data mechanism.

17.
Quality adjusted survival has been increasingly advocated in clinical trials to be assessed as a synthesis of survival and quality of life. We investigate nonparametric estimation of its expectation for a general multistate process with incomplete follow-up data. Upon establishing a representation of expected quality adjusted survival through marginal distributions of a set of defined events, we propose two estimators for expected quality adjusted survival. Expressed as functions of Nelson-Aalen estimators, the two estimators are strongly consistent and asymptotically normal. We derive their asymptotic variances and propose sample-based variance estimates, along with evaluation of asymptotic relative efficiency. Monte Carlo studies show that these estimation procedures perform well for practical sample sizes. We illustrate the methods using data from a national, multicenter AIDS clinical trial.
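The quantity being estimated here, quality-adjusted survival, reduces for a single fully observed subject to utility-weighted time spent in each health state. A toy sketch with assumed durations and utilities (the paper's nonparametric estimators for censored multistate data are not reproduced):

```python
def quality_adjusted_survival(intervals):
    """Quality-adjusted survival for one fully observed subject:
    time in each health state weighted by that state's utility.
    intervals: list of (duration, utility) pairs, utilities in [0, 1]."""
    return sum(duration * utility for duration, utility in intervals)

# e.g. 6 months of toxicity (utility 0.5), 18 months disease-free (1.0),
# 4 months after relapse (0.3) -- all values illustrative assumptions
qas = quality_adjusted_survival([(6, 0.5), (18, 1.0), (4, 0.3)])
```

Here 28 months of calendar survival contribute 22.2 quality-adjusted months; the estimation problem the paper addresses is recovering the expectation of this quantity when the state path is incompletely observed.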

18.
We propose correcting for non-compliance in randomized trials by estimating the parameters of a class of semi-parametric failure time models, the rank preserving structural failure time models, using a class of rank estimators. These models are the structural or strong version of the “accelerated failure time model with time-dependent covariates” of Cox and Oakes (1984). In this paper we develop a large sample theory for these estimators, derive the optimal estimator within this class, and briefly consider the construction of “partially adaptive” estimators whose efficiency may approach that of the optimal estimator. We show that in the absence of censoring the optimal estimator attains the semiparametric efficiency bound for the model.

19.
This paper is motivated from a neurophysiological study of muscle fatigue, in which biomedical researchers are interested in understanding the time-dependent relationships of handgrip force and electromyography measures. A varying coefficient model is appealing here to investigate the dynamic pattern in the longitudinal data. The response variable in the study is continuous but bounded on the standard unit interval (0, 1) over time, while the longitudinal covariates are contaminated with measurement errors. We propose a generalization of varying coefficient models for the longitudinal proportional data with errors-in-covariates. We describe two estimation methods with penalized splines, which are formalized under a Bayesian inferential perspective. The first method is an adaptation of the popular regression calibration approach. The second method is based on a joint likelihood under the hierarchical Bayesian model. A simulation study is conducted to evaluate the efficacy of the proposed methods under different scenarios. The analysis of the neurophysiological data is presented to demonstrate the use of the methods.

20.
In this paper, the Gompertz model is extended to incorporate time-dependent covariates in the presence of interval-, right-, left-censored and uncensored data. Then, its performance at different sample sizes, study periods and attendance probabilities is studied. Following that, the model is compared to a fixed covariate model. Finally, two confidence interval estimation methods, Wald and likelihood ratio (LR), are explored and conclusions are drawn based on the results of the coverage probability study. The results indicate that bias, standard error and root mean square error values of the parameter estimates decrease with the increase in study period, attendance probability and sample size. Also, the LR method was found to work slightly better than the Wald method for the parameters of the model.
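A minimal sketch of the two ingredients named above, the Gompertz survival function and a Wald interval, assuming the standard parameterization S(t) = exp(-(b/c)(e^{ct} - 1)); the paper's time-dependent-covariate extension and the likelihood ratio interval are not reproduced:

```python
import math

def gompertz_survival(t, b, c):
    """Gompertz survival function S(t) = exp(-(b/c) * (exp(c*t) - 1)),
    with scale b > 0 and shape c != 0 (assumed parameterization)."""
    return math.exp(-(b / c) * (math.exp(c * t) - 1))

def wald_ci(estimate, se, z=1.96):
    """Wald confidence interval: estimate +/- z * SE (95% by default)."""
    return estimate - z * se, estimate + z * se

# survival declines over time for positive b, c (illustrative values)
s1, s2 = gompertz_survival(1.0, 0.1, 0.5), gompertz_survival(2.0, 0.1, 0.5)
# a Wald interval for an assumed parameter estimate 0.5 with SE 0.1
lo, hi = wald_ci(0.5, 0.1)
```

The LR interval the paper prefers is instead obtained by inverting the likelihood ratio test, which the coverage study found to perform slightly better than this symmetric Wald form.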
