Similar Literature
1.
Yu, Tingting; Wu, Lang; Gilbert, Peter. Lifetime Data Analysis 2019, 25(2): 229–258.

In HIV vaccine studies, longitudinal immune response biomarker data are often left-censored due to lower limits of quantification of the employed immunological assays. The censoring information is important for predicting HIV infection, the failure event of interest. We propose two approaches to addressing left censoring in longitudinal data: one that makes no distributional assumptions for the censored data, treating left-censored values as a "point mass" subgroup, and one that makes a distributional assumption for a subset of the censored data but not for the remaining subset. We develop these two approaches to handling censoring for joint modelling of longitudinal and survival data via a Cox proportional hazards model fit by h-likelihood. We evaluate the new methods via simulation and analyze an HIV vaccine trial data set, finding that longitudinal characteristics of the immune response biomarkers are highly associated with the risk of HIV infection.
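To make the censoring mechanism concrete, here is a minimal sketch in Python (not the paper's h-likelihood joint model) of maximum likelihood for a normal sample left-censored at a lower limit of quantification (LOQ), contrasted with the common ad hoc LOQ/2 substitution. All data and parameter values are simulated and illustrative.

```python
# Censored-normal MLE vs naive LOQ/2 substitution; a sketch, not the paper's method.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
loq = 1.0
x = rng.normal(loc=1.5, scale=1.0, size=500)     # latent biomarker values
censored = x < loq                               # below-LOQ indicator
y = np.where(censored, loq, x)                   # observed (left-censored) data

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll_obs = stats.norm.logpdf(y[~censored], mu, sigma)
    ll_cen = stats.norm.logcdf(loq, mu, sigma)   # each censored point contributes P(X < LOQ)
    return -(ll_obs.sum() + censored.sum() * ll_cen)

fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0])
mu_hat = fit.x[0]
naive_mu = np.where(censored, loq / 2, y).mean() # common ad hoc substitution
print(f"censored MLE mean: {mu_hat:.3f}, naive LOQ/2 mean: {naive_mu:.3f} (true 1.5)")
```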


2.
The shared-parameter model and its so-called hierarchical or random-effects extension are widely used joint modeling approaches for the combinations of longitudinal continuous, binary, count, missing, and survival outcomes that naturally occur in many clinical and other studies. A random effect is introduced and shared, or allowed to differ, between two or more repeated measures or longitudinal outcomes, thereby acting as a vehicle to capture association between the outcomes in these joint models. It is generally known that parameter estimates in a linear mixed model (LMM) for continuous repeated measures or longitudinal outcomes allow for a marginal interpretation, even though a hierarchical formulation is employed. This is not the case for the generalized linear mixed model (GLMM), that is, for non-Gaussian outcomes. The aforementioned joint models formulated for continuous and binary outcomes, or for two longitudinal binomial outcomes, using the LMM and GLMM will naturally have a marginal interpretation for parameters associated with the continuous outcome but a subject-specific interpretation for the fixed-effects parameters relating covariates to the binary outcomes. To derive marginally meaningful parameters for the binary models in a joint model, we adopt the marginal multilevel model (MMM) due to Heagerty [13] and Heagerty and Zeger [14] and formulate a joint MMM for two longitudinal responses. This enables one to (1) capture association between the two responses and (2) obtain parameter estimates that have a population-averaged interpretation for both outcomes. The model is applied to two data sets, and the results are compared with those obtained from existing approaches such as generalized estimating equations, the GLMM, and the model of Heagerty [13]. Estimates were found to be very close to those from separate analyses per outcome, but the joint model yields higher precision and allows for quantifying the association between outcomes. Parameters were estimated using maximum likelihood. The model is easy to fit using available tools such as the SAS NLMIXED procedure.
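A minimal sketch of the subject-specific vs marginal distinction at the heart of this abstract: integrating a normal random intercept out of a logistic model attenuates the slope, by roughly 1/sqrt(1 + 0.346 * sigma_b^2) under the well-known approximation. Parameter values are illustrative, not taken from the paper.

```python
# Conditional (GLMM) slope vs marginal (population-averaged) slope for a
# random-intercept logistic model; a numerical illustration only.
import numpy as np
from scipy import integrate, optimize
from scipy.special import expit

beta0, beta1, sigma_b = -1.0, 1.0, 2.0   # conditional (subject-specific) effects

def marginal_prob(x):
    # Average the subject-specific probability over the N(0, sigma_b^2) intercept.
    f = lambda b: expit(beta0 + beta1 * x + b) * \
        np.exp(-0.5 * (b / sigma_b) ** 2) / (sigma_b * np.sqrt(2 * np.pi))
    return integrate.quad(f, -10 * sigma_b, 10 * sigma_b)[0]

xs = np.linspace(-3, 3, 41)
ps = np.array([marginal_prob(x) for x in xs])

# Fit a plain logistic curve to the marginalised probabilities.
loss = lambda th: np.sum((expit(th[0] + th[1] * xs) - ps) ** 2)
th = optimize.minimize(loss, x0=[0.0, 0.5]).x
print(f"conditional slope: {beta1:.2f}, marginal slope: {th[1]:.2f}, "
      f"approximation: {beta1 / np.sqrt(1 + 0.346 * sigma_b**2):.2f}")
```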

3.
We compare the commonly used two-step methods and the joint likelihood method for joint models of longitudinal and survival data via extensive simulations. The longitudinal models include LME, GLMM, and NLME models, and the survival models include Cox models and AFT models. We find that the joint likelihood method outperforms the two-step methods for various joint models, but it can be computationally challenging when the dimension of the random effects in the longitudinal model is not small. We thus propose an approximate joint likelihood method which is computationally efficient. We find that the proposed approximation method performs well in the joint model context, and it performs better for more "continuous" longitudinal data. Finally, a real AIDS data example shows that patients with a higher initial viral load or a lower initial CD4 count are more likely to drop out earlier during anti-HIV treatment.
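A minimal sketch of the naive two-step method the paper evaluates (not the authors' approximate joint likelihood): step 1 fits a per-subject line to the longitudinal marker; step 2 feeds the estimated slope into a Cox model. It assumes pandas and lifelines are available; all data are simulated and the attenuation comment reflects classical measurement-error theory.

```python
# Two-step analysis: per-subject OLS slopes, then a Cox fit on those slopes.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n, times = 200, np.arange(5)
slopes = rng.normal(-0.5, 0.3, n)                      # true subject slopes
est_slopes = np.empty(n)
for i in range(n):                                     # step 1: per-subject OLS
    y = 10 + slopes[i] * times + rng.normal(0, 0.5, len(times))
    est_slopes[i] = np.polyfit(times, y, 1)[0]

haz = 0.05 * np.exp(-1.0 * slopes)                     # steeper decline -> higher risk
t_event = rng.exponential(1 / haz)
t_cens = rng.exponential(20, n)
df = pd.DataFrame({"T": np.minimum(t_event, t_cens),
                   "E": (t_event <= t_cens).astype(int),
                   "slope": est_slopes})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")  # step 2: Cox fit
cph.print_summary()  # slope coefficient is attenuated towards 0 by step-1 noise
```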

4.
In some clinical trials and epidemiologic studies, investigators are interested in knowing whether the variability of a biomarker is independently predictive of clinical outcomes. This question is often addressed via a naïve approach in which a sample-based estimate (e.g., the standard deviation) is calculated as a surrogate for the "true" variability and then used in regression models as a covariate assumed to be free of measurement error. However, it is well known that measurement error in covariates causes underestimation of the true association, and this underestimation can be substantial when the precision is low because of a limited number of measures per subject. The joint analysis of survival and longitudinal data enables one to account for measurement error in the longitudinal data and has received substantial attention in recent years. In this paper we propose a joint model to assess the predictive effect of biomarker variability. The joint model consists of two linked sub-models, a linear mixed model with patient-specific variance for the longitudinal data and a fully parametric Weibull distribution for the survival data, with the association between the two induced by a latent Gaussian process. Parameters in the joint model are estimated in a Bayesian framework implemented via Markov chain Monte Carlo (MCMC) methods in the WinBUGS software. The method is illustrated in the Ocular Hypertension Treatment Study to assess whether the variability of intraocular pressure is an independent risk factor for primary open-angle glaucoma. The performance of the method is also assessed by simulation studies.
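A minimal sketch of Bayesian estimation for a parametric Weibull survival sub-model via random-walk Metropolis. The paper runs full MCMC on the joint model in WinBUGS; this only illustrates the survival piece, with a simulated covariate standing in for a biomarker-variability summary and a flat prior assumed.

```python
# Random-walk Metropolis for a proportional-hazards Weibull model with censoring.
import numpy as np

rng = np.random.default_rng(2)
n, beta_true, shape = 300, 0.8, 1.5
x = rng.normal(size=n)
u = rng.uniform(size=n)
t = (-np.log(u) / np.exp(beta_true * x)) ** (1 / shape)  # Weibull event times
c = rng.exponential(2.0, n)
obs, d = np.minimum(t, c), (t <= c).astype(float)

def loglik(theta):
    beta, log_k = theta
    k = np.exp(log_k)
    lam = np.exp(beta * x)                  # S(t) = exp(-lam * t^k)
    log_h = np.log(k) + (k - 1) * np.log(obs) + np.log(lam)
    log_S = -lam * obs ** k
    return np.sum(d * log_h + log_S)        # flat prior -> posterior ∝ likelihood

chain, cur = [], np.array([0.0, 0.0])
cur_ll = loglik(cur)
for _ in range(5000):
    prop = cur + rng.normal(scale=0.1, size=2)
    prop_ll = loglik(prop)
    if np.log(rng.uniform()) < prop_ll - cur_ll:   # Metropolis accept/reject
        cur, cur_ll = prop, prop_ll
    chain.append(cur.copy())
post = np.array(chain)[2500:]               # discard burn-in
print("posterior mean of beta:", post[:, 0].mean(), "(true 0.8)")
```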

5.
This paper extends the analysis of the bivariate Seemingly Unrelated Regression (SUR) Tobit model by modeling its nonlinear dependence structure through the Clayton copula. The ability to capture/model the lower-tail dependence of the SUR Tobit model, where some data are censored (generally, left-censored at zero), is a useful feature of the Clayton copula. We propose a modified version of the (classical) Inference Function for Margins (IFM) method by Joe and Xu [H. Joe and J.J. Xu, The estimation method of inference functions for margins for multivariate models, Tech. Rep. 166, Department of Statistics, University of British Columbia, 1996], which we refer to as the Modified Inference Function for Margins (MIFM) method, to obtain the (point) estimates of the marginal and Clayton copula parameters. More specifically, we employ the (frequentist) data augmentation technique at the second stage of the IFM method (the first stage of the MIFM method is equivalent to the first stage of the IFM method) to generate the censored observations and then estimate the Clayton copula parameter. This process (data augmentation and copula parameter estimation) is repeated until convergence. Such modification at the second stage of the usual estimation method is justified in order to obtain continuous marginal distributions, which ensures the uniqueness of the resulting Clayton copula, as stated by Sklar's theorem [A. Sklar, Fonctions de répartition à n dimensions et leurs marges, Publ. de l'Institut de Statistique de l'Université de Paris 8 (1959), pp. 229–231], and also to provide an unbiased estimate of the association parameter (the IFM method provides a biased estimate of the Clayton copula parameter in the presence of censored observations in both margins). Since the usual asymptotic approach, that is, the computation of the asymptotic covariance matrix of the parameter estimates, is troublesome in this case, we also propose the use of resampling procedures (bootstrap methods, such as standard normal and percentile, by Efron and Tibshirani [B. Efron and R.J. Tibshirani, An Introduction to the Bootstrap, Chapman & Hall, New York, 1993]) to obtain confidence intervals for the model parameters.
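A minimal sketch of the Clayton copula property the paper exploits: its lower-tail dependence. Sampling uses the standard conditional-distribution method for C(u,v) = (u^-θ + v^-θ - 1)^(-1/θ); the parameter value is illustrative.

```python
# Sample from a Clayton copula and check its lower-tail dependence empirically.
import numpy as np

rng = np.random.default_rng(3)
theta = 2.0
u = rng.uniform(size=100_000)
w = rng.uniform(size=100_000)
# Conditional inverse: solve C(v | u) = w for v.
v = ((w ** (-theta / (1 + theta)) - 1) * u ** (-theta) + 1) ** (-1 / theta)

# P(V <= q | U <= q) for small q should approach the limit 2^(-1/theta).
for q in (0.10, 0.05, 0.01):
    emp = np.mean(v[u <= q] <= q)
    print(f"q={q:.2f}: empirical {emp:.3f}, theoretical limit {2 ** (-1 / theta):.3f}")
```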

6.
This article studies a general joint model for longitudinal measurements and competing risks survival data. The model consists of a linear mixed effects sub-model for the longitudinal outcome, a proportional cause-specific hazards frailty sub-model for the competing risks survival data, and a regression sub-model for the variance–covariance matrix of the multivariate latent random effects based on a modified Cholesky decomposition. The model provides a useful approach to adjusting for non-ignorable missing data due to dropout in the longitudinal outcome, enables analysis of the survival outcome with informative censoring and intermittently measured time-dependent covariates, and supports joint analysis of the longitudinal and survival outcomes. Unlike previously studied joint models, our model allows for heterogeneous random covariance matrices. It also offers a framework to assess the homogeneous-covariance assumption of existing joint models. A Bayesian MCMC procedure is developed for parameter estimation and inference. Its performance and frequentist properties are investigated using simulations. A real data example is used to illustrate the usefulness of the approach.
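A minimal sketch of the modified Cholesky decomposition used for the random-effects covariance matrix: Sigma = C D C', with C unit lower triangular (generalized autoregressive parameters) and D diagonal (innovation variances), so each factor can be modelled on covariates without positive-definiteness constraints. The matrix below is illustrative.

```python
# Modified Cholesky factorisation of a covariance matrix: Sigma = C D C'.
import numpy as np

sigma = np.array([[4.0, 2.0, 1.0],
                  [2.0, 3.0, 1.5],
                  [1.0, 1.5, 2.0]])

L = np.linalg.cholesky(sigma)   # ordinary Cholesky: sigma = L L'
d = np.diag(L) ** 2             # innovation variances
C = L / np.diag(L)              # rescale each column -> unit diagonal
assert np.allclose(C @ np.diag(d) @ C.T, sigma)
print("autoregressive coefficients (below the diagonal):\n", C)
print("innovation variances:", d)
```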

7.
It is an important issue in survival analysis to compare two hazard rate functions when evaluating a treatment effect. It is quite common for the two hazard rate functions to cross each other at one or more unknown time points, representing temporal changes of the treatment effect. In certain applications, besides survival data, we also have related longitudinal data available on some time-dependent covariates. In such cases, a joint model that accommodates both types of data allows us to infer the association between the survival and longitudinal data and to assess the treatment effect better. In this paper, we propose a modelling approach for comparing two crossing hazard rate functions by jointly modelling survival and longitudinal data. Maximum likelihood estimation of the parameters of the proposed joint model is carried out with the EM algorithm. Asymptotic properties of the maximum likelihood estimators are studied. To illustrate the virtues of the proposed method, we compare its performance with several existing methods in a simulation study. Our proposed method is also demonstrated using a real dataset obtained from an HIV clinical trial.
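A minimal sketch of the motivating problem: two Weibull groups whose hazard functions cross, where the early and late survival differences pull the log-rank statistic in opposite directions and reduce its power. It assumes lifelines is available; the shapes and scales are illustrative (chosen so the median survival times roughly coincide), and this is not the paper's joint-model test.

```python
# Crossing hazards: a standard log-rank test can miss a real group difference.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(4)
n = 300
tA = rng.weibull(0.7, n) * 1.00   # decreasing hazard (shape < 1)
tB = rng.weibull(1.8, n) * 0.72   # increasing hazard (shape > 1) -> hazards cross
cens = rng.exponential(3.0, 2 * n)
obsA, dA = np.minimum(tA, cens[:n]), (tA <= cens[:n]).astype(int)
obsB, dB = np.minimum(tB, cens[n:]), (tB <= cens[n:]).astype(int)

res = logrank_test(obsA, obsB, event_observed_A=dA, event_observed_B=dB)
print(f"log-rank p-value: {res.p_value:.3f}")  # power is reduced by cancellation
```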

8.
Stochastic ordering of survival functions is a useful concept in many areas of statistics, especially in nonparametric and order-restricted inference. In this paper we introduce an algorithm to compute maximum likelihood estimates of survival functions when both upper and lower bounds are given. The algorithm allows for censored survival data. In a simulation study, we found that the proposed estimates are more efficient than the unrestricted Kaplan–Meier product-limit estimates, both with and without censored observations.
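A minimal sketch of the setting: a hand-rolled Kaplan–Meier estimate clipped to given upper and lower bounds. Note that this naive projection is not the paper's constrained maximum likelihood algorithm; it only illustrates how bound restrictions alter the unrestricted product-limit estimate. Data and bounds are simulated.

```python
# Kaplan-Meier product-limit estimate, then a naive projection onto bounds.
import numpy as np

def kaplan_meier(t, d):
    """Product-limit estimate evaluated at each event time."""
    order = np.argsort(t)
    t, d = t[order], d[order]
    times, surv, s = [], [], 1.0
    n_at_risk = len(t)
    for ti, di in zip(t, d):
        if di:                      # event: multiply in the risk-set factor
            s *= 1 - 1 / n_at_risk
            times.append(ti); surv.append(s)
        n_at_risk -= 1
    return np.array(times), np.array(surv)

rng = np.random.default_rng(5)
t = rng.exponential(1.0, 100)
c = rng.exponential(2.0, 100)
obs, d = np.minimum(t, c), t <= c
times, km = kaplan_meier(obs, d)

lower, upper = np.exp(-1.3 * times), np.exp(-0.8 * times)  # assumed bounds
restricted = np.clip(km, lower, upper)
print("largest adjustment to the unrestricted estimate:",
      np.abs(restricted - km).max())
```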

9.
The main statistical problem in many epidemiological studies involving repeated measurements of surrogate markers is the frequent occurrence of missing data. Standard likelihood-based approaches like the linear random-effects model fail to give unbiased estimates when data are non-ignorably missing. In human immunodeficiency virus (HIV) type 1 infection, two markers that have been widely used to track progression of the disease are CD4 cell counts and HIV ribonucleic acid (RNA) viral load levels. Repeated measurements of these markers tend to be informatively censored, which is a special case of non-ignorable missingness. In such cases, we need methods that jointly model the observed data and the missingness process. Despite their high correlation, longitudinal data on these markers have mostly been analysed independently, using mainly random-effects models. Touloumi and co-workers have proposed the joint multivariate random-effects model, which combines a linear random-effects model for the underlying pattern of the marker with a log-normal survival model for the drop-out process. We extend the joint multivariate random-effects model to model the CD4 cell and viral load data simultaneously while adjusting for informative drop-outs due to disease progression or death. Estimates of all the model's parameters are obtained using the restricted iterative generalized least squares method or, in the case of censored survival data, a modified version that nests the EM algorithm, also taking into account non-linearity in the HIV-RNA trend. The proposed method is evaluated and compared with simpler approaches in a simulation study. Finally the method is applied to a subset of the data from the 'Concerted action on seroconversion to AIDS and death in Europe' study.

10.
Identifying cost-effective decisions that take into account both medical cost and health outcome is an important issue when resources are very limited. Analyzing medical costs is challenging owing to the skewness of cost distributions, heterogeneity across samples, and censoring. When censoring is due to administrative reasons, the total cost is related to the survival time, since longer survivors are more likely to be censored and their total cost censored as well. This paper uses a general linear model for longitudinal data to model repeated medical cost data, and a weighted estimating equation is used to obtain more accurate parameter estimates. Furthermore, the asymptotic properties of the proposed estimators are discussed. Simulations are used to evaluate the performance of the estimators under various scenarios. Finally, the proposed model is applied to data extracted from the National Health Insurance database for patients with colorectal cancer.
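A minimal sketch of inverse-probability-of-censoring weighting (IPCW), the standard device behind weighted estimating equations for censored costs: complete cases are up-weighted by 1/K(T), where K is the Kaplan–Meier estimate of the censoring survival function. This is a generic illustration on simulated data, not the paper's exact estimator.

```python
# IPCW mean cost vs the biased complete-case average.
import numpy as np

rng = np.random.default_rng(6)
n = 2000
t = rng.exponential(1.0, n)                 # survival times
cost = 50 * t + rng.normal(0, 5, n)         # lifetime cost grows with survival
c = rng.exponential(2.0, n)                 # administrative censoring
obs, d = np.minimum(t, c), t <= c

def censoring_survival(obs, d, at):
    """Kaplan-Meier estimate of the censoring survival function K, at 'at'."""
    order = np.argsort(obs)
    o, dd = obs[order], d[order]
    s, n_risk = 1.0, len(o)
    step_t, step_s = [0.0], [1.0]
    for ti, di in zip(o, dd):
        if not di:                          # a censoring acts as the "event" here
            s *= 1 - 1 / n_risk
            step_t.append(ti); step_s.append(s)
        n_risk -= 1
    idx = np.searchsorted(step_t, at, side="right") - 1
    return np.asarray(step_s)[idx]

K = censoring_survival(obs, d, obs[d])      # K(T_i) at each observed death
ipcw_mean = np.sum(cost[d] / K) / n         # weighted estimating equation for E[cost]
print(f"IPCW mean cost: {ipcw_mean:.1f}, "
      f"complete-case mean: {cost[d].mean():.1f}, true mean: 50.0")
```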

11.
In recent years, joint analysis of longitudinal measurements and survival data has received much attention. However, previous work has primarily focused on a single failure type for the event time. In this article, we consider joint modeling of repeated measurements and competing risks failure time data, allowing for more than one distinct failure type in the survival endpoint: we fit a cause-specific hazards sub-model, with a separate latent association between the longitudinal measurements and each cause of failure. Moreover, previous work has not addressed hypothesis testing of these separate latent associations. In this article, we derive a score test to identify longitudinal biomarkers or surrogates for a time-to-event outcome in competing risks data. With a carefully chosen definition of complete data, maximum likelihood estimation of the cause-specific hazard functions is performed via an EM algorithm. We extend this work to allow random effects in both the longitudinal biomarker and the underlying survival function: the random effects in the biomarker are introduced via an explicit term, while the random effect in the underlying survival function is introduced through a frailty.

We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects from the longitudinal biomarkers, allowing for heterogeneous baseline hazards across individuals, influence the power to detect an association between a longitudinal biomarker and the survival time.
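A minimal sketch of the cause-specific hazards idea in the survival sub-model: a Cox model is fitted for each cause, with failures from the other cause treated as censored. It assumes pandas and lifelines are available, uses simulated data, and does not implement the paper's joint score test.

```python
# Cause-specific Cox fits for simulated competing risks data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 1000
z = rng.normal(size=n)                              # stand-in biomarker summary
t1 = rng.exponential(1 / (0.3 * np.exp(0.8 * z)))   # cause 1, affected by z
t2 = rng.exponential(1 / 0.2, n)                    # cause 2, unrelated to z
c = rng.exponential(5.0, n)
obs = np.minimum.reduce([t1, t2, c])
cause = np.select([obs == t1, obs == t2], [1, 2], default=0)  # 0 = censored

df = pd.DataFrame({"T": obs, "z": z})
for k in (1, 2):
    df["E"] = (cause == k).astype(int)              # other cause counts as censoring
    fit = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    print(f"cause {k}: log hazard ratio for z = {fit.params_['z']:.2f}")
```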


12.
Many applications in public health, medical, and biomedical studies demand joint modelling of two or more longitudinal outcomes to get better insight into their joint evolution. In this regard, we specify a joint model for a continuous longitudinal sequence and a count sequence, the latter possibly overdispersed and zero-inflated (ZI), assembling aspects of each into a single model. A subject-specific random effect is included to account for the correlation in the continuous outcome. For the count outcome, clustering and overdispersion are accommodated through two distinct sets of random effects in a generalized linear model, as proposed by Molenberghs et al. [A family of generalized linear models for repeated measures with normal and conjugate random effects. Stat Sci. 2010;25:325–347]: one is normally distributed, the other conjugate to the outcome distribution. The association between the two sequences is captured by correlating the normal random effects describing the continuous and count outcome sequences, respectively. An excessive number of zero counts is often accounted for by a so-called ZI or hurdle model: ZI models mix either a Poisson or negative-binomial model with an atom at zero, while the hurdle model handles the zero observations and the positive counts separately. This paper proposes a general joint modelling framework in which all these features can appear together. We illustrate the proposed method with a case study and examine it further with simulations.
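A minimal sketch of the zero-inflated Poisson (ZIP) likelihood, the count-side building block the paper embeds in its joint framework (the full model adds normal and conjugate random effects, omitted here). Simulated data; parameter values are illustrative.

```python
# Maximum likelihood fit of a zero-inflated Poisson model.
import numpy as np
from scipy import optimize
from scipy.special import gammaln, expit

rng = np.random.default_rng(8)
n, pi_true, lam_true = 2000, 0.3, 2.5
is_zero = rng.uniform(size=n) < pi_true               # structural zeros
y = np.where(is_zero, 0, rng.poisson(lam_true, n))

def neg_loglik(theta):
    pi, lam = expit(theta[0]), np.exp(theta[1])       # keep parameters in range
    log_p0 = np.log(pi + (1 - pi) * np.exp(-lam))     # structural + Poisson zeros
    log_pk = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, log_p0, log_pk))

fit = optimize.minimize(neg_loglik, x0=[0.0, 0.0])
print(f"pi = {expit(fit.x[0]):.3f} (true {pi_true}), "
      f"lambda = {np.exp(fit.x[1]):.3f} (true {lam_true})")
```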

13.
This paper presents an alternative approach to modeling longitudinal data in which a lower detection limit (LOD) and unobserved population heterogeneity both exist. Longitudinal data on viral loads in HIV/AIDS studies, for instance, show strong positive skewness and left-censoring. Normalizing such data using a logarithmic transformation often proves unsuccessful. An alternative to such a transformation is a finite mixture model, which is suitable for analyzing data with skewed or multi-modal distributions. Little work has been done to take these features of longitudinal data into account simultaneously. This paper develops a growth mixture Tobit model that deals with an LOD and heterogeneity among growth trajectories. The proposed methods are illustrated using simulated and real data from an AIDS clinical study.
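A minimal sketch of the "mixture" half of the growth mixture Tobit model: the EM algorithm for a two-component Gaussian mixture (the paper additionally handles the detection limit with a Tobit likelihood, omitted here for brevity). Data are simulated.

```python
# EM for a two-component Gaussian mixture.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
y = np.concatenate([rng.normal(2, 1, 600), rng.normal(6, 1, 400)])

w, mu, sd = np.array([0.5, 0.5]), np.array([1.0, 7.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior component responsibilities for every observation.
    dens = w * stats.norm.pdf(y[:, None], mu, sd)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted updates of mixing weights, means, and SDs.
    nk = r.sum(axis=0)
    w, mu = nk / len(y), (r * y[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (y[:, None] - mu) ** 2).sum(axis=0) / nk)
print(f"weights {w.round(2)}, means {mu.round(2)}, SDs {sd.round(2)}")
```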

14.
The analysis of time-to-event data typically makes the censoring-at-random assumption, i.e., that, conditional on covariates in the model, the distribution of event times is the same whether they are observed or unobserved (i.e., right censored). When patients who remain in follow-up stay on their assigned treatment, analysis under this assumption broadly addresses the de jure, or "while on treatment strategy," estimand. In such cases, we may well wish to explore the robustness of our inference to more pragmatic, de facto or "treatment policy strategy," assumptions about the behaviour of patients post-censoring. This is particularly the case when censoring occurs because patients change, or revert, to the usual (i.e., reference) standard of care. Recent work has shown how such questions can be addressed for trials with continuous outcome data and longitudinal follow-up, using reference-based multiple imputation: for example, patients in the active arm may have their missing data imputed assuming they reverted to the control (i.e., reference) intervention on withdrawal. Reference-based imputation has two advantages: (a) it avoids the user having to specify numerous parameters describing the distribution of patients' post-withdrawal data, and (b) it is, to a good approximation, information anchored, so that the proportion of information lost due to missing data under the primary analysis is held constant across the sensitivity analyses. In this article, we build on recent work in the survival context, proposing a class of reference-based assumptions appropriate for time-to-event data. We report a simulation study exploring the extent to which the multiple imputation estimator (using Rubin's variance formula) is information anchored in this setting, and then illustrate the approach by reanalysing data from a randomized trial that compared medical therapy with angioplasty for patients presenting with angina.
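A minimal sketch of one reference-based assumption ("jump to reference") for time-to-event data under exponential hazards: an active-arm patient censored at time c has post-censoring residual time drawn from the control arm's hazard, exploiting the exponential's memorylessness. This is only an illustration of the idea; the paper proposes a whole class of such assumptions and pools analyses with Rubin's rules.

```python
# Jump-to-reference imputation of censored event times under exponential hazards.
import numpy as np

rng = np.random.default_rng(10)
lam_active, lam_control = 0.10, 0.25
n = 500
t = rng.exponential(1 / lam_active, n)       # active-arm event times
c = rng.uniform(0, 10, n)                    # censoring times
obs, event = np.minimum(t, c), t <= c

def impute_jump_to_reference(obs, event, lam_ref, rng):
    out = obs.copy()
    cens = ~event
    # Residual time after censoring drawn from the *reference* hazard.
    out[cens] = obs[cens] + rng.exponential(1 / lam_ref, cens.sum())
    return out

M = 20                                       # multiple imputations, then pool
pooled = np.mean([impute_jump_to_reference(obs, event, lam_control, rng).mean()
                  for _ in range(M)])
# The pooled mean sits below the all-active mean, reflecting the more
# pessimistic treatment-policy assumption about post-censoring behaviour.
print(f"all-active mean: {1 / lam_active:.1f}; jump-to-reference pooled mean: {pooled:.1f}")
```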

15.
The joint modeling of longitudinal and survival data has recently received extraordinary attention in the statistics literature, with models and methods becoming increasingly complex. Most of these approaches pair a proportional hazards survival model with longitudinal trajectory modeling through parametric or nonparametric specifications. In this paper we closely examine one data set previously analyzed using a two-parameter parametric model for Mediterranean fruit fly (medfly) egg-laying trajectories, paired with accelerated failure time and proportional hazards survival models. We consider parametric and nonparametric versions of these two models, as well as a proportional odds rate model, paired with a wide variety of longitudinal trajectory assumptions reflecting the types of analyses seen in the literature. In addition to developing novel nonparametric Bayesian methods for joint models, we emphasize the importance of model selection from among joint and non-joint models; the default in the literature is to omit non-joint models from consideration at the outset. For the medfly data, a predictive diagnostic criterion suggests that both the choice of survival model and the longitudinal assumptions can grossly affect model adequacy and prediction. Specifically for these data, the simple joint model used by Tseng et al. (Biometrika 92:587–603, 2005) and models with much more flexibility in their longitudinal components are predictively outperformed by simpler analyses. This case study underscores the need for data analysts to compare different joint models on the basis of predictive performance and to include non-joint models in the pool of candidates under consideration.

16.
A Bayesian approach to dynamic Tobit models
This paper develops a posterior simulation method for a dynamic Tobit model. The major obstacle in such a problem lies in the high-dimensional integrals, induced by dependence among censored observations, in the likelihood function. The primary contribution of this study is to develop a practical and efficient sampling scheme for the conditional posterior distributions of the censored (i.e., unobserved) data, so that the Gibbs sampler with the data augmentation algorithm can be applied successfully. The substantial differences between this approach and some existing methods are highlighted. The proposed simulation method is investigated by means of a Monte Carlo study and applied to a regression model of Japanese exports of passenger cars to the U.S. subject to a non-tariff trade barrier.
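A minimal sketch of Gibbs sampling with data augmentation for a *static* Tobit model (the paper's contribution is the harder dynamic case, where censored observations are dependent): at each sweep, the censored latent values are redrawn from a truncated normal, then the mean is updated from its conjugate conditional. The error variance is treated as known for brevity; data are simulated.

```python
# Data-augmented Gibbs sampler for a left-censored normal mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, mu_true, sigma = 500, 0.5, 1.0
y_star = rng.normal(mu_true, sigma, n)
cens = y_star <= 0                          # left-censoring at zero
y = np.where(cens, 0.0, y_star)

mu, draws = 0.0, []
for _ in range(3000):
    # Augmentation: latent values for censored cases ~ N(mu, sigma^2),
    # truncated to (-inf, 0].
    b = (0.0 - mu) / sigma
    z = stats.truncnorm.rvs(-np.inf, b, loc=mu, scale=sigma,
                            size=int(cens.sum()), random_state=rng)
    y_aug = y.copy()
    y_aug[cens] = z
    # Conjugate update for mu under a flat prior: N(mean(y_aug), sigma^2 / n).
    mu = rng.normal(y_aug.mean(), sigma / np.sqrt(n))
    draws.append(mu)

print("posterior mean of mu:", np.mean(draws[1000:]), "(true 0.5)")
```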

17.
It is well known that the classical Tobit estimator of the parameters of the censored regression (CR) model is inefficient in the case of non-normal error terms. In this paper, we propose to use the modified maximum likelihood (MML) estimator under the Jones and Faddy skew-t error distribution, which covers a wide range of skew and symmetric distributions, for the CR model. The MML estimators, which provide an alternative to the Tobit estimator, are explicitly expressed and are asymptotically equivalent to the maximum likelihood estimator. A simulation study is conducted to compare the efficiencies of the MML estimators with classical estimators such as the ordinary least squares, Tobit, censored least absolute deviations, and symmetrically trimmed least squares estimators. The results of the simulation study show that the MML estimators perform well relative to the others with respect to the root mean square error criterion for the CR model. A real-life example is also provided to show the suitability of the MML methodology.


18.
Practitioners very often face the selection of appropriate predictors for right-censored time-to-event data. We consider ℓ1-penalized regression, or the "least absolute shrinkage and selection operator" (lasso), as a tool for predictor selection in association with the accelerated failure time model. The choice of the penalty parameter λ is crucial to identifying the correct set of covariates. In this paper, we propose an information theory-based method to choose λ under a log-normal distribution, and we discuss an efficient algorithm in the same context. The performance of the proposed λ and the algorithm is illustrated through simulation studies and a real data analysis. The convergence of the algorithm is also discussed.
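A minimal sketch of information-criterion selection of the lasso penalty, in the spirit of the paper but on uncensored log-scale data (the paper's method additionally handles right censoring under the log-normal AFT). It assumes scikit-learn is available, uses BIC as an illustrative criterion, and approximates the lasso degrees of freedom by the number of non-zero coefficients.

```python
# Choose the lasso penalty lambda over a grid by minimising BIC.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(12)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([1.5, -1.0, 0.8] + [0.0] * (p - 3))   # only 3 true predictors
log_t = X @ beta + rng.normal(0, 0.5, n)              # log event times (AFT scale)

best = (np.inf, None)
for lam in np.logspace(-3, 0, 30):
    coef = Lasso(alpha=lam, fit_intercept=False).fit(X, log_t).coef_
    rss = np.sum((log_t - X @ coef) ** 2)
    df = np.count_nonzero(coef)                       # approximate lasso df
    bic = n * np.log(rss / n) + df * np.log(n)
    if bic < best[0]:
        best = (bic, lam)
print(f"BIC-selected lambda: {best[1]:.4f}")
```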

19.
Data censoring causes ordinary least-squares estimators of linear models to be biased and inconsistent. The Tobit estimator yields consistent estimates in the presence of data censoring if the errors are normally distributed; however, non-normality or heteroscedasticity renders the Tobit estimator inconsistent. Various estimators have been proposed for circumventing the normality assumption, including censored least absolute deviations (CLAD), symmetrically censored least squares (SCLS), and partially adaptive estimators. CLAD and SCLS remain consistent in the presence of heteroscedasticity; however, SCLS performs poorly in the presence of asymmetric errors. This article extends the partially adaptive estimation approach to accommodate possible heteroscedasticity as well as non-normality. A simulation study is used to investigate the estimators' relative performance in these settings. The partially adaptive censored regression estimators have little efficiency loss for censored normal errors and appear to outperform the Tobit and semiparametric estimators for non-normal error distributions, while being less sensitive to the presence of heteroscedasticity. An empirical example is considered, which supports these results.
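A minimal sketch of the CLAD estimator used as a comparator here: minimise sum |y_i - max(0, x_i'b)|, which estimates the conditional median and so stays consistent under skewed or heteroscedastic errors where the Tobit estimator fails. Nelder–Mead is used because the objective is non-smooth; the data are simulated with skewed, zero-median errors.

```python
# Censored least absolute deviations (CLAD) via direct search.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(13)
n = 1000
x = rng.normal(size=n)
eps = rng.exponential(1.0, n) - np.log(2)    # skewed errors with median zero
y_star = 0.5 + 1.0 * x + eps
y = np.maximum(y_star, 0.0)                  # left-censoring at zero

def clad_loss(b):
    # Median regression loss on the censored regression function max(0, x'b).
    return np.abs(y - np.maximum(b[0] + b[1] * x, 0.0)).sum()

fit = optimize.minimize(clad_loss, x0=[0.0, 0.0], method="Nelder-Mead")
print(f"CLAD estimates: intercept {fit.x[0]:.2f}, slope {fit.x[1]:.2f} "
      f"(true 0.5 and 1.0)")
```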
