Similar Articles

16 similar articles found.
1.
In this article, we discuss how to identify longitudinal biomarkers in survival analysis under the accelerated failure time model, and how to assess their effectiveness under that model. Two methods proposed by Schemper et al. are deployed to measure the efficacy of biomarkers. We use simulations to explore the factors that influence the power of a score test to detect the association between a longitudinal biomarker and the survival time; these factors include the functional form of the random effects from the longitudinal biomarkers, the number of individuals, and the number of time points per individual. Simulations are also used to explore how the number of individuals and the number of time points per individual influence the effectiveness of the biomarker for predicting survival at a given endpoint under the accelerated failure time model. We illustrate our methods using a prothrombin index as a predictor of survival in liver cirrhosis patients.

2.
We introduce a score test to identify longitudinal biomarkers or surrogates for a time-to-event outcome. This method is an extension of Henderson et al. (2000, Joint modelling of longitudinal measurements and event time data, Biostatistics 1(4): 465–480; 2002, Identification and efficacy of longitudinal markers for survival, Biostatistics 3(1): 33–50). In this article, a score test is based on a joint likelihood function which combines the likelihood functions of the longitudinal biomarkers and the survival times. Henderson et al. (2000, 2002) assumed that the same random effect appears in the longitudinal component and in the Cox model, from which they derived a score test to determine whether a longitudinal biomarker is associated with the time to an event. We extend this work: our score test is based on a joint likelihood function which allows additional random effects to be present in the survival function.

Considering heterogeneous baseline hazards across individuals, we use simulations to explore the factors that influence the power of a score test to detect the association between a longitudinal biomarker and the survival time. These factors include the functional form of the random effects from the longitudinal biomarkers, the number of individuals, and the number of time points per individual. We illustrate our method using a prothrombin index as a predictor of survival in liver cirrhosis patients.

3.
We propose a latent variable model for informative missingness in longitudinal studies, an extension of the latent dropout class model. In our model, the value of the latent variable is affected by the missingness pattern, and it is also used as a covariate in modeling the longitudinal response; the latent variable thus links the longitudinal response and the missingness process. Unlike the latent dropout class model, the latent variable here is continuous rather than categorical, and we assume that it follows a normal distribution. The EM algorithm is used to obtain estimates of the parameters of interest, and Gauss–Hermite quadrature is used to approximate the integral over the latent variable. The standard errors of the parameter estimates can be obtained from the bootstrap method or from the inverse of the Fisher information matrix of the final marginal likelihood. Comparisons are made to the mixed model and to complete-case analysis on a clinical trial dataset from the Weight Gain Prevention among Women (WGPW) study. We use generalized Pearson residuals to assess the fit of the proposed latent variable model.
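To make the quadrature step concrete, here is a minimal sketch of Gauss–Hermite quadrature for integrating over a standard normal latent variable, as used inside EM algorithms of this kind. The function `f` is a stand-in for the complete-data likelihood contribution; this is an illustration of the numerical technique, not the article's model.

```python
import numpy as np

def gauss_hermite_expectation(f, n_points=20):
    """Approximate E[f(b)] for b ~ N(0, 1) by Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_points)
    # Change of variables b = sqrt(2) * x turns the N(0,1) integral into
    # the exp(-x^2)-weighted form that hermgauss targets.
    b = np.sqrt(2.0) * nodes
    return np.sum(weights * f(b)) / np.sqrt(np.pi)

# Sanity check: E[b^2] = 1 for a standard normal.
print(gauss_hermite_expectation(lambda b: b**2))
```

With 20 nodes the rule is exact for polynomials up to degree 39, so low-order moments are recovered to machine precision.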

4.
In this article, we propose an estimation procedure for the parameters of a joint model when a relationship exists between cluster size and the clustered failure times of subunits within a cluster. We use a joint random effect model of clustered failure times and cluster size, in which the two submodels are connected by a common latent variable to capture the possible association. The EM algorithm is applied for the estimation of parameters in the models. Simulation studies are performed to assess the finite sample properties of the estimators, and sensitivity tests show the influence of misspecifying the random effect distributions. The methods are applied to a lymphatic filariasis study of adult worm nests.

5.
Accelerated failure time models are useful in survival data analysis, but such models have received little attention in the context of measurement error. In this paper we discuss an accelerated failure time model for bivariate survival data with covariates subject to measurement error. In particular, methods based on the marginal and joint models are considered. Consistency and efficiency of the resultant estimators are investigated. Simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring the measurement error of covariates. As an illustration we apply the proposed methods to analyze a data set arising from the Busselton Health Study (Knuiman, M. W., Cullen, K. J., Bulsara, M. K., Welborn, T. A., Hobbs, M. S. T. (1994). Mortality trends, 1965 to 1989, in Busselton, the site of repeated health surveys and interventions. Austral. J. Public Health 18: 129–135).

6.
The accelerated failure time (AFT) model is an important alternative to the Cox proportional hazards model (PHM) in survival analysis. For multivariate failure time data we propose to use frailties to explicitly account for possible correlations (and heterogeneity) among failure times. An EM-like algorithm analogous to that in the frailty model for the Cox model is adapted. Through simulation it is shown that its performance compares favorably with that of the marginal independence approach. For illustration we reanalyze a real data set.
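The correlation structure that motivates the frailty approach can be sketched by simulation. The snippet below generates bivariate failure times from an AFT model with a shared cluster-level frailty, log T_ij = beta * x_ij + b_i + eps_ij; the parameter values are assumptions for illustration, not the article's data-generating model. With both variance components equal, the within-cluster correlation of the log failure times is 0.5 in theory.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, beta, sigma_b, sigma_e = 2000, 0.5, 1.0, 1.0

b = rng.normal(0.0, sigma_b, size=n_clusters)         # shared frailty per cluster
x = rng.normal(size=(n_clusters, 2))                  # one covariate per subject
eps = rng.normal(0.0, sigma_e, size=(n_clusters, 2))  # subject-level errors
log_t = beta * x + b[:, None] + eps                   # AFT model on the log scale

# Remove the covariate effect; the leftover correlation is induced purely
# by the shared frailty: sigma_b^2 / (sigma_b^2 + sigma_e^2) = 0.5 here.
resid = log_t - beta * x
print(np.corrcoef(resid[:, 0], resid[:, 1])[0, 1])
```

Ignoring this dependence (the marginal independence approach) leaves the point estimates valid but can misstate their precision, which is what the simulation comparison in the abstract examines.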

7.
Demographic and Health Surveys collect child survival times that are clustered at the family and community levels. It is assumed that each cluster has a specific, unobservable, random frailty that induces an association in the survival times within the cluster. The Cox proportional hazards model, with family and community random frailties acting multiplicatively on the hazard rate, is presented. The estimation of the fixed effect and the association parameters of the modified model is then examined using the Gibbs sampler and the expectation–maximization (EM) algorithm. The methods are compared using child survival data collected in the 1992 Demographic and Health Survey of Malawi. The two methods lead to very similar estimates of fixed effect parameters. However, the estimates of random effect variances from the EM algorithm are smaller than those of the Gibbs sampler. Both estimation methods reveal considerable family variation in the survival of children, and very little variability over the communities.

8.
The gamma frailty model is a natural extension of the Cox proportional hazards model in survival analysis. Because the frailties are unobserved, an E-M approach is often used for estimation. Such an approach is shown to lead to finite sample underestimation of the frailty variance, with the corresponding regression parameters also being underestimated as a result. For the univariate case, we investigate the source of the bias with simulation studies and a complete enumeration. The rank-based E-M approach, we note, only identifies frailty through the order in which failures occur; additional frailty which is evident in the survival times is ignored, and as a result the frailty variance is underestimated. An adaptation of the standard E-M approach is suggested, whereby the non-parametric Breslow estimate is replaced by a local likelihood formulation for the baseline hazard which allows the survival times themselves to enter the model. Simulations demonstrate that this approach substantially reduces the bias, even at small sample sizes. The method developed is applied to survival data from the North West Regional Leukaemia Register.

9.
The linear mixed model assumes normality of its two sources of randomness: the random effects and the residual error. Recent research demonstrated that a simple transformation of the response targets normality of both sources simultaneously. However, estimating the transformation can lead to biased estimates of the variance components. Here, we provide guidance regarding this potential bias and propose a correction for it when such bias is substantial. This correction allows for accurate estimation of the random effects when using a transformation to achieve normality. The utility of this approach is demonstrated in a study of sleep-wake behavior in preterm infants.

10.
In this article, we consider a semivarying coefficient model with application to longitudinal data. In order to accommodate the within-group correlation, we apply the block empirical likelihood procedure to the semivarying coefficient model for longitudinal data, and prove a nonparametric version of Wilks' theorem which can be used to construct a block empirical likelihood confidence region with asymptotically correct coverage probability for the parametric component. In comparison with normal approximations, the proposed method does not require a consistent estimator for the asymptotic covariance matrix, making it easier to conduct inference for the model's parametric component. Simulations demonstrate how the proposed method works.

11.
There are many methods for analyzing longitudinal ordinal response data with random dropout. These include maximum likelihood (ML), weighted estimating equations (WEEs), and multiple imputation (MI). In this article, using a Markov model in which the effect of the previous response on the current response is modeled as an ordinal variable, the likelihood is partitioned to simplify the use of existing software. Simulated data, generated to represent a three-period longitudinal study with random dropout, are used to compare the performance of the ML, WEE, and MI methods in terms of standardized bias and coverage probabilities. These estimation methods are applied to a real medical data set.
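The partitioning exploits a standard fact worth making explicit: for a first-order Markov chain the likelihood factorizes over one-step transitions, so each transition probability can be estimated separately as an observed proportion. A minimal sketch (the three-category sequences are made up for illustration, not the article's data):

```python
import numpy as np

def transition_mle(sequences, n_states):
    """MLE of a first-order Markov transition matrix from observed sequences."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        # The likelihood factorizes over consecutive pairs, so counting
        # transitions is all that is needed.
        for prev, curr in zip(seq[:-1], seq[1:]):
            counts[prev, curr] += 1
    return counts / counts.sum(axis=1, keepdims=True)

P = transition_mle([[0, 1, 1, 2], [1, 2, 2, 0], [0, 0, 1, 2]], n_states=3)
print(P)
```

In the article's setting each row of the transition model is additionally structured through ordinal regression on covariates, but the same factorization is what lets standard software fit the pieces separately.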

12.
In Bayesian model selection or testing problems one cannot utilize standard or default noninformative priors, since these priors are typically improper and are defined only up to arbitrary constants. Therefore, Bayes factors and posterior probabilities are not well defined under these noninformative priors, making Bayesian model selection and testing impossible. We derive the intrinsic Bayes factor (IBF) of Berger and Pericchi (1996a, 1996b) for the commonly used models in reliability and survival analysis using an encompassing model. We also derive proper intrinsic priors for these models, whose Bayes factors are asymptotically equivalent to the respective IBFs. We demonstrate our results in three examples.

13.
Semiparametric regression models and covariance function estimation are very useful in longitudinal studies. Unfortunately, challenges arise in estimating the covariance function of longitudinal data collected at irregular time points. In this article, a partially linear model is introduced for the mean, and a modified Cholesky decomposition approach is proposed for the covariance structure to respect the positive-definiteness constraint. We estimate the regression function using the local linear technique and propose quasi-likelihood estimating equations for both the mean and covariance structures. Moreover, asymptotic normality of the resulting estimators is established. Finally, a simulation study and a real data analysis are used to illustrate the proposed approach.
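The modified Cholesky device can be sketched directly: a covariance matrix Sigma is reparameterized as T Sigma T' = D, where T is unit lower triangular (its sub-diagonal rows hold negated autoregressive coefficients) and D is diagonal (innovation variances). Because any such T and any positive D reproduce a positive-definite Sigma, the entries can be modeled without constraints. A minimal numpy sketch, with an arbitrary 3x3 matrix chosen for illustration:

```python
import numpy as np

def modified_cholesky(sigma):
    """Return unit lower triangular T and diagonal entries D with T @ sigma @ T.T = diag(D)."""
    p = sigma.shape[0]
    T = np.eye(p)
    D = np.zeros(p)
    D[0] = sigma[0, 0]
    for j in range(1, p):
        # Coefficients of the regression of component j on components 0..j-1.
        phi = np.linalg.solve(sigma[:j, :j], sigma[:j, j])
        T[j, :j] = -phi
        D[j] = sigma[j, j] - sigma[:j, j] @ phi  # innovation variance
    return T, D

sigma = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.6],
                  [0.3, 0.6, 1.0]])
T, D = modified_cholesky(sigma)
print(np.round(T @ sigma @ T.T, 6))  # diagonal, with entries D
```

With irregular observation times the regression coefficients and log innovation variances are then modeled as smooth functions of the time lags, which is where the article's estimating equations come in.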

14.
When making patient-specific predictions, it is important to compare prediction models to evaluate the gain in prediction accuracy from including additional covariates. We propose two statistical testing methods, the complete data permutation (CDP) and the permutation cross-validation (PCV), for comparing prediction models. We simulate clinical trial settings extensively and show that both methods are robust and achieve almost correct test sizes; the methods have comparable power in moderate to large sample situations, while the CDP is more efficient in computation. The methods are also applied to ovarian cancer clinical trial data.
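For intuition, here is a generic paired permutation comparison of two models' prediction errors (a hedged sketch of the permutation idea, not the authors' exact CDP or PCV procedures, and the error data are fabricated): each pair's model labels are randomly swapped to build the null distribution of the mean error difference.

```python
import numpy as np

rng = np.random.default_rng(2)

def paired_permutation_pvalue(err_a, err_b, n_perm=2000):
    """Two-sided permutation p-value for H0: equal mean prediction error."""
    diff = err_a - err_b
    observed = diff.mean()
    # Under H0 the pairing is exchangeable, so flipping each pair's sign
    # generates draws from the null distribution.
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diff.size))
    null = (signs * diff).mean(axis=1)
    return np.mean(np.abs(null) >= abs(observed))

# Fabricated squared errors: model B is clearly better than model A.
err_a = rng.normal(1.0, 1.0, size=200) ** 2
err_b = rng.normal(0.0, 1.0, size=200) ** 2
pval = paired_permutation_pvalue(err_a, err_b)
print(pval)
```

The PCV variant additionally re-runs cross-validation within each permutation, which is why the abstract notes that CDP is the cheaper of the two.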

15.
A common occurrence in clinical trials with a survival end point is missing covariate data. With ignorably missing covariate data, Lipsitz and Ibrahim proposed a set of estimating equations to estimate the parameters of Cox's proportional hazards model. They proposed to obtain parameter estimates via a Monte Carlo EM algorithm. We extend those results to non-ignorably missing covariate data. We present a clinical trials example with three partially observed laboratory markers which are used as covariates to predict survival.

16.
We revisit the problem of estimating the proportion π of true null hypotheses when a large number of parallel hypothesis tests are performed independently. While the proportion is a quantity of interest in its own right in applications, the problem has arisen in assessing or controlling an overall false discovery rate. On the basis of a Bayes interpretation of the problem, the marginal distribution of the p-value is modeled as a mixture of the uniform distribution (null) and a non-uniform distribution (alternative), so that the parameter π of interest is characterized as the mixing proportion of the uniform component in the mixture. In this article, a nonparametric exponential mixture model is proposed to fit the p-values. As an alternative to the convex decreasing mixture model, the exponential mixture model has the advantages of identifiability, flexibility, and regularity. A computation algorithm is developed. The new approach is applied to a leukemia gene expression data set where multiple significance tests over 3,051 genes are performed. The new estimate for π with the leukemia gene expression data appears to be about 10% lower than the other three estimates that are known to be conservative. Simulation results also show that the new estimate is usually lower and has smaller bias than the other three estimates.
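For context, the simplest member of the conservative family the article compares against is the tail-based (Storey-type) estimator, not the exponential-mixture estimator itself: p-values above a threshold lambda are mostly null, so pi_hat = #{p_i > lambda} / ((1 - lambda) * m). A sketch on simulated p-values (the mixture settings below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

m, pi0, lam = 3051, 0.7, 0.5
n_null = int(pi0 * m)
p_null = rng.uniform(size=n_null)            # uniform p-values under the null
p_alt = rng.beta(0.2, 3.0, size=m - n_null)  # alternatives concentrate near 0
p = np.concatenate([p_null, p_alt])

# Tail-based estimate: above lam, nearly all p-values come from the
# uniform null component of the mixture.
pi0_hat = np.mean(p > lam) / (1 - lam)
print(pi0_hat)
```

The slight upward bias visible here (alternatives that leak past lambda inflate the count) is exactly the conservatism that a fitted mixture model, such as the article's exponential mixture, tries to remove.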
