Similar Articles (20 results)
1.
Joint models for longitudinal and time-to-event data have recently received considerable attention in clinical and epidemiologic studies. Our interest is in modeling the relationship between event-time outcomes and internal time-dependent covariates. In practice, longitudinal responses often follow nonlinear, fluctuating curves. The main aim of this paper is therefore to use penalized splines with a truncated polynomial basis to parameterize the nonlinear longitudinal process. A linear mixed-effects model is then applied to obtain subject-specific curves and to control the smoothing. The association between the dropout process and the longitudinal outcomes is modeled through a proportional hazards model. Two types of baseline risk function are considered, namely a Gompertz distribution and a piecewise-constant model. The resulting models, referred to as penalized spline joint models, extend the standard joint models. The expectation conditional maximization (ECM) algorithm is applied to estimate the parameters of the proposed models. To validate the proposed algorithm, extensive simulation studies were carried out, followed by a case study. In summary, penalized spline joint models provide a new approach that improves on the existing standard joint models.
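The truncated polynomial basis mentioned above is straightforward to construct. The following sketch is illustrative only (the knot placement and degree are arbitrary choices, not those of the paper); it builds the design matrix whose truncated-term coefficients would receive the ridge penalty, equivalently treated as random effects in the linear mixed-model formulation:

```python
import numpy as np

def truncated_poly_basis(t, knots, degree=2):
    """Design matrix for a penalized spline with a truncated polynomial
    basis: [1, t, ..., t^p, (t-k1)_+^p, ..., (t-kK)_+^p]."""
    t = np.asarray(t, dtype=float)
    cols = [t ** d for d in range(degree + 1)]                 # polynomial part
    cols += [np.maximum(t - k, 0.0) ** degree for k in knots]  # truncated part
    return np.column_stack(cols)

# Example: 5 interior knots on [0, 10]
times = np.linspace(0, 10, 50)
X = truncated_poly_basis(times, knots=np.linspace(1, 9, 5))
```

Penalizing only the truncated-term columns is what makes the fit behave like a smoother whose roughness is controlled by a variance ratio in the mixed-model representation.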

2.
We propose a semiparametric approach based on proportional hazards and copula methods to jointly model longitudinal outcomes and the time to an event. The dependence of the longitudinal outcomes on the covariates is modeled by a copula-based time series, which allows non-Gaussian random effects and overcomes the limitations of the parametric assumptions in existing linear and nonlinear random-effects models. A modified partial-likelihood method using estimated covariates at failure times is employed to draw statistical inference. The proposed model and method are applied to a set of progression-to-AIDS data in a study of the association between human immunodeficiency virus viral dynamics and the time trend in the CD4/CD8 ratio, measured with error. Simulations are also reported to evaluate the proposed model and method.
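A copula-based time series of the kind described can be sketched as follows. This is a hypothetical illustration with an AR(1) latent Gaussian process and exponential margins; the paper's actual copula family and marginal distributions may differ:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def gaussian_copula_ar1_exponential(n, rho, rate=1.0):
    """Serially dependent outcomes: a latent AR(1) Gaussian series is
    pushed through the normal CDF to uniforms, then through the
    exponential quantile function, giving non-Gaussian (exponential)
    margins with Gaussian-copula serial dependence."""
    z = np.empty(n)
    z[0] = rng.standard_normal()
    for t in range(1, n):
        z[t] = rho * z[t - 1] + math.sqrt(1 - rho**2) * rng.standard_normal()
    u = np.array([0.5 * (1 + math.erf(v / math.sqrt(2))) for v in z])  # Phi(z)
    return -np.log(1 - u) / rate  # exponential inverse CDF

y = gaussian_copula_ar1_exponential(500, rho=0.7)
```

The copula separates the serial-dependence structure (the latent AR(1) correlation) from the marginal distribution, which is what frees the model from Gaussian random-effects assumptions.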

3.
Longitudinal studies often entail non-Gaussian primary responses. When dropout occurs, the missingness process may be non-ignorable, and a joint model for the primary response and a time-to-event is an appealing tool for accounting for the dependence between the two processes. As an extension of the recently proposed GLMJM, which is based on Gaussian latent effects, we assume that the random effects follow a smooth, P-spline-based density. To estimate the model parameters, we adopt a two-step conditional Newton–Raphson algorithm. Since maximizing the penalized log-likelihood requires numerical integration over the random effects, which is often cumbersome, we opt for a pseudo-adaptive Gaussian quadrature rule to approximate the model likelihood. We illustrate the proposed model by analyzing an original dataset on dilated cardiomyopathies and through a simulation study.

4.
A model to accommodate time-to-event ordinal outcomes was proposed by Berridge and Whitehead. Very few studies have adopted this approach, despite its appeal in incorporating several ordered categories of event outcome. More recently, there has been increased interest in using recurrent events to analyze practical endpoints in the study of disease history and to help quantify the changing pattern of disease over time. For example, in studies of heart failure, the analysis of a single fatal event no longer provides sufficient clinical information to manage the disease. Similarly, the grade, frequency, and severity of adverse events may be more important than prolonged survival alone in studies of toxic therapies in oncology. We propose an extension of the ordinal time-to-event model that allows for multiple or recurrent events, in both marginal models (all subjects are at risk for each recurrence, irrespective of whether they have experienced previous recurrences) and conditional models (subjects are at risk of a recurrence only if they have experienced a previous recurrence). These models rely on marginal and conditional estimates of the instantaneous baseline hazard and provide estimates of the probabilities of an event of each severity for each recurrence over time. We outline how confidence intervals for these probabilities can be constructed, illustrate how to fit the models, and provide examples of the methods together with an interpretation of the results.
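The marginal/conditional distinction above concerns who is in the risk set for each recurrence number. A toy sketch (hypothetical data; a real fit would also condition on the recurrence times, not just counts):

```python
def risk_sets(event_counts, max_rec):
    """Subjects at risk for each recurrence number under the two schemes.
    event_counts[i] = number of recurrences subject i experienced."""
    n = len(event_counts)
    # Marginal: every subject is at risk for every recurrence number.
    marginal = {k: list(range(n)) for k in range(1, max_rec + 1)}
    # Conditional: at risk for recurrence k only after experiencing k-1.
    conditional = {k: [i for i, c in enumerate(event_counts) if c >= k - 1]
                   for k in range(1, max_rec + 1)}
    return marginal, conditional

marg, cond = risk_sets([0, 1, 2, 3], max_rec=3)
```

Under the conditional scheme the risk sets shrink with the recurrence number, which is why conditional baseline-hazard estimates for later recurrences rest on less data.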

5.

Joint models for longitudinal and survival data have gained a lot of attention in recent years, with the development of myriad extensions to the basic model, including those which allow for multivariate longitudinal data, competing risks and recurrent events. Several software packages are now also available for their implementation. Although mathematically straightforward, the inclusion of multiple longitudinal outcomes in the joint model remains computationally difficult due to the large number of random effects required, which hampers the practical application of this extension. We present a novel approach that enables such models to be fitted in more realistic computational times. The idea is to split the estimation of the joint model into two steps: estimating a multivariate mixed model for the longitudinal outcomes, and then using the output from this model to fit the survival submodel. So-called two-stage approaches have previously been proposed and shown to be biased. Our approach differs from the standard version in that we additionally apply a correction factor, based on importance sampling ideas, adjusting the estimates so that they more closely resemble those we would expect from the full multivariate joint model. Simulation studies show that this corrected two-stage approach works satisfactorily, eliminating the bias while maintaining a substantial improvement in computational time, even in more difficult settings.


6.
We propose a robust estimation procedure for the analysis of longitudinal data that includes a hidden process to account for unobserved heterogeneity between subjects in a dynamic fashion. We show how to perform estimation via an expectation–maximization-type algorithm, as is common in the hidden Markov regression literature. The proposed robust approaches work comparably to the maximum-likelihood estimator when there are no outliers and the error is normal, and outperform it when there are outliers or the error is heavy-tailed. A real-data application illustrates our proposal, and we also provide a simple criterion for choosing the number of hidden states.

7.
When constructing models to summarize clinical data for use in simulations, it is good practice to evaluate the models for their capacity to reproduce the data. This can be done by means of visual predictive checks (VPCs), which consist of several reproductions of the original study by simulation from the model under evaluation, calculating estimates of interest for each simulated study and comparing the distribution of those estimates with the estimate from the original study. This generic procedure is, in general, straightforward to apply. Here we consider its application to time-to-event data, in the special case where a time-varying covariate is unknown or cannot be approximated after the event time. In this case, simulations cannot be conducted beyond the end of the follow-up time (event or censoring time) in the original study, so the simulations must be censored at the end of follow-up. Since this censoring is not random, the standard Kaplan–Meier (KM) estimates from the simulated studies, and the resulting VPC, will be biased. We propose the inverse probability of censoring weighting (IPoC) method to correct the KM estimator for the simulated studies and obtain unbiased VPCs. In the analysis of the CANTOS study, IPoC weighting as described here proved valuable and enabled the generation of VPCs to qualify PKPD models for simulations. Here, we use a generated data set, which allows illustration of the different situations and evaluation against the known truth.
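The correction requires a Kaplan–Meier estimator that accepts subject weights. A minimal sketch (illustrative only; with unit weights it reduces to the ordinary KM estimator, and the IPoC weights themselves would come from a separate model of the censoring process):

```python
import numpy as np

def weighted_km(times, events, weights):
    """Kaplan-Meier survival curve with subject weights (e.g. inverse
    probability of censoring weights). Returns (time, survival) pairs
    at each distinct event time."""
    order = np.argsort(times)
    t, d, w = times[order], events[order], weights[order]
    surv, s = [], 1.0
    at_risk = w.sum()
    i = 0
    while i < len(t):
        j = i
        dead = 0.0
        while j < len(t) and t[j] == t[i]:   # handle tied times
            if d[j]:
                dead += w[j]
            j += 1
        if dead > 0:
            s *= 1.0 - dead / at_risk        # weighted KM step
            surv.append((t[i], s))
        at_risk -= w[i:j].sum()
        i = j
    return surv

times = np.array([2.0, 3.0, 3.0, 5.0, 8.0])
events = np.array([1, 1, 0, 1, 0])
curve = weighted_km(times, events, np.ones(5))
```

With the unit weights shown, the curve steps through 0.8, 0.6, and 0.3 at times 2, 3, and 5, exactly as an unweighted KM fit would.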

8.
The elderly population in the USA is expected to double in size by the year 2025, making longitudinal health studies of this population increasingly important. The degree of loss to follow-up in studies of the elderly, often because elderly people cannot remain in the study, enter a nursing home or die, makes longitudinal studies of this population problematic. We propose a latent class model for analysing multiple longitudinal binary health outcomes with multiple-cause non-response when the data are missing at random and a non-likelihood-based analysis is performed. We extend the estimating equations approach of Robins and co-workers to latent class models by reweighting the multiple binary longitudinal outcomes by the inverse probability of being observed. This yields consistent parameter estimates when the probability of non-response depends on observed outcomes and covariates (missing at random), assuming that the model for non-response is correctly specified. We extend the non-response model so that institutionalization, death and missingness due to failure to locate, refusal or incomplete data each have their own set of non-response probabilities. Robust variance estimates are derived which account for the use of a possibly misspecified covariance matrix, estimation of the missing-data weights and estimation of the latent class measurement parameters. The approach is then applied to a study of lower body function among a subsample of the elderly participating in the 6-year Longitudinal Study of Aging.
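The reweighting idea, estimating each subject's probability of remaining observed and weighting by its inverse, can be sketched as follows. In practice the visit-wise observation probabilities would come from a fitted non-response model (e.g. per-visit logistic regressions, with separate models per cause of non-response); here they are supplied directly:

```python
import numpy as np

def ipw_weights(p_observed):
    """p_observed[i, t] = estimated probability that subject i is still
    observed at visit t, given the observed past. The weight at visit t
    is the inverse of the cumulative product up to t."""
    cum = np.cumprod(p_observed, axis=1)
    return 1.0 / cum

# Two subjects, two visits: subject 1 had a 50% chance of staying in
# at visit 2, so its visit-2 contribution is doubled.
p = np.array([[0.9, 0.8],
              [1.0, 0.5]])
w = ipw_weights(p)
```

Subjects who were unlikely to stay observed, yet did, stand in for similar subjects who dropped out, which is what restores consistency under missingness at random.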

9.
Joint models for longitudinal and time-to-event data have been applied in many different fields of statistics and clinical studies. However, the main difficulty these models face is computational: the requirement for numerical integration becomes severe as the dimension of the random effects increases. In this paper, a modified two-stage approach is proposed to estimate the parameters in joint models. In the first stage, linear mixed-effects models and best linear unbiased predictors are applied to estimate the parameters of the longitudinal submodel. In the second stage, an approximation of the full joint log-likelihood is constructed using the estimated values of these parameters, and the survival parameters are estimated by maximizing this approximation. Simulation studies show that the approach performs well, especially as the dimension of the random effects increases. Finally, we apply the approach to AIDS data.
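For the simplest longitudinal submodel, a random-intercept model, the first-stage best linear unbiased predictors have a closed form: each subject mean is shrunk toward the overall mean. A minimal sketch under that simplifying assumption (the paper's submodel is a general linear mixed-effects model, so this is illustration, not their algorithm):

```python
import numpy as np

def blup_intercepts(y_by_subject, mu, var_b, var_e):
    """Closed-form BLUPs for the random-intercept model
    y_ij = mu + b_i + e_ij, with b_i ~ N(0, var_b), e_ij ~ N(0, var_e).
    Each subject mean is shrunk toward mu by var_b/(var_b + var_e/n_i)."""
    blups = []
    for y in y_by_subject:
        y = np.asarray(y, dtype=float)
        shrink = var_b / (var_b + var_e / len(y))
        blups.append(shrink * (y.mean() - mu))
    return np.array(blups)

b = blup_intercepts([[1.0, 3.0], [5.0]], mu=2.0, var_b=1.0, var_e=1.0)
```

In a two-stage fit, these predicted subject-specific effects would then be plugged into the survival submodel's likelihood in place of the unknown random effects.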

10.
A random-effects model for analyzing mixed longitudinal normal and count outcomes, with and without the possibility of non-ignorable missing outcomes, is presented. The count response is inflated at two points (k and l), and the (k, l)-hurdle power series is used as its distribution. The new distribution contains several important distributions as special submodels, such as the (k, l)-hurdle Poisson, (k, l)-hurdle negative binomial and (k, l)-hurdle binomial distributions, among others. Random effects are used to account for the correlation between the longitudinal outcomes and the inflation parameters. A full likelihood-based approach is used to obtain maximum likelihood estimates of the model parameters. A simulation study is performed in which the (k, l)-hurdle Poisson, (k, l)-hurdle negative binomial and (k, l)-hurdle binomial distributions are considered for the count outcome. To illustrate the application of such modelling, longitudinal data on body mass index and the number of damaged joints are analyzed.
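The (k, l)-hurdle construction places point masses at the two inflated values and renormalizes a count distribution over the remaining support. A sketch for the Poisson member of the family (the notation pi_k, pi_l for the inflation probabilities is ours, introduced for illustration):

```python
import math

def hurdle_kl_poisson_pmf(y, k, l, pi_k, pi_l, lam):
    """(k, l)-hurdle Poisson: probability pi_k at y = k, pi_l at y = l,
    and the remaining mass 1 - pi_k - pi_l spread as a Poisson(lam)
    truncated to exclude the two inflated points."""
    if y == k:
        return pi_k
    if y == l:
        return pi_l
    # Normalizing constant: Poisson mass outside {k, l}.
    norm = 1.0 - sum(math.exp(-lam) * lam**j / math.factorial(j) for j in (k, l))
    p = math.exp(-lam) * lam**y / math.factorial(y)
    return (1.0 - pi_k - pi_l) * p / norm

total = sum(hurdle_kl_poisson_pmf(y, 0, 1, 0.3, 0.2, 2.0) for y in range(60))
```

Summing the pmf over the support confirms it is a proper distribution, which is the defining property the renormalization buys.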

11.
The development of models and methods for cure rate estimation has recently burgeoned into an important subfield of survival analysis. Much of the literature focuses on the standard mixture model; more recently, process-based models have been suggested. We focus on several models based on first passage times of Wiener processes. Whitmore and others have studied these models in a variety of contexts, and Lee and Whitmore (Stat Sci 21(4):501–513, 2006) give a comprehensive review of first-hitting-time models and briefly discuss their potential as cure rate models. In this paper, we study the Wiener process with negative drift as a possible cure rate model, but the resulting defective inverse Gaussian model is found to provide a poor fit in some cases. Several modifications that improve on the defective inverse Gaussian model are then suggested: the inverse Gaussian cure rate mixture model; a mixture of two inverse Gaussian models; incorporation of heterogeneity in the drift parameter; and the addition of a second absorbing barrier to the Wiener process, representing an immunity threshold. This class of process-based models is a useful alternative to the standard model and provides an improved fit on many of the datasets we have studied. Implementation is facilitated by expectation-maximization (EM) algorithms and variants thereof, including the gradient EM algorithm. Parameter estimates for each of these EM algorithms are given, and the proposed models are applied to both real and simulated data, where they perform well.

12.
Yuan Ying Zhao, Statistics, 2015, 49(6): 1348–1365
Various mixed models have been developed to capture between- and within-individual variation in longitudinal data under the assumption that the random effects and the within-individual random errors are normal. However, the normality assumption may be violated in some applications. To this end, this article assumes that the random effects follow a skew-normal distribution and that the within-individual errors follow a reproductive dispersion model. An expectation conditional maximization (ECME) algorithm, together with a Metropolis–Hastings (MH) step within the Gibbs sampler, is presented to obtain estimates of the parameters and random effects simultaneously. Several diagnostic measures are developed to identify potentially influential cases and to assess the effect of minor perturbations of the model assumptions, via the case-deletion method and local influence analysis. To reduce the computational burden, we derive first-order approximations to the case-deletion diagnostics. Several simulation studies and a real-data example illustrate the newly developed methodologies.
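Skew-normal random effects of the kind assumed here can be simulated with Azzalini's construction Y = δ|Z0| + sqrt(1 − δ²) Z1, where δ = α/sqrt(1 + α²) and α is the skewness parameter. A minimal sketch for the standardized case (no location or scale parameters, which the paper's model would of course include):

```python
import numpy as np

rng = np.random.default_rng(0)

def rskewnormal(n, alpha):
    """Draws from the standard skew-normal SN(alpha) via Azzalini's
    |Z0| construction: Y = delta*|Z0| + sqrt(1-delta^2)*Z1."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    z0 = np.abs(rng.standard_normal(n))
    z1 = rng.standard_normal(n)
    return delta * z0 + np.sqrt(1.0 - delta**2) * z1

b = rskewnormal(10000, alpha=4.0)
```

The theoretical mean of SN(α) is δ·sqrt(2/π) (about 0.774 for α = 4), so a large sample's mean should land close to that value.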

13.
Joint likelihood approaches have been widely used to handle survival data with time-dependent covariates. In constructing the joint likelihood function for the accelerated failure time (AFT) model, the unspecified baseline hazard function is usually assumed in the literature to be piecewise constant. However, there are usually no closed-form formulas for the regression parameters, so numerical methods are required in the EM iterations. The nonsmooth step-function assumption leads to a very spiky likelihood function whose global maximum is hard to find. Moreover, because of the nonsmoothness, the maximization must rely on direct search methods, which are inefficient and time-consuming. To overcome these two disadvantages, we propose a kernel-smoothed pseudo-likelihood function to replace the nonsmooth step-function assumption. The performance of the proposed method is evaluated in simulation studies, and a case study of reproductive egg-laying data demonstrates the usefulness of the new approach.
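Kernel smoothing of a piecewise-constant hazard has a convenient closed form: convolving the step function with a Gaussian kernel replaces each interval indicator with a difference of normal CDFs. A sketch of that identity (the paper's kernel and bandwidth choices may differ; this only illustrates the smoothing step):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def smooth_hazard(t, breaks, heights, bandwidth):
    """Gaussian-kernel convolution of a piecewise-constant hazard with
    interval endpoints breaks = [a_0, ..., a_J] and heights
    [h_1, ..., h_J]:  h_b(t) = sum_j h_j * (Phi((t-a_j)/b) - Phi((t-a_{j+1})/b))."""
    total = 0.0
    for j, h in enumerate(heights):
        total += h * (norm_cdf((t - breaks[j]) / bandwidth)
                      - norm_cdf((t - breaks[j + 1]) / bandwidth))
    return total

# Far inside an interval the smoothed hazard matches the step height.
val = smooth_hazard(5.0, breaks=[0.0, 10.0, 20.0], heights=[0.2, 0.4],
                    bandwidth=0.5)
```

Because the smoothed hazard is differentiable in t and in the heights, gradient-based optimizers become usable where direct search was needed before.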

14.
In haemodialysis patients, vascular access type is of paramount importance. Although recent studies have found that central venous catheters are often associated with poor outcomes and that switching to an arteriovenous fistula is beneficial, they have not fully elucidated how the effect of switching access type on outcomes changes over time for patients on dialysis, or whether the effect depends on the switching time. In this paper, we characterise the effect of switching access type on outcomes for haemodialysis patients. This is achieved using a new class of multiple-index varying-coefficient (MIVC) models. We develop a new estimation procedure for MIVC models based on local linear smoothing, the profile least-squares method and the Cholesky decomposition. Monte Carlo simulation studies show excellent finite-sample performance. Finally, we analyse the dialysis data using our method.

15.
In longitudinal studies, the mixture generalized estimating equation (mix-GEE) approach was proposed to improve the efficiency of the fixed-effects estimator under misspecification of the working correlation structure. When the subject-specific effects are of interest, mixed-effects models are widely used to analyze longitudinal data. However, most existing approaches assume a normal distribution for the random effects, which can affect the efficiency of the fixed-effects estimator. In this article, a conditional mixture generalized estimating equation (cmix-GEE) approach is developed, building on the advantages of mix-GEE and the conditional quadratic inference function (CQIF) method. The advantage of the new approach is that it does not require the normality assumption for the random effects and can accommodate serial correlation between observations within the same cluster. A key feature is that the estimators of the regression parameters are more efficient than those of CQIF even when the working correlation structure is not correctly specified. In addition, the true working correlation matrix can be identified from the estimates of the mixture proportions. We establish asymptotic results for the fixed-effects parameter estimators, and simulation studies evaluate the proposed method.

16.
This paper introduces a new approach, based on dependent univariate GLMs, for fitting multivariate mixture models. This approach is a multivariate generalization of the method for univariate mixtures presented by Hinde (1982). Its accuracy and efficiency are compared with direct maximization of the log-likelihood. Using a simulation study, we also compare the efficiency of Monte Carlo and Gaussian quadrature methods for approximating the mixture distribution. The new approach with Gaussian quadrature outperforms the alternative methods considered. The work is motivated by the multivariate mixture models which have been proposed for modelling changes of employment states at an individual level. Similar formulations are of interest for modelling movement between other social and economic states, and multivariate mixture models also occur in biostatistics and epidemiology.
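Gaussian quadrature for a mixture (random-effect) likelihood can be sketched with Gauss–Hermite nodes. The example below uses a Poisson response with a normal random intercept, which is an assumption chosen for illustration, not the employment-state model of the paper:

```python
import math
import numpy as np

def marginal_loglik_poisson(y, beta0, sigma, n_quad=20):
    """Gauss-Hermite approximation to the marginal log-likelihood of one
    cluster under a Poisson model with random intercept b ~ N(0, sigma^2):
    integral f(y|b) phi(b) db ~ (1/sqrt(pi)) sum_q w_q f(y | sqrt(2)*sigma*x_q)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    like = 0.0
    for x, w in zip(nodes, weights):
        b = math.sqrt(2.0) * sigma * x            # change of variables
        mu = math.exp(beta0 + b)
        f = math.prod(math.exp(-mu) * mu**yy / math.factorial(yy) for yy in y)
        like += w * f
    return math.log(like / math.sqrt(math.pi))

ll = marginal_loglik_poisson([1, 2, 0], beta0=0.3, sigma=0.5)
```

As sigma shrinks to zero the quadrature collapses onto the ordinary Poisson log-likelihood, a handy sanity check on the change of variables.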

17.
In this article, the profile maximum likelihood estimate (PMLE) is proposed for nonlinear mixed models (NLMMs) with longitudinal data, where the variance components are estimated by the expectation-maximization (EM) algorithm. Strong consistency and asymptotic normality of the estimators are derived. A simulation study is conducted comparing the performance of the PMLE with the Fisher scoring estimate (FSE) from the literature. A real data set is also analyzed to investigate the empirical performance of the procedure.

18.
Medical costs in an ageing society increase substantially when the incidence of chronic disease, disability and inability to live independently is high. Healthy lifestyles not only affect elderly individuals but also influence the entire community. When assessing treatment efficacy, survival and quality of life should be considered simultaneously. This paper proposes a joint likelihood approach for modelling survival and longitudinal binary covariates simultaneously. Because the model involves unobservable information, the Monte Carlo EM algorithm and the Metropolis–Hastings algorithm are used to find the estimators. Monte Carlo simulations evaluate the performance of the proposed model in terms of the accuracy and precision of the estimates, and real data demonstrate its feasibility.

19.
In this article, the label switching problem, and the importance of solving it, is discussed for frequentist mixture models when a simulation study is used to evaluate the performance of mixture-model estimators. Two effective labelling methods that use the true label of each observation are proposed. Empirical studies demonstrate that the proposed methods work well and give better results than the rule-of-thumb method of order-constraint labelling. In addition, a Monte Carlo study demonstrates that simple order-constraint labelling can sometimes produce severely biased, and possibly meaningless, estimates of bias and standard errors.
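One way to label by the truth, usable only in simulations where the true labels are known, is to pick, for each simulated data set, the component permutation that best agrees with the true assignment. A brute-force sketch (fine for a small number of components; this is an illustration of the idea, not necessarily the paper's exact rule):

```python
from itertools import permutations

def relabel_by_truth(est_labels, true_labels, n_comp):
    """Relabel estimated mixture components by the permutation that
    maximizes agreement with the known true labels."""
    best, best_hits = None, -1
    for perm in permutations(range(n_comp)):
        hits = sum(perm[e] == t for e, t in zip(est_labels, true_labels))
        if hits > best_hits:
            best, best_hits = perm, hits
    return [best[e] for e in est_labels]

# Components 0 and 1 came back swapped relative to the truth:
fixed = relabel_by_truth([1, 1, 0, 0], [0, 0, 1, 1], n_comp=2)
```

After relabelling, component-wise bias and standard errors are computed against consistently named components, avoiding the artefacts that a bare order constraint can introduce.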

20.
In a joint analysis of longitudinal quality of life (QoL) scores and relapse-free survival (RFS) times from a clinical trial on early breast cancer conducted by the Canadian Cancer Trials Group, we observed a complicated trajectory of QoL scores and the existence of long-term survivors. Motivated by this observation, we propose in this paper a flexible joint model for the longitudinal measurements and survival times. A partly linear mixed-effects model, approximated by B-splines, is used to capture the complicated but smooth trajectory of the longitudinal measurements, and a semiparametric mixture cure model with a B-spline baseline hazard is used to model survival times with a cure fraction. The two models are linked by shared random effects to capture the dependence between the longitudinal measurements and the survival times. A semiparametric inference procedure with an EM algorithm is proposed to estimate the parameters of the joint model. The performance of the proposed procedures is evaluated by simulation studies and through application to the data from the clinical trial that motivated this research.
