Similar Articles
20 similar articles found (search time: 31 ms)
1.
Event history models typically assume that the entire population is at risk of experiencing the event of interest throughout the observation period. However, there will often be individuals, referred to as long-term survivors, who may be considered a priori to have a zero hazard throughout the study period. In this paper, a discrete-time mixture model is proposed in which the probability of long-term survivorship and the timing of event occurrence are modelled jointly. Another feature of event history data that often needs to be considered is that they may come from a population with a hierarchical structure. For example, individuals may be nested within geographical regions and individuals in the same region may have similar risks of experiencing the event of interest due to unobserved regional characteristics. Thus, the discrete-time mixture model is extended to allow for clustering in the likelihood and timing of an event within regions. The model is further extended to allow for unobserved individual heterogeneity in the hazard of event occurrence. The proposed model is applied in an analysis of contraceptive sterilization in Bangladesh. The results show that a woman's religion and education level affect her probability of choosing sterilization, but not when she gets sterilized. There is also evidence of community-level variation in sterilization timing, but not in the probability of sterilization.
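A minimal sketch of the mixture structure described above, in notation of my own choosing (the paper's exact parameterization may differ): with long-term-survivor probability \(\pi_i\) and discrete-time hazards \(h_{ij}\) for the susceptible group,

```latex
S_i(t) = \pi_i + (1-\pi_i)\prod_{j \le t}\bigl(1 - h_{ij}\bigr),
\qquad
\operatorname{logit}(\pi_i) = \mathbf{z}_i^{\top}\boldsymbol{\gamma} + u_{k(i)},
\qquad
\operatorname{logit}(h_{ij}) = \alpha_j + \mathbf{x}_i^{\top}\boldsymbol{\beta} + v_{k(i)},
```

where \(u_{k(i)}\) and \(v_{k(i)}\) are region-level random effects capturing community-level variation in, respectively, the probability and the timing of the event.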

2.
Bayesian nonparametric methods have been applied to survival analysis problems since the emergence of the area of Bayesian nonparametrics. However, the use of the flexible class of Dirichlet process mixture models has been rather limited in this context. This is, arguably, to a large extent, due to the standard way of fitting such models that precludes full posterior inference for many functionals of interest in survival analysis applications. To overcome this difficulty, we provide a computational approach to obtain the posterior distribution of general functionals of a Dirichlet process mixture. We model the survival distribution employing a flexible Dirichlet process mixture, with a Weibull kernel, that yields rich inference for several important functionals. In the process, a method for hazard function estimation emerges. Methods for simulation-based model fitting, in the presence of censoring, and for prior specification are provided. We illustrate the modeling approach with simulated and real data.

3.
Bivariate recurrent event data are observed when subjects are at risk of experiencing two different types of recurrent events. In this paper, our interest is in suggesting a statistical model for the case in which a substantial portion of subjects experience no recurrent events but do have a terminal event. In the context of recurrent event data, zero events can be related to either a risk-free group or a terminal event. To reflect both zero inflation and a terminal event simultaneously in bivariate recurrent event data, a joint model is implemented with bivariate frailty effects. Simulation studies are performed to evaluate the suggested models. Infection data from AML (acute myeloid leukemia) patients are analyzed as an application.

4.
Failure time models are considered when there is a subpopulation of individuals that is immune, or not susceptible, to an event of interest. Such models are of considerable interest in biostatistics. The most common approach is to postulate a proportion p of immunes or long-term survivors and to use a mixture model [5]. This paper introduces the defective inverse Gaussian model as a cure model and examines the use of the Gibbs sampler together with a data augmentation algorithm to study Bayesian inferences both for the cured fraction and the regression parameters. The results of the Bayesian and likelihood approaches are illustrated on two real data sets.

5.
Process regression methodology is underdeveloped relative to the frequency with which pertinent data arise. In this article, the response is a binary indicator process representing the joint event of being alive and remaining in a specific state. The process is indexed by time (e.g., time since diagnosis) and observed continuously. Data of this sort occur frequently in the study of chronic disease. A general area of application involves a recurrent event with non-negligible duration (e.g., hospitalization and associated length of hospital stay) and subject to a terminating event (e.g., death). We propose a semiparametric multiplicative model for the process version of the probability of being alive and in the (transient) state of interest. Under the proposed methods, the regression parameter is estimated through a procedure that does not require estimating the baseline probability. Unlike the majority of process regression methods, the proposed methods accommodate multiple sources of censoring. In particular, we derive a computationally convenient variant of inverse probability of censoring weighting based on the additive hazards model. We show that the regression parameter estimator is asymptotically normal, and that the baseline probability function estimator converges to a Gaussian process. Simulations demonstrate that our estimators have good finite sample performance. We apply our method to national end-stage liver disease data. The Canadian Journal of Statistics 48: 222–237; 2020 © 2019 Statistical Society of Canada

6.
Summary.  The cure fraction (the proportion of patients who are cured of disease) is of interest to both patients and clinicians and is a useful measure to monitor trends in survival of curable disease. The paper extends the non-mixture and mixture cure fraction models to estimate the proportion cured of disease in population-based cancer studies by incorporating a finite mixture of two Weibull distributions to provide more flexibility in the shape of the estimated relative survival or excess mortality functions. The methods are illustrated by using public use data from England and Wales on survival following diagnosis of cancer of the colon where interest lies in differences between age and deprivation groups. We show that the finite mixture approach leads to improved model fit and estimates of the cure fraction that are closer to the empirical estimates. This is particularly so in the oldest age group where the cure fraction is notably lower. The cure fraction is broadly similar in each deprivation group, but the median survival of the 'uncured' is lower in the more deprived groups. The finite mixture approach overcomes some of the limitations of the more simplistic cure models and has the potential to model the complex excess hazard functions that are seen in real data.
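For reference, the two cure-model forms being extended can be written as follows (standard relative-survival notation, my own summary rather than the paper's): with cure fraction \(\pi\) and uncured survival \(S_u\) modelled by a two-component Weibull mixture,

```latex
\text{mixture:}\quad R(t) = \pi + (1-\pi)\,S_u(t),
\qquad
S_u(t) = p\,e^{-\lambda_1 t^{\gamma_1}} + (1-p)\,e^{-\lambda_2 t^{\gamma_2}},
\qquad
\text{non-mixture:}\quad R(t) = \pi^{F_u(t)},
```

where \(R(t)\) is relative survival and \(F_u = 1 - S_u\); the second Weibull component is what provides the extra flexibility in the shape of the excess hazard.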

7.
In the course of hypertension, cardiovascular disease events (e.g. stroke, heart failure) occur frequently and recurrently. The scientific interest in such studies may lie in estimating the treatment effect while accounting for the correlation among event times. The correlation among recurrent event times comes from two sources: subject-specific heterogeneity (e.g. varied lifestyles, genetic variations, and other unmeasurable effects) and event dependence (i.e. event incidences may change the risk of future recurrent events). Moreover, event incidences may change the disease progression, so there may exist event-varying covariate effects (the covariate effects may change after each event) and an event effect (the effect of prior events on future events). In this article, we propose a Bayesian regression model that not only accommodates correlation among recurrent events from both sources, but also explicitly characterizes the event-varying covariate effects and the event effect. This model is especially useful in quantifying how the incidences of events change the effects of covariates and the risk of future events. We compare the proposed model with several commonly used recurrent event models and apply our model to the motivating lipid-lowering trial (LLT) component of the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) (ALLHAT-LLT).

8.
Intermediate clinical events, surrogate markers and survival
This paper investigates one- and two-sample problems comparing survival times when an individual may experience an intermediate event prior to death or reaching some well defined endpoint. The intermediate event may be polychotomous. Patients experiencing the intermediate event may have an altered survival distribution after the intermediate event. Score tests are derived for testing if the occurrence of the intermediate event actually alters survival. These models have implications for evaluating therapies without randomization as well as strengthening the log rank test for comparing two survival distributions. The exact distribution of the score tests can be found by conditioning on both the waiting time and occurrence of the intermediate event.

9.
Time‐to‐event data are common in clinical trials that evaluate the survival benefit of a new drug, biological product, or device. The commonly used parametric models, including the exponential, Weibull, Gompertz, log‐logistic, and log‐normal, are simply not flexible enough to capture the complex survival curves observed in clinical and medical research studies. On the other hand, the nonparametric Kaplan–Meier (KM) method is very flexible and successful at capturing the various shapes of survival curves, but it lacks the ability to predict future events, such as the time to a certain number of events or the number of events at a certain time, and to predict the risk of events (eg, death) over time beyond the span of the available trial data. Clearly, neither the nonparametric KM method nor the current parametric distributions can fit survival curves while retaining characteristics useful for prediction. In this paper, a fully parametric distribution constructed as a mixture of three Weibull components is explored and recommended for fitting survival data; it is as flexible as KM for the observed data but has desirable features beyond the trial period, such as the ability to predict future events, survival probabilities, and the hazard function.
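A rough numerical illustration of the idea (function names and parameter values are my own, not taken from the paper): a finite Weibull mixture has closed-form survival and hazard functions, and mixing components with different shape parameters lets the hazard bend in ways a single Weibull cannot:

```python
import numpy as np

def weibull_sf(t, shape, scale):
    """Weibull survival function S(t) = exp(-(t/scale)**shape)."""
    t = np.asarray(t, dtype=float)
    return np.exp(-(t / scale) ** shape)

def weibull_pdf(t, shape, scale):
    """Weibull density f(t) = (shape/scale)*(t/scale)**(shape-1)*S(t)."""
    t = np.asarray(t, dtype=float)
    return (shape / scale) * (t / scale) ** (shape - 1) * weibull_sf(t, shape, scale)

def mixture_sf(t, w, shapes, scales):
    """Survival of a finite Weibull mixture: sum_k w_k * S_k(t)."""
    return sum(wk * weibull_sf(t, a, b) for wk, a, b in zip(w, shapes, scales))

def mixture_hazard(t, w, shapes, scales):
    """Mixture hazard h(t) = f(t)/S(t); note it is NOT a weighted
    average of the component hazards."""
    f = sum(wk * weibull_pdf(t, a, b) for wk, a, b in zip(w, shapes, scales))
    return f / mixture_sf(t, w, shapes, scales)

# Hypothetical parameters: early decreasing, middle, and late increasing risk.
w, shapes, scales = [0.5, 0.3, 0.2], [0.8, 1.5, 3.0], [2.0, 5.0, 10.0]
t = np.linspace(0.1, 15.0, 60)
S = mixture_sf(t, w, shapes, scales)
h = mixture_hazard(t, w, shapes, scales)
```

In practice the weights and Weibull parameters would be estimated by maximum likelihood from the censored data; the point of the sketch is only that the mixture survival function stays proper (monotone from 1 toward 0) while its hazard can be non-monotone.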

10.
Dead recoveries of marked animals are commonly used to estimate survival probabilities. Band‐recovery models can be parameterized either by r (the probability of recovering a band conditional on death of the animal) or by f (the probability that an animal will be killed, retrieved, and have its band reported). The r parametrization can be implemented in a capture‐recapture framework with two states (alive and newly dead), mortality being the transition probability between the two states. The authors show here that the f parametrization can also be implemented in a multistate framework by imposing simple constraints on some parameters. They illustrate it using data on the mallard and the snow goose. However, they mention that because it does not entirely separate the individual survival and encounter processes, the f parametrization must be used with care on reduced models, or in the presence of estimates at the boundary of the parameter space. As they show, a multistate framework allows the use of powerful software for model fitting or testing the goodness‐of‐fit of models; it also affords the implementation of complex models such as those based on mixtures of information or uncertain states.
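The two parameterizations are connected by a simple identity (standard band-recovery notation; the paper's symbols may differ): writing \(S_i\) for annual survival and \(r_i\) for the probability that a band is recovered and reported given that the animal died,

```latex
f_i = (1 - S_i)\, r_i,
```

which is why constraining the multistate model's mortality transition and dead-encounter parameters to move together reproduces the \(f\) parametrization, at the cost of entangling the survival and encounter processes.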

11.
Marginal Means/Rates Models for Multiple Type Recurrent Event Data
Recurrent events are frequently observed in biomedical studies, and often more than one type of event is of interest. Follow-up time may be censored due to loss to follow-up or administrative censoring. We propose a class of semi-parametric marginal means/rates models, with a general relative risk form, for assessing the effect of covariates on the censored event processes of interest. We formulate estimating equations for the model parameters, and examine asymptotic properties of the parameter estimators. Finite sample properties of the regression coefficients are examined through simulations. The proposed methods are applied to a retrospective cohort study of risk factors for preschool asthma.

12.
This paper considers fitting generalized linear models to binary data in nonstandard settings such as case–control samples, studies with misclassified responses and misspecified models. We develop simple methods for fitting models to case–control data and show that a closure property holds for generalized linear models in the nonstandard settings, i.e. if the responses follow a generalized linear model in the population of interest, then so will the observed response in the nonstandard setting, but with a modified link function. These results imply that we can analyse data and study problems in the nonstandard settings by using classical generalized linear model methods such as the iteratively reweighted least squares algorithm. Example data illustrate the results.
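For the logistic link, the closure property has a classical explicit form (the Prentice–Pyke result, sketched here in my own notation): if cases and controls are sampled with probabilities \(\pi_1\) and \(\pi_0\), then

```latex
\operatorname{logit} P^{*}(Y=1 \mid x)
= \log\frac{\pi_1}{\pi_0} + \operatorname{logit} P(Y=1 \mid x)
= \Bigl(\alpha + \log\frac{\pi_1}{\pi_0}\Bigr) + x^{\top}\beta,
```

so the case–control sample follows the same logistic model with only the intercept shifted, and the slope \(\beta\) remains estimable by ordinary logistic regression.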

13.
Summary.  In many longitudinal studies, a subject's response profile is closely associated with his or her risk of experiencing a related event. Examples of such event risks include recurrence of disease, relapse, drop-out and non-compliance. When evaluating the effect of a treatment, it is sometimes of interest to consider the joint process consisting of both the response and the risk of an associated event. Motivated by a prevention of depression study among patients with malignant melanoma, we examine a joint model that incorporates the risk of discontinuation into the analysis of serial depression measures. We present a maximum likelihood estimator for the mean response and event risk vectors. We test hypotheses about functions of mean depression and withdrawal risk profiles from our joint model, predict depression from updated patient histories, characterize associations between components of the joint process and estimate the probability that a patient's depression and risk of withdrawal exceed specified levels. We illustrate the application of our joint model by using the depression data.

14.
Marginal Regression of Gaps Between Recurrent Events
Recurrent event data typically exhibit the phenomenon of intra-individual correlation, owing to not only observed covariates but also random effects. In many applications, the population may be reasonably postulated as a heterogeneous mixture of individual renewal processes, and the inference of interest is the effect of individual-level covariates. In this article, we suggest and investigate a marginal proportional hazards model for gaps between recurrent events. A connection is established between observed gap times and clustered survival data with informative cluster size. We subsequently construct a novel and general inference procedure for the latter, based on a functional formulation of standard Cox regression. Large-sample theory is established for the proposed estimators. Numerical studies demonstrate that the procedure performs well with practical sample sizes. Application to the well-known bladder tumor data is given as an illustration.

15.
We use the additive risk model of Aalen (1980) as a model for the rate of a counting process. Rather than specifying the intensity, that is the instantaneous probability of an event conditional on the entire history of the relevant covariates and counting processes, we present a model for the rate function, i.e., the instantaneous probability of an event conditional on only a selected set of covariates. When the rate function for the counting process is of Aalen form we show that the usual Aalen estimator can be used and gives almost unbiased estimates. The usual martingale-based variance estimator is incorrect, and an alternative estimator should be used. We also consider the semi-parametric version of the Aalen model as a rate model (McKeague and Sasieni, 1994) and show that the standard errors computed under an assumption of intensities are incorrect, and we give a different estimator. Finally, we introduce and implement a test statistic for the hypothesis of a time-constant effect in both the non-parametric and semi-parametric model. A small simulation study was performed to evaluate the performance of the new estimator of the standard error.
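A toy version of the Aalen least-squares estimator the paragraph refers to (my own simplified sketch: no tied event times, no variance estimation, and hypothetical names):

```python
import numpy as np

def aalen_cumulative(times, status, X):
    """Aalen additive-model estimator of the cumulative regression
    functions B(t): at each event time, the increment is the
    least-squares solution of X_r dB = dN over the subjects still
    at risk. X should include an intercept column for the baseline."""
    order = np.argsort(times)
    times, status, X = times[order], status[order], X[order]
    event_times, B_path = [], []
    B = np.zeros(X.shape[1])
    for i in range(len(times)):
        if status[i] == 0:
            continue
        at_risk = times >= times[i]
        Xr = X[at_risk]
        dN = (times[at_risk] == times[i]).astype(float)  # one event (no ties)
        dB, *_ = np.linalg.lstsq(Xr, dN, rcond=None)
        B = B + dB
        event_times.append(times[i])
        B_path.append(B.copy())
    return np.array(event_times), np.array(B_path)

# Simulated data with additive rate 0.5 + 0.5*x and independent censoring.
rng = np.random.default_rng(0)
n = 200
x = rng.binomial(1, 0.5, n).astype(float)
T = rng.exponential(1.0 / (0.5 + 0.5 * x))
C = rng.exponential(2.0, n)
times, status = np.minimum(T, C), (T <= C).astype(int)
et, B_path = aalen_cumulative(times, status, np.column_stack([np.ones(n), x]))
```

The martingale-based variance from the intensity formulation is exactly what the paper warns is incorrect under the rate-model interpretation, so this sketch deliberately returns only the point estimates.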

16.
We propose a cure rate survival model by assuming that the number of competing causes of the event of interest follows the negative binomial distribution and the time to the event of interest has the Birnbaum-Saunders distribution. Further, the new model includes as special cases some well-known cure rate models published recently. We consider a frequentist analysis for parameter estimation of the negative binomial Birnbaum-Saunders model with cure rate. Then, we derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes. We illustrate the usefulness of the proposed model in the analysis of a real data set from the medical area.

17.
In this article, an alternative estimation approach is proposed to fit linear mixed effects models where the random effects follow a finite mixture of normal distributions. This heterogeneity linear mixed model is an interesting tool since it relaxes the classical normality assumption and is also perfectly suitable for classification purposes, based on longitudinal profiles. Instead of fitting directly the heterogeneity linear mixed model, we propose to fit an equivalent mixture of linear mixed models under some restrictions which is computationally simpler. Unlike the former model, the latter can be maximized analytically using an EM algorithm and the obtained parameter estimates can be easily used to compute the parameter estimates of interest.

18.
Cancer immunotherapy often reflects improvement in both short-term risk reduction and long-term survival. In this scenario, a mixture cure model can be used for the trial design. However, the hazard functions between two groups based on the mixture cure model will ultimately cross over. Thus, the conventional assumption of proportional hazards may be violated, and a study design using the standard log-rank test (LRT) could lose power if the main interest is to detect the improvement of long-term survival. In this paper, we propose a change sign weighted LRT for the trial design. We derive a sample size formula for the weighted LRT, which can be used for designing cancer immunotherapy trials to detect both short-term risk reduction and long-term survival. Simulation studies are conducted to compare the efficiency between the standard LRT and the change sign weighted LRT.

19.
In this article, we propose a parametric model for the distribution of time to first event when events are overdispersed and can be properly fitted by a Negative Binomial distribution. This is a very common situation in medical statistics, when the occurrence of events is summarized as a count for each patient and the simple Poisson model is not adequate to account for overdispersion of the data. In this situation, studying the time of occurrence of the first event can be of interest. From the Negative Binomial distribution of counts, we derive a new parametric model for time to first event and apply it to fit the distribution of time to first relapse in multiple sclerosis (MS). We develop the regression model with methods for covariate estimation. We show that, as the Negative Binomial model properly fits relapse count data, this new model closely matches the distribution of time to first relapse, as tested in two large datasets of MS patients. Finally, we compare its performance, when fitting time to first relapse in MS, with other models widely used in survival analysis (the semiparametric Cox model and the parametric exponential, Weibull, log-logistic and log-normal models).
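One standard route to such a model is the gamma–Poisson construction (the authors' derivation may differ in detail): if a patient's relapse count over a window of length \(\tau\) is Poisson with rate \(\lambda\tau\) and \(\lambda \sim \mathrm{Gamma}(\alpha, \beta)\) (rate \(\beta\)), the marginal count is Negative Binomial, and the time to the first event satisfies

```latex
P(T_1 > t) = E\!\left[e^{-\lambda t}\right]
= \left(\frac{\beta}{\beta + t}\right)^{\alpha},
```

a Lomax (Pareto type II) survival function, heavier-tailed than the exponential obtained when the rate is not overdispersed.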

20.
In some applications, the failure time of interest is the time from an originating event to a failure event while both event times are interval censored. We propose fitting Cox proportional hazards models to this type of data using a spline‐based sieve maximum marginal likelihood, where the time to the originating event is integrated out in the empirical likelihood function of the failure time of interest. This greatly reduces the complexity of the objective function compared with the fully semiparametric likelihood. The dependence of the time of interest on time to the originating event is induced by including the latter as a covariate in the proportional hazards model for the failure time of interest. The use of splines results in a higher rate of convergence of the estimator of the baseline hazard function compared with the usual non‐parametric estimator. The computation of the estimator is facilitated by a multiple imputation approach. Asymptotic theory is established and a simulation study is conducted to assess its finite sample performance. It is also applied to analyzing a real data set on AIDS incubation time.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号