Similar Documents
20 similar documents found.
1.
Summary.  Repeated measures and repeated events data have a hierarchical structure which can be analysed by using multilevel models. A growth curve model is an example of a multilevel random-coefficients model, whereas a discrete time event history model for recurrent events can be fitted as a multilevel logistic regression model. The paper describes extensions to the basic growth curve model to handle auto-correlated residuals, multiple-indicator latent variables and correlated growth processes, and event history models for correlated event processes. The multilevel approach to the analysis of repeated measures data is contrasted with structural equation modelling. The methods are illustrated in analyses of children's growth, changes in social and political attitudes, and the interrelationship between partnership transitions and childbearing.
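As a rough illustration of the discrete-time approach described above, the sketch below expands follow-up into person-period records and fits a logistic regression to the stacked records; all data and names (`df`, `period`, `x`) are invented for illustration. A genuinely multilevel fit would add a subject-specific random intercept to absorb the simulated frailty term.

```python
# A minimal sketch of the discrete-time approach to recurrent events:
# follow-up is expanded into person-period records with a 0/1 event
# indicator, and a logistic regression is fitted to the stacked records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_periods = 200, 10
rows = []
for i in range(n_subj):
    x = rng.normal()                      # subject-level covariate
    frailty = rng.normal(scale=0.5)       # unobserved heterogeneity
    p = 1 / (1 + np.exp(-(-2.0 + 0.5 * x + frailty)))
    for t in range(n_periods):
        rows.append({"id": i, "period": t, "x": x,
                     "event": rng.binomial(1, p)})
df = pd.DataFrame(rows)

# Pooled discrete-time logistic model; a multilevel version would add a
# subject-specific random intercept to capture the frailty simulated above.
fit = smf.logit("event ~ period + x", data=df).fit(disp=0)
print(fit.params)
```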

2.
Frequently in clinical and epidemiologic studies, the event of interest is recurrent (i.e., can occur more than once per subject). When the events are not of the same type, an analysis which accounts for the fact that events fall into different categories will often be more informative. Often, however, although event times may always be known, information through which events are categorized may potentially be missing. Complete‐case methods (whose application may require, for example, that events be censored when their category cannot be determined) are valid only when event categories are missing completely at random. This assumption is rather restrictive. The authors propose two multiple imputation methods for analyzing multiple‐category recurrent event data under the proportional means/rates model. The use of a proper or improper imputation technique distinguishes the two approaches. Both methods lead to consistent estimation of regression parameters even when the missingness of event categories depends on covariates. The authors derive the asymptotic properties of the estimators and examine their behaviour in finite samples through simulation. They illustrate their approach using data from an international study on dialysis.
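A hedged sketch of the proper/improper distinction in a stripped-down version of this setting: two event categories, one binary covariate, and a category-1 event rate as the target. All names and data below are illustrative, not the authors' notation; improper imputation plugs in the estimated category probability, while proper imputation first draws it from a Beta posterior.

```python
import numpy as np

rng = np.random.default_rng(7)
n_events = 500
z = rng.binomial(1, 0.5, n_events)                        # subject covariate
cat = rng.binomial(1, np.where(z == 1, 0.7, 0.3))         # true event category
obs = rng.binomial(1, np.where(z == 1, 0.9, 0.6)).astype(bool)  # MAR given z
total_time = 1000.0                       # total follow-up time in the cohort

def impute_rate(proper, m=50):
    rates = []
    for _ in range(m):
        cat_imp = cat.copy()
        for zv in (0, 1):
            cc = obs & (z == zv)          # complete cases in this stratum
            k, n = cat[cc].sum(), cc.sum()
            # proper: draw probability from Beta posterior; improper: plug in
            p = rng.beta(k + 1, n - k + 1) if proper else k / n
            miss = ~obs & (z == zv)
            cat_imp[miss] = rng.binomial(1, p, miss.sum())
        rates.append(cat_imp.sum() / total_time)
    # between-imputation variance only; Rubin's rules would add the
    # within-imputation component
    return np.mean(rates), np.var(rates, ddof=1)

print("improper MI:", impute_rate(False))
print("proper MI:  ", impute_rate(True))
```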

3.
The analysis of recurrent event data in clinical trials presents a number of difficulties. The statistician is faced with issues of event dependency, composite endpoints, unbalanced follow‐up times and informative dropout. It is not unusual, therefore, for statisticians charged with responsibility for providing reliable and valid analyses to need to derive new methods specific to the clinical indication under investigation. A method is proposed that appears to offer advantages over those often used in the analysis of recurrent event data in clinical trials. Based on an approach that counts periods of time with events instead of single event counts, the proposed method adjusts for patient time on study and incorporates heterogeneity by estimating an individual per‐patient risk of experiencing a morbid event. Monte Carlo simulations using real clinical study data demonstrate that the proposed method consistently outperforms other measures of morbidity. Copyright © 2003 John Wiley & Sons, Ltd.
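To make the period-counting idea concrete, here is a minimal sketch (with an assumed 30-day period length and invented event times) of a morbidity measure that counts periods containing at least one event and divides by the number of periods observed, so that patients with short follow-up are not penalized:

```python
import numpy as np

period = 30.0                                   # assumed period length, days

def event_period_rate(event_times, follow_up):
    """Fraction of observed periods in which at least one event occurred."""
    n_periods = int(np.ceil(follow_up / period))
    hit = {int(t // period) for t in event_times if t <= follow_up}
    return len(hit) / n_periods

# two hypothetical patients: clustered events vs. spread-out events
print(event_period_rate([3, 4, 5, 6], follow_up=360))    # one hot period
print(event_period_rate([10, 100, 250], follow_up=360))  # three periods
```

Note how the clustered patient contributes one event-period rather than four events, which is the dampening of event dependency the abstract alludes to.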

4.
Recurrent events involve the occurrences of the same type of event repeatedly over time and are commonly encountered in longitudinal studies. Examples include seizures in epileptic studies or occurrence of cancer tumors. In such studies, interest lies in the number of events that occur over a fixed period of time. One considerable challenge in analyzing such data arises when a large proportion of patients discontinue before the end of the study, for example, because of adverse events, leading to partially observed data. In this situation, data are often modeled using a negative binomial distribution with time‐in‐study as offset. Such an analysis assumes that data are missing at random (MAR). As we cannot test the adequacy of MAR, sensitivity analyses that assess the robustness of conclusions across a range of different assumptions need to be performed. Sophisticated sensitivity analyses are frequently performed for continuous data, but much less often for recurrent event or count data. We present a flexible approach to perform clinically interpretable sensitivity analyses for recurrent event data. Our approach fits into the framework of reference‐based imputations, where information from reference arms can be borrowed to impute post‐discontinuation data. Different assumptions about the future behavior of dropouts, depending on the reasons for dropout and the treatment received, can be made. The imputation model is a flexible model that allows for time‐varying baseline intensities. We assess the performance in a simulation study and provide an illustration with a clinical trial in patients who suffer from bladder cancer. Copyright © 2015 John Wiley & Sons, Ltd.
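The following is a hedged sketch of the primary analysis the abstract starts from: a negative binomial model with log time-in-study as offset, fitted to simulated overdispersed counts. The reference-based imputation layer the paper develops (replacing post-dropout behavior with draws informed by the reference arm) is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 300
treat = rng.binomial(1, 0.5, n)
time_on_study = rng.uniform(0.3, 1.0, n)      # partially observed follow-up
frailty = rng.gamma(2.0, 0.5, n)              # induces overdispersion
mu = frailty * time_on_study * np.exp(1.0 - 0.4 * treat)
y = rng.poisson(mu)                           # gamma-Poisson mixture counts

# negative binomial GLM with log(time-in-study) offset; alpha is the
# (here fixed) dispersion parameter
X = sm.add_constant(treat)
fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5),
             offset=np.log(time_on_study)).fit()
print(fit.summary().tables[1])
```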

5.
In some exceptional circumstances, as in very rare diseases, nonrandomized one‐arm trials are the sole source of evidence to demonstrate efficacy and safety of a new treatment. The design of such studies needs a sound methodological approach in order to provide reliable information, and the determination of the appropriate sample size remains a critical step of this planning process. As, to our knowledge, no method exists for sample size calculation in one‐arm trials with a recurrent event endpoint, we propose a closed sample size formula. It is derived assuming a mixed Poisson process, and it is based on the asymptotic distribution of the one‐sample robust nonparametric test recently developed for the analysis of recurrent event data. Extensive simulation studies demonstrate the validity of this formula under heterogeneity of event rates, both over time and between patients, and under a time‐varying treatment effect. Moreover, although the method requires the specification of a process for event generation, it appears robust to misspecification of this process, provided that the number of events at the end of the study is similar to the one assumed in the planning phase. The motivating clinical context is a nonrandomized one‐arm study of gene therapy in a very rare immunodeficiency in children (ADA‐SCID), where a major endpoint is the recurrence of severe infections. Copyright © 2012 John Wiley & Sons, Ltd.
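The paper's closed formula is tied to its mixed-Poisson test and is not reproduced here; as a stand-in, the sketch below gives a textbook-style one-sample calculation for a recurrent event rate with a variance-inflation factor `phi` for between-patient heterogeneity. Treat this as an assumption-laden approximation, not the authors' formula.

```python
import math
from scipy.stats import norm

def n_one_arm_rate(lam0, lam1, tau, alpha=0.025, power=0.9, phi=1.5):
    """Patients needed to detect event rate lam1 against null rate lam0,
    each observed for exposure tau, counts with variance inflation phi."""
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    num = phi * (za * math.sqrt(lam0 * tau) + zb * math.sqrt(lam1 * tau)) ** 2
    return math.ceil(num / (tau * (lam1 - lam0)) ** 2)

# e.g. severe infections: 2.0/year under the null, 1.2/year hoped for,
# 2 years of follow-up per child, 50% extra-Poisson variance (assumed)
print(n_one_arm_rate(lam0=2.0, lam1=1.2, tau=2.0))
```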

6.
The product limit or Kaplan‐Meier (KM) estimator is commonly used to estimate the survival function in the presence of incomplete time‐to‐event data. Application of this method inherently assumes that the occurrence of an event is known with certainty. However, the clinical diagnosis of an event is often subject to misclassification due to assay error or adjudication error, by which the event is assessed with some uncertainty. In the presence of such errors, the true distribution of the time to first event will not be estimated accurately by the KM method. We develop a method to estimate the true survival distribution by incorporating negative and positive predictive values into a KM‐like method of estimation. This allows us to quantify the bias in the KM survival estimates due to the presence of misclassified events in the observed data. We present an unbiased estimator of the true survival function and its variance. Asymptotic properties of the proposed estimators are provided, and these properties are examined through simulations. We demonstrate our methods using data from the Viral Resistance to Antiviral Therapy of Hepatitis C study.
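Below is a deliberately crude illustration of the bias mechanism, not the authors' estimator (whose exact weighting is developed in the paper): a KM-like curve in which each adjudicated event contributes a fractional count PPV and each adjudicated non-event exit contributes 1 - NPV.

```python
import numpy as np

def km_misclass(times, events, ppv=1.0, npv=1.0):
    """KM-like survival curve with fractional event counts; ppv=npv=1
    reproduces the ordinary Kaplan-Meier estimator (no tied times)."""
    order = np.argsort(times)
    t, d = times[order], events[order].astype(float)
    # expected true events: PPV per adjudicated event, 1-NPV per non-event
    d_adj = np.where(d == 1, ppv, 1.0 - npv)
    surv, s, n = [], 1.0, len(t)
    for i in range(n):
        s *= 1.0 - d_adj[i] / (n - i)       # at-risk set shrinks by one
        surv.append(s)
    return t, np.array(surv)

rng = np.random.default_rng(5)
times = rng.exponential(10.0, 400)
events = rng.binomial(1, 0.7, 400)
t, s_naive = km_misclass(times, events)               # ordinary KM
t, s_adj = km_misclass(times, events, ppv=0.9, npv=0.95)
print(s_naive[-1], s_adj[-1])                         # tail estimates differ
```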

7.
Recurrent event data arise commonly in medical and public health studies. The analysis of such data has received extensive research attention, and various methods have been developed in the literature. Depending on the focus of scientific interest, the methods may be broadly classified as intensity‐based counting process methods, mean function‐based estimating equation methods, and the analysis of times to events or times between events. These methods and models cover a wide variety of practical applications. However, there is a critical assumption underlying these methods: variables must be correctly measured. Unfortunately, this assumption is frequently violated in practice, as it is quite common for some covariates to be subject to measurement error. It is well known that covariate measurement error can substantially distort inference results if it is not properly taken into account. There has been extensive research on measurement error problems in various settings, but with recurrent events there is little discussion of this topic. The objective of this paper is to address this important issue. We develop inferential methods which account for measurement error in covariates for models with multiplicative intensity functions or rate functions. Both likelihood‐based inference and robust inference based on estimating equations are discussed. The Canadian Journal of Statistics 40: 530–549; 2012 © 2012 Statistical Society of Canada
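As a hedged sketch of why naive fitting is biased and how a classical correction works, the example below applies regression calibration (a standard technique, swapped in here for the paper's likelihood and estimating-equation methods) in a multiplicative rate model: the error-prone measurement W = X + U is replaced by E[X | W] before the Poisson regression is fitted. The measurement error variance is assumed known in this sketch.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
n = 2000
x = rng.normal(0, 1, n)                   # true covariate (unobserved)
w = x + rng.normal(0, 0.8, n)             # error-prone measurement
t = rng.uniform(0.5, 2.0, n)              # follow-up time
y = rng.poisson(t * np.exp(-0.5 + 0.7 * x))   # recurrent event counts

def poisson_slope(cov):
    X = sm.add_constant(cov)
    return sm.GLM(y, X, family=sm.families.Poisson(),
                  offset=np.log(t)).fit().params[1]

# attenuation factor var(X)/var(W), using the assumed error variance 0.8**2
lam = (w.var() - 0.8**2) / w.var()
x_hat = w.mean() + lam * (w - w.mean())   # E[X | W] under joint normality

print("naive slope:     ", poisson_slope(w))       # attenuated toward 0
print("calibrated slope:", poisson_slope(x_hat))   # close to the true 0.7
```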

8.
Clinical trials in severely diseased populations often suffer from a high dropout rate that is related to the investigated target morbidity. These dropouts can bias estimates and treatment comparisons, particularly in the event of an imbalance. Methods to describe such selective dropout are presented that use the time‐in‐study distribution to generate so‐called population evolution charts. These charts show how the distribution of a covariate or of the target morbidity measure changes as a result of the dropout process during follow‐up. The selectiveness of the dropout process with respect to a variable can be inferred from the change in its distribution. Different types of selective dropout are described with real data from several studies in metastatic bone disease, where marked effects can be seen. A general strategy to cope with selective dropout seems to be the inclusion of dropout events in the endpoint. Within a time‐to‐event analysis framework this simple approach can lead to valid conclusions and still retains conservative elements. Morbidity measures that are based on (recurrent) event counts react differently in the presence of selective dropout; they differ mainly in the way dropout is included. One simple measure achieves good performance under selective dropout by introducing a non‐specific penalty for premature study termination, and the use of a prespecified scoring system to assign such weights works well. This simple and transparent approach performs well even in the presence of unbalanced selective dropout. Copyright © 2002 John Wiley & Sons, Ltd.
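A minimal sketch of the population evolution chart idea, using simulated data in which sicker patients drop out earlier: the distribution of a baseline morbidity score is tabulated among the patients still on study at successive follow-up times, and its drift reveals the selectiveness of the dropout process.

```python
import numpy as np

rng = np.random.default_rng(21)
n = 1000
morbidity = rng.gamma(2.0, 1.0, n)            # baseline morbidity score
dropout = rng.exponential(12.0 / morbidity)   # sicker -> earlier dropout

for month in (0, 3, 6, 12):
    still_in = morbidity[dropout > month]     # patients remaining on study
    q = np.percentile(still_in, [25, 50, 75])
    print(f"month {month:2d}: n={still_in.size:4d}  "
          f"morbidity quartiles={np.round(q, 2)}")
```

The shrinking quartiles over time are the "evolution" the charts display: the remaining population looks progressively healthier than the one randomized.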

9.
Abstract.  Censored recurrent event data frequently arise in biomedical studies. Often, the events are not homogeneous, and may be categorized. We propose semiparametric regression methods for analysing multiple-category recurrent event data and consider the setting where event times are always known, but the information used to categorize events may be missing. Application of existing methods after censoring events of unknown category (i.e. 'complete-case' methods) produces consistent estimators only when event types are missing completely at random, an assumption which will frequently fail in practice. We propose methods, based on weighted estimating equations, which are applicable when event category missingness is missing at random. Parameter estimators are shown to be consistent and asymptotically normal. Finite sample properties are examined through simulations and the proposed methods are applied to an end-stage renal disease data set obtained from a national organ failure registry.
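A hedged sketch of the inverse-probability-weighting idea behind such weighted estimating equations, reduced to a simple category-specific event rate (the paper embeds the same weighting inside semiparametric rate-model estimating equations): model the probability that an event's category is observed given covariates, then weight each complete-case event by the inverse of that probability.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(17)
n_events = 2000
z = rng.normal(0, 1, n_events)                      # event-level covariate
cat1 = rng.binomial(1, 1 / (1 + np.exp(-z)))        # category-1 indicator
obs = rng.binomial(1, 1 / (1 + np.exp(-(1.0 + z)))) == 1   # MAR given z
total_time = 5000.0

# complete-case rate of category-1 events: biased, because observation
# probability depends on z, which also drives the category
print("complete case:", cat1[obs].sum() / total_time)

# IPW: fit the observation model, weight complete cases by 1/probability
pi = sm.Logit(obs.astype(int), sm.add_constant(z)).fit(disp=0).predict()
print("IPW:          ", (cat1[obs] / pi[obs]).sum() / total_time)
print("full data:    ", cat1.sum() / total_time)    # oracle benchmark
```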

10.
Joint modelling of event counts and survival times
Summary.  In studies of recurrent events, such as epileptic seizures, there can be a large amount of information about a cohort over a period of time, but current methods for these data are often unable to utilize all of the available information. The paper considers data which include post-treatment survival times for individuals experiencing recurring events, as well as a measure of the base-line event rate, in the form of a pre-randomization event count. Standard survival analysis may treat this pre-randomization count as a covariate, but the paper proposes a parametric joint model based on an underlying Poisson process, which will give a more precise estimate of the treatment effect.
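A toy version of the joint idea, heavily simplified relative to the paper's model (no censoring, a common rate across subjects): the pre-randomization count and the post-treatment event time share one underlying Poisson rate mu, and treatment scales the post-treatment rate. Both data sources then inform the treatment effect through a single likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(29)
n, t0 = 400, 1.0                              # t0: pre-randomization window
z = rng.binomial(1, 0.5, n)                   # treatment arm
mu_true, beta_true = 3.0, -0.6
pre = rng.poisson(mu_true * t0, n)            # baseline event count
post = rng.exponential(1.0 / (mu_true * np.exp(beta_true * z)))  # event time

def negloglik(par):
    log_mu, beta = par
    mu = np.exp(log_mu)
    rate = mu * np.exp(beta * z)
    ll_pre = pre * log_mu - mu * t0           # Poisson kernel (constants dropped)
    ll_post = np.log(rate) - rate * post      # exponential log-density
    return -(ll_pre + ll_post).sum()

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
print("mu, beta:", np.exp(fit.x[0]), fit.x[1])    # near 3.0 and -0.6
```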

11.
In recent years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials for continuous and (to a lesser extent) binary or categorical endpoints. Studies with time‐to‐event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely, prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time‐to‐event outcomes could be extended in a clinically meaningful and interpretable way to stress‐test the assumption of ignorable censoring. We focus on a ‘tipping point’ approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on the clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi‐parametric, and non‐parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time‐to‐event data and conclude that the method based on a piecewise exponential imputation model for survival has some advantages over the other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd.
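A hedged sketch of the delta-adjusted tipping-point mechanism: impute an event time for each prematurely censored experimental-arm patient from an exponential model whose hazard is inflated by a factor delta, re-run the log-rank test, and increase delta until significance is lost. For brevity this uses a single stochastic imputation per delta and a constant hazard; the paper's approach uses proper multiple imputation and, in its preferred variant, a piecewise exponential model.

```python
import numpy as np
from scipy.stats import chi2

def logrank_p(time, event, group):
    """Two-sample log-rank test p-value (standard hypergeometric form)."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        d = (time == t) & (event == 1)
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        dt, d1 = d.sum(), (d & (group == 1)).sum()
        o_minus_e += d1 - dt * n1 / n
        if n > 1:
            var += dt * (n1 / n) * (1 - n1 / n) * (n - dt) / (n - 1)
    return chi2.sf(o_minus_e**2 / var, df=1)

rng = np.random.default_rng(31)
n = 400
grp = rng.binomial(1, 0.5, n)
t_true = rng.exponential(np.where(grp == 1, 14.0, 10.0))
cens = rng.uniform(0, 24, n)
time, event = np.minimum(t_true, cens), (t_true <= cens).astype(int)
haz1 = event[grp == 1].sum() / time[grp == 1].sum()   # exponential MLE, arm 1

for delta in (1.0, 1.5, 2.0, 3.0):
    ti, ev = time.copy(), event.copy()
    idx = (grp == 1) & (event == 0)          # censored experimental patients
    ti[idx] += rng.exponential(1 / (delta * haz1), idx.sum())
    ev[idx] = 1                              # impute events under hazard x delta
    print(f"delta={delta}: log-rank p = {logrank_p(ti, ev, grp):.4f}")
```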

12.
Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter‐term surrogate endpoints as substitutes for costly long‐term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as the ability to reliably predict the unobserved treatment effect on the long‐term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time‐to‐event surrogates for a time‐to‐event true endpoint. This two‐stage meta‐analytic copula method has been extensively studied for time‐to‐event surrogate endpoints with one event of interest, but thus far has not been explored for the assessment of surrogates which have multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors including strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where there exist very little data on which to assess surrogacy. Results show that the two‐stage meta‐analytic copula method performs well under certain circumstances and could be considered useful in practice, but demonstrates limitations that may prevent universal use.
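As a rough sketch of the second stage of the two-stage meta-analytic approach: trial-level treatment effects on the true endpoint are regressed on the effects on the surrogate, and surrogacy is summarized by the trial-level R-squared of that regression. Stage 1 (copula estimation of the trial-specific effects and the individual-level association) is not reproduced; the "effects" below are simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(37)
n_trials = 15
eff_surr = rng.normal(-0.3, 0.2, n_trials)                 # log HR, surrogate
eff_true = 0.9 * eff_surr + rng.normal(0, 0.05, n_trials)  # log HR, true

A = np.column_stack([np.ones(n_trials), eff_surr])
coef, *_ = np.linalg.lstsq(A, eff_true, rcond=None)
resid = eff_true - A @ coef
r2 = 1 - resid.var() / eff_true.var()
print(f"slope={coef[1]:.2f}, trial-level R^2={r2:.3f}")

# prediction of the unobserved true-endpoint effect in a new trial where
# only the surrogate effect (say -0.25) has been observed
print("predicted true-endpoint log HR:", coef[0] + coef[1] * -0.25)
```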

13.
We propose a mixture model that combines a discrete-time survival model for analyzing the correlated times between recurrent events, e.g., births, with a logistic regression model for the probability of never experiencing the event of interest, i.e., being a long-term survivor. The proposed survival model incorporates both observed and unobserved heterogeneity in the probability of experiencing the event of interest. We use Gibbs sampling to fit such mixture models, which leads to a computationally intensive solution to the problem of fitting survival models for multiple event time data with long-term survivors. We illustrate our Bayesian approach through an analysis of Hutterite birth histories.
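A hedged sketch of the mixture structure only: a logistic model for the probability of being a long-term survivor mixed with a discrete-time (here geometric, i.e., constant-hazard) waiting-time distribution for the rest. The paper fits a richer model by Gibbs sampling, with frailties for correlated times; this sketch swaps in plain maximum likelihood for a single spell.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(41)
n, t_max = 1000, 12
x = rng.normal(0, 1, n)
cured = rng.binomial(1, expit(-1.0 + 0.8 * x)) == 1   # long-term survivors
t_event = rng.geometric(0.25, n)                      # discrete waiting time
time = np.where(cured, t_max, np.minimum(t_event, t_max))
event = (~cured & (t_event <= t_max)).astype(int)

def negloglik(par):
    a, b, logit_h = par
    p_cure = expit(a + b * x)                 # P(never experiencing the event)
    h = expit(logit_h)                        # constant discrete-time hazard
    f = (1 - h) ** (time - 1) * h             # geometric density
    s = (1 - h) ** time                       # geometric survival
    lik = np.where(event == 1, (1 - p_cure) * f,
                   p_cure + (1 - p_cure) * s) # censored: cured or just slow
    return -np.log(lik).sum()

fit = minimize(negloglik, x0=[0.0, 0.0, -1.0], method="Nelder-Mead")
print("cure intercept/slope:", fit.x[:2], " hazard:", expit(fit.x[2]))
```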

14.
For clinical trials with time‐to‐event as the primary endpoint, the clinical cutoff is often event‐driven and the log‐rank test is the most commonly used statistical method for evaluating the treatment effect. However, this method relies on the proportional hazards assumption, under which it attains maximal power. In certain disease areas or populations, some patients may be cured and never experience the event despite long follow‐up. Event accumulation may taper off after a certain period of follow‐up, and the treatment effect may then appear as a combination of an improved cure rate and delayed events for the uncured patients. Study power depends on both the cure rate improvement and the hazard reduction. In this paper, we illustrate these practical issues using simulation studies and explore sample size recommendations, alternative choices of clinical cutoff, and testing methods that achieve the highest study power possible.

15.
Summary.  The paper investigates the life-cycle relationship of work and family life in Britain based on the British Household Panel Survey. Using hazard regression techniques we estimate a five-equation model, which includes birth events, union formation, union dissolution, employment and non-employment events. We find that transitions in and out of employment for men are relatively independent of other transitions. In contrast, there are strong links between employment of females, having children and union formation. By undertaking a detailed microsimulation analysis, we show that different levels of labour force participation by females do not necessarily lead to large changes in fertility events. Changes in union formation and fertility events, in contrast, have larger effects on employment.

16.
Time‐to‐event data have been extensively studied in many areas. Although multiple time scales are often observed, commonly used methods are based on a single time scale. Analysing time‐to‐event data on two time scales can offer a more extensive insight into the phenomenon. We introduce a non‐parametric Bayesian intensity model to analyse a two‐dimensional point process on Lexis diagrams. After a simple discretization of the two‐dimensional process, we model the intensity by one‐dimensional piecewise constant hazard functions parametrized by change points and the corresponding hazard levels. Our prior distribution incorporates a built‐in smoothing feature in two dimensions. We implement posterior simulation using the reversible jump Metropolis–Hastings algorithm and demonstrate the applicability of the method using both simulated and empirical survival data. Our approach outperforms commonly applied models by borrowing strength in two dimensions.
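A minimal sketch of the discretization step only: events and (crudely approximated) exposure are tabulated on a Lexis grid of age by calendar period, and each cell's occurrence-exposure ratio is the raw piecewise constant hazard estimate. The reversible jump MCMC smoothing the paper builds on top of this grid is well beyond this illustration.

```python
import numpy as np

rng = np.random.default_rng(43)
n = 5000
age = rng.uniform(0, 10, n)                 # age at event or censoring
year = rng.uniform(0, 10, n)                # calendar time at that point
event = rng.binomial(1, 0.3, n)

bins = np.arange(0, 11, 2)                  # 2-unit Lexis cells on each axis
deaths, _, _ = np.histogram2d(age[event == 1], year[event == 1],
                              bins=(bins, bins))
# crude exposure: one unit of person-time per subject in its cell
expo, _, _ = np.histogram2d(age, year, bins=(bins, bins))
hazard = deaths / expo                      # occurrence-exposure rates
print(np.round(hazard, 3))
```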

17.
In this paper, we investigate the performance of different parametric and nonparametric approaches for analyzing overdispersed person‐time event rates in the clinical trial setting. We show that the likelihood‐based parametric approach may not maintain the right size for the tested overdispersed person‐time event data. The nonparametric approaches may use as an estimator either the mean of the per‐subject ratios of event count to follow‐up time, or the ratio of the mean number of events to the mean follow‐up time across all subjects. Among these, the ratio of means is a consistent estimator and can be studied analytically. Asymptotic properties of all estimators were studied through numerical simulations. This research shows that the nonparametric ratio‐of‐means estimator is to be recommended for analyzing overdispersed person‐time data. When the sample size is small, some resampling‐based approaches can yield satisfactory results. Copyright © 2012 John Wiley & Sons, Ltd.
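A minimal sketch contrasting the two nonparametric estimators the paper compares, with a bootstrap confidence interval for the recommended ratio-of-means version; overdispersion is induced by a gamma frailty and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(47)
n = 200
t = rng.uniform(0.2, 2.0, n)                  # follow-up per subject
frailty = rng.gamma(1.0, 1.0, n)
y = rng.poisson(2.0 * frailty * t)            # overdispersed event counts

mean_of_ratios = np.mean(y / t)               # noisy when some t are small
ratio_of_means = y.mean() / t.mean()          # ratio-of-means estimator
print(mean_of_ratios, ratio_of_means)         # true rate is 2.0

# nonparametric bootstrap for the ratio-of-means estimator
boot = [y[i].mean() / t[i].mean()
        for i in (rng.integers(0, n, n) for _ in range(2000))]
print("bootstrap 95% CI:", np.percentile(boot, [2.5, 97.5]))
```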

18.

We study models for recurrent events with special emphasis on the situation where a terminal event acts as a competing risk for the recurrent events process and where there may be gaps between periods during which subjects are at risk for the recurrent event. We focus on marginal analysis of the expected number of events and show that an Aalen–Johansen type estimator proposed by Cook and Lawless is applicable in this situation. A motivating example deals with psychiatric hospital admissions, which we supplement with analyses of the marginal distribution of time to the competing event and the marginal distribution of the time spent in hospital. Pseudo-observations are used for the latter purpose.
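A hedged sketch of the Aalen–Johansen type (Cook–Lawless) estimator of the expected number of recurrent events in the presence of a terminal event: increments of the recurrent event counting process are weighted by the Kaplan–Meier probability of not yet having experienced the terminal event. Gap periods out of risk are ignored here for brevity, and the data are simulated admissions.

```python
import numpy as np

rng = np.random.default_rng(53)
n = 300
death = rng.exponential(20.0, n)
cens = rng.uniform(5, 30, n)
stop = np.minimum(death, cens)                # end of observation
died = death <= cens
# recurrent admissions: Poisson process with rate 0.2 until stop
rec_times = [np.sort(rng.uniform(0, s, rng.poisson(0.2 * s))) for s in stop]

grid = np.sort(np.concatenate(rec_times + [stop]))
mu, surv, mcf = 0.0, 1.0, []
for u in np.unique(grid):
    y = (stop >= u).sum()                     # number still under observation
    d_rec = sum((np.abs(r - u) < 1e-12).sum() for r in rec_times)
    d_term = ((np.abs(stop - u) < 1e-12) & died).sum()
    mu += surv * d_rec / y                    # S(u-) * dN(u) / Y(u)
    surv *= 1.0 - d_term / y                  # KM step for the terminal event
    mcf.append((u, mu))
print("estimated mean number of events by t=20:",
      max(m for u, m in mcf if u <= 20))
```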


19.
In survival and reliability studies, panel count data arise when we investigate a recurrent event process and each study subject is observed only at discrete time points. If recurrent events of several types are possible, we obtain panel count data with competing risks. Such data arise frequently from studies on recurrent events in demography, epidemiology and reliability experiments where the individuals cannot be observed continuously. In the present paper, we propose an isotonic regression estimator for the cause-specific mean function of the underlying recurrent event process of competing risks panel count data. Further, a nonparametric test is proposed to compare the cause-specific mean functions of panel count competing risks data. Asymptotic properties of the proposed estimator and test statistic are studied. A simulation study is conducted to assess the finite sample behaviour of the proposed estimator and test statistic. Finally, the procedures developed are applied to real data from a skin cancer chemoprevention trial.
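A hedged sketch of the isotonic idea for panel counts (here for a single event type; a cause-specific version would apply the same projection to the counts of one type only): pool all observation times and cumulative counts across subjects and project them onto a nondecreasing function, which estimates the mean function of the process.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(59)
times, counts = [], []
for _ in range(150):                          # 150 subjects, sparse panels
    ev = np.sort(rng.uniform(0, 10, rng.poisson(8.0)))   # rate 0.8 on [0,10]
    obs = np.sort(rng.uniform(0, 10, rng.integers(2, 5)))  # panel visit times
    times.extend(obs)
    counts.extend(np.searchsorted(ev, obs))   # cumulative count at each visit
times, counts = np.asarray(times), np.asarray(counts, dtype=float)

iso = IsotonicRegression(increasing=True)
mu_hat = iso.fit_transform(times, counts)     # nondecreasing mean estimate
order = np.argsort(times)
print("mu(5) approx:", np.interp(5.0, times[order], mu_hat[order]))  # ~4.0
```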

20.
Abstract: The authors address the problem of estimating an inter‐event distribution on the basis of count data. They derive a nonparametric maximum likelihood estimate of the inter‐event distribution utilizing the EM algorithm both in the case of an ordinary renewal process and in the case of an equilibrium renewal process. In the latter case, the iterative estimation procedure follows the basic scheme proposed by Vardi for estimating an inter‐event distribution on the basis of time‐interval data; it combines the outputs of the E‐step corresponding to the inter‐event distribution and to the length‐biased distribution. The authors also investigate a penalized likelihood approach to provide the proposed estimation procedure with regularization capabilities. They evaluate the practical estimation procedure using simulated count data and apply it to real count data representing the elongation of coffee‐tree leafy axes.
