20 similar records found
1.
Oliver N. Keene, James H. Roger, Benjamin F. Hartley, Michael G. Kenward. Pharmaceutical Statistics, 2014, 13(4): 258-264
Statistical analyses of recurrent event data have typically been based on the missing at random assumption. One implication of this is that, if data are collected only when patients are on their randomized treatment, the resulting de jure estimator of treatment effect corresponds to the situation in which the patients adhere to this regime throughout the study. For confirmatory analysis of clinical trials, sensitivity analyses are required to investigate alternative de facto estimands that depart from this assumption. Recent publications have described the use of multiple imputation methods based on pattern mixture models for continuous outcomes, where imputation for the missing data for one treatment arm (e.g. the active arm) is based on the statistical behaviour of outcomes in another arm (e.g. the placebo arm). This has been referred to as controlled imputation or reference‐based imputation. In this paper, we use the negative multinomial distribution to apply this approach to analyses of recurrent events and other similar outcomes. The methods are illustrated by a trial in severe asthma where the primary endpoint was rate of exacerbations and the primary analysis was based on the negative binomial model. Copyright © 2014 John Wiley & Sons, Ltd.
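As a concrete illustration of the kind of primary analysis described above, here is a minimal sketch of a negative binomial rate model fit in Python, with log follow-up time as an offset. The data, column names, and the fixed dispersion value are invented assumptions, not taken from the paper.

```python
# Sketch: negative binomial rate model for exacerbation counts, with log
# follow-up time as offset so the model estimates event rates, not counts.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical trial data: arm indicator, follow-up time, event counts.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "followup_years": rng.uniform(0.5, 1.0, n),
})
rate = np.where(df["treat"] == 1, 1.2, 2.0)  # true events per year by arm
# Gamma frailty induces overdispersion relative to the Poisson.
df["n_events"] = rng.poisson(rate * df["followup_years"] * rng.gamma(2.0, 0.5, n))

X = sm.add_constant(df["treat"])
# Note: in this GLM family the dispersion alpha is held fixed, a
# simplification relative to a full negative binomial likelihood fit.
model = sm.GLM(df["n_events"], X,
               family=sm.families.NegativeBinomial(alpha=1.0),
               offset=np.log(df["followup_years"]))
fit = model.fit()
print(np.exp(fit.params["treat"]))  # estimated rate ratio, active vs placebo
```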
2.
Control‐based imputation for sensitivity analyses in informative censoring for recurrent event data
Fei Gao, Guanghan F. Liu, Donglin Zeng, Lei Xu, Bridget Lin, Guoqing Diao, Gregory Golm, Joseph F. Heyse, Joseph G. Ibrahim. Pharmaceutical Statistics, 2017, 16(6): 424-432
In clinical trials, missing data commonly arise through nonadherence to the randomized treatment or to study procedures. For trials in which recurrent event endpoints are of interest, conventional analyses using the proportional intensity model or the count model assume that the data are missing at random, which cannot be tested using the observed data alone. Thus, sensitivity analyses are recommended. We implement control‐based multiple imputation as a sensitivity analysis for recurrent event data. We model the recurrent events using a piecewise exponential proportional intensity model with frailty and sample the parameters from the posterior distribution. We impute the number of events after dropout and correct the variance estimation using a bootstrap procedure. We apply the method to data from a sitagliptin study.
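A minimal sketch of the control-based imputation idea follows. The paper uses a piecewise exponential intensity with frailty and Bayesian posterior sampling; the constant-rate simplification, the Gamma posterior, and all names below are our own assumptions for illustration.

```python
# Control-based imputation of post-dropout event counts: borrow the
# control-arm rate, with posterior uncertainty propagated via a Gamma draw.
import numpy as np

rng = np.random.default_rng(1)

def impute_post_dropout(n_events_ctrl, exposure_ctrl, time_remaining,
                        n_imputations=100):
    """Draw imputed post-dropout counts for one dropout, borrowing the
    control-arm event rate (a Jeffreys-type Gamma posterior on the rate
    varies the rate between imputations)."""
    rates = rng.gamma(n_events_ctrl + 0.5, 1.0 / exposure_ctrl, n_imputations)
    # One Poisson draw of the unobserved events per imputed data set.
    return rng.poisson(rates * time_remaining)

# Control arm observed: 150 events over 100 patient-years; an active-arm
# patient drops out with 0.4 years of planned follow-up remaining.
print(impute_post_dropout(150, 100.0, 0.4, n_imputations=5))
```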
3.
Randomized controlled trials (RCTs) are the gold standard for evaluation of the efficacy and safety of investigational interventions. If every patient in an RCT were to adhere to the randomized treatment, one could simply analyze the complete data to infer the treatment effect. However, intercurrent events (ICEs), including the use of concomitant medication for unsatisfactory efficacy, treatment discontinuation due to adverse events, or lack of efficacy, may lead to interventions that deviate from the original treatment assignment. Therefore, defining the appropriate estimand (the parameter to be estimated) based on the primary objective of the study is critical prior to determining the statistical analysis method and analyzing the data. The International Council for Harmonisation (ICH) E9 (R1), adopted on November 20, 2019, provided five strategies to define the estimand: treatment policy, hypothetical, composite variable, while on treatment, and principal stratum. In this article, we propose an estimand using a mix of strategies in handling ICEs. This estimand is an average of the “null” treatment difference for those with ICEs potentially related to safety and the treatment difference for the other patients if they would complete the assigned treatments. Two examples from clinical trials evaluating antidiabetes treatments are provided to illustrate the estimation of this proposed estimand and to compare it with the estimates for estimands using hypothetical and treatment policy strategies in handling ICEs.
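In symbols (our paraphrase, with notation not used in the abstract): writing $\pi$ for the proportion of patients with safety-related ICEs and $\Delta$ for the hypothetical treatment difference among the remaining patients, the proposed estimand averages a null difference for the first group with $\Delta$ for the second,

$$\theta = \pi \cdot 0 + (1 - \pi)\,\Delta = (1 - \pi)\,\Delta,$$

so the estimated effect is attenuated in proportion to how often safety-related ICEs occur.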
4.
Recurrent events involve occurrences of the same type of event repeatedly over time and are commonly encountered in longitudinal studies. Examples include seizures in epilepsy studies or the occurrence of cancer tumors. In such studies, interest lies in the number of events that occur over a fixed period of time. One considerable challenge in analyzing such data arises when a large proportion of patients discontinue before the end of the study, for example, because of adverse events, leading to partially observed data. In this situation, data are often modeled using a negative binomial distribution with time‐in‐study as offset. Such an analysis assumes that data are missing at random (MAR). As we cannot test the adequacy of MAR, sensitivity analyses that assess the robustness of conclusions across a range of different assumptions need to be performed. Sophisticated sensitivity analyses are frequently performed for continuous data, but this is less often the case for recurrent event or count data. We present a flexible approach to performing clinically interpretable sensitivity analyses for recurrent event data. Our approach fits into the framework of reference‐based imputation, where information from reference arms can be borrowed to impute post‐discontinuation data. Different assumptions about the future behavior of dropouts can be made depending on the reason for dropout and the treatment received. The imputation model is based on a flexible model that allows for time‐varying baseline intensities. We assess the performance in a simulation study and provide an illustration with a clinical trial in patients who suffer from bladder cancer. Copyright © 2015 John Wiley & Sons, Ltd.
5.
Frequently in clinical and epidemiologic studies, the event of interest is recurrent (i.e., can occur more than once per subject). When the events are not of the same type, an analysis which accounts for the fact that events fall into different categories will often be more informative. Often, however, although event times may always be known, information through which events are categorized may potentially be missing. Complete‐case methods (whose application may require, for example, that events be censored when their category cannot be determined) are valid only when event categories are missing completely at random. This assumption is rather restrictive. The authors propose two multiple imputation methods for analyzing multiple‐category recurrent event data under the proportional means/rates model. The use of a proper or improper imputation technique distinguishes the two approaches. Both methods lead to consistent estimation of regression parameters even when the missingness of event categories depends on covariates. The authors derive the asymptotic properties of the estimators and examine their behaviour in finite samples through simulation. They illustrate their approach using data from an international study on dialysis.
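As a toy illustration of the imputation idea (far simpler than the proper and improper procedures the authors develop), the sketch below imputes missing event categories from a multinomial logistic model fitted to events with known categories; the data-generating mechanism and all names are invented.

```python
# Toy sketch: imputing missing event categories from a logistic model
# fitted on events whose category is known, with covariate dependence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=(n, 1))                 # covariate at each event
p_cat0 = 1.0 / (1.0 + np.exp(-x[:, 0]))    # P(category 0) depends on x
cat = (rng.uniform(size=n) > p_cat0).astype(int)
missing = rng.uniform(size=n) < 0.3         # 30% of categories unknown

clf = LogisticRegression().fit(x[~missing], cat[~missing])
probs = clf.predict_proba(x[missing])
# One stochastic imputation draw; repeat M times for multiple imputation.
imputed = np.array([rng.choice(len(p), p=p) for p in probs])
print(imputed[:10])
```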
6.
Andrew Atkinson, Michael G. Kenward, Tim Clayton, James R. Carpenter. Pharmaceutical Statistics, 2019, 18(6): 645-658
The analysis of time‐to‐event data typically makes the censoring at random assumption, i.e., that, conditional on covariates in the model, the distribution of event times is the same whether they are observed or unobserved (i.e., right censored). When patients who remain in follow‐up stay on their assigned treatment, analysis under this assumption broadly addresses the de jure, or “while on treatment strategy,” estimand. In such cases, we may well wish to explore the robustness of our inference to more pragmatic, de facto or “treatment policy strategy,” assumptions about the behaviour of patients post‐censoring. This is particularly the case when censoring occurs because patients change, or revert, to the usual (i.e., reference) standard of care. Recent work has shown how such questions can be addressed for trials with continuous outcome data and longitudinal follow‐up, using reference‐based multiple imputation. For example, patients in the active arm may have their missing data imputed assuming they reverted to the control (i.e., reference) intervention on withdrawal. Reference‐based imputation has two advantages: (a) it avoids the user specifying numerous parameters describing the distribution of patients' post‐withdrawal data, and (b) it is, to a good approximation, information anchored, so that the proportion of information lost due to missing data under the primary analysis is held constant across the sensitivity analyses. In this article, we build on recent work in the survival context, proposing a class of reference‐based assumptions appropriate for time‐to‐event data. We report a simulation study exploring the extent to which the multiple imputation estimator (using Rubin's variance formula) is information anchored in this setting, and then illustrate the approach by reanalysing data from a randomized trial that compared medical therapy with angioplasty for patients presenting with angina.
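A minimal sketch of the reference-based idea for survival data: estimate a hazard from the reference (control) arm and draw residual lifetimes for censored active-arm patients from it. The constant-hazard (exponential) simplification and all names below are our assumptions, not the authors' full method.

```python
# Reference-based imputation of right-censored survival times under a
# constant reference-arm hazard (memoryless, so residual time is
# exponential regardless of when censoring occurred).
import numpy as np

rng = np.random.default_rng(3)

def impute_jump_to_reference(censor_times, ref_events, ref_exposure, n_imp=5):
    """For each censored time c, return c + a residual lifetime drawn
    under the reference-arm hazard, once per imputed data set."""
    hazard = ref_events / ref_exposure            # MLE of constant hazard
    c = np.asarray(censor_times, dtype=float)
    return c + rng.exponential(1.0 / hazard, size=(n_imp, c.size))

# Reference arm: 80 events over 400 patient-years; two censored patients.
print(impute_jump_to_reference([1.2, 3.5], 80, 400.0))
```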
7.
Yun Wang, Wenda Tu, Yoonhee Kim, Susie Sinks, Jiwei He, Alex Cambon, Roberto Crackel, Kiya Hamilton, Anna Kettermann, Jennifer Clark. Pharmaceutical Statistics, 2023, 22(4): 650-670
The International Council for Harmonization (ICH) E9(R1) addendum recommends choosing an appropriate estimand based on the study objectives in advance of trial design. One defining attribute of an estimand is the intercurrent event, specifically what is considered an intercurrent event and how it should be handled. The primary objective of a clinical study is usually to assess a product's effectiveness and safety based on the planned treatment regimen instead of the actual treatment received. The estimand using the treatment policy strategy, which collects and analyzes data regardless of the occurrence of intercurrent events, is usually utilized. In this article, we explain how missing data can be handled using the treatment policy strategy from the authors' viewpoint in connection with antihyperglycemic product development programs. The article discusses five statistical methods to impute missing data occurring after intercurrent events. All five methods are applied within the framework of the treatment policy strategy. The article compares the five methods via Markov Chain Monte Carlo simulations and showcases how three of these five methods have been applied to estimate the treatment effects published in the labels for three antihyperglycemic agents currently on the market.
8.
This paper presents a new parametric model for recurrent events, in which the time of each recurrence is associated with one or multiple latent causes and no information is provided about the cause responsible for the event. The model is characterized by a rate function and is based on the Poisson-exponential distribution, namely the distribution of the maximum among a random number (truncated Poisson distributed) of exponential times. The time of each recurrence is then given by the maximum lifetime among all latent causes. Inference is based on a maximum likelihood approach. A simulation study is performed to examine the frequentist properties of the estimation procedure for small and moderate sample sizes, and likelihood-based test procedures are also investigated. A real example from a gastroenterology study concerning small bowel motility during the fasting state is used to illustrate the methodology. Finally, we apply the proposed model to a real data set and compare it with the classical homogeneous Poisson model, which is a particular case.
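A quick way to see the distribution in action is to sample from it directly using the definition above: draw N from a zero-truncated Poisson and return the maximum of N i.i.d. exponential lifetimes. Parameter names below are ours.

```python
# Sampling from the Poisson-exponential distribution: the maximum of a
# zero-truncated-Poisson number of i.i.d. exponential lifetimes.
import numpy as np

rng = np.random.default_rng(4)

def rpoisexp(size, theta, lam):
    """theta: Poisson mean (before truncation); lam: exponential rate."""
    out = np.empty(size)
    for i in range(size):
        n = 0
        while n == 0:                  # rejection step enforces truncation
            n = rng.poisson(theta)
        out[i] = rng.exponential(1.0 / lam, n).max()
    return out

x = rpoisexp(10_000, theta=2.0, lam=1.5)
print(x.mean(), x.var())   # empirical moments of the simulated sample
```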
9.
Tong Tong Wu. Journal of Statistical Computation and Simulation, 2013, 83(6): 1145-1155
This paper studies a fast computational algorithm for variable selection on high-dimensional recurrent event data. Based on the lasso penalized partial likelihood function for the response process of recurrent event data, a coordinate descent algorithm is used to accelerate the estimation of regression coefficients. This algorithm is capable of selecting important predictors for underdetermined problems where the number of predictors far exceeds the number of cases. The selection strength is controlled by a tuning constant that is determined by a generalized cross-validation method. Our numerical experiments on simulated and real data demonstrate the good performance of penalized regression in model building for recurrent event data in high-dimensional settings.
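To illustrate the coordinate descent idea, here is a sketch on a plain lasso least-squares objective (a stand-in for the penalized partial likelihood used in the paper): each coefficient is updated in turn by soft-thresholding while the others are held fixed.

```python
# Coordinate descent for the lasso: cycle through coordinates, updating
# each by soft-thresholding its univariate least-squares solution.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)      # per-coordinate curvature
    r = y - X @ beta                   # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]     # remove coordinate j's contribution
            beta[j] = soft_threshold(X[:, j] @ r, n * lam) / col_ss[j]
            r -= X[:, j] * beta[j]     # add the updated contribution back
    return beta

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 20))
y = X[:, 0] * 2.0 - X[:, 3] + rng.normal(size=100)
print(np.round(lasso_cd(X, y, lam=0.1), 2))  # sparse coefficient estimate
```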
10.
Yang-Jin Kim. Journal of Applied Statistics, 2021, 48(4): 738
Bivariate recurrent event data are observed when subjects are at risk of experiencing two different types of recurrent events. In this paper, our interest is in suggesting a statistical model for the situation where a substantial portion of subjects experience no recurrent events but have a terminal event. In the context of recurrent event data, zero events can be related either to a risk-free group or to a terminal event. To reflect both zero inflation and a terminal event simultaneously in bivariate recurrent event data, a joint model is implemented with bivariate frailty effects. Simulation studies are performed to evaluate the suggested models. Infection data from AML (acute myeloid leukemia) patients are analyzed as an application.
11.
When modeling multilevel data, it is important to accurately represent the interdependence of observations within clusters. Ignoring data clustering may result in parameter misestimation. However, it is not well established to what degree parameter estimates are affected by model misspecification when applying missing data techniques (MDTs) to incomplete multilevel data. We compare the performance of three MDTs with incomplete hierarchical data. We consider the impact of imputation model misspecification on the quality of parameter estimates by employing multiple imputation under the assumptions of a normal model (MI/NM) with two-level cross-sectional data when values are missing at random on the dependent variable at rates of 10%, 30%, and 50%. Five criteria are used to compare estimates from MI/NM with estimates from MI assuming a linear mixed model (MI/LMM) and from maximum likelihood estimation applied to the same incomplete data sets. With 10% missing data (MD), the techniques performed similarly for fixed-effects estimates, but variance components were biased with MI/NM. The effects of model misspecification worsened at higher rates of MD, with the hierarchical structure of the data markedly underrepresented by biased variance component estimates. MI/LMM and maximum likelihood provided generally accurate and unbiased parameter estimates, but performance was negatively affected by increased rates of MD.
12.
Yang-Jin Kim. Journal of Applied Statistics, 2014, 41(7): 1619-1626
For analyzing recurrent event data, either the total time scale or the gap time scale is adopted according to research interest. In particular, the gap time scale is known to be more appropriate for modeling a renewal process. In this paper, we adopt the gap time scale to analyze recurrent event data with repeated observation gaps that cannot be observed completely because their termination times are unknown. In order to estimate the termination times, an interval-censoring mechanism is applied. Simulation studies are done to compare the suggested methods with the unadjusted method that ignores incomplete observation gaps. As a real example, a conviction data set with suspensions is analyzed with the suggested methods.
13.
Variable selection is an important issue in all regression analysis, and in this paper we discuss it in the context of regression analysis of recurrent event data. Recurrent event data often occur in long-term studies in which individuals may experience the events of interest more than once, and their analysis has recently attracted a great deal of attention (Andersen et al., Statistical models based on counting processes, 1993; Cook and Lawless, Biometrics 52:1311–1323, 1996; The analysis of recurrent event data, 2007; Cook et al., Biometrics 52:557–571, 1996; Lawless and Nadeau, Technometrics 37:158–168, 1995; Lin et al., J R Stat Soc B 69:711–730, 2000). However, it seems that there are no established approaches to variable selection with respect to recurrent event data. For this problem, we adopt the idea behind the nonconcave penalized likelihood approach proposed in Fan and Li (J Am Stat Assoc 96:1348–1360, 2001) and develop a nonconcave penalized estimating function approach. The proposed approach selects variables and estimates regression coefficients simultaneously, and an algorithm is presented for this process. We show that the proposed approach performs as well as the oracle procedure in that it yields the estimates as if the correct submodel were known. Simulation studies conducted to assess the performance of the proposed approach suggest that it works well for practical situations. The proposed methodology is illustrated using the data from a chronic granulomatous disease study.
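For reference, the SCAD penalty of Fan and Li (2001) that underlies this nonconcave penalized approach is defined through its derivative: for $\theta > 0$ and a constant $a > 2$ (with $a = 3.7$ as their suggested default),

$$p_\lambda'(\theta) = \lambda \left\{ I(\theta \le \lambda) + \frac{(a\lambda - \theta)_+}{(a-1)\lambda}\, I(\theta > \lambda) \right\},$$

which penalizes small coefficients like the lasso but leaves large coefficients nearly unpenalized, yielding the oracle property mentioned above.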
14.
XiaoBing Zhao, Xian Zhou, JingLong Wang. Journal of Statistical Planning and Inference, 2012, 142(1): 289-300
Recurrent event data are often encountered in longitudinal follow-up studies in many important areas such as biomedical science, econometrics, reliability, criminology and demography. Multiplicative marginal rates models have been used extensively to analyze recurrent event data but often fail to fit the data adequately. In addition, the analysis is complicated by excess zeros in the data as well as the presence of a terminal event that precludes further recurrence. To address these problems, we propose a semiparametric model with an additive rate function and an unspecified baseline to analyze recurrent event data, which includes a parameter to accommodate excess zeros and a frailty term to account for a terminal event. A local likelihood procedure is applied to estimate the parameters, and the asymptotic properties of the estimators are established. A simulation study is conducted to evaluate the performance of the proposed methods, and an application is presented on a set of tumor recurrence data for bladder cancer.
15.
In this paper, we suggest three new ratio estimators of the population mean using quartiles of the auxiliary variable when there are missing data from the sample units. The suggested estimators are investigated under the simple random sampling method. We obtain the mean square error equations for these estimators. The suggested estimators are compared with the sample mean and ratio estimators in the case of missing data. They are also compared with the estimators in Singh and Horn [Compromised imputation in survey sampling, Metrika 51 (2000), pp. 267-276], Singh and Deo [Imputation by power transformation, Statist. Papers 45 (2003), pp. 555-579], and Kadilar and Cingi [Estimators for the population mean in the case of missing data, Commun. Stat.-Theory Methods 37 (2008), pp. 2226-2236], and we present the conditions under which the proposed estimators are more efficient than the other estimators. In terms of accuracy and of the coverage of the bootstrap confidence intervals, the suggested estimators performed better than the other estimators.
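For orientation (notation is ours; the quartile-based modifications themselves are not reproduced from the paper), the classical ratio estimator that such proposals build on is

$$\hat{\bar{Y}}_R = \bar{y}\,\frac{\bar{X}}{\bar{x}},$$

where $\bar{y}$ and $\bar{x}$ are the sample means of the study and auxiliary variables and $\bar{X}$ is the known population mean of the auxiliary variable.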
16.
Journal of Statistical Computation and Simulation, 2012, 82(11): 1653-1675
In real-life situations, we often encounter data sets containing missing observations. Statistical methods that address missingness have been extensively studied in recent years. One of the more popular approaches involves imputation of the missing values prior to the analysis, thereby rendering the data complete. Imputation broadly encompasses an entire scope of techniques that have been developed to make inferences about incomplete data, ranging from very simple strategies (e.g. mean imputation) to more advanced approaches that require estimation, for instance, of posterior distributions using Markov chain Monte Carlo methods. Additional complexity arises when the number of missingness patterns increases and/or when both categorical and continuous random variables are involved. Implementations of routines, procedures, or packages capable of generating imputations for incomplete data are now widely available. We review some of these in the context of a motivating example, as well as in a simulation study, under two missingness mechanisms (missing at random and missing not at random). Thus far, evaluation of existing implementations has frequently centred on the resulting parameter estimates of the prescribed model of interest after imputing the missing data. In some situations, however, interest may very well be in the quality of the imputed values at the level of the individual, an issue that has received relatively little attention. In this paper, we focus on the latter to provide further insight about the performance of the different routines, procedures, and packages in this respect.
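To make the spectrum of techniques concrete, here is a minimal sketch contrasting the simplest strategy (mean imputation) with stochastic regression imputation; the column names and the missing-at-random mechanism are invented for illustration.

```python
# Two imputation strategies on one incomplete column:
# deterministic mean imputation vs. stochastic regression imputation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
miss = rng.uniform(size=n) < 1 / (1 + np.exp(-x))   # MAR: depends on x
df = pd.DataFrame({"x": x, "y": np.where(miss, np.nan, y)})

# (1) Mean imputation: shrinks variance and distorts the x-y association.
mean_imp = df["y"].fillna(df["y"].mean())

# (2) Stochastic regression imputation: fit y ~ x on observed cases,
# then add residual noise so imputed values keep a realistic spread.
obs = df.dropna()
b1, b0 = np.polyfit(obs["x"], obs["y"], 1)          # slope, intercept
resid_sd = (obs["y"] - (b0 + b1 * obs["x"])).std()
pred = b0 + b1 * df["x"] + rng.normal(0, resid_sd, n)
stoch_imp = df["y"].fillna(pd.Series(pred))

# Variance comparison: mean imputation visibly understates the spread.
print(mean_imp.var(), stoch_imp.var(), df["y"].var())
```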
17.
The topic of heterogeneity in the analysis of recurrent event data has received considerable attention in recent times. Frailty models are widely employed in such situations, as they allow us to model the heterogeneity through a common random effect. In this paper, we introduce a shared frailty model for gap time distributions of recurrent events with multiple causes. The parameters of the model are estimated using the EM algorithm. An extensive simulation study is used to assess the performance of the method. Finally, we apply the proposed model to real-life data.
18.
Kang FangYuan. Communications in Statistics - Theory and Methods, 2018, 47(8): 1901-1912
In this article, an additive rate model is proposed for clustered recurrent event data with a terminal event. The subjects are clustered by some shared property, and for these clustered subjects, further recurrences are precluded by death. An estimating equation is developed for the model parameters and the baseline rate function, and the asymptotic properties of the resulting estimators are established. In addition, a goodness-of-fit test is presented to assess the adequacy of the model. The finite-sample behavior of the proposed estimators is evaluated through simulation studies, and an application to bladder cancer data is illustrated.
19.
20.
Ecological momentary assessment (EMA) studies collect intensive repeated observations of the current behavior and experiences of subjects in real time. Such studies aim to minimize recall bias and maximize ecological validity, thereby strengthening the investigation of microprocesses that influence behavior in real-world contexts by gathering intensive information on the temporal patterning of subjects' behavior. Throughout this paper, we focus on the data analysis of an EMA study that examined the behavior of intermittent smokers (ITS). Specifically, we sought to explore the pattern of clustered smoking behavior of ITS, or smoking ‘bouts’, as well as the covariates that predict such smoking behavior. To do this, we introduce a framework for characterizing the temporal behavior of ITS via functions of event gap times to distinguish the smoking bouts. We used time-varying coefficient models for the cumulative log gap time to characterize the temporal patterns of smoking behavior while simultaneously adjusting for behavioral covariates, and incorporated inverse probability weighting into the models to accommodate missing data. Simulation studies showed that, whether data were missing by design or missing at random, the model was able to reliably determine prespecified time-varying functional forms of a given covariate coefficient, provided the within-subject missingness was small.
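A minimal sketch of the inverse probability weighting step mentioned above: each observed record is weighted by the inverse of its estimated probability of being observed. The logistic observation model and all variable names are our invention.

```python
# Inverse probability weighting: fit an observation model on an
# always-observed covariate, then weight observed records by 1 / p-hat.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000
z = rng.normal(size=(n, 1))                      # always-observed covariate
observed = rng.uniform(size=n) < 1 / (1 + np.exp(-1.0 - z[:, 0]))

obs_model = LogisticRegression().fit(z, observed)
p_hat = obs_model.predict_proba(z)[:, 1]         # estimated P(observed | z)
weights = np.where(observed, 1.0 / p_hat, 0.0)   # zero weight if unobserved

# Downstream estimating equations would apply `weights` to the observed
# records, e.g. a weighted mean of an outcome measured only when observed.
print(weights[observed][:5])
```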