Similar articles
20 similar articles found.
1.
While randomized controlled trials (RCTs) are the gold standard for estimating treatment effects in medical research, there is increasing use of and interest in using real-world data for drug development. One such use case is the construction of external control arms for evaluation of efficacy in single-arm trials, particularly in cases where randomization is either infeasible or unethical. However, it is well known that treated patients in non-randomized studies may not be comparable to control patients—on either measured or unmeasured variables—and that the underlying population differences between the two groups may result in biased treatment effect estimates as well as increased variability in estimation. To address these challenges for analyses of time-to-event outcomes, we developed a meta-analytic framework that uses historical reference studies to adjust a log hazard ratio estimate in a new external control study for its additional bias and variability. The set of historical studies is formed by constructing external control arms for historical RCTs, and a meta-analysis compares the trial controls to the external control arms. Importantly, a prospective external control study can be performed independently of the meta-analysis using standard causal inference techniques for observational data. We illustrate our approach with a simulation study and an empirical example based on reference studies for advanced non-small cell lung cancer. In our empirical analysis, external control patients had lower survival than trial controls (hazard ratio: 0.907), but our methodology is able to correct for this bias. An implementation of our approach is available in the R package ecmeta.
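The adjustment described above can be sketched as a shift-and-inflate of the naive log hazard ratio: subtract the mean bias estimated from the historical meta-analysis and add the between-study heterogeneity to the variance. This is a minimal illustration, not the `ecmeta` implementation; the naive HR of 0.75, standard error of 0.10, and heterogeneity of 0.02 are hypothetical, with the bias taken as the 0.907 hazard ratio quoted above.

```python
import math

def adjust_log_hr(log_hr, se, mu_bias, tau2, z=1.96):
    """Adjust a naive external-control log hazard ratio for the mean bias
    (mu_bias) and extra between-study variability (tau2) estimated from a
    meta-analysis of trial controls vs external control arms (sketch only)."""
    adj = log_hr - mu_bias                 # shift out the estimated bias
    se_adj = math.sqrt(se ** 2 + tau2)     # inflate variance for heterogeneity
    ci = (adj - z * se_adj, adj + z * se_adj)
    return adj, se_adj, ci

# Hypothetical inputs: naive HR 0.75 looks too favorable because external
# controls die sooner (trial-vs-external HR 0.907, as quoted above).
adj, se_adj, ci = adjust_log_hr(math.log(0.75), 0.10, math.log(0.907), 0.02)
```

The adjusted hazard ratio, exp(adj) = 0.75/0.907 ≈ 0.83, is less favorable than the naive estimate, and the widened interval reflects the extra uncertainty of the external comparison.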

2.
The power of randomized controlled clinical trials to demonstrate the efficacy of a drug compared with a control group depends not just on how efficacious the drug is, but also on the variation in patients' outcomes. Adjusting for prognostic covariates during trial analysis can reduce this variation. For this reason, the primary statistical analysis of a clinical trial is often based on regression models that, besides terms for treatment and some further terms (e.g., stratification factors used in the randomization scheme of the trial), also include a baseline (pre-treatment) assessment of the primary outcome. We suggest including a “super-covariate”—that is, a patient-specific prediction of the control group outcome—as a further covariate (but not as an offset). We train a prognostic model or ensembles of such models on the individual patient (or aggregate) data of other studies in similar patients, but not the new trial under analysis. This has the potential to use historical data to increase the power of clinical trials and avoids the concern of type I error inflation with Bayesian approaches, but in contrast to them has a greater benefit for larger sample sizes. It is important for prognostic models behind “super-covariates” to generalize well across different patient populations in order to similarly reduce unexplained variability whether the trial(s) used to develop the model are identical to the new trial or not. In an example in neovascular age-related macular degeneration, we saw efficiency gains from the use of a “super-covariate”.
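As a toy illustration of why a "super-covariate" helps, regressing the outcome on an external model's prediction of the control outcome removes most of the prognostic variability, leaving far less residual noise for the treatment comparison. All data, effect sizes, and prediction noise below are made up, and a real analysis would fit the full ANCOVA model rather than a single-covariate regression:

```python
import random

random.seed(1)
n = 200
# Hypothetical data: a latent prognosis drives the outcome, and the
# "super-covariate" is an external model's (imperfect) prediction of it.
prog = [random.gauss(0, 1) for _ in range(n)]
treat = [i % 2 for i in range(n)]
y = [2.0 * p + 0.5 * t + random.gauss(0, 1) for p, t in zip(prog, treat)]
super_cov = [2.0 * p + random.gauss(0, 0.3) for p in prog]

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

# Residual variance after regressing the outcome on the super-covariate alone.
my, ms = sum(y) / n, sum(super_cov) / n
b = (sum((yi - my) * (si - ms) for yi, si in zip(y, super_cov))
     / sum((si - ms) ** 2 for si in super_cov))
resid = [yi - b * si for yi, si in zip(y, super_cov)]
```

With most of the prognostic signal explained, `variance(resid)` is a fraction of `variance(y)`, which is exactly the mechanism by which the covariate buys power.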

3.
In the analysis of retrospective data or when interpreting results from a single-arm phase II clinical trial relative to historical data, it is often of interest to show plots summarizing time-to-event outcomes comparing treatment groups. If the groups being compared are imbalanced with respect to factors known to influence outcome, these plots can be misleading and seemingly incompatible with results obtained from a regression model that accounts for these imbalances. We consider ways in which covariate information can be used to obtain adjusted curves for time-to-event outcomes. We first review a common model-based method and then suggest another model-based approach that is not as reliant on model assumptions. Finally, an approach that is partially model free is suggested. Each method is applied to an example from hematopoietic cell transplantation.
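One common model-based approach of the kind reviewed above is direct adjustment: predict each patient's survival curve from the fitted model and average the curves over the pooled covariate distribution, so that both groups are evaluated on the same case mix. A minimal sketch under an exponential proportional hazards model, with all rates and coefficients hypothetical:

```python
import math

def adjusted_survival(t, covariates, beta, baseline_rate, group_effect=0.0):
    """Direct-adjusted survival at time t: average the model-based survival
    curve over the pooled covariate values (hypothetical exponential PH model)."""
    curves = [math.exp(-baseline_rate * math.exp(beta * x + group_effect) * t)
              for x in covariates]
    return sum(curves) / len(curves)

covs = [-1.0, 0.0, 0.5, 1.5]    # pooled covariate values from both groups
s_ctrl = adjusted_survival(2.0, covs, beta=0.7, baseline_rate=0.2)
s_trt = adjusted_survival(2.0, covs, beta=0.7, baseline_rate=0.2,
                          group_effect=-0.5)   # log HR of -0.5 for treatment
```

Because both adjusted curves use the identical covariate distribution, their separation reflects the modeled group effect rather than case-mix imbalance.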

4.
In this article, we develop a model to study treatment, period, carryover, and other applicable effects in a crossover design with a time-to-event response variable. Because time-to-event outcomes on different treatment regimens within the crossover design are correlated for an individual, we adopt a proportional hazards frailty model. If the frailty is assumed to have a gamma distribution, and the hazard rates are piecewise constant, then the likelihood function can be determined via closed-form expressions. We illustrate the methodology via an application to a data set from an asthma clinical trial and run simulations that investigate sensitivity of the model to data generated from different distributions.
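The closed form arises because a gamma frailty with mean 1 and variance θ integrates out analytically: the marginal survival function is S(t) = (1 + θH(t))^(−1/θ), where H is the cumulative hazard, here piecewise constant. A sketch of that marginal survival (cut points, rates, and θ all hypothetical, and only one building block of the full crossover likelihood):

```python
import math

def cum_hazard(t, cuts, rates):
    """Cumulative hazard of a piecewise constant hazard with ascending
    breakpoints `cuts` and one rate per interval (len(rates) == len(cuts)+1)."""
    H, prev = 0.0, 0.0
    for c, r in zip(cuts, rates[:-1]):
        if t <= c:
            return H + r * (t - prev)
        H += r * (c - prev)
        prev = c
    return H + rates[-1] * (t - prev)

def marginal_survival(t, cuts, rates, theta):
    # Gamma frailty (mean 1, variance theta) integrates out in closed form.
    return (1.0 + theta * cum_hazard(t, cuts, rates)) ** (-1.0 / theta)

s = marginal_survival(3.0, cuts=[1.0, 2.0], rates=[0.1, 0.2, 0.3], theta=0.5)
```

As θ shrinks toward zero the frailty degenerates and the marginal survival recovers the ordinary exp(−H(t)).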

5.
We derived two methods to estimate the logistic regression coefficients in a meta-analysis when only the 'aggregate' data (mean values) from each study are available. The estimators we proposed are the discriminant function estimator and the reverse Taylor series approximation. The two methods gave similar estimates in an example where individual data were available. However, when aggregate data were used, the discriminant function estimators were quite different from the other two estimators. A simulation study was then performed to evaluate the performance of these two estimators as well as the estimator obtained from the model that simply uses the aggregate data in a logistic regression model. The simulation study showed that all three estimators are biased. The bias increases as the variance of the covariate increases. The distribution type of the covariates also affects the bias. In general, the estimator from the logistic regression using the aggregate data has less bias and better coverage probabilities than the other two estimators. We concluded that analysts should be cautious in using aggregate data to estimate the parameters of the logistic regression model for the underlying individual data.
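The bias from aggregate data is essentially Jensen's inequality: the average of the logistic function over individuals is not the logistic function of the average covariate, and the gap grows with the covariate variance, consistent with the simulation findings above. A quick numerical illustration with hypothetical coefficients:

```python
import math
import random

random.seed(0)

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

beta0, beta1 = -1.0, 1.5
x = [random.gauss(0, 2) for _ in range(100_000)]   # large covariate variance

# Event probability averaged over individuals vs plugged-in aggregate mean.
p_individual = sum(expit(beta0 + beta1 * xi) for xi in x) / len(x)
p_aggregate = expit(beta0 + beta1 * (sum(x) / len(x)))
```

With a covariate standard deviation of 2, the two probabilities differ by roughly ten percentage points, which is the kind of distortion an aggregate-data logistic fit must absorb as bias.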

6.
This article provides a unified methodology of meta-analysis that synthesizes medical evidence by using both available individual patient data (IPD) and published summary statistics within the framework of the likelihood principle. Most up-to-date scientific evidence on medicine is crucial information not only to consumers but also to decision makers, and can only be obtained when existing evidence from the literature and the most recent individual patient data are optimally synthesized. We propose a general linear mixed effects model to conduct meta-analyses when individual patient data are only available for some of the studies and summary statistics have to be used for the rest of the studies. Our approach includes both the traditional meta-analysis in which only summary statistics are available for all studies and the other extreme case in which individual patient data are available for all studies as special cases. We implement the proposed model with statistical procedures from standard computing packages. We provide measures of heterogeneity based on the proposed model. Finally, we demonstrate the proposed methodology through a real-life example studying cerebrospinal fluid biomarkers to identify individuals with high risk of developing Alzheimer's disease while they are still cognitively normal.
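In the special case of no heterogeneity, combining study-level effect estimates (whether computed from IPD or taken from published summaries) reduces to the familiar fixed-effect inverse-variance pool. The sketch below uses made-up estimates; the actual proposal above is a general linear mixed effects model, of which this is only the simplest corner case:

```python
def inverse_variance_pool(estimates, variances):
    """Fixed-effect pooled estimate: weight each study by 1/variance."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    var = 1.0 / sum(weights)
    return est, var

# Hypothetical study effects; the first could come from an IPD fit, the
# others from published summary statistics.
est, var = inverse_variance_pool([0.30, 0.45, 0.38], [0.04, 0.09, 0.02])
```

The pooled variance is smaller than any single study's variance, which is the payoff from synthesizing all available sources.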

7.
We consider the problem of sample size calculation for non-inferiority based on the hazard ratio in time-to-event trials where overall study duration is fixed and subject enrollment is staggered with variable follow-up. An adaptation of previously developed formulae for the superiority framework is presented that specifically allows for effect reversal under the non-inferiority setting, and its consequent effect on variance. Empirical performance is assessed through a small simulation study, and an example based on an ongoing trial is presented. The formulae are straightforward to program and may prove a useful tool in planning trials of this type.
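The core of such a calculation is a Schoenfeld-type formula for the required number of events, with the non-inferiority margin shifting the effect: d = (z₁₋α + z₁₋β)² / [p(1−p)(log HR − log margin)²]. The sketch below is this generic version, not the paper's adaptation for staggered entry and variable follow-up, and the margin and design values are hypothetical:

```python
import math
from statistics import NormalDist

def ni_events(hr_alt, margin, alpha=0.025, power=0.9, alloc=0.5):
    """Schoenfeld-type required event count for testing H0: HR >= margin
    against an alternative hazard ratio hr_alt (generic sketch)."""
    za = NormalDist().inv_cdf(1 - alpha)
    zb = NormalDist().inv_cdf(power)
    delta = math.log(hr_alt) - math.log(margin)
    return (za + zb) ** 2 / (alloc * (1 - alloc) * delta ** 2)

# Hypothetical design: true HR 1.0, non-inferiority margin 1.3, 1:1 allocation.
d = ni_events(hr_alt=1.0, margin=1.3)
```

Around 611 events are needed under these inputs; converting events to enrolled subjects is exactly where the fixed-duration, staggered-entry refinements of the paper come in.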

8.
We implement a joint model for mixed multivariate longitudinal measurements, applied to the prediction of time until lung transplant or death in idiopathic pulmonary fibrosis. Specifically, we formulate a unified Bayesian joint model for the mixed longitudinal responses and time-to-event outcomes. For the longitudinal model of continuous and binary responses, we investigate multivariate generalized linear mixed models using shared random effects. Longitudinal and time-to-event data are assumed to be independent conditional on available covariates and shared parameters. A Markov chain Monte Carlo algorithm, implemented in OpenBUGS, is used for parameter estimation. To illustrate practical considerations in choosing a final model, we fit 37 different candidate models using all possible combinations of random effects and employ a deviance information criterion to select a best-fitting model. We demonstrate the prediction of future event probabilities within a fixed time interval for patients utilizing baseline data, post-baseline longitudinal responses, and the time-to-event outcome. The performance of our joint model is also evaluated in simulation studies.

9.
Noninferiority trials intend to show that a new treatment is 'not worse' than a standard-of-care active control and can be used as an alternative when it is likely to cause fewer side effects compared to the active control. In the case of time-to-event endpoints, existing methods of sample size calculation are done either assuming proportional hazards between the two study arms, or assuming exponentially distributed lifetimes. In scenarios where these assumptions are not true, there are few reliable methods for calculating the sample sizes for a time-to-event noninferiority trial. Additionally, the choice of the non-inferiority margin is obtained either from a meta-analysis of prior studies, or strongly justifiable 'expert opinion', or from a 'well conducted' definitive large-sample study. Thus, when historical data do not support the traditional assumptions, it would not be appropriate to use these methods to design a noninferiority trial. For such scenarios, an alternate method of sample size calculation based on the assumption of Proportional Time is proposed. This method utilizes the generalized gamma ratio distribution to perform the sample size calculations. A practical example is discussed, followed by insights on choice of the non-inferiority margin, and the indirect testing of superiority of treatment compared to placebo.

Keywords: Generalized gamma, noninferiority, non-proportional hazards, proportional time, relative time, sample size

10.
Missing outcome data constitute a serious threat to the validity and precision of inferences from randomized controlled trials. In this paper, we propose the use of a multistate Markov model for the analysis of incomplete individual patient data for a dichotomous outcome reported over a period of time. The model accounts for patients dropping out of the study and also for patients relapsing. The time of each observation is accounted for, and the model allows the estimation of time-dependent relative treatment effects. We apply our methods to data from a study comparing the effectiveness of 2 pharmacological treatments for schizophrenia. The model jointly estimates the relative efficacy and the dropout rate and also allows for a wide range of clinically interesting inferences to be made. Assumptions about the missingness mechanism and the unobserved outcomes of patients dropping out can be incorporated into the analysis. The presented method constitutes a viable candidate for analyzing longitudinal, incomplete binary data.
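To see how such a multistate model propagates response, relapse, and dropout over time, consider a discrete-time sketch with a hypothetical transition matrix; the model described above works with estimated transition rates and treatment effects rather than fixed numbers like these:

```python
# Hypothetical 3-state chain per assessment cycle:
# 0 = response, 1 = no response, 2 = dropout (absorbing).
# Rows are the current state, columns the next state.
P = [
    [0.80, 0.15, 0.05],   # responders may relapse (0 -> 1) or drop out
    [0.25, 0.65, 0.10],   # non-responders may respond or drop out
    [0.00, 0.00, 1.00],   # dropout is absorbing
]

def step(dist, P):
    """One cycle of the chain: push the state distribution through P."""
    return [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

dist = [0.0, 1.0, 0.0]    # everyone starts without response
for _ in range(4):        # four assessment times
    dist = step(dist, P)
```

After four cycles the chain yields, simultaneously, the response probability at each visit and the accumulated dropout, which is exactly the joint estimation the abstract describes.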

11.
Over the last 20 or more years, many clinical applications and methodological developments in the area of joint models of longitudinal and time-to-event outcomes have emerged. In these studies, patients are followed until an event, such as death, occurs. In most of this work, the dependency of the two processes has been modeled using subject-specific random effects as frailties. In this article, we propose a new joint model that consists of a linear mixed-effects model for the longitudinal data and an accelerated failure time model for the time-to-event data. These two sub-models are linked via a latent random process. This model captures the dependency of the time-to-event on the longitudinal measurements more directly. Using standard priors, a Bayesian method has been developed for estimation. All computations are implemented using OpenBUGS. Our proposed method is evaluated by a simulation study, which compares the conditional model with a joint model with local independence by way of calibration. Data on Duchenne muscular dystrophy (DMD) and a dataset of AIDS patients have been analysed.

12.
13.
For the analysis of a time-to-event endpoint in a single-arm or randomized clinical trial it is generally perceived that interpretation of a given estimate of the survival function, or the comparison between two groups, hinges on some quantification of the amount of follow-up. Typically, a median of some loosely defined quantity is reported. However, whatever median is reported typically does not answer the question(s) trialists actually have about follow-up quantification. In this paper, inspired by the estimand framework, we formulate a comprehensive list of relevant scientific questions that trialists have when reporting time-to-event data. We illustrate how these questions should be answered, and that reference to an unclearly defined follow-up quantity is not needed at all. In drug development, key decisions are made based on randomized controlled trials, and we therefore also discuss relevant scientific questions not only when looking at a time-to-event endpoint in one group, but also for comparisons. We find that different thinking about some of the relevant scientific questions around follow-up is required depending on whether a proportional hazards assumption can be made or other patterns of survival functions are anticipated, for example, delayed separation, crossing survival functions, or the potential for cure. We conclude the paper with practical recommendations.

14.
In May 2013, GlaxoSmithKline (980 Great West Road, Brentford, Middlesex, TW8 9GS, UK) established a new online system to enable scientific researchers to request access to anonymised patient level clinical trial data. Providing access to individual patient data collected in clinical trials enables conduct of further research that may help advance medical science or improve patient care. In turn, this helps ensure that the data provided by research participants are used to maximum effect in the creation of new knowledge and understanding. However, when providing access to individual patient data, maintaining the privacy and confidentiality of research participants is critical. This article describes the approach we have taken to prepare data for sharing with other researchers in a way that minimises risk with respect to the privacy and confidentiality of research participants, ensures compliance with current data privacy legal requirements and yet retains utility of the anonymised datasets for research purposes. We recognise that there are different possible approaches and that broad consensus is needed. Copyright © 2014 John Wiley & Sons, Ltd.

15.
A model to accommodate time-to-event ordinal outcomes was proposed by Berridge and Whitehead. Very few studies have adopted this approach, despite its appeal in incorporating several ordered categories of event outcome. More recently, there has been increased interest in utilizing recurrent events to analyze practical endpoints in the study of disease history and to help quantify the changing pattern of disease over time. For example, in studies of heart failure, the analysis of a single fatal event no longer provides sufficient clinical information to manage the disease. Similarly, the grade/frequency/severity of adverse events may be more important than simply prolonged survival in studies of toxic therapies in oncology. We propose an extension of the ordinal time-to-event model to allow for multiple/recurrent events in the case of marginal models (where all subjects are at risk for each recurrence, irrespective of whether they have experienced previous recurrences) and conditional models (subjects are at risk of a recurrence only if they have experienced a previous recurrence). These models rely on marginal and conditional estimates of the instantaneous baseline hazard and provide estimates of the probabilities of an event of each severity for each recurrence over time. We outline how confidence intervals for these probabilities can be constructed and illustrate how to fit these models and provide examples of the methods, together with an interpretation of the results.

16.
If interest lies in reporting absolute measures of risk from time-to-event data then obtaining an appropriate approximation to the shape of the underlying hazard function is vital. It has previously been shown that restricted cubic splines can be used to approximate complex hazard functions in the context of time-to-event data. The degree of complexity for the spline functions is dictated by the number of knots that are defined. We highlight through the use of a motivating example that complex hazard function shapes are often required when analysing time-to-event data. Through the use of simulation, we show that provided a sufficient number of knots are used, the approximated hazard functions given by restricted cubic splines fit closely to the true function for a range of complex hazard shapes. The simulation results also highlight the insensitivity of the estimated relative effects (hazard ratios) to the correct specification of the baseline hazard.
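A restricted cubic spline is a truncated-power cubic spline constrained to be linear beyond the boundary knots, which is what keeps the fitted hazard stable in the tails. Below is a minimal basis implementation in the unnormalized truncated-power form (knot locations hypothetical); the tail linearity can be checked directly with a second difference:

```python
def rcs_basis(x, knots, j):
    """Restricted cubic spline basis term j (0-based, j < len(knots) - 2),
    unnormalized truncated-power form; linear beyond the boundary knots
    by construction."""
    def pos3(u):
        return max(u, 0.0) ** 3
    tk, tk1 = knots[-1], knots[-2]
    d = tk - tk1
    tj = knots[j]
    return (pos3(x - tj)
            - pos3(x - tk1) * (tk - tj) / d
            + pos3(x - tk) * (tk1 - tj) / d)

knots = [0.0, 1.0, 2.0, 3.0]
# Evaluate well beyond the last knot: the basis should be exactly linear there.
vals = [rcs_basis(x, knots, 0) for x in (10.0, 11.0, 12.0)]
```

A full hazard model would combine an intercept, a linear term, and `len(knots) - 2` such basis terms; more knots allow more complex hazard shapes, per the simulation findings above.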

17.
Compared with most of the existing phase I designs, the recently proposed calibration-free odds (CFO) design has been demonstrated to be robust, model-free, and easy to use in practice. However, the original CFO design cannot handle late-onset toxicities, which have been commonly encountered in phase I oncology dose-finding trials with targeted agents or immunotherapies. To account for late-onset outcomes, we extend the CFO design to its time-to-event (TITE) version, which inherits the calibration-free and model-free properties. One salient feature of CFO-type designs is to adopt game theory by competing three doses at a time, including the current dose and the two neighboring doses, while interval-based designs only use the data at the current dose and are thus less efficient. We conduct comprehensive numerical studies for the TITE-CFO design under both fixed and randomly generated scenarios. TITE-CFO shows robust and efficient performance compared with interval-based and model-based counterparts. In conclusion, the TITE-CFO design provides a robust, efficient, and easy-to-use alternative for phase I trials when the toxicity outcome is late-onset.

18.
In this paper, a stochastic individual data model is considered. It accommodates the occurrence times, reporting and settlement delays, and severity of every individual claim. This formulation gives rise to a model for the corresponding aggregate data under which classical chain ladder and Bornhuetter–Ferguson algorithms apply. A claims reserving algorithm is developed under this individual data model, and comparisons of its performance with the chain ladder and Bornhuetter–Ferguson algorithms are made to reveal the effects of using individual data instead of aggregate data. The research findings indicate a remarkable improvement in the accuracy of loss reserving, especially when the claims amounts are not too heavy-tailed.
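For reference, the classical chain ladder benchmark mentioned above estimates volume-weighted development factors from a cumulative run-off triangle and carries the last known diagonal forward. A minimal sketch on a made-up triangle (the individual data model of the paper replaces exactly this aggregate machinery):

```python
# Tiny cumulative run-off triangle: rows are accident years, columns are
# development periods; None marks future, not-yet-observed cells.
triangle = [
    [100.0, 150.0, 165.0],
    [110.0, 160.0, None],
    [120.0, None,  None],
]

def dev_factors(tri):
    """Volume-weighted chain-ladder development factors, column by column."""
    n = len(tri[0])
    factors = []
    for j in range(n - 1):
        num = sum(row[j + 1] for row in tri if row[j + 1] is not None)
        den = sum(row[j] for row in tri if row[j + 1] is not None)
        factors.append(num / den)
    return factors

def complete(tri):
    """Fill the lower-right of the triangle by applying the factors forward."""
    f = dev_factors(tri)
    out = [row[:] for row in tri]
    for row in out:
        for j in range(len(row) - 1):
            if row[j + 1] is None:
                row[j + 1] = row[j] * f[j]
    return out

full = complete(triangle)
```

The ultimate for each accident year is the last column of the completed triangle, and the reserve is the ultimate minus the latest observed diagonal.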

19.
Joint models for longitudinal data and time-to-event data have recently received considerable attention in clinical and epidemiologic studies. Our interest is in modeling the relationship between event time outcomes and internal time-dependent covariates. In practice, the longitudinal responses often follow nonlinear, fluctuating curves. Therefore, the main aim of this paper is to use penalized splines with a truncated polynomial basis to parameterize the nonlinear longitudinal process. A linear mixed-effects representation is then used to fit subject-specific curves and to control the smoothing. The association between the dropout process and longitudinal outcomes is modeled through a proportional hazards model. Two types of baseline risk functions are considered, namely a Gompertz distribution and a piecewise constant model. The resulting models are referred to as penalized spline joint models, an extension of the standard joint models. The expectation conditional maximization (ECM) algorithm is applied to estimate the parameters in the proposed models. To validate the proposed algorithm, extensive simulation studies were conducted, followed by a case study. In summary, the penalized spline joint models provide a new approach that improves on the existing standard joint models.

20.
Recurrent events in clinical trials have typically been analysed using either a multiple time-to-event method or a direct approach based on the distribution of the number of events. An area of application for these methods is exacerbation data from respiratory clinical trials. The different approaches to the analysis and the issues involved are illustrated for a large trial (n = 1465) in chronic obstructive pulmonary disease (COPD). For exacerbation rates, clinical interest centres on a direct comparison of rates for each treatment which favours the distribution-based analysis, rather than a time-to-event approach. Poisson regression has often been employed and has recently been recommended as the appropriate method of analysis for COPD exacerbations but the key assumptions often appear unreasonable for this analysis. By contrast use of a negative binomial model which corresponds to assuming a separate Poisson parameter for each subject offers a more appealing approach. Non-parametric methods avoid some of the assumptions required by these models, but do not provide appropriate estimates of treatment effects because of the discrete and bounded nature of the data.
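The negative binomial model's appeal comes from its gamma-mixture representation: give each subject their own Poisson rate drawn from a gamma distribution with mean μ and shape k, and the marginal counts are negative binomial with variance μ + μ²/k > μ, capturing the between-patient overdispersion that plain Poisson regression ignores. A simulation sketch with hypothetical rate and shape:

```python
import math
import random

random.seed(42)
mu, k = 2.0, 0.8   # hypothetical mean exacerbation rate and NB shape

def neg_binomial(mu, k):
    """One NB draw via its gamma-Poisson mixture: a subject-specific Poisson
    rate from Gamma(shape=k, mean=mu), then a Poisson count at that rate."""
    rate = random.gammavariate(k, mu / k)
    # Poisson draw by inversion of the CDF.
    u, p, x = random.random(), math.exp(-rate), 0
    c = p
    while u > c:
        x += 1
        p *= rate / x
        c += p
    return x

draws = [neg_binomial(mu, k) for _ in range(50_000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / (len(draws) - 1)
```

With μ = 2 and k = 0.8 the theoretical variance is 2 + 4/0.8 = 7, far above the Poisson value of 2, which is why the negative binomial fit is the more defensible choice for exacerbation counts.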

