Similar Literature
20 similar documents found (search time: 125 ms)
1.
When recruitment into a clinical trial is limited due to rarity of the disease of interest, or when recruitment to the control arm is limited for ethical reasons (eg, pediatric studies or important unmet medical need), exploiting historical controls to augment the prospectively collected database can be an attractive option. Statistical methods for combining historical data with randomized data, while accounting for the incompatibility between the two, have recently been proposed and remain an active field of research. The current literature lacks both a rigorous comparison between methods and guidelines about their use in practice. In this paper, we compare the existing methods based on a confirmatory phase III study design exercise done for a new antibacterial therapy with a binary endpoint and a single historical dataset. A procedure to assess the relative performance of the different methods for borrowing information from historical control data is proposed, and practical questions related to the selection and implementation of methods are discussed. Based on our examination, we found that the methods have comparable performance, but we recommend the robust mixture prior for its ease of implementation.
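For readers unfamiliar with the recommended method: a robust mixture prior for a binary endpoint (an informative component built from the historical controls plus a vague component) is conjugate, so the posterior stays a Beta mixture whose weights update by each component's marginal likelihood. A minimal Python sketch under assumed inputs; the historical Beta parameters and the 0.8 prior weight are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.special import betaln


def robust_mixture_posterior(x, n, a_hist, b_hist, w_hist=0.8):
    """Posterior for a binary response rate under a two-component robust
    mixture prior: w_hist * Beta(a_hist, b_hist) (historical component)
    + (1 - w_hist) * Beta(1, 1) (vague component), given x responders
    out of n. Returns a list of (a, b, weight) posterior components."""
    components = [(a_hist, b_hist, w_hist), (1.0, 1.0, 1.0 - w_hist)]
    updated = []
    for a, b, w in components:
        # log marginal likelihood of the data under this component
        # (the binomial coefficient cancels between components)
        log_ml = betaln(a + x, b + n - x) - betaln(a, b)
        updated.append((a + x, b + n - x, np.log(w) + log_ml))
    # normalise the mixture weights on the log scale for stability
    logw = np.array([c[2] for c in updated])
    wts = np.exp(logw - logw.max())
    wts /= wts.sum()
    return [(a, b, w) for (a, b, _), w in zip(updated, wts)]


def posterior_mean(components):
    """Mean of a Beta mixture: weighted average of component means."""
    return sum(w * a / (a + b) for a, b, w in components)
```

When the current data conflict with the historical component, the updated mixture weight shifts toward the vague component, which is the "robustness" the abstract refers to.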

2.
In this paper, we propose a multistage group sequential procedure to design survival trials using historical controls. The formula for the number of events required for historical control trial designs is derived. Furthermore, a transformed information time is proposed for trial monitoring. An example is given to illustrate the application of the proposed methods to survival trial designs using historical controls. Copyright © 2016 John Wiley & Sons, Ltd.

3.
A standard two-arm randomised controlled trial usually compares an intervention to a control treatment with equal numbers of patients randomised to each treatment arm, and only data from within the current trial are used to assess the treatment effect. Historical data are used when designing new trials and have recently been considered for use in the analysis when the required number of patients under a standard trial design cannot be achieved. Incorporating historical control data could lead to more efficient trials, reducing the number of controls required in the current study when the historical and current control data agree. However, when the data are inconsistent, there is potential for biased treatment effect estimates, inflated type I error and reduced power. We introduce two novel approaches for binary data which discount historical data based on the agreement with the current trial controls: an equivalence approach and an approach based on tail area probabilities. An adaptive design is used where the allocation ratio is adapted at the interim analysis, randomising fewer patients to control when there is agreement. The historical data are down-weighted in the analysis using the power prior approach with a fixed power. We compare operating characteristics of the proposed design to historical data methods in the literature: the modified power prior, the commensurate prior, and the robust mixture prior. The equivalence probability weight approach is intuitive and the operating characteristics can be calculated exactly. Furthermore, the equivalence bounds can be chosen to control the maximum possible inflation in type I error.
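The two building blocks above (a fixed-power prior plus an agreement-based weight) are easy to sketch for binary data. The code below is a simplified illustration, not the authors' exact procedure: the agreement weight is a Monte Carlo tail-area probability P(|p_h - p_c| < delta) under independent Beta(1, 1) posteriors, and delta is an assumed margin:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility


def power_prior_posterior(x_c, n_c, x_h, n_h, a0):
    """Posterior Beta(a, b) parameters for the current control rate when
    the historical data are down-weighted by a fixed power a0
    (a0 = 0: ignore history; a0 = 1: full pooling), starting from a
    Beta(1, 1) initial prior."""
    a = 1 + x_c + a0 * x_h
    b = 1 + (n_c - x_c) + a0 * (n_h - x_h)
    return a, b


def agreement_weight(x_c, n_c, x_h, n_h, delta=0.1, ndraw=100_000):
    """Monte Carlo estimate of P(|p_h - p_c| < delta) under independent
    Beta(1, 1) posteriors -- a stand-in for an agreement-based discount
    weight; delta and the priors are illustrative assumptions."""
    p_c = rng.beta(1 + x_c, 1 + n_c - x_c, ndraw)
    p_h = rng.beta(1 + x_h, 1 + n_h - x_h, ndraw)
    return float(np.mean(np.abs(p_h - p_c) < delta))
```

Feeding the agreement weight in as `a0` gives a data-driven discount: full borrowing when the control rates agree, little borrowing under conflict.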

4.
In the absence of placebo-controlled trials, the efficacy of a test treatment can be alternatively examined by showing its non-inferiority to an active control; that is, the test treatment is not worse than the active control by a pre-specified margin. The margin is based on the effect of the active control over placebo in historical studies. In other words, the non-inferiority setup involves a network of direct and indirect comparisons between test treatment, active controls, and placebo. Given this framework, we consider a Bayesian network meta-analysis that models the uncertainty and heterogeneity of the historical trials into the non-inferiority trial in a data-driven manner through the use of the Dirichlet process and power priors. Depending on whether placebo was present in the historical trials, two cases of non-inferiority testing are discussed that are analogs of the synthesis and fixed-margin approach. In each of these cases, the model provides a more reliable estimate of the control given its effect in other trials in the network, and, in the case where placebo was only present in the historical trials, the model can predict the effect of the test treatment over placebo as if placebo had been present in the non-inferiority trial. It can further answer other questions of interest, such as comparative effectiveness of the test treatment among its comparators. More importantly, the model provides an opportunity for disproportionate randomization or the use of small sample sizes by allowing borrowing of information from a network of trials to draw explicit conclusions on non-inferiority. Copyright © 2015 John Wiley & Sons, Ltd.

5.
Tests for trend in tumour response rates with increasing dose in long-term laboratory studies of carcinogenicity that take into account historical control information are discussed. The theoretical basis for these tests is described, and their small-sample properties evaluated using computer simulation. The performance of these tests is also evaluated using data from carcinogenicity experiments conducted under the U.S. National Toxicology Program. Based on these results, recommendations are made as to the most appropriate tests in practice. When the assumptions underlying these tests are satisfied, the use of historical control information is shown to result in an increase in power relative to the classical Cochran-Armitage test that is widely used without historical controls.

6.
While randomized controlled trials (RCTs) are the gold standard for estimating treatment effects in medical research, there is increasing use of and interest in using real-world data for drug development. One such use case is the construction of external control arms for evaluation of efficacy in single-arm trials, particularly in cases where randomization is either infeasible or unethical. However, it is well known that treated patients in non-randomized studies may not be comparable to control patients—on either measured or unmeasured variables—and that the underlying population differences between the two groups may result in biased treatment effect estimates as well as increased variability in estimation. To address these challenges for analyses of time-to-event outcomes, we developed a meta-analytic framework that uses historical reference studies to adjust a log hazard ratio estimate in a new external control study for its additional bias and variability. The set of historical studies is formed by constructing external control arms for historical RCTs, and a meta-analysis compares the trial controls to the external control arms. Importantly, a prospective external control study can be performed independently of the meta-analysis using standard causal inference techniques for observational data. We illustrate our approach with a simulation study and an empirical example based on reference studies for advanced non-small cell lung cancer. In our empirical analysis, external control patients had lower survival than trial controls (hazard ratio: 0.907), but our methodology is able to correct for this bias. An implementation of our approach is available in the R package ecmeta.

7.
In this paper we present an approach to using historical control data to augment information from a randomized controlled clinical trial when it is not possible to continue the control regimen to obtain the most reliable and valid assessment of long-term treatment effects. Using an adjustment procedure to the historical control data, we investigate a method of estimating the long-term survival function for the clinical trial control group and for evaluating the long-term treatment effect. The suggested method is simple to interpret, and is particularly motivated in clinical trial settings where ethical considerations preclude the long-term follow-up of placebo controls. A simulation study reveals that the bias in parameter estimates that arises in the setting of group sequential monitoring will be attenuated when long-term historical control information is used in the proposed manner. Data from the first and second National Wilms' Tumor studies are used to illustrate the method.

8.
In the context of vaccine efficacy trials, where the incidence rate is very low and a very large sample size is usually expected, incorporating historical data into a new trial is extremely attractive as a way to reduce the sample size and increase estimation precision. Nevertheless, for some infectious diseases, seasonal change in incidence rates poses a huge challenge to borrowing historical data, and a critical question is how to take proper advantage of historical data borrowing with acceptable tolerance to the between-trial heterogeneity that commonly arises from seasonal disease transmission. In this article, we extend a probability-based power prior, which determines the amount of information to be borrowed based on the agreement between the historical and current data, to make it applicable when either a single or multiple historical trials are available, with a constraint on the amount of historical information to be borrowed. Simulations are conducted to compare the performance of the proposed method with other methods, including the modified power prior (MPP), meta-analytic-predictive (MAP) prior, and commensurate prior methods. Furthermore, we illustrate the application of the proposed method for trial design in a practical setting.

9.
Existing statutes in the United States and Europe require manufacturers to demonstrate evidence of effectiveness through the conduct of adequate and well-controlled studies to obtain marketing approval of a therapeutic product. What constitutes adequate and well-controlled studies is usually interpreted as randomized controlled trials (RCTs). However, these trials are sometimes unfeasible because of their size, duration, cost, patient preference, or, in some cases, ethical concerns. For example, RCTs may not be fully powered in rare diseases or in infections caused by multidrug-resistant pathogens because of the low number of enrollable patients. In this case, data available from external controls (including historical controls and observational studies or data registries) can complement the information provided by RCTs. Propensity score matching methods can be used to select or "borrow" additional patients from the external controls to maintain a one-to-one randomization between the treatment arm and active control, by matching the new treatment and control units based on a set of measured covariates, i.e., model-based pairing of treatment and control units that are similar in terms of their observable pretreatment characteristics. To this end, two matching schemes based on propensity scores are explored and applied to a real clinical data example with the objective of using historical or external observations to augment data in a trial where the randomization is disproportionate or asymmetric.
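The matching step the abstract describes can be sketched in a few lines: fit a propensity model for trial membership, then greedily pair each trial unit with the nearest unmatched external control within a caliper. This is a generic illustration under assumed settings (the caliper of 0.1 on the probability scale and the greedy order are not from the paper):

```python
import numpy as np


def _propensity(X, z, iters=25):
    """Logistic regression fitted by Newton's method (IRLS); returns
    the estimated P(trial membership | covariates) for every unit."""
    Xd = np.c_[np.ones(len(X)), X]              # add an intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        W = p * (1.0 - p)
        H = Xd.T @ (Xd * W[:, None]) + 1e-8 * np.eye(Xd.shape[1])
        beta += np.linalg.solve(H, Xd.T @ (z - p))
    return 1.0 / (1.0 + np.exp(-Xd @ beta))


def ps_match(X_trial, X_external, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching of external controls to
    trial units on the propensity score; returns (trial, external)
    index pairs. One simple matching scheme, not the paper's exact one."""
    n_t = len(X_trial)
    ps = _propensity(np.vstack([X_trial, X_external]),
                     np.r_[np.ones(n_t), np.zeros(len(X_external))])
    ps_t, ps_e = ps[:n_t], ps[n_t:]
    available = set(range(len(X_external)))
    pairs = []
    for i in np.argsort(ps_t)[::-1]:            # highest scores first
        if not available:
            break
        j = min(available, key=lambda k: abs(ps_t[i] - ps_e[k]))
        if abs(ps_t[i] - ps_e[j]) <= caliper:   # enforce the caliper
            pairs.append((int(i), j))
            available.remove(j)
    return pairs
```

Matching without replacement, as here, keeps the borrowed controls distinct, which is what allows the augmented arm to mimic one-to-one randomization.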

10.
A parametric model is developed for analyzing the updated historical (ratio of identicals) imputation strategy in a periodic survey. A test is then developed for comparing alternative formulations of the historical values. The benefits of exponentially smoothing in this context are demonstrated using monthly gasoline sales data.

11.
Score statistics utilizing historical control data have been proposed to test for increasing trend in tumour occurrence rates in laboratory carcinogenicity studies. Novel invariance arguments are used to confirm, under slightly weaker conditions, previously established asymptotic distributions (mixtures of normal distributions) of tests unconditional on the tumour response rate in the concurrent control group. Conditioning on the control response rate, an ancillary statistic, leads to a new conditional limit theorem in which the test statistic converges to an unknown random variable. Because of this, a subasymptotic approximation to the conditional limiting distribution is also considered. The adequacy of these large-sample approximations in finite samples is evaluated using computer simulation. Bootstrap methods for use in finite samples are also proposed. The application of the conditional and unconditional tests is illustrated using bioassay data taken from the literature. The results presented in this paper are used to formulate recommendations for the use of tests for trend with historical controls in practice.

12.
We propose a monitoring procedure to test for the constancy of the correlation coefficient of a sequence of random variables. The idea of the method is that a historical sample is available and the goal is to monitor for changes in the correlation as new data become available. We introduce a detector which is based on the first hitting time of a CUSUM-type statistic over a suitably constructed threshold function. We derive the asymptotic distribution of the detector and show that the procedure detects a change with probability approaching unity as the length of the historical period increases. The method is illustrated by Monte Carlo experiments and the analysis of a real application with the log-returns of the Standard & Poor's 500 (S&P 500) and IBM stock assets.
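The monitoring idea can be made concrete with a toy detector: estimate the baseline correlation and standardisation from the historical sample, accumulate the centred cross-products of new observations, and signal at the first hitting time over a growing threshold. The threshold function and constant below are simplified stand-ins, not the calibrated boundary derived in the paper:

```python
import numpy as np


def monitor_correlation(hist, stream, crit=2.0):
    """Toy CUSUM-type monitor for a change in correlation. `hist` is an
    (m, 2) historical sample fixing the baseline correlation rho0 and
    the standardisation; `stream` is iterated as new (x, y) pairs.
    Returns the first hitting time k at which the CUSUM exceeds a
    sqrt-scaled threshold, or None if no change is signalled. The
    constant `crit` is illustrative, not a calibrated critical value."""
    hx, hy = hist[:, 0], hist[:, 1]
    rho0 = np.corrcoef(hx, hy)[0, 1]
    mx, sx = hx.mean(), hx.std()
    my, sy = hy.mean(), hy.std()
    m = len(hist)
    cusum = 0.0
    for k, (x, y) in enumerate(stream, start=1):
        # centred cross-product of standardised observations
        cusum += ((x - mx) / sx) * ((y - my) / sy) - rho0
        if abs(cusum) > crit * np.sqrt(m) * max(1.0, np.sqrt(k / m)):
            return k      # first hitting time: change signalled
    return None           # monitoring ended without a signal
```

Under the null the increments have mean roughly zero, so the CUSUM behaves like a random walk below the boundary; after a correlation shift it drifts and crosses quickly.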

13.
To design a phase III study with a final endpoint and calculate the required sample size for the desired probability of success, we need a good estimate of the treatment effect on the endpoint. It is prudent to fully utilize all available information, including the historical and phase II information on the treatment as well as external data on the other treatments. It is not uncommon for a phase II study to use a surrogate endpoint as the primary endpoint and to have no or limited data for the final endpoint. On the other hand, external information from other studies of the other treatments on the surrogate and final endpoints may be available to establish a relationship between the treatment effects on the two endpoints. Through this relationship, making full use of the surrogate information may enhance the estimate of the treatment effect on the final endpoint. In this research, we propose a bivariate Bayesian analysis approach to deal with the problem comprehensively. A dynamic borrowing approach is considered to regulate the amount of historical data and surrogate information borrowed based on the level of consistency. A much simpler frequentist method is also discussed. Simulations are conducted to compare the performances of the different approaches. An example is used to illustrate the applications of the methods.

14.
The FDA released the final guidance on noninferiority trials in November 2016. In noninferiority trials, validity of the assessment of the efficacy of the test treatment depends on the control treatment's efficacy. Therefore, it is critically important that there be a reliable estimate of the control treatment effect—which is generally obtained from historical trials, and often assumed to hold in the current setting (the assay constancy assumption). Validating the constancy assumption requires clinical data, which are typically lacking. The guidance acknowledges that “lack of constancy can occur for many reasons.” We clarify the objectives of noninferiority trials. We conclude that correction for bias, rather than assay constancy, is critical to conducting valid noninferiority trials. We propose that assay constancy not be assumed and discounting or thresholds be used to address concern about loss of historical efficacy. Examples are provided for illustration.

15.
Traditional vaccine efficacy trials usually use fixed designs with fairly large sample sizes. Recruiting a large number of subjects requires longer time and higher costs. Furthermore, vaccine developers are more than ever facing the need to accelerate vaccine development to fulfill the public's medical needs. A possible approach to accelerate development is to use the method of dynamic borrowing of historical controls in clinical trials. In this paper, we evaluate the feasibility and the performance of this approach in vaccine development by retrospectively analyzing two real vaccine studies: a relatively small immunological trial (typical early phase study) and a large vaccine efficacy trial (typical Phase 3 study) assessing prophylactic human papillomavirus vaccine. Results are promising, particularly for early development immunological studies, where the adaptive design is feasible, and control of type I error is less relevant.

16.
Graphical methods have played a central role in the development of statistical theory and practice. This presentation briefly reviews some of the highlights in the historical development of statistical graphics and gives a simple taxonomy that can be used to characterize the current use of graphical methods. This taxonomy is used to describe the evolution of the use of graphics in some major statistical and related scientific journals.

Some recent advances in the use of graphical methods for statistical analysis are reviewed, and several graphical methods for the statistical presentation of data are illustrated, including the use of multicolor maps.

17.
Two approaches have been used for designing spatial surveys to detect a target. The classical approach controls the probability of missing a target that exists; a Bayesian approach controls the probability that a target exists given that none was seen. In both cases, information about the likely size of the target can reduce sampling requirements. In this paper, previous results are summarized and then used to assess the risk that Roman remains could be present at sites scheduled for development in Greater London.
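The two design criteria contrasted in this abstract reduce, in the simplest textbook setting of independent sample points, to two short formulas. The per-point hit probability and the prior below are illustrative inputs, not values from the paper:

```python
import math


def classical_n(p_hit, alpha=0.05):
    """Smallest number of independent sample points so that a target,
    hit with probability p_hit per point, is missed with probability
    at most alpha: the classical design criterion (simplified)."""
    return math.ceil(math.log(alpha) / math.log(1.0 - p_hit))


def bayes_posterior_exists(p_hit, n, prior=0.5):
    """P(target exists | n sample points, none hit it): the quantity
    the Bayesian design controls. The prior is an assumed input."""
    miss = (1.0 - p_hit) ** n
    return prior * miss / (prior * miss + (1.0 - prior))
```

Information about the likely target size enters through `p_hit`: a larger target is hit by a random point with higher probability, so fewer points are needed under either criterion.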

18.
Developing prediction bounds for surgery duration is difficult due to the large number of distinct procedures. The variety of procedures at a multi-speciality surgery suite means that even with several years of historical data a large fraction of surgical cases will have little or no historical data for use in predicting case duration. Bayesian methods can be used to combine historical data with expert judgement to provide estimates to overcome this, but eliciting expert opinion for a probability distribution can be difficult. We combine expert judgement, expert classification of procedures by complexity category and historical data in a Markov Chain Monte Carlo model and test it against one year of actual surgery cases at a multi-speciality surgical suite.

19.
We suppose a case is to be compared with controls on the basis of a test that gives a single discrete score. The score of the case may tie with the scores of one or more controls. However, scores relate to an underlying quantity of interest that is continuous and so an observed score can be treated as the rounded value of an underlying continuous score. This makes it reasonable to break ties. This paper addresses the problem of forming a confidence interval for the proportion of controls that have a lower underlying score than the case. In the absence of ties, this is the standard task of making inferences about a binomial proportion and many methods for forming confidence intervals have been proposed. We give a general procedure to extend these methods to handle ties, under the assumption that ties may be broken at random. Properties of the procedure are given and an example examines its performance when it is used to extend several methods. A real example shows that an estimated confidence interval can be much too small if the uncertainty associated with ties is not taken into account. Software implementing the procedure is freely available.
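One simple way to extend a no-ties interval method under the random tie-breaking assumption is to average its endpoints over the binomial distribution of the tie-broken count. The sketch below does this for the Clopper-Pearson interval; the paper's general procedure may differ in detail:

```python
from scipy.stats import beta, binom


def cp_interval(x, n, conf=0.95):
    """Clopper-Pearson interval for a binomial proportion with
    x successes out of n."""
    a = (1.0 - conf) / 2.0
    lo = 0.0 if x == 0 else beta.ppf(a, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1.0 - a, x + 1, n - x)
    return lo, hi


def tie_broken_interval(n_lower, n_tied, n, conf=0.95):
    """Interval for the proportion of the n controls whose underlying
    score is below the case's, given n_lower controls strictly below
    and n_tied controls tied with the case. Each tie is broken
    independently with probability 1/2, and the Clopper-Pearson
    endpoints are averaged over the tie-break distribution."""
    lo = hi = 0.0
    for t in range(n_tied + 1):
        w = binom.pmf(t, n_tied, 0.5)       # P(t ties broken downward)
        l, h = cp_interval(n_lower + t, n, conf)
        lo += w * l
        hi += w * h
    return lo, hi
```

With no ties the averaged interval collapses to the plain Clopper-Pearson interval, so the extension is backward compatible with the underlying method.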

20.
It is important to study historical temperature time series prior to the industrial revolution so that one can view the current global warming trend from a long-term historical perspective. Because there are no instrumental records of such historical temperature data, climatologists have been interested in reconstructing historical temperatures using various proxy time series. In this paper, the authors examine a state-space model approach for historical temperature reconstruction which not only makes use of the proxy data but also information on external forcings. A challenge in the implementation of this approach is the estimation of the parameters in the state-space model. The authors developed two maximum likelihood methods for parameter estimation and studied the efficiency and asymptotic properties of the associated estimators through a combination of theoretical and numerical investigations. The Canadian Journal of Statistics 38: 488–505; 2010 © 2010 Crown in the right of Canada

