20 similar documents found.
1.
When a candidate predictive marker is available, but evidence on its predictive ability is not sufficiently reliable, all-comers trials with marker stratification are frequently conducted. We propose a framework for planning and evaluating prospective testing strategies in confirmatory, phase III marker-stratified clinical trials based on a natural assumption about the heterogeneity of treatment effects across marker-defined subpopulations, where weak rather than strong control is permitted for multiple population tests. For phase III marker-stratified trials, it is expected that treatment efficacy is established in a particular patient population, possibly a marker-defined subpopulation, and that the marker's accuracy is assessed when the marker is used to restrict the indication or labelling of the treatment to a marker-based subpopulation, i.e., assessment of the clinical validity of the marker. In this paper, we develop statistical testing strategies based on criteria explicitly designed for the marker assessment, including those examining treatment effects in marker-negative patients. As existing and newly developed statistical testing strategies can assert treatment efficacy for either the overall patient population or the marker-positive subpopulation, we also develop criteria for evaluating the operating characteristics of the statistical testing strategies based on the probabilities of asserting treatment efficacy across marker subpopulations. Numerical evaluations comparing the statistical testing strategies based on the developed criteria are provided.
2.
Heiko Götte, Armin Schüler, Marietta Kirchner, Meinhard Kieser. Pharmaceutical Statistics 2015, 14(6):515-524
In recent years, high failure rates have been observed in phase III trials. One of the main reasons is overoptimistic assumptions in the planning of phase III, resulting from limited phase II information and/or unawareness of realistic success probabilities. We present an approach for planning a phase II trial in a time-to-event setting that considers the whole phase II/III clinical development programme. We derive stopping boundaries after phase II that minimise the number of events under side conditions on the conditional probabilities of a correct go/no-go decision after phase II as well as the conditional success probabilities for phase III. In addition, we give general recommendations for the choice of phase II sample size. Our simulations show that the unconditional probabilities of a go/no-go decision as well as the unconditional success probabilities for phase III are influenced by the number of events observed in phase II. However, choosing more than 150 events in phase II does not appear necessary, as the impact on these probabilities then becomes quite small. We recommend considering aspects such as the number of compounds in phase II and the resources available when determining the sample size. The lower the number of compounds and the lower the resources for phase III, the higher the investment in phase II should be. Copyright © 2015 John Wiley & Sons, Ltd.
3.
Owing to increased costs and competitive pressure, drug development is becoming more and more challenging. There is therefore a strong need to improve the efficiency of clinical research by developing and applying methods for quantitative decision making. In this context, the integrated planning of phase II/III programs plays an important role, as numerous quantities that are crucial for cost, benefit, and program success can be varied. Recently, a utility-based framework has been proposed for the optimal planning of phase II/III programs that puts the choice of decision boundaries and phase II sample sizes on a quantitative basis. However, this method is restricted to studies with a single time-to-event endpoint. We generalize the procedure to the setting of clinical trials with multiple endpoints and (asymptotically) normally distributed test statistics. Optimal phase II sample sizes and go/no-go decision rules are provided for both the "all-or-none" and "at-least-one" win criteria. Application of the proposed method is illustrated by drug development programs in the fields of Alzheimer disease and oncology.
4.
A late-stage clinical development program typically contains multiple trials. Conventionally, the program's success or failure may not be known until the completion of all trials. Nowadays, interim analyses are often used to allow evaluation of early success and/or futility for each individual study by calculating conditional power, predictive power, and other indexes. This presents a good opportunity to estimate the probability of program success (POPS) for the entire clinical development earlier. The sponsor may abandon the program early if the estimated POPS is very low, permitting resource savings and reallocation to other products. We provide a method to calculate the probability of success (POS) at the individual study level, and also the POPS for clinical programs with multiple trials with binary outcomes. Methods for calculating variation and confidence measures of POS and POPS, and the timing of interim analyses, are discussed and evaluated through simulations. We also illustrate our approaches retrospectively on historical data from a completed clinical program for depression. Copyright © 2015 John Wiley & Sons, Ltd.
5.
Bayesian predictive power: choice of prior and some recommendations for its use as probability of success in drug development
Bayesian predictive power, the expectation of the power function with respect to a prior distribution for the true underlying effect size, is routinely used in drug development to quantify the probability of success of a clinical trial. Choosing the prior is crucial for the properties and interpretability of Bayesian predictive power. We review recommendations on the choice of prior for Bayesian predictive power and explore its features as a function of the prior. The density of power values induced by a given prior is derived analytically and its shape characterized. We find that for a typical clinical trial scenario, this density has a u-shape very similar, but not equal, to a beta distribution. Alternative priors are discussed, and practical recommendations to assess the sensitivity of Bayesian predictive power to its input parameters are provided. Copyright © 2016 John Wiley & Sons, Ltd.
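The quantity this abstract studies is straightforward to sketch numerically. The following is a minimal illustration (not the paper's implementation) of Bayesian predictive power for a one-sided z-test with a normal prior on the effect size; all numerical inputs (prior N(0.3, 0.15^2), n = 100, sigma = 1, alpha = 0.025) are hypothetical:

```python
from statistics import NormalDist
import random

norm = NormalDist()

def power(theta, n, sigma, alpha=0.025):
    """Power of a one-sided z-test of H0: theta <= 0 when the true effect is theta."""
    z_alpha = norm.inv_cdf(1 - alpha)
    return norm.cdf(theta * n ** 0.5 / sigma - z_alpha)

def assurance_mc(mu0, tau, n, sigma, alpha=0.025, draws=100_000, seed=1):
    """Monte Carlo Bayesian predictive power: average the power over theta ~ N(mu0, tau^2)."""
    rng = random.Random(seed)
    return sum(power(rng.gauss(mu0, tau), n, sigma, alpha)
               for _ in range(draws)) / draws

def assurance_exact(mu0, tau, n, sigma, alpha=0.025):
    """Closed form for the normal-prior, known-variance case."""
    z_alpha = norm.inv_cdf(1 - alpha)
    se = sigma / n ** 0.5                    # standard error of the effect estimate
    return norm.cdf((mu0 - z_alpha * se) / (se ** 2 + tau ** 2) ** 0.5)

# Hypothetical inputs: prior N(0.3, 0.15^2) on the effect, n = 100, sigma = 1.
mc = assurance_mc(0.3, 0.15, 100, 1.0)
ex = assurance_exact(0.3, 0.15, 100, 1.0)
```

Under these inputs the predictive power (about 0.72) sits well below the conventional power evaluated at the prior mean (about 0.85), illustrating how prior uncertainty dilutes the probability of success.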
6.
Chuang-Stein C. Pharmaceutical Statistics 2006, 5(4):305-309
This paper describes the distinction between the concept of statistical power and the probability of obtaining a successful trial. While one can choose a very high statistical power to detect a certain treatment effect, high statistical power does not necessarily translate into a high success probability if the treatment effect to detect is based on the perceived ability of the drug candidate. The crucial factor hinges on our knowledge of the drug's ability to deliver the effect used to power the study. The paper discusses a framework to calculate the 'average success probability' and demonstrates how uncertainty about the treatment effect can affect the average success probability for a confirmatory trial. It complements an earlier work by O'Hagan et al. (Pharmaceutical Statistics 2005; 4:187-201) published in this journal. Computer code to calculate the average success probability is included.
7.
The probability of success, or average power, describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II translates into a positive bias in the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in the probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to obtain the necessary information about the risks and chances associated with such subgroup selection designs.
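The selection effect driving this abstract (sometimes called the winner's curse) can be illustrated with a tiny hypothetical simulation, far simpler than the authors' full program-level setup: two subgroups share the same true effect, yet the subgroup chosen for having the most extreme phase II estimate systematically looks better than it is.

```python
import random
from statistics import mean

def simulate(true_effect=0.2, se=0.1, n_subgroups=2, reps=20_000, seed=7):
    """Draw phase II effect estimates for several subgroups that all share the
    same true effect, then record the estimate of the best-looking subgroup."""
    rng = random.Random(seed)
    selected = []
    for _ in range(reps):
        estimates = [rng.gauss(true_effect, se) for _ in range(n_subgroups)]
        selected.append(max(estimates))   # pick the most extreme subgroup result
    return mean(selected)

# Average selected estimate exceeds the common true effect of 0.2
# (analytically 0.2 + 0.1/sqrt(pi) for two subgroups).
sel = simulate()
```

Feeding such an inflated estimate into a probability-of-success calculation is exactly how the positive bias the abstract analyzes arises.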
8.
Phase II trials evaluate whether a new drug or therapy is worth pursuing further, or whether certain treatments are feasible. A typical phase II trial is a single-arm (open-label) trial with a binary clinical endpoint (response to therapy). Although many oncology phase II clinical trials are designed with a two-stage procedure, multi-stage designs for phase II cancer clinical trials are now feasible owing to the increased capability of data capture. Such designs adjust for multiple analyses and variations in analysis time, and provide greater flexibility, such as minimizing the number of patients treated on an ineffective therapy and identifying the minimum number of patients needed to evaluate whether the trial would warrant further development. In most NIH-sponsored studies, the early stopping rule is determined so that the number of patients treated on an ineffective therapy is minimized. In pharmaceutical trials, it is also important to know as early as possible whether the trial is highly promising and how likely the early conclusion is to be sustained. Although various methods are available to address these issues, practitioners often use disparate methods for different issues and do not realize that a single unified method exists. This article shows how to utilize a unified approach via a fully sequential procedure, the sequential conditional probability ratio test, to address the multiple needs of a phase II trial. We show that the fully sequential program can be used to derive an optimized, efficient multi-stage design for either low or high activity, to identify the minimum number of patients required to assess whether a new drug warrants further study, and to adjust for unplanned interim analyses. In addition, we calculate a probability of discordance, that is, the probability that the statistical test would conclude otherwise should the trial continue to the planned end, usually at the sample size of a fixed-sample design. This probability can be used to aid decision making in a drug development program. All computations are based on the exact binomial distribution.
9.
Minghua Shan. Pharmaceutical Statistics 2021, 20(3):485-498
Single-arm one- or multi-stage study designs are commonly used in phase II oncology development when the primary outcome of interest is tumor response, a binary variable. Both two- and three-outcome designs are available. The Simon two-stage design is a well-known example of a two-outcome design. The objective of a two-outcome trial is to reject either the null hypothesis that the objective response rate (ORR) is less than or equal to a pre-specified low, uninteresting rate, or the alternative hypothesis that the ORR is greater than or equal to some target rate. Three-outcome designs proposed by Sargent et al. allow a middle gray decision zone that rejects neither hypothesis in order to reduce the required study size. We propose new two- and three-outcome designs with continual monitoring based on Bayesian posterior probability that meet frequentist specifications such as type I and II error rates. Futility and/or efficacy boundaries are based on confidence functions, which can require higher levels of evidence for early versus late stopping and have clear and intuitive interpretations. We search within a class of such procedures for optimal designs that minimize a given loss function, such as the average sample size under the null hypothesis. We present several examples, compare our design with other procedures in the literature, and show that our design has good operating characteristics.
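Designs of this family are evaluated by exact binomial calculation. As a point of reference (my sketch, not the paper's new procedure), here are the exact operating characteristics of a classical Simon two-stage design; the constants used are the commonly tabulated optimal design for p0 = 0.10 versus p1 = 0.30 at alpha = 0.05 and 80% power:

```python
from math import comb

def binom_pmf(k, n, p):
    """Exact binomial probability mass function."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def simon_reject_prob(r1, n1, r, n, p):
    """Probability of declaring the drug promising under true response rate p:
    continue past stage 1 only if X1 > r1, then declare success if the
    total number of responses X1 + X2 exceeds r."""
    n2 = n - n1
    prob = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        need = max(0, r + 1 - x1)    # stage-2 responses still required
        tail = sum(binom_pmf(x2, n2, p) for x2 in range(need, n2 + 1))
        prob += binom_pmf(x1, n1, p) * tail
    return prob

# Tabulated Simon optimal design for p0 = 0.10 vs p1 = 0.30: stop after 10
# patients if <= 1 response; otherwise treat 19 more and declare the drug
# promising if more than 5 responses are seen in total (n = 29).
alpha = simon_reject_prob(r1=1, n1=10, r=5, n=29, p=0.10)
power = simon_reject_prob(r1=1, n1=10, r=5, n=29, p=0.30)
```

The same exact-binomial machinery underlies both the two-outcome designs discussed here and the continual-monitoring boundaries the paper proposes.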
10.
The term 'futility' refers to the inability of a clinical trial to achieve its objectives. In particular, stopping a clinical trial when the interim results suggest that it is unlikely to achieve statistical significance can save resources that could be used on more promising research. Various approaches have been proposed to assess futility, including stochastic curtailment, predictive power, predictive probability, and group sequential methods. In this paper, we describe and contrast these approaches, and discuss several issues associated with futility analyses, such as ethical considerations, whether type I error can or should be reclaimed, one-sided versus two-sided futility rules, and the impact of futility analyses on power.
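As a concrete illustration of the stochastic-curtailment idea mentioned here, the following minimal sketch (mine, not the paper's) computes conditional power in the standard Brownian-motion formulation of a group sequential trial, under the common "current trend continues" assumption:

```python
from statistics import NormalDist

norm = NormalDist()

def conditional_power(z_interim, info_frac, alpha=0.025, drift=None):
    """Conditional power of a one-sided test at the final analysis, given the
    interim z-statistic at information fraction t = info_frac. By default the
    drift is set to the current trend z_interim / sqrt(t), the assumption
    used in stochastic curtailment."""
    t = info_frac
    if drift is None:
        drift = z_interim / t ** 0.5          # current-trend drift estimate
    z_final = norm.inv_cdf(1 - alpha)
    # B(1) | B(t) ~ N(B(t) + drift*(1 - t), 1 - t), where B(t) = z_interim*sqrt(t)
    mean = z_interim * t ** 0.5 + drift * (1 - t)
    return 1 - norm.cdf((z_final - mean) / (1 - t) ** 0.5)
```

A weak interim result such as z = 0.5 at half the information yields a conditional power of only a few percent, the kind of value that triggers a futility stop; a strong interim result such as z = 2.5 yields conditional power near one.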
11.
Journal of Statistical Computation and Simulation 2012, 82(11):1483-1493
This article considers the construction of level 1 − α fixed-width 2d confidence intervals for a Bernoulli success probability p, assuming no prior knowledge about p, so that p can be anywhere in the interval [0, 1]. It is shown that some fixed-width 2d confidence intervals that combine the sequential sampling of Hall [Asymptotic theory of triple sampling for sequential estimation of a mean, Ann. Stat. 9 (1981), pp. 1229-1238] with the fixed-sample-size confidence intervals of Agresti and Coull [Approximate is better than 'exact' for interval estimation of binomial proportions, Am. Stat. 52 (1998), pp. 119-126], Wilson [Probable inference, the law of succession, and statistical inference, J. Am. Stat. Assoc. 22 (1927), pp. 209-212] and Brown et al. [Interval estimation for a binomial proportion (with discussion), Stat. Sci. 16 (2001), pp. 101-133] have confidence level close to 1 − α. These sequential confidence intervals require a much smaller sample size than a fixed-sample-size confidence interval. For the coin jamming example considered, a fixed-sample-size confidence interval requires a sample size of 9457, while a sequential confidence interval requires a sample size that rarely exceeds 2042.
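The two fixed-sample ingredients named in this abstract are easy to reproduce. The sketch below implements the Wilson and Agresti-Coull intervals (the sequential sampling layer of the paper is omitted), applied to illustrative data of x = 45 successes in n = 100 trials:

```python
from statistics import NormalDist

def wilson_interval(x, n, conf=0.95):
    """Wilson (1927) score interval for a binomial proportion."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    phat = x / n
    denom = 1 + z ** 2 / n
    centre = (phat + z ** 2 / (2 * n)) / denom
    half = (z / denom) * (phat * (1 - phat) / n + z ** 2 / (4 * n ** 2)) ** 0.5
    return centre - half, centre + half

def agresti_coull_interval(x, n, conf=0.95):
    """Agresti-Coull (1998) interval: add z^2/2 pseudo-successes and
    pseudo-failures, then use the simple Wald formula."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    n_t = n + z ** 2
    p_t = (x + z ** 2 / 2) / n_t
    half = z * (p_t * (1 - p_t) / n_t) ** 0.5
    return p_t - half, p_t + half

lo_w, hi_w = wilson_interval(45, 100)
lo_ac, hi_ac = agresti_coull_interval(45, 100)
```

For moderate x/n the two intervals nearly coincide, which is why the paper can treat them as interchangeable building blocks for the sequential procedure.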
12.
In a phase III multi-center cancer clinical trial or a large public health study, the sample size is predetermined to achieve the desired power, and study participants are enrolled from tens or hundreds of participating institutions. As the accrual approaches the target size, the coordinating data center needs to project the accrual closure date on the basis of the observed accrual pattern and notify the participating sites several weeks in advance. In the past, projections were simply based on crude assessments, and conservative measures were incorporated in order to achieve the target accrual size. This approach often resulted in excessive accrual and, subsequently, an unnecessary financial burden on the study sponsors. Here we propose a discrete-time Poisson process-based method to estimate the accrual rate at the time of projection and, subsequently, the trial closure date. To ensure that the target size will be reached with high confidence, we also propose a conservative method for the closure date projection. The proposed method is illustrated through the analysis of the accrual data of the National Surgical Adjuvant Breast and Bowel Project trial B-38. The results show that application of the proposed method could help save a considerable amount of expenditure in patient management without compromising the accrual goal in multi-center clinical trials. Copyright © 2012 John Wiley & Sons, Ltd.
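A stripped-down version of the idea might look as follows. This is a simplified sketch, not the paper's discrete-time method: a constant-rate Poisson model for the point projection, and a crude normal-approximation lower confidence bound on the rate for the conservative variant; the example numbers (900 of 1,000 patients after 60 weeks) are hypothetical.

```python
def project_closure(accrued, target, elapsed_weeks):
    """Point projection of weeks remaining under a constant-rate Poisson
    model: estimate the rate as accrued / elapsed time."""
    rate = accrued / elapsed_weeks          # patients per week
    return (target - accrued) / rate        # weeks until the target is reached

def conservative_closure(accrued, target, elapsed_weeks, z=1.645):
    """Conservative projection: replace the rate by an approximate lower
    confidence bound (normal approximation to the Poisson count), so the
    target is reached with high probability by the projected date."""
    low_rate = (accrued - z * accrued ** 0.5) / elapsed_weeks
    return (target - accrued) / low_rate

# Hypothetical trial: 900 of 1000 patients accrued after 60 weeks.
point = project_closure(900, 1000, 60)
safe = conservative_closure(900, 1000, 60)
```

The conservative projection is always later than the point projection, which is the direction a coordinating center wants to err in when notifying sites of a closure date.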
13.
The goal of a phase I clinical trial in oncology is to find a dose with an acceptable dose-limiting toxicity rate. Often, when a cytostatic drug is investigated or when the maximum tolerated dose is defined using a toxicity score, the main endpoint in a phase I trial is continuous. We propose a new method for use in a dose-finding trial with continuous endpoints. The new method selects the right dose on par with other methods and provides more flexibility in assigning patients to doses in the course of the trial when the rate of accrual is fast relative to the follow-up time. Copyright © 2014 John Wiley & Sons, Ltd.
14.
It is challenging to estimate statistical power when a complicated testing strategy is used to adjust the type I error for multiple comparisons in a clinical trial. In this paper, we use the Bonferroni inequality to estimate a lower bound on the statistical power, assuming that the test statistics are approximately normally distributed and that the correlation structure among them is unknown or only partially known. The method is applied to the design of a clinical study for sample size and statistical power estimation.
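A minimal sketch of the bound (my illustration, not the paper's code): for the probability that a conjunction of one-sided z-tests all reject, the Bonferroni inequality gives P(all reject) >= 1 - sum of the marginal type II errors, regardless of the unknown correlations among the test statistics.

```python
from statistics import NormalDist

norm = NormalDist()

def marginal_power(effect, se, alpha):
    """Power of a single one-sided z-test at level alpha."""
    return 1 - norm.cdf(norm.inv_cdf(1 - alpha) - effect / se)

def power_lower_bound(effects, ses, alpha=0.025):
    """Bonferroni lower bound on P(all tests reject):
    1 - sum of the marginal type II errors, valid for any correlation
    structure among the test statistics (floored at zero)."""
    miss = sum(1 - marginal_power(e, s, alpha) for e, s in zip(effects, ses))
    return max(0.0, 1 - miss)

# Two endpoints, each sized for exactly 90% marginal power: the joint
# power is then at least 80%, whatever the correlation between them.
delta = norm.inv_cdf(0.975) + norm.inv_cdf(0.90)
lb = power_lower_bound([delta, delta], [1.0, 1.0])
```

The bound is conservative but requires no knowledge of the correlation structure, which is exactly the setting the abstract describes.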
15.
In the usual design and analysis of a phase II trial, there is no differentiation between complete response and partial response. Since a complete response is considered more desirable, this paper proposes a weighted score method that extends Simon's (1989) two-stage design to the situation where complete and partial responses are differentiated. The weight assigned to the complete response is suggested by examining the likelihood ratio (LR) statistic for testing a simple hypothesis of a trinomial distribution. Both optimal and minimax designs are tabulated for a wide range of design parameters. The weighted score approach is shown to give more efficient designs, especially when the response probability is moderate to large.
16.
Although the statistical methods enabling efficient adaptive seamless designs are increasingly well established, it is important to continue to use the endpoints and specifications that best suit the therapy area and stage of development concerned when conducting such a trial. Approaches exist that allow adaptive designs to continue seamlessly either in a subpopulation of patients or in the whole population on the basis of data obtained from the first stage of a phase II/III design; our proposed design adds extra flexibility by also allowing the trial to continue in all patients but with both the subgroup and the full population as co-primary populations. Further, methodology is presented that controls the type I error rate at less than 2.5% when the phase II and III endpoints are different but correlated time-to-event endpoints. The operating characteristics of the design are described, along with a discussion of the practical aspects in an oncology setting.
17.
Evidence-based quantitative methodologies have been proposed to inform decision-making in drug development, such as metrics for go/no-go decisions or predictions of success, identified with the statistical significance of future clinical trials. While these methodologies appropriately address some critical questions about the potential of a drug, they either consider past evidence without predicting the outcome of future trials or focus only on efficacy, failing to account for the multifaceted aspects of successful drug development. As quantitative benefit-risk assessments could enhance decision-making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probabilities of success. The predictions are based on the available evidence from previous trials, to which new hypotheses on the future development can be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision-makers. We present a fictitious but realistic example in major depressive disorder inspired by a real decision-making case.
18.
In phase III clinical trials, some adverse events may be neither rare nor unexpected and can be considered a primary measure of safety, particularly in trials of life-threatening conditions such as stroke or traumatic brain injury. In some clinical areas, efficacy endpoints may be highly correlated with safety endpoints, yet interim efficacy analyses under group sequential designs usually do not consider safety measures formally. Furthermore, safety is often monitored statistically more frequently than efficacy. Because early termination of a trial in this situation can be triggered by either efficacy or safety, the impact of safety monitoring on the error probabilities of the efficacy analyses may be nontrivial if the original design does not take the multiplicity effect into account. We estimate the actual error probabilities for a bivariate binary efficacy-safety response in large confirmatory group sequential trials. The estimated probabilities are verified by Monte Carlo simulation. Our findings suggest that the type I error for efficacy analyses decreases as the efficacy-safety correlation or the between-group difference in the safety event rate increases. In addition, although the power for efficacy is robust to misspecification of the efficacy-safety correlation, it decreases dramatically as the between-group difference in the safety event rate increases.
19.
Communications in Statistics - Theory and Methods 2012, 41(2):421-429
For clinical trials, molecular heterogeneity has recently played an increasingly important role. Many novel clinical trial designs prospectively incorporate molecular information into the evaluation of treatment effects. In this paper, an adaptive procedure incorporating a non-pre-specified genomic biomarker is employed at the interim analysis of a conventional trial. A non-pre-specified binary genomic biomarker, predictive of treatment effect, is used to classify study patients into two mutually exclusive subgroups at the interim review. According to the observations at the interim stage, adaptations such as adjusting the sample size or shifting the eligibility of study patients are then made under different scenarios.
20.
Treatment during cancer clinical trials sometimes involves the combination of multiple drugs. In addition, in recent years there has been a trend toward phase I/II trials, in which a phase I and a phase II trial are combined into a single trial to accelerate drug development. Methods for the seamless combination of the phase I and II parts are currently under investigation. In the phase II part, adaptive randomization on the basis of patient efficacy outcomes allocates more patients to the dose combinations considered to have higher efficacy. Patient toxicity outcomes are used for determining admissibility to each dose combination and are not used for selection of the dose combination itself. When the objective is to find the optimum dose combination not solely for efficacy but with regard to both toxicity and efficacy, patients need to be allocated to dose combinations with consideration of the trade-offs between toxicity and efficacy. We propose a Bayesian hierarchical model and an adaptive randomization that take the relationship between toxicity and efficacy into account. Using the toxicity and efficacy outcomes of patients, the Bayesian hierarchical model is used to estimate the toxicity and efficacy probabilities for each dose combination. We then use Bayesian moving-reference adaptive randomization on the basis of a desirability computed from the obtained estimators. Computer simulations suggest that the proposed method is likely to recommend a higher percentage of target dose combinations than a previously proposed method.