Similar Literature
20 similar documents retrieved.
1.
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to assign newly recruited patients to treatment arms more efficiently. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence of changing the prior distributions on the design. Simulation studies are used to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size.
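
A minimal sketch of the idea behind variance-minimizing allocation (Neyman allocation) for a two-arm comparison of means. This is a simplified frequentist illustration, not the authors' Bayesian algorithm; the effect size, standard deviations, and total sample size below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def optimal_allocation(sigma1, sigma2):
    """Fraction of patients assigned to arm 1 that minimizes the
    variance of the difference-in-means test statistic."""
    return sigma1 / (sigma1 + sigma2)

def power_two_arm(delta, sigma1, sigma2, n_total, r, alpha=0.05):
    """Power of a two-sided z-test when a fraction r of n_total
    patients is allocated to arm 1."""
    n1, n2 = r * n_total, (1 - r) * n_total
    se = np.sqrt(sigma1**2 / n1 + sigma2**2 / n2)
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.sf(z_crit - delta / se) + norm.cdf(-z_crit - delta / se)

sigma1, sigma2, delta, n_total = 1.0, 2.0, 0.5, 200   # hypothetical planning values
r_opt = optimal_allocation(sigma1, sigma2)            # = 1/3 here
print(f"optimal allocation to arm 1: {r_opt:.3f}")
print(f"power (optimal r): {power_two_arm(delta, sigma1, sigma2, n_total, r_opt):.3f}")
print(f"power (equal 1:1): {power_two_arm(delta, sigma1, sigma2, n_total, 0.5):.3f}")
```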

2.
Optimized group sequential designs proposed in the literature minimize the average sample size with respect to a prior distribution of the treatment effect, with the overall type I and type II error rates well controlled (i.e., at the final stage). The optimized asymmetric group sequential designs that we present here additionally impose constraints on the stopping probabilities at stage one: the probability of stopping for futility at stage one when no drug effect exists, as well as the probability of rejection at stage one when the maximum effect size is true, so that the accountability of the group sequential design is ensured from the first stage onward.

3.
4.
We propose an efficient group sequential monitoring rule for clinical trials. At each interim analysis, both efficacy and futility are evaluated through a specified loss structure together with the predicted power. The proposed design is robust to a wide range of priors and achieves the specified power with a saving in sample size compared with existing adaptive designs. A method is also proposed to obtain a reduced-bias estimator of the treatment difference for the proposed design. The new approaches hold great potential for efficiently selecting a more effective treatment in comparative trials. Operating characteristics are evaluated and compared with other group sequential designs in empirical studies. An example is provided to illustrate the application of the method.

5.
An internal pilot with interim analysis (IPIA) design combines interim power analysis (an internal pilot) with interim data analysis (two-stage group sequential). We provide IPIA methods for single-df hypotheses within the Gaussian general linear model, including one- and two-group t tests. The design allows early stopping for efficacy and futility while also re-estimating sample size based on an interim variance estimate. Study planning in small samples requires the exact and computable forms reported here. The formulation gives fast and accurate calculations of power, type I error rate, and expected sample size.
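
A minimal sketch of internal-pilot sample size re-estimation for a two-group comparison: the per-group sample size is recomputed from the interim (pooled) variance estimate using the usual normal approximation. This illustrates the general idea only; the exact small-sample IPIA calculations in the paper are more involved. All numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def reestimated_n_per_group(s2_interim, delta, alpha=0.05, power=0.90):
    """Per-group sample size for a two-sided two-sample comparison,
    using the normal approximation and the interim variance estimate."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return int(np.ceil(2 * s2_interim * (z_a + z_b) ** 2 / delta ** 2))

rng = np.random.default_rng(1)
pilot = rng.normal(loc=0.0, scale=1.3, size=(2, 20))   # hypothetical internal pilot data
s2_pooled = pilot.var(ddof=1, axis=1).mean()           # pooled interim variance estimate
print("re-estimated n per group:", reestimated_n_per_group(s2_pooled, delta=0.5))
```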

6.
Conditional power calculations are frequently used to guide the decision whether or not to stop a trial for futility or to modify the planned sample size. They ignore the information in short-term endpoints and baseline covariates, and thereby do not make fully efficient use of the information in the data. We therefore propose an interim decision procedure based on the conditional power approach that exploits the information contained in baseline covariates and short-term endpoints. We realize this by treating the estimation of the treatment effect at the interim analysis as a missing data problem. This problem is addressed by employing specific prediction models for the long-term endpoint that enable the incorporation of baseline covariates and multiple short-term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size without compromising the Type I error rate of the procedure, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, so that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re-assessment based on the inverse normal P-value combination method to control the Type I error. We support the proposal by Monte Carlo simulations based on data from a real clinical trial.
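
A minimal sketch of the standard conditional power calculation on the information scale: the probability of rejecting at the final analysis given the interim z-statistic, assuming a drift theta for the remainder of the trial. This is the classical quantity the abstract builds on, not the covariate/short-term-endpoint extension proposed there; the inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z_interim, info_frac, theta, info_max, alpha=0.025):
    """Conditional power given the interim z-statistic observed at a
    fraction info_frac of the total information info_max."""
    i_k = info_frac * info_max
    z_final = norm.ppf(1 - alpha)
    num = z_interim * np.sqrt(i_k) + theta * (info_max - i_k) - z_final * np.sqrt(info_max)
    return norm.cdf(num / np.sqrt(info_max - i_k))

# e.g. interim z = 1.0 at half the information, assumed drift 0.2 per unit information
print(conditional_power(z_interim=1.0, info_frac=0.5, theta=0.2, info_max=100))
```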

7.
An internal pilot with interim analysis (IPIA) design combines interim power analysis (an internal pilot) with interim data analysis (two-stage group sequential). We provide IPIA methods for single-df hypotheses within the Gaussian general linear model, including one- and two-group t tests. The design allows early stopping for efficacy and futility while also re-estimating sample size based on an interim variance estimate. Study planning in small samples requires the exact and computable forms reported here. The formulation gives fast and accurate calculations of power, Type I error rate, and expected sample size.

8.
We propose flexible group sequential designs using type I and type II error probability spending functions. The proposed designs preserve the overall significance level and power and allow the repeated testing to be performed on a flexible schedule. Computational methods are described. An example based on a mega clinical trial is provided.
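
A minimal sketch of type I error spending functions of the kind referred to here: O'Brien-Fleming-like and Pocock-like alpha-spending (Lan-DeMets), evaluated on an arbitrary, flexible analysis schedule. Computing the stage-wise boundaries from the spent alpha is omitted for brevity, and the analysis times are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def spend_obf(t, alpha=0.025):
    """O'Brien-Fleming-like spending: alpha*(t) = 2 - 2*Phi(z_{alpha/2}/sqrt(t))."""
    t = np.asarray(t, dtype=float)
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t)))

def spend_pocock(t, alpha=0.025):
    """Pocock-like spending: alpha*(t) = alpha * ln(1 + (e - 1) * t)."""
    t = np.asarray(t, dtype=float)
    return alpha * np.log(1 + (np.e - 1) * t)

info_times = [0.3, 0.7, 1.0]   # hypothetical flexible analysis schedule
print("OBF-like spent alpha:   ", np.round(spend_obf(info_times), 5))
print("Pocock-like spent alpha:", np.round(spend_pocock(info_times), 5))
```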

9.
Repeated confidence intervals (RCIs) are an important tool for the design and monitoring of group sequential trials, under which the trial need not be stopped by a pre-planned statistical stopping rule. In this article, we derive RCIs when data from the stages of the trial are not independent, so that the underlying process is no longer Brownian motion (BM). Under this assumption, a larger class of stochastic processes, fractional Brownian motion (FBM), is considered. Comparisons of RCI width and sample size requirements are made with those under Brownian motion for different analysis times, Type I error rates, and numbers of interim analyses. Power-family spending functions, including the Pocock and O'Brien-Fleming design types, are considered in these simulations. Interim data from the BHAT and oncology trials are used to illustrate how to derive RCIs under FBM for efficacy and futility monitoring.
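
A minimal sketch of the fractional Brownian motion covariance structure involved here. With Hurst parameter H = 0.5, FBM reduces to ordinary Brownian motion with independent increments; H != 0.5 induces correlated increments across interim analyses. The analysis times below are hypothetical information fractions, and no RCI construction is attempted.

```python
import numpy as np

def fbm_cov(s, t, H=0.5):
    """Covariance of fractional Brownian motion at times s and t."""
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))

def fbm_corr_matrix(times, H=0.5):
    """Correlation matrix of the process at the interim analysis times."""
    times = np.asarray(times, dtype=float)
    cov = np.array([[fbm_cov(s, t, H) for t in times] for s in times])
    sd = np.sqrt(np.diag(cov))
    return cov / np.outer(sd, sd)

interims = [0.25, 0.5, 0.75, 1.0]
print("H = 0.5 (Brownian motion):\n", np.round(fbm_corr_matrix(interims, H=0.5), 3))
print("H = 0.8 (long-range dependence):\n", np.round(fbm_corr_matrix(interims, H=0.8), 3))
```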

10.
Bayesian dynamic borrowing designs facilitate borrowing information from historical studies. Historical data, when perfectly commensurate with current data, have been shown to reduce the trial duration and the sample size, while inflation in the type I error and reduction in the power have been reported when they are imperfectly commensurate. These results, however, were obtained without considering that Bayesian designs are calibrated to meet regulatory requirements in practice, and that even no-borrowing designs may use information from historical data in the calibration. This implicit borrowing of historical data suggests that imperfectly commensurate historical data may similarly affect no-borrowing designs negatively. We provide a fair appraisal of Bayesian dynamic borrowing and no-borrowing designs. We used a published selective adaptive randomization design and a real clinical trial setting and conducted simulation studies under varying degrees of imperfectly commensurate historical control scenarios. The type I error was inflated under the null scenario of no intervention effect, with larger inflation noted with borrowing. The larger inflation in type I error under the null setting can be offset by a greater probability of correctly stopping early under the alternative. Response rates were estimated more precisely and the average sample size was smaller with borrowing. The expected increase in bias with borrowing was noted but was negligible. Using Bayesian dynamic borrowing designs may improve trial efficiency by correctly stopping trials early and reducing trial length at the small cost of an inflated type I error.
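
A minimal sketch of borrowing historical control data through a power prior with a fixed discount weight a0 (a0 = 0 is no borrowing, a0 = 1 is full pooling). The published design uses dynamic borrowing and regulatory calibration; this only illustrates how historical counts shift the posterior for a control response rate. All counts and weights are hypothetical.

```python
from scipy.stats import beta

def control_posterior(x_cur, n_cur, x_hist, n_hist, a0):
    """Beta posterior for the control response rate with power-prior
    discounting of the historical data (uniform baseline prior)."""
    a = 1 + a0 * x_hist + x_cur
    b = 1 + a0 * (n_hist - x_hist) + (n_cur - x_cur)
    return beta(a, b)

x_hist, n_hist = 40, 100     # hypothetical historical controls
x_cur, n_cur = 12, 50        # hypothetical current controls
for a0 in (0.0, 0.5, 1.0):
    post = control_posterior(x_cur, n_cur, x_hist, n_hist, a0)
    print(f"a0={a0:.1f}: posterior mean={post.mean():.3f}, "
          f"95% CrI=({post.ppf(0.025):.3f}, {post.ppf(0.975):.3f})")
```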

11.
Single-arm one- or multi-stage study designs are commonly used in phase II oncology development when the primary outcome of interest is tumor response, a binary variable. Both two- and three-outcome designs are available. The Simon two-stage design is a well-known example of a two-outcome design. The objective of a two-outcome trial is to reject either the null hypothesis that the objective response rate (ORR) is less than or equal to a pre-specified low, uninteresting rate, or the alternative hypothesis that the ORR is greater than or equal to some target rate. Three-outcome designs proposed by Sargent et al. allow a middle gray decision zone that rejects neither hypothesis, in order to reduce the required study size. We propose new two- and three-outcome designs with continual monitoring based on Bayesian posterior probability that meet frequentist specifications such as type I and II error rates. Futility and/or efficacy boundaries are based on confidence functions, which can require higher levels of evidence for early versus late stopping and have clear and intuitive interpretations. We search within a class of such procedures for optimal designs that minimize a given loss function, such as the average sample size under the null hypothesis. We present several examples, compare our design with other procedures in the literature, and show that our design has good operating characteristics.
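
A minimal sketch of continual monitoring of a single-arm binary endpoint by Bayesian posterior probability: with a Beta(1, 1) prior, stop for efficacy when P(ORR > p0 | data) is high and for futility when it is low. The confidence-function boundaries and frequentist calibration described in the abstract are not reproduced; the thresholds and interim data below are hypothetical.

```python
from scipy.stats import beta

def prob_orr_exceeds(x, n, p0, a_prior=1.0, b_prior=1.0):
    """Posterior probability that the ORR exceeds p0 given x responses in n patients."""
    return beta(a_prior + x, b_prior + n - x).sf(p0)

p0 = 0.20                      # uninteresting response rate
eff_cut, fut_cut = 0.95, 0.05  # hypothetical decision thresholds
for x, n in [(3, 10), (5, 15), (9, 25)]:   # hypothetical interim looks
    pp = prob_orr_exceeds(x, n, p0)
    decision = "efficacy" if pp > eff_cut else "futility" if pp < fut_cut else "continue"
    print(f"x={x:2d}, n={n:2d}: P(ORR > {p0}) = {pp:.3f} -> {decision}")
```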

12.
Futility analysis reduces the opportunity to commit a Type I error. For a superiority study testing a two-sided hypothesis, an interim futility analysis can substantially reduce the overall Type I error while keeping the overall power relatively intact. In this paper, we quantify the extent of the reduction for both one-sided and two-sided futility analyses. We argue that, because of the reduction, we should be allowed to set the significance level for the final analysis at a level higher than the allowable Type I error rate for the study. We propose a method to find the significance level for the final analysis. We illustrate the proposed methodology and show that a design employing a futility analysis can reduce the sample size, and therefore reduce the exposure of patients to unnecessary risk and lower the cost of a clinical trial. Copyright © 2004 John Wiley & Sons, Ltd.
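
A minimal sketch quantifying how an adhered-to, one-sided futility look reduces the overall Type I error of a two-sided final test: under H0 the interim and final z-statistics are bivariate normal with correlation sqrt(t), and the error actually spent is P(Z1 > f, |Z2| > c). This illustrates the phenomenon the paper exploits, not the authors' method for recalibrating the final significance level; the interim fraction and futility boundary are hypothetical.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def type1_with_futility(t, f, alpha=0.05):
    """Overall two-sided Type I error when the trial stops for futility at
    information fraction t if Z1 <= f, and otherwise tests |Z2| > z_{1-alpha/2}."""
    c = norm.ppf(1 - alpha / 2)
    rho = np.sqrt(t)
    # P(Z1 > f, Z2 > c): use symmetry of the zero-mean bivariate normal
    upper = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]).cdf([-f, -c])
    # P(Z1 > f, Z2 < -c): flip the sign of Z2, which flips the correlation
    lower = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, -rho], [-rho, 1.0]]).cdf([-f, -c])
    return upper + lower

print(f"Type I error with futility look: {type1_with_futility(t=0.5, f=0.0):.4f}  (nominal 0.05)")
```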

13.
In this paper, we propose a design that uses a short-term endpoint for accelerated approval at the interim analysis and a long-term endpoint for full approval at the final analysis, with sample size adaptation based on the long-term endpoint. Two sample size adaptation rules are compared: an adaptation rule that maintains the conditional power at a prespecified level, and a step-function adaptation rule that better addresses the bias issue. Three testing procedures are proposed: alpha splitting between the two endpoints; alpha exhaustive between the endpoints; and alpha exhaustive with an improved critical value based on correlation. The family-wise error rate is proved to be strongly controlled across the two endpoints, the sample size adaptation, and the two analysis time points with the proposed designs. We show that using alpha-exhaustive designs greatly improves the power when both endpoints are effective, and that the power difference between the two adaptation rules is minimal. The proposed design can be extended to more general settings. Copyright © 2015 John Wiley & Sons, Ltd.

14.
In studies with recurrent event endpoints, misspecified assumptions about event rates or dispersion can lead to underpowered trials or overexposure of patients. Specification of overdispersion is often a particular problem, as it is usually not reported in clinical trial publications. Changing event rates over the years have been described for some diseases, adding to the uncertainty in planning. To mitigate the risks of inadequate sample sizes, internal pilot study designs have been proposed, with a preference for blinded sample size reestimation procedures, as they generally do not affect the type I error rate and maintain trial integrity. Blinded sample size reestimation procedures are available for trials with recurrent events as endpoints. However, the variance in the reestimated sample size can be considerable, in particular with early sample size reviews. Motivated by a randomized controlled trial in paediatric multiple sclerosis, a rare neurological condition in children, we apply the concept of blinded continuous monitoring of information, which is known to reduce the variance in the resulting sample size. Assuming negative binomial distributions for the counts of recurrent relapses, we derive information criteria and propose blinded continuous monitoring procedures. Their operating characteristics are assessed in Monte Carlo trial simulations, demonstrating favourable properties with regard to type I error rate, power, and stopping time, i.e., sample size.
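
A minimal sketch of sample size re-estimation for a negative binomial recurrent event endpoint, using a common normal-approximation formula in which the variance of the log rate ratio per arm is roughly (1/mu + k)/n, with mu the expected count per patient and k the dispersion parameter in Var(Y) = mu + k*mu^2. A blinded review would replace the planning values of the nuisance parameters (overall rate and k) by blinded interim estimates while keeping the assumed rate ratio fixed; this is not the information-monitoring procedure of the paper, and all inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm_negbin(rate_control, rate_ratio, followup, k, alpha=0.05, power=0.9):
    """Approximate per-arm sample size for comparing two negative binomial rates."""
    mu_c = rate_control * followup          # expected events per control patient
    mu_t = rate_ratio * mu_c                # expected events per treated patient
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(z**2 * (1 / mu_t + 1 / mu_c + 2 * k) / np.log(rate_ratio) ** 2))

# planning assumptions vs. blinded interim estimates of the nuisance parameters (hypothetical)
print("planned n per arm:     ", n_per_arm_negbin(rate_control=0.8, rate_ratio=0.7, followup=2.0, k=0.5))
print("re-estimated n per arm:", n_per_arm_negbin(rate_control=1.1, rate_ratio=0.7, followup=2.0, k=0.9))
```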

15.
We propose a two-stage design for a single-arm clinical trial with an early stopping rule for futility. This design employs different endpoints to assess early stopping and efficacy. The early stopping rule is based on a criterion determined more quickly than that for efficacy. These separate criteria are also nested, in the sense that efficacy is a special case of, but usually not identical to, the early stopping endpoint. The design readily allows for planning in terms of statistical significance, power, expected sample size, and expected duration. The method is illustrated with a phase II design comparing rates of disease progression in elderly patients treated for lung cancer with rates found using a historical control. In this example, the early stopping rule is based on the number of patients who exhibit progression-free survival (PFS) at 2 months post-treatment follow-up. Efficacy is judged by the number of patients who have PFS at 6 months. We demonstrate that our design has expected sample size and power comparable with those of the Simon two-stage design but exhibits shorter expected duration under a range of useful parameter values.

16.
The term 'futility' refers to the inability of a clinical trial to achieve its objectives. In particular, stopping a clinical trial when the interim results suggest that it is unlikely to achieve statistical significance can save resources that could be used on more promising research. Various approaches have been proposed to assess futility, including stochastic curtailment, predictive power, predictive probability, and group sequential methods. In this paper, we describe and contrast these approaches and discuss several issues associated with futility analyses, such as ethical considerations, whether or not type I error can or should be reclaimed, one-sided versus two-sided futility rules, and the impact of futility analyses on power.
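
A minimal sketch contrasting two of the futility metrics named above: conditional power evaluated at the current trend, and Bayesian predictive power with a noninformative prior on the drift. Both are written in terms of the interim z-statistic Z1 observed at information fraction t, with one-sided final critical value c; the inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def cp_current_trend(z1, t, alpha=0.025):
    """Conditional power assuming the current trend continues."""
    c = norm.ppf(1 - alpha)
    return norm.cdf((z1 - c * np.sqrt(t)) / np.sqrt(t * (1 - t)))

def predictive_power(z1, t, alpha=0.025):
    """Predictive power: conditional power averaged over the posterior of the drift."""
    c = norm.ppf(1 - alpha)
    return norm.cdf((z1 - c * np.sqrt(t)) / np.sqrt(1 - t))

for z1 in (0.2, 1.0, 1.8):
    print(f"Z1={z1:.1f} at t=0.5: CP={cp_current_trend(z1, 0.5):.3f}, "
          f"PP={predictive_power(z1, 0.5):.3f}")
```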

17.
Phase II clinical trials investigate whether a new drug or treatment shows sufficient evidence of effectiveness against the disease under study. Two-stage designs are popular for phase II since they can stop in the first stage if the drug is ineffective. Investigators often face difficulties in determining the target response rates, and adaptive designs can help to set the target response rate tested in the second stage based on the number of responses observed in the first stage. Popular adaptive designs consider two alternate response rates, and they generally minimise the expected sample size at the maximum uninteresting response rate. Moreover, these designs consider only futility as the reason for early stopping and have high expected sample sizes if the drug is effective. Motivated by this problem, we propose an adaptive design that enables us to terminate the single-arm trial at the first stage for efficacy and to conclude which alternate response rate to choose. Comparing the proposed design with a popular adaptive design from the literature reveals that the expected sample size decreases notably if either of the two target response rates is correct. In contrast, the expected sample size remains almost the same under the null hypothesis.
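
A minimal sketch of the operating characteristics of a classical futility-only two-stage design of the Simon type, the kind of design the proposed adaptive procedure is compared against: with boundaries (r1, n1, r, n), the trial stops for futility after stage 1 if X1 <= r1 and rejects the null at the end if total responses exceed r. The boundaries and response rates below are hypothetical, not an optimal design from the paper.

```python
from scipy.stats import binom

def simon_oc(p, r1, n1, r, n):
    """Probability of early termination, probability of rejecting H0,
    and expected sample size at true response rate p."""
    pet = binom.cdf(r1, n1, p)                          # stop for futility after stage 1
    reject = sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
                 for x1 in range(r1 + 1, n1 + 1))
    en = n1 + (1 - pet) * (n - n1)
    return pet, reject, en

for p in (0.10, 0.30):                                  # hypothetical null/alternative rates
    pet, rej, en = simon_oc(p, r1=1, n1=12, r=5, n=35)
    print(f"p={p:.2f}: PET={pet:.3f}, P(reject H0)={rej:.3f}, E[N]={en:.1f}")
```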

18.
In prior work, this group demonstrated the feasibility of valid adaptive sequential designs for crossover bioequivalence studies. In this paper, we extend the prior work to optimize adaptive sequential designs over a range of geometric mean test/reference ratios (GMRs) of 70–143% within each of two ranges of intra-subject coefficient of variation (10–30% and 30–55%). These designs also introduce a futility decision for stopping the study after the first stage if there is a sufficiently low likelihood of meeting the bioequivalence criteria were the second stage completed, as well as an upper limit on total study size. The optimized designs exhibited substantially improved performance characteristics over our previous adaptive sequential designs. Even though the optimized designs avoided undue inflation of the type I error and maintained power at 80%, their average sample sizes were similar to or less than those of conventional single-stage designs. Copyright © 2015 John Wiley & Sons, Ltd.
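
A minimal sketch of the single-stage two one-sided tests (TOST) bioequivalence decision that such adaptive two-stage designs build on: bioequivalence is concluded if the 90% confidence interval for the geometric mean ratio (GMR) lies within 80–125%. The point estimate, standard error (which would come from the crossover ANOVA), and degrees of freedom below are hypothetical, and the two-stage and futility logic of the paper is not reproduced.

```python
import numpy as np
from scipy.stats import t

def tost_be(log_gmr_hat, se, df, lower=0.80, upper=1.25, alpha=0.05):
    """Return the 90% CI for the GMR and the TOST bioequivalence decision."""
    tcrit = t.ppf(1 - alpha, df)
    ci = np.exp([log_gmr_hat - tcrit * se, log_gmr_hat + tcrit * se])
    return ci, (ci[0] >= lower) and (ci[1] <= upper)

ci, be = tost_be(log_gmr_hat=np.log(0.95), se=0.07, df=22)   # hypothetical summary statistics
print(f"90% CI for GMR: ({ci[0]:.3f}, {ci[1]:.3f}) -> {'BE' if be else 'not BE'}")
```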

19.
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some, but inconclusive, evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker-positive and marker-negative subgroups and the prevalence of marker-positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios in which at least some effect is present in the marker-negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size compared with the group sequential design when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.

20.
This article proposes new optimal and minimax designs that allow early stopping not only for ineffectiveness or toxicity but also for sufficient effectiveness and safety. These designs may facilitate effective drug development by detecting sufficient effectiveness and safety at an early stage, or by detecting ineffectiveness or excessive toxicity at an early stage. The proposed design has an advantage over other designs in the sense that it can control the type I error rate and is robust to the true value of the association parameter. Compared with Jin's design, it is always advantageous in terms of expected sample size.
