Similar Articles
 20 similar articles found.
1.
Basket trials evaluate a single drug targeting a single genetic variant in multiple cancer cohorts. Empirical findings suggest that treatment efficacy across baskets may be heterogeneous. Most modern basket trial designs use Bayesian methods, which require the prior specification of at least one parameter that permits information sharing across baskets. In this study, we provide recommendations for selecting a prior for the scale parameter in adaptive basket trials that use Bayesian hierarchical modeling. Heterogeneity among baskets attracts much attention in basket trial research, and substantial heterogeneity challenges the basic exchangeability assumption of the Bayesian hierarchical approach. We therefore also allow each stratum-specific parameter to be exchangeable or nonexchangeable with similar strata, based on data observed at an interim analysis. Through a simulation study, we evaluate the overall performance of our design in terms of statistical power and type I error rate. Our research contributes to the understanding of the properties of Bayesian basket trial designs.
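For orientation, here is a minimal sketch (assumed notation, not the authors' exact specification) of the kind of Bayesian hierarchical model such basket designs build on, with the scale parameter σ governing how much information is shared across the K baskets:

\[
y_k \sim \mathrm{Binomial}(n_k, p_k), \qquad \operatorname{logit}(p_k) = \theta_k, \qquad \theta_k \mid \mu, \sigma \sim N(\mu, \sigma^2), \qquad \sigma \sim \mathrm{HalfNormal}(s), \qquad k = 1, \dots, K.
\]

A small σ shrinks the basket-specific effects θ_k toward the common mean μ (strong borrowing), while a large σ lets each basket stand essentially alone, which is why the prior placed on σ drives the design's operating characteristics.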

2.
The borrowing of historical control data can be an efficient way to improve the treatment effect estimate of the current control group in a randomized clinical trial. When the historical and current control data are consistent, borrowing historical data can increase power and reduce the Type I error rate. However, when these two sources of data are inconsistent, it may result in biased estimates, reduced power, and inflation of the Type I error rate. In some situations, inconsistency between historical and current control data is caused by a systematic variation in measured baseline prognostic factors, which can be appropriately addressed through statistical modeling. In this paper, we propose a Bayesian hierarchical model that incorporates patient-level baseline covariates to enhance the appropriateness of the exchangeability assumption between current and historical control data. The performance of the proposed method is shown through simulation studies, and its application to a clinical trial design for amyotrophic lateral sclerosis is described. The proposed method is developed for scenarios involving multiple imbalanced prognostic factors and thus has meaningful implications for clinical trials evaluating new treatments for heterogeneous diseases such as amyotrophic lateral sclerosis.
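One generic way to express such a covariate-adjusted exchangeability assumption (a sketch with assumed notation, not the paper's exact parameterization) is a logistic model with study-specific intercepts:

\[
\operatorname{logit} \Pr(y_{ij} = 1 \mid x_{ij}) = \alpha_j + x_{ij}^{\top}\beta, \qquad \alpha_j \sim N(\mu_\alpha, \tau^2),
\]

where j indexes the current and historical control studies; the patient-level covariates x_{ij} absorb systematic baseline differences, so that treating the study intercepts α_j as exchangeable becomes more plausible.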

3.
Bayesian dynamic borrowing designs facilitate borrowing information from historical studies. Historical data, when perfectly commensurate with current data, have been shown to reduce trial duration and sample size, while inflation of the type I error and reduction in power have been reported when the data are imperfectly commensurate. These results, however, were obtained without considering that Bayesian designs are calibrated to meet regulatory requirements in practice, and that even no-borrowing designs may use information from historical data in the calibration. This implicit borrowing of historical data suggests that imperfectly commensurate historical data may similarly affect no-borrowing designs negatively. We provide a fair appraisal of Bayesian dynamic borrowing and no-borrowing designs. Using a published selective adaptive randomization design and a real clinical trial setting, we conducted simulation studies under varying degrees of imperfectly commensurate historical control scenarios. The type I error was inflated under the null scenario of no intervention effect, with larger inflation noted with borrowing. The larger inflation in type I error under the null setting can be offset by the greater probability of stopping early correctly under the alternative. Response rates were estimated more precisely and the average sample size was smaller with borrowing. The expected increase in bias with borrowing was noted but was negligible. Using Bayesian dynamic borrowing designs may improve trial efficiency by stopping trials early correctly and reducing trial length, at the small cost of an inflated type I error.

4.
A non-parametric rank-based test of exchangeability for bivariate extreme-value copulas is first proposed. The two key ingredients of the suggested approach are the non-parametric rank-based estimators of the Pickands dependence function recently studied by Genest and Segers, and a multiplier technique for obtaining approximate p-values for the derived statistics. The proposed approach is then extended to left-tail decreasing dependence structures that are not necessarily extreme-value copulas. Large-scale Monte Carlo experiments are used to investigate the level and power of the various versions of the test and show that the proposed procedure can be substantially more powerful than tests of exchangeability derived directly from the empirical copula. The approach is illustrated on well-known financial data.
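For reference, a bivariate extreme-value copula can be written in terms of its Pickands dependence function A, and exchangeability of the copula is equivalent to symmetry of A:

\[
C(u, v) = \exp\!\left\{ \log(uv)\, A\!\left( \frac{\log v}{\log(uv)} \right) \right\}, \qquad C(u,v) = C(v,u) \iff A(t) = A(1-t) \ \text{for all } t \in [0,1],
\]

which is the symmetry that the proposed rank-based test targets.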

5.
The feasibility of a new clinical trial may be increased by incorporating historical data from previous trials. In the particular case where only data from a single historical trial are available, there exists no clear recommendation in the literature regarding the most favorable approach. A main problem with the incorporation of historical data is the possible inflation of the type I error rate. One way to control this type of error is the so-called power prior approach. This Bayesian method does not "borrow" the full historical information but uses a parameter 0 ≤ δ ≤ 1 to determine the amount of borrowed data. Based on the methodology of the power prior, we propose a frequentist framework that allows incorporation of historical data from both arms of two-armed trials with a binary outcome, while simultaneously controlling the type I error rate. It is shown that for any specific trial scenario a value δ > 0 can be determined such that the type I error rate falls below the prespecified significance level. The magnitude of this value of δ depends on the characteristics of the data observed in the historical trial. Conditional on these characteristics, an increase in power compared to a trial without borrowing may result. Similarly, we propose methods for reducing the required sample size. The results are discussed and compared to those obtained in a Bayesian framework. Application is illustrated by a clinical trial example.
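For reference, the power prior referred to here raises the historical likelihood to a fractional power:

\[
\pi(\theta \mid D_0, \delta) \propto L(\theta \mid D_0)^{\delta}\, \pi_0(\theta), \qquad 0 \le \delta \le 1,
\]

so that δ = 0 discards the historical data D_0 entirely, δ = 1 pools it fully with the current data, and intermediate values discount it.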

6.
As described in the ICH E5 guidelines, a bridging study is an additional study executed in a new geographical region or subpopulation to link, or "build a bridge," from global clinical trial outcomes to the new region. The regulatory and scientific goals of a bridging study are to evaluate potential subpopulation differences while minimizing duplication of studies and meeting unmet medical needs expeditiously. Use of historical data (borrowing) from global studies is an attractive approach to meet these conflicting goals. Here, we propose a practical and relevant approach to guide the optimal borrowing rate (percent of subjects in earlier studies) and the number of subjects in the new regional bridging study. We address the limitations in global/regional exchangeability through use of a Bayesian power prior method and then optimize the bridging study design from a return-on-investment viewpoint. The method is demonstrated using clinical data from global and Japanese trials of dapagliflozin for type 2 diabetes.
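As a numerical illustration of fixed-weight power prior borrowing in the simplest setting (a hedged sketch assuming a beta-binomial model and made-up counts, not the dapagliflozin data or the paper's return-on-investment optimization):

```python
from scipy import stats

def power_prior_posterior(y, n, y0, n0, delta, a=1.0, b=1.0):
    """Posterior for a response rate under a fixed-weight power prior.

    y, n   : responders / subjects in the new regional (bridging) study
    y0, n0 : responders / subjects in the historical global study
    delta  : borrowing weight in [0, 1]; 0 = ignore history, 1 = full pooling
    a, b   : parameters of the initial Beta(a, b) prior
    """
    # With a binomial likelihood the power prior keeps the posterior conjugate:
    # historical successes and failures simply enter with weight delta.
    return stats.beta(a + y + delta * y0, b + (n - y) + delta * (n0 - y0))

# Hypothetical example: a small regional study borrowing half of a larger global study.
post = power_prior_posterior(y=18, n=40, y0=120, n0=250, delta=0.5)
print(round(post.mean(), 3), [round(q, 3) for q in post.interval(0.95)])
```

Larger δ narrows the posterior interval but increases sensitivity to regional differences, which is the trade-off the borrowing-rate optimization addresses.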

7.
The stratified Cox model is commonly used for stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50-200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate using simulations that our proposed two-step RGLR analysis delivers notably better results, with smaller estimation bias and mean squared error and larger power than the stratified Cox model analysis when there is a treatment-by-stratum interaction, and similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples, whereas the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
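The weighting schemes being compared can be summarized in a standard form (generic notation, not tied to the RGLR estimates themselves):

\[
\hat\theta = \sum_{s} w_s \hat\theta_s, \qquad w_s^{\mathrm{IV}} \propto \frac{1}{\widehat{\operatorname{Var}}(\hat\theta_s)}, \qquad w_s^{\mathrm{SS}} \propto n_s, \qquad \sum_s w_s = 1,
\]

where \hat\theta_s is the stratum-specific log hazard ratio estimate; inverse-variance weights are efficient only when the true stratum-specific hazard ratios coincide, which is exactly the assumption the proposal relaxes.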

8.
Traditional vaccine efficacy trials usually use fixed designs with fairly large sample sizes. Recruiting a large number of subjects requires longer time and higher costs. Furthermore, vaccine developers are more than ever facing the need to accelerate vaccine development to fulfill the public's medical needs. A possible approach to accelerate development is to use dynamic borrowing of historical controls in clinical trials. In this paper, we evaluate the feasibility and performance of this approach in vaccine development by retrospectively analyzing two real vaccine studies: a relatively small immunological trial (a typical early phase study) and a large vaccine efficacy trial (a typical Phase 3 study) assessing a prophylactic human papillomavirus vaccine. Results are promising, particularly for early development immunological studies, where the adaptive design is feasible and control of the type I error is less relevant.

9.
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood-based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood-based MAR approach – mixed-model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SE(Δ) from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SE(Δ) from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SE(Δ) from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of missing not at random (MNAR) missingness. No universally best approach to the analysis of longitudinal data exists. However, likelihood-based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing the robustness of results from an MAR method.
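To make the LOCF convention concrete, here is a generic pandas sketch with made-up column names (unrelated to the simulated neuropsychiatric data) that carries each subject's last observed value forward:

```python
import pandas as pd

# Long-format longitudinal data: one row per subject-visit, NaN where a visit is missing.
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "visit":   [0, 1, 2, 0, 1, 2],
    "score":   [10.0, 12.0, None, 9.0, None, None],
})

# LOCF: within each subject, carry the last observed score forward in visit order.
df = df.sort_values(["subject", "visit"])
df["score_locf"] = df.groupby("subject")["score"].ffill()
print(df)
```

An MMRM analysis, by contrast, models all available visits jointly under the MAR assumption rather than substituting a single imputed value per subject.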

10.
This paper provides a novel approach to constructing bivariate prior distributions. The idea is based on the notion of partial exchangeability. In particular, in a simple extension of an exchangeable sequence, we create two dependent exchangeable sequences via a branching mechanism. This implies the existence of a bivariate prior distribution.

11.
In this note, we develop a new two-group bootstrap-permutation test that uses the tail-extrapolated quantile function estimator for the bootstrap component. This test is an extension of the standard two-group permutation test that, by construction, satisfies the exchangeability assumption and thus guarantees that the type I error is appropriately bounded by definition. This methodology is particularly useful in the non-randomized two-group setting, in which the exchangeability assumption of the traditional two-group permutation test is untestable. We develop some theoretical results for the new test, followed by a simulation study and an example.
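For orientation, the standard two-group permutation test that the proposed procedure extends looks roughly as follows (a generic sketch; the tail-extrapolated quantile bootstrap component described above is not reproduced here):

```python
import numpy as np

def permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation p-value for a difference in means between two groups."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = x.mean() - y.mean()
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)          # relabel groups under exchangeability
        stat = perm[:len(x)].mean() - perm[len(x):].mean()
        exceed += abs(stat) >= abs(observed)
    return (exceed + 1) / (n_perm + 1)

# Example with simulated data.
rng = np.random.default_rng(1)
print(permutation_test(rng.normal(0.5, 1.0, 30), rng.normal(0.0, 1.0, 30)))
```

The validity of relabeling rests on exchangeability of the pooled observations under the null, which is the assumption that becomes untestable without randomization.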

12.
Two significant pivotal trials are usually required for a new drug approval by a regulatory agency. This standard requirement is known as the two-trial paradigm. However, several authors have questioned why exactly two pivotal trials are needed, what statistical error the regulators are trying to protect against, and what the potential alternative approaches are. It is therefore important to investigate these questions to better understand regulatory decision-making in the assessment of drug effectiveness. It is common that two identically designed trials are run solely to adhere to the two-trial rule. Previous work showed that combining the data from the two trials into a single trial (the one-trial paradigm) would increase power while ensuring the same level of type I error protection as the two-trial paradigm. However, this is true only under a specific scenario, and there is little investigation of type I error protection over the whole null region. In this article, we compare the two paradigms by considering scenarios in which the two trials are conducted in identical or different populations, as well as with equal or unequal sizes. With identical populations, the results show that a single trial provides better type I error protection and higher power. Conversely, with different populations, although the one-trial rule is more powerful in some cases, it does not always protect against type I error. Hence, there is a need for appropriate flexibility around the two-trial paradigm, and the appropriate approach should be chosen based on the questions of interest.

13.
The signature-based mixture representations for coherent systems are a good way to obtain distribution-free comparisons of systems. Unfortunately, these representations only hold for systems whose component lifetimes are independent and identically distributed (IID) or exchangeable (i.e., their joint distribution is invariant under permutations). In this paper we obtain comparison results for generalized mixtures, that is, for reliability functions that can be written as linear combinations of some baseline reliability functions with positive and negative coefficients. These results are based on some concepts from graph theory. We apply them to obtain new comparison results for coherent systems without the IID or exchangeability assumptions by using their generalized mixture representations based on the minimal path sets.
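The signature-based representation referred to here, valid under the IID or exchangeability assumption, writes the system reliability as a mixture over order statistics of the component lifetimes:

\[
\bar F_T(t) = \sum_{i=1}^{n} s_i\, \bar F_{X_{i:n}}(t), \qquad s_i = \Pr(T = X_{i:n}), \qquad \sum_{i=1}^{n} s_i = 1, \ s_i \ge 0,
\]

whereas the generalized mixtures studied in the paper allow linear combinations in which some coefficients are negative, which is why comparing them requires different tools.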

14.
Borrowing data from external controls has been an appealing strategy for evidence synthesis when conducting randomized controlled trials (RCTs). Often called hybrid control trials, such designs leverage existing control data from clinical trials or, potentially, real-world data (RWD), allow more patients to be allocated to the novel intervention arm, and improve the efficiency or lower the cost of the primary RCT. Several methods have been established and developed to borrow external control data, among which propensity score methods and the Bayesian dynamic borrowing framework play essential roles. Recognizing the unique strengths of propensity score methods and Bayesian hierarchical models, we utilize both in a complementary manner to analyze hybrid control studies. In this article, we review methods including covariate adjustment, propensity score matching, and weighting in combination with dynamic borrowing, and we compare the performance of these methods through comprehensive simulations. Different degrees of covariate imbalance and confounding are examined. Our findings suggest that conventional covariate adjustment in combination with the Bayesian commensurate prior model provides the highest power with good type I error control under the investigated settings, and it performs well across different degrees of confounding. To estimate efficacy signals in the exploratory setting, the covariate adjustment method in combination with the Bayesian commensurate prior is recommended.
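One common form of the commensurate prior idea mentioned here (a generic sketch, not necessarily the exact parameterization used in the article) centers the current-control parameter at its historical counterpart:

\[
\theta_c \mid \theta_h, \tau \sim N\!\left(\theta_h, \tau^{-1}\right),
\]

where θ_h is informed by the external control data and the commensurability parameter τ is given its own prior, so borrowing strengthens when the two control sources agree and weakens when they drift apart; the propensity score or covariate adjustment step then addresses confounding before this borrowing is applied.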

15.
Leveraging historical data in the design and analysis of phase 2 randomized controlled trials can improve the efficiency of drug development programs. Such approaches can reduce sample size without loss of power. Potential issues arise when the current control arm is inconsistent with the historical data, which may lead to biased estimates of treatment efficacy, loss of power, or inflated type I error. Careful consideration should be given to how historical information is borrowed, and in particular to adjustment for prognostic factors. This paper illustrates two motivating case studies of oncology Bayesian augmented control (BAC) trials. In the first example, a glioblastoma study, an informative prior was used for the control arm hazard rate; sample size savings of 15% to 20% were achieved by using a BAC design. In the second example, a pancreatic cancer study, a hierarchical model borrowing method was used, which enabled the extent of borrowing to be determined by the consistency of the observed study data with historical studies. Supporting Bayesian analyses also adjusted for prognostic factors. Incorporating historical data via Bayesian trial design can provide sample size savings, reduce study duration, and enable a more scientific approach to the development of novel therapies by avoiding excess recruitment to a control arm. Various sensitivity analyses are necessary to interpret the results. Current industry efforts for data transparency have meaningful implications for access to patient-level historical data, which, while not critical, is helpful for adjusting for potential imbalances in prognostic factors.

16.
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases.
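The ingredients such blinded procedures build on are the pooled (one-sample) variance, computed without unblinding, plugged into the usual two-sample sample size formula (a generic sketch of the superiority case, not the paper's exact non-inferiority formulas):

\[
\hat\sigma_{\mathrm{OS}}^{2} = \frac{1}{n_1 - 1} \sum_{i=1}^{n_1} (x_i - \bar x)^{2}, \qquad n \text{ per arm} \ \ge\ \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\, \hat\sigma_{\mathrm{OS}}^{2}}{\Delta^{2}},
\]

where n_1 is the internal pilot sample size and Δ the assumed treatment difference; because treatment labels are ignored, the one-sample estimator overstates the within-group variance when a true effect is present, which is one source of the type I error behaviour studied here.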

17.
A standard two-arm randomised controlled trial usually compares an intervention to a control treatment, with equal numbers of patients randomised to each treatment arm and only data from within the current trial used to assess the treatment effect. Historical data are used when designing new trials and have recently been considered for use in the analysis when the required number of patients under a standard trial design cannot be achieved. Incorporating historical control data could lead to more efficient trials, reducing the number of controls required in the current study when the historical and current control data agree. However, when the data are inconsistent, there is potential for biased treatment effect estimates, inflated type I error, and reduced power. We introduce two novel approaches for binary data that discount historical data based on their agreement with the current trial controls: an equivalence approach and an approach based on tail area probabilities. An adaptive design is used in which the allocation ratio is adapted at the interim analysis, randomising fewer patients to control when there is agreement. The historical data are down-weighted in the analysis using the power prior approach with a fixed power. We compare the operating characteristics of the proposed design to historical data methods in the literature: the modified power prior, the commensurate prior, and the robust mixture prior. The equivalence probability weight approach is intuitive, and its operating characteristics can be calculated exactly. Furthermore, the equivalence bounds can be chosen to control the maximum possible inflation in type I error.

18.
In terms of the risk of making a Type I error in evaluating a null hypothesis of equality, requiring two independent confirmatory trials with two-sided p-values less than 0.05 is equivalent to requiring one confirmatory trial with a two-sided p-value less than 0.00125. Furthermore, the use of a single confirmatory trial is gaining acceptability, with discussion in both ICH E9 and a CPMP Points to Consider document. Given the growing acceptance of this approach, this note provides a formula for the sample size savings that are obtained with the single clinical trial approach, depending on the levels of Type I and Type II errors chosen. For two replicate trials each powered at 90%, which corresponds to a single larger trial powered at 81%, an approximate 19% reduction in total sample size is achieved with the single trial approach. Alternatively, a single trial with the same sample size as the total sample size from two smaller trials will have much greater power. For example, in the case where two trials are each powered at 90% for two-sided α=0.05, yielding an overall power of 81%, a single trial using two-sided α=0.00125 would have 91% power.
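The equivalence stated at the start of the abstract follows from multiplying the one-sided error probabilities of two independent trials:

\[
\left(\frac{0.05}{2}\right)^{2} = 0.025^{2} = 0.000625 \ \text{(one-sided)}, \qquad 2 \times 0.000625 = 0.00125 \ \text{(two-sided)},
\]

and, analogously, two trials each powered at 90% have joint power 0.9^2 = 0.81, which is the comparison behind the sample size and power figures quoted above.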

19.
Statistical approaches for addressing multiplicity in clinical trials range from the very conservative (the Bonferroni method) to the least conservative (the fixed sequence approach). Recently, several authors proposed methods that combine the merits of these two extremes. Wiens [2003, A fixed sequence Bonferroni procedure for testing multiple endpoints, Pharmaceutical Statistics, 2, 211–215], for example, considered an extension of the Bonferroni approach in which the type I error rate (α) is allocated among the endpoints but testing proceeds in a pre-determined order, allowing the type I error rate to be saved for later use as long as the null hypotheses are rejected. This leads to higher power in testing later null hypotheses. In this paper, we consider an extension of Wiens' approach that takes into account correlations among endpoints to achieve higher flexibility in testing. We show strong control of the family-wise type I error rate for this extension, provide critical values and significance levels for testing up to three endpoints with equal correlations, and show how to calculate them for other correlation structures. We also present results of a simulation experiment comparing the power of the proposed method with those of Wiens' and other methods. The results of this experiment show that the magnitude of the gain in power of the proposed method depends on the prospective ordering of testing of the endpoints, the magnitude of the treatment effects of the endpoints, and the magnitude of correlation between endpoints. Finally, we consider applications of the proposed method to clinical trials with multiple time points and multiple doses, where correlations among endpoints frequently arise.
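Here is a small sketch of the fixed sequence Bonferroni (fallback-type) allocation described above, ignoring the correlation adjustment that is this paper's contribution; the alpha split and p-values below are illustrative only:

```python
def fixed_sequence_bonferroni(p_values, alphas):
    """Test ordered endpoints, passing saved alpha forward after each rejection.

    p_values : p-values in the pre-specified testing order.
    alphas   : alpha allocated to each endpoint; should sum to the overall alpha.
    Returns a list of rejection decisions (True = reject).
    """
    carried = 0.0
    decisions = []
    for p, alloc in zip(p_values, alphas):
        level = alloc + carried           # alpha saved from earlier rejections is reused
        reject = p <= level
        carried = level if reject else 0.0
        decisions.append(reject)
    return decisions

# Illustrative example: three ordered endpoints, overall alpha 0.05 split as 0.03/0.01/0.01.
print(fixed_sequence_bonferroni([0.012, 0.030, 0.020], [0.03, 0.01, 0.01]))
# -> [True, True, True]: each rejection raises the level available to the next endpoint.
```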

20.
Communications in Statistics – Theory and Methods, 2012, 41(13–14): 2545–2569
We study the general linear model (GLM) with doubly exchangeable distributed errors for m observed random variables. The doubly exchangeable general linear model (DEGLM) arises when the m-dimensional error vectors are "doubly exchangeable" and jointly normally distributed, which is a much weaker assumption than the independent and identically distributed error vectors of the GLM or classical GLM (CGLM). We estimate the parameters in the model and also derive their distributions. We show that tests of the intercept and slope are possible in the DEGLM as a particular case, using a parametric bootstrap as well as a multivariate Satterthwaite approximation.
