Similar Articles
20 similar articles found.
1.
Treatment during cancer clinical trials sometimes involves the combination of multiple drugs. In addition, in recent years there has been a trend toward phase I/II trials, in which a phase I and a phase II trial are combined into a single trial to accelerate drug development. Methods for seamlessly combining the phase I and phase II parts are currently under investigation. In the phase II part, adaptive randomization on the basis of patient efficacy outcomes allocates more patients to the dose combinations considered to have higher efficacy. Patient toxicity outcomes are used for determining the admissibility of each dose combination and are not used for selection of the dose combination itself. When the objective is to find the optimum dose combination with respect to both toxicity and efficacy, rather than efficacy alone, patients need to be allocated to dose combinations in a way that reflects the trade-off between toxicity and efficacy. We propose a Bayesian hierarchical model and an adaptive randomization scheme that take both toxicity and efficacy into account. Using the toxicity and efficacy outcomes of patients, the Bayesian hierarchical model is used to estimate the toxicity probability and efficacy probability for each dose combination. Bayesian moving-reference adaptive randomization is then applied on the basis of a desirability score computed from these estimates. Computer simulations suggest that the proposed method is likely to recommend a higher percentage of target dose combinations than a previously proposed method.
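The general mechanism described above, allocating patients in proportion to a desirability score that trades estimated efficacy against estimated toxicity, can be sketched as follows. This is a minimal illustration only: it uses independent Beta-Binomial posteriors per dose combination in place of the authors' hierarchical model and moving-reference construction, and the trade-off weight `w`, admissibility cutoff `tox_limit`, and posterior threshold are hypothetical parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def desirability_randomization(eff_events, tox_events, n_treated,
                               w=0.6, tox_limit=0.35, n_draws=4000):
    """Allocation probabilities over dose combinations from a simple
    desirability score (illustrative only, not the paper's hierarchical model).

    eff_events, tox_events, n_treated: arrays of per-combination counts.
    w: hypothetical weight penalizing toxicity in the trade-off.
    tox_limit: hypothetical admissibility bound on the toxicity rate.
    """
    eff_events, tox_events, n_treated = map(np.asarray, (eff_events, tox_events, n_treated))
    # Independent Beta(1,1)-Binomial posteriors stand in for the hierarchical model.
    p_eff = rng.beta(1 + eff_events, 1 + n_treated - eff_events, size=(n_draws, len(n_treated)))
    p_tox = rng.beta(1 + tox_events, 1 + n_treated - tox_events, size=(n_draws, len(n_treated)))
    # Desirability: expected efficacy minus a toxicity penalty.
    desirability = (p_eff - w * p_tox).mean(axis=0)
    # Admissibility: drop combinations whose posterior probability of exceeding
    # the toxicity limit is too high (threshold 0.8 chosen purely for illustration).
    admissible = (p_tox > tox_limit).mean(axis=0) < 0.8
    score = np.where(admissible, np.clip(desirability, 1e-6, None), 0.0)
    return score / score.sum() if score.sum() > 0 else np.full(len(n_treated), np.nan)

# Example: three dose combinations with accumulating data.
print(desirability_randomization(eff_events=[2, 5, 7], tox_events=[0, 1, 4], n_treated=[10, 10, 10]))
```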

2.
Response adaptive randomization (RAR) methods for clinical trials are susceptible to imbalance in the distribution of influential covariates across treatment arms. This can make the interpretation of trial results difficult, because observed differences between treatment groups may be a function of the covariates rather than of the treatments themselves. We propose a method for balancing the distribution of covariate strata across treatment arms within RAR. The method uses odds ratios to modify global RAR probabilities to obtain stratum-specific modified RAR probabilities. We provide illustrative examples and a simple simulation study to demonstrate the effectiveness of the strategy for maintaining covariate balance. The proposed method is straightforward to implement and applicable to any type of RAR method or outcome.
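The core idea of modifying a global RAR probability with a stratum-level odds ratio can be illustrated generically. The sketch below is an assumption-laden reading of that idea, not the authors' published formula: it multiplies the odds implied by the global RAR probability by a hypothetical imbalance-based odds ratio (the `strength` exponent is included purely for illustration) so that the arm under-represented within the patient's covariate stratum is favored.

```python
def stratum_modified_prob(global_p_arm1, n_arm1_stratum, n_arm2_stratum, strength=1.0):
    """Illustrative covariate-balancing tweak to a global RAR probability.

    Multiplies the odds of assigning arm 1 by an odds ratio that favors the
    arm currently under-represented in this stratum. The specific odds-ratio
    form (a count ratio raised to a strength exponent) is an assumption for
    illustration, not the published method.
    """
    odds = global_p_arm1 / (1.0 - global_p_arm1)
    # Under-representation of arm 1 in this stratum inflates its odds.
    imbalance_or = ((n_arm2_stratum + 1) / (n_arm1_stratum + 1)) ** strength
    new_odds = odds * imbalance_or
    return new_odds / (1.0 + new_odds)

# Global RAR says 0.70 for arm 1, but arm 1 already dominates this stratum,
# so the stratum-specific probability is pulled back toward balance.
print(stratum_modified_prob(0.70, n_arm1_stratum=12, n_arm2_stratum=5))
```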

3.
We investigate multiple features of response adaptive randomization (RAR) in the context of a multiple arm randomized trial with control, where the primary goal is the identification of the best arm for use in a broader patient population. We maintain constant control allocation and vary the length of time until RAR is started, interim frequency, the underlying quantity used to calculate the randomization probabilities, and a threshold resulting in temporary arm dropping. We evaluate the designs on five metrics measuring benefit to the internal trial population, the future external population, and statistical estimation. Our results indicate these features have minimal interaction within the space explored, with preference for earlier activation of RAR, more frequent interim analyses, randomizing in proportion to the probability each arm is the best, and aggressive thresholding for temporarily dropping arms. The results illustrate useful principles for maximizing the benefit of RAR in practice.
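One of the design features examined above, randomizing in proportion to the posterior probability that each arm is best while holding the control allocation constant, can be sketched with a simple Beta-Binomial model for binary outcomes. This is illustrative only; the 25% control share, the flat Beta(1,1) priors, and the Monte Carlo size are assumptions rather than the settings studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_best_allocation(successes, totals, control_alloc=0.25, n_draws=10000):
    """Randomization probabilities proportional to Pr(arm is best),
    with a fixed share reserved for the control arm (index 0).
    Illustrative Beta(1,1)-Binomial sketch, not the trial's exact rule."""
    successes, totals = np.asarray(successes), np.asarray(totals)
    draws = rng.beta(1 + successes, 1 + totals - successes, size=(n_draws, len(totals)))
    # Probability each experimental arm (control excluded) has the highest response rate.
    exp_draws = draws[:, 1:]
    p_best = np.bincount(exp_draws.argmax(axis=1), minlength=exp_draws.shape[1]) / n_draws
    return np.concatenate(([control_alloc], (1 - control_alloc) * p_best / p_best.sum()))

# Control plus three experimental arms with interim data.
print(prob_best_allocation(successes=[4, 3, 6, 9], totals=[20, 20, 20, 20]))
```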

4.
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently allocate newly recruited patients to the different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can attain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size.
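For two-arm trials with binary outcomes, the allocation that minimizes the variance of the difference-in-proportions test statistic is the classical Neyman allocation, which assigns patients in proportion to the outcome standard deviations. The sketch below shows only this plug-in calculation; the paper's design additionally handles sequential updating, critical values, and power within a Bayesian framework.

```python
import math

def neyman_allocation(p1, p2):
    """Variance-minimizing (Neyman) allocation fraction for arm 1 when
    comparing two binomial proportions: n1/n = s1 / (s1 + s2),
    where s_k = sqrt(p_k * (1 - p_k)). Plug-in sketch only; the paper's
    design updates this sequentially."""
    s1, s2 = math.sqrt(p1 * (1 - p1)), math.sqrt(p2 * (1 - p2))
    return s1 / (s1 + s2)

# With interim estimates 0.6 vs 0.2, arm 1 (the higher-variance arm) gets more patients.
print(round(neyman_allocation(0.6, 0.2), 3))   # ~0.55
```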

5.
In a clinical trial, response-adaptive randomization (RAR) uses accumulating data to weigh the randomization of remaining patients in favour of the better performing treatment. The aim is to reduce the number of failures within the trial. However, many well-known RAR designs, in particular, the randomized play-the-winner-rule (RPWR), have a highly myopic structure which has sometimes led to unfortunate randomization sequences when used in practice. This paper introduces random permuted blocks into two RAR designs, the RPWR and sequential maximum likelihood estimation, for trials with a binary endpoint. Allocation ratios within each block are restricted to be one of 1:1, 2:1 or 3:1, preventing unfortunate randomization sequences. Exact calculations are performed to determine error rates and expected number of failures across a range of trial scenarios. The results presented show that when compared with equal allocation, block RAR designs give similar reductions in the expected number of failures to their unmodified counterparts. The reductions are typically modest under the alternative hypothesis but become more impressive if the treatment effect exceeds the clinically relevant difference.
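The unblocked randomized play-the-winner rule is a simple urn scheme: a success adds a ball of the assigned treatment, a failure adds a ball of the other treatment, and the next assignment is drawn from the urn. The sketch below simulates that standard rule and then snaps the urn composition to the nearest of the permitted block ratios (1:1, 2:1, 3:1); the mapping from urn composition to block ratio is a hypothetical simplification, not the exact blocked procedure in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def rpw_urn_trial(p_a, p_b, n_patients, init_balls=1):
    """Classic randomized play-the-winner urn: draw a ball to assign the
    treatment; a success adds a ball of the assigned treatment, a failure
    adds a ball of the other treatment. Sketch of the unblocked rule only."""
    balls = {"A": init_balls, "B": init_balls}
    n_a = failures = 0
    for _ in range(n_patients):
        arm = "A" if rng.random() < balls["A"] / (balls["A"] + balls["B"]) else "B"
        success = rng.random() < (p_a if arm == "A" else p_b)
        reinforced = arm if success else ("B" if arm == "A" else "A")
        balls[reinforced] += 1
        n_a += arm == "A"
        failures += not success
    return n_a, failures, balls

def nearest_block_ratio(urn_share_a):
    """Hypothetical mapping from the urn's current share of A-balls to the
    nearest permitted within-block ratio (1:1, 2:1 or 3:1 in either direction);
    the paper's exact blocking rule may differ."""
    ratios = {0.25: "1:3", 1/3: "1:2", 0.5: "1:1", 2/3: "2:1", 0.75: "3:1"}
    return ratios[min(ratios, key=lambda q: abs(q - urn_share_a))]

n_a, failures, balls = rpw_urn_trial(p_a=0.7, p_b=0.4, n_patients=60)
print(f"A assigned {n_a}/60, failures {failures}, "
      f"next block ratio {nearest_block_ratio(balls['A'] / sum(balls.values()))}")
```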

6.
Response-adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either not feasible or are ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: namely, type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is a change in the characteristics of recruited patients, referred to as patient drift. This is a realistic concern for clinical trials in rare diseases because of their lengthy accrual periods. We compute the type I error inflation as a function of the time trend magnitude to determine in which contexts the problem is most exacerbated. We then assess the ability of different correction methods to preserve the type I error in these contexts and their performance in terms of other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare disease context for several RAR rules, differentiating between the 2-armed and the multi-armed case. We further propose a RAR design for multi-armed clinical trials that is computationally efficient and robust to the several time trends considered.
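The type I error inflation caused by patient drift can be reproduced with a small simulation: when both arms share a common upward trend in the response probability and a RAR rule shifts allocation over time, one arm receives disproportionately many late (higher-response) patients and a naive test rejects too often. The sketch below uses a Thompson-type rule, a linear drift, and an unadjusted two-sample z-test purely for illustration; the drift shape, RAR rule, and test are assumptions, not the configurations studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def type1_error_with_drift(drift=0.3, n_patients=100, n_trials=2000, burn_in=20):
    """Estimate the type I error of a naive two-sample z-test when a
    Thompson-type RAR rule is used and both arms share a common upward time
    trend (patient drift). Sketch only, under the assumptions stated above."""
    rejections = 0
    for _ in range(n_trials):
        s = np.zeros(2)
        n = np.zeros(2)
        for i in range(n_patients):
            # Null hypothesis: both arms share the same drifting success probability.
            p_true = 0.3 + drift * i / n_patients
            if i < burn_in:
                arm = i % 2                      # equal allocation during burn-in
            else:
                draws = rng.beta(1 + s, 1 + n - s)
                arm = int(draws[1] > draws[0])   # Thompson-type assignment
            y = rng.random() < p_true
            s[arm] += y
            n[arm] += 1
        p_hat = s / n
        pooled = s.sum() / n.sum()
        se = np.sqrt(pooled * (1 - pooled) * (1 / n[0] + 1 / n[1]))
        z = (p_hat[1] - p_hat[0]) / se
        rejections += abs(z) > 1.96              # two-sided test at nominal 0.05
    return rejections / n_trials

print("Estimated type I error (nominal 0.05):", type1_error_with_drift())
```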

7.
8.
In oncology, toxicity is typically observable shortly after a chemotherapy treatment, whereas efficacy, often characterized by tumor shrinkage, is observable after a relatively long period of time. In a phase II clinical trial design, we propose a Bayesian adaptive randomization procedure that accounts for both efficacy and toxicity outcomes. We model efficacy as a time-to-event endpoint and toxicity as a binary endpoint, sharing common random effects in order to induce dependence between the bivariate outcomes. More generally, we allow the randomization probability to depend on patients' specific covariates, such as prognostic factors. Early stopping boundaries are constructed for toxicity and futility, and a superior treatment arm is recommended at the end of the trial. Following the setup of a recent renal cancer clinical trial at M. D. Anderson Cancer Center, we conduct extensive simulation studies under various scenarios to investigate the performance of the proposed method, and compare it with available Bayesian adaptive randomization procedures.

9.
Doubly adaptive biased coin design (DBCD) is an important family of response-adaptive randomization procedures for clinical trials. It uses sequentially updated estimation to skew the allocation probability to favor the treatment that has performed better thus far. An important assumption for the DBCD is the homogeneity assumption for the patient responses. However, this assumption may be violated in many sequential experiments. Here we prove the robustness of the DBCD against certain time trends in patient responses. Strong consistency and asymptotic normality of the design are obtained under some widely satisfied conditions. Also, we propose a general weighted likelihood method to reduce the bias caused by the heterogeneity in the inference after a trial. Some numerical studies are also presented to illustrate the finite sample properties of DBCD.
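A standard choice of assignment probability for the DBCD is the allocation function of Hu and Zhang (2004), which biases the coin toward the estimated target allocation whenever the observed allocation has drifted away from it. The sketch below implements that function for two arms; the paper's weighted-likelihood correction for heterogeneous responses is not reproduced here, and the gamma value is illustrative.

```python
def dbcd_assignment_prob(x, rho, gamma=2.0):
    """Doubly adaptive biased coin design assignment probability for arm 1.

    x:     current observed allocation proportion of arm 1,
    rho:   current estimated target allocation proportion for arm 1,
    gamma: tuning parameter (gamma = 0 simply assigns with probability rho).

    This is the Hu-Zhang allocation function commonly paired with the DBCD,
    shown as a generic building block rather than the paper's full procedure."""
    if x in (0.0, 1.0):                 # degenerate start: assign the empty arm
        return 1.0 - x
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

# Target says 60% to arm 1, but only 45% have been assigned there so far,
# so the coin is biased above 0.6 to pull the trial back toward its target (~0.83).
print(round(dbcd_assignment_prob(x=0.45, rho=0.60), 3))
```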

10.
Minimization is an alternative method to stratified permuted block randomization, which may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily due to the difficulty in interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before any balance advantage is no longer retained. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations.
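A minimal Pocock-Simon style minimization routine, with a biased-coin assignment probability of the kind recommended above to be tuned by simulation, might look like the sketch below. The marginal-total imbalance measure, the factor structure, and the assignment probability of 0.8 are assumptions for illustration, not the settings of any particular trial.

```python
import numpy as np

rng = np.random.default_rng(4)

def minimization_assign(counts, patient_levels, p_favored=0.8):
    """Pocock-Simon style minimization for two arms.

    counts: array of shape (n_factors, max_levels, 2) holding how many
            patients with each factor level are already on each arm.
    patient_levels: the new patient's level for each factor.
    p_favored: biased-coin probability of assigning the arm that minimizes
               total imbalance (p_favored = 1 is deterministic minimization).
    Weighted and range-based variants exist; this uses a simple sum of
    marginal differences as the imbalance measure."""
    imbalance = np.zeros(2)
    for arm in (0, 1):
        for factor, level in enumerate(patient_levels):
            hypothetical = counts[factor, level].copy()
            hypothetical[arm] += 1
            imbalance[arm] += abs(hypothetical[0] - hypothetical[1])
    if imbalance[0] == imbalance[1]:
        return int(rng.integers(2))              # tie: pure randomization
    favored = int(imbalance.argmin())
    return favored if rng.random() < p_favored else 1 - favored

# Two factors (sex with 2 levels, site with 3 levels), 100 patients.
counts = np.zeros((2, 3, 2), dtype=int)
for _ in range(100):
    levels = (rng.integers(2), rng.integers(3))
    arm = minimization_assign(counts, levels)
    for factor, level in enumerate(levels):
        counts[factor, level, arm] += 1
print("Per-level counts by arm:\n", counts)
```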

11.
Clinical trials often involve longitudinal data sets, which have two important characteristics: repeated, correlated measurements and time-varying covariates. In this paper, we propose a general framework of longitudinal covariate-adjusted response-adaptive (LCARA) randomization procedures. We study their properties under widely satisfied conditions. This design skews the allocation probabilities, which depend both on patients' first observed covariates and on parameters estimated sequentially from the accrued longitudinal responses and covariates. The asymptotic properties of the estimators of the unknown parameters and allocation proportions are established. The special case of binary treatment and continuous responses is studied in detail. Simulation studies and an analysis of the National Cooperative Gallstone Study (NCGS) data are carried out to illustrate the advantages of the proposed LCARA randomization procedure.

12.
Few references deal with response-adaptive randomization procedures for survival outcomes, and those that do either dichotomize the outcomes or use a non-parametric approach. In this paper, the optimal allocation approach and a parametric response-adaptive randomization procedure are used under exponential and Weibull distributions. The optimal allocation proportions are derived for both distributions, and the doubly adaptive biased coin design is applied to target the optimal allocations. The asymptotic variance of the procedure is obtained for the exponential distribution. The effect of the intrinsic delay in observing survival outcomes is also addressed. These findings are based on rigorous theory but are also verified by simulation. It is shown that using a doubly adaptive biased coin design to target the optimal allocation proportion results in more patients being randomized to the better performing treatment without loss of power. We illustrate our procedure by redesigning a clinical trial.

13.
Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on-trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of maximization of an expected utility function. The new method maximizes the expected utility better than the comparators do, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods.

14.
The re-randomization test has been considered a robust alternative to traditional population-model-based methods for analyzing randomized clinical trials. This is especially so when the clinical trials are randomized according to minimization, a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among various re-randomization tests, the fixed-entry-order re-randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and its power is thus compromised. We find that the bias is due to non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated, including the presence of a strong temporal trend.

15.
Clinical trials in the era of precision cancer medicine aim to identify and validate biomarker signatures that can guide the assignment of individually optimal treatments to patients. In this article, we propose a group sequential randomized phase II design, which updates the biomarker signature as the trial goes on, utilizes enrichment strategies for patient selection, and uses Bayesian response-adaptive randomization for treatment assignment. To evaluate the performance of the new design, in addition to the commonly considered criteria of Type I error and power, we propose four new criteria measuring the benefits and losses for individuals both inside and outside of the clinical trial. Compared with designs with equal randomization, the proposed design gives trial participants a better chance to receive their personalized optimal treatments and thus results in a higher response rate on the trial. This design increases the chance of discovering a successful new drug through an adaptive enrichment strategy, i.e., identification and selective enrollment of a subset of patients who are sensitive to the experimental therapies. Simulation studies demonstrate these advantages of the proposed design. It is illustrated by an example based on an actual clinical trial in non-small-cell lung cancer.

16.
Umbrella trials are an innovative trial design where different treatments are matched with subtypes of a disease, with the matching typically based on a set of biomarkers. Consequently, when patients can be positive for more than one biomarker, they may be eligible for multiple treatment arms. In practice, different approaches could be applied to allocate patients who are positive for multiple biomarkers to treatments. However, to date there has been little exploration of how these approaches compare statistically. We conduct a simulation study to compare five approaches to handling treatment allocation in the presence of multiple biomarkers – equal randomisation; randomisation with fixed probability of allocation to control; Bayesian adaptive randomisation (BAR); constrained randomisation; and hierarchy of biomarkers. We evaluate these approaches under different scenarios in the context of a hypothetical phase II biomarker-guided umbrella trial. We define the pairings representing the pre-trial expectations on efficacy as linked pairs, and the other biomarker-treatment pairings as unlinked. The hierarchy and BAR approaches have the highest power to detect a treatment-biomarker linked interaction. However, the hierarchy procedure performs poorly if the pre-specified treatment-biomarker pairings are incorrect. The BAR method allocates a higher proportion of patients who are positive for multiple biomarkers to promising treatments when an unlinked interaction is present. In most scenarios, the constrained randomisation approach best balances allocation to all treatment arms. Pre-specification of an approach to deal with treatment allocation in the presence of multiple biomarkers is important, especially when overlapping subgroups are likely.

17.
We compare posterior and predictive estimators and probabilities in response-adaptive randomization designs for two- and three-group clinical trials with binary outcomes. Adaptation based upon posterior estimates is discussed, as are two predictive probability algorithms: one using the traditional definition, the other using a skeptical distribution. Optimal and natural lead-in designs are covered. Simulation studies show that efficacy comparisons lead to more adaptation than center comparisons, though at some power loss; skeptically predictive efficacy comparisons and natural lead-in approaches lead to less adaptation but offer reduced allocation variability. Though nuanced, these results help clarify the power-adaptation trade-off in adaptive randomization.
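Posterior-based adaptation for binary outcomes typically starts from the posterior probability that one arm's response rate exceeds the other's under a Beta-Binomial model, which is then mapped to an allocation probability. The sketch below estimates that posterior probability by Monte Carlo and applies a common power transformation; the flat priors and the exponent c = 0.5 are illustrative assumptions, and the skeptical-prior predictive variant compared in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

def posterior_prob_superior(s1, n1, s2, n2, a=1.0, b=1.0, n_draws=20000):
    """Monte Carlo estimate of Pr(p1 > p2 | data) under independent
    Beta(a, b) priors and binomial likelihoods."""
    p1 = rng.beta(a + s1, b + n1 - s1, n_draws)
    p2 = rng.beta(a + s2, b + n2 - s2, n_draws)
    return float((p1 > p2).mean())

def adaptive_alloc_prob(prob_superior, c=0.5):
    """Map the posterior probability of superiority to an allocation
    probability via the power transformation pi^c / (pi^c + (1 - pi)^c);
    c = 0 gives equal allocation, c = 1 allocates at pi itself. This is a
    generic posterior-based rule, not necessarily the one compared in the paper."""
    return prob_superior ** c / (prob_superior ** c + (1 - prob_superior) ** c)

pi = posterior_prob_superior(s1=12, n1=30, s2=7, n2=30)
print(f"Pr(p1 > p2 | data) ~= {pi:.3f}; allocation to arm 1 ~= {adaptive_alloc_prob(pi):.3f}")
```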

18.
Data analysis for randomized trials including multi-treatment arms is often complicated by subjects who do not comply with their treatment assignment. We discuss here methods of estimating treatment efficacy for randomized trials involving multi-treatment arms subject to non-compliance. One treatment effect of interest in the presence of non-compliance is the complier average causal effect (CACE) (Angrist et al. 1996), which is defined as the treatment effect for subjects who would comply regardless of the assigned treatment. Following the idea of principal stratification (Frangakis & Rubin 2002), we define principal compliance (Little et al. 2009) in trials with three treatment arms, extend CACE and define causal estimands of interest in this setting. In addition, we discuss structural assumptions needed for estimation of causal effects and the identifiability problem inherent in this setting from both a Bayesian and a classical statistical perspective. We propose a likelihood-based framework that models potential outcomes in this setting and a Bayes procedure for statistical inference. We compare our method with a method of moments approach proposed by Cheng & Small (2006) using a hypothetical data set, and further illustrate our approach with an application to a behavioral intervention study (Janevic et al. 2003).

19.
In a response-adaptive design, we review and update the trial on the basis of outcomes in order to achieve a specific goal. Response-adaptive designs for clinical trials are usually constructed to achieve a single objective. In this paper, we develop a new adaptive allocation rule that improves current strategies for building response-adaptive designs when constructing multiple-objective repeated measurement designs. This new rule is designed to increase estimation precision and treatment benefit by assigning more patients to a better treatment sequence. We demonstrate that designs constructed under the new proposed allocation rule can be nearly as efficient as fixed optimal designs in terms of the mean squared error, while leading to improved patient care.

20.
In a response-adaptive design, we review and update the trial on the basis of outcomes in order to achieve a specific goal. In clinical trials, our goal is to allocate a larger number of patients to the better treatment. In the present paper, we use a response-adaptive design in a two-treatment two-period crossover trial where the treatment responses are continuous. We provide probability measures to choose between the possible treatment combinations AA, AB, BA, or BB. The goal is to use the better treatment combination a larger number of times. We calculate the allocation proportions to the possible treatment combinations and their standard errors. We also derive some asymptotic results and provide solutions to related inferential problems. The proposed procedure is compared with a possible competitor. Finally, we use a data set to illustrate the applicability of our proposed design.
