Similar Articles

20 similar articles found (search time: 375 ms).
1.
ABSTRACT

The clinical trials are usually designed with the implicit assumption that data analysis will occur only after the trial is completed. It is a challenging problem if the sponsor wishes to evaluate the drug efficacy in the middle of the study without breaking the randomization codes. In this article, the randomized response model and mixture model are introduced to analyze the data, masking the randomization codes of the crossover design. Given the probability of treatment sequence, the test of mixture model provides higher power than the test of randomized response model, which is inadequate in the example. The paired t-test has higher powers than both models if the randomization codes are broken. The sponsor may stop the trial early to claim the effectiveness of the study drug if the mixture model concludes a positive result.  相似文献   

2.
In a clinical trial, response-adaptive randomization (RAR) uses accumulating data to weight the randomization of remaining patients in favour of the better-performing treatment. The aim is to reduce the number of failures within the trial. However, many well-known RAR designs, in particular the randomized play-the-winner rule (RPWR), have a highly myopic structure which has sometimes led to unfortunate randomization sequences when used in practice. This paper introduces random permuted blocks into two RAR designs, the RPWR and sequential maximum likelihood estimation, for trials with a binary endpoint. Allocation ratios within each block are restricted to be one of 1:1, 2:1 or 3:1, preventing unfortunate randomization sequences. Exact calculations are performed to determine error rates and the expected number of failures across a range of trial scenarios. The results show that, when compared with equal allocation, block RAR designs give reductions in the expected number of failures similar to their unmodified counterparts. The reductions are typically modest under the alternative hypothesis but become more impressive if the treatment effect exceeds the clinically relevant difference.
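The restricted block allocation described above can be sketched in Python. This is a toy, not the paper's exact design: the rule mapping the observed success-rate gap to a 3:1, 1:3, or 2:2 block, the 0.2 gap threshold, and the true response rates are all assumptions for illustration.

```python
import random

def choose_block_ratio(succ, fail):
    """Pick the next block's allocation (nA, nB) from {2:2, 3:1, 1:3}
    based on observed success rates -- an illustrative rule in the
    spirit of restricted block RAR, not the paper's exact criterion."""
    rates = []
    for arm in (0, 1):
        n = succ[arm] + fail[arm]
        rates.append(succ[arm] / n if n else 0.5)
    if rates[0] - rates[1] > 0.2:
        return (3, 1)            # arm A clearly ahead
    if rates[1] - rates[0] > 0.2:
        return (1, 3)            # arm B clearly ahead
    return (2, 2)                # no clear leader: equal allocation

def permuted_block(n_a, n_b, rng):
    """A randomly permuted block of arm labels (0 = A, 1 = B)."""
    block = [0] * n_a + [1] * n_b
    rng.shuffle(block)
    return block

rng = random.Random(1)
succ, fail = [0, 0], [0, 0]
p_true = (0.7, 0.4)              # hypothetical true success rates
assignments = []
for _ in range(10):              # 10 blocks of 4 patients
    for arm in permuted_block(*choose_block_ratio(succ, fail), rng):
        assignments.append(arm)
        (succ if rng.random() < p_true[arm] else fail)[arm] += 1

print(assignments.count(0), assignments.count(1))
```

Because every block contains at least one patient on each arm, no overall allocation more extreme than 3:1 per block can occur, which is the safeguard against unfortunate randomization sequences.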

3.
Clinical trials in the era of precision cancer medicine aim to identify and validate biomarker signatures that can guide the assignment of individually optimal treatments to patients. In this article, we propose a group sequential randomized phase II design, which updates the biomarker signature as the trial progresses, utilizes enrichment strategies for patient selection, and uses Bayesian response-adaptive randomization for treatment assignment. To evaluate the performance of the new design, in addition to the commonly considered criteria of Type I error and power, we propose four new criteria measuring the benefits and losses for individuals both inside and outside of the clinical trial. Compared with designs with equal randomization, the proposed design gives trial participants a better chance to receive their personalized optimal treatments and thus results in a higher response rate on the trial. The design increases the chance of discovering a successful new drug through an adaptive enrichment strategy, i.e., the identification and selective enrollment of a subset of patients who are sensitive to the experimental therapies. Simulation studies demonstrate these advantages of the proposed design. It is illustrated by an example based on an actual clinical trial in non-small-cell lung cancer.

4.
The power of randomized controlled clinical trials to demonstrate the efficacy of a drug compared with a control group depends not just on how efficacious the drug is, but also on the variation in patients' outcomes. Adjusting for prognostic covariates during trial analysis can reduce this variation. For this reason, the primary statistical analysis of a clinical trial is often based on regression models that, besides terms for treatment and possibly further terms (e.g., stratification factors used in the randomization scheme of the trial), also include a baseline (pre-treatment) assessment of the primary outcome. We suggest including a “super-covariate”, that is, a patient-specific prediction of the control-group outcome, as a further covariate (but not as an offset). We train a prognostic model, or an ensemble of such models, on the individual patient (or aggregate) data of other studies in similar patients, but not on the new trial under analysis. This has the potential to use historical data to increase the power of clinical trials; it avoids the concern of Type I error inflation associated with Bayesian approaches but, in contrast to them, has a greater benefit for larger sample sizes. It is important for the prognostic models behind “super-covariates” to generalize well across different patient populations, so that they reduce unexplained variability similarly whether or not the trials used to develop the model are identical to the new trial. In an example in neovascular age-related macular degeneration we saw efficiency gains from the use of a “super-covariate”.
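The variance-reduction mechanism can be illustrated with a toy simulation, assuming the prognostic prediction is well calibrated (outcome = prediction + treatment effect + noise) and using a simple one-covariate ANCOVA-type estimator in place of the abstract's full regression model:

```python
import random
import statistics

def mean_diff(v, z):
    """Mean of v in arm 1 minus mean in arm 0."""
    a = [vi for vi, zi in zip(v, z) if zi == 1]
    b = [vi for vi, zi in zip(v, z) if zi == 0]
    return statistics.fmean(a) - statistics.fmean(b)

def adjusted_effect(y, x, z):
    """ANCOVA-style estimate: unadjusted mean difference minus the pooled
    within-arm slope of y on x times the between-arm imbalance in x. Here
    x plays the role of the 'super-covariate' (a prognostic prediction of
    the control-group outcome)."""
    num = den = 0.0
    for g in (0, 1):
        xs = [xi for xi, zi in zip(x, z) if zi == g]
        ys = [yi for yi, zi in zip(y, z) if zi == g]
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        num += sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        den += sum((a - mx) ** 2 for a in xs)
    return mean_diff(y, z) - (num / den) * mean_diff(x, z)

rng = random.Random(0)
raw, adj = [], []
for _ in range(200):                      # 200 simulated trials
    z = [i % 2 for i in range(100)]       # 1:1 allocation, n = 100
    pred = [rng.gauss(0, 2) for _ in z]   # prognostic prediction per patient
    y = [p + 0.5 * zi + rng.gauss(0, 1) for p, zi in zip(pred, z)]
    raw.append(mean_diff(y, z))
    adj.append(adjusted_effect(y, pred, z))

print(statistics.stdev(raw), statistics.stdev(adj))
```

Under these assumptions the adjusted estimator has visibly smaller spread across simulated trials while remaining centered on the true effect of 0.5.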

5.
Random assignment of experimental units to treatment and control groups is a conventional device to create unbiased comparisons. However, when sample sizes are small and the units differ considerably, there is a significant risk that randomization will create seriously unbalanced partitions of the units into treatment and control groups. We develop and evaluate an alternative to complete randomization for small-sample comparisons involving ordinal data with partial information on the ranks of units. For instance, we might know that, of eight units, Rank(A) < Rank(C), Rank(A) < Rank(E), and Rank(D) < Rank(H). We develop an efficient computational procedure to use such information as the basis for restricted randomization of units to the treatment group. We compare our methods to complete randomization in the context of the Mann-Whitney test. With sufficient ranking information, the restricted randomization results in more powerful comparisons.
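The Mann-Whitney comparison mentioned above can be carried out as an exact randomization test by enumerating all possible treatment assignments. A self-contained sketch with hypothetical outcomes for eight units (complete randomization only; the authors' rank-based restriction is not implemented here, and ties are ignored for simplicity):

```python
from itertools import combinations

def mann_whitney_u(treat, ctrl):
    """U statistic: number of (treatment, control) pairs where the
    treatment value exceeds the control value."""
    return sum(t > c for t in treat for c in ctrl)

def exact_p_value(values, treat_idx):
    """Exact one-sided randomization p-value of the Mann-Whitney U
    statistic under complete randomization of len(treat_idx) of the
    units to treatment."""
    n, k = len(values), len(treat_idx)
    observed = mann_whitney_u(
        [values[i] for i in treat_idx],
        [values[i] for i in range(n) if i not in treat_idx])
    count = total = 0
    for subset in combinations(range(n), k):
        s = set(subset)
        u = mann_whitney_u(
            [values[i] for i in s],
            [values[i] for i in range(n) if i not in s])
        total += 1
        if u >= observed:
            count += 1
    return count / total

vals = [3, 5, 6, 8, 9, 12, 14, 20]     # hypothetical outcomes for 8 units
p = exact_p_value(vals, {4, 5, 6, 7})  # treatment got the four largest
print(p)                               # 1/70: only one assignment is as extreme
```

With eight units and a 4/4 split there are only C(8,4) = 70 assignments, which is why restricting the randomization distribution matters so much at these sample sizes.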

6.
7.
In a clinical trial comparing two treatment groups, one commonly‐used endpoint is time to death. Another is time until the first nonfatal event (if there is one) or until death (if not). Both endpoints have drawbacks. The wrong choice may adversely affect the value of the study by impairing power if deaths are too few (with the first endpoint) or by lessening the role of mortality if not (with the second endpoint). We propose a compromise that provides a simple test based on the time to death if the patient has died or time since randomization augmented by an increment otherwise. The test applies the ordinary two‐sample Wilcoxon statistic to these values. The formula for the increment (the same for experimental and control patients) must be specified before the trial starts. In the simplest (and perhaps most useful) case, the increment assumes only two values, according to whether or not the (surviving) patient had a nonfatal event. More generally, the increment depends on the time of the first nonfatal event, if any, and the time since randomization. The test has correct Type I error even though it does not handle censoring in a customary way. For conditions where investigators would face no easy (advance) choice between the two older tests, simulation results favor the new test. An example using a renal‐cancer trial is presented. Copyright © 2015 John Wiley & Sons, Ltd.
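The proposed analysis values are easy to compute once the increment formula is fixed; in this sketch the two increment values (for survivors with and without a nonfatal event) are hypothetical placeholders for the prespecified formula, and a plain rank-sum is computed with midranks for ties:

```python
import statistics

def composite_value(death_time, followup, had_event,
                    inc_event=2.0, inc_none=6.0):
    """Analysis value in the spirit of the proposal: time to death if the
    patient died, otherwise follow-up time plus a prespecified increment
    that is smaller when a nonfatal event occurred. The increment values
    here are hypothetical."""
    if death_time is not None:
        return death_time
    return followup + (inc_event if had_event else inc_none)

def rank_sum(group_a, group_b):
    """Wilcoxon rank-sum statistic for group A, using midranks for ties."""
    pooled = sorted(group_a + group_b)
    def midrank(v):
        lo = pooled.index(v)          # 0-based first position of v
        hi = lo + pooled.count(v)     # 0-based position past the last v
        return (lo + 1 + hi) / 2      # average of the 1-based ranks
    return sum(midrank(v) for v in group_a)

# (death_time, followup, had_nonfatal_event) -- hypothetical patients
arm_exp = [(None, 12, False), (None, 12, True), (8, 8, True)]
arm_ctl = [(None, 12, True), (5, 5, False), (3, 3, True)]
a = [composite_value(*p) for p in arm_exp]
b = [composite_value(*p) for p in arm_ctl]
print(a, b, rank_sum(a, b))
```

Note that an event-free survivor outranks a survivor with a nonfatal event at the same follow-up, who in turn outranks any patient who died, mirroring the clinical ordering the composite endpoint is meant to encode.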

8.
Phase II clinical trials designed to evaluate a drug's treatment effect can be either single‐arm or double‐arm. A single‐arm design tests the null hypothesis that the response rate of a new drug is lower than a fixed threshold, whereas a double‐arm scheme makes a more objective comparison of the response rate between the new treatment and the standard of care through randomization. Although the randomized design is the gold standard for efficacy assessment, various situations may arise in which a single‐arm pilot study prior to a randomized trial is necessary. To combine the single‐ and double‐arm phases and pool the information together for better decision making, we propose a Single‐To‐double ARm Transition design (START) with switching hypothesis tests, in which the first stage compares the new drug's response rate with a minimum required level and imposes a continuation criterion, and the second stage uses randomization to determine the treatment's superiority. We develop a software package in R to calibrate the frequentist error rates and perform simulation studies to assess the trial characteristics. Finally, a metastatic pancreatic cancer trial is used to illustrate the decision rules under the proposed START design.

9.
We study the design of multi-armed parallel group clinical trials to estimate personalized treatment rules that identify the best treatment for a given patient with given covariates. Assuming that the outcomes in each treatment arm are given by a homoscedastic linear model, with possibly different variances between treatment arms, and that the trial subjects form a random sample from an unselected overall population, we optimize the (possibly randomized) treatment allocation allowing the allocation rates to depend on the covariates. We find that, for the case of two treatments, the approximately optimal allocation rule does not depend on the value of the covariates but only on the variances of the responses. In contrast, for the case of three treatments or more, the optimal treatment allocation does depend on the values of the covariates as well as the true regression coefficients. The methods are illustrated with a recently published dietary clinical trial.

10.
Existing statutes in the United States and Europe require manufacturers to demonstrate evidence of effectiveness through the conduct of adequate and well‐controlled studies to obtain marketing approval of a therapeutic product. What constitutes adequate and well‐controlled studies is usually interpreted as randomized controlled trials (RCTs). However, these trials are sometimes unfeasible because of their size, duration, cost, patient preference, or, in some cases, ethical concerns. For example, RCTs may not be fully powered in rare diseases or in infections caused by multidrug‐resistant pathogens because of the low number of enrollable patients. In this case, data available from external controls (including historical controls and observational studies or data registries) can complement the information provided by the RCT. Propensity score matching methods can be used to select or “borrow” additional patients from the external controls to maintain a one‐to‐one randomization between the treatment arm and active control, by matching the new treatment and control units based on a set of measured covariates, i.e., model‐based pairing of treatment and control units that are similar in terms of their observable pretreatment characteristics. To this end, two matching schemes based on propensity scores are explored and applied to a real clinical data example, with the objective of using historical or external observations to augment data in a trial where the randomization is disproportionate or asymmetric.
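A minimal sketch of one possible matching scheme: greedy 1:1 nearest-neighbour matching with a caliper on precomputed propensity scores. Score estimation (e.g., by logistic regression on pretreatment covariates) is assumed to have happened upstream, and this is not necessarily either of the two schemes the abstract explores:

```python
def greedy_match(trt_scores, ext_scores, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.
    Returns (treated_index, external_index) pairs; treated units whose
    closest remaining external control lies outside the caliper stay
    unmatched. Caliper of 0.1 is a hypothetical choice."""
    available = dict(enumerate(ext_scores))   # unmatched external controls
    pairs = []
    # process treated units in score order to keep the sketch deterministic
    for i, s in sorted(enumerate(trt_scores), key=lambda kv: kv[1]):
        if not available:
            break
        j = min(available, key=lambda k: abs(available[k] - s))
        if abs(available[j] - s) <= caliper:
            pairs.append((i, j))
            del available[j]                  # each control used at most once
    return pairs

trt = [0.30, 0.55, 0.80]          # hypothetical treated-arm scores
ext = [0.28, 0.33, 0.60, 0.95]    # hypothetical external-control scores
print(greedy_match(trt, ext))
```

Here the third treated unit stays unmatched because its nearest available control (score 0.95) falls outside the caliper, illustrating the trade-off between borrowing more controls and keeping matched pairs comparable.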

11.
Stratified randomization based on the baseline value of the primary analysis variable is common in clinical trial design. We illustrate from a theoretical viewpoint the advantage of such stratified randomization in achieving balance of the baseline covariate. We also conclude that the estimator of the treatment effect is consistent when both the continuous baseline covariate and the stratification factor derived from it are included. In addition, the analysis of covariance model including both the continuous covariate and the stratification factor is asymptotically no less efficient than one including either only the continuous baseline value or only the stratification factor. We recommend that the continuous baseline covariate generally be included in the analysis model. The corresponding stratification factor may also be included if one is not confident that the relationship between the baseline covariate and the response variable is linear. Notwithstanding this recommendation, one should always carefully examine relevant historical data to pre-specify the most appropriate analysis model for a prospective study.

12.
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to allocate newly recruited patients to the treatment arms more efficiently. In this paper, we consider 2‐arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. A sensitivity analysis is implemented to check the influence of the choice of prior distributions on the design. Simulation studies compare the proposed method and traditional methods in terms of power and actual sample size. Simulations show that, when the total sample size is fixed, the proposed design can attain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample size; the proposed method can further reduce the required sample size.
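For a binary endpoint, the randomization rate that minimizes the variance of the difference-in-proportions estimator is the classical Neyman allocation; the Bayesian design above would update such a rate sequentially from posterior estimates, but the fixed-rate version is a one-liner:

```python
import math

def neyman_allocation(p1, p2):
    """Fraction of patients allocated to arm 1 that minimises the variance
    of the estimated difference in response rates: allocate proportionally
    to each arm's outcome standard deviation sqrt(p(1-p))."""
    s1 = math.sqrt(p1 * (1 - p1))
    s2 = math.sqrt(p2 * (1 - p2))
    return s1 / (s1 + s2)

# Example: a near-certain responder arm needs fewer patients
print(neyman_allocation(0.5, 0.9))   # allocates more to the noisier arm
```

In a sequential design the true rates are unknown, so the rule would be applied to running estimates (or posterior means), which is exactly where the adaptive randomization rate in the abstract comes from.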

13.
Few references deal with response-adaptive randomization procedures for survival outcomes, and those that do either dichotomize the outcomes or use a nonparametric approach. In this paper, the optimal allocation approach and a parametric response-adaptive randomization procedure are used under exponential and Weibull distributions. The optimal allocation proportions are derived for both distributions, and the doubly adaptive biased coin design is applied to target the optimal allocations. The asymptotic variance of the procedure is obtained for the exponential distribution. The effect of the intrinsic delay of survival outcomes is also examined. These findings are based on rigorous theory but are also verified by simulation. It is shown that using a doubly adaptive biased coin design to target the optimal allocation proportion results in more patients being randomized to the better-performing treatment without loss of power. We illustrate our procedure by redesigning a clinical trial.
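Targeting an allocation proportion with the doubly adaptive biased coin design uses an allocation function such as the Hu-Zhang rule sketched below. The Neyman-type target shown (allocate proportionally to the estimated means, since an exponential distribution's standard deviation equals its mean) is one natural candidate, not necessarily the optimal proportion derived in the paper:

```python
def hu_zhang_prob(x1, rho1, gamma=2.0):
    """Hu-Zhang allocation function for the doubly adaptive biased coin
    design: x1 is the current proportion of patients on arm 1, rho1 the
    estimated target allocation. gamma tunes how aggressively the design
    corrects deviations from the target."""
    if x1 in (0.0, 1.0):                 # degenerate edge cases
        return 1.0 - x1
    a = rho1 * (rho1 / x1) ** gamma
    b = (1 - rho1) * ((1 - rho1) / (1 - x1)) ** gamma
    return a / (a + b)

def neyman_target_exponential(mean1, mean2):
    """Neyman-type target for exponential outcomes: s.d. equals the mean,
    so allocate proportionally to the estimated mean survival times."""
    return mean1 / (mean1 + mean2)

rho = neyman_target_exponential(10.0, 5.0)   # estimated means: 10 vs 5
print(rho, hu_zhang_prob(0.5, rho))          # under-allocated arm is favoured
```

When the current proportion already equals the target, the rule simply randomizes at the target rate; when the trial is under-allocated to an arm, it pushes the next assignment probability toward that arm.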

14.
We propose a two‐stage design for a single‐arm clinical trial with an early stopping rule for futility. This design employs different endpoints to assess early stopping and efficacy. The early stopping rule is based on a criterion determined more quickly than that for efficacy. These separate criteria are also nested, in the sense that efficacy is a special case of, but usually not identical to, the early stopping endpoint. The design readily allows for planning in terms of statistical significance, power, expected sample size, and expected duration. This method is illustrated with a phase II design comparing rates of disease progression in elderly patients treated for lung cancer to rates found using a historical control. In this example, the early stopping rule is based on the number of patients who exhibit progression‐free survival (PFS) at 2 months post‐treatment follow‐up. Efficacy is judged by the number of patients who have PFS at 6 months. We demonstrate that our design has expected sample size and power comparable with the Simon two‐stage design but exhibits shorter expected duration under a range of useful parameter values.
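The stage-1 futility look on the early endpoint behaves like a Simon-style binomial stopping rule, so its early-termination probabilities follow directly from the binomial CDF. All design parameters below are hypothetical, not values from the paper:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Stage-1 futility look on the early endpoint (e.g., PFS at 2 months):
# stop if at most r1 of the first n1 patients are progression-free.
n1, r1 = 20, 8
p0, p1 = 0.4, 0.6                  # uninteresting vs. hoped-for PFS rate
pet_null = binom_cdf(r1, n1, p0)   # prob. of early termination under p0
pet_alt = binom_cdf(r1, n1, p1)    # ...and under p1 (should be small)
print(round(pet_null, 3), round(pet_alt, 3))
```

A useful threshold stops often under the uninteresting rate but rarely under the hoped-for rate; because the early endpoint matures at 2 months rather than 6, this is also where the design's shorter expected duration comes from.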

15.
Despite advances in clinical trial design, failure rates near 80% in phase 2 and 50% in phase 3 have recently been reported. The challenges to successful drug development are particularly acute in central nervous system trials, such as those for pain, schizophrenia, mania, and depression, because high placebo response rates lessen assay sensitivity, diminish estimated treatment effect sizes, and thereby decrease statistical power. This paper addresses the importance of rigorous patient selection in major depressive disorder trials through an enhanced enrichment paradigm. This approach led to a redefinition of the patient-inclusion algorithm of an ongoing, blinded phase 3 trial (1) to eliminate further randomization of transient placebo responders and (2) to exclude previously randomized transient responders from the primary analysis of the double‐blind phase of the trial. It is illustrated with a case study comparing brexpiprazole + antidepressant therapy against placebo + antidepressant therapy. Analysis of the primary endpoint showed that the efficacy of brexpiprazole versus placebo could not be established statistically if the original algorithm for identification of placebo responders was used, but the enhanced enrichment approach did statistically demonstrate efficacy. Additionally, the enhanced enrichment approach identified a target population with a clinically meaningful treatment effect. Through its successful identification of a target population, the enhanced enrichment approach enabled the demonstration of a positive treatment effect in a very challenging area of depression research.

16.
Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well‐known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph‐based tests. We generalize these graph‐based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid‐trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. To maintain the familywise error rate, it is not required to prespecify the adaptation rule in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios, including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are the dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, in the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. Only if adaptations are actually implemented does an adjusted test need to be applied. The procedure is illustrated with a case study, and its operating characteristics are investigated by simulations. Copyright © 2014 John Wiley & Sons, Ltd.  
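The non-adaptive graph-based tests that this work generalizes can be implemented with the standard weight-propagation algorithm: reject any hypothesis whose p-value falls below its weighted local level, pass its weight along the outgoing edges, update the graph, and repeat. A sketch (the two-hypothesis graph at the end is a toy example):

```python
def graph_procedure(pvalues, alpha, w, G):
    """Sequentially rejective graphical multiple testing procedure:
    w[i] are initial hypothesis weights (summing to at most 1) and
    G[i][j] are transition weights (rows summing to at most 1, with
    G[i][i] = 0). Returns the set of rejected hypothesis indices."""
    m = len(pvalues)
    w = list(w)
    G = [row[:] for row in G]
    active = set(range(m))
    rejected = set()
    while True:
        cand = [j for j in active if w[j] > 0 and pvalues[j] <= w[j] * alpha]
        if not cand:
            return rejected
        j = cand[0]                 # rejection order does not affect the result
        rejected.add(j)
        active.discard(j)
        # propagate j's weight and rewire the graph among remaining hypotheses
        new_w = {i: w[i] + w[j] * G[j][i] for i in active}
        new_G = {}
        for i in active:
            denom = 1 - G[i][j] * G[j][i]
            for k in active:
                if k != i:
                    new_G[i, k] = ((G[i][k] + G[i][j] * G[j][k]) / denom
                                   if denom > 0 else 0.0)
        for i in active:
            w[i] = new_w[i]
            for k in active:
                if k != i:
                    G[i][k] = new_G[i, k]

# Two hypotheses splitting alpha equally, with full propagation between them
w0 = [0.5, 0.5]
G0 = [[0, 1], [1, 0]]
print(graph_procedure([0.01, 0.04], 0.05, w0, G0))
print(graph_procedure([0.03, 0.04], 0.05, w0, G0))
```

In the first call, rejecting the first hypothesis at level 0.025 passes its weight to the second, which is then tested at the full 0.05; in the second call neither p-value clears its local level, so nothing is rejected.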

17.
For comparing two treatment effects in a clinical trial under a univariate set-up, a sampling design called the randomized play-the-winner (RPW) rule has been used by different authors (see, e.g., Wei, 1979; Wei, 1988; Wei and Durham, 1978). The objective of using such a rule is to allocate more patients to the better treatment. The present work suggests a bivariate version of the RPW rule. The rule is used to propose some sequential-type nonparametric tests for the equivalence of two bivariate treatment effects against a class of restricted alternatives. Some exact and asymptotic results associated with the tests are studied and examined. The limiting proportions of allocations to the two treatments are also obtained.

18.
Use of full Bayesian decision-theoretic approaches to obtain optimal stopping rules for clinical trial designs typically requires the use of Backward Induction. However, the implementation of Backward Induction, apart from simple trial designs, is generally impossible due to analytical and computational difficulties. In this paper we present a numerical approximation of Backward Induction in a multiple-arm clinical trial design comparing k experimental treatments with a standard treatment, where patient response is binary. We propose a novel stopping rule, denoted by τ_p, as an approximation of the optimal stopping rule, using the optimal stopping rule of a single-arm clinical trial obtained by Backward Induction. We then present an example of a double-arm (k=2) clinical trial in which we use a simulation-based algorithm together with τ_p to estimate the expected utility of continuing, and we compare our estimates with exact values obtained by an implementation of Backward Induction. For trials with more than two treatment arms, we evaluate τ_p by studying its operating characteristics in a three-arm trial example. Results from these examples show that our approximate trial design has attractive properties and hence offers a relevant solution to the problem posed by Backward Induction.
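Backward Induction itself is easy to state for a toy problem, which is what makes the curse of dimensionality in realistic designs so striking. The sketch below solves a single-arm binary trial with a Beta(1,1) prior and a made-up utility (one unit per response, a fixed cost per patient enrolled); the horizon, cost, and utility are assumptions for illustration, not the paper's k-arm setting:

```python
from functools import lru_cache

COST, HORIZON = 0.45, 10    # hypothetical per-patient cost and max patients

@lru_cache(maxsize=None)
def value(successes, treated):
    """Optimal expected net reward (responses minus per-patient costs) for
    the remainder of a toy single-arm binary trial, by Backward Induction
    under a Beta(1, 1) prior on the response rate. The memoised recursion
    fills the state table from the horizon backwards."""
    if treated == HORIZON:
        return 0.0
    p = (successes + 1) / (treated + 2)        # posterior mean response rate
    cont = (p - COST
            + p * value(successes + 1, treated + 1)
            + (1 - p) * value(successes, treated + 1))
    return max(0.0, cont)                      # stop (worth 0) vs. continue

def should_continue(successes, treated):
    """True if continuing beats stopping in this state."""
    if treated == HORIZON:
        return False
    p = (successes + 1) / (treated + 2)
    cont = (p - COST
            + p * value(successes + 1, treated + 1)
            + (1 - p) * value(successes, treated + 1))
    return cont > 0

print(value(0, 0), should_continue(2, 2))
```

The state space here is only O(HORIZON²); with k arms the state is a vector of per-arm counts and the table grows combinatorially, which is precisely the difficulty the approximate rule τ_p is designed to sidestep.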

19.
Minimization is an alternative method to stratified permuted block randomization, which may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily due to the difficulty in interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before any balance advantage is no longer retained. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations.
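A minimization scheme of the kind discussed above can be simulated in a few lines. The sketch uses a Pocock-Simon-style marginal-imbalance criterion with an assignment probability of 0.8, which is exactly the kind of parameter the abstract recommends choosing via simulation; the 0.8 value and the two binary prognostic factors are hypothetical:

```python
import random

def minimization_assign(patient, counts, p_best=0.8, rng=random):
    """Pocock-Simon-style minimization: put the patient on the arm that
    minimises total marginal imbalance with probability p_best, otherwise
    on the other arm. patient is a tuple of factor levels;
    counts[factor][level][arm] holds running totals and is updated."""
    def imbalance(arm):
        total = 0
        for factor, level in enumerate(patient):
            c = counts[factor][level][:]   # hypothetical post-assignment count
            c[arm] += 1
            total += abs(c[0] - c[1])
        return total
    best = 0 if imbalance(0) <= imbalance(1) else 1
    arm = best if rng.random() < p_best else 1 - best
    for factor, level in enumerate(patient):
        counts[factor][level][arm] += 1
    return arm

rng = random.Random(7)
# counts[factor][level][arm]; two binary prognostic factors
counts = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
arms = [minimization_assign((rng.randrange(2), rng.randrange(2)),
                            counts, rng=rng)
        for _ in range(100)]
print(arms.count(0), arms.count(1))
```

Repeating such runs under different assignment probabilities, and tallying both the marginal imbalances and the observed test size of the planned analysis, is the simulation exercise the authors recommend before committing to minimization.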

20.
The conventional phase II trial design paradigm is to make the go/no-go decision based on the hypothesis testing framework. Statistical significance alone, however, may not be sufficient to establish that the drug is clinically effective enough to warrant confirmatory phase III trials. We propose the Bayesian optimal phase II trial design with dual-criterion decision making (BOP2-DC), which incorporates both statistical significance and clinical relevance into decision making. Based on the posterior probability that the treatment effect reaches the lower reference value (statistical significance) and the clinically meaningful value (clinical significance), BOP2-DC allows for go/consider/no-go decisions, rather than a binary go/no-go decision. BOP2-DC is highly flexible and accommodates various types of endpoints, including binary, continuous, time-to-event, multiple, and coprimary endpoints, in single-arm and randomized trials. The decision rule of BOP2-DC is optimized to maximize the probability of a go decision when the treatment is effective or to minimize the expected sample size when the treatment is futile. Simulation studies show that the BOP2-DC design yields desirable operating characteristics. The software to implement BOP2-DC is freely available at www.trialdesign.org.
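For a binary endpoint with a conjugate Beta prior, the two posterior probabilities driving a go/consider/no-go decision are cheap to compute exactly. The reference values and cutoffs below are hypothetical illustrations, not the optimized values BOP2-DC would derive:

```python
from math import comb

def post_prob_exceeds(successes, n, prior_a, prior_b, threshold):
    """P(response rate > threshold | data) under a Beta(prior_a, prior_b)
    prior, via the binomial-sum identity for integer Beta parameters:
    P(Beta(a, b) > t) = P(Binomial(a + b - 1, t) <= a - 1)."""
    a = prior_a + successes
    b = prior_b + n - successes
    m = a + b - 1
    return sum(comb(m, k) * threshold**k * (1 - threshold)**(m - k)
               for k in range(a))        # k = 0 .. a-1

def dual_criterion_decision(successes, n, null=0.2, target=0.4,
                            stat_cut=0.9, clin_cut=0.9):
    """Go / consider / no-go in the spirit of dual-criterion decision
    making: 'go' needs both statistical and clinical criteria, 'consider'
    needs only the statistical one. All thresholds are hypothetical."""
    p_stat = post_prob_exceeds(successes, n, 1, 1, null)     # significance
    p_clin = post_prob_exceeds(successes, n, 1, 1, target)   # relevance
    if p_stat >= stat_cut and p_clin >= clin_cut:
        return "go"
    if p_stat >= stat_cut:
        return "consider"
    return "no-go"

print(dual_criterion_decision(25, 40),
      dual_criterion_decision(18, 40),
      dual_criterion_decision(5, 40))
```

The middle case is the interesting one: 18/40 responders clearly beats the 0.2 reference value (statistical significance) but does not convincingly exceed the 0.4 clinically meaningful value, so the three-way rule returns "consider" where a binary rule would be forced to choose.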


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)