Similar Articles
A total of 20 similar articles were found.
1.
Proschan, Brittain, and Kammerman made a very interesting observation: for some examples of unequal-allocation minimization, the mean of the unconditional randomization distribution is shifted away from 0. Kuznetsova and Tymofyeyev linked this phenomenon to the variations in the allocation ratio from allocation to allocation in the examples considered by Proschan et al. and advocated the use of unequal allocation procedures that preserve the allocation ratio at every step. In this paper, we show that the shift phenomenon extends to very common settings: the use of a conditional randomization test in a study with equal allocation. This previously unnoted phenomenon has the same cause: variations in the allocation ratio among the allocation sequences in the conditional reference set. We consider two kinds of conditional randomization tests. The first is the often-used randomization test that conditions on the treatment group totals; we describe the variations in the conditional allocation ratio with this test using examples of permuted block randomization and biased coin randomization. The second is the randomization test proposed by Zheng and Zelen for a multicenter trial with permuted block central allocation, which conditions on the within-center treatment totals. On the basis of the sequence of conditional allocation ratios, we derive the value of the shift in the conditional randomization distribution for a specific vector of responses and the expected value of the shift when responses are independent identically distributed random variables. We discuss the asymptotic behavior of the shift for the two types of tests.
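A small illustration of the shift may help. The sketch below is an illustrative construction, not the paper's derivation (the bias parameter, sample size, and response vector are all assumed): under Efron's biased coin design, conditioning on an observed 4:2 split, the sequences in the conditional reference set carry unequal probabilities, so the per-position allocation probabilities vary and the mean of the conditional randomization distribution of the difference in means moves away from 0 for a fixed response vector.

```python
# Hypothetical demo: conditional randomization distribution under Efron's
# biased coin design, conditioning on an observed 4:2 treatment split.
from itertools import product

def efron_prob(seq, p=2/3):
    """Probability of a 0/1 allocation sequence under Efron's biased coin."""
    prob, n1, n0 = 1.0, 0, 0
    for a in seq:
        d = n1 - n0                       # current imbalance
        if d == 0:
            prob *= 0.5
        else:
            favoured = 0 if d > 0 else 1  # favour the lagging arm
            prob *= p if a == favoured else (1 - p)
        n1 += a
        n0 += 1 - a
    return prob

n = 6
y = [2.1, 0.5, 1.3, -0.7, 0.9, 1.8]       # fixed (illustrative) responses

# Conditional reference set: all sequences with the observed totals 4:2.
ref = [s for s in product((0, 1), repeat=n) if sum(s) == 4]
w = [efron_prob(s) for s in ref]
W = sum(w)

# Per-position conditional allocation probabilities vary across positions...
pi = [sum(wi for wi, s in zip(w, ref) if s[i] == 1) / W for i in range(n)]
print("conditional allocation probabilities:", [round(x, 3) for x in pi])

def diff_in_means(s):
    y1 = [yi for yi, a in zip(y, s) if a == 1]
    y0 = [yi for yi, a in zip(y, s) if a == 0]
    return sum(y1) / len(y1) - sum(y0) / len(y0)

# ...which shifts the mean of the conditional randomization distribution
# away from 0 (a uniform permutation test over `ref` would have mean 0).
shift = sum(wi * diff_in_means(s) for wi, s in zip(w, ref)) / W
print(f"mean shift: {shift:.4f}")
```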

2.
Minimization is an alternative to stratified permuted block randomization that may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily because of the difficulty of interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before the balance advantage is lost. We also illustrate by example how the performance of the traditional model-based analysis can be assessed by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations.
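As a concrete starting point for such simulations, here is a minimal sketch (not the authors' code; the factor structure, sample size, and candidate assignment probabilities are assumptions) of a Pocock-Simon-style minimization rule whose assignment probability p can be varied to see how quickly the balance advantage erodes as more randomness is added.

```python
# Simulate a Pocock-Simon minimization rule and summarise treatment imbalance.
import random

def minimization_trial(n=200, p=0.8, levels=(2, 3, 4), seed=None):
    rng = random.Random(seed)
    # counts[f][level][arm]: patients per factor level per arm
    counts = [[[0, 0] for _ in range(k)] for k in levels]
    arms = [0, 0]
    for _ in range(n):
        profile = [rng.randrange(k) for k in levels]
        # marginal imbalance score if this patient were put on each arm
        score = []
        for arm in (0, 1):
            s = 0
            for f, lev in enumerate(profile):
                c = counts[f][lev][:]
                c[arm] += 1
                s += abs(c[0] - c[1])
            score.append(s)
        if score[0] == score[1]:
            arm = rng.randrange(2)
        else:
            best = 0 if score[0] < score[1] else 1
            arm = best if rng.random() < p else 1 - best  # assignment prob. p
        arms[arm] += 1
        for f, lev in enumerate(profile):
            counts[f][lev][arm] += 1
    return abs(arms[0] - arms[1])

for p in (1.0, 0.8, 0.7):
    imb = [minimization_trial(p=p, seed=i) for i in range(2000)]
    print(f"p={p}: mean |n1-n2| = {sum(imb)/len(imb):.2f}, max = {max(imb)}")
```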

3.
This paper deals with the analysis of randomization effects in multi-centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre-stratified block-permuted randomization. We investigate the prediction of the number of patients randomized to the different treatment arms in different regions during the recruitment period, accounting for the stochastic nature of recruitment and the effects of multiple centres. A new analytic approach using a Poisson-gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma-distributed population) and its further extensions is proposed. Closed-form expressions for the corresponding distributions of the predicted number of patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on the treatment arms caused by centre-stratified randomization are investigated, and for a large number of centres a normal approximation of the imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated, and the impact of randomization on predicted drug supply overage is discussed.
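The recruitment model is easy to simulate even where the closed forms are not needed. The sketch below (the parameter values, region sizes, and recruitment window are all assumptions) draws centre rates from a gamma population, generates Poisson arrival counts, and summarises the predicted number of patients and the 1:1 allocation imbalance per region.

```python
# Poisson-gamma recruitment: centre rates ~ Gamma, arrivals ~ Poisson.
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 2.0, 1.5          # gamma population for centre rates (assumed)
T = 12.0                        # recruitment window in months (assumed)
centres_per_region = {"region A": 30, "region B": 20}

sims = 10_000
for region, n_centres in centres_per_region.items():
    rates = rng.gamma(alpha, 1 / beta, size=(sims, n_centres))
    arrivals = rng.poisson(rates * T)            # patients per centre
    total = arrivals.sum(axis=1)
    # 1:1 unstratified randomization: arm-1 count is Binomial(total, 1/2)
    arm1 = rng.binomial(total, 0.5)
    imbalance = np.abs(2 * arm1 - total)
    print(f"{region}: E[patients] = {total.mean():.0f}, "
          f"E|imbalance| = {imbalance.mean():.1f}")
```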

4.
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to assign newly recruited patients to the treatment arms more efficiently. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. A sensitivity analysis is implemented to check the influence of changing the prior distributions on the design. Simulation studies are used to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size.
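For binary outcomes, the variance-minimising randomization rate is the Neyman allocation, and a sequential version can plug in posterior means as data accumulate. The sketch below is a simplified stand-in for the authors' algorithm (the priors, true response rates, and sample size are assumptions).

```python
# Sequentially updated Neyman allocation: the rate minimising
# Var(p1_hat - p2_hat) assigns arm 1 with probability s1 / (s1 + s2),
# where sk = sqrt(pk * (1 - pk)), estimated here from Beta posteriors.
import math, random

random.seed(7)
true_p = (0.55, 0.35)
succ = [1, 1]; fail = [1, 1]          # Beta(1,1) priors on each arm

def neyman_rate(p1, p2):
    s1 = math.sqrt(p1 * (1 - p1)); s2 = math.sqrt(p2 * (1 - p2))
    return s1 / (s1 + s2)             # P(assign next patient to arm 1)

n = [0, 0]
for _ in range(300):
    p_hat = [succ[k] / (succ[k] + fail[k]) for k in (0, 1)]
    arm = 0 if random.random() < neyman_rate(*p_hat) else 1
    n[arm] += 1
    if random.random() < true_p[arm]:
        succ[arm] += 1
    else:
        fail[arm] += 1

print("allocated:", n, "posterior means:",
      [round(succ[k] / (succ[k] + fail[k]), 3) for k in (0, 1)])
```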

5.
Crossover designs have some advantages over standard clinical trial designs, and they are often used in trials evaluating the efficacy of treatments for infertility. However, clinical trials of infertility treatments violate a fundamental condition of crossover designs, because women who become pregnant in the first treatment period are not treated in the second period. Previous research addressed this problem with new designs, such as re-randomization designs, and with analysis methods including the logistic mixture model and the beta-binomial mixture model. Although the performance of these designs and methods has been evaluated in large-scale clinical trials with sample sizes of more than 1000 per group, the actual sample sizes of infertility treatment trials are usually around 100 per group. The most appropriate design and analysis for these moderate-scale clinical trials are currently unclear. In this study, we conducted simulation studies to determine the appropriate design and analysis method for moderate-scale clinical trials with irreversible endpoints by evaluating the statistical power and the bias in the treatment effect estimates. The Mantel-Haenszel method had similar power and bias to the logistic mixture model. The crossover designs had the highest power and the smallest bias. We recommend using a combination of the crossover design and the Mantel-Haenszel method for two-period, two-treatment clinical trials with irreversible endpoints.
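The recommended analysis is straightforward to compute: treating the two periods as strata, the Mantel-Haenszel statistic combines the period-specific 2x2 tables. Below is a minimal sketch with purely illustrative counts (not data from any trial).

```python
# Mantel-Haenszel chi-square across strata; each stratum is a 2x2 table
# with rows = (treatment, control) and columns = (event, no event).
def mantel_haenszel_chi2(tables):
    num, var = 0.0, 0.0
    for (a, b), (c, d) in tables:
        n1, n0 = a + b, c + d           # row totals
        m1, m0 = a + c, b + d           # column totals
        N = n1 + n0
        num += a - n1 * m1 / N          # observed minus expected events
        var += n1 * n0 * m1 * m0 / (N ** 2 * (N - 1))
    return num ** 2 / var               # ~ chi-square with 1 df under H0

# stratum 1 = period 1, stratum 2 = period 2 (illustrative counts)
tables = [((28, 72), (18, 82)),
          ((15, 55), (10, 60))]
print(f"MH chi-square = {mantel_haenszel_chi2(tables):.3f}")
```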

6.
In two-stage randomization designs, patients are randomized to one of the available therapies upon entry into the study. Depending on the response to the initial treatment (such as complete remission or shrinkage of tumor), patients are then randomized to receive maintenance treatments to maintain the response or salvage treatments to induce response. One goal of such trials is to compare the combinations of initial and maintenance or salvage therapies in the form of treatment strategies. In cases where the endpoint is defined as overall survival, Lunceford et al. [2002. Estimation of survival distributions of treatment policies in two-stage randomization designs in clinical trials. Biometrics 58, 48-57] used mean survival time and pointwise survival probability to compare treatment strategies. However, mean survival time or survival probability at a specific time may not be a good summary of the overall distribution when the data are skewed or contain influential tail observations. In this article, we propose consistent and asymptotically normal estimators for percentiles of survival curves under various treatment strategies and demonstrate the use of percentiles for comparing treatment strategies. Small-sample properties of these estimators are investigated using simulation. We demonstrate our methods by applying them to a leukemia clinical trial data set that motivated this research.

7.
A. Galbete & J.A. Moler, Statistics, 2016, 50(2): 418-434
In a randomized clinical trial, response-adaptive randomization procedures use the information gathered, including the previous patients' responses, to allocate the next patient. In this setting, we consider randomization-based inference. We provide an algorithm to obtain exact p-values for statistical tests that compare two treatments with dichotomous responses. This algorithm can be applied to a family of response-adaptive randomization procedures that share the following property: the distribution of the allocation rule depends only on the imbalance between treatments and on the imbalance between successes for treatments 1 and 2 at the previous step. This family includes several prominent response-adaptive randomization procedures. We study a randomization test of the null hypothesis of equivalence of treatments and show that this test performs similarly to its parametric counterpart. In addition, we study the effect of a covariate on the inferential process. First, we obtain a parametric test, constructed assuming a logit model that relates responses to treatments and covariate levels, and we give conditions that guarantee its asymptotic normality. Finally, we show that the randomization test, which is free of model specification, performs as well as the parametric test that takes the covariate into account.
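The paper derives exact p-values by exploiting the fact that the allocation rule depends only on the two imbalances; a Monte Carlo approximation of the same randomization test is easy to sketch. The biased-coin rule below, nudged by both imbalances, is only an illustrative member of the family, and the fixed response vector and sample size are assumptions.

```python
# Monte Carlo randomization test for a response-adaptive design whose
# allocation probability depends only on the treatment imbalance and the
# success imbalance. Responses are held fixed (strong null of equivalence).
import random

def allocate(trt_imb, succ_imb, rng):
    """P(arm 1): disfavour the arm that is ahead, favour the one winning."""
    sign = lambda v: (v > 0) - (v < 0)
    p = 0.5 - 0.1 * sign(trt_imb) + 0.1 * sign(succ_imb)
    return 1 if rng.random() < min(max(p, 0.1), 0.9) else 0

def run_sequence(y, rng):
    n1 = n0 = s1 = s0 = 0
    arms = []
    for yi in y:                      # yi observed whichever arm is assigned
        a = allocate(n1 - n0, s1 - s0, rng)
        arms.append(a)
        if a == 1: n1 += 1; s1 += yi
        else:      n0 += 1; s0 += yi
    return arms

def stat(arms, y):
    n1 = sum(arms); s1 = sum(yi for a, yi in zip(arms, y) if a == 1)
    n0 = len(y) - n1; s0 = sum(y) - s1
    if n1 == 0 or n0 == 0: return 0.0
    return s1 / n1 - s0 / n0          # difference in success proportions

rng = random.Random(3)
y = [1,0,1,1,0,1,0,0,1,1,0,1,1,0,1,0,1,1,0,0]   # fixed responses
t_obs = stat(run_sequence(y, rng), y)            # "observed" trial
B = 20_000
count = sum(abs(stat(run_sequence(y, rng), y)) >= abs(t_obs) for _ in range(B))
print(f"Monte Carlo p-value = {count / B:.3f}")
```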

8.
The doubly adaptive biased coin design (DBCD) is an important family of response-adaptive randomization procedures for clinical trials. It uses sequentially updated estimates to skew the allocation probability in favor of the treatment that has performed better thus far. An important assumption for the DBCD is the homogeneity of the patient responses. However, this assumption may be violated in many sequential experiments. Here we prove the robustness of the DBCD against certain time trends in patient responses. Strong consistency and asymptotic normality of the design are obtained under some widely satisfied conditions. We also propose a general weighted likelihood method to reduce the bias that the heterogeneity causes in post-trial inference. Some numerical studies are presented to illustrate the finite-sample properties of the DBCD.
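To fix ideas, here is a minimal sketch of a DBCD allocation step using the Hu-Zhang allocation function g(x, rho) with the urn-type target rho = q2/(q1 + q2); the tuning parameter gamma, the smoothing, and the true response rates are assumptions, and homogeneous responses are simulated for simplicity.

```python
# DBCD: push the current allocation proportion x toward the target rho,
# which is itself re-estimated from the accumulating success rates.
import random

def hu_zhang_g(x, rho, gamma=2.0):
    """Allocation probability to arm 1 given current proportion x on arm 1."""
    if x <= 0: return 1.0
    if x >= 1: return 0.0
    a = rho * (rho / x) ** gamma
    b = (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return a / (a + b)

random.seed(11)
true_p = (0.7, 0.4)
succ = [0.5, 0.5]; trials = [1.0, 1.0]   # lightly smoothed success estimates
assigned = [0, 0]
for _ in range(400):
    p_hat = [succ[k] / trials[k] for k in (0, 1)]
    q = [1 - p for p in p_hat]
    rho = q[1] / (q[0] + q[1])           # urn target: proportion on arm 1
    total = sum(assigned)
    x = assigned[0] / total if total else 0.5
    arm = 0 if random.random() < hu_zhang_g(x, rho) else 1
    assigned[arm] += 1
    trials[arm] += 1
    succ[arm] += random.random() < true_p[arm]

print("assigned:", assigned, "final target rho:", round(rho, 3))
```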

9.
The internal pilot study design allows the sample size to be modified during an ongoing study based on a blinded estimate of the variance, thus maintaining trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure using the randomization block information, with respect to the bias and variance of the variance estimators, the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one-sample estimator, as in turn does the sample size resulting from the related re-estimation procedure. This higher variability can lead to lower power, as was demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application.
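The recommended estimator really is simple: pool all interim responses with the treatment labels hidden, take the sample variance, and plug it into the usual two-sample size formula. A minimal sketch follows (the pilot data, effect size, and error rates are illustrative).

```python
# Blinded sample size re-estimation from the pooled one-sample variance.
# Note: the pooled variance absorbs part of any true treatment effect,
# which is why this estimator tends to be slightly conservative.
import math, statistics
from statistics import NormalDist

def reestimated_n_per_arm(pooled, delta=0.5, alpha=0.05, power=0.9):
    s2 = statistics.variance(pooled)        # blinded one-sample variance
    z = NormalDist().inv_cdf
    n = 2 * s2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2
    return math.ceil(n)

# internal pilot data; treatment labels are unknown to the estimator
pilot = [1.2, 0.4, -0.3, 0.9, 1.7, 0.1, 0.8, 1.5, -0.2, 0.6,
         1.1, 0.3, 0.7, 1.9, 0.0, 1.4, 0.5, 0.2, 1.0, 0.8]
print("re-estimated n per arm:", reestimated_n_per_arm(pilot))
```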

10.
In the planning of randomized survival trials, the role of the follow-up time of trial participants introduces a level of complexity not encountered in non-survival trials. Of the two commonly used survival designs, one fixes the follow-up time whereas the other allows it to vary. When the follow-up time is fixed, the number of events varies; conversely, when the number of events is fixed, the follow-up time varies. These two designs influence test statistics in ways that have not been fully explored, resulting in a misunderstanding of the design-test statistic relationship. We use examples from the literature to strengthen the understanding of this relationship. Group sequential trials are briefly discussed. When the number of events is fixed, we demonstrate why a two-sample risk difference test statistic reduces to a one-sample test statistic that is nearly equal to the risk ratio test statistic. Some aspects of fixed-event designs that need further consideration are also discussed.

11.
The response-adaptive randomization (RAR) method is used to increase the number of patients assigned to the more efficacious treatment arms in clinical trials. In many trials evaluating longitudinal patient outcomes, RAR based only on the final measurement may offer little benefit because of its delayed initiation. We propose a Bayesian RAR method that improves RAR performance by accounting for longitudinal patient outcomes (longitudinal RAR). We use a Bayesian linear mixed effects model to analyze longitudinal continuous patient outcomes when calculating the patient allocation probability. In addition, we aim to mitigate the loss of statistical power due to large patient allocation imbalances by embedding adjusters into the patient allocation probability calculation. Using extensive simulations, we compared the operating characteristics of the proposed longitudinal RAR method with those of the RAR method based only on the final measurement and with an equal randomization method. The simulation results showed that the proposed longitudinal RAR method assigned more patients to the presumably superior treatment arm than the other two methods. In addition, the embedded adjuster effectively prevented extreme patient allocation imbalances. However, the proposed method may not function adequately when the treatment effect difference is moderate or less, and it still needs modification to deal with unexpectedly large departures from the presumed longitudinal data model.
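One common form of such an adjuster tempers the posterior probability of superiority by an exponent that grows with the information fraction, so that early, unstable estimates produce little imbalance. The sketch below illustrates this generic device (in the spirit of Thall and Wathen's n/(2N) exponent); it is not the authors' specific adjuster, and the posterior probability it is fed is a placeholder.

```python
# Tempered allocation probability: exponent c rises from 0 to 1/2 over the
# trial, so allocation starts at 0.5 and only gradually tracks the posterior.
def adjusted_allocation(post_prob_arm1_best, n_enrolled, n_max):
    c = n_enrolled / (2 * n_max)            # information-fraction adjuster
    num = post_prob_arm1_best ** c
    den = num + (1 - post_prob_arm1_best) ** c
    return num / den

for n in (0, 50, 100, 200):
    print(n, round(adjusted_allocation(0.9, n, 200), 3))
```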

12.
This article compares the properties of two balanced randomization schemes with several treatments under non-uniform allocation probabilities. Under the first procedure, the so-called truncated multinomial randomization design, the process employs a given allocation distribution until a treatment receives its quota of subjects, after which this distribution switches to the conditional distribution for the remaining treatments, and so on. The second scheme, the random allocation rule, selects at random any legitimate assignment of the given number of subjects per treatment. The behavior of these two schemes is shown to be quite different: the truncated multinomial randomization design's assignment probabilities to a treatment turn out to vary over the recruitment period, and its accidental bias can be large, whereas under the random allocation rule this bias is bounded. The limiting distributions of the instants at which a treatment receives its given number of subjects are shown to be those of weighted spacings for normal order statistics with different variances. Formulas for the selection bias of both procedures are also derived.
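The two schemes are easy to state in code. The sketch below (quotas and allocation probabilities are illustrative) generates one assignment sequence under each: the truncated multinomial design renormalises the allocation distribution over the treatments still below quota, while the random allocation rule permutes a fixed assignment vector.

```python
# Truncated multinomial randomization design vs. random allocation rule
# for three treatments with target quotas and non-uniform probabilities.
import random

def truncated_multinomial(quotas, probs, rng):
    left = list(quotas)
    seq = []
    while any(left):
        active = [k for k in range(len(left)) if left[k] > 0]
        w = [probs[k] for k in active]      # renormalised by rng.choices
        k = rng.choices(active, weights=w)[0]
        left[k] -= 1
        seq.append(k)
    return seq

def random_allocation_rule(quotas, rng):
    seq = [k for k, q in enumerate(quotas) for _ in range(q)]
    rng.shuffle(seq)                        # uniform over legal assignments
    return seq

rng = random.Random(5)
quotas, probs = (6, 3, 3), (0.5, 0.25, 0.25)
print("truncated multinomial:", truncated_multinomial(quotas, probs, rng))
print("random allocation rule:", random_allocation_rule(quotas, rng))
```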

13.
Some multicenter randomized controlled trials (e.g. for rare diseases or with slow recruitment) involve many centers with few patients in each. Under within-center randomization, some centers might not assign each treatment to at least one patient; hence, such centers have no within-center treatment effect estimates, and the center-stratified treatment effect estimate can be inefficient, perhaps to an extent that has statistical and ethical implications. Recently, combining complete and incomplete centers with a priori weights has been suggested. However, a concern is whether using the incomplete centers increases bias. To study this concern, an approach with randomization models for a finite population was used to evaluate the bias of the usual complete-center estimator, the simple center-ignoring estimator, and the weighted estimator combining complete and incomplete centers. The situation with two treatments and many centers, each with either one or two patients, was evaluated. Various patient accrual mechanisms were considered, including one involving selection bias. The usual complete-center estimator and the weighted estimator were unbiased under the overall null hypothesis, even with selection bias. An actual dermatology clinical trial motivates and illustrates these methods.

14.
Phase II clinical trials designed to evaluate a drug's treatment effect can be either single-arm or double-arm. A single-arm design tests the null hypothesis that the response rate of a new drug is lower than a fixed threshold, whereas a double-arm scheme makes a more objective comparison of the response rates of the new treatment and the standard of care through randomization. Although the randomized design is the gold standard for efficacy assessment, various situations may arise where a single-arm pilot study prior to a randomized trial is necessary. To combine the single- and double-arm phases and pool their information for better decision making, we propose a Single-To-double ARm Transition design (START) with switching hypothesis tests, where the first stage compares the new drug's response rate with a minimum required level and imposes a continuation criterion, and the second stage uses randomization to determine the treatment's superiority. We develop a software package in R to calibrate the frequentist error rates and perform simulation studies to assess the trial characteristics. Finally, a metastatic pancreatic cancer trial is used to illustrate the decision rules under the proposed START design.

15.
In a clinical trial, response-adaptive randomization (RAR) uses accumulating data to weight the randomization of the remaining patients in favour of the better-performing treatment. The aim is to reduce the number of failures within the trial. However, many well-known RAR designs, in particular the randomized play-the-winner rule (RPWR), have a highly myopic structure, which has sometimes led to unfortunate randomization sequences when used in practice. This paper introduces random permuted blocks into two RAR designs, the RPWR and sequential maximum likelihood estimation, for trials with a binary endpoint. Allocation ratios within each block are restricted to be one of 1:1, 2:1, or 3:1, preventing unfortunate randomization sequences. Exact calculations are performed to determine error rates and the expected number of failures across a range of trial scenarios. The results show that, compared with equal allocation, block RAR designs give reductions in the expected number of failures similar to those of their unmodified counterparts. The reductions are typically modest under the alternative hypothesis but become more impressive if the treatment effect exceeds the clinically relevant difference.
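A minimal sketch of the blocked idea follows: before each block, the current success estimates select a ratio from {1:1, 2:1, 3:1} in favour of the better-performing arm, and the block itself is a random permutation, which rules out long unfortunate runs. The ratio-selection thresholds, smoothing, and response rates are illustrative stand-ins for the RPWR/SMLE criteria studied in the paper.

```python
# Block RAR: per-block allocation ratio restricted to 1:1, 2:1, or 3:1,
# with random permutation of assignments within each block.
import random

def choose_ratio(p1_hat, p2_hat):
    gap = abs(p1_hat - p2_hat)
    r = 1 + (gap > 0.1) + (gap > 0.25)       # 1:1, 2:1, or 3:1
    return (r, 1) if p1_hat >= p2_hat else (1, r)

random.seed(2)
true_p = (0.6, 0.4)
s = [1, 1]; n = [2, 2]                       # add-one smoothing
failures = 0
for block in range(10):
    k1, k2 = choose_ratio(s[0] / n[0], s[1] / n[1])
    block_assign = [0] * (2 * k1) + [1] * (2 * k2)   # block size 2*(k1+k2)
    random.shuffle(block_assign)             # permuted block
    for arm in block_assign:
        y = random.random() < true_p[arm]
        s[arm] += y; n[arm] += 1
        failures += not y

print("per-arm n:", [n[0] - 2, n[1] - 2], "failures:", failures)
```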

16.
A complete two-period experimental design has been defined as one in which subjects are randomized to treatment, observed for the occurrence of an event of interest, re-randomized, and observed again for the event in a second period. A 4-year vaccine efficacy trial was planned to compare a high-dose vaccine with a standard-dose vaccine. Subjects would be randomized each year, and subjects who had participated in a previous year would be allowed to re-enroll in a subsequent year and would be re-randomized. A question of interest is whether positive correlation between observations on subjects who re-enrolled would inflate the variance of test statistics. The effect of re-enrollment and correlation on type I error in a 4-year trial is investigated by simulation. As conducted, the trial met its power requirements after two years. Subjects therefore included some who participated for a single year and others who participated in both years. Those who participated in both years constituted a complete two-period design. An algebraic expression for the variance of the treatment difference in a complete two-period design is derived. It is shown that under a 'no difference' null, correlation does not result in variance inflation in this design. When there is a treatment difference, there is variance inflation, but it is small. In the vaccine efficacy trial, the effect of correlation on the statistical inference was negligible.

17.
Treatment during cancer clinical trials sometimes involves the combination of multiple drugs. In addition, in recent years there has been a trend toward phase I/II trials, in which a phase I and a phase II trial are combined into a single trial to accelerate drug development. Methods for the seamless combination of the phase I and phase II parts are currently under investigation. In the phase II part, adaptive randomization on the basis of patient efficacy outcomes allocates more patients to the dose combinations considered to have higher efficacy; patient toxicity outcomes are used for determining admissibility to each dose combination and are not used for selecting the dose combination itself. When the objective is to find the dose combination that is optimal not solely for efficacy but with regard to both toxicity and efficacy, patients need to be allocated to dose combinations in a way that accounts for the trade-off between the two. We propose a Bayesian hierarchical model and an adaptive randomization scheme that account for the relationship between toxicity and efficacy. Using the toxicity and efficacy outcomes of patients, the Bayesian hierarchical model is used to estimate the toxicity probability and efficacy probability of each dose combination, and Bayesian moving-reference adaptive randomization is applied on the basis of a desirability computed from the resulting estimates. Computer simulations suggest that the proposed method recommends a higher percentage of target dose combinations than a previously proposed method.

18.
In a clinical trial to compare two treatments, subjects may be allocated sequentially to treatment groups by a restricted randomization rule. Suppose that at the end of the trial, the investigator is interested in a post-stratified or subgroup analysis with respect to a particular demographic or clinical factor that was not selected prior to the trial for stratified randomization. Under a randomization model, large-sample theory for two-sample post-stratified permutational tests is developed for a broad class of restricted randomization treatment allocation rules. The proposed test procedures are illustrated with a real-life example. The results of this example indicate that it is not always possible to ignore the treatment allocation rule used in the trial in a design-based analysis.

19.
This paper develops clinical trial designs that compare two treatments with a binary outcome. The imprecise beta class (IBC), a class of beta probability distributions, is used in a robust Bayesian framework to calculate posterior upper and lower expectations for the treatment success rates from accumulating data. The posterior expectation of the difference in success rates can be used to decide when there is sufficient evidence for randomized treatment allocation to cease. This design is formally related to the randomized play-the-winner (RPW) design, an adaptive allocation scheme in which randomization probabilities are updated sequentially to favour the treatment with the higher observed success rate. A connection is also made between the IBC and the sequential clinical trial design based on the triangular test. Theoretical and simulation results show that the expected sample size on the truly inferior arm is lower using the IBC than with either the triangular test or the RPW design, and that the IBC performs well against established criteria involving error rates and the expected number of treatment failures.
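The posterior bounds have a simple closed form if the IBC is taken as the near-ignorance set of priors Beta(s*t, s*(1-t)) with t ranging over [0, 1] and s fixed (a common parameterisation, assumed here): the posterior mean (s*t + x)/(s + n) is monotone in t, so the lower and upper expectations come from the endpoints. A sketch with illustrative counts and an illustrative stopping rule follows.

```python
# Posterior lower/upper expectations of a success rate under the
# imprecise beta class with x successes out of n observations.
def ibc_bounds(x, n, s=2.0):
    lower = x / (s + n)            # prior endpoint t = 0
    upper = (s + x) / (s + n)      # prior endpoint t = 1
    return lower, upper

xA, nA = 18, 25                    # illustrative accumulating data
xB, nB = 10, 25
loA, hiA = ibc_bounds(xA, nA)
loB, hiB = ibc_bounds(xB, nB)
# conservative lower bound on the difference: worst case for A, best for B
diff_lower = loA - hiB
print(f"A in [{loA:.3f}, {hiA:.3f}], B in [{loB:.3f}, {hiB:.3f}], "
      f"lower(A - B) = {diff_lower:.3f}")
# e.g. cease randomized allocation once lower(A - B) exceeds a preset margin
```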

20.
We compare posterior and predictive estimators and probabilities in response-adaptive randomization designs for two- and three-group clinical trials with binary outcomes. Adaptation based upon posterior estimates is discussed, as are two predictive probability algorithms: one using the traditional definition, the other using a skeptical distribution. Optimal and natural lead-in designs are covered. Simulation studies show that efficacy comparisons lead to more adaptation than center comparisons, though at some loss of power; skeptically predictive efficacy comparisons and natural lead-in approaches lead to less adaptation but offer reduced allocation variability. Though nuanced, these results help clarify the power-adaptation trade-off in adaptive randomization.
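For the posterior-based version, the quantity driving adaptation is the posterior probability that one arm's success rate exceeds the other's, which is easily estimated by Monte Carlo from independent beta posteriors. The sketch below (the counts, uniform priors, and square-root tempering are illustrative assumptions) maps that probability to a randomization weight.

```python
# Posterior probability P(p1 > p2) via Monte Carlo from Beta posteriors,
# mapped to a (tempered) response-adaptive randomization weight.
import random

def prob_p1_beats_p2(s1, f1, s2, f2, draws=50_000, seed=0):
    rng = random.Random(seed)
    hits = sum(rng.betavariate(1 + s1, 1 + f1) >
               rng.betavariate(1 + s2, 1 + f2) for _ in range(draws))
    return hits / draws

pr = prob_p1_beats_p2(s1=22, f1=18, s2=15, f2=25)
w = pr ** 0.5 / (pr ** 0.5 + (1 - pr) ** 0.5)   # tempered allocation weight
print(f"P(p1 > p2) = {pr:.3f}, randomization weight to arm 1 = {w:.3f}")
```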
