Similar Articles (20 results)
1.
In two-stage randomization designs, patients are randomized to one of the available therapies upon entry into the study. Depending on the response to the initial treatment (such as complete remission or shrinkage of tumor), patients are then randomized to receive maintenance treatments to maintain the response or salvage treatment to induce response. One goal of such trials is to compare the combinations of initial and maintenance or salvage therapies in the form of treatment strategies. In cases where the endpoint is defined as overall survival, Lunceford et al. [2002. Estimation of survival distributions of treatment policies in two-stage randomization designs in clinical trials. Biometrics 58, 48–57] used mean survival time and pointwise survival probability to compare treatment strategies. However, mean survival time or survival probability at a specific time may not be a good summary of the overall distribution when the data are skewed or contain influential tail observations. In this article, we propose consistent and asymptotically normal estimators for percentiles of survival curves under various treatment strategies and demonstrate the use of percentiles for comparing treatment strategies. Small-sample properties of these estimators are investigated using simulation. We demonstrate our methods by applying them to the leukemia clinical trial data set that motivated this research.
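
As an illustration of the key quantity, here is a minimal Python sketch of extracting a survival-curve percentile from weighted survival data, ignoring censoring and using generic weights `w` as stand-ins for the strategy-consistency inverse-probability weights of a two-stage design; it is not the authors' estimator, only the percentile-extraction idea.

```python
import numpy as np

def survival_percentile(times, weights, p):
    """Smallest observed time t with S_hat(t) <= 1 - p (p = 0.5 gives the median)."""
    order = np.argsort(times)
    t_sorted, w_sorted = times[order], weights[order]
    s_hat = 1.0 - np.cumsum(w_sorted) / w_sorted.sum()  # weighted survival just after each time
    idx = np.searchsorted(-s_hat, -(1.0 - p))           # first index where s_hat <= 1 - p
    return t_sorted[min(idx, len(t_sorted) - 1)]

# toy example: survival times under one treatment strategy with illustrative weights
rng = np.random.default_rng(1)
t = rng.exponential(12.0, size=200)   # survival times in months
w = rng.uniform(0.5, 1.5, size=200)   # stand-in for strategy-consistency inverse-probability weights
print("estimated median survival:", survival_percentile(t, w, 0.5))
```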

2.
Adaptive designs of clinical trials are ethical alternatives when traditional randomization becomes ethically infeasible in desperate medical situations. However, such a design creates dependence among the trial data, and its statistical analysis becomes more complex than that for traditional randomized clinical trials. In this article, we examine adaptive designs with dichotomous responses from two treatments and extend some commonly used statistical methods for independent data. Under a regularity condition, the estimated odds ratio and its logarithm are shown to follow asymptotically normal distributions. Moreover, the ordinary goodness-of-fit test statistic for two-by-two contingency tables is shown to remain asymptotically chi-square distributed under this dependence. We also discuss the consistency of the maximum likelihood estimators of the unknown parameters for a wide class of adaptive designs.
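
A minimal sketch of the standard Wald interval for the log odds ratio and the Pearson chi-square statistic for a 2×2 table, the quantities whose asymptotic behaviour the article studies under adaptive designs; the counts are made up, and the code does not implement the adaptive allocation itself.

```python
import numpy as np
from scipy import stats

# 2x2 table of (successes, failures) by treatment from an adaptive trial (illustrative counts)
a, b = 40, 20   # treatment 1: successes, failures
c, d = 25, 35   # treatment 2: successes, failures

log_or = np.log((a * d) / (b * c))
se = np.sqrt(1/a + 1/b + 1/c + 1/d)               # Wald standard error of the log odds ratio
ci = log_or + np.array([-1, 1]) * stats.norm.ppf(0.975) * se
print(f"log OR = {log_or:.3f}, 95% CI for OR = ({np.exp(ci[0]):.2f}, {np.exp(ci[1]):.2f})")

# Pearson goodness-of-fit (independence) statistic for the same table
table = np.array([[a, b], [c, d]])
chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.4f}")
```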

3.
Few references deal with response-adaptive randomization procedures for survival outcomes, and those that do either dichotomize the outcomes or use a non-parametric approach. In this paper, the optimal allocation approach and a parametric response-adaptive randomization procedure are used under exponential and Weibull distributions. The optimal allocation proportions are derived for both distributions, and the doubly adaptive biased coin design is applied to target the optimal allocations. The asymptotic variance of the procedure is obtained for the exponential distribution. The effect of the inherent delay in observing survival outcomes is also addressed. These findings are based on rigorous theory and are verified by simulation. It is shown that using a doubly adaptive biased coin design to target the optimal allocation proportion results in more patients being randomized to the better-performing treatment without loss of power. We illustrate our procedure by redesigning a clinical trial.
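
A sketch of the allocation step of the doubly adaptive biased coin design using the familiar Hu–Zhang allocation function; the target proportion `rho` is a placeholder where the paper's optimal allocation formula (exponential or Weibull) would be plugged in, and `gamma` is the usual tuning parameter.

```python
import numpy as np

def dbcd_prob(n1, n, rho, gamma=2.0):
    """Probability of assigning the next patient to treatment 1.

    n1/n is the current allocation proportion, rho the (estimated) target
    proportion, and gamma the DBCD tuning parameter (larger gamma pulls the
    allocation harder toward the target).
    """
    if n == 0:
        return 0.5
    x = n1 / n
    if x in (0.0, 1.0):               # avoid division by zero at the boundary
        return 1.0 - x
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

# illustrative target: 60% of patients to the better-performing arm
rng = np.random.default_rng(0)
n1 = n = 0
for _ in range(100):
    p1 = dbcd_prob(n1, n, rho=0.6)
    n1 += rng.random() < p1
    n += 1
print("final allocation proportion to arm 1:", n1 / n)
```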

4.
This paper deals with the analysis of randomization effects in multi-centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre-stratified block-permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period, accounting for the stochastic nature of recruitment and the effects of multiple centres, is investigated. A new analytic approach using a Poisson-gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma-distributed population) and its further extensions is proposed. Closed-form expressions for the corresponding distributions of the predicted number of patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on the treatment arms caused by centre-stratified randomization are investigated, and for a large number of centres a normal approximation of the imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
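
A minimal simulation sketch of the Poisson-gamma recruitment model with centre-stratified block-permuted randomization; the gamma parameters, number of centres, recruitment window, and block size are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

n_centres, T = 50, 12.0      # 50 centres, 12-month recruitment window (illustrative)
alpha, beta = 2.0, 4.0       # gamma shape/rate for centre recruitment rates (illustrative)

rates = rng.gamma(shape=alpha, scale=1.0 / beta, size=n_centres)  # patients per centre per month
counts = rng.poisson(rates * T)                                   # recruits per centre

def blocked_assignments(n, rng):
    """Centre-stratified block-permuted 1:1 assignments (block size 2) for n patients."""
    if n == 0:
        return np.zeros(0, dtype=int)
    blocks = [rng.permutation([0, 1]) for _ in range((n + 1) // 2)]
    return np.concatenate(blocks)[:n]

arm_counts = np.zeros(2, dtype=int)
for n in counts:
    arm_counts += np.bincount(blocked_assignments(n, rng), minlength=2)

print("recruits per arm:", arm_counts, "| total imbalance:", arm_counts[0] - arm_counts[1])
```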

5.
Treatment regimes are algorithms for assigning treatments to patients with complex diseases, where treatment consists of more than one episode of therapy, potentially with different dosages of the same agent or different agents. Sequentially randomized clinical trials are usually designed to evaluate and compare the effects of different treatment regimes. In such designs, eligible patients are first randomly assigned to receive one of the initial treatments. Patients meeting some criteria (e.g. no progressive disease) are then randomized to receive one of the maintenance treatments. Usually, the procedure continues until all treatment options are exhausted. Such multistage treatment assignment results in treatment regimes consisting of initial treatments, intermediate responses and second-stage treatments. However, methods for efficient analysis of sequentially randomized trials have been developed only recently; as a result, earlier clinical trials reported results based only on comparisons of stage-specific treatments. In this article, we propose a model that applies to comparisons of any combination of any number of treatment regimes, regardless of the number of treatment stages, adjusted for auxiliary variables. Contrasts of treatment regimes are tested using the Wald chi-square method. Both the model and the Wald chi-square tests of contrasts are illustrated through a simulation study and an application to a high-risk neuroblastoma study, complementing the results reported earlier on this study.
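
A sketch of a Wald chi-square test for contrasts of estimated treatment-regime effects, the testing step described above; the regime estimates, their covariance matrix, and the contrast matrix are made up, and the paper's estimation model is not reproduced.

```python
import numpy as np
from scipy import stats

# illustrative estimated mean outcomes for four treatment regimes and their covariance
beta = np.array([0.52, 0.61, 0.48, 0.58])
V = np.diag([0.003, 0.004, 0.003, 0.004])        # made-up covariance of the estimates

# contrasts: regime 1 vs regime 2, and regime 3 vs regime 4
C = np.array([[1, -1, 0, 0],
              [0, 0, 1, -1]])

diff = C @ beta
W = diff @ np.linalg.solve(C @ V @ C.T, diff)     # Wald statistic (Cb)'(C V C')^{-1}(Cb)
df = np.linalg.matrix_rank(C)
p_value = stats.chi2.sf(W, df)
print(f"Wald chi-square = {W:.2f}, df = {df}, p = {p_value:.4f}")
```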

6.
Clinical trials in the era of precision cancer medicine aim to identify and validate biomarker signatures that can guide the assignment of individually optimal treatments to patients. In this article, we propose a group sequential randomized phase II design that updates the biomarker signature as the trial goes on, utilizes enrichment strategies for patient selection, and uses Bayesian response-adaptive randomization for treatment assignment. To evaluate the performance of the new design, in addition to the commonly considered criteria of Type I error and power, we propose four new criteria measuring the benefits and losses for individuals both inside and outside of the clinical trial. Compared with designs using equal randomization, the proposed design gives trial participants a better chance of receiving their personalized optimal treatments and thus results in a higher response rate on the trial. The design also increases the chance of discovering a successful new drug through an adaptive enrichment strategy, i.e. identification and selective enrollment of the subset of patients who are sensitive to the experimental therapies. Simulation studies demonstrate these advantages of the proposed design, and it is illustrated with an example based on an actual clinical trial in non-small-cell lung cancer.
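
A sketch of one Bayesian response-adaptive randomization step for binary responses, assigning the next patient with probability driven by the posterior probability that an arm has the higher response rate under Beta(1,1) priors; the tuning exponent is illustrative, and the signature-updating and enrichment components of the proposed design are not shown.

```python
import numpy as np

def rar_probability(s1, n1, s2, n2, tuning=0.5, draws=10_000, rng=None):
    """P(assign next patient to arm 1) based on posterior P(p1 > p2) with Beta(1,1) priors."""
    rng = rng or np.random.default_rng()
    p1 = rng.beta(1 + s1, 1 + n1 - s1, draws)
    p2 = rng.beta(1 + s2, 1 + n2 - s2, draws)
    post = (p1 > p2).mean()
    # power transformation stabilizes randomization probabilities early in the trial
    w1, w2 = post ** tuning, (1 - post) ** tuning
    return w1 / (w1 + w2)

# after 30 patients per arm with 18 vs 12 responses (illustrative data)
print("P(next patient -> arm 1):", round(rar_probability(18, 30, 12, 30), 3))
```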

7.
8.
A. Galbete & J.A. Moler, Statistics, 2016, 50(2), 418–434
In a randomized clinical trial, response-adaptive randomization procedures use the information gathered, including the previous patients' responses, to allocate the next patient. In this setting, we consider randomization-based inference. We provide an algorithm to obtain exact p-values for statistical tests that compare two treatments with dichotomous responses. This algorithm can be applied to a family of response-adaptive randomization procedures that share the following property: the distribution of the allocation rule depends only on the imbalance between treatments and on the imbalance between successes for treatments 1 and 2 in the previous step. This family includes several well-known response-adaptive randomization procedures. We study a randomization test for the null hypothesis of equivalence of treatments and show that this test performs similarly to its parametric counterpart. In addition, we study the effect of a covariate on the inferential process. First, we obtain a parametric test, constructed assuming a logit model that relates responses to treatments and covariate levels, and we give conditions that guarantee its asymptotic normality. Finally, we show that the randomization test, which is free of model specification, performs as well as the parametric test that takes the covariate into account.
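
A hedged Monte Carlo illustration of randomization-based inference for a response-adaptive procedure, using the randomized play-the-winner rule and reusing observed responses under the sharp null; this approximates a randomization p-value by simulation and is not the paper's exact-enumeration algorithm.

```python
import numpy as np

def rpw_allocation(responses, rng):
    """Randomized play-the-winner RPW(1,1): regenerate an allocation sequence.

    Under the sharp null of no treatment effect, each patient's observed
    response is unchanged whichever arm is assigned, so observed responses
    can be reused when re-running the allocation rule.
    """
    urn = np.array([1.0, 1.0])                    # one ball per treatment
    arms = np.empty(len(responses), dtype=int)
    for i, y in enumerate(responses):
        arm = int(rng.random() < urn[1] / urn.sum())
        arms[i] = arm
        urn[arm if y == 1 else 1 - arm] += 1      # success adds a same-arm ball, failure the other
    return arms

def success_diff(arms, responses):
    """Difference in success proportions (arm 1 minus arm 0)."""
    p1 = responses[arms == 1].mean() if (arms == 1).any() else 0.0
    p0 = responses[arms == 0].mean() if (arms == 0).any() else 0.0
    return p1 - p0

rng = np.random.default_rng(7)
obs_arms = np.array([0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1])   # toy observed allocations
obs_resp = np.array([0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0])   # toy dichotomous responses
t_obs = success_diff(obs_arms, obs_resp)

t_null = [success_diff(rpw_allocation(obs_resp, rng), obs_resp) for _ in range(20_000)]
p_value = np.mean(np.abs(t_null) >= abs(t_obs))
print(f"observed difference = {t_obs:.3f}, randomization p-value = {p_value:.4f}")
```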

9.
While randomization inference is well developed for continuous and binary outcomes, there has been comparatively little work for outcomes with nonnegative support and clumping at zero. Typically, outcomes of this type have been modeled using parametric models that impose strong distributional assumptions. This article proposes new randomization inference procedures for nonnegative outcomes with clumping at zero. Instead of making distributional assumptions, we propose various assumptions about the nature of the response to treatment and use permutation inference for both testing and estimation. This approach allows for some natural goodness-of-fit tests for model assessment, as well as flexibility in selecting test statistics sensitive to different potential alternatives. We illustrate our approach using two randomized trials, where job training interventions were designed to increase earnings of participants.
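
A minimal sketch of the permutation machinery with two test statistics sensitive to different features of a nonnegative outcome with clumping at zero (the mean and the probability of a zero); the data-generating values are invented, and the paper's specific treatment-response assumptions and estimation procedures are not reproduced.

```python
import numpy as np

def permutation_p_value(y, z, stat, n_perm=10_000, rng=None):
    """Two-sided permutation p-value for statistic stat(y, z) under the sharp null."""
    rng = rng or np.random.default_rng()
    t_obs = stat(y, z)
    t_null = np.array([stat(y, rng.permutation(z)) for _ in range(n_perm)])
    return np.mean(np.abs(t_null) >= abs(t_obs))

# statistics sensitive to different features of a nonnegative outcome with zeros
diff_mean = lambda y, z: y[z == 1].mean() - y[z == 0].mean()
diff_zero = lambda y, z: (y[z == 1] == 0).mean() - (y[z == 0] == 0).mean()

rng = np.random.default_rng(3)
z = rng.integers(0, 2, 400)                                          # hypothetical treatment indicator
y = np.where(rng.random(400) < 0.4, 0.0, rng.lognormal(9, 1, 400))   # earnings with clumping at zero
y[z == 1] *= 1.1                                                     # small multiplicative treatment effect

print("p (difference in means):  ", permutation_p_value(y, z, diff_mean, rng=rng))
print("p (difference in P(zero)):", permutation_p_value(y, z, diff_zero, rng=rng))
```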

10.
Mixed treatment comparison (MTC) models rely on estimates of relative effectiveness from randomized clinical trials so as to respect randomization across treatment arms. This approach could potentially be simplified by an alternative parameterization of the way effectiveness is modeled. We introduce a treatment-based parameterization of the MTC model that estimates outcomes on both the study and treatment levels. We compare the proposed model to commonly used MTC models using a simulation study as well as three randomized clinical trial datasets from published systematic reviews comparing (i) treatments for bleeding after cirrhosis, (ii) the impact of antihypertensive drugs in diabetes mellitus, and (iii) smoking cessation strategies. The simulation results suggest similar, or sometimes better, performance of the treatment-based MTC model. Moreover, in the real data analyses, little difference was observed in the inferences drawn from the two models. Overall, our proposed MTC approach performed as well as, or better than, the commonly applied indirect and MTC models and is simpler, faster, and easier to implement in standard statistical software. Copyright © 2015 John Wiley & Sons, Ltd.

11.
Dynamic treatment strategies are designed to change treatments over time in response to intermediate outcomes. They can be deployed for primary treatment as well as for the introduction of adjuvant treatment or other treatment‐enhancing interventions. When treatment interventions are delayed until needed, more cost‐efficient strategies will result. Sequential multiple assignment randomized (SMAR) trials allow for unbiased estimation of the marginal effects of different sequences of history‐dependent treatment decisions. Because a single SMAR trial enables evaluation of many different dynamic regimes at once, it is naturally thought to require larger sample sizes than the parallel randomized trial. In this paper, we compare power between SMAR trials studying a regime, where treatment boosting enters when triggered by an observed event, versus the parallel design, where a treatment boost is consistently prescribed over the entire study period. In some settings, we found that the dynamic design yields the more efficient trial for the detection of treatment activity. We develop one particular trial to compare a dynamic nursing intervention with telemonitoring for the enhancement of medication adherence in epilepsy patients. To this end, we derive from the SMAR trial data either an average of conditional treatment effects (‘conditional estimator’) or the population‐averaged (‘marginal’) estimator of the dynamic regimes. Analytical sample size calculations for the parallel design and the conditional estimator are compared with simulated results for the population‐averaged estimator. We conclude that in specific settings, well‐chosen SMAR designs may require fewer data for the development of more cost‐efficient treatment strategies than parallel designs. Copyright © 2012 John Wiley & Sons, Ltd.

12.
The tumor burden (TB) process is postulated to be the primary mechanism through which most anticancer treatments provide benefit. In phase II oncology trials, the biologic effects of a therapeutic agent are often analyzed using conventional endpoints for best response, such as objective response rate and progression-free survival, both of which cause loss of information. On the other hand, graphical methods such as the spider plot and waterfall plot lack any statistical inference when there is more than one treatment arm. Therefore, longitudinal analysis of TB data is well recognized as a better approach for treatment evaluation. However, the longitudinal TB process suffers from informative missingness because of progression or death. We propose to analyze the treatment effect on tumor growth kinetics using a joint modeling framework that accounts for the informative missingness mechanism. Our approach is illustrated by multi-setting simulation studies and an application to a non-small-cell lung cancer data set. The proposed analyses can be performed in early-phase clinical trials to better characterize treatment effect and thereby inform decision-making. Copyright © 2014 John Wiley & Sons, Ltd.

13.
In practice, it is important to find optimal allocation strategies for continuous responses with multiple treatments under various optimization criteria. In this article, we focus on exponential responses. For a multivariate test of homogeneity, we obtain the optimal allocation strategies that maximize power while (1) fixing the sample size and (2) fixing the expected total response. The doubly adaptive biased coin design [Hu, F., Zhang, L.-X., 2004. Asymptotic properties of doubly adaptive biased coin designs for multi-treatment clinical trials. The Annals of Statistics 32, 268–301] is then used to implement the optimal allocation strategies. Simulation results show that the proposed procedures have advantages over complete randomization from both inferential (power) and ethical standpoints. It is important to note that optimal allocation strategies can usually be implemented numerically for other continuous responses, although a closed form of the optimal allocation is usually not easy to derive theoretically.

14.
The re-randomization test has been considered a robust alternative to traditional population-model-based methods for analyzing randomized clinical trials. This is especially so when the clinical trial is randomized according to minimization, a popular covariate-adaptive randomization method for ensuring balance among prognostic factors. Among the various re-randomization tests, the fixed-entry-order re-randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed-entry-order re-randomization test is biased and its power is compromised. We find that the bias is due to the non-uniform re-allocation probabilities incurred by the re-randomization in this case. We therefore propose a weighted fixed-entry-order re-randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re-randomization test was found to work well in the scenarios investigated, including the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.

15.
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates that assigns newly recruited patients to treatment arms more efficiently. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve the minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. A sensitivity analysis is implemented to check the influence of the prior distributions on the design. Simulation studies are used to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can attain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample size; the proposed method further reduces the required sample size.
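
A sketch of the minimum-variance (Neyman-type) randomization rate for a two-arm comparison, where the allocation proportion to arm 1 is proportional to that arm's outcome standard deviation; in the sequential design these standard deviations would be replaced by posterior estimates updated as data accrue, and the critical-value and power algorithms of the paper are not reproduced.

```python
import numpy as np

def neyman_allocation(sd1, sd2):
    """Allocation proportion to arm 1 minimizing Var(mean1 - mean2) for a fixed total n."""
    return sd1 / (sd1 + sd2)

def neyman_allocation_binary(p1, p2):
    """Binary endpoints: plug in current (e.g. posterior-mean) response rates."""
    return neyman_allocation(np.sqrt(p1 * (1 - p1)), np.sqrt(p2 * (1 - p2)))

print("continuous, sd = (10, 5):   ", round(neyman_allocation(10, 5), 3))
print("binary, rates = (0.6, 0.3): ", round(neyman_allocation_binary(0.6, 0.3), 3))
```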

16.
The randomized cluster design is typical of studies where the unit of randomization is a cluster of individuals rather than the individual. Evaluating various intervention strategies across medical care providers, at either an institutional level or a physician group-practice level, fits the randomized cluster model. Clearly, the analytical approach to such studies must take the unit of randomization and the accompanying intraclass correlation into consideration. We review alternatives to the typical Pearson's chi-square analysis and illustrate these alternatives. We have written and tested a Fortran program that produces the statistics outlined in this paper. The program, in executable format, is available from the author on request.
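
One common alternative to the naive Pearson chi-square for cluster-randomized binary data is a design-effect adjustment, dividing the statistic by 1 + (m̄ − 1)ρ; the sketch below assumes equal cluster sizes and a supplied intraclass correlation, and is not necessarily the exact set of statistics produced by the authors' Fortran program.

```python
import numpy as np
from scipy import stats

def adjusted_chi_square(successes, totals, arm, icc, mbar):
    """Design-effect-adjusted Pearson chi-square for a 2-arm cluster-randomized binary outcome.

    successes/totals are per-cluster counts, arm is the cluster-level treatment
    indicator, icc the intraclass correlation, mbar the average cluster size.
    """
    s1, n1 = successes[arm == 1].sum(), totals[arm == 1].sum()
    s0, n0 = successes[arm == 0].sum(), totals[arm == 0].sum()
    table = np.array([[s1, n1 - s1], [s0, n0 - s0]])
    chi2, _, _, _ = stats.chi2_contingency(table, correction=False)
    deff = 1 + (mbar - 1) * icc            # variance inflation (design effect)
    chi2_adj = chi2 / deff
    return chi2_adj, stats.chi2.sf(chi2_adj, df=1)

# toy data: 10 clusters of 20 patients each, ICC assumed to be 0.05
rng = np.random.default_rng(11)
arm = np.repeat([0, 1], 5)
totals = np.full(10, 20)
successes = rng.binomial(totals, np.where(arm == 1, 0.55, 0.40))
print(adjusted_chi_square(successes, totals, arm, icc=0.05, mbar=20))
```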

17.
Data on the Likert scale are ubiquitous in medical research, including randomized trials. Statistical analysis of such data may be conducted using the means of raw scores or the rank information of the scores. In the context of parallel-group randomized trials, we quantify treatment effects by the probability that a subject in the treatment group has a better score than (or a win over) a subject in the control group. Asymptotic parametric and nonparametric confidence intervals for this win probability and the associated sample size formulas are derived for studies with only follow-up scores, and for those with both baseline and follow-up measurements. We assessed the performance of both the parametric and nonparametric approaches using simulation studies based on real studies with Likert item and Likert scale data. The simulation results demonstrate that, even without baseline adjustment, the parametric methods did not perform well in terms of bias, interval coverage percentage, balance of tail errors, and assurance of achieving a pre-specified precision. In contrast, the nonparametric approach performed very well for both the unadjusted and adjusted win probability. We illustrate the methods with two examples: one using Likert item data and the other using Likert scale data. We conclude that nonparametric methods are preferable for two-group randomized trials with Likert data. Illustrative SAS code for the nonparametric approach using existing procedures is provided.
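
A sketch of the unadjusted nonparametric win probability (the Mann–Whitney functional with ties counted as half a win) together with a simple bootstrap confidence interval; the paper's closed-form variance, baseline-adjusted estimator, and sample-size formulas are not reproduced, and the Likert data below are simulated.

```python
import numpy as np

def win_probability(treat, control):
    """P(a random treatment subject scores better than a random control subject), ties count 1/2."""
    t = np.asarray(treat)[:, None]
    c = np.asarray(control)[None, :]
    return (t > c).mean() + 0.5 * (t == c).mean()

def bootstrap_ci(treat, control, n_boot=5_000, alpha=0.05, rng=None):
    """Percentile bootstrap interval for the win probability."""
    rng = rng or np.random.default_rng()
    wins = [win_probability(rng.choice(treat, len(treat)), rng.choice(control, len(control)))
            for _ in range(n_boot)]
    return np.quantile(wins, [alpha / 2, 1 - alpha / 2])

# illustrative 5-point Likert item scores
rng = np.random.default_rng(2024)
control = rng.choice([1, 2, 3, 4, 5], 80, p=[0.15, 0.30, 0.30, 0.15, 0.10])
treat   = rng.choice([1, 2, 3, 4, 5], 80, p=[0.05, 0.20, 0.30, 0.25, 0.20])
print("win probability:", round(win_probability(treat, control), 3))
print("95% bootstrap CI:", bootstrap_ci(treat, control, rng=rng))
```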

18.
In oncology, toxicity is typically observable shortly after a chemotherapy treatment, whereas efficacy, often characterized by tumor shrinkage, is observable only after a relatively long period of time. For a phase II clinical trial design, we propose a Bayesian adaptive randomization procedure that accounts for both efficacy and toxicity outcomes. We model efficacy as a time-to-event endpoint and toxicity as a binary endpoint, sharing common random effects in order to induce dependence between the bivariate outcomes. More generally, we allow the randomization probability to depend on patient-specific covariates, such as prognostic factors. Early stopping boundaries are constructed for toxicity and futility, and a superior treatment arm is recommended at the end of the trial. Following the setup of a recent renal cancer clinical trial at M. D. Anderson Cancer Center, we conduct extensive simulation studies under various scenarios to investigate the performance of the proposed method, and compare it with available Bayesian adaptive randomization procedures.

19.
Historical control trials compare an experimental treatment with a previously conducted control treatment. By assigning all recruited patients to the experimental arm, historical control trials can identify promising treatments in early-phase trials more readily than randomized control trials. Existing designs for historical control trials with survival endpoints are based on the asymptotic normal distribution. However, it remains unclear whether the asymptotic distribution of the test statistic is close enough to the true distribution, given the relatively small sample sizes of early-phase trials. In this article, we address this question by introducing an exact design approach for exponentially distributed survival endpoints and comparing it with an asymptotic design in both real and simulated examples. Simulation results show that the asymptotic test can lead to bias in the sample size estimation. We conclude that the proposed exact design should be used in the design of historical control trials.
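
A sketch of an exact test for an exponential survival endpoint against a historical-control hazard, exploiting the fact that the sum of n i.i.d. exponential times is Gamma(n, 1/λ); it assumes complete (uncensored) follow-up and illustrates the exact-distribution idea rather than the authors' full design.

```python
import numpy as np
from scipy import stats

def exact_exponential_test(times, hazard0, alpha=0.05):
    """Exact test of H0: hazard = hazard0 vs H1: hazard < hazard0 (longer survival).

    With complete exponential data, the total time S = sum(T_i) follows
    Gamma(n, scale=1/hazard0) under H0, so the test is exact for any n.
    """
    n, s = len(times), np.sum(times)
    p_value = stats.gamma.sf(s, a=n, scale=1.0 / hazard0)       # P(S >= s | H0)
    critical = stats.gamma.ppf(1 - alpha, a=n, scale=1.0 / hazard0)
    return p_value, s > critical

def exact_power(n, hazard0, hazard1, alpha=0.05):
    """Exact power when the true hazard is hazard1 < hazard0."""
    critical = stats.gamma.ppf(1 - alpha, a=n, scale=1.0 / hazard0)
    return stats.gamma.sf(critical, a=n, scale=1.0 / hazard1)

# illustrative: historical median survival 10 months, alternative median 15 months, n = 25
hazard0, hazard1 = np.log(2) / 10, np.log(2) / 15
print("exact power at n = 25:", round(exact_power(25, hazard0, hazard1), 3))

rng = np.random.default_rng(5)
times = rng.exponential(scale=15.0, size=25)
print("p-value, reject H0:", exact_exponential_test(times, hazard0))
```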

20.
Clinical trials are usually designed with the implicit assumption that data analysis will occur only after the trial is completed. It is a challenging problem if the sponsor wishes to evaluate drug efficacy in the middle of the study without breaking the randomization codes. In this article, a randomized response model and a mixture model are introduced to analyze the data while masking the randomization codes of the crossover design. Given the probability of the treatment sequence, the test based on the mixture model provides higher power than the test based on the randomized response model, which proved inadequate in the example. The paired t-test has higher power than both models if the randomization codes are broken. The sponsor may stop the trial early to claim effectiveness of the study drug if the mixture model yields a positive result.
