Similar Literature (20 results)
1.
A cancer clinical trial of an immunotherapy often has two special features: patients may be cured of the cancer, and the immunotherapy may begin to take clinical effect only after a delay. Existing testing methods may be inadequate for immunotherapy clinical trials because they do not take both features into consideration at the same time, and hence have low power to detect the true treatment effect. In this paper, we propose a piecewise proportional hazards cure rate model with a random delay time to fit the data, and a new weighted log-rank test to detect the treatment effect of an immunotherapy over a chemotherapy control. We show that the proposed weight is nearly optimal under mild conditions. Our simulation study shows a substantial gain in power for the proposed test over existing tests, and robustness of the test under a misspecified weight. We also introduce a sample size calculation formula for designing immunotherapy clinical trials using the proposed weighted log-rank test.
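As a rough illustration of the data-generating setup this abstract describes, a cure fraction combined with a treatment effect that starts only after a delay, here is a minimal simulation sketch. All parameter values are hypothetical and this is not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sim_cure_delay_arm(n, cure_rate, lam, hr_after_delay, delay, followup):
    """One arm of a mixture cure model with a delayed effect: a `cure_rate`
    fraction never fail; the rest have hazard `lam` before `delay` and
    `lam * hr_after_delay` afterwards (inverse-transform sampling)."""
    cured = rng.random(n) < cure_rate
    h = -np.log(rng.random(n))            # unit-exponential draws
    h_at_delay = lam * delay              # cumulative hazard at the delay
    t = np.where(h <= h_at_delay,
                 h / lam,
                 delay + (h - h_at_delay) / (lam * hr_after_delay))
    t[cured] = np.inf                     # cured patients never fail
    event = t <= followup
    obs = np.minimum(t, followup)         # administrative censoring
    return obs, event

obs, event = sim_cure_delay_arm(100_000, 0.30, 0.5, 0.5, 1.0, 5.0)
# cured patients are censored at follow-up, inflating the censoring rate
```

With a 30% cure fraction the censoring rate stays well above 30%, which is exactly the feature a standard log-rank design ignores.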

2.
Molecularly targeted, genomic-driven, and immunotherapy-based clinical trials continue to be advanced for the treatment of relapsed or refractory cancer patients, where the growth modulation index (GMI) is often considered a primary endpoint of treatment efficacy. However, little literature is available on trial design with GMI as the primary endpoint. In this article, we derive a sample size formula for the score test under a log-linear model of the GMI. Study designs using the derived sample size formula are illustrated under a bivariate exponential model, the Weibull frailty model, and the generalized treatment effect size. The proposed designs provide sound statistical methods for a single-arm phase II trial with GMI as the primary endpoint.

3.
A challenge in cancer immunotherapy trial design is the presence of a delayed treatment effect, under which the proportional hazards assumption no longer holds. As a result, a traditional survival trial design based on the standard log-rank test, which ignores the delayed treatment effect, leads to a substantial loss of statistical power. Recently, a piecewise weighted log-rank test was proposed to take the delayed treatment effect into account in the trial design. However, because its sample size formula was derived under a sequence of local alternative hypotheses, it underestimates the sample size when the hazard ratio is relatively small for a balanced trial design and gives inaccurate sample size estimates for unbalanced designs. In this article, we derive a new sample size formula under a fixed alternative hypothesis for the delayed treatment effect model. Simulation results show that the new formula provides accurate sample size estimation for both balanced and unbalanced designs.

4.
The current practice of designing single-arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single-arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.

5.
The indirect mechanism of action of immunotherapy causes a delayed treatment effect, producing delayed separation of the survival curves between treatment groups and violating the proportional hazards assumption. Using the log-rank test in immunotherapy trial design can therefore result in a severe loss of efficiency. Few statistical methods are available for immunotherapy trial design that incorporate a delayed treatment effect; recently, Ye and Yu proposed the use of a maximin efficiency robust test (MERT) for the trial design. The MERT is a weighted log-rank test that puts less weight on early events and full weight after the delay period. However, the weight function of the MERT involves an unknown function that has to be estimated from historical data. Here, for simplicity, we propose the use of an approximated maximin test, the V0 test, which is the sum of the log-rank test for the full data set and the log-rank test for the data beyond the lag time point. The V0 test fully uses the trial data and is more efficient than the log-rank test when a lag exists, with relatively little efficiency loss when no lag exists. The sample size formula for the V0 test is derived. Simulations are conducted to compare the performance of the V0 test with existing tests, and a real trial is used to illustrate cancer immunotherapy trial design with a delayed treatment effect.
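One plausible reading of the V0 construction, the sum of a log-rank test on the full data and a log-rank test on the data beyond the lag, is a weighted log-rank test with weight 1 before the lag and 2 after it. A minimal sketch under that reading (not the authors' exact statistic; the lag, hazards, and sample sizes are hypothetical):

```python
import numpy as np

def weighted_logrank(time, event, group, weight_fn):
    """Two-sample weighted log-rank statistic:
    Z = sum_t w(t)(O_t - E_t) / sqrt(sum_t w(t)^2 V_t),
    summed over distinct observed event times t."""
    score, var = 0.0, 0.0
    for t in np.unique(time[event]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & event).sum()
        d1 = ((time == t) & event & (group == 1)).sum()
        w = weight_fn(t)
        score += w * (d1 - d * n1 / n)
        var += w * w * d * (n1 / n) * (1 - n1 / n) * (n - d) / max(n - 1, 1)
    return score / np.sqrt(var)

rng = np.random.default_rng(1)
m, lag = 400, 1.0
group = np.repeat([0, 1], m)
h = -np.log(rng.random(2 * m))                  # unit-exponential draws
# control hazard 0.5 throughout; treated: 0.5 before the lag, 0.25 after
t = np.where(group == 0, h / 0.5,
             np.where(h <= 0.5 * lag, h / 0.5,
                      lag + (h - 0.5 * lag) / 0.25))
event = t <= 6.0                                 # administrative censoring
time = np.minimum(t, 6.0)

z_lr = weighted_logrank(time, event, group, lambda s: 1.0)        # standard
z_v0 = weighted_logrank(time, event, group, lambda s: 1.0 + (s > lag))
```

Both statistics come out negative here (the treated arm fails later); the doubled post-lag weight is what recovers efficiency when the lag exists.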

6.
Recently, molecularly targeted agents and immunotherapy have been advanced for the treatment of relapsed or refractory cancer patients, where progression-free survival or event-free survival is often a primary endpoint of the trial design. However, methods for evaluating two-stage single-arm phase II trials with a time-to-event endpoint are currently derived under an exponential distribution, which limits their application to real trial designs. In this paper, we develop an optimal two-stage design that applies to four commonly used parametric survival distributions. The proposed method has advantages over existing methods in that the choice of the underlying survival model is more flexible and the power of the study is more adequately addressed. The proposed two-stage design can therefore be routinely used for single-arm phase II trials with a time-to-event endpoint, as a complement to the commonly used Simon's two-stage design for binary outcomes.

7.
In this paper, an exact variance of the one-sample log-rank test statistic is derived under the alternative hypothesis, and a sample size formula is proposed based on the derived exact variance. Simulation results showed that the proposed sample size formula provides adequate power to design a study to compare the survival of a single sample with that of a standard population. Copyright © 2014 John Wiley & Sons, Ltd.
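For context, the textbook one-sample log-rank statistic compares the observed number of events O with the expected number E under a standard population's cumulative hazard. The sketch below uses the simple null variance E, not the exact alternative-hypothesis variance derived in the paper, and the standard-population hazard is hypothetical:

```python
import numpy as np

def one_sample_logrank(time, event, cum_hazard):
    """O = observed events; E = sum of the standard population's
    cumulative hazard evaluated at each follow-up time;
    Z = (O - E) / sqrt(E) is approximately N(0, 1) under the null."""
    O = event.sum()
    E = cum_hazard(time).sum()
    return (O - E) / np.sqrt(E)

# Hypothetical standard population: exponential with rate 0.2 per year,
# so Lambda(t) = 0.2 * t; 200 patients censored administratively at 3 years.
rng = np.random.default_rng(2)
t = np.minimum(rng.exponential(1 / 0.2, 200), 3.0)
e = t < 3.0
z = one_sample_logrank(t, e, lambda s: 0.2 * s)
# data were generated under the null, so z should sit near 0
```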

8.
The proportional hazards assumption is common when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes the hazard ratio to change while the trial is ongoing: at the beginning we do not observe any difference between treatment arms, and after some unknown time point the differences between treatment arms start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log-rank test allows a weighting for early, middle, and late differences through the Fleming and Harrington class of weights and is proven to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington class of weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and make an empirical evaluation, in terms of power and type I error rate, of the weighted log-rank test in a simulated scenario with fixed values of the Fleming and Harrington class of weights. We also give some practical recommendations regarding which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.

9.
In some exceptional circumstances, as in very rare diseases, nonrandomized one-arm trials are the sole source of evidence to demonstrate efficacy and safety of a new treatment. The design of such studies needs a sound methodological approach in order to provide reliable information, and the determination of the appropriate sample size still represents a critical step of this planning process. As, to our knowledge, no method exists for sample size calculation in one-arm trials with a recurrent event endpoint, we propose here a closed sample size formula. It is derived assuming a mixed Poisson process, and it is based on the asymptotic distribution of the one-sample robust nonparametric test recently developed for the analysis of recurrent events data. The validity of this formula in managing a situation with heterogeneity of event rates, both in time and between patients, and time-varying treatment effect was demonstrated with exhaustive simulation studies. Moreover, although the method requires the specification of a process for events generation, it seems to be robust under erroneous definition of this process, provided that the number of events at the end of the study is similar to the one assumed in the planning phase. The motivating clinical context is represented by a nonrandomized one-arm study on gene therapy in a very rare immunodeficiency in children (ADA-SCID), where a major endpoint is the recurrence of severe infections. Copyright © 2012 John Wiley & Sons, Ltd.
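The mixed Poisson process assumed here can be sketched by giving each patient a gamma frailty that multiplies a common event rate, which produces the between-patient heterogeneity (overdispersion) the abstract mentions. A hypothetical simulation, with all rates invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def sim_recurrent_counts(n, rate, followup, frailty_var):
    """Mixed Poisson process: each patient gets a gamma frailty with
    mean 1 and variance `frailty_var`; the event count over follow-up
    is Poisson with rate frailty * rate * followup."""
    frailty = rng.gamma(1 / frailty_var, frailty_var, n)
    return rng.poisson(frailty * rate * followup)

counts = sim_recurrent_counts(50_000, 1.2, 2.0, 0.5)
# mean count ≈ rate * followup = 2.4; the frailty inflates the variance
# to ≈ mean + frailty_var * mean**2 = 5.28 (overdispersion)
```

A plain Poisson model would force mean = variance, so the gap between the two is exactly what the robust nonparametric test has to accommodate.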

10.
In recent years, immunological science has evolved, and cancer vaccines are available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected. Accordingly, the use of weighted log-rank tests with the Fleming–Harrington class of weights has been proposed for evaluation of survival endpoints. We present a method for calculating the sample size under the assumption of a piecewise exponential distribution for the cancer vaccine group and an exponential distribution for the placebo group as the survival model. The impact of the timing of the delayed effect on both the choice of Fleming–Harrington weights and the increase in the required number of events is discussed. Copyright © 2014 John Wiley & Sons, Ltd.
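The Fleming–Harrington G(ρ, γ) weights referenced here are w(t) = Ŝ(t−)^ρ (1 − Ŝ(t−))^γ, where Ŝ is the Kaplan–Meier estimate just before each event time; choosing γ > 0 down-weights early events, which suits a delayed vaccine effect. A minimal sketch with illustrative data (not the paper's design):

```python
import numpy as np

def fh_weights(time, event, rho, gamma):
    """Fleming-Harrington G(rho, gamma) weights
    w(t) = S(t-)^rho * (1 - S(t-))^gamma, with S(t-) the Kaplan-Meier
    estimate just before each distinct event time."""
    ts = np.unique(time[event])
    s, s_left = 1.0, []
    for t in ts:
        s_left.append(s)                  # S(t-) before the drop at t
        n = (time >= t).sum()             # at-risk count
        d = ((time == t) & event).sum()   # deaths at t
        s *= 1 - d / n
    s_left = np.array(s_left)
    return ts, s_left**rho * (1 - s_left)**gamma

rng = np.random.default_rng(4)
t = rng.exponential(1.0, 200)
e = np.ones(200, dtype=bool)
ts, w = fh_weights(t, e, rho=0, gamma=1)
# G(0,1) weights start at 0 at the earliest event and climb toward 1,
# shifting weight to late differences
```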

11.
A 3-arm trial design that includes an experimental treatment, an active reference treatment, and a placebo is useful for assessing the noninferiority of an experimental treatment. The inclusion of a placebo arm enables assessment of assay sensitivity and internal validation, in addition to testing the noninferiority of the experimental treatment compared with the reference treatment. In 3-arm noninferiority trials, various statistical test procedures have been considered to evaluate the following three hypotheses: (i) superiority of the experimental treatment over the placebo, (ii) superiority of the reference treatment over the placebo, and (iii) noninferiority of the experimental treatment compared with the reference treatment. However, hypothesis (ii) can be insufficient and may not accurately assess assay sensitivity for the noninferiority comparison. It can therefore be necessary to demonstrate that the superiority of the reference treatment over the placebo exceeds the noninferiority margin (i.e., the nonsuperiority of the reference treatment compared with the placebo). Here, we propose log-rank statistical procedures for evaluating data from 3-arm noninferiority trials to assess assay sensitivity with a prespecified margin Δ. In addition, we derive the approximate sample size and the optimal allocation that minimizes, hierarchically, the total sample size and the placebo sample size.

12.
Phase II clinical trials designed to evaluate a drug's treatment effect can be either single-arm or double-arm. A single-arm design tests the null hypothesis that the response rate of a new drug is lower than a fixed threshold, whereas a double-arm scheme makes a more objective comparison of the response rates between the new treatment and the standard of care through randomization. Although the randomized design is the gold standard for efficacy assessment, various situations arise in which a single-arm pilot study prior to a randomized trial is necessary. To combine the single- and double-arm phases and pool the information for better decision making, we propose a Single-To-double ARm Transition design (START) with switching hypothesis tests, in which the first stage compares the new drug's response rate with a minimum required level and imposes a continuation criterion, and the second stage uses randomization to determine the treatment's superiority. We develop a software package in R to calibrate the frequentist error rates and perform simulation studies to assess the trial characteristics. Finally, a metastatic pancreatic cancer trial is used to illustrate the decision rules under the proposed START design.
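The first-stage continuation criterion described here can be illustrated with simple binomial tail probabilities: continue to the randomized stage only if enough responses are seen. The cutoffs and response rates below are hypothetical, not the calibrated START design:

```python
from math import comb

def binom_tail(n, p, kmin):
    """P(X >= kmin) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(kmin, n + 1))

def stage1_oc(n1, r1, p0, p1):
    """Chance of passing a START-style stage-1 continuation rule
    ("at least r1 responses among n1 patients") under the null
    response rate p0 and under the alternative p1."""
    return binom_tail(n1, p0, r1), binom_tail(n1, p1, r1)

go_null, go_alt = stage1_oc(n1=20, r1=5, p0=0.10, p1=0.30)
# with these hypothetical numbers, ~4% of ineffective drugs continue
# to randomization, while ~76% of effective ones do
```

In the actual design the stage-1 data are also pooled into the stage-2 comparison, which is what the R package's error-rate calibration accounts for.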

13.
For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.

14.
In this paper, we review the adaptive design methodology of Li et al. (Biostatistics 3:277–287) for two-stage trials with mid-trial sample size adjustment. We argue that it is closer in principle to a group sequential design, in spite of its obvious adaptive element. Several extensions are proposed that aim to make it an even more attractive and transparent alternative to a standard (fixed sample size) trial for funding bodies to consider. These enable a cap to be put on the maximum sample size and allow the trial data to be analysed using standard methods at its conclusion. The regulatory view of trials incorporating unblinded sample size re-estimation is also discussed. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons, Ltd.

15.
Clinical phase II trials in oncology are conducted to determine whether the activity of a new anticancer treatment is promising enough to merit further investigation. Two-stage designs are commonly used in this situation to allow for early termination. Designs proposed in the literature so far have the common drawback that the sample sizes for the two stages have to be specified in the protocol and adhered to strictly during the course of the trial. As a consequence, designs that allow a greater degree of flexibility are desirable. In this article, we propose a new adaptive method that allows an arbitrary modification of the sample size of the second stage using the results of the interim analysis or external information while controlling the type I error rate. If the sample size is not changed during the trial, the proposed design shows very similar characteristics to the optimal two-stage design proposed by Chang et al. (Biometrics 1987; 43:865–874). However, the new design allows the use of mid-course information for the planning of the second stage, thus meeting practical requirements when performing clinical phase II trials in oncology. Copyright © 2012 John Wiley & Sons, Ltd.

16.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter–based sample size re-estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose updating the meta-analytic-predictive prior based on the results of the internal pilot study and re-estimating the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and the sample size distribution, of the proposed procedure with the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared with the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re-estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.

17.
The stratified Cox model is commonly used for stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50-200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate using simulations that our proposed two-step RGLR analysis delivers notably better results than the stratified Cox model analysis, with smaller estimation bias and mean squared error and larger power, when there is a treatment-by-stratum interaction, and similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples, while the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.

18.
Dose-escalation trials commonly assume a homogeneous trial population to identify a single recommended dose of the experimental treatment for use in future trials. Wrongly assuming a homogeneous population can lead to a diluted treatment effect. Equally, exclusion of a subgroup that could in fact benefit from the treatment can cause a beneficial treatment effect to be missed. Accounting for a potential subgroup effect (ie, difference in reaction to the treatment between subgroups) in dose-escalation can increase the chance of finding the treatment to be efficacious in a larger patient population. A standard Bayesian model-based method of dose-escalation is extended to account for a subgroup effect by including covariates for subgroup membership in the dose-toxicity model. A stratified design performs well but uses available data inefficiently and makes no inferences concerning presence of a subgroup effect. A hypothesis test could potentially rectify this problem but the small sample sizes result in a low-powered test. As an alternative, the use of spike and slab priors for variable selection is proposed. This method continually assesses the presence of a subgroup effect, enabling efficient use of the available trial data throughout escalation and in identifying the recommended dose(s). A simulation study, based on real trial data, was conducted and this design was found to be both promising and feasible.

19.
With the emergence of novel therapies exhibiting distinct mechanisms of action compared with traditional treatments, departure from the proportional hazards (PH) assumption in clinical trials with a time-to-event endpoint is increasingly common. In these situations, the hazard ratio may not be a valid measure of treatment effect, and the log-rank test may no longer be the most powerful statistical test. The restricted mean survival time (RMST) is an alternative robust and clinically interpretable summary measure that does not rely on the PH assumption. We conduct extensive simulations to evaluate the performance and operating characteristics of RMST-based inference against hazard ratio–based inference under various scenarios and design parameter setups. The log-rank test is generally powerful when there is evident separation favoring one treatment arm at most time points across the Kaplan-Meier survival curves, and the performance of the RMST test is similar. Under non-PH scenarios where late separation of the survival curves is observed, the RMST-based test performs better than the log-rank test when the truncation time is reasonably close to the tail of the observed curves. Furthermore, when a flat survival tail (or low event rate) in the experimental arm is expected, selecting the minimum of the maximum observed event times as the truncation time point for the RMST is not recommended. In addition, we recommend including an analysis based on the RMST curve over the truncation time in clinical settings where substantial departure from the PH assumption is suspected.
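The RMST is the area under the Kaplan–Meier curve up to a truncation time τ. A minimal estimator sketch; with fully uncensored data it reduces exactly to the mean of min(T, τ):

```python
import numpy as np

def rmst(time, event, tau):
    """Restricted mean survival time: area under the Kaplan-Meier
    curve from 0 to the truncation point tau."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    n = len(time)
    s, t_prev, area = 1.0, 0.0, 0.0
    for i in range(n):
        if time[i] > tau:
            break
        area += s * (time[i] - t_prev)    # rectangle under the current step
        t_prev = time[i]
        if event[i]:
            s *= 1.0 - 1.0 / (n - i)      # n - i subjects still at risk
    return area + s * (tau - t_prev)      # final rectangle up to tau

rng = np.random.default_rng(5)
t = rng.exponential(2.0, 1000)            # true RMST(2) = 2*(1 - e**-1) ≈ 1.26
est = rmst(t, np.ones(1000, dtype=bool), 2.0)
```

Because τ caps the contribution of the flat tail, the choice of truncation time drives the comparison discussed in the abstract.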

20.
A three-arm trial including an experimental treatment, an active reference treatment, and a placebo is often used to assess the non-inferiority (NI), with assay sensitivity, of an experimental treatment. Various hypothesis-test-based approaches via a fraction or a pre-specified margin have been proposed to assess NI with assay sensitivity in a three-arm trial, but little work has been done on confidence intervals in this setting. This paper develops a hybrid approach to construct a simultaneous confidence interval for assessing NI and assay sensitivity in a three-arm trial. For comparison, we present normal-approximation-based and bootstrap-resampling-based simultaneous confidence intervals. Simulation studies show that the hybrid approach with the Wilson score statistic performs better than the other approaches in terms of empirical coverage probability and mesial non-coverage probability. An example is used to illustrate the proposed approaches.
