Similar Documents
A total of 20 similar documents were found.
1.
For clinical trials with time-to-event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with interim and final analyses planned at a pre-specified number of deaths. Often, correlated surrogate information, such as time-to-progression (TTP) and progression-free survival, is also collected as secondary efficacy endpoints. It is appealing to borrow strength from this surrogate information to improve the precision of the analysis-time prediction. Currently available methods in the literature for predicting analysis timings do not utilize the surrogate information. In this article, using OS and TTP as an example, a general parametric model for OS and TTP is proposed, under the assumption that disease progression can change the course of overall survival. Progression-free survival, which is related to both OS and TTP, is handled separately, as it can be derived from OS and TTP. The authors develop a prediction procedure using a Bayesian method and provide detailed implementation strategies under certain assumptions. Simulations are performed to evaluate the performance of the proposed method. An application to a real study is also provided. Copyright © 2015 John Wiley & Sons, Ltd.
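A minimal simulation sketch of the prediction problem this abstract addresses, assuming hypothetical exponential hazards in which progression raises the death hazard; the rates, event target, and accrual pattern are illustrative placeholders, and the authors' actual Bayesian OS-TTP model is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical rates (per month) -- illustrative values, not from the paper
n, target_deaths = 300, 150
lam_ttp    = 0.08   # hazard of progression
lam_d_pre  = 0.02   # death hazard before progression
lam_d_post = 0.10   # death hazard after progression (progression worsens prognosis)

def one_trial():
    entry  = rng.uniform(0, 12, n)              # staggered accrual over 12 months
    ttp    = rng.exponential(1 / lam_ttp, n)
    d_pre  = rng.exponential(1 / lam_d_pre, n)
    d_post = rng.exponential(1 / lam_d_post, n)
    # OS: death before progression, or progression followed by post-progression death
    os_time = np.where(d_pre < ttp, d_pre, ttp + d_post)
    death_calendar = np.sort(entry + os_time)
    return death_calendar[target_deaths - 1]    # calendar time of the 150th death

times = np.array([one_trial() for _ in range(2000)])
print(f"Predicted analysis time: median {np.median(times):.1f} months, "
      f"90% interval ({np.percentile(times, 5):.1f}, {np.percentile(times, 95):.1f})")
```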

2.
With the advent of ever more effective second- and third-line cancer treatments and the growing use of 'crossover' trial designs in oncology, in which patients switch to the alternate randomized treatment upon disease progression, progression-free survival (PFS) is an increasingly important endpoint in oncologic drug development. However, several concerns exist regarding the use of PFS as a basis for comparing treatments. Unlike survival, the exact time of progression is unknown, so progression times may be over-estimated and, consequently, bias may be introduced when comparing treatments. Further, it is not uncommon for randomized therapy to be stopped before progression is documented, due to toxicity or the initiation of additional anti-cancer therapy; in such cases patients are frequently not followed further for progression and, consequently, are right-censored in the analysis. This article reviews these issues and concludes that concerns relating to the exact timing of progression are generally overstated, with analysis techniques and simple alternative endpoints available either to remove bias entirely or at least to provide reassurance via supportive analyses that bias is not present. Further, it is concluded that the regularly recommended manoeuvre of censoring PFS time at dropout due to toxicity or upon the initiation of additional anti-cancer therapy is likely to favour the more toxic, less efficacious treatment and so should be avoided whenever possible.

3.
Pre-study sample size calculations for clinical trial research protocols are now mandatory. When an investigator is designing a study to compare the outcomes of an intervention, an essential step is the calculation of sample sizes that will allow a reasonable chance (power) of detecting a pre-determined difference (effect size) in the outcome variable, at a given level of statistical significance. Frequently, studies recruit fewer patients than the initial pre-study sample size calculation suggested. Investigators are then faced with the fact that their study may be inadequately powered to detect the pre-specified treatment effect, and the statistical analysis of the collected outcome data may or may not report a statistically significant result. If the data produce a non-statistically-significant result, investigators are frequently tempted to ask: "Given the actual final study size, what is the power of the study, now, to detect a treatment effect or difference?" The aim of this article is to debate whether or not it is desirable to answer this question and to undertake a power calculation after the data have been collected and analysed. Copyright © 2008 John Wiley & Sons, Ltd.
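The "observed power" computation this abstract debates can be sketched as follows for a two-sided two-sample z-test with hypothetical numbers; a well-known caveat is that observed power is a monotone function of the p-value, which is one reason post hoc power calculations are widely criticized:

```python
from scipy.stats import norm

def posthoc_power(effect, sd, n_per_arm, alpha=0.05):
    """'Observed' power of a two-sided two-sample z-test, plugging the
    observed effect and SD back into the standard power formula."""
    se = sd * (2 / n_per_arm) ** 0.5
    z = norm.ppf(1 - alpha / 2)
    return norm.sf(z - effect / se) + norm.cdf(-z - effect / se)

# Hypothetical trial that recruited 40 per arm instead of the planned 64
print(f"observed power = {posthoc_power(effect=0.4, sd=1.0, n_per_arm=40):.2f}")
```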

4.
One characterization of group sequential methods uses alpha spending functions to allocate the false positive rate throughout a study. We consider and evaluate several such spending functions, as well as the time points of the interim analyses at which they apply. In addition, we evaluate the double triangular test as an alternative procedure that allows for early termination of the trial not only due to efficacy differences between treatments, but also due to lack of such differences. We motivate and illustrate our work by reference to the analysis of survival data from a proposed oncology study. Such group sequential procedures with one or two interim analyses are only slightly less powerful than fixed sample trials, but provide a strong possibility of early stopping. Therefore, in all situations where they can practically be applied, we recommend their routine use in clinical trials. The double triangular test provides a suitable alternative to the group sequential procedures, which do not provide for early stopping with acceptance of the null hypothesis. Again, there is only a modest loss in power relative to fixed sample tests. Copyright © 2004 John Wiley & Sons, Ltd.
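For reference, commonly used spending functions can be evaluated directly. The sketch below computes the cumulative type I error allocated by O'Brien-Fleming-type and Pocock-type Lan-DeMets spending functions and a power-family function at hypothetical, equally spaced information times; the paper's specific boundary choices are not reproduced:

```python
import numpy as np
from scipy.stats import norm

alpha = 0.05  # overall two-sided type I error to be spent

def obrien_fleming(t):
    """O'Brien-Fleming-type Lan-DeMets spending function."""
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t)))

def pocock(t):
    """Pocock-type Lan-DeMets spending function."""
    return alpha * np.log(1 + (np.e - 1) * t)

def power_family(t, rho=1.0):
    """Power-family spending function: alpha * t**rho."""
    return alpha * t ** rho

# Cumulative alpha spent at two interims plus the final analysis
for t in (1 / 3, 2 / 3, 1.0):
    print(f"t={t:.2f}  OF: {obrien_fleming(t):.4f}  "
          f"Pocock: {pocock(t):.4f}  power(rho=1): {power_family(t):.4f}")
```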

5.
In oncology/hematology early phase clinical trials, efficacy is often observed in terms of response rate, depth, timing, and duration. However, the true clinical benefits that eventually support registrational purposes are progression-free survival (PFS) and/or overall survival (OS), for which follow-up is typically not long enough in early phase trials. This gap imposes challenges on strategies for late phase drug development. In this article, we tackle the question by leveraging published studies to establish a quantitative link between early efficacy outcomes and late phase efficacy endpoints. We used solid tumor cancer as the disease model, modeling the disease course of a RECIST v1.1-assessed solid tumor with a continuous Markov chain (CMC) model. We parameterize the transition intensity matrix of the CMC model based on published aggregate-level summary statistics, and then simulate subject-level time-to-event data. The simulated data are shown to approximate published studies well. PFS and/or OS can be predicted with the transition intensity matrix modified, given clinical knowledge, to reflect various assumptions on response rate, depth, timing, and duration. The authors have built an R Shiny application named PubPredict, which implements the algorithm described above and supports customized features including multiple response levels, treatment crossover, and varying follow-up duration. This toolset has been applied to advise phase 3 trial design when only early efficacy data are available from phase 1 or 2 studies.
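A minimal sketch of simulating subject-level PFS and OS from a transition intensity matrix, in the spirit of the CMC approach described above; the states and intensities are illustrative placeholders, not the published estimates used by PubPredict:

```python
import numpy as np

rng = np.random.default_rng(1)
states = ["response", "stable", "progression", "death"]

# Hypothetical transition intensity matrix (per month); each row sums to zero,
# and these values are illustrative only, not taken from the paper.
Q = np.array([
    [-0.06,  0.02,  0.03,  0.01],   # response    -> stable, progression, death
    [ 0.03, -0.10,  0.05,  0.02],   # stable      -> response, progression, death
    [ 0.00,  0.00, -0.15,  0.15],   # progression -> death
    [ 0.00,  0.00,  0.00,  0.00],   # death (absorbing)
])

def simulate_subject(start=1):
    """Simulate one subject's path; return (PFS time, OS time)."""
    t, s, pfs = 0.0, start, None
    while Q[s, s] < 0:                        # until the absorbing death state
        t += rng.exponential(-1.0 / Q[s, s])  # exponential sojourn in state s
        jump = np.clip(Q[s], 0.0, None)       # off-diagonal jump intensities
        jump[s] = 0.0
        s = rng.choice(len(states), p=jump / jump.sum())
        if pfs is None and states[s] in ("progression", "death"):
            pfs = t                           # first progression or death
    return pfs, t                             # t is the death time

pfs, os_ = map(np.array, zip(*(simulate_subject() for _ in range(5000))))
print(f"median PFS ~ {np.median(pfs):.1f} mo, median OS ~ {np.median(os_):.1f} mo")
```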

6.
Baseline adjustment is an important consideration in thorough QT studies for non-antiarrhythmic drugs. For crossover studies with period-specific pre-dose baselines, we propose a by-time-point analysis of covariance model with change from pre-dose baseline as response, treatment as a fixed effect, the pre-dose baseline for the current treatment and the pre-dose baseline averaged across treatments as covariates, and subject as a random effect. Additional factors such as period and sex should be included in the model as appropriate. Multiple pre-dose measurements can be averaged to obtain a pre-dose-averaged baseline, which can then be used in the model. We provide conditions under which the proposed model is more efficient than other models. We demonstrate the efficiency and robustness of the proposed model both analytically and through simulation studies. The advantage of the proposed model is also illustrated using the data from a real clinical trial. Copyright © 2014 John Wiley & Sons, Ltd.
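A sketch of how the proposed by-time-point model could be fitted at a single post-dose time point, here with simulated toy data and the statsmodels mixed-model API; the variable names (dqtc, base_trt, base_avg) are hypothetical, and the period and sex terms are omitted for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical single post-dose time point from a 2-period crossover TQT study
rng = np.random.default_rng(4)
n = 40
subj = np.repeat(np.arange(n), 2)
trt = np.tile(["drug", "placebo"], n)
base_trt = rng.normal(400, 10, 2 * n)            # pre-dose baseline, current period
base_avg = np.repeat((base_trt[0::2] + base_trt[1::2]) / 2, 2)  # subject average
dqtc = (5 * (trt == "drug") + 0.3 * (base_trt - 400)
        + rng.normal(0, 5, 2 * n))               # change from pre-dose baseline
df = pd.DataFrame(dict(subject=subj, trt=trt, base_trt=base_trt,
                       base_avg=base_avg, dqtc=dqtc))

# By-time-point model: treatment fixed effect, both baseline covariates,
# and a random subject intercept
fit = smf.mixedlm("dqtc ~ trt + base_trt + base_avg", df,
                  groups=df["subject"]).fit()
print(fit.summary())
```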

7.
Brownian motion has been used to derive stopping boundaries for group sequential trials; however, when dependent increments are observed in the data, fractional Brownian motion is an alternative model to consider. In this article we compare expected sample sizes and stopping times for different stopping boundaries based on the power family of alpha spending functions, under various values of the Hurst coefficient. Results show that expected sample sizes and stopping times decrease and power increases as the Hurst coefficient increases. For the same Hurst coefficient, the closer the boundaries are to those of O'Brien-Fleming, the higher the expected sample sizes and stopping times; however, power shows a decreasing trend for values starting from H = 0.6 (early analysis), 0.7 (equal spacing), and 0.8 (late analysis). We also illustrate study design changes using results from the BHAT study.
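Fractional Brownian motion with Hurst coefficient H has covariance Cov(B_H(s), B_H(t)) = (s^{2H} + t^{2H} - |s - t|^{2H})/2, so paths can be sampled via a Cholesky factorization of that covariance. A small sketch with an illustrative grid and path count, not the paper's simulation settings:

```python
import numpy as np

def fbm_paths(hurst, n_steps=100, t_max=1.0, n_paths=1000, seed=0):
    """Sample fractional Brownian motion paths via Cholesky of the covariance."""
    rng = np.random.default_rng(seed)
    t = np.linspace(t_max / n_steps, t_max, n_steps)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2*hurst) + u**(2*hurst) - np.abs(s - u)**(2*hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))  # jitter for stability
    return t, L @ rng.standard_normal((n_steps, n_paths))

# H = 0.5 recovers standard Brownian motion; H > 0.5 gives positively
# correlated increments, the case considered in the paper.
for H in (0.5, 0.6, 0.7, 0.8):
    t, paths = fbm_paths(H)
    print(f"H={H}: Var[B_H(1)] ~ {paths[-1].var():.3f} (theory: 1.0)")
```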

8.
The analysis of clinical trials aiming to show symptomatic benefits is often complicated by the ethical requirement for rescue medication when the disease state of patients worsens. In type 2 diabetes trials, patients receive glucose-lowering rescue medications continuously for the remaining trial duration if one of several markers of glycemic control exceeds pre-specified thresholds. This may mask differences in glycemic values between treatment groups, because it will occur more frequently in less effective treatment groups. Traditionally, the last pre-rescue-medication value was carried forward and analyzed as the end-of-trial value. The deficits of such simplistic single-imputation approaches are increasingly recognized by regulatory authorities and trialists. We discuss alternative approaches and evaluate them through a simulation study. When the estimand of interest is the effect attributable to the treatments initially assigned at randomization, our recommendation for estimation and hypothesis testing is to treat data after meeting rescue criteria as deterministically 'missing' at random, because initiation of rescue medication is determined by observed in-trial values. An appropriate imputation of values after meeting rescue criteria is then possible either directly through multiple imputation or implicitly with a repeated measures model. Crucially, one needs to jointly impute or model all markers of glycemic control that can lead to the initiation of rescue medication. An alternative for hypothesis testing only is rank tests, with outcomes from patients requiring rescue medication ranked worst and non-rescued patients ranked according to final-visit values. However, an appropriate ranking of unobserved values may be controversial. Copyright © 2015 John Wiley & Sons, Ltd.
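The rank-test alternative mentioned above can be sketched with a worst-rank composite: rescued patients receive a score worse than any observed value, and the arms are compared with a Wilcoxon-Mann-Whitney test. All data below are hypothetical:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)

WORST = 99.0  # sentinel above any plausible HbA1c value (higher = worse here)

def worst_rank_scores(final_hba1c, rescued):
    """Rescued patients are ranked worst; the rest keep their final-visit value."""
    scores = final_hba1c.copy()
    scores[rescued] = WORST        # ties among rescued patients get midranks
    return scores

# Hypothetical data: the better arm lowers HbA1c and triggers rescue less often
n = 100
trt, trt_resc = rng.normal(7.0, 0.8, n), rng.random(n) < 0.10
ctl, ctl_resc = rng.normal(7.6, 0.8, n), rng.random(n) < 0.25

stat, p = mannwhitneyu(worst_rank_scores(trt, trt_resc),
                       worst_rank_scores(ctl, ctl_resc),
                       alternative="two-sided")
print(f"worst-rank Wilcoxon-Mann-Whitney test: p = {p:.4f}")
```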

9.
A clinical hold order by the Food and Drug Administration (FDA) to the sponsor of a clinical trial is a measure to delay a proposed clinical investigation or to suspend an ongoing one. The phase III clinical trial START serves as a motivating data example to explore implications and potential statistical approaches for a trial continuing after a clinical hold is lifted. In spite of a modified intention-to-treat (ITT) analysis introduced to account for the clinical hold by excluding the patients potentially affected most by it, the results of the trial did not show a significant improvement in overall survival duration, and the question remains whether the negative result was an effect of the clinical hold. In this paper, we propose a multistate model incorporating the clinical hold as well as disease progression as intermediate events to investigate the impact of the clinical hold on the treatment effect. Moreover, we consider a simple counterfactual censoring approach as an alternative strategy to the modified ITT analysis for dealing with a clinical hold. Using a realistic simulation study informed by the START data, with a design based on our multistate model, we show that the modified ITT analysis used in the START trial was reasonable. However, the censoring approach is shown to have some benefits in terms of power and flexibility.

10.
For normally distributed data analyzed with linear models, it is well known that measurement error on an independent variable leads to attenuation of the effect of the independent variable on the dependent variable. However, for time-to-event variables such as progression-free survival (PFS), the effect of measurement variability in the underlying measurements defining the event is less well understood. We conducted a simulation study to evaluate the impact of measurement variability in tumor assessment on the treatment-effect hazard ratio for PFS and on the median PFS time, for different tumor assessment frequencies. Our results show that scan measurement variability can attenuate the treatment effect (i.e. move the hazard ratio closer to one) and that the extent of attenuation may increase with more frequent scan assessments. This attenuation leads to inflation of the type II error. Therefore, scan measurement variability should be minimized as far as possible in order to reveal a treatment effect that is closest to the truth. In disease settings where the measurement variability is shown to be large, consideration may be given to inflating the sample size of the study to maintain statistical power. Copyright © 2012 John Wiley & Sons, Ltd.

11.
Evaluation-time (or assessment-time) bias can arise in oncology trials that study progression-free survival (PFS) when the randomized groups have different patterns of assessment timing. Modelling or computer simulation is sometimes used to explore the extent of such bias; valid results require building such simulations under realistic assumptions concerning the timing of assessments. This paper considers a trial that used a logrank test, where computer simulations were based on unrealistic assumptions that severely overestimated the extent of potential bias. The paper shows that seemingly small differences in assumptions can lead to dramatic differences in the apparent operating characteristics of logrank tests. Copyright © 2010 John Wiley & Sons, Ltd.
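The mechanism can be illustrated by simulating two arms with identical true PFS distributions but different assessment schedules, so that progression is only recorded at the next scheduled visit. The schedules and rates below are assumptions for illustration (using the lifelines package), not the settings of the trial discussed in the paper, and varying the visit gaps shows how sensitive the apparent type I error is to such assumptions:

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)

def observed_pfs(true_times, visit_gap):
    """Progression is only detected at scheduled visits: round up to next visit."""
    return np.ceil(true_times / visit_gap) * visit_gap

n, n_sim, rejections = 100, 500, 0
for _ in range(n_sim):
    # No true difference: both arms share the same exponential PFS distribution
    arm_a = observed_pfs(rng.exponential(10, n), visit_gap=2)  # visits every 2 mo
    arm_b = observed_pfs(rng.exponential(10, n), visit_gap=6)  # visits every 6 mo
    res = logrank_test(arm_a, arm_b,
                       event_observed_A=np.ones(n), event_observed_B=np.ones(n))
    rejections += res.p_value < 0.05
print(f"empirical type I error with unequal schedules: {rejections / n_sim:.3f}")
```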

12.
In drug development, treatments are most often selected at Phase 2 for further development when an initial trial of a new treatment produces a result that is considered positive. This selection due to a positive result means, however, that an estimator of the treatment effect which does not take account of the selection is likely to over-estimate the true treatment effect (i.e., will be biased). This bias can be large, and researchers may face a disappointingly lower estimated treatment effect in further trials. In this paper, we review a number of methods that have been proposed to correct for this bias and introduce three new methods. We present results from applying the various methods to two examples and consider extensions of the examples. We assess the methods with respect to bias in estimation of the treatment effect and compare the probabilities that a bias-corrected treatment effect estimate will exceed a decision threshold. Following previous work, we also compare average power for the situation where a Phase 3 trial is launched given that the bias-corrected observed Phase 2 treatment effect exceeds a launch threshold. Finally, we discuss our findings and the potential application of the bias correction methods.
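One simple correction, a conditional-moment adjustment for a normally distributed estimate that is reported only because it cleared a "go" threshold (not one of the paper's specific proposals), can be sketched as follows with hypothetical Phase 2 numbers:

```python
from scipy.stats import norm
from scipy.optimize import brentq

def cond_mean(theta, c, se):
    """E[estimate | estimate > c] when estimate ~ N(theta, se^2)."""
    z = (c - theta) / se
    return theta + se * norm.pdf(z) / norm.sf(z)

# Hypothetical Phase 2: observed effect 0.35 (se 0.12), reported only because
# it cleared the 'go' threshold of 0.20
obs, c, se = 0.35, 0.20, 0.12

# Solve for theta such that the conditional mean matches the observed estimate
corrected = brentq(lambda th: cond_mean(th, c, se) - obs, -1.0, 1.0)
print(f"naive estimate: {obs:.3f}, selection-corrected: {corrected:.3f}")
```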

13.
We propose a two-stage design for a single-arm clinical trial with an early stopping rule for futility. This design employs different endpoints to assess early stopping and efficacy. The early stopping rule is based on a criterion determined more quickly than that for efficacy. These separate criteria are also nested, in the sense that efficacy is a special case of, but usually not identical to, the early stopping endpoint. The design readily allows for planning in terms of statistical significance, power, expected sample size, and expected duration. The method is illustrated with a phase II design comparing rates of disease progression in elderly patients treated for lung cancer with rates found using a historical control. In this example, the early stopping rule is based on the number of patients who exhibit progression-free survival (PFS) at 2 months post-treatment follow-up. Efficacy is judged by the number of patients who have PFS at 6 months. We demonstrate that our design has expected sample size and power comparable with the Simon two-stage design but exhibits shorter expected duration under a range of useful parameter values.
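Operating characteristics of such a nested two-stage design can be approximated by simulation; the design constants and response rates below are hypothetical, chosen only to illustrate the early-futility mechanism:

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_design(p2, p6, n1=20, r1=9, n=45, r=13, n_sim=20000):
    """Two-stage design with nested endpoints: stop for futility if <= r1 of the
    first n1 patients are progression-free at 2 months; otherwise accrue to n
    and declare efficacy if > r patients are progression-free at 6 months."""
    assert p6 <= p2, "6-month PFS is nested within 2-month PFS"
    reject, ess = np.zeros(n_sim, bool), np.zeros(n_sim)
    for i in range(n_sim):
        pf2 = rng.random(n) < p2                   # 2-month PFS indicators
        pf6 = pf2 & (rng.random(n) < p6 / p2)      # 6-month PFS, nested in pf2
        if pf2[:n1].sum() <= r1:
            ess[i] = n1                            # early futility stop
        else:
            ess[i] = n
            reject[i] = pf6.sum() > r
    return reject.mean(), ess.mean()

for label, p2, p6 in [("null", 0.55, 0.20), ("alternative", 0.75, 0.40)]:
    prob, en = simulate_design(p2, p6)
    print(f"{label}: P(declare efficacy) = {prob:.3f}, E[N] = {en:.1f}")
```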

14.
Several researchers have proposed solutions to control the type I error rate in sequential designs. The use of Bayesian sequential designs is becoming more common; however, these designs are subject to inflation of the type I error rate. We propose a Bayesian sequential design for a binary outcome that uses an alpha-spending function to control the overall type I error rate. Algorithms are presented for calculating critical values and power for the proposed designs. We also propose a new stopping rule for futility. Sensitivity analysis is implemented to assess the effects of varying the parameters of the prior distribution and the maximum total sample size on the critical values. Alpha-spending functions are compared using power and actual sample size through simulations. Further simulations show that, when the total sample size is fixed, the proposed design has greater power than the traditional Bayesian sequential design, which sets equal stopping bounds at all interim analyses. We also find that the proposed design with the new futility stopping rule achieves greater power and can stop earlier with a smaller actual sample size, compared with the traditional futility stopping rule when all other conditions are held constant. Finally, we apply the proposed method to a real data set and compare the results with traditional designs.
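The inflation problem can be sketched with a Beta-Binomial design monitored by posterior probabilities: a constant efficacy cut-off applied at every look inflates the overall type I error, so the cut-off must be tuned (the paper instead allocates alpha across looks via a spending function). The thresholds, looks, and rates below are illustrative:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(5)
p0, a0, b0 = 0.20, 1.0, 1.0          # null response rate; Beta(1,1) prior
looks = [15, 30, 45]                 # interim and final sample sizes

def trial_rejects(p, cut):
    """Stop for efficacy when P(p > p0 | data) exceeds `cut` at any look."""
    x = rng.random(looks[-1]) < p
    for n in looks:
        s = x[:n].sum()
        if beta.sf(p0, a0 + s, b0 + n - s) > cut:   # posterior tail probability
            return True
    return False

for cut in (0.95, 0.975, 0.99):
    t1e  = np.mean([trial_rejects(p0, cut) for _ in range(5000)])
    pwr  = np.mean([trial_rejects(0.40, cut) for _ in range(5000)])
    print(f"cut={cut}: type I error = {t1e:.3f}, power at p=0.40 = {pwr:.3f}")
```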

15.
Understanding the dose-response relationship is a key objective in Phase II clinical development. Yet designing a dose-ranging trial is a challenging task, as it requires identifying the therapeutic window and the shape of the dose-response curve for a new drug on the basis of a limited number of doses. Adaptive designs have been proposed as a solution to improve both the quality and the efficiency of Phase II trials, as they make it possible to select the doses to be tested as the trial proceeds. In this article, we present a 'shape-based' two-stage adaptive trial design in which the doses to be tested in the second stage are determined from the correlation observed between the efficacy of the doses tested in the first stage and a set of pre-specified candidate dose-response profiles. At the end of the trial, the data are analyzed using the generalized MCP-Mod approach in order to account for model uncertainty. A simulation study shows that this approach gives more precise estimates of a desired target dose (e.g. ED70) than a single-stage (fixed-dose) design and performs as well as a two-stage D-optimal design. We present the results of an adaptive model-based dose-ranging trial in multiple sclerosis that motivated this research and was conducted using the presented methodology. Copyright © 2015 John Wiley & Sons, Ltd.
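The first-stage "shape-matching" step can be sketched as a correlation screen of observed dose means against standardized candidate profiles; the doses, means, and profile parameters below are hypothetical:

```python
import numpy as np

doses = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
obs_mean = np.array([0.10, 0.40, 0.55, 0.62, 0.60])  # hypothetical stage-1 means

# Pre-specified candidate dose-response shapes (standardized profiles)
profiles = {
    "linear":  doses / doses.max(),
    "emax":    doses / (doses + 0.5),              # Emax with ED50 = 0.5
    "sigmoid": doses**3 / (doses**3 + 1.0**3),     # sigmoid Emax, Hill = 3
}

corrs = {name: np.corrcoef(obs_mean, prof)[0, 1] for name, prof in profiles.items()}
best = max(corrs, key=corrs.get)
print(corrs, "-> best-matching shape:", best)
```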

16.
A challenge arising in cancer immunotherapy trial design is the presence of a delayed treatment effect, under which the proportional hazards assumption no longer holds. As a result, a traditional survival trial design based on the standard log-rank test, which ignores the delayed treatment effect, will lead to a substantial loss of statistical power. Recently, a piecewise weighted log-rank test was proposed to incorporate the delayed treatment effect into the trial design. However, because the sample size formula was derived under a sequence of local alternative hypotheses, it results in an underestimated sample size when the hazard ratio is relatively small for a balanced trial design, and in inaccurate sample size estimation for an unbalanced design. In this article, we derive a new sample size formula under a fixed alternative hypothesis for the delayed treatment effect model. Simulation results show that the new formula provides accurate sample size estimation for both balanced and unbalanced designs.
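The power loss that motivates the work can be reproduced with a piecewise-exponential treatment arm whose hazard drops only after a delay; the rates, delay values, and follow-up below are assumptions for illustration (using the lifelines package), and the paper's weighted test and sample size formula are not reproduced:

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)

def piecewise_exp(n, lam, hr, delay):
    """Hazard lam before `delay`, lam*hr afterwards (delayed treatment effect);
    valid by the memorylessness of the exponential distribution."""
    t = rng.exponential(1 / lam, n)
    late = t > delay
    t[late] = delay + rng.exponential(1 / (lam * hr), late.sum())
    return t

n, lam, hr, fu = 150, 0.10, 0.6, 24.0   # per-arm size, control rate, HR, follow-up
for delay in (0.0, 3.0, 6.0):
    hits = 0
    for _ in range(500):
        ctl = rng.exponential(1 / lam, n)
        trt = piecewise_exp(n, lam, hr, delay)
        res = logrank_test(np.minimum(ctl, fu), np.minimum(trt, fu),
                           event_observed_A=ctl < fu,    # administrative censoring
                           event_observed_B=trt < fu)
        hits += res.p_value < 0.05
    print(f"delay = {delay} months: standard log-rank power ~ {hits / 500:.2f}")
```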

17.
Recently, molecularly targeted agents and immunotherapy have been advanced for the treatment of relapsed or refractory cancer patients, where disease progression-free survival or event-free survival is often a primary endpoint for the trial design. However, methods to evaluate two-stage single-arm phase II trials with a time-to-event endpoint are currently derived under an exponential distribution, which limits their application to real trial designs. In this paper, we develop an optimal two-stage design that applies to four commonly used parametric survival distributions. The proposed method has advantages over existing methods in that the choice of the underlying survival model is more flexible and the power of the study is more adequately addressed. The proposed two-stage design can therefore be routinely used for single-arm phase II trial designs with a time-to-event endpoint, as a complement to the commonly used Simon's two-stage design for binary outcomes.

18.
In cost-effectiveness analyses of drugs or health technologies, estimates of life years saved or quality-adjusted life years saved are required. Randomised controlled trials can provide an estimate of the average treatment effect; for survival data, the treatment effect is the difference in mean survival. However, typically not all patients will have reached the endpoint of interest at the close-out of a trial, making it difficult to estimate the difference in mean survival. In this situation, it is common to report the more readily estimable difference in median survival. Alternative approaches to estimating the mean have also been proposed. We conducted a simulation study to investigate the bias and precision of the three most commonly used sample measures of absolute survival gain (difference in median, restricted mean, and extended mean survival) when used as estimates of the true mean difference, under different censoring proportions and a range of survival patterns, represented by Weibull survival distributions with constant, increasing, and decreasing hazards. Our study showed that the three commonly used methods tended to underestimate the true treatment effect; consequently, the incremental cost-effectiveness ratio (ICER) would be overestimated. Of the three methods, the least biased is the extended mean survival, which perhaps should be used as the point estimate of the treatment effect to be input into the ICER, while the other two approaches could be used in sensitivity analyses. More work on the trade-offs between simple extrapolation using the exponential distribution and more complicated extrapolation using other methods would be valuable. Copyright © 2015 John Wiley & Sons, Ltd.
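The restricted mean is the area under the Kaplan-Meier curve up to a truncation time tau; a self-contained sketch with simulated Weibull data (tau and the parameters are arbitrary illustrative choices):

```python
import numpy as np

def km_rmst(time, event, tau):
    """Kaplan-Meier estimate integrated up to tau (restricted mean survival time)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    surv, t_prev, rmst = 1.0, 0.0, 0.0
    for i, (t, d) in enumerate(zip(time, event)):
        if t > tau:
            break
        rmst += surv * (t - t_prev)       # area under the survival step function
        if d:
            surv *= 1 - 1 / (n - i)       # KM decrement at an event time
        t_prev = t
    rmst += surv * (tau - t_prev)         # last flat piece up to tau
    return rmst

rng = np.random.default_rng(9)
t_true = rng.weibull(1.5, 500) * 12       # hypothetical Weibull survival times
cens = rng.uniform(0, 30, 500)            # random censoring
time, event = np.minimum(t_true, cens), t_true <= cens
print(f"RMST up to 24 months: {km_rmst(time, event, 24.0):.2f} months")
```

The difference in RMST between two arms then serves as the absolute survival-gain estimate discussed above.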

19.
Progression-free survival (PFS) is a frequently used endpoint in oncological clinical studies. For PFS, the potential events are progression and death. Progressions are usually observed with delay, as they cannot be diagnosed before the next study visit. For this reason, potential bias in treatment effect estimates for progression-free survival is a concern. In randomized trials and for relative treatment effect measures such as hazard ratios, bias-correcting methods are not necessarily required, or have been proposed before. However, less is known about cross-trial comparisons of absolute outcome measures such as median survival times. This paper proposes a new method for correcting the assessment-time bias of progression-free survival estimates to allow a fair cross-trial comparison of median PFS. Using median PFS as an example, the presented method approximates the unknown posterior distribution by a simulation-based Bayesian approach. It is shown that the proposed method leads to a substantial reduction in bias compared with estimates derived from maximum likelihood or Kaplan-Meier estimates: bias could be reduced by more than 90% over a broad range of considered situations differing in assessment times and underlying distributions. With coverage probabilities of at least 94% based on the credibility interval of the posterior distribution, the resulting estimates attain common confidence levels. In summary, the proposed approach is shown to be useful for a cross-trial comparison of median PFS.
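The assessment-time bias being corrected here is easy to reproduce: if progression can only be detected at the next scheduled visit, the naive median is shifted upward. A minimal sketch with hypothetical exponential PFS and a 2-month visit schedule (the paper's Bayesian correction itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(6)
median_true, visit_gap = 5.0, 2.0            # months; assessments every 2 months
lam = np.log(2) / median_true

true_t = rng.exponential(1 / lam, 100_000)
observed = np.ceil(true_t / visit_gap) * visit_gap   # detected at the next visit

bias = np.median(observed) - np.median(true_t)
print(f"true median PFS:     {np.median(true_t):.2f} months")
print(f"naive median (obs.): {np.median(observed):.2f} months "
      f"-> upward assessment-time bias of {bias:.2f} months")
```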

20.
A three-arm trial including an experimental treatment, an active reference treatment, and a placebo is often used to assess non-inferiority (NI) with assay sensitivity of an experimental treatment. Various hypothesis-test-based approaches, via a fraction or a pre-specified margin, have been proposed to assess NI with assay sensitivity in a three-arm trial, but little work has been done on confidence intervals in this setting. This paper develops a hybrid approach to construct a simultaneous confidence interval for assessing NI and assay sensitivity in a three-arm trial. For comparison, we present normal-approximation-based and bootstrap-resampling-based simultaneous confidence intervals. Simulation studies show that the hybrid approach with the Wilson score statistic performs better than the other approaches in terms of empirical coverage probability and mesial non-coverage probability. An example is used to illustrate the proposed approaches.
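The Wilson score interval that the hybrid approach builds on can be sketched per arm as follows; the combination into a simultaneous interval for the NI contrast follows the hybrid construction described in the paper and is not reproduced here, and the counts are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def wilson_ci(x, n, conf=0.95):
    """Wilson score confidence interval for a binomial proportion."""
    z = norm.ppf(1 - (1 - conf) / 2)
    p_hat = x / n
    centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
            / (1 + z**2 / n))
    return centre - half, centre + half

# Hypothetical response counts: experimental, reference, placebo
for arm, (x, n) in {"experimental": (42, 60), "reference": (45, 60),
                    "placebo": (18, 60)}.items():
    lo, hi = wilson_ci(x, n)
    print(f"{arm}: {x}/{n} -> 95% Wilson CI ({lo:.3f}, {hi:.3f})")
```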
