Similar Literature
20 similar articles retrieved.
1.
In recent years, immunological science has evolved, and cancer vaccines are now approved and available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected and is actually observed in drug approval studies. Accordingly, we propose evaluating survival endpoints by weighted log-rank tests with the Fleming–Harrington class of weights. We consider group sequential monitoring, which allows early stopping for efficacy, and determine a semiparametric information fraction for the Fleming–Harrington family of weights, which is necessary for the error spending function. Moreover, we present a flexible survival model for cancer vaccine studies that accounts not only for the delayed treatment effect but also for long-term survivors. In a Monte Carlo simulation study, we illustrate that when the primary analysis is a weighted log-rank test emphasizing late differences, the proposed information fraction can be a useful alternative to the surrogate information fraction, which is proportional to the number of events. Copyright © 2016 John Wiley & Sons, Ltd.
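For reference, the Fleming–Harrington class of weights that recurs throughout these abstracts is conventionally defined in terms of the pooled Kaplan–Meier estimate $\hat S(t)$:

$$ W_{\rho,\gamma}(t) = \{\hat{S}(t^-)\}^{\rho}\{1-\hat{S}(t^-)\}^{\gamma}, \qquad \rho,\gamma \ge 0. $$

Setting $\rho=\gamma=0$ recovers the standard log-rank test, while $\rho=0$, $\gamma>0$ down-weights early events and emphasizes late differences, the configuration relevant to delayed treatment effects.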

2.
In recent years, immunological science has evolved, and cancer vaccines are available for treating existing cancers. Because cancer vaccines require time to elicit an immune response, a delayed treatment effect is expected. Accordingly, the use of weighted log-rank tests with the Fleming–Harrington class of weights is proposed for the evaluation of survival endpoints. We present a method for calculating the sample size under the assumption of a piecewise exponential distribution for the cancer vaccine group and an exponential distribution for the placebo group as the survival model. The impact of the timing of the delayed effect on both the choice of Fleming–Harrington weights and the increase in the required number of events is discussed. Copyright © 2014 John Wiley & Sons, Ltd.
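As a minimal illustration of the survival model described above (a sketch, not the paper's own code), the following Python snippet draws event times whose hazard is lam0 before a delay t0 and lam1 afterwards, by inverting the piecewise exponential survival function; all parameter values are illustrative:

    import numpy as np

    def sample_piecewise_exponential(n, lam0, lam1, t0, rng=None):
        # Hazard is lam0 on [0, t0) and lam1 on [t0, inf). Invert
        # S(t) = exp(-lam0*t) for t < t0 and
        # S(t) = exp(-lam0*t0 - lam1*(t - t0)) for t >= t0.
        rng = np.random.default_rng() if rng is None else rng
        u = rng.uniform(size=n)
        t = -np.log(u) / lam0                  # candidate under the early hazard
        late = t >= t0                         # draws that fall past the change point
        t[late] = t0 - (np.log(u[late]) + lam0 * t0) / lam1
        return t

    # Example: exponential placebo arm vs. a vaccine arm whose hazard halves after t0 = 6.
    rng = np.random.default_rng(1)
    placebo = sample_piecewise_exponential(500, 0.10, 0.10, 6.0, rng)
    vaccine = sample_piecewise_exponential(500, 0.10, 0.05, 6.0, rng)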

3.
In the traditional design of a single-arm phase II cancer clinical trial, the one-sample log-rank test has been frequently used. A common practice in sample size calculation is to assume that the event time under the new treatment follows an exponential distribution. Such a design may not be suitable for immunotherapy cancer trials in which both long-term survivors (or even patients cured of the disease) and a delayed treatment effect are present, because the exponential distribution is not appropriate for such data and could consequently lead to a severely underpowered trial. In this research, we proposed a piecewise proportional hazards cure rate model with a random delayed treatment effect for designing single-arm phase II immunotherapy cancer trials. To improve test power, we proposed a new weighted one-sample log-rank test and provided a sample size calculation formula for designing trials. Our simulation study showed that the proposed log-rank test performs well and is robust to misspecification of the weight, and that the sample size calculation formula also performs well.
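The cure-rate structure invoked here is commonly formalized as a mixture model; in standard notation (ours, not necessarily the authors'),

$$ S(t) = \pi + (1-\pi)\,S_u(t), $$

where $\pi$ is the cure fraction (the long-term survivors) and $S_u(t)$ is the survival function of the uncured patients; the delayed treatment effect then acts on the hazard underlying $S_u$.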

4.
A challenge arising in cancer immunotherapy trial design is the presence of a delayed treatment effect, under which the proportional hazards assumption no longer holds. As a result, a traditional survival trial design based on the standard log-rank test, which ignores the delayed treatment effect, leads to a substantial loss of statistical power. Recently, a piecewise weighted log-rank test was proposed to incorporate the delayed treatment effect into the trial design. However, because its sample size formula was derived under a sequence of local alternative hypotheses, it underestimates the sample size when the hazard ratio is relatively small in a balanced design and gives inaccurate sample size estimates for unbalanced designs. In this article, we derived a new sample size formula under a fixed alternative hypothesis for the delayed treatment effect model. Simulation results show that the new formula provides accurate sample size estimation for both balanced and unbalanced designs.
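For orientation, the classical Schoenfeld formula for the required number of events under proportional hazards, which delayed-effect sample size formulas of this kind generalize, is

$$ d = \frac{(z_{1-\alpha/2} + z_{1-\beta})^{2}}{p(1-p)\,(\log\Delta)^{2}}, $$

where $\Delta$ is the hazard ratio and $p$ the allocation proportion; with 1:1 allocation this reduces to $d = 4(z_{1-\alpha/2}+z_{1-\beta})^{2}/(\log\Delta)^{2}$. Under a delayed effect, the constant $\log\Delta$ must be replaced by an average effect over the event times, and whether that average is derived under local or fixed alternatives is the distinction the abstract addresses.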

5.
The class $G^{\rho,\lambda}$ of weighted log-rank tests proposed by Fleming & Harrington [Fleming & Harrington (1991) Counting Processes and Survival Analysis, Wiley, New York] has been widely used in survival analysis and is now the established method for nonparametric comparison of k survival functions based on right-censored survival data. This paper extends the $G^{\rho,\lambda}$ class to interval-censored data. We first introduce a new general class of rank-based tests and then show the analogy to the proposal of Fleming & Harrington. The asymptotic behaviour of the proposed tests is derived using an observed Fisher information approach and a permutation approach. To make this family of tests interpretable and useful for practitioners, we explain how to interpret different choices of weights and apply the tests to data from a cohort of intravenous drug users at risk for HIV infection. The Canadian Journal of Statistics 40: 501–516; 2012 © 2012 Statistical Society of Canada

6.
The indirect mechanism of action of immunotherapy causes a delayed treatment effect, producing delayed separation of the survival curves between treatment groups and violating the proportional hazards assumption. Using the log-rank test in immunotherapy trial design can therefore result in a severe loss of efficiency. Few statistical methods are available for immunotherapy trial designs that incorporate a delayed treatment effect; recently, Ye and Yu proposed the use of a maximin efficiency robust test (MERT). The MERT is a weighted log-rank test that puts less weight on early events and full weight after the delay period. However, the weight function of the MERT involves an unknown function that has to be estimated from historical data. Here, for simplicity, we propose the use of an approximated maximin test, the V0 test, which is the sum of the log-rank test for the full data set and the log-rank test for the data beyond the lag time point. The V0 test fully uses the trial data and is more efficient than the log-rank test when a lag exists, with relatively little efficiency loss when no lag exists. The sample size formula for the V0 test is derived. Simulations are conducted to compare the performance of the V0 test with existing tests. A real trial is used to illustrate cancer immunotherapy trial design with a delayed treatment effect.
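To make the weighting concrete, here is a self-contained Python sketch of a two-sample weighted log-rank statistic using the standard hypergeometric variance (not the authors' code). With weight 1 before the lag and 2 after, the numerator matches the V0 idea of adding the full-data log-rank contributions to those beyond the lag; the lag value and toy data are assumptions for illustration:

    import numpy as np

    def weighted_logrank(time, event, group, weight_fn):
        # Standardized two-sample weighted log-rank statistic Z = U / sqrt(V).
        # time: event/censoring times; event: 1 if the event was observed; group: 0/1 arm.
        time = np.asarray(time, float)
        event = np.asarray(event, bool)
        group = np.asarray(group, int)
        u = v = 0.0
        for t in np.unique(time[event]):
            at_risk = time >= t
            n = at_risk.sum()                          # total at risk just before t
            n1 = (at_risk & (group == 1)).sum()        # at risk in arm 1
            d = (event & (time == t)).sum()            # events at t, both arms
            d1 = (event & (time == t) & (group == 1)).sum()
            w = weight_fn(t)
            u += w * (d1 - d * n1 / n)                 # weighted observed minus expected
            if n > 1:
                v += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        return u / np.sqrt(v)

    # Toy data; arm 1 separates only after the assumed lag of 6 months.
    time  = np.array([2., 4., 5., 7., 8., 9., 10., 12., 14., 15.])
    event = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])
    group = np.array([0, 1, 0, 0, 1, 1, 0, 1, 1, 1])
    lag = 6.0
    z_v0_like = weighted_logrank(time, event, group, lambda t: 1.0 + (t > lag))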

7.
A cancer clinical trial of an immunotherapy often has two special features: patients may be cured of the cancer, and the immunotherapy may begin to take clinical effect only after a certain delay. Existing testing methods may be inadequate for immunotherapy clinical trials because they do not appropriately take both features into consideration at the same time and hence have low power to detect the true treatment effect. In this paper, we proposed a piecewise proportional hazards cure rate model with a random delay time to fit the data, and a new weighted log-rank test to detect the treatment effect of an immunotherapy over a chemotherapy control. We showed that the proposed weight is nearly optimal under mild conditions. Our simulation study showed a substantial gain in power of the proposed test over existing tests and robustness of the test under a misspecified weight. We also introduced a sample size calculation formula for designing immunotherapy clinical trials using the proposed weighted log-rank test.

8.
The stratified Cox model is commonly used for stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50-200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails the development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate using simulations that our proposed two-step RGLR analysis delivers notably better results, with smaller estimation bias and mean squared error and greater power, than the stratified Cox model analysis when there is a treatment-by-stratum interaction, and similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples, whereas the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
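The two-step structure (estimate per stratum, then combine) can be sketched in a few lines; the RGLR estimator itself is not reproduced here, the minimum risk weighting scheme mentioned in the abstract is not shown, and the inputs below stand in for stratum-specific estimates however obtained:

    import numpy as np

    def combine_strata(log_hr, var, n, scheme="sample_size"):
        # Combine stratum-specific log hazard ratio estimates into one summary.
        # Inverse-variance weights are optimal only under a common hazard ratio;
        # sample-size weights are more robust to treatment-by-stratum interaction.
        log_hr, var, n = map(np.asarray, (log_hr, var, n))
        w = n / n.sum() if scheme == "sample_size" else (1 / var) / (1 / var).sum()
        return np.sum(w * log_hr), np.sqrt(np.sum(w**2 * var))

    est, se = combine_strata([-0.4, -0.1, -0.6], [0.05, 0.08, 0.06], [120, 80, 100])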

9.
For clinical trials with time-to-event primary endpoints, the clinical cutoff is often event-driven, and the log-rank test is the most commonly used statistical method for evaluating treatment effect. However, this method relies on the proportional hazards assumption, under which it attains maximal power. In certain disease areas or populations, some patients may be cured and never experience the event despite long follow-up. Event accumulation may taper off after a certain period of follow-up, and the treatment effect may be reflected as a combination of an improvement in cure rate and a delay of events for uncurable patients. Study power depends on both the cure rate improvement and the hazard reduction. In this paper, we illustrate these practical issues using simulation studies and explore sample size recommendations, alternative approaches to clinical cutoffs, and efficient testing methods with the highest study power possible.

10.
In the analysis of survival times, the log-rank test and the Cox model have been established as key tools that do not require specific distributional assumptions. Under the assumption of proportional hazards, they are efficient and their results can be interpreted unambiguously. However, delayed treatment effects, disease progression, treatment switching, or the presence of subgroups with differential treatment effects may challenge the proportional hazards assumption. In practice, weighted log-rank tests emphasizing early, intermediate, or late event times via an appropriate weighting function may be used to accommodate an expected pattern of non-proportionality. We model these sources of non-proportional hazards via a mixture of survival functions with piecewise constant hazards. The model is then applied to study the power of unweighted and weighted log-rank tests, as well as maximum tests allowing different time-dependent weights. Simulation results suggest a robust performance of maximum tests across different scenarios, with little loss in power compared with the most powerful of the considered weighting schemes and a large power gain compared with unfavorable weights. The actual sources of non-proportional hazards are not obvious from the resulting populationwise survival functions, highlighting the importance of detailed simulations in the planning phase of a trial when non-proportional hazards are assumed. We provide the required tools in a software package that allows users to model data-generating processes under complex non-proportional hazard scenarios, to simulate data from these models, and to perform the weighted log-rank tests.
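Building on the weighted_logrank helper sketched under item 6 (assumed in scope here, along with the toy time/event/group arrays defined there), a maximum test can be approximated as follows; the candidate weights and the 12-month change point are illustrative assumptions, and a Bonferroni bound stands in for the exact critical value, which must account for the correlation between the component statistics:

    from scipy.stats import norm

    candidate_weights = {
        "log-rank": lambda t: 1.0,
        "early":    lambda t: 1.0 if t <= 12 else 0.25,
        "late":     lambda t: 0.25 if t <= 12 else 1.0,
    }
    z = {name: weighted_logrank(time, event, group, w)
         for name, w in candidate_weights.items()}
    z_max = max(abs(v) for v in z.values())
    # Conservative p-value; the exact one uses the joint normal distribution
    # of the correlated component statistics.
    p_bonferroni = min(1.0, len(candidate_weights) * 2 * norm.sf(z_max))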

11.
In non-randomized biomedical studies using the proportional hazards model, the data often constitute an unrepresentative sample of the underlying target population, which results in biased regression coefficients. The bias can be avoided by weighting included subjects by the inverse of their respective selection probabilities, as proposed by Horvitz & Thompson (1952) and extended to the proportional hazards setting for use in surveys by Binder (1992) and Lin (2000). In practice, the weights are often estimated and must be treated as such for the resulting inference to be accurate. The authors propose a two-stage weighted proportional hazards model in which, at the first stage, weights are estimated through a logistic regression model fitted to a representative sample from the target population. At the second stage, a weighted Cox model is fitted to the biased sample. The authors propose estimators for the regression parameter and cumulative baseline hazard. They derive the asymptotic properties of the parameter estimators, accounting for the difference in the variance introduced by the randomness of the weights. They evaluate the accuracy of the asymptotic approximations in finite samples through simulation. They illustrate their approach in an analysis of renal transplant patients using data obtained from the Scientific Registry of Transplant Recipients.
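A rough sketch of the second stage in Python, using the lifelines package's support for case weights (the first-stage logistic model is not shown, and robust=True requests a sandwich variance, which does not reproduce the authors' full correction for estimated weights):

    import pandas as pd
    from lifelines import CoxPHFitter

    # Toy biased sample; 'ipw' holds the inverse of the estimated selection probability.
    df = pd.DataFrame({
        "time":      [5.0, 8.2, 3.1, 9.4, 2.7, 7.5],
        "event":     [1, 0, 1, 0, 1, 1],
        "treatment": [1, 1, 0, 0, 1, 0],
        "ipw":       [1.8, 1.2, 2.5, 1.1, 2.0, 1.6],
    })
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event", weights_col="ipw", robust=True)
    cph.print_summary()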

12.
Connections are established between the theories of weighted logrank tests and of frailty models. These connections arise because omission of a balanced covariate from a proportional hazards model generally leads to a model with non-proportional hazards, for which the simple logrank test is no longer optimal. The optimal weighting function, and the asymptotic relative efficiencies of the simple logrank test and of the optimally weighted logrank test relative to the adjusted test that would be used if the covariate values were known, are expressible in terms of the Laplace transform of the hazard ratio for the distribution of the omitted covariate. For example, if this hazard ratio has a gamma distribution, the optimal test is a member of the $G^{\rho}$ class introduced by Harrington and Fleming (1982). We also consider positive stable, inverse Gaussian, displaced Poisson, and two-point frailty distributions. Results are obtained for parametric and nonparametric tests and are extended to include random censoring. We show that the loss of efficiency from omitting a covariate is generally more important than the additional loss due to misspecification of the resulting non-proportional hazards model as a proportional hazards model. However, two-point frailty distributions can provide exceptions to this rule. Censoring generally increases the efficiency of the simple logrank test relative to the adjusted logrank test.

13.
With the emergence of novel therapies exhibiting distinct mechanisms of action compared with traditional treatments, departure from the proportional hazards (PH) assumption in clinical trials with a time-to-event endpoint is increasingly common. In these situations, the hazard ratio may not be a valid statistical measure of treatment effect, and the log-rank test may no longer be the most powerful statistical test. The restricted mean survival time (RMST) is an alternative robust and clinically interpretable summary measure that does not rely on the PH assumption. We conduct extensive simulations to evaluate the performance and operating characteristics of RMST-based inference against hazard ratio-based inference under various scenarios and design parameter setups. The log-rank test is generally a powerful test when there is evident separation favoring one treatment arm at most time points across the Kaplan-Meier survival curves, and the performance of the RMST test is similar. Under non-PH scenarios where late separation of survival curves is observed, the RMST-based test performs better than the log-rank test when the truncation time is reasonably close to the tail of the observed curves. Furthermore, when a flat survival tail (or low event rate) is expected in the experimental arm, selecting the minimum of the maximum observed event times as the truncation time point for the RMST is not recommended. In addition, we recommend including an analysis based on the RMST curve over the truncation time in clinical settings where substantial departure from the PH assumption is suspected.

14.
The Cox proportional hazards regression model has been widely used to estimate the effect of a prognostic factor on a time-to-event outcome. In a survey of survival analyses in cancer journals, it was found that only 5% of studies using the Cox proportional hazards model attempted to verify the underlying assumption. Usually an estimate of the treatment effect from fitting a Cox model is reported without validation of the proportionality assumption, and it is not clear how such an estimate should be interpreted if the assumption is violated. In this article, we show that the estimate of treatment effect from a Cox regression model can be interpreted as a weighted average of the log-scaled hazard ratio over the duration of the study. A hypothetical example is used to explain the weights.

15.
Progression-free survival is recognized as an important endpoint in oncology clinical trials. In trials aimed at new drug development, the target population often comprises patients who are refractory to standard therapy and whose tumors progress rapidly. This situation increases the bias of the hazard ratio calculated for progression-free survival, resulting in decreased power. Therefore, measures are needed at the design stage to prevent this loss of power when estimating the sample size. Here, I propose a novel calculation procedure for deriving the hazard ratio for progression-free survival under the Cox proportional hazards model, which can be applied in sample size calculation. The hazard ratios derived by the proposed procedure were almost identical to those obtained by simulation. The hazard ratio calculated by the proposed procedure is applicable to sample size calculation and achieves the nominal power. Methods that compensate for the lack of power due to biases in the hazard ratio are also discussed from a practical point of view.

16.
In clinical trials, survival endpoints are usually compared using the log-rank test. Sequential methods for the log-rank test and the Cox proportional hazards model are widely reported in the statistical literature. When the proportional hazards assumption is violated, the hazard ratio is ill-defined and the power of the log-rank test depends on the distribution of the censoring times. The average hazard ratio was proposed as an alternative effect measure; it has a meaningful interpretation under non-proportional hazards and equals the hazard ratio if the hazards are indeed proportional. In the present work, we prove that sequential test statistics based on the average hazard ratio are asymptotically multivariate normal with the independent increments property. This allows group-sequential boundaries to be calculated using standard methods and existing software. The finite sample characteristics of the new method are examined in a simulation study under both proportional and non-proportional hazards settings.

17.
Suppose p + 1 experimental groups correspond to increasing dose levels of a treatment and all groups are subject to right censoring. In such instances, permutation tests for trend can be performed based on statistics derived from the weighted log-rank class. This article uses saddlepoint methods to determine the mid-P-values of such permutation tests for any test statistic in the weighted log-rank class. Permutation simulations are replaced by analytical saddlepoint computations, which provide extremely accurate mid-P-values that are exact for most practical purposes and almost always more accurate than normal approximations. The speed of mid-P-value computation allows such tests to be inverted to determine confidence intervals for the percentage increase in mean (or median) survival time per unit increase in dosage. The Canadian Journal of Statistics 37: 5-16; 2009 © 2009 Statistical Society of Canada

18.
Many clinical research studies evaluate a time-to-event outcome, illustrate survival functions, and conventionally report estimated hazard ratios to express the magnitude of the treatment effect when comparing groups. However, the hazard ratio may not be straightforward to interpret clinically or statistically when the proportional hazards assumption is invalid. In some recent papers published in clinical journals, the restricted mean survival time (RMST), or τ-year mean survival time, is discussed as an alternative summary measure for time-to-event outcomes. The RMST is defined as the expected value of the time to event truncated at a specific time point, which corresponds to the area under the survival curve up to that point. This article summarizes the information needed to conduct statistical analysis using the RMST, including its definition and statistical properties, adjusted analysis methods, sample size calculation, the information fraction for the RMST difference, and its clinical and statistical meaning and interpretation. Additionally, we discuss how to set the specific time point defining the RMST from two main points of view. We also provide SAS code to determine the sample size required to detect an expected RMST difference with appropriate power and to reconstruct individual survival data in order to estimate an RMST reference value from a reported survival curve.
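Because the RMST is the area under the Kaplan-Meier curve up to the truncation time tau, it can be computed directly (illustrative Python, not the SAS code the abstract refers to):

    import numpy as np

    def rmst(time, event, tau):
        # Area under the Kaplan-Meier step function on [0, tau].
        time = np.asarray(time, float)
        event = np.asarray(event, bool)
        s, prev, area = 1.0, 0.0, 0.0
        for t in np.unique(time[event & (time <= tau)]):
            area += s * (t - prev)              # curve is flat between event times
            n = (time >= t).sum()               # at risk just before t
            d = (event & (time == t)).sum()     # events at t
            s *= 1.0 - d / n
            prev = t
        return area + s * (tau - prev)          # final flat segment up to tau

    # Example: the difference in 24-month RMST between two arms would be
    # rmst(t_trt, e_trt, 24.0) - rmst(t_ctl, e_ctl, 24.0).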

19.
We derive the large sample distribution of the weighted log-rank statistic under a general class of local alternatives in which both the cure rates and the conditional distribution of time to failure among those who fail are assumed to vary between the two treatment arms. The analytic result presented here is important to data analysts designing clinical trials for diseases such as non-Hodgkin's lymphoma, leukemia, and melanoma, where a significant proportion of patients are cured. We present a numerical illustration comparing powers obtained from the analytic result with those obtained from simulations.

20.
In this paper, an exact variance of the one-sample log-rank test statistic is derived under the alternative hypothesis, and a sample size formula is proposed based on the derived exact variance. Simulation results showed that the proposed sample size formula provides adequate power for designing a study comparing the survival of a single sample with that of a standard population. Copyright © 2014 John Wiley & Sons, Ltd.
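For context, the basic asymptotic one-sample log-rank statistic that this paper refines has a simple form; the sketch below uses the conventional null variance E rather than the exact variance under the alternative derived in the paper, and cum_hazard_0 is an assumed callable for the standard population's cumulative hazard:

    import numpy as np

    def one_sample_logrank(time, event, cum_hazard_0):
        # O = observed events; E = expected events under the reference population,
        # accumulating its cumulative hazard over each subject's follow-up time.
        # Under H0, O - E is approximately N(0, E).
        O = np.sum(event)
        E = np.sum([cum_hazard_0(t) for t in time])
        return (O - E) / np.sqrt(E)

    # Example against an exponential reference with rate 0.1 per month:
    # z = one_sample_logrank(time, event, lambda t: 0.1 * t)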
