Similar Articles
20 similar articles found (search time: 15 ms)
1.
With the emergence of novel therapies exhibiting mechanisms of action distinct from those of traditional treatments, departure from the proportional hazard (PH) assumption in clinical trials with a time-to-event end point is increasingly common. In these situations, the hazard ratio may not be a valid statistical measure of treatment effect, and the log-rank test may no longer be the most powerful statistical test. The restricted mean survival time (RMST) is an alternative robust and clinically interpretable summary measure that does not rely on the PH assumption. We conduct extensive simulations to evaluate the performance and operating characteristics of RMST-based inference against hazard ratio-based inference, under various scenarios and design parameter setups. The log-rank test is generally a powerful test when there is evident separation favoring one treatment arm at most time points across the Kaplan-Meier survival curves, and the performance of the RMST test is similar. Under non-PH scenarios where late separation of survival curves is observed, the RMST-based test performs better than the log-rank test when the truncation time is reasonably close to the tail of the observed curves. Furthermore, when a flat survival tail (or low event rate) is expected in the experimental arm, selecting the minimum of the maximum observed event times as the truncation time point for the RMST is not recommended. In addition, we recommend including an analysis based on the RMST curve over the truncation time in clinical settings where substantial departure from the PH assumption is suspected.
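For illustration, the contrast between the two inference approaches can be sketched with the lifelines Python package; the simulated delayed-effect data and the truncation time `tau` below are assumptions for the example, not values from the study.

```python
# Sketch: RMST-difference vs. log-rank inference with lifelines.
# The delayed-effect data and truncation time `tau` are illustrative.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from lifelines.utils import restricted_mean_survival_time

rng = np.random.default_rng(0)
n = 200
t_ctrl = rng.exponential(scale=12.0, size=n)
# Treatment: half the patients respond late (delayed separation).
t_trt = np.where(rng.uniform(size=n) < 0.5,
                 rng.exponential(scale=12.0, size=n),
                 6.0 + rng.exponential(scale=24.0, size=n))
c = rng.uniform(18.0, 36.0, size=n)                # administrative censoring
obs_c, ev_c = np.minimum(t_ctrl, c), (t_ctrl <= c).astype(int)
obs_t, ev_t = np.minimum(t_trt, c), (t_trt <= c).astype(int)

# Log-rank test: most powerful under proportional hazards.
lr = logrank_test(obs_t, obs_c, event_observed_A=ev_t, event_observed_B=ev_c)

# RMST difference at tau; under late separation, tau should sit
# reasonably close to the tail of the observed curves.
tau = 24.0
km_t = KaplanMeierFitter().fit(obs_t, ev_t)
km_c = KaplanMeierFitter().fit(obs_c, ev_c)
rmst_diff = (restricted_mean_survival_time(km_t, t=tau)
             - restricted_mean_survival_time(km_c, t=tau))
print(f"log-rank p = {lr.p_value:.3f}, RMST difference at {tau} = {rmst_diff:.2f}")
```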

2.
The win ratio has been studied methodologically and applied in data analysis and in designing clinical trials. Researchers have pointed out that the results depend on follow-up time and censoring time, which are sometimes used interchangeably. In this article, we distinguish between follow-up time and censoring time, show theoretically the impact of censoring on the win ratio, and illustrate the impact of follow-up time. We then point out that, if the treatment has a long-term benefit on a more important but less frequent endpoint (e.g., death), the win ratio can show that benefit by following patients longer, avoiding the masking by more frequent but less important outcomes that occurs in conventional time-to-first-event analyses. For the situation of nonproportional hazards, we demonstrate that the win ratio can be a good alternative to methods such as landmark survival rates, restricted mean survival time, and weighted log-rank tests.
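As a concrete illustration of how wins and losses accrue hierarchically under censoring, here is a toy win ratio computation on simulated data; the two-level hierarchy (death, then hospitalization), the follow-up length, and all rates are assumptions for the sketch.

```python
# Toy win ratio with a two-level hierarchy: death, then hospitalization.
# A pair is decided on an endpoint only when one patient's event is
# observed within the other's follow-up; otherwise the comparison falls
# through to the next endpoint. All data are illustrative.
import numpy as np

def pair_win(trt, ctl):
    """+1 treatment win, -1 loss, 0 tie.
    A patient is (death_t, death_obs, hosp_t, hosp_obs)."""
    for k in (0, 2):                        # 0: death, 2: hospitalization
        t1, e1, t2, e2 = trt[k], trt[k + 1], ctl[k], ctl[k + 1]
        if e2 and t2 <= t1 and (t1 > t2 or not e1):
            return +1                       # control's event observed first
        if e1 and t1 <= t2 and (t2 > t1 or not e2):
            return -1                       # treatment's event observed first
    return 0

rng = np.random.default_rng(1)

def arm(scale_death, scale_hosp, n=100, fup=24.0):
    d, h = rng.exponential(scale_death, n), rng.exponential(scale_hosp, n)
    return [(min(x, fup), x <= fup, min(y, fup), y <= fup)
            for x, y in zip(d, h)]

treatment, control = arm(40.0, 10.0), arm(25.0, 8.0)
results = [pair_win(a, b) for a in treatment for b in control]
wins, losses = results.count(1), results.count(-1)
print(f"win ratio = {wins / losses:.2f}  ({wins} wins / {losses} losses)")
```

Longer follow-up lets more pairs be decided on the death endpoint rather than falling through to hospitalization, which is the mechanism behind the abstract's point about long-term benefit.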

3.
Time-to-event data have been extensively studied in many areas. Although multiple time scales are often observed, commonly used methods are based on a single time scale. Analysing time-to-event data on two time scales can offer more extensive insight into the phenomenon. We introduce a non-parametric Bayesian intensity model to analyse two-dimensional point processes on Lexis diagrams. After a simple discretization of the two-dimensional process, we model the intensity by one-dimensional piecewise constant hazard functions parametrized by the change points and corresponding hazard levels. Our prior distribution incorporates a built-in smoothing feature in two dimensions. We implement posterior simulation using the reversible jump Metropolis–Hastings algorithm and demonstrate the applicability of the method using both simulated and empirical survival data. Our approach outperforms commonly applied models by borrowing strength in two dimensions.
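The one-dimensional building block of this model, a piecewise constant hazard parametrized by change points and levels, has the log-likelihood sketched below; the change points and levels are fixed here for illustration, whereas the paper samples them via reversible jump MCMC.

```python
# Log-likelihood of a one-dimensional piecewise constant hazard:
# loglik = sum_j [ d_j * log(lambda_j) - lambda_j * exposure_j ].
# Change points and levels are fixed here; the paper samples them
# with reversible jump Metropolis-Hastings.
import numpy as np

def piecewise_exp_loglik(times, events, cuts, hazards):
    """cuts: sorted interior change points; hazards: one level per interval."""
    edges = np.concatenate(([0.0], np.asarray(cuts, float), [np.inf]))
    loglik = 0.0
    for j, lam in enumerate(hazards):
        lo, hi = edges[j], edges[j + 1]
        exposure = (np.clip(times, lo, hi) - lo).sum()       # person-time in interval j
        d_j = np.sum(events & (times > lo) & (times <= hi))  # events in interval j
        loglik += d_j * np.log(lam) - lam * exposure
    return loglik

t = np.array([0.5, 1.2, 2.3, 3.1, 4.8])
e = np.array([1, 1, 0, 1, 0], dtype=bool)
print(piecewise_exp_loglik(t, e, cuts=[2.0], hazards=[0.3, 0.1]))
```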

4.
The stratified Cox model is commonly used for stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50-200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails the development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate using simulations that our proposed two-step RGLR analysis delivers notably better results than the stratified Cox model analysis, with smaller estimation bias and mean squared error and larger power, when there is a treatment-by-stratum interaction, and similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples, while the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
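The two-step structure, stratum-specific estimation followed by a weighted combination, can be sketched as follows; ordinary Cox fits stand in for the RGLR estimates (which the sketch does not implement), and the simulated data and sample-size weights are illustrative assumptions.

```python
# Sketch of the two-step stratified analysis: estimate a log hazard
# ratio per stratum, then combine with sample-size weights. Ordinary
# Cox fits stand in for the paper's RGLR estimates; data are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
frames = []
true_loghr = {0: -0.5, 1: -0.1, 2: -0.4}       # treatment-by-stratum interaction
for s, b in true_loghr.items():
    n = 80
    z = rng.integers(0, 2, n)                  # treatment indicator
    t = rng.exponential(1.0 / (0.2 * np.exp(b * z)))
    c = rng.uniform(2.0, 10.0, n)
    frames.append(pd.DataFrame({"time": np.minimum(t, c),
                                "event": (t <= c).astype(int),
                                "trt": z, "stratum": s}))
data = pd.concat(frames, ignore_index=True)

est, w = [], []
for s, df in data.groupby("stratum"):
    fit = CoxPHFitter().fit(df[["time", "event", "trt"]],
                            duration_col="time", event_col="event")
    est.append(fit.params_["trt"])             # stratum-specific log HR
    w.append(len(df))                          # sample-size weight
w = np.array(w) / sum(w)
print(f"combined log HR = {np.dot(w, est):.3f}")
```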

5.
In cost-effectiveness analyses of drugs or health technologies, estimates of life years saved or quality-adjusted life years saved are required. Randomised controlled trials can provide an estimate of the average treatment effect; for survival data, the treatment effect is the difference in mean survival. However, typically not all patients will have reached the endpoint of interest at the close-out of a trial, making it difficult to estimate the difference in mean survival. In this situation, it is common to report the more readily estimable difference in median survival. Alternative approaches to estimating the mean have also been proposed. We conducted a simulation study to investigate the bias and precision of the three most commonly used sample measures of absolute survival gain – difference in median, restricted mean, and extended mean survival – when used as estimates of the true mean difference, under different censoring proportions and a range of survival patterns, represented by Weibull survival distributions with constant, increasing, and decreasing hazards. Our study showed that the three commonly used methods tended to underestimate the true treatment effect; consequently, the incremental cost-effectiveness ratio (ICER) would be overestimated. Of the three methods, the least biased is the extended mean survival, which should perhaps be used as the point estimate of the treatment effect input to the ICER, while the other two approaches could be used in sensitivity analyses. More work on the trade-offs between simple extrapolation using the exponential distribution and more complicated extrapolation using other methods would be valuable.
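A sketch of the three measures, computed per arm and differenced, is below; the data, the truncation time, and the choice of a Weibull model for the extrapolated ("extended") mean are assumptions for the example.

```python
# Three absolute survival-gain measures, per arm and differenced:
# median, restricted mean (up to a truncation time), and an "extended"
# mean via Weibull extrapolation beyond the observed follow-up.
import numpy as np
from lifelines import KaplanMeierFitter, WeibullFitter
from lifelines.utils import restricted_mean_survival_time
from scipy.special import gamma as gamma_fn

def measures(durations, events, tau):
    km = KaplanMeierFitter().fit(durations, events)
    wf = WeibullFitter().fit(durations, events)   # S(t) = exp(-(t/lambda)^rho)
    return {
        "median": km.median_survival_time_,
        "restricted_mean": restricted_mean_survival_time(km, t=tau),
        # Weibull mean = lambda * Gamma(1 + 1/rho): parametric extrapolation.
        "extended_mean": wf.lambda_ * gamma_fn(1.0 + 1.0 / wf.rho_),
    }

rng = np.random.default_rng(3)

def simulate(scale, n=150, admin=8.0):
    t = rng.weibull(1.3, n) * scale               # increasing-hazard Weibull
    return np.minimum(t, admin), (t <= admin).astype(int)

tau = 8.0
trt, ctl = measures(*simulate(6.0), tau), measures(*simulate(4.5), tau)
for k in trt:
    print(f"difference in {k}: {trt[k] - ctl[k]:.2f}")
```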

6.
Clinical trials involving multiple time-to-event outcomes are increasingly common. In this paper, permutation tests for group differences in multivariate time-to-event data are proposed. Unlike other two-sample tests for multivariate survival data, the proposed tests attain the nominal type I error rate. A simulation study shows that the proposed tests outperform their competitors when the proportion of censored observations is sufficiently high. When the degree of censoring is low, naive tests such as Hotelling's T² outperform tests tailored to survival data. Computational and practical aspects of the proposed tests are discussed, and their use is illustrated by analyses of three publicly available datasets. Implementations of the proposed tests are available in an accompanying R package.
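A generic permutation test of this kind can be sketched as follows; the statistic used here (a sum of per-endpoint log-rank statistics) is an illustrative choice, not the statistic proposed in the paper, whose implementation is in the accompanying R package.

```python
# Generic permutation test for a group difference in bivariate
# time-to-event data. The statistic (sum of per-endpoint log-rank
# statistics) is an illustrative stand-in for the paper's test.
import numpy as np
from lifelines.statistics import logrank_test

def stat(times, events, group):
    """times, events: (n, k) arrays, one column per endpoint."""
    total = 0.0
    for j in range(times.shape[1]):
        r = logrank_test(times[group == 1, j], times[group == 0, j],
                         event_observed_A=events[group == 1, j],
                         event_observed_B=events[group == 0, j])
        total += r.test_statistic
    return total

def permutation_p(times, events, group, n_perm=199, seed=0):
    rng = np.random.default_rng(seed)
    observed = stat(times, events, group)
    hits = sum(stat(times, events, rng.permutation(group)) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)              # permutation p-value

# Toy bivariate data: longer event times in group 1, random censoring.
rng = np.random.default_rng(9)
n = 60
grp = np.repeat([0, 1], n // 2)
T = rng.exponential(np.where(grp[:, None] == 1, 14.0, 10.0), (n, 2))
C = rng.uniform(5.0, 20.0, (n, 2))
times, events = np.minimum(T, C), (T <= C).astype(int)
print(f"permutation p = {permutation_p(times, events, grp):.3f}")
```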

7.
Progression-free survival is recognized as an important endpoint in oncology clinical trials. In trials aimed at new drug development, the target population often comprises patients who are refractory to standard therapy and whose tumors show rapid progression. This situation increases the bias of the hazard ratio calculated for progression-free survival, resulting in decreased power in such populations. Therefore, countermeasures are needed at the sample size estimation stage to prevent this loss of power. Here, I propose a novel calculation procedure for deriving the assumed hazard ratio for progression-free survival using the Cox proportional hazards model, which can be applied in sample size calculation. The hazard ratios derived by the proposed procedure were almost identical to those obtained by simulation. The hazard ratio calculated by the proposed procedure is applicable to sample size calculation and achieves the nominal power. Methods that compensate for the lack of power due to biases in the hazard ratio are also discussed from a practical point of view.
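For context, the textbook Schoenfeld formula below shows how an assumed hazard ratio maps to a required number of events, and hence why a biased hazard ratio assumption translates directly into lost power; this is standard background, not the bias-adjustment procedure proposed in the paper.

```python
# Schoenfeld's formula for the number of events needed to detect an
# assumed hazard ratio: d = (z_{1-a/2} + z_{pow})^2 / (p(1-p)(log HR)^2).
# Standard background, not the paper's bias-adjustment procedure.
import math
from scipy.stats import norm

def required_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(z**2 / (alloc * (1 - alloc) * math.log(hr) ** 2))

# If bias attenuates a true HR of 0.65 toward 0.75, a trial sized for
# 0.65 falls well short of the events needed at the attenuated value:
print(required_events(0.65), required_events(0.75))   # 170 vs 380
```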

8.
Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter-term surrogate endpoints as substitutes for costly long-term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as the ability to reliably predict the unobserved treatment effect on the long-term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time-to-event surrogates for a time-to-event true endpoint. This two-stage meta-analytic copula method has been extensively studied for time-to-event surrogate endpoints with one event of interest, but thus far has not been explored for the assessment of surrogates that have multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors, including the strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where there exist very little data on which to assess surrogacy. Results show that the two-stage meta-analytic copula method performs well under certain circumstances and could be considered useful in practice, but demonstrates limitations that may prevent universal use.

9.
In a clinical trial with a time-to-event endpoint, the treatment effect can be measured in various ways. Under proportional hazards, all reasonable measures (such as the hazard ratio and the difference in restricted mean survival time) are consistent in the following sense: take any control-group survival distribution such that the hazard rate remains above zero; if there is no benefit by any measure, there is no benefit by all measures, and as the magnitude of treatment benefit increases by any measure, it increases by all measures. Under nonproportional hazards, however, survival curves can cross, and the direction of the effect for any pair of measures can be inconsistent. In this paper we critically evaluate a variety of treatment effect measures in common use and identify flaws in them. In particular, we demonstrate that a treatment's benefit has two distinct and independent dimensions, which can be measured by the difference in the survival rate at the end of follow-up and the difference in restricted mean survival time, and that commonly used measures do not adequately capture both dimensions. We demonstrate that a generalized hazard difference, which can be estimated by the difference in exposure-adjusted subject incidence rates, captures both dimensions, and that its inverse, the number of patient-years of follow-up that results in one fewer event (the NYNT), is an easily interpretable measure of the magnitude of clinical benefit.
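The estimator described in the last sentence can be sketched directly from its definition; the data below are illustrative assumptions.

```python
# Generalized hazard difference via exposure-adjusted subject incidence
# rates, and its inverse, the NYNT (patient-years of follow-up per one
# fewer event), as defined in the abstract. Data are illustrative.
import numpy as np

def incidence_rate(times, events):
    """Events per patient-year: total events / total follow-up time."""
    return events.sum() / times.sum()

rng = np.random.default_rng(4)

def arm(rate, n=300, fup=3.0):
    t = rng.exponential(1.0 / rate, n)
    return np.minimum(t, fup), (t <= fup).astype(int)

t1, e1 = arm(0.20)                               # treatment
t0, e0 = arm(0.30)                               # control
hd = incidence_rate(t0, e0) - incidence_rate(t1, e1)
print(f"hazard difference = {hd:.3f} events per patient-year")
print(f"NYNT = {1.0 / hd:.1f} patient-years per event avoided")
```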

10.
For clinical trials with time-to-event as the primary endpoint, the clinical cutoff is often event-driven, and the log-rank test is the most commonly used statistical method for evaluating treatment effect. However, this method relies on the proportional hazards assumption, under which it attains maximal power. In certain disease areas or populations, some patients may be cured and never experience the event despite long follow-up. Event accumulation may dry up after a certain period of follow-up, and the treatment effect may manifest as a combination of an improved cure rate and delayed events for the uncurable patients. Study power depends on both the cure rate improvement and the hazard reduction. In this paper, we illustrate these practical issues using simulation studies and explore sample size recommendations, alternative approaches to clinical cutoffs, and efficient testing methods with the highest study power possible.
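A mixture cure model makes the two components of the treatment effect explicit: S(t) = π + (1 − π)S₀(t), where π is the cure fraction. The simulation sketch below, with all parameters assumed for illustration, shows how event accumulation plateaus.

```python
# Simulating a mixture cure model, S(t) = pi + (1 - pi) * S0(t):
# treatment improves both the cure fraction and the event times of
# the uncured. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(5)

def cure_arm(pi, scale, n=500, fup=60.0):
    cured = rng.uniform(size=n) < pi
    t = np.where(cured, np.inf, rng.exponential(scale, n))
    return np.minimum(t, fup), (t <= fup).astype(int)

t_trt, e_trt = cure_arm(pi=0.35, scale=24.0)   # higher cure rate, slower events
t_ctl, e_ctl = cure_arm(pi=0.20, scale=18.0)
print(f"events observed: treatment {e_trt.sum()}, control {e_ctl.sum()}")
# The survival curves plateau near pi after long follow-up, so event
# accumulation dries up and event-driven cutoffs can stall.
```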

11.
For clinical trials with time-to-event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with planned interim and final analyses at a pre-specified number of deaths. Often, correlated surrogate information, such as time-to-progression (TTP) and progression-free survival, is also collected as secondary efficacy endpoints. It would be appealing to borrow strength from the surrogate information to improve the precision of the analysis time prediction. Currently available methods in the literature for predicting analysis timings do not consider utilizing the surrogate information. In this article, using OS and TTP as an example, a general parametric model for OS and TTP is proposed, under the assumption that disease progression can change the course of overall survival. Progression-free survival, related to both OS and TTP, is handled separately, as it can be derived from OS and TTP. The authors develop a prediction procedure using a Bayesian method and provide detailed implementation strategies under certain assumptions. Simulations are performed to evaluate the performance of the proposed method. An application to a real study is also provided.

12.
In recent years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials with continuous and (to a lesser extent) binary or categorical endpoints. Studies with time-to-event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely, prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time-to-event outcomes can be extended in a clinically meaningful and interpretable way to stress-test the assumption of ignorable censoring. We focus on a 'tipping point' approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on the clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi-parametric, and non-parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time-to-event data and conclude that the method based on a piecewise exponential imputation model of survival has some advantages over the other methods studied here.
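A heavily simplified tipping-point loop is sketched below: censored times in the experimental arm are multiply imputed from an exponential model whose hazard is inflated by a sensitivity parameter δ, and δ is increased until significance is lost. The exponential imputation model, the median-p pooling, and all data are illustrative simplifications of the approaches studied in the paper.

```python
# Simplified tipping-point sketch: impute censored times in the
# experimental arm at a delta-inflated hazard, test, and scan delta
# upward until significance is lost. In practice a piecewise
# exponential model and Rubin's-rules pooling would be used.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(6)

def arm(scale, n=150, fup=12.0):
    t = rng.exponential(scale, n)
    return np.minimum(t, fup), (t <= fup).astype(int)

def impute_and_test(t_trt, e_trt, t_ctl, e_ctl, delta, m=20):
    base = e_trt.sum() / t_trt.sum()          # crude exponential hazard MLE
    pvals = []
    for _ in range(m):
        t, e = t_trt.copy(), e_trt.copy()
        cens = e == 0
        # residual post-censoring time drawn at the delta-inflated hazard
        t[cens] += rng.exponential(1.0 / (delta * base), cens.sum())
        e[cens] = 1
        pvals.append(logrank_test(t, t_ctl, event_observed_A=e,
                                  event_observed_B=e_ctl).p_value)
    return np.median(pvals)                   # crude pooling for a sketch

t_trt, e_trt = arm(20.0)                      # experimental arm
t_ctl, e_ctl = arm(12.0)                      # control arm
for delta in (1.0, 1.5, 2.0, 3.0):            # scan for the tipping point
    print(delta, round(impute_and_test(t_trt, e_trt, t_ctl, e_ctl, delta), 4))
```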

13.
For normally distributed data analyzed with linear models, it is well known that measurement error in an independent variable leads to attenuation of its effect on the dependent variable. However, for time-to-event variables such as progression-free survival (PFS), the effect of variability in the underlying measurements defining the event is less well understood. We conducted a simulation study to evaluate the impact of measurement variability in tumor assessment on the treatment effect hazard ratio for PFS and on the median PFS time, for different tumor assessment frequencies. Our results show that scan measurement variability can attenuate the treatment effect (i.e., pull the hazard ratio closer to one) and that the extent of attenuation may increase with more frequent scan assessments. This attenuation leads to inflation of the type II error. Therefore, scan measurement variability should be minimized as far as possible in order to reveal a treatment effect that is closest to the truth. In disease settings where the measurement variability is shown to be large, consideration may be given to inflating the sample size of the study to maintain statistical power.

14.
Occasionally, investigators collect auxiliary marks at the time of failure in a clinical study. Because the failure event may be censored at the end of the follow-up period, these marked endpoints are subject to induced censoring. We propose two new families of two-sample tests for the null hypothesis of no difference in mark-scale distribution that allow for arbitrary associations between mark and time. One family of proposed tests is a nonparametric extension of an existing semi-parametric linear test of the same null hypothesis, while the second family is based on novel marked rank processes. Simulation studies indicate that the proposed tests have the desired size and possess adequate statistical power to reject the null hypothesis under a simple change of location in the marginal mark distribution. When the marginal mark distribution has heavy tails, the proposed rank-based tests can be nearly twice as powerful as the linear tests.

15.
A draft addendum to ICH E9 was released for public consultation in August 2017. The addendum focuses on two topics particularly relevant for randomized confirmatory clinical trials: estimands and sensitivity analyses. The need to amend ICH E9 grew out of the realization of a lack of alignment between the objectives of a clinical trial stated in the protocol and the accompanying quantification of the "treatment effect" reported in a regulatory submission. We embed time-to-event endpoints in the estimand framework and discuss how the four estimand attributes described in the addendum apply to time-to-event endpoints. We point out that if the proportional hazards assumption is not met, the estimand targeted by the most prevalent methods used to analyze time-to-event endpoints, the logrank test and Cox regression, depends on the censoring distribution. For a large randomized clinical trial, we discuss how the analyses for the primary and secondary endpoints, as well as the sensitivity analyses actually performed in the trial, can be seen in the context of the addendum. To the best of our knowledge, this is the first attempt to do so for a trial with a time-to-event endpoint. Questions that remain open with the addendum for time-to-event endpoints and beyond are formulated, and recommendations for the planning of future trials are given. We hope that this will contribute to developing a common framework, based on the final version of the addendum, that can be applied to designs, protocols, statistical analysis plans, and clinical study reports in the future.

16.
Recently, molecularly targeted agents and immunotherapy have been advanced for the treatment of patients with relapsed or refractory cancer, where progression-free survival or event-free survival is often a primary endpoint for the trial design. However, methods to evaluate two-stage single-arm phase II trials with a time-to-event endpoint currently assume an exponential distribution, which limits their application in real trial designs. In this paper, we develop an optimal two-stage design that can be applied to four commonly used parametric survival distributions. The proposed method has advantages over existing methods in that the choice of the underlying survival model is more flexible and the power of the study is more adequately addressed. The proposed two-stage design can therefore be routinely used for single-arm phase II trial designs with a time-to-event endpoint, as a complement to Simon's commonly used two-stage design for binary outcomes.

17.
In the analysis of semi-competing risks data, interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ². When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permits non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived, and small-sample operating characteristics are evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer.

18.
In the traditional study design of a single-arm phase II cancer clinical trial, the one-sample log-rank test has been frequently used. A common practice in sample size calculation is to assume that the event time under the new treatment follows an exponential distribution. Such a study design may not be suitable for immunotherapy cancer trials when both long-term survivors (or even patients cured of the disease) and a delayed treatment effect are present, because the exponential distribution is not appropriate to describe such data and could consequently lead to a severely underpowered trial. In this research, we propose a piecewise proportional hazards cure rate model with a random delayed treatment effect to design single-arm phase II immunotherapy cancer trials. To improve test power, we propose a new weighted one-sample log-rank test and provide a sample size calculation formula for designing trials. Our simulation study shows that the proposed log-rank test performs well and is robust to misspecified weights, and that the sample size calculation formula also performs well.
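For context, the classical unweighted one-sample log-rank statistic that the proposed test generalizes can be sketched in a few lines; the null hazard and data below are assumptions for the example, and the paper's version additionally weights the increments.

```python
# Classical one-sample log-rank statistic, Z = (O - E) / sqrt(E), where
# O is the observed event count and E sums the null cumulative hazard
# over each subject's follow-up. The paper's weighted test generalizes
# this; the null hazard and data here are illustrative.
import numpy as np

def one_sample_logrank(times, events, cumhaz0):
    O = events.sum()                                 # observed events
    E = np.sum([cumhaz0(t) for t in times])          # expected under the null
    return (O - E) / np.sqrt(E)                      # approx. N(0,1) under H0

# Null: exponential with median 10 months -> Lambda0(t) = log(2)/10 * t.
cumhaz0 = lambda t: np.log(2) / 10.0 * t
rng = np.random.default_rng(8)
t = np.minimum(rng.exponential(20.0, 60), 24.0)      # new treatment, censored at 24
e = (t < 24.0).astype(int)
print(f"Z = {one_sample_logrank(t, e, cumhaz0):.2f}")  # Z < 0 favors treatment
```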

19.
Subgroup detection has received increasing attention recently in fields such as clinical trials, public management, and market segmentation analysis. In these fields, researchers often face time-to-event data, which are commonly subject to right censoring. This paper proposes a semiparametric Logistic-Cox mixture model for subgroup analysis when the outcome of interest is an event time subject to right censoring. The proposed method mainly consists of a likelihood ratio-based testing procedure for the existence of subgroups. The expectation-maximization iteration is applied to improve the testing power, and a model-based bootstrap approach is developed to implement the testing procedure. When subgroups exist, the proposed model can also be used to estimate the subgroup effect and construct predictive scores for subgroup membership. The large-sample properties of the proposed method are studied, and its finite-sample performance is assessed by simulation studies. A real data example is also provided for illustration.

20.
In randomized clinical trials with time-to-event outcomes, the hazard ratio is commonly used to quantify the treatment effect relative to a control. The Cox regression model is commonly used to adjust for relevant covariates to obtain more accurate estimates of the hazard ratio between treatment groups. However, it is well known that the treatment hazard ratio based on a covariate-adjusted Cox regression model is conditional on the specific covariates and differs from the unconditional hazard ratio that is an average across the population. Therefore, covariate-adjusted Cox models cannot be used when unconditional inference is desired. In addition, the covariate-adjusted Cox model requires the relatively strong assumption of proportional hazards for each covariate. To overcome these challenges, a nonparametric randomization-based analysis of covariance method was proposed to estimate covariate-adjusted hazard ratios for multivariate time-to-event outcomes. However, the performance (power and type I error rate) of this method has not been evaluated empirically. Although the method was derived for multivariate situations, for most registration trials the primary endpoint is a univariate outcome. Therefore, in this paper the approach is applied to univariate outcomes, and its performance is evaluated through a simulation study. Stratified analysis is also investigated. As an illustration of the method, we also apply the covariate-adjusted and unadjusted analyses to an oncology trial.
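The non-collapsibility described above, where the conditional hazard ratio differs from the population-averaged one even in a randomized trial without confounding, is easy to reproduce by simulation; the sketch below uses lifelines, with all parameters assumed for illustration.

```python
# Non-collapsibility of the hazard ratio: the covariate-conditional
# log HR from an adjusted Cox model differs from the unconditional
# (population-averaged) one, even under randomization. Illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 5000
z = rng.integers(0, 2, n)                        # randomized treatment
x = rng.normal(size=n)                           # strong prognostic covariate
t = rng.exponential(1.0 / (0.1 * np.exp(-0.7 * z + 1.0 * x)))
df = pd.DataFrame({"time": np.minimum(t, 20.0),
                   "event": (t <= 20.0).astype(int), "trt": z, "x": x})

adj = CoxPHFitter().fit(df, "time", "event")                  # adjusts for x
unadj = CoxPHFitter().fit(df[["time", "event", "trt"]], "time", "event")
print(f"conditional log HR:   {adj.params_['trt']:.3f}")      # near -0.7
print(f"unconditional log HR: {unadj.params_['trt']:.3f}")    # attenuated
```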
