Similar Articles (20 results)
1.
For clinical trials with time-to-event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with planned interim and final analyses at a pre-specified number of deaths. Often, correlated surrogate endpoints, such as time-to-progression (TTP) and progression-free survival, are also collected as secondary efficacy endpoints. It would be appealing to borrow strength from this surrogate information to improve the precision of the analysis time prediction. Currently available methods in the literature for predicting analysis timings do not utilize the surrogate information. In this article, using OS and TTP as an example, a general parametric model for OS and TTP is proposed, with the assumption that disease progression can change the course of overall survival. Progression-free survival, related to both OS and TTP, is handled separately, as it can be derived from OS and TTP. The authors develop a prediction procedure using a Bayesian method and provide detailed implementation strategies under certain assumptions. Simulations are performed to evaluate the performance of the proposed method. An application to a real study is also provided. Copyright © 2015 John Wiley & Sons, Ltd.
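A minimal sketch of the analysis-time prediction idea, assuming a single exponential OS hazard with a conjugate gamma prior rather than the joint OS/TTP model proposed in the abstract; the interim counts and the name predict_time_to_target are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical interim snapshot: observed deaths, accumulated follow-up
# (patient-years), patients still at risk, and the death count that
# triggers the final analysis.
deaths_observed = 120
total_exposure = 800.0
n_alive = 280
target_deaths = 250

# Gamma(a0, b0) prior on the exponential death hazard; the interim data
# give the conjugate posterior Gamma(a0 + deaths, b0 + exposure).
a0, b0 = 0.1, 0.1
post_shape, post_rate = a0 + deaths_observed, b0 + total_exposure

def predict_time_to_target(n_sims=2000):
    """Posterior-predictive draws of the additional calendar time until the
    target number of deaths is reached (no further accrual assumed)."""
    needed = target_deaths - deaths_observed   # assumed not to exceed n_alive
    times = np.empty(n_sims)
    for i in range(n_sims):
        lam = rng.gamma(post_shape, 1.0 / post_rate)
        # Exponential memorylessness: each survivor's residual time ~ Exp(lam),
        # regardless of time already spent on study.
        residual = rng.exponential(1.0 / lam, size=n_alive)
        times[i] = np.sort(residual)[needed - 1]
    return times

pred = predict_time_to_target()
print(f"median additional time {np.median(pred):.2f} y, "
      f"90% interval ({np.quantile(pred, 0.05):.2f}, {np.quantile(pred, 0.95):.2f})")
```

Borrowing strength from TTP, as the abstract proposes, would replace the single-hazard posterior with a joint OS/TTP model but leave the predictive simulation loop essentially unchanged.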

2.
In many disease areas, commonly used long-term clinical endpoints are becoming increasingly difficult to implement due to long follow-up times and/or increased costs. Shorter-term surrogate endpoints are urgently needed to expedite drug development, the evaluation of which requires robust and reliable statistical methodology to drive meaningful clinical conclusions about the strength of relationship with the true long-term endpoint. This paper uses a simulation study to explore one such previously proposed method, based on information theory, for evaluation of time-to-event surrogate and long-term endpoints, including the first examination within a meta-analytic setting of multiple clinical trials with such endpoints. The performance of the information theory method is examined for various scenarios including different dependence structures, surrogate endpoints, censoring mechanisms, treatment effects, trial and sample sizes, and for surrogate and true endpoints with a natural time-ordering. Results allow us to conclude that, contrary to some findings in the literature, the approach provides estimates of surrogacy that may be substantially lower than the true relationship between surrogate and true endpoints, and rarely reach a level that would enable confidence in the strength of a given surrogate endpoint. As a result, care is needed in the assessment of time-to-event surrogate and true endpoints based only on this methodology.
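For intuition, here is a single-trial, individual-level analogue of the information-theoretic measure (the likelihood-reduction statistic R²_h = 1 − exp(−G²/n)); the meta-analytic setting examined in the paper pools such quantities across trials. The exponential regression working models and all simulated quantities are assumptions of this sketch:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500

# Simulated data (hypothetical): surrogate time S correlated with true time T.
S = rng.exponential(1.0, n)
T = 0.5 * S + rng.exponential(1.0, n)        # induces dependence on S
C = rng.exponential(5.0, n)                  # independent censoring
obs_T = np.minimum(T, C)
event = (T <= C).astype(float)

def neg_loglik(beta, X):
    """Exponential-regression -log L with hazard_i = exp(X_i @ beta)."""
    lam = np.exp(X @ beta)
    return -(np.sum(event * np.log(lam)) - np.sum(lam * obs_T))

X0 = np.ones((n, 1))                         # null model: intercept only
X1 = np.column_stack([np.ones(n), S])        # adds the surrogate

ll0 = -minimize(neg_loglik, np.zeros(1), args=(X0,)).fun
ll1 = -minimize(neg_loglik, np.zeros(2), args=(X1,)).fun

# Information-theoretic R^2_h = 1 - exp(-G^2 / n), where G^2 is the
# likelihood-ratio statistic for adding the surrogate to the model.
G2 = 2.0 * (ll1 - ll0)
R2h = 1.0 - np.exp(-G2 / n)
print(f"R^2_h = {R2h:.3f}")
```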

3.
For clinical trials with multiple endpoints, the primary interest is usually to evaluate the relationship of these endpoints to treatment interventions. Studying the correlation of two clinical trial endpoints can also be of interest. For example, the association between a patient-reported outcome and a clinically assessed endpoint could answer important research questions and also generate interesting hypotheses for future research. However, it is not straightforward to quantify such an association. In this article, we propose a multiple event approach to profile such an association with a temporal correlation function, visualized by a correlation function plot over time with a confidence band. We developed this approach by extending existing methodology in the recurrent event literature. The approach is shown to be generally unbiased and can be a useful tool for data visualization and inference. We demonstrate the use of this method with data from a real clinical trial. Although this approach was developed to evaluate the association between patient-reported outcomes and adverse events, it can also be used to evaluate the association of any two endpoints that can be translated to time-to-event endpoints. Copyright © 2015 John Wiley & Sons, Ltd.

4.
A draft addendum to ICH E9 was released for public consultation in August 2017. The addendum focuses on two topics particularly relevant for randomized confirmatory clinical trials: estimands and sensitivity analyses. The need to amend ICH E9 grew out of the realization of a lack of alignment between the objectives of a clinical trial stated in the protocol and the accompanying quantification of the "treatment effect" reported in a regulatory submission. We embed time-to-event endpoints in the estimand framework and discuss how the four estimand attributes described in the addendum apply to time-to-event endpoints. We point out that if the proportional hazards assumption is not met, the estimand targeted by the most prevalent methods used to analyze time-to-event endpoints, the logrank test and Cox regression, depends on the censoring distribution. We discuss, for a large randomized clinical trial, how the analyses for the primary and secondary endpoints as well as the sensitivity analyses actually performed in the trial can be seen in the context of the addendum. To the best of our knowledge, this is the first attempt to do so for a trial with a time-to-event endpoint. Questions that remain open with the addendum for time-to-event endpoints and beyond are formulated, and recommendations for the planning of future trials are given. We hope that this will contribute to developing a common framework based on the final version of the addendum that can be applied to designs, protocols, statistical analysis plans, and clinical study reports in the future.

5.
Adaptive trial methodology for multiarmed trials and enrichment designs has been extensively discussed in the past. A general principle for constructing test procedures that control the family-wise Type I error rate in the strong sense is based on combination tests within a closed test. With survival data, a problem arises when adaptive decision making uses information from patients who are still at risk at the interim analysis. With currently available testing procedures, either no testing of hypotheses in interim analyses is possible or there are restrictions on the interim data that can be used in the adaptation decisions, as essentially only the interim test statistics of the primary endpoint may be used. We propose a general adaptive testing procedure, covering multiarmed and enrichment designs, which does not have these restrictions. An important application is clinical trials in which short-term surrogate endpoints are used as the basis for trial adaptations, and we illustrate how such trials can be designed. We propose statistical models to assess the impact of effect sizes, the correlation structure between the short-term and the primary endpoint, the sample size, the timing of interim analyses, and the selection rule on the operating characteristics.
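A sketch of the combination-test machinery the abstract builds on: stage-wise p-values combined with the inverse normal rule inside a closed test. The stage-wise p-values below are made up, and a real design would compute the intersection-hypothesis p-values with, e.g., a Dunnett or Simes adjustment:

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combine stage-wise one-sided p-values with pre-fixed weights
    (w1 + w2 = 1, so the squared combination weights sum to 1)."""
    assert abs(w1 + w2 - 1.0) < 1e-12, "stage weights must sum to 1"
    z = np.sqrt(w1) * norm.isf(p1) + np.sqrt(w2) * norm.isf(p2)
    return norm.sf(z)           # combined one-sided p-value

# Closed-test example for one experimental arm A selected at the interim of
# a two-arm (A, B) vs control trial: to reject H_A at level 0.025, both the
# intersection H_AB and H_A must be rejected by their combination tests;
# stage-2 p-values use only data from the continuing arm.
p_stage1 = {"A": 0.04, "AB": 0.03}   # hypothetical adjusted interim p-values
p_stage2 = {"A": 0.01, "AB": 0.01}
alpha = 0.025
reject_A = all(
    inverse_normal_combination(p_stage1[h], p_stage2[h]) <= alpha
    for h in ("A", "AB")
)
print("reject H_A:", reject_A)
```

The abstract's contribution is to relax which interim information (e.g., short-term surrogate data from patients still at risk) may feed the adaptation decision while keeping this combination structure intact.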

6.
In many therapeutic areas, the identification and validation of surrogate endpoints is of prime interest to reduce the duration and/or size of clinical trials. Buyse et al. [Biostatistics 2000; 1:49-67] proposed a meta-analytic approach to validation. In this approach, the validity of a surrogate is quantified by the coefficient of determination R²_trial obtained from a model that allows prediction of the treatment effect on the endpoint of interest (the 'true' endpoint) from the effect on the surrogate. One problem related to the use of R²_trial is the difficulty in interpreting its value. To address this difficulty, in this paper we introduce a new concept, the so-called surrogate threshold effect (STE), defined as the minimum treatment effect on the surrogate necessary to predict a non-zero effect on the true endpoint. One of its interesting features, apart from providing information relevant to the practical use of a surrogate endpoint, is its natural interpretation from a clinical point of view.
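A rough illustration of how an STE can be computed from trial-level effect estimates, assuming a simple least-squares fit that ignores the estimation error in each trial's effects (the full meta-analytic model accounts for it); the data and grid search are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical trial-level effects from N historical trials:
# alpha_i = treatment effect on the surrogate, beta_i = effect on the true endpoint.
N = 15
alpha = rng.normal(0.5, 0.3, N)
beta = 0.2 + 0.9 * alpha + rng.normal(0, 0.15, N)

# Least-squares fit of beta on alpha.
X = np.column_stack([np.ones(N), alpha])
coef, *_ = np.linalg.lstsq(X, beta, rcond=None)
resid = beta - X @ coef
s2 = resid @ resid / (N - 2)

def lower_prediction_bound(a0, level=0.95):
    """Lower one-sided prediction bound for beta in a new trial with surrogate effect a0."""
    x0 = np.array([1.0, a0])
    se_pred = np.sqrt(s2 * (1.0 + x0 @ np.linalg.inv(X.T @ X) @ x0))
    return x0 @ coef - stats.t.ppf(level, df=N - 2) * se_pred

# STE: smallest surrogate effect whose prediction bound for the true-endpoint
# effect excludes zero, found here by a crude grid search.
grid = np.linspace(0.0, 3.0, 3001)
mask = np.array([lower_prediction_bound(a) > 0 for a in grid])
print("STE ≈", grid[mask][0] if mask.any() else "not reached on grid")
```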

7.
Over the past years, significant progress has been made in developing statistically rigorous methods to implement clinically interpretable sensitivity analyses for assumptions about the missingness mechanism in clinical trials for continuous and (to a lesser extent) binary or categorical endpoints. Studies with time-to-event outcomes have received much less attention. However, such studies can be similarly challenged with respect to the robustness and integrity of primary analysis conclusions when a substantial number of subjects withdraw from treatment prematurely, prior to experiencing an event of interest. We discuss how the methods that are widely used for primary analyses of time-to-event outcomes can be extended in a clinically meaningful and interpretable way to stress-test the assumption of ignorable censoring. We focus on a 'tipping point' approach, the objective of which is to postulate sensitivity parameters with a clear clinical interpretation and to identify a setting of these parameters unfavorable enough towards the experimental treatment to nullify a conclusion that was favorable to that treatment. Robustness of primary analysis results can then be assessed based on the clinical plausibility of the scenario represented by the tipping point. We study several approaches for conducting such analyses based on multiple imputation using parametric, semi-parametric, and non-parametric imputation models and evaluate their operating characteristics via simulation. We argue that these methods are valuable tools for sensitivity analyses of time-to-event data and conclude that the method based on a piecewise exponential imputation model for survival has some advantages over the other methods studied here. Copyright © 2016 John Wiley & Sons, Ltd.
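A stripped-down sketch of a tipping-point scan, using a constant-hazard (single-piece exponential) imputation model in place of the piecewise exponential model favored in the paper, lifelines' log-rank test, and crude median pooling rather than Rubin's rules; all trial data and names are hypothetical:

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)

def tipping_point_scan(time, event, withdrew, arm, deltas, n_imp=50):
    """For each hazard multiplier delta applied to experimental-arm
    withdrawers, multiply-impute their post-withdrawal event times from an
    exponential model and summarize two-sided log-rank p-values."""
    # Exponential hazard estimated from the control arm, a stand-in for
    # the paper's piecewise exponential model.
    base_haz = event[arm == 0].sum() / time[arm == 0].sum()
    results = {}
    for delta in deltas:
        pvals = []
        for _ in range(n_imp):
            t, e = time.copy(), event.copy()
            idx = np.where(withdrew & (arm == 1))[0]
            # Impute residual time under a hazard inflated by delta.
            t[idx] += rng.exponential(1.0 / (delta * base_haz), idx.size)
            e[idx] = 1
            res = logrank_test(t[arm == 1], t[arm == 0],
                               event_observed_A=e[arm == 1],
                               event_observed_B=e[arm == 0])
            pvals.append(res.p_value)
        # Median pooling is a crude summary; Rubin's rules on z-scores
        # would be the more principled choice.
        results[delta] = np.median(pvals)
    return results

# Hypothetical trial: 100 patients per arm, some early treatment withdrawals.
n = 200
arm = np.repeat([0, 1], n // 2)
time = rng.exponential(np.where(arm == 1, 14.0, 10.0))
event = rng.uniform(size=n) < 0.8
withdrew = ~event & (rng.uniform(size=n) < 0.5)
print(tipping_point_scan(time, event, withdrew, arm, deltas=[1.0, 1.5, 2.0, 3.0]))
```

The tipping point is the smallest delta whose pooled p-value exceeds the significance level; its clinical plausibility is then what the robustness judgment rests on.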

8.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance-parameter-based sample size re-estimation of a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta-analytic-predictive approach. To incorporate external information into the sample size re-estimation, we propose to update the meta-analytic-predictive prior with the results of the internal pilot study and to re-estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and sample size distribution, of the proposed procedure with those of the traditional sample size re-estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior-data conflict is present, incorporating external information into the sample size re-estimation improves the operating characteristics compared to the traditional approach. In the case of a prior-data conflict, that is, when the variance of the ongoing clinical trial differs from the prior location, the performance of the traditional sample size re-estimation procedure is generally superior, even when the prior information is robustified. When considering whether to include prior information in sample size re-estimation, the potential gains should be balanced against the risks.
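A toy version of the update step, collapsing the MAP prior (in reality a mixture over historical trials) to a single inverse-gamma prior on the outcome variance and re-estimating the sample size from the posterior mean after the internal pilot; all planning numbers are assumptions:

```python
import numpy as np
from scipy import stats

# Two-sample planning quantities: one-sided alpha, target power, relevant difference.
alpha, power, delta = 0.025, 0.80, 5.0

def n_per_arm(var):
    """Normal-approximation sample size per arm for a two-sample comparison."""
    za, zb = stats.norm.isf(alpha), stats.norm.isf(1 - power)
    return int(np.ceil(2 * var * (za + zb) ** 2 / delta ** 2))

# External information on the variance summarized, crudely, as a single
# inverse-gamma prior IG(a0, b0); prior mean b0/(a0 - 1) = 100 (SD 10).
a0, b0 = 8.0, 700.0

# Internal pilot: residual sum of squares on nu degrees of freedom; the true
# SD (11) deviates mildly from the prior guess.
rng = np.random.default_rng(11)
pilot = rng.normal(0, 11.0, size=40)
nu, sse = pilot.size - 1, np.sum((pilot - pilot.mean()) ** 2)

# Conjugate update IG(a0 + nu/2, b0 + sse/2); re-estimate n from the
# posterior-mean variance.
a1, b1 = a0 + nu / 2, b0 + sse / 2
var_post = b1 / (a1 - 1)
print("initial n/arm:", n_per_arm(b0 / (a0 - 1)))
print("re-estimated n/arm:", n_per_arm(var_post))
```

With a genuine prior-data conflict (a pilot SD far from 10), this posterior mean is pulled toward the prior, which is exactly the behavior the abstract warns can make the traditional pooled-variance re-estimation superior.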

9.
In a clinical trial comparing two treatment groups, one commonly used endpoint is time to death. Another is time until the first nonfatal event (if there is one) or until death (if not). Both endpoints have drawbacks. The wrong choice may adversely affect the value of the study by impairing power if deaths are too few (with the first endpoint) or by lessening the role of mortality if not (with the second endpoint). We propose a compromise that provides a simple test based on the time to death if the patient has died or time since randomization augmented by an increment otherwise. The test applies the ordinary two-sample Wilcoxon statistic to these values. The formula for the increment (the same for experimental and control patients) must be specified before the trial starts. In the simplest (and perhaps most useful) case, the increment assumes only two values, according to whether or not the (surviving) patient had a nonfatal event. More generally, the increment depends on the time of the first nonfatal event, if any, and the time since randomization. The test has correct Type I error even though it does not handle censoring in a customary way. For conditions where investigators would face no easy (advance) choice between the two older tests, simulation results favor the new test. An example using a renal-cancer trial is presented. Copyright © 2015 John Wiley & Sons, Ltd.
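The simplest two-value variant of this test is easy to state in code; the increments below (+1 year for survivors with a nonfatal event, +2 without) and the common administrative follow-up are illustrative choices that would have to be pre-specified:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)

def augmented_values(death_time, died, nonfatal_event, eps_event, eps_none, followup):
    """Score each patient: time of death if dead; otherwise time since
    randomization (follow-up) plus a pre-specified increment that is
    smaller when a nonfatal event occurred."""
    inc = np.where(nonfatal_event, eps_event, eps_none)
    return np.where(died, death_time, followup + inc)

# Hypothetical data: 100 patients per arm, 3 years of common follow-up.
n, followup = 100, 3.0
def simulate(haz_death, p_nonfatal):
    t = rng.exponential(1.0 / haz_death, n)
    died = t < followup
    nonfatal = rng.uniform(size=n) < p_nonfatal
    return np.minimum(t, followup), died, nonfatal

tC, dC, nfC = simulate(0.30, 0.50)   # control
tE, dE, nfE = simulate(0.20, 0.35)   # experimental

# Increments are fixed in advance and identical for both arms.
vC = augmented_values(tC, dC, nfC, 1.0, 2.0, followup)
vE = augmented_values(tE, dE, nfE, 1.0, 2.0, followup)
print(mannwhitneyu(vE, vC, alternative="greater"))
```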

10.
For a number of reasons, surrogate endpoints are considered instead of the so-called true endpoint in clinical studies, especially when such endpoints can be measured earlier, and/or with less burden for patient and experimenter. Surrogate endpoints may occur more frequently than their standard counterparts. For these reasons, it is not surprising that the use of surrogate endpoints in clinical practice is increasing.

11.
The use of surrogate endpoints, which is becoming increasingly popular nowadays, was introduced in medical research to reduce the experimental time required for the approval of a drug. Through cost cutting and time saving, surrogate endpoints can bring profit to medicine producers. We obtain an expression for the proportionate reduction in true-endpoint samples achievable at the expense of surrogates while maintaining a fixed power for comparing the two treatments. We present our discussion in the two-treatment setup with the odds ratio as the measure of treatment difference. We illustrate the methodology with a real dataset.

12.
Conditional power calculations are frequently used to guide the decision whether or not to stop a trial for futility or to modify the planned sample size. These ignore the information in short-term endpoints and baseline covariates, and thereby do not make fully efficient use of the information in the data. We therefore propose an interim decision procedure based on the conditional power approach which exploits the information contained in baseline covariates and short-term endpoints. We realize this by considering the estimation of the treatment effect at the interim analysis as a missing data problem. This problem is addressed by employing specific prediction models for the long-term endpoint which enable the incorporation of baseline covariates and multiple short-term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size, without compromising the Type I error rate of the procedure, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, such that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re-assessment based on the inverse normal P-value combination method to control Type I error. We support the proposal by Monte Carlo simulations based on data from a real clinical trial.
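A crude sketch of the missing-data view for a continuous long-term endpoint: predict the unobserved long-term outcomes from the arm, a short-term endpoint, and a baseline covariate, then feed the augmented effect estimate into a standard Brownian-motion conditional power formula. The linear prediction model and all numbers are assumptions, far simpler than the prediction models proposed in the paper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

def conditional_power(z_interim, info_frac, drift):
    """CP = P(reject at the final analysis | interim B-value, assumed drift)."""
    b = z_interim * np.sqrt(info_frac)          # B-value at the interim
    za = norm.isf(0.025)
    return norm.sf((za - b - drift * (1 - info_frac)) / np.sqrt(1 - info_frac))

# Hypothetical interim: 200 of 400 patients have the long-term endpoint Y;
# all have a correlated short-term endpoint S and a baseline covariate X.
n, n_final = 400, 200
arm = rng.integers(0, 2, n)
X = rng.normal(0, 1, n)
S = 0.3 * arm + 0.5 * X + rng.normal(0, 1, n)
Y = 0.4 * arm + 0.6 * S + 0.3 * X + rng.normal(0, 1, n)
has_Y = np.arange(n) < n_final

# Predict missing Y from (arm, S, X) using completers: the missing-data view
# of the interim estimate, in its simplest linear form.
Z = np.column_stack([np.ones(n), arm, S, X])
coef, *_ = np.linalg.lstsq(Z[has_Y], Y[has_Y], rcond=None)
Y_work = np.where(has_Y, Y, Z @ coef)

theta_hat = Y_work[arm == 1].mean() - Y_work[arm == 0].mean()
se_final = np.sqrt(4 / n)            # outcome variance taken as 1, equal arms
drift = theta_hat / se_final         # anticipated final z under theta_hat
z_interim = (Y[has_Y & (arm == 1)].mean()
             - Y[has_Y & (arm == 0)].mean()) / np.sqrt(4 / n_final)
print(f"CP (augmented current-trend estimate): "
      f"{conditional_power(z_interim, n_final / n, drift):.3f}")
```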

13.
We present an introductory survey of the use of surrogates in cancer research, in particular in clinical trials. The concept of a surrogate endpoint is introduced and contrasted with that of a biomarker. It is emphasized that a surrogate endpoint is not universal for an indication but will depend on the mechanism of treatment. We discuss the measures of validity of a surrogate and give examples of both cancer surrogates and biomarkers on the path to surrogacy. Circumstances in which a surrogate endpoint may actually be preferred to the clinical endpoint are described. We provide pointers to the recent substantive literature on surrogates. Copyright © 2009 John Wiley & Sons, Ltd.

14.
Recently, molecularly targeted agents and immunotherapy have advanced the treatment of relapsed or refractory cancer patients, for whom progression-free survival or event-free survival is often a primary endpoint in trial design. However, existing methods for designing two-stage single-arm phase II trials with a time-to-event endpoint assume an exponential distribution, which limits their application to real trial designs. In this paper, we developed an optimal two-stage design that applies to four commonly used parametric survival distributions. The proposed method has advantages over existing methods in that the choice of underlying survival model is more flexible and the power of the study is more adequately addressed. The proposed two-stage design can therefore be routinely used for single-arm phase II trial designs with a time-to-event endpoint, as a complement to the commonly used Simon's two-stage design for binary outcomes.

15.
The objective of this paper is to extend the surrogate endpoint validation methodology proposed by Buyse et al. (2000) to the case of a longitudinally measured surrogate marker when the endpoint of interest is time to some key clinical event. A joint model for longitudinal and event time data is required. To this end, the model formulation of Henderson et al. (2000) is adopted. The methodology is applied to a set of two randomized clinical trials in advanced prostate cancer to evaluate the usefulness of prostate-specific antigen (PSA) level as a surrogate for survival.

16.
Failure to adjust for informative non-compliance, a common phenomenon in endpoint trials, can lead to a considerably underpowered study. However, standard methods for sample size calculation assume that non-compliance is non-informative. One existing method to account for informative non-compliance, based on a two-subpopulation model, is limited with respect to the degree of association between the risk of non-compliance and the risk of a study endpoint that can be modelled, and with respect to the maximum allowable rates of non-compliance and endpoints. In this paper, we introduce a new method that largely overcomes these limitations. This method is based on a model in which time to non-compliance and time to endpoint are assumed to follow a bivariate exponential distribution. Parameters of the distribution are obtained by equating them with the study design parameters. The impact of informative non-compliance is investigated across a wide range of conditions, and the method is illustrated by recalculating the sample size of a published clinical trial. Copyright © 2005 John Wiley & Sons, Ltd.
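A toy simulation in the spirit of the paper's formulation (not its actual method): a Marshall-Olkin bivariate exponential links time to non-compliance and time to endpoint through a shared shock, and the diluted event rates are pushed through a standard two-proportion sample size formula. Every rate and the switching mechanism are assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(13)

def marshall_olkin(n, haz_nc, haz_ev, shared):
    """Bivariate exponential (Marshall-Olkin): a shared shock with rate
    'shared' makes time to non-compliance and time to endpoint positively
    dependent; the marginal hazards are haz_nc and haz_ev."""
    z1 = rng.exponential(1 / (haz_nc - shared), n)
    z2 = rng.exponential(1 / (haz_ev - shared), n)
    z12 = rng.exponential(1 / shared, n)
    return np.minimum(z1, z12), np.minimum(z2, z12)

def itt_event_rate(n, haz_ev_on, haz_nc, shared, haz_ev_off, horizon=3.0):
    """Event rate by 'horizon' when non-compliers switch to the
    off-treatment hazard at the moment they stop complying."""
    t_nc, t_ev = marshall_olkin(n, haz_nc, haz_ev_on, shared)
    switched = t_nc < t_ev
    t = np.where(switched, t_nc + rng.exponential(1 / haz_ev_off, n), t_ev)
    return np.mean(t < horizon)

# Treated arm: hazard 0.10 on treatment, reverting to 0.20 after stopping.
p_trt = itt_event_rate(200_000, 0.10, 0.15, 0.05, 0.20)
p_ctl = 1 - np.exp(-0.20 * 3.0)           # control arm unaffected by stopping
za, zb = norm.isf(0.025), norm.isf(0.10)  # one-sided alpha 0.025, 90% power
pbar = (p_trt + p_ctl) / 2
n_arm = (za + zb) ** 2 * 2 * pbar * (1 - pbar) / (p_trt - p_ctl) ** 2
print(f"diluted rates {p_trt:.3f} vs {p_ctl:.3f} -> n/arm ≈ {np.ceil(n_arm):.0f}")
```

Raising the shared-shock rate makes non-compliance more informative (high-risk patients drop out sooner) and further dilutes the treated-arm rate, which is the power loss the paper's method is designed to anticipate.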

17.
In the analysis of semi-competing risks data, interest lies in estimation and inference with respect to a so-called non-terminal event, the observation of which is subject to a terminal event. Multi-state models are commonly used to analyse such data, with covariate effects on the transition/intensity functions typically specified via the Cox model and dependence between the non-terminal and terminal events specified, in part, by a unit-specific shared frailty term. To ensure identifiability, the frailties are typically assumed to arise from a parametric distribution, specifically a Gamma distribution with mean 1.0 and variance, say, σ². When the frailty distribution is misspecified, however, the resulting estimator is not guaranteed to be consistent, with the extent of asymptotic bias depending on the discrepancy between the assumed and true frailty distributions. In this paper, we propose a novel class of transformation models for semi-competing risks analysis that permit the non-parametric specification of the frailty distribution. To ensure identifiability, the class restricts to parametric specifications of the transformation and the error distribution; the latter are flexible, however, and cover a broad range of possible specifications. We also derive the semi-parametric efficient score under the complete data setting and propose a non-parametric score imputation method to handle right censoring; consistency and asymptotic normality of the resulting estimators are derived and small-sample operating characteristics evaluated via simulation. Although the proposed semi-parametric transformation model and non-parametric score imputation method are motivated by the analysis of semi-competing risks data, they are broadly applicable to any analysis of multivariate time-to-event outcomes in which a unit-specific shared frailty is used to account for correlation. Finally, the proposed model and estimation procedures are applied to a study of hospital readmission among patients diagnosed with pancreatic cancer.

18.
The win ratio has been studied methodologically and applied in data analysis and in designing clinical trials. Researchers have pointed out that the results depend on follow-up time and censoring time, which are sometimes used interchangeably. In this article, we distinguish between follow-up time and censoring time, show theoretically the impact of censoring on the win ratio, and illustrate the impact of follow-up time. We then point out that, if the treatment has a long-term benefit on a more important but less frequent endpoint (eg, death), the win ratio can show that benefit by following patients longer, avoiding the masking by more frequent but less important outcomes that occurs in conventional time-to-first-event analyses. For the situation of nonproportional hazards, we demonstrate that the win ratio can be a good alternative to methods such as landmark survival rate, restricted mean survival time, and weighted log-rank tests.
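For reference, a bare-bones unmatched win ratio on a death-then-hospitalization hierarchy (the usual pairwise rule; ties and pairs made inconclusive by censoring are discarded). The simulated data and follow-up scheme are hypothetical:

```python
import numpy as np

def compare(ti, ei, tj, ej):
    """+1 if i 'wins' (decidably survives/avoids the event longer), -1 if
    j wins, 0 if censoring makes the pair inconclusive on this endpoint."""
    if ej and (ti > tj or (not ei and ti >= tj)):
        return 1
    if ei and (tj > ti or (not ej and tj >= ti)):
        return -1
    return 0

def win_ratio(death_t, death_e, hosp_t, hosp_e, trt):
    """Unmatched win ratio: each treated-vs-control pair is compared first
    on survival, then on time to hospitalization (assumes losses > 0)."""
    wins = losses = 0
    for i in np.where(trt == 1)[0]:
        for j in np.where(trt == 0)[0]:
            w = compare(death_t[i], death_e[i], death_t[j], death_e[j])
            if w == 0:   # death inconclusive -> move down the hierarchy
                w = compare(hosp_t[i], hosp_e[i], hosp_t[j], hosp_e[j])
            wins += w == 1
            losses += w == -1
    return wins / losses

rng = np.random.default_rng(17)
n = 100
trt = np.repeat([1, 0], n // 2)
fu = rng.uniform(1, 4, n)                       # staggered follow-up
d = rng.exponential(np.where(trt, 6.0, 4.0))    # death times
h = rng.exponential(np.where(trt, 3.0, 2.5))    # hospitalization times
death_t, death_e = np.minimum(d, fu), d <= fu
hosp_t = np.minimum.reduce([h, d, fu])
hosp_e = (h <= fu) & (h <= d)
print(f"win ratio ≈ {win_ratio(death_t, death_e, hosp_t, hosp_e, trt):.2f}")
```

Extending `fu` in this simulation makes more death comparisons decidable, which is exactly the follow-up-time effect the abstract discusses.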

19.
In some exceptional circumstances, as in very rare diseases, nonrandomized one-arm trials are the sole source of evidence to demonstrate the efficacy and safety of a new treatment. The design of such studies needs a sound methodological approach in order to provide reliable information, and the determination of the appropriate sample size still represents a critical step of this planning process. As, to our knowledge, no method exists for sample size calculation in one-arm trials with a recurrent event endpoint, we propose here a closed sample size formula. It is derived assuming a mixed Poisson process, and it is based on the asymptotic distribution of the one-sample robust nonparametric test recently developed for the analysis of recurrent events data. The validity of this formula in managing a situation with heterogeneity of event rates, both in time and between patients, and a time-varying treatment effect was demonstrated with exhaustive simulation studies. Moreover, although the method requires the specification of a process for event generation, it seems to be robust under erroneous definition of this process, provided that the number of events at the end of the study is similar to the one assumed in the planning phase. The motivating clinical context is represented by a nonrandomized one-arm study of gene therapy in a very rare immunodeficiency in children (ADA-SCID), where a major endpoint is the recurrence of severe infections. Copyright © 2012 John Wiley & Sons, Ltd.
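The paper's closed formula is not reproduced here, but the mixed Poisson (gamma-frailty) setup it assumes is easy to simulate, so a planned sample size can at least be checked by brute force. The sketch below uses a simple one-sample z-test as a stand-in for the paper's robust nonparametric test; all rates are hypothetical:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(19)

def power_one_arm_recurrent(n, rate_null, rate_alt, frailty_var, t_fu=2.0,
                            alpha=0.025, n_sim=5000):
    """Empirical power of a one-sample z-test that the mean event rate is
    below rate_null, when counts follow a gamma-frailty (mixed) Poisson."""
    shape = 1.0 / frailty_var          # gamma frailty: mean 1, var frailty_var
    rejections = 0
    for _ in range(n_sim):
        frailty = rng.gamma(shape, 1.0 / shape, n)
        counts = rng.poisson(frailty * rate_alt * t_fu)
        # Null variance of the total count under the mixed Poisson model.
        mu0 = rate_null * t_fu
        var0 = n * (mu0 + frailty_var * mu0 ** 2)
        z = (counts.sum() - n * mu0) / np.sqrt(var0)
        rejections += z < -norm.isf(alpha)   # benefit means fewer events
    return rejections / n_sim

# Hypothetical planning scenario: historical rate 2/yr, hoped-for rate 1.2/yr.
for n in (20, 30, 40):
    print(n, f"{power_one_arm_recurrent(n, 2.0, 1.2, frailty_var=0.5):.3f}")
```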

20.
There is currently much interest in the use of surrogate endpoints in clinical trials and intermediate endpoints in epidemiology. Freedman et al. [Statist. Med. 11 (1992) 167] proposed the use of a validation ratio for judging the evidence of the validity of a surrogate endpoint. The method involves calculation of a confidence interval for the ratio. In this paper, I compare through computer simulations the performance of Fieller's method with that of the delta method for this calculation. In typical situations, the numerator and denominator of the ratio are highly correlated. I find that the Fieller method is superior to the delta method in coverage properties and in statistical power of the validation test. In addition, the formula for predicting statistical power seems to be much more accurate for the Fieller method than for the delta method. The simulations show that the role of validation analysis is likely to be limited in evaluating the reliability of using surrogate endpoints in clinical trials; however, it is likely to be a useful tool in epidemiology for identifying intermediate endpoints.
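Both intervals for a ratio of correlated normal estimates are short enough to write out; a sketch with illustrative numbers (in the validation-ratio setting, the numerator and denominator would be, e.g., adjusted and unadjusted treatment effect estimates):

```python
import numpy as np
from scipy.stats import norm

def delta_ci(a, b, va, vb, cab, level=0.95):
    """Delta-method CI for the ratio b/a of two correlated normal estimates."""
    z = norm.isf((1 - level) / 2)
    r = b / a
    se = np.sqrt(vb - 2 * r * cab + r ** 2 * va) / abs(a)
    return r - z * se, r + z * se

def fieller_ci(a, b, va, vb, cab, level=0.95):
    """Fieller CI: the set of rho with (b - rho*a)^2 <= z^2 * var(b - rho*a)."""
    z = norm.isf((1 - level) / 2)
    A = a ** 2 - z ** 2 * va
    B = a * b - z ** 2 * cab
    C = b ** 2 - z ** 2 * vb
    disc = B ** 2 - A * C
    if A <= 0 or disc < 0:
        return None    # unbounded or exclusive interval: denominator too noisy
    root = np.sqrt(disc)
    return (B - root) / A, (B + root) / A

# Noisy denominator with strong correlation, where the two intervals differ:
a, b = 1.0, 0.8
va, vb, cab = 0.16, 0.12, 0.11
print("delta:  ", delta_ci(a, b, va, vb, cab))
print("fieller:", fieller_ci(a, b, va, vb, cab))
```

When the denominator is precise the two intervals nearly coincide; Fieller's advantage in coverage appears precisely in the noisy-denominator, high-correlation regime the abstract calls typical.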
