Similar Documents (20 results)
1.
In many therapeutic areas, the identification and validation of surrogate endpoints is of prime interest to reduce the duration and/or size of clinical trials. Buyse et al. [Biostatistics 2000; 1:49-67] proposed a meta-analytic approach to the validation. In this approach, the validity of a surrogate is quantified by the coefficient of determination R²_trial obtained from a model that allows prediction of the treatment effect on the endpoint of interest (the 'true' endpoint) from the effect on the surrogate. One problem related to the use of R²_trial is the difficulty in interpreting its value. To address this difficulty, in this paper we introduce a new concept, the so-called surrogate threshold effect (STE), defined as the minimum treatment effect on the surrogate necessary to predict a non-zero effect on the true endpoint. One of its interesting features, apart from providing information relevant to the practical use of a surrogate endpoint, is its natural interpretation from a clinical point of view.
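As a concrete reading of the STE definition, the sketch below estimates it from trial-level effect pairs via ordinary least squares with normal-theory prediction intervals: the STE is the smallest surrogate effect at which the lower prediction limit for the true-endpoint effect crosses zero. This is a minimal illustration, not the paper's exact mixed-model procedure; the function name, the grid search, and the simulated data are all illustrative.

```python
import numpy as np
from scipy import stats

def surrogate_threshold_effect(alpha, beta, level=0.95):
    """Estimate the surrogate threshold effect (STE) from trial-level
    treatment effects: alpha on the surrogate, beta on the true endpoint.
    The STE is the smallest surrogate effect at which the prediction
    interval for the true-endpoint effect excludes zero (positive
    effects are assumed beneficial here)."""
    alpha, beta = np.asarray(alpha), np.asarray(beta)
    n = len(alpha)
    b1, b0 = np.polyfit(alpha, beta, 1)        # OLS: beta ~ b0 + b1*alpha
    resid = beta - (b0 + b1 * alpha)
    s2 = resid @ resid / (n - 2)               # residual variance
    abar, sxx = alpha.mean(), ((alpha - alpha.mean()) ** 2).sum()
    t = stats.t.ppf(1 - (1 - level) / 2, df=n - 2)

    def lower_pred(a0):                        # lower prediction limit at a0
        se = np.sqrt(s2 * (1 + 1 / n + (a0 - abar) ** 2 / sxx))
        return b0 + b1 * a0 - t * se

    # Scan a grid for the smallest surrogate effect whose prediction
    # interval for the true-endpoint effect lies entirely above zero.
    for a0 in np.linspace(0, 3 * alpha.max(), 2000):
        if lower_pred(a0) > 0:
            return a0
    return np.inf                              # surrogate never predictive

# Illustrative meta-analytic data: one (alpha, beta) pair per trial.
rng = np.random.default_rng(1)
a = rng.normal(0.5, 0.3, size=20)
b = 0.1 + 0.8 * a + rng.normal(0, 0.1, size=20)
print(f"STE estimate: {surrogate_threshold_effect(a, b):.3f}")
```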

2.
In many disease areas, commonly used long-term clinical endpoints are becoming increasingly difficult to implement due to long follow-up times and/or increased costs. Shorter-term surrogate endpoints are urgently needed to expedite drug development, and their evaluation requires robust and reliable statistical methodology to drive meaningful clinical conclusions about the strength of the relationship with the true long-term endpoint. This paper uses a simulation study to explore one such previously proposed method, based on information theory, for the evaluation of time-to-event surrogate and long-term endpoints, including the first examination within a meta-analytic setting of multiple clinical trials with such endpoints. The performance of the information-theoretic method is examined for various scenarios, including different dependence structures, surrogate endpoints, censoring mechanisms, treatment effects, trial and sample sizes, and surrogate and true endpoints with a natural time-ordering. The results allow us to conclude that, contrary to some findings in the literature, the approach provides estimates of surrogacy that may be substantially lower than the true relationship between the surrogate and true endpoints, and that rarely reach a level that would enable confidence in the strength of a given surrogate endpoint. As a result, care is needed in the assessment of time-to-event surrogate and true endpoints based only on this methodology.

3.

The linear mixed-effects model (Verbeke and Molenberghs, 2000) has become a standard tool for the analysis of continuous hierarchical data such as, for example, repeated measures or data from meta-analyses. However, in certain situations the model poses insurmountable computational problems. Precisely this was the experience of Buyse et al. (2000a), who proposed an estimation- and prediction-based approach for evaluating surrogate endpoints. Their approach requires fitting linear mixed models to data from several clinical trials. In doing so, these authors built on the earlier, single-trial based work by Prentice (1989), Freedman et al. (1992), and Buyse and Molenberghs (1998). While Buyse et al. (2000a) claim their approach has a number of advantages over the classical single-trial methods, a solution needs to be found for the computational complexity of the corresponding linear mixed model. In this paper, we propose and study a number of possible simplifications. This is done by means of a simulation study and by applying the various strategies to data from three clinical studies: the Pharmacological Therapy for Macular Degeneration Study Group (1997), the Ovarian Cancer Meta-analysis Project (1991), and the Corfu-A Study Group (1995).

4.
A composite endpoint combines multiple endpoints in one outcome. It is frequently used as the primary endpoint in randomized clinical trials. There are two main disadvantages associated with the use of composite endpoints: (a) in conventional analyses, all components are treated as equally important; and (b) in time-to-event analyses, the first event considered may not be the most important component. Pocock et al. (2012) introduced the win ratio method to address these disadvantages. This method has two alternative approaches: the matched-pair approach and the unmatched-pair approach. In the unmatched-pair approach, the confidence interval is constructed based on bootstrap resampling, and the hypothesis testing is based on the non-parametric method of Finkelstein and Schoenfeld (1999). Luo et al. (2015) developed a closed-form variance estimator of the win ratio for the unmatched-pair approach, based on a composite endpoint with two components and a specific algorithm determining winners, losers, and ties. We extend the unmatched-pair approach to provide a generalized analytical solution to both hypothesis testing and confidence interval construction for the win ratio, based on its logarithmic asymptotic distribution. This asymptotic distribution is derived via U-statistics, following Wei and Johnson (1985). We perform simulations comparing the confidence intervals constructed with our approach against those from bootstrap resampling and from Luo et al. We have also applied our approach to a phase III liver transplant study. This application and the simulation studies show that the win ratio can be a better statistical measure than the odds ratio when the order of importance among components matters, and that the methods of our approach and of Luo et al., although derived from large-sample theory, are not limited to large samples but also work well for relatively small sample sizes. Unlike Pocock et al. and Luo et al., our approach is a generalized analytical method, valid for any algorithm determining winners, losers, and ties. Copyright © 2016 John Wiley & Sons, Ltd.
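A minimal sketch of the unmatched-pairs win ratio with two prioritized time-to-event components (death compared first, then hospitalization), using a simple comparability rule and a bootstrap interval in place of the paper's generalized analytic variance. The winner/loser rule, data layout, and function names are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def pairwise_result(x, y):
    """Compare a treatment subject x with a control subject y on rows of
    (death_time, death_ind, hosp_time, hosp_ind). Death is compared first;
    hospitalization breaks ties. Returns +1 treatment win, -1 loss, 0 tie."""
    for it, ie in ((0, 1), (2, 3)):            # component order = priority
        t1, d1, t2, d2 = x[it], x[ie], y[it], y[ie]
        if d2 and t2 < t1:
            return +1                          # control had the event first
        if d1 and t1 < t2:
            return -1                          # treatment had the event first
    return 0                                   # undecided on both components

def win_ratio(treat, control):
    """Unmatched-pairs win ratio: total wins over total losses."""
    wins = losses = 0
    for x in treat:
        for y in control:
            r = pairwise_result(x, y)
            wins += r == +1
            losses += r == -1
    return wins / losses

def simulate(rng, n, scale):                   # toy two-component data
    death = rng.exponential(scale, n)
    hosp = rng.exponential(scale / 2, n)
    cens = rng.uniform(1, 4, n)
    return np.column_stack([np.minimum(death, cens), death <= cens,
                            np.minimum(hosp, cens), hosp <= cens])

rng = np.random.default_rng(7)
trt, ctl = simulate(rng, 60, 3.0), simulate(rng, 60, 2.0)
wr = win_ratio(trt, ctl)
# Bootstrap CI on the log scale (the paper derives an analytic interval).
logs = []
for _ in range(100):
    bt = trt[rng.integers(0, len(trt), len(trt))]
    bc = ctl[rng.integers(0, len(ctl), len(ctl))]
    logs.append(np.log(win_ratio(bt, bc)))
lo, hi = np.exp(np.percentile(logs, [2.5, 97.5]))
print(f"win ratio {wr:.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```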

5.
Sargent et al. (J Clin Oncol 23:8664–8670, 2005) concluded that 3-year disease-free survival (DFS) can be considered a valid surrogate (replacement) endpoint for 5-year overall survival (OS) in clinical trials of adjuvant chemotherapy for colorectal cancer. We address the question of whether this conclusion holds for trials involving classes of treatments other than those considered by Sargent et al. Additionally, we assess whether the 3-year cutpoint is optimal. To this end, we investigate whether the results reported by Sargent et al. could have been used to predict treatment effects in three centrally randomized adjuvant colorectal cancer trials performed by the Japanese Foundation for Multidisciplinary Treatment for Cancer (JFMTC) (Sakamoto et al., J Clin Oncol 22:484–492, 2004). Our analysis supports the conclusion of Sargent et al. and shows that using DFS at 2 or 3 years would be the best option for the prediction of OS at 5 years.

6.
Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold-standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter-term surrogate endpoints as substitutes for costly long-term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as their ability to reliably predict the unobserved treatment effect on the long-term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time-to-event surrogates for a time-to-event true endpoint. This two-stage meta-analytic copula method has been extensively studied for time-to-event surrogate endpoints with one event of interest, but thus far has not been explored for the assessment of surrogates that have multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors, including the strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where there are very little data on which to assess surrogacy. Results show that the two-stage meta-analytic copula method performs well under certain circumstances and could be considered useful in practice, but it demonstrates limitations that may prevent universal use.

7.
There is currently much interest in the use of surrogate endpoints in clinical trials and intermediate endpoints in epidemiology. Freedman et al. [Statist. Med. 11 (1992) 167] proposed the use of a validation ratio for judging the evidence for the validity of a surrogate endpoint. The method involves calculating a confidence interval for the ratio. In this paper, I compare, through computer simulations, the performance of Fieller's method with that of the delta method for this calculation. In typical situations, the numerator and denominator of the ratio are highly correlated. I find that the Fieller method is superior to the delta method in coverage properties and in the statistical power of the validation test. In addition, the formula for predicting statistical power seems to be much more accurate for the Fieller method than for the delta method. The simulations show that the role of validation analysis is likely to be limited in evaluating the reliability of using surrogate endpoints in clinical trials; however, it is likely to be a useful tool in epidemiology for identifying intermediate endpoints.
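For reference, here is a sketch of the two interval constructions being compared, for a ratio a/b of correlated normal estimates: the delta-method interval, and Fieller's interval obtained by inverting (a - rho*b)^2 <= z^2 * (v11 - 2*rho*v12 + rho^2*v22). The coverage experiment at the bottom uses arbitrary illustrative parameters, not the paper's simulation settings.

```python
import numpy as np
from scipy import stats

def delta_ci(a, b, v11, v22, v12, level=0.95):
    """Delta-method CI for the ratio a/b, with variances v11, v22 and
    covariance v12 of the (possibly correlated) estimates a and b."""
    z = stats.norm.ppf(1 - (1 - level) / 2)
    r = a / b
    se = np.sqrt(v11 - 2 * r * v12 + r**2 * v22) / abs(b)
    return r - z * se, r + z * se

def fieller_ci(a, b, v11, v22, v12, level=0.95):
    """Fieller CI: all rho with (a - rho*b)^2 <= z^2*(v11 - 2*rho*v12
    + rho^2*v22). Returns a finite interval only in the 'bounded' case,
    i.e. when the denominator is significantly different from zero."""
    z2 = stats.norm.ppf(1 - (1 - level) / 2) ** 2
    A = b**2 - z2 * v22
    B = -2 * (a * b - z2 * v12)
    C = a**2 - z2 * v11
    disc = B**2 - 4 * A * C
    if A <= 0 or disc < 0:
        return None                        # unbounded or exclusive interval
    root = np.sqrt(disc)
    return (-B - root) / (2 * A), (-B + root) / (2 * A)

# Quick coverage check with a highly correlated numerator and denominator
# (illustrative parameters only, not the paper's settings).
rng = np.random.default_rng(3)
true = 0.6
cover_f = cover_d = 0
for _ in range(2000):
    a, b = rng.multivariate_normal([true, 1.0],
                                   [[0.04, 0.03], [0.03, 0.04]])
    f = fieller_ci(a, b, 0.04, 0.04, 0.03)
    d = delta_ci(a, b, 0.04, 0.04, 0.03)
    cover_f += f is not None and f[0] <= true <= f[1]
    cover_d += d[0] <= true <= d[1]
print(f"coverage: Fieller {cover_f/2000:.3f}, delta {cover_d/2000:.3f}")
```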

8.
Variable selection is an important issue in all regression analyses, and in this paper we discuss it in the context of regression analysis of recurrent event data. Recurrent event data often occur in long-term studies in which individuals may experience the events of interest more than once, and their analysis has recently attracted a great deal of attention (Andersen et al., Statistical Models Based on Counting Processes, 1993; Cook and Lawless, Biometrics 52:1311–1323, 1996, The Analysis of Recurrent Event Data, 2007; Cook et al., Biometrics 52:557–571, 1996; Lawless and Nadeau, Technometrics 37:158–168, 1995; Lin et al., J R Stat Soc B 62:711–730, 2000). However, there seem to be no established approaches to variable selection for recurrent event data. For this problem, we adopt the idea behind the nonconcave penalized likelihood approach proposed in Fan and Li (J Am Stat Assoc 96:1348–1360, 2001) and develop a nonconcave penalized estimating function approach. The proposed approach selects variables and estimates regression coefficients simultaneously, and an algorithm is presented for this process. We show that the proposed approach performs as well as the oracle procedure, in that it yields the estimates as if the correct submodel were known. Simulation studies conducted to assess the performance of the proposed approach suggest that it works well in practical situations. The methodology is illustrated using data from a chronic granulomatous disease study.
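The penalized estimating function approach borrows the SCAD penalty of Fan and Li (2001). Below is a small sketch of that penalty and its derivative, which is the piece that enters a penalized estimating equation, using the conventional a = 3.7; the surrounding estimating-function machinery for recurrent events is not reproduced here.

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001), elementwise on |theta|.
    Linear near zero (like the lasso), quadratic blending in the middle,
    then constant, which removes the bias for large coefficients."""
    t = np.abs(theta)
    linear = lam * t
    quad = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))
    const = (a + 1) * lam**2 / 2
    return np.where(t <= lam, linear, np.where(t <= a * lam, quad, const))

def scad_derivative(theta, lam, a=3.7):
    """p'_lam(|theta|): equal to lam for small coefficients, decaying
    linearly to zero beyond a*lam."""
    t = np.abs(theta)
    return lam * ((t <= lam) + np.maximum(a * lam - t, 0)
                  / ((a - 1) * lam) * (t > lam))

theta = np.linspace(0, 4, 5)
print(scad_penalty(theta, lam=1.0))     # flattens out for |theta| > a*lam
print(scad_derivative(theta, lam=1.0))  # vanishes for large coefficients
```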

9.
For clinical trials with time-to-event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with planned interim and final analyses at a pre-specified number of deaths. Often, correlated surrogate information, such as time-to-progression (TTP) and progression-free survival, is also collected as secondary efficacy endpoints. It would be appealing to borrow strength from this surrogate information to improve the precision of the analysis-time prediction. Currently available methods for predicting analysis timings do not consider utilizing surrogate information. In this article, using OS and TTP as an example, a general parametric model for OS and TTP is proposed, under the assumption that disease progression can change the course of overall survival. Progression-free survival, related to both OS and TTP, is handled separately, as it can be derived from OS and TTP. The authors develop a prediction procedure using a Bayesian method and provide detailed implementation strategies under certain assumptions. Simulations are performed to evaluate the performance of the proposed method. An application to a real study is also provided. Copyright © 2015 John Wiley & Sons, Ltd.
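A deliberately simplified sketch of Bayesian analysis-time prediction using OS alone: exponential survival with a conjugate gamma prior, posterior draws of the hazard, and memoryless residual lifetimes for subjects still at risk. The paper's method jointly models OS and TTP and borrows strength from the surrogate; none of that, nor staggered future accrual, is captured here, and all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def predict_analysis_time(follow_up, event, target_deaths, n_draws=2000,
                          a0=0.01, b0=0.01):
    """Posterior-predictive draws of the additional time until
    `target_deaths` total deaths accrue, assuming exponential survival
    with a Gamma(a0, b0) prior on the hazard. `follow_up` holds each
    subject's current follow-up time; `event` is 1 if the subject died.
    Future enrolment is ignored, and only OS is used (no TTP borrowing)."""
    deaths = int(event.sum())
    need = target_deaths - deaths              # further deaths required
    at_risk = follow_up[event == 0]
    draws = np.empty(n_draws)
    for i in range(n_draws):
        # Conjugate posterior draw for the exponential hazard.
        lam = rng.gamma(a0 + deaths, 1.0 / (b0 + follow_up.sum()))
        # Memoryless residual lifetimes for subjects still at risk.
        resid = rng.exponential(1.0 / lam, size=len(at_risk))
        draws[i] = np.sort(resid)[need - 1]    # time of the need-th death
    return np.percentile(draws, [2.5, 50, 97.5])

# Illustrative interim data: 300 subjects with administrative censoring.
n = 300
t, c = rng.exponential(2.0, n), rng.uniform(0.5, 3.0, n)
fu, ev = np.minimum(t, c), (t <= c).astype(float)
target = int(ev.sum()) + 50                    # 50 more deaths wanted
print(predict_analysis_time(fu, ev, target))   # [lower, median, upper]
```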

10.
Recurrent event data occur in many clinical and observational studies (Cook and Lawless, Analysis of Recurrent Event Data, 2007), and in these situations there may exist a terminal event, such as death, that is related to the recurrent event of interest (Ghosh and Lin, Biometrics 56:554–562, 2000; Wang et al., J Am Stat Assoc 96:1057–1065, 2001; Huang and Wang, J Am Stat Assoc 99:1153–1165, 2004; Ye et al., Biometrics 63:78–87, 2007). In addition, there may sometimes exist more than one type of recurrent event; that is, one faces multivariate recurrent event data with a dependent terminal event (Chen and Cook, Biostatistics 5:129–143, 2004). It is apparent that the analysis of such data has to take into account the dependence both among the different types of recurrent events and between the recurrent and terminal events. In this paper, we propose a joint modeling approach for regression analysis of such data, and both finite-sample and asymptotic properties of the resulting estimates of the unknown parameters are established. The methodology is applied to a set of bivariate recurrent event data arising from a study of leukemia patients.

11.
Missing data in clinical trials can result in biased treatment effect estimates and tests if the analysis includes only the observed data. Two simple methods of compensating for this potential bias in trials with a binary endpoint were suggested by Wittes et al. (Statist. Med. 8 (1989) 415–425). We study the statistical properties of these procedures and show that they are robust against certain model departures.

12.
The individual causal association (ICA) has recently been introduced as a metric of surrogacy in a causal-inference framework. The ICA is defined on the unit interval and quantifies the association between the individual causal effect on the surrogate (ΔS) and on the true (ΔT) endpoint. In addition, the ICA offers a general assessment of the surrogate's predictive value, taking value 1 when there is a deterministic relationship between ΔT and ΔS, and value 0 when the two causal effects are independent. However, when one moves away from these two extreme scenarios, the interpretation of the ICA becomes challenging. In the present work, a new metric of surrogacy, the minimum probability of a prediction error (PPE), is introduced for the setting where both endpoints are binary, i.e., the probability of erroneously predicting the value of ΔT using ΔS. Although the PPE has a more straightforward interpretation than the ICA, its magnitude is bounded above by a quantity that depends on the true endpoint. For this reason, the reduction in prediction error (RPE) attributed to the surrogate is defined. The RPE always lies in the unit interval, taking value 1 if prediction is perfect and 0 if ΔS conveys no information on ΔT. The methodology is illustrated using data from two clinical trials, and a user-friendly R package, Surrogate, is provided to carry out the validation exercise.
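Reading the PPE as the Bayes error of predicting ΔT from ΔS, and the RPE as the proportional reduction relative to predicting ΔT from its marginal distribution alone, the computation from a (hypothetical, fully known) joint distribution of the individual causal effects takes a few lines. The paper's actual estimation under partial identifiability is far more involved; this sketch only illustrates the definitions as described above.

```python
import numpy as np

def ppe_rpe(joint):
    """PPE and RPE from the joint pmf of the individual causal effects.
    `joint[i, j]` = P(DeltaT = t_i, DeltaS = s_j), with both effects
    taking values in {-1, 0, 1} for binary endpoints (a 3x3 table).
    PPE is the Bayes error of predicting DeltaT from DeltaS; RPE is the
    proportional reduction versus predicting DeltaT marginally."""
    joint = np.asarray(joint, dtype=float)
    joint /= joint.sum()                       # normalize defensively
    # Best achievable error using the surrogate: for each DeltaS column,
    # predict the most probable DeltaT value.
    ppe = 1.0 - joint.max(axis=0).sum()
    # Error without the surrogate: predict the marginal mode of DeltaT.
    ppe0 = 1.0 - joint.sum(axis=1).max()
    return ppe, (ppe0 - ppe) / ppe0

# Illustrative joint distribution with a strong DeltaT-DeltaS association.
joint = [[0.20, 0.03, 0.01],
         [0.04, 0.40, 0.04],
         [0.01, 0.03, 0.24]]
ppe, rpe = ppe_rpe(joint)
print(f"PPE = {ppe:.3f}, RPE = {rpe:.3f}")
```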

13.
We present an introductory survey of the use of surrogates in cancer research, in particular in clinical trials. The concept of a surrogate endpoint is introduced and contrasted with that of a biomarker. It is emphasized that a surrogate endpoint is not universal for an indication but will depend on the mechanism of treatment. We discuss the measures of validity of a surrogate and give examples of both cancer surrogates and biomarkers on the path to surrogacy. Circumstances in which a surrogate endpoint may actually be preferred to the clinical endpoint are described. We provide pointers to the recent substantive literature on surrogates. Copyright © 2009 John Wiley & Sons, Ltd.

14.
The linear transformation model is a semiparametric model which contains the Cox proportional hazards model and the proportional odds model as special cases. Cai et al. (Biometrika 87:867–878, 2000) proposed an inference procedure for the linear transformation model with correlated censored observations. In this article, we develop formal and graphical model-checking techniques for linear transformation models based on cumulative sums of martingale-type residuals. The proposed method is illustrated with data from a clinical trial.

15.
To design a phase III study with a final endpoint and calculate the required sample size for the desired probability of success, we need a good estimate of the treatment effect on that endpoint. It is prudent to fully utilize all available information, including historical and phase II information on the treatment as well as external data on other treatments. It is not uncommon for a phase II study to use a surrogate endpoint as the primary endpoint and to have no or limited data on the final endpoint. On the other hand, external information from other studies of other treatments on the surrogate and final endpoints may be available to establish a relationship between the treatment effects on the two endpoints. Through this relationship, making full use of the surrogate information may enhance the estimate of the treatment effect on the final endpoint. In this research, we propose a bivariate Bayesian analysis approach to deal with the problem comprehensively. A dynamic borrowing approach is considered to regulate the amount of historical data and surrogate information borrowed, based on the level of consistency. A much simpler frequentist method is also discussed. Simulations are conducted to compare the performance of the different approaches. An example is used to illustrate the application of the methods.

16.
Growth curve analysis is valuable in longitudinal studies, where the pattern of response variables measured repeatedly over time is of interest yet unknown. In this article, we propose generalized growth curve models under a polynomial regression framework and offer a complete process that identifies parsimonious growth curves for different groups of interest and compares the curves. A higher polynomial degree generally provides a more flexible regression, yet in practice it may yield a complicated, overfitted model. Therefore, we employ a model selection procedure that consistently chooses the optimal polynomial degree. A quadratic inference function (Qu et al., 2000) is considered for estimating the regression parameters, and estimation efficiency is improved by incorporating the within-subject correlation commonly present in longitudinal data. In biomedical studies, it is of particular interest to compare multiple treatments and identify an effective one. We further conduct a hypothesis test that assesses the equality of the growth curves through an asymptotic chi-square test statistic. The proposed methodology is applied to a randomized controlled longitudinal dataset on depression. The effectiveness of our procedure is also confirmed with simulation studies.
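As a stand-in for the consistent model-selection step described above, the sketch below picks a polynomial degree by BIC from pooled ordinary least squares; the authors' procedure additionally exploits the within-subject correlation via the quadratic inference function, which is not reproduced here. Data and function names are illustrative.

```python
import numpy as np

def select_degree(x, y, max_degree=6):
    """Choose a parsimonious polynomial degree for a growth curve by BIC.
    Plain OLS + BIC is used here; the paper's QIF machinery, which
    accounts for within-subject correlation, is omitted."""
    n = len(y)
    best = None
    for d in range(1, max_degree + 1):
        coef = np.polyfit(x, y, d)
        rss = np.sum((y - np.polyval(coef, x)) ** 2)
        bic = n * np.log(rss / n) + (d + 1) * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, d, coef)
    return best[1], best[2]

# Illustrative longitudinal-style data: a quadratic trend plus noise.
rng = np.random.default_rng(5)
x = np.tile(np.arange(0, 10.0), 30)            # 30 subjects, 10 visits
y = 1.0 + 0.5 * x - 0.04 * x**2 + rng.normal(0, 0.3, x.size)
d, coef = select_degree(x, y)
print("selected degree:", d)
```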

17.
The use of surrogate endpoints, now widely popular, was introduced in medical research to reduce the experimental time required for the approval of a drug. Through cost and time savings, surrogate endpoints can benefit drug developers. We obtain an expression for the proportional reduction in the number of true-endpoint observations, at the expense of additional surrogate observations, needed to achieve a fixed power when comparing two treatments. We frame the discussion in a two-treatment setup with the odds ratio as the measure of treatment difference. We illustrate the methodology with a real dataset.

18.
This work develops a new methodology for discriminating between models for interval-censored data based on bootstrap residual simulation, observing the deviance difference of one model relative to another, following Hinde (1992). Generally, this sort of data can generate a large number of tied observations and, in this case, survival time can be regarded as discrete. Therefore, the Cox proportional hazards model for grouped data (Prentice & Gloeckler, 1978) and the logistic model (Lawless, 1982) can be fitted by means of generalized linear models. Whitehead (1989) considered censoring to be an indicator variable with a binomial distribution and fitted the Cox proportional hazards model using the complementary log-log link function; in addition, a logistic model can be fitted using the logit link. The proposed methodology arises as an alternative to the score tests developed by Colosimo et al. (2000), in which such models are obtained for discrete binary data as particular cases of the Aranda-Ordaz asymmetric family; those tests are based on the link functions used to generate the fit. The example that motivated this study was a dataset from an experiment carried out on a flax cultivar planted on four substrata susceptible to the pathogen Fusarium oxysporum. The response variable, the time until blighting, was observed at intervals over 52 days. The results were compared using the model fits and AIC values.

19.
Consider a subject entered on a clinical trial in which the major endpoint is a time metric such as death or time to reach a well-defined event. During the observational period the subject may experience an intermediate clinical event. The intermediate clinical event may induce a change in the survival distribution. We consider models for the one- and two-sample problems. The model for the one-sample problem enables one to test whether the occurrence of the intermediate event changed the survival distribution. This model provides a way of carrying out a non-randomized clinical trial to determine whether a therapy has benefit. The two-sample problem considers testing whether the probability distributions, with and without an intermediate event, are the same. Statistical tests are derived using a semi-Markov or a time-dependent mixture model. Simulation studies are carried out to compare these new procedures with the log-rank, stratified log-rank, and landmark tests. The new tests appear to have uniformly greater power than these competitor tests. The methods are applied to a randomized clinical trial carried out by the AIDS Clinical Trials Group (ACTG) which compared low versus high doses of zidovudine (AZT).

20.
Many assumptions, including assumptions about treatment effects, are made at the design stage of a clinical trial for power and sample size calculations. It is desirable to check these assumptions during the trial using blinded data. Methods for sample size re-estimation based on blinded data analyses have been proposed for normal and binary endpoints. However, it is debated whether any reliable estimate of the treatment effect can be obtained in a typical clinical trial situation. In this paper, we consider the case of a survival endpoint and investigate the feasibility of estimating the treatment effect in an ongoing trial without unblinding. We incorporate information from a surrogate endpoint and investigate three estimation procedures, including a classification method and two expectation–maximization (EM) algorithms. Simulations and a clinical trial example are used to assess the performance of the procedures. Our studies show that the EM algorithms depend heavily on the initial estimates of the model parameters. Despite utilizing a surrogate endpoint, all three methods show large variation in the treatment effect estimates and hence fail to provide a precise conclusion about the treatment effect. Copyright © 2012 John Wiley & Sons, Ltd.
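A toy version of the EM idea for blinded data: pooled event times from a 1:1 trial are treated as a 50:50 mixture of two exponentials with unknown hazards, and EM tries to recover the hazard ratio without unblinding. Censoring and the surrogate endpoint are omitted, so this is only a sketch; running it from several starting values echoes the abstract's finding that the EM results depend heavily on the initial estimates.

```python
import numpy as np

def em_blinded_hazards(t, lam_init, frac=0.5, iters=500):
    """EM for a 50:50 two-component exponential mixture, as a stand-in
    for blinded treatment-effect estimation with a survival endpoint
    (censoring and the surrogate endpoint are omitted for brevity).
    Returns the two estimated hazards; their ratio plays the role of
    the treatment effect."""
    l1, l2 = lam_init
    for _ in range(iters):
        # E-step: posterior probability each blinded time came from arm 1.
        d1 = frac * l1 * np.exp(-l1 * t)
        d2 = (1 - frac) * l2 * np.exp(-l2 * t)
        w = d1 / (d1 + d2)
        # M-step: weighted exponential maximum-likelihood updates.
        l1 = w.sum() / (w * t).sum()
        l2 = (1 - w).sum() / ((1 - w) * t).sum()
    return l1, l2

# Blinded pooled sample: true hazard ratio 2 (hazards 1.0 vs 0.5).
rng = np.random.default_rng(2)
t = np.concatenate([rng.exponential(1.0, 200), rng.exponential(2.0, 200)])
# Starting values matter; a symmetric start never separates the arms.
for init in [(1.2, 0.4), (0.8, 0.7), (0.5, 0.5)]:
    l1, l2 = em_blinded_hazards(t, init)
    print(init, "-> estimated hazard ratio", round(l1 / l2, 2))
```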
