Similar Documents
20 similar documents found
1.
Before a surrogate end point can replace a final (true) end point in the evaluation of an experimental treatment, it must be formally 'validated'. The validation will typically require large numbers of observations. It is therefore useful to consider situations in which data are available from several randomized experiments. For two normally distributed end points Buyse and co-workers suggested a new definition of validity in terms of the quality of both trial level and individual level associations between the surrogate and true end points. This paper extends this approach to the important case of two failure time end points, using bivariate survival modelling. The method is illustrated by using two actual sets of data from cancer clinical trials.

2.
The objective of this paper is to extend the surrogate endpoint validation methodology proposed by Buyse et al. (2000) to the case of a longitudinally measured surrogate marker when the endpoint of interest is time to some key clinical event. A joint model for longitudinal and event time data is required. To this end, the model formulation of Henderson et al. (2000) is adopted. The methodology is applied to a set of two randomized clinical trials in advanced prostate cancer to evaluate the usefulness of prostate-specific antigen (PSA) level as a surrogate for survival.

3.
In many therapeutic areas, the identification and validation of surrogate endpoints is of prime interest to reduce the duration and/or size of clinical trials. Buyse et al. [Biostatistics 2000; 1:49-67] proposed a meta-analytic approach to the validation. In this approach, the validity of a surrogate is quantified by the coefficient of determination R²_trial obtained from a model, which allows for prediction of the treatment effect on the endpoint of interest ('true' endpoint) from the effect on the surrogate. One problem related to the use of R²_trial is the difficulty in interpreting its value. To address this difficulty, in this paper we introduce a new concept, the so-called surrogate threshold effect (STE), defined as the minimum treatment effect on the surrogate necessary to predict a non-zero effect on the true endpoint. One of its interesting features, apart from providing information relevant to the practical use of a surrogate endpoint, is its natural interpretation from a clinical point of view.
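The STE idea admits a simple computational sketch: regress trial-level effects on the true endpoint against effects on the surrogate, then scan for the smallest surrogate effect whose prediction interval excludes zero. The following is a minimal illustration with hypothetical trial-level data; the function names and the plain unadjusted OLS are our assumptions, not the estimation-error-adjusted meta-analytic model of the paper.

```python
import math

def ols(x, y):
    """Plain least squares of y on x; returns intercept, slope,
    residual variance, mean(x), Sxx, and R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    rss = sum((yi - intercept - slope * xi) ** 2 for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return intercept, slope, rss / (n - 2), mx, sxx, 1 - rss / syy

def surrogate_threshold_effect(alphas, betas, t_crit=2.306, step=0.01):
    """Smallest surrogate effect whose lower 95% prediction bound on the
    true-endpoint effect exceeds zero (t_crit ~ t_{0.975, n-2} for n=10)."""
    a, b, s2, mx, sxx, r2 = ols(alphas, betas)
    n = len(alphas)
    x0 = 0.0
    while x0 < 10.0:
        se = math.sqrt(s2 * (1 + 1 / n + (x0 - mx) ** 2 / sxx))
        if a + b * x0 - t_crit * se > 0:
            return x0, r2
        x0 += step
    return None, r2

# Hypothetical trial-level effects from 10 randomized trials.
alphas = [0.1 * i for i in range(1, 11)]
noise = [0.02, -0.01, 0.03, -0.02, 0.01, 0.0, -0.03, 0.02, -0.01, 0.01]
betas = [1.5 * a + e for a, e in zip(alphas, noise)]
ste, r2_trial = surrogate_threshold_effect(alphas, betas)
```

With a strong trial-level association, the STE is small; a weak association pushes the STE up or makes it undefined, which is exactly its clinical appeal as an interpretable summary.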

4.
When a treatment has a positive average causal effect (ACE) on an intermediate variable or surrogate end point which in turn has a positive ACE on a true end point, the treatment may have a negative ACE on the true end point due to the presence of unobserved confounders, which is called the surrogate paradox. A criterion for surrogate end points based on ACEs has recently been proposed to avoid the surrogate paradox. For a continuous or ordinal discrete end point, the distributional causal effect (DCE) may be a more appropriate measure for a causal effect than the ACE. We discuss criteria for surrogate end points based on DCEs. We show that commonly used models, such as generalized linear models and Cox's proportional hazard models, can make the sign of the DCE of the treatment on the true end point determinable by the sign of the DCE of the treatment on the surrogate even if the models include unobserved confounders. Furthermore, for a general distribution without any assumption of parametric models, we give a sufficient condition for a distributionally consistent surrogate and prove that it is almost necessary.

5.
In a recent paper Day and Duffy proposed a strategy for designing a randomized trial of different breast cancer screening schedules. Their strategy was based on the use of predictors of mortality determined by patients' factors at diagnosis as surrogates for true mortality. On the basis of the Prentice criterion for validity of a surrogate end point, and data from earlier studies of breast cancer case survival, they showed that, not only would the trial require a much shorter follow-up, but also that the information (i.e. inverse variance) for evaluating a treatment effect on mortality would be greater by a factor of nearly 3 if the predictors of mortality were used, compared with a trial in which mortality was actually observed. Although these results are technically correct, we believe that the conceptual strategy on which they are based is flawed, and that the fundamental problem is the Prentice criterion itself. In this paper the technical issues are discussed in detail, and an alternative structure for evaluating the validity of surrogate end points is proposed.

6.
Many biomedical studies involve the analysis of multiple events. The dependence between the times to these end points is often of scientific interest. We investigate a situation when one end point is subject to censoring by the other. The model assumptions of Day and co-workers and Fine and co-workers are extended to more general structures where the level of association may vary with time. Two types of estimating function are proposed. Asymptotic properties of the proposed estimators are derived. Their finite sample performance is studied via simulations. The inference procedures are applied to two real data sets for illustration.

7.
We extend the bivariate Wiener process considered by Whitmore and co-workers and model the joint process of a marker and health status. The health status process is assumed to be latent or unobservable. The time to reach the primary end point or failure (death, onset of disease, etc.) is the time when the latent health status process first crosses a failure threshold level. Inferences for the model are based on two kinds of data: censored survival data and marker measurements. Covariates, such as treatment variables, risk factors and base-line conditions, are related to the model parameters through generalized linear regression functions. The model offers a much richer potential for the study of treatment efficacy than do conventional models. Treatment effects can be assessed in terms of their influence on both the failure threshold and the health status process parameters. We derive an explicit formula for the prediction of residual failure times given the current marker level. Also we discuss model validation. This model does not require the proportional hazards assumption and hence can be widely used. To demonstrate the usefulness of the model, we apply the methods in analysing data from the protocol 116a of the AIDS Clinical Trials Group.
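Under such a latent Wiener model, the marginal time to first threshold crossing follows an inverse Gaussian distribution, which gives a closed-form survival function. The following is a minimal sketch of that one-dimensional component only, ignoring the marker process and the covariate regression of the full bivariate model; the parameter names are ours.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def first_passage_survival(t, barrier, drift, sigma=1.0):
    """P(T > t), where T is the first time a Wiener process with the
    given drift (toward the barrier) and diffusion sigma travels the
    distance `barrier`; T is inverse Gaussian with mean barrier/drift."""
    mu = barrier / drift            # mean first-passage time
    lam = (barrier / sigma) ** 2    # inverse Gaussian shape parameter
    cdf = norm_cdf(math.sqrt(lam / t) * (t / mu - 1.0)) \
        + math.exp(2.0 * lam / mu) * norm_cdf(-math.sqrt(lam / t) * (t / mu + 1.0))
    return 1.0 - cdf
```

Treatment effects enter such a model by shifting the drift or the barrier, which is what makes the first-passage formulation richer than a direct hazard regression.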

8.
A stochastic model with exponential components is used to describe our data collected from a phase III cancer clinical trial. Criteria which guarantee that disease-free survival (DFS) can be used as a surrogate for overall survival are explored under this model. We examine several colorectal adjuvant clinical trials and find that these conditions are not satisfied. The relationship between the hazard ratio of DFS for an active treatment versus a control treatment and the cumulative hazard ratio of survival for the same two treatments is then explored. An almost linear relationship is found, such that a hazard ratio for DFS of less than a threshold R corresponds to a non-null treatment effect on survival. The threshold value R is determined for our colorectal adjuvant trial data. Based on this relationship, a one-sided test of equal hazard rates of survival is equivalent to a test of whether the hazard ratio of DFS is smaller than R. This approach assumes that recurrence information is unbiasedly and accurately assessed, an assumption which is sometimes difficult to ensure for multicenter clinical trials, particularly for interim analyses.

9.
Sargent et al. (J Clin Oncol 23:8664–8670, 2005) concluded that 3-year disease-free survival (DFS) can be considered a valid surrogate (replacement) endpoint for 5-year overall survival (OS) in clinical trials of adjuvant chemotherapy for colorectal cancer. We address the question whether the conclusion holds for trials involving other classes of treatments than those considered by Sargent et al. Additionally, we assess if the 3-year cutpoint is an optimal one. To this aim, we investigate whether the results reported by Sargent et al. could have been used to predict treatment effects in three centrally randomized adjuvant colorectal cancer trials performed by the Japanese Foundation for Multidisciplinary Treatment for Cancer (JFMTC) (Sakamoto et al. J Clin Oncol 22:484–492, 2004). Our analysis supports the conclusion of Sargent et al. and shows that using DFS at 2 or 3 years would be the best option for the prediction of OS at 5 years.

10.
In longitudinal clinical trials, when outcome variables at later time points are only defined for patients who survive to those times, the evaluation of the causal effect of treatment is complicated. In this paper, we describe an approach that can be used to obtain the causal effect of three treatment arms with ordinal outcomes in the presence of death using a principal stratification approach. We introduce a set of flexible assumptions to identify the causal effect and implement a sensitivity analysis for non-identifiable assumptions which we parameterize parsimoniously. Methods are illustrated on quality of life data from a recent colorectal cancer clinical trial.

11.
The use of surrogate end points has become increasingly common in medical and biological research. This is primarily because, in many studies, the primary end point of interest is too expensive or too difficult to obtain. There is now a large volume of statistical methods for analysing studies with surrogate end point data. However, to our knowledge, there has not been a comprehensive review of these methods to date. This paper reviews some existing methods and summarizes the strengths and weaknesses of each method. It also discusses the assumptions that are made by each method and assesses how likely these assumptions are to hold in practice.

12.
In the development of many diseases there are often associated variables which continuously measure the progress of an individual towards the final expression of the disease (failure). Such variables are stochastic processes, here called marker processes, and, at a given point in time, they may provide information about the current hazard and subsequently on the remaining time to failure. Here we consider a simple additive model for the relationship between the hazard function at time t and the history of the marker process up until time t. We develop some basic calculations based on this model. Interest is focused on statistical applications for markers related to estimation of the survival distribution of time to failure, including (i) the use of markers as surrogate responses for failure with censored data, and (ii) the use of markers as predictors of the time elapsed since onset of a survival process in prevalent individuals. Particular attention is directed to potential gains in efficiency incurred by using marker process information.
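An additive structure of the form h(t) = h0(t) + b·Z(t) makes survival probabilities straightforward to compute once a marker path is given, since S(t) = exp(-∫₀ᵗ [h0(u) + b·Z(u)] du). A small numerical sketch under that assumed form (the function names and the left Riemann sum are our choices, not the paper's):

```python
import math

def survival_given_marker(t, h0, b, marker, dt=0.01):
    """S(t) = exp(-integral_0^t [h0(u) + b*Z(u)] du) for a given
    baseline hazard h0 and marker path Z, by a left Riemann sum."""
    n = int(round(t / dt))
    cum_hazard = sum((h0(k * dt) + b * marker(k * dt)) * dt for k in range(n))
    return math.exp(-cum_hazard)
```

For example, a constant baseline hazard of 0.1 with no marker effect (b = 0) recovers exponential survival, while a linearly rising marker with b > 0 accelerates failure.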

13.
There is currently much interest in the use of surrogate endpoints in clinical trials and intermediate endpoints in epidemiology. Freedman et al. [Statist. Med. 11 (1992) 167] proposed the use of a validation ratio for judging the evidence of the validity of a surrogate endpoint. The method involves calculation of a confidence interval for the ratio. In this paper, I compare through computer simulations the performance of Fieller's method with the delta method for this calculation. In typical situations, the numerator and denominator of the ratio are highly correlated. I find that the Fieller method is superior to the delta method in coverage properties and in statistical power of the validation test. In addition, the formula for predicting statistical power seems to be much more accurate for the Fieller method than for the delta method. The simulations show that the role of validation analysis is likely to be limited in evaluating the reliability of using surrogate endpoints in clinical trials; however, it is likely to be a useful tool in epidemiology for identifying intermediate endpoints.
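The two interval constructions being compared can be sketched for a generic ratio estimate y/x with variances vy, vx and covariance vxy. This is a generic illustration of Fieller's theorem versus the delta method, not the paper's specific validation ratio from Freedman et al.

```python
import math

def fieller_ci(y, x, vy, vx, vxy, z=1.96):
    """Fieller 95% CI for the ratio y/x of two correlated normal
    estimates; returns None when the denominator is not significantly
    nonzero (the interval is then unbounded)."""
    a = x * x - z * z * vx
    b = y * x - z * z * vxy
    c = y * y - z * z * vy
    disc = b * b - a * c
    if a <= 0 or disc < 0:
        return None
    root = math.sqrt(disc)
    return ((b - root) / a, (b + root) / a)

def delta_ci(y, x, vy, vx, vxy, z=1.96):
    """Delta-method 95% CI for y/x."""
    r = y / x
    var = (vy - 2.0 * r * vxy + r * r * vx) / (x * x)
    half = z * math.sqrt(var)
    return (r - half, r + half)

# Hypothetical correlated estimates: numerator 2.0, denominator 4.0,
# with strong positive covariance, as is typical for validation ratios.
fci = fieller_ci(2.0, 4.0, 0.04, 0.04, 0.03)
dci = delta_ci(2.0, 4.0, 0.04, 0.04, 0.03)
```

Unlike the delta interval, the Fieller interval is exact under normality and can be asymmetric or unbounded, which is what drives its better coverage when the denominator is imprecise.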

14.
Panel count data often occur in a long-term study where the primary end point is the time to a specific event and each subject may experience multiple recurrences of this event. Furthermore, suppose that it is not feasible to keep subjects under observation continuously and the numbers of recurrences for each subject are only recorded at several distinct time points over the study period. Moreover, the set of observation times may vary from subject to subject. In this paper, regression methods, which are derived under simple semiparametric models, are proposed for the analysis of such longitudinal count data. In particular, we consider the situation when both observation and censoring times may depend on covariates. The new procedures are illustrated with data from a well-known cancer study.

15.
The efficient use of surrogate or auxiliary information has been investigated within both model-based and design-based approaches to data analysis, particularly in the context of missing data. Here we consider the use of such data in epidemiological studies of disease incidence in which surrogate measures of disease status are available for all subjects at two time points, but definitive diagnoses are available only in stratified subsamples. We briefly review methods for the analysis of two-phase studies of disease prevalence at a single time point, and we discuss the extension of four of these methods to the analysis of incidence studies. Their performance is compared with special reference to a study of the incidence of senile dementia.

16.
Traditionally, in a clinical development plan, phase II trials are relatively small and can be expected to result in a large degree of uncertainty in the estimates on which phase III trials are planned. Phase II trials are also used to explore appropriate primary efficacy endpoint(s) or patient populations. When the biology of the disease and the pathogenesis of disease progression are well understood, the phase II and phase III studies may be performed in the same patient population with the same primary endpoint, e.g. efficacy measured by HbA1c in non-insulin-dependent diabetes mellitus trials with a treatment duration of at least three months. In disease areas where molecular pathways are not well established, or where the clinical outcome endpoint may not be observed in a short-term study (e.g. mortality in cancer or AIDS trials), the treatment effect may be postulated through the use of an intermediate surrogate endpoint in phase II trials. However, in many cases, we generally explore the appropriate clinical endpoint in the phase II trials. An important question is how much of the effect observed on the surrogate endpoint in the phase II study can be translated into the clinical effect in the phase III trial. Another question is how much uncertainty remains in phase III trials. In this work, we study the utility of adaptation by design (not by statistical test) in the sense of adapting the phase II information for planning the phase III trials. That is, we investigate the impact of using various phase II effect size estimates on the sample size planning for phase III trials. In general, if the point estimate from the phase II trial is used for planning, it is advisable to size the phase III trial by choosing a smaller alpha level or a higher power level. Adaptation via the lower limit of the one-standard-deviation confidence interval from the phase II trial appears to be a reasonable choice, since it balances well between the empirical power of the launched trials and the proportion of trials not launched, if a threshold lower than the true effect size of the phase III trial can be chosen for determining whether the phase III trial is to be launched.
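The planning trade-off described above can be made concrete with a standard two-sample sample-size formula. The effect estimate, its standard error, and the function name below are hypothetical illustrations, not values from the paper:

```python
import math
from statistics import NormalDist

def per_arm_n(delta, sd=1.0, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-sided two-sample z-test of means
    with common standard deviation sd and true difference delta."""
    z = NormalDist().inv_cdf
    return math.ceil(2.0 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2)

# Hypothetical phase II result: effect estimate 0.40 with standard error 0.10.
delta_hat, se = 0.40, 0.10
n_point = per_arm_n(delta_hat)              # plan at the point estimate
n_conservative = per_arm_n(delta_hat - se)  # plan at the lower one-SD bound
```

Discounting the phase II estimate by one standard error inflates the phase III size substantially, which is the cost of insuring against an optimistically biased phase II effect.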

17.
Resolving paradoxes involving surrogate end points
We define a surrogate end point as a measure or indicator of a biological process that is obtained sooner, at less cost or less invasively than a true end point of health outcome and is used to make conclusions about the effect of an intervention on the true end point. Prentice presented criteria for valid hypothesis testing of a surrogate end point that replaces a true end point. For using the surrogate end point to estimate the predicted effect of intervention on the true end point, Day and Duffy assumed the Prentice criterion and arrived at two paradoxical results: the estimated predicted intervention effect by using a surrogate can give more precise estimates than the usual estimate of the intervention effect by using the true end point and the variance is greatest when the surrogate end point perfectly predicts the true end point. Begg and Leung formulated similar paradoxes and concluded that they indicate a flawed conceptual strategy arising from the Prentice criterion. We resolve the paradoxes as follows. Day and Duffy compared a surrogate-based estimate of the effect of intervention on the true end point with an estimate of the effect of intervention on the true end point that uses the true end point. Their paradox arose because the former estimate assumes the Prentice criterion whereas the latter does not. If both or neither of these estimates assume the Prentice criterion, there is no paradox. The paradoxes of Begg and Leung, although similar to those of Day and Duffy, arise from ignoring the variability of the parameter estimates irrespective of the Prentice criterion and disappear when the variability is included. Our resolution of the paradoxes provides a firm foundation for future meta-analytic extensions of the approach of Day and Duffy.

18.
In the USA cancer as a whole is the second leading cause of death and a major burden to health care; thus medical progress against cancer is a major public health goal. There are many individual studies to suggest that cancer treatment breakthroughs and early diagnosis have significantly improved the prognosis of cancer patients. To understand better the relationship between medical improvements and the survival experience for the patient population at large, it is useful to evaluate cancer survival trends on the population level, e.g. to find out when and how much the cancer survival rates changed. We analyse population-based grouped cancer survival data by incorporating join points into the survival models. A join point survival model facilitates the identification of trends with significant change-points in cancer survival, when related to cancer treatments or interventions. The Bayesian information criterion is used to select the number of join points. The performance of the join point survival models is evaluated with respect to cancer prognosis, join point locations, annual percentage changes in death rates by year of diagnosis and sample sizes through intensive simulation studies. The model is then applied to grouped relative survival data for several major cancer sites from the 'Surveillance, epidemiology and end results' programme of the National Cancer Institute. The change-points in the survival trends for several major cancer sites are identified and the potential driving forces behind such change-points are discussed.

19.
This article discusses regression analysis of mixed interval-censored failure time data. Such data frequently occur across a variety of settings, including clinical trials, epidemiologic investigations, and many other biomedical studies with a follow-up component. For example, mixed failure times are commonly found in the two largest studies of long-term survivorship after childhood cancer, the datasets that motivated this work. However, most existing methods for failure time data consider only right-censored or only interval-censored failure times, not the more general case where times may be mixed. Additionally, among regression models developed for mixed interval-censored failure times, the proportional hazards formulation is generally assumed. It is well-known that the proportional hazards model may be inappropriate in certain situations, and alternatives are needed to analyze mixed failure time data in such cases. To fill this need, we develop a maximum likelihood estimation procedure for the proportional odds regression model with mixed interval-censored data. We show that the resulting estimators are consistent and asymptotically Gaussian. An extensive simulation study is performed to assess the finite-sample properties of the method, and this investigation indicates that the proposed method works well for many practical situations. We then apply our approach to examine the impact of age at cranial radiation therapy on risk of growth hormone deficiency in long-term survivors of childhood cancer.

20.
The National Cancer Institute (NCI) suggests a sudden reduction in prostate cancer mortality rates, likely due to highly successful treatments and screening methods for early diagnosis. We are interested in understanding the impact of medical breakthroughs, treatments, or interventions, on the survival experience for a population. For this purpose, estimating the underlying hazard function, with possible time change points, would be of substantial interest, as it will provide a general picture of the survival trend and when this trend is disrupted. Increasing attention has been given to testing the assumption of a constant failure rate against a failure rate that changes at a single point in time. We expand the set of alternatives to allow for the consideration of multiple change-points, and propose a model selection algorithm using sequential testing for the piecewise constant hazard model. These methods are data driven and allow us to estimate not only the number of change points in the hazard function but where those changes occur. Such an analysis allows for better understanding of how changing medical practice affects the survival experience for a patient population. We test for change points in prostate cancer mortality rates using the NCI Surveillance, Epidemiology, and End Results dataset.
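A bare-bones version of this idea — profile a piecewise-constant (exponential) likelihood over a grid of candidate change points and compare against the constant-hazard fit via a likelihood ratio — can be sketched as follows. This handles only fully observed event times and a single change point; the paper's procedure additionally handles sequential testing for multiple change points.

```python
import math

def loglik_const(times):
    """Log-likelihood of a single constant hazard, evaluated at its MLE."""
    n, tot = len(times), sum(times)
    lam = n / tot
    return n * math.log(lam) - lam * tot

def loglik_change_point(times, tau):
    """Profile log-likelihood of a two-piece constant hazard changing at tau."""
    d1 = sum(1 for t in times if t <= tau)
    d2 = len(times) - d1
    e1 = sum(min(t, tau) for t in times)        # exposure before tau
    e2 = sum(max(0.0, t - tau) for t in times)  # exposure after tau
    if d1 == 0 or d2 == 0:
        return float("-inf")
    l1, l2 = d1 / e1, d2 / e2
    return d1 * math.log(l1) + d2 * math.log(l2) - l1 * e1 - l2 * e2

def best_change_point(times, grid):
    """Grid point maximizing the change-point likelihood."""
    return max(grid, key=lambda tau: loglik_change_point(times, tau))

# Hypothetical event times: dense early events, then a sparse late phase,
# i.e. a hazard that drops at some unknown time.
times = [0.05 * i for i in range(1, 21)] + [3.0 + 0.5 * i for i in range(1, 21)]
tau_hat = best_change_point(times, [0.5, 1.0, 1.5, 2.0, 3.0])
lrt = 2 * (loglik_change_point(times, tau_hat) - loglik_const(times))
```

Extending this to multiple change points amounts to repeating the test on each resulting segment, which is the sequential, data-driven flavor of the algorithm described above.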
