Similar documents
 20 similar documents found (search time: 46 ms)
1.
Summary.  When a treatment has a positive average causal effect (ACE) on an intermediate variable or surrogate end point which in turn has a positive ACE on a true end point, the treatment may have a negative ACE on the true end point due to the presence of unobserved confounders, which is called the surrogate paradox. A criterion for surrogate end points based on ACEs has recently been proposed to avoid the surrogate paradox. For a continuous or ordinal discrete end point, the distributional causal effect (DCE) may be a more appropriate measure for a causal effect than the ACE. We discuss criteria for surrogate end points based on DCEs. We show that commonly used models, such as generalized linear models and Cox's proportional hazards models, can make the sign of the DCE of the treatment on the true end point determinable by the sign of the DCE of the treatment on the surrogate even if the models include unobserved confounders. Furthermore, for a general distribution without any assumption of parametric models, we give a sufficient condition for a distributionally consistent surrogate and prove that it is almost necessary.

2.
Summary.  In many therapeutic areas, the identification and validation of surrogate end points is of prime interest to reduce the duration and/or size of clinical trials. Buyse and co-workers and Burzykowski and co-workers have proposed a validation strategy for end points that are either normally distributed or (possibly censored) failure times. In this paper, we address the problem of validating an ordinal categorical or binary end point as a surrogate for a failure time true end point. In particular, we investigate the validity of tumour response as a surrogate for survival time in evaluating fluoropyrimidine-based experimental therapies for advanced colorectal cancer. Our analysis is performed on data from 28 randomized trials in advanced colorectal cancer, which are available through the Meta-Analysis Group in Cancer.

3.
Resolving paradoxes involving surrogate end points
Summary.  We define a surrogate end point as a measure or indicator of a biological process that is obtained sooner, at less cost or less invasively than a true end point of health outcome and is used to make conclusions about the effect of an intervention on the true end point. Prentice presented criteria for valid hypothesis testing of a surrogate end point that replaces a true end point. For using the surrogate end point to estimate the predicted effect of intervention on the true end point, Day and Duffy assumed the Prentice criterion and arrived at two paradoxical results: the estimated predicted intervention effect by using a surrogate can give more precise estimates than the usual estimate of the intervention effect by using the true end point and the variance is greatest when the surrogate end point perfectly predicts the true end point. Begg and Leung formulated similar paradoxes and concluded that they indicate a flawed conceptual strategy arising from the Prentice criterion. We resolve the paradoxes as follows. Day and Duffy compared a surrogate-based estimate of the effect of intervention on the true end point with an estimate of the effect of intervention on the true end point that uses the true end point. Their paradox arose because the former estimate assumes the Prentice criterion whereas the latter does not. If both or neither of these estimates assume the Prentice criterion, there is no paradox. The paradoxes of Begg and Leung, although similar to those of Day and Duffy, arise from ignoring the variability of the parameter estimates irrespective of the Prentice criterion and disappear when the variability is included. Our resolution of the paradoxes provides a firm foundation for future meta-analytic extensions of the approach of Day and Duffy.

4.
In a recent paper Day and Duffy proposed a strategy for designing a randomized trial of different breast cancer screening schedules. Their strategy was based on the use of predictors of mortality determined by patients' factors at diagnosis as surrogates for true mortality. On the basis of the Prentice criterion for validity of a surrogate end point, and data from earlier studies of breast cancer case survival, they showed that, not only would the trial require a much shorter follow-up, but also that the information (i.e. inverse variance) for evaluating a treatment effect on mortality would be greater by a factor of nearly 3 if the predictors of mortality were used, compared with a trial in which mortality was actually observed. Although these results are technically correct, we believe that the conceptual strategy on which they are based is flawed, and that the fundamental problem is the Prentice criterion itself. In this paper the technical issues are discussed in detail, and an alternative structure for evaluating the validity of surrogate end points is proposed.

5.
The use of surrogate end points has become increasingly common in medical and biological research. This is primarily because, in many studies, the primary end point of interest is too expensive or too difficult to obtain. There is now a large volume of statistical methods for analysing studies with surrogate end point data. However, to our knowledge, there has not been a comprehensive review of these methods to date. This paper reviews some existing methods and summarizes the strengths and weaknesses of each method. It also discusses the assumptions that each method makes and critiques how likely these assumptions are to be met in practice.

6.
For classification problems where the test data are labeled sequentially, the point at which all true positives are first identified is often of critical importance. This article develops hypothesis tests to assess whether all true positives have been labeled in the test data. The tests use a partial receiver operating characteristic (ROC) that is generated from a labeled subset of the test data. These methods are developed in the context of unexploded ordnance (UXO) classification, but are applicable to any binary classification problem. First, the likelihood of the observed ROC given binormal model parameters is derived using order statistics, leading to a nonlinear parameter estimation problem. I then derive the approximate distribution of the point on the ROC at which all true instances are found. Using estimated binormal parameters, this distribution can be integrated up to a desired confidence level to define a critical false alarm rate (FAR). If the selected operating point is before this critical point, then additional labels out to the critical point are required. A second test uses the uncertainty in binormal parameters to determine the critical FAR. These tests are demonstrated with UXO classification examples and both approaches are recommended for testing operating points.
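The critical-FAR idea can be illustrated with a small Monte Carlo sketch rather than the order-statistics derivation used in the article: assume a binormal model (negative scores N(0,1), positive scores N(mu, sigma²)), simulate the false-alarm rate at which the last true positive is encountered, and take a quantile of that distribution as the critical FAR. All parameter values and function names below are illustrative.

```python
import random

def critical_far(mu=2.0, sigma=1.0, n_pos=50, n_neg=500,
                 sims=2000, conf=0.95, seed=1):
    # Monte Carlo sketch under an assumed binormal model: negative scores
    # ~ N(0,1), positive scores ~ N(mu, sigma^2).  For each simulated test
    # set, record the false-alarm rate reached once the lowest-scoring
    # true positive has been labeled, then return the conf-level quantile
    # of that distribution as a critical FAR.
    rng = random.Random(seed)
    fars = []
    for _ in range(sims):
        neg = [rng.gauss(0.0, 1.0) for _ in range(n_neg)]
        worst_pos = min(rng.gauss(mu, sigma) for _ in range(n_pos))
        fars.append(sum(s >= worst_pos for s in neg) / n_neg)
    fars.sort()
    return fars[min(int(conf * sims), sims - 1)]
```

If the chosen operating point sits at a FAR below this critical value, labeling must continue out to the critical point; stronger class separation (larger mu) pulls the critical FAR down.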

7.
Representative points (RPs) are a set of points that optimally represents a distribution in terms of mean square error. When the prior data are location biased, direct methods such as the k-means algorithm may be inefficient for obtaining the RPs. In this article, a new indirect algorithm is proposed to search for the RPs based on location-biased datasets. The algorithm does not constrain the parametric model of the true distribution. An empirical study shows that it can obtain better RPs than the k-means algorithm.
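For context, the direct k-means baseline that the article compares against can be sketched in one dimension: the centroids produced by Lloyd's algorithm are the representative points minimizing mean square error against the sample. Function names and parameters here are illustrative, not from the article.

```python
import random

def kmeans_rps(data, k, iters=50, seed=0):
    # Lloyd's algorithm in one dimension: the final centroids serve as
    # representative points minimizing mean square error to the sample.
    rng = random.Random(seed)
    centers = rng.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            # assign each observation to its nearest current center
            j = min(range(k), key=lambda i: (x - centers[i]) ** 2)
            clusters[j].append(x)
        # move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

When the sample is location biased, these centroids concentrate where the data were collected rather than where the true distribution has mass, which is the inefficiency the proposed indirect algorithm targets.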

8.
Summary.  In the USA cancer as a whole is the second leading cause of death and a major burden to health care; thus medical progress against cancer is a major public health goal. There are many individual studies to suggest that cancer treatment breakthroughs and early diagnosis have significantly improved the prognosis of cancer patients. To understand better the relationship between medical improvements and the survival experience for the patient population at large, it is useful to evaluate cancer survival trends on the population level, e.g. to find out when and how much the cancer survival rates changed. We analyse population-based grouped cancer survival data by incorporating join points into the survival models. A join point survival model facilitates the identification of trends with significant change-points in cancer survival, when related to cancer treatments or interventions. The Bayesian information criterion is used to select the number of join points. The performance of the join point survival models is evaluated with respect to cancer prognosis, join point locations, annual percentage changes in death rates by year of diagnosis and sample sizes through intensive simulation studies. The model is then applied to grouped relative survival data for several major cancer sites from the 'Surveillance, epidemiology and end results' programme of the National Cancer Institute. The change-points in the survival trends for several major cancer sites are identified and the potential driving forces behind such change-points are discussed.

9.
Abrupt changes often occur in environmental and financial time series, most often due to human intervention. Change point analysis is a statistical tool used to analyze sudden changes in observations along a time series. In this paper, we propose a Bayesian model for extreme values in environmental and economic datasets that exhibit typical change point behavior, allowing more than one change point to occur in a series. By analyzing maxima, the distribution within each regime is a generalized extreme value distribution. The change points are unknown and are treated as parameters to be estimated. Simulations of extremes with two change points showed that the proposed algorithm can recover the true parameter values and detect the true change points under different configurations. The number of change points is itself unknown, and the Bayesian estimation correctly identified it in each application. Analyses of environmental and financial data showed the importance of accounting for change points and revealed that these changes of regime raised the return levels, increasing the number of floods in cities along the rivers. The stock market series required a model with three different regimes.

10.
Summary.  Multivariate meta-analysis allows the joint synthesis of summary estimates from multiple end points and accounts for their within-study and between-study correlation. Yet practitioners usually meta-analyse each end point independently. I examine the role of within-study correlation in multivariate meta-analysis, to elicit the consequences of ignoring it. Using analytic reasoning and a simulation study, the within-study correlation is shown to influence the 'borrowing of strength' across end points, and wrongly ignoring it gives meta-analysis results with generally inferior statistical properties; for example, on average it increases the mean-square error and standard error of pooled estimates, and for non-ignorable missing data it increases their bias. The influence of within-study correlation is only negligible when the within-study variation is small relative to the between-study variation, or when very small differences exist across studies in the within-study covariance matrices. The findings are demonstrated by applied examples within medicine, dentistry and education. Meta-analysts are thus encouraged to account for the correlation between end points. To facilitate this, I conclude by reviewing options for multivariate meta-analysis when within-study correlations are unknown; these include obtaining individual patient data, using external information, performing sensitivity analyses and using alternatively parameterized models.
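A minimal sketch of the pooling step for two end points, in the fixed-effect case (the paper works with random-effects models; here each V_i is a study's 2×2 within-study covariance matrix, and all function names are illustrative):

```python
def inv2(m):
    # inverse of a 2x2 matrix given as nested lists
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mv_fixed_effect(ys, covs):
    # GLS pooling across studies: beta = (sum V_i^-1)^-1 * sum V_i^-1 y_i.
    # The off-diagonal of V_i is the within-study covariance between the
    # two end point estimates, which drives the 'borrowing of strength'.
    A = [[0.0, 0.0], [0.0, 0.0]]   # accumulated precision matrix
    b = [0.0, 0.0]                 # accumulated precision-weighted estimates
    for y, V in zip(ys, covs):
        W = inv2(V)
        for r in range(2):
            for c in range(2):
                A[r][c] += W[r][c]
            b[r] += W[r][0] * y[0] + W[r][1] * y[1]
    Ainv = inv2(A)                 # covariance of the pooled estimates
    pooled = [Ainv[r][0] * b[0] + Ainv[r][1] * b[1] for r in range(2)]
    return pooled, Ainv
```

Setting the off-diagonal entries of each V_i to zero reproduces two independent univariate meta-analyses, which is exactly the practice the abstract cautions against when within-study correlation is non-negligible.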

11.
The mark variogram [Cressie, 1993. Statistics for Spatial Data. Wiley, New York] is a useful tool to analyze data from marked point processes. In this paper, we investigate the asymptotic properties of its estimator. Our main findings are that the sample mark variogram is a consistent estimator for the true mark variogram and is asymptotically normal under some mild conditions. These results hold for both the geostatistical marking case (i.e., the case where the marks and points are independent) and the non-geostatistical marking case (i.e., the case where the marks and points are dependent). As an application we develop a general test for spatial isotropy and study our methodology through a simulation study and an application to a data set on longleaf pine trees.
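The estimator in question is simple to compute: the sample mark variogram at lag r is half the average squared mark difference over point pairs whose separation is approximately r. A binned sketch with an illustrative bandwidth parameter:

```python
import math

def mark_variogram(points, marks, r, bandwidth):
    # Sample mark variogram at lag r: half the average squared mark
    # difference over point pairs whose separation lies within
    # `bandwidth` of r (a simple binned estimator).
    total, count = 0.0, 0
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            if abs(math.hypot(x1 - x2, y1 - y2) - r) <= bandwidth:
                total += 0.5 * (marks[i] - marks[j]) ** 2
                count += 1
    return total / count if count else float("nan")
```

Under geostatistical (independent) marking the curve flattens at the mark variance, so departures from a flat profile across directions are what an isotropy test of this kind can pick up.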

12.
We consider two problems concerning the location of change points in a linear regression model: one involves jump discontinuities (change points) in the regression function, and the other involves regression lines connected at unknown points. We compare four methods for estimating single or multiple change points in a regression model when both the error variance and the regression coefficients change simultaneously at the unknown point(s): the Bayesian, Julious, grid search, and segmented methods. The methods are evaluated via a simulation study and compared using standard measures of estimation bias and precision, and they are then illustrated and compared on three real data sets. The simulation and empirical results overall favor both the segmented and Bayesian methods, which simultaneously estimate the change point and the other model parameters, though only the Bayesian method is able to handle both continuous and discontinuous change point problems successfully. If the regression lines are known to be continuous, the segmented method ranks first among the methods.
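Of the four methods compared, grid search is the easiest to sketch: refit separate lines on each side of every candidate split and score each split with a normal log-likelihood in which the error variance is allowed to change along with the coefficients. This single-change-point sketch is illustrative, not the authors' implementation.

```python
import math

def sse(xs, ys):
    # residual sum of squares of a least-squares line through (xs, ys)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def grid_change_point(xs, ys, min_seg=3):
    # Try every admissible split point; score each with the profiled
    # normal negative log-likelihood, letting the error variance differ
    # between the two segments, and keep the best split.
    n = len(xs)
    best_k, best_nll = None, float("inf")
    for k in range(min_seg, n - min_seg):
        nll = 0.0
        for lo, hi in ((0, k), (k, n)):
            m = hi - lo
            s2 = sse(xs[lo:hi], ys[lo:hi]) / m   # MLE of segment variance
            nll += 0.5 * m * math.log(max(s2, 1e-12))
        if nll < best_nll:
            best_k, best_nll = k, nll
    return best_k   # index of the first observation of the second segment
```

Because each segment carries its own variance term, the criterion responds to a change in noise level as well as a change in slope or intercept, matching the simultaneous-change setting described above.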

13.
Summary. We use a multipath (multistate) model to describe data with multiple end points. Statistical inference based on the intermediate end point is challenging because of the problems of nonidentifiability and dependent censoring. We study nonparametric estimation for the path probability and the sojourn time distributions between the states. The methodology proposed can be applied to analyse cure models which account for the competing risk of death. Asymptotic properties of the estimators proposed are derived. Simulation shows that the methods proposed have good finite sample performance. The methodology is applied to two data sets.

14.
In many clinical trials, the assessment of the response to interventions can include a large variety of outcome variables which are generally correlated. The use of multiple significance tests is likely to increase the chance of detecting a difference in at least one of the outcomes between two treatments. Furthermore, univariate tests do not take into account the correlation structure. A new test is proposed that uses information from the interim analysis in a two-stage design to form the rejection region boundaries at the second stage. Initially, the test uses Hotelling's T2 at the end of the first stage, allowing only for early acceptance of the null hypothesis, and an O'Brien-type procedure at the end of the second stage. This test allows one to 'cheat' and look at the data at the interim analysis to form rejection regions at the second stage, provided one uses the correct distribution of the final test statistic. This distribution is derived and the power of the new test is compared to the power of three common procedures for testing multiple outcomes: Bonferroni's inequality, Hotelling's T2 and O'Brien's test. O'Brien's test has the best power to detect a difference when the outcomes are affected in exactly the same direction and with the same magnitude, or with exactly the same relative effects, as proposed prior to data collection. However, the statistic is not robust to deviations from the alternative parameters proposed a priori, especially for correlated outcomes. The proposed new statistic and the derivation of its distribution allow investigators to use information from the first stage of a two-stage design and consequently base the final test on the direction observed at the first stage, or to modify the statistic if the direction differs significantly from what was expected a priori.

15.
Estimating turning points using polynomial regression
SUMMARY This paper describes a method for estimating regime switches in non-monotonic relationships, using polynomial regressions. Data from the UK financial services industry are used to illustrate the technique. The methodology provides a means of statistically ascertaining the existence of turning points, as well as a means of locating them, should they exist. While the methodology is most suited to applications that involve cross-sectional data, it may also be useful in short-horizon time series turning point prediction.
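In the simplest quadratic case, the estimated turning point is the vertex -b/(2c) of the fitted curve y = a + bx + cx². A sketch via the normal equations follows; the paper's method generalizes to higher-order polynomials and adds inference on whether a turning point exists at all, which this sketch omits.

```python
def quad_fit(xs, ys):
    # Least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal
    # equations, solved by Gaussian elimination with partial pivoting.
    n = len(xs)
    X = [[1.0, x, x * x] for x in xs]
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(3)]
         for r in range(3)]
    v = [sum(X[i][r] * ys[i] for i in range(n)) for r in range(3)]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        v[col], v[p] = v[p], v[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    beta = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):   # back substitution
        beta[r] = (v[r] - sum(A[r][c] * beta[c] for c in range(r + 1, 3))) / A[r][r]
    return beta   # (a, b, c)

def turning_point(xs, ys):
    # Vertex of the fitted parabola: dy/dx = b + 2*c*x = 0.
    a, b, c = quad_fit(xs, ys)
    return -b / (2.0 * c)
```

If the fitted c is not significantly different from zero, the relationship is effectively monotonic over the observed range and the located "turning point" should not be trusted, which is exactly the existence question the methodology addresses.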

16.
The objective of this paper is to extend the surrogate endpoint validation methodology proposed by Buyse et al. (2000) to the case of a longitudinally measured surrogate marker when the endpoint of interest is time to some key clinical event. A joint model for longitudinal and event time data is required. To this end, the model formulation of Henderson et al. (2000) is adopted. The methodology is applied to a set of two randomized clinical trials in advanced prostate cancer to evaluate the usefulness of prostate-specific antigen (PSA) level as a surrogate for survival.

17.
The efficient use of surrogate or auxiliary information has been investigated within both model-based and design-based approaches to data analysis, particularly in the context of missing data. Here we consider the use of such data in epidemiological studies of disease incidence in which surrogate measures of disease status are available for all subjects at two time points, but definitive diagnoses are available only in stratified subsamples. We briefly review methods for the analysis of two-phase studies of disease prevalence at a single time point, and we discuss the extension of four of these methods to the analysis of incidence studies. Their performance is compared with special reference to a study of the incidence of senile dementia.

18.
19.
Two statistical issues that have arisen in the course of a study of mortality and disease related to the human immunodeficiency virus (HIV) in the haemophilia population of the UK are discussed. The first of these concerns methods of standardization for age and it is shown that, when the mortality of HIV-infected individuals with different severities of haemophilia is compared, an analysis based on the ratio of observed to national expected deaths suggests that mortality in HIV-infected individuals depends on the severity of their haemophilia. This conclusion is inappropriate and mortality in HIV-infected individuals is, in fact, similar regardless of severity of haemophilia. The second part of the paper discusses the effect of using various end points for studies of survival and progression of HIV-related disease. In the present example it was possible to calculate relative survival in HIV-infected individuals, i.e. survival after correcting for mortality expected in the absence of HIV infection. An analysis based on absolute survival gave a very similar picture of the effect of age at infection to an analysis based on relative survival, whereas an analysis based on the time to diagnosis of acquired immune deficiency syndrome (AIDS) underestimated the effect substantially and the possible alternative end point of time to AIDS or HIV-related death was shown to be subject to considerable misclassification error.

20.
Breakdown point is one measure of the robustness of an estimate. This paper discusses some unusual properties of the breakdown points of M-estimates of location.
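For intuition about the concept, a finite-sample replacement breakdown experiment can be sketched (illustrative only, not the paper's analysis): replace the m smallest observations with a huge outlier and find the smallest contamination fraction m/n that drives the estimate arbitrarily far from the data.

```python
import statistics

def empirical_breakdown(estimator, sample, blowup=1e6, tol=1e3):
    # Replace the m smallest observations with a huge value and return
    # the smallest contamination fraction m/n that pushes |estimate|
    # past tol: a finite-sample replacement breakdown point for this
    # particular contamination scheme.
    n = len(sample)
    clean = sorted(sample)
    for m in range(1, n + 1):
        corrupted = clean[m:] + [blowup] * m
        if abs(estimator(corrupted)) > tol:
            return m / n
    return 1.0
```

On a symmetric sample, the mean breaks down with a single replaced point (fraction 1/n), while the median, an M-estimate of location, resists until roughly half the sample is replaced — the 1/2 upper bound that location M-estimates can attain.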


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号