Similar Articles
20 similar articles found.
1.
To design a phase III study with a final endpoint and calculate the required sample size for the desired probability of success, we need a good estimate of the treatment effect on that endpoint. It is prudent to fully utilize all available information, including historical and phase II information on the treatment as well as external data on other treatments. It is not uncommon for a phase II study to use a surrogate endpoint as the primary endpoint and to provide little or no data on the final endpoint. On the other hand, external information from other studies of other treatments on the surrogate and final endpoints may be available to establish a relationship between the treatment effects on the two endpoints. Through this relationship, making full use of the surrogate information may improve the estimate of the treatment effect on the final endpoint. In this research, we propose a bivariate Bayesian analysis approach to deal with the problem comprehensively. A dynamic borrowing approach is considered to regulate the amount of historical data and surrogate information borrowed, based on the level of consistency. A much simpler frequentist method is also discussed. Simulations are conducted to compare the performance of the different approaches. An example is used to illustrate the application of the methods.
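As a rough illustration of the dynamic-borrowing idea, the sketch below shrinks the weight given to a historical (or surrogate-derived) estimate as it becomes less consistent with the current estimate. It assumes normal approximations throughout; the function name, the exponential weight function, and all numbers are hypothetical and not taken from the paper.

```python
import numpy as np

# Minimal sketch of dynamic borrowing under normal approximations.
# dynamic_borrow, tau, and the exponential weight are illustrative choices.
def dynamic_borrow(theta_cur, se_cur, theta_hist, se_hist, tau=1.0):
    """Down-weight the historical estimate when it conflicts with the current one."""
    z2 = (theta_cur - theta_hist) ** 2 / (se_cur ** 2 + se_hist ** 2)
    w = np.exp(-z2 / (2 * tau))                      # borrowing weight in (0, 1]
    prec = 1 / se_cur ** 2 + w / se_hist ** 2        # combined precision
    theta = (theta_cur / se_cur ** 2 + w * theta_hist / se_hist ** 2) / prec
    return theta, np.sqrt(1 / prec), w

est, se, w = dynamic_borrow(theta_cur=0.25, se_cur=0.10, theta_hist=0.30, se_hist=0.08)
print(f"borrowed estimate {est:.3f} (SE {se:.3f}), borrowing weight {w:.2f}")
```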

2.
In competing risks analysis, most inferences have been developed for continuous failure time data. However, failure times are sometimes observed only as discrete values. We propose nonparametric inferences for the cumulative incidence function for purely discrete data with competing risks. When covariate information is available, we propose semiparametric inferences for direct regression modelling of the cumulative incidence function for grouped discrete failure time data with competing risks. Simulation studies show that the procedures perform well. The proposed methods are illustrated with a study of contraceptive use in Indonesia.
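For intuition, the following sketch computes a nonparametric cumulative incidence function for one cause from discrete failure times, accumulating the cause-specific discrete hazard weighted by the overall survival. It is a generic discrete-time estimator, not the paper's procedure, and the toy data are made up.

```python
import numpy as np

# Minimal sketch: nonparametric cumulative incidence for cause 1 with discrete
# failure times; causes coded 0 = censored, 1 and 2 = competing causes (toy data).
def cumulative_incidence(times, causes, grid):
    times, causes = np.asarray(times), np.asarray(causes)
    surv, F1, out = 1.0, 0.0, []
    for t in grid:
        at_risk = np.sum(times >= t)
        if at_risk > 0:
            h1 = np.sum((times == t) & (causes == 1)) / at_risk    # cause-1 discrete hazard
            h_all = np.sum((times == t) & (causes > 0)) / at_risk  # all-cause discrete hazard
            F1 += surv * h1        # probability of failing from cause 1 exactly at t
            surv *= 1 - h_all      # probability of surviving beyond t
        out.append(F1)
    return np.array(out)

grid = np.arange(1, 7)
cif1 = cumulative_incidence([1, 2, 2, 3, 4, 5, 6], [1, 0, 2, 1, 2, 0, 1], grid)
print(dict(zip(grid.tolist(), np.round(cif1, 3))))
```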

3.
For clinical trials with time-to-event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with planned interim and final analyses at a pre-specified number of deaths. Often, correlated surrogate information, such as time-to-progression (TTP) and progression-free survival, is also collected as secondary efficacy endpoints. It would be appealing to borrow strength from the surrogate information to improve the precision of the analysis-time prediction. Currently available methods for predicting analysis timings do not utilize the surrogate information. In this article, using OS and TTP as an example, a general parametric model for OS and TTP is proposed, under the assumption that disease progression can change the course of overall survival. Progression-free survival, related to both OS and TTP, is handled separately, as it can be derived from OS and TTP. We develop a prediction procedure using a Bayesian method and provide detailed implementation strategies under certain assumptions. Simulations are performed to evaluate the performance of the proposed method. An application to a real study is also provided.
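A crude way to see how surrogate (TTP) information can feed an analysis-time prediction is to simulate a model in which the death hazard increases after progression, and read off when the target number of deaths is reached. Everything below (rates, sample size, accrual pattern, function name) is an illustrative assumption, not the paper's parametric model or its Bayesian prediction procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sketch: predict the calendar time of the final analysis, assuming
# exponential TTP and piecewise-exponential OS whose hazard rises after progression.
def simulate_analysis_time(n=300, accrual_rate=20.0, lam_ttp=0.05,
                           lam_os_pre=0.02, lam_os_post=0.06, target_deaths=150):
    entry = rng.uniform(0, n / accrual_rate, n)       # uniform accrual in calendar time
    ttp = rng.exponential(1 / lam_ttp, n)             # time to progression
    pre = rng.exponential(1 / lam_os_pre, n)          # death time if it precedes progression
    post = rng.exponential(1 / lam_os_post, n)        # residual survival after progression
    os_time = np.where(pre < ttp, pre, ttp + post)    # progression changes the death hazard
    return np.sort(entry + os_time)[target_deaths - 1]

draws = [simulate_analysis_time() for _ in range(500)]
print(f"predicted time of final analysis: median {np.median(draws):.1f} "
      f"(90% interval {np.percentile(draws, 5):.1f}-{np.percentile(draws, 95):.1f}) months")
```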

4.
Surveillance data provide a vital source of information for assessing the spread of a health problem or disease of interest and for planning future health-care needs. However, the use of surveillance data requires proper adjustment of the reported caseload for underreporting caused by reporting delays within a limited observation period. Although methods are available to address this classic statistical problem, they are largely focused on inference for the reporting delay distribution, with inference about the caseload of disease incidence based on estimates of the delay distribution. This approach limits the complexity of models for disease incidence that can provide reliable estimates and projections of incidence. Also, many of the available methods lack robustness since they require parametric distribution assumptions. We propose a new approach to overcome such limitations by allowing separate models for the incidence and the reporting delay in a distribution-free fashion, but with joint inference for both modeling components, based on functional response models. In addition, we discuss inference about projections of future disease incidence to help identify significant shifts in temporal trends modeled from the observed data. This latter issue of detecting 'change points' is not sufficiently addressed in the literature, despite the fact that such warning signs of potential outbreaks are critically important for prevention purposes. We illustrate the approach with both simulated and real data, with the latter involving data on suicide attempts from the Veteran Healthcare Administration.

5.
In many therapeutic areas, the identification and validation of surrogate end points is of prime interest to reduce the duration and/or size of clinical trials. Buyse and co-workers and Burzykowski and co-workers have proposed a validation strategy for end points that are either normally distributed or (possibly censored) failure times. In this paper, we address the problem of validating an ordinal categorical or binary end point as a surrogate for a failure time true end point. In particular, we investigate the validity of tumour response as a surrogate for survival time in evaluating fluoropyrimidine-based experimental therapies for advanced colorectal cancer. Our analysis is performed on data from 28 randomized trials in advanced colorectal cancer, which are available through the Meta-Analysis Group in Cancer.

6.
Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter-term surrogate endpoints as substitutes for costly long-term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as the ability to reliably predict the unobserved treatment effect on the long-term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time-to-event surrogates for a time-to-event true endpoint. This two-stage meta-analytic copula method has been extensively studied for time-to-event surrogate endpoints with one event of interest, but thus far has not been explored for the assessment of surrogates which have multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors including strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where there exist very little data on which to assess surrogacy. Results show that the two-stage meta-analytic copula method performs well under certain circumstances and could be considered useful in practice, but demonstrates limitations that may prevent universal use.
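The second (trial-level) stage of such meta-analytic surrogacy assessments boils down to asking how well the treatment effect on the surrogate predicts the treatment effect on the true endpoint across trials. The sketch below shows only that stage, with made-up per-trial log hazard ratios and a simple unweighted regression; it omits the copula first stage and any adjustment for estimation error, so it is an illustration rather than the method studied in the paper.

```python
import numpy as np

# Hypothetical per-trial treatment effects (log hazard ratios) on the surrogate
# and the true endpoint; in practice these come from the copula first stage.
log_hr_surr = np.array([-0.35, -0.10, -0.52, -0.20, -0.41, -0.05])
log_hr_true = np.array([-0.30, -0.05, -0.45, -0.25, -0.38, -0.02])

# Trial-level surrogacy: regress the true-endpoint effect on the surrogate effect.
slope, intercept = np.polyfit(log_hr_surr, log_hr_true, 1)
r2_trial = np.corrcoef(log_hr_surr, log_hr_true)[0, 1] ** 2
print(f"logHR_true ~ {intercept:.2f} + {slope:.2f} * logHR_surr,  R^2(trial) = {r2_trial:.2f}")
```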

7.
Competing risks are common in clinical cancer research, as patients are subject to multiple potential failure outcomes, such as death from the cancer itself or from complications arising from the disease. In the analysis of competing risks, several regression methods are available for evaluating the relationship between covariates and cause-specific failures, many of which are based on Cox's proportional hazards model. Although a great deal of research has been conducted on estimating competing risks, less attention has been devoted to linear regression modeling, which is often referred to as the accelerated failure time (AFT) model in the survival literature. In this article, we address the use and interpretation of linear regression analysis with regard to the competing risks problem. We introduce two types of AFT modeling framework, in which the influence of a covariate can be evaluated in relation to either a cause-specific hazard function, referred to as cause-specific AFT (CS-AFT) modeling in this study, or the cumulative incidence function of a particular failure type, referred to as crude-risk AFT (CR-AFT) modeling. Simulation studies illustrate that, as in hazard-based competing risks analysis, these two models can produce substantially different effects, depending on the relationship between the covariates and both the failure type of principal interest and the competing failure types. We apply the AFT methods to data from non-Hodgkin lymphoma patients, where the dataset is characterized by two competing events, disease relapse and death without relapse, and by non-proportionality. We demonstrate how the data can be analyzed and interpreted using linear competing risks regression models.
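As a toy illustration of the cause-specific flavour, the sketch below fits a log-normal AFT model for cause 1 by maximum likelihood, treating failures from the competing cause as censored. The simulated data, parameter values, and the log-normal error choice are assumptions made for illustration and do not reproduce the paper's CS-AFT or CR-AFT analyses.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulate two latent competing failure times; only the earlier one is observed.
n = 500
x = rng.binomial(1, 0.5, n)
t1 = np.exp(0.5 * x + rng.normal(0.0, 0.8, n))   # cause 1, true covariate effect 0.5
t2 = np.exp(rng.normal(0.3, 0.8, n))             # competing cause
time = np.minimum(t1, t2)
is_cause1 = (t1 <= t2).astype(float)

def neg_loglik(par):
    """Log-normal AFT likelihood for cause 1; other failures treated as censored."""
    b0, b1, log_sigma = par
    sigma = np.exp(log_sigma)
    z = (np.log(time) - b0 - b1 * x) / sigma
    return -np.sum(is_cause1 * (norm.logpdf(z) - np.log(sigma * time))
                   + (1.0 - is_cause1) * norm.logsf(z))

fit = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
print("cause-specific AFT estimates (intercept, effect, log sigma):", np.round(fit.x, 2))
```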

8.
Convergence problems often arise when complex linear mixed-effects models are fitted. Previous simulation studies (see, e.g. [Buyse M, Molenberghs G, Burzykowski T, Renard D, Geys H. The validation of surrogate endpoints in meta-analyses of randomized experiments. Biostatistics. 2000;1:49–67; Renard D, Geys H, Molenberghs G, Burzykowski T, Buyse M. Validation of surrogate endpoints in multiple randomized clinical trials with discrete outcomes. Biom J. 2002;44:921–935]) have shown that model convergence rates were higher (i) when the number of available clusters in the data increased, and (ii) when the between-cluster variability increased (relative to the residual variability). The aim of the present simulation study is to extend these findings by examining an additional factor that is hypothesized to affect model convergence: imbalance in cluster size. The results showed that non-convergence rates were substantially higher for data sets with unbalanced cluster sizes, in particular when the model at hand had a complex hierarchical structure. Furthermore, the use of multiple imputation to restore 'balance' in unbalanced data sets reduces model convergence problems.

9.
We consider the application of Markov chain Monte Carlo (MCMC) estimation methods to random-effects models and in particular the family of discrete time survival models. Survival models can be used in many situations in the medical and social sciences and we illustrate their use through two examples that differ in terms of both substantive area and data structure. A multilevel discrete time survival analysis involves expanding the data set so that the model can be cast as a standard multilevel binary response model. For such models it has been shown that MCMC methods have advantages in terms of reducing estimate bias. However, the data expansion results in very large data sets for which MCMC estimation is often slow and can produce chains that exhibit poor mixing. Any way of improving the mixing will result in both speeding up the methods and more confidence in the estimates that are produced. The MCMC methodological literature is full of alternative algorithms designed to improve mixing of chains and we describe three reparameterization techniques that are easy to implement in available software. We consider two examples of multilevel survival analysis: incidence of mastitis in dairy cattle and contraceptive use dynamics in Indonesia. For each application we show where the reparameterization techniques can be used and assess their performance.
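The data expansion mentioned above is the standard person-period trick: each subject contributes one binary record per discrete time interval survived, with the response equal to 1 only in the interval where the event occurs. Below is a minimal sketch with made-up toy data and column names; the expanded frame could then be passed to any (multilevel) logistic regression routine.

```python
import pandas as pd

# Toy subject-level data: discrete failure/censoring time and event indicator.
subjects = pd.DataFrame({
    "id": [1, 2, 3],
    "time": [3, 2, 4],      # discrete time of failure or censoring
    "event": [1, 0, 1],     # 1 = failure observed, 0 = censored
    "x": [0.2, -1.0, 0.7],  # illustrative covariate
})

# Person-period expansion: one binary record per subject and time interval.
rows = []
for _, s in subjects.iterrows():
    for t in range(1, int(s["time"]) + 1):
        rows.append({"id": s["id"], "period": t, "x": s["x"],
                     "y": int(s["event"] == 1 and t == s["time"])})
expanded = pd.DataFrame(rows)
print(expanded)   # ready for a (multilevel) binary-response model
```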

10.
Current survival techniques do not provide a good method for handling clinical trials with a large percentage of censored observations. This research proposes using time-dependent surrogates of survival as outcome variables, in conjunction with observed survival time, to improve the precision in comparing the relative effects of two treatments on the distribution of survival time. This is in contrast to the standard method used today, which uses the marginal density of survival time, T, only, or the marginal density of a surrogate, X, only, thereby ignoring some available information. The surrogate measure, X, may be a fixed value or a time-dependent variable, X(t). X is a summary measure of some of the covariates measured throughout the trial that provide additional information on a subject's survival time. It is possible to model these time-dependent covariate values and relate the parameters in the model to the parameters in the distribution of T given X. The result is that three new models are available for the analysis of clinical trials. All three models use the joint density of survival time and a surrogate measure. Given one of three different assumed mechanisms of the potential treatment effect, each of the three methods improves the precision of the treatment estimate.

11.
The use of surrogate end points has become increasingly common in medical and biological research. This is primarily because, in many studies, the primary end point of interest is too expensive or too difficult to obtain. There is now a large volume of statistical methods for analysing studies with surrogate end point data. However, to our knowledge, there has not been a comprehensive review of these methods to date. This paper reviews some existing methods and summarizes the strengths and weaknesses of each method. It also discusses the assumptions that are made by each method and critiques how likely these assumptions are to be met in practice.

12.
There is a growing need for study designs that can evaluate efficacy and toxicity outcomes simultaneously in phase I or phase I/II cancer clinical trials. Many dose-finding approaches have been proposed; however, most of these approaches assume binary efficacy and toxicity outcomes, such as dose-limiting toxicity (DLT) and objective response. DLTs are often defined for short time periods. In contrast, objective responses are often defined for longer periods because of practical limitations on confirmation and the criteria used to define 'confirmation'. This means that studies have to be carried out for unacceptably long periods of time. Previous studies have not proposed a satisfactory solution to this specific problem. Furthermore, this problem may be a barrier for practitioners who want to implement notable previous dose-finding approaches. To cope with this problem, we propose an approach using unconfirmed early responses as the surrogate efficacy outcome for the confirmed outcome. Because it is reasonable to expect moderate positive correlation between the two outcomes, and because the method replaces the surrogate outcome with the confirmed outcome once it becomes available, the proposed approach can reduce irrelevant dose selection and accumulation of bias. Moreover, it is also expected to significantly shorten study duration. Using simulation studies, we demonstrate the positive utility of the proposed approach and provide three variations of it, all of which can be easily implemented with modified likelihood functions and outcome variable definitions.

13.
Methods have been developed by several authors to address the problem of bias in regression coefficients due to errors in exposure measurement. These approaches typically assume that there is one surrogate for each exposure. Occupational exposures are quite complex and are often described by characteristics of the workplace and the amount of time that one has worked in a particular area. In this setting, there are several surrogates which are used to define an individual's exposure. To analyze this type of data, regression calibration methodology is extended to adjust the estimates of exposure-response associations for the bias and additional uncertainty due to exposure measurement error from multiple surrogates. The health outcome is assumed to be binary and related to the quantitative measure of exposure by a logistic link function. The model for the conditional mean of the quantitative exposure measurement in relation to job characteristics is assumed to be linear. This approach is applied to a cross-sectional epidemiologic study of lung function in relation to metal working fluid exposure and the corresponding exposure assessment study with quantitative measurements from personal monitors. A simulation study investigates the performance of the proposed estimator for various values of the baseline prevalence of disease, exposure effect and measurement error variance. The efficiency of the proposed estimator relative to the one proposed by Carroll et al. [1995. Measurement Error in Nonlinear Models. Chapman & Hall, New York] is evaluated numerically for the motivating example. User-friendly and fully documented Splus and SAS routines implementing these methods are available (http://www.hsph.harvard.edu/faculty/spiegelman/multsurr.html).
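Regression calibration itself has a simple two-stage structure, which the sketch below illustrates with simulated data: first model the conditional mean of the quantitative exposure given the surrogates in the exposure-assessment sample, then plug the predicted exposure into the logistic disease model. All data, coefficients, and names here are hypothetical, and the sketch ignores the variance corrections for multiple surrogates that the paper develops.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Illustrative setup: job characteristics z1, z2 act as surrogates for a
# quantitative exposure x that drives a binary health outcome y.
n_val, n_main = 200, 1000
z_val = rng.normal(size=(n_val, 2))
x_val = 1.0 + z_val @ np.array([0.8, 0.4]) + rng.normal(0, 0.5, n_val)

# Stage 1 (exposure assessment study): model E[x | surrogates].
calib = sm.OLS(x_val, sm.add_constant(z_val)).fit()

# Stage 2 (main study): replace the unobserved exposure by its calibrated prediction.
z_main = rng.normal(size=(n_main, 2))
x_main = 1.0 + z_main @ np.array([0.8, 0.4]) + rng.normal(0, 0.5, n_main)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.7 * x_main))))
x_hat = calib.predict(sm.add_constant(z_main))
outcome = sm.Logit(y, sm.add_constant(x_hat)).fit(disp=0)
print("calibrated log-odds ratio per unit exposure:", round(outcome.params[1], 2))
```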

14.
Recently developed genotype imputation methods are a powerful tool for detecting untyped genetic variants that affect disease susceptibility in genetic association studies. However, existing imputation methods require individual-level genotype data, whereas in practice it is often the case that only summary data are available. For example, this may occur because, for reasons of privacy or politics, only summary data are made available to the research community at large, or because only summary data are collected, as in DNA pooling experiments. In this article, we introduce a new statistical method that can accurately infer the frequencies of untyped genetic variants in these settings, and indeed substantially improve frequency estimates at typed variants in pooling experiments where observations are noisy. Our approach, which predicts each allele frequency using a linear combination of observed frequencies, is statistically straightforward and related to a long history of the use of linear methods for estimating missing values (e.g. Kriging). The main statistical novelty is our approach to regularizing the covariance matrix estimates, and the resulting linear predictors, which is based on methods from population genetics. We find that, besides being both fast and flexible, allowing new problems to be tackled that cannot be handled by existing imputation approaches purpose-built for the genetic context, these linear methods are also very accurate. Indeed, imputation accuracy using this approach is similar to that obtained by state-of-the-art imputation methods that use individual-level data, but at a fraction of the computational cost.
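The core predictor is the usual conditional-expectation (Kriging-style) formula: the untyped frequency is the panel mean plus a covariance-weighted combination of the deviations of observed frequencies from their panel means. The sketch below uses a tiny simulated "reference panel", a crude ridge regularization, and made-up observed frequencies, so it only mirrors the form of the predictor, not the population-genetics regularization the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in "reference panel": 100 haplotypes at 5 variants (0/1 alleles).
panel = rng.binomial(1, 0.3, size=(100, 5)).astype(float)
mu = panel.mean(axis=0)
cov = np.cov(panel, rowvar=False) + 1e-3 * np.eye(5)   # crude ridge regularization

typed, untyped = [0, 1, 2, 3], 4
obs_freq = np.array([0.28, 0.33, 0.25, 0.31])          # observed study frequencies (made up)

# Best linear predictor: mu_u + Sigma_uo Sigma_oo^{-1} (f_o - mu_o)
weights = np.linalg.solve(cov[np.ix_(typed, typed)], cov[np.ix_(typed, [untyped])])
imputed = mu[untyped] + (obs_freq - mu[typed]) @ weights.ravel()
print(f"imputed allele frequency at the untyped variant: {imputed:.3f}")
```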

15.
Consider assessing the evidence for an exposure variable and a disease variable being associated, when the true exposure variable is more costly to obtain than an error-prone but nondifferential surrogate exposure variable. From a study design perspective, there are choices regarding the best use of limited resources. Should one acquire the true exposure status for fewer subjects or the surrogate exposure status for more subjects? The issue of validation is also central, i.e., should we simultaneously measure the true and surrogate exposure variables on a subset of study subjects? Using large-sample theory, we provide a framework for quantifying the power of testing for an exposure-disease association as a function of study cost. This enables us to present comparisons of different study designs under different suppositions about both the relative cost and the performance (sensitivity and specificity) of the surrogate variable. We present simulations to show the applicability of our theoretical framework, and we provide a case-study comparing results from an actual study to what could have been seen had true exposure status been ascertained for a different proportion of study subjects. We also describe an extension of our ideas to a more complex situation involving covariates.
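A back-of-the-envelope version of the cost trade-off can be written down directly: a fixed budget buys either fewer subjects with the true exposure or more subjects with a surrogate whose misclassification attenuates the detectable effect. The sketch below compares approximate two-sample z-test power under these two designs; the costs, effect size, and attenuation factor are all invented for illustration and stand in for the sensitivity/specificity calculus developed in the paper.

```python
import numpy as np
from scipy.stats import norm

def approx_power(n_per_group, std_effect, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test with unit variance."""
    se = np.sqrt(2.0 / n_per_group)
    return norm.sf(norm.ppf(1 - alpha / 2) - std_effect / se)

budget, cost_true, cost_surr = 10_000, 20, 5   # hypothetical per-subject costs
true_effect, attenuation = 0.30, 0.70          # surrogate error dilutes the effect

n_true = budget // (2 * cost_true)             # subjects per group, true exposure
n_surr = budget // (2 * cost_surr)             # subjects per group, surrogate exposure
print(f"true exposure : n/group={n_true:4d}, power={approx_power(n_true, true_effect):.2f}")
print(f"surrogate     : n/group={n_surr:4d}, power={approx_power(n_surr, attenuation * true_effect):.2f}")
```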

16.
We discuss maximum likelihood and estimating equations methods for combining results from multiple studies in pooling projects and data consortia using a meta-analysis model, when the multivariate estimates with their covariance matrices are available. The estimates to be combined are typically regression slopes, often from relative risk models in biomedical and epidemiologic applications. We generalize the existing univariate meta-analysis model and investigate the efficiency advantages of the multivariate methods, relative to the univariate ones. We generalize a popular univariate test for between-studies homogeneity to a multivariate test. The methods are applied to a pooled analysis of type of carotenoids in relation to lung cancer incidence from seven prospective studies. In these data, the expected gain in efficiency was evident, sometimes to a large extent. Finally, we study the finite sample properties of the estimators and compare the multivariate ones to their univariate counterparts.
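The simplest multivariate analogue of inverse-variance pooling, shown below with invented per-study slope vectors and covariance matrices, weights each study's estimate by its inverse covariance matrix. This is only the fixed-effect special case and does not include the between-studies heterogeneity handling or the homogeneity test discussed in the paper.

```python
import numpy as np

# Hypothetical per-study slope vectors (two exposures) and covariance matrices.
betas = [np.array([0.10, -0.05]), np.array([0.14, -0.02]), np.array([0.08, -0.07])]
covs = [np.diag([0.020, 0.030]),
        np.array([[0.010, 0.004], [0.004, 0.020]]),
        np.diag([0.015, 0.025])]

# Fixed-effect multivariate pooling: weight each study by its inverse covariance.
precision = sum(np.linalg.inv(S) for S in covs)
pooled_cov = np.linalg.inv(precision)
pooled = pooled_cov @ sum(np.linalg.inv(S) @ b for S, b in zip(covs, betas))
print("pooled slopes:", np.round(pooled, 3))
print("pooled SEs:   ", np.round(np.sqrt(np.diag(pooled_cov)), 3))
```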

17.
The paper develops a data augmentation method to estimate the distribution function of a variable, which is partially observed, under a non-ignorable missing data mechanism, and where surrogate data are available. An application to the estimation of hourly pay distributions using UK Labour Force Survey data provides the main motivation. In addition to considering a standard parametric data augmentation method, we consider the use of hot deck imputation methods as part of the data augmentation procedure to improve the robustness of the method. The method proposed is compared with standard methods that are based on an ignorable missing data mechanism, both in a simulation study and in the Labour Force Survey application. The focus is on reducing bias in point estimation, but variance estimation using multiple imputation is also considered briefly.

18.
Measurement error and misclassification arise commonly in various data collection processes. It is well known that ignoring these features in the data analysis usually leads to biased inference. In the generalized linear model setting, Yi et al. [Functional and structural methods with mixed measurement error and misclassification in covariates. J Am Stat Assoc. 2015;110:681–696] developed inference methods to adjust simultaneously for the effects of measurement error in continuous covariates and misclassification in discrete covariates, for the scenario where validation data are available. The augmented simulation-extrapolation (SIMEX) approach they developed generalizes the usual SIMEX method, which is applicable only to continuous error-prone covariates. To implement this method, we develop an R package, augSIMEX, for public use. Simulation studies are conducted to illustrate the use of the algorithm. The package is available on CRAN.
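To convey the simulation-extrapolation idea without invoking the augSIMEX package itself, the Python sketch below applies plain SIMEX to a logistic regression with one continuous error-prone covariate: extra measurement error is added at increasing multiples lambda, the naive model is refit each time, and the trend is extrapolated back to lambda = -1. The simulated data, the known error variance, and the quadratic extrapolant are all assumptions of this illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Simulated data: true covariate x, error-prone surrogate w, binary outcome y.
n, sigma_u, beta = 2000, 0.5, 1.0
x = rng.normal(size=n)
w = x + rng.normal(0, sigma_u, n)
y = rng.binomial(1, 1 / (1 + np.exp(-beta * x)))

lambdas, n_sim, est = np.array([0.0, 0.5, 1.0, 1.5, 2.0]), 50, []
for lam in lambdas:
    fits = []
    for _ in range(n_sim):
        w_lam = w + np.sqrt(lam) * rng.normal(0, sigma_u, n)   # simulation step
        fits.append(sm.Logit(y, sm.add_constant(w_lam)).fit(disp=0).params[1])
    est.append(np.mean(fits))

# Extrapolation step: fit a quadratic in lambda and evaluate it at lambda = -1.
coef = np.polyfit(lambdas, est, 2)
print("naive estimate:", round(est[0], 2), "  SIMEX estimate:", round(np.polyval(coef, -1), 2))
```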

19.
In the development of many diseases there are often associated variables which continuously measure the progress of an individual towards the final expression of the disease (failure). Such variables are stochastic processes, here called marker processes, and, at a given point in time, they may provide information about the current hazard and hence about the remaining time to failure. Here we consider a simple additive model for the relationship between the hazard function at time t and the history of the marker process up until time t. We develop some basic calculations based on this model. Interest is focused on statistical applications for markers related to estimation of the survival distribution of time to failure, including (i) the use of markers as surrogate responses for failure with censored data, and (ii) the use of markers as predictors of the time elapsed since onset of a survival process in prevalent individuals. Particular attention is directed to the potential gains in efficiency obtained by using marker process information.

20.
In prospective cohort studies, individuals are usually recruited according to a certain cross-sectional sampling criterion. The prevalent cohort is defined as the group of individuals who are alive, but possibly with disease, at the beginning of the study. It is appealing to incorporate the prevalent cases to estimate the incidence rate of disease before enrollment. The method of back-calculation of incidence rates has been used to estimate the incubation time from human immunodeficiency virus (HIV) infection to AIDS, with the time origin defined as the time of HIV infection. In aging cohort studies, the primary time scale is age at disease onset, and subjects have to survive a certain number of years to be enrolled into the study, thus creating left truncation (delayed entry). Current methods usually assume either that the disease incidence is rare or that the excess mortality due to disease is small compared with that of healthy subjects. So far, the validity of results based on these assumptions has not been examined. In this paper, a simple alternative method is proposed to estimate the dementia incidence rate before enrollment using prevalent cohort data with left truncation. Furthermore, simulations are used to examine the performance of the incidence estimates under different assumptions about disease incidence rates and excess mortality hazards due to disease. As an application, the method is applied to the prevalent cases of dementia from the Honolulu-Asia Aging Study to estimate the dementia incidence rate and to assess the effects of hypertension, Apoe 4 and education on dementia onset.
