Similar Literature
20 similar documents found
1.
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution and resorting to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show they have some attractive features as compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked that one intuitively feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to those of the standard approach. Whilst for any specific study the operating characteristics of any selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
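A minimal sketch of the mixture-prior mechanism described above, for a normal mean with known sampling variance: the posterior is again a mixture of normals, with component weights updated by each component's marginal likelihood, so a diffuse component takes over under prior-data conflict. The prior settings, function name and numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

def mixture_posterior(ybar, n, sigma, comps):
    """Posterior for a normal mean under a mixture-of-normals prior.

    comps: list of (weight, prior_mean, prior_sd); sigma is the known
    sampling SD, ybar the observed mean of n observations.
    Returns updated (weight, post_mean, post_sd) per component.
    """
    se2 = sigma**2 / n
    out = []
    for w, m, s in comps:
        # marginal likelihood of ybar under this component
        ml = stats.norm.pdf(ybar, loc=m, scale=np.sqrt(s**2 + se2))
        post_var = 1.0 / (1.0 / s**2 + 1.0 / se2)
        post_mean = post_var * (m / s**2 + ybar / se2)
        out.append([w * ml, post_mean, np.sqrt(post_var)])
    tot = sum(c[0] for c in out)
    return [(c[0] / tot, c[1], c[2]) for c in out]

# illustrative numbers: informative component N(0, 0.5^2), diffuse N(0, 10^2)
prior = [(0.8, 0.0, 0.5), (0.2, 0.0, 10.0)]
post = mixture_posterior(ybar=2.0, n=20, sigma=3.0, comps=prior)

# 95% credible interval by sampling from the posterior mixture
rng = np.random.default_rng(1)
draws = np.concatenate([rng.normal(m, s, int(2e5 * w)) for w, m, s in post])
print(post)
print(np.percentile(draws, [2.5, 97.5]))
```

When the observed mean is far from the informative component, that component's marginal likelihood collapses, the diffuse component dominates, and the resulting credible interval widens, which is the behaviour the abstract highlights.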

2.
In the context of a vaccine efficacy trial, where the incidence rate is very low and a very large sample size is usually expected, incorporating historical data into a new trial is extremely attractive as a way to reduce sample size and increase estimation precision. Nevertheless, for some infectious diseases, seasonal change in incidence rates poses a huge challenge to borrowing historical data, and a critical question is how to take proper advantage of historical data borrowing while keeping acceptable tolerance to the between-trial heterogeneity that commonly arises from seasonal disease transmission. In this article, we extend a probability-based power prior, which determines the amount of information to be borrowed based on the agreement between the historical and current data, so that it applies when either a single historical trial or multiple historical trials are available, with a constraint on the amount of historical information to be borrowed. Simulations are conducted to compare the performance of the proposed method with other methods, including the modified power prior (MPP), the meta-analytic-predictive (MAP) prior and the commensurate prior methods. Furthermore, we illustrate the application of the proposed method for trial design in a practical setting.
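A minimal sketch of the power-prior idea for a low incidence rate, using a conjugate gamma-Poisson model: each historical trial's data enter the posterior raised to a discount weight, and the weight here is set from a predictive tail-area measure of agreement and capped to constrain borrowing. Both the agreement measure and the cap are illustrative stand-ins, not the paper's exact probability-based rule.

```python
import numpy as np
from scipy import stats

def power_prior_posterior(y_cur, t_cur, hist, a0=1.0, b0=1.0, cap=0.5):
    """Gamma-Poisson power-prior posterior for an incidence rate.

    y_cur, t_cur : current-trial event count and exposure (person-time).
    hist         : list of (y_h, t_h) historical event counts/exposures.
    cap          : upper bound on each trial's discount weight, a crude
                   stand-in for the paper's constraint on borrowing.
    The agreement-based weight below (a two-sided predictive tail area)
    is an illustrative choice, not the paper's exact rule.
    """
    a, b = a0 + y_cur, b0 + t_cur          # vague Gamma(a0, b0) base prior
    for y_h, t_h in hist:
        # predictive distribution of the current count given historical data:
        # negative binomial arising from a Gamma(y_h + a0, t_h + b0) rate
        nb = stats.nbinom(y_h + a0, (t_h + b0) / (t_h + b0 + t_cur))
        p = nb.cdf(y_cur)
        agree = min(1.0, 2 * min(p, 1 - p + nb.pmf(y_cur)))  # tail-area agreement
        delta = min(cap, agree)                              # constrained borrowing
        a += delta * y_h
        b += delta * t_h
    return a, b                                              # posterior Gamma(a, b)

a, b = power_prior_posterior(y_cur=8, t_cur=4000.0,
                             hist=[(30, 12000.0), (55, 11000.0)])
rate_mean = a / b
ci = stats.gamma.ppf([0.025, 0.975], a, scale=1.0 / b)
print(rate_mean, ci)
```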

3.
Bayesian Statistical Inference and Its Main Advances
As an important component of modern statistical analysis, Bayesian statistical inference has played a landmark role in the development of statistical theory, and a thorough summary of the main advances in its research is of real practical significance. Drawing on major academic sources from China and abroad, this paper reviews the field from three perspectives: the underlying ideas of Bayesian statistical inference, a comparison of its approach with that of classical statistics, and the main advances in research on Bayesian statistical inference, with the aim of giving an overview of Bayesian statistical inference and the current state of its study.

4.
To design a phase III study with a final endpoint and calculate the required sample size for the desired probability of success, we need a good estimate of the treatment effect on that endpoint. It is prudent to fully utilize all available information, including the historical and phase II information on the treatment as well as external data on other treatments. It is not uncommon for a phase II study to use a surrogate endpoint as the primary endpoint and to have no or only limited data for the final endpoint. On the other hand, external information from other studies of other treatments on the surrogate and final endpoints may be available to establish a relationship between the treatment effects on the two endpoints. Through this relationship, making full use of the surrogate information may enhance the estimate of the treatment effect on the final endpoint. In this research, we propose a bivariate Bayesian analysis approach to deal with the problem comprehensively. A dynamic borrowing approach is considered to regulate the amount of historical data and surrogate information borrowed based on the level of consistency. A much simpler frequentist method is also discussed. Simulations are conducted to compare the performances of the different approaches. An example is used to illustrate the applications of the methods.
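A minimal sketch of how a surrogate-final endpoint relationship can sharpen the prior for the final-endpoint effect, using a bivariate normal working model: external studies supply a joint prior for the two true effects, the phase II surrogate estimate updates the surrogate effect, and the prior regression of final on surrogate propagates that update. All names, numbers and the plug-in correlation are illustrative assumptions, not the paper's dynamic-borrowing model.

```python
import numpy as np

def final_effect_given_surrogate(mu_s, mu_f, sd_s, sd_f, rho, s_hat, s_se):
    """Conditional-normal update of the final-endpoint treatment effect.

    (mu_s, mu_f, sd_s, sd_f, rho) describe a bivariate normal prior for the
    true effects on the surrogate and final endpoints (e.g. built from
    external studies); s_hat, s_se are the phase II estimate of the
    surrogate effect and its standard error.  All values are illustrative.
    """
    # posterior for the true surrogate effect given the phase II estimate
    w = sd_s**2 / (sd_s**2 + s_se**2)
    m_s = mu_s + w * (s_hat - mu_s)
    v_s = w * s_se**2
    # propagate through the prior regression of final effect on surrogate effect
    slope = rho * sd_f / sd_s
    m_f = mu_f + slope * (m_s - mu_s)
    v_f = sd_f**2 * (1 - rho**2) + slope**2 * v_s
    return m_f, np.sqrt(v_f)

m, s = final_effect_given_surrogate(mu_s=0.3, mu_f=0.15, sd_s=0.2, sd_f=0.1,
                                    rho=0.7, s_hat=0.45, s_se=0.08)
print(m, s)   # prior mean and SD for the final-endpoint effect in phase III
```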

5.
Leveraging historical data in the design and analysis of phase 2 randomized controlled trials can improve the efficiency of drug development programs. Such approaches can reduce sample size without loss of power. Potential issues arise when the current control arm is inconsistent with the historical data, which may lead to biased estimates of treatment efficacy, loss of power, or inflated type 1 error. Consideration of how to borrow historical information is important, and in particular, adjustment for prognostic factors should be considered. This paper illustrates two motivating case studies of oncology Bayesian augmented control (BAC) trials. In the first example, a glioblastoma study, an informative prior was used for the control arm hazard rate. Sample size savings of 15% to 20% were achieved by using a BAC design. In the second example, a pancreatic cancer study, a hierarchical model borrowing method was used, which enabled the extent of borrowing to be determined by the consistency of the observed study data with the historical studies. Supporting Bayesian analyses also adjusted for prognostic factors. Incorporating historical data via Bayesian trial design can provide sample size savings, reduce study duration, and enable a more scientific approach to the development of novel therapies by avoiding excess recruitment to a control arm. Various sensitivity analyses are necessary to interpret the results. Current industry efforts for data transparency have meaningful implications for access to patient-level historical data, which, while not critical, is helpful for adjusting for potential imbalances in prognostic factors.

6.
The Poisson distribution is here used to illustrate Bayesian inference concepts, with the ultimate goal of constructing credible intervals for a mean. The evaluation of the resulting intervals is in terms of “mismatched” priors and posteriors. The discussion is in the form of an imaginary dialog between a teacher and a student, who have met earlier to discuss and evaluate the Wald and score confidence intervals, as well as confidence intervals based on transformation and bootstrap techniques. From the perspective of the student, the learning process is akin to a real research situation. The student is assumed to have studied mathematical statistics for at least two semesters.
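A minimal sketch of the central calculation: an equal-tailed credible interval for a Poisson mean under a conjugate Gamma prior, so the posterior is Gamma with updated shape and rate. The nearly flat default prior and the example count are illustrative choices, not the dialog's.

```python
from scipy import stats

def poisson_credible_interval(y_total, n, a0=0.5, b0=0.001, level=0.95):
    """Equal-tailed credible interval for a Poisson mean.

    y_total : total count over n observations; prior is Gamma(a0, b0)
    (shape/rate), so the posterior is Gamma(a0 + y_total, b0 + n).
    The default prior here is just an illustrative, nearly flat choice.
    """
    a, b = a0 + y_total, b0 + n
    lo, hi = stats.gamma.ppf([(1 - level) / 2, (1 + level) / 2], a, scale=1 / b)
    return lo, hi

print(poisson_credible_interval(y_total=23, n=10))
```

The abstract's evaluation with “mismatched” priors would then ask how intervals like this behave when the data are generated under a mean on which the chosen prior places little weight.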

7.
The aim of a phase II clinical trial is to decide whether or not to develop an experimental therapy further through phase III clinical evaluation. In this paper, we present a Bayesian approach to the phase II trial, although we assume that subsequent phase III clinical trials will have standard frequentist analyses. The decision whether to conduct the phase III trial is based on the posterior predictive probability of a significant result being obtained. This fusion of Bayesian and frequentist techniques accepts the current paradigm for expressing objective evidence of therapeutic value, while optimizing the form of the phase II investigation that leads to it. By using prior information, we can assess whether a phase II study is needed at all, and how much or what sort of evidence is required. The proposed approach is illustrated by the design of a phase II clinical trial of a multi-drug resistance modulator used in combination with standard chemotherapy in the treatment of metastatic breast cancer. Copyright © 2005 John Wiley & Sons, Ltd.
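A minimal sketch of the posterior predictive probability of a significant phase III result, under a normal approximation for the treatment effect: the posterior after phase II is combined with the sampling error of the planned phase III estimate, giving a closed-form predictive probability. The normal approximation and all the numbers are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def prob_phase3_success(post_mean, post_sd, se3, alpha=0.05):
    """Predictive probability that phase III is significant (effect > 0).

    post_mean, post_sd : posterior for the true effect after phase II
                         (normal approximation, an assumption here).
    se3                : standard error of the phase III effect estimate,
                         fixed by its planned sample size.
    Success = two-sided p < alpha with the estimate in the right direction.
    """
    z = stats.norm.ppf(1 - alpha / 2)
    pred_sd = np.sqrt(post_sd**2 + se3**2)     # posterior predictive SD
    return stats.norm.sf((z * se3 - post_mean) / pred_sd)

print(prob_phase3_success(post_mean=0.4, post_sd=0.25, se3=0.15))
```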

8.
A standard two-arm randomised controlled trial usually compares an intervention to a control treatment with equal numbers of patients randomised to each treatment arm, and only data from within the current trial are used to assess the treatment effect. Historical data are used when designing new trials and have recently been considered for use in the analysis when the required number of patients under a standard trial design cannot be achieved. Incorporating historical control data could lead to more efficient trials, reducing the number of controls required in the current study when the historical and current control data agree. However, when the data are inconsistent, there is potential for biased treatment effect estimates, inflated type I error and reduced power. We introduce two novel approaches for binary data which discount historical data based on their agreement with the current trial controls: an equivalence approach and an approach based on tail area probabilities. An adaptive design is used in which the allocation ratio is adapted at the interim analysis, randomising fewer patients to control when there is agreement. The historical data are down-weighted in the analysis using the power prior approach with a fixed power. We compare the operating characteristics of the proposed design to historical data methods in the literature: the modified power prior, the commensurate prior and the robust mixture prior. The equivalence probability weight approach is intuitive and its operating characteristics can be calculated exactly. Furthermore, the equivalence bounds can be chosen to control the maximum possible inflation in type I error.
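A minimal sketch of an equivalence-probability weight for binary control data: the weight is the posterior probability that the historical and current control response rates lie within an equivalence margin, and it then discounts the historical controls in a beta-binomial, power-prior style update. The margin, the uniform priors and the use of the weight directly as the discount are illustrative simplifications of the design described in the abstract.

```python
import numpy as np
from scipy import stats

def equivalence_weight(y_c, n_c, y_h, n_h, margin=0.1, ndraw=100_000, seed=0):
    """Posterior probability that the historical and current control rates
    differ by less than `margin`, used as an illustrative borrowing weight.

    Uniform Beta(1, 1) priors on both response rates; all settings are
    assumptions for the sketch, not the paper's calibrated choices.
    """
    rng = np.random.default_rng(seed)
    p_c = rng.beta(1 + y_c, 1 + n_c - y_c, ndraw)
    p_h = rng.beta(1 + y_h, 1 + n_h - y_h, ndraw)
    return np.mean(np.abs(p_c - p_h) < margin)

def borrowed_control_posterior(y_c, n_c, y_h, n_h, w):
    """Beta posterior for the control rate with the historical data
    down-weighted by w (a power-prior style discount)."""
    return stats.beta(1 + y_c + w * y_h, 1 + (n_c - y_c) + w * (n_h - y_h))

w = equivalence_weight(y_c=12, n_c=40, y_h=65, n_h=200)
post = borrowed_control_posterior(12, 40, 65, 200, w)
print(w, post.mean(), post.interval(0.95))
```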

9.
Credible and highest posterior density intervals for the reliability function and the parameters of a two-parameter Weibull process are obtained, and the estimates are compared with their corresponding classical counterparts.
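A minimal sketch of how a highest posterior density (HPD) interval can be read off posterior draws, applied here to the Weibull reliability R(t) = exp(-(t/λ)^k); the "posterior draws" below are placeholders standing in for the output of an actual posterior sampler, not the paper's posterior.

```python
import numpy as np

def hpd_interval(draws, level=0.95):
    """Shortest interval containing `level` posterior mass,
    estimated from (approximately unimodal) posterior draws."""
    x = np.sort(np.asarray(draws))
    m = int(np.ceil(level * len(x)))
    widths = x[m - 1:] - x[:len(x) - m + 1]
    i = int(np.argmin(widths))
    return x[i], x[i + m - 1]

# illustrative use: draws of Weibull shape k and scale lam from some sampler,
# converted to draws of the reliability R(t) = exp(-(t / lam)**k)
rng = np.random.default_rng(2)
k_draws = rng.normal(1.8, 0.15, 20_000)       # placeholder posterior draws
lam_draws = rng.normal(1000.0, 60.0, 20_000)  # placeholder posterior draws
r_draws = np.exp(-(500.0 / lam_draws) ** k_draws)
print(hpd_interval(r_draws))
```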

10.
This paper synthesizes a global approach to both Bayesian and likelihood treatments of the estimation of the parameters of a hidden Markov model in the cases of normal and Poisson distributions. The first step of this global method is to construct a non-informative prior based on a reparameterization of the model; this prior is to be considered as a penalizing and bounding factor from a likelihood point of view. The second step takes advantage of the special structure of the posterior distribution to build up a simple Gibbs algorithm. The maximum likelihood estimator is then obtained by an iterative procedure replicating the original sample until the corresponding Bayes posterior expectation stabilizes on a local maximum of the original likelihood function.

11.
This paper develops an objective Bayesian analysis method for estimating the unknown parameters of the half-logistic distribution when a sample is available from the progressively Type-II censoring scheme. Noninformative priors such as Jeffreys and reference priors are derived. In addition, the derived priors are checked to determine whether they satisfy probability-matching criteria. The Metropolis–Hastings algorithm is applied to generate Markov chain Monte Carlo samples from the posterior density functions because the marginal posterior density functions of the parameters cannot be expressed in explicit form. Monte Carlo simulations are conducted to investigate the frequentist properties of the estimated models under the noninformative priors. For illustration purposes, a real data set is presented, and the quality of the models under the noninformative priors is evaluated through posterior predictive checking.

12.
For any decision-making study, there are two sorts of errors that can be made: declaring a positive result when the truth is negative, and declaring a negative result when the truth is positive. Traditionally, the primary analysis of a study is a two-sided hypothesis test: the type I error rate is set to 5% and the study is designed to give a suitably low type II error rate, typically 10% or 20%, to detect a given effect size. These values are standard, arbitrary and, other than the choice between 10% and 20%, do not reflect the context of the study, such as the relative costs of making type I and II errors and the prior belief that the drug will be placebo-like. Several authors have challenged this paradigm, typically for the scenario where the planned analysis is frequentist. When resource is limited, there will always be a trade-off between the type I and II error rates, and this article explores optimising this trade-off for a study with a planned Bayesian statistical analysis. This work provides a scientific basis for a discussion between stakeholders as to what type I and II error rates may be appropriate, and some algebraic results for normally distributed data.
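A minimal sketch of the trade-off for normally distributed data: the planned Bayesian analysis declares success when the posterior probability of a positive effect exceeds a threshold, that threshold maps to a cut-off on the observed mean, and the two error rates follow in closed form, so the threshold can be chosen to minimise a cost-weighted sum. The prior, the 2:1 cost ratio and the other settings are illustrative assumptions, not the paper's algebraic results.

```python
import numpy as np
from scipy import stats

def error_rates(n, sigma, delta_alt, m0, v0, c):
    """Type I/II error of the rule 'declare success if
    P(delta > 0 | data) > c' for a normal mean with known SD.

    Prior delta ~ N(m0, v0); data mean xbar ~ N(delta, sigma^2 / n).
    Illustrative settings throughout.
    """
    se2 = sigma**2 / n
    v = 1.0 / (1.0 / v0 + 1.0 / se2)                   # posterior variance
    z_c = stats.norm.ppf(c)
    # smallest observed mean at which the posterior probability exceeds c
    x_star = se2 * (z_c / np.sqrt(v) - m0 / v0)
    se = np.sqrt(se2)
    type1 = stats.norm.sf(x_star / se)                  # truth: delta = 0
    type2 = stats.norm.cdf((x_star - delta_alt) / se)   # truth: delta = delta_alt
    return type1, type2

# pick the threshold c that minimises a weighted sum of the two error rates
cs = np.linspace(0.5, 0.999, 500)
cost = [2.0 * a + 1.0 * b for a, b in
        (error_rates(40, 1.0, 0.5, 0.0, 1.0, c) for c in cs)]
best = cs[int(np.argmin(cost))]
print(best, error_rates(40, 1.0, 0.5, 0.0, 1.0, best))
```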

13.
14.
Bayesian dynamic borrowing designs facilitate borrowing information from historical studies. Historical data, when perfectly commensurate with current data, have been shown to reduce trial duration and sample size, while inflation of the type I error and reduction in power have been reported when the data are imperfectly commensurate. These results, however, were obtained without considering that Bayesian designs are calibrated in practice to meet regulatory requirements and that even no-borrowing designs may use information from historical data in the calibration. This implicit borrowing of historical data suggests that imperfectly commensurate historical data may similarly affect no-borrowing designs negatively. We provide a fair appraisal of Bayesian dynamic borrowing and no-borrowing designs. We used a published selective adaptive randomization design and a real clinical trial setting, and conducted simulation studies under varying degrees of imperfectly commensurate historical control scenarios. The type I error was inflated under the null scenario of no intervention effect, with larger inflation noted with borrowing. The larger inflation in type I error under the null setting can be offset by the greater probability of stopping early correctly under the alternative. Response rates were estimated more precisely and the average sample size was smaller with borrowing. The expected increase in bias with borrowing was noted but was negligible. Using Bayesian dynamic borrowing designs may improve trial efficiency by stopping trials early correctly and reducing trial length at the small cost of inflated type I error.

15.
The authors consider Bayesian analysis for continuous-time Markov chain models based on a conditional reference prior. For such models, inference on the elapsed time between chain observations depends heavily on the rate of decay of the prior as the elapsed time increases. Moreover, improper priors on the elapsed time may lead to improper posterior distributions. In addition, an infinitesimal rate matrix also characterizes this class of models, and experts often have good prior knowledge about the parameters of this matrix. The authors show that the use of a proper prior for the rate matrix parameters together with the conditional reference prior for the elapsed time yields a proper posterior distribution. They also demonstrate that, compared with analyses based on priors previously proposed in the literature, a Bayesian analysis of the elapsed time based on the conditional reference prior possesses better frequentist properties. This type of prior thus represents a better default choice for estimation software.

16.
Modelling and simulation are buzz words in clinical drug development. But is clinical trial simulation (CTS) really a revolutionary technique? There is not much more to CTS than applying standard methods of modelling, statistics and decision theory. However, doing this in a systematic way can mean a significant improvement in pharmaceutical research. This paper describes in simple examples how modelling could be used in clinical development. Four steps are identified: gathering relevant information about a drug and the disease; building a mathematical model; predicting the results of potential future trials; and optimizing clinical trials and the entire clinical programme. We discuss these steps and give a number of examples of model components, demonstrating that relatively unsophisticated models may also prove useful. We stress that modelling and simulation are decision tools and point out the benefits of integrating them with decision analysis. Copyright © 2005 John Wiley & Sons, Ltd.

17.
Evidence-based quantitative methodologies have been proposed to inform decision-making in drug development, such as metrics for making go/no-go decisions or predictions of success, where success is identified with statistical significance of future clinical trials. While these methodologies appropriately address some critical questions on the potential of a drug, they either consider the past evidence without predicting the outcome of future trials or focus only on efficacy, failing to account for the multifaceted aspects of successful drug development. As quantitative benefit-risk assessments could enhance decision-making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probabilities of success. The predictions are based on the available evidence from previous trials, to which new hypotheses on the future development can be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision-makers. We present a fictitious, but realistic, example in major depressive disorder inspired by a real decision-making case.
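A minimal sketch of a composite predictive probability of success by simulation: true efficacy and a safety quantity are drawn from their (assumed) posteriors, the pivotal estimate is simulated, and success requires significance, a clinically relevant estimate and an acceptable excess risk. The distributions, cut-offs and variable names are illustrative assumptions standing in for the paper's evidence synthesis.

```python
import numpy as np
from scipy import stats

def prob_composite_success(nsim=200_000, seed=3,
                           eff_mean=0.35, eff_sd=0.15,   # posterior for efficacy
                           risk_mean=0.02, risk_sd=0.02, # posterior for excess risk
                           se_pivotal=0.12, alpha=0.05,
                           relevance=0.20, risk_limit=0.05):
    """Monte Carlo predictive probability that the next pivotal trial is a
    'composite success': significant, clinically relevant, and with an
    acceptable excess risk.  All distributions and cut-offs are illustrative."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(1 - alpha / 2)
    eff_true = rng.normal(eff_mean, eff_sd, nsim)       # draw true effects
    risk_true = rng.normal(risk_mean, risk_sd, nsim)    # draw true excess risks
    eff_hat = rng.normal(eff_true, se_pivotal)          # simulated pivotal estimate
    success = ((eff_hat > z * se_pivotal)               # statistical significance
               & (eff_hat > relevance)                  # clinical relevance
               & (risk_true < risk_limit))              # benefit-risk stand-in
    return success.mean()

print(prob_composite_success())
```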

18.
19.
In this paper, we consider the Bayesian analysis of competing risks data when the data are partially complete in both the time and the type of failure. It is assumed that the latent causes of failure have independent Weibull distributions with a common shape parameter but different scale parameters. When the shape parameter is known, it is assumed that the scale parameters have Beta–Gamma priors. In this case, the Bayes estimates and the associated credible intervals can be obtained in explicit forms. When the shape parameter is also unknown, it is assumed to have a very flexible log-concave prior density function. When the common shape parameter is unknown, the Bayes estimates of the unknown parameters and the associated credible intervals cannot be obtained in explicit forms, so we propose to use a Markov chain Monte Carlo sampling technique to compute the Bayes estimates and the associated credible intervals. We further consider the case when covariates are also present. The analysis of two competing risks data sets, one with covariates and the other without, has been performed for illustrative purposes. It is observed that the proposed model is very flexible, and the method is very easy to implement in practice.
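A minimal sketch of the known-shape case under a simplifying assumption: with the common shape α known, t^α is exponentially distributed, and independent Gamma priors on the cause-specific rate parameters give conjugate Gamma posteriors. The independent Gamma priors are a stand-in for the Beta–Gamma prior named in the abstract, and the data and hyperparameters are illustrative.

```python
import numpy as np
from scipy import stats

def competing_risks_posteriors(times, causes, shape, a0=0.5, b0=0.5):
    """Posteriors for cause-specific Weibull rate parameters, known shape.

    times  : observed times (failure or censoring) for all subjects.
    causes : failure cause per subject (1, 2, ...) or 0 if censored.
    shape  : known common Weibull shape alpha, so t**alpha is exponential
             with cause-specific rate lambda_j.
    Independent Gamma(a0, b0) priors on each lambda_j are a simplifying
    assumption here; the paper's Beta-Gamma prior is more general.
    """
    times, causes = np.asarray(times, float), np.asarray(causes)
    total = np.sum(times ** shape)           # total 'exponential-scale' exposure
    posts = {}
    for j in sorted(set(causes) - {0}):
        d_j = int(np.sum(causes == j))       # failures attributed to cause j
        posts[j] = stats.gamma(a0 + d_j, scale=1.0 / (b0 + total))
    return posts

posts = competing_risks_posteriors(
    times=[2.1, 3.4, 0.9, 5.0, 4.2, 1.7], causes=[1, 2, 1, 0, 2, 1], shape=1.5)
for j, p in posts.items():
    print(j, p.mean(), p.interval(0.95))     # Bayes estimate and credible interval
```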

20.
The authors develop default priors for the Gaussian random field model that includes a nugget parameter accounting for the effects of microscale variations and measurement errors. They present the independence Jeffreys prior, the Jeffreys-rule prior and a reference prior, and study the posterior propriety of these and related priors. They show that the uniform prior for the correlation parameters yields an improper posterior. In the case of known regression and variance parameters, they derive the Jeffreys prior for the correlation parameters. They prove posterior propriety and show that the predictive distributions at ungauged locations have finite variance. Moreover, they show that the proposed priors have good frequentist properties, except for those based on the marginal Jeffreys-rule prior for the correlation parameters, and they illustrate their approach by analyzing a dataset of zinc concentrations along the river Meuse. The Canadian Journal of Statistics 40: 304–327; 2012 © 2012 Statistical Society of Canada

