Similar Documents
20 similar documents found.
1.
Traditionally, noninferiority hypotheses have been tested using a frequentist method with a fixed margin. Given that information for the control group is often available from previous studies, it is interesting to consider a Bayesian approach in which information is “borrowed” for the control group to improve efficiency. However, construction of an appropriate informative prior can be challenging. In this paper, we consider a hybrid Bayesian approach for testing noninferiority hypotheses in studies with a binary endpoint. To account for heterogeneity between the historical information and the current trial for the control group, a dynamic P value–based power prior parameter is proposed to adjust the amount of information borrowed from the historical data. This approach extends the simple test-then-pool method to allow a continuous discounting power parameter. An adjusted α level is also proposed to better control the type I error. Simulations are conducted to investigate the performance of the proposed method and to make comparisons with other methods including test-then-pool and hierarchical modeling. The methods are illustrated with data from vaccine clinical trials.
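A minimal sketch of the borrowing idea for a binary endpoint: the historical control data are discounted by a power parameter derived from a homogeneity p-value before the posterior probability of noninferiority is computed. The mapping of the p-value to the discount, the Beta(0.5, 0.5) base prior, the margin, and the decision threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def dynamic_power_prior_ni(y_t, n_t, y_c, n_c, y0, n0,
                           margin=0.10, threshold=0.975, draws=100_000):
    """Hybrid Bayesian noninferiority sketch with p-value-based discounting
    of historical control data (delta = homogeneity p-value is only one
    illustrative choice of mapping)."""
    # Homogeneity p-value between historical and current control arms
    table = np.array([[y0, n0 - y0], [y_c, n_c - y_c]])
    _, p_hom = stats.fisher_exact(table)
    delta = p_hom                       # discounting power parameter in [0, 1]

    # Power prior: historical control likelihood raised to delta, Beta(0.5, 0.5) base
    post_c = stats.beta(0.5 + y_c + delta * y0,
                        0.5 + (n_c - y_c) + delta * (n0 - y0))
    post_t = stats.beta(0.5 + y_t, 0.5 + (n_t - y_t))

    diff = post_t.rvs(draws, random_state=rng) - post_c.rvs(draws, random_state=rng)
    prob_ni = np.mean(diff > -margin)   # P(p_t - p_c > -margin | data)
    return delta, prob_ni, prob_ni >= threshold

print(dynamic_power_prior_ni(y_t=78, n_t=100, y_c=80, n_c=100, y0=160, n0=200))
```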

2.
Bayesian hierarchical models typically involve specifying prior distributions for one or more variance components. This is rather removed from the observed data, so specification based on expert knowledge can be difficult. While there are suggestions for “default” priors in the literature, often a conditionally conjugate inverse-gamma specification is used, despite documented drawbacks of this choice. The authors suggest “conservative” prior distributions for variance components, which deliberately give more weight to smaller values. These are appropriate for investigators who are skeptical about the presence of variability in the second-stage parameters (random effects) and want to particularly guard against inferring more structure than is really present. The suggested priors readily adapt to various hierarchical modelling settings, such as fitting smooth curves, modelling spatial variation and combining data from multiple sites.
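As a rough numerical illustration of the "conservative" idea, the snippet below compares how much prior probability two choices place on small values of a between-group standard deviation: a conventional inverse-gamma prior on the variance and an exponential prior on the standard deviation. Both the exponential form and the specific hyperparameters are assumptions made for illustration; they are not the authors' proposed priors.

```python
from scipy import stats

# Prior probability that the between-group SD is "small" (sigma < 0.1) under
# two priors: inverse-gamma(0.001, 0.001) on the variance (a common default)
# and exponential(mean 0.5) on the SD itself (an illustrative prior that
# deliberately gives more weight to small values).
cut = 0.1
p_invgamma = stats.invgamma(a=0.001, scale=0.001).cdf(cut**2)   # P(sigma^2 < 0.01)
p_expon = stats.expon(scale=0.5).cdf(cut)                       # P(sigma < 0.1)
print(f"P(sigma < {cut}): inverse-gamma {p_invgamma:.4f} vs exponential-on-SD {p_expon:.4f}")
```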

3.
Decision making is a critical component of a new drug development process. Based on results from an early clinical trial such as a proof of concept trial, the sponsor can decide whether to continue, stop, or defer the development of the drug. To simplify and harmonize the decision-making process, decision criteria have been proposed in the literature. One of them is to examine the location of a confidence bar relative to the target value and the lower reference value of the treatment effect. In this research, we modify an existing approach by moving some of the “stop” decisions to “consider” decisions so that the chance of directly terminating the development of a potentially valuable drug can be reduced. As Bayesian analysis has certain flexibilities and can borrow historical information through an inferential prior, we apply the Bayesian analysis to the trial planning and decision making. Via a design prior, we can also calculate the probabilities of various decision outcomes in relation to the sample size and the other parameters to help the study design. An example and a series of computations are used to illustrate the applications, assess the operating characteristics, and compare the performances of different approaches.
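One way to picture the interval-location criterion is a small classifier that maps an effect estimate and its confidence (or credible) bar to go / consider / stop, given a lower reference value (LRV) and a target value (TV). The particular rule, the 80% interval, and the numbers below are illustrative assumptions; the paper's criteria, including its modified "consider" zone, may differ.

```python
import numpy as np
from scipy import stats

def decision(est, se, lrv, tv, level=0.80):
    """Illustrative go / consider / stop rule based on where a two-sided
    confidence (or credible) bar for the treatment effect sits relative to
    the lower reference value (LRV) and target value (TV)."""
    z = stats.norm.ppf(0.5 + level / 2)
    lo, hi = est - z * se, est + z * se
    if lo > lrv and est > tv:
        return "go"            # bar clears LRV and point estimate reaches TV
    if hi < lrv:
        return "stop"          # even the upper end falls short of LRV
    return "consider"          # everything in between is deferred

print(decision(est=0.18, se=0.07, lrv=0.05, tv=0.15))   # -> 'go' with these inputs
```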

4.
The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials so that synthesized evidence across trials can be utilized to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
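To make the "prior from other trials" idea concrete, the sketch below combines a normal summary of earlier-phase evidence with a Phase 3 estimate and reports the posterior probability that the treatment effect is positive. The normal-normal model and all numbers are illustrative assumptions, not results from the article.

```python
import numpy as np
from scipy import stats

# Synthesize earlier-phase evidence as a normal "prior" and combine it with a
# Phase 3 effect estimate to state P(effect > 0 | all data).
prior_mean, prior_sd = 0.30, 0.20      # summary of earlier-phase evidence (assumed)
ph3_mean, ph3_se = 0.25, 0.12          # Phase 3 treatment-effect estimate (assumed)

w = 1 / prior_sd**2 + 1 / ph3_se**2    # precisions add under normal-normal conjugacy
post_mean = (prior_mean / prior_sd**2 + ph3_mean / ph3_se**2) / w
post_sd = np.sqrt(1 / w)
print(f"P(effect > 0) = {stats.norm.sf(0, loc=post_mean, scale=post_sd):.3f}")
```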

5.
Variable selection over a potentially large set of covariates in a linear model is quite popular. In the Bayesian context, common prior choices can lead to a posterior expectation of the regression coefficients that is a sparse (or nearly sparse) vector with a few nonzero components, those covariates that are most important. This article extends the “global-local” shrinkage idea to a scenario where one wishes to model multiple response variables simultaneously. Here, we have developed a variable selection method for a K-outcome model (multivariate regression) that identifies the most important covariates across all outcomes. The prior for all regression coefficients is a mean-zero normal with a coefficient-specific variance term that consists of a predictor-specific factor (shared local shrinkage parameter) and a model-specific factor (global shrinkage term) that differs in each model. The performance of our modeling approach is evaluated through simulation studies and a data example.
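A minimal sketch of the prior structure described above: every coefficient beta[j, k] is mean-zero normal with variance equal to a predictor-specific local term shared across outcomes times an outcome-specific global term. The half-Cauchy draws for the two scale components are an illustrative choice, not necessarily the article's hyperpriors.

```python
import numpy as np

rng = np.random.default_rng(0)
p, K = 6, 3                                   # predictors and outcomes

# Shared local scales (one per predictor) and global scales (one per outcome);
# half-Cauchy draws are used here purely for illustration.
lam = np.abs(rng.standard_cauchy(p))
tau = np.abs(rng.standard_cauchy(K))

# beta[j, k] ~ N(0, lam[j]^2 * tau[k]^2): a predictor shrunk in one outcome
# tends to be shrunk in all outcomes through the shared lam[j].
beta = rng.normal(0.0, lam[:, None] * tau[None, :])
print(beta.round(2))
```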

6.
Whilst innovative Bayesian approaches are increasingly used in clinical studies, in the preclinical area Bayesian methods appear to be rarely used in the reporting of pharmacology data. This is particularly surprising in the context of regularly repeated in vivo studies where there is a considerable amount of data from historical control groups, which has potential value. This paper describes our experience with introducing Bayesian analysis for such studies using a Bayesian meta-analytic predictive approach. This leads naturally either to an informative prior for a control group as part of a full Bayesian analysis of the next study or using a predictive distribution to replace a control group entirely. We use quality control charts to illustrate study-to-study variation to the scientists and describe informative priors in terms of their approximate effective numbers of animals. We describe two case studies of animal models: the lipopolysaccharide-induced cytokine release model used in inflammation and the novel object recognition model used to screen cognitive enhancers, both of which show the advantage of a Bayesian approach over the standard frequentist analysis. We conclude that using Bayesian methods in stable repeated in vivo studies can result in a more effective use of animals, either by reducing the total number of animals used or by increasing the precision of key treatment differences. This will lead to clearer results and supports the “3Rs initiative” to Refine, Reduce and Replace animals in research. Copyright © 2016 John Wiley & Sons, Ltd.
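The meta-analytic predictive idea can be sketched with a method-of-moments random-effects summary of historical control means, which yields a predictive distribution for the control mean of the next study and a crude effective-number-of-animals figure. The function name, the moment estimator, the ESS approximation, and the example numbers are all assumptions made for illustration; a full MAP analysis would normally be done by MCMC.

```python
import numpy as np

def map_prior_sketch(means, sds, ns):
    """Rough meta-analytic predictive (MAP) prior for the control mean of a new
    in vivo study, using a method-of-moments (DerSimonian-Laird-type) estimate
    of between-study variance."""
    means, sds, ns = map(np.asarray, (means, sds, ns))
    v = sds**2 / ns                                   # within-study variances of the means
    w = 1 / v
    mu_fix = np.sum(w * means) / np.sum(w)
    q = np.sum(w * (means - mu_fix) ** 2)             # Cochran's Q
    k = len(means)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (v + tau2)
    mu = np.sum(w_star * means) / np.sum(w_star)
    pred_var = tau2 + 1 / np.sum(w_star)              # predictive variance for a new study mean
    ess_animals = np.mean(sds**2) / pred_var          # crude "effective number of animals"
    return mu, np.sqrt(pred_var), ess_animals

print(map_prior_sketch(means=[10.2, 11.1, 9.8, 10.6],
                       sds=[2.0, 2.2, 1.9, 2.1], ns=[8, 8, 10, 8]))
```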

7.
The feasibility of a new clinical trial may be increased by incorporating historical data of previous trials. In the particular case where only data from a single historical trial are available, there exists no clear recommendation in the literature regarding the most favorable approach. A main problem of the incorporation of historical data is the possible inflation of the type I error rate. A way to control this type of error is the so-called power prior approach. This Bayesian method does not “borrow” the full historical information but uses a parameter 0 ≤ δ ≤ 1 to determine the amount of borrowed data. Based on the methodology of the power prior, we propose a frequentist framework that allows incorporation of historical data from both arms of two-armed trials with binary outcome, while simultaneously controlling the type I error rate. It is shown that for any specific trial scenario a value δ > 0 can be determined such that the type I error rate falls below the prespecified significance level. The magnitude of this value of δ depends on the characteristics of the data observed in the historical trial. Conditionally on these characteristics, an increase in power as compared to a trial without borrowing may result. Similarly, we propose methods for reducing the required sample size. The results are discussed and compared to those obtained in a Bayesian framework. Application is illustrated by a clinical trial example.
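The calibration idea, choosing δ so that the type I error stays below the significance level, can be illustrated with a small Monte Carlo study: for each candidate δ, simulate two-arm binary trials under the null with a fraction δ of the historical control data pooled in, and estimate the rejection rate. The pooled z-test, the control-arm-only borrowing, the parameter values, and the grid of δ values are illustrative assumptions rather than the paper's exact framework.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def type1_with_borrowing(delta, n=100, p0=0.3, y0=30, n0=100, alpha=0.025, sims=20_000):
    """Monte Carlo estimate of the one-sided type I error rate when a fraction
    delta of historical control data is pooled into the control arm of a new
    two-arm binary trial, analysed here with a simple pooled z-test."""
    y_c = rng.binomial(n, p0, size=sims) + delta * y0       # borrowed control "counts"
    m_c = n + delta * n0
    y_t = rng.binomial(n, p0, size=sims)                    # H0: no treatment effect
    p_c, p_t = y_c / m_c, y_t / n
    p_pool = (y_c + y_t) / (m_c + n)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / m_c + 1 / n))
    z = (p_t - p_c) / se
    return np.mean(z > stats.norm.ppf(1 - alpha))

for d in (0.0, 0.25, 0.5, 1.0):
    print(d, round(type1_with_borrowing(d), 4))
```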

8.
A Bayesian reference analysis for determining the posterior distribution of the strength of a radiation source is performed. The only pieces of information available are the numbers of counts gathered in a gross and a background measurement along with the respective counting times and a state-of-knowledge distribution for the efficiency. This situation is addressed by combining the calculations of a “one-at-a-time” reference prior and a reference prior with partial information. The posterior distribution of the source strength obtained with the reference prior leads to credible intervals that have better frequentist coverage than corresponding intervals founded on uniform or Jeffreys’ priors.

9.
Empirical Bayes is a versatile approach to “learn from a lot” in two ways: first, from a large number of variables and, second, from a potentially large amount of prior information, for example, stored in public repositories. We review applications of a variety of empirical Bayes methods to several well-known model-based prediction methods, including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss “formal” empirical Bayes methods that maximize the marginal likelihood but also more informal approaches based on other data summaries. We contrast empirical Bayes to cross-validation and full Bayes and discuss hybrid approaches. To study the relation between the quality of an empirical Bayes estimator and p, the number of variables, we consider a simple empirical Bayes estimator in a linear model setting. We argue that empirical Bayes is particularly useful when the prior contains multiple parameters, which model a priori information on variables termed “co-data”. In particular, we present two novel examples that allow for co-data: first, a Bayesian spike-and-slab setting that facilitates inclusion of multiple co-data sources and types and, second, a hybrid empirical Bayes–full Bayes ridge regression approach for estimation of the posterior predictive interval.
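As an example of the "formal" empirical Bayes route mentioned above (maximizing the marginal likelihood), the sketch below tunes the prior variance of ridge-type regression coefficients in a linear model by maximizing the Gaussian marginal likelihood, then plugs the estimate into the posterior mean. The simulated data, the fixed error variance, and the optimizer settings are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
n, p = 50, 20
X = rng.normal(size=(n, p))
beta_true = np.concatenate([rng.normal(0, 1, 3), np.zeros(p - 3)])
y = X @ beta_true + rng.normal(0, 1, n)

def neg_log_marginal(log_tau2, sigma2=1.0):
    """-log marginal likelihood of a ridge model y ~ N(0, tau2*X X' + sigma2*I);
    maximizing it over tau2 is the 'formal' empirical Bayes step (sigma2 is
    fixed at 1 here purely for illustration)."""
    tau2 = np.exp(log_tau2)
    cov = tau2 * X @ X.T + sigma2 * np.eye(n)
    return -stats.multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)

res = optimize.minimize_scalar(neg_log_marginal, bounds=(-10, 10), method="bounded")
tau2_hat = np.exp(res.x)

# Posterior mean of beta given the empirical Bayes tau2 (a ridge estimate)
lam = 1.0 / tau2_hat
beta_eb = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print(round(tau2_hat, 3), np.round(beta_eb[:5], 2))
```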

10.
Information in a statistical procedure arising from sources other than sampling is called prior information, and its incorporation into the procedure forms the basis of the Bayesian approach to statistics. Under hypergeometric sampling, methodology is developed which quantifies the amount of information provided by the sample data relative to that provided by the prior distribution and allows for a ranking of prior distributions with respect to conservativeness, where conservatism refers to restraint of extraneous information embedded in any prior distribution. The most conservative prior distribution from a specified class (each member of which carries the available prior information) is that prior distribution within the class over which the likelihood function has the greatest average domination. Four different families of prior distributions are developed by considering a Bayesian approach to the formation of lots. The most conservative prior distribution from each of the four families of prior distributions is determined and compared for the situation when no prior information is available. The results of the comparison advocate the use of the Polya (beta-binomial) prior distribution in hypergeometric sampling.
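A small sketch of the setting described above: with a Polya (beta-binomial) prior on the number of defectives D in a lot of size N and a hypergeometric likelihood for the sample count, the posterior over D is available by direct enumeration. The a = b = 1 hyperparameters and the example numbers are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def posterior_defectives(N, n, x, a=1.0, b=1.0):
    """Posterior over the number of defectives D in a lot of size N after
    observing x defectives in a hypergeometric sample of size n, under a
    Polya (beta-binomial) prior on D."""
    d = np.arange(N + 1)
    prior = stats.betabinom.pmf(d, N, a, b)
    like = stats.hypergeom.pmf(x, N, d, n)      # P(x | N, D = d, n)
    post = prior * like
    return d, post / post.sum()

d, post = posterior_defectives(N=200, n=20, x=1)
print("posterior mean number of defectives:", (d * post).sum().round(2))
```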

11.
Various methodologies proposed for some inference problems associated with two-arm trials are known to suffer from difficulties, as documented in Senn (2001). We propose an alternative Bayesian approach to these problems that deals with these difficulties through providing an explicit measure of statistical evidence and the strength of this evidence. Bayesian methods are often criticized for their intrinsic subjectivity. We show how these concerns can be dealt with through assessing the bias induced by a prior, model checking, and checking for prior-data conflict. Copyright © 2015 John Wiley & Sons, Ltd.

12.
This article focuses on the definition and study of a binary Bayesian criterion that measures the statistical agreement between a subjective prior and the data. The setting of this work is concrete Bayesian studies. It is an alternative and complementary tool to the method recently proposed by Evans and Moshonov [M. Evans and H. Moshonov, Checking for prior-data conflict, Bayesian Anal. 1 (2006), pp. 893–914]. Both methods aim to support the Bayesian analyst's work, from preliminary checks through to the posterior computation. Our criterion is defined as a ratio of Kullback–Leibler divergences; two of its main features are that it makes checking a hierarchical prior easy and that it can be used as a default calibration tool to obtain flat but proper priors in applications. Discrete and continuous distributions exemplify the approach, and an industrial case study in reliability, involving the Weibull distribution, is highlighted.

13.
14.
Construction methods for prior densities are investigated from a predictive viewpoint. Predictive densities for future observables are constructed by using observed data. The simultaneous distribution of future observables and observed data is assumed to belong to a parametric submodel of a multinomial model. Future observables and data are possibly dependent. The discrepancy of a predictive density from the true conditional density of future observables given observed data is evaluated by the Kullback-Leibler divergence. It is proved that limits of Bayesian predictive densities form an essentially complete class. Latent information priors are defined as priors maximizing the conditional mutual information between the parameter and the future observables given the observed data. Minimax predictive densities are constructed as limits of Bayesian predictive densities based on prior sequences converging to the latent information priors.

15.
Incorporating historical information into the design and analysis of a new clinical trial has been the subject of much discussion as a way to increase the feasibility of trials in situations where patients are difficult to recruit. The best method to include this data is not yet clear, especially in the case when few historical studies are available. This paper looks at the power prior technique afresh in a binomial setting and examines some previously unexamined properties, such as Box P values, bias, and coverage. Additionally, it proposes an empirical Bayes-type approach to estimating the prior weight parameter by marginal likelihood. This estimate has advantages over previously criticised methods in that it varies commensurably with differences in the historical and current data and can choose weights near 1 when the data are similar enough. Fully Bayesian approaches are also considered. An analysis of the operating characteristics shows that the adaptive methods work well and that the various approaches have different strengths and weaknesses.
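In the binomial setting the marginal-likelihood route to the weight parameter has a closed form: the historical data enter a beta power prior with weight δ, and δ is chosen to maximize the resulting beta-binomial marginal likelihood of the current data. The Beta(1, 1) initial prior and the example counts below are illustrative assumptions, and this sketch omits the paper's fully Bayesian alternatives.

```python
from scipy import optimize, special

def delta_by_marginal_likelihood(y, n, y0, n0, a=1.0, b=1.0):
    """Empirical Bayes-type estimate of the power prior weight delta in a
    binomial setting: current data y/n get a Beta(a + delta*y0,
    b + delta*(n0 - y0)) prior built from historical data y0/n0, and delta
    maximizes the resulting beta-binomial marginal likelihood."""
    def neg_log_ml(delta):
        a1 = a + delta * y0
        b1 = b + delta * (n0 - y0)
        return -(special.betaln(a1 + y, b1 + n - y) - special.betaln(a1, b1))
    res = optimize.minimize_scalar(neg_log_ml, bounds=(0.0, 1.0), method="bounded")
    return res.x

# Similar historical and current data -> weight near 1; discordant data -> near 0
print(delta_by_marginal_likelihood(y=40, n=100, y0=82, n0=200))
print(delta_by_marginal_likelihood(y=60, n=100, y0=82, n0=200))
```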

16.
Various exact tests for statistical inference are available for powerful and accurate decision rules provided that corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p-values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian-type procedures. The p-values are linked to the posterior means of quantiles. In this framework, we present relevant information from the Monte Carlo experiments via likelihood-type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the structure of the posterior mean calculations to provide a Bayesian-type procedure with a distribution-free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of needed Monte Carlo resamples for a desired level of accuracy on the basis of distances between actual data characteristics (e.g. sample sizes) and characteristics of data used to present corresponding critical values in a table. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via the recently developed STATA and R statistical packages.

17.
The Dirichlet process can be regarded as a random probability measure for which the authors examine various sum representations. They consider in particular the gamma process construction of Ferguson (1973) and the “stick-breaking” construction of Sethuraman (1994). They propose a Dirichlet finite sum representation that strongly approximates the Dirichlet process. They assess the accuracy of this approximation and characterize the posterior that this new prior leads to in the context of Bayesian nonparametric hierarchical models.
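The stick-breaking construction referenced above is easy to sketch with a finite truncation: Beta(1, α) proportions break a unit stick into weights that are attached to atoms drawn from the base measure. The truncation level K = 100, the standard normal base measure, and the renormalization step are simple illustrative choices, not necessarily the authors' finite-sum representation.

```python
import numpy as np

rng = np.random.default_rng(5)

def truncated_stick_breaking(alpha, base_sampler, K=100):
    """Finite-sum (truncated) stick-breaking approximation to a draw from a
    Dirichlet process DP(alpha, G0), in the spirit of Sethuraman's construction."""
    v = rng.beta(1.0, alpha, size=K)                  # stick-breaking proportions
    pieces = np.concatenate([[1.0], np.cumprod(1 - v[:-1])])
    weights = v * pieces                              # pi_k = v_k * prod_{j<k} (1 - v_j)
    weights /= weights.sum()                          # renormalize the truncated weights
    atoms = base_sampler(K)                           # locations drawn i.i.d. from G0
    return atoms, weights

atoms, w = truncated_stick_breaking(alpha=2.0,
                                    base_sampler=lambda k: rng.normal(0, 1, k))
print("atoms carrying 99% of the mass:",
      int(np.sum(np.cumsum(np.sort(w)[::-1]) < 0.99)) + 1)
```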

18.
Gene copy number (GCN) changes are common characteristics of many genetic diseases. Comparative genomic hybridization (CGH) is a new technology widely used today to screen GCN changes in mutant cells with high resolution genome-wide. Statistical methods for analyzing such CGH data have been evolving. Existing methods are either frequentist or fully Bayesian. The former often has a computational advantage, while the latter can incorporate prior information into the model but may be misleading when one does not have sound prior information. In an attempt to take full advantage of both approaches, we develop a Bayesian-frequentist hybrid approach, in which a subset of the model parameters is inferred by the Bayesian method, while the rest are inferred by the frequentist method. This new hybrid approach provides advantages over the Bayesian or frequentist method used alone. This is especially the case when sound prior information is available on part of the parameters and the sample size is relatively small. Spatial dependence and false discovery rate are also discussed, and the parameter estimation is efficient. As an illustration, we used the proposed hybrid approach to analyze a real CGH data set.

19.
In the Bayesian analysis of a multiple-recapture census, different diffuse prior distributions can lead to markedly different inferences about the population size N. Through consideration of the Fisher information matrix it is shown that the number of captures in each sample typically provides little information about N. This suggests that if there is no prior information about capture probabilities, then knowledge of just the sample sizes and not the number of recaptures should leave the distribution of N unchanged. A prior model that has this property is identified and the posterior distribution is examined. In particular, asymptotic estimates of the posterior mean and variance are derived. Differences between Bayesian and classical point and interval estimators are illustrated through examples.
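A compact illustration of Bayesian inference for N in the simplest two-sample (Lincoln-Petersen-type) version of this problem: a hypergeometric likelihood for the number of recaptures is combined with a prior on N by direct enumeration. The 1/N and uniform priors, the truncation at N_max, and the example counts are illustrative assumptions; the paper's recommended prior model is specific to its multiple-recapture setting.

```python
import numpy as np
from scipy import stats

def posterior_N(n1, n2, m, N_max=5000, prior="one_over_N"):
    """Posterior for the population size N in a two-sample capture-recapture
    study with n1 and n2 captures and m recaptures, using a hypergeometric
    likelihood for m given N and a simple prior on N."""
    N = np.arange(max(n1 + n2 - m, 1), N_max + 1)
    like = stats.hypergeom.pmf(m, N, n1, n2)        # P(m | N, n1, n2)
    pri = 1.0 / N if prior == "one_over_N" else np.ones_like(N, dtype=float)
    post = like * pri
    post /= post.sum()
    return N, post

N, post = posterior_N(n1=120, n2=100, m=15)
print("posterior mean of N:", round(float((N * post).sum()), 1))
```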

20.
The author considers a reparameterized version of the Bayesian ordinal cumulative link regression model as a tool for exploring relationships between covariates and “cutpoint” parameters. The use of this parameterization allows one to fit models using the leapfrog hybrid Monte Carlo method, and to bypass latent variable data augmentation and the slow convergence of the cutpoints which it usually entails. The proposed Gibbs sampler is not model specific and can be easily modified to handle different link functions. The approach is illustrated by considering data from a pediatric radiology study.
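One common way to reparameterize ordered cutpoints so that an unconstrained sampler such as leapfrog hybrid Monte Carlo can be used is to work with a first cutpoint plus log-increments, as sketched below for a cumulative (proportional-odds) logit model. This is a generic illustration of the idea; the article's exact parameterization and link handling may differ.

```python
import numpy as np
from scipy.special import expit

def cumulative_logit_probs(x, beta, raw_cut):
    """Category probabilities in a cumulative (proportional-odds) logit model
    where the ordered cutpoints are reparameterized as a first cutpoint plus
    exponentiated increments, so no ordering constraint is needed on raw_cut."""
    cuts = raw_cut[0] + np.concatenate([[0.0], np.cumsum(np.exp(raw_cut[1:]))])
    eta = x @ beta
    cdf = expit(cuts[None, :] - eta[:, None])          # P(Y <= c | x) at each cutpoint
    cdf = np.concatenate([np.zeros((len(x), 1)), cdf, np.ones((len(x), 1))], axis=1)
    return np.diff(cdf, axis=1)                        # per-category probabilities

x = np.array([[0.5, 1.0], [-0.2, 0.3]])
print(cumulative_logit_probs(x, beta=np.array([0.8, -0.4]),
                             raw_cut=np.array([-1.0, 0.0, 0.5])))
```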

