Similar Literature
20 similar documents found.
1.
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms where there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution, and resorts to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show they have some attractive features compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked that, intuitively, one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to those of the standard approach. Whilst for any specific study the operating characteristics of any selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
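As a concrete illustration of the mixture-prior update in the normal case, the sketch below computes the posterior for a mean under a two-component (precise plus diffuse) normal prior with known sampling variance. The weights, means, and variances are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def mixture_posterior(ybar, n, s2, w, m, v):
    """Posterior of a normal mean under a two-component normal mixture prior.

    ybar, n, s2 : sample mean, size, and known sampling variance
    w, m, v     : arrays of prior weights, means, variances (one per component)
    Returns updated weights, component posterior means, component posterior variances.
    """
    se2 = s2 / n                                   # variance of the sample mean
    post_v = 1.0 / (1.0 / v + 1.0 / se2)           # conjugate posterior variances
    post_m = post_v * (m / v + ybar / se2)         # conjugate posterior means
    # Each component's weight is rescaled by its prior predictive density of ybar
    marg = norm.pdf(ybar, loc=m, scale=np.sqrt(v + se2))
    post_w = w * marg / np.sum(w * marg)
    return post_w, post_m, post_v

# Informative component centred at 0.5, diffuse component centred at 0
w, m, v = np.array([0.8, 0.2]), np.array([0.5, 0.0]), np.array([0.05, 10.0])
print(mixture_posterior(ybar=-0.3, n=20, s2=1.0, w=w, m=m, v=v))
```

With the sample mean far from the informative component, most posterior weight shifts to the diffuse component, which is the mechanism behind the wider credible intervals under prior-data conflict described above.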

2.
A standard two-arm randomised controlled trial usually compares an intervention to a control treatment, with equal numbers of patients randomised to each treatment arm, and only data from within the current trial are used to assess the treatment effect. Historical data are used when designing new trials and have recently been considered for use in the analysis when the required number of patients under a standard trial design cannot be achieved. Incorporating historical control data could lead to more efficient trials, reducing the number of controls required in the current study when the historical and current control data agree. However, when the data are inconsistent, there is potential for biased treatment effect estimates, inflated type I error and reduced power. We introduce two novel approaches for binary data which discount historical data based on their agreement with the current trial controls: an equivalence approach and an approach based on tail-area probabilities. An adaptive design is used in which the allocation ratio is adapted at the interim analysis, randomising fewer patients to control when there is agreement. The historical data are down-weighted in the analysis using the power prior approach with a fixed power. We compare the operating characteristics of the proposed design to historical data methods in the literature: the modified power prior, the commensurate prior, and the robust mixture prior. The equivalence probability weight approach is intuitive and its operating characteristics can be calculated exactly. Furthermore, the equivalence bounds can be chosen to control the maximum possible inflation in type I error.
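A minimal sketch of the fixed-power down-weighting used in the analysis: with a beta prior, historical control counts enter the posterior raised to a power delta in [0, 1]. The hyperparameters, counts, and delta below are illustrative, not taken from the paper.

```python
from scipy.stats import beta

def power_prior_posterior(yc, nc, yh, nh, delta, a0=1.0, b0=1.0):
    """Beta posterior for a control response rate with historical data
    down-weighted by a fixed power delta in [0, 1]."""
    a = a0 + delta * yh + yc
    b = b0 + delta * (nh - yh) + (nc - yc)
    return beta(a, b)

post = power_prior_posterior(yc=12, nc=40, yh=150, nh=500, delta=0.5)
print(post.mean(), post.interval(0.95))   # posterior mean and 95% credible interval
```

Setting delta = 0 ignores the historical controls entirely; delta = 1 pools them with the current data.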

3.
Whilst innovative Bayesian approaches are increasingly used in clinical studies, in the preclinical area Bayesian methods appear to be rarely used in the reporting of pharmacology data. This is particularly surprising in the context of regularly repeated in vivo studies, where there is a considerable amount of data from historical control groups which has potential value. This paper describes our experience with introducing Bayesian analysis for such studies using a Bayesian meta-analytic predictive approach. This leads naturally either to an informative prior for the control group as part of a full Bayesian analysis of the next study, or to a predictive distribution that replaces the control group entirely. We use quality control charts to illustrate study-to-study variation to the scientists and describe informative priors in terms of their approximate effective numbers of animals. We describe two case studies of animal models: the lipopolysaccharide-induced cytokine release model used in inflammation and the novel object recognition model used to screen cognitive enhancers, both of which show the advantage of a Bayesian approach over the standard frequentist analysis. We conclude that using Bayesian methods in stable repeated in vivo studies can result in a more effective use of animals, either by reducing the total number of animals used or by increasing the precision of key treatment differences. This will lead to clearer results and supports the "3Rs initiative" to Refine, Reduce and Replace animals in research. Copyright © 2016 John Wiley & Sons, Ltd.
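On expressing an informative prior as an "effective number of animals": for a normal prior with variance tau^2 set against a per-animal observation variance sigma^2, a common approximation is ESS ~ sigma^2 / tau^2. This generic rule, with made-up numbers, is sketched below; it is not claimed to be the paper's exact calculation.

```python
def prior_ess(sigma2, tau2):
    """Approximate effective sample size of a N(m, tau2) prior when one
    animal contributes an observation with variance sigma2: ESS ~ sigma2 / tau2."""
    return sigma2 / tau2

# e.g. between-animal variance 4.0, MAP prior variance 0.5 -> worth about 8 animals
print(prior_ess(sigma2=4.0, tau2=0.5))
```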

4.
This paper deals with the Bayesian analysis of additive mixed model experiments. Consider b randomly chosen subjects who respond once to each of t treatments. The subjects are treated as random effects and the treatment effects are fixed. Suppose that some prior information is available, thus motivating a Bayesian analysis. The Bayesian computation, however, can be difficult in this situation, especially when a large number of treatments is involved. Three computational methods are suggested to perform the analysis. The exact posterior density of any parameter of interest can be simulated based on random realizations taken from a restricted multivariate t distribution. The density can also be simulated using Markov chain Monte Carlo methods. The simulated density is accurate when a large number of random realizations is taken; however, it may take a substantial amount of computer time when many treatments are involved. An alternative Laplacian approximation is discussed. The Laplacian method produces smooth and very accurate approximations to posterior densities, and takes only seconds of computer time. An example of a pipeline cracks experiment is used to illustrate the Bayesian approaches and the computational methods.
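A hedged one-dimensional sketch of the Laplacian idea: approximate the posterior by a normal density centred at the mode, with variance given by the inverse curvature of the log posterior there. The Poisson-Gamma example and all numbers are illustrative, not the paper's mixed model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Laplace approximation: replace the posterior by a normal centred at the
# mode, with variance equal to the inverse curvature of the log posterior.
y = np.array([3, 5, 4, 6, 2])          # illustrative Poisson counts
a, b = 2.0, 1.0                         # Gamma(a, b) prior on the rate

def neg_log_post(theta):
    if theta <= 0:
        return np.inf
    return -((a - 1 + y.sum()) * np.log(theta) - (b + len(y)) * theta)

mode = minimize_scalar(neg_log_post, bounds=(1e-6, 50), method="bounded").x
h = 1e-4                                # finite-difference curvature at the mode
curv = (neg_log_post(mode + h) - 2 * neg_log_post(mode) + neg_log_post(mode - h)) / h**2
print("Laplace approx: N(mean=%.3f, sd=%.3f)" % (mode, 1 / np.sqrt(curv)))
```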

5.
A novel framework is proposed for the estimation of multiple sinusoids from irregularly sampled time series. This spectral analysis problem is addressed as an under-determined inverse problem, where the spectrum is discretized on an arbitrarily thin frequency grid. As we focus on line spectra estimation, the solution must be sparse, i.e. the amplitude of the spectrum must be zero almost everywhere. Such prior information is taken into account within the Bayesian framework. Two models are used to account for the prior sparseness of the solution, namely a Laplace prior and a Bernoulli–Gaussian prior, associated with optimization and stochastic sampling algorithms, respectively. Such approaches are efficient alternatives to the usual sequential prewhitening methods, especially in the case of strong sampling aliases perturbing the Fourier spectrum. Both methods should be intensively tested on real data sets by physicists.
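At the MAP level a Laplace prior on the amplitudes corresponds to an l1 penalty, so the optimization route can be sketched as a lasso on an over-complete cosine/sine dictionary evaluated at the irregular sampling times. This is the standard l1 analogue, not the authors' exact algorithm; grid, penalty, and signal are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 80))                  # irregular sampling times
y = np.sin(2 * np.pi * 1.3 * t) + 0.1 * rng.standard_normal(80)

freqs = np.linspace(0.05, 3.0, 300)                  # thin frequency grid
A = np.hstack([np.cos(2 * np.pi * freqs * t[:, None]),
               np.sin(2 * np.pi * freqs * t[:, None])])

# MAP estimate under a Laplace prior on the amplitudes = lasso solution
fit = Lasso(alpha=0.05, max_iter=50000).fit(A, y)
amp = np.hypot(fit.coef_[:300], fit.coef_[300:])     # amplitude per grid frequency
print("detected frequencies:", freqs[amp > 0.1])
```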

6.
Traditionally, noninferiority hypotheses have been tested using a frequentist method with a fixed margin. Given that information for the control group is often available from previous studies, it is interesting to consider a Bayesian approach in which information is "borrowed" for the control group to improve efficiency. However, construction of an appropriate informative prior can be challenging. In this paper, we consider a hybrid Bayesian approach for testing noninferiority hypotheses in studies with a binary endpoint. To account for heterogeneity between the historical information and the current trial for the control group, a dynamic P value–based power prior parameter is proposed to adjust the amount of information borrowed from the historical data. This approach extends the simple test-then-pool method to allow a continuous discounting power parameter. An adjusted α level is also proposed to better control the type I error. Simulations are conducted to investigate the performance of the proposed method and to make comparisons with other methods including test-then-pool and hierarchical modeling. The methods are illustrated with data from vaccine clinical trials.
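A hedged sketch of a P value–based power parameter for binary control data: test agreement between historical and current control rates, then let the borrowing weight depend on the resulting p value. The paper's exact mapping from p value to power parameter differs; the choice delta = p below is purely illustrative, as are all counts.

```python
import numpy as np
from scipy.stats import beta, norm

def pvalue_power_posterior(yc, nc, yh, nh, a0=1.0, b0=1.0):
    """Hybrid borrowing sketch: discount historical controls by a power
    parameter tied to the two-sample test of historical vs current rates."""
    p1, p2 = yh / nh, yc / nc
    p_pool = (yh + yc) / (nh + nc)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / nh + 1 / nc))
    z = (p1 - p2) / se
    pval = 2 * norm.sf(abs(z))          # two-sided p value of the agreement test
    delta = pval                        # illustrative choice: borrow more when data agree
    return beta(a0 + delta * yh + yc, b0 + delta * (nh - yh) + (nc - yc))

post = pvalue_power_posterior(yc=18, nc=60, yh=140, nh=450)
print(post.mean())
```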

7.
Models that involve an outcome variable, covariates, and latent variables are frequently the target for estimation and inference. The presence of missing covariate or outcome data presents a challenge, particularly when missingness depends on the latent variables. This missingness mechanism is called latent ignorable or latent missing at random and is a generalisation of missing at random. Several authors have previously proposed approaches for handling latent ignorable missingness, but these methods rely on prior specification of the joint distribution for the complete data. In practice, specifying the joint distribution can be difficult and/or restrictive. We develop a novel sequential imputation procedure for imputing covariate and outcome data for models with latent variables under latent ignorable missingness. The proposed method does not require a joint model; rather, we use results under a joint model to inform imputation with less restrictive modelling assumptions. We discuss identifiability and convergence-related issues, and simulation results are presented in several modelling settings. The method is motivated and illustrated by a study of head and neck cancer recurrence.

8.
Seamless phase II/III clinical trials are conducted in two stages, with treatment selection at the first stage. In the first stage, patients are randomized to a control or one of k > 1 experimental treatments. At the end of this stage, interim data are analysed and a decision is made concerning which experimental treatment should continue to the second stage. If the primary endpoint is observable only after some period of follow-up, at the interim analysis data may be available on some early outcome for a larger number of patients than those for whom the primary endpoint is available. These early endpoint data can thus be used for treatment selection. For two previously proposed approaches, the power has been shown to be greater for one method or the other depending on the true treatment effects and correlations. We propose a new approach that builds on the previously proposed approaches and uses data available at the interim analysis to estimate these parameters and then, on the basis of these estimates, chooses the treatment selection method with the highest probability of correctly selecting the most effective treatment. This method is shown to perform well compared with the two previously described methods for a wide range of true parameter values. In most cases, the performance of the new method is either similar to or, in some cases, better than that of either of the two previously proposed methods. © 2014 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.

9.
Empirical Bayes is a versatile approach to "learn from a lot" in two ways: first, from a large number of variables and, second, from a potentially large amount of prior information, for example, stored in public repositories. We review applications of a variety of empirical Bayes methods to several well-known model-based prediction methods, including penalized regression, linear discriminant analysis, and Bayesian models with sparse or dense priors. We discuss "formal" empirical Bayes methods that maximize the marginal likelihood, but also more informal approaches based on other data summaries. We contrast empirical Bayes to cross-validation and full Bayes and discuss hybrid approaches. To study the relation between the quality of an empirical Bayes estimator and p, the number of variables, we consider a simple empirical Bayes estimator in a linear model setting. We argue that empirical Bayes is particularly useful when the prior contains multiple parameters, which model a priori information on variables termed "co-data". In particular, we present two novel examples that allow for co-data: first, a Bayesian spike-and-slab setting that facilitates inclusion of multiple co-data sources and types and, second, a hybrid empirical Bayes–full Bayes ridge regression approach for estimation of the posterior predictive interval.
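In the spirit of the "simple empirical Bayes estimator in a linear model setting", here is a hedged normal-means sketch: the prior variance tau^2 is estimated by marginal maximum likelihood and plugged into the posterior mean. This is a generic textbook illustration, not necessarily the estimator studied in the paper.

```python
import numpy as np

def eb_shrinkage(y):
    """Empirical Bayes for y_i ~ N(theta_i, 1) with theta_i ~ N(0, tau2):
    estimate tau2 by marginal maximum likelihood, then shrink."""
    tau2_hat = max(np.mean(y**2) - 1.0, 0.0)   # marginal: y_i ~ N(0, 1 + tau2)
    shrink = tau2_hat / (1.0 + tau2_hat)       # posterior-mean multiplier
    return shrink * y

rng = np.random.default_rng(1)
theta = rng.normal(0, 0.5, 200)
y = theta + rng.standard_normal(200)
print("shrinkage factor:", eb_shrinkage(y)[0] / y[0])
```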

10.
When historical data are available, incorporating them in an optimal way into the current data analysis can improve the quality of statistical inference. In Bayesian analysis, one can achieve this by using the quality-adjusted priors of Zellner, or the power priors of Ibrahim and coauthors. These rules are constructed by raising the prior and/or the sample likelihood to exponents that act as measures of the quality of the prior or of the proximity of the historical data to the current data. This paper presents a general, optimal procedure that unifies these rules and is derived by minimizing a Kullback–Leibler divergence under a divergence constraint. We show that the exponent values are directly related to the divergence constraint set by the user and investigate the effect of this choice both theoretically and through sensitivity analysis. We show that this approach yields '100% efficient' information processing rules in the sense of Zellner. Monte Carlo experiments are conducted to investigate the effect of the historical and current sample sizes on the optimum rule. Finally, we illustrate these methods by applying them to real data sets.
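For reference, the power-prior form of Ibrahim and coauthors that these rules generalise, with a single exponent a_0 on the historical likelihood acting as the compatibility weight (the paper's unified procedure chooses such exponents via the constrained Kullback–Leibler minimization described above):

```latex
\pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\,\pi_0(\theta),
\qquad 0 \le a_0 \le 1 .
```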

11.
This article considers the Phase I analysis of data when the quality of a process or product is characterized by a multiple linear regression model. This is usually referred to as the analysis of linear profiles in the statistical quality control literature. The literature includes several approaches for the analysis of simple linear regression profiles; little work, however, has been done on the analysis of multiple linear regression profiles. This article proposes a new approach for the analysis of Phase I multiple linear regression profiles. Using this approach, regardless of the number of explanatory variables used to describe it, the profile response is monitored using only three parameters: an intercept, a slope, and a variance. Using simulation, the performance of the proposed method is compared to that of existing methods for monitoring multiple linear profile data in terms of the probability of a signal. The advantage of the proposed method over existing methods is greatly improved detection of changes in the process parameters of high-dimensional linear profiles. The article also proposes useful diagnostic aids based on F-statistics to help identify the source of profile variation and the locations of out-of-control samples. Finally, the use of multiple linear profile methods is illustrated by a data set from a calibration application at the National Aeronautics and Space Administration (NASA) Langley Research Center.

12.
The posterior predictive p value (ppp) was invented as a Bayesian counterpart to classical p values. The methodology can be applied to discrepancy measures involving both data and parameters and can, hence, be targeted to check various modeling assumptions. The interpretation can, however, be difficult, since the distribution of the ppp value under the modeling assumptions varies substantially between cases. A calibration procedure has been suggested, treating the ppp value as a test statistic in a prior predictive test. In this paper, we suggest that a prior predictive test may instead be based on the expected posterior discrepancy, which is somewhat simpler, both conceptually and computationally. Since both these methods require the simulation of a large posterior parameter sample for each member of an equally large prior predictive data sample, we furthermore suggest looking for ways to match the given discrepancy by a computation-saving conflict measure. This approach is also based on simulations but only requires sampling from two different distributions representing two contrasting information sources about a model parameter. The conflict measure methodology is also more flexible in that it handles non-informative priors without difficulty. We compare the different approaches theoretically in some simple models and in a more complex applied example.
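A hedged sketch of computing a ppp by simulation in a conjugate normal model: draw parameters from the posterior, replicate data sets, and compare a discrepancy that involves both data and parameter. Model, prior, and discrepancy are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(1.0, 1.0, 30)                     # observed data, unit variance assumed

# Conjugate posterior for the mean under a vague N(0, 100) prior
n, prior_v = len(y), 100.0
post_v = 1.0 / (n + 1.0 / prior_v)
post_m = post_v * y.sum()

# ppp for the discrepancy T(y, theta) = max_i y_i - theta: does the
# model reproduce the largest deviation seen in the data?
S = 5000
theta = rng.normal(post_m, np.sqrt(post_v), S)   # posterior draws
t_obs = y.max() - theta                          # discrepancy on observed data
y_rep = rng.normal(theta[:, None], 1.0, (S, n))  # replicated data sets
t_rep = y_rep.max(axis=1) - theta
ppp = np.mean(t_rep >= t_obs)
print("posterior predictive p value:", ppp)
```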

13.
Bayesian methods are often used to reduce the sample sizes and/or increase the power of clinical trials. The right choice of the prior distribution is a critical step in Bayesian modeling. If the prior is not completely specified, historical data may be used to estimate it. In an empirical Bayesian analysis, the resulting prior can be used to produce the posterior distribution. In this paper, we describe a Bayesian Poisson model with a conjugate Gamma prior. The parameters of the Gamma distribution are estimated in the empirical Bayesian framework under two estimation schemes. The straightforward numerical search for the maximum likelihood (ML) solution using the marginal negative binomial distribution is occasionally infeasible, so we propose a simplification of the maximization procedure. The Markov chain Monte Carlo method is used to create a set of Poisson parameters from the historical count data. These Poisson parameters are used to uniquely define the Gamma likelihood function. Easily computable approximation formulae may be used to find the ML estimates of the parameters of the Gamma distribution. For the sample size calculations, the ML solution is replaced by its upper confidence limit to reflect the incomplete exchangeability of historical trials with the current study. The exchangeability is measured by the confidence interval for the historical rate of the events. With this prior, the formula for the sample size calculation is completely defined. Published in 2009 by John Wiley & Sons, Ltd.
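A hedged sketch of the two building blocks named above: the marginal negative binomial log likelihood that the ML search maximizes, assuming for simplicity that each historical count has its own Poisson rate drawn from Gamma(a, b), and the conjugate Gamma update for current-trial counts sharing one rate. The counts are made up.

```python
import numpy as np
from scipy.special import gammaln

def nb_marginal_loglik(a, b, y):
    """Marginal log likelihood when each historical count y_i has its own
    Poisson rate drawn from Gamma(a, b): y_i is negative binomial marginally."""
    y = np.asarray(y, float)
    return np.sum(a * np.log(b) - gammaln(a)
                  + gammaln(a + y) - (a + y) * np.log(b + 1.0)
                  - gammaln(y + 1.0))

def posterior(a, b, y):
    """Conjugate update for counts sharing one rate: Gamma(a + sum(y), b + n)."""
    return a + np.sum(y), b + len(y)

y_hist = np.array([4, 7, 5, 6, 3])
print(nb_marginal_loglik(2.0, 0.5, y_hist), posterior(2.0, 0.5, y_hist))
```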

14.
In the context of vaccine efficacy trials, where the incidence rate is very low and a very large sample size is usually required, incorporating historical data into a new trial is extremely attractive as a way to reduce the sample size and increase estimation precision. Nevertheless, for some infectious diseases, seasonal changes in incidence rates pose a serious challenge to borrowing historical data, and a critical question is how to borrow properly while maintaining acceptable tolerance to the between-trial heterogeneity that commonly arises from seasonal disease transmission. In this article, we extend a probability-based power prior, which determines the amount of information to be borrowed based on the agreement between the historical and current data, so that it applies when either a single historical trial or multiple historical trials are available, with a constraint on the amount of historical information to be borrowed. Simulations are conducted to compare the performance of the proposed method with other methods, including the modified power prior (MPP), the meta-analytic-predictive (MAP) prior and the commensurate prior methods. Furthermore, we illustrate the application of the proposed method for trial design in a practical setting.

15.
In recent years, advances in Markov chain Monte Carlo techniques have had a major influence on the practice of Bayesian statistics. An interesting but hitherto largely underexplored corollary of this fact is that Markov chain Monte Carlo techniques make it practical to consider broader classes of informative priors than have been used previously. Conjugate priors, long the workhorse of classic methods for eliciting informative priors, have their roots in a time when modern computational methods were unavailable. In the current environment more attractive alternatives are practicable. A reappraisal of these classic approaches is undertaken, and principles for generating modern elicitation methods are described. A new prior elicitation methodology in accord with these principles is then presented.

16.
This paper presents a method of fitting factorial models to recidivism data consisting of the (possibly censored) times to 'fail' of individuals, in order to test for differences between groups. Here 'failure' means rearrest, reconviction or reincarceration, etc. A proportion P of the sample is assumed to be 'susceptible' to failure, i.e. to fail eventually, while the remaining 1-P are 'immune' and never fail. Thus failure may be described in two ways: by the probability P that an individual ever fails again (the 'probability of recidivism'), and by the rate of failure Λ for the susceptibles. Related analyses have been proposed previously; this paper argues that a factorial approach, as opposed to the regression approaches advocated before, offers simplified analysis and interpretation of these kinds of data. The methods proposed, which are also applicable in medical statistics and reliability analyses, are demonstrated on data sets in which the factors are Parole Type (released to freedom or on parole), Age group (≤ 20 years, 20–40 years, > 40 years), and Marital Status. The outcome (failure) is a return to prison following first or second release.
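A hedged sketch of the likelihood behind this description, assuming (as the constant rate Λ suggests) exponential failure times for the susceptibles: an individual who fails at time t contributes a failure density, while a censored individual contributes the probability of not yet having failed, which mixes the immune fraction with the surviving susceptibles.

```latex
L(P,\Lambda) \;=\; \prod_{i\,:\,\text{failed}} P\,\Lambda\,e^{-\Lambda t_i}
\;\times\; \prod_{j\,:\,\text{censored}} \Bigl(1 - P + P\,e^{-\Lambda t_j}\Bigr).
```

In the factorial approach, P and Λ would then be allowed to differ across the levels of Parole Type, Age group, and Marital Status.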

17.
This paper considers the problem of undertaking a predictive analysis from a regression model when proper conjugate priors are used. It shows how the prior information can be incorporated as a virtual experiment by augmenting the data, and it derives expressions for both the prior and the posterior predictive densities. The results obtained are of considerable practical importance to practitioners of Bayesian regression methods.
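The "virtual experiment" device can be sketched generically as follows (with illustrative prior settings): append pseudo-rows R satisfying R'R = prior precision, with pseudo-responses R b0, so that ordinary least squares on the augmented data returns the posterior mean of the coefficients.

```python
import numpy as np

def augmented_regression(X, y, b0, V0_inv):
    """Conjugate normal prior as a 'virtual experiment': append pseudo-data
    rows R (with R'R equal to the prior precision V0_inv) and responses R @ b0,
    then run OLS. The augmented least-squares solution is the posterior mean."""
    R = np.linalg.cholesky(V0_inv).T        # upper-triangular, R.T @ R = V0_inv
    X_aug = np.vstack([X, R])
    y_aug = np.concatenate([y, R @ b0])
    beta_post, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
    return beta_post

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(25), rng.standard_normal(25)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(25)
print(augmented_regression(X, y, b0=np.zeros(2), V0_inv=0.1 * np.eye(2)))
```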

18.
This article is concerned with the comparison of the P-value and a Bayesian measure of evidence for a point null hypothesis about the variance of a Normal distribution with unknown mean. First, using a fixed prior for the parameter under test, the posterior probability is obtained and compared with the P-value when an appropriate prior is used for the mean parameter. Second, lower bounds on the posterior probability of H0 over a reasonable class of priors are compared with the P-value. It is shown that, even in the presence of nuisance parameters, these two approaches can lead to different results in statistical inference.
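For orientation, the standard form of the posterior probability being bounded, with pi_0 the prior mass on H0, f(x | sigma_0^2) the marginal under the null (after handling the nuisance mean), and m_g the marginal under a prior g on the alternative; the lower bound replaces m_g by its supremum over the class G:

```latex
P(H_0 \mid x) \;=\; \Bigl[\,1 + \frac{1-\pi_0}{\pi_0}\,
\frac{m_g(x)}{f(x \mid \sigma_0^2)}\Bigr]^{-1},
\qquad
\inf_{g \in G} P(H_0 \mid x) \;=\; \Bigl[\,1 + \frac{1-\pi_0}{\pi_0}\,
\frac{\sup_{g \in G} m_g(x)}{f(x \mid \sigma_0^2)}\Bigr]^{-1}.
```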

19.
Gene copy number (GCN) changes are common characteristics of many genetic diseases. Comparative genomic hybridization (CGH) is a technology widely used today to screen for GCN changes in mutant cells genome-wide at high resolution. Statistical methods for analyzing such CGH data are still evolving. Existing methods are either frequentist or fully Bayesian. The former often has a computational advantage, while the latter can incorporate prior information into the model but can be misleading when sound prior information is unavailable. In an attempt to combine the advantages of both approaches, we develop a Bayesian-frequentist hybrid approach, in which a subset of the model parameters is inferred by Bayesian methods and the rest by frequentist ones. This hybrid approach offers advantages over either method used alone, especially when sound prior information is available on part of the parameters and the sample size is relatively small. Spatial dependence and the false discovery rate are also discussed, and the parameter estimation is efficient. As an illustration, we used the proposed hybrid approach to analyze a real CGH data set.

20.
It is important to study historical temperature time series prior to the industrial revolution so that one can view the current global warming trend from a long-term historical perspective. Because there are no instrumental records of such historical temperature data, climatologists have been interested in reconstructing historical temperatures using various proxy time series. In this paper, the authors examine a state-space model approach for historical temperature reconstruction which makes use not only of the proxy data but also of information on external forcings. A challenge in the implementation of this approach is the estimation of the parameters of the state-space model. The authors developed two maximum likelihood methods for parameter estimation and studied the efficiency and asymptotic properties of the associated estimators through a combination of theoretical and numerical investigations. The Canadian Journal of Statistics 38: 488–505; 2010 © 2010 Crown in the right of Canada
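Maximum likelihood for a linear Gaussian state-space model is typically computed via the Kalman filter's prediction-error decomposition; below is a hedged sketch for a toy local-level model, not the paper's proxy-and-forcings model. All data are simulated and the parametrization is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def local_level_loglik(params, y):
    """Prediction-error-decomposition log likelihood of a local-level
    state-space model, computed by the Kalman filter."""
    q, r = np.exp(params)                 # state and observation variances (>0)
    x, P = y[0], 1.0                      # initialise at the first observation
    ll = 0.0
    for t in range(1, len(y)):
        P = P + q                         # predict
        F = P + r                         # innovation variance
        v = y[t] - x                      # innovation
        ll += -0.5 * (np.log(2 * np.pi * F) + v**2 / F)
        K = P / F                         # update
        x, P = x + K * v, (1 - K) * P
    return ll

rng = np.random.default_rng(4)
truth = np.cumsum(rng.normal(0, 0.3, 300))
y = truth + rng.normal(0, 1.0, 300)
fit = minimize(lambda p: -local_level_loglik(p, y), x0=[0.0, 0.0], method="Nelder-Mead")
print("ML variances (state, obs):", np.exp(fit.x))
```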
