Similar Documents (20 results)
1.
Practical Bayesian data analysis involves manipulating and summarizing simulations from the posterior distribution of the unknown parameters. By manipulation we mean computing posterior distributions of functions of the unknowns and generating posterior predictive distributions. The results need to be summarized both numerically and graphically. We introduce, and implement in R, an object-oriented programming paradigm based on a random variable object type that is implicitly represented by simulations. This makes it possible to define vector and array objects that may contain both random and deterministic quantities, together with syntax rules that allow these objects to be treated like ordinary numeric vectors or arrays, providing a solution to various problems encountered in Bayesian computing with posterior simulations. We illustrate the use of this new programming environment with examples of Bayesian computing, demonstrating missing-value imputation, nonlinear summary of regression predictions, and posterior predictive checking.
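The simulation-backed random-variable idea can be sketched outside R as well. The following is a minimal, hypothetical Python analogue (the class name `RV` and its methods are illustrative choices, not the paper's R implementation):

```python
import numpy as np

class RV:
    """A random variable implicitly represented by posterior simulations."""

    def __init__(self, sims):
        self.sims = np.asarray(sims, dtype=float)

    @staticmethod
    def _lift(other):
        return other.sims if isinstance(other, RV) else other

    # Arithmetic acts draw-by-draw, so a function of the unknowns is
    # again represented by simulations.
    def __add__(self, other):
        return RV(self.sims + self._lift(other))

    def __mul__(self, other):
        return RV(self.sims * self._lift(other))

    def __pow__(self, p):
        return RV(self.sims ** p)

    def mean(self):
        return self.sims.mean()

    def interval(self, prob=0.95):
        a = (1.0 - prob) / 2.0
        return tuple(np.quantile(self.sims, [a, 1.0 - a]))

rng = np.random.default_rng(0)
theta = RV(rng.normal(1.0, 0.1, 10_000))  # posterior draws of theta
sigma = RV(rng.gamma(2.0, 0.5, 10_000))   # posterior draws of sigma

# Posterior of a derived quantity, written like ordinary arithmetic:
derived = theta * theta + sigma ** 2
lo, hi = derived.interval()
```

Because every operation maps over the stored draws, posterior summaries of any derived quantity come for free from the same simulations.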

2.
This paper explores the use of data augmentation in settings beyond the standard Bayesian one. In particular, we show that, after proposing an appropriate generalised data-augmentation principle, it is possible to extend the range of sampling situations in which fiducial methods can be applied by constructing Markov chains whose stationary distributions represent valid posterior inferences on model parameters. Some properties of these chains are presented and a number of open questions are discussed. We also use the approach to draw out connections between classical and Bayesian approaches in some standard settings.

3.
Copula, marginal distributions and model selection: a Bayesian note
Copula functions and marginal distributions are combined to produce multivariate distributions. We show the advantages of estimating all parameters of these models using the Bayesian approach, which can be done with standard Markov chain Monte Carlo algorithms. Deviance-based model selection criteria are also discussed for copula models, since they are invariant under monotone increasing transformations of the marginals; we focus on the deviance information criterion. The joint estimation takes into account the full dependence structure of the parameters' posterior distributions in our chosen model selection criteria. Two Monte Carlo studies are conducted to show that model identification improves when the model parameters are jointly estimated. We study the Bayesian estimation of all unknown quantities at once, considering bivariate copula functions and three known marginal distributions.
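The basic construction, with a copula supplying the dependence and quantile functions supplying the marginals, can be sketched by simulation. A Gaussian copula and exponential marginals are illustrative choices here; the paper's MCMC estimation of the parameters is not reproduced:

```python
import numpy as np
from math import erf, sqrt

def gaussian_copula_sample(n, rho, inv_cdfs, seed=0):
    """Draw from a bivariate distribution built from a Gaussian copula
    and arbitrary marginal quantile functions."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    # Probability integral transform: normal scores -> dependent uniforms.
    u = np.vectorize(lambda v: 0.5 * (1.0 + erf(v / sqrt(2.0))))(z)
    # Marginal quantile functions turn the uniforms into the target marginals.
    return np.column_stack([f(u[:, j]) for j, f in enumerate(inv_cdfs)])

# Exponential marginals with rates 2.0 and 0.5, dependence rho = 0.8
xy = gaussian_copula_sample(
    20_000, 0.8,
    [lambda u: -np.log1p(-u) / 2.0,    # Exp(rate=2.0), mean 0.5
     lambda u: -np.log1p(-u) / 0.5])   # Exp(rate=0.5), mean 2.0
```

The same dependence structure can be paired with any marginals by swapping the quantile functions, which is exactly the modularity the copula decomposition buys.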

4.
The computational demand required to perform inference using Markov chain Monte Carlo methods often obstructs a Bayesian analysis. This may be a result of large datasets, complex dependence structures, or expensive computer models. In these instances, the posterior distribution is replaced by a computationally tractable approximation, and inference is based on this working model. However, the error that is introduced by this practice is not well studied. In this paper, we propose a methodology that allows one to examine the impact on statistical inference by quantifying the discrepancy between the intractable and working posterior distributions. This work provides a structure to analyse model approximations with regard to the reliability of inference and computational efficiency. We illustrate our approach through a spatial analysis of yearly total precipitation anomalies where covariance tapering approximations are used to alleviate the computational demand associated with inverting a large, dense covariance matrix.
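Covariance tapering itself is easy to illustrate: multiply a dense covariance elementwise by a compactly supported correlation so that the working matrix is sparse. A sketch with an exponential covariance and a spherical taper (both illustrative choices, not the paper's data or settings):

```python
import numpy as np

def spherical_taper(d, theta):
    """Compactly supported correlation function: zero beyond range theta."""
    r = np.clip(d / theta, 0.0, 1.0)
    return (1.0 - r) ** 2 * (1.0 + r / 2.0) * (d < theta)

# Dense exponential covariance over a 1-D grid of locations
s = np.linspace(0.0, 10.0, 200)
d = np.abs(s[:, None] - s[None, :])
sigma = np.exp(-d / 2.0)

# Tapered working covariance: the elementwise (Schur) product with a
# valid compactly supported correlation stays positive definite but is
# mostly zeros, so factorizations and solves become much cheaper.
sigma_tap = sigma * spherical_taper(d, theta=1.5)
sparsity = np.mean(sigma_tap == 0.0)
```

The discrepancy the paper studies is precisely the gap between inference under `sigma` and under the sparse working matrix `sigma_tap`.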

5.
ABSTRACT

In Bayesian theory, calculating a posterior probability distribution is highly important but typically difficult. Several methods have therefore been proposed to address this problem, the most popular being asymptotic expansions of posterior distributions. In this paper, we propose an alternative approach, called the random weighting method, for scaled posterior distributions, and give an ideal convergence rate, o(n^(-1/2)), which serves as the theoretical guarantee for numerical simulation methods.

6.
Suppose in a distribution problem the sample information W is split into two pieces W₁ and W₂, and the parameters involved are split into two sets: π, containing the parameters of interest, and θ, containing nuisance parameters. It is shown that, under certain conditions, the posterior distribution of π does not depend on the data W₂, which can thus be ignored. This also has consequences for the predictive distribution of future (or missing) observations. In fact, under similar conditions, the predictive distributions using W or just W₁ are identical.

7.
Introduction: We use data from Spain on traffic accidents on roads and motorways in May 2004 to quantify the statistical association between quick medical response time and mortality rate. Method: Probit and logit parameters are estimated by a Bayesian method in which samples from the posterior densities are obtained through an MCMC simulation scheme. We provide posterior credible intervals and posterior partial effects of a quick medical response at several time levels over which we express our prior beliefs. Results: A reduction of 5 min from a 25-min response-time level is associated with lower posterior probabilities of death in road and motorway accidents of 24% and 30%, respectively.
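A probit model with posterior samples from a random-walk Metropolis scheme, as the abstract describes, can be sketched as follows (the simulated data, flat priors, and tuning constants are illustrative assumptions, not the paper's):

```python
import numpy as np
from math import erf, sqrt, log

PHI = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))  # normal CDF

def probit_posterior(y, x, n_iter=4000, step=0.15, seed=1):
    """Random-walk Metropolis for a probit model P(y = 1) = Phi(a + b*x),
    with flat priors on (a, b)."""
    rng = np.random.default_rng(seed)

    def loglik(beta):
        p = np.clip(PHI(beta[0] + beta[1] * x), 1e-12, 1.0 - 1e-12)
        return np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

    beta = np.zeros(2)
    ll = loglik(beta)
    draws = []
    for _ in range(n_iter):
        prop = beta + step * rng.normal(size=2)
        ll_prop = loglik(prop)
        if log(rng.uniform()) < ll_prop - ll:  # Metropolis accept/reject
            beta, ll = prop, ll_prop
        draws.append(beta.copy())
    return np.array(draws)

# Simulated data with true (a, b) = (-0.5, 1.0)
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = (rng.uniform(size=300) < PHI(-0.5 + 1.0 * x)).astype(float)

post = probit_posterior(y, x)
a_hat, b_hat = post[2000:].mean(axis=0)  # posterior means after burn-in
```

Credible intervals and partial effects of the kind reported in the abstract would then be computed directly from the retained draws.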

8.
ABSTRACT

In queuing theory, a major interest of researchers is studying the behavior and formation process and analyzing the performance characteristics of queues, particularly the traffic intensity, which is defined as the ratio between the arrival rate and the service rate. How these parameters can be estimated using some statistical inferential method is the mathematical problem treated here. This article aims to obtain better Bayesian estimates for the traffic intensity of M/M/1 queues, which, in Kendall notation, stand for Markovian single-server queues with infinite capacity. The Jeffreys prior is proposed to obtain the posterior and predictive distributions of some parameters of interest. Samples are obtained through simulation and some performance characteristics are analyzed. It is observed from the Bayes factor that the Jeffreys prior is competitive among informative and non-informative prior distributions, and presents the best performance in many of the cases tested.
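A simplified sketch of the posterior for the traffic intensity ρ = λ/μ: with independent exponential inter-arrival and service samples and the Jeffreys prior p(rate) ∝ 1/rate for each rate, both posteriors are gamma, so draws of ρ follow by simulation. This per-rate prior is an illustrative stand-in, not necessarily the paper's prior on ρ itself:

```python
import numpy as np

def traffic_intensity_posterior(inter_arrivals, services, n_draws=20_000, seed=0):
    """Posterior draws of rho = lambda / mu for an M/M/1 queue. With
    Jeffreys priors p(rate) ∝ 1/rate, each rate has a Gamma(n, T)
    posterior, with shape n = sample size and rate T = total observed time."""
    rng = np.random.default_rng(seed)
    a = np.asarray(inter_arrivals)
    s = np.asarray(services)
    lam = rng.gamma(len(a), 1.0 / a.sum(), n_draws)  # arrival-rate draws
    mu = rng.gamma(len(s), 1.0 / s.sum(), n_draws)   # service-rate draws
    return lam / mu

rng = np.random.default_rng(42)
inter_arrivals = rng.exponential(1.0 / 0.8, 500)  # true lambda = 0.8
services = rng.exponential(1.0 / 1.0, 500)        # true mu = 1.0

rho = traffic_intensity_posterior(inter_arrivals, services)
p_stable = np.mean(rho < 1.0)  # posterior probability the queue is stable
```

The posterior mass of ρ below 1 answers the operationally important question of whether the queue is stable.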

9.
In this article, a subjective Bayesian approach is followed to derive estimators for the parameters of the normal model by assuming a gamma-mixture class of prior distributions, which includes the gamma and the noncentral gamma as special cases. An innovative approach is proposed to find the analytical expression of the posterior density function when a complicated prior structure is assumed. The simulation studies and a real dataset illustrate the modeling advantages of this proposed prior and support some of the findings.

10.
We introduce two classes of multivariate log-skewed distributions with normal kernel: the log canonical fundamental skew-normal (log-CFUSN) and the log unified skew-normal. We also discuss some properties of the log-CFUSN family of distributions. These new classes of log-skewed distributions include the log-normal and multivariate log-skew-normal families as particular cases. We discuss some issues related to Bayesian inference in the log-CFUSN family of distributions, focusing mainly on how to model prior uncertainty about the skewing parameter. Based on the stochastic representation of the log-CFUSN family, we propose a data augmentation strategy for sampling from the posterior distributions. This proposed family is used to analyse the US national monthly precipitation data. We conclude that a high-dimensional skewing function leads to a better model fit.

11.
Kontkanen P., Myllymäki P., Silander T., Tirri H., Grünwald P. Statistics and Computing, 2000, 10(1): 39–54
In this paper we are interested in discrete prediction problems for a decision-theoretic setting, where the task is to compute the predictive distribution for a finite set of possible alternatives. This question is first addressed in a general Bayesian framework, where we consider a set of probability distributions defined by some parametric model class. Given a prior distribution on the model parameters and a set of sample data, one possible approach for determining a predictive distribution is to fix the parameters to the instantiation with the maximum a posteriori probability. A more accurate predictive distribution can be obtained by computing the evidence (marginal likelihood), i.e., the integral over all the individual parameter instantiations. As an alternative to these two approaches, we demonstrate how to use Rissanen's new definition of stochastic complexity for determining predictive distributions, and show how the evidence predictive distribution with the Jeffreys prior approaches the new stochastic complexity predictive distribution in the limit with increasing amount of sample data. To compare the alternative approaches in practice, each of the predictive distributions discussed is instantiated in the Bayesian network model family case. In particular, to determine the Jeffreys prior for this model family, we show how to compute the (expected) Fisher information matrix for a fixed but arbitrary Bayesian network structure. In the empirical part of the paper the predictive distributions are compared by using the simple tree-structured Naive Bayes model, which is used in the experiments for computational reasons. The experimentation with several public domain classification datasets suggests that the evidence approach produces the most accurate predictions in the log-score sense. The evidence-based methods are also quite robust in the sense that they predict surprisingly well even when only a small fraction of the full training set is used.
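The contrast between the MAP plug-in and the evidence (marginal-likelihood) predictive distribution is easiest to see in the Bernoulli case with a Beta prior, an illustrative stand-in for the paper's Bayesian network models:

```python
def map_predictive(s, n, a=0.5, b=0.5):
    """Plug-in predictive: fix theta at the mode of the Beta(s+a, n-s+b)
    posterior (requires s + a > 1 and n - s + b > 1)."""
    return (s + a - 1.0) / (n + a + b - 2.0)

def evidence_predictive(s, n, a=0.5, b=0.5):
    """Evidence (marginal-likelihood) predictive: integrate theta out of
    the Beta(s+a, n-s+b) posterior, giving (s + a) / (n + a + b)."""
    return (s + a) / (n + a + b)

# Jeffreys prior a = b = 1/2, after s = 3 successes in n = 10 trials
p_map = map_predictive(3, 10)       # 2.5 / 9
p_ev = evidence_predictive(3, 10)   # 3.5 / 11
gap = abs(p_map - p_ev)

# The two predictives agree in the large-sample limit
gap_large = abs(map_predictive(300, 1000) - evidence_predictive(300, 1000))
```

With little data the two rules differ noticeably; as n grows the gap vanishes, mirroring the asymptotic agreements discussed in the abstract.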

12.
13.
Some statistical data are most easily accessed in terms of record values. Examples include meteorology, hydrology and athletic events. Also, there are a number of industrial situations where experimental outcomes are a sequence of record-breaking observations. In this paper, Bayesian estimation for the two parameters of some life distributions, including Exponential, Weibull, Pareto and Burr type XII, is obtained based on upper record values. Prediction, either point or interval, of future upper record values is also presented from a Bayesian viewpoint. Some of the non-Bayesian results can be obtained as limiting cases of our results. Numerical computations are given to illustrate the results.
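Extracting upper records and the resulting conjugate update can be sketched for the exponential case: the likelihood of the first m upper records r₁ < … < r_m is rateᵐ · exp(−rate · r_m), so a gamma prior stays gamma. The hyperparameters and data below are illustrative:

```python
import numpy as np

def upper_records(x):
    """Extract the sequence of upper record values from observations x."""
    recs, best = [], -np.inf
    for v in x:
        if v > best:
            recs.append(v)
            best = v
    return recs

def exp_rate_posterior(records, a=1.0, b=1.0):
    """Exponential data: the likelihood of the first m upper records
    r_1 < ... < r_m is rate**m * exp(-rate * r_m), so a Gamma(a, b)
    prior updates to Gamma(a + m, b + r_m). Returns (shape, rate)."""
    m, r_m = len(records), records[-1]
    return a + m, b + r_m

x = [1.2, 0.7, 2.5, 2.1, 3.3, 0.9, 4.0]
recs = upper_records(x)                  # upper records: 1.2, 2.5, 3.3, 4.0
shape, rate = exp_rate_posterior(recs)   # Gamma(5, 5) posterior
post_mean = shape / rate                 # posterior mean of the rate
```

Note that only the number of records and the largest record enter the exponential-case posterior, which is what makes record data so tractable here.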

14.
The paper proposes a Markov chain Monte Carlo method for Bayesian analysis of general regression models with disturbances from the family of stable distributions with arbitrary characteristic exponent and skewness parameter. The method does not require data augmentation and is based on combining fast Fourier transforms of the characteristic function, to obtain the likelihood function, with a Metropolis random-walk chain to perform posterior analysis. Both a validation nonlinear regression and a nonlinear model for the Standard and Poor's composite price index illustrate the method.
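The likelihood-by-inversion idea rests on recovering the stable density from its characteristic function, since no closed form exists. A sketch for the symmetric case using direct quadrature (the paper uses FFTs; plain trapezoid-rule quadrature is substituted here for clarity):

```python
import numpy as np

def symmetric_stable_pdf(x, alpha, t_max=50.0, n=20_000):
    """Density of a standard symmetric alpha-stable law, obtained by
    numerically inverting its characteristic function exp(-|t|**alpha):
    f(x) = (1/pi) * integral_0^inf cos(t*x) * exp(-t**alpha) dt."""
    t = np.linspace(0.0, t_max, n)
    g = np.cos(t * x) * np.exp(-t ** alpha)
    dt = t[1] - t[0]
    integral = dt * (g.sum() - 0.5 * (g[0] + g[-1]))  # trapezoid rule
    return integral / np.pi

# Sanity checks against known special cases:
# alpha = 2 is N(0, 2), so f(0) = 1/(2*sqrt(pi)); alpha = 1 is Cauchy, f(0) = 1/pi
f0_gauss = symmetric_stable_pdf(0.0, 2.0)
f0_cauchy = symmetric_stable_pdf(0.0, 1.0)
```

In the paper's setting this inversion is evaluated on a grid via the FFT, and the resulting likelihood feeds the Metropolis random-walk chain.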

15.
Statistical meta‐analysis is mostly carried out with the help of the random-effect normal model, including the case of discrete random variables. We argue that the normal approximation is not always able to adequately capture the underlying uncertainty of the original discrete data. Furthermore, when we examine the influence of the prior distributions considered, in the presence of rare events, the results from this approximation can be very poor. In order to assess the robustness of the quantities of interest in meta-analysis with respect to the choice of priors, this paper proposes an alternative Bayesian model for binomial random variables with several zero responses. Particular attention is paid to the coherence between the prior distributions of the study model parameters and the meta-parameter. Thus, our method introduces a simple way to examine the sensitivity of these quantities to the dependence structure selected for the study. For illustrative purposes, an example with real data is analysed using the proposed Bayesian meta-analysis model for binomial sparse data. Copyright © 2016 John Wiley & Sons, Ltd.

16.
In this article, we develop exact inference for two populations that have a two-parameter exponential distribution with the same location parameter and different scale parameters when Type-II censoring is implemented on the two samples in a combined manner. We obtain the conditional maximum likelihood estimators (MLEs) of the three parameters. We then derive the exact distributions of these MLEs along with their moment generating functions. Based on general entropy loss function, Bayesian study about the parameters is presented. Finally, some simulation results and an illustrative example are presented to illustrate the methods of inference developed here.

17.
In this article we consider the sample size determination problem in the context of robust Bayesian parameter estimation of the Bernoulli model. Following a robust approach, we consider classes of conjugate Beta prior distributions for the unknown parameter. We assume that inference is robust if posterior quantities of interest (such as point estimates and limits of credible intervals) do not change too much as the prior varies in the selected classes of priors. For the sample size problem, we consider criteria based on predictive distributions of the lower bound, upper bound and range of the posterior quantity of interest. The sample size is selected so that, before observing the data, one is confident to observe a small value for the posterior range and, depending on design goals, a large (small) value of the lower (upper) bound of the quantity of interest. We also discuss relationships with, and comparisons to, non-robust and non-informative Bayesian methods.
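The robustness criterion, tracking how far a posterior quantity can move as the prior ranges over the chosen class, can be sketched for the posterior mean of a Beta-Bernoulli model (the particular finite prior class below is an illustrative assumption):

```python
def posterior_mean_range(s, n, prior_class):
    """Range of the Beta-Bernoulli posterior mean (s + a) / (n + a + b)
    as the Beta(a, b) prior varies over a finite class."""
    means = [(s + a) / (n + a + b) for a, b in prior_class]
    return min(means), max(means)

# An illustrative class of conjugate Beta priors
priors = [(1.0, 1.0), (2.0, 2.0), (0.5, 0.5), (1.0, 3.0), (3.0, 1.0)]

lo_s, hi_s = posterior_mean_range(6, 20, priors)       # small sample
lo_l, hi_l = posterior_mean_range(300, 1000, priors)   # large sample

spread_small = hi_s - lo_s   # how much the prior choice still matters
spread_large = hi_l - lo_l   # robustness improves with sample size
```

Choosing n so that this spread (evaluated predictively, before the data arrive) falls below a tolerance is exactly the flavor of sample-size criterion the abstract describes.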

18.
The problem of approximating an interval null or imprecise hypothesis test by a point null or precise hypothesis test under a Bayesian framework is considered. In the literature, some of the methods for solving this problem have used the Bayes factor for testing a point null and justified it as an approximation to the interval null. However, many authors recommend evaluating tests through the posterior odds, a Bayesian measure of evidence against the null hypothesis. It is of interest then to determine whether similar results hold when using the posterior odds as the primary measure of evidence. For the prior distributions under which the approximation holds with respect to the Bayes factor, it is shown that the posterior odds for testing the point null hypothesis does not approximate the posterior odds for testing the interval null hypothesis. In fact, in order to obtain convergence of the posterior odds, a number of restrictive conditions need to be placed on the prior structure. Furthermore, under a non-symmetrical prior setup, neither the Bayes factor nor the posterior odds for testing the imprecise hypothesis converges to the Bayes factor or posterior odds respectively for testing the precise hypothesis. To rectify this dilemma, it is shown that constraints need to be placed on the priors. In both situations, the class of priors constructed to ensure convergence of the posterior odds is not practically useful, thus questioning, from a Bayesian perspective, the appropriateness of point null testing in a problem better represented by an interval null. The theories developed are also applied to an epidemiological data set from White et al. (Can. Veterinary J. 30 (1989) 147–149) in order to illustrate and study priors for which the point null hypothesis test approximates the interval null hypothesis test. AMS Classification: Primary 62F15; Secondary 62A15

19.
The subject of this paper is Bayesian inference about the fixed and random effects of a mixed-effects linear statistical model with two variance components. It is assumed that a priori the fixed effects have a noninformative distribution and that the reciprocals of the variance components are distributed independently (of each other and of the fixed effects) as gamma random variables. It is shown that techniques similar to those employed in a ridge analysis of a response surface can be used to construct a one-dimensional curve that contains all of the stationary points of the posterior density of the random effects. The “ridge analysis” (of the posterior density) can be useful (from a computational standpoint) in finding the number and the locations of the stationary points and can be very informative about various features of the posterior density. Depending on what is revealed by the ridge analysis, a multivariate normal or multivariate-t distribution that is centered at a posterior mode may provide a satisfactory approximation to the posterior distribution of the random effects (which is of the poly-t form).

20.
Minimum Message Length (MML) is an invariant Bayesian point estimation technique which is also statistically consistent and efficient. We provide a brief overview of MML inductive inference (Wallace C.S. and Boulton D.M. 1968. Computer Journal, 11: 185–194; Wallace C.S. and Freeman P.R. 1987. J. Royal Statistical Society (Series B), 49: 240–252; Wallace C.S. and Dowe D.L. (1999). Computer Journal), and how it has both an information-theoretic and a Bayesian interpretation. We then outline how MML is used for statistical parameter estimation, and how the MML mixture modelling program, Snob (Wallace C.S. and Boulton D.M. 1968. Computer Journal, 11: 185–194; Wallace C.S. 1986. In: Proceedings of the Nineteenth Australian Computer Science Conference (ACSC-9), Vol. 8, Monash University, Australia, pp. 357–366; Wallace C.S. and Dowe D.L. 1994b. In: Zhang C. et al. (Eds.), Proc. 7th Australian Joint Conf. on Artif. Intelligence. World Scientific, Singapore, pp. 37–44. See http://www.csse.monash.edu.au/~dld/Snob.html) uses the message lengths from various parameter estimates to enable it to combine parameter estimation with selection of the number of components and estimation of the relative abundances of the components. The message length is (to within a constant) the logarithm of the posterior probability (not a posterior density) of the theory. So, the MML theory can also be regarded as the theory with the highest posterior probability. Snob currently assumes that variables are uncorrelated within each component, and permits multi-variate data from Gaussian, discrete multi-category (or multi-state or multinomial), Poisson and von Mises circular distributions, as well as missing data. Additionally, Snob can do fully-parameterised mixture modelling, estimating the latent class assignments in addition to estimating the number of components, the relative abundances of the components and the component parameters.
We also report on extensions of Snob for data that have sequential or spatial correlations between observations, or correlations between attributes.
