Similar Documents (20 results retrieved)
1.
Abstract. This article combines the strengths of objective and subjective Bayesian inference in specifying priors for inequality- and equality-constrained analysis of variance models. Objectivity lies in the use of training data to specify a prior distribution; subjectivity lies in the restrictions placed on that prior to formulate models. The aim of this article is to find the best model in a set of models specified using inequality and equality constraints on the model parameters. The models are evaluated with an encompassing prior approach, whose advantage is that only a prior for the unconstrained encompassing model needs to be specified; the priors for all constrained models can be derived from this encompassing prior. Different choices for this encompassing prior are considered and evaluated.
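As an illustration of the encompassing prior idea, the sketch below uses the familiar device of estimating the Bayes factor of an inequality-constrained model against the encompassing model as the ratio of posterior to prior mass assigned to the constrained region. The one-way layout, the known error variance, the conjugate normal prior, and all numerical values are assumptions of the sketch, not the article's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: three groups, known error variance (simplifying assumption)
y = {1: rng.normal(0.0, 1.0, 20),
     2: rng.normal(0.5, 1.0, 20),
     3: rng.normal(1.0, 1.0, 20)}
sigma2 = 1.0                        # assumed known for this sketch
prior_mean, prior_var = 0.0, 10.0   # vague encompassing prior on each group mean

def posterior_params(data):
    """Conjugate normal posterior for one group mean with known variance."""
    n, ybar = len(data), data.mean()
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + n * ybar / sigma2)
    return post_mean, post_var

n_draws = 200_000
prior_draws = rng.normal(prior_mean, np.sqrt(prior_var), size=(n_draws, 3))
post_draws = np.column_stack([
    rng.normal(m, np.sqrt(v), n_draws)
    for m, v in (posterior_params(y[g]) for g in (1, 2, 3))
])

def satisfies(draws):
    """Constrained model: mu_1 < mu_2 < mu_3."""
    return (draws[:, 0] < draws[:, 1]) & (draws[:, 1] < draws[:, 2])

f_prior = satisfies(prior_draws).mean()   # prior mass of the constrained region
f_post = satisfies(post_draws).mean()     # posterior mass of the constrained region
print(f"BF(constrained vs. encompassing) ~ {f_post / f_prior:.2f}")
```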

2.
Empirical estimates of statistical economic source data such as trade flows, greenhouse gas emissions, or employment figures are always subject to uncertainty (stemming from measurement errors or confidentiality), but information concerning that uncertainty is often missing. This article uses concepts from Bayesian inference and the maximum entropy principle to estimate the prior probability distribution, uncertainty, and correlations of source data when such information is not explicitly provided. In the absence of additional information, an isolated datum is described by a truncated Gaussian distribution, and if an uncertainty estimate is missing, its prior equals the best guess. When the sum of a set of disaggregate data is constrained to match an aggregate datum, it is possible to determine the prior correlations among the disaggregate data. If aggregate uncertainty is missing, all prior correlations are positive. If aggregate uncertainty is available, prior correlations can be all positive, all negative, or a mix of both. An empirical example is presented that reports relative uncertainties and correlation priors for the County Business Patterns database. In this example, relative uncertainties range from 1% to 80%, and 20% of data pairs exhibit correlations below -0.9 or above 0.9. Supplementary materials for this article are available online.

3.
Vector autoregressive (VAR) models are frequently used for forecasting and impulse response analysis. For both applications, shrinkage priors can help improve inference. In this article, we apply the Normal-Gamma shrinkage prior to the VAR with stochastic volatility and derive its relevant conditional posterior distributions. This framework imposes a set of normally distributed priors on the autoregressive coefficients and the covariance parameters of the VAR, along with Gamma priors on a set of local and global prior scaling parameters. In a second step, we modify this prior setup by introducing another layer of shrinkage with scaling parameters that push certain regions of the parameter space to zero. Two simulation exercises show that the proposed framework yields more precise estimates of model parameters and impulse response functions. In addition, a forecasting exercise applied to U.S. data shows that this prior performs well relative to other commonly used specifications in terms of point and density predictions. Finally, structural inference suggests that responses to monetary policy shocks appear reasonable.
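A minimal simulation of a local-global shrinkage hierarchy in the spirit of the Normal-Gamma prior: local scales are Gamma distributed and coefficients are conditionally normal. The parameterization, the fixed global scale, and the hyperparameter values are illustrative assumptions; the article's VAR-with-stochastic-volatility setup and its additional shrinkage layer are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative hyperparameters (assumptions, not the article's values)
theta = 0.3        # Gamma shape for the local scales; small values give heavy shrinkage
tau_global = 0.5   # fixed global scale in this sketch (the article treats it as random)
n_draws = 100_000

# Local-global hierarchy: lambda_j ~ Gamma(theta, rate=theta), beta_j | lambda_j ~ N(0, tau * lambda_j)
lam = rng.gamma(shape=theta, scale=1.0 / theta, size=n_draws)
beta_ng = rng.normal(0.0, np.sqrt(tau_global * lam))

# Reference: a plain normal prior with the same marginal variance (E[lambda] = 1)
beta_normal = rng.normal(0.0, np.sqrt(tau_global), size=n_draws)

for name, b in [("Normal-Gamma", beta_ng), ("plain Normal", beta_normal)]:
    print(f"{name:>13}: P(|beta| < 0.05) = {np.mean(np.abs(b) < 0.05):.3f}, "
          f"P(|beta| > 3 sd) = {np.mean(np.abs(b) > 3 * np.sqrt(tau_global)):.4f}")
```

The comparison shows the Normal-Gamma draws concentrating much more mass near zero while retaining heavier tails, which is the shrinkage behaviour the abstract describes.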

4.
Just as frequentist hypothesis tests have been developed to check model assumptions, prior predictive p-values and other Bayesian p-values check prior distributions as well as other model assumptions. These model checks not only suffer from the usual threshold dependence of p-values but also from the suppression of model uncertainty in subsequent inference. One solution is to transform the Bayesian and frequentist p-values used for model assessment into a fiducial distribution across the models. Averaging the Bayesian or frequentist posterior distributions with respect to this fiducial distribution can reproduce results from Bayesian model averaging or classical fiducial inference.

5.
We propose a general procedure for constructing nonparametric priors for Bayesian inference. Under very general assumptions, the proposed prior selects absolutely continuous distribution functions, so it can be useful with continuous data. We use the notion of Feller-type approximation, with a random scheme based on the natural exponential family, in order to construct a large class of distribution functions. We show how one can assign a probability to such a class and discuss the main properties of the proposed prior, named the Feller prior. Feller priors are related to mixture models with an unknown number of components or, more generally, to mixtures with an unknown weight distribution. Two illustrations, concerning the estimation of a density and of a mixing distribution, are carried out on well-known data sets in order to evaluate the performance of our procedure. Computations are performed using a modified version of an MCMC algorithm, which is briefly described.

6.
The Bayesian design approach accounts for uncertainty in the parameter values on which an optimal design depends, but Bayesian designs themselves depend on the choice of a prior distribution for the parameter values. This article investigates Bayesian D-optimal designs for two-parameter logistic models, using numerical search. We show three things: (1) a prior with large variance leads to a design that remains highly efficient under other priors, (2) uniform and normal priors lead to equally efficient designs, and (3) designs with four or five equidistant, equally weighted design points are highly efficient relative to the Bayesian D-optimal designs.
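A sketch of how the Bayesian D-optimality criterion for a two-parameter logistic model can be evaluated for a candidate design: the log-determinant of the design's Fisher information matrix is averaged over draws from the prior. The prior on the intercept and slope and the candidate five-point design are assumptions chosen for illustration, not the article's specifications.

```python
import numpy as np

rng = np.random.default_rng(2)

def bayesian_d_criterion(design_points, weights, prior_draws):
    """Average log-determinant of the Fisher information of a two-parameter
    logistic model p(x) = 1 / (1 + exp(-(a + b x))), over prior draws of (a, b)."""
    vals = []
    for a, b in prior_draws:
        M = np.zeros((2, 2))
        for x, w in zip(design_points, weights):
            p = 1.0 / (1.0 + np.exp(-(a + b * x)))
            fx = np.array([1.0, x])
            M += w * p * (1.0 - p) * np.outer(fx, fx)
        vals.append(np.log(np.linalg.det(M)))
    return np.mean(vals)

# Illustrative prior on (intercept a, slope b) -- an assumption, not the article's prior
prior_draws = np.column_stack([rng.normal(0.0, 1.0, 2000),
                               rng.normal(1.0, 0.3, 2000)])

# Candidate design: five equidistant, equally weighted points
design = np.linspace(-3.0, 3.0, 5)
weights = np.full(5, 1.0 / 5)
print("Bayesian D-criterion:", bayesian_d_criterion(design, weights, prior_draws))
```

Comparing this value across candidate designs (or maximizing it numerically) is what the abstract means by searching for Bayesian D-optimal designs.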

7.
Our purpose is to explore intrinsic Bayesian inference for the rate of a Poisson distribution and for the ratio of the rates of two independent Poisson distributions, with the natural conjugate family of priors in the first case and the semi-conjugate family of priors defined by Laurent and Legrand (2011) in the second. Intrinsic Bayesian inference is derived from the Bayesian decision-theory framework based on the intrinsic discrepancy loss function. We cover in particular the case of some objective Bayesian procedures suggested by Bernardo when considering reference priors.
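For the first case, the conjugate update itself is standard: with counts x_1, ..., x_n i.i.d. Poisson(lambda) and a Gamma(a, b) prior on lambda (rate parameterization), the posterior is Gamma(a + sum(x), b + n). The sketch below computes this posterior and an equal-tailed credible interval; the article's intrinsic-discrepancy decision step is not reproduced, and the prior hyperparameters are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Conjugate update for a Poisson rate: lambda ~ Gamma(a, rate=b) a priori,
# lambda | x ~ Gamma(a + sum(x), rate=b + n) a posteriori.
x = np.array([3, 5, 2, 4, 6, 3, 4])   # illustrative counts
a, b = 0.5, 0.0                        # e.g. a Jeffreys-type Gamma(1/2, 0) prior; an assumption

a_post = a + x.sum()
b_post = b + len(x)
posterior = stats.gamma(a=a_post, scale=1.0 / b_post)

print("posterior mean:", posterior.mean())
print("95% equal-tailed credible interval:", posterior.interval(0.95))
```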

8.
We propose methods for Bayesian inference for missing covariate data with a novel class of semi-parametric survival models with a cure fraction. We allow the missing covariates to be either categorical or continuous and specify a parametric distribution for the covariates that is written as a sequence of one-dimensional conditional distributions. We assume that the missing covariates are missing at random (MAR) throughout. We propose an informative class of joint prior distributions for the regression coefficients and the parameters arising from the covariate distributions. The proposed class of priors is shown to be useful in recovering information on the missing covariates, especially in situations where the missing-data fraction is large. Properties of the proposed prior and the resulting posterior distributions are examined. Model-checking techniques are also proposed for sensitivity analyses and for checking the goodness of fit of a particular model. Specifically, we extend the Conditional Predictive Ordinate (CPO) statistic to assess goodness of fit in the presence of missing covariate data. Computational techniques using the Gibbs sampler are implemented. A real data set involving a melanoma cancer clinical trial is examined to demonstrate the methodology.

9.
In order to robustify posterior inference, besides the use of large classes of priors, it is necessary to consider uncertainty about the sampling model. In this article we suggest that a convenient and simple way to incorporate model robustness is to consider a discrete set of competing sampling models and combine it with a suitable large class of priors. This set reflects foreseeable departures from the base model, such as thinner or heavier tails or asymmetry. We combine the models with different classes of priors that have been proposed in the vast literature on Bayesian robustness with respect to the prior. We also explore links with the related literature on stable estimation and precise measurement theory, now with more than one model entertained. To this end it is necessary to introduce a procedure for model comparison that does not depend on an arbitrary constant or scale. We utilize a recent development on automatic Bayes factors with self-adjusted scale, the 'intrinsic Bayes factor' (Berger and Pericchi, Technical Report, 1993).

10.
We study objective Bayesian inference for linear regression models with residual errors distributed according to the class of two-piece scale mixtures of normal distributions. These models allow for capturing departures from the usual assumption of normality of the errors in terms of heavy tails, asymmetry, and certain types of heteroscedasticity. We propose a general non-informative, scale-invariant prior structure and provide sufficient conditions for the propriety of the posterior distribution of the model parameters, covering cases where the response variables are censored. These results allow us to apply the proposed models in the context of survival analysis. This paper represents an extension to the Bayesian framework of the models proposed in [16]. We present a simulation study that shows good frequentist properties of the posterior credible intervals as well as of the point estimators associated with the proposed priors. We illustrate the performance of these models with real data in the context of survival analysis of cancer patients.

11.
Let X have a p-dimensional normal distribution with mean vector θ and identity covariance matrix I. In a compound decision problem consisting of squared-error estimation of θ, Strawderman (1971) placed a Beta(α, 1) prior distribution on a normal class of priors to produce a family of Bayes minimax estimators. We propose an incomplete Gamma(α, β) prior distribution on the same normal class of priors to produce a larger family of Bayes minimax estimators. We present the results of a Monte Carlo study demonstrating the reduced risk of our estimators in comparison with the Strawderman estimators when θ is far from the zero vector.

12.
Prediction limits for the Poisson distribution are useful in real life when predicting the occurrences of some phenomenon, for example, the number of infections from a disease per year among school children, or the number of hospitalizations per year among patients with cardiovascular disease. In order to allocate the right resources and to estimate the associated cost, one wants to know the worst-case (i.e., an upper limit) and best-case (i.e., a lower limit) scenarios. Under the Poisson distribution, we construct the optimal frequentist and Bayesian prediction limits and assess the frequentist properties of the Bayesian prediction limits. We show that the Bayesian upper prediction limit derived from a uniform prior distribution and the Bayesian lower prediction limit derived from a modified Jeffreys noninformative prior coincide with their respective frequentist limits. This is not the case for the Bayesian lower prediction limit derived from a uniform prior and the Bayesian upper prediction limit derived from a modified Jeffreys prior distribution. Furthermore, it is shown that not all Bayesian prediction limits derived from a proper prior can be interpreted in a frequentist context. Using a counterexample, we state a sufficient condition and show that Bayesian prediction limits derived from proper priors satisfying our condition cannot be interpreted in a frequentist context. Analyses of simulated data and of data on Atlantic tropical storm occurrences are presented.
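A simulation-based sketch of a Bayesian upper prediction limit under a uniform prior: with a total of x_obs events observed over exposure n_obs, the posterior for the rate is Gamma(x_obs + 1, rate n_obs), and a future count is drawn from the posterior predictive. The exposure handling and the numbers are assumptions for illustration; the article's exact optimal limits and the modified Jeffreys case are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def bayesian_upper_prediction_limit(x_obs, n_obs, n_future, level=0.95, n_sims=200_000):
    """Simulate the posterior predictive of a future Poisson count.

    Under a uniform prior on the rate, lambda | data ~ Gamma(x_obs + 1, rate=n_obs)
    (an assumption of this sketch); a future count over exposure n_future is then
    Poisson(n_future * lambda)."""
    lam = rng.gamma(shape=x_obs + 1, scale=1.0 / n_obs, size=n_sims)
    y_future = rng.poisson(n_future * lam)
    return int(np.quantile(y_future, level))

# Illustrative: 42 hospitalizations observed over 5 years; 95% upper limit for next year
print(bayesian_upper_prediction_limit(x_obs=42, n_obs=5, n_future=1))
```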

13.
In this paper, we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises in a latent complementary risk scenario. The properties of the proposed distribution are discussed, including a formal derivation of its density function and explicit algebraic formulae for its quantiles and its survival and hazard functions. We also discuss Bayesian inference for the proposed model using Markov chain Monte Carlo simulation. A simulation study investigates the frequentist properties of the proposed estimators obtained under non-informative priors. Further, model selection criteria are discussed. The developed methodology is illustrated on a real data set.

14.
We propose a more efficient version of the slice sampler for Dirichlet process mixture models described by Walker (Commun. Stat., Simul. Comput. 36:45–54, 2007). This new sampler allows for the fitting of infinite mixture models with a wide range of prior specifications. To illustrate this flexibility, we consider priors defined through infinite sequences of independent positive random variables. Two applications are considered: density estimation using mixture models and hazard function estimation. In each case we show how the slice-efficient sampler can be applied to make inference in the models. In the mixture case, two submodels are studied in detail: the first assumes that the positive random variables are Gamma distributed and the second assumes that they are inverse-Gaussian distributed. Both priors have two hyperparameters, and we consider their effect on the prior distribution of the number of occupied clusters in a sample. Extensive computational comparisons are made with alternative "conditional" simulation techniques for mixture models using the standard Dirichlet process prior and our new priors. The properties of the new priors are illustrated on a density estimation problem.
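The abstract compares priors through the induced prior distribution of the number of occupied clusters. The sketch below simulates that quantity for the standard Dirichlet process via a truncated stick-breaking representation; it does not implement the paper's Gamma or inverse-Gaussian constructions or the slice-efficient sampler, and the truncation level and concentration values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def occupied_clusters(alpha, n, truncation=500):
    """Number of distinct components used by a sample of size n under a
    truncated stick-breaking representation of DP(alpha)."""
    v = rng.beta(1.0, alpha, truncation)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    w /= w.sum()                         # renormalise the truncated weights
    labels = rng.choice(truncation, size=n, p=w)
    return np.unique(labels).size

n, reps = 100, 2000
for alpha in (0.5, 1.0, 5.0):
    k = [occupied_clusters(alpha, n) for _ in range(reps)]
    print(f"alpha={alpha}: prior mean number of occupied clusters ~ {np.mean(k):.2f}")
```

The same device, with the stick-breaking weights built from the alternative sequences of positive random variables, is how one would examine the effect of the hyperparameters that the abstract mentions.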

15.
The Bayesian CART (classification and regression tree) approach proposed by Chipman, George and McCulloch (1998) entails putting a prior distribution on the set of all CART models and then using stochastic search to select a model. The main thrust of this paper is to propose a new class of hierarchical priors that enhance the potential of this Bayesian approach. These priors indicate a preference for smooth local mean structure, resulting in tree models that shrink predictions from adjacent terminal nodes towards each other. Past methods for tree shrinkage have searched for trees without shrinking and applied shrinkage to the identified tree only after the search. By using hierarchical priors in the stochastic search, the proposed method searches for shrunk trees that fit well and improves the tree through shrinkage of predictions.

16.
The Bayes factor is a key tool in hypothesis testing. Nevertheless, the important issue of which priors should be used to develop objective Bayes factors remains open. The authors consider this problem in the context of the one-way random effects model. They use concepts such as orthogonality, predictive matching, and invariance to justify a specific form of the priors for the common parameters and derive the intrinsic and divergence-based priors for the new parameter. The authors show that both the intrinsic and the divergence-based priors produce consistent Bayes factors. They illustrate the methods and compare them with other proposals.

17.
This paper develops an objective Bayesian analysis method for estimating the unknown parameters of the half-logistic distribution when a sample is available from a progressive Type-II censoring scheme. Noninformative priors such as Jeffreys and reference priors are derived, and the derived priors are checked against probability-matching criteria. The Metropolis–Hastings algorithm is applied to generate Markov chain Monte Carlo samples from the posterior density functions because the marginal posterior density of each parameter cannot be expressed in closed form. Monte Carlo simulations are conducted to investigate the frequentist properties of the estimated models under noninformative priors. For illustration purposes, a real data set is presented, and the quality of the models under noninformative priors is evaluated through posterior predictive checking.
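A stripped-down random-walk Metropolis–Hastings sketch for the scale parameter of a half-logistic distribution, using a complete (uncensored) sample and a 1/σ prior. Progressive Type-II censoring, the formal Jeffreys and reference priors, and the probability-matching checks of the paper are omitted; the data, prior, and tuning values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_posterior(sigma, x):
    """Log posterior for the half-logistic scale under a 1/sigma prior
    (complete sample; a simplifying assumption of this sketch)."""
    if sigma <= 0:
        return -np.inf
    z = x / sigma
    loglik = np.sum(np.log(2.0) - np.log(sigma) - z - 2.0 * np.log1p(np.exp(-z)))
    return loglik - np.log(sigma)        # prior pi(sigma) proportional to 1/sigma

# Illustrative data: half-logistic with sigma = 2, i.e. |logistic(0, 2)| variates
x = np.abs(rng.logistic(loc=0.0, scale=2.0, size=50))

n_iter, step = 10_000, 0.3
sigma, chain = 1.0, []
lp = log_posterior(sigma, x)
for _ in range(n_iter):
    prop = sigma * np.exp(step * rng.normal())      # log-scale random walk proposal
    lp_prop = log_posterior(prop, x)
    # include the Jacobian of the log-scale proposal: log(prop) - log(sigma)
    if np.log(rng.uniform()) < lp_prop - lp + np.log(prop) - np.log(sigma):
        sigma, lp = prop, lp_prop
    chain.append(sigma)

print("posterior mean of sigma ~", np.mean(chain[2000:]))
```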

18.
For the balanced variance component model, when the intraclass correlation coefficient is of interest, Bayesian analysis is often appropriate. Berger and Bernardo's (1992a) grouped-ordering reference prior approach is used to analyze this model. The reference priors are developed and compared for posterior inference with real and simulated data. We examine whether the reference priors satisfy the probability-matching criterion. Further, the reference prior is shown to perform well in the sense of correct frequentist coverage probability of the posterior quantiles.

19.
In this article, the Bayes estimates of the two-parameter gamma distribution are considered. It is well known that the Bayes estimators of the two-parameter gamma distribution do not have closed forms. In this paper, it is assumed that the scale parameter has a gamma prior, the shape parameter has any log-concave prior, and the two are independently distributed. Under these priors, we use the Gibbs sampling technique to generate samples from the posterior density function. Based on the generated samples, we compute the Bayes estimates of the unknown parameters and construct HPD credible intervals. We also compute approximate Bayes estimates using Lindley's approximation under the assumption of a gamma prior on the shape parameter. Monte Carlo simulations are performed to compare the performance of the Bayes estimators with the classical estimators. One data set is analyzed for illustrative purposes. We further discuss the Bayesian prediction of a future observation based on the observed sample, and it is seen that the Gibbs sampling technique can be used quite effectively for estimating the posterior predictive density and for constructing predictive intervals of the order statistics from the future sample.
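A sketch of the Gibbs scheme for the two-parameter gamma model: conditional on the shape, the rate parameter (the inverse of the scale) has a conjugate gamma full conditional, while the shape is updated here with a random-walk Metropolis step on the log scale rather than by direct sampling from its log-concave full conditional, which is a simplification of this sketch. The priors, data, and tuning values are illustrative assumptions.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(6)

# Illustrative data from Gamma(shape=2, rate=1.5)
x = rng.gamma(shape=2.0, scale=1.0 / 1.5, size=60)
n, sx, slx = len(x), x.sum(), np.log(x).sum()

# Priors (assumptions of this sketch): rate lambda ~ Gamma(a, rate=b), shape alpha ~ Gamma(c, rate=d)
a, b, c, d = 1.0, 1.0, 1.0, 1.0

def log_cond_alpha(alpha, lam):
    """Full conditional of the shape, up to an additive constant."""
    if alpha <= 0:
        return -np.inf
    return (n * alpha * np.log(lam) + alpha * slx - n * gammaln(alpha)
            + (c - 1) * np.log(alpha) - d * alpha)

alpha, lam = 1.0, 1.0
draws = []
for _ in range(10_000):
    # Gibbs step: conjugate gamma full conditional for the rate
    lam = rng.gamma(shape=a + n * alpha, scale=1.0 / (b + sx))
    # Metropolis step for the shape (log-scale random walk, Jacobian included)
    prop = alpha * np.exp(0.2 * rng.normal())
    log_acc = (log_cond_alpha(prop, lam) - log_cond_alpha(alpha, lam)
               + np.log(prop) - np.log(alpha))
    if np.log(rng.uniform()) < log_acc:
        alpha = prop
    draws.append((alpha, lam))

post = np.array(draws[2000:])
print("posterior means: shape ~ %.2f, rate ~ %.2f" % tuple(post.mean(axis=0)))
```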

20.
Abstract. We study the problem of deciding which of two normal random samples, at least one of them of small size, has the greater expected value. Unlike in the standard Bayesian approach, in which a single prior distribution and a single loss function are declared, we assume that a set of plausible priors and a set of plausible loss functions are elicited from the expert (the client or the sponsor of the analysis). The choice of the sample that has the greater expected value is based on equilibrium priors, allowing for an impasse if, for some plausible priors and loss functions, one sample is associated with the smaller expected loss while for others it is the other sample.
