Similar Literature
20 similar articles found (search time: 31 ms)
1.
This article considers objective Bayesian testing in normal regression models with first-order autoregressive residuals. We propose solutions to this problem, based on a Bayesian model selection procedure, that require no subjective input. We construct proper priors for testing the autocorrelation coefficient based on measures of divergence between competing models, called divergence-based (DB) priors, and then propose an objective Bayesian decision-theoretic rule, the Bayesian reference criterion (BRC). Finally, we derive the intrinsic test statistic for testing the autocorrelation coefficient. The behavior of the Bayes factor based on the DB priors is examined by comparison with the BRC in a simulation study and an example.

2.
The generalized gamma (GG) distribution plays an important role in statistical analysis. For this distribution, we derive non-informative priors using formal rules such as the Jeffreys prior, the maximal data information prior and reference priors. We show that these popular formal rules, under the natural ordering of the parameters, lead to priors with improper posteriors. This problem is overcome by the prior-averaging approach discussed in Berger et al. [Overall objective priors. Bayesian Analysis. 2015;10(1):189–221]. The resulting hybrid Jeffreys-reference prior is invariant under one-to-one transformations and yields a proper posterior distribution. A detailed simulation study shows that the proposed prior has good frequentist properties. Finally, an analysis of the maximum annual discharge of the river Rhine at Lobith is presented.

3.
Our purpose is to explore intrinsic Bayesian inference on the rate of a Poisson distribution and on the ratio of the rates of two independent Poisson distributions, with the natural conjugate family of priors in the first case and the semi-conjugate family of priors defined by Laurent and Legrand (2011) in the second case. Intrinsic Bayesian inference is derived from the Bayesian decision-theory framework based on the intrinsic discrepancy loss function. We cover in particular the case of some objective Bayesian procedures suggested by Bernardo when considering reference priors.
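As a point of reference for the loss function mentioned in item 3, the sketch below (ours, not the paper's code; the function names are hypothetical) computes Bernardo's intrinsic discrepancy between two Poisson models as the smaller of the two directed Kullback–Leibler divergences, scaled by the number of i.i.d. observations.

```python
# Minimal sketch of the intrinsic discrepancy loss between two Poisson models.
import numpy as np

def kl_poisson(lam_p, lam_q):
    """Directed Kullback-Leibler divergence KL(Poisson(lam_p) || Poisson(lam_q))."""
    return lam_p * np.log(lam_p / lam_q) + lam_q - lam_p

def intrinsic_discrepancy(lam1, lam2, n=1):
    """Intrinsic discrepancy: n times the smaller of the two directed KL divergences."""
    return n * min(kl_poisson(lam1, lam2), kl_poisson(lam2, lam1))

# Example: discrepancy between Poisson rates 2.0 and 3.5 based on n = 10 observations.
print(intrinsic_discrepancy(2.0, 3.5, n=10))
```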

4.
For certain mixture models, improper priors are undesirable because they yield improper posteriors. However, proper priors may be undesirable because they require subjective input. We propose the use of specially chosen data-dependent priors. We show that, in some cases, data-dependent priors are the only priors that produce intervals with second-order correct frequentist coverage. The resulting posterior also has another interpretation: it is the product of a fixed prior and a pseudolikelihood.

5.
The generalized lognormal distribution plays an important role in analysing data from different life testing experiments. In this paper, we consider Bayesian analysis of this distribution using various objective priors for the model parameters. Specifically, we derive expressions for the Jeffreys-type priors, the reference priors with different group orderings of the parameters, and the first-order matching priors. We also study the properties of the posterior distributions of the parameters under these improper priors. It is shown that only two of them result in proper posterior distributions. Numerical simulation studies are conducted to compare the performances of the Bayesian estimators under the considered priors and the maximum likelihood estimates. Finally, a real-data application is also provided for illustrative purposes.

6.
This article combines the best of both objective and subjective Bayesian inference in specifying priors for inequality- and equality-constrained analysis of variance models. Objectivity can be found in the use of training data to specify a prior distribution; subjectivity can be found in restrictions on the prior used to formulate models. The aim of this article is to find the best model in a set of models specified using inequality and equality constraints on the model parameters. For the evaluation of the models, an encompassing-prior approach is used. The advantage of this approach is that only a prior for the unconstrained encompassing model needs to be specified; the priors for all constrained models can be derived from this encompassing prior. Different choices for this encompassing prior are considered and evaluated.

7.
We investigate certain objective priors for the parameters of a normal linear regression model in which one of the explanatory variables is subject to measurement error. We first show that the use of the standard noninformative prior for normal linear regression without measurement error leads to an improper posterior in the measurement error model. We then derive the Jeffreys prior and reference priors, and show that they lead to proper posteriors. We use a simulation study to compare the frequentist performance of the estimates derived using these priors with that of the MLE.

8.
The implementation of the Bayesian paradigm for model comparison can be problematic. In particular, prior distributions on the parameter space of each candidate model require special care. While it is well known that improper priors cannot be routinely used for Bayesian model comparison, we argue that the use of proper conventional priors under each model should also be regarded with suspicion, especially when comparing models of different dimensions. The basic idea is that priors should not be assigned separately under each model; rather, they should be related across models in order to acquire some degree of compatibility, and thus allow fairer and more robust comparisons. In this connection, the intrinsic prior and the expected posterior prior (EPP) methodologies represent useful tools. In this paper we develop a procedure based on EPPs to perform Bayesian model comparison for discrete undirected decomposable graphical models, although our method could also be adapted to directed acyclic graph models. We present two possible approaches: one based on imaginary data, and one that makes use of a limited number of actual observations. The methodology is illustrated through the analysis of a 2×3×4 contingency table.

9.
Feature selection arises in many areas of modern science. For example, in genomic research, we want to find the genes that can be used to separate tissues of different classes (e.g. cancer and normal). One approach is to fit regression/classification models with certain penalization. In the past decade, hyper-LASSO penalization (priors) has received increasing attention in the literature. However, fully Bayesian methods that use Markov chain Monte Carlo (MCMC) for regression/classification with hyper-LASSO priors are still underdeveloped. In this paper, we introduce an MCMC method for learning multinomial logistic regression with hyper-LASSO priors. Our MCMC algorithm uses Hamiltonian Monte Carlo within a restricted Gibbs sampling framework. We use simulation studies and real data to demonstrate the superior performance of hyper-LASSO priors compared to the LASSO, and to investigate the issues of choosing the heaviness and scale of hyper-LASSO priors.
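To illustrate the notion of "heaviness" discussed in item 9, the following sketch (our illustration, not the authors' code) contrasts a LASSO-type Laplace prior with a Student-t prior used as a stand-in for a heavy-tailed hyper-LASSO-type prior; the coefficient values and scales are hypothetical.

```python
# Contrast the tail behaviour of a Laplace (LASSO) prior with a heavier-tailed
# Student-t prior of the kind often used for hyper-LASSO-style shrinkage.
import numpy as np
from scipy import stats

beta = np.array([0.0, 1.0, 5.0, 20.0])                  # hypothetical coefficients
log_lasso = stats.laplace(scale=1.0).logpdf(beta)       # exponential (light) tails
log_hyper = stats.t(df=1.0, scale=1.0).logpdf(beta)     # polynomial (heavy) tails

# The heavy-tailed prior penalises large coefficients far less severely, so strong
# signals escape heavy shrinkage while small coefficients are still shrunk hard.
for b, ll, lh in zip(beta, log_lasso, log_hyper):
    print(f"beta={b:5.1f}  log Laplace={ll:8.3f}  log Student-t={lh:8.3f}")
```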

10.
The Jeffreys-rule prior and the marginal independence Jeffreys prior were recently proposed in Fonseca et al. [Objective Bayesian analysis for the Student-t regression model, Biometrika 95 (2008), pp. 325–333] as objective priors for the Student-t regression model. The authors showed that these priors yield proper posterior distributions and perform favourably in parameter estimation. Motivated by a practical financial risk management application, we compare the performance of the two Jeffreys priors with other priors proposed in the literature in a problem of estimating high quantiles for the Student-t model with unknown degrees of freedom. Through an asymptotic analysis and a simulation study, we show that both Jeffreys priors perform better when a specific quantile of the Bayesian predictive distribution is used to approximate the true quantile.

11.
Bayesian analyses often take for granted the assumption that the posterior distribution has at least a first moment, and they often include computed or estimated posterior means. In this note, the authors show an example of a Weibull distribution parameter for which the theoretical posterior mean fails to exist under commonly used proper semi-conjugate priors. They also show that posterior moments can fail to exist with commonly used noninformative priors, including Jeffreys, reference and matching priors, despite the fact that the posteriors are proper. Moreover, within a broad class of priors, the predictive distribution also has no mean. The authors illustrate the problem with a simulated example. Their results demonstrate that the unwitting use of estimated posterior means may yield unjustified conclusions.

12.
This paper surveys various shrinkage, smoothing and selection priors from a unifying perspective and shows how to combine them for Bayesian regularisation in the general class of structured additive regression models. As a common feature, all regularisation priors are conditionally Gaussian, given further parameters regularising model complexity. Hyperpriors for these parameters encourage shrinkage, smoothness or selection. It is shown that these regularisation (log-) priors can be interpreted as Bayesian analogues of several well-known frequentist penalty terms. Inference can be carried out with unified and computationally efficient MCMC schemes, estimating regularised regression coefficients and basis function coefficients simultaneously with complexity parameters and measuring uncertainty via corresponding marginal posteriors. For variable and function selection we discuss several variants of spike and slab priors which can also be cast into the framework of conditionally Gaussian priors. The performance of the Bayesian regularisation approaches is demonstrated in a hazard regression model and a high-dimensional geoadditive regression model.
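As a small check of the "conditionally Gaussian" interpretation in item 12, the sketch below (ours, not the paper's code; the value of lam is arbitrary) verifies by Monte Carlo that a normal prior whose variance follows an exponential hyperprior has a Laplace (Bayesian-LASSO) marginal.

```python
# Monte Carlo check: a conditionally Gaussian prior with an exponential variance
# hyperprior has the Laplace marginal p(beta) = (lam/2) * exp(-lam * |beta|).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lam = 1.5
tau2 = rng.exponential(scale=2.0 / lam**2, size=200_000)  # tau^2 ~ Exp(rate = lam^2/2)
beta = rng.normal(0.0, np.sqrt(tau2))                     # beta | tau^2 ~ N(0, tau^2)

# Compare the simulated marginal density with the Laplace density at a few points.
grid = np.array([0.0, 0.5, 1.0, 2.0])
kde = stats.gaussian_kde(beta)
print("simulated:", kde(grid))
print("Laplace  :", stats.laplace(scale=1.0 / lam).pdf(grid))
```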

13.
A Bayes factor between two models can be greatly affected by the prior distributions on the model parameters. When prior information is weak, very dispersed proper prior distributions are known to create a problem for the Bayes factor when competing models differ in dimension, and it is of even greater concern when one of the models is of infinite dimension. Therefore, we propose an innovative method which uses training samples to calibrate the prior distributions so that they achieve a reasonable level of 'information'. Then the calibrated Bayes factor can be computed over the remaining data. This method makes no assumption on model forms (parametric or nonparametric) and can be used with both proper and improper priors. We illustrate, through simulation studies and a real data example, that the calibrated Bayes factor yields robust and reliable model preferences under various situations.

14.
David R. Bickel, Statistics, 2018, 52(3): 552–570
Learning from model diagnostics that a prior distribution must be replaced by one that conflicts less with the data raises the question of which prior should instead be used for inference and decision. The same problem arises when a decision maker learns that one or more reliable experts express unexpected beliefs. In both cases, coherence of the solution would be guaranteed by applying Bayes's theorem to a distribution over prior distributions that effectively assigns the initial prior a probability arbitrarily close to 1. The new distribution for inference would then be the distribution of priors conditional on the insight that the prior distribution lies in a closed convex set that does not contain the initial prior. A readily available distribution of priors needed for such conditioning is the law of the empirical distribution of a sufficiently large number of independent parameter values drawn from the initial prior. According to the Gibbs conditioning principle from the theory of large deviations, the resulting new prior distribution minimizes the entropy relative to the initial prior. While minimizing relative entropy accommodates the necessity of going beyond the initial prior without departing from it any more than the insight demands, the large-deviation derivation also ensures the advantages of Bayesian coherence. This approach is generalized to uncertain insights by allowing the closed convex set of priors to be random.
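The entropy-minimizing update described in item 14 can be illustrated with a small discrete sketch (ours, not the paper's code): the grid, the uniform initial prior and the constraint that the prior mean of theta be at least 0.7 are all hypothetical choices standing in for the "insight".

```python
# Update a prior by minimising relative entropy subject to a convex constraint.
import numpy as np
from scipy.optimize import minimize

theta = np.linspace(0.01, 0.99, 50)          # discretised parameter grid
p = np.ones_like(theta) / theta.size         # initial (uniform) prior on the grid

def kl(q):
    """Relative entropy KL(q || p) on the grid."""
    return np.sum(q * np.log(q / p))

constraints = [
    {"type": "eq",   "fun": lambda q: q.sum() - 1.0},    # q is a probability vector
    {"type": "ineq", "fun": lambda q: q @ theta - 0.7},  # the convex "insight" constraint
]
res = minimize(kl, p, bounds=[(1e-10, 1.0)] * theta.size,
               constraints=constraints, method="SLSQP")
q = res.x
print("new prior mean:", q @ theta)          # constraint is binding, so roughly 0.7
```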

15.
This paper develops an objective Bayesian method for estimating the unknown parameters of the half-logistic distribution when a sample is available from a progressive Type-II censoring scheme. Noninformative priors such as the Jeffreys and reference priors are derived. In addition, the derived priors are checked to determine whether they satisfy probability-matching criteria. The Metropolis–Hastings algorithm is applied to generate Markov chain Monte Carlo samples from the posterior density functions, because the marginal posterior density of each parameter cannot be expressed in explicit form. Monte Carlo simulations are conducted to investigate the frequentist properties of the estimated models under the noninformative priors. For illustration, a real data set is presented, and the quality of the models under the noninformative priors is evaluated through posterior predictive checking.
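A stripped-down version of the sampler described in item 15 is sketched below (our simplification, not the authors' code): it runs a random-walk Metropolis–Hastings chain for the half-logistic scale parameter under a 1/sigma prior and complete data; the progressive Type-II censoring handled in the paper would add the censored-survival terms to the likelihood.

```python
# Random-walk Metropolis-Hastings for the half-logistic scale parameter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.halflogistic(scale=2.0).rvs(size=50, random_state=rng)  # synthetic data

def log_post(log_sigma):
    """Log posterior of log(sigma): log-likelihood + log(1/sigma prior) + Jacobian."""
    sigma = np.exp(log_sigma)
    loglik = stats.halflogistic(scale=sigma).logpdf(data).sum()
    return loglik - np.log(sigma) + log_sigma

chain, current = [], np.log(data.mean())
for _ in range(5000):
    proposal = current + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(proposal) - log_post(current):
        current = proposal
    chain.append(np.exp(current))

print("posterior mean of sigma:", np.mean(chain[1000:]))  # discard burn-in
```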

16.
Network meta-analysis synthesizes several studies of multiple treatment comparisons to simultaneously provide inference for all treatments in the network. It can often strengthen inference on pairwise comparisons by borrowing evidence from other comparisons in the network. Current network meta-analysis approaches are derived either from conventional pairwise meta-analysis or from hierarchical Bayesian methods. This paper introduces a new approach to network meta-analysis based on combining confidence distributions (CDs). Instead of combining point estimators from individual studies, as in the conventional approach, the new approach combines CDs, which contain richer information than point estimators and thus achieve greater efficiency in inference. The proposed CD approach can efficiently integrate all studies in the network and provide inference for all treatments, even when individual studies contain comparisons of only subsets of the treatments. Through numerical studies with real and simulated data sets, the proposed approach is shown to outperform, or at least equal, traditional pairwise meta-analysis and a commonly used Bayesian hierarchical model. Although the Bayesian approach may yield comparable results with a suitably chosen prior, it is highly sensitive to the choice of priors (especially for the between-trial covariance structure), which is often subjective. The CD approach is a general frequentist approach and is prior-free. Moreover, it can always provide a proper inference for all the treatment effects, regardless of the between-trial covariance structure.
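The CD-combination idea in item 16 can be sketched for the simplest case of two studies with normal confidence distributions for a common effect (our illustration; the estimates and standard errors below are made up). With inverse-standard-error weights the combination reduces to the familiar fixed-effect inverse-variance estimate.

```python
# Combine two normal confidence distributions for a common treatment effect.
import numpy as np
from scipy.stats import norm

est = np.array([0.30, 0.55])   # hypothetical study-level effect estimates
se = np.array([0.10, 0.20])    # their standard errors

def combined_cd(theta):
    """H_c(theta) = Phi( sum_i w_i * Phi^{-1}(H_i(theta)) / sqrt(sum_i w_i^2) )."""
    w = 1.0 / se
    z = (theta - est) / se                      # Phi^{-1}(H_i(theta)) for normal CDs
    return norm.cdf((w * z).sum() / np.sqrt((w ** 2).sum()))

# Point estimate (CD median) and a 95% interval read off the combined CD.
w2 = 1.0 / se ** 2
theta_hat = (w2 * est).sum() / w2.sum()
se_comb = 1.0 / np.sqrt(w2.sum())
print("combined estimate:", theta_hat)
print("95% CI:", theta_hat - 1.96 * se_comb, theta_hat + 1.96 * se_comb)
print("H_c at the estimate (should be 0.5):", combined_cd(theta_hat))
```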

17.
This paper discusses characteristics of standard conjugate priors and their induced posteriors in Bayesian inference for von Mises–Fisher distributions, using either the canonical natural exponential family or the more commonly employed polar coordinate parameterizations. We analyze when standard conjugate priors as well as posteriors are proper, and investigate the Jeffreys prior for the von Mises–Fisher family. Finally, we characterize the proper distributions in the standard conjugate family of the (matrix-valued) von Mises–Fisher distributions on Stiefel manifolds.

18.
The focus of this paper is objective priors for spatially correlated data with nugget effects. In addition to the Jeffreys priors and commonly used reference priors, two types of "exact" reference priors are derived based on improper marginal likelihoods. An "equivalence" theorem is developed, in the sense that the expectation of any function of the score functions of the marginal likelihood can be taken under the marginal likelihoods. Interestingly, these two types of reference priors are identical.

19.
We study the problem of deciding which of two normal random samples, at least one of them of small size, has the greater expected value. Unlike the standard Bayesian approach, in which a single prior distribution and a single loss function are declared, we assume that a set of plausible priors and a set of plausible loss functions are elicited from the expert (the client or the sponsor of the analysis). The choice of the sample with the greater expected value is based on equilibrium priors, allowing for an impasse if, for some plausible priors and loss functions, choosing one sample is associated with smaller expected loss, while for others it is the other sample.

20.
Complex dependency structures are often modeled conditionally, with random-effects parameters used to specify the natural heterogeneity in the population. When interest focuses on the dependency structure, inferences can be made on a complex covariance matrix using a marginal modeling approach. In this marginal modeling framework, testing covariance parameters is not a boundary problem. Bayesian tests on covariance parameter(s) of the compound symmetry structure are proposed, assuming multivariate normally distributed observations. Innovative proper prior distributions are introduced for the covariance components such that the positive definiteness of the (compound symmetry) covariance matrix is ensured. Furthermore, it is shown that the proposed priors on the covariance parameters lead to a balanced Bayes factor when testing an inequality-constrained hypothesis. As an illustration, the proposed Bayes factor is used to test (non-)invariant intra-class correlations across different group types (public and Catholic schools), using the 1982 High School and Beyond survey data.
