Similar Literature
1.
This paper considers a hierarchical Bayesian analysis of regression models using a class of Gaussian scale mixtures. This class provides a robust alternative to the common use of the Gaussian distribution as a prior, in particular for estimating a regression function subject to uncertainty about its constraint. For this purpose, we use a family of rectangular screened multivariate scale mixtures of Gaussian distributions as a prior for the regression function, which is flexible enough to reflect the degree of uncertainty about the functional constraint. Specifically, we propose a hierarchical Bayesian regression model for the constrained regression function under uncertainty, built on a three-stage prior hierarchy of Gaussian scale mixtures and referred to as the hierarchical screened scale mixture of Gaussian regression model (HSMGRM). We describe distributional properties of the HSMGRM and an efficient Markov chain Monte Carlo algorithm for posterior inference, and apply the proposed model to real applications with constrained regression functions subject to uncertainty.

2.
To study Bayesian inference for quantile regression with proportion data, we first build a quantile regression model based on the Tobit model, then obtain a hierarchical Bayesian model by choosing suitable prior distributions, and derive the posterior distributions of the parameters for use in Gibbs sampling. Simulation studies verify the effectiveness of the proposed Bayesian inference method for analyzing proportion data. Finally, the Bayesian method is applied to heroin-use data from California, revealing the factors that influence use frequency at different quantile levels.

3.
Model uncertainty has become a central focus of policy discussion surrounding the determinants of economic growth. Over 140 regressors have been employed in growth empirics due to the proliferation of several new growth theories in the past two decades. Recently Bayesian model averaging (BMA) has been employed to address model uncertainty and to provide clear policy implications by identifying robust growth determinants. The BMA approaches were, however, limited to linear regression models that abstract from possible dependencies embedded in the covariance structures of growth determinants. The recent empirical growth literature has developed jointness measures to highlight such dependencies. We address model uncertainty and covariate dependencies in a comprehensive Bayesian framework that allows for structural learning in linear regressions and Gaussian graphical models. A common prior specification across the entire comprehensive framework provides consistency. Gaussian graphical models allow for a principled analysis of dependency structures, which allows us to generate a much more parsimonious set of fundamental growth determinants. Our empirics are based on a prominent growth dataset with 41 potential economic factors that has been utilized in numerous previous analyses to account for model uncertainty as well as jointness.

4.
A Bayesian analysis is provided for the Wilcoxon signed-rank statistic (T+). The Bayesian analysis is based on a sign-bias parameter φ on the (0, 1) interval. For the case of a uniform prior probability distribution for φ and for small sample sizes (i.e., 6 ≤ n ≤ 25), values for the statistic T+ are computed that enable probabilistic statements about φ. For larger sample sizes, approximations are provided for the asymptotic likelihood function P(T+|φ) as well as for the posterior distribution P(φ|T+). Power analyses are examined both for properly specified Gaussian sampling and for misspecified non-Gaussian models. The new Bayesian metric has high power efficiency in the range of 0.9–1 relative to a standard t test when there is Gaussian sampling. But if the sampling is from an unknown and misspecified distribution, then the new statistic still has high power; in some cases, the power can be higher than that of the t test (especially for probability mixtures and heavy-tailed distributions). The new Bayesian analysis is thus a useful and robust method for applications where the usual parametric assumptions are questionable. These properties further enable a generic Bayesian analysis for many non-Gaussian distributions that currently lack a formal Bayesian model.
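As a hedged illustration of the sign-bias idea, the sketch below reduces the analysis to the signs alone, ignoring the ranks that enter T+ (a deliberate simplification, not the paper's full likelihood). Under that assumption and the uniform prior, the posterior for φ is a simple Beta distribution; the function name and grid size are illustrative.

```python
import math

def sign_bias_posterior(n_pos, n, grid=20001):
    """Posterior for the sign-bias parameter phi on (0, 1) using only the
    signs of the n differences: n_pos ~ Binomial(n, phi) with a uniform
    Beta(1, 1) prior, hence a Beta(n_pos + 1, n - n_pos + 1) posterior."""
    a, b = n_pos + 1, n - n_pos + 1
    mean = a / (a + b)
    # P(phi > 1/2) by midpoint integration of the Beta density over (1/2, 1).
    log_c = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    step = 0.5 / grid
    mass = 0.0
    for i in range(grid):
        phi = 0.5 + (i + 0.5) * step
        mass += math.exp(log_c + (a - 1) * math.log(phi)
                         + (b - 1) * math.log(1 - phi)) * step
    return mean, mass

# e.g. 18 positive differences out of n = 25 (hypothetical data)
mean, p_gt_half = sign_bias_posterior(n_pos=18, n=25)
```

With 18 of 25 positive signs the posterior mean is 19/27 ≈ 0.70, and most posterior mass lies above φ = 1/2, i.e. a probabilistic statement about the sign bias of the kind the abstract describes.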

5.
Multivariate model validation is a complex decision-making problem involving comparison of multiple correlated quantities, based upon the available information and prior knowledge. This paper presents a Bayesian risk-based decision method for validation assessment of multivariate predictive models under uncertainty. A generalized likelihood ratio is derived as a quantitative validation metric based on Bayes' theorem and a Gaussian distribution assumption for the errors between validation data and model prediction. The multivariate model is then assessed by comparing the likelihood ratio with a Bayesian decision threshold, a function of the decision costs and the prior probability of each hypothesis. The probability density function of the likelihood ratio is constructed using the statistics of multiple response quantities and Monte Carlo simulation. The proposed methodology is implemented in the validation of a transient heat conduction model, using a multivariate data set from experiments. The Bayesian methodology provides a quantitative approach to facilitate rational decisions in multivariate model assessment under uncertainty.
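The decision rule can be sketched under simplifying assumptions that are ours, not the paper's: independent univariate Gaussian errors and a point alternative hypothesis with bias `mu_alt` (the paper's metric is multivariate and its threshold construction more general). The threshold combines the prior probabilities with the two misclassification costs.

```python
import math

def gaussian_loglik(errors, mu, sigma):
    """Log-likelihood of iid Gaussian errors under N(mu, sigma^2)."""
    n = len(errors)
    return (-0.5 * n * math.log(2 * math.pi * sigma ** 2)
            - sum((e - mu) ** 2 for e in errors) / (2 * sigma ** 2))

def validate(errors, sigma, mu_alt, p_h0=0.5, c_fa=1.0, c_miss=1.0):
    """Accept the model (H0: zero-mean prediction errors) if the
    log likelihood ratio exceeds the Bayesian decision threshold
    built from the prior p_h0 and the decision costs c_fa, c_miss."""
    log_lr = (gaussian_loglik(errors, 0.0, sigma)
              - gaussian_loglik(errors, mu_alt, sigma))
    threshold = math.log((c_miss * (1 - p_h0)) / (c_fa * p_h0))
    return log_lr > threshold

# Hypothetical prediction errors: small and centred near zero.
errors = [0.1, -0.2, 0.05, 0.15, -0.1]
accept = validate(errors, sigma=0.2, mu_alt=0.5)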

6.
Statistical meta-analysis is mostly carried out with the help of the random-effects normal model, including the case of discrete random variables. We argue that the normal approximation is not always able to adequately capture the underlying uncertainty of the original discrete data. Furthermore, when we examine the influence of the prior distributions considered, in the presence of rare events, the results from this approximation can be very poor. In order to assess the robustness of the quantities of interest in meta-analysis with respect to the choice of priors, this paper proposes an alternative Bayesian model for binomial random variables with several zero responses. Particular attention is paid to the coherence between the prior distributions of the study model parameters and the meta-parameter. Thus, our method introduces a simple way to examine the sensitivity of these quantities to the dependence structure selected for the study. For illustrative purposes, an example with real data is analysed using the proposed Bayesian meta-analysis model for binomial sparse data.

7.
A novel framework is proposed for the estimation of multiple sinusoids from irregularly sampled time series. This spectral analysis problem is addressed as an under-determined inverse problem, where the spectrum is discretized on an arbitrarily thin frequency grid. As the focus is on line spectra estimation, the solution must be sparse, i.e. the amplitude of the spectrum must be zero almost everywhere. Such prior information is taken into account within the Bayesian framework. Two models are used to account for the prior sparseness of the solution, namely a Laplace prior and a Bernoulli–Gaussian prior, associated with optimization and stochastic sampling algorithms, respectively. Such approaches are efficient alternatives to the usual sequential prewhitening methods, especially in the case of strong sampling aliases perturbing the Fourier spectrum. Both methods should be intensively tested on real data sets by physicists.

8.
We develop strategies for Bayesian modelling as well as model comparison, averaging and selection for compartmental models with particular emphasis on those that occur in the analysis of positron emission tomography (PET) data. Both modelling and computational issues are considered. Biophysically inspired informative priors are developed for the problem at hand, and by comparison with default vague priors it is shown that the proposed modelling is not overly sensitive to prior specification. It is also shown that an additive normal error structure does not describe measured PET data well, despite being very widely used, and that within a simple Bayesian framework simultaneous parameter estimation and model comparison can be performed with a more general noise model. The proposed approach is compared with standard techniques using both simulated and real data. In addition to good, robust estimation performance, the proposed technique provides, automatically, a characterisation of the uncertainty in the resulting estimates which can be considerable in applications such as PET.

9.
This paper considers quantile regression models using an asymmetric Laplace distribution from a Bayesian point of view. We develop a simple and efficient Gibbs sampling algorithm for fitting the quantile regression model based on a location-scale mixture representation of the asymmetric Laplace distribution. It is shown that the resulting Gibbs sampler can be accomplished by sampling from either a normal or a generalized inverse Gaussian distribution. We also discuss some possible extensions of our approach, including the incorporation of a scale parameter, the use of a double exponential prior, and a Bayesian analysis of Tobit quantile regression. The proposed methods are illustrated by both simulated and real data.
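The location-scale mixture that drives the Gibbs sampler can be sketched as a sampler. This is a minimal illustration under the standard mixture constants θ = (1−2p)/(p(1−p)) and τ² = 2/(p(1−p)); the function name and parameter values are ours, not the paper's.

```python
import math
import random

def sample_ald(mu, sigma, p, n, seed=1):
    """Draw from the asymmetric Laplace distribution ALD(mu, sigma, p) via
    its normal location-scale mixture representation:
        y = mu + sigma * (theta * z + tau * sqrt(z) * u),
    with z ~ Exp(1) and u ~ N(0, 1).  Conditioning on the latent z is what
    reduces the quantile-regression Gibbs sampler to normal and
    generalized-inverse-Gaussian draws."""
    rng = random.Random(seed)
    theta = (1 - 2 * p) / (p * (1 - p))
    tau = math.sqrt(2 / (p * (1 - p)))
    out = []
    for _ in range(n):
        z = rng.expovariate(1.0)
        out.append(mu + sigma * (theta * z + tau * math.sqrt(z) * rng.gauss(0, 1)))
    return out

draws = sample_ald(mu=2.0, sigma=1.0, p=0.25, n=200_000)
```

A quick check of the key property: the p-quantile of ALD(mu, sigma, p) is mu itself, which is what ties the asymmetric Laplace likelihood to quantile regression.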

10.
Most existing reduced-form macroeconomic multivariate time series models employ elliptical disturbances, so that the forecast densities produced are symmetric. In this article, we use a copula model with asymmetric margins to produce forecast densities with the scope for severe departures from symmetry. Empirical and skew t distributions are employed for the margins, and a high-dimensional Gaussian copula is used to jointly capture cross-sectional and (multivariate) serial dependence. The copula parameter matrix is given by the correlation matrix of a latent stationary and Markov vector autoregression (VAR). We show that the likelihood can be evaluated efficiently using the unique partial correlations, and estimate the copula using Bayesian methods. We examine the forecasting performance of the model for four U.S. macroeconomic variables between 1975:Q1 and 2011:Q2 using quarterly real-time data. We find that the point and density forecasts from the copula model are competitive with those from a Bayesian VAR. During the recent recession the forecast densities exhibit substantial asymmetry, avoiding some of the pitfalls of the symmetric forecast densities from the Bayesian VAR. We show that the asymmetries in the predictive distributions of GDP growth and inflation are similar to those found in the probabilistic forecasts from the Survey of Professional Forecasters. Last, we find that unlike the linear VAR model, our fitted Gaussian copula models exhibit nonlinear dependencies between some macroeconomic variables. This article has online supplementary material.
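A minimal sketch of the copula construction: correlated standard normals are pushed through the normal CDF to get uniform margins, which can then be mapped to any asymmetric margin via its inverse CDF. This is bivariate rather than the high-dimensional serial-dependence copula of the article, and the correlation value and exponential margin are illustrative assumptions.

```python
import math
import random

def gaussian_copula_pairs(rho, n, seed=7):
    """Sample (u1, u2) from a bivariate Gaussian copula with correlation
    rho: draw correlated standard normals and apply the normal CDF,
    leaving uniform margins."""
    rng = random.Random(seed)
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # std normal CDF
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        pairs.append((phi(z1), phi(z2)))
    return pairs

pairs = gaussian_copula_pairs(0.8, 50_000)
# Map the second margin to a right-skewed exponential via its inverse CDF:
skewed = [(u1, -math.log(1 - u2)) for u1, u2 in pairs]
```

The dependence lives entirely in the copula: for a Gaussian copula the rank correlation of the uniforms is (6/π)·arcsin(ρ/2) regardless of which margins are bolted on afterwards.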

11.
In the reliability analysis of mechanical repairable equipment subjected to reliability deterioration with operating time, two forms of the non-homogeneous Poisson process, namely the Power-Law (PL) and the Log-Linear (LL) model, have found general acceptance in the literature. Inferential procedures conditioned on the assumption of the PL or LL model underestimate the overall uncertainty about a quantity of interest, because the PL and LL models can provide different estimates of that quantity even when both adequately fit the observed data. In this paper, a composite estimation procedure, which uses the PL and LL models as competing models, is proposed in the framework of Bayesian statistics, thus allowing the uncertainty involved in model selection to be considered. A model-free approach is then proposed for incorporating technical information on the failure mechanism into the inferential procedure. Such an approach, based on two model-free quantities defined irrespective of the functional form of the failure model, prevents the prior information on the failure mechanism from improperly introducing prior probabilities on the adequacy of each model to fit the observed data. Finally, numerical applications are provided to illustrate the proposed procedures.

12.
This article considers Bayesian inference in the interval constrained normal linear regression model. Whereas much of the previous literature has concentrated on the case where the prior constraint is correctly specified, our framework explicitly allows for the possibility of an invalid constraint. We adopt a non-informative prior, and uncertainty concerning the interval restriction is represented by two prior odds ratios. The sampling theoretic risk of the resulting Bayesian interval pre-test estimator is derived, illustrated and explored.

13.
This paper discusses the specific problems of age-period-cohort (A-P-C) analysis within the general framework of interaction assessment for two-way cross-classified data with one observation per cell. The A-P-C multiple classification model containing the effects of age groups (rows), periods of observation (columns), and birth cohorts (diagonals of the two-way table) is characterized as one of a special class of models involving interaction terms assumed to have very specific forms. The so-called A-P-C identification problem, which results from the use of a particular interaction structure for detecting cohort effects, is shown to manifest itself in the form of an exact linear dependency among the columns of the design matrix. The precise relationship holding among these columns is derived, as is an explicit formula for the bias in the parameter estimates resulting from an incorrect specification of an assumed restriction on the parameters required to solve the normal equations. Current methods for modeling A-P-C data are critically reviewed, an illustrative numerical example is presented, and one potentially promising analysis strategy is discussed. However, given the large number of possible sources for error in A-P-C analyses, it is strongly recommended that the results of such analyses be interpreted with a great deal of caution.
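The exact linear dependency can be verified numerically on a small table. The sketch below is our own construction, not the paper's example: it builds a sum-to-zero (effect-coded) A-P-C design matrix for a 3×3 table, where each cell's cohort is its table diagonal c = p − a, and shows the rank falls short of the number of columns by exactly one.

```python
def effect_code(levels, n_levels):
    """Sum-to-zero (effect) coding: n_levels - 1 columns per factor."""
    return [[1.0 if l == j else (-1.0 if l == n_levels - 1 else 0.0)
             for l in levels] for j in range(n_levels - 1)]

def matrix_rank(rows, tol=1e-9):
    """Rank via Gauss-Jordan elimination with partial pivoting."""
    m = [list(r) for r in rows]
    rank, row = 0, 0
    for col in range(len(m[0])):
        if row == len(m):
            break
        pivot = max(range(row, len(m)), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) < tol:
            continue
        m[row], m[pivot] = m[pivot], m[row]
        for r in range(len(m)):
            if r != row:
                f = m[r][col] / m[row][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[row])]
        row += 1
        rank += 1
    return rank

A, P = 3, 3
cells = [(a, p) for a in range(A) for p in range(P)]
age = [a for a, p in cells]
period = [p for a, p in cells]
cohort = [p - a + (A - 1) for a, p in cells]  # table diagonals, 0 .. A+P-2
cols = ([[1.0] * len(cells)] + effect_code(age, A)
        + effect_code(period, P) + effect_code(cohort, A + P - 1))
X = [list(r) for r in zip(*cols)]  # 9 cells x 9 columns
```

The one-dimensional null space corresponds to the linear trends: adding a linear trend to the age and cohort effects and subtracting it from the period effects leaves every fitted cell unchanged, which is precisely the identification problem.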

14.
Structural econometric auction models with explicit game-theoretic modeling of bidding strategies have been quite a challenge from a methodological perspective, especially within the common value framework. We develop a Bayesian analysis of the hierarchical Gaussian common value model with stochastic entry introduced by Bajari and Hortaçsu. A key component of our approach is an accurate and easily interpretable analytical approximation of the equilibrium bid function, resulting in a fast and numerically stable evaluation of the likelihood function. We extend the analysis to situations with positive valuations using a hierarchical gamma model. We use a Bayesian variable selection algorithm that simultaneously samples the posterior distribution of the model parameters and does inference on the choice of covariates. The methodology is applied to simulated data and to a newly collected dataset from eBay with bids and covariates from 1000 coin auctions. We demonstrate that the Bayesian algorithm is very efficient and that the approximation error in the bid function has virtually no effect on the model inference. Both models fit the data well, but the Gaussian model outperforms the gamma model in an out-of-sample forecasting evaluation of auction prices. This article has supplementary material online.

15.
16.
A robust Bayesian analysis in a conjugate normal framework for the simple ANOVA model is suggested. By fixing the prior mean and varying the prior covariance matrix over a restricted class, we obtain the so-called HiFi and core region, a union and intersection of HPD regions. Based on these robust HPD regions we develop the concept of a ‘robust Bayesian judgement’ procedure. We apply this approach to the simple analysis of variance model with orthogonal designs. The example analyses the costs of an asthma medication obtained by a two-way cross-over study.

17.
A novel fully Bayesian approach for modeling survival data with explanatory variables using the Piecewise Exponential Model (PEM) with a random time grid is proposed. We consider a class of correlated Gamma prior distributions for the failure rates. Such a prior specification is obtained via the dynamic generalized modeling approach jointly with a random time grid for the PEM. A product distribution is considered for modeling the prior uncertainty about the random time grid, making it possible to use the structure of the Product Partition Model (PPM) to handle the problem. A unifying notation for the construction of the likelihood function of the PEM, suitable for both static and dynamic modeling approaches, is considered. Procedures to evaluate the performance of the proposed model are provided. Two case studies are presented in order to exemplify the methodology. For comparison purposes, the data sets are also fitted using the dynamic model with fixed time grid established in the literature. The results show the superiority of the proposed model.

18.
As is the case in many studies, the data collected are limited and an exact value is recorded only if it falls within an interval range; hence, the responses can be left, interval or right censored. Linear (and nonlinear) regression models are routinely used to analyze these types of data and are based on normality assumptions for the error terms. However, those analyses might not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear regression models by replacing the Gaussian assumptions for the random errors with scale mixtures of normal (SMN) distributions. The SMN is an attractive class of symmetric heavy-tailed densities that includes the normal, Student-t, Pearson type VII, slash and contaminated normal distributions as special cases. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is introduced to carry out posterior inference. A new hierarchical prior distribution is suggested for the degrees of freedom parameter in the Student-t distribution. The likelihood function is utilized to compute not only some Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on the q-divergence measure. The proposed Bayesian methods are implemented in the R package BayesCR. The newly developed procedures are illustrated with applications using real and simulated data.
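A minimal sketch of the SMN idea, using the Student-t member of the class: scaling a normal error by a latent Gamma(ν/2, rate ν/2) variable produces heavy tails, and conditioning on that latent scale is what keeps the MCMC in normal-theory form. Names and parameter values are illustrative; this shows only the representation, not the censored-regression sampler.

```python
import math
import random

def sample_student_t_smn(mu, sigma, nu, n, seed=3):
    """Draw Student-t variates via the scale-mixture-of-normals
    representation: y = mu + e / sqrt(w), with e ~ N(0, sigma^2) and
    w ~ Gamma(shape nu/2, rate nu/2), i.e. scale 2/nu."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        w = rng.gammavariate(nu / 2, 2 / nu)  # shape nu/2, scale 2/nu
        out.append(mu + rng.gauss(0, sigma) / math.sqrt(w))
    return out

draws = sample_student_t_smn(mu=0.0, sigma=1.0, nu=10, n=200_000)
```

As a sanity check, for ν = 10 the variance of a standard Student-t is ν/(ν − 2) = 1.25, visibly inflated relative to the underlying normal.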

19.
Covariance estimation and selection for multivariate datasets in a high-dimensional regime is a fundamental problem in modern statistics. Gaussian graphical models are a popular class of models used for this purpose. Current Bayesian methods for inverse covariance matrix estimation under Gaussian graphical models require the underlying graph, and hence the ordering of variables, to be known. However, in practice, such information on the true underlying model is often unavailable. We therefore propose a novel permutation-based Bayesian approach to tackle the unknown variable ordering issue. In particular, we utilize multiple maximum a posteriori estimates under the DAG-Wishart prior, one for each permutation, and subsequently construct the final estimate of the inverse covariance matrix. The proposed estimator has smaller variability and is order-invariant. We establish posterior convergence rates under mild assumptions and illustrate via simulation studies that our method outperforms existing approaches in estimating inverse covariance matrices.

20.
A Gaussian process (GP) can be thought of as an infinite collection of random variables with the property that any finite subset, say of dimension n, of these variables has a multivariate normal distribution of dimension n, mean vector β and covariance matrix Σ [O'Hagan, A., 1994, Kendall's Advanced Theory of Statistics, Vol. 2B, Bayesian Inference (John Wiley & Sons, Inc.)]. The elements of the covariance matrix are routinely specified as the product of a common variance and a correlation function. It is important to use a correlation function that yields a valid (positive definite) covariance matrix. Further, it is well known that the smoothness of a GP is directly related to the specification of its correlation function. Also, from a Bayesian point of view, a prior distribution must be assigned to the unknowns of the model. Therefore, when using a GP to model a phenomenon, the researcher faces two challenges: specifying a correlation function and a prior distribution for its parameters. In the literature there are many classes of correlation functions that provide a valid covariance structure, and many suggested prior distributions for the parameters involved in these functions. We aim to investigate how sensitive GPs are to the (sometimes arbitrary) choices of their correlation functions. To this end, we simulated 25 datasets, each of size 64, over the square [0, 5]×[0, 5] with a specific correlation function and fixed values of the GP's parameters. We then fit different correlation structures to these data, with different prior specifications, and check the performance of the adjusted models using different model comparison criteria.
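A sketch of the first challenge, checking that a chosen correlation function yields a valid (positive definite) correlation matrix on a grid over [0, 5]×[0, 5]: a successful Cholesky factorization certifies positive definiteness. The grid, the length-scale value 2.0 and the two correlation families shown are our illustrative assumptions, not the paper's simulation design.

```python
import math

def corr_matrix(points, corr_fn):
    """Correlation matrix R with R[i][j] = corr_fn(|x_i - x_j|)."""
    return [[corr_fn(math.dist(a, b)) for b in points] for a in points]

def cholesky(mat):
    """Plain Cholesky factorization; math.sqrt fails on a negative
    argument, so success certifies a positive definite matrix."""
    n = len(mat)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(mat[i][i] - s)
            else:
                L[i][j] = (mat[i][j] - s) / L[j][j]
    return L

# Two common valid correlation families on a 5 x 5 grid over [0, 5] x [0, 5]:
grid = [(i * 1.25, j * 1.25) for i in range(5) for j in range(5)]
sq_exp = corr_matrix(grid, lambda d: math.exp(-(d / 2.0) ** 2))
exponential = corr_matrix(grid, lambda d: math.exp(-d / 2.0))
L_se = cholesky(sq_exp)       # succeeds: positive definite
L_exp = cholesky(exponential)  # succeeds for this family too
```

The squared-exponential family gives very smooth (infinitely differentiable) GP paths, the exponential family rough ones, which is exactly the sensitivity to the correlation choice the study investigates.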
