Similar Articles
20 similar articles found (search time: 46 ms)
1.
Structural models, known in the Bayesian literature as dynamic linear models, have been widely used to model and predict time series through a decomposition into unobservable components. Because their parameters have a direct interpretation, structural models are a powerful yet simple methodology for analyzing time series in areas such as economics, climatology, and the environmental sciences. The parameters of such models can be estimated by maximum likelihood or by Bayesian procedures, the latter generally implemented with conjugate priors, and both approaches are well represented in the literature. But are there situations where one of these approaches should be preferred? In this work, instead of conjugate priors for the hyperparameters, the Jeffreys prior is used in the Bayesian approach, along with the uniform prior, and the results are compared with the maximum likelihood method in an extensive Monte Carlo study. Interval estimation is also evaluated: bootstrap confidence intervals are introduced in the context of structural models and their performance is compared with that of the asymptotic and credibility intervals. A real time series from a Brazilian electric company serves as illustration.
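The local level model is the simplest structural model of the kind this abstract describes. As a hedged sketch (the simulated data and all variable names are illustrative, not from the paper), the following estimates its two variance hyperparameters by maximum likelihood, evaluating the Gaussian likelihood with a Kalman filter:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate a local level model: y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t
n, sig_eps, sig_eta = 300, 1.0, 0.5
mu = np.cumsum(rng.normal(0.0, sig_eta, n))
y = mu + rng.normal(0.0, sig_eps, n)

def neg_loglik(log_params):
    """Negative Gaussian log-likelihood via the Kalman filter (prediction error form)."""
    s2_eps, s2_eta = np.exp(log_params)     # optimize on the log scale for positivity
    a, p, ll = y[0], 1e7, 0.0               # diffuse initialization of the state
    for t in range(1, n):
        p += s2_eta                         # predicted state variance
        f = p + s2_eps                      # one-step-ahead prediction variance
        v = y[t] - a                        # prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                           # Kalman gain
        a += k * v                          # filtered state mean
        p *= 1 - k                          # filtered state variance
    return -ll

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
s2_eps_hat, s2_eta_hat = np.exp(res.x)
```

The same likelihood could instead be combined with a Jeffreys or uniform prior on the two variances and explored by MCMC, which is the comparison the abstract carries out.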

2.
We employ a hierarchical Bayesian method with exchangeable prior distributions to estimate and compare similar nondecreasing response curves. A Dirichlet process distribution is assigned to each response curve as a first-stage prior, and a second-stage prior models the hyperparameters. We define parameters that will be used to compare the response curves, and a Markov chain Monte Carlo method computes the resulting Bayesian estimates. To illustrate the methodology, we re-examine data from an experiment designed to test whether experimenter observation influences the ultimatum game. A major restriction of the original analysis was a shape constraint, which the present technique allows us to relax considerably. We also consider independent priors and use Bayes factors to compare various models.

3.
Multivariate model validation is a complex decision-making problem involving the comparison of multiple correlated quantities, based on the available information and prior knowledge. This paper presents a Bayesian risk-based decision method for validation assessment of multivariate predictive models under uncertainty. A generalized likelihood ratio is derived as a quantitative validation metric, based on Bayes' theorem and a Gaussian assumption for the errors between validation data and model prediction. The multivariate model is then assessed by comparing the likelihood ratio with a Bayesian decision threshold, a function of the decision costs and the prior probability of each hypothesis. The probability density function of the likelihood ratio is constructed using the statistics of multiple response quantities and Monte Carlo simulation. The proposed methodology is implemented in the validation of a transient heat conduction model, using a multivariate data set from experiments. The Bayesian methodology provides a quantitative approach that facilitates rational decisions in multivariate model assessment under uncertainty.
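As an illustrative sketch of the decision rule described above (the error covariance, bias alternative, costs, and hypothesis priors are invented for the example, not taken from the paper), one can compare a Gaussian likelihood ratio for the prediction errors against a Bayesian threshold built from decision costs and hypothesis priors:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Hypothetical error covariance for three correlated response quantities
cov = np.array([[1.0, 0.3, 0.1],
                [0.3, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
e = rng.multivariate_normal(np.zeros(3), cov)   # validation data minus prediction

# Likelihood ratio: "model valid" (zero-mean errors) vs a fixed-bias alternative
bias = np.full(3, 2.0)                          # illustrative alternative mean
lr = (multivariate_normal.pdf(e, mean=np.zeros(3), cov=cov)
      / multivariate_normal.pdf(e, mean=bias, cov=cov))

# Bayesian decision threshold from hypothesis priors and decision costs
p_valid, p_invalid = 0.5, 0.5
cost_accept_bad, cost_reject_good = 1.0, 1.0
threshold = (p_invalid * cost_accept_bad) / (p_valid * cost_reject_good)
accept_model = bool(lr > threshold)             # accept the model if LR clears the bar
```

With equal costs and priors the threshold is 1; asymmetric costs shift the bar, which is how the decision-theoretic part of the metric enters.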

4.
We develop strategies for Bayesian modelling as well as model comparison, averaging and selection for compartmental models with particular emphasis on those that occur in the analysis of positron emission tomography (PET) data. Both modelling and computational issues are considered. Biophysically inspired informative priors are developed for the problem at hand, and by comparison with default vague priors it is shown that the proposed modelling is not overly sensitive to prior specification. It is also shown that an additive normal error structure does not describe measured PET data well, despite being very widely used, and that within a simple Bayesian framework simultaneous parameter estimation and model comparison can be performed with a more general noise model. The proposed approach is compared with standard techniques using both simulated and real data. In addition to good, robust estimation performance, the proposed technique provides, automatically, a characterisation of the uncertainty in the resulting estimates which can be considerable in applications such as PET.

5.
This article combines the best of objective and subjective Bayesian inference in specifying priors for inequality- and equality-constrained analysis of variance models. Objectivity lies in the use of training data to specify a prior distribution; subjectivity lies in the restrictions placed on the prior to formulate models. The aim of this article is to find the best model in a set of models specified by inequality and equality constraints on the model parameters. The models are evaluated with an encompassing prior approach, whose advantage is that only a prior for the unconstrained encompassing model needs to be specified: the priors for all constrained models can be derived from this encompassing prior. Different choices for this encompassing prior are considered and evaluated.

6.
When prior information on model parameters is weak or lacking, Bayesian statistical analyses are typically performed with so-called "default" priors. We consider the problem of constructing default priors for the parameters of survival models in the presence of censoring, using Jeffreys' rule. We compare these Jeffreys priors to the "uncensored" Jeffreys priors, obtained without considering censored observations, for the parameters of the exponential and log-normal models. The comparison is based on the frequentist coverage of the posterior Bayes intervals obtained from these prior distributions.
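For the exponential model, the uncensored Jeffreys prior π(λ) ∝ 1/λ combines with a Type I censored sample into a Gamma(d, T) posterior, where d is the number of observed failures and T the total time on test. A small simulation, with all settings chosen for illustration rather than taken from the paper, estimates the frequentist coverage of the resulting 95% credible interval:

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(2)
lam_true, n, c, reps = 1.0, 30, 1.5, 2000   # true rate, sample size, censoring time
covered = 0
for _ in range(reps):
    x = rng.exponential(1.0 / lam_true, n)
    t = np.minimum(x, c)                    # observed (possibly censored) times
    d = int((x <= c).sum())                 # number of uncensored failures
    # pi(lam) ∝ 1/lam  =>  posterior lam | data ~ Gamma(shape=d, rate=sum(t))
    lo, hi = gamma.ppf([0.025, 0.975], a=d, scale=1.0 / t.sum())
    covered += lo <= lam_true <= hi
coverage = covered / reps                   # frequentist coverage of the 95% interval
```

Coverage close to the nominal 95% is exactly the criterion on which the abstract compares the censored and uncensored Jeffreys priors.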

7.
Seasonal autoregressive (SAR) models have been modified and extended to model high-frequency time series that exhibit double seasonal patterns. Some researchers have introduced Bayesian inference for double seasonal autoregressive (DSAR) models; however, none has tackled Bayesian identification of DSAR models. To fill this gap, we present a Bayesian methodology for identifying the order of DSAR models. Assuming normally distributed errors and placing three priors on the model parameters, namely the natural conjugate, g, and Jeffreys' priors, we derive the joint posterior mass function of the model order in closed form. The posterior mass function can then be inspected, and the best DSAR order is chosen as the value with the highest posterior probability for the series being analyzed. We evaluate the proposed methodology in a simulation study and then apply it to a real-world data set of hourly internet traffic volumes.

8.
In this article, we propose a denoising methodology in the wavelet domain based on a Bayesian hierarchical model with a double Weibull prior. We propose two estimators, one based on the posterior mean (Double Weibull Wavelet Shrinker, DWWS) and the other on the larger posterior mode (DWWS-LPM), and show how to compute them efficiently. Traditionally, mixture priors have been used to model sparse wavelet coefficients; the interesting feature of this article is the use of a non-mixture prior. Extensive simulations on standard test functions demonstrate that the methodology provides good denoising performance, comparable even to state-of-the-art methods that use mixture priors and empirical Bayes settings of the hyperparameters. An application to a real-world data set is also considered.

9.
The choice of prior distributions for the variances can be important and quite difficult in Bayesian hierarchical and variance component models. For situations where little prior information is available, a 'noninformative' prior is usually chosen. Noninformative priors have been discussed by many authors and used in many contexts. However, care must be taken with these prior distributions, as many are improper and can thus lead to improper posterior distributions; additionally, in small samples these priors can be 'informative'. In this paper, we investigate a proper 'vague' prior, the uniform shrinkage prior (Strawderman 1971; Christiansen & Morris 1997). We discuss its properties and show that posterior distributions for common hierarchical models under this prior are proper. We also illustrate the attractive frequentist properties of this prior for a normal hierarchical model, including testing and estimation. To conclude, we generalize the prior to the multivariate setting of a covariance matrix.

10.
This paper develops an objective Bayesian method for estimating the unknown parameters of the half-logistic distribution when a sample is available from a progressive Type-II censoring scheme. Noninformative priors such as the Jeffreys and reference priors are derived, and the derived priors are checked against probability-matching criteria. Because the marginal posterior density of each parameter cannot be expressed in explicit form, the Metropolis–Hastings algorithm is applied to generate Markov chain Monte Carlo samples from the posterior density functions. Monte Carlo simulations are conducted to investigate the frequentist properties of the estimated models under the noninformative priors. For illustration, a real data set is presented, and the quality of the models under the noninformative priors is evaluated through posterior predictive checking.
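A minimal version of the sampling step, assuming a complete (uncensored) sample rather than the paper's progressive Type-II scheme and the scale-invariant prior π(σ) ∝ 1/σ, runs a random-walk Metropolis–Hastings chain on log σ for the half-logistic scale parameter:

```python
import numpy as np

rng = np.random.default_rng(5)

def loglik(sigma, x):
    """Half-logistic log-likelihood: f(x) = 2 exp(-x/s) / (s * (1 + exp(-x/s))^2)."""
    z = x / sigma
    return np.sum(np.log(2.0) - z - np.log(sigma) - 2.0 * np.log1p(np.exp(-z)))

# Simulate: with U ~ Uniform(0,1), x = -s * log((1 - U) / (1 + U)) is half-logistic(s)
s_true, n = 2.0, 100
u = rng.uniform(size=n)
x = -s_true * np.log((1.0 - u) / (1.0 + u))

# Random-walk Metropolis-Hastings on log(sigma); pi(sigma) ∝ 1/sigma is flat there
def log_post(ls):
    return loglik(np.exp(ls), x)

ls, chain = np.log(x.mean()), []
for _ in range(6000):
    prop = ls + rng.normal(0.0, 0.15)       # symmetric proposal on the log scale
    if np.log(rng.uniform()) < log_post(prop) - log_post(ls):
        ls = prop                           # accept the move
    chain.append(ls)
post_mean = np.exp(np.mean(chain[1000:]))   # back-transformed posterior summary of sigma
```

The censored-likelihood and reference-prior versions differ only in the `loglik` and `log_post` functions; the sampling loop is unchanged.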

11.
In this article, we develop a Bayesian analysis of the autoregressive model with explanatory variables. When σ² is known, we consider a normal prior and give the Bayesian estimator for the regression coefficients of the model. When σ² is unknown, another Bayesian estimator is given for all unknown parameters under a conjugate prior. The Bayesian model selection problem is also considered under double-exponential priors. Using the convergence of ρ-mixing sequences, the consistency and asymptotic normality of the Bayesian estimators of the regression coefficients are proved. Simulation results indicate that our Bayesian estimators are robust and not strongly dependent on the priors.

12.
To robustify posterior inference, it is necessary to consider uncertainty about the sampling model in addition to using large classes of priors. In this article we suggest that a convenient and simple way to incorporate model robustness is to consider a discrete set of competing sampling models and combine it with a suitably large class of priors. This set reflects foreseeable departures from the base model, such as thinner or heavier tails or asymmetry. We combine the models with different classes of priors proposed in the vast literature on Bayesian robustness with respect to the prior. We also explore links with the related literature on stable estimation and precise measurement theory, now with more than one model entertained. To these ends it is necessary to introduce a procedure for model comparison that does not depend on an arbitrary constant or scale; we utilize a recent development in automatic Bayes factors with self-adjusted scale, the 'intrinsic Bayes factor' (Berger and Pericchi, Technical Report, 1993).

13.
The posterior predictive p value (ppp) was invented as a Bayesian counterpart to classical p values. The methodology can be applied to discrepancy measures involving both data and parameters and can hence be targeted to check various modeling assumptions. The interpretation can, however, be difficult, since the distribution of the ppp value under the modeling assumptions varies substantially between cases. A calibration procedure has been suggested, treating the ppp value as a test statistic in a prior predictive test. In this paper, we suggest that a prior predictive test may instead be based on the expected posterior discrepancy, which is somewhat simpler both conceptually and computationally. Since both methods require simulating a large posterior parameter sample for each member of an equally large prior predictive data sample, we further suggest matching the given discrepancy by a computation-saving conflict measure. This approach is also based on simulations but only requires sampling from two distributions representing two contrasting information sources about a model parameter. The conflict measure methodology is also more flexible in that it handles non-informative priors without difficulty. We compare the different approaches theoretically in some simple models and in a more complex applied example.
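For a concrete (and deliberately simple) illustration of the ppp computation, consider a normal model with known variance and a conjugate prior on the mean; the discrepancy, sample sizes, and prior settings below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma, tau = 50, 1.0, 10.0               # known sdev; N(0, tau^2) prior on the mean
y = rng.normal(0.0, 1.0, n)                 # data generated under the assumed model

# Conjugate posterior for the mean: N(m_post, v_post)
v_post = 1.0 / (n / sigma**2 + 1.0 / tau**2)
m_post = v_post * y.sum() / sigma**2

def T(data):
    return np.abs(data).max()               # discrepancy: largest absolute observation

# ppp: draw theta from the posterior, replicate the data, compare discrepancies
reps = 4000
theta = rng.normal(m_post, np.sqrt(v_post), reps)
y_rep = rng.normal(theta[:, None], sigma, (reps, n))
ppp = float((np.abs(y_rep).max(axis=1) >= T(y)).mean())
```

The calibration problem the abstract raises is visible here: repeating this whole computation over many prior predictive data sets produces a distribution of ppp values that is generally not uniform.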

14.
Robust Bayesian methodology deals with uncertainty in the inputs of an analysis (the prior, the model, and the loss function) and provides a systematic way to account for their variation. When the uncertainty concerns the prior knowledge, robust Bayesian analysis represents that knowledge by a class of priors Γ and derives optimal rules over the class. In this paper, we motivate the use of robust Bayes methodology under the asymmetric general entropy loss function in insurance and pursue two main goals: (i) computing premiums and (ii) predicting a future claim size. To achieve these goals, we choose some classes of priors and address (i) Bayes and posterior regret gamma-minimax premium computation and (ii) Bayes and posterior regret gamma-minimax prediction of a future claim size under the general entropy loss. We also perform a prequential analysis and compare the performance of the posterior regret gamma-minimax predictors against the Bayes predictors.

15.
Structured additive regression comprises many semiparametric regression models, such as generalized additive (mixed) models, geoadditive models, and hazard regression models, within a unified framework. In a Bayesian formulation, nonparametric functions, spatial effects, and further model components are specified in terms of multivariate Gaussian priors for high-dimensional vectors of regression coefficients. For several model terms, such as penalized splines or Markov random fields, these Gaussian prior distributions involve rank-deficient precision matrices, yielding partially improper priors. Moreover, hyperpriors for the variances (corresponding to inverse smoothing parameters) may also be specified as improper, e.g. corresponding to Jeffreys prior or a flat prior for the standard deviation. Hence, propriety of the joint posterior is a crucial issue for full Bayesian inference, particularly when it is based on Markov chain Monte Carlo simulations. We establish theoretical results providing sufficient (and sometimes necessary) conditions for propriety and provide empirical evidence through several accompanying simulation studies.

16.
In this article, we develop an empirical Bayesian approach to the estimation of parameters in four bivariate exponential (BVE) distributions. We adopt a gamma distribution as the prior for the parameters of the model, with the hyperparameters estimated by the method of moments or by maximum likelihood (MLE). A simulation study was conducted to compute the empirical Bayesian estimates of the parameters and their standard errors. We compare the posterior modes of the parameters obtained under different prior distributions; the Bayesian estimates based on gamma priors are much closer to the true values than those based on improper priors. We use an MCMC method to obtain the posterior means and compare them with the results under improper priors and with the classical estimates, the MLEs.

17.
For the balanced variance component model, when inference concerning the intraclass correlation coefficient is of interest, Bayesian analysis is often appropriate. The question remains, however, how to choose an appropriate prior. In this paper, we consider testing the intraclass correlation coefficient under a default prior specification. Reference priors (Berger and Bernardo, 1992) are developed and used to obtain the intrinsic Bayes factor (Berger and Pericchi, 1996) for the nested models. Influence diagnostics using intrinsic Bayes factors are also developed. Finally, a simulated data set illustrates the proposed methodology, with appropriate simulation based on computational formulas; to overcome the difficulty of Bayesian computation, MCMC methods such as the Gibbs sampler and the Metropolis–Hastings algorithm are employed.

18.
New methodology for fully Bayesian mixture analysis is developed, making use of reversible jump Markov chain Monte Carlo methods that are capable of jumping between the parameter subspaces corresponding to different numbers of components in the mixture. A sample from the full joint distribution of all unknown variables is thereby generated, and this can be used as a basis for a thorough presentation of many aspects of the posterior distribution. The methodology is applied here to the analysis of univariate normal mixtures, using a hierarchical prior model that offers an approach to dealing with weak prior information while avoiding the mathematical pitfalls of using improper priors in the mixture context.

19.
One critical issue in the Bayesian approach is choosing priors when there is not enough prior information to specify hyperparameters. Several improper noninformative priors for capture-recapture models have been proposed in the literature. It is known that Bayesian estimates can be sensitive to the choice of prior, especially when the sample size is small to moderate; yet how to choose a noninformative prior for a given model remains an open question. In this paper, as a first step, we consider estimating the population size for the Mt model using noninformative priors. The Mt model has wide application in wildlife management, ecology, software reliability, epidemiological studies, census undercount, and other research areas. Four commonly used noninformative priors are considered. We find that the choice of noninformative prior depends only on the number of sampling occasions, and we provide guidelines for this choice based on the simulation results. The propriety of applying improper noninformative priors is discussed. Simulation studies inspect the frequentist performance of the Bayesian point and interval estimates under the different noninformative priors for various population sizes, capture probabilities, and numbers of sampling occasions. The results show that the Bayesian approach can provide more accurate estimates of the population size than the MLE for small samples. Two real-data examples illustrate the method.

20.
This paper aims to select a suitable prior for the Bayesian analysis of a two-component mixture of the Topp–Leone model under doubly censored samples, with left-censored samples for the first component and right-censored samples for the second. The posterior analysis is carried out under a class of informative and noninformative priors using two loss functions. The different Bayes estimators are compared in a simulation study and a real-life example, and a model comparison criterion is used to select a suitable prior for the Bayesian analysis. The hazard rate of the Topp–Leone mixture model is compared over a range of parameter values.
