Similar Documents
20 similar documents retrieved.
1.
The variational approach to Bayesian inference enables simultaneous estimation of model parameters and model complexity; an appealing feature is that the choice of model complexity is made automatically. Empirical results from the analysis of hidden Markov models with Gaussian observation densities illustrate this: if the variational algorithm is initialized with a large number of hidden states, redundant states are eliminated as the method converges to a solution, so the number of hidden states is effectively selected. In addition, through the use of a variational approximation, the deviance information criterion for Bayesian model selection can be extended to the hidden Markov model framework. The deviance information criterion provides a further tool for model selection, which can be used in conjunction with the variational approach.
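The deviance information criterion mentioned above has a simple generic form, DIC = Dbar + pD, where Dbar is the posterior mean deviance and pD = Dbar - D(theta_bar) is the effective number of parameters. The sketch below is not the paper's variational version; it is a plain Monte Carlo computation from posterior draws, with a toy normal model standing in for the hidden Markov model.

```python
import numpy as np
from scipy import stats

def dic(posterior_draws, log_likelihood):
    """Deviance information criterion from posterior draws: DIC = Dbar + pD,
    where Dbar is the posterior mean deviance and pD = Dbar - D(theta_bar)."""
    deviances = np.array([-2.0 * log_likelihood(theta) for theta in posterior_draws])
    d_bar = deviances.mean()
    d_at_mean = -2.0 * log_likelihood(posterior_draws.mean(axis=0))
    return d_bar + (d_bar - d_at_mean)

# Toy check: normal model with unknown mean and known unit variance.
rng = np.random.default_rng(0)
data = rng.normal(0.5, 1.0, size=50)
draws = rng.normal(data.mean(), 1.0 / np.sqrt(data.size), size=(2000, 1))  # posterior draws of the mean
loglik = lambda theta: stats.norm(theta[0], 1.0).logpdf(data).sum()
print(dic(draws, loglik))
```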

2.
A considerable amount of work has been devoted to developing default Bayes factors for model selection and hypothesis testing. Two commonly used criteria, the intrinsic Bayes factor and the fractional Bayes factor, are compared for testing two independent normal means and variances. We also derive several intrinsic priors whose Bayes factors are asymptotically equivalent to the respective default Bayes factors. We demonstrate our results on simulated datasets.

3.
4.
In this paper we consider a Bayesian predictive approach to sample size determination in equivalence trials. Equivalence experiments are conducted to show that the unknown difference between two parameters is small; in clinical practice, for instance, this kind of experiment aims to determine whether the effects of two medical interventions are therapeutically similar. We declare an experiment successful if an interval estimate of the effects difference lies within the set of parameter values indicating a negligible difference between treatment effects (the equivalence interval). We derive two alternative criteria for selecting the optimal sample size, one based on the predictive expectation of the interval limits and the other based on the predictive probability that these limits fall in the equivalence interval. Moreover, for both criteria we derive a robust version with respect to the choice of the prior distribution. Numerical results are provided, and an application assuming the normal model with conjugate prior distributions is illustrated.
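As a rough illustration of the predictive-probability criterion, the sketch below uses simplifying assumptions not taken from the paper: two groups of equal size, a known common variance, a flat analysis prior for the mean difference, and a normal design prior for the true difference. It simulates the predictive distribution of the observed difference and searches for the smallest per-group size whose predictive probability of "success" reaches a target.

```python
import numpy as np

def pred_prob_success(n, delta=0.3, sigma=1.0, design_sd=0.1,
                      cred_z=1.96, n_sim=5000, seed=1):
    """Predictive probability that the 95% credible interval for the difference of
    two normal means (per-group size n, known sigma, flat analysis prior) falls
    entirely inside the equivalence interval (-delta, delta)."""
    rng = np.random.default_rng(seed)
    true_diff = rng.normal(0.0, design_sd, n_sim)      # design-prior draws
    se = sigma * np.sqrt(2.0 / n)                      # s.e. of the mean difference
    xbar_diff = rng.normal(true_diff, se)              # predictive data summaries
    lo, hi = xbar_diff - cred_z * se, xbar_diff + cred_z * se
    return np.mean((lo > -delta) & (hi < delta))

# Smallest per-group n whose predictive probability of success reaches 0.8.
n_opt = next(n for n in range(50, 2001, 50) if pred_prob_success(n) >= 0.8)
print(n_opt, pred_prob_success(n_opt))
```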

5.
Deterministic simulation models are used to guide decision-making and enhance understanding of complex systems such as disease transmission, population dynamics, and tree plantation growth. Bayesian inference about parameters in deterministic simulation models can require the pooling of expert opinion. One class of approaches to pooling expert opinion in this context is supra-Bayesian pooling, in which expert opinion is treated as data for an ultimate decision maker. This article details and compares two supra-Bayesian approaches: "event updating" and "parameter updating." The suitability of each approach in the context of deterministic simulation models is assessed based on theoretical properties, performance on examples, and the selection and sensitivity of required hyperparameters. In general, we favor a parameter updating approach because it uses more intuitive hyperparameters, it performs sensibly on examples, and because the alternative event updating approach fails to exhibit a desirable property (relative propensity consistency) in all cases. Inference in deterministic simulation models is an increasingly important statistical and practical problem, and supra-Bayesian methods represent one viable option for achieving a sensible pooling of expert opinion.
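One simple, commonly used instance of treating expert opinion as data is to regard each expert's stated mean and standard deviation for a simulator parameter as a noisy normal observation and combine them with the decision maker's prior by precision weighting. The sketch below is this simplified normal-normal version, offered only as an illustration of the supra-Bayesian idea rather than the article's exact "parameter updating" formulation; all numbers are hypothetical.

```python
import numpy as np

def pool_expert_opinions(prior_mean, prior_sd, expert_means, expert_sds):
    """Supra-Bayesian pooling (simplified): each expert's reported mean is treated
    as a normal observation of the parameter with the expert's reported standard
    deviation; the decision maker's normal prior is updated by precision weighting."""
    precisions = np.concatenate(([1.0 / prior_sd**2], 1.0 / np.asarray(expert_sds)**2))
    means = np.concatenate(([prior_mean], np.asarray(expert_means)))
    post_precision = precisions.sum()
    post_mean = np.sum(precisions * means) / post_precision
    return post_mean, np.sqrt(1.0 / post_precision)

# Vague prior for the decision maker plus three hypothetical expert opinions.
print(pool_expert_opinions(0.0, 10.0, expert_means=[2.1, 1.8, 2.6], expert_sds=[0.5, 0.4, 0.9]))
```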

6.
An analysis of inter-rater agreement is presented. We study the problem with several raters using a Bayesian model based on the Dirichlet distribution. Inter-rater agreement, including global and partial agreement, is studied by determining the joint posterior distribution of the raters. Posterior distributions are computed with a direct resampling technique. Our method is illustrated with an example involving four residents diagnosing 12 psychiatric patients suspected of having a thought disorder. Initially, employing analytical and resampling methods, total agreement among the four raters is examined with a Bayesian testing technique. Partial agreement is then examined by determining the posterior probability of certain orderings among the rater means. The power of resampling is revealed by its ability to compute complex multiple integrals that represent various posterior probabilities of agreement and disagreement among several raters.
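A minimal version of this kind of resampling analysis, with hypothetical counts in place of the study data, draws each rater's diagnosis probabilities from a Dirichlet posterior and estimates posterior probabilities of agreement and of a particular ordering by Monte Carlo.

```python
import numpy as np
rng = np.random.default_rng(0)

# Hypothetical counts: each of four raters classifies 12 patients as having a
# thought disorder (column 0) or not (column 1).
counts = np.array([[9, 3], [8, 4], [7, 5], [10, 2]])
alpha0 = np.ones(2)                 # uniform Dirichlet prior for each rater
S = 20000

# Resample each rater's category probabilities from its Dirichlet posterior.
theta = np.stack([rng.dirichlet(alpha0 + c, size=S) for c in counts], axis=1)
p_disorder = theta[:, :, 0]         # S x 4 posterior draws of P(disorder) per rater

# Posterior probability that all four raters agree to within 0.1.
print(np.mean(p_disorder.max(axis=1) - p_disorder.min(axis=1) < 0.1))
# Posterior probability of the ordering rater4 > rater1 > rater2 > rater3.
print(np.mean((p_disorder[:, 3] > p_disorder[:, 0]) &
              (p_disorder[:, 0] > p_disorder[:, 1]) &
              (p_disorder[:, 1] > p_disorder[:, 2])))
```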

7.
This paper describes a Bayesian approach to modelling carcinogenicity in animal studies where the data consist of counts of the number of tumours present over time. It compares two autoregressive hidden Markov models. The first models the transitions between three latent states: an inactive transient state, a multiplying state for increasing counts and a reducing state for decreasing counts. The second introduces a fourth tied state to describe non-zero observations that are neither increasing nor decreasing. Both models can capture the length of stay upon entry into a state. A discrete constant-hazard waiting-time distribution is used to model the time to onset of tumour growth. Our models describe between-animal variability by a single hierarchy of random effects and within-animal variation by first-order serial dependence; they can be extended to higher-order serial dependence and multi-level hierarchies. Analysis of data from animal experiments comparing the influence of two genes leads to conclusions that differ from those of Dunson (2000). The observed-data likelihood defines an information criterion to assess the predictive properties of the three- and four-state models. The deviance information criterion is appropriately defined for discrete parameters.

8.
Using generalized linear models (GLMs), Jalaludin et al. (2006; J. Exposure Analysis and Epidemiology 16, 225–237) studied the association between the daily number of emergency department visits for cardiovascular disease by the elderly (65+) and five measures of ambient air pollution. Bayesian methods provide an alternative approach to classical time series modelling and are starting to be more widely used. This paper applies Bayesian methods to the dataset used by Jalaludin et al. (2006) and compares the results with those obtained by Jalaludin et al. (2006) using GLMs.

9.
The estimation of Bayesian networks from high-dimensional data, in particular gene expression data, has been the focus of much recent research. While several methods are available for estimating such networks, they typically assume that the data consist of independent and identically distributed samples. It is often the case, however, that the available data have a more complex mean structure, plus additional components of variance, which must then be accounted for in the estimation of a Bayesian network. In this paper, score metrics that account for such complexities are proposed for use with score-based methods for estimating Bayesian networks. We propose, first, a fully Bayesian score metric and, second, a metric inspired by the notion of restricted maximum likelihood. We demonstrate the performance of these new metrics for the estimation of Bayesian networks using simulated data with known complex mean structures. We then present an analysis of expression levels of grape-berry genes, adjusting for exogenous variables believed to affect the expression levels of the genes. Demonstrable biological effects can be inferred from the estimated conditional independence relationships and correlations amongst the grape-berry genes.

10.
Consider a large number of econometric investigations using different estimation techniques and/or different subsets of all available data to estimate a fixed set of parameters. The resulting empirical distribution of point estimates can be shown, under suitable conditions, to coincide with a Bayesian posterior measure on the parameter space induced by a minimum information procedure. This Bayesian interpretation makes it easier to combine the results of various empirical exercises for statistical decision making. The collection of estimators may be generated by one investigator to ensure that our conditions are satisfied, or it may be collected from published works, in which case behavioral assumptions need to be made regarding the dependence structure of the econometric studies.

11.
This paper considers the use of Dirichlet process prior distributions in the statistical analysis of network data. Dirichlet process prior distributions have the advantages of avoiding the parametric specifications for distributions, which are rarely known, and of facilitating a clustering effect, which is often applicable to network nodes. The approach is highlighted for two network models and is conveniently implemented using WinBUGS software.
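For readers unfamiliar with the clustering effect referred to above, the sketch below (a generic illustration, unrelated to the WinBUGS implementation) draws a single realisation of a Dirichlet process by truncated stick-breaking; because the realisation is discrete, repeated draws from it share atoms, which is what induces clustering of network nodes.

```python
import numpy as np

def sample_dp_stick_breaking(alpha, base_sampler, n_atoms=200, rng=None):
    """One realisation of G ~ DP(alpha, G0) via truncated stick-breaking:
    weights from Beta(1, alpha) sticks, atoms drawn i.i.d. from the base measure G0."""
    rng = np.random.default_rng(rng)
    betas = rng.beta(1.0, alpha, size=n_atoms)
    weights = betas * np.cumprod(np.concatenate(([1.0], 1.0 - betas[:-1])))
    atoms = base_sampler(n_atoms, rng)
    return weights, atoms

# Base measure G0 = N(0, 1); draws from the realised G land on a finite set of atoms.
weights, atoms = sample_dp_stick_breaking(2.0, lambda n, r: r.normal(0.0, 1.0, n), rng=42)
draws = np.random.default_rng(7).choice(atoms, size=1000, p=weights / weights.sum())
print(len(np.unique(draws)), "distinct values among 1000 draws")
```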

12.
With reference to a specific dataset, we consider how to perform a flexible non-parametric Bayesian analysis of an inhomogeneous point pattern modelled by a Markov point process, with a location-dependent first-order term and pairwise interaction only. A priori we assume that the first-order term is a shot noise process, and that the interaction function for a pair of points depends only on the distance between the two points and is a piecewise linear function modelled by a marked Poisson process. Simulation of the resulting posterior distribution using a Metropolis–Hastings algorithm in the 'conventional' way involves evaluating ratios of unknown normalizing constants. We avoid this problem by applying a recently introduced auxiliary variable technique. In the present setting, the auxiliary variable used is an example of a partially ordered Markov point process model.

13.
This paper deals with the Bayesian analysis of additive mixed model experiments. Consider b randomly chosen subjects who respond once to each of t treatments. The subjects are treated as random effects and the treatment effects are fixed. Suppose that some prior information is available, thus motivating a Bayesian analysis. The Bayesian computation, however, can be difficult in this situation, especially when a large number of treatments is involved. Three computational methods are suggested to perform the analysis. The exact posterior density of any parameter of interest can be simulated based on random realizations taken from a restricted multivariate t distribution. The density can also be simulated using Markov chain Monte Carlo methods. The simulated density is accurate when a large number of random realizations is taken, but this may require a substantial amount of computer time when many treatments are involved. An alternative Laplacian approximation is discussed. The Laplacian method produces smooth and very accurate approximations to posterior densities, and takes only seconds of computer time. An example of a pipeline cracks experiment is used to illustrate the Bayesian approaches and the computational methods.
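The Laplacian (Laplace) approximation itself is generic: locate the posterior mode and approximate the posterior by a normal whose covariance is the inverse Hessian of the negative log posterior at the mode. The sketch below shows the idea on a toy one-parameter posterior, not the paper's mixed-model computation; BFGS's inverse-Hessian estimate stands in for an exact Hessian.

```python
import numpy as np
from scipy import optimize, stats

def laplace_approximation(neg_log_post, x0):
    """Normal approximation centred at the posterior mode; the covariance is the
    inverse Hessian of the negative log posterior (here BFGS's estimate of it)."""
    res = optimize.minimize(neg_log_post, x0, method="BFGS")
    return res.x, res.hess_inv

# Toy posterior: normal likelihood (known sd 1) with a N(0, 10^2) prior on the mean.
data = np.array([4.1, 5.3, 4.8, 5.9, 5.1])

def neg_log_post(theta):
    mu = theta[0]
    return -(stats.norm(mu, 1.0).logpdf(data).sum() + stats.norm(0.0, 10.0).logpdf(mu))

mode, cov = laplace_approximation(neg_log_post, x0=np.array([0.0]))
print(mode[0], np.sqrt(cov[0, 0]))   # approximate posterior mean and standard deviation
```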

14.
A Bayesian analysis for the superposition of two dependent nonhomogeneous Poisson processes is studied by means of a bivariate Poisson distribution. This distribution yields a new likelihood function that takes into account the correlation between the two nonhomogeneous Poisson processes. A numerical example using a Markov chain Monte Carlo method with data augmentation is considered.
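A standard way to construct a bivariate Poisson distribution is trivariate reduction: X1 = Y1 + Y3 and X2 = Y2 + Y3 with independent Poisson components, so the shared component Y3 induces the correlation and is a natural latent variable for data augmentation. The simulation sketch below, with hypothetical rates, simply checks that the covariance equals the shared rate; it is not the paper's nonhomogeneous model.

```python
import numpy as np
rng = np.random.default_rng(0)

def rbivpois(lam1, lam2, lam3, size, rng):
    """Bivariate Poisson via the common-shock construction:
    X1 = Y1 + Y3, X2 = Y2 + Y3 with independent Poisson Y's, so Cov(X1, X2) = lam3."""
    y1 = rng.poisson(lam1, size)
    y2 = rng.poisson(lam2, size)
    y3 = rng.poisson(lam3, size)   # shared component inducing the correlation
    return y1 + y3, y2 + y3

x1, x2 = rbivpois(2.0, 3.0, 1.5, size=100_000, rng=rng)
print(np.cov(x1, x2)[0, 1])        # approximately 1.5, the shared rate lam3
```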

15.
This paper describes a Bayesian approach to making inference for risk reserve processes with an unknown claim-size distribution. A flexible model based on mixtures of Erlang distributions is proposed to approximate the special features frequently observed in insurance claim sizes, such as long tails and heterogeneity. A Bayesian density estimation approach for the claim sizes is implemented using reversible jump Markov chain Monte Carlo methods. An advantage of the considered mixture model is that it belongs to the class of phase-type distributions, so explicit evaluation of the ruin probabilities is possible. Furthermore, from a statistical point of view, the parametric structure of the Erlang mixture offers some advantages compared with the whole over-parametrized family of phase-type distributions. Given the observed claim arrivals and claim sizes, we show how to estimate the ruin probabilities, as a function of the initial capital, together with predictive intervals that give a measure of the uncertainty in the estimates.
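Although the paper exploits the phase-type structure to evaluate ruin probabilities explicitly, a crude Monte Carlo check is easy to write. The sketch below simulates a Cramer-Lundberg surplus process with Erlang-mixture claim sizes (all parameter values hypothetical) and estimates the finite-horizon ruin probability as a function of the initial capital.

```python
import numpy as np

def ruin_prob(u, lam=1.0, premium=2.0, weights=(0.6, 0.4), shapes=(2, 5),
              rate=2.0, horizon=50.0, n_sim=4000, seed=1):
    """Monte Carlo estimate of the finite-horizon ruin probability for a
    Cramer-Lundberg surplus U(t) = u + premium*t - S(t), where claims arrive in a
    Poisson(lam) process and claim sizes follow a two-component Erlang mixture."""
    rng = np.random.default_rng(seed)
    ruins = 0
    for _ in range(n_sim):
        t, total_claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)            # next claim arrival time
            if t > horizon:
                break
            k = shapes[0] if rng.random() < weights[0] else shapes[1]
            total_claims += rng.gamma(k, 1.0 / rate)   # Erlang(k, rate) claim size
            if u + premium * t - total_claims < 0.0:   # ruin checked at claim epochs
                ruins += 1
                break
    return ruins / n_sim

print([ruin_prob(u0) for u0 in (0.0, 5.0, 10.0)])      # decreasing in initial capital
```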

16.
We analyse a combination of errant count data subject to under-reported counts and inerrant count data to estimate multiple Poisson rates and reporting probabilities of cervical cancer for four European countries. Our analysis uses a Bayesian hierarchical model. Using a simulation study, we demonstrate the efficacy of our new simultaneous inference method and compare the utility of our method with an empirical Bayes approach developed by Fader and Hardie (J. Appl. Statist., 2000).

17.
This paper develops a new class of option pricing models and applies it to options on the Australian S&P200 Index. The class of models generalizes the traditional Black-Scholes framework by accommodating time-varying conditional volatility, skewness and excess kurtosis in the underlying returns process. An important property of these more general pricing models is that their computational requirements are essentially the same as those of the Black-Scholes model, with both methods being based on one-dimensional integrals. Bayesian inferential methods are used to evaluate a range of models nested in the general framework, using observed market option prices. The evaluation is based on posterior parameter distributions, as well as posterior model probabilities. Various fit and predictive measures, plus implied volatility graphs, are also used to rank the alternative models. The empirical results provide evidence that time-varying volatility, leptokurtosis and a small degree of negative skewness are priced in Australian stock market options.
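The "one-dimensional integral" computation referred to above is easiest to see in the Black-Scholes special case, where the call price is the discounted expected payoff integrated against a normal density; the generalized models replace that density but keep the same structure. A minimal sketch with standard textbook formulas and hypothetical inputs, checked against the closed form:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def call_price_integral(s0, k, r, sigma, t):
    """European call price as a one-dimensional integral of the discounted payoff
    against the standard normal density of the risk-neutral return shock."""
    def integrand(z):
        s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
        return max(s_t - k, 0.0) * stats.norm.pdf(z)
    value, _ = quad(integrand, -10.0, 10.0)
    return np.exp(-r * t) * value

def call_price_black_scholes(s0, k, r, sigma, t):
    """Closed-form Black-Scholes price, used here as a check on the integral."""
    d1 = (np.log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s0 * stats.norm.cdf(d1) - k * np.exp(-r * t) * stats.norm.cdf(d2)

print(call_price_integral(100, 105, 0.04, 0.2, 0.5),
      call_price_black_scholes(100, 105, 0.04, 0.2, 0.5))
```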

18.
Spatial generalised linear mixed models are commonly used for modelling non-Gaussian discrete spatial responses. In these models, the spatial correlation structure of the data is modelled by spatial latent variables. Most users are satisfied with a normal distribution for these variables, but in many applications it is unclear whether or not the normality assumption holds. This assumption is relaxed in the present work by using a closed skew normal distribution for the spatial latent variables, which is more flexible and includes the normal and skew normal distributions. The parameter estimates and spatial predictions are calculated using Markov chain Monte Carlo methods. Finally, the performance of the proposed model is assessed via two simulation studies, followed by a case study in which practical aspects are dealt with. The proposed model appears to give a smaller cross-validation mean square error of spatial prediction than the normal prior when modelling the temperature data set.

19.
The authors consider the optimal design of sampling schedules for binary sequence data. They propose an approach which allows a variety of goals to be reflected in the utility function by including deterministic sampling cost, a term related to prediction and, if relevant, a term related to learning about a treatment effect. To this end, they use a nonparametric probability model relying on a minimal number of assumptions. They show how their assumption of partial exchangeability for the binary sequence of data allows the sampling distribution to be written as a mixture of homogeneous Markov chains of order k. The implementation follows the approach of Quintana & Müller (2004), which uses a Dirichlet process prior for the mixture.

20.
This paper develops a Bayesian control chart for the percentiles of the Weibull distribution when both its in-control and out-of-control parameters are unknown. The Bayesian approach improves parameter estimates for the small sample sizes that occur when monitoring rare events, such as in high-reliability applications. The chart monitors the parameters of the Weibull distribution directly, instead of transforming the data as most Weibull-based charts do in order to meet the normality assumption. The chart uses accumulated knowledge resulting from the likelihood of the current sample combined with the information given by both the initial prior and all past samples. The chart is adaptive because its control limits change (e.g. narrow) during Phase I. An example is presented and good average run length properties are demonstrated.
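As a simplified illustration of the Bayesian updating behind such a chart, the sketch below assumes a known Weibull shape parameter, for which an inverse-gamma prior on the transformed scale is conjugate, and produces posterior draws of a chosen percentile from which illustrative limits could be read off. This is not the paper's chart, which treats both parameters as unknown; all values are hypothetical.

```python
import numpy as np
rng = np.random.default_rng(3)

def weibull_percentile_posterior(times, beta, p=0.1, a0=2.0, b0=1.0, n_draws=10_000):
    """Posterior draws of the p-th Weibull percentile with known shape beta.
    With theta = scale**beta, the inverse-gamma prior IG(a0, b0) is conjugate:
    the posterior is IG(a0 + n, b0 + sum(t_i**beta))."""
    times = np.asarray(times, dtype=float)
    a_post = a0 + times.size
    b_post = b0 + np.sum(times**beta)
    theta = 1.0 / rng.gamma(a_post, 1.0 / b_post, size=n_draws)   # inverse-gamma draws
    scale = theta ** (1.0 / beta)
    return scale * (-np.log(1.0 - p)) ** (1.0 / beta)

# Hypothetical Phase-I failure times; limits for the 10th percentile could be read
# off the posterior, e.g. its 2.5% and 97.5% quantiles.
sample = rng.weibull(2.0, size=20) * 5.0          # Weibull(shape=2, scale=5) data
draws = weibull_percentile_posterior(sample, beta=2.0, p=0.1)
print(np.percentile(draws, [2.5, 50, 97.5]))
```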

