Similar Articles
20 similar articles found.
1.
Complex stochastic models, such as individual-based models, are becoming increasingly popular. However, this complexity often means that the likelihood is intractable, which makes parameter estimation difficult. One way to proceed when the complex model is relatively quick to simulate from is approximate Bayesian computation (ABC). The rejection-ABC algorithm is not always efficient, so numerous other algorithms have been proposed. One such method is ABC with Markov chain Monte Carlo (ABC–MCMC). Unfortunately, for some models this method does not perform well, and alternatives have been proposed, including the fsMCMC algorithm (Neal and Huang, in: Scand J Stat 42:378–396, 2015), which explores the space of random inputs as well as the unknown model parameters. In this paper we extend the fsMCMC algorithm and take advantage of the joint parameter and random-input space in order to obtain better mixing of the Markov chain. We also introduce a Gibbs step that conditions on the currently accepted model and allows both the parameters and the random inputs to move conditional on that model. We show empirically that this improves the efficiency of the ABC–MCMC algorithm on a queuing model and on an individual-based model of the group-living bird, the woodhoopoe.
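As a rough orientation for readers unfamiliar with ABC–MCMC, the sketch below shows a plain rejection-kernel ABC–MCMC step in Python; it is not the fsMCMC or Gibbs-on-random-inputs extension proposed in the paper, and the toy normal model, tolerance, and proposal scale are illustrative assumptions.

```python
# Minimal ABC-MCMC sketch: propose theta, simulate pseudo-data, accept only if
# the simulated summary lies within a tolerance of the observed summary.
import numpy as np

def abc_mcmc(observed_summary, simulate, prior_logpdf, theta0,
             n_iter=10_000, step=0.5, tol=1.0, rng=None):
    """Rejection-kernel ABC-MCMC with a Gaussian random-walk proposal."""
    rng = np.random.default_rng(rng)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        log_ratio = prior_logpdf(prop) - prior_logpdf(theta)
        if np.log(rng.uniform()) < log_ratio:
            # Only simulate when the prior ratio alone does not reject.
            sim_summary = simulate(prop, rng)
            if np.linalg.norm(sim_summary - observed_summary) < tol:
                theta = prop
        chain[i] = theta
    return chain

# Toy usage: infer the mean of a normal distribution with known variance.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(2.0, 1.0, size=100)
    obs = np.array([data.mean()])
    sim = lambda th, r: np.array([r.normal(th[0], 1.0, size=100).mean()])
    logprior = lambda th: -0.5 * (th[0] / 10.0) ** 2   # N(0, 10^2) prior
    draws = abc_mcmc(obs, sim, logprior, theta0=[0.0], tol=0.2)
    print(draws[2000:].mean(axis=0))
```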

2.
Due to the escalating growth of big data sets in recent years, new Bayesian Markov chain Monte Carlo (MCMC) parallel computing methods have been developed. These methods partition large data sets into subsets by observations. However, for Bayesian nested hierarchical models, typically only a few parameters are common to the full data set, with most parameters being group specific. Thus, parallel Bayesian MCMC methods that take the structure of the model into account and split the full data set by groups rather than by observations are a more natural approach for analysis. Here, we adapt and extend a recently introduced two-stage Bayesian hierarchical modeling approach and partition complete data sets by groups. In stage 1, the group-specific parameters are estimated independently in parallel. The stage 1 posteriors are used as proposal distributions in stage 2, where the target distribution is the full model. Using three-level and four-level models, we show in both simulation and real data studies that the results of our method agree closely with the full data analysis, with greatly increased MCMC efficiency and greatly reduced computation times. The advantages of our method over existing parallel MCMC computing methods are also described.
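A toy sketch of the two-stage idea, under simplifying assumptions that are mine rather than the authors' (one-way normal model, known within-group and between-group variances): stage 1 fits each group independently under a flat working prior, and stage 2 recycles those stage-1 posteriors as independence proposals inside an MCMC for the full hierarchical model.

```python
# Groups share a common mean mu; group means theta_g are group specific.
import numpy as np

rng = np.random.default_rng(0)
G, n, sigma, tau = 8, 50, 1.0, 0.5          # known sigma and tau (assumption)
true_theta = rng.normal(3.0, tau, G)
y = [rng.normal(t, sigma, n) for t in true_theta]

# Stage 1: independent flat-prior posteriors theta_g ~ N(ybar_g, sigma^2 / n).
ybar = np.array([yi.mean() for yi in y])
s1_sd = sigma / np.sqrt(n)

# Stage 2: Gibbs update for mu, independence MH for each theta_g using the
# stage-1 posterior as the proposal distribution.
def log_prior_theta(theta, mu):             # hierarchical prior N(mu, tau^2)
    return -0.5 * ((theta - mu) / tau) ** 2

theta, mu = ybar.copy(), ybar.mean()
keep_mu = []
for it in range(5000):
    prop = rng.normal(ybar, s1_sd)          # proposal = stage-1 posterior
    # The likelihood terms cancel against the proposal density, leaving only
    # the hierarchical-prior ratio in the acceptance probability.
    log_acc = log_prior_theta(prop, mu) - log_prior_theta(theta, mu)
    accept = np.log(rng.uniform(size=G)) < log_acc
    theta = np.where(accept, prop, theta)
    mu = rng.normal(theta.mean(), tau / np.sqrt(G))   # conjugate, flat prior
    keep_mu.append(mu)
print("posterior mean of mu:", np.mean(keep_mu[1000:]))
```

The cancellation in the acceptance ratio is exactly what makes stage-1 posteriors attractive as proposals: only the hierarchical prior term changes between iterations, so the stage-2 sampler is cheap even when the group-level data are large.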

3.
The integrated nested Laplace approximation (INLA) has established itself as a widely used method for approximate inference on Bayesian hierarchical models that can be represented as latent Gaussian models (LGMs). INLA is based on producing an accurate approximation to the posterior marginal distributions of the parameters in the model, and of other quantities of interest, by using repeated approximations to intermediate distributions and integrals that appear in the computation of the posterior marginals. INLA focuses on models whose latent effects are a Gaussian Markov random field. For this reason, we have explored alternative ways of expanding the number of models that can be fitted using the INLA methodology. In this paper, we present a novel approach that combines INLA and Markov chain Monte Carlo (MCMC). The aim is to consider a wider range of models that can be fitted with INLA only when some of the model parameters have been fixed. We show how new values of these parameters can be drawn from their posterior by using conditional models fitted with INLA and standard MCMC algorithms, such as Metropolis–Hastings. Hence, this extends the use of INLA to fit models that can be expressed as a conditional LGM. This new approach can also be used to build simpler MCMC samplers for complex models, as it allows sampling on only a limited number of the model parameters. We demonstrate how our approach can extend the class of models that could benefit from INLA, and how the R-INLA package eases its implementation. We go through simple examples of this new approach before discussing more advanced applications with data sets taken from the relevant literature. In particular, INLA within MCMC is used to fit models with Laplace priors in a Bayesian Lasso model, to impute missing covariates in linear models, to fit spatial econometric models with complex nonlinear terms in the linear predictor, and to classify data with mixture models. Furthermore, in some of the examples we exploit INLA within MCMC to make joint inference on an ensemble of model parameters.

4.
We investigate simulation methodology for Bayesian inference in Lévy-driven stochastic volatility (SV) models. Typically, Bayesian inference for such models is performed using Markov chain Monte Carlo (MCMC); this is often a challenging task. Sequential Monte Carlo (SMC) samplers are methods that can improve over MCMC; however, there are many user-set parameters to specify. We develop a fully automated SMC algorithm, which substantially improves over the standard MCMC methods in the literature. To illustrate our methodology, we consider a model comprising a Heston model with an independent, additive, variance gamma process in the returns equation. The driving gamma process can capture the stylized behaviour of many financial time series, and a discretized version, fitted in a Bayesian manner, has been found to be very useful for modelling equity data. We demonstrate that it is possible to draw exact inference, in the sense of no time-discretization error, from the Bayesian SV model.

5.
In recent years, zero-inflated count data models, such as zero-inflated Poisson (ZIP) models, have become widely used, as count data with extra zeros are very common in many practical problems. In order to model correlated count data that are either clustered or repeated, and to assess the effects of continuous covariates or time scales in a flexible way, a class of semiparametric mixed-effects models for zero-inflated count data is considered. In this article, we propose a fully Bayesian inference for such models based on a data augmentation scheme that reflects both the random effects of covariates and the mixture structure of the zero-inflated distribution. A computationally efficient MCMC method that combines the Gibbs sampler and the Metropolis–Hastings (M–H) algorithm is implemented to obtain estimates of the model parameters. Finally, a simulation study and a real example are used to illustrate the proposed methodologies.
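The data-augmentation step behind such samplers can be illustrated on the plain ZIP model without covariates or random effects, a simplification of the models in the abstract; the Beta(1,1) and Gamma(1,1) priors below are assumptions chosen for conjugacy.

```python
# Gibbs sampler for a plain ZIP model: for each observed zero, draw a latent
# indicator of whether it is a structural zero or a Poisson zero, then update
# the zero-inflation probability p and the Poisson rate lambda.
import numpy as np

def zip_gibbs(y, n_iter=3000, rng=None):
    rng = np.random.default_rng(rng)
    y = np.asarray(y)
    p, lam = 0.5, max(y.mean(), 0.1)
    keep = np.empty((n_iter, 2))
    for it in range(n_iter):
        # P(structural zero | y_i = 0) = p / (p + (1 - p) * exp(-lambda))
        prob0 = p / (p + (1 - p) * np.exp(-lam))
        z = np.where(y == 0, rng.uniform(size=y.size) < prob0, False)
        # Conjugate updates under Beta(1,1) and Gamma(1,1) priors (assumptions).
        p = rng.beta(1 + z.sum(), 1 + (~z).sum())
        lam = rng.gamma(1 + y[~z].sum(), 1.0 / (1 + (~z).sum()))
        keep[it] = p, lam
    return keep

if __name__ == "__main__":
    r = np.random.default_rng(7)
    y = np.where(r.uniform(size=500) < 0.3, 0, r.poisson(2.0, 500))
    print(zip_gibbs(y)[500:].mean(axis=0))   # should be near the true (0.3, 2.0)
```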

6.
In this paper, point and interval estimation for the parameters of the exponentiated exponential (EE) distribution is studied based on progressive first-failure-censored data. The Bayes estimates are computed under squared error and LINEX loss functions using a Markov chain Monte Carlo (MCMC) algorithm. Also, based on this censoring scheme, approximate confidence intervals for the parameters of the EE distribution are developed. A Monte Carlo simulation study is carried out to compare the performance of the different methods by computing the estimated risks (ERs), as well as Akaike's information criterion (AIC) and the Bayesian information criterion (BIC) of the estimates. Finally, a real data set is introduced and analyzed using the EE and Weibull distributions. A comparison between the two models based on the corresponding Kolmogorov–Smirnov (K–S) test statistic shows that the EE model fits the data as well as the other model. Point and interval estimation of all parameters is then studied for this real data set as an illustrative example.
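Given MCMC draws from the posterior, the two Bayes estimates named in the abstract reduce to simple Monte Carlo averages; the sketch below (with an arbitrary value for the LINEX shape parameter a, an assumption) shows the posterior mean for squared error loss and the estimator -a^{-1} log E[exp(-a*theta)] for LINEX loss.

```python
# Bayes estimates from posterior draws under squared error and LINEX loss.
import numpy as np

def bayes_estimates(draws, a=0.5):
    draws = np.asarray(draws, dtype=float)
    sq_error_est = draws.mean()                        # posterior mean
    linex_est = -np.log(np.mean(np.exp(-a * draws))) / a
    return sq_error_est, linex_est

# e.g. with draws from some posterior:
post = np.random.default_rng(0).gamma(shape=3.0, scale=0.5, size=5000)
print(bayes_estimates(post, a=1.0))
```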

7.
In recent years, dynamical modelling has been provided with a range of breakthrough methods for exact Bayesian inference. However, it is often computationally infeasible to apply exact statistical methodologies to large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein folding modelling. An approximate Bayesian computation (ABC)–MCMC algorithm is suggested to allow inference on the model parameters within reasonable time constraints. The ABC algorithm uses simulations of 'subsamples' from the assumed data-generating model as well as a so-called 'early-rejection' strategy to speed up computations in the ABC-MCMC sampler. Using a considerable number of subsamples does not appear to degrade the quality of the inferential results for the considered applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general and not limited to the exemplified model and data.
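One way to picture the 'early-rejection' and 'subsamples' devices, sketched under my own assumptions rather than the authors' exact scheme: check the prior ratio before simulating anything, then accumulate the discrepancy subsample by subsample so a poor proposal can be abandoned before the full pseudo-data set is generated. This relies on the distance being a non-decreasing sum over subsample blocks.

```python
import numpy as np

def survives_early_rejection(simulate_block, theta, obs_blocks, tol, rng):
    """Build the ABC discrepancy block by block; stop as soon as it exceeds tol."""
    total = 0.0
    for obs in obs_blocks:
        sim = simulate_block(theta, len(obs), rng)
        total += (np.mean(sim) - np.mean(obs)) ** 2   # per-block summary distance
        if total > tol:            # early rejection: skip the remaining blocks
            return False
    return True

# Inside an ABC-MCMC sweep one would first draw the uniform and test the prior
# ratio, and only then call survives_early_rejection() for the proposal.
rng = np.random.default_rng(6)
blocks = np.split(rng.normal(1.0, 1.0, 1000), 10)
sim_block = lambda th, n, r: r.normal(th, 1.0, n)
print(survives_early_rejection(sim_block, 1.0, blocks, tol=0.5, rng=rng),
      survives_early_rejection(sim_block, 3.0, blocks, tol=0.5, rng=rng))
# typically True for the true value, and (quickly) False for a poor one
```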

8.
Considerable progress has been made in applying Markov chain Monte Carlo (MCMC) methods to the analysis of epidemic data. However, this likelihood-based method can be inefficient due to the limited data typically available about an epidemic outbreak. This paper considers an alternative approach to studying epidemic data using approximate Bayesian computation (ABC) methodology. ABC is a simulation-based technique for obtaining an approximate sample from the posterior distribution of the model parameters and, in an epidemic context, is very easy to implement. A new approach to ABC is introduced which generates a set of values from the (approximate) posterior distribution of the parameters during each simulation, rather than a single value. This is based upon coupling simulations with different sets of parameters, and we call the resulting algorithm coupled ABC. The new methodology is used to analyse final size data for epidemics amongst communities partitioned into households. It is shown that, for these epidemic data sets, coupled ABC is more efficient than both ABC and MCMC-ABC.

9.
We compare results for stochastic volatility models in which the underlying volatility process has generalized inverse Gaussian (GIG) or tempered stable marginal laws. We use a continuous-time stochastic volatility model where the volatility follows an Ornstein–Uhlenbeck stochastic differential equation driven by a Lévy process. A model for long-range dependence is also considered, and its merit and practical relevance are discussed. We find that the full GIG marginal distribution, and a special case, the inverse gamma, accurately fit real data. Inference is carried out in a Bayesian framework, with computation using Markov chain Monte Carlo (MCMC). We develop an MCMC algorithm that can be used for a general marginal model.

10.
Approximate Bayesian computation (ABC) is a popular technique for analysing data from complex models whose likelihood function is intractable. It uses simulation from the model to approximate the likelihood, and this approximate likelihood is then used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to the consistency and asymptotic normality results of standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural way to implement our likelihood-based ABC procedures.
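A minimal version of the estimator being analysed, under assumptions of my own (Gaussian kernel, fixed bandwidth h, and grid search in place of a proper optimiser or the SMC implementation mentioned at the end): estimate the ABC likelihood at each parameter value by averaging the kernel over simulated summaries, then maximise.

```python
# Maximum approximate-likelihood estimation via simulation.
import numpy as np

def abc_loglik(theta, s_obs, simulate, M=200, h=0.1, rng=None):
    """Kernel estimate of the ABC log-likelihood at theta from M simulations."""
    rng = np.random.default_rng(rng)
    sims = np.array([simulate(theta, rng) for _ in range(M)])
    kernel = np.exp(-0.5 * ((sims - s_obs) / h) ** 2)
    return np.log(kernel.mean() + 1e-300)

rng = np.random.default_rng(2)
data = rng.normal(1.5, 1.0, 200)
s_obs = data.mean()
simulate = lambda th, r: r.normal(th, 1.0, 200).mean()
grid = np.linspace(0.0, 3.0, 61)
ll = [abc_loglik(th, s_obs, simulate, rng=rng) for th in grid]
print("approximate MLE:", grid[int(np.argmax(ll))])
```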

11.
In this paper, we study Bayesian inference for regression models under the assumption that the independent additive errors follow a normal, Student-t, slash, contaminated normal, Laplace, or symmetric hyperbolic distribution, where both the location and dispersion parameters of the response distribution include nonparametric additive components approximated by B-splines. This class of models provides a rich set of symmetric distributions for the model error; some of these distributions have heavier or lighter tails than the normal, as well as different levels of kurtosis. In order to draw samples from the posterior distribution of the parameters of interest, we propose an efficient Markov chain Monte Carlo (MCMC) algorithm that combines Gibbs sampler and Metropolis–Hastings steps. The performance of the proposed MCMC algorithm is assessed through simulation experiments, and we apply the proposed methodology to a real data set. The methodology is implemented in the R package BayesGESM using the function gesm().

12.
In this paper, we discuss fully Bayesian quantile inference for longitudinal data models with random effects using the Markov chain Monte Carlo (MCMC) method. Under the assumption that the error term follows an asymmetric Laplace distribution, we establish a hierarchical Bayesian model and obtain the posterior distribution of the unknown parameters at the τ-th quantile level. We overcome the current computational limitations using two approaches: the general MCMC technique with the Metropolis–Hastings algorithm, and Gibbs sampling from the full conditional distributions. These two methods outperform traditional frequentist methods under a wide array of simulated data models and are flexible enough to easily accommodate changes in the number of random effects and in their assumed distribution. We apply the Gibbs sampling method to analyse mouse growth data, and some conclusions that differ from those in the literature are obtained.
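The device that makes this work is the asymmetric Laplace working likelihood: its log-density is, up to an additive constant, minus the usual quantile check loss, so the working posterior targets the τ-th conditional quantile. A small sketch, location-only and with the scale fixed to 1 as an assumption:

```python
# Link between the asymmetric Laplace density and the quantile check loss.
import numpy as np

def check_loss(u, tau):
    return u * (tau - (u < 0).astype(float))

def ald_loglik(y, mu, tau, sigma=1.0):
    u = (y - mu) / sigma
    return np.log(tau * (1 - tau) / sigma) - check_loss(u, tau)

y = np.random.default_rng(3).standard_normal(10_000)
# Up to the constant n*log(tau*(1-tau)), -sum(ald_loglik) equals the check loss:
print(-(ald_loglik(y, 0.5, 0.9).sum()) + len(y) * np.log(0.9 * 0.1),
      check_loss(y - 0.5, 0.9).sum())                      # equal

# The location minimising the summed check loss is the tau-th sample quantile:
grid = np.linspace(-3, 3, 601)
loss = [check_loss(y - m, 0.9).sum() for m in grid]
print(grid[int(np.argmin(loss))], np.quantile(y, 0.9))     # roughly equal
```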

13.
Recently, mixture distributions have become increasingly popular in many scientific fields. Statistical computation and analysis of mixture models, however, are extremely complex due to the large number of parameters involved. Both EM algorithms for likelihood inference and MCMC procedures for Bayesian analysis have various difficulties in dealing with mixtures with an unknown number of components. In this paper, we propose a direct sampling approach to the computation of Bayesian finite mixture models with a varying number of components. This approach requires only knowledge of the density function up to a multiplicative constant. It is easy to implement, numerically efficient, and very practical in real applications. A simulation study shows that it performs quite satisfactorily on relatively high-dimensional distributions. A well-known genetic data set is used to demonstrate the simplicity of this method and its power for the computation of high-dimensional Bayesian mixture models.

14.
Markov chain Monte Carlo (MCMC) methods provide an important means of simulating from almost any probability density. To approximate non-standard discrete distributions, the equation-solving MCMC estimator was developed as an alternative to the classical frequency estimator; the simulation scheme used is the Metropolis–Hastings (M–H) algorithm. Recently, this estimator has been extended to the specific context of the two-step Metropolis–Hastings with delayed rejection (MHDR) algorithm, which allows a considerable reduction in asymptotic variance. In this paper, we propose an adaptation of the equation-solving estimator to the case of a general n-step MHDR sampler, with the aim of further improving precision. An application to a Bayesian hypothesis testing problem shows the high accuracy of the equation-solving estimator based on an MHDR algorithm with more than two stages.

15.
We consider exact and approximate Bayesian computation in the presence of latent variables or missing data. Specifically, we explore the application of a posterior predictive distribution formula derived in Sweeting and Kharroubi (2003), which is a particular form of Laplace approximation, both as an importance function and as a proposal distribution. We show that this formula provides a stable importance function for use within poor man's data augmentation schemes, and that it can also be used as a proposal distribution within a Metropolis–Hastings algorithm for models that are not analytically tractable. We illustrate both uses in the case of a censored regression model and a normal hierarchical model, with both normal and Student-t distributed random effects. Although the predictive distribution formula is motivated by regular asymptotic theory, it is not necessary that the likelihood have a closed form or possess a local maximum.

16.
For the balanced variance component model, when inference concerning the intraclass correlation coefficient is of interest, Bayesian analysis is often appropriate. However, the question of choosing an appropriate prior remains. In this paper, we consider testing of the intraclass correlation coefficient under a default prior specification. Reference priors in the sense of Berger and Bernardo (1992) are developed and used to obtain the intrinsic Bayes factor (Berger and Pericchi, 1996) for the nested models. Influence diagnostics using intrinsic Bayes factors are also developed. Finally, a simulated data set is provided to illustrate the proposed methodology, with the simulations based on the derived computational formulas. To overcome the difficulties of the Bayesian computation, MCMC methods such as the Gibbs sampler and the Metropolis–Hastings algorithm are employed.

17.
This paper examines strategies for simulating exactly from large Gaussian linear models conditional on some Gaussian observations. Local computation strategies based on the conditional independence structure of the model are developed in order to reduce the storage and computational costs. Application of these algorithms to simulation from nested hierarchical linear models is considered, and the construction of efficient MCMC schemes for Bayesian inference in high-dimensional linear models is outlined.
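The basic building block behind such schemes, written here with dense linear algebra for clarity (the paper's contribution is to organise these computations locally using the conditional independence structure so that they scale), is exact simulation of a Gaussian vector x conditional on a linear Gaussian observation y = Hx + noise. The jitter term and toy dimensions below are assumptions.

```python
# Exact draw from x | y for a jointly Gaussian (x, y) pair.
import numpy as np

def sample_conditional_gaussian(mu, Sigma, H, y, R, rng):
    """Draw x | y where x ~ N(mu, Sigma) and y | x ~ N(Hx, R)."""
    S = H @ Sigma @ H.T + R                      # marginal covariance of y
    K = np.linalg.solve(S, H @ Sigma).T          # gain Sigma H' S^{-1}
    cond_mean = mu + K @ (y - H @ mu)
    cond_cov = Sigma - K @ H @ Sigma
    L = np.linalg.cholesky(cond_cov + 1e-10 * np.eye(len(mu)))  # jitter for stability
    return cond_mean + L @ rng.standard_normal(len(mu))

rng = np.random.default_rng(4)
mu, Sigma = np.zeros(3), np.eye(3) + 0.5
H, R = np.array([[1.0, 1.0, 0.0]]), np.array([[0.1]])
print(sample_conditional_gaussian(mu, Sigma, H, np.array([2.0]), R, rng))
```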

18.
Data augmentation is required for the implementation of many Markov chain Monte Carlo (MCMC) algorithms. The inclusion of augmented data can often lead to conditional distributions from well-known probability families for some of the parameters in the model. In such cases, collapsing (integrating out parameters) has been shown to improve the performance of MCMC algorithms. We show how integrating out the infection rate parameter in epidemic models leads to efficient MCMC algorithms for two very different epidemic scenarios: final outcome data from a multitype SIR epidemic, and longitudinal data from a spatial SI epidemic. The resulting MCMC algorithms give fresh insight into real-life epidemic data sets.
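In miniature, the collapsing step looks like this (with an assumed Gamma prior, not necessarily the authors' choice): an epidemic likelihood typically depends on the infection rate β only through a term of the form β^{n_inf} exp(−βA), so β integrates out in closed form and the sampler never has to update it.

```python
# Closed-form marginal after integrating out the infection rate beta.
import numpy as np
from scipy.special import gammaln

def log_marginal_infection_rate(n_inf, A, a=1.0, b=1.0):
    """log of the integral of beta^n_inf * exp(-beta*A) against a Gamma(a, rate b) prior."""
    return (a * np.log(b) - gammaln(a)
            + gammaln(a + n_inf) - (a + n_inf) * np.log(b + A))

# The collapsed MCMC then only updates the remaining parameters and augmented
# data, plugging this marginal in place of the beta-dependent likelihood term.
print(log_marginal_infection_rate(n_inf=12, A=34.5))
```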

19.
Markov chain Monte Carlo (MCMC) algorithms have revolutionized Bayesian practice. In their simplest form (i.e., when parameters are updated one at a time), however, they are often slow to converge when applied to high-dimensional statistical models. A remedy for this problem is to block the parameters into groups, which are then updated simultaneously using either a Gibbs or a Metropolis–Hastings step. In this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in non-Gaussian mixed models, as well as in a class of longitudinal models for binary response data. We illustrate the approaches in detail with three real-data examples.
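A toy illustration of why blocking matters, separate from the paper's mixed-model construction: for a strongly correlated bivariate normal target, one-at-a-time Gibbs updates creep along the ridge, whereas updating the pair as a single block yields essentially independent draws.

```python
# Single-site versus blocked updates for a correlated bivariate normal.
import numpy as np

rng = np.random.default_rng(5)
rho, n_iter = 0.99, 20_000
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

# Single-site Gibbs: x1 | x2 ~ N(rho*x2, 1 - rho^2) and vice versa.
x = np.zeros(2)
single = np.empty((n_iter, 2))
for i in range(n_iter):
    x[0] = rng.normal(rho * x[1], np.sqrt(1 - rho**2))
    x[1] = rng.normal(rho * x[0], np.sqrt(1 - rho**2))
    single[i] = x

# Blocked update: draw (x1, x2) jointly, giving i.i.d. samples from the target.
blocked = rng.standard_normal((n_iter, 2)) @ L.T

def lag1_autocorr(z):
    z = z - z.mean()
    return float(np.sum(z[1:] * z[:-1]) / np.sum(z * z))

print("lag-1 autocorrelation, single-site:", lag1_autocorr(single[:, 0]))
print("lag-1 autocorrelation, blocked:    ", lag1_autocorr(blocked[:, 0]))
```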

20.
We analyze the computational efficiency of approximate Bayesian computation (ABC), which approximates a likelihood function by drawing pseudo-samples from the associated model. For the rejection sampling version of ABC, it is known that multiple pseudo-samples cannot substantially increase (and can substantially decrease) the efficiency of the algorithm compared to employing a high-variance estimate based on a single pseudo-sample. We show that this conclusion also holds for a Markov chain Monte Carlo version of ABC, implying that it is unnecessary to tune the number of pseudo-samples used in ABC-MCMC. This conclusion stands in contrast to particle MCMC methods, for which increasing the number of particles can provide large gains in computational efficiency.
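The quantity under study can be made concrete with a pseudo-marginal ABC-MCMC sketch whose acceptance ratio uses a likelihood estimate built from M pseudo-samples (the uniform-kernel estimate, step size, and tolerance below are illustrative assumptions); the paper's conclusion is that increasing M buys little once the extra simulation cost per iteration is counted.

```python
# Pseudo-marginal ABC-MCMC with M pseudo-samples per likelihood estimate.
import numpy as np

def abc_mcmc_m(obs_summary, simulate, prior_logpdf, theta0, M=1,
               n_iter=5000, step=0.5, tol=0.3, rng=None):
    rng = np.random.default_rng(rng)
    theta = float(theta0)

    def lik_hat(th):   # fraction of the M pseudo-summaries within tolerance
        hits = sum(abs(simulate(th, rng) - obs_summary) < tol for _ in range(M))
        return hits / M

    lik = lik_hat(theta)          # retained, as required by the pseudo-marginal scheme
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal()
        lik_prop = lik_hat(prop)
        log_num = prior_logpdf(prop) + np.log(lik_prop + 1e-300)
        log_den = prior_logpdf(theta) + np.log(lik + 1e-300)
        if np.log(rng.uniform()) < log_num - log_den:
            theta, lik = prop, lik_prop
        chain[i] = theta
    return chain

# e.g. chain = abc_mcmc_m(y.mean(), lambda th, r: r.normal(th, 1, 100).mean(),
#                         logprior, 0.0, M=5)
```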

