Similar Documents
20 similar documents retrieved.
1.
Park, Joonha; Atchadé, Yves. Statistics and Computing (2020) 30(5): 1325–1345

We explore a general framework in Markov chain Monte Carlo (MCMC) sampling in which sequential proposals are tried as candidates for the next state of the Markov chain. This sequential-proposal framework can be applied to various existing MCMC methods, including Metropolis–Hastings algorithms using random proposals and methods that use deterministic proposals, such as Hamiltonian Monte Carlo (HMC) or the bouncy particle sampler. Sequential-proposal MCMC methods construct the same Markov chains as those constructed by the delayed rejection method under certain circumstances. In the context of HMC, the sequential-proposal approach has previously been proposed as extra chance generalized hybrid Monte Carlo (XCGHMC). We develop two novel methods in which the trajectories leading to proposals in HMC are automatically tuned to avoid doubling back, as in the No-U-Turn Sampler (NUTS). The numerical efficiency of these new methods compares favorably with that of NUTS. We additionally show that the sequential-proposal bouncy particle sampler enables the constructed Markov chain to pass through regions of low target density and thus facilitates better mixing when the target density is multimodal.
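
As a rough illustration of the sequential-proposal idea, the sketch below applies it to a random-walk Metropolis kernel: one uniform is drawn per iteration and up to max_tries chained symmetric proposals are tested against it, the first one whose target ratio clears that bar being accepted. This is only a minimal sketch of the symmetric-kernel special case, not the paper's general framework; the function name, the Gaussian step proposal, the bimodal toy target and all tuning values are illustrative assumptions.

    import numpy as np

    def seq_proposal_rwm(logpi, x0, n_iter=5000, step=1.0, max_tries=5, rng=None):
        # Sequential-proposal random-walk Metropolis: one uniform per iteration,
        # up to max_tries chained symmetric proposals; with a symmetric kernel the
        # acceptance check reduces to comparing pi(y)/pi(x) against that uniform.
        rng = np.random.default_rng() if rng is None else rng
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        lp_x = logpi(x)
        chain = np.empty((n_iter, x.size))
        for i in range(n_iter):
            log_u = np.log(rng.uniform())
            y = x
            for _ in range(max_tries):
                y = y + step * rng.standard_normal(x.size)   # chain the proposals
                lp_y = logpi(y)
                if lp_y - lp_x > log_u:                      # first success is accepted
                    x, lp_x = y, lp_y
                    break
            chain[i] = x
        return chain

    # toy usage: a well-separated bimodal target in one dimension
    logpi = lambda z: np.logaddexp(-0.5 * np.sum((z - 3.0) ** 2),
                                   -0.5 * np.sum((z + 3.0) ** 2))
    samples = seq_proposal_rwm(logpi, x0=[0.0], step=1.5, max_tries=10)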


2.
We propose a density-tempered marginalized sequential Monte Carlo (SMC) sampler, a new class of samplers for full Bayesian inference of general state-space models. The dynamic states are approximately marginalized out using a particle filter, and the parameters are sampled via a sequential Monte Carlo sampler over a density-tempered bridge between the prior and the posterior. Our approach delivers exact draws from the joint posterior of the parameters and the latent states for any given number of state particles and is thus easily parallelizable in implementation. We also build into the proposed method a device that can automatically select a suitable number of state particles. Since the method incorporates sample information in a smooth fashion, it delivers good performance in the presence of outliers. We check the performance of the density-tempered SMC algorithm using simulated data based on a linear Gaussian state-space model with and without misspecification. We also apply it to real stock prices using a GARCH-type model with microstructure noise.
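
To make the tempering bridge concrete, here is a minimal SMC sampler over a likelihood-tempered sequence of distributions for a toy model with an analytic likelihood; the paper's method instead marginalizes the latent states with a particle filter inside this loop and selects temperatures adaptively. The model y_i ~ N(theta, 1) with a N(0, prior_sd^2) prior, the fixed temperature ladder, the random-walk move kernel and all names are simplifying assumptions.

    import numpy as np

    def tempered_smc(y, n_part=1000, n_temps=20, prior_sd=10.0, rng=None):
        # SMC sampler over a density-tempered bridge: reweight, resample, move.
        rng = np.random.default_rng() if rng is None else rng
        def loglik(th): return -0.5 * ((y[None, :] - th[:, None]) ** 2).sum(axis=1)
        def logprior(th): return -0.5 * (th / prior_sd) ** 2
        th = rng.normal(0.0, prior_sd, n_part)                   # draw from the prior
        gammas = np.linspace(0.0, 1.0, n_temps + 1)
        for g_prev, g in zip(gammas[:-1], gammas[1:]):
            logw = (g - g_prev) * loglik(th)                     # incremental weights
            w = np.exp(logw - logw.max()); w /= w.sum()
            th = th[rng.choice(n_part, n_part, p=w)]             # multinomial resampling
            step = 2.0 * th.std() + 1e-8
            for _ in range(3):                                   # random-walk MH moves
                prop = th + step * rng.standard_normal(n_part)
                log_acc = (logprior(prop) + g * loglik(prop)) - (logprior(th) + g * loglik(th))
                accept = np.log(rng.uniform(size=n_part)) < log_acc
                th = np.where(accept, prop, th)
            # (in practice the next temperature is chosen adaptively, e.g. via the ESS)
        return th

    # toy usage with synthetic data
    y = np.random.default_rng(1).normal(2.0, 1.0, size=50)
    posterior_draws = tempered_smc(y)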

3.
An automated (Markov chain) Monte Carlo EM algorithm
We present an automated Monte Carlo EM (MCEM) algorithm which efficiently assesses Monte Carlo error in the presence of dependent Monte Carlo, particularly Markov chain Monte Carlo, E-step samples and chooses an appropriate Monte Carlo sample size to minimize this Monte Carlo error with respect to progressive EM step estimates. Monte Carlo error is gauged through an application of the central limit theorem during renewal periods of the MCMC sampler used in the E-step. The resulting normal approximation allows us to construct a rigorous and adaptive rule for updating the Monte Carlo sample size at each iteration of the MCEM algorithm. We illustrate our automated routine and compare its performance with competing MCEM algorithms in an analysis of a data set fit by a generalized linear mixed model.
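
The following toy MCEM sketch conveys the flavour of an adaptive Monte Carlo sample-size rule, using exponential data right-censored at c (a deliberately artificial example, since this E-step is actually available in closed form). The rule used here, growing m whenever the EM increment is swamped by Monte Carlo error, is a crude stand-in for the paper's CLT/renewal-based criterion; all names and tuning constants are assumptions.

    import numpy as np

    def mcem_censored_exponential(obs, cens_count, c, m0=50, max_iter=200, rng=None):
        # Toy MCEM for exponential data right-censored at c. The E-step imputes the
        # censored values by Monte Carlo; m grows when the EM step is lost in MC noise.
        rng = np.random.default_rng() if rng is None else rng
        obs = np.asarray(obs, dtype=float)
        n = obs.size + cens_count
        lam, m = 1.0 / obs.mean(), m0
        for _ in range(max_iter):
            imput = c + rng.exponential(1.0 / lam, size=(m, cens_count))  # E-step draws
            lam_reps = n / (obs.sum() + imput.sum(axis=1))      # M-step per replication
            lam_new = n / (obs.sum() + imput.sum() / m)         # M-step on averaged E-step
            mc_se = lam_reps.std(ddof=1) / np.sqrt(m)
            if abs(lam_new - lam) < 2.0 * mc_se:                # increment within MC error:
                m = int(1.5 * m)                                # increase the MC sample size
            lam = lam_new
        return lam, m

    # toy usage: exponential sample censored at c = 2.0
    rng = np.random.default_rng(0)
    full = rng.exponential(1.0, size=50)
    c = 2.0
    lam_hat, m_final = mcem_censored_exponential(full[full <= c], (full > c).sum(), c)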

4.
Heng Lian. Statistics (2013) 47(6): 777–785
Improving the efficiency of the importance sampler is at the centre of research on Monte Carlo methods. While the adaptive approach is usually not straightforward within the Markov chain Monte Carlo framework, its counterpart in importance sampling can be justified and validated easily. We propose an iterative adaptation method for learning the proposal distribution of an importance sampler based on stochastic approximation. The stochastic approximation method can recruit general iterative optimization techniques such as the minorization–maximization algorithm. The effectiveness of the approach in optimizing the Kullback divergence between the proposal distribution and the target is demonstrated using several examples.
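
A simple way to see the adaptation principle: within a Gaussian proposal family, the Kullback divergence from the target to the proposal is minimized by matching the target's mean and covariance, which can be estimated with self-normalized importance weights and then iterated. The sketch below does exactly that; it is a moment-matching stand-in rather than the paper's stochastic-approximation/minorization–maximization scheme, and the function name, toy target and settings are assumptions.

    import numpy as np

    def adapt_gaussian_proposal(logpi, dim, n_iters=20, n_samples=2000, rng=None):
        # Adaptive importance sampling: refit a Gaussian proposal by
        # importance-weighted moment matching at each iteration.
        rng = np.random.default_rng() if rng is None else rng
        mu, cov = np.zeros(dim), 4.0 * np.eye(dim)        # broad initial proposal
        for _ in range(n_iters):
            x = rng.multivariate_normal(mu, cov, size=n_samples)
            d = x - mu
            logq = -0.5 * np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d) \
                   - 0.5 * np.linalg.slogdet(2 * np.pi * cov)[1]
            logw = np.array([logpi(xi) for xi in x]) - logq
            w = np.exp(logw - logw.max()); w /= w.sum()   # self-normalised weights
            mu = w @ x                                    # weighted mean
            xc = x - mu
            cov = (w[:, None] * xc).T @ xc + 1e-6 * np.eye(dim)
        return mu, cov

    # toy usage: a strongly correlated two-dimensional Gaussian target
    Sigma = np.array([[1.0, 0.95], [0.95, 1.0]])
    Sinv = np.linalg.inv(Sigma)
    mu_hat, cov_hat = adapt_gaussian_proposal(lambda z: -0.5 * z @ Sinv @ z, dim=2)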

5.
A Monte Carlo (MC) method is suggested for calculating an upper prediction limit for the mean of a future sample of small size N from a lognormal distribution. This is done by obtaining a Monte Carlo estimator of the limit using the future sample generated from the Gibbs sampler. For the Gibbs sampler, a full conditional posterior predictive distribution of each observation in the future sample is derived. The MC method is straightforward to specify distributionally and to implement computationally, with output readily adapted for the required inference summaries. A practical application of the method is described in an example.
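
A compact version of the computation being described might look as follows, with the simplification that the posterior of (mu, sigma^2) on the log scale is drawn directly from its conjugate form under a noninformative prior rather than via the Gibbs sampler over the predictive full conditionals used in the paper; the function name, the prior choice and the default settings are assumptions.

    import numpy as np

    def lognormal_mean_upl(x, N, level=0.95, n_draws=20000, rng=None):
        # Monte Carlo upper prediction limit for the mean of a future sample of
        # size N from a lognormal: draw (mu, sigma^2) from the posterior on the
        # log scale, simulate the future sample, and take an upper quantile of
        # the simulated future means.
        rng = np.random.default_rng() if rng is None else rng
        y = np.log(np.asarray(x, dtype=float)); n = y.size
        ybar, s2 = y.mean(), y.var(ddof=1)
        means = np.empty(n_draws)
        for i in range(n_draws):
            sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)          # posterior sigma^2
            mu = rng.normal(ybar, np.sqrt(sigma2 / n))            # posterior mu
            means[i] = rng.lognormal(mu, np.sqrt(sigma2), size=N).mean()
        return np.quantile(means, level)

    # toy usage: 95% upper prediction limit for the mean of a future sample of size 4
    data = np.random.default_rng(0).lognormal(0.0, 0.5, size=15)
    upl = lognormal_mean_upl(data, N=4, level=0.95)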

6.
Motivated by the need to sequentially design experiments for the collection of data in batches or blocks, a new pseudo-marginal sequential Monte Carlo algorithm is proposed for random effects models where the likelihood is not analytic and has to be approximated. This new algorithm is an extension of the idealised sequential Monte Carlo algorithm in which we propose to unbiasedly approximate the likelihood, yielding an efficient exact-approximate algorithm to perform inference and make decisions within Bayesian sequential design. We propose four approaches to unbiasedly approximate the likelihood: standard Monte Carlo integration, randomised quasi-Monte Carlo integration, Laplace importance sampling, and a combination of Laplace importance sampling and randomised quasi-Monte Carlo. These four methods are compared in terms of the estimates of the likelihood weights and in the selection of the optimal sequential designs in an important pharmacological study related to the treatment of critically ill patients. As the approaches considered to approximate the likelihood can be computationally expensive, we exploit parallel computational architectures to ensure designs are derived in a timely manner.

7.
There are two conceptually distinct tasks in Markov chain Monte Carlo (MCMC): a sampler is designed for simulating a Markov chain and then an estimator is constructed on the Markov chain for computing integrals and expectations. In this article, we aim to address the second task by extending the likelihood approach of Kong et al. for Monte Carlo integration. We consider a general Markov chain scheme and use partial likelihood for estimation. Basically, the Markov chain scheme is treated as a random design and a stratified estimator is defined for the baseline measure. Further, we propose useful techniques including subsampling, regulation, and amplification for achieving overall computational efficiency. Finally, we introduce approximate variance estimators for the point estimators. The method can yield substantially improved accuracy compared with Chib's estimator and the crude Monte Carlo estimator, as illustrated with three examples.

8.
The reversible jump Markov chain Monte Carlo (MCMC) sampler (Green, Biometrika 82:711–732, 1995) has become an invaluable device for Bayesian practitioners. However, the primary difficulty with the sampler lies in the efficient construction of transitions between competing models of possibly differing dimensionality and interpretation. We propose the use of a marginal density estimator to construct between-model proposal distributions. This provides both a step towards black-box simulation for reversible jump samplers and a tool to examine the utility of common between-model mapping strategies. We compare the performance of our approach to well-established alternatives in both time series and mixture model examples.

9.
We introduce a new class of interacting Markov chain Monte Carlo (MCMC) algorithms designed to increase the efficiency of a modified multiple-try Metropolis (MTM) sampler. The extension with respect to the existing MCMC literature is twofold. First, the proposed sampler extends the basic MTM algorithm by allowing for different proposal distributions in the multiple-try generation step. Second, we exploit the different proposal distributions to naturally introduce an interacting MTM mechanism (IMTM) that expands the class of population Monte Carlo methods and builds connections with the rapidly expanding world of adaptive MCMC. We show the validity of the algorithm and discuss the choice of the selection weights and of the different proposals. The numerical studies show that the interaction mechanism allows the IMTM to efficiently explore the state space, leading to higher efficiency than other competing algorithms.
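
For reference, the basic multiple-try Metropolis kernel that the IMTM construction extends can be sketched as below, using a single symmetric Gaussian proposal for all tries; the paper's point is precisely to allow different proposals per try and an interaction across chains, neither of which is shown here. With a symmetric kernel the weight choice w(y) = pi(y) is valid; the function name and defaults are assumptions.

    import numpy as np

    def mtm_step(logpi, x, step=1.0, k=5, rng=None):
        # One multiple-try Metropolis step with a symmetric Gaussian proposal.
        rng = np.random.default_rng() if rng is None else rng
        y = x + step * rng.standard_normal((k, x.size))           # k candidate tries
        logw_y = np.array([logpi(yi) for yi in y])
        p = np.exp(logw_y - logw_y.max()); p /= p.sum()
        j = rng.choice(k, p=p)                                    # select one candidate
        xref = y[j] + step * rng.standard_normal((k - 1, x.size))
        logw_x = np.append([logpi(xi) for xi in xref], logpi(x))  # reference set includes x
        log_acc = np.logaddexp.reduce(logw_y) - np.logaddexp.reduce(logw_x)
        return y[j] if np.log(rng.uniform()) < log_acc else x

    # toy usage: standard bivariate normal target
    rng = np.random.default_rng(0)
    x, draws = np.zeros(2), []
    for _ in range(2000):
        x = mtm_step(lambda z: -0.5 * np.sum(z ** 2), x, k=5, rng=rng)
        draws.append(x)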

10.
We develop a sequential Monte Carlo algorithm for the infinite hidden Markov model (iHMM) that allows us to perform on-line inferences on both system states and structural (static) parameters. The algorithm described here provides a natural alternative to Markov chain Monte Carlo samplers previously developed for the iHMM, and is particularly helpful in applications where data is collected sequentially and model parameters need to be continuously updated. We illustrate our approach in the context of both a simulation study and a financial application.

11.
We develop Bayesian procedures to make inference about parameters of a statistical design with autocorrelated error terms. Modelling treatment effects can be complex in the presence of other factors such as time, for example in longitudinal data. In this paper, Markov chain Monte Carlo (MCMC) methods, namely the Metropolis–Hastings algorithm and the Gibbs sampler, are used to facilitate the Bayesian analysis of real-life data when the error structure can be expressed as an autoregressive model of order p. We illustrate our analysis with real data.

12.
We derive a novel non-reversible, continuous-time Markov chain Monte Carlo sampler, called the Coordinate Sampler, based on a piecewise deterministic Markov process; it is a variant of the Zigzag sampler of Bierkens et al. (Ann Stat 47(3):1288–1320, 2019). In addition to providing a theoretical validation for this new simulation algorithm, we show that the Markov chain it induces is geometrically ergodic for distributions whose tails decay at least as fast as an exponential distribution and at most as fast as a Gaussian distribution. Several numerical examples highlight that our Coordinate Sampler is more efficient than the Zigzag sampler in terms of effective sample size.
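
Piecewise deterministic samplers are easiest to see in one dimension, where the event times of the Zigzag process targeting a standard normal are available in closed form; the sketch below simulates that event skeleton. It illustrates the Zigzag baseline only, not the Coordinate Sampler's velocity mechanism, and the function name, target and defaults are assumptions.

    import numpy as np

    def zigzag_1d_gaussian(T=1000.0, x0=0.0, rng=None):
        # 1-D Zigzag process targeting the standard normal. The event rate is
        # lambda(x, v) = max(0, v * x); the next event time solves the integrated
        # rate equal to an Exp(1) draw, which has a closed form here.
        rng = np.random.default_rng() if rng is None else rng
        x, v, t = float(x0), 1.0, 0.0
        skeleton = [(t, x, v)]
        while t < T:
            a = v * x                                        # rate offset at current state
            e = rng.exponential()
            tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)   # closed-form event time
            x, t = x + v * tau, t + tau                      # deterministic motion
            v = -v                                           # flip the velocity at the event
            skeleton.append((t, x, v))
        return skeleton
        # expectations are computed by integrating along the piecewise-linear
        # trajectory between events, not by treating skeleton points as draws

    skel = zigzag_1d_gaussian(T=500.0)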

13.
We demonstrate the use of auxiliary (or latent) variables for sampling non-standard densities that arise in the context of the Bayesian analysis of non-conjugate and hierarchical models by using a Gibbs sampler. Their strategic use can result in a Gibbs sampler having easily sampled full conditionals. We propose such a procedure to simplify or speed up the Markov chain Monte Carlo algorithm. The strength of this approach lies in its generality and its ease of implementation. The aim of the paper, therefore, is to provide an alternative sampling algorithm to rejection-based methods and other sampling approaches such as the Metropolis–Hastings algorithm.
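
As an illustration of the device (not of the paper's specific models), consider the density pi(x) proportional to exp(-x^2/2)/(1+x^2): introducing an auxiliary u with joint density proportional to exp(-x^2/2) 1{u < 1/(1+x^2)} gives two easy full conditionals, sampled below. The example, the use of scipy for the truncated-normal draw, and all names are assumptions.

    import numpy as np
    from scipy.stats import norm

    def aux_gibbs(n_iter=5000, x0=0.0, rng=None):
        # Auxiliary-variable Gibbs sampler for pi(x) proportional to
        # exp(-x^2/2) / (1 + x^2): u | x is uniform on (0, 1/(1+x^2)) and
        # x | u is a standard normal truncated to |x| < sqrt(1/u - 1).
        rng = np.random.default_rng() if rng is None else rng
        x = float(x0)
        out = np.empty(n_iter)
        for i in range(n_iter):
            u = rng.uniform(0.0, 1.0 / (1.0 + x * x))           # u | x
            b = np.sqrt(1.0 / u - 1.0)                          # slice boundary
            w = rng.uniform(norm.cdf(-b), norm.cdf(b))          # x | u via inverse CDF
            x = norm.ppf(w)
            out[i] = x
        return out

    samples = aux_gibbs()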

14.
We propose a multivariate tobit (MT) latent variable model, defined by a confirmatory factor analysis with covariates, for analysing mixed-type data that are inherently non-negative and sometimes have a large proportion of zeros. Some useful MT models are special cases of our proposed model. To obtain maximum likelihood estimates, we use the expectation–maximization (EM) algorithm, with its E-step via the Gibbs sampler made feasible by Monte Carlo simulation and its M-step greatly simplified by a sequence of conditional maximizations. Standard errors are evaluated by inverting a Monte Carlo approximation of the information matrix using Louis's method. The methodology is illustrated with a simulation study and a real example.

15.
The particle Gibbs sampler is a systematic way of using a particle filter within Markov chain Monte Carlo. This results in an off-the-shelf Markov kernel on the space of state trajectories, which can be used to simulate from the full joint smoothing distribution for a state-space model in a Markov chain Monte Carlo scheme. We show that the particle Gibbs Markov kernel is uniformly ergodic under rather general assumptions, which we carefully review and discuss. In particular, we provide an explicit rate of convergence, which reveals that (i) for a fixed number of data points, the convergence rate can be made arbitrarily good by increasing the number of particles, and (ii) under general mixing assumptions, the convergence rate can be kept constant by increasing the number of particles superlinearly with the number of observations. We illustrate the applicability of our result by studying in detail a common stochastic volatility model with a non-compact state space.
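
A minimal conditional SMC (particle Gibbs) update for a toy linear Gaussian state-space model is sketched below, with bootstrap proposals, multinomial resampling and no ancestor sampling. The model x_t = phi x_{t-1} + N(0, sig_x^2), y_t = x_t + N(0, sig_y^2), its parameter defaults, the N(0, sig_x^2) initial distribution and the function name are all illustrative assumptions.

    import numpy as np

    def csmc_step(y, x_ref, n_part=100, phi=0.9, sig_x=1.0, sig_y=1.0, rng=None):
        # One conditional SMC update: run a bootstrap particle filter in which the
        # last particle is pinned to the reference trajectory, then draw a new
        # trajectory by tracing ancestries from a final-weight draw.
        rng = np.random.default_rng() if rng is None else rng
        T = len(y)
        X = np.empty((T, n_part)); A = np.empty((T, n_part), dtype=int)
        X[0] = rng.normal(0.0, sig_x, n_part); X[0, -1] = x_ref[0]
        logw = -0.5 * ((y[0] - X[0]) / sig_y) ** 2
        for t in range(1, T):
            w = np.exp(logw - logw.max()); w /= w.sum()
            A[t] = rng.choice(n_part, size=n_part, p=w)       # resample ancestors
            A[t, -1] = n_part - 1                             # keep the reference lineage
            X[t] = phi * X[t - 1, A[t]] + sig_x * rng.standard_normal(n_part)
            X[t, -1] = x_ref[t]                               # pin the reference particle
            logw = -0.5 * ((y[t] - X[t]) / sig_y) ** 2
        w = np.exp(logw - logw.max()); w /= w.sum()
        k = rng.choice(n_part, p=w)
        traj = np.empty(T)
        for t in range(T - 1, -1, -1):                        # trace the ancestry back
            traj[t] = X[t, k]
            if t > 0:
                k = A[t, k]
        return traj

    # particle Gibbs for the states: iterate, feeding the returned trajectory
    # back in as the next reference, e.g.  x_ref = csmc_step(y, x_ref)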

16.
In treating dynamic systems, sequential Monte Carlo methods use discrete samples to represent a complicated probability distribution and use rejection sampling, importance sampling and weighted resampling to complete the on-line 'filtering' task. We propose a special sequential Monte Carlo method, the mixture Kalman filter, which uses a random mixture of the Gaussian distributions to approximate a target distribution. It is designed for on-line estimation and prediction of conditional and partial conditional dynamic linear models, which are themselves a class of widely used non-linear systems and also serve to approximate many others. Compared with a few available filtering methods including Monte Carlo methods, the gain in efficiency that is provided by the mixture Kalman filter can be very substantial. Another contribution of the paper is the formulation of many non-linear systems into conditional or partial conditional linear form, to which the mixture Kalman filter can be applied. Examples in target tracking and digital communications are given to demonstrate the procedures proposed.

17.
Finite mixture of regression (FMR) models are aimed at characterizing subpopulation heterogeneity stemming from different sets of covariates that impact different groups in a population. We address the contemporary problem of simultaneously conducting covariate selection and determining the number of mixture components from a Bayesian perspective that can incorporate prior information. We propose a Gibbs sampling algorithm with reversible jump Markov chain Monte Carlo implementation to accomplish concurrent covariate selection and mixture component determination in FMR models. Our Bayesian approach contains innovative features compared to previously developed reversible jump algorithms. In addition, we introduce component-adaptive weighted g priors for regression coefficients, and illustrate their improved performance in covariate selection. Numerical studies show that the Gibbs sampler with reversible jump implementation performs well, and that the proposed weighted priors can be superior to non-adaptive unweighted priors.

18.
We show how to improve the efficiency of Markov chain Monte Carlo (MCMC) simulations in dynamic mixture models by block-sampling the discrete latent variables. Two algorithms are proposed: the first is a multi-move extension of the single-move Gibbs sampler devised by Gerlach, Carter and Kohn (J. Am. Stat. Assoc. 95:819–828, 2000); the second is an adaptive Metropolis–Hastings scheme that performs well even when the number of discrete states is large. Three empirical examples illustrate the gain in efficiency achieved. We also show that visual inspection of the sample partial autocorrelations of the discrete latent variables helps anticipate whether blocking can be effective.

19.
Sequential designs can be used to save computation time in implementing Monte Carlo hypothesis tests. The motivation is to stop resampling if the early resamples provide enough information on the significance of the p-value of the original Monte Carlo test. In this paper, we consider a sequential design called the B-value design, proposed by Lan and Wittes, and construct the sequential design bounding the resampling risk, the probability that the accept/reject decision is different from the decision based on complete enumeration. For the B-value design, whose exact implementation can be carried out using the algorithm proposed by Fay, Kim and Hachey, we first compare the expected resample size for different designs with comparable resampling risk. We show that the B-value design yields considerable savings in expected resample size compared with a fixed resample or simple curtailed design, and a comparable expected resample size to the iterative push out design of Fay and Follmann. The B-value design is more practical than the iterative push out design in that it is tractable even for small values of the resampling risk, which was a challenge with the iterative push out design. We also propose an approximate B-value design that can be constructed without specially developed software and provides analytic insights on the choice of parameter values in constructing the exact B-value design.
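
For orientation, the simple curtailed design against which the B-value design is compared can be written in a few lines: resampling stops as soon as the accept/reject decision at level alpha can no longer change. This is not the B-value design itself; the callable resample_stat (which generates one statistic under the null), the assumption that alpha*(m_max+1) is an integer (the usual choice, e.g. 999 resamples at level 0.05), and all names are assumptions.

    import numpy as np

    def curtailed_mc_test(stat_obs, resample_stat, alpha=0.05, m_max=999, rng=None):
        # Monte Carlo test with simple curtailment: stop once the decision based
        # on the full set of m_max resamples is already determined.
        rng = np.random.default_rng() if rng is None else rng
        c = int(np.ceil(alpha * (m_max + 1)))     # exceedance count forcing p > alpha
        count = 0
        for i in range(1, m_max + 1):
            count += resample_stat(rng) >= stat_obs
            if count >= c:                        # p-value already exceeds alpha
                return 'fail to reject', i
            if count + (m_max - i) < c:           # count can never reach c: reject
                return 'reject', i
        p = (1 + count) / (m_max + 1)
        return ('reject' if p <= alpha else 'fail to reject'), m_max

    # toy usage: is the sample mean unusually large under a N(0,1) null?
    rng = np.random.default_rng(0)
    data = rng.normal(0.3, 1.0, size=30)
    resample = lambda r: r.normal(0.0, 1.0, size=30).mean()   # one null resample
    decision, n_used = curtailed_mc_test(data.mean(), resample)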

20.
Generalized Gibbs samplers can simulate along arbitrary directions, not necessarily limited to the coordinate directions of the parameters of the objective function. We study how to optimally choose such directions in a random-scan Gibbs sampler setting. We consider the optimal directions to be those that minimize the Kullback–Leibler divergence of two Markov chain Monte Carlo steps. Two distributions over directions are proposed for the multivariate normal objective function. The resulting algorithms are used to simulate from a truncated multivariate normal distribution, and the performance of our algorithms is compared with the performance of two algorithms based on the Gibbs sampler.
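
To fix ideas, a generalized (random-direction) Gibbs step for a zero-mean multivariate normal with precision matrix Q samples the exact one-dimensional conditional along a random line, as sketched below. Directions are drawn uniformly on the sphere rather than from the optimized distributions studied in the paper, no truncation is handled, and the function name and defaults are assumptions.

    import numpy as np

    def random_direction_gibbs(Q, n_iter=5000, x0=None, rng=None):
        # Random-direction Gibbs for N(0, Q^{-1}): pick a direction d, then sample
        # exactly from the 1-D conditional of t in x + t*d, which is Gaussian with
        # precision d'Qd and mean -(d'Qx) / (d'Qd).
        rng = np.random.default_rng() if rng is None else rng
        dim = Q.shape[0]
        x = np.zeros(dim) if x0 is None else np.asarray(x0, dtype=float)
        out = np.empty((n_iter, dim))
        for i in range(n_iter):
            d = rng.standard_normal(dim)
            d /= np.linalg.norm(d)                 # uniform direction on the sphere
            qd = Q @ d
            prec = d @ qd                          # 1-D conditional precision
            mean = -(x @ qd) / prec                # 1-D conditional mean
            x = x + (mean + rng.standard_normal() / np.sqrt(prec)) * d
            out[i] = x
        return out

    # toy usage: highly correlated trivariate normal
    Q = np.linalg.inv(0.9 * np.ones((3, 3)) + 0.1 * np.eye(3))
    draws = random_direction_gibbs(Q, n_iter=10000)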
