A new strategy for speeding Markov chain Monte Carlo algorithms
Authors: Antonietta Mira, Daniel J. Sargent
Institution: (1) Department of Economics, University of Insubria, Via Ravasi 2, Varese, Italy; (2) Mayo Clinic, Rochester, MN, USA
Abstract: Markov chain Monte Carlo (MCMC) methods have become popular as a basis for drawing inference from complex statistical models. Two common difficulties with MCMC algorithms are slow mixing and long run-times, which are frequently closely related. Mixing over the entire state space can often be aided by careful tuning of the chain's transition kernel. In order to preserve the algorithm's stationary distribution, however, care must be taken when updating a chain's transition kernel based on that same chain's history. In this paper we introduce a technique that allows the transition kernel of the Gibbs sampler to be updated at user-specified intervals while preserving the chain's stationary distribution. This technique seems to be beneficial both in increasing the efficiency of the resulting estimates (via Rao-Blackwellization) and in reducing the run-time. A reinterpretation of the modified Gibbs sampling scheme, in terms of auxiliary samples, allows its extension to the more general Metropolis-Hastings framework. The strategies we develop are particularly helpful when calculation of the full conditional (for a Gibbs algorithm) or of the proposal distribution (for a Metropolis-Hastings algorithm) is computationally expensive.
Acknowledgement: Partial financial support from FAR 2002-3, University of Insubria, is gratefully acknowledged.
Keywords: Asymptotic variance; Efficiency; Gibbs sampler; Metropolis-Hastings algorithms; Rao-Blackwellization
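
The abstract gives no implementation details, so the following Python sketch only illustrates the general idea it describes: refreshing a chain's transition kernel at user-specified intervals so that an expensive kernel computation is amortised over many draws. It uses a simple random-walk Metropolis sampler whose proposal covariance is re-estimated from the chain's history every `update_every` iterations; this interval-based adaptation is an assumed stand-in, not the authors' scheme, which additionally guarantees that the stationary distribution is preserved.

```python
import numpy as np

def log_target(x):
    # Illustrative target: a correlated bivariate Gaussian.
    cov = np.array([[1.0, 0.9], [0.9, 1.0]])
    prec = np.linalg.inv(cov)
    return -0.5 * x @ prec @ x

def mh_with_interval_updates(n_iter=20_000, update_every=1_000, seed=0):
    """Random-walk Metropolis whose proposal covariance is refreshed
    from the chain's history only every `update_every` iterations."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    prop_cov = 0.5 * np.eye(2)          # initial, cheaply chosen proposal kernel
    chain = np.empty((n_iter, 2))
    log_p = log_target(x)
    for t in range(n_iter):
        y = rng.multivariate_normal(x, prop_cov)
        log_q = log_target(y)
        if np.log(rng.uniform()) < log_q - log_p:
            x, log_p = y, log_q
        chain[t] = x
        # Refresh the transition kernel only at user-specified intervals,
        # so the expensive recomputation is not paid at every iteration.
        if (t + 1) % update_every == 0:
            prop_cov = np.cov(chain[: t + 1].T) * 2.38**2 / 2 + 1e-6 * np.eye(2)
    return chain

if __name__ == "__main__":
    chain = mh_with_interval_updates()
    print("posterior mean estimate:", chain.mean(axis=0))
```

The interval-based update keeps the per-iteration cost low when kernel construction (e.g., evaluating a full conditional or building a proposal) is expensive, which is the setting the abstract highlights; how to do such updates while exactly preserving the stationary distribution is the subject of the paper itself.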