Convergence of Heavy-tailed Monte Carlo Markov Chain Algorithms
Authors: SØREN F. JARNER, GARETH O. ROBERTS
Affiliations: Danish Labour Market Supplementary Pension Fund; and Department of Mathematics and Statistics, Lancaster University
| |
Abstract: In this paper, we use recent results of Jarner & Roberts (Ann. Appl. Probab., 12, 2002, 224) to show polynomial convergence rates of Monte Carlo Markov Chain algorithms with polynomial target distributions, in particular random-walk Metropolis algorithms, Langevin algorithms and independence samplers. We also use similar methodology to consider polynomial convergence of the Gibbs sampler on a constrained state space. The main result for the random-walk Metropolis algorithm is that heavy-tailed proposal distributions lead to higher rates of convergence and thus to qualitatively better algorithms as measured, for instance, by the existence of central limit theorems for higher moments. Thus, the paper gives for the first time a theoretical justification for the common belief that heavy-tailed proposal distributions improve convergence in the context of random-walk Metropolis algorithms. Similar results are shown to hold for Langevin algorithms and the independence sampler, while results for the mixing of Gibbs samplers on uniform distributions on constrained spaces are rather different in character.
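To make the setting of the abstract concrete, the following is a minimal illustrative sketch (not taken from the paper) of a random-walk Metropolis sampler on a polynomial-tailed target, run once with a light-tailed Gaussian proposal and once with a heavy-tailed Student-t (Cauchy) proposal. The target density, proposal scales and iteration count are hypothetical choices for demonstration only.

```python
# Illustrative sketch: random-walk Metropolis with light- vs. heavy-tailed proposals.
# The target pi(x) ∝ (1 + x^2)^(-r) is a hypothetical polynomial-tailed density.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x, r=2.0):
    # log of pi(x) ∝ (1 + x^2)^(-r), a heavy-tailed (polynomial) target
    return -r * np.log1p(x * x)

def rwm(log_pi, proposal_draw, x0=0.0, n_iter=50_000):
    """Random-walk Metropolis: propose x + increment from a symmetric
    proposal, accept with probability min(1, pi(proposal)/pi(current))."""
    x = x0
    samples = np.empty(n_iter)
    for i in range(n_iter):
        y = x + proposal_draw()
        if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
            x = y
        samples[i] = x
    return samples

# Light-tailed proposal: Gaussian increments
gauss_chain = rwm(log_target, lambda: rng.normal(scale=1.0))
# Heavy-tailed proposal: Student-t increments with 1 degree of freedom (Cauchy)
cauchy_chain = rwm(log_target, lambda: rng.standard_t(df=1.0))

print("Gaussian-proposal chain mean:", gauss_chain.mean())
print("Cauchy-proposal chain mean:  ", cauchy_chain.mean())
```

The heavy-tailed proposal occasionally generates large increments, so the chain can reach the far tails of the target in a single step rather than through many small moves; this is the informal mechanism behind the improved polynomial convergence rates analysed in the paper.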
| |
Keywords: heavy-tailed proposals; Monte Carlo Markov Chain; polynomial ergodicity