Similar Literature
20 similar documents found.
1.
Monte Carlo methods for exact inference have received much attention recently in complete or incomplete contingency table analysis. However, conventional Markov chain Monte Carlo methods, such as the Metropolis–Hastings algorithm, and importance sampling methods sometimes perform poorly because they fail to produce valid tables. In this paper, we apply an adaptive Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm (SAMC; Liang, Liu, & Carroll, 2007), to the exact goodness-of-fit test of models for complete or incomplete contingency tables containing structural zero cells. The numerical results favor our method in terms of the quality of the estimates.
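For orientation, the following is a minimal, generic sketch of the stochastic approximation Monte Carlo idea: the sample space is partitioned into subregions, a Metropolis step targets the weighted density psi(x)*exp(-theta_J(x)), and the log-weights theta are pushed by stochastic approximation towards prescribed sampling frequencies. The one-dimensional target, the partition and all tuning constants are illustrative assumptions; the exact-test sampler for contingency tables is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)

def log_psi(x):
    # Illustrative bimodal unnormalized target.
    return np.log(np.exp(-0.5 * (x - 4.0)**2) + np.exp(-0.5 * (x + 4.0)**2))

edges = np.linspace(-8.0, 8.0, 9)          # partition of the line into m = 10 subregions
m = len(edges) + 1
def region(x):
    return int(np.searchsorted(edges, x))

theta = np.zeros(m)                        # adaptive log-weights, one per subregion
pi_des = np.full(m, 1.0 / m)               # desired sampling frequencies
t0 = 500.0                                 # gain-sequence constant

x, samples = 0.0, []
for t in range(1, 20001):
    # Metropolis step targeting psi(x) * exp(-theta[J(x)]).
    y = x + rng.standard_normal()
    log_ratio = (log_psi(y) - theta[region(y)]) - (log_psi(x) - theta[region(x)])
    if np.log(rng.uniform()) < log_ratio:
        x = y
    # Stochastic-approximation update of the log-weights.
    gamma = t0 / max(t0, t)
    indicator = np.zeros(m)
    indicator[region(x)] = 1.0
    theta += gamma * (indicator - pi_des)
    samples.append(x)
print(theta - theta.mean())                # estimated log-weights (up to an additive constant)
```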

2.
The random walk Metropolis algorithm is a simple Markov chain Monte Carlo scheme which is frequently used in Bayesian statistical problems. We propose a guided walk Metropolis algorithm which suppresses some of the random walk behavior in the Markov chain. This alternative algorithm is no harder to implement than the random walk Metropolis algorithm, but empirical studies show that it performs better in terms of efficiency and convergence time.
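As a point of reference, here is a minimal sketch (not the authors' code) of a one-dimensional random-walk Metropolis step and a guided-walk variant in which an auxiliary direction variable is retained on acceptance and flipped on rejection, following the usual description of a guided walk; the target, step size and run lengths are illustrative choices.

```python
import numpy as np

def log_target(x):
    # Illustrative target: standard normal log-density, up to a constant.
    return -0.5 * x**2

def random_walk_metropolis(x0, n_iter=5000, step=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    x, chain = x0, []
    for _ in range(n_iter):
        y = x + step * rng.standard_normal()           # symmetric proposal
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y                                      # accept
        chain.append(x)
    return np.array(chain)

def guided_walk_metropolis(x0, n_iter=5000, step=1.0, rng=None):
    # Guided walk: keep a direction variable d, propose only in that direction,
    # retain d on acceptance and flip it on rejection.
    rng = rng or np.random.default_rng(0)
    x, d, chain = x0, 1.0, []
    for _ in range(n_iter):
        y = x + d * step * abs(rng.standard_normal())  # move in the current direction
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y                                      # accept, keep direction
        else:
            d = -d                                     # reject, reverse direction
        chain.append(x)
    return np.array(chain)

print(random_walk_metropolis(0.0).mean(), guided_walk_metropolis(0.0).mean())
```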

3.
In the expectation–maximization (EM) algorithm for maximum likelihood estimation from incomplete data, Markov chain Monte Carlo (MCMC) methods have long been used in change-point inference when the expectation step is intractable. However, conventional MCMC algorithms tend to get trapped in local modes when simulating from the posterior distribution of the change points. To overcome this problem, in this paper we propose a stochastic approximation Monte Carlo version of EM (SAMCEM), which combines adaptive Markov chain Monte Carlo with EM for maximum likelihood estimation. SAMCEM is compared with the stochastic approximation version of EM and the reversible jump Markov chain Monte Carlo version of EM on simulated and real datasets. The numerical results indicate that SAMCEM outperforms the other two methods, producing much more accurate parameter estimates and recovering change-point positions and parameter estimates simultaneously.

4.
Convergence of Heavy-tailed Monte Carlo Markov Chain Algorithms
Abstract.  In this paper, we use recent results of Jarner & Roberts (Ann. Appl. Probab., 12, 2002, 224) to show polynomial convergence rates of Monte Carlo Markov Chain algorithms with polynomial target distributions, in particular random-walk Metropolis algorithms, Langevin algorithms and independence samplers. We also use similar methodology to consider polynomial convergence of the Gibbs sampler on a constrained state space. The main result for the random-walk Metropolis algorithm is that heavy-tailed proposal distributions lead to higher rates of convergence and thus to qualitatively better algorithms as measured, for instance, by the existence of central limit theorems for higher moments. Thus, the paper gives for the first time a theoretical justification for the common belief that heavy-tailed proposal distributions improve convergence in the context of random-walk Metropolis algorithms. Similar results are shown to hold for Langevin algorithms and the independence sampler, while results for the mixing of Gibbs samplers on uniform distributions on constrained spaces are rather different in character.
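The practical point about heavy-tailed proposals can be seen in a small experiment: the sketch below runs a random-walk Metropolis sampler on a Student-t(3) target once with Gaussian increments and once with Cauchy-like (t with 1 degree of freedom) increments, and compares how often each chain visits the tails. The target, scales and run lengths are illustrative choices, not the setting of the paper.

```python
import numpy as np

def log_target(x):
    # Illustrative polynomial-tailed target: Student-t with 3 degrees of freedom, up to a constant.
    return -2.0 * np.log1p(x**2 / 3.0)

def rwm(x0, n_iter, proposal_increment, rng):
    x, chain = x0, np.empty(n_iter)
    for i in range(n_iter):
        y = x + proposal_increment(rng)                # symmetric increment
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
        chain[i] = x
    return chain

rng = np.random.default_rng(1)
light = rwm(0.0, 20000, lambda r: 2.0 * r.standard_normal(), rng)   # Gaussian (light-tailed) increments
heavy = rwm(0.0, 20000, lambda r: 2.0 * r.standard_t(df=1), rng)    # Cauchy-like (heavy-tailed) increments
print(np.mean(np.abs(light) > 10), np.mean(np.abs(heavy) > 10))     # how often each chain visits the tails
```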

5.
We propose to combine two quite powerful ideas that have recently appeared in the Markov chain Monte Carlo literature: adaptive Metropolis samplers and delayed rejection. The ergodicity of the resulting non-Markovian sampler is proved, and the efficiency of the combination is demonstrated with various examples. We present situations where the combination outperforms the original methods: adaptation clearly enhances efficiency of the delayed rejection algorithm in cases where good proposal distributions are not available. Similarly, delayed rejection provides a systematic remedy when the adaptation process has a slow start.
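To make the delayed-rejection part concrete, here is a hedged sketch of a single two-stage step with symmetric Gaussian proposals at both stages (a wide first-stage proposal, a narrower second-stage one). The adaptive part, updating the proposal covariance from the chain history, is omitted, and the target and scales are illustrative assumptions.

```python
import numpy as np

def log_target(x):
    # Illustrative standard-normal target.
    return -0.5 * np.sum(x**2)

def log_q(center, point, scale):
    # Symmetric Gaussian proposal log-density N(point; center, scale^2 I), up to a constant.
    return -0.5 * np.sum((point - center)**2) / scale**2

def dr_step(x, scale1=2.0, scale2=0.5, rng=None):
    rng = rng or np.random.default_rng()
    lx = log_target(x)

    # Stage 1: wide symmetric Gaussian proposal.
    y1 = x + scale1 * rng.standard_normal(x.shape)
    ly1 = log_target(y1)
    a1 = np.exp(min(0.0, ly1 - lx))                  # alpha1(x, y1)
    if rng.uniform() < a1:
        return y1

    # Stage 2 (delayed rejection): narrower proposal around the current state.
    # With symmetric proposals the stage-2 densities cancel; the stage-1 ones do not.
    y2 = x + scale2 * rng.standard_normal(x.shape)
    ly2 = log_target(y2)
    a1_rev = np.exp(min(0.0, ly1 - ly2))             # alpha1(y2, y1) on the reversed path
    log_num = ly2 + log_q(y2, y1, scale1) + np.log(max(1.0 - a1_rev, 1e-300))
    log_den = lx + log_q(x, y1, scale1) + np.log(max(1.0 - a1, 1e-300))
    if np.log(rng.uniform()) < log_num - log_den:
        return y2
    return x

rng = np.random.default_rng(2)
x, chain = np.zeros(3), []
for _ in range(2000):
    x = dr_step(x, rng=rng)
    chain.append(x)
```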

6.
Two strategies that can potentially improve Markov Chain Monte Carlo algorithms are to use derivative evaluations of the target density, and to suppress random walk behaviour in the chain. The use of one or both of these strategies has been investigated in a few specific applications, but neither is used routinely. We undertake a broader evaluation of these techniques, with a view to assessing their utility for routine use. In addition to comparing different algorithms, we also compare two different ways in which the algorithms can be applied to a multivariate target distribution. Specifically, the univariate version of an algorithm can be applied repeatedly to one-dimensional conditional distributions, or the multivariate version can be applied directly to the target distribution.

7.
In this paper, we present an adaptive evolutionary Monte Carlo algorithm (AEMC), which combines a tree-based predictive model with an evolutionary Monte Carlo sampling procedure for the purpose of global optimization. Our development is motivated by sensor placement applications in engineering, which require optimizing complicated “black-box” objective functions. The proposed method enhances optimization efficiency and effectiveness compared with several alternative strategies. AEMC falls into the category of adaptive Markov chain Monte Carlo (MCMC) algorithms and is the first adaptive MCMC algorithm that simulates multiple Markov chains in parallel. A theorem about the ergodicity property of the AEMC algorithm is stated and proven. We demonstrate the advantages of the proposed method by applying it to a sensor placement problem in a manufacturing process, as well as to a standard Griewank test function.

8.
Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices.
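As an illustration of a geodesic proposal on a manifold, the sketch below targets a von Mises–Fisher-style density on the unit sphere and proposes moves along great circles in a random tangent direction; because the isotropic tangent proposal depends only on geodesic distance, the plain Metropolis ratio applies. The target, concentration and step size are illustrative assumptions, and the Stiefel-manifold case of the paper is not covered.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([0.0, 0.0, 1.0])              # illustrative mean direction
kappa = 5.0                                  # illustrative concentration

def log_target(x):
    # von Mises-Fisher log-density on the unit sphere, up to a constant.
    return kappa * np.dot(mu, x)

def geodesic_proposal(x, step=0.5):
    # Draw a velocity in the tangent space at x and move along the great circle.
    v = rng.standard_normal(x.shape)
    v -= np.dot(v, x) * x                   # project onto the tangent space at x
    speed = np.linalg.norm(v)
    return x * np.cos(step * speed) + (v / speed) * np.sin(step * speed)

x, samples = np.array([1.0, 0.0, 0.0]), []
for _ in range(5000):
    y = geodesic_proposal(x)
    # The isotropic tangent proposal is symmetric, so the simple Metropolis ratio suffices.
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        x = y
    samples.append(x)
print(np.mean(samples, axis=0))
```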

9.
In this article, we propose to evaluate and compare Markov chain Monte Carlo (MCMC) methods to estimate the parameters in a generalized extreme value model. We employed the Bayesian approach using traditional Metropolis-Hastings methods, Hamiltonian Monte Carlo (HMC), and Riemann manifold HMC (RMHMC) methods to obtain the approximations to the posterior marginal distributions of interest. Applications to real datasets and simulation studies provide evidence that the extra analytical work involved in Hamiltonian Monte Carlo algorithms is compensated by a more efficient exploration of the parameter space.
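For reference, here is a minimal Hamiltonian Monte Carlo sketch with a fixed step size, a fixed number of leapfrog steps and an identity mass matrix on an illustrative Gaussian target; the Riemann manifold variant (RMHMC) would additionally use a position-dependent metric and is not shown.

```python
import numpy as np

def log_target(q):
    return -0.5 * np.sum(q**2)               # illustrative standard-normal target

def grad_log_target(q):
    return -q

def hmc_step(q, eps=0.1, n_leapfrog=20, rng=None):
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(q.shape)          # momentum refresh (identity mass matrix)
    q_new, p_new = q.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new += 0.5 * eps * grad_log_target(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new += eps * grad_log_target(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_target(q_new)
    # Metropolis correction on the joint (position, momentum) energy.
    log_accept = (log_target(q_new) - 0.5 * np.sum(p_new**2)) - (log_target(q) - 0.5 * np.sum(p**2))
    if np.log(rng.uniform()) < log_accept:
        return q_new
    return q

rng = np.random.default_rng(4)
q, chain = np.zeros(5), []
for _ in range(2000):
    q = hmc_step(q, rng=rng)
    chain.append(q)
print(np.mean(chain, axis=0))
```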

10.
We introduce a new class of interacting Markov chain Monte Carlo (MCMC) algorithms which is designed to increase the efficiency of a modified multiple-try Metropolis (MTM) sampler. The extension with respect to the existing MCMC literature is twofold. First, the sampler proposed extends the basic MTM algorithm by allowing for different proposal distributions in the multiple-try generation step. Second, we exploit the different proposal distributions to naturally introduce an interacting MTM mechanism (IMTM) that expands the class of population Monte Carlo methods and builds connections with the rapidly expanding world of adaptive MCMC. We show the validity of the algorithm and discuss the choice of the selection weights and of the different proposals. The numerical studies show that the interaction mechanism allows the IMTM to efficiently explore the state space leading to higher efficiency than other competing algorithms.
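For context, the following sketches a basic multiple-try Metropolis step (not the interacting or correlated variants discussed in items 10 and 14): several candidates are drawn from a symmetric Gaussian proposal, one is selected with probability proportional to its weight, and the move is accepted using the ratio of weight sums. Taking the weights proportional to the target corresponds to one standard choice of the weight function with a symmetric proposal; the bimodal target and tuning constants are illustrative.

```python
import numpy as np

def target(x):
    # Illustrative unnormalized bimodal target (mixture of two Gaussians).
    return np.exp(-0.5 * (x - 3.0)**2) + np.exp(-0.5 * (x + 3.0)**2)

def mtm_step(x, k=5, scale=2.0, rng=None):
    rng = rng or np.random.default_rng()
    # 1. Draw k candidates from a symmetric Gaussian proposal centred at x.
    ys = x + scale * rng.standard_normal(k)
    wy = target(ys)                              # weights proportional to the target
    # 2. Select one candidate with probability proportional to its weight.
    y = rng.choice(ys, p=wy / wy.sum())
    # 3. Draw k-1 reference points around the selected candidate and include x.
    xs = np.append(y + scale * rng.standard_normal(k - 1), x)
    wx = target(xs)
    # 4. Accept with the multiple-try ratio of weight sums.
    if rng.uniform() < min(1.0, wy.sum() / wx.sum()):
        return y
    return x

rng = np.random.default_rng(5)
x, chain = 0.0, []
for _ in range(10000):
    x = mtm_step(x, rng=rng)
    chain.append(x)
print(np.mean(chain))
```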

11.
Different strategies have been proposed to improve mixing and convergence properties of Markov Chain Monte Carlo algorithms. These are mainly concerned with customizing the proposal density in the Metropolis–Hastings algorithm to the specific target density and require a detailed exploratory analysis of the stationary distribution and/or some preliminary experiments to determine an efficient proposal. Various Metropolis–Hastings algorithms have been suggested that make use of previously sampled states in defining an adaptive proposal density. Here we propose a general class of adaptive Metropolis–Hastings algorithms based on Metropolis–Hastings-within-Gibbs sampling. For the case of a one-dimensional target distribution, we present two novel algorithms using mixtures of triangular and trapezoidal densities. These can also be seen as improved versions of the all-purpose adaptive rejection Metropolis sampling (ARMS) algorithm to sample from non-log-concave univariate densities. Using various different examples, we demonstrate their properties and efficiencies and point out their advantages over ARMS and other adaptive alternatives such as the Normal Kernel Coupler.

12.
Pseudo-marginal Markov chain Monte Carlo methods for sampling from intractable distributions have gained recent interest and have been theoretically studied in considerable depth. Their main appeal is that they are exact, in the sense that they target marginally the correct invariant distribution. However, the pseudo-marginal Markov chain can exhibit poor mixing and slow convergence towards its target. As an alternative, a subtly different Markov chain can be simulated, where better mixing is possible but the exactness property is sacrificed. This is the noisy algorithm, initially conceptualised as Monte Carlo within Metropolis, which has also been studied but to a lesser extent. The present article provides a further characterisation of the noisy algorithm, with a focus on fundamental stability properties like positive recurrence and geometric ergodicity. Sufficient conditions for inheriting geometric ergodicity from a standard Metropolis–Hastings chain are given, as well as convergence of the invariant distribution towards the true target distribution.
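The distinction between the exact pseudo-marginal chain and the noisy chain can be made concrete as follows: both replace the intractable likelihood by an unbiased Monte Carlo estimate, but the pseudo-marginal chain recycles the estimate attached to the current state, whereas the noisy (Monte Carlo within Metropolis) chain refreshes it at every iteration. The toy latent-variable model, estimator and settings below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
y_obs = 1.5                                   # illustrative single observation

def log_prior(theta):
    return -0.5 * theta**2                    # N(0, 1) prior, up to a constant

def lik_estimate(theta, n_mc=10):
    # Unbiased Monte Carlo estimate of p(y_obs | theta) for the toy model
    # y | z, theta ~ N(theta + z, 1) with latent z ~ N(0, 1).
    z = rng.standard_normal(n_mc)
    return np.mean(np.exp(-0.5 * (y_obs - theta - z)**2) / np.sqrt(2 * np.pi))

def run(noisy, n_iter=5000, step=1.0):
    theta = 0.0
    lik_cur = lik_estimate(theta)
    chain = []
    for _ in range(n_iter):
        if noisy:
            lik_cur = lik_estimate(theta)     # noisy chain: refresh the current estimate
        prop = theta + step * rng.standard_normal()
        lik_prop = lik_estimate(prop)
        log_ratio = (np.log(lik_prop) + log_prior(prop)) - (np.log(lik_cur) + log_prior(theta))
        if np.log(rng.uniform()) < log_ratio:
            theta, lik_cur = prop, lik_prop   # pseudo-marginal chain: keep the accepted estimate
        chain.append(theta)
    return np.array(chain)

exact_chain = run(noisy=False)                # pseudo-marginal (exact invariant distribution)
noisy_chain = run(noisy=True)                 # noisy / Monte Carlo within Metropolis
print(exact_chain.mean(), noisy_chain.mean())
```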

13.
It is well known that the approximate Bayesian computation algorithm based on Markov chain Monte Carlo methods suffers from sensitivity to the choice of starting values, inefficiency and a low acceptance rate. To overcome these problems, this study proposes a generalization of the multiple-point Metropolis algorithm, which proceeds by generating multiple dependent proposals and then selecting a candidate among the set of proposals on the basis of weights that can be chosen arbitrarily. The performance of the proposed algorithm is illustrated using both simulated and real data.
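For orientation, here is a hedged sketch of a plain single-proposal ABC-MCMC step (not the multiple-point generalization proposed in the paper): a candidate is accepted only if data simulated under it produce a summary statistic within a tolerance of the observed one, combined with the usual Metropolis check on the prior ratio. The toy model, summary, tolerance and prior are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
y_obs = rng.normal(loc=2.0, scale=1.0, size=50)     # illustrative observed data
s_obs = y_obs.mean()                                 # summary statistic

def log_prior(theta):
    return -0.5 * theta**2 / 10.0                    # N(0, 10) prior, up to a constant

def abc_mcmc(n_iter=10000, eps=0.1, step=0.5):
    theta, chain = 0.0, []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()  # symmetric random-walk proposal
        s_sim = rng.normal(loc=prop, scale=1.0, size=50).mean()   # summary of simulated data
        # Move only if the simulated summary is within eps of the observed one
        # and the usual Metropolis check on the prior ratio passes.
        if abs(s_sim - s_obs) <= eps and np.log(rng.uniform()) < log_prior(prop) - log_prior(theta):
            theta = prop
        chain.append(theta)
    return np.array(chain)

print(abc_mcmc().mean())
```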

14.
The Multiple-Try Metropolis is a recent extension of the Metropolis algorithm in which the next state of the chain is selected among a set of proposals. We propose a modification of the Multiple-Try Metropolis algorithm which allows for the use of correlated proposals, particularly antithetic and stratified proposals. The method is particularly useful for random walk Metropolis in high dimensional spaces and can be used easily when the proposal distribution is Gaussian. We explore the use of quasi-Monte Carlo (QMC) methods to generate highly stratified samples. A series of examples is presented to evaluate the potential of the method.

15.
Abstract.  In the Bayesian approach to ill-posed inverse problems, regularization is imposed by specifying a prior distribution on the parameters of interest and Markov chain Monte Carlo samplers are used to extract information about its posterior distribution. The aim of this paper is to investigate the convergence properties of the random-scan random-walk Metropolis (RSM) algorithm for posterior distributions in ill-posed inverse problems. We provide an accessible set of sufficient conditions, in terms of the observational model and the prior, to ensure geometric ergodicity of RSM samplers of the posterior distribution. We illustrate how these conditions can be checked in an application to the inversion of oceanographic tracer data.
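The random-scan random-walk Metropolis sampler studied here can be sketched as follows: at each iteration one coordinate is chosen uniformly at random and updated with a one-dimensional random-walk proposal. The bivariate Gaussian target and step size below are illustrative assumptions.

```python
import numpy as np

def log_target(x):
    # Illustrative correlated bivariate Gaussian target.
    return -0.5 * (x[0]**2 + x[1]**2 + 1.5 * x[0] * x[1])

def rsm_step(x, step=1.0, rng=None):
    rng = rng or np.random.default_rng()
    i = rng.integers(len(x))                 # choose one coordinate uniformly at random
    y = x.copy()
    y[i] += step * rng.standard_normal()     # one-dimensional random-walk proposal
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        return y
    return x

rng = np.random.default_rng(8)
x, chain = np.zeros(2), []
for _ in range(10000):
    x = rsm_step(x, rng=rng)
    chain.append(x)
print(np.cov(np.array(chain).T))
```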

16.
Two new implementations of the EM algorithm are proposed for maximum likelihood fitting of generalized linear mixed models. Both methods use random (independent and identically distributed) sampling to construct Monte Carlo approximations at the E-step. One approach involves generating random samples from the exact conditional distribution of the random effects (given the data) by rejection sampling, using the marginal distribution as a candidate. The second method uses a multivariate t importance sampling approximation. In many applications the two methods are complementary. Rejection sampling is more efficient when sample sizes are small, whereas importance sampling is better with larger sample sizes. Monte Carlo approximation using random samples allows the Monte Carlo error at each iteration to be assessed by using standard central limit theory combined with Taylor series methods. Specifically, we construct a sandwich variance estimate for the maximizer at each approximate E-step. This suggests a rule for automatically increasing the Monte Carlo sample size after iterations in which the true EM step is swamped by Monte Carlo error. In contrast, techniques for assessing Monte Carlo error have not been developed for use with alternative implementations of Monte Carlo EM algorithms utilizing Markov chain Monte Carlo E-step approximations. Three different data sets, including the infamous salamander data of McCullagh and Nelder, are used to illustrate the techniques and to compare them with the alternatives. The results show that the methods proposed can be considerably more efficient than those based on Markov chain Monte Carlo algorithms. However, the methods proposed may break down when the intractable integrals in the likelihood function are of high dimension.
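To illustrate an importance-sampling E-step in a Monte Carlo EM algorithm, the sketch below fits a toy Gaussian random-intercept model, approximating the required conditional moments of the random effects with self-normalized weights based on a Student-t proposal. The model, proposal and fixed Monte Carlo sample size are illustrative simplifications; the rejection-sampling variant, the sandwich variance estimate and the automatic sample-size rule of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)
m, n = 30, 5                                               # groups, observations per group
true_u = rng.normal(0.0, 1.5, size=m)
y = 2.0 + true_u[:, None] + rng.standard_normal((m, n))    # toy random-intercept data

def mc_e_step(y, mu, sigma2, n_mc=2000, df=4):
    """Importance-sampling approximation of E[u_i | y_i] and E[u_i^2 | y_i]."""
    scale = np.sqrt(sigma2)
    u = scale * rng.standard_t(df, size=(y.shape[0], n_mc))           # Student-t proposal draws
    log_g = -0.5 * (df + 1) * np.log1p((u / scale)**2 / df)            # proposal log-density (up to a constant)
    log_f = (-0.5 * u**2 / sigma2                                      # random-effect prior
             - 0.5 * ((y[:, :, None] - mu - u[:, None, :])**2).sum(axis=1))  # Gaussian likelihood
    log_w = log_f - log_g
    log_w -= log_w.max(axis=1, keepdims=True)                          # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum(axis=1, keepdims=True)                                  # self-normalized weights
    return (w * u).sum(axis=1), (w * u**2).sum(axis=1)

mu, sigma2 = 0.0, 1.0
for _ in range(30):                          # Monte Carlo EM iterations
    eu, eu2 = mc_e_step(y, mu, sigma2)
    mu = np.mean(y - eu[:, None])            # M-step for the fixed effect
    sigma2 = np.mean(eu2)                    # M-step for the random-effect variance
print(mu, sigma2)
```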

17.
In this article, we perform Bayesian estimation of stochastic volatility models with heavy tail distributions using Metropolis-adjusted Langevin (MALA) and Riemann manifold Langevin (MMALA) methods. We provide analytical expressions for the application of these methods, assess the performance of these methodologies in simulated data, and illustrate their use on two financial time series datasets.
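For reference, a single Metropolis-adjusted Langevin (MALA) step on an illustrative Gaussian target is sketched below, including the Metropolis–Hastings correction for the asymmetric Langevin proposal. The manifold variant (MMALA) would additionally precondition the drift with a position-dependent metric and is not shown, nor is the stochastic volatility model of the paper.

```python
import numpy as np

def log_target(x):
    return -0.5 * np.sum(x**2)               # illustrative standard-normal target

def grad_log_target(x):
    return -x

def mala_step(x, eps=0.5, rng=None):
    rng = rng or np.random.default_rng()

    def log_q(to, frm):
        # Langevin proposal log-density N(to; frm + (eps^2/2) grad log pi(frm), eps^2 I), up to a constant.
        mean = frm + 0.5 * eps**2 * grad_log_target(frm)
        return -0.5 * np.sum((to - mean)**2) / eps**2

    y = x + 0.5 * eps**2 * grad_log_target(x) + eps * rng.standard_normal(x.shape)
    log_ratio = (log_target(y) + log_q(x, y)) - (log_target(x) + log_q(y, x))
    if np.log(rng.uniform()) < log_ratio:
        return y
    return x

rng = np.random.default_rng(10)
x, chain = np.zeros(3), []
for _ in range(5000):
    x = mala_step(x, rng=rng)
    chain.append(x)
print(np.mean(chain, axis=0))
```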

18.
We develop a Markov chain Monte Carlo algorithm, based on ‘stochastic search variable selection’ (George and McCulloch, 1993), for identifying promising log-linear models. The method may be used in the analysis of multi-way contingency tables where the set of plausible models is very large.
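As a reminder of the ingredient behind stochastic search variable selection, the sketch below computes the conditional inclusion probability of an indicator given its coefficient under a two-component normal ("spike-and-slab") prior; the prior settings are illustrative assumptions, and the full Gibbs sampler over log-linear models is not shown.

```python
import numpy as np

tau, c, p_incl = 0.1, 10.0, 0.5           # illustrative spike scale, slab multiplier, prior inclusion probability

def normal_pdf(x, sd):
    return np.exp(-0.5 * (x / sd)**2) / (np.sqrt(2 * np.pi) * sd)

def inclusion_probability(beta_j):
    """P(gamma_j = 1 | beta_j) under the mixture prior
    beta_j | gamma_j ~ (1 - gamma_j) N(0, tau^2) + gamma_j N(0, (c * tau)^2)."""
    slab = p_incl * normal_pdf(beta_j, c * tau)
    spike = (1 - p_incl) * normal_pdf(beta_j, tau)
    return slab / (slab + spike)

for b in [0.01, 0.3, 1.0]:
    print(b, inclusion_probability(b))
```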

19.
The authors present theoretical results that show how one can simulate a mixture distribution whose components live in subspaces of different dimension by reformulating the problem in such a way that observations may be drawn from an auxiliary continuous distribution on the largest subspace and then transformed in an appropriate fashion. Motivated by the importance of enlarging the set of available Markov chain Monte Carlo (MCMC) techniques, the authors show how their results can be fruitfully employed in problems such as model selection (or averaging) of nested models, or regeneration of Markov chains for evaluating standard deviations of estimated expectations derived from MCMC simulations.

20.
Summary.  We discuss the inversion of the gas profiles (ozone, NO3, NO2, aerosols and neutral density) in the upper atmosphere from spectral occultation measurements. The data are produced by the 'Global ozone monitoring by occultation of stars' instrument on board the Envisat satellite, which was launched in March 2002. The instrument measures the attenuation of light spectra at various horizontal paths from about 100 km down to 10–20 km. The new feature is that these data allow the inversion of the gas concentration height profiles. A short introduction is given to the present operational data management procedure, with examples of the first real data inversion. Several solution options for a more comprehensive statistical inversion are presented. A direct inversion leads to a non-linear model with hundreds of parameters to be estimated. The problem is solved with an adaptive single-step Markov chain Monte Carlo algorithm. Another approach is to divide the problem into several non-linear smaller-dimensional problems, to run parallel adaptive Markov chain Monte Carlo chains for them, and to solve the gas profiles in repetitive linear steps. The effect of grid size is discussed, and we show how the prior regularization takes the grid size into account in a way that effectively leads to a grid-independent inversion.
