Similar Articles (20 results)
1.
Layer Sampling     
Layer sampling is an algorithm for generating variates from a non-normalized multidimensional distribution p(·). It empirically constructs a majorizing function for p(·) from a sequence of layers. The method first selects a layer based on the previous variate. Next, a sample is drawn from the selected layer, using a method such as rejection sampling. Layer sampling is regenerative. At regeneration times, the layers may be adapted to increase mixing of the Markov chain. Layer sampling may also be used to estimate arbitrary integrals, including normalizing constants.
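The abstract invokes rejection sampling as the within-layer step. For reference, here is a minimal sketch of that building block for a one-dimensional, non-normalized target under a single flat majorizer; the target, interval, and bound are illustrative assumptions, and the layer construction and adaptation described above are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def p(x):
    # Non-normalized bimodal target on [-4, 4] (illustrative choice).
    return np.exp(-0.5 * (x - 1.5) ** 2) + 0.6 * np.exp(-0.5 * (x + 1.5) ** 2)

LO, HI, M = -4.0, 4.0, 1.1                       # M majorizes p on [LO, HI]
assert p(np.linspace(LO, HI, 2001)).max() <= M   # numerical check of the bound

def rejection_sample(n):
    """Draw n variates from p(.)/Z by rejection under the flat majorizer M."""
    out = []
    while len(out) < n:
        x = rng.uniform(LO, HI)
        if rng.uniform(0.0, M) < p(x):           # accept with probability p(x)/M
            out.append(x)
    return np.array(out)

draws = rejection_sample(5000)
print("sample mean and sd:", draws.mean(), draws.std())
```

In a layer sampler, this single global bound would be replaced by the piecewise majorizer built from the layers.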

2.
I present a new Markov chain sampling method appropriate for distributions with isolated modes. Like the recently developed method of simulated tempering, the tempered transition method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. The new method has the advantage that it does not require approximate values for the normalizing constants of these distributions, which are needed for simulated tempering, and can be tedious to estimate. Simulated tempering performs a random walk along the series of distributions used. In contrast, the tempered transitions of the new method move systematically from the desired distribution, to the easily-sampled distribution, and back to the desired distribution. This systematic movement avoids the inefficiency of a random walk, an advantage that is unfortunately cancelled by an increase in the number of interpolating distributions required. Because of this, the sampling efficiency of the tempered transition method in simple problems is similar to that of simulated tempering. On more complex distributions, however, simulated tempering and tempered transitions may perform differently. Which is better depends on the ways in which the interpolating distributions are deceptive.
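A minimal sketch of one standard presentation of a tempered transition, assuming power-tempered interpolating distributions proportional to p(x)^beta and a reversible random-walk Metropolis kernel at each level; the target, temperature ladder, and step sizes below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_p(x):
    # Log of a non-normalized, well-separated bimodal target (illustrative).
    return np.logaddexp(-0.5 * ((x - 4.0) / 0.5) ** 2,
                        -0.5 * ((x + 4.0) / 0.5) ** 2)

betas = np.array([1.0, 0.6, 0.3, 0.1])   # beta_0 = 1 is the target; higher index = flatter

def rwm_step(x, beta, scale):
    """One random-walk Metropolis step leaving p(x)**beta invariant (reversible)."""
    y = x + scale * rng.normal()
    return y if np.log(rng.uniform()) < beta * (log_p(y) - log_p(x)) else x

def tempered_transition(x0):
    """Heat up through the ladder, cool back down, then accept or reject the end point."""
    x, log_w = x0, 0.0
    for i in range(1, len(betas)):                      # up pass
        log_w += (betas[i] - betas[i - 1]) * log_p(x)   # weight at the state entering level i
        x = rwm_step(x, betas[i], scale=1.0 / betas[i])
    for i in range(len(betas) - 1, 0, -1):              # down pass
        x = rwm_step(x, betas[i], scale=1.0 / betas[i])
        log_w += (betas[i - 1] - betas[i]) * log_p(x)   # weight at the state leaving level i
    return x if np.log(rng.uniform()) < log_w else x0

x, chain = 4.0, []
for _ in range(5000):
    x = tempered_transition(x)
    chain.append(x)
print("fraction of samples in the negative mode:", np.mean(np.array(chain) < 0))
```

Note that no normalizing constants appear anywhere: the accept/reject weight uses only ratios of the non-normalized tempered densities along the up-and-down path.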

3.
Gaussian Markov random field (GMRF) models are frequently used in a wide variety of applications. In most cases parts of the GMRF are observed through mutually independent data; hence the full conditional of the GMRF, a hidden GMRF (HGMRF), is of interest. We are concerned with the case where the likelihood is non-Gaussian, leading to non-Gaussian HGMRF models. Several researchers have constructed block sampling Markov chain Monte Carlo schemes based on approximations of the HGMRF by a GMRF, using a second-order expansion of the log-density at or near the mode. This is possible as the GMRF approximation can be sampled exactly with a known normalizing constant. The Markov property of the GMRF approximation yields computational efficiency. The main contribution of the paper is to go beyond the GMRF approximation and to construct a class of non-Gaussian approximations which adapt automatically to the particular HGMRF under study. The accuracy can be tuned by intuitive parameters to nearly any precision. These non-Gaussian approximations share the same computational complexity as those based on GMRFs and can be sampled exactly with computable normalizing constants. We apply our approximations in spatial disease mapping and model-based geostatistical models with different likelihoods, obtain procedures for block updating and construct Metropolized independence samplers.

4.
Perfect sampling allows the exact simulation of random variables from the stationary measure of a Markov chain. By exploiting monotonicity properties of the slice sampler we show that a perfect version of the algorithm can be easily implemented, at least when the target distribution is bounded. Various extensions, including perfect product slice samplers, and examples of applications are discussed.
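For reference, a minimal sketch of the basic slice-sampler update that the perfect version builds on, for a bounded one-dimensional target; the monotone coupling-from-the-past machinery that makes the sampler perfect is not reproduced, and the target and interval are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

A, B = 0.0, 1.0

def p(x):
    # Bounded, non-normalized target on [A, B] (illustrative Beta(2, 4)-shaped density).
    return x * (1.0 - x) ** 3

def slice_update(x):
    """One slice-sampler transition: draw an auxiliary height under p(x),
    then draw the next state uniformly from the horizontal slice {y : p(y) > u}."""
    u = rng.uniform(0.0, p(x))
    while True:                      # the support is bounded, so simple rejection works
        y = rng.uniform(A, B)
        if p(y) > u:
            return y

x, draws = 0.5, []
for _ in range(20000):
    x = slice_update(x)
    draws.append(x)
print("sample mean (Beta(2,4) mean is 1/3):", np.mean(draws))
```

Boundedness of the target is what makes the uniform-over-the-slice step (and, in the paper, the sandwiching argument for the perfect version) straightforward.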

5.
We consider the issue of sampling from the posterior distribution of exponential random graph (ERG) models and other statistical models with intractable normalizing constants. Existing methods based on exact sampling are either infeasible or require very long computing time. We study a class of approximate Markov chain Monte Carlo (MCMC) sampling schemes that deal with this issue. We also develop a new Metropolis–Hastings kernel to sample sparse large networks from ERG models. We illustrate the proposed methods on several examples.

6.
We describe standard single-site Markov chain Monte Carlo methods, the Hastings and Metropolis algorithms, the Gibbs sampler and simulated annealing, for maximum a posteriori and marginal posterior mode image estimation. These methods can experience great difficulty in traversing the whole image space in a finite time when the target distribution is multi-modal. We present a survey of multiple-site update methods, including Swendsen and Wang's algorithm, coupled Markov chains and cascade algorithms designed to tackle the problem of moving between modes of the posterior image distribution. We compare the performance of some of these algorithms for sampling from degraded and non-degraded Ising models.
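As a concrete instance of the single-site methods surveyed, here is a minimal sketch of a single-site Gibbs (heat-bath) sampler for a 2-D Ising model without an external field; the lattice size, inverse temperature, and sweep count are illustrative assumptions, and the multiple-site (Swendsen–Wang) and coupled-chain schemes are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

N, BETA, SWEEPS = 32, 0.4, 200          # lattice size, inverse temperature, full sweeps
spins = rng.choice([-1, 1], size=(N, N))

def neighbour_sum(s, i, j):
    # Sum of the four nearest neighbours with periodic (toroidal) boundaries.
    return (s[(i + 1) % N, j] + s[(i - 1) % N, j] +
            s[i, (j + 1) % N] + s[i, (j - 1) % N])

def gibbs_sweep(s):
    """One full sweep of single-site Gibbs updates."""
    for i in range(N):
        for j in range(N):
            h = BETA * neighbour_sum(s, i, j)
            # Full conditional: P(s_ij = +1 | rest) = exp(h) / (exp(h) + exp(-h))
            p_up = 1.0 / (1.0 + np.exp(-2.0 * h))
            s[i, j] = 1 if rng.uniform() < p_up else -1

for _ in range(SWEEPS):
    gibbs_sweep(spins)
print("magnetisation per site:", spins.mean())
```

Near and below the critical temperature this sampler flips one spin at a time and moves between the two magnetised modes extremely slowly, which is exactly the difficulty the multiple-site methods in the abstract are designed to address.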

7.
Exact Sampling from a Continuous State Space
Propp & Wilson (1996) described a protocol, called coupling from the past, for exact sampling from a target distribution using a coupled Markov chain Monte Carlo algorithm. In this paper we extend coupling from the past to various MCMC samplers on a continuous state space; rather than following the monotone sampling device of Propp & Wilson, our approach uses methods related to gamma-coupling and rejection sampling to simulate the chain, and direct accounting of sample paths.
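For orientation, here is a minimal sketch of the original Propp & Wilson monotone coupling-from-the-past protocol on a small totally ordered finite chain (a reflecting random walk on {0, ..., K}); the chain and its size are illustrative assumptions, and the paper's continuous-state extensions via gamma-coupling and rejection sampling are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
K = 10   # state space {0, 1, ..., K}, totally ordered

def update(x, u):
    """Monotone random mapping: the same uniform u moves every state the same way."""
    return min(x + 1, K) if u < 0.5 else max(x - 1, 0)

def cftp_draw():
    """Coupling from the past: run chains from the top (K) and bottom (0) states,
    starting further and further in the past while reusing the same innovations,
    until they have coalesced by time 0; the common value is an exact stationary draw."""
    T, us = 1, []                            # us[t] is the innovation used t+1 steps before time 0
    while True:
        us.extend(rng.uniform(size=T - len(us)))
        lo, hi = 0, K
        for t in range(T - 1, -1, -1):       # apply the fixed innovations from time -T up to 0
            lo, hi = update(lo, us[t]), update(hi, us[t])
        if lo == hi:
            return lo
        T *= 2                               # not coalesced: go further back into the past

draws = [cftp_draw() for _ in range(2000)]
freqs = np.bincount(draws, minlength=K + 1) / len(draws)
print("empirical frequencies (stationary law is uniform here):", freqs.round(3))
```

The monotone update plus the top and bottom starting states are what make checking coalescence of every state feasible; on a continuous state space there is no top or bottom state, which is why the paper resorts to coupling constructions instead.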

8.
Importance sampling and Markov chain Monte Carlo methods have long been used in exact inference for contingency tables; however, their performance is not always satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305–320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: it can produce much more accurate estimates in much shorter CPU time, especially for tables with high degrees of freedom.

9.
Mode Jumping Proposals in MCMC
Markov chain Monte Carlo algorithms generate samples from a target distribution by simulating a Markov chain. Considerable flexibility exists in the specification of the transition kernel of the chain. In practice, however, most algorithms in use allow only small changes to the state vector in each iteration. This choice typically causes problems for multi-modal distributions, as moves between modes become rare and convergence to the target distribution is therefore slow. In this paper we consider continuous distributions on R^n and specify how optimization for local maxima of the target distribution can be incorporated in the specification of the Markov chain. Thereby, we obtain a chain with frequent jumps between modes. We demonstrate the effectiveness of the approach in three examples. The first considers a simple mixture of bivariate normal distributions, whereas the last two examples consider sampling from posterior distributions based on previously analysed data sets.
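A minimal sketch in the spirit of using optimization output inside the proposal, but simplified so that detailed balance is immediate: local modes are located by numerical optimization once, up front, and then drive a fixed independence proposal (a mixture of Gaussians centred at the modes). The paper's actual construction re-optimizes within each transition to preserve detailed balance; that machinery, and the paper's examples, are not reproduced here, and the target, start points, and proposal spread are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

def log_p(x):
    # Non-normalized bimodal target (illustrative): two well-separated normals.
    return np.logaddexp(norm.logpdf(x, -6.0, 0.7), norm.logpdf(x, 5.0, 1.2))

# Step 1: locate local modes by optimization from a few starting points.
modes = sorted({round(float(minimize(lambda v: -log_p(v[0]), [s]).x[0]), 3)
                for s in (-10.0, 0.0, 10.0)})

# Step 2: fixed independence proposal = equal-weight Gaussian mixture at the modes.
PROP_SD = 1.5
def log_q(x):
    return np.logaddexp.reduce([norm.logpdf(x, m, PROP_SD) for m in modes]) - np.log(len(modes))
def draw_q():
    return rng.normal(modes[rng.integers(len(modes))], PROP_SD)

# Step 3: Metropolis-Hastings with the mode-centred independence proposal.
x, chain = 0.0, []
for _ in range(10000):
    y = draw_q()
    if np.log(rng.uniform()) < (log_p(y) + log_q(x)) - (log_p(x) + log_q(y)):
        x = y
    chain.append(x)
print("modes found:", modes, " fraction in left mode:", np.mean(np.array(chain) < 0))
```

Because the proposal already places mass on every located mode, jumps between modes are routine rather than rare; the price is that the proposal must cover the target reasonably well, which the per-iteration optimization of the paper avoids.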

10.
In the Bayesian approach to ill-posed inverse problems, regularization is imposed by specifying a prior distribution on the parameters of interest and Markov chain Monte Carlo samplers are used to extract information about its posterior distribution. The aim of this paper is to investigate the convergence properties of the random-scan random-walk Metropolis (RSM) algorithm for posterior distributions in ill-posed inverse problems. We provide an accessible set of sufficient conditions, in terms of the observational model and the prior, to ensure geometric ergodicity of RSM samplers of the posterior distribution. We illustrate how these conditions can be checked in an application to the inversion of oceanographic tracer data.

11.
Propp and Wilson (Random Structures and Algorithms (1996) 9: 223–252; Journal of Algorithms (1998) 27: 170–217) described a protocol called coupling from the past (CFTP) for exact sampling from the steady-state distribution of a Markov chain Monte Carlo (MCMC) process. In it a past time is identified from which the paths of coupled Markov chains starting at every possible state would have coalesced into a single value by the present time; this value is then a sample from the steady-state distribution. Unfortunately, producing an exact sample typically requires a large computational effort. We consider the question of how to make efficient use of the sample values that are generated. In particular, we make use of regeneration events (cf. Mykland et al., Journal of the American Statistical Association (1995) 90: 233–241) to aid in the analysis of MCMC runs. In a regeneration event, the chain is in a fixed reference distribution; this allows the chain to be broken up into a series of tours which are independent, or nearly so (though they do not represent draws from the true stationary distribution). In this paper we consider using the CFTP and related algorithms to create tours. In some cases their elements are exactly in the stationary distribution; their length may be fixed or random. This allows us to combine the precision of exact sampling with the efficiency of using entire tours. Several algorithms and estimators are proposed and analysed.

12.
Markov chain Monte Carlo (MCMC) sampling is a numerically intensive simulation technique which has greatly improved the practicality of Bayesian inference and prediction. However, MCMC sampling is too slow to be of practical use in problems involving a large number of posterior (target) distributions, as in dynamic modelling and predictive model selection. Alternative simulation techniques for tracking moving target distributions, known as particle filters, which combine importance sampling, importance resampling and MCMC sampling, tend to suffer from a progressive degeneration as the target sequence evolves. We propose a new technique, based on these same simulation methodologies, which does not suffer from this progressive degeneration.
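A minimal sketch of the basic particle filter (sequential importance sampling with resampling) that the abstract takes as its starting point, on a linear-Gaussian state-space model so the code stays short; the model, particle count, and resample-at-every-step choice are illustrative assumptions, and the paper's degeneracy-avoiding technique is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative model: x_t = 0.9 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2)
T, N_PART = 100, 1000
x_true = np.zeros(T)
x_true[0] = rng.normal()
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal()
y = x_true + 0.5 * rng.normal(size=T)

particles = rng.normal(size=N_PART)            # draws from the initial (prior) distribution
filt_mean = np.zeros(T)
for t in range(T):
    if t > 0:
        particles = 0.9 * particles + rng.normal(size=N_PART)   # propagate through the dynamics
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2                # importance weights from the likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filt_mean[t] = np.sum(w * particles)
    # Multinomial resampling: duplicates well-fitting particles and drops the rest,
    # which is also where the progressive impoverishment of the particle set comes from.
    particles = particles[rng.choice(N_PART, size=N_PART, p=w)]

print("RMSE of filtered mean vs. true state:", np.sqrt(np.mean((filt_mean - x_true) ** 2)))
```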

13.
Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in the use of biased weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods is presented; it offers some support for the use of biased estimates, but we advocate caution in their use.

14.
Likelihood computation in spatial statistics requires accurate and efficient calculation of the normalizing constant (i.e. partition function) of the Gibbs distribution of the model. Two available methods to calculate the normalizing constant by Markov chain Monte Carlo methods are compared by simulation experiments for an Ising model, a Gaussian Markov field model and a pairwise interaction point field model.

15.
Although Markov chain Monte Carlo methods have been widely used in many disciplines, exact eigen-analysis for such generated chains has been rare. In this paper, a special Metropolis–Hastings algorithm, Metropolized independent sampling, proposed first in Hastings (1970), is studied in full detail. The eigenvalues and eigenvectors of the corresponding Markov chain, as well as a sharp bound for the total variation distance between the nth updated distribution and the target distribution, are provided. Furthermore, the relationships among this scheme, rejection sampling, and importance sampling are studied, with emphasis on their relative efficiencies. It is shown that Metropolized independent sampling is superior to rejection sampling in two respects: asymptotic efficiency and ease of computation.
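A minimal sketch of Metropolized independent sampling as described: the proposal density q does not depend on the current state, so the acceptance ratio reduces to a ratio of importance weights w = p/q; the target and proposal below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, t as student_t

rng = np.random.default_rng(7)

log_p = lambda x: norm.logpdf(x, loc=2.0, scale=1.0)             # target (illustrative)
log_q = lambda x: student_t.logpdf(x, df=3, loc=0.0, scale=3.0)  # heavy-tailed independence proposal
log_w = lambda x: log_p(x) - log_q(x)                            # importance weight p/q (up to a constant)

x, chain, accepts = 0.0, [], 0
for _ in range(20000):
    y = student_t.rvs(df=3, loc=0.0, scale=3.0, random_state=rng)
    # Accept with probability min(1, w(y) / w(x)): low-weight states are abandoned quickly.
    if np.log(rng.uniform()) < log_w(y) - log_w(x):
        x, accepts = y, accepts + 1
    chain.append(x)
print("acceptance rate:", accepts / len(chain), " mean estimate:", np.mean(chain))
```

The same weight function w = p/q drives rejection sampling and importance sampling, which is what makes the efficiency comparison in the abstract natural.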

16.
We develop a sequential Monte Carlo algorithm for the infinite hidden Markov model (iHMM) that allows us to perform on-line inferences on both system states and structural (static) parameters. The algorithm described here provides a natural alternative to Markov chain Monte Carlo samplers previously developed for the iHMM, and is particularly helpful in applications where data is collected sequentially and model parameters need to be continuously updated. We illustrate our approach in the context of both a simulation study and a financial application.

17.
The analysis of non-Gaussian time series by using state space models is considered from both classical and Bayesian perspectives. The treatment in both cases is based on simulation using importance sampling and antithetic variables; Markov chain Monte Carlo methods are not employed. Non-Gaussian disturbances for the state equation as well as for the observation equation are considered. Methods for estimating conditional and posterior means of functions of the state vector given the observations, and the mean-square errors of their estimates, are developed. These methods are extended to cover the estimation of conditional and posterior densities and distribution functions. The choice of importance sampling densities and antithetic variables is discussed. The techniques work well in practice and are computationally efficient. Their use is illustrated by applying them to a univariate discrete time series, a series with outliers and a volatility series.
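A minimal sketch of the two simulation devices named, importance sampling and antithetic variables, used together to estimate a posterior mean; the toy non-Gaussian posterior, the Gaussian (Laplace-style) importance density, and the reflection form of the antithetic pairing are illustrative assumptions, and the state-space machinery of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)

# Toy non-Gaussian posterior: Poisson(y | exp(theta)) likelihood with a N(0, 1) prior, y = 4 observed.
Y_OBS = 4
def log_post(theta):
    return Y_OBS * theta - np.exp(theta) - 0.5 * theta ** 2   # up to an additive constant

# Gaussian importance density centred at the posterior mode with curvature-matched spread.
mode = minimize_scalar(lambda th: -log_post(th)).x
sd = 1.0 / np.sqrt(np.exp(mode) + 1.0)        # 1 / sqrt(-d^2 log_post / d theta^2) at the mode

def log_q(theta):
    return -0.5 * ((theta - mode) / sd) ** 2 - np.log(sd)     # Gaussian log-density (up to a constant)

M = 20000
z = rng.normal(size=M)
theta = np.concatenate([mode + sd * z, mode - sd * z])        # antithetic (reflected) pairs
logw = log_post(theta) - log_q(theta)
w = np.exp(logw - logw.max())
w /= w.sum()                                                  # self-normalized importance weights
print("posterior mean of theta:", np.sum(w * theta))
```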

18.
In Markov chain Monte Carlo analysis, rapid convergence of the chain to its target distribution is crucial. A chain that converges geometrically quickly is geometrically ergodic. We explore geometric ergodicity for two-component Gibbs samplers (GS) that, under a chosen scanning strategy, evolve through one-at-a-time component-wise updates. We consider three such strategies: composition, random sequence, and random scans. We show that if any one of these scans produces a geometrically ergodic GS, so too do the others. Further, we provide a simple set of sufficient conditions for the geometric ergodicity of the GS. We illustrate our results using two examples.
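A minimal sketch of a two-component Gibbs sampler under the three scan strategies named (composition, i.e. a fixed deterministic scan; random sequence; random scan), on a bivariate normal target whose full conditionals are available in closed form; the target and its correlation are illustrative assumptions, and no ergodicity rates are computed.

```python
import numpy as np

rng = np.random.default_rng(9)
RHO = 0.8                               # bivariate normal: zero means, unit variances, correlation RHO
COND_SD = np.sqrt(1.0 - RHO ** 2)       # standard deviation of each full conditional

def gibbs(scan, n_iter=20000):
    """Two-component Gibbs sampler; `scan` selects the scanning strategy."""
    x = y = 0.0
    out = np.empty((n_iter, 2))
    for t in range(n_iter):
        if scan == "composition":                  # deterministic scan: x | y, then y | x
            x = rng.normal(RHO * y, COND_SD)
            y = rng.normal(RHO * x, COND_SD)
        elif scan == "random_sequence":            # both components, in a random order
            if rng.uniform() < 0.5:
                x = rng.normal(RHO * y, COND_SD)
                y = rng.normal(RHO * x, COND_SD)
            else:
                y = rng.normal(RHO * x, COND_SD)
                x = rng.normal(RHO * y, COND_SD)
        else:                                      # "random_scan": one component chosen at random
            if rng.uniform() < 0.5:
                x = rng.normal(RHO * y, COND_SD)
            else:
                y = rng.normal(RHO * x, COND_SD)
        out[t] = (x, y)
    return out

for scan in ("composition", "random_sequence", "random_scan"):
    s = gibbs(scan)
    print(scan, "sample correlation:", round(float(np.corrcoef(s[:, 0], s[:, 1])[0, 1]), 3))
```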

19.
Because of their multimodality, mixture posterior distributions are difficult to sample with standard Markov chain Monte Carlo (MCMC) methods. We propose a strategy to enhance the sampling of MCMC in this context, using a biasing procedure which originates from computational statistical physics. The principle is first to choose a “reaction coordinate”, that is, a “direction” in which the target distribution is multimodal. In a second step, the marginal log-density of the reaction coordinate with respect to the posterior distribution is estimated; minus this quantity is called the “free energy” in the computational statistical physics literature. To this end, we use adaptive biasing Markov chain algorithms which adapt their targeted invariant distribution on the fly, in order to overcome sampling barriers along the chosen reaction coordinate. Finally, we perform an importance sampling step in order to remove the bias and recover the true posterior. The efficiency factor of the importance sampling step can easily be estimated a priori once the bias is known, and appears to be rather large for the test cases we considered.
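The final step of the strategy, removing the bias by importance reweighting, is easy to isolate. The sketch below assumes the free-energy bias along the reaction coordinate has already been estimated (here it is simply prescribed), samples the biased distribution proportional to pi(x) exp(A(xi(x))) with random-walk Metropolis, and reweights by exp(-A(xi(x))) to recover expectations under the true target. Everything about the example, including the reaction coordinate xi(x) = x and the bias amplitude, is an illustrative assumption; the adaptive estimation of the bias is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(10)

def log_pi(x):
    # Bimodal "posterior" along the chosen reaction coordinate xi(x) = x (illustrative).
    return np.logaddexp(-0.5 * ((x - 3.0) / 0.6) ** 2, -0.5 * ((x + 3.0) / 0.6) ** 2)

def bias(x):
    # Assumed already-estimated bias A(xi): fills in the free-energy barrier near xi = 0.
    return 10.0 * np.exp(-0.5 * x ** 2)

log_pi_biased = lambda x: log_pi(x) + bias(x)    # flatter distribution actually sampled

x, xs = 3.0, []
for _ in range(50000):
    y = x + rng.normal()
    if np.log(rng.uniform()) < log_pi_biased(y) - log_pi_biased(x):
        x = y
    xs.append(x)
xs = np.array(xs)

# Importance reweighting removes the bias: weights proportional to exp(-A(xi(x))).
w = np.exp(-bias(xs))
w /= w.sum()
print("P(x > 0) under the true target (about 0.5 by symmetry):", np.sum(w * (xs > 0)))
print("efficiency factor of the reweighting step (ESS / N):", 1.0 / (len(xs) * np.sum(w ** 2)))
```

The last printed quantity is the standard effective-sample-size ratio for the importance weights, one concrete way to read the "efficiency factor" mentioned in the abstract.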

20.
Importance sampling and control variates have been used as variance reduction techniques for estimating bootstrap tail quantiles and moments, respectively. We adapt each method to apply to both quantiles and moments, and combine the methods to obtain variance reductions by factors from 4 to 30 in simulation examples. We use two innovations in control variates: interpreting control variates as a re-weighting method, and implementing control variates using the saddlepoint; the combination requires only the linear saddlepoint but applies to general statistics, and produces estimates with accuracy of order n^{-1/2} B^{-1}, where n is the sample size and B is the bootstrap sample size. We discuss two modifications to classical importance sampling: a weighted average estimate and a mixture design distribution. These modifications make importance sampling robust and allow moments to be estimated from the same bootstrap simulation used to estimate quantiles.
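A minimal sketch of the control-variate idea for a bootstrap moment: when the bootstrap expectation of a statistic is estimated by Monte Carlo, the resample mean is a natural control variate because its bootstrap expectation is known exactly (it equals the observed sample mean). The data, statistic, and regression-based coefficient are illustrative assumptions; the saddlepoint implementation and the quantile and importance-sampling machinery of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)

x = rng.exponential(size=50)            # observed sample (illustrative)
n, B = len(x), 2000                     # sample size and number of bootstrap resamples

def stat(sample):
    return np.median(sample)            # statistic whose bootstrap mean we want

t_boot = np.empty(B)
c_boot = np.empty(B)
for b in range(B):
    xb = x[rng.integers(0, n, size=n)]  # ordinary nonparametric bootstrap resample
    t_boot[b] = stat(xb)
    c_boot[b] = xb.mean()               # control variate: the resample mean

mu_c = x.mean()                         # its exact bootstrap expectation: the observed sample mean
# Regression (estimated optimal) control-variate coefficient, then the adjusted estimate.
beta = np.cov(t_boot, c_boot)[0, 1] / np.cov(c_boot)
plain = t_boot.mean()
adjusted = plain - beta * (c_boot.mean() - mu_c)
rho = np.corrcoef(t_boot, c_boot)[0, 1]
print("plain bootstrap estimate:     ", plain)
print("control-variate estimate:     ", adjusted)
print("approx. variance reduction factor:", 1.0 / (1.0 - rho ** 2))
```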
