Similar Articles
20 similar articles found
1.
While Markov chain Monte Carlo (MCMC) methods are frequently used for difficult calculations in a wide range of scientific disciplines, they suffer from a serious limitation: their samples are not independent and identically distributed. Consequently, estimates of expectations are biased if the initial value of the chain is not drawn from the target distribution. Regenerative simulation provides an elegant solution to this problem. In this article, we propose a simple regenerative MCMC algorithm to generate variates for any distribution.

2.
A computational problem in many fields is to evaluate multiple integrals and expectations simultaneously. Consider probability distributions with unnormalized density functions indexed by parameters on a 2-dimensional grid, and assume that samples are simulated from distributions on a subgrid. Examples of such unnormalized density functions include the observed-data likelihoods in the presence of missing data and the prior times the likelihood in Bayesian inference. There are various methods using a single sample only or multiple samples jointly to compute each integral. Path sampling seems a compromise, using samples along a 1-dimensional path to compute each integral. However, different choices of the path lead to different estimators, which should ideally be identical. We propose calibrated estimators by the method of control variates to exploit such constraints for variance reduction. We also propose biquadratic interpolation to approximate integrals with parameters outside the subgrid, consistently with the calibrated estimators on the subgrid. These methods can be extended to compute differences of expectations through an auxiliary identity for path sampling. Furthermore, we develop stepwise bridge-sampling methods in parallel but complementary to path sampling. In three simulation studies, the proposed methods lead to substantially reduced mean squared errors compared with existing methods.
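
As a toy illustration of the path-sampling identity that the calibrated estimators build on (not the calibrated or bridge-sampling estimators themselves), the Python sketch below estimates a log normalizing-constant difference for the one-parameter family q_theta(x) = exp(-theta*x^2/2), where the answer is known in closed form; the grid, sample size, and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalized family q_theta(x) = exp(-theta * x**2 / 2), theta > 0, so that
# d/dtheta log q_theta(x) = -x**2 / 2 and z(theta) = sqrt(2*pi / theta) is known.
theta_grid = np.linspace(0.5, 2.0, 21)     # 1-D path through the parameter
n_draws = 20_000                           # Monte Carlo draws per grid point

# Path sampling: log z(theta_1) - log z(theta_0) equals the integral along the path
# of E_theta[ d/dtheta log q_theta(X) ], estimated pointwise by Monte Carlo.
means = np.array([np.mean(-0.5 * rng.normal(0.0, 1.0 / np.sqrt(t), n_draws) ** 2)
                  for t in theta_grid])
estimate = np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(theta_grid))  # trapezoid rule

exact = -0.5 * np.log(theta_grid[-1] / theta_grid[0])
print(f"path sampling estimate: {estimate:.4f}   exact: {exact:.4f}")
```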

3.
I present a new Markov chain sampling method appropriate for distributions with isolated modes. Like the recently developed method of simulated tempering, the tempered transition method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. The new method has the advantage that it does not require approximate values for the normalizing constants of these distributions, which are needed for simulated tempering, and can be tedious to estimate. Simulated tempering performs a random walk along the series of distributions used. In contrast, the tempered transitions of the new method move systematically from the desired distribution, to the easily-sampled distribution, and back to the desired distribution. This systematic movement avoids the inefficiency of a random walk, an advantage that is unfortunately cancelled by an increase in the number of interpolating distributions required. Because of this, the sampling efficiency of the tempered transition method in simple problems is similar to that of simulated tempering. On more complex distributions, however, simulated tempering and tempered transitions may perform differently. Which is better depends on the ways in which the interpolating distributions are deceptive.

4.
This article presents a novel Bayesian analysis for linear mixed-effects models. The analysis is based on the method of partial collapsing that allows some components to be partially collapsed out of a model. The resulting partially collapsed Gibbs (PCG) sampler constructed to fit linear mixed-effects models is expected to exhibit much better convergence properties than the corresponding Gibbs sampler. In order to construct the PCG sampler without complicating component updates, we consider the reparameterization of model components by expressing a between-group variance in terms of a within-group variance in a linear mixed-effects model. The proposed method of partial collapsing with reparameterization is applied to Merton's jump diffusion model as well as general linear mixed-effects models with proper prior distributions and illustrated using simulated data and longitudinal data on sleep deprivation.

5.
The sampling-importance resampling (SIR) algorithm aims at drawing a random sample from a target distribution π. First, a sample is drawn from a proposal distribution q, and then from this a smaller sample is drawn with sample probabilities proportional to the importance ratios π/q. We propose here a simple adjustment of the sample probabilities and show that this gives faster convergence. The results indicate that our version converges better also for small sample sizes. The SIR algorithms are compared with the Metropolis–Hastings (MH) algorithm with independent proposals. Although MH converges asymptotically faster, the results indicate that our improved SIR version is better than MH for small sample sizes. We also establish a connection between the SIR algorithms and importance sampling with normalized weights. We show that the use of adjusted SIR sample probabilities as importance weights reduces the bias of the importance sampling estimate.
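
As a point of reference, here is a minimal Python sketch of the standard (unadjusted) SIR scheme; the target, proposal, and sample sizes below are illustrative stand-ins, and the adjusted sample probabilities proposed in the article are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Unnormalized target pi: a normal density centred at 1 (stand-in for a posterior).
    return -0.5 * (x - 1.0) ** 2

def log_proposal(x):
    # Proposal q: a wider normal, N(0, 3^2), so the importance ratios stay well behaved.
    return -0.5 * (x / 3.0) ** 2

m, n = 20_000, 1_000                                      # proposal draws and (smaller) resample size

draws = rng.normal(0.0, 3.0, size=m)                      # step 1: sample from q
log_w = log_target(draws) - log_proposal(draws)           # step 2: importance ratios pi/q
probs = np.exp(log_w - log_w.max())
probs /= probs.sum()
resample = rng.choice(draws, size=n, p=probs)             # step 3: resample proportionally
print("approximate mean under pi:", resample.mean())      # should be close to 1
```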

6.
Convergence rates, statistical efficiency and sampling costs are studied for the original and extended Swendsen–Wang methods of generating a sample path {S_j, j ≥ 1} with equilibrium distribution π, with r distinct elements, on a finite state space X of size N_1. Given S_{j−1}, each method uses auxiliary random variables to identify the subset of X from which S_j is to be randomly sampled. Let π_min and π_max denote respectively the smallest and largest elements in π, and let N_r denote the number of elements in π with value π_max. For a single auxiliary variable, uniform sampling from the subset, and (N_1 − N_r)π_min + N_r π_max ≈ 1, our results show rapid convergence and high statistical efficiency for large π_min/π_max or N_r/N_1, and slow convergence and poor statistical efficiency for small π_min/π_max and N_r/N_1. Other examples provide additional insight. For extended Swendsen–Wang methods with non-uniform subset sampling, the analysis identifies the properties of a decomposition of π(x) that favour fast convergence and high statistical efficiency. In the absence of exploitable special structure, subset sampling can be costly regardless of which of these methods is employed.

7.
Bandwidth plays an important role in determining the performance of nonparametric estimators, such as the local constant estimator. In this article, we propose a Bayesian approach to bandwidth estimation for local constant estimators of time-varying coefficients in time series models. We establish a large sample theory for the proposed bandwidth estimator and Bayesian estimators of the unknown parameters involved in the error density. A Monte Carlo simulation study shows that (i) the proposed Bayesian estimators for bandwidth and parameters in the error density have satisfactory finite sample performance; and (ii) our proposed Bayesian approach achieves better performance in estimating the bandwidths than the normal reference rule and cross-validation. Moreover, we apply our proposed Bayesian bandwidth estimation method to the time-varying coefficient models that explain Okun's law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide calibrated parametric forms of the time-varying coefficients. Supplementary materials for this article are available online.

8.
Exact Sampling from a Continuous State Space
Propp & Wilson (1996) described a protocol, called coupling from the past, for exact sampling from a target distribution using a coupled Markov chain Monte Carlo algorithm. In this paper we extend coupling from the past to various MCMC samplers on a continuous state space; rather than following the monotone sampling device of Propp & Wilson, our approach uses methods related to gamma-coupling and rejection sampling to simulate the chain, and direct accounting of sample paths.  相似文献   

9.
In this article, we develop rejection sampling algorithms to sample from some truncated and tail distributions. Such samplers are needed in many Markov chain Monte Carlo methods, often in connection with Bayesian inference. In addition to univariate normal, gamma, and beta distributions, we consider multivariate normal distributions truncated to certain sets.
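
For concreteness, here is one well-known rejection sampler for the upper tail of the standard normal, using a translated exponential proposal; it is in the spirit of the univariate tail algorithms discussed, but the article's own multivariate and truncated-region samplers are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def rnorm_upper_tail(a, size, rng):
    """Draw from N(0, 1) truncated to [a, inf), a > 0, by rejection sampling.

    Proposal: Z = a + Exp(rate=lam) with lam = (a + sqrt(a**2 + 4)) / 2;
    a proposed Z is accepted with probability exp(-(Z - lam)**2 / 2).
    """
    lam = (a + np.sqrt(a * a + 4.0)) / 2.0
    out = np.empty(0)
    while out.size < size:
        z = a + rng.exponential(scale=1.0 / lam, size=size)
        accept = rng.uniform(size=size) <= np.exp(-0.5 * (z - lam) ** 2)
        out = np.concatenate([out, z[accept]])
    return out[:size]

draws = rnorm_upper_tail(3.0, 10_000, rng)
print("smallest draw:", draws.min(), "  sample mean:", draws.mean())
```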

10.
Studies of the behaviors of glaciers, ice sheets, and ice streams rely heavily on both observations and physical models. Data acquired via remote sensing provide critical information on geometry and movement of ice over large sections of Antarctica and Greenland. However, uncertainties are present in both the observations and the models. Hence, there is a need for combining these information sources in a fashion that incorporates uncertainty and quantifies its impact on conclusions. We present a hierarchical Bayesian approach to modeling ice-stream velocities incorporating physical models and observations regarding velocity, ice thickness, and surface elevation from the North East Ice Stream in Greenland. The Bayesian model leads to interesting issues in model assessment and computation.

11.
The authors present theoretical results that show how one can simulate a mixture distribution whose components live in subspaces of different dimension by reformulating the problem in such a way that observations may be drawn from an auxiliary continuous distribution on the largest subspace and then transformed in an appropriate fashion. Motivated by the importance of enlarging the set of available Markov chain Monte Carlo (MCMC) techniques, the authors show how their results can be fruitfully employed in problems such as model selection (or averaging) of nested models, or regeneration of Markov chains for evaluating standard deviations of estimated expectations derived from MCMC simulations.

12.
We investigate simulation methodology for Bayesian inference in Lévy-driven stochastic volatility (SV) models. Typically, Bayesian inference from such models is performed using Markov chain Monte Carlo (MCMC); this is often a challenging task. Sequential Monte Carlo (SMC) samplers are methods that can improve over MCMC; however, there are many user-set parameters to specify. We develop a fully automated SMC algorithm, which substantially improves over the standard MCMC methods in the literature. To illustrate our methodology, we look at a model comprising a Heston model with an independent, additive variance gamma process in the returns equation. The driving gamma process can capture the stylized behaviour of many financial time series, and a discretized version, fit in a Bayesian manner, has been found to be very useful for modelling equity data. We demonstrate that it is possible to draw exact inference, in the sense of no time-discretization error, from the Bayesian SV model.

13.
In applications of Gaussian processes (GPs) where quantification of uncertainty is a strict requirement, it is necessary to accurately characterize the posterior distribution over Gaussian process covariance parameters. This is normally done by means of standard Markov chain Monte Carlo (MCMC) algorithms, which require repeated expensive calculations involving the marginal likelihood. Motivated by the desire to avoid the inefficiencies of MCMC algorithms that reject a considerable number of expensive proposals, this paper develops an alternative inference framework based on adaptive multiple importance sampling (AMIS). In particular, this paper studies the application of AMIS to GPs in the case of a Gaussian likelihood, and proposes a novel pseudo-marginal-based AMIS algorithm for non-Gaussian likelihoods, where the marginal likelihood is unbiasedly estimated. The results suggest that the proposed framework outperforms MCMC-based inference of covariance parameters in a wide range of scenarios.
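
A compressed sketch of a Gaussian-proposal AMIS loop on a generic two-dimensional unnormalized target is given below; it shows the deterministic-mixture reweighting of all past draws and the moment-matching proposal update, but none of the GP marginal-likelihood or pseudo-marginal machinery of the proposed framework, and the target and tuning constants are invented for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)

def log_target(x):
    # Invented stand-in target: an unnormalized correlated 2-D Gaussian.
    prec = np.array([[2.0, -1.2], [-1.2, 2.0]])
    return -0.5 * np.einsum("ij,jk,ik->i", x, prec, x)

d, n_per_iter, n_iter = 2, 500, 10
mu, cov = np.zeros(d), 5.0 * np.eye(d)        # deliberately wide initial proposal
all_x, proposals = [], []

for t in range(n_iter):
    x = rng.multivariate_normal(mu, cov, size=n_per_iter)
    all_x.append(x)
    proposals.append((mu.copy(), cov.copy()))

    X = np.vstack(all_x)
    # Deterministic-mixture weights: every draw made so far is reweighted
    # against the equal-weight mixture of all proposals used so far.
    mix_pdf = np.mean([multivariate_normal(m, c).pdf(X) for m, c in proposals], axis=0)
    log_w = log_target(X) - np.log(mix_pdf)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Adapt the proposal by moment matching on the weighted sample.
    mu = w @ X
    diff = X - mu
    cov = (diff * w[:, None]).T @ diff + 1e-6 * np.eye(d)

print("self-normalized IS estimate of E[X]:", mu)   # true value is (0, 0)
```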

14.
We develop a Markov chain Monte Carlo algorithm, based on ‘stochastic search variable selection’ (George and McCulloch, 1993), for identifying promising log-linear models. The method may be used in the analysis of multi-way contingency tables where the set of plausible models is very large.
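
To indicate the flavour of stochastic search variable selection, here is a minimal sketch of the linear-regression version of George and McCulloch (1993) with a known error variance; the prior settings and simulated data are arbitrary, and the adaptation to log-linear models for contingency tables is not shown.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated data: only the first two of five predictors matter (toy setup).
n, p, sigma2 = 200, 5, 1.0
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

tau, c, prior_incl = 0.05, 10.0, 0.5      # spike sd, slab multiplier, P(gamma_j = 1)
gamma = np.ones(p, dtype=int)
XtX, Xty = X.T @ X, X.T @ y

def normal_pdf(b, sd):
    # Normal density up to the 1/sqrt(2*pi) constant, which cancels in the ratio below.
    return np.exp(-0.5 * (b / sd) ** 2) / sd

keep = []
for it in range(3000):
    # beta | gamma, y: conjugate multivariate normal update.
    prior_var = np.where(gamma == 1, (c * tau) ** 2, tau ** 2)
    post_cov = np.linalg.inv(XtX / sigma2 + np.diag(1.0 / prior_var))
    post_mean = post_cov @ Xty / sigma2
    beta = rng.multivariate_normal(post_mean, post_cov)

    # gamma_j | beta_j: Bernoulli, comparing spike and slab densities at beta_j.
    p_slab = prior_incl * normal_pdf(beta, c * tau)
    p_spike = (1.0 - prior_incl) * normal_pdf(beta, tau)
    gamma = (rng.uniform(size=p) < p_slab / (p_slab + p_spike)).astype(int)

    if it >= 1000:                        # discard a burn-in period
        keep.append(gamma.copy())

print("posterior inclusion frequencies:", np.mean(keep, axis=0))
```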

15.
Bayesian hierarchical formulations are utilized by the U.S. Bureau of Labor Statistics (BLS) with respondent-level data for missing item imputation because these formulations are readily parameterized to capture correlation structures. BLS collects survey data under informative sampling designs in which the inclusion probabilities are correlated with the response; sampling-weighted pseudo posterior distributions are estimated from these data for asymptotically unbiased inference about population model parameters. Computation is expensive and does not support BLS production schedules. We propose a new method to scale the computation that divides the data into smaller subsets, estimates a sampling-weighted pseudo posterior distribution, in parallel, for every subset, and combines the pseudo posterior parameter samples from all the subsets through their mean in the Wasserstein space of order 2. We construct conditions on a class of sampling designs under which posterior consistency of the proposed method is achieved. We demonstrate, on both synthetic data and in an application to the Current Employment Statistics survey, that our method produces results of accuracy similar to the usual approach while offering substantially faster computation.
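
In one dimension the Wasserstein-2 mean of distributions reduces to averaging quantile functions, so the combining step can be sketched very compactly. The toy below averages sorted, equally sized subset posterior samples of a scalar parameter; the subset samples are fabricated, and the sampling weights and survey models used by the authors are ignored.

```python
import numpy as np

rng = np.random.default_rng(6)

# Pretend these are posterior samples of one scalar parameter, estimated in
# parallel on K disjoint data subsets (here just fabricated shifted normals).
K, n_draws = 8, 4000
subset_draws = [rng.normal(loc=1.0 + 0.05 * k, scale=0.3, size=n_draws) for k in range(K)]

# 1-D Wasserstein-2 barycenter: average the quantile functions, i.e. average
# the order statistics of equally sized samples across the subsets.
sorted_draws = np.sort(np.stack(subset_draws), axis=1)    # shape (K, n_draws)
combined = sorted_draws.mean(axis=0)                      # combined posterior sample

print("combined posterior mean:", combined.mean())
print("central 95% interval:", np.quantile(combined, [0.025, 0.975]))
```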

16.
It is commonly asserted that the Gibbs sampler is a special case of the Metropolis–Hastings (MH) algorithm. While this statement is true for certain Gibbs samplers, it is not true in general for the version that is taught and used most often, namely, the deterministic scan Gibbs sampler. In this note, I prove that there exist deterministic scan Gibbs samplers that do not exhibit detailed balance and hence cannot be considered MH samplers. The nuances of various Gibbs sampling schemes are discussed.
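
For readers who want the object under discussion in front of them, here is a minimal deterministic-scan Gibbs sampler for a bivariate normal target (a standard textbook example, not taken from the note itself): each sweep updates the two coordinates in the same fixed order.

```python
import numpy as np

rng = np.random.default_rng(7)

rho = 0.8                         # target: bivariate normal, zero means, unit variances, corr rho
cond_sd = np.sqrt(1.0 - rho**2)   # standard deviation of each full conditional
x1, x2 = 0.0, 0.0
draws = np.empty((20_000, 2))

for i in range(draws.shape[0]):
    # Deterministic scan: always x1 given x2 first, then x2 given the new x1.
    x1 = rng.normal(rho * x2, cond_sd)
    x2 = rng.normal(rho * x1, cond_sd)
    draws[i] = x1, x2

print("sample correlation:", np.corrcoef(draws.T)[0, 1])   # should be close to rho
```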

17.
We consider importance sampling (IS) type weighted estimators based on Markov chain Monte Carlo (MCMC) targeting an approximate marginal of the target distribution. In the context of Bayesian latent variable models, the MCMC typically operates on the hyperparameters, and the subsequent weighting may be based on IS or sequential Monte Carlo (SMC), but allows for multilevel techniques as well. The IS approach provides a natural alternative to delayed acceptance (DA) pseudo-marginal/particle MCMC, and has many advantages over DA, including a straightforward parallelization and additional flexibility in MCMC implementation. We detail minimal conditions which ensure strong consistency of the suggested estimators, and provide central limit theorems with expressions for asymptotic variances. We demonstrate how our method can make use of SMC in the state space models context, using Laplace approximations and time-discretized diffusions. Our experimental results are promising and show that the IS-type approach can provide substantial gains relative to an analogous DA scheme, and is often competitive even without parallelization.
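
The basic weighting idea can be sketched in a few lines: run MCMC on a tractable approximate target and correct every retained draw with a self-normalized importance weight. The toy below uses a random-walk Metropolis chain and an invented one-dimensional target and approximation, not the latent-variable, SMC, or multilevel machinery of the article.

```python
import numpy as np

rng = np.random.default_rng(8)

def log_exact(x):
    # Exact unnormalized target: a perturbed standard normal (invented for illustration).
    return -0.5 * x**2 + np.log(1.0 + 0.5 * np.sin(3.0 * x))

def log_approx(x):
    # Cheaper approximate target that the MCMC chain actually uses.
    return -0.5 * x**2

# Random-walk Metropolis on the approximate target only.
n_iter, step = 50_000, 1.5
chain = np.empty(n_iter)
x = 0.0
for i in range(n_iter):
    prop = x + step * rng.normal()
    if np.log(rng.uniform()) < log_approx(prop) - log_approx(x):
        x = prop
    chain[i] = x

# IS-type correction: self-normalized weights of the exact over the approximate target.
log_w = log_exact(chain) - log_approx(chain)
w = np.exp(log_w - log_w.max())
w /= w.sum()

print("weighted estimate of E[X] under the exact target:", np.sum(w * chain))
print("unweighted estimate (targets the approximation):", chain.mean())
```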

18.
We propose a simulation-based Bayesian approach to the analysis of long memory stochastic volatility models, stationary and nonstationary. The main tool used to reduce the likelihood function to a tractable form is an approximate state-space representation of the model. A data set of stock market returns is analyzed with the proposed method. The approach taken here allows a quantitative assessment of the empirical evidence in favor of the stationarity, or nonstationarity, of the instantaneous volatility of the data.

19.
In this paper, efficient importance sampling (EIS) is used to perform a classical and Bayesian analysis of univariate and multivariate stochastic volatility (SV) models for financial return series. EIS provides a highly generic and very accurate procedure for the Monte Carlo (MC) evaluation of high-dimensional interdependent integrals. It can be used to carry out ML-estimation of SV models as well as simulation smoothing where the latent volatilities are sampled at once. Based on this EIS simulation smoother, a Bayesian Markov chain Monte Carlo (MCMC) posterior analysis of the parameters of SV models can be performed.

20.
Watanabe estimated the dynamic bivariate mixture models introduced by Tauchen and Pitts and modified by Andersen using a Bayesian method via Markov chain Monte Carlo techniques. Based on a maximum likelihood method via efficient importance sampling, Liesenfeld and Richard obtained estimates that are significantly different from those of Watanabe. This note corrects the error in the multimove sampler used by Watanabe and reproduces all analyses in the work of Watanabe using a corrected multimove sampler. The estimates using the correct multimove sampler are found to be close to those obtained by Liesenfeld and Richard.

