1.
We consider Particle Gibbs (PG) for Bayesian analysis of non-linear, non-Gaussian state-space models. As a Monte Carlo (MC) approximation of the Gibbs procedure, PG uses sequential MC (SMC) importance sampling within the Gibbs updates to update the latent states. We propose to combine PG with Particle Efficient Importance Sampling (PEIS). By using SMC sampling densities that are approximately globally fully adapted to the target density of the states, PEIS can substantially improve the simulation efficiency of PG relative to existing PG implementations. The efficiency gains are illustrated in PG applications to a non-linear local-level model and to stochastic volatility models.
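For concreteness, here is a minimal sketch of the conditional-SMC sweep that a Particle Gibbs update wraps around. It uses a plain bootstrap proposal for a toy Gaussian local-level model; the function name, parameter values and model are illustrative assumptions, and the globally adapted PEIS proposals discussed above would replace the bootstrap proposal.

```python
import numpy as np

def csmc_sweep(y, x_ref, n_particles=100, sigma_x=0.5, sigma_y=1.0, rng=None):
    """One conditional bootstrap-SMC sweep returning a new latent trajectory."""
    rng = np.random.default_rng() if rng is None else rng
    T = len(y)
    X = np.zeros((T, n_particles))
    anc = np.zeros((T, n_particles), dtype=int)

    # t = 0: propose from the prior, pin the last particle to the reference path.
    X[0] = rng.normal(0.0, sigma_x, n_particles)
    X[0, -1] = x_ref[0]
    logw = -0.5 * ((y[0] - X[0]) / sigma_y) ** 2

    for t in range(1, T):
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Resample ancestors for all but the conditioned particle, which always survives.
        anc[t, :-1] = rng.choice(n_particles, size=n_particles - 1, p=w)
        anc[t, -1] = n_particles - 1
        X[t] = X[t - 1, anc[t]] + rng.normal(0.0, sigma_x, n_particles)
        X[t, -1] = x_ref[t]
        logw = -0.5 * ((y[t] - X[t]) / sigma_y) ** 2

    # Draw one trajectory by tracing the ancestral lineage of a final particle.
    w = np.exp(logw - logw.max())
    w /= w.sum()
    b = rng.choice(n_particles, p=w)
    path = np.empty(T)
    for t in range(T - 1, 0, -1):
        path[t] = X[t, b]
        b = anc[t, b]
    path[0] = X[0, b]
    return path

# Usage: alternate this latent-state sweep with Gibbs updates of the model parameters.
rng = np.random.default_rng(0)
y_obs = np.cumsum(rng.normal(scale=0.5, size=50)) + rng.normal(size=50)
x_draw = np.zeros(50)
for _ in range(10):
    x_draw = csmc_sweep(y_obs, x_draw, rng=rng)
```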
2.
Xiaoyu Xiong Václav Šmídl Maurizio Filippone 《Journal of Statistical Computation and Simulation》2017,87(8):1644-1665
In applications of Gaussian processes (GPs) where quantification of uncertainty is a strict requirement, it is necessary to accurately characterize the posterior distribution over Gaussian process covariance parameters. This is normally done by means of standard Markov chain Monte Carlo (MCMC) algorithms, which require repeated expensive calculations involving the marginal likelihood. Motivated by the desire to avoid the inefficiencies of MCMC algorithms rejecting a considerable number of expensive proposals, this paper develops an alternative inference framework based on adaptive multiple importance sampling (AMIS). In particular, this paper studies the application of AMIS for GPs in the case of a Gaussian likelihood, and proposes a novel pseudo-marginal-based AMIS algorithm for non-Gaussian likelihoods, where the marginal likelihood is unbiasedly estimated. The results suggest that the proposed framework outperforms MCMC-based inference of covariance parameters in a wide range of scenarios.
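A minimal sketch of the generic AMIS recipe is given below: a Gaussian proposal is repeatedly moment-matched to the weighted sample, with deterministic-mixture reweighting over all past proposals. The `log_target` placeholder stands in for a GP log marginal likelihood plus log prior over covariance parameters; the pseudo-marginal extension for non-Gaussian likelihoods described above is not shown, and all names and settings here are assumptions for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def amis(log_target, dim=2, n_iter=20, n_per_iter=200, seed=0):
    """Adaptive multiple importance sampling with a single Gaussian proposal family."""
    rng = np.random.default_rng(seed)
    mu, cov = np.zeros(dim), np.eye(dim) * 4.0
    all_x, all_logp, proposals = [], [], []

    for _ in range(n_iter):
        x = rng.multivariate_normal(mu, cov, size=n_per_iter)
        all_x.append(x)
        all_logp.append(np.array([log_target(xi) for xi in x]))
        proposals.append((mu.copy(), cov.copy()))

        X = np.vstack(all_x)                    # all draws produced so far
        logp = np.concatenate(all_logp)
        # Deterministic-mixture denominator: average density over all past proposals.
        mix = np.mean([mvn.pdf(X, m, c) for m, c in proposals], axis=0)
        logw = logp - np.log(mix)
        w = np.exp(logw - logw.max())
        w /= w.sum()

        # Moment-match the proposal to the weighted sample.
        mu = w @ X
        diff = X - mu
        cov = (diff * w[:, None]).T @ diff + 1e-6 * np.eye(dim)
    return X, w

# Usage with a toy unnormalized log-posterior in place of the GP marginal likelihood.
samples, weights = amis(lambda th: -0.5 * np.sum((th - 1.0) ** 2))
print("weighted posterior mean:", weights @ samples)
```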
3.
《Journal of Statistical Computation and Simulation》2012,82(10):727-740
We propose an estimation procedure for time-series regression models under the Bayesian inference framework. With the exact method of Wise [Wise, J. (1955). The autocorrelation function and spectral density function. Biometrika, 42, 151–159], an exact likelihood function can be obtained instead of the likelihood conditional on initial observations. The constraints on the parameter space arising from the stationarity conditions are handled by a reparametrization, which was not taken into consideration by Chib [Chib, S. (1993). Bayes regression with autoregressive errors: A Gibbs sampling approach. J. Econometrics, 58, 275–294] or Chib and Greenberg [Chib, S. and Greenberg, E. (1994). Bayes inference in regression model with ARMA(p, q) errors. J. Econometrics, 64, 183–206]. Simulation studies show that our method leads to better inferential results than theirs.
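One concrete example of the kind of reparametrization alluded to here is the mapping from unconstrained reals to stationary AR(p) coefficients via partial autocorrelations (a Monahan-style recursion); the authors' exact parametrization may differ, so this is only an illustrative sketch.

```python
import numpy as np

def unconstrained_to_stationary_ar(z):
    """Map z in R^p to the coefficients of a stationary AR(p) process."""
    r = np.tanh(np.asarray(z, dtype=float))     # partial autocorrelations in (-1, 1)
    phi = np.array([r[0]])
    for k in range(1, len(r)):
        # Durbin-Levinson-type update: extend the AR(k-1) fit to AR(k).
        phi = np.append(phi - r[k] * phi[::-1], r[k])
    return phi

# Any unconstrained z yields a stationary model: the roots of
# z^p - phi_1 z^(p-1) - ... - phi_p all lie strictly inside the unit circle.
phi = unconstrained_to_stationary_ar([0.3, -1.2, 2.0])
roots = np.roots(np.concatenate(([1.0], -phi)))
print(phi, np.abs(roots).max())                  # maximum root modulus is below 1
```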
4.
5.
Gibbs sampling has had great success in the analysis of mixture models. In particular, the “latent variable” formulation of the mixture model greatly reduces computational complexity. However, one failing of this approach is the possible existence of almost-absorbing states, called trapping states, as the sampler may require an enormous number of iterations to escape from these states. Here we examine an alternative approach to estimation in mixture models, one based on a Rao–Blackwellization argument applied to a latent-variable-based estimator. From this derivation we construct an alternative Monte Carlo sampling scheme that avoids trapping states.
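The sketch below is a toy illustration of the Rao–Blackwellization idea only, not the specific estimator or sampling scheme derived in the paper: within a standard latent-allocation Gibbs sampler for a two-component Gaussian mixture (unit variances, flat weight prior, all choices assumed for illustration), the component means are estimated by averaging their conditional posterior means given the allocations rather than the raw draws.

```python
import numpy as np

def rb_mixture_gibbs(y, n_iter=2000, tau2=100.0, seed=1):
    """Gibbs sampler for a 2-component N(mu_j, 1) mixture; returns naive and RB mean estimates."""
    rng = np.random.default_rng(seed)
    n = len(y)
    mu = np.array([y.min(), y.max()])            # crude but well-separated initialization
    w = 0.5
    draws, rb = [], []
    for _ in range(n_iter):
        # 1. Allocations z_i | mu, w.
        p1 = w * np.exp(-0.5 * (y - mu[0]) ** 2)
        p2 = (1 - w) * np.exp(-0.5 * (y - mu[1]) ** 2)
        z = (rng.uniform(size=n) < p2 / (p1 + p2)).astype(int)
        # 2. Conditional posterior of each mean given the allocations: N(m_j, v_j).
        m, v = np.empty(2), np.empty(2)
        for j in (0, 1):
            nj = np.sum(z == j)
            v[j] = 1.0 / (nj + 1.0 / tau2)
            m[j] = v[j] * y[z == j].sum()
            mu[j] = rng.normal(m[j], np.sqrt(v[j]))
        # 3. Mixture weight w | z under a Beta(1, 1) prior.
        w = rng.beta(1 + np.sum(z == 0), 1 + np.sum(z == 1))
        draws.append(mu.copy())                  # naive: store the sampled means
        rb.append(m.copy())                      # Rao-Blackwellized: store E[mu_j | z, y]
    return np.mean(draws, axis=0), np.mean(rb, axis=0)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 1, 150)])
print(rb_mixture_gibbs(data))   # the RB average typically has lower Monte Carlo variance
```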
6.
We consider importance sampling (IS) type weighted estimators based on Markov chain Monte Carlo (MCMC) targeting an approximate marginal of the target distribution. In the context of Bayesian latent variable models, the MCMC typically operates on the hyperparameters, and the subsequent weighting may be based on IS or sequential Monte Carlo (SMC), but allows for multilevel techniques as well. The IS approach provides a natural alternative to delayed acceptance (DA) pseudo-marginal/particle MCMC, and has many advantages over DA, including a straightforward parallelization and additional flexibility in MCMC implementation. We detail minimal conditions which ensure strong consistency of the suggested estimators, and provide central limit theorems with expressions for asymptotic variances. We demonstrate how our method can make use of SMC in the state space models context, using Laplace approximations and time-discretized diffusions. Our experimental results are promising and show that the IS-type approach can provide substantial gains relative to an analogous DA scheme, and is often competitive even without parallelization.
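A minimal sketch of the IS-type correction follows: run a random-walk Metropolis chain on an approximate target and reweight each draw by the ratio of the exact to the approximate unnormalized density. The unbiased SMC likelihood estimates and the parallelization discussed above are not shown, and both densities below are toy placeholders, not anything from the paper.

```python
import numpy as np

def log_pi_exact(theta):
    # Toy non-Gaussian unnormalized log-posterior (an assumption, for illustration only).
    return -0.5 * theta ** 2 + np.log1p(0.5 * np.sin(theta))

def log_pi_approx(theta):
    # Cheap, slightly overdispersed Gaussian approximation used as the MCMC target.
    return -0.5 * theta ** 2 / 1.5 ** 2

def is_corrected_mcmc(n_iter=20000, step=1.5, seed=0):
    rng = np.random.default_rng(seed)
    theta, lp = 0.0, log_pi_approx(0.0)
    thetas = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = log_pi_approx(prop)
        if np.log(rng.uniform()) < lp_prop - lp:       # random-walk Metropolis on the approximation
            theta, lp = prop, lp_prop
        thetas[i] = theta
    logw = log_pi_exact(thetas) - log_pi_approx(thetas)  # IS correction towards the exact target
    w = np.exp(logw - logw.max())
    return thetas, w / w.sum()

thetas, w = is_corrected_mcmc()
print("self-normalized IS estimate of E[theta]:", np.sum(w * thetas))
```

Because the weighting is applied after the chain has been generated, the likelihood evaluations needed for the correction can be computed in parallel, which is one of the advantages over delayed acceptance highlighted in the abstract.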
7.
We study adaptive importance sampling (AIS) as an online learning problem and argue for the importance of the trade-off between exploration and exploitation in this adaptation. Borrowing ideas from the online learning literature, we propose Daisee, a partition-based AIS algorithm. We further introduce a notion of regret for AIS and show that Daisee has cumulative pseudo-regret of order O(√T (log T)^(3/4)), where T is the number of iterations. We then extend Daisee to adaptively learn a hierarchical partitioning of the sample space for more efficient sampling and confirm the performance of both algorithms empirically.
8.
《Journal of Statistical Computation and Simulation》2012,82(9):731-749
Efficient stochastic algorithms are presented in order to simulate allele configurations distributed according to a family π_A, 0<A<∞, of exchangeable sampling distributions arising in population genetics. Each distribution π_A has two parameters n and k, the sample size and the number of alleles, respectively. For A→0, the distribution π_A is induced from neutral sampling, whereas for A→∞, it is induced from Maxwell–Boltzmann sampling. Three different Monte Carlo methods (independent sampling procedures) are provided, based on conditioning, sequential methods and a generalization of Pitman's ‘Chinese restaurant process’. Moreover, an efficient Markov chain Monte Carlo method is provided. The algorithms are applied to the homozygosity test and to the Ewens–Watterson–Slatkin test in order to test the hypothesis of selective neutrality.
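The plain Chinese restaurant process already gives a way to simulate exchangeable allele configurations, as sketched below with a mutation-type parameter theta playing roughly the role of A above; this is the standard scheme, not the generalization, conditioning or MCMC methods developed in the paper, and conditioning on a fixed number of alleles k would require an extra step.

```python
import numpy as np

def crp_configuration(n, theta, rng=None):
    """Simulate allele counts for a sample of n genes under the standard CRP/Ewens scheme."""
    rng = np.random.default_rng() if rng is None else rng
    counts = []                          # counts[j] = number of genes carrying allele j
    for i in range(n):
        # Join an existing allele with prob c_j / (i + theta), start a new one with theta / (i + theta).
        probs = np.array(counts + [theta], dtype=float) / (i + theta)
        j = rng.choice(len(probs), p=probs)
        if j == len(counts):
            counts.append(1)             # a new allele appears
        else:
            counts[j] += 1
    return np.array(counts)

def homozygosity(counts):
    """Sum of squared allele frequencies, the statistic used in the homozygosity test."""
    freqs = counts / counts.sum()
    return np.sum(freqs ** 2)

rng = np.random.default_rng(42)
configs = [crp_configuration(50, theta=2.0, rng=rng) for _ in range(1000)]
F = np.array([homozygosity(c) for c in configs])
print("mean homozygosity:", F.mean(), "mean number of alleles:", np.mean([len(c) for c in configs]))
# Conditioning on a fixed number of alleles k (as in the pi_A family) could be done, e.g.,
# by rejecting configurations with a different k, at the cost of efficiency.
```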
9.
Standard methods for maximum likelihood parameter estimation in latent variable models rely on the Expectation-Maximization algorithm and its Monte Carlo variants. Our approach is different and motivated by considerations similar to those of simulated annealing; that is, we build a sequence of artificial distributions whose support concentrates on the set of maximum likelihood estimates. We sample from these distributions using a sequential Monte Carlo approach. We demonstrate state-of-the-art performance for several applications of the proposed approach.
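A rough sketch of the annealing idea, not the authors' exact algorithm: particles in parameter space are pushed through a sequence of artificial distributions proportional to the likelihood raised to an increasing power gamma, using reweighting, resampling and random-walk Metropolis moves, so that the cloud concentrates near the maximizer. All model choices below (a Gaussian mean with known variance, grid of gamma values, step sizes) are toy assumptions.

```python
import numpy as np

def smc_mle(y, gammas=np.linspace(0.0, 30.0, 61), n_particles=500, seed=0):
    """SMC over theta targeting densities proportional to exp(gamma * loglik(theta))."""
    rng = np.random.default_rng(seed)
    loglik = lambda th: -0.5 * np.sum((y[None, :] - th[:, None]) ** 2, axis=1)
    theta = rng.uniform(-10.0, 10.0, n_particles)          # diffuse initial distribution
    ll = loglik(theta)
    for g0, g1 in zip(gammas[:-1], gammas[1:]):
        logw = (g1 - g0) * ll                               # incremental importance weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
        theta, ll = theta[idx], ll[idx]
        # One random-walk Metropolis move per particle, invariant for exp(g1 * loglik).
        prop = theta + 0.5 / np.sqrt(g1 + 1.0) * rng.normal(size=n_particles)
        ll_prop = loglik(prop)
        accept = np.log(rng.uniform(size=n_particles)) < g1 * (ll_prop - ll)
        theta = np.where(accept, prop, theta)
        ll = np.where(accept, ll_prop, ll)
    return theta

y = np.random.default_rng(1).normal(2.0, 1.0, size=40)
particles = smc_mle(y)
print("particle mean vs. exact MLE (sample mean):", particles.mean(), y.mean())
```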
10.
A computational problem in many fields is to evaluate multiple integrals and expectations simultaneously. Consider probability distributions with unnormalized density functions indexed by parameters on a 2-dimensional grid, and assume that samples are simulated from distributions on a subgrid. Examples of such unnormalized density functions include the observed-data likelihoods in the presence of missing data and the prior times the likelihood in Bayesian inference. There are various methods using a single sample only or multiple samples jointly to compute each integral. Path sampling offers a compromise, using samples along a 1-dimensional path to compute each integral. However, different choices of the path lead to different estimators, which should ideally be identical. We propose calibrated estimators by the method of control variates to exploit such constraints for variance reduction. We also propose biquadratic interpolation to approximate integrals with parameters outside the subgrid, consistently with the calibrated estimators on the subgrid. These methods can be extended to compute differences of expectations through an auxiliary identity for path sampling. Furthermore, we develop stepwise bridge-sampling methods in parallel but complementary to path sampling. In three simulation studies, the proposed methods lead to substantially reduced mean squared errors compared with existing methods.
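The basic path-sampling identity underlying this abstract is log(Z_1/Z_0) = ∫_0^1 E_t[d/dt log q_t] dt for a path of unnormalized densities q_t connecting q_0 and q_1. The sketch below applies it to two one-dimensional Gaussians along a geometric path, where the answer is ln 2 in closed form; the calibration by control variates and the stepwise bridge-sampling extensions proposed in the paper are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized endpoint densities: N(0, 1) and N(0, 4) without their normalizing constants.
log_q0 = lambda x: -0.5 * x ** 2
log_q1 = lambda x: -x ** 2 / 8.0

# Geometric path q_t = q0^(1-t) * q1^t; here q_t is Gaussian with precision 1 - 0.75 t,
# so we can draw from it exactly (in general an MCMC sampler per grid point would be needed).
t_grid = np.linspace(0.0, 1.0, 21)
integrand = []
for t in t_grid:
    sd = 1.0 / np.sqrt(1.0 - 0.75 * t)
    x = rng.normal(0.0, sd, size=5000)
    integrand.append(np.mean(log_q1(x) - log_q0(x)))   # E_t[d/dt log q_t] for the geometric path

# Trapezoidal rule along the path.
mids = (np.array(integrand[:-1]) + np.array(integrand[1:])) / 2.0
log_ratio = np.sum(np.diff(t_grid) * mids)
print("path-sampling estimate of log(Z1/Z0):", log_ratio, "exact:", np.log(2.0))
```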
11.
David Hirst Sondre Aanes Geir Storvik Ragnar Bang Huseby Ingunn Fride Tvete 《Journal of the Royal Statistical Society. Series C, Applied statistics》2004,53(1):1-14
The paper develops a Bayesian hierarchical model for estimating the catch at age of cod landed in Norway. The model includes covariate effects such as season and gear, and can also account for the within-boat correlation. The hierarchical structure allows us to account properly for the uncertainty in the estimates.
12.
《Journal of Statistical Computation and Simulation》2012,82(1):23-37
We develop a Markov chain Monte Carlo algorithm, based on ‘stochastic search variable selection’ (George and McCulloch, 1993), for identifying promising log-linear models. The method may be used in the analysis of multi-way contingency tables where the set of plausible models is very large.
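For readers unfamiliar with stochastic search variable selection, the sketch below shows the original linear-regression form of the Gibbs scheme (spike-and-slab normal prior on each coefficient, indicator updates from the conditional odds); the paper adapts the same idea to log-linear models for contingency tables, which is not reproduced here, and all hyperparameter values are assumptions.

```python
import numpy as np

def ssvs(X, y, n_iter=3000, tau=0.05, c=10.0, sigma2=1.0, p_incl=0.5, seed=0):
    """Spike-and-slab (SSVS-style) Gibbs sampler returning inclusion frequencies."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    gamma = np.ones(p, dtype=int)
    incl = np.zeros(p)
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        # beta | gamma, y: Gaussian, with prior sd tau (spike) or c*tau (slab) per coefficient.
        prior_var = np.where(gamma == 1, (c * tau) ** 2, tau ** 2)
        A = np.linalg.inv(XtX / sigma2 + np.diag(1.0 / prior_var))
        beta = rng.multivariate_normal(A @ Xty / sigma2, A)
        # gamma_j | beta_j: compare spike and slab densities at the current beta_j.
        log_slab = -0.5 * beta ** 2 / (c * tau) ** 2 - np.log(c * tau) + np.log(p_incl)
        log_spike = -0.5 * beta ** 2 / tau ** 2 - np.log(tau) + np.log(1 - p_incl)
        prob = 1.0 / (1.0 + np.exp(log_spike - log_slab))
        gamma = (rng.uniform(size=p) < prob).astype(int)
        incl += gamma
    return incl / n_iter

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = X @ np.array([2.0, 0.0, -1.5, 0.0, 0.0]) + rng.normal(size=200)
print("posterior inclusion frequencies:", ssvs(X, y).round(2))
```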
13.
Martin Hazelton 《Statistics and Computing》1995,5(4):343-350
Some statistical models defined in terms of a generating stochastic mechanism have intractable distribution theory, which renders parameter estimation difficult. However, a Monte Carlo estimate of the log-likelihood surface for such a model can be obtained via computation of nonparametric density estimates from simulated realizations of the model. Unfortunately, the bias inherent in density estimation can cause bias in the resulting log-likelihood estimate that alters the location of its maximizer. In this paper a methodology for radically reducing this bias is developed for models with an additive error component. An illustrative example involving a stochastic model of molecular fragmentation and measurement is given.
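The basic simulated-likelihood construction this abstract builds on is easy to sketch: for each candidate parameter, simulate many realizations of the model, form a kernel density estimate, and evaluate it at the observed data. The generative model below is a toy assumption, and the bias-reduction methodology that is the paper's contribution is deliberately omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

def simulate_model(theta, size, rng):
    """Toy generative model with an additive error component and awkward distribution theory."""
    return (theta + rng.normal(size=size)) ** 2 + 0.5 * rng.normal(size=size)

def mc_loglik(theta, y_obs, n_sim=20000, seed=0):
    """Monte Carlo log-likelihood: KDE of simulated outputs evaluated at the observations."""
    rng = np.random.default_rng(seed)                     # common random numbers across theta
    kde = gaussian_kde(simulate_model(theta, n_sim, rng))
    dens = np.maximum(kde(y_obs), 1e-300)                 # guard against underflow in the tails
    return np.sum(np.log(dens))

rng = np.random.default_rng(7)
y_obs = simulate_model(1.5, 200, rng)                     # data generated at theta = 1.5
grid = np.linspace(0.5, 2.5, 21)
ll = [mc_loglik(th, y_obs) for th in grid]
print("argmax of the estimated log-likelihood:", grid[int(np.argmax(ll))])
# The KDE bias addressed in the paper shifts this maximizer; the bias-reduction step is omitted here.
```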
14.
There are two conceptually distinct tasks in Markov chain Monte Carlo (MCMC): a sampler is designed for simulating a Markov chain and then an estimator is constructed on the Markov chain for computing integrals and expectations. In this article, we aim to address the second task by extending the likelihood approach of Kong et al. for Monte Carlo integration. We consider a general Markov chain scheme and use partial likelihood for estimation. Basically, the Markov chain scheme is treated as a random design and a stratified estimator is defined for the baseline measure. Further, we propose useful techniques including subsampling, regulation, and amplification for achieving overall computational efficiency. Finally, we introduce approximate variance estimators for the point estimators. The method can yield substantially improved accuracy compared with Chib's estimator and the crude Monte Carlo estimator, as illustrated with three examples.
15.
In this paper, the generalized exponential power (GEP) density is proposed as an importance function in Monte Carlo simulations in the context of estimation of posterior moments of a location parameter. This density is divided into five classes according to its tail behaviour, which may be exponential, polynomial or logarithmic. The notion of p-credence is also defined to characterize and order the tails of a large class of symmetric densities by comparing them to those of the GEP density. The choice of the GEP density as an importance function allows us to obtain reliable and effective results when p-credences of the prior and the likelihood are defined, even if there are conflicting sources of information. The posterior tails can be characterized using p-credence, so it is possible to choose the parameters of the GEP density to obtain an importance function with slightly heavier tails than the posterior. Simulation of observations from the GEP density is also addressed.
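The general recipe relied on here can be sketched with a Student-t importance function standing in for the GEP density (whose exact parametric form, and the p-credence-based choice of its parameters, are not reproduced): pick an importance function with slightly heavier tails than the posterior and estimate posterior moments of the location parameter by self-normalized importance sampling. The prior, likelihood and tuning values below are assumptions for illustration, including a mild prior-data conflict.

```python
import numpy as np
from scipy import stats

# Unnormalized log-posterior for a location parameter: N(0, 2^2) prior and a
# Student-t_3 likelihood centred at the observation y = 6 (a mild prior-data conflict).
y = 6.0
log_post = lambda th: stats.norm.logpdf(th, 0.0, 2.0) + stats.t.logpdf(y - th, df=3)

# Importance function with heavier tails than the posterior (a t_2 here, standing in
# for a GEP density whose parameters would be chosen via p-credence in the paper).
df_q, loc_q, scale_q = 2, 3.0, 3.0
rng = np.random.default_rng(0)
theta = loc_q + scale_q * rng.standard_t(df_q, size=100000)
logw = log_post(theta) - stats.t.logpdf(theta, df_q, loc=loc_q, scale=scale_q)
w = np.exp(logw - logw.max())
w /= w.sum()

post_mean = np.sum(w * theta)
post_var = np.sum(w * (theta - post_mean) ** 2)
ess = 1.0 / np.sum(w ** 2)                       # effective sample size as a diagnostic
print(f"posterior mean {post_mean:.3f}, variance {post_var:.3f}, ESS {ess:.0f}")
```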
16.
Aoristic data can be described by a marked point process in time in which the points cannot be observed directly but are known to lie in observable intervals, the marks. We consider Bayesian state estimation for the latent points when the marks are modeled in terms of an alternating renewal process in equilibrium and the prior is a Markov point process. We derive the posterior distribution, estimate its parameters and present some examples that illustrate the influence of the prior distribution. The model is then used to estimate times of occurrence of interval censored crimes.
17.
In this paper, we extend the structural probit measurement error model by considering that the unobserved covariate follows a skew-normal distribution. The new model is termed the structural skew-normal probit model. As in the normal case, the likelihood function is obtained analytically and can be maximized by using existing statistical software. A Bayesian approach using Markov chain Monte Carlo techniques to generate from the posterior distributions is also developed. A simulation study demonstrates that the approach avoids the attenuation that affects the naive procedure, and it appears to be more efficient than the structural probit model when the distribution of the covariate (predictor) is skewed.
18.
JØRUND GÅSEMYR 《Scandinavian Journal of Statistics》2003,30(1):159-173
In this paper, we present a general formulation of an algorithm, the adaptive independent chain (AIC), that was introduced in a special context in Gåsemyr et al. [Methodol. Comput. Appl. Probab. 3 (2001)]. The algorithm aims at producing samples from a specific target distribution Π, and is an adaptive, non-Markovian version of the Metropolis–Hastings independent chain. A certain parametric class of possible proposal distributions is fixed, and the parameters of the proposal distribution are updated periodically on the basis of the recent history of the chain, thereby obtaining proposals that get ever closer to Π. We show that under certain conditions, the algorithm produces an exact sample from Π in a finite number of iterations, and hence that it converges to Π. We also present another adaptive algorithm, the componentwise adaptive independent chain (CAIC), which may be an alternative in particular in high dimensions. The CAIC may be regarded as an adaptive approximation to the Gibbs sampler, updating parametric approximations to the conditionals of Π.
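A stripped-down version of the adaptive independent chain idea is sketched below (without the exact-sampling argument or the componentwise CAIC variant): an independence Metropolis–Hastings sampler whose Gaussian proposal is periodically refitted to the recent history of the chain. The target, window length and adaptation schedule are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def adaptive_independent_chain(log_target, dim=2, n_iter=10000, adapt_every=500, seed=0):
    """Independence MH with a Gaussian proposal refitted periodically to the chain history."""
    rng = np.random.default_rng(seed)
    mu, cov = np.zeros(dim), 4.0 * np.eye(dim)        # initial proposal parameters
    x = mu.copy()
    lp_x, lq_x = log_target(x), mvn.logpdf(x, mu, cov)
    chain = np.empty((n_iter, dim))
    for i in range(n_iter):
        prop = rng.multivariate_normal(mu, cov)
        lp_p, lq_p = log_target(prop), mvn.logpdf(prop, mu, cov)
        # Independence MH acceptance ratio: pi(prop) q(x) / (pi(x) q(prop)).
        if np.log(rng.uniform()) < (lp_p - lq_p) - (lp_x - lq_x):
            x, lp_x, lq_x = prop, lp_p, lq_p
        chain[i] = x
        if (i + 1) % adapt_every == 0:                # refit the proposal to recent history
            recent = chain[max(0, i + 1 - 2000): i + 1]
            mu = recent.mean(axis=0)
            cov = np.cov(recent.T) + 1e-6 * np.eye(dim)
            lq_x = mvn.logpdf(x, mu, cov)             # recompute q at the current state
    return chain

# Usage with a correlated Gaussian target standing in for Pi.
target_prec = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
chain = adaptive_independent_chain(lambda z: -0.5 * z @ target_prec @ z)
print("empirical covariance of the last half of the chain:\n", np.cov(chain[5000:].T))
```

Note that the periodic refitting is exactly what makes the chain non-Markovian, which is why the convergence analysis in the paper is needed.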
19.
AJAY JASRA DAVID A. STEPHENS ARNAUD DOUCET THEODOROS TSAGARIS 《Scandinavian Journal of Statistics》2011,38(1):1-22
We investigate simulation methodology for Bayesian inference in Lévy-driven stochastic volatility (SV) models. Typically, Bayesian inference from such models is performed using Markov chain Monte Carlo (MCMC); this is often a challenging task. Sequential Monte Carlo (SMC) samplers are methods that can improve over MCMC; however, there are many user-set parameters to specify. We develop a fully automated SMC algorithm, which substantially improves over the standard MCMC methods in the literature. To illustrate our methodology, we look at a model composed of a Heston model with an independent, additive, variance gamma process in the returns equation. The driving gamma process can capture the stylized behaviour of many financial time series, and a discretized version, fit in a Bayesian manner, has been found to be very useful for modelling equity data. We demonstrate that it is possible to draw exact inference, in the sense of no time-discretization error, from the Bayesian SV model.
20.
We define a notion of de-initializing Markov chains. We prove that to analyse convergence of Markov chains to stationarity, it suffices to analyse convergence of a de-initializing chain. Applications are given to Markov chain Monte Carlo algorithms and to convergence diagnostics.