1.
There are two conceptually distinct tasks in Markov chain Monte Carlo (MCMC): a sampler is designed for simulating a Markov chain and then an estimator is constructed on the Markov chain for computing integrals and expectations. In this article, we aim to address the second task by extending the likelihood approach of Kong et al. for Monte Carlo integration. We consider a general Markov chain scheme and use partial likelihood for estimation. Basically, the Markov chain scheme is treated as a random design and a stratified estimator is defined for the baseline measure. Further, we propose useful techniques including subsampling, regulation, and amplification for achieving overall computational efficiency. Finally, we introduce approximate variance estimators for the point estimators. The method can yield substantially improved accuracy compared with Chib's estimator and the crude Monte Carlo estimator, as illustrated with three examples.
2.
Yongtao Guan Roland Fleißner Paul Joyce Stephen M. Krone 《Statistics and Computing》2006,16(2):193-202
As the number of applications for Markov chain Monte Carlo (MCMC) grows, the power of these methods as well as their shortcomings become more apparent. While MCMC yields an almost automatic way to sample a space according to some distribution, its implementations often fall short of this task, as they may lead to chains which converge too slowly or get trapped within one mode of a multi-modal space. Moreover, it may be difficult to determine whether a chain is only sampling a certain area of the space or whether it has indeed reached stationarity.

In this paper, we show how a simple modification of the proposal mechanism results in faster convergence of the chain and helps to circumvent the problems described above. This mechanism, which is based on an idea from the field of “small-world” networks, amounts to adding occasional “wild” proposals to any local proposal scheme. We demonstrate, through both theory and extensive simulations, that these new proposal distributions can greatly outperform the traditional local proposals when it comes to exploring complex heterogeneous spaces and multi-modal distributions. Our method can easily be applied to most, if not all, problems involving MCMC, and unlike many other remedies which improve the performance of MCMC, it preserves the simplicity of the underlying algorithm.
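The mechanism lends itself to a very short sketch. The following illustration (not the authors' implementation) mixes a local random-walk proposal with an occasional heavy "wild" jump; the target log_density, the two scales, and the mixing probability p_wild are hypothetical choices. Because the mixture of two symmetric kernels is itself symmetric, the plain Metropolis acceptance ratio still applies.

import numpy as np

def small_world_mh(log_density, x0, n_iter=10000, local_scale=0.1,
                   wild_scale=10.0, p_wild=0.05, rng=None):
    """Random-walk Metropolis with occasional long-range ('wild') proposals."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    samples = np.empty((n_iter, x.size))
    logp = log_density(x)
    for t in range(n_iter):
        # Symmetric mixture proposal: mostly local moves, occasionally a wild jump.
        scale = wild_scale if rng.random() < p_wild else local_scale
        prop = x + scale * rng.standard_normal(x.size)
        logp_prop = log_density(prop)
        if np.log(rng.random()) < logp_prop - logp:   # Metropolis accept/reject
            x, logp = prop, logp_prop
        samples[t] = x
    return samples

# Example: a well-separated two-component Gaussian mixture in one dimension.
def log_density(x):
    return np.logaddexp(-0.5 * (x[0] + 10.0) ** 2, -0.5 * (x[0] - 10.0) ** 2)

draws = small_world_mh(log_density, x0=[0.0])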
3.
An automated (Markov chain) Monte Carlo EM algorithm
《Journal of Statistical Computation and Simulation》2012,82(5):349-360
We present an automated Monte Carlo EM (MCEM) algorithm which efficiently assesses Monte Carlo error in the presence of dependent Monte Carlo, particularly Markov chain Monte Carlo, E-step samples and chooses an appropriate Monte Carlo sample size to minimize this Monte Carlo error with respect to progressive EM step estimates. Monte Carlo error is gauged through an application of the central limit theorem during renewal periods of the MCMC sampler used in the E-step. The resulting normal approximation allows us to construct a rigorous and adaptive rule for updating the Monte Carlo sample size at each iteration of the MCEM algorithm. We illustrate our automated routine and compare its performance with competing MCEM algorithms in an analysis of a data set fitted by a generalized linear mixed model.
4.
《Journal of Statistical Computation and Simulation》2012,82(10):2166-2186
The lasso is a popular technique for simultaneous estimation and variable selection in many research areas. When the regression coefficients have independent Laplace priors, their marginal posterior mode is equivalent to the estimates given by the non-Bayesian lasso. Because of the flexibility of its statistical inference, the Bayesian approach has attracted a growing body of research in recent years. Current approaches either carry out a fully Bayesian analysis using a Markov chain Monte Carlo (MCMC) algorithm or use Monte Carlo expectation maximization (MCEM) methods with an MCMC algorithm in each E-step. However, MCMC-based Bayesian methods impose a heavy computational burden and converge slowly. Tan et al. [An efficient MCEM algorithm for fitting generalized linear mixed models for correlated binary data. J Stat Comput Simul. 2007;77:929–943] proposed a non-iterative sampling approach, the inverse Bayes formula (IBF) sampler, for computing posteriors of a hierarchical model within the structure of MCEM. Motivated by their paper, we develop an IBF sampler in the structure of MCEM to obtain the marginal posterior mode of the regression coefficients for the Bayesian lasso, by adjusting the weights of importance sampling when the full conditional distribution is not explicit. Simulation experiments show that the computational time is much reduced with our EM-based method, and that our methods behave comparably with other Bayesian lasso methods in both prediction accuracy and variable selection accuracy, and even better when the sample size is relatively large.
5.
Walter R. Gilks & Carlo Berzuini 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2001,63(1):127-146
Markov chain Monte Carlo (MCMC) sampling is a numerically intensive simulation technique which has greatly improved the practicality of Bayesian inference and prediction. However, MCMC sampling is too slow to be of practical use in problems involving a large number of posterior (target) distributions, as in dynamic modelling and predictive model selection. Alternative simulation techniques for tracking moving target distributions, known as particle filters, which combine importance sampling, importance resampling and MCMC sampling, tend to suffer from a progressive degeneration as the target sequence evolves. We propose a new technique, based on these same simulation methodologies, which does not suffer from this progressive degeneration.
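One way to combine the three ingredients mentioned above, sketched only in broad strokes and not as the authors' algorithm, is to follow each importance-resampling step with a Metropolis-Hastings move that restores particle diversity; log_target, the move scale, and the weight handling below are illustrative placeholders for a scalar state.

import numpy as np

def resample_move_step(particles, log_weights, log_target, rng, mh_scale=0.5):
    """One update: resample particles by weight, then rejuvenate each with an MH move."""
    n = len(particles)
    w = np.exp(log_weights - np.max(log_weights))
    w /= w.sum()
    # Importance resampling duplicates good particles, which by itself causes degeneracy...
    idx = rng.choice(n, size=n, p=w)
    particles = particles[idx].copy()
    # ...so apply one Metropolis-Hastings move per particle to restore diversity.
    for i in range(n):
        prop = particles[i] + mh_scale * rng.standard_normal()
        if np.log(rng.random()) < log_target(prop) - log_target(particles[i]):
            particles[i] = prop
    return particles, np.zeros(n)  # log-weights reset to equal after resampling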
6.
《Journal of the Korean Statistical Society》2014,43(1):31-45
Monte Carlo methods for exact inference have recently received much attention in the analysis of complete or incomplete contingency tables. However, conventional Markov chain Monte Carlo methods, such as the Metropolis–Hastings algorithm, and importance sampling methods sometimes perform poorly, failing to produce valid tables. In this paper, we apply an adaptive Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm (SAMC; Liang, Liu, & Carroll, 2007), to the exact goodness-of-fit test of the model in complete or incomplete contingency tables containing structural zero cells. The numerical results favor our method in terms of the quality of the estimates.
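For orientation, a generic SAMC update on a continuous toy target might look as follows. This is a hedged sketch of the general algorithm rather than the contingency-table sampler used in the paper; the partition, the gain sequence t0/max(t0, t), and the random-walk proposal are all hypothetical choices.

import numpy as np

def samc(log_q, partition_index, n_regions, n_iter=50000, t0=1000.0,
         prop_scale=1.0, rng=None):
    """Generic stochastic approximation Monte Carlo (SAMC) sketch.

    log_q:            unnormalized log-density of the target
    partition_index:  maps a state x to the index of its partition region
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.zeros(n_regions)                       # adaptive log-weights of the regions
    pi_desired = np.full(n_regions, 1.0 / n_regions)  # desired sampling frequencies
    x = 0.0
    samples = []
    for t in range(1, n_iter + 1):
        # MH step whose current target is proportional to q(x) / exp(theta[J(x)]).
        prop = x + prop_scale * rng.standard_normal()
        log_ratio = (log_q(prop) - theta[partition_index(prop)]) \
                  - (log_q(x) - theta[partition_index(x)])
        if np.log(rng.random()) < log_ratio:
            x = prop
        # Stochastic-approximation update of the region weights.
        gamma = t0 / max(t0, t)
        indicator = np.zeros(n_regions)
        indicator[partition_index(x)] = 1.0
        theta += gamma * (indicator - pi_desired)
        samples.append(x)
    return np.array(samples), theta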
7.
D. Clayton & J. Rasbash 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》1999,162(3):425-436
Estimation in mixed linear models is, in general, computationally demanding, since applied problems may involve extensive data sets and large numbers of random effects. Existing computer algorithms are slow and/or require large amounts of memory. These problems are compounded in generalized linear mixed models for categorical data, since even approximate methods involve fitting of a linear mixed model within steps of an iteratively reweighted least squares algorithm. Only in models in which the random effects are hierarchically nested can the computations for fitting these models to large data sets be carried out rapidly. We describe a data augmentation approach to these computational difficulties in which we repeatedly fit an overlapping series of submodels, incorporating the missing terms in each submodel as 'offsets'. The submodels are chosen so that they have a nested random-effect structure, thus allowing maximum exploitation of the computational efficiency which is available in this case. Examples of the use of the algorithm for both metric and discrete responses are discussed, all calculations being carried out using macros within the MLwiN program.
8.
Borus Jungbacker 《Econometric Reviews》2013,32(2-3):385-408
Estimating parameters in a stochastic volatility (SV) model is a challenging task. Among other estimation methods and approaches, efficient simulation methods based on importance sampling have been developed for the Monte Carlo maximum likelihood estimation of univariate SV models. This paper shows that importance sampling methods can be used in a general multivariate SV setting. The sampling methods are computationally efficient. To illustrate the versatility of this approach, three different multivariate stochastic volatility models are estimated for a standard data set. The empirical results are compared to those from earlier studies in the literature. Monte Carlo simulation experiments, based on parameter estimates from the standard data set, are used to show the effectiveness of the importance sampling methods.
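As a generic illustration of Monte Carlo maximum likelihood by importance sampling (a textbook-style sketch, not the specific estimator developed above), the latent volatility path is integrated out by averaging importance weights; sample_proposal, log_joint, and log_proposal are hypothetical user-supplied functions.

import numpy as np

def is_loglik(theta, y, sample_proposal, log_joint, log_proposal, n_draws=500, rng=None):
    """Importance-sampling estimate of log p(y | theta), integrating out the latent path h."""
    rng = np.random.default_rng() if rng is None else rng
    log_w = np.empty(n_draws)
    for i in range(n_draws):
        h = sample_proposal(y, theta, rng)           # latent volatility path h ~ g(h | y, theta)
        log_w[i] = log_joint(y, h, theta) - log_proposal(h, y, theta)
    # log-mean-exp of the importance weights gives the simulated log-likelihood.
    m = np.max(log_w)
    return m + np.log(np.mean(np.exp(log_w - m)))

In practice the estimate is passed to a numerical optimizer over theta, with common random numbers reused across evaluations so that the simulated likelihood is smooth in the parameters.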
9.
AJAY JASRA DAVID A. STEPHENS ARNAUD DOUCET THEODOROS TSAGARIS 《Scandinavian Journal of Statistics》2011,38(1):1-22
We investigate simulation methodology for Bayesian inference in Lévy-driven stochastic volatility (SV) models. Typically, Bayesian inference from such models is performed using Markov chain Monte Carlo (MCMC); this is often a challenging task. Sequential Monte Carlo (SMC) samplers are methods that can improve over MCMC; however, there are many user-set parameters to specify. We develop a fully automated SMC algorithm, which substantially improves over the standard MCMC methods in the literature. To illustrate our methodology, we look at a model composed of a Heston model with an independent, additive, variance gamma process in the returns equation. The driving gamma process can capture the stylized behaviour of many financial time series, and a discretized version, fitted in a Bayesian manner, has been found to be very useful for modelling equity data. We demonstrate that it is possible to draw exact inference, in the sense of no time-discretization error, from the Bayesian SV model.
10.
J. G. Booth & J. P. Hobert 《Journal of the Royal Statistical Society. Series B, Statistical methodology》1999,61(1):265-285
Two new implementations of the EM algorithm are proposed for maximum likelihood fitting of generalized linear mixed models. Both methods use random (independent and identically distributed) sampling to construct Monte Carlo approximations at the E-step. One approach involves generating random samples from the exact conditional distribution of the random effects (given the data) by rejection sampling, using the marginal distribution as a candidate. The second method uses a multivariate t importance sampling approximation. In many applications the two methods are complementary. Rejection sampling is more efficient when sample sizes are small, whereas importance sampling is better with larger sample sizes. Monte Carlo approximation using random samples allows the Monte Carlo error at each iteration to be assessed by using standard central limit theory combined with Taylor series methods. Specifically, we construct a sandwich variance estimate for the maximizer at each approximate E-step. This suggests a rule for automatically increasing the Monte Carlo sample size after iterations in which the true EM step is swamped by Monte Carlo error. In contrast, techniques for assessing Monte Carlo error have not been developed for use with alternative implementations of Monte Carlo EM algorithms utilizing Markov chain Monte Carlo E-step approximations. Three different data sets, including the infamous salamander data of McCullagh and Nelder, are used to illustrate the techniques and to compare them with the alternatives. The results show that the methods proposed can be considerably more efficient than those based on Markov chain Monte Carlo algorithms. However, the methods proposed may break down when the intractable integrals in the likelihood function are of high dimension.
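A schematic of the importance-sampling variant described above could be arranged as follows. This is a sketch under stated assumptions rather than the authors' code: log_joint, fit_t_proposal, and m_step (which is assumed to return a Monte Carlo standard error alongside the update) are hypothetical placeholders, and the sample-size rule is a simplified rendering of the idea of enlarging the sample when the EM step is swamped by Monte Carlo error.

import numpy as np

def mcem_is(theta0, y, log_joint, fit_t_proposal, m_step, n_em=50, m0=100, rng=None):
    """Monte Carlo EM with an importance-sampling E-step and adaptive sample size."""
    rng = np.random.default_rng() if rng is None else rng
    theta, m = np.asarray(theta0, dtype=float), m0
    for _ in range(n_em):
        # E-step: draw random effects from a multivariate-t approximation and weight them.
        proposal = fit_t_proposal(y, theta)                # object exposing .rvs and .logpdf
        u = proposal.rvs(size=m, random_state=rng)
        log_w = np.array([log_joint(y, ui, theta) for ui in u]) - proposal.logpdf(u)
        w = np.exp(log_w - np.logaddexp.reduce(log_w))     # self-normalized weights
        # M-step: maximize the weighted complete-data log-likelihood.
        theta_new, se_mc = m_step(y, u, w)                 # also returns a Monte Carlo standard error
        # If the EM step is swamped by Monte Carlo error, enlarge the sample next iteration.
        if np.all(np.abs(theta_new - theta) < 2.0 * se_mc):
            m = int(m * 1.5)
        theta = theta_new
    return theta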
11.
Drew Creal 《Econometric Reviews》2013,32(3):245-296
This article serves as an introduction and survey for economists to the field of sequential Monte Carlo methods, which are also known as particle filters. Sequential Monte Carlo methods are simulation-based algorithms used to compute the high-dimensional and/or complex integrals that arise regularly in applied work. These methods are becoming increasingly popular in economics and finance, from dynamic stochastic general equilibrium models in macroeconomics to option pricing. The objective of this article is to explain the basics of the methodology, provide references to the literature, and cover some of the theoretical results that justify the methods in practice.
12.
While Markov chain Monte Carlo (MCMC) methods are frequently used for difficult calculations in a wide range of scientific disciplines, they suffer from a serious limitation: their samples are not independent and identically distributed. Consequently, estimates of expectations are biased if the initial value of the chain is not drawn from the target distribution. Regenerative simulation provides an elegant solution to this problem. In this article, we propose a simple regenerative MCMC algorithm to generate variates for any distribution.
13.
Bayesian estimates of the two unknown parameters and of the reliability function of the exponentiated Weibull model are obtained based on generalized order statistics. Markov chain Monte Carlo (MCMC) methods are considered to compute the Bayes estimates of the target parameters. Our computations are based on the balanced loss function, which contains the symmetric and asymmetric loss functions as special cases. The results have been specialized to progressively Type-II censored data and upper record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
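For reference, one common squared-error form of a balanced loss function (possibly more restrictive than the version used in the paper) weighs fidelity to a target estimator \delta_0 against fidelity to the parameter itself, and yields a Bayes estimator that averages the two:

L_\omega(\theta, \delta) = \omega\,(\delta - \delta_0)^2 + (1 - \omega)\,(\delta - \theta)^2, \qquad 0 \le \omega \le 1,
\hat{\delta}_{\mathrm{Bayes}} = \omega\,\delta_0 + (1 - \omega)\,\mathrm{E}[\theta \mid \text{data}].

Setting \omega = 0 recovers ordinary squared error loss and the posterior mean, while \omega > 0 shrinks the Bayes estimate toward the target estimator, for example the maximum likelihood estimator.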
14.
On sequential Monte Carlo sampling methods for Bayesian filtering
In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed that unifies many of the methods which have been proposed over the last few decades in several different scientific disciplines. Novel extensions to the existing methods are also proposed. We show in particular how to incorporate local linearisation methods similar to those which have previously been employed in the deterministic filtering literature; these lead to very effective importance distributions. Furthermore we describe a method which uses Rao-Blackwellisation in order to take advantage of the analytic structure present in some important classes of state-space models. In a final section we develop algorithms for prediction, smoothing and evaluation of the likelihood in dynamic models.
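As background for the framework surveyed above, a minimal bootstrap-style filter, one simple member of that framework rather than the paper's general algorithm, propagates particles through the state equation, weights them by the observation density, and resamples; init_sample, transition_sample, and log_obs_density are hypothetical model-specific functions for a scalar state.

import numpy as np

def bootstrap_particle_filter(y, n_particles, init_sample, transition_sample,
                              log_obs_density, rng=None):
    """Sequential importance sampling with resampling for a state-space model."""
    rng = np.random.default_rng() if rng is None else rng
    x = init_sample(n_particles, rng)                 # particles approximating p(x_0)
    filtering_means = []
    for y_t in y:
        x = transition_sample(x, rng)                 # propagate through the state equation
        log_w = log_obs_density(y_t, x)               # weight by the observation density
        w = np.exp(log_w - np.max(log_w))
        w /= w.sum()
        filtering_means.append(np.sum(w * x))         # estimate of E[x_t | y_{1:t}]
        idx = rng.choice(n_particles, size=n_particles, p=w)   # multinomial resampling
        x = x[idx]
    return np.array(filtering_means)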
15.
A. Kong P. McCullagh X.-L. Meng D. Nicolae Z. Tan 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2003,65(3):585-604
The task of estimating an integral by Monte Carlo methods is formulated as a statistical model using simulated observations as data. The difficulty in this exercise is that we ordinarily have at our disposal all of the information required to compute integrals exactly by calculus or numerical integration, but we choose to ignore some of the information for simplicity or computational feasibility. Our proposal is to use a semiparametric statistical model that makes explicit what information is ignored and what information is retained. The parameter space in this model is a set of measures on the sample space, which is ordinarily an infinite-dimensional object. Nonetheless, from simulated data the baseline measure can be estimated by maximum likelihood, and the required integrals computed by a simple formula previously derived by Vardi and by Lindsay in a closely related model for biased sampling. The same formula was also suggested by Geyer and by Meng and Wong using entirely different arguments. By contrast with Geyer's retrospective likelihood, a correct estimate of simulation error is available directly from the Fisher information. The principal advantage of the semiparametric model is that variance reduction techniques are associated with submodels in which the maximum likelihood estimator in the submodel may have substantially smaller variance than the traditional estimator. The method is applicable to Markov chain and more general Monte Carlo sampling schemes with multiple samplers.
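The "simple formula" referred to above can be rendered concretely as a self-consistency equation for the normalizing constants, solved here by fixed-point iteration. The code is an illustrative rendering, not the authors' implementation: it treats the pooled draws as if independent, pins the first constant because only ratios are identified, and the function names are placeholders.

import numpy as np

def multisample_estimate(samples, log_q, f, n_iter=200):
    """Estimate normalizing constants of q_1..q_m and integrals of f under each target.

    samples: list of arrays, samples[j] drawn from the normalized density q_j / c_j
    log_q:   list of m functions, log_q[j](x) = log of the unnormalized density q_j
    """
    m = len(samples)
    n = np.array([len(s) for s in samples], dtype=float)
    x_all = np.concatenate(samples)                    # pool all draws
    logq = np.stack([np.array([log_q[j](x) for x in x_all]) for j in range(m)])  # (m, N)
    log_c = np.zeros(m)
    for _ in range(n_iter):                            # self-consistency fixed point
        # log of the mixture denominator sum_l n_l q_l(x) / c_l at every pooled point
        log_denom = np.logaddexp.reduce(np.log(n)[:, None] + logq - log_c[:, None], axis=0)
        log_c = np.array([np.logaddexp.reduce(logq[k] - log_denom) for k in range(m)])
        log_c -= log_c[0]                              # only ratios are identified; pin c_1
    log_denom = np.logaddexp.reduce(np.log(n)[:, None] + logq - log_c[:, None], axis=0)
    mass = np.exp(-log_denom)                          # fitted baseline-measure mass at each draw
    fx = np.array([f(x) for x in x_all])
    # integral of f under the k-th normalized target, for each k (scale of c cancels)
    return [float(np.sum(mass * fx * np.exp(logq[k] - log_c[k]))) for k in range(m)]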
16.
Automatic choice of driving values in Monte Carlo likelihood approximation via posterior simulations
For models with random effects or missing data, the likelihood function is sometimes intractable analytically but amenable to Monte Carlo approximation. To get a good approximation, the parameter value that drives the simulations should be sufficiently close to the maximum likelihood estimate (MLE), which unfortunately is unknown. Introducing a working prior distribution, we express the likelihood function as a posterior expectation and approximate it using posterior simulations. If the sample size is large, the sample information is likely to outweigh the prior specification and the posterior simulations will be concentrated around the MLE automatically, leading to good approximation of the likelihood near the MLE. For smaller samples, we propose to use the current posterior as the next prior distribution to make the posterior simulations closer to the MLE and hence improve the likelihood approximation. By using the technique of data duplication, we can simulate from the sharpened posterior distribution without actually updating the prior distribution. The suggested method works well in several test cases. A more complex example involving censored spatial data is also discussed.
17.
Gavin J. Gibson 《Journal of the Royal Statistical Society. Series C, Applied statistics》1997,46(2):215-233
Strategies for controlling plant epidemics are investigated by fitting continuous time spatiotemporal stochastic models to data consisting of maps of disease incidence observed at discrete times. Markov chain Monte Carlo methods are used for fitting two such models to data describing the spread of citrus tristeza virus (CTV) in an orchard. The approach overcomes some of the difficulties encountered when fitting stochastic models to infrequent observations of a continuous process. The results of the analysis cast doubt on the effectiveness of a strategy identified from a previous spatial analysis of the CTV data. Extensions of the approaches to more general models and other problems are also considered.
18.
On the value of derivative evaluations and random walk suppression in Markov chain Monte Carlo algorithms
Two strategies that can potentially improve Markov Chain Monte Carlo algorithms are to use derivative evaluations of the target density, and to suppress random walk behaviour in the chain. The use of one or both of these strategies has been investigated in a few specific applications, but neither is used routinely. We undertake a broader evaluation of these techniques, with a view to assessing their utility for routine use. In addition to comparing different algorithms, we also compare two different ways in which the algorithms can be applied to a multivariate target distribution. Specifically, the univariate version of an algorithm can be applied repeatedly to one-dimensional conditional distributions, or the multivariate version can be applied directly to the target distribution.
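One widely used way to exploit derivative evaluations while damping random-walk behaviour is a Langevin-type proposal; whether this matches the specific algorithms compared in the article is not stated here, so the sketch below is generic, with log_density, grad_log_density, and the step size eps as illustrative inputs.

import numpy as np

def mala(log_density, grad_log_density, x0, n_iter=5000, eps=0.1, rng=None):
    """Metropolis-adjusted Langevin algorithm: gradient-informed proposals with MH correction."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    logp, grad = log_density(x), grad_log_density(x)
    out = np.empty((n_iter, x.size))
    for t in range(n_iter):
        mean_fwd = x + 0.5 * eps**2 * grad                    # drift toward higher density
        prop = mean_fwd + eps * rng.standard_normal(x.size)
        logp_p, grad_p = log_density(prop), grad_log_density(prop)
        mean_bwd = prop + 0.5 * eps**2 * grad_p
        # The asymmetric proposal densities enter the Metropolis-Hastings ratio.
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2 * eps**2)
        log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (2 * eps**2)
        if np.log(rng.random()) < logp_p - logp + log_q_bwd - log_q_fwd:
            x, logp, grad = prop, logp_p, grad_p
        out[t] = x
    return out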
19.
We consider importance sampling (IS) type weighted estimators based on Markov chain Monte Carlo (MCMC) targeting an approximate marginal of the target distribution. In the context of Bayesian latent variable models, the MCMC typically operates on the hyperparameters, and the subsequent weighting may be based on IS or sequential Monte Carlo (SMC), but allows for multilevel techniques as well. The IS approach provides a natural alternative to delayed acceptance (DA) pseudo-marginal/particle MCMC, and has many advantages over DA, including a straightforward parallelization and additional flexibility in MCMC implementation. We detail minimal conditions which ensure strong consistency of the suggested estimators, and provide central limit theorems with expressions for asymptotic variances. We demonstrate how our method can make use of SMC in the state space models context, using Laplace approximations and time-discretized diffusions. Our experimental results are promising and show that the IS-type approach can provide substantial gains relative to an analogous DA scheme, and is often competitive even without parallelization.
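The basic correction step can be sketched as follows; this is a schematic of the general IS-type weighting idea rather than the authors' estimator, log_target_marginal may in practice be replaced by an unbiased SMC or Laplace-type estimate, and all function names are placeholders.

import numpy as np

def is_corrected_expectation(chain, log_approx_marginal, log_target_marginal, f):
    """Reweight an MCMC chain targeting an approximate marginal toward the exact one."""
    log_w = np.array([log_target_marginal(theta) - log_approx_marginal(theta)
                      for theta in chain])
    w = np.exp(log_w - np.max(log_w))
    w /= w.sum()                                   # self-normalized importance weights
    return sum(wi * f(theta) for wi, theta in zip(w, chain))

Because each weight depends only on its own stored chain state, the weighting step parallelizes trivially across the chain, which is one of the advantages over delayed acceptance noted in the abstract.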