Similar articles
20 similar articles found
1.
Particle Markov Chain Monte Carlo methods are used to carry out inference in nonlinear and non-Gaussian state space models, where the posterior density of the states is approximated using particles. Current approaches usually perform Bayesian inference using either a particle marginal Metropolis–Hastings (PMMH) algorithm or a particle Gibbs (PG) sampler. This paper shows how the two ways of generating variables mentioned above can be combined in a flexible manner to give sampling schemes that converge to a desired target distribution. The advantage of our approach is that the sampling scheme can be tailored to obtain good results for different applications. For example, when some parameters and the states are highly correlated, such parameters can be generated using PMMH, while all other parameters are generated using PG because it is easier to obtain good proposals for the parameters within the PG framework. We derive some convergence properties of our sampling scheme and also investigate its performance empirically by applying it to univariate and multivariate stochastic volatility models and comparing it to other PMCMC methods proposed in the literature.
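The PMMH half of such a scheme is easy to sketch: a bootstrap particle filter supplies an unbiased likelihood estimate that is plugged into an otherwise ordinary Metropolis–Hastings chain. The sketch below uses a toy linear-Gaussian state space model with a flat prior on the AR coefficient `phi`; the model, the function names, and all tuning constants are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def pf_loglik(y, phi, sigma=1.0, tau=1.0, N=200, rng=None):
    """Bootstrap particle filter estimate of log p(y | phi) for the toy model
    x_t = phi*x_{t-1} + sigma*eps_t,  y_t = x_t + tau*nu_t  (illustrative only)."""
    rng = np.random.default_rng(rng)
    x = rng.normal(0.0, 1.0, N)                    # particles from a N(0,1) prior
    ll = 0.0
    for yt in y:
        x = phi * x + sigma * rng.normal(size=N)   # propagate particles
        logw = -0.5 * ((yt - x) / tau) ** 2        # Gaussian observation density (unnormalised)
        m = logw.max()
        w = np.exp(logw - m)
        # log of the incremental likelihood p(y_t | y_{1:t-1}), constants restored
        ll += m + np.log(w.mean()) - 0.5 * np.log(2 * np.pi * tau**2)
        x = x[rng.choice(N, N, p=w / w.sum())]     # multinomial resampling
    return ll

def pmmh(y, n_iter=200, step=0.1, seed=0):
    """Particle marginal Metropolis-Hastings over phi (random walk, flat prior)."""
    rng = np.random.default_rng(seed)
    phi, ll = 0.0, pf_loglik(y, 0.0, rng=1)
    chain = []
    for i in range(n_iter):
        phi_new = phi + step * rng.normal()
        ll_new = pf_loglik(y, phi_new, rng=1000 + i)   # fresh likelihood estimate
        if np.log(rng.uniform()) < ll_new - ll:        # accept/reject on estimated likelihoods
            phi, ll = phi_new, ll_new
        chain.append(phi)
    return np.array(chain)

# Simulate data with true phi = 0.7 and run the sampler.
rng = np.random.default_rng(42)
x, y = 0.0, []
for _ in range(50):
    x = 0.7 * x + rng.normal()
    y.append(x + rng.normal())
chain = pmmh(np.array(y))
```

Note the key pseudo-marginal detail: the stored log-likelihood estimate at the current `phi` is reused rather than recomputed, which is what keeps the chain targeting the correct posterior.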

2.
There are two generations of Gibbs sampling methods for semiparametric models involving the Dirichlet process. The first generation suffered from a severe drawback: the locations of the clusters, or groups of parameters, could essentially become fixed, moving only rarely. Two strategies that have been proposed to create the second generation of Gibbs samplers are integration and appending a second stage to the Gibbs sampler wherein the cluster locations are moved. We show that these same strategies are easily implemented for the sequential importance sampler, and that the first strategy dramatically improves results. As in the case of Gibbs sampling, these strategies are applicable to a much wider class of models. They are shown to provide more uniform importance sampling weights and lead to additional Rao-Blackwellization of estimators.

3.
We consider importance sampling (IS) type weighted estimators based on Markov chain Monte Carlo (MCMC) targeting an approximate marginal of the target distribution. In the context of Bayesian latent variable models, the MCMC typically operates on the hyperparameters, and the subsequent weighting may be based on IS or sequential Monte Carlo (SMC), but allows for multilevel techniques as well. The IS approach provides a natural alternative to delayed acceptance (DA) pseudo-marginal/particle MCMC, and has many advantages over DA, including a straightforward parallelization and additional flexibility in MCMC implementation. We detail minimal conditions which ensure strong consistency of the suggested estimators, and provide central limit theorems with expressions for asymptotic variances. We demonstrate how our method can make use of SMC in the state space models context, using Laplace approximations and time-discretized diffusions. Our experimental results are promising and show that the IS-type approach can provide substantial gains relative to an analogous DA scheme, and is often competitive even without parallelization.
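The core reweighting idea is simple to illustrate outside the full MCMC setting: draw from a tractable approximation, then correct with self-normalised importance weights. In the toy sketch below a Student-t(3) distribution stands in for the approximate distribution and a standard normal for the exact target; the example, its names, and the quantity estimated (E[X^2] = 1 under the target) are all illustrative choices, not the paper's construction.

```python
import numpy as np

def is_estimate(n=20000, seed=0):
    """Self-normalised importance sampling: t(3) proposal, N(0,1) target,
    estimating E[X^2] = 1 under the target.  Returns (estimate, ESS)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_t(df=3, size=n)
    # Log target density: standard normal.
    log_p = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
    # Log proposal density: Student-t with 3 degrees of freedom.
    log_q = np.log(2 / (np.pi * np.sqrt(3))) - 2 * np.log1p(x**2 / 3)
    logw = log_p - log_q
    w = np.exp(logw - logw.max())      # stabilise before normalising
    w /= w.sum()
    ess = 1.0 / np.sum(w**2)           # effective sample size diagnostic
    return np.sum(w * x**2), ess

est, ess = is_estimate()
```

Because the heavy-tailed proposal dominates the light-tailed target, the weights are bounded and the effective sample size stays close to the nominal sample size; the same diagnostic is what one would monitor when weighting MCMC output.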

4.
We consider analysis of complex stochastic models based upon partial information. MCMC and reversible jump MCMC are often the methods of choice for such problems, but in some situations they can be difficult to implement and can suffer from problems such as poor mixing and the difficulty of diagnosing convergence. Here we review three alternatives to MCMC methods: importance sampling, the forward-backward algorithm, and sequential Monte Carlo (SMC). We discuss how to design good proposal densities for importance sampling, show some of the range of models for which the forward-backward algorithm can be applied, and show how resampling ideas from SMC can be used to improve the efficiency of the other two methods. We demonstrate these methods on a range of examples, including estimating the transition density of a diffusion and of a discrete-state continuous-time Markov chain; inferring structure in population genetics; and segmenting genetic divergence data.
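Of the three alternatives reviewed, the forward-backward algorithm is the most compact to write down: a scaled implementation for a discrete hidden Markov model fits in a few lines. The two-state chain below is a made-up example for illustration, not one of the paper's applications.

```python
import numpy as np

def forward_backward(obs, A, B, pi):
    """Scaled forward-backward smoothing for a discrete HMM.
    A: K x K transition matrix, B: K x M emission matrix, pi: initial dist.
    Returns the smoothed marginals gamma[t, k] = p(z_t = k | obs)."""
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    c = np.zeros(T)                               # per-step scaling constants
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):                         # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta = np.ones((T, K))
    for t in range(T - 2, -1, -1):                # backward pass, same scaling
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # sticky two-state chain (hypothetical)
B = np.array([[0.8, 0.2], [0.3, 0.7]])   # state 0 mostly emits symbol 0
pi = np.array([0.5, 0.5])
gamma = forward_backward([0, 0, 1, 1, 1], A, B, pi)
```

The scaling constants `c[t]` are the incremental likelihoods, so the same pass also yields the log-likelihood of the observation sequence as `np.log(c).sum()`.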

5.
This article presents a new way of modeling time-varying volatility. We generalize the usual stochastic volatility models to encompass regime-switching properties. The unobserved state variables are governed by a first-order Markov process. Bayesian estimators are constructed by Gibbs sampling. High-, medium- and low-volatility states are identified for the Standard and Poor's 500 weekly return data. Persistence in volatility is explained by the persistence in the low- and the medium-volatility states. The high-volatility regime is able to capture the 1987 crash and overlap considerably with four U.S. economic recession periods.

6.
Gibbs sampling has had great success in the analysis of mixture models. In particular, the "latent variable" formulation of the mixture model greatly reduces computational complexity. However, one failing of this approach is the possible existence of almost-absorbing states, called trapping states, as it may require an enormous number of iterations to escape from these states. Here we examine an alternative approach to estimation in mixture models, one based on a Rao–Blackwellization argument applied to a latent-variable-based estimator. From this derivation we construct an alternative Monte Carlo sampling scheme that avoids trapping states.
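The Rao–Blackwellization argument can be illustrated on a toy mixture: conditioning an estimator on the latent allocations z and integrating out the remaining within-component noise can only reduce its variance. The two-component mixture below and every name in it are invented for illustration, not the paper's construction.

```python
import numpy as np

def compare_estimators(n=1000, reps=500, seed=0):
    """Toy Rao-Blackwellisation: estimate E[X] in a two-component normal
    mixture X | Z=k ~ N(mu_k, 1), Z ~ Categorical(p).  The 'plain' estimator
    averages the draws x_i; the Rao-Blackwellised one averages
    E[X | z_i] = mu_{z_i}, integrating out the within-component noise.
    Returns the Monte Carlo variance of each estimator over many replicates."""
    rng = np.random.default_rng(seed)
    mu = np.array([-1.0, 2.0])          # hypothetical component means
    p = np.array([0.3, 0.7])            # hypothetical mixture weights
    plain, rb = [], []
    for _ in range(reps):
        z = rng.choice(2, size=n, p=p)          # latent allocations
        x = mu[z] + rng.normal(size=n)          # observed draws
        plain.append(x.mean())                  # crude estimator
        rb.append(mu[z].mean())                 # Rao-Blackwellised estimator
    return np.var(plain), np.var(rb)

v_plain, v_rb = compare_estimators()
```

Both estimators are unbiased for E[X] = 1.1 here; the Rao–Blackwellised version drops the within-component variance term, which is the same mechanism exploited by the paper's latent-variable-based estimator.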

7.
In the non-conjugate Gibbs sampler, sampling from the full conditional densities requires black-box sampling methods. Recent suggestions include rejection sampling, adaptive rejection sampling, the generalized ratio of uniforms, and the Griddy-Gibbs sampler. This paper describes a general idea based on variate transformations which can be tailored to all of the above methods and increases the efficiency of the Gibbs sampler. Moreover, a simple technique to assess convergence is suggested and illustrative examples are presented.
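For contrast with the non-conjugate case, when the full conditionals are available in closed form no black-box step is needed at all. The textbook bivariate-normal Gibbs sampler below is a standard baseline sketch, included purely for illustration and unrelated to the paper's transformation idea.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=5000, burn=500, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Both full conditionals are exact: x | y ~ N(rho*y, 1 - rho^2), and
    symmetrically for y | x, so each sweep is two direct normal draws."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    s = np.sqrt(1 - rho**2)            # conditional standard deviation
    draws = []
    for i in range(n_iter):
        x = rho * y + s * rng.normal() # draw x | y
        y = rho * x + s * rng.normal() # draw y | x
        if i >= burn:
            draws.append((x, y))
    return np.array(draws)

draws = gibbs_bivariate_normal(0.8)
```

After burn-in the pairs (x, y) are draws from the target joint, so their empirical correlation should recover rho; this is the kind of simple case against which efficiency improvements can be benchmarked.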

8.
The authors discuss prior distributions that are conjugate to the multivariate normal likelihood when some of the observations are incomplete. They present a general class of priors for incorporating information about unidentified parameters in the covariance matrix. They analyze the special case of monotone patterns of missing data, providing an explicit recursive form for the posterior distribution resulting from a conjugate prior distribution. They develop an importance sampling and a Gibbs sampling approach to sample from a general posterior distribution and compare the two methods.

9.
Very recently, it has been suggested in the biomedical literature to combine computerized image analysis with non-uniform sampling to increase the efficiency of estimators of intensities of biological cell populations. We give this ingenious idea of empirical importance sampling a stochastic formulation, using point process theory and modern sampling theory. We develop statistical tools for assessing its efficiency and construct optimal model-based estimators of intensities. Examples of applications of empirical importance sampling in microscopy are provided.

10.
Particle filters (PF) and auxiliary particle filters (APF) are widely used sequential Monte Carlo (SMC) techniques. In this paper we comparatively analyse, from a non-asymptotic point of view, the Sampling Importance Resampling (SIR) PF with optimal conditional importance distribution (CID) and the fully adapted APF (FA). We compute the (finite-sample) conditional second-order moments of Monte Carlo (MC) estimators of a moment of interest of the filtering pdf, and analyse under which circumstances the FA-based estimator outperforms (or not) the optimal Sequential Importance Sampling (SIS)-based one. Our analysis is local, in the sense that we compare the estimators produced by one time step of the different SMC algorithms, starting from a common set of weighted points. This analysis enables us to propose a hybrid SIS/FA algorithm which automatically switches at each time step from one loop to the other. We finally validate our results via computer simulations.

11.
Bayesian analysis of outlier problems using the Gibbs sampler
We consider the Bayesian analysis of outlier models. We show that the Gibbs sampler brings considerable conceptual and computational simplicity to the problem of calculating posterior marginals. Although other techniques for finding posterior marginals are available, the Gibbs sampling approach is notable for its ease of implementation. Allowing the probability of an outlier to be unknown introduces an extra parameter into the model, but this turns out to involve only minor modification to the algorithm. We illustrate these ideas using a contaminated Gaussian distribution, a t-distribution, a contaminated binomial model and logistic regression.

12.
Particle filters for mixture models with an unknown number of components
We consider the analysis of data under mixture models where the number of components in the mixture is unknown. We concentrate on mixture Dirichlet process models, and in particular we consider such models under conjugate priors. This conjugacy enables us to integrate out many of the parameters in the model, and to discretize the posterior distribution. Particle filters are particularly well suited to such discrete problems, and we propose the use of the particle filter of Fearnhead and Clifford for this problem. The performance of this particle filter, when analyzing both simulated and real data from a Gaussian mixture model, is uniformly better than the particle filter algorithm of Chen and Liu. In many situations it outperforms a Gibbs sampler. We also show how models without the required amount of conjugacy can be efficiently analyzed by the same particle filter algorithm.

13.
Bayesian random effects models may be fitted using Gibbs sampling, but the Gibbs sampler can be slow mixing due to what might be regarded as lack of model identifiability. This slow mixing substantially increases the number of iterations required during Gibbs sampling. We present an analysis of data on immunity after Rubella vaccinations which results in a slow-mixing Gibbs sampler. We show that this problem of slow mixing can be resolved by transforming the random effects and then, if desired, expressing their joint prior distribution as a sequence of univariate conditional distributions. The resulting analysis shows that the decline in antibodies after Rubella vaccination is relatively shallow compared to the decline in antibodies which has been shown after Hepatitis B vaccination.

14.
Pricing options is an important problem in financial engineering. In many scenarios of practical interest, the price of a financial option on an underlying asset reduces to an expectation w.r.t. a diffusion process. In general, these expectations cannot be calculated analytically, and one way to approximate these quantities is via the Monte Carlo (MC) method; MC methods have been used to price options since at least the 1970s. It has been seen in Del Moral P, Shevchenko PV. [Valuation of barrier options using sequential Monte Carlo. 2014. arXiv preprint] and Jasra A, Del Moral P. [Sequential Monte Carlo methods for option pricing. Stoch Anal Appl. 2011;29:292–316] that Sequential Monte Carlo (SMC) methods are a natural tool to apply in this context and can vastly improve over standard MC. In this article, in a similar spirit to Del Moral and Shevchenko (2014) and Jasra and Del Moral (2011), we show that one can achieve significant gains by using SMC methods to construct a sequence of artificial target densities over time. In particular, we approximate the optimal importance sampling distribution in the SMC algorithm by using a sequence of weighting functions. This is demonstrated on two examples, barrier options and target accrual redemption notes (TARNs). We also provide a proof of unbiasedness of our SMC estimate.
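As a baseline for what SMC improves upon, plain MC pricing of a European call under geometric Brownian motion takes only a few lines (barrier options and TARNs replace the terminal payoff with a path-dependent one, which is where the SMC machinery pays off). All parameters below are arbitrary, and the Black–Scholes formula is included only as a sanity check on the estimate.

```python
import math
import numpy as np

def mc_call_price(S0, K, r, sigma, T, n=200_000, seed=0):
    """Plain Monte Carlo price of a European call under GBM:
    S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z),  Z ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    return math.exp(-r * T) * np.maximum(ST - K, 0.0).mean()  # discounted payoff

def bs_call_price(S0, K, r, sigma, T):
    """Black-Scholes closed form, used here only to check the MC estimate."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

mc = mc_call_price(100, 100, 0.05, 0.2, 1.0)
bs = bs_call_price(100, 100, 0.05, 0.2, 1.0)
```

With 200,000 draws the MC estimate agrees with the closed form to a few cents; for knock-out barriers the naive estimator wastes most of its samples on knocked-out paths, which is the inefficiency the article's sequence of artificial targets addresses.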

15.
Multivariate Logit models are convenient for describing multivariate correlated binary choices, as they provide closed-form likelihood functions. However, the computation time required for calculating choice probabilities increases exponentially with the number of choices, which makes maximum-likelihood-based estimation infeasible when many choices are considered. To solve this, we propose three novel estimation methods: (i) stratified importance sampling, (ii) composite conditional likelihood (CCL), and (iii) generalized method of moments, which yield consistent estimates and retain small-sample bias similar to that of maximum likelihood. Our simulation study shows that computation times for CCL are much smaller and that its efficiency loss is small.

16.
Gibbs sampler as a computer-intensive algorithm is an important statistical tool both in application and in theoretical work. This algorithm, in many cases, is time-consuming; this paper extends the concept of using the steady-state ranked simulated sampling approach, utilized in Monte Carlo methods by Samawi [On the approximation of multiple integrals using steady state ranked simulated sampling, 2010, submitted for publication], to improve the well-known Gibbs sampling algorithm. It is demonstrated that this approach provides unbiased estimators, in the case of estimating the means and the distribution function, and substantially improves the performance of the Gibbs sampling algorithm and convergence, which results in a significant reduction in the costs and time required to attain a certain level of accuracy. Similar to Casella and George [Explaining the Gibbs sampler, Am. Statist. 46(3) (1992), pp. 167–174], we provide some analytical properties in simple cases and compare the performance of our method using the same illustrations.

17.
The Quality Measurement Plan (QMP), as developed by Hoadley (1981), is a statistical method for analyzing discrete quality audit data which consist of the expected number of defects given the standard quality. The QMP is based on an empirical Bayes (EB) model of the audit sampling process. Despite its wide publicity, Hoadley's method has often been described as heuristic. In this paper we offer a hierarchical Bayes (HB) alternative to Hoadley's EB model, and overcome much of the criticism against this model. Gibbs sampling is used to implement the HB model proposed in this paper. Also, the convergence of the Gibbs sampler is monitored via the algorithm of Gelman and Rubin (1992).

18.
Nonlinear mixed-effect models are often used in the analysis of longitudinal data. However, it sometimes happens that missing values for some of the model covariates are not purely random. Motivated by an application to HIV viral dynamics, where this situation occurs, the author considers likelihood inference for this type of problem. His approach involves a Monte Carlo EM algorithm, along with a Gibbs sampler and rejection/importance sampling methods. A concrete application is provided.

19.
This paper considers multiple change-point estimation for the exponential distribution with truncated and censored data by Gibbs sampling. After all the missing data of interest are filled in by sampling methods such as rejection sampling, the complete-data likelihood function is obtained. The full conditional distributions of all parameters are discussed. The means of the Gibbs samples are taken as the Bayesian estimates of the parameters. The implementation steps of Gibbs sampling are described in detail. Finally, a simulation study is carried out, and the results show that the Bayesian estimates are fairly accurate.

20.
A common assumption in fitting panel data models is normality of the stochastic subject effects. This can be extremely restrictive, obscuring potential features of the true distribution. The objective of this article is to propose a modeling strategy, from a semi-parametric Bayesian perspective, that specifies a flexible distribution for the random effects in dynamic panel data models. This is addressed here by assuming a Dirichlet process mixture model, which introduces a Dirichlet process prior for the random-effects distribution. We address the role of initial conditions in dynamic processes, with emphasis on the joint modeling of start-up and subsequent responses. We adopt Gibbs sampling techniques to approximate posterior estimates. These topics are illustrated by a simulation study and also by testing hypothetical models in two empirical contexts drawn from economic studies. We use modified versions of information criteria to compare the fitted models.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号