Similar Literature
20 similar documents were retrieved (search time: 906 ms)
1.
In treating dynamic systems, sequential Monte Carlo methods use discrete samples to represent a complicated probability distribution and use rejection sampling, importance sampling and weighted resampling to complete the on-line 'filtering' task. We propose a special sequential Monte Carlo method, the mixture Kalman filter, which uses a random mixture of Gaussian distributions to approximate a target distribution. It is designed for on-line estimation and prediction of conditional and partial conditional dynamic linear models, which are themselves a class of widely used non-linear systems and also serve to approximate many others. Compared with a few available filtering methods including Monte Carlo methods, the gain in efficiency that is provided by the mixture Kalman filter can be very substantial. Another contribution of the paper is the formulation of many non-linear systems into conditional or partial conditional linear form, to which the mixture Kalman filter can be applied. Examples in target tracking and digital communications are given to demonstrate the procedures proposed.
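A minimal sketch of the marginalization idea behind a mixture Kalman filter, written for a hypothetical scalar two-regime model (all model matrices, names and parameter values are illustrative assumptions, not the paper's examples): each particle carries only a sampled regime indicator, while the linear-Gaussian part is handled exactly by a Kalman recursion, so the filtering distribution is represented as a random mixture of Gaussians.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixture_kalman_filter(y, F, H, Q, R, trans, m0, P0, n_part=500):
    """Minimal mixture Kalman filter for a hypothetical conditional
    dynamic linear model with a scalar state:
        lambda_t ~ Markov chain with transition matrix `trans`
        x_t = F[lambda_t] x_{t-1} + w_t,   w_t ~ N(0, Q[lambda_t])
        y_t = H[lambda_t] x_t + v_t,       v_t ~ N(0, R[lambda_t])
    Each particle carries a sampled indicator and an exact Kalman filter
    (mean, variance) for the linear part, so only lambda is sampled.
    """
    K = len(F)                          # number of regimes
    lam = rng.integers(0, K, n_part)    # initial indicators
    m = np.full(n_part, m0, float)      # Kalman means
    P = np.full(n_part, P0, float)      # Kalman variances
    w = np.full(n_part, 1.0 / n_part)   # particle weights
    means = []
    for yt in y:
        for i in range(n_part):
            lam[i] = rng.choice(K, p=trans[lam[i]])      # propagate indicator
            k = lam[i]
            m_pred = F[k] * m[i]                         # Kalman prediction
            P_pred = F[k] ** 2 * P[i] + Q[k]
            S = H[k] ** 2 * P_pred + R[k]                # innovation variance
            gain = P_pred * H[k] / S
            resid = yt - H[k] * m_pred
            w[i] *= np.exp(-0.5 * resid ** 2 / S) / np.sqrt(2 * np.pi * S)
            m[i] = m_pred + gain * resid                 # Kalman update
            P[i] = (1.0 - gain * H[k]) * P_pred
        w /= w.sum()
        means.append(np.sum(w * m))                      # filtered mean E[x_t | y_1:t]
        if 1.0 / np.sum(w ** 2) < n_part / 2:            # resample if ESS is low
            idx = rng.choice(n_part, n_part, p=w)
            lam, m, P = lam[idx], m[idx], P[idx]
            w[:] = 1.0 / n_part
    return np.array(means)

# Example usage with a hypothetical two-regime (quiet vs. noisy) random walk.
F, H = np.array([1.0, 1.0]), np.array([1.0, 1.0])
Q, R = np.array([0.1, 2.0]), np.array([0.5, 0.5])
trans = np.array([[0.95, 0.05], [0.05, 0.95]])
y_obs = np.cumsum(rng.normal(0.0, 1.0, 100))             # placeholder data
est = mixture_kalman_filter(y_obs, F, H, Q, R, trans, m0=0.0, P0=1.0)
```

Because the linear part is integrated out exactly, only the discrete indicator is sampled, which is where the efficiency gain over a plain particle filter comes from.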

2.
The standard Kalman filter cannot handle inequality constraints imposed on the state variables, as state truncation induces a nonlinear and non-Gaussian model. We propose a Rao-Blackwellized particle filter with the optimal importance function for forward filtering and the likelihood function evaluation. The particle filter effectively enforces the state constraints when the Kalman filter violates them. Monte Carlo experiments demonstrate excellent performance of the proposed particle filter with Rao-Blackwellization, in which the Gaussian linear sub-structure is exploited at both the cross-sectional and temporal levels.
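For orientation only, here is a much cruder way of imposing a state constraint with particles than the Rao-Blackwellized scheme with the optimal importance function described above: a plain bootstrap filter for a hypothetical nonnegative-state model in which particles violating the constraint simply receive zero weight. The model and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def constrained_bootstrap_filter(y, a=0.9, q=0.5, r=1.0, n_part=2000):
    """Bootstrap particle filter for the hypothetical model
        x_t = a x_{t-1} + w_t,  w_t ~ N(0, q),  subject to x_t >= 0
        y_t = x_t + v_t,        v_t ~ N(0, r)
    The constraint is enforced by zero-weighting violating particles; the
    paper's optimal importance function is more efficient but longer to state.
    """
    x = np.abs(rng.normal(0.0, 1.0, n_part))                # constrained initial particles
    means = []
    for yt in y:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_part)     # propagate from the transition
        w = np.exp(-0.5 * (yt - x) ** 2 / r)                # observation likelihood
        w *= (x >= 0.0)                                     # kill constraint violators
        w /= w.sum()
        means.append(np.sum(w * x))                         # constrained filtered mean
        x = x[rng.choice(n_part, n_part, p=w)]              # multinomial resampling
    return np.array(means)
```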

3.
We consider Bayesian parameter inference associated with partially-observed stochastic processes that start from a set B_0 and are stopped or killed at the first hitting time of a known set A. Such processes occur naturally within the context of a wide variety of applications. The associated posterior distributions are highly complex and posterior parameter inference requires the use of advanced Markov chain Monte Carlo (MCMC) techniques. Our approach uses a recently introduced simulation methodology, particle Markov chain Monte Carlo (PMCMC) (Andrieu et al. 2010), where sequential Monte Carlo (SMC) (Doucet et al. 2001; Liu 2001) approximations are embedded within MCMC. However, when the parameter of interest is fixed, standard SMC algorithms are not always appropriate for many stopped processes. In Chen et al. (2005) and Del Moral (2004), the authors introduce SMC approximations of multi-level Feynman-Kac formulae, which can lead to more efficient algorithms. This is achieved by devising a sequence of sets from B_0 to A and then performing the resampling step only when the samples of the process reach intermediate sets in the sequence. The choice of the intermediate sets is critical to the performance of such a scheme. In this paper, we demonstrate that multi-level SMC algorithms can be used as a proposal in PMCMC. In addition, we introduce a flexible strategy that adapts the sets for different parameter proposals. Our methodology is illustrated on the coalescent model with migration.

4.
Sequential Monte Carlo methods (also known as particle filters and smoothers) are used for filtering and smoothing in general state-space models. These methods are based on importance sampling. In practice, it is often difficult to find a suitable proposal which allows effective importance sampling. This article develops an original particle filter and an original particle smoother which employ nonparametric importance sampling. The basic idea is to use a nonparametric estimate of the marginally optimal proposal. The proposed algorithms provide a better approximation of the filtering and smoothing distributions than standard methods. The methods' advantage is most distinct in severely nonlinear situations. In contrast to most existing methods, they allow the use of quasi-Monte Carlo (QMC) sampling. In addition, they do not suffer from weight degeneration rendering a resampling step unnecessary. For the estimation of model parameters, an efficient on-line maximum-likelihood (ML) estimation technique is proposed which is also based on nonparametric approximations. All suggested algorithms have almost linear complexity for low-dimensional state-spaces. This is an advantage over standard smoothing and ML procedures. Particularly, all existing sequential Monte Carlo methods that incorporate QMC sampling have quadratic complexity. As an application, stochastic volatility estimation for high-frequency financial data is considered, which is of great importance in practice. The computer code is partly available as supplemental material.

5.
Markov chain Monte Carlo (MCMC) is an important computational technique for generating samples from non-standard probability distributions. A major challenge in the design of practical MCMC samplers is to achieve efficient convergence and mixing properties. One way to accelerate convergence and mixing is to adapt the proposal distribution in light of previously sampled points, thus increasing the probability of acceptance. In this paper, we propose two new adaptive MCMC algorithms based on the Independent Metropolis–Hastings algorithm. In the first, we adjust the proposal to minimize an estimate of the cross-entropy between the target and proposal distributions, using the experience of pre-runs. This approach provides a general technique for deriving natural adaptive formulae. The second approach uses multiple parallel chains, and involves updating chains individually, then updating a proposal density by fitting a Bayesian model to the population. An important feature of this approach is that adapting the proposal does not change the limiting distributions of the chains. Consequently, the adaptive phase of the sampler can be continued indefinitely. We include results of numerical experiments indicating that the new algorithms compete well with traditional Metropolis–Hastings algorithms. We also demonstrate the method for a realistic problem arising in Comparative Genomics.
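A sketch of the first (cross-entropy) idea under simplifying assumptions: for a Gaussian proposal family, minimizing the estimated cross-entropy between target and proposal reduces to matching the proposal's mean and covariance to those of the pre-run samples. The banana-shaped target and all tuning constants below are hypothetical; the second (parallel-chain) approach is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    """Hypothetical banana-shaped target (log density up to a constant)."""
    return -0.5 * (x[0] ** 2 / 10.0 + (x[1] + 0.5 * x[0] ** 2) ** 2)

def independent_mh(log_pi, mu, cov, n_iter, x0):
    """Independent Metropolis-Hastings with a fixed Gaussian proposal."""
    chol = np.linalg.cholesky(cov)
    inv = np.linalg.inv(cov)
    def log_q(x):                               # proposal log density (up to a constant)
        d = x - mu
        return -0.5 * d @ inv @ d
    x, samples = np.array(x0, float), []
    for _ in range(n_iter):
        prop = mu + chol @ rng.normal(size=len(mu))
        log_alpha = (log_pi(prop) - log_pi(x)) + (log_q(x) - log_q(prop))
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# Stage 1: pre-run with a wide, naive Gaussian proposal.
pre = independent_mh(log_target, np.zeros(2), 25.0 * np.eye(2), 5000, [0.0, 0.0])

# Stage 2: for a Gaussian family, the cross-entropy-minimizing proposal
# matches the empirical mean and covariance of the pre-run samples.
mu_hat = pre.mean(axis=0)
cov_hat = np.cov(pre.T) + 1e-6 * np.eye(2)      # small jitter for stability

main = independent_mh(log_target, mu_hat, cov_hat, 20000, mu_hat)
print("posterior mean estimate:", main.mean(axis=0))
```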

6.
Monte Carlo methods represent the de facto standard for approximating complicated integrals involving multidimensional target distributions. In order to generate random realizations from the target distribution, Monte Carlo techniques use simpler proposal probability densities to draw candidate samples. The performance of any such method is strictly related to the specification of the proposal distribution, such that unfortunate choices easily wreak havoc on the resulting estimators. In this work, we introduce a layered (i.e., hierarchical) procedure to generate samples employed within a Monte Carlo scheme. This approach ensures that an appropriate equivalent proposal density is always obtained automatically (thus eliminating the risk of a catastrophic performance), although at the expense of a moderate increase in the complexity. Furthermore, we provide a general unified importance sampling (IS) framework, where multiple proposal densities are employed and several IS schemes are introduced by applying the so-called deterministic mixture approach. Finally, given these schemes, we also propose a novel class of adaptive importance samplers using a population of proposals, where the adaptation is driven by independent parallel or interacting Markov chain Monte Carlo (MCMC) chains. The resulting algorithms efficiently combine the benefits of both IS and MCMC methods.
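A small illustration of the deterministic mixture approach to multiple importance sampling on which the unified framework builds (the target, the two proposals and all constants are hypothetical): each draw is weighted against the full mixture of proposal densities rather than only the component that generated it, which is what stabilizes the resulting estimator.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def target_unnorm(x):
    """Unnormalized target: a standard normal used as a stand-in."""
    return np.exp(-0.5 * x ** 2)

# Two hypothetical proposal densities (means and scales are arbitrary choices).
props = [norm(loc=-2.0, scale=1.5), norm(loc=2.0, scale=1.5)]
n_per = 5000

samples, weights = [], []
for q in props:
    x = rng.normal(q.mean(), q.std(), n_per)
    # Deterministic mixture weight: the denominator is the *mixture* of all
    # proposal densities, not just the one that generated the sample.
    mix = np.mean([p.pdf(x) for p in props], axis=0)
    w = target_unnorm(x) / mix
    samples.append(x)
    weights.append(w)

x = np.concatenate(samples)
w = np.concatenate(weights)
w /= w.sum()                              # self-normalized IS weights

print("E[x]   ~", np.sum(w * x))          # should be close to 0
print("E[x^2] ~", np.sum(w * x ** 2))     # should be close to 1
```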

7.
Gaussian proposal density using moment matching in SMC methods
In this article we introduce a new Gaussian proposal distribution to be used in conjunction with the sequential Monte Carlo (SMC) method for solving non-linear filtering problems. The proposal, in line with the recent trend, incorporates the current observation. The introduced proposal is characterized by the exact moments obtained from the dynamical system. This is in contrast with recent works where the moments are approximated either numerically or by linearizing the observation model. We show further that the newly introduced proposal performs better than other similar proposal functions which also incorporate both state and observations. This work was supported by a research grant from THALES Nederland BV.
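A sketch of the general idea in a special case where the exact moments are available in closed form (nonlinear dynamics with additive Gaussian noise and a linear Gaussian observation; the model and constants below are assumptions, not the article's examples): the proposal p(x_t | x_{t-1}, y_t) is Gaussian with its exact mean and variance, and the incremental weight is the predictive likelihood p(y_t | x_{t-1}).

```python
import numpy as np

rng = np.random.default_rng(4)

def optimal_proposal_pf(y, f, h=1.0, q=0.5, r=0.2, n_part=1000):
    """Particle filter whose proposal p(x_t | x_{t-1}, y_t) uses exact
    Gaussian moments, for the hypothetical model
        x_t = f(x_{t-1}) + w_t,  w_t ~ N(0, q)     (nonlinear dynamics)
        y_t = h * x_t + v_t,     v_t ~ N(0, r)     (linear observation)
    Because the observation is linear-Gaussian, the moments below are exact;
    the cited article treats more general settings in the same spirit.
    """
    x = rng.normal(0.0, 1.0, n_part)
    w = np.full(n_part, 1.0 / n_part)
    means = []
    s = 1.0 / (1.0 / q + h ** 2 / r)              # exact conditional variance of x_t
    for yt in y:
        fx = f(x)
        mu = s * (fx / q + h * yt / r)            # exact conditional mean of x_t
        x = mu + rng.normal(0.0, np.sqrt(s), n_part)
        pred_var = h ** 2 * q + r                 # predictive variance of y_t
        w *= np.exp(-0.5 * (yt - h * fx) ** 2 / pred_var)
        w /= w.sum()
        means.append(np.sum(w * x))
        idx = rng.choice(n_part, n_part, p=w)     # resample every step for brevity
        x, w = x[idx], np.full(n_part, 1.0 / n_part)
    return np.array(means)

# Example usage: simulate from the hypothetical model, then filter.
f = lambda x: 0.9 * x + 0.3 * np.sin(x)
xs, ys = 0.0, []
for _ in range(50):
    xs = f(xs) + rng.normal(0.0, np.sqrt(0.5))
    ys.append(xs + rng.normal(0.0, np.sqrt(0.2)))
est = optimal_proposal_pf(ys, f)
```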

8.
The author studies state space models for multivariate binomial time series, focusing on the development of the Kalman filter and smoother for the state variables. He proposes a Monte Carlo approach employing a latent variable representation, which transplants the classical Kalman filter and smoother developed for Gaussian state space models to discrete models and leads to a conceptually simple and computationally convenient method. The method is illustrated through simulations and concrete examples.

9.
This paper discusses tests for departures from nominal dispersion in the framework of generalized nonlinear models with varying dispersion and/or additive random effects. We consider two classes of exponential family distributions. The first is discrete exponential family distributions, such as the Poisson, binomial, and negative binomial distributions. The second is continuous exponential family distributions, such as the normal, gamma, and inverse Gaussian distributions. Correspondingly, we develop a unifying approach and propose several tests for departures from nominal dispersion in the two classes of generalized nonlinear models. The score test statistics are constructed and expressed in simple, easy-to-use matrix formulas, so that the tests can easily be implemented using existing statistical software. The properties of the test statistics are investigated through Monte Carlo simulations.

10.
This article focuses on simulation-based inference for time-deformation models directed by a duration process. In order to better capture the heavy-tail property of time series of financial asset returns, the innovation of the observation equation is further assumed to follow a Student-t distribution. Suitable Markov chain Monte Carlo (MCMC) algorithms, which are hybrids of Gibbs and slice samplers, are proposed for estimating the parameters of these models. In the algorithms, the parameters of the models can be sampled either directly from known distributions or through an efficient slice sampler. The states are simulated one at a time by a Metropolis-Hastings method, where the proposal distributions are sampled through a slice sampler. Simulation studies conducted in this article suggest that our extended models and accompanying MCMC algorithms work well in terms of parameter estimation and volatility forecasting.
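A sketch of a univariate slice sampler with stepping-out and shrinkage (in the style of Neal, 2003), the kind of building block that can be used inside a Gibbs scheme when a full conditional is known only up to a normalizing constant; the target below is a hypothetical unnormalized density, not one of the article's time-deformation models.

```python
import numpy as np

rng = np.random.default_rng(5)

def slice_sample(log_f, x0, n_iter, w=1.0, m=50):
    """Univariate slice sampler with stepping-out and shrinkage.

    log_f : log of the (unnormalized) target density
    w     : initial bracket width,  m : maximum number of expansions
    """
    x = float(x0)
    out = np.empty(n_iter)
    for i in range(n_iter):
        log_y = log_f(x) + np.log(rng.uniform())        # auxiliary slice level
        # Stepping out: grow an interval [L, R] until it brackets the slice.
        L = x - w * rng.uniform()
        R = L + w
        j = int(m * rng.uniform())
        k = m - 1 - j
        while j > 0 and log_f(L) > log_y:
            L -= w; j -= 1
        while k > 0 and log_f(R) > log_y:
            R += w; k -= 1
        # Shrinkage: sample uniformly in [L, R], shrinking on rejection.
        while True:
            x1 = L + (R - L) * rng.uniform()
            if log_f(x1) > log_y:
                x = x1
                break
            if x1 < x:
                L = x1
            else:
                R = x1
        out[i] = x
    return out

# Hypothetical Gamma(3, 1)-shaped target, known only up to a constant.
draws = slice_sample(lambda t: 2.0 * np.log(t) - t if t > 0 else -np.inf,
                     x0=1.0, n_iter=5000)
print("sample mean (true value 3):", draws.mean())
```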

11.
Approximate Bayesian computation (ABC) is a popular approach to address inference problems where the likelihood function is intractable, or expensive to calculate. To improve over Markov chain Monte Carlo (MCMC) implementations of ABC, the use of sequential Monte Carlo (SMC) methods has recently been suggested. Most effective SMC algorithms that are currently available for ABC have a computational complexity that is quadratic in the number of Monte Carlo samples (Beaumont et al., Biometrika 86:983–990, 2009; Peters et al., Technical report, 2008; Toni et al., J. Roy. Soc. Interface 6:187–202, 2009) and require the careful choice of simulation parameters. In this article an adaptive SMC algorithm is proposed which admits a computational complexity that is linear in the number of samples and adaptively determines the simulation parameters. We demonstrate our algorithm on a toy example and on a birth-death-mutation model arising in epidemiology.
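For context, a basic, non-adaptive ABC population Monte Carlo sampler of the quadratic-complexity kind cited above, on a toy Gaussian-mean problem; the tolerance schedule, model and constants are hypothetical, and the O(N) reweighting per particle in the inner loop is exactly the per-population quadratic cost that the article's adaptive, linear-complexity algorithm avoids.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Toy setup: data are N(theta_true, 1); the summary statistic is the mean.
theta_true, n_obs = 1.5, 100
data = rng.normal(theta_true, 1.0, n_obs)
s_obs = data.mean()

def simulate_summary(theta):
    return rng.normal(theta, 1.0, n_obs).mean()

prior = norm(0.0, 10.0)                        # prior theta ~ N(0, 10^2)
epsilons = [1.0, 0.5, 0.2, 0.1, 0.05]          # fixed schedule; the article adapts this
n_part = 500

# Population 0: plain ABC rejection from the prior.
theta = np.empty(n_part)
for i in range(n_part):
    while True:
        t = rng.normal(0.0, 10.0)
        if abs(simulate_summary(t) - s_obs) < epsilons[0]:
            theta[i] = t
            break
w = np.full(n_part, 1.0 / n_part)

# Later populations: move, simulate, re-accept, reweight.
for eps in epsilons[1:]:
    tau = np.sqrt(2.0 * np.cov(theta, aweights=w))     # perturbation scale
    new_theta, new_w = np.empty(n_part), np.empty(n_part)
    for i in range(n_part):
        while True:
            t = rng.choice(theta, p=w) + rng.normal(0.0, tau)
            if abs(simulate_summary(t) - s_obs) < eps:
                break
        new_theta[i] = t
        # O(N) weight per particle -> O(N^2) per population.
        kern = norm.pdf(t, loc=theta, scale=tau)
        new_w[i] = prior.pdf(t) / np.sum(w * kern)
    theta, w = new_theta, new_w / new_w.sum()

print("ABC posterior mean:", np.sum(w * theta))
```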

12.
Fitting stochastic kinetic models represented by Markov jump processes within the Bayesian paradigm is complicated by the intractability of the observed-data likelihood. There has therefore been considerable attention given to the design of pseudo-marginal Markov chain Monte Carlo algorithms for such models. However, these methods are typically computationally intensive, often require careful tuning and must be restarted from scratch upon receipt of new observations. Sequential Monte Carlo (SMC) methods on the other hand aim to efficiently reuse posterior samples at each time point. Despite their appeal, applying SMC schemes in scenarios with both dynamic states and static parameters is made difficult by the problem of particle degeneracy. A principled approach for overcoming this problem is to move each parameter particle through a Metropolis-Hastings kernel that leaves the target invariant. This rejuvenation step is key to a recently proposed SMC² algorithm, which can be seen as the pseudo-marginal analogue of an idealised scheme known as iterated batch importance sampling. Computing the parameter weights in SMC² requires running a particle filter over dynamic states to unbiasedly estimate the intractable observed-data likelihood up to the current time point. In this paper, we propose to use an auxiliary particle filter inside the SMC² scheme. Our method uses two recently proposed constructs for sampling conditioned jump processes, and we find that the resulting inference schemes typically require fewer state particles than when using a simple bootstrap filter. Using two applications, we compare the performance of the proposed approach with various competing methods, including two global MCMC schemes.
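A minimal bootstrap particle filter that returns the log of an unbiased estimate of the observed-data likelihood, which is the ingredient needed to weight parameter particles in SMC²-type schemes; the linear Gaussian model is a placeholder, and the article's auxiliary particle filter and conditioned jump-process constructs are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def bootstrap_log_likelihood(y, theta, n_part=200):
    """Bootstrap particle filter estimate of log p(y_{1:T} | theta) for the
    hypothetical model
        x_t = theta['a'] * x_{t-1} + w_t,  w_t ~ N(0, theta['q'])
        y_t = x_t + v_t,                   v_t ~ N(0, theta['r'])
    The per-step average of unnormalized weights accumulates into an unbiased
    likelihood estimate, which is what pseudo-marginal and SMC^2 schemes need.
    """
    a, q, r = theta["a"], theta["q"], theta["r"]
    x = rng.normal(0.0, 1.0, n_part)
    log_lik = 0.0
    for yt in y:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_part)        # propagate
        logw = -0.5 * (yt - x) ** 2 / r - 0.5 * np.log(2 * np.pi * r)
        c = logw.max()                                         # log-sum-exp trick
        w = np.exp(logw - c)
        log_lik += c + np.log(w.mean())                        # likelihood increment
        x = x[rng.choice(n_part, n_part, p=w / w.sum())]       # resample
    return log_lik

# Example: evaluate two parameter particles on simulated data.
true = {"a": 0.8, "q": 0.5, "r": 1.0}
xs, ys = 0.0, []
for _ in range(100):
    xs = true["a"] * xs + rng.normal(0.0, np.sqrt(true["q"]))
    ys.append(xs + rng.normal(0.0, np.sqrt(true["r"])))
print(bootstrap_log_likelihood(ys, true),
      bootstrap_log_likelihood(ys, {"a": 0.2, "q": 0.5, "r": 1.0}))
```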

13.
This paper aims at evaluating different aspects of the Monte Carlo expectation–maximization (EM) algorithm for estimating heavy-tailed mixed logistic regression (MLR) models. As a novelty, it also proposes a multiple-chain Gibbs sampler to generate from the distributions of the latent variables, thus obtaining independent samples. In heavy-tailed MLR models, the analytical forms of the full conditional distributions of the random effects are unknown. Four different Metropolis–Hastings algorithms are considered for generating from them. We also discuss stopping rules in order to obtain more efficient algorithms for heavy-tailed MLR models. The algorithms are compared through the analysis of simulated data and Ascaris suum data.

14.
We propose a density-tempered marginalized sequential Monte Carlo (SMC) sampler, a new class of samplers for full Bayesian inference of general state-space models. The dynamic states are approximately marginalized out using a particle filter, and the parameters are sampled via a sequential Monte Carlo sampler over a density-tempered bridge between the prior and the posterior. Our approach delivers exact draws from the joint posterior of the parameters and the latent states for any given number of state particles and is thus easily parallelizable in implementation. We also build into the proposed method a device that can automatically select a suitable number of state particles. Since the method incorporates sample information in a smooth fashion, it delivers good performance in the presence of outliers. We check the performance of the density-tempered SMC algorithm using simulated data based on a linear Gaussian state-space model with and without misspecification. We also apply it to real stock prices using a GARCH-type model with microstructure noise.

15.
We introduce a new class of interacting Markov chain Monte Carlo (MCMC) algorithms which is designed to increase the efficiency of a modified multiple-try Metropolis (MTM) sampler. The extension with respect to the existing MCMC literature is twofold. First, the sampler proposed extends the basic MTM algorithm by allowing for different proposal distributions in the multiple-try generation step. Second, we exploit the different proposal distributions to naturally introduce an interacting MTM mechanism (IMTM) that expands the class of population Monte Carlo methods and builds connections with the rapidly expanding world of adaptive MCMC. We show the validity of the algorithm and discuss the choice of the selection weights and of the different proposals. The numerical studies show that the interaction mechanism allows the IMTM to efficiently explore the state space leading to higher efficiency than other competing algorithms.
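A sketch of one basic multiple-try Metropolis update with a single symmetric Gaussian proposal and the simplest valid weighting w(y) = π(y); the article's extension, with several distinct proposal distributions and interacting parallel chains, is not shown. The bimodal target and all constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

def log_pi(x):
    """Hypothetical bimodal target (log density up to a constant)."""
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def mtm_step(x, k=5, step=2.0):
    """One multiple-try Metropolis update with a symmetric Gaussian proposal
    and candidate weights w(y) = pi(y)."""
    ys = x + step * rng.normal(size=k)               # k candidate moves
    wy = np.exp(log_pi(ys))
    y = ys[rng.choice(k, p=wy / wy.sum())]           # select one candidate
    xs = y + step * rng.normal(size=k - 1)           # reference points drawn from y
    wx = np.exp(log_pi(xs)).sum() + np.exp(log_pi(x))
    alpha = min(1.0, wy.sum() / wx)                  # generalized acceptance ratio
    return y if rng.uniform() < alpha else x

x, chain = 0.0, []
for _ in range(20000):
    x = mtm_step(x)
    chain.append(x)
print("time spent in the right mode:", np.mean(np.array(chain) > 0))  # should be near 0.5
```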

16.
Convergence of Heavy-tailed Monte Carlo Markov Chain Algorithms
In this paper, we use recent results of Jarner & Roberts (Ann. Appl. Probab., 12, 2002, 224) to show polynomial convergence rates of Monte Carlo Markov chain algorithms with polynomial target distributions, in particular random-walk Metropolis algorithms, Langevin algorithms and independence samplers. We also use similar methodology to consider polynomial convergence of the Gibbs sampler on a constrained state space. The main result for the random-walk Metropolis algorithm is that heavy-tailed proposal distributions lead to higher rates of convergence and thus to qualitatively better algorithms as measured, for instance, by the existence of central limit theorems for higher moments. Thus, the paper gives for the first time a theoretical justification for the common belief that heavy-tailed proposal distributions improve convergence in the context of random-walk Metropolis algorithms. Similar results are shown to hold for Langevin algorithms and the independence sampler, while results for the mixing of Gibbs samplers on uniform distributions on constrained spaces are rather different in character.
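A small empirical companion to the result, not a substitute for the paper's drift-condition analysis: random-walk Metropolis on a Cauchy target with a Gaussian versus a Student-t proposal, comparing how often each chain visits the target's tails. The target, step sizes and tail threshold are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(9)

def log_target(x):
    """Heavy-tailed (standard Cauchy) log density up to a constant."""
    return -np.log1p(x ** 2)

def rw_metropolis(proposal, n_iter=100000, x0=0.0):
    """Random-walk Metropolis; `proposal` draws a symmetric increment."""
    x, out = x0, np.empty(n_iter)
    lp = log_target(x)
    for i in range(n_iter):
        prop = x + proposal()
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x
    return out

gauss_chain = rw_metropolis(lambda: 2.0 * rng.normal())
heavy_chain = rw_metropolis(lambda: 2.0 * rng.standard_t(df=1.0))

# Heavy-tailed proposals tend to reach the target's tails far more often.
for name, c in [("gaussian", gauss_chain), ("student-t", heavy_chain)]:
    print(name, "fraction of time with |x| > 20:", np.mean(np.abs(c) > 20))
```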

17.
Bayesian modelling of spatial compositional data
Compositional data are vectors of proportions, specifying fractions of a whole. Aitchison (1986) defines logistic normal distributions for compositional data by applying a logistic transformation and assuming the transformed data to be multivariate normally distributed. In this paper we generalize this idea to spatially varying logistic data and thereby define logistic Gaussian fields. We consider the model in a Bayesian framework and discuss appropriate prior distributions. We consider both complete observations and observations of subcompositions or individual proportions, and discuss the resulting posterior distributions. In general, the posterior cannot be handled analytically, but the Gaussian base of the model allows us to define efficient Markov chain Monte Carlo algorithms. We use the model to analyse a data set of sediments in an Arctic lake. These data have previously been considered, but without taking the spatial aspect into account.
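A sketch of the non-spatial building block, Aitchison's logistic-normal construction (the dimensions, mean and covariance below are hypothetical): compositions are mapped to Euclidean space with the additive log-ratio transform, modelled as multivariate normal there, and mapped back to the simplex. The spatial (Gaussian field) and Bayesian layers of the paper sit on top of this.

```python
import numpy as np

rng = np.random.default_rng(10)

def alr(p):
    """Additive log-ratio transform: composition with d parts -> R^{d-1},
    using the last part as the reference."""
    p = np.asarray(p, float)
    return np.log(p[..., :-1] / p[..., -1:])

def alr_inv(z):
    """Inverse (logistic) transform: R^{d-1} -> simplex."""
    e = np.exp(np.concatenate([z, np.zeros(z.shape[:-1] + (1,))], axis=-1))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logistic-normal model for 3-part compositions.
mu = np.array([0.2, -0.4])
Sigma = np.array([[0.3, 0.1],
                  [0.1, 0.2]])
z = rng.multivariate_normal(mu, Sigma, size=1000)   # normal on the transformed scale
comps = alr_inv(z)                                  # valid compositions (rows sum to 1)

print(comps[:3], comps.sum(axis=1)[:3])
```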

18.
For the purpose of maximum likelihood estimation of static parameters, we apply a kernel smoother to the particles in the standard SIR filter for non-linear state space models with additive Gaussian observation noise. This reduces the Monte Carlo error in the estimates of both the posterior density of the states and the marginal density of the observation at each time point. We correct for variance inflation in the smoother, which, together with the use of Gaussian kernels, results in a Gaussian (Kalman) update as the amount of smoothing tends to infinity. We propose and study a criterion for choosing the optimal bandwidth h in the kernel smoother. Finally, we illustrate our approach using examples from econometrics. Our filter is shown to be highly suited for dynamic models with a high signal-to-noise ratio, for which the SIR filter has problems.
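A sketch of the core mechanic under simplifying assumptions (scalar linear Gaussian model, fixed bandwidth h, no likelihood computation): after resampling, the SIR particles are jittered with a Gaussian kernel, and a shrinkage factor sqrt(1 - h^2) is one standard correction that keeps the first two particle moments from being inflated; the paper's bandwidth-selection criterion is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(11)

def kernel_sir_filter(y, a=0.9, q=0.01, r=1.0, h=0.2, n_part=1000):
    """SIR filter with Gaussian-kernel jittering of the resampled particles,
    for the hypothetical high signal-to-noise model
        x_t = a x_{t-1} + w_t,  w_t ~ N(0, q)
        y_t = x_t + v_t,        v_t ~ N(0, r)
    """
    x = rng.normal(0.0, 1.0, n_part)
    means = []
    shrink = np.sqrt(1.0 - h ** 2)                           # variance-preserving shrinkage
    for yt in y:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_part)      # propagate
        w = np.exp(-0.5 * (yt - x) ** 2 / r)                 # observation likelihood
        w /= w.sum()
        means.append(np.sum(w * x))
        x = x[rng.choice(n_part, n_part, p=w)]               # resample
        xbar, var = x.mean(), x.var()
        # Kernel smoothing with shrinkage: leaves mean and variance unchanged.
        x = xbar + shrink * (x - xbar) + rng.normal(0.0, h * np.sqrt(var), n_part)
    return np.array(means)
```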

19.
Solving Bayesian estimation problems where the posterior distribution evolves over time through the accumulation of data has many applications for dynamic models. A large number of algorithms based on particle filtering methods, also known as sequential Monte Carlo algorithms, have recently been proposed to solve these problems. We propose a special particle filtering method which uses random mixtures of normal distributions to represent the posterior distributions of partially observed Gaussian state space models. This algorithm is based on a marginalization idea for improving efficiency and can lead to substantial gains over standard algorithms. It differs from previous algorithms which were only applicable to conditionally linear Gaussian state space models. Computer simulations are carried out to evaluate the performance of the proposed algorithm for dynamic tobit and probit models.

20.
A Monte Carlo algorithm is said to be adaptive if it automatically calibrates its current proposal distribution using past simulations. The choice of the parametric family that defines the set of proposal distributions is critical for good performance. In this paper, we present such a parametric family for adaptive sampling on high dimensional binary spaces. A practical motivation for this problem is variable selection in a linear regression context. We want to sample from a Bayesian posterior distribution on the model space using an appropriate version of Sequential Monte Carlo. Raw versions of Sequential Monte Carlo are easily implemented using binary vectors with independent components. For high dimensional problems, however, these simple proposals do not yield satisfactory results. The key to an efficient adaptive algorithm is a binary parametric family that takes correlations into account, analogously to the multivariate normal distribution on continuous spaces. We provide a review of models for binary data and make one of them work in the context of Sequential Monte Carlo sampling. Computational studies on real-life data with about a hundred covariates suggest that, on difficult instances, our Sequential Monte Carlo approach clearly outperforms standard techniques based on Markov chain exploration.
