Similar Documents
A total of 20 similar documents were retrieved (search time: 31 ms).
1.
Markov chain Monte Carlo methods, and in particular the Gibbs sampler, are widely used algorithms in both applied and theoretical work in the classical and Bayesian paradigms. However, these algorithms are often computer intensive. Samawi et al. [Steady-state ranked Gibbs sampler. J. Stat. Comput. Simul. 2012;82(8), 1223–1238. doi:10.1080/00949655.2011.575378] demonstrate through theory and simulation that the dependent steady-state Gibbs sampler is more efficient and accurate in model parameter estimation than the original Gibbs sampler. This paper proposes the independent steady-state Gibbs sampler (ISSGS) approach to improve the original Gibbs sampler in multidimensional problems. It is demonstrated that ISSGS provides accuracy with unbiased estimation and improves the performance and convergence of the Gibbs sampler in multidimensional problems.
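For reference, a minimal sketch of the original (non-steady-state) Gibbs sampler that papers such as this one aim to improve, written for a standard bivariate normal with correlation `rho`; the target, function name and all parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=5000, seed=0):
    """Plain systematic-scan Gibbs sampler for a bivariate normal with
    zero means, unit variances and correlation rho."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    cond_sd = np.sqrt(1.0 - rho**2)       # sd of X | Y and of Y | X
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(rho * y, cond_sd)  # draw X | Y = y
        y = rng.normal(rho * x, cond_sd)  # draw Y | X = x
        draws[t] = (x, y)
    return draws

draws = gibbs_bivariate_normal(rho=0.9)
print(draws.mean(axis=0), np.corrcoef(draws.T)[0, 1])
```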

2.
This paper is concerned with improving the performance of certain Markov chain algorithms for Monte Carlo simulation. We propose a new algorithm for simulating from multivariate Gaussian densities. This algorithm combines ideas from coupled Markov chain methods and from an existing algorithm based only on over-relaxation. The rate of convergence of the proposed and existing algorithms can be measured in terms of the square of the spectral radius of certain matrices. We present examples in which the proposed algorithm converges faster than the existing algorithm and the Gibbs sampler. We also derive an expression for the asymptotic variance of any linear combination of the variables simulated by the proposed algorithm. We outline how the proposed algorithm can be extended to non-Gaussian densities.
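As a point of reference for the over-relaxation ingredient mentioned above, a minimal sketch of an Adler-type over-relaxed sweep for a multivariate Gaussian with precision matrix `Q` (setting `alpha = 0` recovers the ordinary Gibbs sampler); this is not the proposed coupled-chain algorithm, and the function name, arguments and example are illustrative assumptions.

```python
import numpy as np

def overrelaxed_sweep(x, mean, Q, alpha, rng):
    """One systematic sweep of Adler-type over-relaxation for N(mean, inv(Q)).
    alpha = 0 reduces to the ordinary Gibbs sampler; alpha close to -1
    gives strongly over-relaxed updates."""
    d = len(x)
    for i in range(d):
        cond_var = 1.0 / Q[i, i]
        # conditional mean of x_i given the other coordinates
        resid = x - mean
        resid[i] = 0.0
        cond_mean = mean[i] - cond_var * Q[i] @ resid
        x[i] = (cond_mean + alpha * (x[i] - cond_mean)
                + np.sqrt(cond_var * (1.0 - alpha**2)) * rng.standard_normal())
    return x

# Example: 2-D Gaussian with correlation 0.95 (illustrative values)
Q = np.linalg.inv(np.array([[1.0, 0.95], [0.95, 1.0]]))
rng = np.random.default_rng(0)
x = np.zeros(2)
for _ in range(1000):
    x = overrelaxed_sweep(x, np.zeros(2), Q, alpha=-0.9, rng=rng)
```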

3.
In this article we investigate the relationship between the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler by Gaussian approximation is equal to that of the corresponding EM-type algorithm. This helps in implementing either algorithm, as improvement strategies for one algorithm can be directly transported to the other. In particular, by running the EM algorithm we know approximately how many iterations are needed for convergence of the Gibbs sampler. We also show that, under certain conditions, the EM algorithm used for finding the maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler for Bayesian inference. We illustrate our results in a number of realistic examples, all based on generalized linear mixed models.

4.
Likelihood-free methods such as approximate Bayesian computation (ABC) have extended the reach of statistical inference to problems with computationally intractable likelihoods. Such approaches perform well for small-to-moderate dimensional problems, but suffer a curse of dimensionality in the number of model parameters. We introduce a likelihood-free approximate Gibbs sampler that naturally circumvents the dimensionality issue by focusing on lower-dimensional conditional distributions. These distributions are estimated by flexible regression models either before the sampler is run, or adaptively during sampler implementation. As a result, and in comparison to Metropolis-Hastings-based approaches, we are able to fit substantially more challenging statistical models than would otherwise be possible. We demonstrate the sampler’s performance via two simulated examples, and a real analysis of Airbnb rental prices using an intractable high-dimensional multivariate nonlinear state-space model with a 36-dimensional latent state observed on 365 time points, which presents a real challenge to standard ABC techniques.

5.
The ordinal probit, univariate or multivariate, is a generalized linear model (GLM) structure that arises frequently in such disparate areas of statistical applications as medicine and econometrics. Despite the straightforwardness of its implementation using the Gibbs sampler, the ordinal probit may present challenges in obtaining satisfactory convergence. We present a multivariate Hastings-within-Gibbs update step for generating latent data and bin boundary parameters jointly, instead of individually from their respective full conditionals. When the latent data are parameters of interest, this algorithm substantially improves Gibbs sampler convergence for large datasets. We also discuss Monte Carlo Markov chain (MCMC) implementation of cumulative logit (proportional odds) and cumulative complementary log-log (proportional hazards) models with latent data.
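For context, a sketch of the standard single-site latent-data step in an ordinal-probit Gibbs sampler (each latent variable drawn from a truncated normal given its category), i.e. the kind of update that the joint Hastings-within-Gibbs move described above replaces; the names, cutpoint layout and example values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import truncnorm

def update_latent(y, eta, cutpoints, rng):
    """Update the latent variables of an ordinal probit model:
    z_i | rest ~ N(eta_i, 1) truncated to the bin of category y_i.
    cutpoints = (-inf, g_1, ..., g_{K-1}, +inf), y_i in {0, ..., K-1}."""
    lower = cutpoints[y]              # lower bin edge for each observation
    upper = cutpoints[y + 1]          # upper bin edge for each observation
    a, b = lower - eta, upper - eta   # standardized truncation bounds
    return truncnorm.rvs(a, b, loc=eta, scale=1.0, random_state=rng)

# Illustrative example with K = 3 categories
cuts = np.array([-np.inf, 0.0, 1.5, np.inf])
rng = np.random.default_rng(0)
z = update_latent(np.array([0, 2, 1]), np.array([0.2, -0.1, 0.8]), cuts, rng)
```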

6.
We describe standard single-site Monte Carlo Markov chain methods, the Hastings and Metropolis algorithms, the Gibbs sampler and simulated annealing, for maximum a posteriori and marginal posterior mode image estimation. These methods can experience great difficulty in traversing the whole image space in a finite time when the target distribution is multi-modal. We present a survey of multiple-site update methods, including Swendsen and Wang's algorithm, coupled Markov chains and cascade algorithms designed to tackle the problem of moving between modes of the posterior image distribution. We compare the performance of some of these algorithms for sampling from degraded and non-degraded Ising models.
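A minimal sketch of the single-site heat-bath (Gibbs) sweep for a two-dimensional Ising model, the baseline whose difficulty in moving between modes motivates the multiple-site methods surveyed; the periodic boundaries and all names are illustrative choices, not taken from the paper.

```python
import numpy as np

def gibbs_sweep_ising(spins, beta, rng):
    """One systematic single-site Gibbs (heat-bath) sweep for a 2-D Ising
    model with +/-1 spins, nearest-neighbour coupling and periodic
    boundaries; beta is the inverse temperature."""
    n, m = spins.shape
    for i in range(n):
        for j in range(m):
            # sum of the four neighbouring spins
            s = (spins[(i - 1) % n, j] + spins[(i + 1) % n, j]
                 + spins[i, (j - 1) % m] + spins[i, (j + 1) % m])
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))  # P(spin = +1 | rest)
            spins[i, j] = 1 if rng.random() < p_plus else -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(100):
    spins = gibbs_sweep_ising(spins, beta=0.4, rng=rng)
```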

7.
Bayesian shrinkage methods have generated a lot of interest in recent years, especially in the context of high-dimensional linear regression. In recent work, a Bayesian shrinkage approach using generalized double Pareto priors has been proposed. Several useful properties of this approach, including the derivation of a tractable three-block Gibbs sampler to sample from the resulting posterior density, have been established. We show that the Markov operator corresponding to this three-block Gibbs sampler is not Hilbert–Schmidt. We propose a simpler two-block Gibbs sampler and show that the corresponding Markov operator is trace class (and hence Hilbert–Schmidt). Establishing the trace class property for the proposed two-block Gibbs sampler has several useful consequences. Firstly, it implies that the corresponding Markov chain is geometrically ergodic, thereby implying the existence of a Markov chain central limit theorem, which in turn enables computation of asymptotic standard errors for Markov chain-based estimates of posterior quantities. Secondly, because the proposed Gibbs sampler uses two blocks, standard recipes in the literature can be used to construct a sandwich Markov chain (by inserting an appropriate extra step) to gain further efficiency and to achieve faster convergence. The trace class property for the two-block sampler implies that the corresponding sandwich Markov chain is also trace class and thereby geometrically ergodic. Finally, it also guarantees that all eigenvalues of the sandwich chain are dominated by the corresponding eigenvalues of the Gibbs sampling chain (with at least one strict domination). Our results demonstrate that a minor change in the structure of a Markov chain can lead to fundamental changes in its theoretical properties. We illustrate the improvement in efficiency resulting from our proposed Markov chains using simulated and real examples.

8.
Finite mixture of regression (FMR) models are aimed at characterizing subpopulation heterogeneity stemming from different sets of covariates that impact different groups in a population. We address the contemporary problem of simultaneously conducting covariate selection and determining the number of mixture components from a Bayesian perspective that can incorporate prior information. We propose a Gibbs sampling algorithm with reversible jump Markov chain Monte Carlo implementation to accomplish concurrent covariate selection and mixture component determination in FMR models. Our Bayesian approach contains innovative features compared to previously developed reversible jump algorithms. In addition, we introduce component-adaptive weighted g priors for regression coefficients, and illustrate their improved performance in covariate selection. Numerical studies show that the Gibbs sampler with reversible jump implementation performs well, and that the proposed weighted priors can be superior to non-adaptive unweighted priors.

9.
Convergence of Heavy-tailed Monte Carlo Markov Chain Algorithms
In this paper, we use recent results of Jarner & Roberts (Ann. Appl. Probab., 12, 2002, 224) to show polynomial convergence rates of Monte Carlo Markov chain algorithms with polynomial target distributions, in particular random-walk Metropolis algorithms, Langevin algorithms and independence samplers. We also use similar methodology to consider polynomial convergence of the Gibbs sampler on a constrained state space. The main result for the random-walk Metropolis algorithm is that heavy-tailed proposal distributions lead to higher rates of convergence and thus to qualitatively better algorithms as measured, for instance, by the existence of central limit theorems for higher moments. Thus, the paper gives for the first time a theoretical justification for the common belief that heavy-tailed proposal distributions improve convergence in the context of random-walk Metropolis algorithms. Similar results are shown to hold for Langevin algorithms and the independence sampler, while results for the mixing of Gibbs samplers on uniform distributions on constrained spaces are rather different in character.
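A minimal sketch of a random-walk Metropolis sampler with a heavy-tailed Student-t proposal of the kind discussed above, run on an illustrative one-dimensional polynomially tailed target; the target, degrees of freedom and scale are assumptions for the example, not values from the paper.

```python
import numpy as np

def rwm_student_t(log_target, x0, scale, df, n_iter, seed=0):
    """Random-walk Metropolis with a heavy-tailed Student-t proposal.
    Small df (e.g. 1-3) gives the heavy tails discussed above; large df
    approaches a Gaussian random walk."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lp = log_target(x)
    chain = np.empty((n_iter, x.size))
    for t in range(n_iter):
        prop = x + scale * rng.standard_t(df, size=x.size)  # symmetric proposal
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:             # Metropolis accept
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

# Example: polynomially tailed target, log pi(x) = -3 * log(1 + |x|)
chain = rwm_student_t(lambda x: -3.0 * np.log1p(np.abs(x[0])), x0=[0.0],
                      scale=1.0, df=2, n_iter=10000)
```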

10.
This article describes three methods for computing a discrete joint density from full conditional densities. They are the Gibbs sampler, a hybrid method, and an interaction-based method. The hybrid method uses the iterative proportional fitting algorithm, and it is derived from the mixed parameterization of a contingency table. The interaction-based approach is derived from the canonical parameters, while the Gibbs sampler can be regarded as based on the mean parameters. In short, different approaches are motivated by different parameterizations. The setting of a bivariate conditionally specified distribution is used as the premise for comparing the numerical accuracy of the three methods. Detailed comparisons of marginal distributions, odds ratios and expected values are reported. We give theoretical justifications as to why the hybrid method produces better approximations than the Gibbs sampler. Generalizations to more than two variables are discussed. In practice, the Gibbs sampler has certain advantages: it is conceptually easy to understand and there are many software tools available. Nevertheless, the hybrid method and the interaction-based method are accurate and simple alternatives when the Gibbs sampler results in a slowly mixing chain and requires substantial simulation effort.
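Of the three methods, the Gibbs-sampler approach is the simplest to sketch: run the chain defined by the two full conditionals and tabulate visit frequencies. The sketch below does only that for a bivariate discrete specification (the hybrid IPF-based and interaction-based methods are not reproduced); argument names and the checking example are illustrative.

```python
import numpy as np

def joint_from_conditionals_gibbs(P_x_given_y, P_y_given_x, n_iter=200000, seed=0):
    """Approximate a discrete joint density from its two full conditionals
    by running a Gibbs sampler and tabulating visit frequencies.
    P_x_given_y[i, j] = P(X=i | Y=j); P_y_given_x[j, i] = P(Y=j | X=i)."""
    rng = np.random.default_rng(seed)
    nx, ny = P_x_given_y.shape
    counts = np.zeros((nx, ny))
    x, y = 0, 0
    for _ in range(n_iter):
        x = rng.choice(nx, p=P_x_given_y[:, y])   # draw X | Y = y
        y = rng.choice(ny, p=P_y_given_x[:, x])   # draw Y | X = x
        counts[x, y] += 1
    return counts / n_iter

# Compatible conditionals derived from a known 2 x 3 joint, for checking
joint = np.array([[0.10, 0.20, 0.10], [0.25, 0.05, 0.30]])
P_x_y = joint / joint.sum(axis=0, keepdims=True)       # P(X | Y)
P_y_x = (joint / joint.sum(axis=1, keepdims=True)).T   # P(Y | X)
print(joint_from_conditionals_gibbs(P_x_y, P_y_x).round(3))
```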

11.
In complex models like hidden Markov chains, the convergence of the MCMC algorithms used to approximate the posterior distribution and the Bayes estimates of the parameters of interest must be controlled in a robust manner. We propose in this paper a series of online controls, which rely on classical non-parametric tests, to evaluate independence from the start-up distribution, stability of the Markov chain, and asymptotic normality. These tests lead to graphical control spreadsheets, which are presented in the set-up of normal mixture hidden Markov chains to compare the full Gibbs sampler with an aggregated Gibbs sampler based on the forward–backward formulas.
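A much cruder stand-in for such controls, showing the general flavour only: a two-sample Kolmogorov–Smirnov comparison of two thinned halves of a chain as a rough stability check. This is not the paper's control-spreadsheet procedure; the burn-in fraction, thinning and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def stability_check(chain, burn_frac=0.5, alpha=0.05):
    """Crude stability control: split the post-burn-in part of a scalar
    chain into two halves (thinned to reduce autocorrelation) and compare
    them with a two-sample Kolmogorov-Smirnov test."""
    chain = np.asarray(chain)
    tail = chain[int(burn_frac * len(chain)):]
    half = len(tail) // 2
    first, second = tail[:half:10], tail[half::10]   # thin by a factor of 10
    res = ks_2samp(first, second)
    return {"ks_statistic": res.statistic, "p_value": res.pvalue,
            "stable": res.pvalue > alpha}
```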

12.
The Gibbs sampler has been used extensively in the statistics literature. It relies on iteratively sampling from a set of compatible conditional distributions and the sampler is known to converge to a unique invariant joint distribution. However, the Gibbs sampler behaves rather differently when the conditional distributions are not compatible. Such applications have seen increasing use in areas such as multiple imputation. In this paper, we demonstrate that what a Gibbs sampler converges to is a function of the order of the sampling scheme. Besides providing the mathematical background of this behaviour, we also explain how that happens through a thorough analysis of the examples.
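A small simulation illustrating this order dependence, using a deliberately incompatible pair of Gaussian conditionals invented for the sketch (not an example from the paper): with these conditionals the two scan orders share the same marginal variances, but the recorded joint, e.g. its correlation, differs.

```python
import numpy as np

def gibbs_incompatible(order, n_iter=200000, seed=1):
    """Systematic-scan Gibbs with deliberately incompatible conditionals
    X | Y ~ N(0.5*y, 1) and Y | X ~ N(0.5*x, 4); the recorded joint
    depends on whether X or Y is updated first within each sweep."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        if order == "x_first":
            x = rng.normal(0.5 * y, 1.0)
            y = rng.normal(0.5 * x, 2.0)
        else:
            y = rng.normal(0.5 * x, 2.0)
            x = rng.normal(0.5 * y, 1.0)
        draws[t] = (x, y)
    return draws

for order in ("x_first", "y_first"):
    d = gibbs_incompatible(order)
    print(order, np.corrcoef(d.T)[0, 1])   # the correlations differ by order
```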

13.
There are two generations of Gibbs sampling methods for semiparametric models involving the Dirichlet process. The first generation suffered from a severe drawback: the locations of the clusters, or groups of parameters, could essentially become fixed, moving only rarely. Two strategies that have been proposed to create the second generation of Gibbs samplers are integration and appending a second stage to the Gibbs sampler wherein the cluster locations are moved. We show that these same strategies are easily implemented for the sequential importance sampler, and that the first strategy dramatically improves results. As in the case of Gibbs sampling, these strategies are applicable to a much wider class of models. They are shown to provide more uniform importance sampling weights and lead to additional Rao-Blackwellization of estimators.

14.
Using reinforced processes related to the beta-Stacy process and the generalized Pólya urn scheme, jointly with a structural assumption about dependence, a Bayesian nonparametric prior and a predictive estimator for a multivariate survival function are provided. This estimator can be computed through an easy implementation of a Gibbs sampler algorithm. Moreover, consistency of the estimator is studied.

15.
Parallel multivariate slice sampling
Slice sampling provides an easily implemented method for constructing a Markov chain Monte Carlo (MCMC) algorithm. However, slice sampling has two major drawbacks: (i) it requires repeated evaluation of likelihoods for each update, which can make it impractical when evaluations are expensive or when the number of evaluations grows (geometrically) with the dimension of the slice sampler, and (ii) since it can be challenging to construct multivariate updates, the updates are typically univariate, which often results in slow mixing samplers. We propose an approach to multivariate slice sampling that naturally lends itself to a parallel implementation. Our approach takes advantage of recent advances in computer architectures: for instance, the newest generation of graphics cards can execute roughly 30,000 threads simultaneously. We demonstrate that it is possible to construct a multivariate slice sampler that has good mixing properties and is efficient in terms of computing time. The contributions of this article are therefore twofold. We study approaches for constructing a multivariate slice sampler, and we show how parallel computing can be useful for making MCMC algorithms computationally efficient. We study various implementations of our algorithm in the context of real and simulated data.
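For comparison, a sketch of the standard univariate slice sampler with stepping-out and shrinkage, whose one-variable-at-a-time updates are the source of the slow mixing described above; this is not the proposed parallel multivariate sampler, and the names, step width and standard-normal example target are illustrative.

```python
import numpy as np

def slice_sample_1d(log_f, x0, w, n_iter, seed=0, max_steps=100):
    """Univariate slice sampler with stepping-out and shrinkage.
    log_f is the log target (up to a constant), w the initial slice width."""
    rng = np.random.default_rng(seed)
    x, out = x0, np.empty(n_iter)
    for t in range(n_iter):
        log_y = log_f(x) + np.log(rng.random())   # log height of the slice
        # stepping out: place an interval of width w around x and expand it
        left = x - w * rng.random()
        right = left + w
        for _ in range(max_steps):                # cap kept as a simple safeguard
            if log_f(left) <= log_y:
                break
            left -= w
        for _ in range(max_steps):
            if log_f(right) <= log_y:
                break
            right += w
        # shrinkage: sample uniformly until a point inside the slice is found
        while True:
            x_new = rng.uniform(left, right)
            if log_f(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        out[t] = x
    return out

draws = slice_sample_1d(lambda x: -0.5 * x**2, x0=0.0, w=2.0, n_iter=5000)
```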

16.
We provide a new approach to the sampling of the well-known mixture of Dirichlet process model. Recent attention has focused on retention of the random distribution function in the model, but sampling algorithms have then suffered from the countably infinite representation of these distributions. The key to the algorithm detailed in this article, which also keeps the random distribution functions, is the introduction of a latent variable that allows a finite, and known, number of objects to be sampled within each iteration of a Gibbs sampler.

17.
We present a multivariate sufficient statistic using Kronecker products that dramatically increases computational efficiency in evaluating likelihood functions and/or posterior distributions. In particular, we examine the example of multivariate regression in a Bayesian setting. We compare the computation time for using the Gibbs sampler both with and without the sufficient statistic. We show that the difference in computation time grows quadratically with the number of covariates and products and linearly with the number of individuals. In the simulation study, speedup factors ranging from at least 20 times to almost 300 times were observed when using the Kronecker sufficient statistic.
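The general precompute-the-sufficient-statistics idea can be sketched for the simpler case of univariate-response Bayesian linear regression (not the paper's Kronecker-product statistic for multivariate regression): X'X, X'Y and Y'Y are formed once, so each Gibbs iteration works only with p-by-p quantities. The prior choices and all names below are illustrative assumptions.

```python
import numpy as np

def gibbs_lm_suffstats(X, Y, n_iter=2000, tau2=100.0, seed=0):
    """Gibbs sampler for Bayesian linear regression (beta | sigma2 and
    sigma2 | beta) in which the sufficient statistics X'X, X'Y, Y'Y are
    computed once, so the per-iteration cost no longer depends on n."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, XtY, YtY = X.T @ X, X.T @ Y, float(Y @ Y)   # computed once
    sigma2, beta_draws = 1.0, np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | sigma2 ~ N(mean, cov) under a N(0, tau2 I) prior
        prec = XtX / sigma2 + np.eye(p) / tau2
        cov = np.linalg.inv(prec)
        mean = cov @ (XtY / sigma2)
        beta = rng.multivariate_normal(mean, cov)
        # sigma2 | beta ~ Inverse-Gamma(n/2, RSS/2) under a flat prior on log sigma2
        rss = YtY - 2 * beta @ XtY + beta @ XtX @ beta
        sigma2 = rss / (2.0 * rng.gamma(n / 2.0, 1.0))
        beta_draws[t] = beta
    return beta_draws

rng0 = np.random.default_rng(1)
X = rng0.standard_normal((500, 3))
Y = X @ np.array([1.0, -2.0, 0.5]) + rng0.standard_normal(500)
print(gibbs_lm_suffstats(X, Y).mean(axis=0))
```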

18.
This paper presents a method for adaptation in Metropolis–Hastings algorithms. A product of a proposal density and K copies of the target density is used to define a joint density which is sampled by a Gibbs sampler including a Metropolis step. This provides a framework for adaptation since the current value of all K copies of the target distribution can be used in the proposal distribution. The methodology is justified by standard Gibbs sampling theory and generalizes several previously proposed algorithms. It is particularly suited to Metropolis-within-Gibbs updating and we discuss the application of our methods in this context. The method is illustrated with both a Metropolis–Hastings independence sampler and a Metropolis-within-Gibbs independence sampler. Comparisons are made with standard adaptive Metropolis–Hastings methods.

19.
A blocked Gibbs sampler for NGG-mixture models via a priori truncation
We define a new class of random probability measures, approximating the well-known normalized generalized gamma (NGG) process. Our new process is defined from the representation of NGG processes as discrete measures where the weights are obtained by normalization of the jumps of Poisson processes and the support consists of independent identically distributed location points, but considering only jumps larger than a threshold \(\varepsilon\). Therefore, the number of jumps of the new process, called the \(\varepsilon\)-NGG process, is a.s. finite. A prior distribution for \(\varepsilon\) can be elicited. We assume such a process as the mixing measure in a mixture model for density and cluster estimation, and build an efficient Gibbs sampler scheme to simulate from the posterior. Finally, we discuss applications and performance of the model on two popular datasets, as well as comparisons with competitor algorithms, the slice sampler and a posteriori truncation.

20.
This paper aims at evaluating different aspects of the Monte Carlo expectation–maximization (EM) algorithm for estimating heavy-tailed mixed logistic regression (MLR) models. As a novelty, it also proposes a multiple-chain Gibbs sampler to generate from the distributions of the latent variables, thus obtaining independent samples. In heavy-tailed MLR models, the analytical forms of the full conditional distributions for the random effects are unknown. Four different Metropolis–Hastings algorithms are considered to generate from them. We also discuss stopping rules in order to obtain more efficient algorithms in heavy-tailed MLR models. The algorithms are compared through the analysis of simulated and Ascaris suum data.
