Similar articles: 20 results found.
1.
Markov chain Monte Carlo (MCMC) methods have become popular as a basis for drawing inference from complex statistical models. Two common difficulties with MCMC algorithms are slow mixing and long run-times, which are frequently closely related. Mixing over the entire state space can often be aided by careful tuning of the chain's transition kernel. In order to preserve the algorithm's stationary distribution, however, care must be taken when updating a chain's transition kernel based on that same chain's history. In this paper we introduce a technique that allows the transition kernel of the Gibbs sampler to be updated at user-specified intervals, while preserving the chain's stationary distribution. This technique seems to be beneficial both in increasing the efficiency of the resulting estimates (via Rao-Blackwellization) and in reducing the run-time. A reinterpretation, in terms of auxiliary samples, of the modified Gibbs sampling scheme introduced here allows its extension to the more general Metropolis-Hastings framework. The strategies we develop are particularly helpful when calculation of the full conditional (for a Gibbs algorithm) or of the proposal distribution (for a Metropolis-Hastings algorithm) is computationally expensive. Partial financial support from FAR 2002-3, University of Insubria, is gratefully acknowledged.

2.
Markov chain Monte Carlo (MCMC) routines have become a fundamental means for generating random variates from distributions otherwise difficult to sample. The Hastings sampler, which includes the Gibbs and Metropolis samplers as special cases, is the most popular MCMC method. A number of implementations are available for running these MCMC routines, varying in the order through which the components or blocks of the random vector of interest X are cycled or visited. The two most common implementations are the deterministic sweep strategy, whereby the components or blocks of X are updated successively and in a fixed order, and the random sweep strategy, whereby the components or blocks of X are updated in a randomly determined order. In this article, we present a general representation for MCMC updating schemes showing that the deterministic sweep is a special case of the random sweep. We also discuss decision criteria for choosing a sweep strategy.
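To make the two sweep strategies concrete, here is a minimal sketch (an assumed illustration, not code from the article) of a Gibbs sampler for a zero-mean bivariate normal with correlation rho, run with either a deterministic or a random sweep over the two coordinates.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=5000, sweep="deterministic", seed=0):
    """Gibbs sampler for a zero-mean bivariate normal with correlation rho.

    Full conditionals: x1 | x2 ~ N(rho*x2, 1-rho^2) and symmetrically for x2.
    sweep='deterministic' visits components 1, 2 in a fixed order each iteration;
    sweep='random' updates one component chosen uniformly at random.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    cond_sd = np.sqrt(1.0 - rho**2)
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        if sweep == "deterministic":
            order = (0, 1)                      # fixed visiting order
        else:
            order = (rng.integers(2),)          # one randomly chosen coordinate
        for j in order:
            other = 1 - j
            x[j] = rng.normal(rho * x[other], cond_sd)
        draws[t] = x
    return draws

det = gibbs_bivariate_normal(0.9, sweep="deterministic")
ran = gibbs_bivariate_normal(0.9, sweep="random")
print(det.mean(axis=0), ran.mean(axis=0))       # both should be near (0, 0)
```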

3.
The analysis of infectious disease data presents challenges arising from the dependence in the data and the fact that only part of the transmission process is observable. These difficulties are usually overcome by making simplifying assumptions. The paper explores the use of Markov chain Monte Carlo (MCMC) methods for the analysis of infectious disease data, with the hope that they will permit analyses to be made under more realistic assumptions. Two important kinds of data sets are considered, containing temporal and non-temporal information, from outbreaks of measles and influenza. Stochastic epidemic models are used to describe the processes that generate the data. MCMC methods are then employed to perform inference in a Bayesian context for the model parameters. The MCMC methods used include standard algorithms, such as the Metropolis–Hastings algorithm and the Gibbs sampler, as well as a new method that involves likelihood approximation. It is found that standard algorithms perform well in some situations but can exhibit serious convergence difficulties in others. The inferences that we obtain are in broad agreement with estimates obtained by other methods where they are available. However, we can also provide inferences for parameters which have not been reported in previous analyses.

4.
There are two conceptually distinct tasks in Markov chain Monte Carlo (MCMC): a sampler is designed for simulating a Markov chain and then an estimator is constructed on the Markov chain for computing integrals and expectations. In this article, we aim to address the second task by extending the likelihood approach of Kong et al. for Monte Carlo integration. We consider a general Markov chain scheme and use partial likelihood for estimation. Basically, the Markov chain scheme is treated as a random design and a stratified estimator is defined for the baseline measure. Further, we propose useful techniques including subsampling, regulation, and amplification for achieving overall computational efficiency. Finally, we introduce approximate variance estimators for the point estimators. The method can yield substantially improved accuracy compared with Chib's estimator and the crude Monte Carlo estimator, as illustrated with three examples.

5.
Yu (1995) provides a novel convergence diagnostic for Markov chain Monte Carlo (MCMC) which gives a qualitative measure of mixing for Markov chains via a cusum path plot for univariate parameters of interest. The method is based upon the output of a single replication of an MCMC sampler and is therefore widely applicable and simple to use. One criticism of the method is that it is subjective in its interpretation, since it is based upon a graphical comparison of two cusum path plots. In this paper, we develop a quantitative measure of smoothness which we can associate with any given cusum path, and show how we can use this measure to obtain a quantitative measure of mixing. In particular, we derive the large sample distribution of this smoothness measure, so that objective inference is possible. In addition, we show how this quantitative measure may also be used to provide an estimate of the burn-in length for any given sampler. We discuss the utility of this quantitative approach, and highlight a problem which may occur if the chain is able to remain in any one state for some period of time. We provide a more general implementation of the method to overcome the problem in such cases.
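As an assumed illustration of the cusum diagnostic (not the article's code; the smoothness statistic below is only one simple choice, and the article's measure and its large-sample distribution may be defined differently), the sketch computes the cusum path of an MCMC output sequence and the proportion of points at which the path turns, which is large for well-mixing chains and small for slowly mixing ones.

```python
import numpy as np

def cusum_path(theta):
    """Cusum path S_t = sum_{i<=t} (theta_i - mean(theta)) of an MCMC output sequence."""
    theta = np.asarray(theta, dtype=float)
    return np.cumsum(theta - theta.mean())

def hairiness(theta):
    """Proportion of interior points where the cusum path has a local extremum.

    A 'hairy' path (value near 1/2) suggests good mixing; a smooth path suggests
    slow mixing.  This is one simple smoothness measure, not necessarily the one
    derived in the article.
    """
    s = cusum_path(theta)
    d = np.diff(s)                       # increments of the cusum path
    turns = d[:-1] * d[1:] < 0           # sign change => local max or min at that point
    return turns.mean()

rng = np.random.default_rng(1)
iid = rng.normal(size=5000)              # well-mixed (independent) draws
ar = np.empty(5000)                      # slowly mixing AR(1) chain
ar[0] = 0.0
for t in range(1, ar.size):
    ar[t] = 0.99 * ar[t - 1] + rng.normal()
print(hairiness(iid), hairiness(ar))     # the iid value is much larger
```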

6.
In this paper, the Markov chain Monte Carlo (MCMC) method is used to estimate the parameters of a modified Weibull distribution based on a complete sample. While maximum-likelihood estimation (MLE) is the most widely used method for parameter estimation, MCMC has recently emerged as a good alternative. When applied to parameter estimation, MCMC methods have been shown to be easy to implement computationally, to yield estimates that always exist and are statistically consistent, and to give probability intervals that are convenient to construct. Details of applying MCMC to parameter estimation for the modified Weibull model are elaborated and a numerical example is presented to illustrate the methods of inference discussed in this paper. To compare MCMC with MLE, a simulation study is provided, and the differences between the estimates obtained by the two approaches are examined.
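As a rough sketch of what such an MCMC estimation can look like (assumed, not the article's implementation): a random-walk Metropolis sampler on the log parameters, here using the Lai–Xie–Murthy form of the modified Weibull, F(t) = 1 − exp(−a t^b e^{λt}); the article's parametrization, priors, and proposal may differ.

```python
import numpy as np

def loglik_modified_weibull(t, a, b, lam):
    """Log-likelihood for a complete sample t under the (assumed) modified Weibull
    F(x) = 1 - exp(-a * x**b * exp(lam * x)), with a, b, lam > 0."""
    if a <= 0 or b <= 0 or lam <= 0:
        return -np.inf
    return np.sum(np.log(a) + np.log(b + lam * t) + (b - 1) * np.log(t)
                  + lam * t - a * t**b * np.exp(lam * t))

def rw_metropolis(t, n_iter=20000, step=0.05, seed=0):
    """Random-walk Metropolis on log(a), log(b), log(lam) with flat priors on the logs."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(3)                               # log-parameters, start at (1, 1, 1)
    ll = loglik_modified_weibull(t, *np.exp(theta))
    out = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = theta + step * rng.normal(size=3)
        ll_prop = loglik_modified_weibull(t, *np.exp(prop))
        if np.log(rng.uniform()) < ll_prop - ll:      # Metropolis accept/reject
            theta, ll = prop, ll_prop
        out[i] = np.exp(theta)
    return out

# synthetic data just to exercise the sampler (not the article's numerical example)
rng = np.random.default_rng(1)
data = rng.weibull(1.5, size=200)
samples = rw_metropolis(data)
print(samples[5000:].mean(axis=0))                    # crude posterior means of (a, b, lam)
```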

7.
We consider conditional exact tests of factor effects in designed experiments for discrete response variables. Similarly to the analysis of contingency tables, a Markov chain Monte Carlo method can be used for performing exact tests, when large-sample approximations are poor and the enumeration of the conditional sample space is infeasible. For designed experiments with a single observation for each run, we formulate log-linear or logistic models and consider a connected Markov chain over an appropriate sample space. In particular, we investigate fractional factorial designs with $2^{p-q}$ runs, noting correspondences to the models for $2^{p-q}$ contingency tables.

8.
Two strategies that can potentially improve Markov chain Monte Carlo algorithms are to use derivative evaluations of the target density, and to suppress random walk behaviour in the chain. The use of one or both of these strategies has been investigated in a few specific applications, but neither is used routinely. We undertake a broader evaluation of these techniques, with a view to assessing their utility for routine use. In addition to comparing different algorithms, we also compare two different ways in which the algorithms can be applied to a multivariate target distribution. Specifically, the univariate version of an algorithm can be applied repeatedly to one-dimensional conditional distributions, or the multivariate version can be applied directly to the target distribution.
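One well-known scheme that combines both strategies, gradient information and suppressed random-walk behaviour, is the Metropolis-adjusted Langevin algorithm (MALA); the following is a generic sketch for illustration and is not taken from the article.

```python
import numpy as np

def mala(log_target, grad_log_target, x0, eps=0.5, n_iter=10000, seed=0):
    """Metropolis-adjusted Langevin algorithm: the proposal drifts along the gradient
    of the log target, and a Metropolis-Hastings correction keeps the target invariant."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    out = np.empty((n_iter, x.size))

    def log_q(to, frm):                  # log density of the Langevin proposal (up to a constant)
        mean = frm + 0.5 * eps**2 * grad_log_target(frm)
        return -np.sum((to - mean) ** 2) / (2 * eps**2)

    for i in range(n_iter):
        prop = x + 0.5 * eps**2 * grad_log_target(x) + eps * rng.normal(size=x.size)
        log_alpha = (log_target(prop) + log_q(x, prop)
                     - log_target(x) - log_q(prop, x))
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        out[i] = x
    return out

# example target: standard bivariate normal
log_target = lambda x: -0.5 * np.sum(x**2)
grad_log_target = lambda x: -x
draws = mala(log_target, grad_log_target, x0=np.zeros(2))
print(draws.mean(axis=0), draws.var(axis=0))   # roughly (0, 0) and (1, 1)
```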

9.
The ordinal probit, univariate or multivariate, is a generalized linear model (GLM) structure that arises frequently in such disparate areas of statistical applications as medicine and econometrics. Despite the straightforwardness of its implementation using the Gibbs sampler, the ordinal probit may present challenges in obtaining satisfactory convergence. We present a multivariate Hastings-within-Gibbs update step for generating latent data and bin boundary parameters jointly, instead of individually from their respective full conditionals. When the latent data are parameters of interest, this algorithm substantially improves Gibbs sampler convergence for large datasets. We also discuss Markov chain Monte Carlo (MCMC) implementation of cumulative logit (proportional odds) and cumulative complementary log-log (proportional hazards) models with latent data.
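For context, the baseline that such a joint update improves upon draws each latent variable individually from its full conditional, a normal truncated to the bin of the observed category (Albert–Chib style). The sketch below is an assumed illustration of that single-site latent-data step, not the multivariate Hastings-within-Gibbs update proposed in the article.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_latent(y, X, beta, cuts, rng):
    """Draw latent z_i ~ N(x_i' beta, 1) truncated to (cuts[y_i], cuts[y_i + 1]].

    y holds ordinal categories 0..K-1, and cuts is the vector of bin boundaries
    (-inf = cuts[0] < cuts[1] < ... < cuts[K] = +inf)."""
    mu = X @ beta
    lo = cuts[y] - mu                    # standardized truncation bounds
    hi = cuts[y + 1] - mu
    return mu + truncnorm.rvs(lo, hi, random_state=rng)

# toy example: 3 ordinal categories, one covariate plus an intercept (made-up data)
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.2, 1.0])
cuts = np.array([-np.inf, 0.0, 1.0, np.inf])
z_true = X @ beta_true + rng.normal(size=n)
y = np.searchsorted(cuts, z_true, side="left") - 1   # observed ordinal responses
z = sample_latent(y, X, beta_true, cuts, rng)        # one Gibbs latent-data draw
print(z[:5], y[:5])
```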

10.
Due to the escalating growth of big data sets in recent years, new Bayesian Markov chain Monte Carlo (MCMC) parallel computing methods have been developed. These methods partition large data sets by observations into subsets. However, for Bayesian nested hierarchical models, typically only a few parameters are common for the full data set, with most parameters being group specific. Thus, parallel Bayesian MCMC methods that take into account the structure of the model and split the full data set by groups rather than by observations are a more natural approach for analysis. Here, we adapt and extend a recently introduced two-stage Bayesian hierarchical modeling approach, and we partition complete data sets by groups. In stage 1, the group-specific parameters are estimated independently in parallel. The stage 1 posteriors are used as proposal distributions in stage 2, where the target distribution is the full model. Using three-level and four-level models, we show in both simulation and real data studies that results of our method agree closely with the full data analysis, with greatly increased MCMC efficiency and greatly reduced computation times. The advantages of our method versus existing parallel MCMC computing methods are also described.

11.
Markov chain Monte Carlo (MCMC) implementations of Bayesian inference for latent spatial Gaussian models are very computationally intensive, and restrictions on storage and computation time are limiting their application to large problems. Here we propose various parallel MCMC algorithms for such models. The algorithms' performance is discussed with respect to a simulation study, which demonstrates the increase in speed with which the algorithms explore the posterior distribution as a function of the number of processors. We also discuss how feasible problem size is increased by use of these algorithms.

12.
The random walk Metropolis algorithm is a simple Markov chain Monte Carlo scheme which is frequently used in Bayesian statistical problems. We propose a guided walk Metropolis algorithm which suppresses some of the random walk behavior in the Markov chain. This alternative algorithm is no harder to implement than the random walk Metropolis algorithm, but empirical studies show that it performs better in terms of efficiency and convergence time.
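A guided walk update of this kind can be sketched as follows (an assumed illustration in the spirit of Gustafson's guided walk, not necessarily the article's exact algorithm): an auxiliary direction variable is carried along with the state, proposals always move in the current direction, and the direction flips on rejection, which suppresses back-and-forth random-walk behaviour while leaving the target distribution invariant.

```python
import numpy as np

def guided_walk_metropolis(log_target, x0, step=0.5, n_iter=10000, seed=0):
    """Guided walk Metropolis for a univariate target (assumed sketch).

    An auxiliary direction p in {-1, +1} is carried along: each proposal moves a
    positive distance |z| in direction p, and p is flipped whenever the proposal
    is rejected.  The augmented chain in (x, p) leaves the target invariant."""
    rng = np.random.default_rng(seed)
    x, p = float(x0), 1
    out = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + p * abs(step * rng.normal())
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop                      # accept and keep moving the same way
        else:
            p = -p                        # reject and reverse direction
        out[i] = x
    return out

draws = guided_walk_metropolis(lambda x: -0.5 * x**2, x0=0.0)
print(draws.mean(), draws.var())          # roughly 0 and 1 for a standard normal target
```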

13.
While Markov chain Monte Carlo (MCMC) methods are frequently used for difficult calculations in a wide range of scientific disciplines, they suffer from a serious limitation: their samples are not independent and identically distributed. Consequently, estimates of expectations are biased if the initial value of the chain is not drawn from the target distribution. Regenerative simulation provides an elegant solution to this problem. In this article, we propose a simple regenerative MCMC algorithm to generate variates for any distribution.

14.
Let π denote an intractable probability distribution that we would like to explore. Suppose that we have a positive recurrent, irreducible Markov chain that satisfies a minorization condition and has π as its invariant measure. We provide a method of using simulations from the Markov chain to construct a statistical estimate of π from which it is straightforward to sample. We show that this estimate is 'strongly consistent' in the sense that the total variation distance between the estimate and π converges to 0 almost surely as the number of simulations grows. Moreover, we use some recently developed asymptotic results to provide guidance as to how much simulation is necessary. Draws from the estimate can be used to approximate features of π or as intelligent starting values for the original Markov chain. We illustrate our methods with two examples.

15.
In this article, to reduce the computational load of performing Bayesian variable selection, we used a variant of reversible jump Markov chain Monte Carlo methods together with the Holmes and Held (HH) algorithm to sample model index variables in logistic mixed models involving a large number of explanatory variables. Furthermore, we proposed a simple proposal distribution for the model index variables, and used a simulation study and a real example to compare the performance of the HH algorithm under our proposed and existing proposal distributions. The results show that the HH algorithm with our proposed proposal distribution is a computationally efficient and reliable selection method.

16.
Hidden Markov models form an extension of mixture models which provides a flexible class of models exhibiting dependence and a possibly large degree of variability. We show how reversible jump Markov chain Monte Carlo techniques can be used to estimate the parameters as well as the number of components of a hidden Markov model in a Bayesian framework. We employ a mixture of zero-mean normal distributions as our main example and apply this model to three sets of data from finance, meteorology and geomagnetism.
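Estimating the number of components by reversible jump MCMC is beyond a short sketch, but the model class itself is easy to illustrate. The assumed example below simulates a two-state hidden Markov chain with zero-mean normal emissions and evaluates the observed-data likelihood with the forward algorithm; the transition matrix and variances are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

def simulate_hmm(A, sigmas, n, seed=0):
    """Simulate a hidden Markov chain with zero-mean normal emissions:
    state s_t follows transition matrix A, observation y_t ~ N(0, sigmas[s_t]^2)."""
    rng = np.random.default_rng(seed)
    K = len(sigmas)
    s = np.empty(n, dtype=int)
    s[0] = rng.integers(K)
    for t in range(1, n):
        s[t] = rng.choice(K, p=A[s[t - 1]])
    y = rng.normal(0.0, np.asarray(sigmas)[s])
    return s, y

def hmm_loglik(y, A, sigmas, init):
    """Forward algorithm (with scaling) for the log-likelihood of the observations."""
    alpha = init * norm.pdf(y[0], 0.0, sigmas)
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for obs in y[1:]:
        alpha = (alpha @ A) * norm.pdf(obs, 0.0, sigmas)
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

A = np.array([[0.95, 0.05], [0.10, 0.90]])        # assumed two-state example
sigmas = np.array([0.5, 2.0])                     # low- and high-variability regimes
_, y = simulate_hmm(A, sigmas, n=1000)
print(hmm_loglik(y, A, sigmas, init=np.array([0.5, 0.5])))
```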

17.
We discuss the inversion of gas profiles (ozone, NO3, NO2, aerosols and neutral density) in the upper atmosphere from spectral occultation measurements. The data are produced by the 'Global Ozone Monitoring by Occultation of Stars' (GOMOS) instrument on board the Envisat satellite that was launched in March 2002. The instrument measures the attenuation of light spectra at various horizontal paths from about 100 km down to 10–20 km. The new feature is that these data allow the inversion of the gas concentration height profiles. A short introduction is given to the present operational data management procedure with examples of the first real data inversion. Several solution options for a more comprehensive statistical inversion are presented. A direct inversion leads to a non-linear model with hundreds of parameters to be estimated. The problem is solved with an adaptive single-step Markov chain Monte Carlo algorithm. Another approach is to divide the problem into several smaller-dimensional non-linear problems, to run parallel adaptive Markov chain Monte Carlo chains for them and to solve the gas profiles in repetitive linear steps. The effect of grid size is discussed, and we show how the prior regularization takes the grid size into account in a way that effectively leads to a grid-independent inversion.

18.
A Bayesian method for segmenting weed and crop textures is described and implemented. The work forms part of a project to identify weeds and crops in images so that selective crop spraying can be carried out. An image is subdivided into blocks and each block is modelled as a single texture. The number of different textures in the image is assumed unknown. A hierarchical Bayesian procedure is used where the texture labels have a Potts model (colour Ising Markov random field) prior and the pixels within a block are distributed according to a Gaussian Markov random field, with the parameters dependent on the type of texture. We simulate from the posterior distribution by using a reversible jump Metropolis–Hastings algorithm, where the number of different texture components is allowed to vary. The methodology is applied to a simulated image and then we carry out texture segmentation on the weed and crop images that motivated the work.

19.
We consider importance sampling (IS) type weighted estimators based on Markov chain Monte Carlo (MCMC) targeting an approximate marginal of the target distribution. In the context of Bayesian latent variable models, the MCMC typically operates on the hyperparameters, and the subsequent weighting may be based on IS or sequential Monte Carlo (SMC), but allows for multilevel techniques as well. The IS approach provides a natural alternative to delayed acceptance (DA) pseudo-marginal/particle MCMC, and has many advantages over DA, including a straightforward parallelization and additional flexibility in MCMC implementation. We detail minimal conditions which ensure strong consistency of the suggested estimators, and provide central limit theorems with expressions for asymptotic variances. We demonstrate how our method can make use of SMC in the state space models context, using Laplace approximations and time-discretized diffusions. Our experimental results are promising and show that the IS-type approach can provide substantial gains relative to an analogous DA scheme, and is often competitive even without parallelization.
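The basic idea can be sketched generically (an assumed illustration, not the article's implementation): run MCMC targeting a cheap approximate density, for example a Laplace-type approximation, and then correct estimates of posterior expectations with self-normalized importance weights proportional to the ratio of the true to the approximate unnormalized density.

```python
import numpy as np

def rw_metropolis(log_dens, x0=0.0, step=1.0, n_iter=20000, seed=0):
    """Random-walk Metropolis targeting the (unnormalized) density exp(log_dens)."""
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < log_dens(prop) - log_dens(x):
            x = prop
        out[i] = x
    return out

# unnormalized log densities: "expensive" target pi and a cheap approximation pi_tilde (both assumed)
log_pi = lambda x: -0.5 * (x - 1.0) ** 2 - 0.1 * x**4
log_pi_tilde = lambda x: -0.5 * (x - 1.0) ** 2          # e.g. a Laplace-type approximation

xs = rw_metropolis(log_pi_tilde)                        # MCMC on the approximation only
log_w = np.array([log_pi(x) - log_pi_tilde(x) for x in xs])
w = np.exp(log_w - log_w.max())                         # stabilized weights, self-normalized below

print(np.sum(w * xs) / np.sum(w))                       # IS-weighted estimate of E_pi[X]
print(xs.mean())                                        # uncorrected (biased) estimate
```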

20.
A major difficulty in meta-analysis is publication bias. Studies with positive outcomes are more likely to be published than studies reporting negative or inconclusive results. Correcting for this bias is not possible without making untestable assumptions. In this paper, a sensitivity analysis is discussed for the meta-analysis of 2×2 tables using exact conditional distributions. A Markov chain Monte Carlo EM algorithm is used to calculate maximum likelihood estimates. A rule for increasing the accuracy of estimation and automating the choice of the number of iterations is suggested.
