Similar Documents
20 similar documents found (search time: 718 ms)
1.
Generalized Gibbs samplers simulate from any direction, not necessarily limited to the coordinate directions of the parameters of the objective function. We study how to choose such directions optimally in a random-scan Gibbs sampler setting. We take the optimal directions to be those that minimize the Kullback–Leibler divergence between two Markov chain Monte Carlo steps. Two distributions over directions are proposed for the multivariate normal objective function. The resulting algorithms are used to simulate from a truncated multivariate normal distribution, and their performance is compared with that of two algorithms based on the Gibbs sampler.
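As a rough illustration of sampling along non-coordinate directions (a generic hit-and-run-style sketch, not the authors' optimal-direction scheme), the following code updates a bivariate normal target along uniformly random directions, using the exact one-dimensional Gaussian conditional along each direction; all names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
P = np.linalg.inv(Sigma)  # precision matrix of the N(0, Sigma) target

def direction_gibbs(n_iter=20000):
    """Gibbs-style sampler that updates along a random direction.

    At each step a uniformly random unit direction u is drawn and the
    exact 1-D Gaussian conditional of the target along u is sampled:
    for x' = x + t*u, t is N(-(u'Px)/(u'Pu), 1/(u'Pu)).
    """
    x = np.zeros(2)
    out = np.empty((n_iter, 2))
    for i in range(n_iter):
        u = rng.normal(size=2)
        u /= np.linalg.norm(u)            # uniform direction on the circle
        prec = u @ P @ u                  # 1-D conditional precision
        mean = -(u @ P @ x) / prec        # 1-D conditional mean of the offset t
        x = x + rng.normal(mean, 1.0 / np.sqrt(prec)) * u
        out[i] = x
    return out

samples = direction_gibbs()
print(np.cov(samples.T))  # should approximate Sigma
```

The sample covariance should be close to `Sigma`, confirming that the direction-wise conditional updates leave the target invariant.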

2.
Viewing the future order statistics as latent variables at each Gibbs sampling iteration, several Bayesian approaches are studied for predicting future order statistics based on Type-II censored order statistics, X(1), X(2), …, X(r), from a random sample of size n (> r) drawn from a four-parameter generalized modified Weibull (GMW) distribution. The four parameters of the GMW distribution are first estimated via a simulation study. Then various Bayesian approaches, including the plug-in method, the Monte Carlo method, the Gibbs sampling scheme, and the MCMC procedure, are proposed to develop prediction intervals for the unobserved order statistics. Finally, four Type-II censored samples are used to investigate the predictions.

3.
This article focuses on simulation-based inference for time-deformation models directed by a duration process. To better capture the heavy-tailed behaviour of financial asset return series, the innovation of the observation equation is further assumed to follow a Student-t distribution. Suitable Markov chain Monte Carlo (MCMC) algorithms, hybrids of Gibbs and slice samplers, are proposed for estimating the parameters of these models. In the algorithms, the model parameters can be sampled either directly from known distributions or through an efficient slice sampler. The states are simulated one at a time using a Metropolis–Hastings method, whose proposal distributions are sampled through a slice sampler. Simulation studies conducted in this article suggest that our extended models and accompanying MCMC algorithms work well in terms of parameter estimation and volatility forecasting.

4.
The Gibbs sampler, a computer-intensive algorithm, is an important statistical tool in both applied and theoretical work. In many cases the algorithm is time-consuming; this paper extends the steady-state ranked simulated sampling approach, used in Monte Carlo methods by Samawi [On the approximation of multiple integrals using steady state ranked simulated sampling, 2010, submitted for publication], to improve the well-known Gibbs sampling algorithm. It is demonstrated that this approach provides unbiased estimators when estimating means and distribution functions, and substantially improves the performance and convergence of the Gibbs sampling algorithm, yielding a significant reduction in the cost and time required to attain a given level of accuracy. In the spirit of Casella and George [Explaining the Gibbs sampler, Am. Statist. 46(3) (1992), pp. 167–174], we provide some analytical properties in simple cases and compare the performance of our method on the same illustrations.

5.
An automated (Markov chain) Monte Carlo EM algorithm
We present an automated Monte Carlo EM (MCEM) algorithm which efficiently assesses Monte Carlo error in the presence of dependent Monte Carlo, particularly Markov chain Monte Carlo, E-step samples and chooses an appropriate Monte Carlo sample size to minimize this Monte Carlo error with respect to successive EM step estimates. Monte Carlo error is gauged through an application of the central limit theorem during renewal periods of the MCMC sampler used in the E-step. The resulting normal approximation allows us to construct a rigorous and adaptive rule for updating the Monte Carlo sample size at each iteration of the MCEM algorithm. We illustrate our automated routine and compare its performance with competing MCEM algorithms in an analysis of a data set fit by a generalized linear mixed model.
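A minimal sketch of the MCEM idea on a toy right-censored normal model, assuming an independent (rather than MCMC) E-step and a simple CLT-based rule in place of the authors' renewal-theory construction; all names, thresholds, and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y_i ~ N(mu_true, 1), right-censored at c.
mu_true, c, n = 1.0, 1.5, 300
y = rng.normal(mu_true, 1.0, n)
obs = y[y < c]
n_cens = n - obs.size

def sample_trunc(mu, size):
    """Draw `size` values from N(mu, 1) truncated to [c, inf) by rejection."""
    out = np.empty(size)
    filled = 0
    while filled < size:
        draw = rng.normal(mu, 1.0, size * 4)
        keep = draw[draw >= c]
        take = min(keep.size, size - filled)
        out[filled:filled + take] = keep[:take]
        filled += take
    return out

def mcem(mu=0.0, m=50, n_iter=30):
    """MCEM: grow the MC sample size m whenever the EM step change
    is swamped by the Monte Carlo error of the E-step average."""
    for _ in range(n_iter):
        z = sample_trunc(mu, m * n_cens).reshape(m, n_cens)  # E-step replicates
        fills = z.mean(axis=1)               # per-replicate mean of censored values
        mu_new = (obs.sum() + n_cens * fills.mean()) / n     # M-step
        mc_se = n_cens * fills.std(ddof=1) / np.sqrt(m) / n  # CLT-based MC error
        if abs(mu_new - mu) < 3 * mc_se:     # step swamped by MC noise: raise m
            m = min(int(m * 1.5), 5000)      # cap m to bound the cost
        mu = mu_new
    return mu, m

mu_hat, m_final = mcem()
print(mu_hat, m_final)
```

The estimate should settle near the censored-data MLE (close to 1 here), with the MC sample size growing automatically as the EM increments shrink.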

6.
Nonlinear time series analysis plays an important role in the recent econometric literature, especially for the bilinear model. In this paper, we cast the bilinear time series model in a Bayesian framework and make inference using the Gibbs sampler, a Monte Carlo method. The proposed methodology is illustrated with generated examples, two real data sets, and a simulation study. The results show that the Gibbs sampler provides a very encouraging option for analyzing bilinear time series.

7.
Markov chain Monte Carlo (MCMC) methods, including the Gibbs sampler and the Metropolis–Hastings algorithm, are very commonly used in Bayesian statistics for sampling from complicated, high-dimensional posterior distributions. A continuing source of uncertainty is how long such a sampler must be run in order to converge approximately to its target stationary distribution. A method has previously been developed to compute rigorous theoretical upper bounds on the number of iterations required to achieve a specified degree of convergence in total variation distance by verifying drift and minorization conditions. We propose the use of auxiliary simulations to estimate the numerical values needed in this theorem. Our simulation method makes it possible to compute quantitative convergence bounds for models for which the requisite analytical computations would be prohibitively difficult or impossible. On the other hand, although our method appears to perform well in our example problems, it cannot provide the guarantees offered by analytical proof.

8.
We demonstrate the use of auxiliary (or latent) variables for sampling non-standard densities which arise in the context of the Bayesian analysis of non-conjugate and hierarchical models by using a Gibbs sampler. Their strategic use can result in a Gibbs sampler having easily sampled full conditionals. We propose such a procedure to simplify or speed up the Markov chain Monte Carlo algorithm. The strength of this approach lies in its generality and its ease of implementation. The aim of the paper, therefore, is to provide an alternative sampling algorithm to rejection-based methods and other sampling approaches such as the Metropolis–Hastings algorithm.
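The auxiliary variable device can be illustrated with a slice sampler: for a target f(x) proportional to exp(-x**4), introducing a uniform auxiliary variable u under the curve makes both full conditionals trivial to sample. This is a generic sketch of the idea, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(2)

def slice_sampler(n_iter=50000):
    """Gibbs sampler on (x, u) targeting f(x) ∝ exp(-x**4).

    The joint p(x, u) is uniform under the curve exp(-x**4), so:
      u | x  ~ Uniform(0, exp(-x**4))
      x | u  ~ Uniform on the slice {x : exp(-x**4) >= u}.
    """
    x = 0.0
    out = np.empty(n_iter)
    for i in range(n_iter):
        u = rng.uniform(0.0, np.exp(-x**4))   # vertical auxiliary variable
        bound = (-np.log(u)) ** 0.25          # slice is [-bound, bound]
        x = rng.uniform(-bound, bound)
        out[i] = x
    return out

s = slice_sampler()
print(s.mean(), (s**4).mean())
```

For this target E[x] = 0 by symmetry and E[x**4] = 1/4 exactly (from Gamma-function identities), which the sample moments should reproduce.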

9.
Markov chain Monte Carlo methods, and in particular the Gibbs sampler, are widely used algorithms in both applied and theoretical work in the classical and Bayesian paradigms. However, these algorithms are often computer-intensive. Samawi et al. [Steady-state ranked Gibbs sampler. J. Stat. Comput. Simul. 2012;82(8):1223–1238. doi:10.1080/00949655.2011.575378] demonstrate through theory and simulation that the dependent steady-state Gibbs sampler is more efficient and accurate in model parameter estimation than the original Gibbs sampler. This paper proposes the independent steady-state Gibbs sampler (ISSGS) approach to improve the original Gibbs sampler in multidimensional problems. It is demonstrated that ISSGS provides unbiased estimation and improves the performance and convergence of the Gibbs sampler in multidimensional problems.

10.
The particle Gibbs sampler is a systematic way of using a particle filter within Markov chain Monte Carlo. This yields an off-the-shelf Markov kernel on the space of state trajectories, which can be used to simulate from the full joint smoothing distribution of a state space model within a Markov chain Monte Carlo scheme. We show that the particle Gibbs Markov kernel is uniformly ergodic under rather general assumptions, which we carefully review and discuss. In particular, we provide an explicit rate of convergence, which reveals that (i) for a fixed number of data points, the convergence rate can be made arbitrarily good by increasing the number of particles, and (ii) under general mixing assumptions, the convergence rate can be kept constant by increasing the number of particles superlinearly with the number of observations. We illustrate the applicability of our result by studying in detail a common stochastic volatility model with a non-compact state space.

11.
We present a variant of the sequential Monte Carlo sampler by incorporating the partial rejection control mechanism of Liu (2001). We show that the resulting algorithm can be considered as a sequential Monte Carlo sampler with a modified mutation kernel. We prove that the new sampler can reduce the variance of the incremental importance weights when compared with standard sequential Monte Carlo samplers, and provide a central limit theorem. Finally, the sampler is adapted for application under the challenging approximate Bayesian computation modelling framework.

12.
We propose a multivariate tobit (MT) latent variable model, defined by a confirmatory factor analysis with covariates, for analysing mixed-type data that are inherently non-negative and sometimes contain a large proportion of zeros. Some useful MT models are special cases of our proposed model. To obtain maximum likelihood estimates, we use the expectation maximization algorithm, with its E-step made feasible by Monte Carlo simulation via the Gibbs sampler and its M-step greatly simplified by a sequence of conditional maximizations. Standard errors are evaluated by inverting a Monte Carlo approximation of the information matrix using Louis's method. The methodology is illustrated with a simulation study and a real example.

13.
We consider a Bayesian deterministically trending dynamic time series model with heteroscedastic error variance, in which there are multiple structural changes in level, trend, and error variance, but the number of change-points and their timings are unknown. For the Bayesian analysis, a truncated Poisson prior is used for the number of change-points and conjugate priors for the distributional parameters. To identify the best model and estimate the model parameters simultaneously, we propose a new adaptive Monte Carlo method that sequentially uses the Gibbs sampler in conjunction with stochastic approximation Monte Carlo simulations. The numerical results favor our method in terms of the quality of estimates.

14.
This paper evaluates different aspects of the Monte Carlo expectation-maximization algorithm for estimating heavy-tailed mixed logistic regression (MLR) models. As a novelty, it also proposes a multiple-chain Gibbs sampler to generate from the latent variables' distributions, thus obtaining independent samples. In heavy-tailed MLR models, the analytical forms of the full conditional distributions of the random effects are unknown, and four different Metropolis–Hastings algorithms are considered for generating from them. We also discuss stopping rules to obtain more efficient algorithms for heavy-tailed MLR models. The algorithms are compared through the analysis of simulated data and Ascaris suum data.

15.
The lasso is a popular technique for simultaneous estimation and variable selection in many research areas. When the regression coefficients have independent Laplace priors, their marginal posterior mode is equivalent to the estimate given by the non-Bayesian lasso. Because of the flexibility of its statistical inferences, the Bayesian approach has attracted a growing body of research in recent years. Current approaches either perform a fully Bayesian analysis using a Markov chain Monte Carlo (MCMC) algorithm or use Monte Carlo expectation maximization (MCEM) methods with an MCMC algorithm in each E-step. However, MCMC-based Bayesian methods carry a heavy computational burden and converge slowly. Tan et al. [An efficient MCEM algorithm for fitting generalized linear mixed models for correlated binary data. J Stat Comput Simul. 2007;77:929–943] proposed a non-iterative sampling approach, the inverse Bayes formula (IBF) sampler, for computing posteriors of a hierarchical model within the MCEM structure. Motivated by their paper, we develop an IBF sampler within the MCEM structure to give the marginal posterior mode of the regression coefficients for the Bayesian lasso, adjusting the importance sampling weights when the full conditional distribution is not explicit. Simulation experiments show that our expectation-maximization-based method greatly reduces computational time and behaves comparably with other Bayesian lasso methods in both prediction accuracy and variable selection accuracy, performing even better when the sample size is relatively large.
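For concreteness, a Park–Casella-style Gibbs sampler for the Bayesian lasso (the MCMC baseline such methods are compared against, not the IBF sampler proposed above) can be sketched as follows, with sigma^2 fixed at 1 to keep the code short; the data and all settings are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic sparse regression data (illustrative only).
n, p, lam = 100, 5, 2.0
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ beta_true + rng.normal(size=n)

def bayes_lasso_gibbs(n_iter=3000, burn=500):
    """Gibbs sampler for the Bayesian lasso (sigma^2 = 1 assumed known).

    Conditionals: beta | tau is Gaussian with precision X'X + diag(1/tau_j^2);
    1/tau_j^2 | beta is inverse-Gaussian(lam/|beta_j|, lam^2), drawn with
    numpy's `wald` (the inverse-Gaussian distribution).
    """
    inv_tau2 = np.ones(p)
    draws = np.empty((n_iter, p))
    XtX, Xty = X.T @ X, X.T @ y
    for i in range(n_iter):
        cov = np.linalg.inv(XtX + np.diag(inv_tau2))
        beta = rng.multivariate_normal(cov @ Xty, cov)
        inv_tau2 = rng.wald(lam / np.abs(beta), lam**2)
        draws[i] = beta
    return draws[burn:]

post = bayes_lasso_gibbs()
print(post.mean(axis=0).round(2))
```

The posterior means should sit near the true nonzero coefficients (2.0 and -1.5) while the null coefficients are shrunk toward zero.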

16.
In modelling financial return time series and time-varying volatility, the Gaussian and Student-t distributions are widely used in stochastic volatility (SV) models. However, other distributions, such as the Laplace distribution and the generalized error distribution (GED), are also common in SV modelling. This paper therefore proposes the use of the generalized t (GT) distribution, whose special cases include the Gaussian distribution, Student-t distribution, Laplace distribution, and GED. Since the GT distribution is a member of the scale mixture of uniform (SMU) family of distributions, we handle it via its SMU representation. We show that this SMU form can substantially simplify the Gibbs sampler for Bayesian simulation-based computation and can provide a means of identifying outliers. In an empirical study, we adopt a GT–SV model to fit the daily returns of the exchange rate of the Australian dollar against three other currencies, using the exchange rate against the US dollar as a covariate. Model implementation relies on Bayesian Markov chain Monte Carlo algorithms using the WinBUGS package.

17.
Heng Lian, Statistics, 2013, 47(6): 777–785
Improving the efficiency of the importance sampler is at the centre of research on Monte Carlo methods. While an adaptive approach is usually not straightforward within the Markov chain Monte Carlo framework, its counterpart in importance sampling can be justified and validated easily. We propose an iterative adaptation method, based on stochastic approximation, for learning the proposal distribution of an importance sampler. The stochastic approximation method can recruit general iterative optimization techniques such as the minorization–maximization algorithm. The effectiveness of the approach in optimizing the Kullback–Leibler divergence between the proposal distribution and the target is demonstrated using several examples.
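A minimal sketch of stochastic-approximation adaptation of an importance sampling proposal: here only the mean of a Gaussian proposal is adapted toward the target mean, which is the KL minimizer within this family. This illustrates the general idea under assumed toy settings, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

def log_target(x):
    """Unnormalized log-density of the target, here N(3, 1)."""
    return -0.5 * (x - 3.0) ** 2

def adapt_proposal(theta=0.0, n_steps=200, m=100):
    """Robbins–Monro adaptation of a N(theta, 1) proposal.

    Each step estimates E_pi[x] by self-normalized importance sampling
    and moves theta toward it with a decaying gain a_k = 1/k; for this
    Gaussian family the KL gradient in theta is -(E_pi[x] - theta).
    """
    for k in range(1, n_steps + 1):
        x = rng.normal(theta, 1.0, m)
        logw = log_target(x) + 0.5 * (x - theta) ** 2   # target / proposal
        w = np.exp(logw - logw.max())
        w /= w.sum()                                    # self-normalized weights
        theta += ((w * x).sum() - theta) / k            # SA step toward E_pi[x]
    return theta

theta_hat = adapt_proposal()
print(theta_hat)  # should approach 3, the target mean
```

After adaptation the proposal matches the target within this family, so the importance weights become nearly constant.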

18.
This paper describes Bayesian inference and prediction for the two-parameter Weibull distribution when the data are Type-II censored. The aim of the paper is twofold. First, we consider Bayesian inference of the unknown parameters under different loss functions. The Bayes estimates cannot be obtained in closed form, so we use a Gibbs sampling procedure to draw Markov chain Monte Carlo (MCMC) samples, which are used to compute the Bayes estimates and to construct symmetric credible intervals. Second, we consider Bayes prediction of future order statistics based on the observed sample: we consider the posterior predictive density of the future observations and construct a predictive interval with a given coverage probability. Monte Carlo simulations are performed to compare the different methods, and one data analysis is presented for illustration.

19.
We describe standard single-site Markov chain Monte Carlo methods, the Hastings and Metropolis algorithms, the Gibbs sampler and simulated annealing, for maximum a posteriori and marginal posterior modes image estimation. These methods can experience great difficulty in traversing the whole image space in finite time when the target distribution is multi-modal. We present a survey of multiple-site update methods, including Swendsen and Wang's algorithm, coupled Markov chains and cascade algorithms, designed to tackle the problem of moving between modes of the posterior image distribution. We compare the performance of some of these algorithms for sampling from degraded and non-degraded Ising models.
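A single-site Gibbs sampler for the Ising model, the kind of baseline the multiple-site methods above are designed to improve on, can be sketched as follows (assumed lattice size, coupling, and boundary conditions; purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def ising_gibbs(L=16, beta=0.3, n_sweeps=200):
    """Single-site Gibbs sampler for an L x L Ising model with periodic
    boundaries.

    The full conditional of a spin depends only on the sum nb of its four
    neighbours: P(s_ij = +1 | rest) = 1 / (1 + exp(-2 * beta * nb)).
    """
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(n_sweeps):
        for i in range(L):
            for j in range(L):
                nb = (s[(i - 1) % L, j] + s[(i + 1) % L, j]
                      + s[i, (j - 1) % L] + s[i, (j + 1) % L])
                p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                s[i, j] = 1 if rng.random() < p_up else -1
    return s

spins = ising_gibbs()
print(abs(spins.mean()))  # stays small at beta = 0.3, below the critical coupling
```

Near or above the critical coupling (about 0.44), such single-site updates mix very slowly between the two magnetized modes, which is exactly the regime where cluster methods such as Swendsen–Wang pay off.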

20.
The ordinal probit, univariate or multivariate, is a generalized linear model (GLM) structure that arises frequently in such disparate areas of statistical application as medicine and econometrics. Despite the straightforwardness of its implementation using the Gibbs sampler, the ordinal probit may present challenges in obtaining satisfactory convergence. We present a multivariate Hastings-within-Gibbs update step for generating the latent data and bin boundary parameters jointly, instead of individually from their respective full conditionals. When the latent data are parameters of interest, this algorithm substantially improves Gibbs sampler convergence for large data sets. We also discuss Markov chain Monte Carlo (MCMC) implementation of cumulative logit (proportional odds) and cumulative complementary log-log (proportional hazards) models with latent data.

Copyright © 北京勤云科技发展有限公司 京ICP备09084417号