Similar Articles (20 results)
1.
Convergence assessment techniques for Markov chain Monte Carlo
MCMC methods have effectively revolutionised the field of Bayesian statistics over the past few years. Such methods provide invaluable tools for overcoming the problems of analytic intractability inherent in adopting the Bayesian approach to statistical modelling. However, any inference based upon MCMC output relies critically upon the assumption that the simulated Markov chain has reached a steady state, i.e. converged. Many techniques have been developed for trying to determine whether or not a particular Markov chain has converged, and this paper reviews these methods with an emphasis on the mathematics underpinning them, in an attempt to summarise the current state of play for convergence assessment and to motivate directions for future research in this area.
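One of the best-known diagnostics in this family is the Gelman–Rubin potential scale reduction factor, which compares within-chain and between-chain variance across parallel chains. A minimal sketch (a generic textbook version, not any particular method from the paper) might look like:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m chains of length n.

    chains: array of shape (m, n). Values near 1 suggest convergence.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
# Two chains drawn from the same stationary distribution: R-hat should be near 1.
chains = rng.normal(size=(2, 5000))
rhat = gelman_rubin(chains)
```

Chains stuck in different modes inflate B relative to W, pushing R-hat well above 1.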

2.
A stochastic volatility model based on a mixture of Beta distributions and its Bayesian analysis
To more accurately uncover the true data-generating process behind financial asset return data, a stochastic volatility model based on a mixture of Beta distributions is proposed; its Bayesian estimation is discussed and a Gibbs sampling algorithm is given. Taking the simple returns of the Shanghai A-share composite index as an example, stochastic volatility models based on the normal distribution and on the Beta mixture are fitted. The study shows that the Beta-mixture stochastic volatility model describes the sample's true data-generating process more accurately, whereas the normal-distribution model attributes features such as high peaks and heavy tails to volatility shocks, thereby underestimating the average volatility level of returns and overestimating both the persistence of volatility and the volatility shock disturbance.

3.
This article presents a new way of modeling time-varying volatility. We generalize the usual stochastic volatility models to encompass regime-switching properties. The unobserved state variables are governed by a first-order Markov process. Bayesian estimators are constructed by Gibbs sampling. High-, medium- and low-volatility states are identified for the Standard and Poor's 500 weekly return data. Persistence in volatility is explained by the persistence in the low- and the medium-volatility states. The high-volatility regime is able to capture the 1987 crash and overlaps considerably with four U.S. economic recession periods.
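The regime dynamics described above can be illustrated with a toy simulation of a three-state Markov-switching volatility process; the transition matrix and state volatilities below are illustrative values, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_ms_sv(T=3000):
    """Simulate a toy 3-state Markov-switching volatility series.

    States 0/1/2 are low/medium/high volatility; P is a persistent
    first-order transition matrix (hypothetical values).
    """
    P = np.array([[0.98, 0.015, 0.005],
                  [0.02, 0.96, 0.02],
                  [0.05, 0.15, 0.80]])
    sigmas = np.array([0.5, 1.0, 3.0])   # state-specific volatilities
    s = np.zeros(T, dtype=int)
    for t in range(1, T):
        s[t] = rng.choice(3, p=P[s[t - 1]])   # first-order Markov state
    y = sigmas[s] * rng.normal(size=T)        # returns given the regime
    return y, s

y, s = simulate_ms_sv()
```

Because the low- and medium-volatility states are highly persistent, the simulated series shows the long calm stretches punctuated by volatile bursts that such models are designed to capture.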

4.
A regression model with skew-normal errors provides a useful extension of ordinary normal regression models when the dataset under consideration involves asymmetric outcomes. In this article, we explore the use of Markov chain Monte Carlo (MCMC) methods to develop a Bayesian analysis for joint location and scale nonlinear models with skew-normal errors, which relax the normality assumption and include the normal model as a special case. The main advantage of this class of distributions is that it has a convenient hierarchical representation that allows MCMC methods to simulate samples from the joint posterior distribution. Finally, simulation studies and a real example are used to illustrate the proposed methodology.

5.
Kernel density estimation is an important tool in visualizing posterior densities from Markov chain Monte Carlo output. It is well known that when smooth transition densities exist, the asymptotic properties of the estimator agree with those for independent data. In this paper, we show that because of the rejection step of the Metropolis–Hastings algorithm, this is no longer true and the asymptotic variance will depend on the probability of accepting a proposed move. We find an expression for this variance and apply the result to algorithms for automatic bandwidth selection.
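A minimal Gaussian kernel density estimator applied to MCMC draws could be sketched as follows, with the bandwidth h left as a free parameter; the paper's point is precisely that the optimal h depends on the chain's acceptance probability rather than only on the i.i.d. asymptotics:

```python
import numpy as np

def kde(x_grid, samples, h):
    """Gaussian kernel density estimate over x_grid from sampler draws.

    Minimal sketch: average of N(sample, h^2) densities at each grid point.
    """
    u = (x_grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
draws = rng.normal(size=10000)        # stand-in for Metropolis-Hastings output
grid = np.linspace(-3, 3, 61)
dens = kde(grid, draws, h=0.3)
```

With correlated Metropolis–Hastings draws in place of the i.i.d. stand-in above, the estimator itself is unchanged; only its variance, and hence the best bandwidth, differs.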

6.
Bayesian inference for the multinomial probit model, using the Gibbs sampler with data augmentation, has been recently considered by some authors. The present paper introduces a modification of the sampling technique, by defining a hybrid Markov chain in which, after each Gibbs sampling cycle, a Metropolis step is carried out along a direction of constant likelihood. Examples with simulated data sets motivate and illustrate the new technique. A proof of the ergodicity of the hybrid Markov chain is also given.

7.
This article provides a Bayesian method for estimating the marginal posterior distributions of stochastic discount factors associated with observed asset returns. These estimates can be used to provide measures of fit for asset-pricing models and to identify broad features of the characteristics that should be explained. These measures of fit can be used to supplement model-evaluation exercises based on Hansen–Jagannathan bounds.

8.
We consider stochastic volatility models that are defined by an Ornstein–Uhlenbeck (OU)-Gamma time change. These models are well suited to modeling financial time series and follow the general framework of the popular non-Gaussian OU models of Barndorff-Nielsen and Shephard. One current problem with these otherwise attractive nontrivial models is, in general, the unavailability of a tractable likelihood-based statistical analysis for the returns of financial assets, which requires the ability to sample from a nontrivial joint distribution. We show that an OU process driven by an infinite-activity Gamma process, i.e., an OU-Gamma process, exhibits unique features that allow one to explicitly describe and exactly sample from the relevant joint distributions. This is a consequence of the OU structure and the calculus of Gamma and Dirichlet processes. We develop a particle marginal Metropolis–Hastings algorithm for this type of continuous-time stochastic volatility model and check its performance using simulated data. For illustration, we finally fit the model to S&P 500 index data.

9.
Efficient estimation of the regression coefficients in longitudinal data analysis requires a correct specification of the covariance structure; misspecification may lead to inefficient or biased estimators of the mean parameters. One of the most commonly used methods for handling the covariance matrix is simultaneous modeling based on the Cholesky decomposition. In this paper, we therefore reparameterize covariance structures in longitudinal data analysis through a modified Cholesky decomposition. Under this decomposition, the within-subject covariance matrix factors into a unit lower triangular matrix involving moving average coefficients and a diagonal matrix involving innovation variances, which are modeled as linear functions of covariates. We then propose fully Bayesian inference for joint mean and covariance models based on this decomposition. A computationally efficient Markov chain Monte Carlo method combining the Gibbs sampler and the Metropolis–Hastings algorithm is implemented to simultaneously obtain Bayesian estimates of the unknown parameters as well as their standard deviation estimates. Finally, several simulation studies and a real example are presented to illustrate the proposed methodology.
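The factorization at the heart of this approach can be sketched as follows; this generic version simply factors a given covariance matrix into a unit lower triangular factor and a diagonal of innovation variances, whereas the paper goes further and models those factors as linear functions of covariates:

```python
import numpy as np

def modified_cholesky(sigma):
    """Decompose a covariance matrix as sigma = L @ D @ L.T,
    with L unit lower triangular and D diagonal (innovation variances).

    Generic numerical sketch via the standard Cholesky factor.
    """
    C = np.linalg.cholesky(sigma)   # sigma = C @ C.T, C lower triangular
    d = np.diag(C)
    L = C / d                        # scale each column j by d[j] -> unit diagonal
    D = np.diag(d**2)
    return L, D

sigma = np.array([[2.0, 0.6],
                  [0.6, 1.5]])
L, D = modified_cholesky(sigma)
```

The off-diagonal entries of L play the role of the moving-average-type coefficients, and the diagonal of D the innovation variances, that the paper's linear models target.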

10.
A stochastic epidemic model with several kinds of susceptibles is used to analyse temporal disease outbreak data from a Bayesian perspective. Prior distributions are used to model uncertainty in the actual numbers of susceptibles initially present. The posterior distribution of the parameters of the model is explored via Markov chain Monte Carlo methods. The methods are illustrated using two datasets, and the results are compared where possible to results obtained by previous analyses.

11.
In this article, we assess Bayesian estimation and prediction using the integrated nested Laplace approximation (INLA) for a stochastic volatility (SV) model. This was performed through a Monte Carlo study with 1,000 simulated time series. To evaluate the estimation method, two criteria were considered: the bias and the square root of the mean square error (smse). The criteria used for prediction are the one-step-ahead forecast of volatility and the one-day Value at Risk (VaR). The main findings are that the INLA approximations are fairly accurate and relatively robust to the choice of prior distribution on the persistence parameter. Additionally, VaR estimates are computed and compared for three financial return index time series.

12.
This article develops an asymmetric volatility model that takes into consideration structural breaks in the volatility process. Break points and other parameters of the model are estimated using MCMC and Gibbs sampling techniques. Models with different numbers of break points are compared using the Bayes factor and BIC. We provide a formal test, and hence a new procedure for Bayes factor computation, to choose between models with different numbers of breaks. The procedure is illustrated using simulated as well as real data sets. The analysis provides evidence that the financial crisis in the market from the first week of September 2008 caused a significant break in the structure of the return series of two major NYSE indices, the S&P 500 and the Dow Jones. Analysis of the USD/EURO exchange rate data also shows evidence of a structural break around the same time.

13.
The problem of simulating from distributions with intractable normalizing constants has received much attention in recent literature. In this article, we propose an asymptotic algorithm, the so-called double Metropolis–Hastings (MH) sampler, for tackling this problem. Unlike other auxiliary variable algorithms, the double MH sampler removes the need for exact sampling, the auxiliary variables being generated using MH kernels, and thus can be applied to a wide range of problems for which exact sampling is not available. For the problems for which exact sampling is available, it can typically produce the same accurate results as the exchange algorithm, but using much less CPU time. The new method is illustrated by various spatial models.
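A toy version of the double MH idea can be sketched on a hypothetical one-parameter model whose normalizing constant we pretend is intractable: the auxiliary variable is generated by a short inner MH run rather than by exact sampling, and its density ratio cancels the unknown constants in the acceptance probability.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_f(x, theta):
    """Unnormalized log-density f(x|theta) = exp(theta*x - x^2/2).

    (This is really a Gaussian; we pretend Z(theta) is intractable.)"""
    return theta * x - 0.5 * x**2

def inner_mh(x0, theta, steps=50, step=1.0):
    """Short MH run targeting f(.|theta): generates the auxiliary variable."""
    x = x0
    for _ in range(steps):
        prop = x + step * rng.normal()
        if np.log(rng.random()) < log_f(prop, theta) - log_f(x, theta):
            x = prop
    return x

def double_mh(x_obs, n_iter=2000, prop_sd=0.5):
    """Sketch of the double MH sampler for theta | x_obs, N(0,1) prior.

    Hypothetical 1-D illustration only, not the paper's spatial models."""
    theta, draws = 0.0, []
    for _ in range(n_iter):
        theta_p = theta + prop_sd * rng.normal()
        y = inner_mh(x_obs, theta_p)          # auxiliary variable via MH kernel
        log_r = (log_f(x_obs, theta_p) - log_f(x_obs, theta)
                 + log_f(y, theta) - log_f(y, theta_p)        # constants cancel
                 - 0.5 * (theta_p**2 - theta**2))             # N(0,1) prior ratio
        if np.log(rng.random()) < log_r:
            theta = theta_p
        draws.append(theta)
    return np.array(draws)

draws = double_mh(x_obs=1.0)
```

In this toy model the exact posterior is N(0.5, 0.5), so the chain's mean should settle near 0.5; as the abstract notes, the algorithm is asymptotic, since the inner run only approximates an exact auxiliary draw.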

14.
The semiparametric reproductive dispersion mixed model (SPRDMM) is a natural extension of the reproductive dispersion model and the semiparametric mixed model. In this paper, we relax the normality assumption on the random effects in the SPRDMM, using a truncated and centred Dirichlet process prior to specify the random effects, and use a Bayesian P-spline to approximate the unknown smooth function. A hybrid algorithm combining the block Gibbs sampler and the Metropolis–Hastings algorithm is implemented to sample observations from the posterior distribution. We also develop a Bayesian case-deletion influence measure for the SPRDMM based on the φ-divergence and present computationally feasible formulas for it. Several simulation studies and a real example are presented to illustrate the proposed methodologies.

15.
In this paper we show that fully likelihood-based estimation and comparison of multivariate stochastic volatility (SV) models can be easily performed via a freely available Bayesian software called WinBUGS. Moreover, we introduce to the literature several new specifications that are natural extensions to certain existing models, one of which allows for time-varying correlation coefficients. Ideas are illustrated by fitting, to a bivariate time series data of weekly exchange rates, nine multivariate SV models, including the specifications with Granger causality in volatility, time-varying correlations, heavy-tailed error distributions, additive factor structure, and multiplicative factor structure. Empirical results suggest that the best specifications are those that allow for time-varying correlation coefficients.

17.
18.
Modeling spatial patterns and processes to assess the spatial variation of data over a study region is an important issue in many fields. In this paper, we focus on investigating the spatial variation of earthquake risks after a main shock. Although earthquake risks have been extensively studied in the literature, to our knowledge there is no suitable spatial model for assessing this problem. We therefore propose a joint modeling approach based on spatial hierarchical Bayesian models and spatial conditional autoregressive models to describe the spatial variation in earthquake risks over the study region during two periods. A family of stochastic algorithms based on Markov chain Monte Carlo is then used for posterior computation. The probabilistic issue of changes in earthquake risks after a main shock is also discussed. Finally, the proposed method is applied to earthquake records for Taiwan before and after the Chi-Chi earthquake.

19.
In this paper, maximum likelihood (ML) and Bayesian methods, the latter implemented via Markov chain Monte Carlo (MCMC), are considered for estimating the parameters of the three-parameter modified Weibull distribution (MWD(β, τ, λ)) based on a right-censored sample of generalized order statistics (gos). Simulation experiments are conducted to demonstrate the efficiency of the proposed methods. Comparisons between the ML and Bayes methods are carried out by computing the mean squared errors (MSEs), Akaike's information criterion (AIC) and the Bayesian information criterion (BIC) of the estimates. Three real data sets from the Weibull(α, β) distribution are introduced and analyzed using the MWD(β, τ, λ) and also using the Weibull(α, β) distribution. A comparison between the two models, based on the corresponding Kolmogorov–Smirnov (KS) test statistic, AIC and BIC, shows that the MWD(β, τ, λ) fits the data better than the other distribution. All parameters are estimated based on type-II censored samples, censored upper record values and progressively type-II censored samples generated from the real data sets.

20.
Estimating parameters in a stochastic volatility (SV) model is a challenging task. Among other estimation methods and approaches, efficient simulation methods based on importance sampling have been developed for the Monte Carlo maximum likelihood estimation of univariate SV models. This paper shows that importance sampling methods can be used in a general multivariate SV setting. The sampling methods are computationally efficient. To illustrate the versatility of this approach, three different multivariate stochastic volatility models are estimated for a standard data set. The empirical results are compared to those from earlier studies in the literature. Monte Carlo simulation experiments, based on parameter estimates from the standard data set, are used to show the effectiveness of the importance sampling methods.
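The basic importance-sampling likelihood estimate underlying such methods can be sketched for a toy one-period SV model; using the prior as the proposal is a deliberately crude choice that only shows the principle, whereas the paper's samplers use far more efficient proposals:

```python
import numpy as np

rng = np.random.default_rng(3)

def is_loglik(y, n_draws=20000):
    """Importance-sampling estimate of log p(y) for a toy one-period SV model:
    h ~ N(0, 0.5^2) is the log-variance, y | h ~ N(0, exp(h)).

    With proposal q = prior, the importance weights are just p(y|h),
    and their average is an unbiased estimate of the likelihood p(y).
    """
    h = rng.normal(0.0, 0.5, size=n_draws)   # draws from the prior/proposal
    log_w = -0.5 * (np.log(2 * np.pi) + h + y**2 * np.exp(-h))   # log p(y|h)
    return np.log(np.mean(np.exp(log_w)))

loglik = is_loglik(0.3)
```

Maximizing such simulated likelihoods over the model parameters gives the Monte Carlo maximum likelihood estimates the abstract refers to; the multivariate case replaces the scalar latent h with a latent vector process.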
