Similar Documents
20 similar documents retrieved (search time: 140 ms)
1.
The Box–Jenkins methodology for modeling and forecasting from univariate time series models has long been considered a standard to which other forecasting techniques have been compared. To a Bayesian statistician, however, the method lacks an important facet: a provision for modeling uncertainty about parameter estimates. We present a technique called sampling the future for including this feature in both the estimation and forecasting stages. Although it is relatively easy to use Bayesian methods to estimate the parameters in an autoregressive integrated moving average (ARIMA) model, there are severe difficulties in producing forecasts from such a model. The multiperiod predictive density does not have a convenient closed form, so approximations are needed. In this article, exact Bayesian forecasting is approximated by simulating the joint predictive distribution. First, parameter sets are randomly generated from the joint posterior distribution. These are then used to simulate future paths of the time series. This bundle of many possible realizations is used to project the future in several ways. Highest probability forecast regions are formed and portrayed with computer graphics. The predictive density's shape is explored. Finally, we discuss a method that allows the analyst to subjectively modify the posterior distribution on the parameters and produce alternate forecasts.
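As a rough illustration of the sampling-the-future idea described above, the sketch below simulates forecast paths from hypothetical posterior parameter draws. For brevity it uses a simple AR(1) specification rather than a full ARIMA model, and the posterior draws are stand-ins for output from an actual MCMC run; only the two-stage structure (draw parameters, then simulate future paths and summarize them) reflects the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for a simple AR(1) model y_t = c + phi*y_{t-1} + e_t,
# e_t ~ N(0, sigma^2); in practice these would come from an MCMC run on the ARIMA model.
n_draws = 2000
post_c     = rng.normal(0.10, 0.02, n_draws)
post_phi   = rng.normal(0.60, 0.05, n_draws)
post_sigma = np.abs(rng.normal(1.00, 0.10, n_draws))

def sample_the_future(y_last, horizon, c, phi, sigma, rng):
    """Simulate one future path conditional on one posterior parameter draw."""
    path = np.empty(horizon)
    y = y_last
    for h in range(horizon):
        y = c + phi * y + rng.normal(0.0, sigma)
        path[h] = y
    return path

y_last, horizon = 1.3, 12
paths = np.array([
    sample_the_future(y_last, horizon, post_c[i], post_phi[i], post_sigma[i], rng)
    for i in range(n_draws)
])  # shape (n_draws, horizon): the "bundle of possible realizations"

# Pointwise 90% forecast intervals from the simulated predictive distribution.
lower, median, upper = np.percentile(paths, [5, 50, 95], axis=0)
print(np.column_stack([lower, median, upper]).round(2))
```

The pointwise percentiles are only a simple stand-in for the highest-probability forecast regions discussed in the abstract.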

2.
To capture mean and variance asymmetries and time-varying volatility in financial time series, we generalize the threshold stochastic volatility (THSV) model and incorporate a heavy-tailed error distribution. Unlike existing stochastic volatility models, this model simultaneously accounts for uncertainty in the unobserved threshold value and in the time-delay parameter. Self-exciting and exogenous threshold variables are considered to investigate the impact of a number of market news variables on volatility changes. Adopting a Bayesian approach, we use Markov chain Monte Carlo methods to estimate all unknown parameters and latent variables. A simulation experiment demonstrates good estimation performance for reasonable sample sizes. In a study of two international financial market indices, we consider two variants of the generalized THSV model, with US market news as the threshold variable. Finally, we compare models using Bayesian forecasting in a value-at-risk (VaR) study. The results show that our proposed model can generate more accurate VaR forecasts than can standard models.
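The following sketch simulates data from a two-regime threshold stochastic volatility process with a heavy-tailed error and a self-exciting threshold, in the spirit of the model described above. All parameter values, the delay, and the Student-t degrees of freedom are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not estimates): regime 0 below the threshold, regime 1 above.
mu    = (-0.5,  0.3)   # level of log-volatility in each regime
phi   = ( 0.95, 0.90)  # persistence of log-volatility in each regime
sig_e = ( 0.15, 0.25)  # std. dev. of log-volatility innovations
r, d, nu = 0.0, 1, 7   # threshold value, time delay, Student-t degrees of freedom

T = 1000
y = np.zeros(T)        # returns
h = np.zeros(T)        # log-volatility
for t in range(1, T):
    # Self-exciting threshold: the regime depends on the return d periods ago.
    s = 0 if y[t - d] <= r else 1
    h[t] = mu[s] + phi[s] * (h[t - 1] - mu[s]) + sig_e[s] * rng.normal()
    # Heavy-tailed measurement error via a variance-standardized Student-t draw.
    eps = rng.standard_t(nu) / np.sqrt(nu / (nu - 2))
    y[t] = np.exp(h[t] / 2) * eps
```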

3.
For many stochastic models, it is difficult to make inference about the model parameters because it is impossible to write down a tractable likelihood given the observed data. A common solution is data augmentation in a Markov chain Monte Carlo (MCMC) framework. However, there are statistical problems where this approach has proved infeasible but where simulation from the model is straightforward, which has led to the popularity of the approximate Bayesian computation algorithm. We introduce a forward simulation MCMC (fsMCMC) algorithm, which is primarily based upon simulation from the model. The fsMCMC algorithm formulates the simulation of the process explicitly as a data augmentation problem. By exploiting non-centred parameterizations, an efficient MCMC updating scheme for the parameters and augmented data is introduced, whilst maintaining straightforward simulation from the model. The fsMCMC algorithm is successfully applied to two distinct epidemic models, including a birth–death–mutation model that has previously been analysed only using approximate Bayesian computation methods.

4.
We propose a Bayesian stochastic search approach to selecting restrictions on multivariate regression models where the errors exhibit deterministic or stochastic conditional volatilities. We develop a Markov chain Monte Carlo (MCMC) algorithm that generates posterior restrictions on the regression coefficients and Cholesky decompositions of the covariance matrix of the errors. Numerical simulations with artificially generated data show that the proposed method is effective in selecting the data-generating model restrictions and improving the forecasting performance of the model. Applying the method to daily foreign exchange rate data, we conduct stochastic search on a VAR model with stochastic conditional volatilities.
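A central ingredient of Bayesian stochastic search is the Gibbs update of binary inclusion indicators under a spike-and-slab prior. The sketch below shows that single step for a vector of regression coefficients with hypothetical prior scales; the paper's full algorithm also searches over elements of the Cholesky decomposition of the error covariance, which is omitted here.

```python
import numpy as np

def update_inclusion_indicators(beta, tau0=0.01, tau1=1.0, p_incl=0.5, rng=None):
    """One Gibbs step for spike-and-slab inclusion indicators.

    gamma_j = 0: beta_j ~ N(0, tau0^2)  (restriction, the "spike")
    gamma_j = 1: beta_j ~ N(0, tau1^2)  (unrestricted, the "slab")
    """
    rng = rng or np.random.default_rng()

    def norm_pdf(x, sd):
        return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    w1 = p_incl * norm_pdf(beta, tau1)          # weight of the slab component
    w0 = (1 - p_incl) * norm_pdf(beta, tau0)    # weight of the spike component
    prob = w1 / (w0 + w1)                       # posterior inclusion probability
    return rng.random(beta.shape) < prob        # sampled indicators gamma

beta = np.array([0.002, -0.8, 0.05, 1.3])       # current coefficient draws (toy values)
print(update_inclusion_indicators(beta))
```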

5.
We discuss the development of dynamic factor models for multivariate financial time series, and the incorporation of stochastic volatility components for latent factor processes. Bayesian inference and computation are developed and explored in a study of the dynamic factor structure of daily spot exchange rates for a selection of international currencies. The models are direct generalizations of univariate stochastic volatility models and represent specific varieties of models recently discussed in the growing multivariate stochastic volatility literature. We discuss model fitting based on retrospective data and sequential analysis for forward filtering and short-term forecasting. Analyses are compared with results from the much simpler method of dynamic variance-matrix discounting that, for over a decade, has been a standard approach in applied financial econometrics. We study these models in analysis, forecasting, and sequential portfolio allocation for a selected set of international exchange-rate-return time series. Our goals are to understand a range of modeling questions arising in using these factor models and to explore empirical performance in portfolio construction relative to discount approaches. We report on our experiences and conclude with comments about the practical utility of structured factor models and on future potential model extensions.

6.
An essential ingredient of any time series analysis is the estimation of the model parameters and the forecasting of future observations. This investigation takes a Bayesian approach to the analysis of time series by making inferences of the model parameters from the posterior distribution and forecasting from the predictive distribution.

The foundation of the approach is to approximate the conditional likelihood by a normal-gamma distribution on the parameter space. The techniques are illustrated with many examples of ARMA processes.

7.
The likelihood function of a general nonlinear, non-Gaussian state space model is a high-dimensional integral with no closed-form solution. In this article, I show how to calculate the likelihood function exactly for a large class of non-Gaussian state space models that include stochastic intensity, stochastic volatility, and stochastic duration models among others. The state variables in this class follow a nonnegative stochastic process that is popular in econometrics for modeling volatility and intensities. In addition to calculating the likelihood, I also show how to perform filtering and smoothing to estimate the latent variables in the model. The procedures in this article can be used for either Bayesian or frequentist estimation of the model's unknown parameters as well as the latent state variables. Supplementary materials for this article are available online.

8.
The present work proposes a new integer-valued autoregressive model with a Poisson marginal distribution, based on mixing the Pegram and dependent Bernoulli thinning operators. Properties of the model are discussed. We consider several methods for estimating the unknown parameters of the model, and both classical and Bayesian approaches are used for forecasting. Simulations are performed to assess the performance of these estimators and forecasting methods. Finally, analyses of two real data sets are presented for illustrative purposes.

9.
This paper analyzes the forecasting performance of an open economy dynamic stochastic general equilibrium (DSGE) model, estimated with Bayesian methods, for the Euro area during 1994Q1–2002Q4. We compare the DSGE model and a few variants of this model to various reduced-form forecasting models such as vector autoregressions (VARs) and vector error correction models (VECM), estimated both by maximum likelihood and two different Bayesian approaches, and traditional benchmark models, e.g., the random walk. The accuracy of point forecasts, interval forecasts and the predictive distribution as a whole are assessed in an out-of-sample rolling event evaluation using several univariate and multivariate measures. The results show that the open economy DSGE model compares well with more empirical models and thus that the tension between rigor and fit in older generations of DSGE models is no longer present. We also critically examine the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g., the log determinant statistic, as measures of overall forecasting performance.
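The out-of-sample evaluation described above can be organized as a rolling-origin loop: re-estimate on data up to each origin, forecast h steps ahead, and score the forecasts. The sketch below illustrates that loop for point forecasts with a random-walk benchmark; the model, horizon, and RMSE criterion are illustrative choices, not the paper's full set of univariate and multivariate measures.

```python
import numpy as np

def rolling_forecast_eval(y, first_origin, horizon, fit_and_forecast):
    """Rolling out-of-sample evaluation: re-fit at each origin, forecast h steps ahead,
    and collect the horizon-h point forecast errors."""
    errors = []
    for origin in range(first_origin, len(y) - horizon):
        fcast = fit_and_forecast(y[: origin + 1], horizon)   # uses data up to the origin only
        errors.append(y[origin + horizon] - fcast[-1])
    return np.sqrt(np.mean(np.square(errors)))               # RMSE at horizon h

# Hypothetical benchmark: the random-walk forecast (last observed value).
def random_walk(y_hist, horizon):
    return np.full(horizon, y_hist[-1])

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=300))          # simulated series for illustration
print(rolling_forecast_eval(y, first_origin=100, horizon=4, fit_and_forecast=random_walk))
```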

10.
Forecasting Performance of an Open Economy DSGE Model   (total citations: 1; self-citations: 0; citations by others: 1)
Econometric Reviews, 2007, 26(2): 289-328

11.
The autologistic model, first introduced by Besag, is a popular tool for analyzing binary data on spatial lattices. However, little previous work has considered modeling binary data clustered in uncorrelated lattices. Owing to the spatial dependency of the responses, exact likelihood estimation of the parameters is not possible. To circumvent this difficulty, many studies have been designed to approximate the likelihood and the related partition function of the model, so traditional and Bayesian estimation methods based on the likelihood function are often time-consuming and require heavy computation and recursive techniques. Some investigators have introduced and implemented data augmentation and latent-variable models to reduce the computational complications of parameter estimation. In this work, spatially correlated binary data distributed over uncorrelated lattices are modeled using autologistic regression, a Bayesian inference is developed with the aid of data augmentation, and the proposed models are applied to caries experiences in deciduous teeth.
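Because the autologistic likelihood involves an intractable partition function, simulation of the binary field is usually carried out site by site from the full conditionals. The sketch below shows one Gibbs sweep over a lattice under a standard four-neighbour autologistic specification; the parameter values and lattice size are arbitrary illustrations, not the dental-caries application.

```python
import numpy as np

def gibbs_sweep_autologistic(y, alpha, beta, rng):
    """One Gibbs sweep over a binary lattice y under the autologistic model:
    P(y_ij = 1 | neighbours) = logistic(alpha + beta * sum of 4-neighbour values)."""
    n, m = y.shape
    for i in range(n):
        for j in range(m):
            s = 0
            if i > 0:     s += y[i - 1, j]
            if i < n - 1: s += y[i + 1, j]
            if j > 0:     s += y[i, j - 1]
            if j < m - 1: s += y[i, j + 1]
            p = 1.0 / (1.0 + np.exp(-(alpha + beta * s)))
            y[i, j] = rng.random() < p          # sample the site from its full conditional
    return y

rng = np.random.default_rng(3)
field = (rng.random((20, 20)) < 0.5).astype(int)
for _ in range(200):                            # burn-in sweeps
    field = gibbs_sweep_autologistic(field, alpha=-0.5, beta=0.6, rng=rng)
```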

12.
This paper proposes a hysteretic autoregressive model with a GARCH specification and a skew Student's t error distribution for financial time series. With an integrated hysteresis zone, the model allows regime switching in both the conditional mean and the conditional volatility to be delayed while the hysteresis variable lies inside the hysteresis zone. We perform Bayesian estimation via an adaptive Markov chain Monte Carlo sampling scheme. The proposed Bayesian method allows simultaneous inference for all unknown parameters, including the threshold values and a delay parameter. To implement model selection, we propose a numerical approximation of the marginal likelihoods to compute posterior odds. The proposed methodology is illustrated using simulation studies and two major Asian stock basis series. We conduct a model comparison for variant hysteresis and threshold GARCH models based on posterior odds ratios, finding strong evidence of a hysteretic effect and some asymmetric heavy-tailedness. Compared with multi-regime threshold GARCH models, this new class of models is more suitable for describing the real data sets. Finally, we employ Bayesian forecasting methods in a value-at-risk study of the return series.
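The defining feature of the hysteretic model is that the regime switches only when the (delayed) hysteresis variable leaves the hysteresis zone; inside the zone, the previous regime is retained. The sketch below implements that regime rule with hypothetical threshold values; in the full model the resulting regime path would select the AR and GARCH parameters used for the conditional mean and variance at each time point.

```python
import numpy as np

def hysteretic_regimes(z, r_low, r_high, d=1, init_regime=0):
    """Regime path for a hysteretic switching model.

    The delayed hysteresis variable z[t-d] triggers a switch only when it leaves the
    hysteresis zone [r_low, r_high]; inside the zone the previous regime is kept.
    """
    T = len(z)
    s = np.empty(T, dtype=int)
    s[:d] = init_regime
    for t in range(d, T):
        if z[t - d] <= r_low:
            s[t] = 0
        elif z[t - d] >= r_high:
            s[t] = 1
        else:
            s[t] = s[t - 1]        # hysteresis: no switch inside the zone
    return s

rng = np.random.default_rng(4)
z = rng.normal(size=500)           # hypothetical hysteresis variable (e.g. lagged returns)
s = hysteretic_regimes(z, r_low=-0.5, r_high=0.5)
```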

13.
We consider the estimation of a large number of GARCH models, of the order of several hundred. Our interest lies in identifying common structures in the volatility dynamics of the univariate time series. To do so, we classify the series into an unknown number of clusters. Within a cluster, the series share the same model and the same parameters, so each cluster contains similar series. We do not know a priori which series belongs to which cluster. The model is a finite mixture of distributions, where the component weights are unknown parameters and each component distribution has its own conditional mean and variance. Inference is done by the Bayesian approach, using data augmentation techniques. Simulations and an illustration using data on U.S. stocks are provided.
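The data-augmentation step at the heart of such a mixture model reallocates each series to a cluster with probability proportional to the cluster weight times the series' likelihood under that cluster's GARCH parameters. The sketch below shows that step with a plain Gaussian GARCH(1,1) likelihood; the initialization of the conditional variance and the parameter layout are simplifying assumptions.

```python
import numpy as np

def garch11_loglik(r, omega, alpha, beta):
    """Gaussian GARCH(1,1) log-likelihood of one return series (variance started at var(r))."""
    h = np.var(r)
    ll = 0.0
    for t in range(len(r)):
        ll += -0.5 * (np.log(2 * np.pi * h) + r[t] ** 2 / h)
        h = omega + alpha * r[t] ** 2 + beta * h
    return ll

def sample_cluster_labels(series_list, params, weights, rng):
    """Data augmentation: draw each series' cluster label with probability proportional to
    (cluster weight) x (GARCH likelihood under that cluster's parameters)."""
    labels = np.empty(len(series_list), dtype=int)
    for i, r in enumerate(series_list):
        logp = np.array([np.log(w) + garch11_loglik(r, *th)
                         for w, th in zip(weights, params)])
        p = np.exp(logp - logp.max())      # normalize in log space for stability
        p /= p.sum()
        labels[i] = rng.choice(len(weights), p=p)
    return labels

# `params` would be a list of (omega, alpha, beta) tuples, one per cluster, and `weights`
# the current mixture weights; both come from the other blocks of the Gibbs sampler.
```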

14.
This paper explores the use of data augmentation in settings beyond the standard Bayesian one. In particular, we show that, after proposing an appropriate generalised data-augmentation principle, it is possible to extend the range of sampling situations in which fiducial methods can be applied by constructing Markov chains whose stationary distributions represent valid posterior inferences on model parameters. Some properties of these chains are presented and a number of open questions are discussed. We also use the approach to draw out connections between classical and Bayesian approaches in some standard settings.

15.
Structural econometric auction models with explicit game-theoretic modeling of bidding strategies have been quite a challenge from a methodological perspective, especially within the common value framework. We develop a Bayesian analysis of the hierarchical Gaussian common value model with stochastic entry introduced by Bajari and Hortaçsu. A key component of our approach is an accurate and easily interpretable analytical approximation of the equilibrium bid function, resulting in a fast and numerically stable evaluation of the likelihood function. We extend the analysis to situations with positive valuations using a hierarchical gamma model. We use a Bayesian variable selection algorithm that simultaneously samples the posterior distribution of the model parameters and does inference on the choice of covariates. The methodology is applied to simulated data and to a newly collected dataset from eBay with bids and covariates from 1000 coin auctions. We demonstrate that the Bayesian algorithm is very efficient and that the approximation error in the bid function has virtually no effect on the model inference. Both models fit the data well, but the Gaussian model outperforms the gamma model in an out-of-sample forecasting evaluation of auction prices. This article has supplementary material online.

16.
Estimation of finite mixture models when the mixing distribution support is unknown is an important problem. This article gives a new approach based on a marginal likelihood for the unknown support. Motivated by a Bayesian Dirichlet prior model, a computationally efficient stochastic approximation version of the marginal likelihood is proposed and large-sample theory is presented. By restricting the support to a finite grid, a simulated annealing method is employed to maximize the marginal likelihood and estimate the support. Real and simulated data examples show that this novel stochastic approximation and simulated annealing procedure compares favorably with existing methods.
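The sketch below shows a generic simulated annealing search over subsets of a finite grid of candidate support points, with the objective treated as a black box standing in for the stochastically approximated marginal likelihood; the cooling schedule, the proposal (flipping one grid point in or out), the initialization, and the toy objective are all assumptions for illustration.

```python
import numpy as np

def simulated_annealing_support(grid, objective, n_iter=5000, t0=1.0, rng=None):
    """Search over subsets of a finite grid of candidate support points.

    `objective(mask)` is a stand-in for the (stochastically approximated) marginal
    likelihood of the support {grid[j] : mask[j]}; it is treated as a black box."""
    rng = rng or np.random.default_rng()
    mask = rng.random(len(grid)) < 0.5          # random initial support
    best_mask, best_val = mask.copy(), objective(mask)
    cur_val = best_val
    for it in range(1, n_iter + 1):
        temp = t0 / np.log(it + 1)              # logarithmic cooling schedule
        cand = mask.copy()
        j = rng.integers(len(grid))
        cand[j] = ~cand[j]                      # flip one support point in or out
        cand_val = objective(cand)
        if np.log(rng.random()) < (cand_val - cur_val) / temp:
            mask, cur_val = cand, cand_val
            if cur_val > best_val:
                best_mask, best_val = mask.copy(), cur_val
    return grid[best_mask], best_val

# Toy usage: a dummy objective that rewards supports concentrated near 0.5.
grid = np.linspace(0.0, 1.0, 21)
def toy_objective(mask):
    return -np.sum(np.abs(grid[mask] - 0.5)) if mask.any() else -np.inf
support, value = simulated_annealing_support(grid, toy_objective, rng=np.random.default_rng(7))
```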

17.
In this article, we propose a Bayesian approach to estimating multiple structural change-points in the level and trend of a series when the number of change-points is unknown. Our formulation of the structural-change model involves a binary discrete variable that indicates a structural change. The determination of the number and form of the structural changes is treated as a model selection problem in Bayesian structural-change analysis. We apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to this model selection problem. SAMC is effective for estimating complex structural-change models because it avoids entrapment in local posterior modes. The model parameters in each regime are estimated using the Gibbs sampler after each change-point is detected. The performance of the proposed method is investigated on simulated and real data sets: a long time series of US real gross domestic product, US uses of force between 1870 and 1994, and a one-year time series of temperature in Seoul, South Korea.

18.
蒋青嬗 et al. 《统计研究》(Statistical Research), 2018, 35(11): 105-115
Ignoring individual effects and spatial effects can seriously distort efficiency measurement: ignoring individual effects biases the technical inefficiency term, while ignoring spatial correlation makes the estimators biased and inconsistent. Building on the true fixed-effects stochastic frontier model (which incorporates individual effects), this paper introduces spatial lags of the dependent variable and of the two-sided error term, constructing a true fixed-effects spatial stochastic frontier model with better applicability. A within transformation is applied to the model to eliminate the extra (incidental) parameters, and a Bayesian approach, which requires deriving the posterior distributions of the unknown parameters and running MCMC sampling, is used to estimate the parameters and technical efficiencies. This method genuinely overcomes the incidental parameters problem and is more intuitive and simpler than comparable methods. Numerical simulations show that the proposed method estimates the parameters, individual intercepts, and technical inefficiency terms with high accuracy, and that the accuracy improves as the sample size increases.
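The within transformation mentioned above removes the individual intercepts (the source of the incidental parameters problem) by demeaning each individual's observations over time. A minimal sketch of that step on a toy panel follows; the data shapes are hypothetical.

```python
import numpy as np

def within_transform(x, ids):
    """Within (group-demeaning) transformation for panel data.

    Subtracting each individual's time mean removes the individual fixed effect
    (the incidental parameter) from every variable the transformation is applied to.
    """
    x = np.asarray(x, dtype=float)
    out = x.copy()
    for g in np.unique(ids):
        rows = (ids == g)
        out[rows] -= x[rows].mean(axis=0)
    return out

# Toy panel: 3 individuals observed 4 times each, 2 regressors.
rng = np.random.default_rng(5)
ids = np.repeat(np.arange(3), 4)
X = rng.normal(size=(12, 2))
X_within = within_transform(X, ids)
print(np.allclose([X_within[ids == g].mean(axis=0) for g in range(3)], 0.0))  # True
```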

19.
This paper studies a functional coefficient time series model with trending regressors, where the coefficients are unknown functions of time and random variables. We propose a local linear estimation method to estimate the unknown coefficient functions and establish the corresponding asymptotic theory under mild conditions. We also develop a test procedure to check whether the functional coefficients take particular parametric forms. For practical use, we further propose a Bayesian approach to select the bandwidths, and conduct several numerical experiments to examine the finite-sample performance of the proposed local linear estimator and test procedure. The results show that the local linear estimator works well and that the proposed test has satisfactory size and power. In addition, our simulation studies show that the Bayesian bandwidth selection method performs better than the cross-validation method. Furthermore, we use the functional coefficient model to study the relationship between consumption per capita and income per capita in the United States, showing that the functional coefficient model with the proposed local linear estimator and Bayesian bandwidth selection method performs well in both in-sample fitting and out-of-sample forecasting.
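A local linear estimator of time-varying coefficients can be written as a kernel-weighted least squares fit at each grid point, with local slope terms absorbing the linear part of the variation. The sketch below is a minimal version for a model of the form y_t = x_t'beta(t/T) + e_t with an Epanechnikov kernel and a fixed bandwidth; the bandwidth value here is an assumption, whereas the paper selects it by a Bayesian method.

```python
import numpy as np

def local_linear_coeffs(y, X, h, grid=None):
    """Local linear estimate of time-varying coefficients in y_t = x_t' beta(t/T) + e_t.

    At each grid point tau, regress y on [x_t, x_t*(t/T - tau)] with Epanechnikov
    kernel weights in (t/T - tau)/h and keep the level part of the fit."""
    T, p = X.shape
    u = np.arange(T) / T
    grid = u if grid is None else grid
    betas = np.empty((len(grid), p))
    for g, tau in enumerate(grid):
        d = u - tau
        w = np.maximum(0.75 * (1 - (d / h) ** 2), 0.0)   # Epanechnikov kernel weights
        Z = np.hstack([X, X * d[:, None]])               # local linear design matrix
        A = Z.T @ (w[:, None] * Z)
        b = Z.T @ (w * y)
        betas[g] = np.linalg.solve(A, b)[:p]             # keep the level (intercept) part
    return betas

# Toy example with a smoothly varying coefficient.
rng = np.random.default_rng(6)
T = 400
x = rng.normal(size=(T, 1))
beta_true = 1.0 + np.sin(2 * np.pi * np.arange(T) / T)
y = x[:, 0] * beta_true + 0.2 * rng.normal(size=T)
beta_hat = local_linear_coeffs(y, x, h=0.1)
```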

20.
We consider the development of Bayesian Nonparametric methods for product partition models such as Hidden Markov Models and change point models. Our approach uses a Mixture of Dirichlet Process (MDP) model for the unknown sampling distribution (likelihood) for the observations arising in each state and a computationally efficient data augmentation scheme to aid inference. The method uses novel MCMC methodology which combines recent retrospective sampling methods with the use of slice sampler variables. The methodology is computationally efficient, both in terms of MCMC mixing properties, and robustness to the length of the time series being investigated. Moreover, the method is easy to implement requiring little or no user-interaction. We apply our methodology to the analysis of genomic copy number variation.
