Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper we provide a comprehensive Bayesian posterior analysis of trend determination in general autoregressive models. Multiple-lag autoregressive models with fitted drifts and time trends, as well as models that allow for certain types of structural change in the deterministic components, are considered. We utilize a modified information-matrix-based prior that accommodates stochastic nonstationarity, takes into account the interactions between long-run and short-run dynamics, and controls the degree of stochastic nonstationarity permitted. We derive analytic posterior densities for all of the trend-determining parameters via the Laplace approximation to multivariate integrals. We also address the sampling properties of our posteriors under alternative data-generating processes by simulation methods. We apply our Bayesian techniques to the Nelson-Plosser macroeconomic data and various stock price and dividend data. Contrary to DeJong and Whiteman (1989a,b,c), we do not find that the data overwhelmingly favor the existence of deterministic trends over stochastic trends. In addition, we find evidence supporting Perron's (1989) view that some of the Nelson and Plosser data are best construed as trend stationary with a change in the trend function occurring in 1929.
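The Laplace approximation used above to obtain analytic posterior densities can be illustrated in one dimension: the posterior integral is replaced by a Gaussian integral centred at the posterior mode. A minimal sketch; the quadratic log-posterior here is a stand-in for illustration, not the paper's autoregressive posterior:

```python
import math

def laplace_approx(log_post, theta_hat, h=1e-4):
    """Approximate the integral of exp(log_post(t)) dt by Laplace's method:
    a Gaussian centred at the mode theta_hat, with curvature taken from a
    central finite difference of the log-posterior."""
    second = (log_post(theta_hat + h) - 2 * log_post(theta_hat)
              + log_post(theta_hat - h)) / h**2   # d^2/dt^2 log_post at the mode
    return math.exp(log_post(theta_hat)) * math.sqrt(2 * math.pi / (-second))

# Check on a case with a known answer: a N(1, 0.5^2) kernel integrates to
# sqrt(2*pi)*0.5, and Laplace's method is exact for Gaussian kernels.
log_post = lambda t: -0.5 * ((t - 1.0) / 0.5) ** 2
approx = laplace_approx(log_post, theta_hat=1.0)
exact = math.sqrt(2 * math.pi) * 0.5
```

For non-Gaussian posteriors the approximation is only exact asymptotically, which is why the paper also checks sampling properties by simulation.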

3.
"The Office of the Actuary, U.S. Social Security Administration, produces alternative forecasts of mortality to reflect uncertainty about the future.... In this article we identify the components and assumptions of the official forecasts and approximate them by stochastic parametric models. We estimate parameters of the models from past data, derive statistical intervals for the forecasts, and compare them with the official high-low intervals. We use the models to evaluate the forecasts rather than to develop different predictions of the future. Analysis of data from 1972 to 1985 shows that the official intervals for mortality forecasts for males or females aged 45-70 have approximately a 95% chance of including the true mortality rate in any year. For other ages the chances are much less than 95%."
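The kind of coverage evaluation described here, asking whether an interval has roughly a 95% chance of containing the true rate, can be mimicked by simulation. A toy sketch with made-up parameters, not the Social Security mortality data:

```python
import random

random.seed(42)

def coverage(n_sims, lo, hi, mu, sigma):
    """Fraction of simulated 'true' outcomes falling inside [lo, hi]."""
    hits = sum(1 for _ in range(n_sims)
               if lo <= random.gauss(mu, sigma) <= hi)
    return hits / n_sims

# A +/-1.96-sigma band around the mean of a normal outcome should cover
# roughly 95% of realisations; narrower official bands would cover less.
mu, sigma = 0.0, 1.0
cov = coverage(100_000, mu - 1.96 * sigma, mu + 1.96 * sigma, mu, sigma)
```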

4.
In this paper, we compare the forecast ability of GARCH(1,1) and stochastic volatility models for interest rates. The stochastic volatility is estimated using Markov chain Monte Carlo methods. The comparison is based on daily data from 1994 to 1996 for the ten-year swap rates for the Deutsche Mark, Japanese Yen, and Pound Sterling. Various forecast horizons are considered. It turns out that forecasts based on stochastic volatility models are in most cases superior to those obtained by GARCH(1,1) models.
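The GARCH(1,1) side of such a comparison rests on a simple recursion: the one-step variance forecast is exact, and longer horizons revert geometrically to the unconditional variance. A sketch with illustrative parameter values, not estimates from the swap-rate data:

```python
def garch11_forecast(omega, alpha, beta, eps_t, sigma2_t, horizon):
    """Multi-step variance forecasts from a GARCH(1,1) model.

    h = 1 is exact; for h > 1 the forecast reverts to the unconditional
    variance omega / (1 - alpha - beta) at geometric rate alpha + beta.
    """
    persistence = alpha + beta
    uncond = omega / (1.0 - persistence)
    sigma2 = omega + alpha * eps_t**2 + beta * sigma2_t  # one step ahead
    forecasts = [sigma2]
    for _ in range(horizon - 1):
        sigma2 = uncond + persistence * (sigma2 - uncond)
        forecasts.append(sigma2)
    return forecasts

# Illustrative values: a large shock (eps_t = 2) pushes the forecast above
# the unconditional variance of 1.0, and the path decays back toward it.
f = garch11_forecast(omega=0.05, alpha=0.05, beta=0.90,
                     eps_t=2.0, sigma2_t=1.0, horizon=50)
```

A stochastic volatility model, by contrast, treats the log-variance as a latent process, which is why the paper resorts to MCMC for estimation.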

5.
Stochastic population forecasts based on conditional expert opinions (total citations: 1; self-citations: 0; citations by others: 1)
The paper develops and applies an expert-based stochastic population forecasting method, which can also be used to obtain a probabilistic version of scenario-based official forecasts. The full probability distribution of population forecasts is specified by starting from expert opinions on the future development of demographic components. Expert opinions are elicited as conditional on the realization of scenarios, in a two-step (or multiple-step) fashion. The method is applied to develop a stochastic forecast for the Italian population, starting from official scenarios from the Italian National Statistical Office.

6.
This paper shows how cubic smoothing splines fitted to univariate time series data can be used to obtain local linear forecasts. The approach is based on a stochastic state‐space model which allows the use of likelihoods for estimating the smoothing parameter, and which enables easy construction of prediction intervals. The paper shows that the model is a special case of an ARIMA(0, 2, 2) model; it provides a simple upper bound for the smoothing parameter to ensure an invertible model; and it demonstrates that the spline model is not a special case of Holt's local linear trend method. The paper compares the spline forecasts with Holt's forecasts and those obtained from the full ARIMA(0, 2, 2) model, showing that the restricted parameter space does not impair forecast performance. The advantage of this approach over a full ARIMA(0, 2, 2) model is that it gives a smooth trend estimate as well as a linear forecast function.
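Holt's local linear trend method, the benchmark against which the spline forecasts are compared, can be sketched in a few lines; on exactly linear data it recovers the line, so the forecast function is linear, as the paper describes. The smoothing constants below are illustrative, not fitted values:

```python
def holt_forecast(y, alpha, beta_star, horizon):
    """Holt's local linear trend method: level and trend recursions,
    followed by the linear forecast function l_T + h * b_T."""
    level, trend = y[0], y[1] - y[0]          # simple initialisation
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta_star * (level - prev_level) + (1 - beta_star) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

# On the exactly linear series y_t = 2t + 1 the recursions track the line,
# so the forecasts simply continue it.
y = [2 * t + 1 for t in range(10)]
fc = holt_forecast(y, alpha=0.5, beta_star=0.5, horizon=3)
```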

7.
The time series properties of the temperature reconstruction of Moberg and co-workers are analysed. It is found that the record appears to exhibit long memory characteristics that can be modelled by an autoregressive fractionally integrated moving average process that is both stationary and mean reverting, so that forecasts will eventually return to a constant underlying level. Recent research has suggested that long memory and shifts in level and trend may be confused with each other, and fitting models with slowly changing trends is found to remove the evidence of long memory. Discriminating between the two models is difficult, however, and the strikingly different forecasts that are implied by the two models point towards some intriguing research questions concerning the stochastic process driving this temperature reconstruction.
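The fractional integration underlying an ARFIMA model can be made concrete through the weights of the (1 - L)^d filter, which follow a simple recursion; for 0 < d < 1/2 they decay hyperbolically rather than cutting off, which is the long-memory signature at issue. A sketch:

```python
def fracdiff_weights(d, n):
    """First n coefficients of the (1 - L)**d fractional-difference filter,
    via the recursion psi_0 = 1, psi_k = psi_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

# d = 1 recovers ordinary first differencing: 1, -1, 0, 0, ...
w_int = fracdiff_weights(1.0, 5)
# For 0 < d < 0.5 the weights shrink slowly, so distant observations
# still matter -- the long-memory behaviour discussed above.
w_frac = fracdiff_weights(0.3, 5)
```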

8.
Although a previous study found that neural network forecasts were more accurate than time series models for predicting Latin American stock indexes, the forecasting accuracy of neural networks for predicting gold futures prices has never been discussed. Therefore, the first objective of this study is to compare the forecasting accuracy of a neural network model with that of ARIMA models. Furthermore, fluctuations in gold futures are influenced not only by quantitative variables but also by many nonquantifiable factors, such as wars, international relations, and terrorist attacks. The second objective of this study is therefore to propose the integration of text mining and an artificial neural network to forecast gold futures prices. The historical gold futures prices from 1999 to 2008 were used as training and testing data, and the prices of 2009 were used to examine the effectiveness of the proposed model. The results of the empirical analysis showed that an artificial neural network forecasted gold futures prices better than ARIMA models did. In addition, text mining provided a reasonable explanation of the trend in gold futures prices.

9.
In this paper, we introduce a new distribution, called the alpha-skew generalized normal (ASGN), for GARCH models in modeling daily Value-at-Risk (VaR). Basic structural properties of the proposed distribution are derived, including the probability density and cumulative distribution functions, moments, and a stochastic representation. A real-data application based on the ISE-100 index is given to show the performance of the GARCH model specified under the ASGN innovation distribution relative to the normal, Student's t, skew normal, and generalized normal models in terms of VaR accuracy. The empirical results show that the GARCH model with the ASGN innovation distribution generates the most accurate VaR forecasts for all confidence levels.
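Generically, the VaR forecast being evaluated here is a tail quantile of the innovation distribution scaled by the GARCH volatility forecast. A sketch using a normal quantile for simplicity (the paper's point is precisely to replace the normal with the ASGN distribution; the inputs are illustrative, not ISE-100 estimates):

```python
from statistics import NormalDist

def var_forecast(mu_t, sigma_t, level=0.99):
    """One-day Value-at-Risk implied by a conditional mean and volatility
    forecast under normal innovations, reported as a positive loss."""
    z = NormalDist().inv_cdf(1.0 - level)   # left-tail quantile, e.g. -2.326 at 99%
    return -(mu_t + sigma_t * z)

# Illustrative conditional moments, e.g. from a fitted GARCH model:
var99 = var_forecast(mu_t=0.0, sigma_t=1.5, level=0.99)
```

Swapping in a heavier-tailed or skewed innovation quantile changes only the `z` line, which is how alternative innovation distributions compete on VaR accuracy.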

10.
This article is concerned with the development of a statistical model-based approach to optimally combine forecasts derived from an extrapolative model, such as an autoregressive integrated moving average (ARIMA) time series model, with forecasts of a particular characteristic of the same series obtained from independent sources. The methods derived combine the strengths of all forecasting approaches considered in the combination scheme. The implications of the general theory are investigated in the context of some commonly encountered seasonal ARIMA models. An empirical example to illustrate the method is included.
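A classical special case of optimal forecast combination, inverse-variance weighting of unbiased, uncorrelated forecasts, conveys the idea; the article's model-based scheme is more general, and the numbers here are purely illustrative:

```python
def combine_forecasts(forecasts, variances):
    """Optimal linear combination of unbiased, uncorrelated forecasts:
    inverse-variance weights minimise the combined forecast variance."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    weights = [i / total for i in inv]
    combined = sum(w * f for w, f in zip(weights, forecasts))
    combined_var = 1.0 / total          # never worse than the best input
    return combined, weights, combined_var

# A precise forecast (variance 1) gets weight 0.8; a noisy one (variance 4)
# gets 0.2, and the combination beats either forecast alone.
combined, weights, cvar = combine_forecasts([10.0, 14.0], [1.0, 4.0])
```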

11.
The Box–Jenkins methodology for modeling and forecasting from univariate time series models has long been considered a standard to which other forecasting techniques have been compared. To a Bayesian statistician, however, the method lacks an important facet—a provision for modeling uncertainty about parameter estimates. We present a technique called sampling the future for including this feature in both the estimation and forecasting stages. Although it is relatively easy to use Bayesian methods to estimate the parameters in an autoregressive integrated moving average (ARIMA) model, there are severe difficulties in producing forecasts from such a model. The multiperiod predictive density does not have a convenient closed form, so approximations are needed. In this article, exact Bayesian forecasting is approximated by simulating the joint predictive distribution. First, parameter sets are randomly generated from the joint posterior distribution. These are then used to simulate future paths of the time series. This bundle of many possible realizations is used to project the future in several ways. Highest probability forecast regions are formed and portrayed with computer graphics. The predictive density's shape is explored. Finally, we discuss a method that allows the analyst to subjectively modify the posterior distribution on the parameters and produce alternate forecasts.
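The "sampling the future" procedure can be sketched for a simple AR(1): draw a parameter set from the posterior sample for each path, simulate the series forward, and read forecast bands off the bundle of realizations. The posterior draws below are synthetic stand-ins, not the output of an actual Bayesian ARIMA fit:

```python
import random

random.seed(0)

def sample_the_future(y_last, draws, horizon, n_paths):
    """'Sampling the future': for each simulated path, draw a parameter
    set from the posterior sample, then simulate the series forward."""
    paths = []
    for _ in range(n_paths):
        phi, sigma = random.choice(draws)   # one posterior draw per path
        y, path = y_last, []
        for _ in range(horizon):
            y = phi * y + random.gauss(0.0, sigma)
            path.append(y)
        paths.append(path)
    return paths

# Hypothetical AR(1) posterior draws; in the article these come from the
# joint posterior of the ARIMA parameters.
posterior_draws = [(0.5 + 0.02 * random.random(), 1.0) for _ in range(200)]
paths = sample_the_future(y_last=2.0, draws=posterior_draws,
                          horizon=5, n_paths=2000)

# Pointwise 90% forecast band at horizon 1, read off the bundle:
h1 = sorted(p[0] for p in paths)
lo, hi = h1[int(0.05 * len(h1))], h1[int(0.95 * len(h1))]
```

Highest-probability forecast regions are obtained the same way, by ranking the simulated values rather than taking symmetric percentiles.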

12.
This paper extends stochastic conditional duration (SCD) models for financial transaction data to allow for correlation between the innovations of the observed duration process and the latent log-duration process. Suitable Markov chain Monte Carlo (MCMC) algorithms are developed to fit the resulting SCD models under various distributional assumptions about the innovation of the measurement equation. Unlike the estimation methods commonly used to estimate SCD models in the literature, we work with the original specification of the model, without subjecting the observation equation to a logarithmic transformation. Results of simulation studies suggest that our proposed models and corresponding estimation methodology perform quite well. We also apply an auxiliary particle filter technique to construct one-step-ahead in-sample and out-of-sample duration forecasts from the fitted models. Applications to the IBM transaction data allow comparison of our models and methods to those existing in the literature.

13.
Emrah Altun, Statistics, 2019, 53(2): 364-386
In this paper, we introduce a new distribution, called the generalized Gudermannian (GG) distribution, and its skew extension for GARCH models in modelling daily Value-at-Risk (VaR). Basic structural properties of the proposed distribution are obtained, including the probability density and cumulative distribution functions, moments, and a stochastic representation. The maximum likelihood method is used to estimate the unknown parameters of the proposed model, and the finite-sample performance of the maximum likelihood estimates is evaluated by means of a Monte Carlo simulation study. A real-data application on the Nikkei 225 index is given to demonstrate the performance of the GARCH model specified under the skew extension of the GG innovation distribution against the normal, Student's t, skew normal, generalized error, and skew generalized error distributions in terms of the accuracy of VaR forecasts. The empirical results show that the GARCH model with the GG innovation distribution produces the most accurate VaR forecasts for all confidence levels.
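VaR forecast accuracy of the kind compared here is typically judged by the violation rate: the fraction of days on which the realized loss exceeds the VaR forecast, which should match the nominal tail probability. A toy sketch with made-up returns, not Nikkei 225 data:

```python
def var_hit_rate(returns, var_forecasts):
    """Fraction of days on which the loss exceeded the VaR forecast.
    For a well-calibrated 99% VaR this should be close to 1%."""
    hits = sum(1 for r, v in zip(returns, var_forecasts) if -r > v)
    return hits / len(returns)

# Made-up daily returns and a constant VaR forecast of 2.5:
returns = [0.5, -1.0, 0.2, -3.0, 0.1, -0.4, -2.6, 0.3, 0.0, -0.2]
vars99 = [2.5] * 10
rate = var_hit_rate(returns, vars99)   # two violations out of ten days
```

Formal backtests (e.g. likelihood-ratio tests of the hit rate) build directly on this count of violations.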

14.
In this paper we discuss recursive (or online) estimation in (i) regression and (ii) autoregressive integrated moving average (ARIMA) time series models. The adopted approach uses Kalman filtering techniques to calculate estimates recursively. This approach is used for the estimation of constant as well as time-varying parameters. In the first section of the paper we consider the linear regression model. We discuss recursive estimation both for constant and time-varying parameters. For constant parameters, Kalman filtering specializes to recursive least squares. In general, we allow the parameters to vary according to an autoregressive integrated moving average process and update the parameter estimates recursively. Since the stochastic model for the parameter changes will rarely be known, simplifying assumptions have to be made. In particular, we assume a random walk model for the time-varying parameters and show how to determine whether the parameters are changing over time. This is illustrated with an example.
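The specialization of the Kalman filter to recursive least squares for a constant parameter can be sketched for a single regressor; with a diffuse prior variance, the recursive estimate matches batch OLS. The data below are illustrative:

```python
def recursive_ls(xs, ys, theta0=0.0, p0=1e6):
    """Recursive least squares for y_t = theta * x_t + noise: the Kalman
    filter with a constant state and unit measurement-noise variance.
    A diffuse prior variance p0 makes the result match batch OLS."""
    theta, p = theta0, p0
    for x, y in zip(xs, ys):
        gain = p * x / (1.0 + p * x * x)        # Kalman gain
        theta = theta + gain * (y - theta * x)  # update with the residual
        p = p * (1.0 - gain * x)                # posterior variance
    return theta

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
theta_rls = recursive_ls(xs, ys)
# Batch OLS through the origin, for comparison:
theta_ols = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

Letting `p` be re-inflated each step (a random-walk state) yields the time-varying-parameter version discussed in the paper.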

16.
Forecasting Performance of an Open Economy DSGE Model (total citations: 1; self-citations: 0; citations by others: 1)
Econometric Reviews, 2007, 26(2): 289-328
This paper analyzes the forecasting performance of an open economy dynamic stochastic general equilibrium (DSGE) model, estimated with Bayesian methods, for the Euro area during 1994Q1-2002Q4. We compare the DSGE model and a few variants of this model to various reduced-form forecasting models such as vector autoregressions (VARs) and vector error correction models (VECM), estimated both by maximum likelihood and two different Bayesian approaches, and traditional benchmark models, e.g., the random walk. The accuracy of point forecasts, interval forecasts and the predictive distribution as a whole are assessed in an out-of-sample rolling event evaluation using several univariate and multivariate measures. The results show that the open economy DSGE model compares well with more empirical models and thus that the tension between rigor and fit in older generations of DSGE models is no longer present. We also critically examine the role of Bayesian model probabilities and other frequently used low-dimensional summaries, e.g., the log determinant statistic, as measures of overall forecasting performance.

17.
Accurate wind power forecasts depend on reliable wind speed forecasts. Numerical weather predictions utilize huge amounts of computing time, but still have rather low spatial and temporal resolution. However, stochastic wind speed forecasts perform well in rather high temporal resolution settings. They consume comparably little computing resources and return reliable forecasts, if forecasting horizons are not too long. In the recent literature, spatial interdependence is increasingly taken into consideration. In this paper we propose a new and quite flexible multivariate model that accounts for neighbouring weather stations' information and, as such, exploits spatial data at a high resolution. The model is applied to forecasting horizons of up to one day and is capable of handling a high-resolution temporal structure. We use a periodic vector autoregressive model with seasonal lags to account for the interaction of the explanatory variables. Periodicity is modelled by cubic B-splines. Due to the model's flexibility, the number of explanatory variables becomes huge. Therefore, we utilize time-saving shrinkage methods such as the lasso and elastic net for estimation. In particular, a relatively new iteratively re-weighted lasso and elastic net is applied that also incorporates heteroscedasticity. We compare our model to several benchmarks. The out-of-sample forecasting results show that exploiting spatial information increases the forecasting accuracy tremendously, in comparison to models in use so far.
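The lasso step rests on the soft-thresholding operator inside a coordinate-descent loop, which is what shrinks irrelevant coefficients exactly to zero when the model has many lags. A minimal sketch on a toy design, without the re-weighting or elastic-net extensions and with no connection to the wind data:

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator, the core of coordinate-descent lasso."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent lasso minimising
    (1/2n) * ||y - X b||^2 + lam * sum(|b_j|), cycling over coordinates."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            denom = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam) / denom
    return b

# Tiny illustration: y depends on the first column only, so the lasso
# shrinks the second coefficient exactly to zero.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]
y = [2.0, 0.1, 2.0, -0.1, 2.0, 0.0]
b = lasso_cd(X, y, lam=0.05)
```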

18.
The aim of this paper is to describe several methods for quantifying the amount of uncertainty inherent in population forecasts used to assess the impact of demographic processes on social security systems. Each method is briefly outlined, and its advantages and disadvantages are discussed. The primary emphasis is on stochastic population models, and the geographic focus is on the Federal Republic of Germany.

19.
This article derives the large-sample distributions of Lagrange multiplier (LM) tests for parameter instability against several alternatives of interest in the context of cointegrated regression models. The fully modified estimator of Phillips and Hansen is extended to cover general models with stochastic and deterministic trends. The test statistics considered include the SupF test of Quandt, as well as the LM tests of Nyblom and of Nabeya and Tanaka. It is found that the asymptotic distributions depend on the nature of the regressor processes—that is, if the regressors are stochastic or deterministic trends. The distributions are noticeably different from the distributions when the data are weakly dependent. It is also found that the lack of cointegration is a special case of the alternative hypothesis considered (an unstable intercept), so the tests proposed here may also be viewed as a test of the null of cointegration against the alternative of no cointegration. The tests are applied to three data sets—an aggregate consumption function, a present value model of stock prices and dividends, and the term structure of interest rates.
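The SupF test of Quandt referred to here takes the largest Chow-type F statistic over interior candidate break dates. A toy sketch for a one-time shift in the mean of a scalar series; the article's cointegrated-regression setting and its nonstandard critical values are beyond this illustration:

```python
def sse(values):
    """Sum of squared deviations from the segment mean."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values)

def sup_f(y, trim=0.15):
    """Quandt's SupF: the largest Chow-type F statistic for a one-time
    mean shift, taken over interior candidate break dates."""
    n = len(y)
    ssr_restricted = sse(y)                     # no-break model
    best = 0.0
    for k in range(int(trim * n), int((1 - trim) * n)):
        ssr_split = sse(y[:k]) + sse(y[k:])     # mean shift at date k
        f_stat = (ssr_restricted - ssr_split) / (ssr_split / (n - 2))
        best = max(best, f_stat)
    return best

# A clear level shift produces a huge statistic; a stable series does not.
shifted = [0.1 * (i % 2) for i in range(20)] + [5.0 + 0.1 * (i % 2) for i in range(20)]
stable = [float(i % 2) for i in range(40)]
f_shift, f_stable = sup_f(shifted), sup_f(stable)
```

Because the break date is searched over, SupF does not follow the usual F distribution, which is one reason the article derives its limiting distributions explicitly.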

20.
A method of information-criterion-based cointegration detection using dynamic factor models is proposed. The results of data-based and non-data-based Monte Carlo simulations suggest that this method is as effective as conventional hypothesis-testing methods. In the proposed method, an observed multivariate time series is described in terms of common stochastic trends plus stationary autoregressive cycles. The best model is then selected from among alternative models obtained by varying the number of common stochastic trends, on the basis of information criteria. The cointegration rank is determined from the selected model. Two advantages of the proposed method are also discussed.
