Similar Documents

Found 20 similar documents.
1.
A number of volatility forecasting studies have led to the perception that ARCH- and Stochastic Volatility-type models provide poor out-of-sample forecasts of volatility. This perception rests primarily on traditional forecast evaluation criteria concerning the accuracy and unbiasedness of forecasts. In this paper we provide an analytical assessment of volatility forecasting performance. Working in both the volatility and the log-volatility framework, we prove how the inherent noise in approximating the true (and unobservable) volatility by the squared return results in a misleading forecast evaluation, inflating the observed mean squared forecast error and invalidating the Diebold-Mariano statistic. We analytically characterize this noise and explicitly quantify its effects assuming normal errors. We extend our results to more general error structures, such as the Compound Normal and Gram-Charlier classes of distributions. We argue that evaluation problems are likely to be exacerbated by non-normality of the shocks, and that non-linear and utility-based criteria can be more suitable for the evaluation of volatility forecasts.
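The effect described here can be illustrated with a minimal simulation (not from the paper): even a perfect forecast of a constant true variance shows a large mean squared error when evaluated against the squared-return proxy, because for normal errors Var(r²) = 2σ⁴.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_var = 1.0                              # constant true conditional variance
r = rng.normal(0.0, np.sqrt(true_var), n)   # returns with normal errors

proxy = r ** 2                              # squared-return volatility proxy

# A perfect forecast equals the true variance, yet its measured MSE
# against the noisy proxy is far from zero: for normal errors
# Var(r^2) = 2 * sigma^4, which alone contributes 2 * true_var**2.
mse_vs_proxy = np.mean((proxy - true_var) ** 2)
print(mse_vs_proxy)                         # close to 2.0, not 0.0
```

The proxy noise, not the forecast, drives the entire measured error here, which is the evaluation problem the paper quantifies.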

2.
Accurate volatility forecasting is a key determinant of portfolio management, risk management and economic policy. The paper provides evidence that the sum of squared standardized forecast errors is a reliable measure for model evaluation when the predicted variable is the intra-day realized volatility. The forecast evaluation is valid for standardized forecast errors with a leptokurtic distribution, as well as with leptokurtic and asymmetric distributions. Additionally, the widely applied forecast evaluation function, the predicted mean squared error, fails to select the adequate model when the model residuals are leptokurtically and asymmetrically distributed. Hence, realized volatility forecast evaluation should be based on the standardized forecast errors rather than on their unstandardized version.
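As a toy illustration of the standardized forecast errors used here (with a hypothetical variance path, not the paper's realized-volatility data): when the variance model is correctly specified, the squared standardized errors average to one even though the raw errors are leptokurtic, which is what makes their sum usable for evaluation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical time-varying conditional variances of a correctly
# specified model (an illustrative assumption, not the paper's data).
sigma2 = 0.5 + rng.gamma(2.0, 0.5, n)
e = rng.normal(0.0, np.sqrt(sigma2))     # leptokurtic raw forecast errors

z = e / np.sqrt(sigma2)                  # standardized forecast errors
ssse = np.sum(z ** 2) / n                # scaled sum of squared std. errors
print(ssse)                              # close to 1 for the correct model
```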

3.
Abstract

We propose an improved forecasting model that merges two computational models for predicting future volatility. The model integrates wavelet analysis with the EGARCH model: a pre-processing step based on the wavelet transform de-noises the observed signal, and the de-noised signal is then fed into the EGARCH model to forecast volatility. The predictive capability of the proposed model is compared with that of the standard EGARCH model. The results show that the hybrid model improves the accuracy of forecasting future volatility.

4.
Recent advances in financial econometrics have allowed for the construction of efficient ex post measures of daily volatility. This paper investigates the importance of instability in models of realised volatility and their corresponding forecasts. Testing for model instability is conducted with a subsampling method. We show that removing structurally unstable data of a short duration has a negligible impact on the accuracy of conditional mean forecasts of volatility. In contrast, it does provide a substantial improvement in a model's forecast density of volatility. In addition, the forecasting performance improves, often dramatically, when we evaluate models on structurally stable data.

5.
We propose a parametric nonlinear time-series model, namely the autoregressive stochastic volatility with threshold (AR-SVT) model with a mean equation, for forecasting both level and volatility. Methodology for estimating the parameters of this model is developed by first obtaining the recursive Kalman filter time-update equation and then employing the unrestricted quasi-maximum likelihood method. Furthermore, optimal one-step and two-step-ahead out-of-sample forecast formulae, along with forecast error variances, are derived analytically by recursive use of conditional expectation and variance. As an illustration, volatile all-India monthly spice exports during the period January 2006 to January 2012 are considered. The entire data analysis is carried out using the EViews and MATLAB software packages. The AR-SVT model is fitted and interval forecasts for 10 hold-out data points are obtained. For the data under consideration, the superiority of this model for describing and forecasting is shown over other competing volatility models, namely the AR-GARCH, AR-EGARCH, AR-Threshold GARCH, and AR-stochastic volatility models. Finally, for the AR-SVT model, optimal out-of-sample forecasts along with forecasts of one-step-ahead variances are obtained.

6.
This study extends the affine Nelson–Siegel model by introducing a time-varying volatility component, modeled as a standard EGARCH process, in the observation equation of the yield curve. The model is cast in a state-space framework and empirically compared with the standard affine and dynamic Nelson–Siegel models in terms of in-sample fit and out-of-sample forecast accuracy. The affine-based extended model that accounts for time-varying volatility outperforms the other models in fitting the yield curve and produces relatively more accurate 6- and 12-month-ahead forecasts, while the standard affine model delivers more precise forecasts at the very short forecast horizons. The study concludes that the standard and affine Nelson–Siegel models have higher forecasting capability than their EGARCH-based counterparts at short forecast horizons, i.e., 1 month, whereas the EGARCH-based extended models perform best at the medium and longer forecast horizons.

7.
Abstract

In this paper, using the estimating function approach, a new optimal volatility estimator is introduced, and based on the recursive form of this estimator a data-driven generalized EWMA model for value-at-risk (VaR) forecasting is proposed. An appropriate data-driven model for volatility is identified by the relationship between absolute deviation and standard deviation for symmetric distributions with finite variance. It is shown that the asymptotic variance of the proposed volatility estimator is smaller than that of conventional estimators, making it more appropriate for financial data with larger kurtosis. For IBM, Microsoft and Apple stocks and the S&P 500 index, the proposed method is used to identify the model, estimate the volatility, and obtain minimum mean square error (MMSE) forecasts of VaR.
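A minimal sketch of a standard EWMA volatility recursion and a normal-based one-step VaR forecast; the λ = 0.94 decay and the normality assumption are conventional baselines, not the paper's estimating-function or data-driven choices.

```python
import numpy as np
from statistics import NormalDist

def ewma_variance(returns, lam=0.94):
    """RiskMetrics-style recursion: s2_t = lam*s2_{t-1} + (1-lam)*r_{t-1}^2."""
    r = np.asarray(returns, dtype=float)
    s2 = np.empty_like(r)
    s2[0] = np.var(r)                        # simple initialization choice
    for t in range(1, len(r)):
        s2[t] = lam * s2[t - 1] + (1 - lam) * r[t - 1] ** 2
    return s2

def var_forecast(s2_next, alpha=0.05):
    """One-step-ahead VaR under a normal assumption (a baseline only;
    the paper argues for accommodating heavier tails)."""
    z = NormalDist().inv_cdf(alpha)          # about -1.645 for alpha = 0.05
    return -z * s2_next ** 0.5

r = np.full(500, 0.01)                       # constant 1% returns (toy input)
s2 = ewma_variance(r)
print(s2[-1], var_forecast(s2[-1]))          # recursion converges to 1e-4
```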

8.
Given a multiple time series that is generated by a multivariate ARMA process and assuming the objective is to forecast a weighted sum of the individual variables, then under a mean squared error measure of forecasting precision, it is preferable to forecast the disaggregated multiple time series and aggregate the forecasts, rather than forecast the aggregated series directly, if the involved processes are known. This result fails to hold if the processes used for forecasting are estimated from a given set of time series data. The implications of these results for empirical research are investigated using different sets of economic data.

9.
ABSTRACT

We analyze the evolution of macroeconomic uncertainty in the United States, based on the forecast errors of consensus survey forecasts of various economic indicators. The comprehensive information contained in the survey forecasts enables us to capture a real-time measure of the uncertainty surrounding subjective forecasts in a simple framework. We jointly model and estimate the macroeconomic (common) and indicator-specific uncertainties of four indicators, using a factor stochastic volatility model. Our macroeconomic uncertainty estimates have three major spikes, aligned with the 1973–1975, 1980, and 2007–2009 recessions, while other recessions were characterized by increases in indicator-specific uncertainties. We also show that the selection of data vintages affects the estimates and the relative size of jumps in the estimated uncertainty series. Finally, our macroeconomic uncertainty has a persistent negative impact on real economic activity, rather than producing “wait-and-see” dynamics.

10.
ABSTRACT

We propose point forecast accuracy measures based directly on the distance of the forecast-error c.d.f. from the unit step function at 0 (“stochastic error distance,” or SED). We provide a precise characterization of the relationship between SED and standard predictive loss functions, and we show that all such loss functions can be written as weighted SEDs. The leading case is absolute-error loss. Among other things, this suggests shifting attention away from conditional-mean forecasts and toward conditional-median forecasts.
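The SED idea can be checked numerically in a small sketch (with a hypothetical error distribution): with unit weighting, the integrated distance between the empirical error c.d.f. and the step function at 0 reproduces the mean absolute error.

```python
import numpy as np

rng = np.random.default_rng(2)
e = rng.normal(0.5, 1.0, 200_000)         # hypothetical forecast errors

xs = np.linspace(-6.0, 7.0, 20_001)       # grid covering the error support
dx = xs[1] - xs[0]
F = np.searchsorted(np.sort(e), xs, side="right") / e.size
step = (xs >= 0.0).astype(float)          # unit step function at 0

sed = np.sum(np.abs(F - step)) * dx       # stochastic error distance
mae = np.mean(np.abs(e))                  # absolute-error loss

print(sed, mae)                           # the two numbers nearly coincide
```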

11.
《Econometric Reviews》2012,31(1):54-70
Abstract

This study forecasts the volatility of two energy futures markets (oil and gas) using high-frequency data. We first disentangle volatility into continuous volatility and jumps. Second, we apply wavelet analysis to study the relationship between volume and the volatility measures over different horizons. Third, we augment the heterogeneous autoregressive (HAR) model by nonlinearly including both jumps and volume, and we propose several empirical extensions of the HAR model. Our study shows that oil and gas volatilities depend nonlinearly on public information (jumps), private information (continuous volatility), and trading volume. Moreover, our threshold-augmented HAR model with heterogeneous jumps and continuous volatility outperforms the standard HAR model in forecasting volatility.
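The baseline HAR-RV regression (Corsi's daily, weekly and monthly averages), which the paper augments with jumps and volume, can be sketched as plain OLS; the simulated volatility series below is an illustrative assumption, not the paper's futures data.

```python
import numpy as np

def har_fit_forecast(rv):
    """Fit the baseline HAR-RV model by OLS and return (coefficients,
    one-step-ahead forecast).  Regressors: yesterday's RV and its
    5-day and 22-day averages."""
    rv = np.asarray(rv, dtype=float)
    T = len(rv)
    rows = range(21, T - 1)                               # need 22-day history
    d = rv[21:T - 1]                                      # daily component
    w = np.array([rv[t - 4:t + 1].mean() for t in rows])  # weekly component
    m = np.array([rv[t - 21:t + 1].mean() for t in rows]) # monthly component
    X = np.column_stack([np.ones_like(d), d, w, m])
    y = rv[22:]                                           # next-day RV
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    x_next = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean()])
    return beta, x_next @ beta

# Simulated persistent (log-AR(1)) volatility series, for illustration only.
rng = np.random.default_rng(3)
lv = np.zeros(600)
for t in range(1, 600):
    lv[t] = 0.95 * lv[t - 1] + 0.2 * rng.normal()
rv = np.exp(lv)

beta, forecast = har_fit_forecast(rv)
print(beta, forecast)
```

The paper's extensions (jump and volume terms, threshold effects) would enter as additional columns of `X` in this sketch.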

12.
We construct a monthly real-time dataset consisting of vintages for 1991.1–2010.12 that is suitable for generating forecasts of the real price of oil from a variety of models. We document that revisions of the data typically represent news, and we introduce backcasting and nowcasting techniques to fill gaps in the real-time data. We show that real-time forecasts of the real price of oil can be more accurate than the no-change forecast at horizons up to 1 year. In some cases, real-time mean squared prediction error (MSPE) reductions may be as high as 25% 1 month ahead and 24% 3 months ahead. This result is in striking contrast to related results in the literature for asset prices. In particular, recursive vector autoregressive (VAR) forecasts based on global oil market variables tend to have lower MSPE at short horizons than forecasts based on oil futures prices, forecasts based on autoregressive (AR) and autoregressive moving average (ARMA) models, and the no-change forecast. In addition, these VAR models have consistently higher directional accuracy.

13.
Error measures for the evaluation of forecasts are usually based on the size of the forecast errors. Common measures include the mean squared error (MSE), the mean absolute deviation (MAD) and the mean absolute percentage error (MAPE). Alternative measures for the comparison of forecasts are turning points or hits-and-misses, where an indicator loss function is used to decide whether a forecast is of high quality. Here, we discuss the latter in order to obtain reliable combined forecasts. We apply several combination techniques to a set of German macroeconomic data. Furthermore, we perform a small simulation study on the combination of two biased forecasts.
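A minimal sketch of the indicator ("hits-and-misses") loss mentioned here: a forecast scores a hit when it predicts the correct direction of change. The scoring convention is an illustrative assumption, not the paper's exact definition.

```python
import numpy as np

def hit_rate(actual, forecast):
    """Share of periods in which the forecasted change has the same sign
    as the actual change (a simple directional indicator loss)."""
    da = np.sign(np.diff(np.asarray(actual, dtype=float)))
    df = np.sign(np.diff(np.asarray(forecast, dtype=float)))
    return float(np.mean(da == df))

actual = [1.0, 2.0, 1.5, 2.5]        # toy series
forecast = [1.1, 2.2, 1.9, 1.8]      # gets two of three directions right
print(hit_rate(actual, forecast))
```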

14.
Many economic and financial time series exhibit heteroskedasticity, where changes in variability are often driven by recent past shocks, causing large or small fluctuations to cluster together. Classical ways of modelling the changing variance include Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models and neural network models. The paper starts with a comparative study of these two models, applied to monthly and daily exchange rates for three countries, both in terms of capturing the non-linear or heteroskedastic structure and in terms of forecasting performance. The paper then considers different methods for combining the volatility forecasts of the competing models in order to improve forecasting accuracy. Traditional methods for combining the predicted values from different models under various weighting schemes are considered, such as the simple average or methods that choose the weights to minimize the squared forecast error. The main purpose of the paper, however, is to propose an alternative methodology for combining forecasts effectively. The proposed non-linear, non-parametric, kernel-based method is shown to have the basic advantage of not being affected by outliers, structural breaks or shocks to the system, and it does not require a specific functional form for the combination.
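The "weights that minimize the squared forecast error" mentioned here can be sketched with the classic Bates-Granger formula for two forecasts; the simulated target and forecasts are illustrative assumptions.

```python
import numpy as np

def bates_granger_weight(e1, e2):
    """MSE-minimizing weight on forecast 1, with weights summing to 1:
    w = (var(e2) - cov(e1, e2)) / (var(e1) + var(e2) - 2*cov(e1, e2))."""
    v1, v2 = np.var(e1), np.var(e2)
    c = np.cov(e1, e2)[0, 1]
    return (v2 - c) / (v1 + v2 - 2.0 * c)

rng = np.random.default_rng(4)
y = rng.normal(0.0, 1.0, 10_000)             # target series
f1 = y + rng.normal(0.0, 0.5, y.size)        # more accurate forecast
f2 = y + rng.normal(0.0, 1.0, y.size)        # less accurate forecast

w = bates_granger_weight(y - f1, y - f2)
combined = w * f1 + (1.0 - w) * f2

# The combination beats even the better individual forecast in MSE.
print(w, np.mean((y - combined) ** 2), np.mean((y - f1) ** 2))
```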

15.
This article examines the prediction contest as a vehicle for aggregating the opinions of a crowd of experts. After proposing a general definition distinguishing prediction contests from other mechanisms for harnessing the wisdom of crowds, we focus on point-forecasting contests—contests in which forecasters submit point forecasts with a prize going to the entry closest to the quantity of interest. We first illustrate the incentive for forecasters to submit reports that exaggerate in the direction of their private information. Whereas this exaggeration raises a forecaster's mean squared error, it increases his or her chances of winning the contest. And in contrast to conventional wisdom, this nontruthful reporting usually improves the accuracy of the resulting crowd forecast. The source of this improvement is that exaggeration shifts weight away from public information (information known to all forecasters) and by so doing helps alleviate public knowledge bias. In the context of a simple theoretical model of overlapping information and forecaster behaviors, we present closed-form expressions for the mean squared error of the crowd forecasts which will help identify the situations in which point forecasting contests will be most useful.

16.
We consider measurement error models within the time series unobserved component framework. A variable of interest is observed with some measurement error and modelled as an unobserved component. The forecasts and predictions of this variable given the observed values are obtained from the Kalman filter and smoother, along with their conditional variances. By expressing the forecasts and predictions as weighted averages of the observed values, we investigate the effect of estimation error in the measurement and observation noise variances. We also develop corrected standard errors for prediction and forecasting that account for the fact that the measurement and observation error variances are estimated from the same sample that is used for forecasting and prediction. We apply the theory to the Yellowstone grizzly bears and US index of production datasets.
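A minimal local-level sketch of the Kalman recursions referred to here; the variances q and r and the data are illustrative assumptions, and the paper's corrections for estimated variances are not reproduced.

```python
import numpy as np

def local_level_filter(y, q, r, a0=0.0, p0=1e6):
    """Kalman filter for y_t = mu_t + e_t (measurement error, var r),
    mu_{t+1} = mu_t + u_t (level noise, var q).  Returns one-step-ahead
    predictions of y and their variances."""
    a, p = a0, p0
    preds, pvars = [], []
    for obs in y:
        preds.append(a)                  # prediction of y given the past
        pvars.append(p + r)              # its conditional variance
        k = p / (p + r)                  # Kalman gain
        a = a + k * (obs - a)            # filtered level
        p = (1.0 - k) * p + q            # state variance for next step
    return np.array(preds), np.array(pvars)

rng = np.random.default_rng(5)
y = 5.0 + rng.normal(0.0, 1.0, 500)      # constant level observed with noise
preds, pvars = local_level_filter(y, q=0.0, r=1.0)
print(preds[-1])                         # close to the true level 5.0
```

With q = 0 the filtered level reduces to a running weighted mean, and the prediction variance shrinks toward the measurement noise variance r.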

17.
Classical time-series theory assumes the values of the response variable to be ‘crisp’ or ‘precise’, an assumption that is quite often violated in reality. Forecasting of such data can, however, be carried out through fuzzy time-series analysis. This article presents an improved method of forecasting based on LR fuzzy sets as membership functions. As an illustration, the methodology is employed to forecast India's total foodgrain production. For the data under consideration, the superiority of the proposed method over other competing methods is demonstrated, in respect of both modelling and forecasting, on the basis of the mean square error and average relative error criteria. Finally, out-of-sample forecasts are also obtained.

18.
ABSTRACT

We consider Pitman closeness to evaluate the performance of univariate and multivariate forecasting methods. Optimal weights for the combination of forecasts are calculated with respect to this criterion. These weights depend on the assumed distribution of the individual forecast errors. In the normal case they are identical with the optimal weights with respect to the MSE criterion (univariate case) and with the optimal weights with respect to the MMSE criterion (multivariate case). Further, we present a simple example to show how the different combination techniques perform and how much the optimal multivariate combination can outperform various other combinations. In practice, multivariate forecasts arise, for example, in econometrics, where forecasting institutes often estimate several economic variables jointly.
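Pitman closeness compares paired forecasts by how often one is strictly closer to the target. A toy simulation (with illustrative error distributions, not the paper's combination weights):

```python
import numpy as np

def pitman_closeness(err_a, err_b):
    """Fraction of cases where forecast A is strictly closer than B."""
    return float(np.mean(np.abs(err_a) < np.abs(err_b)))

rng = np.random.default_rng(6)
e_a = rng.normal(0.0, 1.0, 100_000)      # errors of the better forecast
e_b = rng.normal(0.0, 2.0, 100_000)      # errors of the worse forecast

pc = pitman_closeness(e_a, e_b)
print(pc)                                # about 0.70 for these scales
```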

19.
This article proposes a dynamic framework for modeling and forecasting realized covariance matrices using vine copulas, allowing for more flexible dependencies between assets. Our model automatically guarantees positive definiteness of the forecast through the use of a Cholesky decomposition of the realized covariance matrix. We explicitly account for long-memory behavior by using fractionally integrated autoregressive moving average (ARFIMA) and heterogeneous autoregressive (HAR) models for the individual elements of the decomposition. Furthermore, our model incorporates non-Gaussian innovations and GARCH effects, accounting for volatility clustering and unconditional kurtosis. The dependence structure between assets is studied using vine copula constructions, which allow for nonlinearity and asymmetry without suffering from the inflexible tail behavior or symmetry restrictions of conventional multivariate models. The copulas also have a direct impact on the point forecasts of the realized covariance matrices, which are computed as a nonlinear transformation of the forecasts for the Cholesky matrix. Besides studying in-sample properties, we assess the usefulness of our method in a one-day-ahead forecasting framework, comparing recent types of models for the realized covariance matrix based on a model confidence set approach. Additionally, we find that in Value-at-Risk (VaR) forecasting, vine models imply lower capital requirements due to smoother and more accurate forecasts.
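The positive-definiteness guarantee can be sketched directly (the forecast Cholesky elements below are random placeholders): any lower-triangular matrix L with a strictly positive diagonal yields a positive-definite L·Lᵀ, so forecasting the Cholesky factor instead of the covariance itself keeps the forecast valid.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical forecasts of the Cholesky elements for 3 assets.
L = np.tril(rng.normal(size=(3, 3)))
np.fill_diagonal(L, np.abs(np.diag(L)) + 0.1)   # enforce positive diagonal

cov_forecast = L @ L.T                          # forecast covariance matrix
eigvals = np.linalg.eigvalsh(cov_forecast)
print(eigvals)                                  # all strictly positive
```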

20.
In this article, we analyze issues of pooling models for a given set of N individual units observed over T periods of time. When the parameters of the models are different but exhibit some similarity, pooling may lead to a reduction of the mean squared error of the estimates and forecasts. We investigate theoretically and through simulations the conditions that lead to improved performance of forecasts based on pooled estimates. We show that the superiority of pooled forecasts in small samples can deteriorate as the sample size grows. Empirical results for postwar international real gross domestic product growth rates of 18 Organization for Economic Cooperation and Development countries using a model put forward by Garcia-Ferrer, Highfield, Palm, and Zellner and Hong, among others illustrate these findings. When allowing for contemporaneous residual correlation across countries, pooling restrictions and criteria have to be rejected when formally tested, but generalized least squares (GLS)-based pooled forecasts are found to outperform GLS-based individual and ordinary least squares-based pooled and individual forecasts.
