Similar documents
20 similar documents found.
1.
2.
We compare the forecast accuracy of autoregressive integrated moving average (ARIMA) models based on data observed with high and low frequency, respectively. We discuss how, for instance, a quarterly model can be used to predict one quarter ahead even if only annual data are available, and we compare the variance of the prediction error in this case with the variance if quarterly observations were indeed available. Results on the expected information gain are presented for a number of ARIMA models including models that describe the seasonally adjusted gross national product (GNP) series in the Netherlands. Disaggregation from annual to quarterly GNP data has reduced the variance of short-run forecast errors considerably, but further disaggregation from quarterly to monthly data is found to hardly improve the accuracy of monthly forecasts.
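The intuition behind the information gain from disaggregation can be sketched with a toy model (a simple AR(1), not the paper's GNP specifications): with finer data the shortest feasible forecast horizon shrinks, and with it the forecast-error variance Var(e_h) = σ²·Σ_{j=0}^{h-1} φ^{2j}. All numbers below are illustrative.

```python
# Sketch: h-step forecast error variance of an AR(1) with coefficient phi
# and innovation variance sigma2. For this model,
#   Var(e_h) = sigma2 * sum(phi^(2j), j = 0..h-1).
def forecast_error_variance(phi: float, sigma2: float, h: int) -> float:
    return sigma2 * sum(phi ** (2 * j) for j in range(h))

phi, sigma2 = 0.8, 1.0
# With only annual data, the shortest horizon for a quarterly model is
# 4 quarters; with quarterly observations it is 1 quarter.
v_annual = forecast_error_variance(phi, sigma2, 4)
v_quarterly = forecast_error_variance(phi, sigma2, 1)
print(v_quarterly, v_annual)  # finer data -> smaller short-run error variance
```

The gap between the two variances shrinks as φ falls, which is one way to see why a further move from quarterly to monthly data may add little.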

3.
Accurate volatility forecasting is a key determinant for portfolio management, risk management and economic policy. The paper provides evidence that the sum of squared standardized forecast errors is a reliable measure for model evaluation when the predicted variable is the intra-day realized volatility. The forecasting evaluation is valid for standardized forecast errors with leptokurtic distribution as well as with leptokurtic and asymmetric distributions. Additionally, the widely applied forecasting evaluation function, the predicted mean-squared error, fails to select the adequate model in the case of models with residuals that are leptokurtically and asymmetrically distributed. Hence, the realized volatility forecasting evaluation should be based on the standardized forecast errors instead of their unstandardized version.
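The contrast between the two evaluation functions can be sketched as follows (a minimal illustration with made-up numbers, not the paper's estimator): the standardized criterion divides each forecast error by its predicted conditional standard deviation before squaring, so it rewards models whose volatility forecasts match the scale of the errors.

```python
import numpy as np

# Plain mean squared forecast error: ignores the volatility forecasts.
def mse(errors):
    errors = np.asarray(errors, dtype=float)
    return float(np.mean(errors ** 2))

# Sum of squared *standardized* forecast errors: each error is scaled by
# the model's predicted conditional standard deviation.
def sum_sq_standardized(errors, predicted_sd):
    z = np.asarray(errors, dtype=float) / np.asarray(predicted_sd, dtype=float)
    return float(np.sum(z ** 2))

errors = [0.5, -1.0, 2.0]
predicted_sd = [0.5, 1.0, 2.0]   # hypothetical volatility forecasts
print(mse(errors))                               # 1.75
print(sum_sq_standardized(errors, predicted_sd)) # 3.0 (each z_t = +/-1)
```

A model whose predicted standard deviations track the error magnitudes produces standardized errors near unit scale, which is what the criterion detects.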

4.
Univariate time series models are estimated for sample periods ending with the enactment of major tax reductions in 1964 and 1981. These models are used to forecast government revenue for the period following the tax cut, and the pattern of forecast errors is examined. Unforecast revenue is negative and large relative to its standard error following the 1981 tax cuts but is close to zero following the 1964 cuts. This disparity occurs because national output behaved differently in the two cases, suggesting that short-run movements in output are dominated by factors other than tax rate changes.

5.
Two methods of using labor-market data as indicators of contemporaneous gross national product (GNP) are developed. The establishment survey data are used by inverting a partial-adjustment equation for hours. A second GNP forecast can be extracted from the household survey by using Okun's law. Using preliminary rather than final data adds about 0.2 to 0.4 percentage points to the expected value of the root mean squared errors and changes the weights that the pooling procedure assigns to the two forecasts. The use of preliminary rather than final data results in a procedure that assigns more importance to the Okun's-law forecast.
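The Okun's-law route from household-survey data can be sketched as a one-line inversion (the trend growth rate and Okun coefficient below are illustrative placeholders, not the values estimated in the article): output growth is read off from the observed change in the unemployment rate.

```python
# Hedged sketch of an Okun's-law GNP reading: growth above trend lowers
# unemployment, so an observed change in the unemployment rate implies
# a growth rate. Both parameters here are hypothetical.
def okun_gnp_growth(trend_growth_pct: float,
                    delta_unemployment_pp: float,
                    okun_coefficient: float = 2.0) -> float:
    """Implied GNP growth (%) from the change in the unemployment rate (pp)."""
    return trend_growth_pct - okun_coefficient * delta_unemployment_pp

# Unemployment fell 0.5 pp; with 3% trend growth this implies 4% growth.
print(okun_gnp_growth(3.0, -0.5))  # 4.0
```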

6.
We compare Bayesian and sample theory model specification criteria. For the Bayesian criteria we use the deviance information criterion (DIC) and the cumulative density of the mean squared errors of forecast. For the sample theory criterion we use the conditional Kolmogorov test (CKT). We use Markov chain Monte Carlo methods to obtain the Bayesian criteria and bootstrap sampling to obtain the conditional Kolmogorov test. The two non-nested models we consider are the CIR and Vasicek models for spot asset prices. Monte Carlo experiments show that the DIC performs better than the cumulative density of the mean squared errors of forecast and the CKT. According to the DIC and the mean squared errors of forecast, the CIR model explains the daily data on the uncollateralized Japanese call rate from January 1, 1990 to April 18, 1996; but according to the CKT, neither the CIR nor the Vasicek model explains these data.

7.
Long-memory processes, such as autoregressive fractionally integrated moving-average (ARFIMA) processes, are likely to lead the observer to make serious misspecification errors. Nonstationary ARFIMA processes can easily be misspecified as ARIMA models, thus confusing a fractional degree of integration with an integer one. Stationary persistent ARFIMA processes can be misspecified as nonstationary ARIMA models, leading to a serious increase in out-of-sample forecast errors. In this paper, we discuss three prototypical misspecification cases and derive the corresponding increase in mean squared forecasting error for different lead times.
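The fractional degree of integration at the heart of this confusion can be made concrete: expanding (1 − L)^d gives weights π_0 = 1 and π_j = π_{j−1}·(j − 1 − d)/j, which decay hyperbolically rather than geometrically. A short sketch (standard recursion, not code from the paper):

```python
# Fractional-differencing weights of (1 - L)^d for an ARFIMA(0, d, 0):
#   pi_0 = 1,  pi_j = pi_{j-1} * (j - 1 - d) / j.
# Their slow hyperbolic decay is the long-memory signature that an
# integer-d ARIMA specification cannot mimic.
def frac_diff_weights(d: float, n: int):
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (j - 1 - d) / j)
    return w

w = frac_diff_weights(0.4, 5)
print(w)  # [1.0, -0.4, -0.12, -0.064, -0.0416]
```

Setting d = 1 collapses the weights to (1, −1, 0, 0, …), the ordinary first difference, which is exactly the integer-order special case the misspecified ARIMA model imposes.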

8.
This article is concerned with how the bootstrap can be applied to study conditional forecast error distributions and construct prediction regions for future observations in periodic time-varying state-space models. We first derive an algorithm for assessing the precision of quasi-maximum likelihood estimates of the parameters. This algorithm is then exploited for numerically evaluating the conditional forecast accuracy of a periodic time series model expressed in state-space form. We propose a method which requires the backward, or reverse-time, representation of the model for assessing conditional forecast errors. Finally, the small-sample properties of the proposed procedures are investigated in simulation studies, and we illustrate the results by applying the proposed method to a real time series.

9.
We consider measurement error models within the time series unobserved component framework. A variable of interest is observed with some measurement error and modelled as an unobserved component. The forecast and the prediction of this variable given the observed values are given by the Kalman filter and smoother along with their conditional variances. By expressing the forecasts and predictions as weighted averages of the observed values, we investigate the effect of estimation error in the measurement and observation noise variances. We also develop corrected standard errors for prediction and forecasting that account for the fact that the measurement and observation error variances are estimated from the same sample that is used for forecasting and prediction. We apply the theory to the Yellowstone grizzly bears and US index of production datasets.

10.
A number of volatility forecasting studies have led to the perception that ARCH- and stochastic volatility-type models provide poor out-of-sample forecasts of volatility. This is primarily based on the use of traditional forecast evaluation criteria concerning the accuracy and the unbiasedness of forecasts. In this paper we provide an analytical assessment of volatility forecasting performance. We use the volatility and log-volatility frameworks to prove how the inherent noise in approximating the true, unobservable volatility by the squared return results in a misleading forecast evaluation, inflating the observed mean squared forecast error and invalidating the Diebold-Mariano statistic. We analytically characterize this noise and explicitly quantify its effects assuming normal errors. We extend our results to more general error structures such as the compound normal and Gram-Charlier classes of distributions. We argue that evaluation problems are likely to be exacerbated by non-normality of the shocks and that non-linear and utility-based criteria can be more suitable for the evaluation of volatility forecasts.
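The proxy-noise point can be checked numerically in a few lines (a Monte Carlo sketch, not the paper's analytical derivation): even a perfect variance forecast looks poor when scored against squared returns, because r² = σ²z² satisfies Var(r² | σ²) = 2σ⁴ under normal errors.

```python
import random, math

# Evaluate a *perfect* constant-variance forecast against the squared-return
# proxy. The MSE converges to 2*sigma^4, not 0 -- the proxy noise alone
# inflates the observed mean squared forecast error.
random.seed(0)
n = 200_000
true_var = 1.0                      # true variance, forecast exactly
total = 0.0
for _ in range(n):
    r = math.sqrt(true_var) * random.gauss(0.0, 1.0)
    total += (r * r - true_var) ** 2    # squared "forecast error" vs. proxy
mse_vs_proxy = total / n
print(mse_vs_proxy)  # close to 2 * true_var**2 = 2
```

Since the floor 2σ⁴ is common to every candidate model, ranking models by this inflated MSE mostly compares noise, which is the evaluation problem the paper formalizes.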

11.
This paper is concerned with obtaining more accurate point forecasts in the presence of non-normal errors. Specifically, we apply the residual augmented least-squares (RALS) estimator to autoregressive models to utilize the additional moment restrictions embodied in non-normal errors. Monte Carlo experiments are performed to compare our RALS forecasts to forecasts based on the ordinary least-squares estimator and the least absolute deviations (LAD) estimator. We find that the RALS approach provides superior forecasts when the data are skewed. Compared to the LAD forecast, the RALS forecast has smaller mean squared prediction errors in the baseline case with normal errors.

12.
Error measures for the evaluation of forecasts are usually based on the size of the forecast errors. Common measures include the mean squared error (MSE), the mean absolute deviation (MAD) and the mean absolute percentage error (MAPE). Alternative measures for the comparison of forecasts are turning points or hits-and-misses, where an indicator loss function is used to decide whether a forecast is of high quality. Here, we discuss the latter to obtain reliable combined forecasts. We apply several combination techniques to a set of German macroeconomic data. Furthermore, we perform a small simulation study on the combination of two biased forecasts.
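The indicator-loss idea can be sketched concretely (an illustrative hit-rate function with made-up numbers, not the paper's combination scheme): a forecast scores 1 when it gets the direction of change right and 0 otherwise, regardless of the error's size.

```python
# Hits-and-misses loss: score 1 for a correctly predicted sign of change,
# 0 otherwise, in contrast with size-based measures such as MSE or MAD.
def hit_rate(actual_changes, forecast_changes):
    hits = sum(1 for a, f in zip(actual_changes, forecast_changes)
               if (a >= 0) == (f >= 0))
    return hits / len(actual_changes)

actual = [0.4, -0.2, 0.1, -0.3]
forecast = [0.1, -0.5, -0.2, -0.1]   # wrong sign on the third change
print(hit_rate(actual, forecast))     # 0.75
```

Note the third forecast is a small error by MSE standards but a full miss under the indicator loss, which is why the two criteria can rank forecasts differently.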

13.
This paper presents an extension of mean squared forecast error (MSFE) model averaging for integrating linear regression models computed on data frames of various lengths. The proposed method is a preferable alternative to best-model selection by efficiency criteria such as the Bayesian information criterion (BIC), the Akaike information criterion (AIC), F-statistics and mean squared error (MSE), as well as to Bayesian model averaging (BMA) and a naïve simple forecast average. The method is developed to deal with possibly non-nested models estimated on different numbers of observations and selects forecast weights by minimizing an unbiased estimator of the MSFE. The proposed method also yields forecast confidence intervals at a given significance level, which is not possible with other model averaging methods. In addition, out-of-sample simulations and empirical testing demonstrate the efficiency of this kind of averaging when forecasting economic processes.

14.
陶然. Statistical Research (《统计研究》), 2012, 29(12): 81-87
Starting from the census data generation process, the net error between the actual census tabulation results and the true value of the target population is defined as the census coverage error. From an analysis of the role of non-sampling errors, three hypotheses about the influence of coverage-error sources are proposed, and the rationality of representing census coverage error by the net error is demonstrated. On this basis, the generating mechanism of coverage error is combined with the census data aggregation model to construct models of count and content coverage errors, and the corresponding error decomposition, under different census types. This framework is used to discuss the influence of non-sampling errors on coverage error and the relationship between count coverage error and content coverage error, laying a theoretical foundation for further research on the evaluation and control of census data quality.

15.
The bootstrap, like the jackknife, is a technique for estimating standard errors. The idea is to use Monte Carlo simulation, based on a nonparametric estimate of the underlying error distribution. The bootstrap will be applied to an econometric model describing the demand for capital, labor, energy, and materials. The model is fitted by three-stage least squares. In sharp contrast with previous results, the coefficient estimates and the estimated standard errors perform very well. However, the model's forecasts show serious bias and large random errors, significantly understated by the conventional standard error of forecast.
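The mechanics of the bootstrap for standard errors can be sketched on a much simpler setting than the article's three-stage least squares model (simple OLS on simulated data; everything below is illustrative): resample the fitted residuals, rebuild the dependent variable, refit, and take the spread of the re-estimated coefficient.

```python
import numpy as np

# Residual-bootstrap sketch of a regression standard error.
rng = np.random.default_rng(42)
n = 100
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)          # true slope 2, unit-variance errors

slope, intercept = np.polyfit(x, y, 1)    # OLS fit
fitted = slope * x + intercept
resid = y - fitted                        # nonparametric error estimate

boot_slopes = []
for _ in range(500):
    # Rebuild y from resampled residuals and refit.
    y_star = fitted + rng.choice(resid, size=n, replace=True)
    b, _a = np.polyfit(x, y_star, 1)
    boot_slopes.append(b)
se_boot = float(np.std(boot_slopes, ddof=1))
print(round(slope, 2), se_boot)  # slope near 2, se roughly 1/sqrt(n)
```

The same resampling loop, wrapped around a 3SLS fit instead of `polyfit`, is the scheme the article applies to the factor-demand system.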

16.
This work deals with two methodologies for predicting incurred but not reported (IBNR) actuarial reserves. The first is the traditional chain ladder, extended here to deal with the calendar-year IBNR reserve. The second is based on heteroscedastic regression models suitable for dealing with the tail effect of the runoff triangle and for forecasting calendar-year IBNR reserves as well. Theoretical results establishing closed expressions for IBNR predictors and mean squared errors are given; for the second methodology, a Monte Carlo study is designed and implemented to assess the finite-sample performance of feasible mean squared error formulae. Finally, the methods are applied to two real data sets. The main conclusions are that (i) considering tail effects does not imply theoretical and/or computational problems; and (ii) both methodologies are suitable as the basis of software for IBNR reserve prediction.
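The traditional chain ladder that the first methodology extends can be sketched on a tiny cumulative runoff triangle (illustrative numbers; the calendar-year and tail-effect extensions are not reproduced here): estimate development factors from column ratios, project each origin year to ultimate, and take IBNR as ultimate minus latest.

```python
# Minimal chain-ladder sketch on a 3x3 cumulative runoff triangle.
triangle = [
    [100.0, 150.0, 165.0],   # origin year 1: fully developed
    [110.0, 160.0],          # origin year 2: one development period missing
    [120.0],                 # origin year 3: two missing
]

n = len(triangle)
# Volume-weighted development factors f_j = sum(C_{i,j+1}) / sum(C_{i,j}),
# summing over origin years with both columns observed.
factors = []
for j in range(n - 1):
    num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
    den = sum(row[j] for row in triangle if len(row) > j + 1)
    factors.append(num / den)

# Project each origin year to ultimate; IBNR = ultimate - latest observed.
ibnr = 0.0
for row in triangle:
    ultimate = row[-1]
    for j in range(len(row) - 1, n - 1):
        ultimate *= factors[j]
    ibnr += ultimate - row[-1]
print(round(ibnr, 2))
```

On this toy triangle the second development factor is exactly 165/150 = 1.1, and the total IBNR reserve is the sum of the two incomplete years' projected run-off.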

17.
We compare the results obtained by applying the same signal-extraction procedures to two observationally equivalent state-space forms. The first model has different errors affecting the states and the observations, while the second has a single perturbation term which coincides with the one-step-ahead forecast error. The signals extracted from both forms are very similar but their variances are drastically different, because the states for the single-source error representation collapse to exact values while those coming from the multiple-error model remain uncertain. The implications of this result are discussed both with theoretical arguments and practical examples. We find that single-error representations have advantages for computing the likelihood or adjusting for seasonality, while multiple-error models are better suited to extracting a trend indicator. Building on this analysis, it is natural to adopt a 'best of both worlds' approach, which applies each representation to the task in which it has a comparative advantage.

18.
Summary. In spite of widespread criticism, macroeconometric models are still most popular for forecasting and policy analysis. When the most recent data available on both the exogenous and the endogenous variables are preliminary estimates subject to a revision process, the estimators of the coefficients are affected by the presence of the preliminary data, the projections for the exogenous variables are affected by data uncertainty, and the values of lagged dependent variables used as initial values for forecasts are still subject to revisions. Since several provisional estimates of the value of a certain variable are available before the data are finalized, in this paper they are seen as repeated predictions of the same quantity (referring to different information sets not necessarily overlapping with one another) to be exploited in a forecast combination framework. The components of the asymptotic bias and of the asymptotic mean square prediction error related to data uncertainty can be reduced or eliminated by using a forecast combination technique which makes the deterministic and the Monte Carlo predictors no worse than either predictor used with or without provisional data. The precision of the forecast with the nonlinear model can be improved if the provisional data are not rational predictions of the final data and contain systematic effects. Economics Department, European University Institute. Thanks are due to my Ph.D. thesis advisor Bobby Mariano for his guidance and encouragement at various stages of this research. The comments of the participants in the European Meeting of the Econometric Society in Maastricht, Aug. 1994, helped in improving the presentation. A grant from the NSF (SES 8604219) is gratefully acknowledged.

19.
Econometric Reviews, 2013, 32(3): 175-198
A number of volatility forecasting studies have led to the perception that ARCH- and stochastic volatility-type models provide poor out-of-sample forecasts of volatility. This is primarily based on the use of traditional forecast evaluation criteria concerning the accuracy and the unbiasedness of forecasts. In this paper we provide an analytical assessment of volatility forecasting performance. We use the volatility and log-volatility frameworks to prove how the inherent noise in approximating the true, unobservable volatility by the squared return results in a misleading forecast evaluation, inflating the observed mean squared forecast error and invalidating the Diebold-Mariano statistic. We analytically characterize this noise and explicitly quantify its effects assuming normal errors. We extend our results to more general error structures such as the compound normal and Gram-Charlier classes of distributions. We argue that evaluation problems are likely to be exacerbated by non-normality of the shocks and that non-linear and utility-based criteria can be more suitable for the evaluation of volatility forecasts.

20.
韩本三, 曹征, 黎实. Statistical Research (《统计研究》), 2012, 29(7): 81-85
This paper extends the RESET test to the specification of binary choice panel data models, examining specification tests for fixed-effects Probit and Logit models, including tests for heteroscedasticity, omitted variables and distributional misspecification. Simulation results show that the RESET specification test has good size and power for the Logit model, whereas for the Probit model its power may suffer in some respects owing to the choice of estimation method. Overall, however, the RESET test remains a good choice for specification testing of binary choice panel data models.
