Similar Articles
20 similar articles retrieved (search time: 453 ms)
1.
We compare different approaches to accounting for parameter instability in the context of macroeconomic forecasting models, contrasting models that assume small, frequent parameter changes with models whose parameters exhibit large, rare changes. An empirical out-of-sample forecasting exercise for U.S. gross domestic product (GDP) growth and inflation suggests that models that allow for parameter instability generate more accurate density forecasts than constant-parameter models, although they fail to produce better point forecasts. Model combinations deliver similar gains in predictive performance, although they fail to improve on the predictive accuracy of the single best model, a specification that allows for time-varying parameters and stochastic volatility. Supplementary materials for this article are available online.

2.
A method for combining forecasts may or may not account for dependence and differing precision among forecasts. In this article we test a variety of such methods in the context of combining forecasts of GNP from four major econometric models. The methods include a model in which forecast errors are jointly normally distributed, several variants of that model, some simpler procedures, and a Bayesian approach with a prior distribution based on exchangeability of forecasters. The results indicate that a simple average, the normal model with an independence assumption, and the Bayesian model perform better than the other approaches studied here.
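The contrast between a simple average and a precision-weighted combination can be sketched in a few lines. The inverse-MSE weighting below is the diagonal (independence) case only, not the article's full normal or Bayesian models; the function name and interface are illustrative:

```python
import numpy as np

def combine_forecasts(forecasts, errors=None):
    """Combine point forecasts from k competing models.

    forecasts : length-k sequence of one-step-ahead forecasts.
    errors    : optional (T, k) array of past forecast errors; if given,
                weights are proportional to inverse historical MSE,
                otherwise a simple average is returned.
    """
    forecasts = np.asarray(forecasts, dtype=float)
    if errors is None:
        return float(forecasts.mean())          # simple average
    mse = np.mean(np.asarray(errors, dtype=float) ** 2, axis=0)
    w = (1.0 / mse) / np.sum(1.0 / mse)         # inverse-MSE weights
    return float(w @ forecasts)
```

With equal historical accuracy the weights collapse to the simple average, which is one reason the simple average is so hard to beat in practice.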

3.
This article provides a simple shrinkage representation that describes the operational characteristics of various forecasting methods designed for a large number of orthogonal predictors (such as principal components). These methods include pretest methods, Bayesian model averaging, empirical Bayes, and bagging. We empirically compare forecasts from these methods with dynamic factor model (DFM) forecasts using a U.S. macroeconomic dataset with 143 quarterly variables spanning 1960–2008. For most series, including measures of real economic activity, the shrinkage forecasts are inferior to the DFM forecasts. This article has online supplementary material.
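The shrinkage idea can be illustrated with a toy sketch: regress the target on the principal components of the predictors and scale every component coefficient toward zero. A single uniform factor stands in here for the method-specific shrinkage functions (pretest, Bayesian model averaging, empirical Bayes, bagging) studied in the article; the function and its arguments are illustrative assumptions:

```python
import numpy as np

def pc_shrinkage_forecast(X, y, x_new, shrink=0.5):
    """Forecast y at x_new from a principal-component regression with
    every component coefficient shrunk by a common factor in [0, 1]."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Xc @ Vt.T                      # orthogonal component scores
    delta = (P.T @ y) / s ** 2         # per-component OLS coefficients
    z = (x_new - mu) @ Vt.T            # new observation in component space
    return float(y.mean() + z @ (shrink * delta))
```

With shrink=1 this reproduces the OLS forecast; with shrink=0 it falls back to the unconditional mean of y.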

4.
This article is concerned with the development of a statistical model-based approach to optimally combine forecasts derived from an extrapolative model, such as an autoregressive integrated moving average (ARIMA) time series model, with forecasts of a particular characteristic of the same series obtained from independent sources. The methods derived combine the strengths of all forecasting approaches considered in the combination scheme. The implications of the general theory are investigated in the context of some commonly encountered seasonal ARIMA models. An empirical example to illustrate the method is included.

5.
This paper describes the performance of specific-to-general composition of forecasting models that accord with (approximate) linear autoregressions. Monte Carlo experiments are complemented with ex-ante forecasting results for 97 macroeconomic time series collected for the G7 economies in Stock and Watson (J. Forecast. 23:405–430, 2004). In small samples, the specific-to-general strategy delivers superior ex-ante forecasting performance compared with the commonly applied strategy of successive model reduction according to weakest parameter significance. Applied to real data, the specific-to-general approach also turns out to be preferable. In comparison with successive model reduction, successive model expansion is less likely to incur overly large losses in forecast accuracy and is particularly recommended when the diagnosed prediction schemes involve a medium to large number of predictors.

6.
In human mortality modelling, when a population consists of several subpopulations it can be desirable to model their mortality rates simultaneously while taking the heterogeneity among them into account. Mortality forecasting methods tend to produce divergent forecasts for subpopulations when independence is assumed. However, given closely related social, economic and biological backgrounds, the mortality patterns of these subpopulations are expected not to diverge in the future. In this article, we propose a new method for coherent modelling and forecasting of mortality rates for multiple subpopulations, in the sense of non-divergent life expectancy among subpopulations. The mortality rates of subpopulations are treated as multilevel functional data, and a weighted multilevel functional principal component analysis (wMFPCA) approach is proposed to model and forecast them. The proposed model is applied to sex-specific data for nine developed countries; the results show that, in terms of overall forecasting accuracy, it outperforms the independent model and the Product-Ratio model, as well as the unweighted multilevel functional principal component approach.

7.
Many recent articles have found that atheoretical forecasting methods using many predictors give better predictions for key macroeconomic variables than various small-model methods. The practical relevance of these results is open to question, however, because these articles generally use ex post revised data not available to forecasters and because no comparison is made to best actual practice. We provide some evidence on both of these points using a new large dataset of vintage data synchronized with the Fed’s Greenbook forecast. This dataset consists of a large number of variables as observed at the time of each Greenbook forecast since 1979. We compare real-time, large dataset predictions to both simple univariate methods and to the Greenbook forecast. For inflation we find that univariate methods are dominated by the best atheoretical large dataset methods and that these, in turn, are dominated by Greenbook. For GDP growth, in contrast, we find that once one takes account of Greenbook’s advantage in evaluating the current state of the economy, neither large dataset methods nor the Greenbook process offers much advantage over a univariate autoregressive forecast.

8.
Online auctions have become increasingly popular in recent years, and as a consequence there is a growing body of empirical research on this topic. Most of that research treats data from online auctions as cross-sectional, and consequently ignores the changing dynamics that occur during an auction. In this article we take a different look at online auctions and propose to study an auction's price evolution and associated price dynamics. Specifically, we develop a dynamic forecasting system to predict the price of an ongoing auction. By dynamic, we mean that the model can predict the price of an auction “in progress” and can update its prediction based on newly arriving information. Forecasting price in online auctions is challenging because traditional forecasting methods cannot adequately account for two features of online auction data: (1) the unequal spacing of bids and (2) the changing dynamics of price and bidding throughout the auction. Our dynamic forecasting model accounts for these special features by using modern functional data analysis techniques. Specifically, we estimate an auction's price velocity and acceleration and use these dynamics, together with other auction-related information, to develop a dynamic functional forecasting model. We also use the functional context to systematically describe the empirical regularities of auction dynamics. We apply our method to a novel set of Harry Potter and Microsoft Xbox data and show that our forecasting model outperforms traditional methods.

9.
In this paper, we introduce the class of beta seasonal autoregressive moving average (βSARMA) models for modelling and forecasting time series data that assume values in the standard unit interval. It generalizes the class of beta autoregressive moving average models [Rocha AV and Cribari-Neto F. Beta autoregressive moving average models. Test. 2009;18(3):529–545] by incorporating seasonal dynamics into the model's dynamic structure. Besides introducing the new class of models, we develop parameter estimation, hypothesis testing inference, and diagnostic analysis tools, and we discuss out-of-sample forecasting. In particular, we provide closed-form expressions for the conditional score vector and for the conditional Fisher information matrix. We also evaluate the finite-sample performance of conditional maximum likelihood estimators and white noise tests using Monte Carlo simulations. An empirical application is presented and discussed.

10.
We consider Pitman-closeness to evaluate the performance of univariate and multivariate forecasting methods, and calculate optimal weights for the combination of forecasts with respect to this criterion. These weights depend on the assumed distribution of the individual forecast errors. In the normal case they coincide with the optimal weights under the MSE criterion (univariate case) and under the MMSE criterion (multivariate case). A simple example shows how the different combination techniques perform and how much the optimal multivariate combination can outperform the alternatives. Multivariate forecasts arise naturally in practice, for example in econometrics, where forecasting institutes routinely estimate several economic variables.
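In the normal case, the MSE-optimal combination weights mentioned above have the familiar closed form w = Σ⁻¹1 / (1′Σ⁻¹1) in terms of the forecast-error covariance matrix Σ. A minimal sketch, assuming Σ is known and nonsingular:

```python
import numpy as np

def optimal_weights(sigma):
    """MSE-optimal combination weights for forecasts whose errors have
    covariance matrix sigma: w = sigma^{-1} 1 / (1' sigma^{-1} 1).
    Weights sum to one; negative weights are allowed."""
    sigma = np.asarray(sigma, dtype=float)
    ones = np.ones(sigma.shape[0])
    s_inv_1 = np.linalg.solve(sigma, ones)      # sigma^{-1} 1
    return s_inv_1 / (ones @ s_inv_1)
```

For a diagonal sigma this reduces to inverse-variance weighting; off-diagonal terms shift weight away from forecasts whose errors are strongly correlated with more accurate ones.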

11.
We propose a parametric nonlinear time-series model, namely the autoregressive stochastic volatility with threshold (AR-SVT) model with mean equation, for forecasting level and volatility. Methodology for estimating the parameters of this model is developed by first obtaining the recursive Kalman filter time-update equation and then employing the unrestricted quasi-maximum likelihood method. Furthermore, optimal one-step and two-step-ahead out-of-sample forecast formulae, along with forecast error variances, are derived analytically by recursive use of conditional expectation and variance. As an illustration, volatile all-India monthly spice exports during the period January 2006 to January 2012 are considered. The entire data analysis is carried out using the EViews and MATLAB software packages. The AR-SVT model is fitted and interval forecasts for 10 hold-out data points are obtained. For these data, the model is shown to describe and forecast better than competing volatility models, namely the AR-generalized autoregressive conditional heteroscedastic (AR-GARCH), AR-exponential GARCH, AR-threshold GARCH, and AR-stochastic volatility models. Finally, for the AR-SVT model, optimal out-of-sample forecasts along with forecasts of one-step-ahead variances are obtained.

12.
The effects of data uncertainty on real-time decision-making can be reduced by predicting data revisions to U.S. GDP growth. We show that survey forecasts efficiently predict the revision implicit in the second estimate of GDP growth, but that forecasting models incorporating monthly economic indicators and daily equity returns provide superior forecasts of the data revision implied by the release of the third estimate. We use forecasting models to measure the impact of surprises in GDP announcements on equity markets, and to analyze the effects of anticipated future revisions on announcement-day returns. We show that the publication of better than expected third-release GDP figures provides a boost to equity markets, and if future upward revisions are expected, the effects are enhanced during recessions.

13.
For over two decades, researchers have provided evidence that the yield curve, specifically the spread between long- and short-term interest rates, contains useful information for signaling future recessions. Despite these findings, forecasters appear to have generally placed too little weight on the yield spread when projecting declines in the aggregate economy. Indeed, we show that professional forecasters are worse at predicting recessions a few quarters ahead than a simple real-time forecasting model that is based on the yield spread. This relative forecast power of the yield curve remains a puzzle.

The appendix is included online as supplementary material.
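A deliberately minimal stand-in for such a real-time yield-spread model is a probit mapping from the term spread to a recession probability. The coefficients below are illustrative placeholders, not estimates from the article:

```python
from math import erf, sqrt

def recession_prob(spread, a=-0.5, b=-0.8):
    """Probit-style recession probability from the term spread:
    P(recession) = Phi(a + b * spread), so a flatter or inverted
    yield curve (smaller spread) raises the probability.
    a and b are illustrative placeholders, not fitted values."""
    z = a + b * spread
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))     # standard normal CDF
```

In a real-time exercise the coefficients would be re-estimated at each forecast date using only data available at that date.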

14.
Some governments rely on centralized, official sets of population forecasts for planning capital facilities. But the nature of population forecasting, as well as the milieu of government forecasting in general, can lead to extrapolative forecasts not well suited to long-range planning. This report discusses these matters and suggests that custom-made forecasts, together with forecast guidelines and a review process stressing the justification of forecast assumptions, may be a more realistic basis for planning individual facilities than general-purpose, official forecasts.

15.
16.
This article develops the theory of multistep ahead forecasting for vector time series that exhibit temporal nonstationarity and co-integration. We treat the case of a semi-infinite past by developing the forecast filters and the forecast error filters explicitly. We also provide formulas for forecasting from a finite data sample. This latter application can be accomplished by using large matrices, which remains practicable when the total sample size is moderate. Expressions for the mean square error of forecasts are also derived and can be implemented readily. The flexibility and generality of these formulas are illustrated by four diverse applications: forecasting euro area macroeconomic aggregates; backcasting fertility rates by racial category; forecasting long memory inflation data; and forecasting regional housing starts using a seasonally co-integrated model.

17.
The most common forecasting methods in business are based on exponential smoothing, and the most common time series in business are inherently non-negative. Therefore it is of interest to consider the properties of the potential stochastic models underlying exponential smoothing when applied to non-negative data. We explore exponential smoothing state space models for non-negative data under various assumptions about the innovations, or error, process. We first demonstrate that prediction distributions from some commonly used state space models may have an infinite variance beyond a certain forecasting horizon. For multiplicative error models that do not have this flaw, we show that sample paths will converge almost surely to zero even when the error distribution is non-Gaussian. We propose a new model with similar properties to exponential smoothing, but which does not have these problems, and we develop some distributional properties for our new model. We then explore the implications of our results for inference, and compare the short-term forecasting performance of the various models using data on the weekly sales of over 300 items of costume jewelry. The main findings of the research are that the Gaussian approximation is adequate for estimation and one-step-ahead forecasting. However, as the forecasting horizon increases, the approximate prediction intervals become increasingly problematic. When the model is to be used for simulation purposes, a suitably specified scheme must be employed.
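The recursion underlying these state space models is, in its simplest additive form, simple exponential smoothing; a minimal sketch (the multiplicative-error variants discussed above keep the same level update but with relative errors):

```python
def ses_forecast(y, alpha=0.3):
    """Simple exponential smoothing: the level follows
    l_t = alpha * y_t + (1 - alpha) * l_{t-1},
    and every h-step-ahead forecast equals the final level."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level
```

The point forecast is flat in the horizon h; it is the prediction *distributions* around it whose behavior the article examines for non-negative data.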

18.
The present work proposes a new integer-valued autoregressive model with Poisson marginal distribution, based on mixing the Pegram and dependent Bernoulli thinning operators. Properties of the model are discussed. We consider several methods for estimating the unknown parameters of the model, and both classical and Bayesian approaches are used for forecasting. Simulations assess the performance of these estimators and forecasting methods. Finally, analyses of two real datasets are presented for illustrative purposes.
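A simpler classical cousin of such a model is the INAR(1) process built from binomial thinning with Poisson innovations, which is straightforward to simulate. This is the standard thinning construction, not the Pegram-mixing operator of the article; names and defaults are illustrative:

```python
import numpy as np

def simulate_inar1(n, alpha=0.5, lam=1.0, seed=0):
    """Simulate X_t = alpha o X_{t-1} + e_t, where 'o' is binomial
    thinning (each of the X_{t-1} counts survives with probability
    alpha) and e_t ~ Poisson(lam). The stationary marginal is
    Poisson with mean lam / (1 - alpha)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1.0 - alpha))     # start at the stationary mean
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x
```

Thinning keeps every value a nonnegative integer, which is exactly what ordinary Gaussian AR recursions fail to do for count data.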

19.
We develop a hierarchical Gaussian process model for forecasting and inference of functional time series data. Unlike existing methods, our approach is especially suited for sparsely or irregularly sampled curves and for curves sampled with nonnegligible measurement error. The latent process is dynamically modeled as a functional autoregression (FAR) with Gaussian process innovations. We propose a fully nonparametric dynamic functional factor model for the dynamic innovation process, with broader applicability and improved computational efficiency over standard Gaussian process models. We prove finite-sample forecasting and interpolation optimality properties of the proposed model, which remain valid with the Gaussian assumption relaxed. An efficient Gibbs sampling algorithm is developed for estimation, inference, and forecasting, with extensions for FAR(p) models with model averaging over the lag p. Extensive simulations demonstrate substantial improvements in forecasting performance and recovery of the autoregressive surface over competing methods, especially under sparse designs. We apply the proposed methods to forecast nominal and real yield curves using daily U.S. data. Real yields are observed more sparsely than nominal yields, yet the proposed methods are highly competitive in both settings. Supplementary materials, including R code and the yield curve data, are available online.

20.
Nonparametric methods for estimating the link function in generalized linear models can avoid bias in the regression parameters. However, estimation of the link has typically used the full model, which includes all predictors. When the number of predictors is large, these methods fail because the full model cannot be estimated. In the present article a boosting-type method is proposed that simultaneously selects predictors and estimates the link function. The method performs quite well in simulations and real data examples.
