Similar documents (20 results)
1.
In this article, we use U.S. real-time data to produce combined density nowcasts of quarterly Gross Domestic Product (GDP) growth, using a system of three commonly used model classes. We update the density nowcast for every new data release throughout the quarter, and highlight the importance of new information for nowcasting. Our results show that the logarithmic score of the predictive densities for U.S. GDP growth increases almost monotonically as new information arrives during the quarter. While the ranking of the model classes changes during the quarter, the combined density nowcasts always perform well relative to the individual model classes in terms of both logarithmic scores and calibration tests. The density combination approach is superior to a simple model selection strategy and also performs better in terms of point forecast evaluation than standard point forecast combinations.
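As a toy illustration of why pooling predictive densities can pay off in log score (this is not the article's real-time nowcasting system; all numbers are invented), the sketch below scores two deliberately misspecified Gaussian predictive densities and their equal-weight linear pool against simulated outcomes:

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.normal(0.5, 1.0, 2_000)              # pretend realized GDP growth rates

def norm_logpdf(x, m, s):
    """Log density of N(m, s^2) evaluated at x."""
    return -0.5 * np.log(2 * np.pi * s**2) - (x - m) ** 2 / (2 * s**2)

# Two misspecified predictive densities (means 0 and 1; truth is 0.5)
# and their equal-weight linear opinion pool
p1 = np.exp(norm_logpdf(y, 0.0, 1.0))
p2 = np.exp(norm_logpdf(y, 1.0, 1.0))
pool = 0.5 * p1 + 0.5 * p2

for name, p in [("model 1", p1), ("model 2", p2), ("pool", pool)]:
    print(name, round(float(np.log(p).mean()), 3))
```

The pool's average log score beats both components because its mixture density covers the truth better than either misspecified member, which is the mechanism the combined-nowcast results above rely on.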

2.
The use of large-dimensional factor models in forecasting has received much attention in the literature, with the consensus being that improvements on forecasts can be achieved when comparing with standard models. However, recent contributions in the literature have demonstrated that care needs to be taken when choosing which variables to include in the model. A number of different approaches to determining these variables have been put forward. These are, however, often based on ad hoc procedures or abandon the underlying theoretical factor model. In this article, we take a different approach to the problem by using the least absolute shrinkage and selection operator (LASSO) as a variable selection method to choose between the possible variables and thus obtain sparse loadings from which factors or diffusion indexes can be formed. This allows us to build a more parsimonious factor model that is better suited for forecasting compared to the traditional principal components (PC) approach. We provide an asymptotic analysis of the estimator and illustrate its merits empirically in a forecasting experiment based on U.S. macroeconomic data. Overall, we find improvements in forecasting accuracy relative to PC, making the method an important alternative to PC. Supplementary materials for this article are available online.
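A minimal numpy sketch of the sparse-loadings idea (not the article's estimator: soft-thresholding the first principal-component loadings stands in for the LASSO step, and the data-generating process is invented). Only 5 of 20 simulated series load on the latent factor; the sparsified loadings should recover that subset and the resulting diffusion index should track the factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 20
f = rng.standard_normal(n)                      # latent factor
load = np.zeros(p)
load[:5] = 1.0                                  # only the first 5 series load on it
X = np.outer(f, load) + 0.3 * rng.standard_normal((n, p))
X -= X.mean(axis=0)

# Principal components: the first right-singular vector gives the PC loadings
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc_load = Vt[0]

# Sparsify the loadings by soft-thresholding (a stand-in for the LASSO step)
thresh = 0.5 * np.abs(pc_load).max()
sparse_load = np.sign(pc_load) * np.maximum(np.abs(pc_load) - thresh, 0.0)
selected = np.flatnonzero(sparse_load)

# Diffusion index built from the sparse loadings only
f_hat = X @ sparse_load
corr = float(abs(np.corrcoef(f, f_hat)[0, 1]))
print(selected.tolist(), round(corr, 2))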

3.
Time-varying parameter models with stochastic volatility are widely used to study macroeconomic and financial data. These models are almost exclusively estimated using Bayesian methods. A common practice is to focus on prior distributions that themselves depend on relatively few hyperparameters such as the scaling factor for the prior covariance matrix of the residuals governing time variation in the parameters. The choice of these hyperparameters is crucial because their influence is sizeable for standard sample sizes. In this article, we treat the hyperparameters as part of a hierarchical model and propose a fast, tractable, easy-to-implement, and fully Bayesian approach to estimate those hyperparameters jointly with all other parameters in the model. We show via Monte Carlo simulations that, in this class of models, our approach can drastically improve on using fixed hyperparameters previously proposed in the literature. Supplementary materials for this article are available online.

4.
We compare different approaches to accounting for parameter instability in macroeconomic forecasting models, contrasting models that assume small, frequent parameter changes with models whose parameters exhibit large, rare changes. An empirical out-of-sample forecasting exercise for U.S. gross domestic product (GDP) growth and inflation suggests that models that allow for parameter instability generate more accurate density forecasts than constant-parameter models, although they fail to produce better point forecasts. Model combinations deliver similar gains in predictive performance, although they fail to improve on the predictive accuracy of the single best model, which is a specification that allows for time-varying parameters and stochastic volatility. Supplementary materials for this article are available online.

5.
The general pattern of estimated volatilities of macroeconomic and financial variables is often broadly similar. We propose two models in which conditional volatilities feature comovement and study them using U.S. macroeconomic data. The first model specifies the conditional volatilities as driven by a single common unobserved factor, plus an idiosyncratic component. We label this model BVAR with general factor stochastic volatility (BVAR-GFSV), and we show that the loss in terms of marginal likelihood from assuming a common factor for volatility is moderate. The second model, which we label BVAR with common stochastic volatility (BVAR-CSV), is a special case of the BVAR-GFSV in which the idiosyncratic component is eliminated and the loadings on the factor are set to 1 for all the conditional volatilities. Such restrictions permit a convenient Kronecker structure for the posterior variance of the VAR coefficients, which in turn permits estimating the model even with large datasets. While perhaps misspecified, the BVAR-CSV model is strongly supported by the data when compared against standard homoscedastic BVARs, and it can produce relatively good point and density forecasts by taking advantage of the information contained in large datasets.

6.
This article investigates the relevance of considering a large number of macroeconomic indicators to forecast the complete distribution of a variable. The baseline time series model is a semiparametric specification based on the quantile autoregressive (QAR) model that assumes that the quantiles depend on the lagged values of the variable. We then augment the time series model with macroeconomic information from a large dataset by including principal components or a subset of variables selected by LASSO. We forecast the distribution of the h-month growth rate for four economic variables from 1975 to 2011 and evaluate the forecast accuracy relative to a stochastic volatility model using the quantile score. The results for the output and employment measures indicate that the multivariate models outperform the time series forecasts, in particular at long horizons and in the tails of the distribution, while for the inflation variables the improved performance occurs mostly at the 6-month horizon. We also illustrate the practical relevance of predicting the distribution by considering forecasts at three dates during the last recession.
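The quantile score used for evaluation here is the pinball loss. A minimal check on invented data (not the article's dataset) that the in-sample τ-quantile minimizes this loss, which is what makes it a proper scoring rule for quantile forecasts:

```python
import numpy as np

def quantile_score(y, q_pred, tau):
    """Pinball loss: the proper scoring rule for a tau-quantile forecast."""
    u = y - q_pred
    return float(np.mean(np.where(u >= 0, tau * u, (tau - 1) * u)))

rng = np.random.default_rng(1)
y = rng.standard_normal(100_000)
tau = 0.9
true_q = np.quantile(y, tau)                 # in-sample tau-quantile

# The empirical tau-quantile attains a lower score than a shifted forecast
print(quantile_score(y, true_q, tau) < quantile_score(y, true_q + 0.5, tau))
```

Averaging this loss over a grid of τ values gives an overall distributional accuracy measure comparable across the multivariate and time series forecasts discussed above.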

7.
This article presents flexible new models for the dependence structure, or copula, of economic variables based on a latent factor structure. The proposed models are particularly attractive for relatively high-dimensional applications, involving 50 or more variables, and can be combined with semiparametric marginal distributions to obtain flexible multivariate distributions. Factor copulas generally lack a closed-form density, but we obtain analytical results for the implied tail dependence using extreme value theory, and we verify that simulation-based estimation using rank statistics is reliable even in high dimensions. We consider “scree” plots to aid the choice of the number of factors in the model. The model is applied to daily returns on all 100 constituents of the S&P 100 index, and we find significant evidence of tail dependence, heterogeneous dependence, and asymmetric dependence, with dependence being stronger in crashes than in booms. We also show that factor copula models provide superior estimates of some measures of systemic risk. Supplementary materials for this article are available online.
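A simulation sketch of the one-factor construction in its simplest (Gaussian) special case, with invented dimensions; the article's models use non-Gaussian factors to generate tail dependence, which the Gaussian case lacks. Each of 50 variables loads on a single common factor, and the rank transform yields the copula data on which rank-based estimation operates:

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, lam = 20_000, 50, 0.7
common = rng.standard_normal((n, 1))                   # single latent factor
Z = lam * common + np.sqrt(1 - lam**2) * rng.standard_normal((n, d))
U = Z.argsort(0).argsort(0) / (n - 1)                  # rank transform: copula data

# Average pairwise correlation implied by the one-factor structure is lam**2
C = np.corrcoef(Z, rowvar=False)
avg_offdiag = float((C.sum() - d) / (d * (d - 1)))
print(round(avg_offdiag, 2))                           # close to lam**2 = 0.49
```

The single loading `lam` controls all pairwise dependence at once, which is why the factor structure keeps 50-dimensional estimation tractable; replacing the normal factor with a heavy-tailed or skewed one is what delivers the tail and asymmetric dependence the article documents.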

8.
Most existing reduced-form macroeconomic multivariate time series models employ elliptical disturbances, so that the forecast densities produced are symmetric. In this article, we use a copula model with asymmetric margins to produce forecast densities with the scope for severe departures from symmetry. Empirical and skew t distributions are employed for the margins, and a high-dimensional Gaussian copula is used to jointly capture cross-sectional and (multivariate) serial dependence. The copula parameter matrix is given by the correlation matrix of a latent stationary and Markov vector autoregression (VAR). We show that the likelihood can be evaluated efficiently using the unique partial correlations, and estimate the copula using Bayesian methods. We examine the forecasting performance of the model for four U.S. macroeconomic variables between 1975:Q1 and 2011:Q2 using quarterly real-time data. We find that the point and density forecasts from the copula model are competitive with those from a Bayesian VAR. During the recent recession the forecast densities exhibit substantial asymmetry, avoiding some of the pitfalls of the symmetric forecast densities from the Bayesian VAR. We show that the asymmetries in the predictive distributions of GDP growth and inflation are similar to those found in the probabilistic forecasts from the Survey of Professional Forecasters. Last, we find that unlike the linear VAR model, our fitted Gaussian copula models exhibit nonlinear dependencies between some macroeconomic variables. This article has online supplementary material.

9.
This article shows entropic tilting to be a flexible and powerful tool for combining medium-term forecasts from BVARs with short-term forecasts from other sources (nowcasts from either surveys or other models). Tilting systematically improves the accuracy of both point and density forecasts, and tilting the BVAR forecasts based on nowcast means and variances yields slightly greater gains in density accuracy than does tilting based on the nowcast means alone. Hence, entropic tilting can offer some benefits for accurately estimating the uncertainty of multi-step forecasts that incorporate nowcast information, more so for persistent variables than for non-persistent ones.
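Entropic tilting to a mean constraint has a simple exponential-reweighting form: among all reweightings of the predictive draws, the one minimizing Kullback-Leibler divergence subject to a mean constraint puts weight proportional to exp(g·x) on draw x. A self-contained sketch on invented draws (not the article's BVAR output), solving for g by bisection:

```python
import numpy as np

def tilt_to_mean(draws, target):
    """Entropic tilting to a mean constraint: reweight draws with
    w_i proportional to exp(g * x_i), choosing g by bisection so that
    the weighted mean equals `target` (the KL-minimizing reweighting)."""
    z = (draws - draws.mean()) / draws.std()       # standardize the exponent
    def weighted_mean(g):
        a = g * z
        w = np.exp(a - a.max())                    # guard against overflow
        return float((draws * w).sum() / w.sum())
    lo, hi = -20.0, 20.0                           # assumes target is reachable here
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if weighted_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    a = 0.5 * (lo + hi) * z
    w = np.exp(a - a.max())
    return w / w.sum()

rng = np.random.default_rng(2)
draws = rng.normal(2.0, 1.0, 50_000)               # toy "BVAR" predictive draws
w = tilt_to_mean(draws, 1.5)                       # nowcast says the mean is 1.5
print(round(float((draws * w).sum()), 3))          # tilted mean hits 1.5
```

Matching a nowcast variance as well, as the article finds slightly more effective, adds a second moment constraint and a second tilting parameter, but the exponential-weight structure is the same.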

10.
We propose point forecast accuracy measures based directly on distance of the forecast-error c.d.f. from the unit step function at 0 (“stochastic error distance,” or SED). We provide a precise characterization of the relationship between SED and standard predictive loss functions, and we show that all such loss functions can be written as weighted SEDs. The leading case is absolute error loss. Among other things, this suggests shifting attention away from conditional-mean forecasts and toward conditional-median forecasts.
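The leading case can be checked numerically: with an L1 distance, the area between the empirical forecast-error c.d.f. and the unit step at 0 equals the mean absolute error, since the area splits into the mean negative part plus the mean positive part of the errors. A sketch on invented errors:

```python
import numpy as np

rng = np.random.default_rng(3)
errors = rng.standard_normal(5_000) + 0.3          # hypothetical forecast errors

# Empirical c.d.f. of the errors evaluated on a fine grid
grid = np.linspace(errors.min() - 1.0, errors.max() + 1.0, 200_001)
F = np.searchsorted(np.sort(errors), grid, side="right") / errors.size
step = (grid >= 0).astype(float)                    # unit step function at 0

# L1 stochastic error distance, by left Riemann sum over the grid
sed_l1 = float(np.sum(np.abs(F - step)[:-1] * np.diff(grid)))
mae = float(np.abs(errors).mean())
print(round(sed_l1, 4), round(mae, 4))              # the two agree
```

The two numbers coincide up to grid resolution, illustrating why absolute error loss, and hence the conditional median, arises as the leading case of the SED framework.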

11.
In this pedagogical article, distributional properties, some surprising, pertaining to the homogeneous Poisson process (HPP), when observed over a possibly random window, are presented. Properties of the gap time covering the termination time, and the correlations among the gap times of the observed events, are obtained. Inference procedures, such as estimation and model validation, based on event occurrence data over the observation window, are also presented. We envision that through the results in this article, a better appreciation of the subtleties involved in the modeling and analysis of recurrent events data will ensue, since the HPP is arguably one of the simplest among recurrent event models. In addition, the use of the theorem of total probability, Bayes’ theorem, the iterated rules of expectation, variance, and covariance, and the renewal equation could be illustrative when teaching distribution theory, mathematical statistics, and stochastic processes at both the undergraduate and graduate levels. This article is targeted toward both instructors and students.
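One of the surprising properties alluded to is the inspection paradox: although the gaps of a rate-λ HPP are exponential with mean 1/λ, the particular gap that covers a fixed inspection time is length-biased and has mean approaching 2/λ. A Monte Carlo sketch (invented parameters, not from the article):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, t_inspect, n_rep = 1.0, 50.0, 20_000
covering = np.empty(n_rep)
for i in range(n_rep):
    gaps = rng.exponential(1 / lam, size=200)      # 200 gaps: total mean 200 >> 50
    times = np.cumsum(gaps)                        # event times of the HPP
    k = np.searchsorted(times, t_inspect)          # first event at or after t_inspect
    covering[i] = gaps[k]                          # the gap that covers t_inspect
print(round(float(covering.mean()), 2))            # close to 2/lam, not 1/lam
```

Longer gaps are more likely to contain any fixed time point, which is exactly the kind of subtlety about windows of observation that the article aims to illustrate.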

12.
We introduce spatial effects and a discontinuity (a point mass at zero) in the technical-inefficiency term into the stochastic frontier model, constructing a spatial zero-inefficiency stochastic frontier model; the parameters and technical efficiencies are estimated by maximum likelihood together with the JLMS method. Monte Carlo simulations show that (1) the inverse likelihood-ratio test identifies the true model with high accuracy; (2) the proposed method performs well in both parameter estimation and the estimation of technical efficiency; and (3) if the true model is the spatial zero-inefficiency stochastic frontier model but a spatial stochastic frontier model is mistakenly used instead, both the parameter estimates and the technical-efficiency estimates perform poorly. The spatial zero-inefficiency stochastic frontier model therefore fills a genuine need.

13.
This article examines the prediction contest as a vehicle for aggregating the opinions of a crowd of experts. After proposing a general definition distinguishing prediction contests from other mechanisms for harnessing the wisdom of crowds, we focus on point-forecasting contests, in which forecasters submit point forecasts and a prize goes to the entry closest to the quantity of interest. We first illustrate the incentive for forecasters to submit reports that exaggerate in the direction of their private information. Whereas this exaggeration raises a forecaster's mean squared error, it increases his or her chances of winning the contest. In contrast to conventional wisdom, this nontruthful reporting usually improves the accuracy of the resulting crowd forecast. The source of this improvement is that exaggeration shifts weight away from public information (information known to all forecasters) and by so doing helps alleviate public knowledge bias. In the context of a simple theoretical model of overlapping information and forecaster behaviors, we present closed-form expressions for the mean squared error of the crowd forecast, which help identify the situations in which point-forecasting contests will be most useful.

14.
The likelihood function of a general nonlinear, non-Gaussian state space model is a high-dimensional integral with no closed-form solution. In this article, I show how to calculate the likelihood function exactly for a large class of non-Gaussian state space models that include stochastic intensity, stochastic volatility, and stochastic duration models among others. The state variables in this class follow a nonnegative stochastic process that is popular in econometrics for modeling volatility and intensities. In addition to calculating the likelihood, I also show how to perform filtering and smoothing to estimate the latent variables in the model. The procedures in this article can be used for either Bayesian or frequentist estimation of the model’s unknown parameters as well as the latent state variables. Supplementary materials for this article are available online.

15.
This article is concerned with how the bootstrap can be applied to study conditional forecast-error distributions and construct prediction regions for future observations in periodic time-varying state-space models. We first derive an algorithm for assessing the precision of quasi-maximum likelihood estimates of the parameters. This algorithm is then exploited to numerically evaluate the conditional forecast accuracy of a periodic time series model expressed in state-space form. We propose a method that requires the backward, or reverse-time, representation of the model for assessing conditional forecast errors. Finally, the small-sample properties of the proposed procedures are investigated in simulation studies, and the results are illustrated by applying the method to a real time series.

16.
We develop a discrete-time affine stochastic volatility model with time-varying conditional skewness (SVS). Importantly, we disentangle the dynamics of conditional volatility and conditional skewness in a coherent way. Our approach allows current asset returns to be asymmetric conditional on current factors and past information, which we term contemporaneous asymmetry. Conditional skewness is an explicit combination of the conditional leverage effect and contemporaneous asymmetry. We derive analytical formulas for various return moments that are used for generalized method of moments (GMM) estimation. Applying our approach to S&P500 index daily returns and option data, we show that one- and two-factor SVS models provide a better fit for both the historical and the risk-neutral distribution of returns, compared to existing affine generalized autoregressive conditional heteroscedasticity (GARCH) and stochastic volatility with jumps (SVJ) models. Our results are not due to an overparameterization of the model: the one-factor SVS models have the same number of parameters as their one-factor GARCH competitors and fewer than the SVJ benchmark.

17.
This paper deals with the pricing of derivatives written on several underlying assets or factors satisfying a multivariate model with Wishart stochastic volatility matrix. This multivariate stochastic volatility model leads to a closed-form solution for the conditional Laplace transform, and quasi-explicit solutions for derivative prices written on more than one asset or underlying factor. Two examples are presented: (i) a multiasset extension of the stochastic volatility model introduced by Heston (1993), and (ii) a model for credit risk analysis that extends the model of Merton (1974) to a framework with stochastic firm liability, stochastic volatility, and several firms. A bivariate version of the stochastic volatility model is estimated using stock prices and moment conditions derived from the joint unconditional Laplace transform of the stock returns.

18.
We investigate the semiparametric smooth coefficient stochastic frontier model for panel data in which the distribution of the composite error term is assumed to be of known form but depends on some environmental variables. We propose multi-step estimators for the smooth coefficient functions as well as the parameters of the distribution of the composite error term and obtain their asymptotic properties. A Monte Carlo study demonstrates that the proposed estimators perform well in finite samples. We also consider an application, in which we perform a model specification test, construct confidence intervals, and estimate efficiency scores that depend on some environmental variables. The application uses panel data on 451 large U.S. firms to explore the effects of computerization on productivity. Results show that two popular parametric models used in the stochastic frontier literature are likely to be misspecified. Compared with the parametric estimates, our semiparametric model shows a positive and larger overall effect of computer capital on productivity. The efficiency levels, however, were not much different among the models. Supplementary materials for this article are available online.

19.
In this article, we propose a new empirical information criterion (EIC) for model selection that penalizes the likelihood of the data by a nonlinear function of the number of parameters in the model. It is designed for settings where there are a large number of time series to be forecast; a bootstrap version of the EIC can be used when there is a single time series to be forecast. The EIC provides a data-driven model selection tool that can be tuned to the particular forecasting task.

We compare the EIC with other model selection criteria, including Akaike’s information criterion (AIC) and Schwarz’s Bayesian information criterion (BIC). The comparisons show that, for the M3 forecasting competition data, the EIC outperforms both the AIC and the BIC, particularly for longer forecast horizons. We also compare the criteria on simulated data and find that the EIC does better than the existing criteria in that case as well.
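For readers unfamiliar with the baseline criteria being compared, the sketch below selects an AR order on simulated data by AIC (penalty 2k) and BIC (penalty k·log n); this illustrates only the standard competitors, not the EIC itself, and all data and settings are invented:

```python
import numpy as np

def fit_ar(y, p):
    """OLS fit of an AR(p) with intercept; returns residual variance
    and the number of estimated mean parameters."""
    if p == 0:
        resid = y - y.mean()
        return float(resid.var()), 1
    Y = y[p:]
    X = np.column_stack([y[p - j - 1:len(y) - j - 1] for j in range(p)])
    X = np.column_stack([np.ones(len(Y)), X])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return float(resid.var()), p + 1

rng = np.random.default_rng(5)
y = np.zeros(600)
for t in range(2, 600):                          # true model: AR(2)
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

n = 500                                          # common effective sample size
scores = {}
for p in range(6):
    s2, k = fit_ar(y[-(n + p):], p)              # n residuals for every order p
    ll = -0.5 * n * (np.log(2 * np.pi * s2) + 1) # concentrated Gaussian log-lik
    scores[p] = (-2 * ll + 2 * k, -2 * ll + np.log(n) * k)   # (AIC, BIC)

aic_pick = min(scores, key=lambda p: scores[p][0])
bic_pick = min(scores, key=lambda p: scores[p][1])
print(aic_pick, bic_pick)
```

Because the BIC penalty grows with the sample size, it never selects a larger order than the AIC on the same fit; the EIC's contribution is to make this penalty function nonlinear and data-driven rather than fixed.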

20.
In this article, we estimate the parameters of a simple random network and a stochastic epidemic on that network using data consisting of recovery times of infected hosts. The SEIR epidemic model we fit has exponentially distributed transmission times with Gamma distributed exposed and infectious periods on a network where every edge exists with the same probability, independent of other edges. We employ a Bayesian framework and Markov chain Monte Carlo (MCMC) integration to make estimates of the joint posterior distribution of the model parameters. We discuss the accuracy of the parameter estimates under various prior assumptions and show that it is possible in many scientifically interesting cases to accurately recover the parameters. We demonstrate our approach by studying a measles outbreak in Hagelloch, Germany, in 1861 consisting of 188 affected individuals. We provide an R package to carry out these analyses, which is available publicly on the Comprehensive R Archive Network.
