Similar Literature (20 results)
1.
Two methods of using labor-market data as indicators of contemporaneous gross national product (GNP) are developed. The establishment survey data are used by inverting a partial-adjustment equation for hours. A second GNP forecast can be extracted from the household survey by using Okun's law. Using preliminary rather than final data adds about .2 to .4 percentage point to the expected value of the root mean squared errors and changes the weights that the pooling procedure assigns to the two forecasts. The use of preliminary rather than final data results in a procedure that assigns more importance to the Okun's-law forecast.
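The abstract does not spell out the Okun's-law specification, so the following is only a minimal Python sketch of that route; the trend-growth and Okun-coefficient values are illustrative placeholders, not the paper's estimates.

```python
# Minimal sketch of an Okun's-law GNP-growth forecast from household-survey data.
# The 2.0 "Okun coefficient" and 3.0% trend growth are illustrative values only.
import numpy as np

def okun_gnp_growth(delta_unemployment, trend_growth=3.0, okun_coef=2.0):
    """Forecast real GNP growth (%) from the change in the unemployment rate (pp)."""
    return trend_growth - okun_coef * np.asarray(delta_unemployment)

# Example: the unemployment rate fell by 0.3 percentage points over the quarter.
print(okun_gnp_growth(-0.3))  # -> 3.6 (% GNP growth)
```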

2.
Recent advances in financial econometrics have allowed for the construction of efficient ex post measures of daily volatility. This paper investigates the importance of instability in models of realised volatility and their corresponding forecasts. Testing for model instability is conducted with a subsampling method. We show that removing structurally unstable data of a short duration has a negligible impact on the accuracy of conditional mean forecasts of volatility. In contrast, it does provide a substantial improvement in a model's forecast density of volatility. In addition, the forecasting performance improves, often dramatically, when we evaluate models on structurally stable data.  相似文献   
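As a rough illustration of the kind of realised-volatility forecasting model being evaluated, the sketch below computes daily realised variance from simulated intraday returns and fits a HAR-style one-day-ahead regression; the HAR specification and all numbers are stand-ins, not necessarily the models used in the paper.

```python
# Sketch: daily realised variance from intraday returns plus a HAR-type
# one-day-ahead forecast. Illustrative only; not the paper's model set.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_intraday = 500, 78                      # e.g. 5-minute returns per day
intraday = rng.normal(0.0, 0.001, size=(n_days, n_intraday))
rv = (intraday ** 2).sum(axis=1)                  # daily realised variance

def trailing_mean(x, w):
    # average of the most recent w observations (fewer at the start of the sample)
    return np.array([x[max(0, t - w + 1): t + 1].mean() for t in range(len(x))])

# HAR regressors: lagged daily RV plus weekly (5-day) and monthly (22-day) averages
X = np.column_stack([np.ones(n_days), rv, trailing_mean(rv, 5), trailing_mean(rv, 22)])
y = np.roll(rv, -1)                               # next day's realised variance
beta, *_ = np.linalg.lstsq(X[:-1], y[:-1], rcond=None)
print("one-day-ahead RV forecast:", X[-1] @ beta)
```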

3.
Using daily prices from 496 corn cash markets for July 2006–February 2011, this study investigates short-run forecast performance of 31 individual and 10 composite models for each market at horizons of 5, 10, and 30 days. Over the performance evaluation period September 2010–February 2011, two composite models are optimal across horizons for different markets based on the mean-squared error. For around half of the markets at the horizon of 5 days, and for most of them at 10 and 30 days, the mean-squared error of a market's optimal model is significantly different from those of at least 23 of the other models evaluated for it. Root-mean-squared error reductions from switching from non-optimal models to the optimal model are generally around 0.40%, 0.55%, and 0.87% at horizons of 5, 10, and 30 days.

4.
The Black-Scholes option pricing model assumes that (instantaneous) common stock returns are normally distributed. However, the observed distribution exhibits deviations from normality, in particular skewness and kurtosis. We attribute these deviations to gross data errors. Using options' transactions data, we establish that the sample standard deviation, sample skewness, and sample kurtosis contribute to the Black-Scholes model's observed mispricing of a sample from the Berkeley Options Data Base of 2323 call options written on 88 common stocks paying no dividends during the options' life. Following Huber's statement that the primary case for robust statistics is when the shape of the observed distribution deviates slightly from the assumed distribution (usually the Gaussian), we show that robust volatility estimators eliminate the mispricing with respect to sample skewness and sample kurtosis, and significantly improve the Black-Scholes model's pricing performance with respect to estimated volatility.
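To make the idea concrete, here is a minimal sketch that prices a call with Black-Scholes using either the sample standard deviation or a MAD-based robust scale estimate of the return volatility; the MAD estimator is one plausible robust choice and is not claimed to be the estimator used in the paper.

```python
# Sketch: Black-Scholes call price with a classical versus a robust (MAD-based)
# volatility estimate. The MAD estimator stands in for the paper's robust estimators.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(1)
daily_returns = rng.standard_t(df=4, size=250) * 0.01      # heavy-tailed returns

sigma_sd = daily_returns.std(ddof=1) * np.sqrt(252)        # classical annualized vol
mad = np.median(np.abs(daily_returns - np.median(daily_returns)))
sigma_mad = 1.4826 * mad * np.sqrt(252)                    # robust annualized vol

print(bs_call(S=100, K=100, T=0.25, r=0.03, sigma=sigma_sd))
print(bs_call(S=100, K=100, T=0.25, r=0.03, sigma=sigma_mad))
```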

5.
In this article, we analyze issues of pooling models for a given set of N individual units observed over T periods of time. When the parameters of the models are different but exhibit some similarity, pooling may lead to a reduction of the mean squared error of the estimates and forecasts. We investigate theoretically and through simulations the conditions that lead to improved performance of forecasts based on pooled estimates. We show that the superiority of pooled forecasts in small samples can deteriorate as the sample size grows. Empirical results for postwar international real gross domestic product growth rates of 18 Organization for Economic Cooperation and Development countries, using a model put forward by Garcia-Ferrer, Highfield, Palm, and Zellner and Hong, among others, illustrate these findings. When allowing for contemporaneous residual correlation across countries, pooling restrictions and criteria have to be rejected when formally tested, but generalized least squares (GLS)-based pooled forecasts are found to outperform GLS-based individual and ordinary least squares-based pooled and individual forecasts.
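A tiny simulation can illustrate the pooling trade-off described here: when units' slopes are similar, a single pooled slope can forecast better than unit-by-unit estimates in short panels. The design below is purely illustrative and is not the authors' model or data.

```python
# Sketch: pooled versus individual OLS forecasts when N units have similar
# (but not identical) slopes. Purely illustrative design.
import numpy as np

rng = np.random.default_rng(2)
N, T = 18, 20
slopes = 0.5 + 0.1 * rng.normal(size=N)                      # similar slopes across units
X = rng.normal(size=(N, T + 1))
Y = slopes[:, None] * X + rng.normal(size=(N, T + 1))

Xtr, Ytr = X[:, :T], Y[:, :T]                                 # estimation sample
b_pool = (Xtr * Ytr).sum() / (Xtr ** 2).sum()                 # one pooled slope
b_indiv = (Xtr * Ytr).sum(axis=1) / (Xtr ** 2).sum(axis=1)    # one slope per unit

err_pool = Y[:, T] - b_pool * X[:, T]                         # one-step forecast errors
err_indiv = Y[:, T] - b_indiv * X[:, T]
print("pooled forecast MSE:    ", np.mean(err_pool ** 2))
print("individual forecast MSE:", np.mean(err_indiv ** 2))
```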

6.
The bootstrap, like the jackknife, is a technique for estimating standard errors. The idea is to use Monte Carlo simulation, based on a nonparametric estimate of the underlying error distribution. The bootstrap will be applied to an econometric model describing the demand for capital, labor, energy, and materials. The model is fitted by three-stage least squares. In sharp contrast with previous results, the coefficient estimates and the estimated standard errors perform very well. However, the model's forecasts show serious bias and large random errors, significantly understated by the conventional standard error of forecast.
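The following sketch shows the resampling idea on a single OLS equation (the paper applies it to a three-stage least squares system, which is not reproduced here): residuals are resampled, the model is refit many times, and the spread of the refitted coefficients estimates their standard errors.

```python
# Sketch: residual bootstrap for the standard errors of OLS coefficients.
# Single-equation illustration of the idea only; not the paper's 3SLS system.
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

boot_betas = []
for _ in range(999):
    # resample residuals with replacement and refit on the synthetic sample
    y_star = X @ beta_hat + rng.choice(resid, size=n, replace=True)
    boot_betas.append(np.linalg.lstsq(X, y_star, rcond=None)[0])

print("bootstrap standard errors:", np.std(boot_betas, axis=0, ddof=1))
```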

7.
This paper presents applications of statistical linear models in which a confidence interval is required for the ratio of linear combinations of the model's parameters; Fieller's theorem is used to obtain the solution.
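As a sketch of how Fieller's theorem is typically applied, the code below builds a confidence interval for the ratio of two OLS coefficients by solving the usual Fieller quadratic; the data, notation, and 95% level are illustrative assumptions, not taken from the paper.

```python
# Sketch: Fieller confidence interval for the ratio (a'beta)/(b'beta) of two
# linear combinations of regression coefficients, under normal-theory OLS.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, 4.0])
y = X @ beta_true + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])
V = s2 * np.linalg.inv(X.T @ X)                    # estimated covariance of beta-hat

a = np.array([0.0, 1.0, 0.0])                      # numerator:   beta_1
b = np.array([0.0, 0.0, 1.0])                      # denominator: beta_2
g1, g2 = a @ beta, b @ beta
v11, v22, v12 = a @ V @ a, b @ V @ b, a @ V @ b
t = stats.t.ppf(0.975, df=n - X.shape[1])

# Fieller: solve (g2^2 - t^2 v22) r^2 - 2(g1 g2 - t^2 v12) r + (g1^2 - t^2 v11) = 0
A = g2**2 - t**2 * v22
B = -2 * (g1 * g2 - t**2 * v12)
C = g1**2 - t**2 * v11
roots = np.sort(np.roots([A, B, C]))
print("ratio estimate:", g1 / g2, "Fieller 95% CI:", roots if A > 0 else "unbounded")
```

When the denominator is poorly estimated (A ≤ 0), the Fieller set is unbounded rather than a finite interval, which the guard in the last line signals.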

8.
We propose a structural change test based on the recursive residuals with the local Fourier series estimators. The statistical properties of the proposed test are derived and the empirical properties are shown via simulation. We also consider other structural change tests based on CUSUM, MOSUM, moving estimates (ME), and empirical distribution functions with the recursive residuals and the ordinary residuals. Empirical powers are calculated in various structural change models for the comparison of those tests. These structural change tests are applied to South Korea's gross domestic product (GDP), South Korean Won to US Dollar currency exchange rates, and South Korea's Okun's law.
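For context, the sketch below computes classical recursive residuals and a CUSUM path for a simple regression with a break; it illustrates only the recursive-residual ingredient, not the local Fourier series test proposed in the paper.

```python
# Sketch: recursive residuals and a CUSUM path for an OLS model with a break.
# Illustration of the CUSUM ingredient only; not the paper's proposed test.
import numpy as np

def recursive_residuals(X, y):
    n, k = X.shape
    w = np.full(n, np.nan)
    for t in range(k, n):
        Xt, yt = X[:t], y[:t]
        b = np.linalg.lstsq(Xt, yt, rcond=None)[0]         # coefficients from data up to t-1
        xt = X[t]
        f = 1.0 + xt @ np.linalg.inv(Xt.T @ Xt) @ xt
        w[t] = (y[t] - xt @ b) / np.sqrt(f)                # standardized one-step error
    return w[k:]

rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)
beta = np.where(np.arange(n) < 120, 1.0, 2.0)              # slope break at t = 120
y = beta * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
w = recursive_residuals(X, y)
cusum = np.cumsum(w) / w.std(ddof=1)                        # cumulative sum of scaled residuals
print("max |CUSUM| / sqrt(T):", np.abs(cusum).max() / np.sqrt(len(w)))
```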

9.
In many financial applications, Poisson mixture regression models are commonly used to analyze heterogeneous count data. When fitting these models, the observed counts are assumed to come from two or more subpopulations, and parameter estimation is typically performed by maximum likelihood via the Expectation–Maximization algorithm. In this study, we briefly discuss the procedure for fitting Poisson mixture regression models by maximum likelihood, along with model selection and goodness-of-fit tests. These models are applied to a real data set for credit-scoring purposes. We aim to reveal the impact of demographic and financial variables in creating different groups of clients and to predict the group to which each client belongs, as well as his expected number of defaulted payments. The fitted model reveals that the population consists of three groups, in contrast with the traditional good-versus-bad categorization used by credit-scoring systems.
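A stripped-down sketch of the EM iteration for a two-component Poisson mixture without covariates is shown below (the regression version lets each component mean depend on covariates through a log link); the data and starting values are invented for illustration.

```python
# Sketch: EM for a two-component Poisson mixture *without* covariates,
# showing the E-step and M-step only. Not the paper's full regression model.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(6)
z = rng.random(1000) < 0.3
y = np.where(z, rng.poisson(6.0, 1000), rng.poisson(1.5, 1000))   # simulated counts

pi, lam1, lam2 = 0.5, 1.0, 5.0                      # starting values
for _ in range(200):
    # E-step: posterior probability that each count belongs to component 1
    p1 = pi * poisson.pmf(y, lam1)
    p2 = (1 - pi) * poisson.pmf(y, lam2)
    r = p1 / (p1 + p2)
    # M-step: update mixing weight and component means
    pi = r.mean()
    lam1 = (r * y).sum() / r.sum()
    lam2 = ((1 - r) * y).sum() / (1 - r).sum()

print("mixing weight and component means:", pi, lam1, lam2)
```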

10.
The maximum likelihood equations for a multivariate normal model with structured mean and structured covariance matrix may not have an explicit solution. In some cases the model's error term may be decomposed as the sum of two independent error terms, each having a patterned covariance matrix, such that if one of the unobservable error terms is artificially treated as "missing data", the EM algorithm can be used to compute the maximum likelihood estimates for the original problem. Some decompositions produce likelihood equations which do not have an explicit solution at each iteration of the EM algorithm, but within-iteration explicit solutions are shown for two general classes of models including covariance component models used for analysis of longitudinal data.

11.
This article applies general engineering rules for describing the reliability of devices working under variable stresses. The approach is based on imposing completeness and physicality. Completeness refers to the model's capability for studying as many stated conditions as possible, and physicality refers to the model's capability for incorporating explanatory variables that are specified and related to each other by physical laws. The proposed reliability model has as many explanatory variables as necessary but only three unknown parameters; hence, it allows the engineer to collect reliability data from different test campaigns and to extrapolate reliability results towards other operational and design points.

12.
A composite forecast combines two or more individual forecasts into a single estimate by way of a number of different averaging schemes. The easiest way to combine forecasts is to use a simple average. In this paper, the authors show that in many instances the simple average of individual forecasts approximates the optimal combining scheme. Results are expressed in terms of the probability that a composite forecast will improve upon an individual forecast.
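For two unbiased forecasts, the variance-minimizing weight on the first forecast is w = (σ2² − σ12)/(σ1² + σ2² − 2σ12); the sketch below, on simulated forecast errors, shows how close the simple average can come to these optimal weights. The error process is invented for illustration.

```python
# Sketch: simple average versus variance-minimizing combination of two
# unbiased forecasts (classic Bates-Granger weights). Illustrative numbers.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
e1 = rng.normal(0, 1.0, n)                          # errors of forecast 1
e2 = 0.4 * e1 + rng.normal(0, 0.9, n)               # correlated errors of forecast 2

s11, s22, s12 = e1.var(), e2.var(), np.cov(e1, e2)[0, 1]
w_opt = (s22 - s12) / (s11 + s22 - 2 * s12)         # optimal weight on forecast 1

mse = lambda w: np.mean((w * e1 + (1 - w) * e2) ** 2)
print("forecast 1:", mse(1.0), " forecast 2:", mse(0.0))
print("simple average:", mse(0.5), " optimal weights:", mse(w_opt))
```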

13.
Recent research on finding appropriate composite endpoints for preclinical Alzheimer's disease has focused considerable effort on finding "optimized" weights in the construction of a weighted composite score. In this paper, several proposed methods are reviewed. Our results indicate no evidence that these methods will increase the power of the test statistics, and some of these weights will introduce biases to the study. Our recommendation is to focus on identifying more sensitive items from clinical practice and appropriate statistical analyses of a large Alzheimer's data set. Once a set of items has been selected, there is no evidence that adding weights will generate more sensitive composite endpoints.

14.
The consumption-based asset-pricing model predicts that excess yields are determined by the market's degree of relative risk aversion and by the covariances of per capita consumption growth with asset returns. Estimation and testing are complicated by the fact that the model's predictions relate to the instantaneous flow of consumption and point-in-time asset values, but only data on the integral or time average of the consumption flow are available. This article shows how to take account of the effects of time averaging on the covariances. We estimate the model's parameters and test the overidentifying restrictions using six different data sets.

15.
We compare different approaches to accounting for parameter instability in macroeconomic forecasting models, contrasting models that assume small, frequent parameter changes with models whose parameters exhibit large, rare changes. An empirical out-of-sample forecasting exercise for U.S. gross domestic product (GDP) growth and inflation suggests that models that allow for parameter instability generate more accurate density forecasts than constant-parameter models, although they fail to produce better point forecasts. Model combinations deliver similar gains in predictive performance, although they fail to improve on the predictive accuracy of the single best model, which is a specification that allows for time-varying parameters and stochastic volatility. Supplementary materials for this article are available online.

16.
We compare the forecast accuracy of autoregressive integrated moving average (ARIMA) models based on data observed with high and low frequency, respectively. We discuss how, for instance, a quarterly model can be used to predict one quarter ahead even if only annual data are available, and we compare the variance of the prediction error in this case with the variance if quarterly observations were indeed available. Results on the expected information gain are presented for a number of ARIMA models including models that describe the seasonally adjusted gross national product (GNP) series in the Netherlands. Disaggregation from annual to quarterly GNP data has reduced the variance of short-run forecast errors considerably, but further disaggregation from quarterly to monthly data is found to hardly improve the accuracy of monthly forecasts.

17.
We derive forecasts for Markov switching models that are optimal in the mean square forecast error (MSFE) sense by means of weighting observations. We provide analytic expressions of the weights conditional on the Markov states and conditional on state probabilities. This allows us to study the effect of uncertainty around states on forecasts. It emerges that, even in large samples, forecasting performance increases substantially when the construction of optimal weights takes uncertainty around states into account. Performance of the optimal weights is shown through simulations and an application to U.S. GNP, where using optimal weights leads to significant reductions in MSFE. Supplementary materials for this article are available online.

18.
Ashley (1983) gave a simple condition for determining when a forecast of an explanatory variable (Xt) is sufficiently inaccurate that direct replacement of Xt by the forecast yields worse forecasts of the dependent variable than does respecification of the equation to omit Xt. Many available macroeconomic forecasts were shown to be of limited usefulness in direct replacement. Direct replacement, however, is not optimal if the forecast's distribution is known. Here optimal linear forms in commercial forecasts of several macroeconomic variables are obtained by using estimates of their distributions. Although they are an improvement on the raw forecasts (direct replacement), these optimal forms are still too inaccurate to be useful in replacing the actual explanatory variables in forecasting models. The results strongly indicate that optimal forms involving several commercial forecasts will not be very useful either. Thus Ashley's (1983) sufficient condition retains its value in gauging the usefulness of a forecast of an explanatory variable in a forecasting model, even though it focuses on direct replacement.
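A minimal sketch of what an "optimal linear form" in a forecast can look like in practice: regress the realized explanatory variable on the commercial forecast over a calibration sample and use the fitted intercept and slope rather than the raw forecast. The data-generating process below is invented for illustration and is not Ashley's setup.

```python
# Sketch: an "optimal linear form" in a forecast. Instead of plugging the raw
# forecast f directly in place of X, calibrate a + b*f on past data. Illustrative only.
import numpy as np

rng = np.random.default_rng(8)
n = 120
x = rng.normal(2.0, 1.0, n)                       # realized explanatory variable
f = 0.5 + 0.7 * x + rng.normal(0, 0.8, n)         # biased, noisy commercial forecast

F = np.column_stack([np.ones(n), f])
a, b = np.linalg.lstsq(F, x, rcond=None)[0]       # calibration regression of x on f
x_direct, x_optimal = f, a + b * f

print("MSE, direct replacement:  ", np.mean((x_direct - x) ** 2))
print("MSE, optimal linear form: ", np.mean((x_optimal - x) ** 2))
```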

19.
We propose a simple method for evaluating the model that has been chosen by an adaptive regression procedure, our main focus being the lasso. This procedure deletes each chosen predictor and refits the lasso to get a set of models that are "close" to the chosen "base model," and compares the error rates of the base model with those of the nearby models. If the deletion of a predictor leads to significant deterioration in the model's predictive power, the predictor is called indispensable; otherwise, the nearby model is called acceptable and can serve as a good alternative to the base model. This provides both an assessment of the predictive contribution of each variable and a set of alternative models that may be used in place of the chosen model. We call this procedure "Next-Door analysis" since it examines models "next" to the base model. It can be applied to supervised learning problems with ℓ1 penalization and stepwise procedures. We have implemented it in the R language as a library to accompany the well-known glmnet library. The Canadian Journal of Statistics 48: 447–470; 2020
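The authors' implementation is an R library built around glmnet; the sketch below re-creates only the basic loop in Python with scikit-learn's LassoCV (fit a base lasso, drop each selected predictor, refit, compare cross-validated error) and omits their formal assessment of whether the error-rate differences are significant.

```python
# Rough sketch of the Next-Door idea with scikit-learn's LassoCV. This is not
# the authors' R implementation; it only illustrates the delete-and-refit loop.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(9)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(size=n)

base = LassoCV(cv=5).fit(X, y)
base_err = base.mse_path_.mean(axis=1).min()            # CV error of the base model
selected = np.flatnonzero(base.coef_ != 0)               # predictors chosen by the base lasso

for j in selected:
    keep = [k for k in range(p) if k != j]               # delete predictor j
    near = LassoCV(cv=5).fit(X[:, keep], y)               # refit the nearby model
    near_err = near.mse_path_.mean(axis=1).min()
    print(f"drop x{j}: CV MSE {near_err:.3f} vs base {base_err:.3f}")
```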

20.
The present study empirically analyzes the validity of Wagner's Law for the Indian economy. Using annual time series data from 1970–71 to 2013–14, all six versions of Wagner's Law are tested for the relationship between government expenditure and gross domestic product. Wagner's Law states that economic growth is the causative factor behind the growth of public expenditure. The study applies unit root tests, cointegration tests, and causality analysis to identify the long-run relationship between government expenditure and gross domestic product. The empirical analysis yields mixed results for the Indian economy: four versions, namely the Peacock, Gupta, Goffman, and Musgrave versions, are found to be valid. The study concludes that Wagner's Law holds for the Indian economy except under the Pryor and Mann versions.
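As a sketch of the econometric battery described (unit-root, cointegration, and causality tests), the code below runs ADF tests, an Engle-Granger cointegration test, and a Granger causality test with statsmodels on simulated series; the Indian data and the six Wagner's-Law functional forms are not reproduced.

```python
# Sketch of the unit-root / cointegration / causality battery on simulated data.
# Not the study's data or its specific Wagner's-Law specifications.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint, grangercausalitytests

rng = np.random.default_rng(10)
n = 44                                              # roughly 1970-71 to 2013-14
gdp = np.cumsum(rng.normal(0.03, 0.02, n)) + 1.0    # log real GDP (random walk with drift)
gexp = 0.8 * gdp + rng.normal(0, 0.01, n)           # log government expenditure

print("ADF p-value, GDP:         ", adfuller(gdp)[1])
print("ADF p-value, expenditure: ", adfuller(gexp)[1])
print("Engle-Granger cointegration p-value:", coint(gexp, gdp)[1])

# Does GDP growth Granger-cause expenditure growth? (second column -> first column)
df = pd.DataFrame({"gexp": np.diff(gexp), "gdp": np.diff(gdp)})
grangercausalitytests(df[["gexp", "gdp"]], maxlag=2)
```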
