1.
This article considers in-sample prediction and out-of-sample forecasting in regressions with many exogenous predictors. We consider four dimension-reduction devices: principal components, ridge, Landweber-Fridman, and partial least squares. We derive rates of convergence for two representative models: an ill-posed model and an approximate factor model. The theory is developed for a large cross-section and a large time-series. As all these methods depend on a tuning parameter to be selected, we also propose data-driven selection methods based on cross-validation and establish their optimality. Monte Carlo simulations and an empirical application to forecasting inflation and output growth in the U.S. show that data-reduction methods outperform conventional methods in several relevant settings, and might effectively guard against instabilities in predictors' forecasting ability.
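The abstract's pipeline, a shrinkage device plus a cross-validated tuning parameter, can be sketched for the ridge case. Everything below is a toy illustration on simulated data; the variable names, grid, and leave-one-out shortcut are illustrative choices, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: T periods, N predictors (simulated; not the paper's data).
T, N = 80, 10
X = rng.standard_normal((T, N))
beta_true = np.zeros(N)
beta_true[:3] = [1.0, -0.5, 0.25]
y = X @ beta_true + 0.5 * rng.standard_normal(T)

def ridge(X, y, lam):
    """Ridge estimator (X'X + lam*I)^{-1} X'y; lam = 0 gives OLS."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def loocv_mse(X, y, lam):
    """Leave-one-out CV error via the linear-smoother shortcut
    e_{-i} = e_i / (1 - H_ii), which is exact for ridge."""
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    e = y - H @ y
    return float(np.mean((e / (1.0 - np.diag(H))) ** 2))

# Data-driven selection of the tuning parameter over a small grid.
grid = [0.01, 0.1, 1.0, 10.0, 100.0]
lam_star = min(grid, key=lambda lam: loocv_mse(X, y, lam))
beta_hat = ridge(X, y, lam_star)
```

The same grid-search-by-cross-validation structure carries over to the other three devices; only the estimator and the meaning of the tuning parameter (number of components, number of iterations) change.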
2.
3.
Ruhao Wu, Journal of Applied Statistics, 2019, 46(10): 1774-1791
In human mortality modelling, if a population consists of several subpopulations it can be desirable to model their mortality rates simultaneously while taking into account the heterogeneity among them. Mortality forecasting methods tend to produce divergent forecasts for subpopulations when independence is assumed. However, given closely related social, economic and biological backgrounds, the mortality patterns of these subpopulations are expected to remain non-divergent in the future. In this article, we propose a new method for coherent modelling and forecasting of mortality rates for multiple subpopulations, in the sense of non-divergent life expectancy among subpopulations. The mortality rates of subpopulations are treated as multilevel functional data, and a weighted multilevel functional principal component analysis (wMFPCA) approach is proposed to model and forecast them. The proposed model is applied to sex-specific data for nine developed countries, and the results show that, in terms of overall forecasting accuracy, it outperforms the independent model and the Product-Ratio model as well as the unweighted multilevel functional principal component approach.
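The coherence idea, a common level plus a principal-component trend shared across subpopulations, can be sketched in a much simpler Lee-Carter-style rank-1 form than the paper's actual wMFPCA. The data and every modelling choice below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy log-mortality surfaces (ages x years) for two subpopulations that
# share a common improvement trend; purely simulated.
ages, years = 5, 20
common = np.linspace(-4.0, -2.0, ages)[:, None] - 0.02 * np.arange(years)
pops = [common + 0.05 * rng.standard_normal((ages, years)) for _ in range(2)]

# Common level: the average surface across subpopulations.
mu = sum(pops) / len(pops)

# First principal component of the demeaned common surface
# (a rank-1 fit via the SVD).
centered = mu - mu.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
b, k = U[:, 0], s[0] * Vt[0]          # age loading and period score

# Forecast the shared period score by a random walk with drift; because
# every subpopulation inherits the same score path, the forecasts cannot
# diverge -- the coherence the abstract asks for.
drift = float(np.mean(np.diff(k)))
k_fcst = k[-1] + drift * np.arange(1, 6)
log_m_fcst = mu.mean(axis=1, keepdims=True) + b[:, None] * k_fcst
```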
4.
Mark Steyvers, Thomas S. Wallsten, Edgar C. Merkle, Brandon M. Turner, Risk Analysis, 2014, 34(3): 435-452
We propose the use of signal detection theory (SDT) to evaluate the performance of both probabilistic forecasting systems and individual forecasters. The main advantage of SDT is that it provides a principled way to distinguish response bias from system diagnosticity, which is defined as the ability to distinguish events that occur from those that do not. There are two challenges in applying SDT to probabilistic forecasts. First, the SDT model must handle judged probabilities rather than the conventional binary decisions. Second, the model must be able to operate in the presence of sparse data generated within the context of human forecasting systems. Our approach is to specify a model of how individual forecasts are generated from underlying representations and use Bayesian inference to estimate the underlying latent parameters. Given our estimate of the underlying representations, features of the classic SDT model, such as the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC), follow immediately. We show how our approach allows ROC curves and AUCs to be applied to individuals within a group of forecasters, estimated as a function of time, and extended to measure differences in forecastability across different domains. Among the advantages of this method is that it depends only on the ordinal properties of the probabilistic forecasts. We conclude with a brief discussion of how this approach might facilitate decision making.
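The ROC summary statistic and its purely ordinal character are easy to illustrate. The following is a toy computation on simulated judged probabilities, not the paper's Bayesian estimation of latent SDT parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical forecaster: judged probabilities for 200 binary events
# (1 = event occurred); informative but noisy, invented for illustration.
y = rng.integers(0, 2, size=200)
p = 0.25 * y + 0.75 * rng.random(200)

def auc(y, p):
    """AUC via the rank (Mann-Whitney) form: the probability that a
    randomly chosen event outscores a randomly chosen non-event."""
    pos, neg = p[y == 1], p[y == 0]
    gt = (pos[:, None] > neg[None, :]).sum()     # strict wins
    eq = (pos[:, None] == neg[None, :]).sum()    # ties count one half
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

a = auc(y, p)
```

Because the rank form compares only orderings, any strictly monotone rescaling of the probabilities leaves the AUC unchanged, which is the ordinal-invariance property the abstract highlights.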
5.
In this paper, we link production planning decisions to marketing decisions that involve the price of product groups. The focus of this paper is the development of a closed-loop procedure for aggregate production planning and pricing. We seek to satisfy uncertain demand while minimizing total costs, which include material, labour, and inventory holding costs. The procedure is useful under variable demand for updating short-term aggregate plans.
6.
A method for combining forecasts may or may not account for dependence and differing precision among forecasts. In this article we test a variety of such methods in the context of combining forecasts of GNP from four major econometric models. The methods include a model in which forecasting errors are jointly normally distributed, several variants of that model, some simpler procedures, and a Bayesian approach with a prior distribution based on exchangeability of forecasters. The results indicate that a simple average, the normal model with an independence assumption, and the Bayesian model perform better than the other approaches studied here.
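Two of the schemes compared above, the simple average and a precision-weighted combination under an independence assumption, can be sketched on simulated forecasts. The four "models" and their error variances are invented; none of the article's GNP data is used:

```python
import numpy as np

rng = np.random.default_rng(3)

# Four hypothetical forecasters of the same target, with different
# error variances (invented numbers).
T = 500
target = rng.standard_normal(T).cumsum()
sigmas = np.array([0.5, 1.0, 1.5, 2.0])
forecasts = target[None, :] + sigmas[:, None] * rng.standard_normal((4, T))

def mse(f):
    return float(np.mean((f - target) ** 2))

# Scheme 1: the simple average, which the article finds hard to beat.
simple = forecasts.mean(axis=0)

# Scheme 2: precision weighting under independence, w_i proportional to
# 1/sigma_i^2, with the variances estimated from past errors.
prec = 1.0 / np.var(forecasts - target[None, :], axis=1)
weighted = (prec[:, None] * forecasts).sum(axis=0) / prec.sum()
```

With error variances this unequal, precision weighting helps; when forecasters are similar and correlated, the article's finding that the simple average holds its own is the more typical outcome.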
7.
Knut Anton Mork, Journal of Business & Economic Statistics, 2013, 31(2): 165-175
Revisions of the early GNP estimates may contain elements of measurement errors as well as forecast errors. These types of error behave differently but need to satisfy a common set of criteria for well-behavedness. This article tests these criteria for U.S. GNP revisions. The tests are similar to tests of rationality and are based on the generalized method of moments estimator. The flash, 15-day, and 45-day estimates are found to be ill behaved, but the 75-day estimate satisfies the criteria for well-behavedness.
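The flavour of such a well-behavedness test can be sketched with a Mincer-Zarnowitz-style OLS regression of the revision on the early estimate. The article uses GMM-based tests on actual GNP vintages; this toy uses simulated data and plain OLS with homoskedastic standard errors:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated vintages (not the article's data): the early estimate is an
# efficient forecast of the final value, so the revision is "news"
# orthogonal to the early estimate.
T = 300
early = rng.standard_normal(T)
news = 0.4 * rng.standard_normal(T)
final = early + news
revision = final - early

# Regress the revision on a constant and the early estimate; a
# well-behaved (news-type) revision implies both coefficients are ~0.
Z = np.column_stack([np.ones(T), early])
coef, *_ = np.linalg.lstsq(Z, revision, rcond=None)
resid = revision - Z @ coef
var = (resid @ resid) / (T - 2)
se = np.sqrt(np.diag(np.linalg.inv(Z.T @ Z)) * var)
t_stats = coef / se
```

An ill-behaved early estimate (a noisy measurement of the final value rather than a forecast of it) would instead show a significantly negative slope.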
8.
The problems of assessing, comparing and combining probability forecasts for a binary event sequence are considered. A Gaussian threshold model, available in closed analytical form, is introduced which allows generation of different probability forecast sequences valid for the same events. Chi-squared-type test statistics, as well as a marginal-conditional method, are proposed for the assessment problem, and an asymptotic normality result is given. A graphical method is developed for the comparison problem, based upon decomposing arbitrary proper scoring rules into certain elementary scoring functions. The special role of the logarithmic scoring rule is examined in the context of Neyman-Pearson theory.
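Proper scoring rules are the raw material of the comparison method above. A minimal sketch of the two most common ones, the Brier and logarithmic scores, on invented forecast sequences (not drawn from the paper's Gaussian threshold model):

```python
import math

# Two invented probability forecast sequences for the same binary events.
events  = [1, 0, 1, 1, 0, 0, 1, 0]
f_sharp = [0.9, 0.1, 0.8, 0.7, 0.2, 0.3, 0.9, 0.1]   # calibrated and sharp
f_flat  = [0.6, 0.4, 0.6, 0.6, 0.4, 0.4, 0.6, 0.4]   # hedged toward 1/2

def brier(ps, ys):
    """Brier score: mean squared error of the probabilities (lower is better)."""
    return sum((p - y) ** 2 for p, y in zip(ps, ys)) / len(ys)

def log_score(ps, ys):
    """Logarithmic score: mean negative log-likelihood (lower is better)."""
    return -sum(math.log(p if y else 1 - p) for p, y in zip(ps, ys)) / len(ys)
```

Both rules are proper, so a forecaster minimizes the expected score by reporting honest probabilities; the sharp sequence scores better than the hedged one under either rule.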
9.
Journal of Gerontological Social Work, 2013, 56(2): 21-40
Authors in the Encyclopedia of Social Work (formerly the Social Work Year Book), beginning in the 1930s, have proffered forecasts to help social workers anticipate the composition and needs of the current cohort of older adults. First 1980, and later the year 2000, were often targeted in these forecasts. The purpose of this article is to analyze the accuracy and utility of these earlier forecasts and to examine implications for social service professionals now attempting to make forecasts and develop practice and policy strategies for the 21st century, when the baby boom cohort will become the elder boom.
10.
Jan G. De Gooijer, Journal of Applied Statistics, 2007, 34(4): 371-381
We compare and investigate Neyman's smooth test, its components, and the Kolmogorov-Smirnov (KS) goodness-of-fit test for testing the uniformity of multivariate forecast densities. Simulations indicate that the KS test lacks power when the forecast distributions are misspecified, especially for correlated sequences of random variables. Neyman's smooth test and its components work well in samples of the sizes typically available, although size distortions sometimes occur. The components provide directed diagnosis regarding the kind of departure from the null. For illustration, the tests are applied to forecast densities obtained from a bivariate threshold model fitted to high-frequency financial data.
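The uniformity being tested is that of probability integral transforms (PITs) of the realized values under the forecast densities. A toy sketch of the one-sample KS statistic against U(0,1), with simulated PITs standing in for a correct and a misspecified forecast density:

```python
import random

random.seed(5)

# Simulated PITs: under a correctly specified forecast density they are
# i.i.d. U(0,1); under the misspecified one they pile up near 0
# (both samples are invented for illustration).
good = sorted(random.random() for _ in range(200))
bad  = sorted(random.random() ** 2 for _ in range(200))

def ks_uniform(sorted_u):
    """One-sample Kolmogorov-Smirnov statistic against the U(0,1) CDF,
    D_n = sup_x |F_n(x) - x|, from a sorted sample."""
    n = len(sorted_u)
    return max(max((i + 1) / n - u, u - i / n)
               for i, u in enumerate(sorted_u))
```

Neyman's smooth test instead scores departures along a few orthogonal polynomial directions, which is what gives its components their diagnostic value.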