Similar Documents
20 similar documents found (search time: 15 ms)
1.
Using daily prices from 496 corn cash markets for July 2006–February 2011, this study investigates the short-run forecast performance of 31 individual and 10 composite models for each market at horizons of 5, 10, and 30 days. Over the performance evaluation period September 2010–February 2011, two composite models are optimal across horizons for different markets based on the mean-squared error. For around half of the markets at the 5-day horizon, and for most of them at 10 and 30 days, the mean-squared error of a market's optimal model differs significantly from those of at least 23 of the other models evaluated for it. Root-mean-squared error reductions achieved by switching from non-optimal models to the optimal one are generally around 0.40%, 0.55%, and 0.87% at horizons of 5, 10, and 30 days, respectively.
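
As a minimal sketch of this kind of horse race (hypothetical prices and four stand-in models, not the paper's 41 candidates), one can rank models by mean-squared error and compute the RMSE reduction gained by switching to the optimal model:

```python
import numpy as np

rng = np.random.default_rng(0)
actual = rng.normal(5.0, 0.5, size=120)   # hypothetical daily corn cash prices
# stand-ins for the paper's candidate models: forecasts with growing error size
forecasts = {f"model_{i}": actual + rng.normal(0.0, 0.02 * (i + 1), size=120)
             for i in range(4)}

mse = {name: float(np.mean((f - actual) ** 2)) for name, f in forecasts.items()}
best = min(mse, key=mse.get)

for name in sorted(mse, key=mse.get):
    reduction = 100 * (np.sqrt(mse[name]) - np.sqrt(mse[best])) / np.sqrt(mse[name])
    print(f"{name}: MSE={mse[name]:.6f}, RMSE reduction vs optimal: {reduction:.2f}%")
```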

2.
Releases of GDP data undergo a series of revisions over time. These revisions have an impact on the results of macroeconometric models, as documented by the growing literature on real-time data applications. Revisions of U.S. GDP data can be explained, and are partly predictable, according to Faust et al. (J. Money Credit Bank. 37(3):403–419, 2005) and Fixler and Grimm (J. Product. Anal. 25:213–229, 2006). This analysis proposes the inclusion of mixed-frequency data for forecasting GDP revisions, so that the information set available around the first data vintage can be exploited more fully than with purely quarterly data. In-sample and out-of-sample results suggest that forecasts of GDP revisions can be improved by using mixed-frequency data.
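
A minimal sketch of the mixed-frequency idea, assuming hypothetical monthly indicators and an unrestricted (U-MIDAS-style) regression in which each monthly observation of the quarter enters as its own regressor; the paper's actual specification may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
n_q = 80                                   # number of quarters
monthly = rng.normal(size=(n_q, 3))        # three monthly indicator readings per quarter
# hypothetical revision process: driven by the within-quarter monthly pattern
revision = 0.3 * monthly[:, 2] - 0.1 * monthly[:, 0] + rng.normal(0, 0.2, n_q)

# unrestricted regression: each monthly lag gets its own coefficient
X = np.column_stack([np.ones(n_q), monthly])
beta, *_ = np.linalg.lstsq(X, revision, rcond=None)
print("coefficients:", beta)
print("in-sample RMSE:", np.sqrt(np.mean((revision - X @ beta) ** 2)))
```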

3.
"The limitations of available migration data preclude a time-series approach of modeling interstate migration [in the United States]. The method presented here combines aspects of the demographic and economic approaches to forecasting migration in a manner compatible with existing data. Migration rates are modeled to change in response to changes in economic conditions. When applied to resently constructed data on migration based on income tax returns and then compared to standard demographic projections, the demographic-economic approach has a 20% lower total error in forecasting net migration by state for cohorts of labor-force age."  相似文献   

4.
This paper presents an empirical analysis of stochastic features of volatility in the Japanese stock price index, or TOPIX, using high-frequency data sampled every 5 min. The process of TOPIX is modeled by a stochastic differential equation with time-homogeneous drift and diffusion coefficients. To avoid the risk of misspecifying the volatility function, which is defined as the squared diffusion coefficient, a local polynomial model is applied to the data, producing estimates of the volatility function together with their confidence intervals. The estimation results suggest that the volatility function shows similar patterns in one period but changes drastically in another.
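
A rough sketch of nonparametric volatility estimation in this spirit: regress squared 5-min increments on the price level with a local-linear (degree-1 local polynomial) smoother. The simulated path, bandwidth, and kernel are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local-linear (degree-1 local polynomial) estimate of E[y | x = x0]
    with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                         # intercept = fitted value at x0

rng = np.random.default_rng(2)
dt = 5.0 / (60 * 24)                       # 5 minutes, in days
path = 100 + np.cumsum(rng.normal(0, 0.05, 2000))   # stand-in for the index path
levels, incr = path[:-1], np.diff(path)
y = incr ** 2 / dt                         # squared increments estimate the squared diffusion

for x0 in np.linspace(levels.min(), levels.max(), 5):
    print(f"sigma^2({x0:.2f}) ~ {local_linear(x0, levels, y, h=1.0):.4f}")
```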

5.
Summary. Many economic and social phenomena are measured by composite indicators computed as weighted averages of a set of elementary time series. Often the data are collected by means of large sample surveys and processing takes a long time, whereas the values of some elementary component series may be available considerably earlier and may be used for forecasting the composite index. This problem is addressed within the framework of prediction theory for stochastic processes. A method is proposed for exploiting anticipated information to minimize the mean-square forecast error, and for selecting the most useful elementary series. An application to the Italian general industrial production index illustrates that knowledge of the anticipated values of some, or even just one, component series may reduce the forecast error considerably.
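
A toy illustration of the gain from anticipated information, assuming three hypothetical AR(1) component series with known weights; the paper's optimal-prediction machinery is more general:

```python
import numpy as np

rng = np.random.default_rng(3)
T, phi = 200, 0.8
w = np.array([0.5, 0.3, 0.2])             # hypothetical index weights
comps = np.zeros((T, 3))
for t in range(1, T):                     # three AR(1) elementary series
    comps[t] = phi * comps[t - 1] + rng.normal(0, 1, 3)
index = comps @ w

# One-step forecast of the index at the final period, based on data up to the
# previous period, when component 0 is "anticipated" (already observed):
with_antic = w[0] * comps[-1, 0] + w[1:] @ (phi * comps[-2, 1:])
without = w @ (phi * comps[-2])
print("error with anticipated component:", abs(index[-1] - with_antic))
print("error without:                  ", abs(index[-1] - without))
```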

6.
7.
SUMMARY The combined array provides a powerful, more statistically rigorous alternative to Taguchi's crossed-array approach to robust parameter design. The combined array assumes a single linear model in the control and the noise factors. One may then find conditions on the control factors that minimize an appropriate loss function involving the noise factors. The most appropriate loss function is often simply the resulting process variance, recognizing that the noise factors are actually random effects in the process. Because the major focus of such an experiment is to optimize the estimated process variance, it is vital to understand its prediction properties. This paper develops the mean squared error of the estimated process variance for the combined-array approach, under the assumption that the model is correctly specified. Specific combined arrays are compared for robustness. A practical example outlines how this approach may be used to select appropriate combined arrays in a particular experimental situation.
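
For concreteness, a common parametrization of the combined-array model and the process variance it induces (a sketch; the paper's exact model and loss function may differ):

```latex
% Single linear model in control factors x and noise factors z:
\[
  y(\mathbf{x},\mathbf{z}) = \beta_0 + \mathbf{x}'\boldsymbol{\beta}
    + \mathbf{z}'\boldsymbol{\gamma} + \mathbf{x}'\boldsymbol{\Delta}\mathbf{z}
    + \varepsilon .
\]
% Treating the noise factors as random with E[z] = 0, Var(z) = Sigma_z and
% Var(eps) = sigma^2, the process variance to be minimized over x is
\[
  \operatorname{Var}\!\left[y(\mathbf{x},\mathbf{z})\right]
    = (\boldsymbol{\gamma} + \boldsymbol{\Delta}'\mathbf{x})'\,
      \boldsymbol{\Sigma}_z\,
      (\boldsymbol{\gamma} + \boldsymbol{\Delta}'\mathbf{x}) + \sigma^2 .
\]
```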

8.
In this paper, we propose a new model of asset prices which takes account of the investment strategies of three different kinds of agents: the market-makers, who operate rationally on the basis of the asset fundamentals; the smart buy-and-sell agents, who intervene when prices reach particular levels; and the non-smart buy-and-sell agents, who trade infrequently, mainly following psychological motivations. The different behavior of these groups of agents can create temporary inefficiencies in financial markets, and we show that, by exploiting these inefficiencies, it is possible to improve the forecasting of asset prices.

9.
Robust parameter design, originally proposed by Taguchi (1987, System of Experimental Design, vols. I and II, UNIPUB, New York), is an off-line production technique for reducing variation and improving a product's quality by using product arrays. However, the use of product arrays results in an exorbitant number of runs. To overcome the drawbacks of the product array, several researchers have proposed the use of combined arrays, in which the control and noise factors are combined in a single array. In this paper, we use certain orthogonal arrays that are embedded into Hadamard matrices as combined arrays, in order to identify a model that contains all the main effects (control and noise) and their control-by-noise interactions with high efficiency. Aliasing of effects in each case is also discussed.
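
A small sketch of the raw material involved: the Sylvester construction of a Hadamard matrix, whose columns (after dropping the all-ones column) form a two-level orthogonal array. The split of columns into control and noise factors below is a hypothetical placeholder for the paper's specific embeddings:

```python
import numpy as np

def hadamard_sylvester(k):
    """Sylvester construction: a 2^k x 2^k Hadamard matrix of +/-1 entries."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard_sylvester(3)          # 8 x 8 Hadamard matrix
design = H[:, 1:]                  # drop the all-ones column -> OA(8, 7, 2, 2)
print(design)
# A hypothetical split: the first columns as control factors, the remaining ones
# as noise factors, chosen so control-by-noise interactions stay estimable
# (the paper's embeddings make this choice with high efficiency).
```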

10.
ABSTRACT

For the rating process of Collateralized Debt Obligations (CDOs), Moody's suggests the Diversity Score as a measure of diversification in the collateral pool. This measure is used in Moody's Binomial Expansion Technique to infer the probability of default and thus the expected loss in the portfolio. In this paper, we examine the appropriateness of this approach for capturing the reality of defaults, using a copula approach and lower tail dependence.
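
A minimal sketch of the Binomial Expansion Technique as commonly described: the pool is replaced by D independent identical bonds (D = Diversity Score) and losses are summed over the binomial default distribution. The parameter values are hypothetical:

```python
from math import comb

def bet_expected_loss(D, p, loss_given_default, notional=1.0):
    """Binomial Expansion Technique: treat the pool as D independent bonds,
    each defaulting with probability p; sum losses over the binomial distribution."""
    el = 0.0
    for j in range(D + 1):
        prob_j = comb(D, j) * p**j * (1 - p) ** (D - j)
        el += prob_j * (j / D) * loss_given_default * notional
    return el

# Hypothetical pool: diversity score 40, default probability 2%, 55% loss severity.
print(bet_expected_loss(D=40, p=0.02, loss_given_default=0.55))
```

Because pool loss here is linear in the number of defaults, the sum collapses to p times the loss severity; for a tranche, whose loss is a nonlinear function of the number of defaults j, the binomial expansion genuinely matters.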

11.
Various methods for clustering mixed-mode data are compared. A method based on a finite mixture model, in which the observed categorical variables are generated from underlying continuous variables, outperforms more conventional methods when applied to artificially generated data. This method also performs best when applied to Fisher's iris data in which two of the variables are categorized by applying thresholds.

12.
This article investigates the relevance of considering a large number of macroeconomic indicators when forecasting the complete distribution of a variable. The baseline time series model is a semiparametric specification based on the quantile autoregressive (QAR) model, which assumes that the quantiles depend on the lagged values of the variable. We then augment the time series model with macroeconomic information from a large dataset by including principal components or a subset of variables selected by LASSO. We forecast the distribution of the h-month growth rate for four economic variables from 1975 to 2011 and evaluate the forecast accuracy relative to a stochastic volatility model using the quantile score. The results for the output and employment measures indicate that the multivariate models outperform the time series forecasts, in particular at long horizons and in the tails of the distribution, while for the inflation variables the improved performance occurs mostly at the 6-month horizon. We also illustrate the practical relevance of predicting the distribution by considering forecasts at three dates during the last recession.
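
A minimal QAR(1) sketch using statsmodels' QuantReg on a simulated series (the data and lag order are illustrative assumptions); the paper's multivariate variants would append principal components or LASSO-selected indicators to the regressor matrix:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 500
y = np.zeros(T)
for t in range(1, T):                    # hypothetical persistent series, fat-tailed shocks
    y[t] = 0.2 + 0.7 * y[t - 1] + 0.5 * rng.standard_t(5)

endog, exog = y[1:], sm.add_constant(y[:-1])   # QAR(1): quantiles depend on one lag
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(endog, exog).fit(q=q)
    print(f"tau = {q}: intercept = {res.params[0]:.3f}, slope = {res.params[1]:.3f}")
```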

13.
Modeling and forecasting of interest rates has traditionally proceeded in the framework of linear stationary methods such as ARMA and VAR, but only with moderate success. We examine here three methods that account for several specific features of real-world asset prices, such as nonstationarity and nonlinearity. Our three candidate methods are based, respectively, on a combined wavelet artificial neural network (WANN) analysis, a mixed spectrum (MS) analysis, and nonlinear ARMA models with Fourier coefficients (FNLARMA). These models are applied to weekly data on interest rates in India, and their forecasting performance is evaluated vis-à-vis three GARCH models [GARCH(1,1), GARCH-M(1,1), and EGARCH(1,1)] as well as the random walk model. Both the WANN and MS methods show marked improvement over the other benchmark models, and may thus hold considerable potential for real-world modeling and forecasting of financial data.

14.
King's Point Optimal (PO) test of a simple null hypothesis is useful in a number of ways; for example, it can be used to trace the power envelope against which existing tests can be compared. However, this test cannot always be constructed when testing a composite null hypothesis. It has been suggested in the literature that approximate PO (APO) tests can overcome this problem, but they also have some drawbacks. This paper investigates whether King's PO test can be used for testing a composite null in the presence of nuisance parameters via a maximized Monte Carlo (MMC) approach, with encouraging results.
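
A toy sketch of the maximized Monte Carlo idea (a Gaussian location test with unknown scale as a stand-in problem; the statistic, nuisance grid, and simulation budget are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(5)

def stat(x):
    """Hypothetical test statistic for H0: mean = 0 (sigma is an unknown nuisance)."""
    return abs(x.mean()) * np.sqrt(len(x))

def mc_pvalue(s_obs, s_sim):
    """Standard Monte Carlo p-value from N simulated null statistics."""
    return (1 + np.sum(s_sim >= s_obs)) / (len(s_sim) + 1)

x = rng.normal(0.4, 1.3, size=50)        # observed sample
s_obs, N = stat(x), 199

# Maximized Monte Carlo: compute the MC p-value at each point of a nuisance grid
# and reject only if the *maximum* p-value is below the level.
p_max = max(
    mc_pvalue(s_obs, np.array([stat(rng.normal(0.0, sigma, 50)) for _ in range(N)]))
    for sigma in (0.5, 1.0, 1.5, 2.0)
)
print(f"MMC p-value = {p_max:.3f}; reject at 5%? {p_max <= 0.05}")
```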

15.
This study compares two methods for handling missing data in longitudinal trials: the last-observation-carried-forward (LOCF) method and a method based on a multivariate or mixed model for repeated measurements (MMRM). Using data sets simulated to match six actual trials, I imposed several drop-out mechanisms and compared the methods in terms of bias in the treatment difference and power of the treatment comparison. With equal drop-out in the Active and Placebo arms, LOCF generally underestimated the treatment effect; but with unequal drop-out, the bias could be much larger and in either direction. In contrast, bias with the MMRM method was much smaller; and whereas MMRM rarely caused a difference in power of greater than 20%, LOCF did so in nearly half the simulations. Use of the LOCF method is therefore likely to misrepresent the results of a trial seriously, and so is not a good choice for primary analysis. In contrast, the MMRM method is unlikely to result in serious misinterpretation unless the drop-out mechanism is missing not at random (MNAR) and drop-out is substantially unequal. Moreover, MMRM is clearly more reliable and better grounded statistically. Neither method is capable of dealing on its own with trials involving MNAR drop-out mechanisms, for which sensitivity analysis using more complex methods is needed.
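
A minimal illustration of LOCF imputation with pandas on a hypothetical three-visit trial; the MMRM alternative is described in the trailing comment rather than implemented:

```python
import numpy as np
import pandas as pd

# Hypothetical longitudinal trial: rows = subjects, columns = visits, NaN = drop-out
visits = pd.DataFrame(
    {"week0": [10.0, 12.0, 9.0],
     "week4": [11.0, np.nan, 8.5],
     "week8": [np.nan, np.nan, 8.0]},
    index=["subj1", "subj2", "subj3"],
)

locf = visits.ffill(axis=1)     # last observation carried forward across visits
print(locf)
# An MMRM analysis would instead model all observed values jointly (e.g. with an
# unstructured covariance over visits) and leave the missing cells unimputed.
```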

16.
This paper deals with the implementation of model selection criteria for data generated by ARMA processes. The recently introduced modified divergence information criterion (MDIC) is used and compared with traditional selection criteria such as the Akaike information criterion (AIC) and the Schwarz information criterion (SIC). The appropriateness of the selected model is tested for one- and five-step-ahead predictions using the normalized mean-squared forecast error (NMSFE).
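
A sketch of order selection by information criteria on a simulated ARMA(1,1) series using statsmodels; the modified divergence information criterion is paper-specific and not implemented here, so AIC and SIC/BIC stand in:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

rng = np.random.default_rng(6)
# Simulate an ARMA(1,1) process: (1 - 0.6L) y_t = (1 + 0.4L) e_t
y = ArmaProcess([1, -0.6], [1, 0.4]).generate_sample(
    nsample=300, distrvs=rng.standard_normal)

for p in range(3):
    for q in range(3):
        res = ARIMA(y, order=(p, 0, q)).fit()
        print(f"ARMA({p},{q}): AIC={res.aic:.1f}, SIC/BIC={res.bic:.1f}")
```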

17.
"This article demonstrates the value of microdata for understanding the effect of wages on life cycle fertility dynamics. Conventional estimates of neoclassical economic fertility models obtained from linear aggregate time series regressions are widely criticized for being nonrobust when adjusted for serial correlation. Moreover, the forecasting power of these aggregative neoclassical models has been shown to be inferior when compared with conventional time series models that assign no role to wages. This article demonstrates that, when neoclassical models of fertility are estimated on microdata using methods that incorporate key demographic restrictions and when they are properly aggregated, they have considerable forecasting power." Data are from the 1981 Swedish Fertility Survey.  相似文献   

18.
Dankmar Böhning, Statistics, 2013, 47(4): 487–495
In optimal experimental design theory there are well-known situations in which additional constraints are imposed on the design set. In general, these constraints destroy the simplex structure of the set of feasible design points, so the iteration procedures available for the unrestricted case are no longer applicable.

In this paper a penalty approach is suggested which transforms the restricted problem into an unrestricted one and allows the application of well-known algorithms such as the Fedorov–Wynn type or the projected-gradient procedure.

19.
AStA Advances in Statistical Analysis - Spatial price comparisons rely to a high degree on the quality of the underlying price data that are collected within or across countries. Below the basic...

20.
Consider a stochastic process (X, A), where X represents the evolution of a system over time and A is an associated point process that has stationary independent increments. Suppose we are interested in estimating the time-average frequency of the process X being in a set of states. Often it is more convenient to have a sampling procedure that estimates the time average by averaging the observed values of X(Tn) (Tn being a point of A) over a long period of time: the event average of the process. In this paper we examine the situation in which the two procedures (event averaging and time averaging) produce the same estimate (the ASTA property: Arrivals See Time Averages). We prove a result stronger than ASTA: under a lack-of-anticipation assumption, the point process A, restricted to any set of states, has the same probabilistic structure as the original point process. In particular, if the original point process is Poisson, the new point process is still Poisson with the same parameter as the original. We develop our results in the more general setting of a stochastic process (X, A), that is, a process with an imbedded cumulative process A = {A(t), t ≥ 0}, which is assumed to be a Lévy process with non-decreasing sample paths. This framework allows for modeling fluid processes as well as compound Poisson processes with non-integer increments. First we state the result in discrete time; the discrete-time result is then extended to the continuous-time case using limiting arguments and weak-convergence theory. As a corollary we give a proof of ASTA under weak conditions, and a simple, intuitive proof of PASTA (Poisson Arrivals See Time Averages) under the standard conditions. The results are useful in queueing and statistical sampling theory.
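
A small simulation in the PASTA special case, assuming an M/M/1 queue: the long-run fraction of time the server is busy should match the fraction of Poisson arrivals that find it busy (both near rho = lambda/mu):

```python
import numpy as np

rng = np.random.default_rng(7)
lam, mu, horizon = 0.7, 1.0, 50_000.0   # arrival rate, service rate, simulated time

t, n = 0.0, 0                # clock and number in system
busy_time = 0.0              # accumulates time with the server busy
arrivals = seen_busy = 0
while t < horizon:
    rate = lam + (mu if n > 0 else 0.0)
    dt = rng.exponential(1.0 / rate)
    if n > 0:
        busy_time += dt      # the state holds over (t, t + dt)
    t += dt
    if rng.random() < lam / rate:   # the next event is a Poisson arrival
        arrivals += 1
        if n > 0:
            seen_busy += 1   # this arrival finds the server busy
        n += 1
    else:
        n -= 1               # departure

print(f"time average P(busy)    = {busy_time / t:.3f}")
print(f"arrival average P(busy) = {seen_busy / arrivals:.3f} (rho = {lam / mu})")
```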
