Similar Articles
20 similar articles found (search time: 26 ms).
1.
Ashley (1983) gave a simple condition for determining when a forecast of an explanatory variable (Xt) is sufficiently inaccurate that direct replacement of Xt by the forecast yields worse forecasts of the dependent variable than does respecification of the equation to omit Xt. Many available macroeconomic forecasts were shown to be of limited usefulness in direct replacement. Direct replacement, however, is not optimal if the forecast's distribution is known. Here optimal linear forms in commercial forecasts of several macroeconomic variables are obtained by using estimates of their distributions. Although they are an improvement on the raw forecasts (direct replacement), these optimal forms are still too inaccurate to be useful in replacing the actual explanatory variables in forecasting models. The results strongly indicate that optimal forms involving several commercial forecasts will not be very useful either. Thus Ashley's (1983) sufficient condition retains its value in gauging the usefulness of a forecast of an explanatory variable in a forecasting model, even though it focuses on direct replacement.
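A minimal sketch of what an "optimal linear form" in a single commercial forecast amounts to, namely regressing the realized explanatory variable on the raw forecast and using the fitted value in place of direct replacement. The data, variable names, and numbers below are hypothetical, not Ashley's:

```python
import numpy as np

# Hypothetical history: realized values of the explanatory variable X and a
# commercial forecast F of it (illustrative only).
rng = np.random.default_rng(0)
x_true = rng.normal(2.0, 1.0, size=200)
forecast = 0.5 + 0.7 * x_true + rng.normal(0.0, 0.8, size=200)  # biased, noisy forecast

# "Optimal linear form": regress realized X on the raw forecast and apply the
# fitted correction instead of plugging the raw forecast in directly.
slope, intercept = np.polyfit(forecast, x_true, deg=1)
new_raw_forecast = 1.8
optimal_linear_form = intercept + slope * new_raw_forecast
print(f"raw: {new_raw_forecast:.2f}  corrected: {optimal_linear_form:.2f}")
```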

2.
Capacity utilization measures have traditionally been constructed as indexes of actual, as compared to “potential,” output. This potential or capacity output (Y*) can be represented within an economic model of the firm as the tangency between the short- and long-run average cost curves. Economic theoretical measures of capacity utilization (CU) can then be characterized as Y/Y* where Y is the realized level of output. These quantity or primal CU measures allow for economic interpretation; they provide explicit inference as to how changes in exogenous variables affect CU. Additional information for analyzing deviations from capacity production can be obtained by assessing the “dual” cost of the gap.

In this article the definitions and representations of primal-output and dual-cost CU measures are formalized within a dynamic model of a monopolistic firm. As an illustration of this approach to characterizing CU measures, a model is estimated for the U.S. automobile industry, 1959–1980, and primal and dual CU indexes are constructed. These indexes are then applied, using the dual-cost measure, to adjust productivity measures for "disequilibrium".

3.
We propose a parametric nonlinear time-series model, namely the Autoregressive-Stochastic volatility with threshold (AR-SVT) model with mean equation, for forecasting level and volatility. Methodology for estimation of parameters of this model is developed by first obtaining the recursive Kalman filter time-update equation and then employing the unrestricted quasi-maximum likelihood method. Furthermore, optimal one-step and two-step-ahead out-of-sample forecast formulae along with forecast error variances are derived analytically by recursive use of conditional expectation and variance. As an illustration, volatile all-India monthly spices exports during the period January 2006 to January 2012 are considered. The entire data analysis is carried out using the EViews and MATLAB software packages. The AR-SVT model is fitted and interval forecasts for 10 hold-out data points are obtained. Superiority of this model for describing and forecasting volatility over other competing models, namely the AR-GARCH (generalized autoregressive conditional heteroscedastic), AR-EGARCH (exponential GARCH), AR-TGARCH (threshold GARCH), and AR-SV (stochastic volatility) models, is shown for the data under consideration. Finally, for the AR-SVT model, optimal out-of-sample forecasts along with forecasts of one-step-ahead variances are obtained.

4.
Classical time-series theory assumes values of the response variable to be 'crisp' or 'precise', which is quite often violated in reality. However, forecasting of such data can be carried out through fuzzy time-series analysis. This article presents an improved method of forecasting based on LR fuzzy sets as membership functions. As an illustration, the methodology is employed for forecasting India's total foodgrain production. For the data under consideration, the superiority of the proposed method over other competing methods is demonstrated in respect of modelling and forecasting on the basis of mean square error and average relative error criteria. Finally, out-of-sample forecasts are also obtained.

5.
In this article, we test a new method of combining economic forecasts, the odds-matrix approach, using Clemen and Winkler's (1986) data on gross-national-product forecasts. For these data, the results show that the odds-matrix method is more accurate than each of the other methods tested and can be expected to be especially useful when data are nonstationary or sparse.

6.
This article shows when the theoretical Lagrange multiplier solution for combining forecasts has a regression representation. This solution is not optimal in general because it imposes a restriction on an otherwise more general linear form. The optimal linear predictor based on N forecasts is presented. This predictor is or is not a regression function depending on whether the latter function is linear. I also show that the Lagrange multiplier solution may often be nearly optimal. Hence, when estimating a composite forecast, the restriction imposed by this solution may prove useful. This observation is supported in an empirical example.
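One way to see the restriction is to compare the variance-minimizing combination whose weights are constrained to sum to one (the Lagrange-multiplier solution) with an unrestricted regression of the outcome on an intercept and the N forecasts (the optimal linear predictor when the regression function is linear). The sketch below, on simulated data with hypothetical forecasts, illustrates that contrast rather than the article's empirical example:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
y = rng.normal(size=n)                      # target series
f1 = y + rng.normal(scale=0.5, size=n)      # two hypothetical forecasts
f2 = y + rng.normal(scale=0.8, size=n)

# Lagrange-multiplier solution: weights minimizing error variance s.t. sum(w) = 1.
E = np.column_stack([f1 - y, f2 - y])       # forecast errors
S = np.cov(E, rowvar=False)                 # error covariance matrix
ones = np.ones(2)
w_lm = np.linalg.solve(S, ones)
w_lm /= ones @ w_lm

# Unrestricted optimal linear predictor: OLS of y on an intercept and both forecasts.
X = np.column_stack([np.ones(n), f1, f2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

print("constrained weights:", w_lm)
print("regression coefficients (intercept, f1, f2):", beta)
```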

7.
Recent advances in financial econometrics have allowed for the construction of efficient ex post measures of daily volatility. This paper investigates the importance of instability in models of realised volatility and their corresponding forecasts. Testing for model instability is conducted with a subsampling method. We show that removing structurally unstable data of a short duration has a negligible impact on the accuracy of conditional mean forecasts of volatility. In contrast, it does provide a substantial improvement in a model's forecast density of volatility. In addition, the forecasting performance improves, often dramatically, when we evaluate models on structurally stable data.

8.
Econometric Reviews, 2013, 32(3): 175–198
Abstract

A number of volatility forecasting studies have led to the perception that ARCH- and stochastic volatility-type models provide poor out-of-sample forecasts of volatility. This is primarily based on the use of traditional forecast evaluation criteria concerning the accuracy and the unbiasedness of forecasts. In this paper we provide an analytical assessment of volatility forecasting performance. We use the volatility and log volatility framework to show how the inherent noise in approximating the true, and unobservable, volatility by the squared return results in a misleading forecast evaluation, inflating the observed mean squared forecast error and invalidating the Diebold–Mariano statistic. We analytically characterize this noise and explicitly quantify its effects assuming normal errors. We extend our results using more general error structures such as the Compound Normal and the Gram–Charlier classes of distributions. We argue that evaluation problems are likely to be exacerbated by non-normality of the shocks and that non-linear and utility-based criteria can be more suitable for the evaluation of volatility forecasts.
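The inflation effect is easy to reproduce by simulation: even the true conditional variance, used as the "forecast," shows a large mean squared error when evaluated against squared returns, because the squared return is a noisy proxy for the latent volatility. A minimal sketch under an assumed GARCH(1,1) data-generating process (not the paper's analytical framework):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 50_000
omega, alpha, beta = 0.05, 0.08, 0.90   # hypothetical GARCH(1,1) parameters
h = np.empty(T)
r = np.empty(T)
h[0] = omega / (1 - alpha - beta)        # start at the unconditional variance
for t in range(T):
    r[t] = np.sqrt(h[t]) * rng.standard_normal()
    if t + 1 < T:
        h[t + 1] = omega + alpha * r[t] ** 2 + beta * h[t]

forecast = h                                             # the true conditional variance
mse_vs_true_variance = np.mean((forecast - h) ** 2)      # zero by construction
mse_vs_squared_return = np.mean((forecast - r ** 2) ** 2)  # inflated by proxy noise
print(mse_vs_true_variance, mse_vs_squared_return)
```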

9.
Hubert (1987, Assignment Methods in Combinatorial Data Analysis) presented a class of permutation, or random assignment, techniques for assessing correspondence between general k-dimensional proximity measures on a set of "objects." A major problem in higher-order assignment models is the prohibitive level of computation that is required. We present the first three exact moments of a test statistic for the symmetric cubic assignment model. Efficient computational formulas for the first three moments have been derived, thereby permitting approximation of the permutation distribution using well-known methods.

10.
Abstract

In categorical repeated audit controls, fallible auditors classify sample elements in order to estimate the population fraction of elements in certain categories. To take possible misclassifications into account, subsequent checks are performed with a decreasing number of observations. In this paper a model is presented for a general repeated audit control system, where k subsequent auditors classify elements into r categories. Two different subsampling procedures are discussed, named "stratified" and "random" sampling. Although these two sampling methods lead to different probability distributions, it is shown that the likelihood inferences are identical. The MLEs are derived and the situations with undefined MLEs are examined in detail; it is shown that an unbiased MLE can be obtained by stratified sampling. Three different methods for constructing confidence upper limits are discussed; the Bayesian upper limit seems to be the most satisfactory. Our theoretical results are applied to two cases with r = 2 and k = 2 or 3, respectively.

11.
The existing characteristic value correction iteration method terminates its iterations at an artificially set threshold in order to obtain intermediate estimates that improve on least squares, which makes the result rather arbitrary. To address this problem, a novel characteristic value correction iteration method is proposed in this article. To balance solutions against residuals, an L-curve expressing the relation between solution and residual norms is drawn, and the iteration corresponding to the point of maximum curvature is chosen as the termination condition, yielding the "optimal" intermediate estimates. A numerical experiment is carried out to demonstrate the correctness of the theoretical analysis.
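The termination rule resembles the generic L-curve corner criterion: plot the log solution norm against the log residual norm across iterations and stop at the point of maximum curvature. A sketch of that curvature computation, with purely illustrative per-iteration norms (not the article's algorithm or data):

```python
import numpy as np

def l_curve_corner(residual_norms, solution_norms):
    """Return the iteration index at the point of maximum curvature of the
    (log residual norm, log solution norm) curve (a generic L-curve corner rule)."""
    x = np.log(np.asarray(residual_norms, dtype=float))
    y = np.log(np.asarray(solution_norms, dtype=float))
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    return int(np.argmax(curvature))

# Hypothetical per-iteration norms from some iterative estimator:
res = [10.0, 5.0, 2.5, 1.5, 1.2, 1.1, 1.05]
sol = [1.0, 1.2, 1.6, 2.5, 4.5, 8.0, 15.0]
print("stop at iteration", l_curve_corner(res, sol))
```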

12.
It is well known that parameter estimates and forecasts are sensitive to assumptions about the tail behavior of the error distribution. In this article, we develop an approach to sequential inference that also simultaneously estimates the tail of the accompanying error distribution. Our simulation-based approach models errors with a Student-t distribution with ν degrees of freedom and, as new data arrive, we sequentially compute the marginal posterior distribution of the tail thickness. Our method naturally incorporates fat-tailed error distributions and can be extended to other data features such as stochastic volatility. We show that the sequential Bayes factor provides an optimal test of fat tails versus normality. We provide an empirical and theoretical analysis of the rate of learning of tail thickness under a default Jeffreys prior. We illustrate our sequential methodology on the British pound/U.S. dollar daily exchange rate data and on data from the 2008–2009 credit crisis using daily S&P500 returns. Our method naturally extends to multivariate and dynamic panel data.
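As a toy illustration of sequentially learning tail thickness, the sketch below updates a grid posterior over the degrees of freedom ν of a unit-scale t error, one observation at a time. It captures the spirit of the sequential approach but none of its specifics (no stochastic volatility, no simulation-based filtering, hypothetical data, flat grid prior rather than a Jeffreys prior):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = stats.t.rvs(df=5, size=500, random_state=rng)   # hypothetical fat-tailed returns

nu_grid = np.arange(2, 41)                       # candidate degrees of freedom
log_post = np.zeros(nu_grid.size)                # flat prior on the grid

for y in data:                                   # one-observation-at-a-time update
    log_post += stats.t.logpdf(y, df=nu_grid)
    log_post -= log_post.max()                   # numerical stabilization

post = np.exp(log_post)
post /= post.sum()
print("posterior mean of nu:", float(np.sum(nu_grid * post)))
```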

13.
The gist of the quickest change-point detection problem is to detect the presence of a change in the statistical behavior of a series of sequentially made observations, and do so in an optimal detection-speed-versus-"false-positive"-risk manner. When optimality is understood either in the generalized Bayesian sense or as defined in Shiryaev's multi-cyclic setup, the so-called Shiryaev–Roberts (SR) detection procedure is known to be the "best one can do", provided, however, that the observations' pre- and post-change distributions are both fully specified. We consider a more realistic setup, viz. one where the post-change distribution is assumed known only up to a parameter, so that the latter may be misspecified. The question of interest is the sensitivity (or robustness) of the otherwise "best" SR procedure with respect to a possible misspecification of the post-change distribution parameter. To answer this question, we provide a case study where, in a specific Gaussian scenario, we allow the SR procedure to be "out of tune" in the way of the post-change distribution parameter, and numerically assess the effect of the "mistuning" on Shiryaev's (multi-cyclic) Stationary Average Detection Delay delivered by the SR procedure. The comprehensive quantitative robustness characterization of the SR procedure obtained in the study can be used to develop the respective theory as well as to provide a rationale for practical design of the SR procedure. The overall qualitative conclusion of the study is an expected one: the SR procedure is less (more) robust for less (more) contrast changes and for lower (higher) levels of the false alarm risk.
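For concreteness, the SR statistic obeys the recursion R_n = (1 + R_{n-1}) Λ_n, where Λ_n is the likelihood ratio of the n-th observation, and an alarm is raised the first time R_n crosses a threshold. A minimal Gaussian mean-shift sketch (threshold and parameters hypothetical); mistuning corresponds to running it with the wrong post-change mean:

```python
import numpy as np
from scipy import stats

def shiryaev_roberts(obs, mu_pre=0.0, mu_post=1.0, sigma=1.0, threshold=100.0):
    """Run the SR procedure for a known Gaussian mean shift; return the alarm
    time (1-based) or None. The assumed post-change mean mu_post is where
    'mistuning' would enter."""
    r = 0.0
    for n, x in enumerate(obs, start=1):
        lam = stats.norm.pdf(x, mu_post, sigma) / stats.norm.pdf(x, mu_pre, sigma)
        r = (1.0 + r) * lam                     # SR recursion
        if r >= threshold:
            return n
    return None

rng = np.random.default_rng(4)
pre = rng.normal(0.0, 1.0, 200)                 # pre-change segment
post = rng.normal(1.0, 1.0, 200)                # true post-change mean is 1.0
print(shiryaev_roberts(np.concatenate([pre, post])))
```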

14.
This article discusses the discretization of continuous-time filters for application to discrete time series sampled at any fixed frequency. In this approach, the filter is first set up directly in continuous-time; since the filter is expressed over a continuous range of lags, we also refer to them as continuous-lag filters. The second step is to discretize the filter itself. This approach applies to different problems in signal extraction, including trend or business cycle analysis, and the method allows for coherent design of discrete filters for observed data sampled as a stock or a flow, for nonstationary data with stochastic trend, and for different sampling frequencies. We derive explicit formulas for the mean squared error (MSE) optimal discretization filters. We also discuss the problem of optimal interpolation for nonstationary processes – namely, how to estimate the values of a process and its components at arbitrary times in-between the sampling times. A number of illustrations of discrete filter coefficient calculations are provided, including the local level model (LLM) trend filter, the smooth trend model (STM) trend filter, and the Band Pass (BP) filter. The essential methodology can be applied to other kinds of trend extraction problems. Finally, we provide an extended demonstration of the method on CPI flow data measured at monthly and annual sampling frequencies.

15.
The Box–Jenkins methodology for modeling and forecasting from univariate time series models has long been considered a standard to which other forecasting techniques have been compared. To a Bayesian statistician, however, the method lacks an important facet: a provision for modeling uncertainty about parameter estimates. We present a technique called sampling the future for including this feature in both the estimation and forecasting stages. Although it is relatively easy to use Bayesian methods to estimate the parameters in an autoregressive integrated moving average (ARIMA) model, there are severe difficulties in producing forecasts from such a model. The multiperiod predictive density does not have a convenient closed form, so approximations are needed. In this article, exact Bayesian forecasting is approximated by simulating the joint predictive distribution. First, parameter sets are randomly generated from the joint posterior distribution. These are then used to simulate future paths of the time series. This bundle of many possible realizations is used to project the future in several ways. Highest probability forecast regions are formed and portrayed with computer graphics. The predictive density's shape is explored. Finally, we discuss a method that allows the analyst to subjectively modify the posterior distribution on the parameters and produce alternate forecasts.
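A stripped-down version of the "sampling the future" idea for an AR(1) model: draw parameter sets from an approximation to their posterior, simulate a future path for each draw, and summarize the bundle of paths by quantiles. The crude normal posterior approximation and all settings below are assumptions for illustration, not the article's exact ARIMA treatment:

```python
import numpy as np

rng = np.random.default_rng(5)
y = np.zeros(300)
for t in range(1, 300):                          # hypothetical AR(1) data
    y[t] = 0.7 * y[t - 1] + rng.normal(scale=0.5)

# Crude posterior approximation: OLS estimate of phi plus its sampling variance.
X, z = y[:-1], y[1:]
phi_hat = (X @ z) / (X @ X)
resid = z - phi_hat * X
sigma_hat = resid.std(ddof=1)
phi_se = sigma_hat / np.sqrt(X @ X)

H, n_draws = 12, 2000
paths = np.empty((n_draws, H))
for i in range(n_draws):                         # "sampling the future"
    phi = rng.normal(phi_hat, phi_se)            # parameter draw
    level = y[-1]
    for h in range(H):
        level = phi * level + rng.normal(scale=sigma_hat)
        paths[i, h] = level

print(np.percentile(paths, [5, 50, 95], axis=0))  # forecast quantile bands by horizon
```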

16.
In this note we consider the problems of optimal linear prediction (o.l.p.) and minimum mean squared error prediction (m.m.s.e.p.) for a sequence X_t that is reduced to a stationary and invertible ARMA process Y_t through the filter (1 - B^s)^d X_t = Y_t. It is shown that these two predictors are not identical in general from the theoretical point of view. Permitting the degree of differencing d to take any real value, a set of conditions under which these commonly applied prediction formulas are identical is given.

17.
The efficiency of a sequential test is related to the "importance" of the trials within the test. This relationship is used to find the optimal test for selecting the greater of two binomial probabilities, p_a and p_b; namely, the stopping rule is "gambler's ruin" and the optimal discipline when p_a + p_b ≤ 1 (≥ 1) is play-the-winner (play-the-loser), i.e., an a-trial which results in a success is followed by an a-trial (b-trial), whereas an a-trial which results in a failure is followed by a b-trial (a-trial), and correspondingly for b-trials.
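A simulation sketch of the sampling discipline described above, using one common formalization: a success keeps the current arm ("play the winner"), a failure switches arms, and sampling stops at a gambler's-ruin boundary on the difference in observed successes. The boundary and probabilities are hypothetical, and the exact stopping rule in the article may differ:

```python
import numpy as np

def play_the_winner(p_a, p_b, boundary=5, rng=None):
    """Select the better arm under a play-the-winner discipline with a
    gambler's-ruin stopping boundary on the success-count difference
    (one common formalization; the article's exact rule may differ)."""
    if rng is None:
        rng = np.random.default_rng()
    arm, diff, trials = "a", 0, 0
    while abs(diff) < boundary:
        p = p_a if arm == "a" else p_b
        success = rng.random() < p
        diff += (1 if arm == "a" else -1) * int(success)
        if not success:                   # play the winner: switch only after a failure
            arm = "b" if arm == "a" else "a"
        trials += 1
    return ("a" if diff > 0 else "b"), trials

print(play_the_winner(0.3, 0.5, rng=np.random.default_rng(6)))
```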

18.
ABSTRACT

We propose point forecast accuracy measures based directly on distance of the forecast-error c.d.f. from the unit step function at 0 ("stochastic error distance," or SED). We provide a precise characterization of the relationship between SED and standard predictive loss functions, and we show that all such loss functions can be written as weighted SEDs. The leading case is absolute error loss. Among other things, this suggests shifting attention away from conditional-mean forecasts and toward conditional-median forecasts.
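The leading case can be checked numerically: the unweighted SED, i.e. the integrated absolute distance between the empirical forecast-error c.d.f. and the unit step at 0, coincides with the mean absolute error. A quick Monte Carlo check (hypothetical error distribution, not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(7)
errors = rng.normal(loc=0.3, scale=1.0, size=100_000)   # hypothetical forecast errors

# Empirical SED: integrate |F_hat(x) - 1{x >= 0}| on a fine grid.
grid = np.linspace(-6.0, 6.0, 5001)
F_hat = np.searchsorted(np.sort(errors), grid, side="right") / errors.size
step = (grid >= 0).astype(float)
dx = grid[1] - grid[0]
sed = np.sum(np.abs(F_hat - step)) * dx

print(sed, np.mean(np.abs(errors)))   # the two values should nearly coincide
```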

19.
The confidence interval of the Kaplan–Meier estimate of the survival probability at a fixed time point is often constructed by the Greenwood formula. This normal approximation-based method can be viewed as a Wald-type confidence interval for a binomial proportion, the survival probability, using the "effective" sample size defined by Cutler and Ederer. The Wald-type binomial confidence interval has been shown to perform poorly compared to other methods. We choose three methods of binomial confidence intervals for the construction of confidence intervals for the survival probability: Wilson's method, Agresti–Coull's method, and the higher-order asymptotic likelihood method. The methods of "effective" sample size proposed by Peto et al. and by Dorey and Korn are also considered. The Greenwood formula is far from satisfactory, while confidence intervals based on the three binomial-proportion methods using Cutler and Ederer's "effective" sample size have much better performance.
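A sketch of the general recipe: take the Kaplan–Meier estimate S(t) and its Greenwood variance, form an "effective" sample size n* = S(1 - S)/Var(S) in the Cutler–Ederer spirit, and feed S and n* into a Wilson binomial interval. The helper below is an illustration under those assumptions; the Peto and Dorey–Korn effective sample sizes mentioned above differ:

```python
import numpy as np
from scipy import stats

def wilson_ci_for_survival(s_hat, var_greenwood, level=0.95):
    """Wilson interval for a Kaplan-Meier survival probability, using an
    'effective' sample size n* = S(1-S)/Var(S) (Cutler-Ederer-style plug-in;
    other effective-sample-size definitions exist)."""
    z = stats.norm.ppf(0.5 + level / 2)
    n_eff = s_hat * (1 - s_hat) / var_greenwood
    center = (s_hat + z**2 / (2 * n_eff)) / (1 + z**2 / n_eff)
    half = (z / (1 + z**2 / n_eff)) * np.sqrt(
        s_hat * (1 - s_hat) / n_eff + z**2 / (4 * n_eff**2)
    )
    return center - half, center + half

# Hypothetical Kaplan-Meier output at a fixed time point:
print(wilson_ci_for_survival(s_hat=0.82, var_greenwood=0.0016))
```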

20.
The Box-Jenkins method is a popular and important technique for modeling and forecasting of time series. Unfortunately, the problem of determining the appropriate ARMA forecasting model (or indeed whether an ARMA model holds at all) is a major drawback to the use of the Box-Jenkins methodology. Gray et al. (1978) and Woodward and Gray (1979) have proposed methods of estimating p and q in ARMA modeling, based on the R and S arrays, that circumvent some of these modeling difficulties.

In this paper we generalize the R and S arrays by showing a relationship to Padé approximants, and then show that these arrays have a much wider application than just determining model order. Particular non-ARMA models can be identified as well, including certain processes that consist of deterministic functions plus ARMA noise; indeed, we believe that the combined R and S arrays are the best overall tool so far developed for the identification of general second-order (not just stationary) time series models.
