Similar Articles
20 similar articles were retrieved.
1.
ABSTRACT

We propose point forecast accuracy measures based directly on the distance of the forecast-error c.d.f. from the unit step function at 0 (“stochastic error distance,” or SED). We provide a precise characterization of the relationship between SED and standard predictive loss functions, and we show that all such loss functions can be written as weighted SEDs. The leading case is absolute error loss. Among other things, this suggests shifting attention away from conditional-mean forecasts and toward conditional-median forecasts.
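A minimal numerical sketch (in Python, not taken from the paper) of the stochastic error distance: the L1 distance between the empirical forecast-error c.d.f. and the unit step at zero is computed on simulated errors and compared with the mean absolute error, the leading case mentioned above.

import numpy as np

rng = np.random.default_rng(0)

# Simulated forecast errors (actual minus forecast); the distribution is arbitrary.
errors = rng.standard_t(df=5, size=5000)

# Empirical c.d.f. of the errors on a fine grid.
grid = np.linspace(errors.min() - 1, errors.max() + 1, 20001)
ecdf = np.searchsorted(np.sort(errors), grid, side="right") / errors.size

# Unit step function at zero.
step = (grid >= 0).astype(float)

# Stochastic error distance: L1 distance between the error c.d.f. and the step.
sed = np.trapz(np.abs(ecdf - step), grid)

# For the L1 version, SED coincides with the mean absolute error.
mae = np.abs(errors).mean()
print(f"SED = {sed:.4f},  MAE = {mae:.4f}")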

2.
ABSTRACT

An information framework is proposed for studying uncertainty and disagreement among economic forecasters. This framework builds upon the mixture model of combining density forecasts through a systematic application of information theory. The framework encompasses the measures used in the literature and leads to their generalizations. The focal measure is the Jensen–Shannon divergence of the mixture, which admits Kullback–Leibler and mutual information representations. Illustrations include exploring the dynamics of individual and aggregate uncertainty about the US inflation rate using the Survey of Professional Forecasters (SPF). We show that the normalized entropy index corrects some of the distortions caused by changes in the design of the SPF over time. Bayesian hierarchical models are used to examine the association of inflation uncertainty with anticipated inflation and the dispersion of point forecasts. Implementations of the information framework based on the variance and on a Dirichlet model for capturing uncertainty about the probability distribution of the economic variable are briefly discussed.
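The Jensen–Shannon decomposition described above can be sketched numerically. The forecaster densities below are arbitrary normals chosen for illustration; the aggregate uncertainty (entropy of the mixture) splits into average individual uncertainty plus a disagreement term, the Jensen–Shannon divergence.

import numpy as np
from scipy.stats import norm

# Individual density forecasts from three hypothetical forecasters,
# represented as normal densities (an assumption for illustration only).
means, sds = np.array([1.8, 2.2, 2.6]), np.array([0.4, 0.3, 0.5])
weights = np.full(3, 1 / 3)            # equal combination weights

x = np.linspace(-2, 6, 4001)
dx = x[1] - x[0]
densities = np.array([norm.pdf(x, m, s) for m, s in zip(means, sds)])
mixture = weights @ densities          # aggregate (mixture) density forecast

def entropy(p):
    """Differential entropy of a density tabulated on the grid."""
    p = np.clip(p, 1e-300, None)
    return -np.sum(p * np.log(p)) * dx

# Jensen-Shannon divergence of the mixture: entropy of the mixture minus the
# weighted average of individual entropies (aggregate uncertainty =
# average individual uncertainty + disagreement).
avg_individual = np.sum(weights * np.array([entropy(p) for p in densities]))
js_disagreement = entropy(mixture) - avg_individual
print(f"aggregate uncertainty     : {entropy(mixture):.4f}")
print(f"average indiv. uncertainty: {avg_individual:.4f}")
print(f"disagreement (JSD)        : {js_disagreement:.4f}")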

3.
ABSTRACT

Squared error loss remains the most commonly used loss function for constructing a Bayes estimator of the parameter of interest. However, it can lead to suboptimal solutions when a parameter is defined on a restricted space. It can also be an inappropriate choice when extreme overestimation and/or underestimation has severe consequences and a more conservative estimator is preferred. We advocate a class of loss functions for parameters defined on restricted spaces that infinitely penalize boundary decisions, in the same way that squared error loss does on the real line. We also recall several properties of loss functions, such as symmetry, convexity, and invariance. We propose generalizations of the squared error loss function for parameters defined on the positive real line and on an interval. We provide explicit solutions for the corresponding Bayes estimators and discuss multivariate extensions. Four well-known Bayesian estimation problems are used to demonstrate the inferential benefits the novel Bayes estimators can provide in the context of restricted estimation.
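A classical loss with the infinite-boundary-penalty property on the positive half-line is the Stein (entropy) loss; the sketch below uses it only to illustrate the idea, together with an assumed gamma posterior. It is not the class of generalizations proposed in the paper.

import numpy as np
from scipy.stats import gamma

# Stein / entropy loss on (0, infinity): it explodes as the decision d
# approaches either boundary (0 or infinity).
def stein_loss(d, theta):
    r = d / theta
    return r - np.log(r) - 1.0

# Assume the posterior of theta is Gamma(shape=3, scale=1).
post = gamma(a=3, scale=1.0)
theta_draws = post.rvs(size=200_000, random_state=1)

# Bayes estimator under squared error loss: the posterior mean.
d_se = theta_draws.mean()

# Bayes estimator under Stein loss: 1 / E[1/theta | data], found here
# numerically by minimizing the Monte Carlo posterior expected loss.
grid = np.linspace(0.05, 10, 2000)
risk = np.array([stein_loss(d, theta_draws).mean() for d in grid])
d_stein = grid[risk.argmin()]
print(f"posterior mean        : {d_se:.3f}")
print(f"Stein-loss Bayes rule : {d_stein:.3f}")
print(f"1 / E[1/theta | data] : {1 / np.mean(1 / theta_draws):.3f}")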

4.
We study the information content of South African inflation survey data by determining the directional accuracy of both short-term and long-term forecasts. We use relative operating characteristic (ROC) curves, which have been applied in a variety of fields including weather forecasting and radiology, to ascertain the directional accuracy of the forecasts. A ROC curve summarizes the directional accuracy of forecasts by comparing the rate of true signals (sensitivity) with the rate of false signals (one minus specificity). A ROC curve goes beyond the market-timing tests widely studied in earlier research because this comparison is carried out for many alternative values of a decision criterion that discriminates between signals (of a rising inflation rate) and nonsignals (of an unchanged or falling inflation rate). We find consistent evidence that forecasts contain information about the subsequent direction of change of the inflation rate.
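The ROC construction can be sketched on simulated data (the South African survey data are not reproduced here): sweep a decision criterion over the forecasted change, record sensitivity and one minus specificity, and summarize directional information by the area under the curve.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: actual change in the inflation rate and the change implied
# by the survey forecast (purely simulated for illustration).
actual_change = rng.normal(0.0, 0.6, size=300)
forecast_change = 0.5 * actual_change + rng.normal(0.0, 0.5, size=300)

rising = actual_change > 0                      # event: inflation rose

# Sweep a decision criterion c: signal "rising" whenever forecast_change > c.
thresholds = np.sort(forecast_change)
tpr = [(forecast_change[rising] > c).mean() for c in thresholds]    # sensitivity
fpr = [(forecast_change[~rising] > c).mean() for c in thresholds]   # 1 - specificity

# Area under the ROC curve; 0.5 corresponds to no directional information.
order = np.argsort(fpr)
auc = np.trapz(np.array(tpr)[order], np.array(fpr)[order])
print(f"ROC area = {auc:.3f}")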

5.
Using published interest rate forecasts issued by professional economists, two combination forecasts designed to improve the directional accuracy of interest rate forecasting are constructed. The first combination forecast takes a weighted average of the individual forecasters' predictions: the more successful a forecaster was at predicting the direction of change in interest rates in past forecasts, the greater the weight given to his or her current forecast. The second combination forecast is simply the forecast issued by the forecaster with the greatest success rate at predicting the direction of change in interest rates in previous forecasts. Where two or more forecasters tie for the best historical directional accuracy track record, the arithmetic mean of their forecasts is used. The study finds that neither combination forecasting method performs better than coin-flipping at predicting the direction of change in interest rates, and neither beats the simple arithmetic mean of the predictions of all the forecasters surveyed at that task.
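A toy sketch of how the two combination forecasts are constructed, on hypothetical forecasts only; it shows the mechanics and does not reproduce the paper's finding that neither combination beats coin-flipping.

import numpy as np

rng = np.random.default_rng(7)
T, K = 60, 5                                     # periods, forecasters

# Hypothetical interest-rate changes and individual point forecasts of them.
actual = rng.normal(0, 0.25, size=T)
forecasts = actual[:, None] * rng.uniform(0, 1, size=K) + rng.normal(0, 0.3, (T, K))

hits = np.zeros(K)                               # past directional hits per forecaster
combo1 = np.full(T, np.nan)                      # hit-rate-weighted average
combo2 = np.full(T, np.nan)                      # best-past-forecaster (ties averaged)

for t in range(1, T):
    weights = hits + 1e-9                        # weight by past directional success
    combo1[t] = forecasts[t] @ (weights / weights.sum())
    best = np.flatnonzero(hits == hits.max())
    combo2[t] = forecasts[t, best].mean()        # average over tied best forecasters
    hits += np.sign(forecasts[t]) == np.sign(actual[t])

for name, c in [("weighted", combo1), ("best past", combo2)]:
    ok = ~np.isnan(c)
    rate = (np.sign(c[ok]) == np.sign(actual[ok])).mean()
    print(f"{name:9s} directional hit rate: {rate:.2f}")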

6.
Abstract.  We propose a global smoothing method based on polynomial splines for the estimation of functional coefficient regression models for non-linear time series. Consistency and rate of convergence results are given to support the proposed estimation method. Methods for automatic selection of the threshold variable and significant variables (or lags) are discussed. The estimated model is used to produce multi-step-ahead forecasts, including interval forecasts and density forecasts. The methodology is illustrated by simulations and two real data examples.
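A simplified sketch of spline-based functional coefficient estimation, using a truncated power cubic spline basis as a stand-in for the polynomial splines of the paper and a single covariate; threshold-variable selection and the forecasting step are omitted.

import numpy as np

rng = np.random.default_rng(3)
n = 400

# Simulated data: y = a1(u) * x + noise, where u is the smoothing variable.
u = rng.uniform(0, 1, n)
x = rng.normal(size=n)
a1 = np.sin(2 * np.pi * u)                      # true functional coefficient
y = a1 * x + 0.3 * rng.normal(size=n)

# Cubic spline basis in u (truncated power basis with interior knots).
knots = np.quantile(u, np.linspace(0.1, 0.9, 5))
basis = np.column_stack([np.ones(n), u, u**2, u**3] +
                        [np.clip(u - k, 0, None) ** 3 for k in knots])

# Functional coefficient regression: y on (basis * x), fitted by least squares.
design = basis * x[:, None]
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

# Recover the estimated coefficient function on a grid and check the fit.
grid = np.linspace(0, 1, 101)
gbasis = np.column_stack([np.ones_like(grid), grid, grid**2, grid**3] +
                         [np.clip(grid - k, 0, None) ** 3 for k in knots])
a1_hat = gbasis @ coef
print("max abs error of estimated a1(u):",
      np.abs(a1_hat - np.sin(2 * np.pi * grid)).max())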

7.
ABSTRACT

In this paper, we derive the Bayes estimators of functions of parameters of the size-biased generalized power series distribution (GPSD) under the squared error and weighted squared error loss functions. The results for the size-biased GPSD are then used to obtain particular cases of the size-biased negative binomial, size-biased logarithmic series, and size-biased Poisson distributions. These estimators are better than the classical minimum variance unbiased estimators in the sense that they broaden the range of estimation. Finally, an example is provided to illustrate the results, and a goodness-of-fit test is carried out using the maximum likelihood and Bayes estimators.

8.
Abstract

In this paper, we assume that the lifetimes follow a two-parameter Pareto distribution and discuss some results for progressively Type-II censored samples. We obtain maximum likelihood estimators and Bayes estimators of the unknown parameters under squared error and precautionary loss functions for progressively Type-II censored samples. Robust Bayes estimators of the unknown parameters over three different classes of priors are also obtained under progressive Type-II censoring for the squared error and precautionary loss functions. We further discuss estimation of the unknown parameters under competing-risks progressive Type-II censoring. Finally, we consider the problem of estimating the common scale parameter of two Pareto distributions when the samples are progressively Type-II censored.
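A sketch of the maximum likelihood part only, under an assumed removal scheme and assumed parameter values; the Bayes, robust Bayes, and competing-risks analyses in the abstract require prior specifications not given here.

import numpy as np

rng = np.random.default_rng(11)

def pareto_sample(n, alpha, sigma):
    """Two-parameter Pareto(alpha, sigma) lifetimes via the inverse c.d.f."""
    return sigma * rng.uniform(size=n) ** (-1.0 / alpha)

def progressive_type2(lifetimes, removals):
    """Simulate progressive Type-II censoring: after the i-th observed failure,
    removals[i] of the surviving units are withdrawn at random from the test."""
    remaining = sorted(lifetimes)
    observed = []
    for r in removals:
        observed.append(remaining.pop(0))           # next observed failure
        if r:                                       # withdraw r survivors at random
            drop = set(rng.choice(len(remaining), size=r, replace=False))
            remaining = [v for i, v in enumerate(remaining) if i not in drop]
    return np.array(observed)

# Assumed design: n = 50 units, m = 20 observed failures, three units withdrawn
# at each of the first ten failures.
alpha, sigma, n = 2.5, 1.0, 50
removals = [3] * 10 + [0] * 10
obs = progressive_type2(pareto_sample(n, alpha, sigma), removals)

# Maximum likelihood estimates: sigma_hat is the smallest observed failure time;
# given sigma_hat, alpha_hat has the closed form below.
R = np.array(removals)
sigma_hat = obs[0]
alpha_hat = len(obs) / np.sum((1 + R) * np.log(obs / sigma_hat))
print(f"alpha_hat = {alpha_hat:.3f} (true {alpha}),  sigma_hat = {sigma_hat:.3f} (true {sigma})")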

9.
ABSTRACT

Background: Many exposures in epidemiological studies have nonlinear effects, and the problem is to choose an appropriate functional relationship between such exposures and the outcome. One common approach is to investigate several parametric transformations of the covariate of interest and to select a posteriori the function that fits the data best. However, such an approach may result in an inflated Type I error. Methods: Through a simulation study, we generated data from Cox models with different transformations of a single continuous covariate. We investigated the Type I error rate and the power of the likelihood ratio test (LRT) corresponding to three different procedures that considered the same set of parametric dose-response functions. The first, unconditional, approach did not involve any model selection, while the second, conditional, approach was based on a posteriori selection of the parametric function. The proposed third approach was similar to the second, except that it used a corrected critical value for the LRT to ensure a correct Type I error. Results: The Type I error rate of the second approach was twice the nominal size. For simple monotone dose-response relationships, the corrected test had power similar to the unconditional approach, while for non-monotone dose-response relationships it had higher power. A real-life application focusing on the effect of body mass index on the risk of coronary heart disease death illustrated the advantage of the proposed approach. Conclusion: Our results confirm that a posteriori selection of the functional form of the dose-response relationship inflates the Type I error. The corrected procedure, which can be applied in a wide range of situations, may provide a good trade-off between Type I error and power.
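The corrected-critical-value idea can be sketched by simulation. To keep the example self-contained, a Gaussian linear model replaces the Cox model and the candidate transformations are illustrative, but the principle is the same: calibrate the critical value against the null distribution of the maximum LRT statistic over the candidate set.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

# Candidate dose-response transformations of the covariate (an illustrative set).
transforms = [lambda z: z, lambda z: z**2, lambda z: np.log(z), lambda z: np.sqrt(z)]

def max_lrt_stat(y, z):
    """Largest likelihood-ratio statistic over candidate transformations
    in a Gaussian linear model (a simplified stand-in for the Cox model)."""
    n = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)                    # null: no covariate effect
    lrt = []
    for f in transforms:
        X = np.column_stack([np.ones(n), f(z)])
        res = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        lrt.append(n * np.log(rss0 / np.sum(res**2)))     # LRT statistic, 1 df
    return max(lrt)

# Null distribution of the *maximum* statistic, obtained by simulation.
n, n_sim = 200, 2000
null_max = [max_lrt_stat(rng.normal(size=n), rng.uniform(0.5, 3.0, n))
            for _ in range(n_sim)]

naive_cv = stats.chi2(df=1).ppf(0.95)       # critical value if no selection occurred
corrected_cv = np.quantile(null_max, 0.95)  # accounts for picking the best transform
print(f"naive critical value    : {naive_cv:.2f}")
print(f"corrected critical value: {corrected_cv:.2f}")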

10.
ABSTRACT

The paper deals with Bayes estimation of the exponentiated Weibull shape parameters under the LINEX loss function when independent non-informative priors are available for the parameters. Generalized maximum likelihood estimators have also been obtained. The performances of the proposed Bayes estimators, the generalized maximum likelihood estimators, the posterior means (i.e., Bayes estimators under squared error loss), and the maximum likelihood estimators have been studied on the basis of their risks under the LINEX loss function. The comparison is based on a simulation study because the risk functions of these estimators cannot be obtained in closed form.
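A short sketch of the LINEX loss and the corresponding Bayes estimator, using an assumed normal posterior rather than the exponentiated Weibull setting of the paper; the LINEX Bayes rule is -(1/a) log E[exp(-a*theta) | data].

import numpy as np

def linex_loss(d, theta, a=1.0):
    """LINEX loss: asymmetric, penalizing over- and under-estimation differently."""
    delta = a * (d - theta)
    return np.exp(delta) - delta - 1.0

# Illustration with an assumed normal posterior: theta | data ~ N(mu, tau^2).
rng = np.random.default_rng(5)
mu, tau, a = 2.0, 0.5, 1.0
theta_draws = rng.normal(mu, tau, size=500_000)

# Bayes estimator under squared error loss: the posterior mean.
d_mean = theta_draws.mean()

# Bayes estimator under LINEX loss: -(1/a) * log E[exp(-a * theta) | data];
# for a normal posterior this equals mu - a * tau**2 / 2.
d_linex = -np.log(np.mean(np.exp(-a * theta_draws))) / a

for name, d in [("posterior mean", d_mean), ("LINEX Bayes", d_linex)]:
    print(f"{name:15s} d = {d:.3f}, posterior expected LINEX loss = "
          f"{linex_loss(d, theta_draws, a).mean():.4f}")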

11.
In this paper we consider minimisation of U-statistics with the weighted Lasso penalty and investigate the asymptotic properties of the resulting estimators in model selection and estimation. We prove that the use of appropriate weights in the penalty leads to a procedure that behaves like the oracle that knows the true model in advance, i.e. it is model selection consistent and estimates the nonzero parameters at the standard rate. For the unweighted Lasso penalty, we obtain sufficient and necessary conditions for the model selection consistency of the estimators. The results rely strongly on the convexity of the loss function, which is the main assumption of the paper. Our theorems can be applied to the ranking problem as well as to generalised regression models. Thus, using U-statistics we can study more complex models (which better describe real problems) than the usually investigated linear or generalised linear models.

12.
Nakamura (1990) introduced an approach to estimation in measurement error models based on a corrected score function, and claimed that the estimators obtained are consistent for functional models. The proof of the claim essentially assumed the existence of a corrected log-likelihood for which differentiation with respect to the model parameters can be interchanged with conditional expectation taken with respect to the measurement error distributions, given the response variables and true covariates. This paper deals with simple yet practical models for which this assumption is false, i.e. a corrected score function for the model exists but cannot be obtained by differentiating a corrected log-likelihood. Alternative regularity conditions with no reference to a log-likelihood are given, under which the corrected score functions yield consistent and asymptotically normal estimators. Application to functional comparative calibration yields interesting results.
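The flavour of the corrected-score idea can be seen in the textbook linear measurement-error model (a simpler case than those studied in the paper), where the correction subtracts the known measurement-error contribution from the naive estimating equation.

import numpy as np

rng = np.random.default_rng(8)
n, beta_true, sigma_u = 5000, 1.5, 0.8

# True covariate x is unobserved; we observe w = x + u with known error variance.
x = rng.normal(0, 1, n)
w = x + rng.normal(0, sigma_u, n)
y = beta_true * x + rng.normal(0, 0.5, n)

# Naive score (ignores measurement error): attenuated estimate.
beta_naive = (w @ y) / (w @ w)

# Corrected score for the linear model: subtract the measurement-error
# contribution n * sigma_u^2 from w'w so the estimating equation is unbiased.
beta_corrected = (w @ y) / (w @ w - n * sigma_u**2)

print(f"naive     : {beta_naive:.3f}")
print(f"corrected : {beta_corrected:.3f}   (true {beta_true})")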

13.
Abstract

In this article, a new composite quantile regression (CQR) estimation approach is proposed for partially linear varying coefficient models (PLVCM) under the composite quantile loss function with B-spline approximations. The major advantage of the proposed procedure over existing ones is that it is easy to implement using existing software and requires no specification of the error distribution. Under regularity conditions, the consistency and asymptotic normality of the estimators are also derived. Finally, a simulation study and a real data application are undertaken to assess the finite sample performance of the proposed estimation procedure.
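A sketch of the composite quantile loss for a simple linear location-shift model (the B-spline approximation of the PLVCM is omitted): a common slope is estimated jointly with one intercept per quantile level.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n = 500
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)     # heavy-tailed errors

taus = np.array([0.1, 0.3, 0.5, 0.7, 0.9])           # composite quantile levels

def check(u, tau):
    """Quantile (pinball) loss rho_tau(u)."""
    return u * (tau - (u < 0))

def cqr_objective(params):
    """Composite quantile loss: common slope, one intercept per quantile level."""
    intercepts, slope = params[:-1], params[-1]
    resid = y[None, :] - intercepts[:, None] - slope * x[None, :]
    return sum(check(resid[k], t).sum() for k, t in enumerate(taus))

start = np.concatenate([np.quantile(y, taus), [0.0]])
fit = minimize(cqr_objective, start, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print(f"estimated common slope: {fit.x[-1]:.3f}  (true 2.0)")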

14.
Abstract

In this work, we propose a beta prime kernel estimator for estimating probability density functions with nonnegative support; the beta prime probability density function is used as the kernel. The estimator is free of boundary bias and nonnegative, with a naturally varying shape. We obtain the optimal rate of convergence for the mean squared error (MSE) and the mean integrated squared error (MISE). We also use an adaptive Bayesian bandwidth selection method with Lindley approximation for heavy-tailed distributions and compare its performance with the global least squares cross-validation bandwidth selection method. Simulation studies are performed to evaluate the average integrated squared error (ISE) of the proposed kernel estimator against some asymmetric competitors using Monte Carlo simulations. Moreover, real data sets are presented to illustrate the findings.
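A sketch of a beta prime kernel density estimate; the shape parameters used below are one plausible choice that centres the kernel near the evaluation point and are an assumption, not necessarily the paper's parameterization, and the bandwidth is fixed rather than selected by the Bayesian adaptive method.

import numpy as np
from scipy.stats import betaprime, gamma

rng = np.random.default_rng(10)
data = gamma(a=2, scale=1.5).rvs(size=800, random_state=rng)   # nonnegative data

def betaprime_kde(x, sample, b):
    """Beta prime kernel density estimate at x >= 0. The shape parameters
    (x / b + 1, 1 / b) place the kernel's mode near x for small bandwidth b;
    this choice is an assumption, not necessarily the paper's."""
    return betaprime(x / b + 1.0, 1.0 / b).pdf(sample).mean()

grid = np.linspace(0.01, 12.0, 200)
fhat = np.array([betaprime_kde(x, data, b=0.1) for x in grid])
ftrue = gamma(a=2, scale=1.5).pdf(grid)
print(f"approximate integrated squared error: {np.trapz((fhat - ftrue) ** 2, grid):.5f}")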

15.
Surveys of forecasters, containing respondents’ predictions of future values of key macroeconomic variables, receive a lot of attention in the financial press, from investors and from policy makers. They are apparently widely perceived to provide useful information about agents’ expectations. Nonetheless, these survey forecasts suffer from the crucial disadvantage that they are often quite stale, as they are released only infrequently. In this article, we propose MIDAS regression and Kalman filter methods for using asset price data to construct daily forecasts of upcoming survey releases. Our methods also allow us to predict actual outcomes, providing competing forecasts, and allow us to estimate what professional forecasters would predict if they were asked to make a forecast each day, making it possible to measure the effects of events and news announcements on expectations.
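A stylized MIDAS regression sketch with exponential Almon weights, fitted by nonlinear least squares on simulated data; the daily series, weighting scheme, and survey construction are assumptions for illustration, not the authors' specification.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(12)
T, m = 120, 22                    # monthly survey releases, ~22 trading days each

def almon_weights(theta1, theta2, m):
    """Exponential Almon lag polynomial, normalized to sum to one."""
    j = np.arange(m)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

# Hypothetical daily asset-price data and a monthly survey expectation that
# loads on a decaying weighted sum of the daily observations.
daily = rng.normal(0.0, 1.0, size=(T, m))
survey = 1.0 + 2.0 * daily @ almon_weights(-0.3, 0.0, m) + rng.normal(0, 0.2, T)

def residuals(params):
    b0, b1, t1, t2 = params
    return survey - (b0 + b1 * daily @ almon_weights(t1, t2, m))

fit = least_squares(residuals, x0=[0.0, 1.0, 0.0, 0.0])
b0, b1, t1, t2 = fit.x
print(f"intercept {b0:.2f}, slope {b1:.2f}, "
      f"first weights {almon_weights(t1, t2, m)[:3].round(3)}")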

16.
ABSTRACT

Autoregressive moving average (ARMA) time series model fitting is a procedure often based on aggregate data, in which parameter estimation plays a key role. We therefore analyze the effect of temporal aggregation on the accuracy of parameter estimation for mixed ARMA and MA models. We derive the expressions required to compute the parameter values of the aggregate models as functions of the basic model parameters in order to compare their estimation accuracy. A simulation experiment then shows that aggregation causes a severe loss of estimation accuracy that increases with the order of aggregation.
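A sketch of the simulation experiment for an ARMA(1,1) flow variable aggregated by non-overlapping sums (statsmodels is assumed to be available); the aggregate AR coefficient is compared with phi**m, its value implied by the basic parameters.

import warnings
import numpy as np
import pandas as pd
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(21)
phi, theta, n, m = 0.7, 0.3, 900, 3        # basic ARMA(1,1); aggregation order m

def ar_coef(y):
    """AR(1) coefficient from an ARMA(1,1) fit."""
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        res = ARIMA(pd.Series(y), order=(1, 0, 1)).fit()
    return float(res.params["ar.L1"])

basic_est, agg_est = [], []
for _ in range(60):
    y = arma_generate_sample(ar=[1, -phi], ma=[1, theta], nsample=n,
                             distrvs=rng.standard_normal)
    basic_est.append(ar_coef(y))
    y_agg = y.reshape(-1, m).sum(axis=1)    # non-overlapping sums (flow variable)
    agg_est.append(ar_coef(y_agg))          # aggregate AR coefficient is phi**m

print(f"basic data : mean {np.mean(basic_est):.3f}, sd {np.std(basic_est):.3f} (true {phi})")
print(f"aggregated : mean {np.mean(agg_est):.3f}, sd {np.std(agg_est):.3f} (true {phi**m:.3f})")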

17.
Abstract

This article discusses optimal confidence estimation for the geometric parameter and shows how different criteria can be used for evaluating confidence sets within the framework of tail functions theory. The confidence interval obtained using a particular tail function is studied and shown to outperform others, in the sense of having smaller width or expected width under a specified weight function. It is also shown that it may not be possible to find the most powerful test regarding the parameter using the Neyman-Pearson lemma. The theory is illustrated by application to a fecundability study.

18.
This article examines the prediction contest as a vehicle for aggregating the opinions of a crowd of experts. After proposing a general definition distinguishing prediction contests from other mechanisms for harnessing the wisdom of crowds, we focus on point-forecasting contests, in which forecasters submit point forecasts and a prize goes to the entry closest to the quantity of interest. We first illustrate the incentive for forecasters to submit reports that exaggerate in the direction of their private information. Although this exaggeration raises a forecaster's mean squared error, it increases his or her chances of winning the contest. In contrast to conventional wisdom, this nontruthful reporting usually improves the accuracy of the resulting crowd forecast. The source of this improvement is that exaggeration shifts weight away from public information (information known to all forecasters) and in doing so helps alleviate public knowledge bias. In the context of a simple theoretical model of overlapping information and forecaster behavior, we present closed-form expressions for the mean squared error of the crowd forecasts that help identify the situations in which point-forecasting contests will be most useful.
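A stylized simulation consistent with the mechanism described above (the overlapping-information model below is an assumption, not the paper's exact model): exaggerating toward the private signal raises each forecaster's mean squared error yet lowers the crowd forecast's.

import numpy as np

rng = np.random.default_rng(13)
n_sim, K = 20_000, 10
var_pub, var_priv = 1.0, 1.0                 # noise variances of the signals

theta = rng.normal(size=n_sim)
public = theta + rng.normal(0, np.sqrt(var_pub), size=n_sim)
private = theta[:, None] + rng.normal(0, np.sqrt(var_priv), size=(n_sim, K))

def forecasts(weight_on_private):
    """Each forecaster linearly combines the shared public signal with
    his or her own private signal."""
    return (1 - weight_on_private) * public[:, None] + weight_on_private * private

w_truthful = 0.5        # MSE-minimizing individual weight when variances are equal
w_exaggerate = 0.8      # exaggeration: extra weight on private information

for label, w in [("truthful", w_truthful), ("exaggerated", w_exaggerate)]:
    f = forecasts(w)
    ind_mse = ((f - theta[:, None]) ** 2).mean()
    crowd_mse = ((f.mean(axis=1) - theta) ** 2).mean()
    print(f"{label:12s} individual MSE {ind_mse:.3f}   crowd MSE {crowd_mse:.3f}")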

19.
Ashley (1983) gave a simple condition for determining when a forecast of an explanatory variable (Xt) is sufficiently inaccurate that direct replacement of Xt by the forecast yields worse forecasts of the dependent variable than does respecification of the equation to omit Xt. Many available macroeconomic forecasts were shown to be of limited usefulness in direct replacement. Direct replacement, however, is not optimal if the forecast's distribution is known. Here, optimal linear forms in commercial forecasts of several macroeconomic variables are obtained by using estimates of their distributions. Although they are an improvement on the raw forecasts (direct replacement), these optimal forms are still too inaccurate to be useful in replacing the actual explanatory variables in forecasting models. The results strongly indicate that optimal forms involving several commercial forecasts will not be very useful either. Thus Ashley's (1983) sufficient condition retains its value in gauging the usefulness of a forecast of an explanatory variable in a forecasting model, even though it focuses on direct replacement.

20.
ABSTRACT

This work treats non-parametric estimation of multivariate probability mass functions using multivariate discrete associated kernels. We propose a Bayesian local approach to selecting the bandwidth matrix, considering the multivariate Dirac discrete uniform kernel and the product of binomial kernels, and treating the bandwidths as a diagonal matrix of parameters with a prior distribution. The performance of this approach and of the cross-validation method is compared using simulations and real count data sets. The results show that the local Bayesian method performs better than cross-validation in terms of integrated squared error.
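A univariate sketch of the binomial building block of the product kernel; the kernel parameterization below is one used in the discrete associated kernel literature and is stated here as an assumption, and the bandwidth is fixed rather than chosen by the Bayesian local method.

import numpy as np
from scipy.stats import binom, poisson

rng = np.random.default_rng(14)
data = poisson(mu=4).rvs(size=500, random_state=rng)   # illustrative count data

def binomial_kernel_pmf(x, sample, h):
    """Discrete associated (binomial) kernel estimate of P(X = x); the
    Binomial(x + 1, (x + h) / (x + 1)) kernel is one parameterization used in
    the discrete-kernel literature, and h in (0, 1] is the bandwidth."""
    return binom.pmf(sample, n=x + 1, p=(x + h) / (x + 1)).mean()

support = np.arange(0, 15)
true_pmf = poisson(mu=4).pmf(support)
for h in (0.1, 0.3):
    fhat = np.array([binomial_kernel_pmf(x, data, h) for x in support])
    ise = np.sum((fhat - true_pmf) ** 2)
    print(f"h = {h}: mass on 0..14 = {fhat.sum():.3f}, ISE vs Poisson(4) = {ise:.5f}")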
