Similar documents
20 similar documents found.
1.
Comment     
We propose a sequential test for predictive ability for recursively assessing whether some economic variables have explanatory content for another variable. In the forecasting literature it is common to assess predictive ability by using “one-shot” tests at each estimation period. We show that this practice leads to size distortions, selects overfitted models, provides spurious evidence of in-sample predictive ability, and may lower the forecast accuracy of the model selected by the test. The usefulness of the proposed test is shown in well-known empirical applications to the real-time predictive content of money for output and to the selection between linear and nonlinear models.
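The recursive (expanding-window) evaluation scheme the abstract contrasts with "one-shot" testing can be made concrete with a toy sketch. This is only an illustration of the recursive forecasting setup, not the authors' sequential test statistic; the function name and the simple linear forecasting model are assumptions:

```python
import numpy as np

def recursive_forecast_errors(y, x, window):
    """Recursively re-estimate y_{t+1} = a + b * x_t on an expanding window
    and record each one-step-ahead forecast error (illustrative sketch)."""
    errors = []
    for t in range(window, len(y) - 1):
        # fit on all data available up to time t
        X = np.column_stack([np.ones(t), x[:t]])
        beta, *_ = np.linalg.lstsq(X, y[1:t + 1], rcond=None)
        forecast = beta[0] + beta[1] * x[t]
        errors.append(y[t + 1] - forecast)
    return np.array(errors)
```

A sequential test would monitor a statistic of these errors across all origins jointly, rather than re-running an unadjusted test at each origin.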

2.
We develop a fee-rate model for bank wealth-management products based on the bank's objectives and, by examining bank behaviour under different objectives, derive an analytical framework for adjusting product fee rates. The study finds that there exists an optimal fee rate that maximizes the combined improvement in the bank's economic and social benefits; that the optimal rate is not a fixed value but a dynamic concept; that a bank considering economic benefits alone will raise its fee rates; that raising fee rates improves the attainment of the bank's composite objective; and that lowering fee rates does not change the bank's composite objective.

3.
The posterior predictive p value (ppp) was invented as a Bayesian counterpart to classical p values. The methodology can be applied to discrepancy measures involving both data and parameters and can, hence, be targeted to check for various modeling assumptions. The interpretation can, however, be difficult since the distribution of the ppp value under modeling assumptions varies substantially between cases. A calibration procedure has been suggested, treating the ppp value as a test statistic in a prior predictive test. In this paper, we suggest that a prior predictive test may instead be based on the expected posterior discrepancy, which is somewhat simpler, both conceptually and computationally. Since both these methods require the simulation of a large posterior parameter sample for each of an equally large prior predictive data sample, we furthermore suggest looking for ways to match the given discrepancy with a computation-saving conflict measure. This approach is also based on simulations but only requires sampling from two different distributions representing two contrasting information sources about a model parameter. The conflict measure methodology is also more flexible in that it handles non-informative priors without difficulty. We compare the different approaches theoretically in some simple models and in a more complex applied example.
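A minimal sketch of how a ppp value is computed by simulation, for a normal mean model with known variance, a flat prior, and a chi-square-type discrepancy. The model and discrepancy are illustrative choices of ours, not the paper's:

```python
import numpy as np

def ppp_value(y, sigma=1.0, draws=2000, seed=0):
    """Posterior predictive p value for a N(mu, sigma^2) model with a flat
    prior on mu; discrepancy D(y, mu) = sum((y - mu)^2) / sigma^2."""
    rng = np.random.default_rng(seed)
    n = len(y)
    # under the flat prior, the posterior of mu is N(mean(y), sigma^2 / n)
    mu = rng.normal(np.mean(y), sigma / np.sqrt(n), size=draws)
    d_obs = np.sum((y[None, :] - mu[:, None]) ** 2, axis=1) / sigma**2
    # replicate data from the posterior predictive, one set per mu draw
    yrep = rng.normal(mu[:, None], sigma, size=(draws, n))
    d_rep = np.sum((yrep - mu[:, None]) ** 2, axis=1) / sigma**2
    return np.mean(d_rep >= d_obs)
```

Overdispersed data drive the ppp toward 0 and underdispersed data toward 1, which illustrates the interpretation problem the abstract raises: the null distribution of this quantity is not uniform.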

4.
We present a new semi-parametric model for the prediction of implied volatility surfaces that can be estimated using machine learning algorithms. Given a reasonable starting model, a boosting algorithm based on regression trees sequentially minimizes generalized residuals computed as differences between observed and estimated implied volatilities. To overcome the poor predictive power of existing models, we include a grid in the region of interest, and implement a cross-validation strategy to find an optimal stopping value for the boosting procedure. Back-testing the out-of-sample performance on a large data set of implied volatilities from S&P 500 options, we provide empirical evidence of the strong predictive power of our model.
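The core boosting idea, fitting each new tree to the current residuals, can be illustrated with depth-1 regression trees (stumps) on one-dimensional toy data. The paper's model boosts over an implied-volatility surface grid with cross-validated stopping, so this is only a schematic sketch:

```python
import numpy as np

def boost_stumps(x, y, rounds=50, lr=0.1):
    """Gradient boosting with regression stumps on 1-D data: each round fits
    a stump to the current residuals and adds a shrunken version of its fit."""
    pred = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(rounds):
        resid = y - pred
        best = None
        for s in np.unique(x):  # candidate split points
            left, right = resid[x <= s], resid[x > s]
            if len(left) == 0 or len(right) == 0:
                continue
            sse = left.var() * len(left) + right.var() * len(right)
            if best is None or sse < best[0]:
                best = (sse, s, left.mean(), right.mean())
        _, s, lmean, rmean = best
        pred += lr * np.where(x <= s, lmean, rmean)
        stumps.append((s, lmean, rmean))
    return pred, stumps
```

In practice the number of rounds would be chosen by cross-validation, as the abstract describes, to avoid overfitting the residuals.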

5.
Positive and negative predictive values describe the performance of a diagnostic test. There are several methods to test the equality of predictive values in paired designs. However, these methods are premised on large-sample theory and may not be suitable for small-size clinical trials because of inflation of the type 1 error rate. In this study, we propose an exact test that strictly controls the type 1 error rate when conducting a small-size clinical trial that investigates the equality of predictive values in paired designs. In addition, we carry out simulation studies to evaluate the performance of the proposed exact test and existing methods in small-size clinical trials. The proposed test calculates an exact P value; in our simulations, its empirical type 1 error rate never exceeded the significance level in any setting, and its empirical power was comparable to that of the existing methods based on large-sample theory. The proposed exact test is therefore useful when the type 1 error rate must be controlled strictly.

6.
In the context of the Cardiovascular Health Study, a comprehensive investigation into the risk factors for strokes, we apply Bayesian model averaging to the selection of variables in Cox proportional hazard models. We use an extension of the leaps-and-bounds algorithm for locating the models that are to be averaged over and make available S-PLUS software to implement the methods. Bayesian model averaging provides a posterior probability that each variable belongs in the model, a more directly interpretable measure of variable importance than a P-value. P-values from models preferred by stepwise methods tend to overstate the evidence for the predictive value of a variable and do not account for model uncertainty. We introduce the partial predictive score to evaluate predictive performance. For the Cardiovascular Health Study, Bayesian model averaging predictively outperforms standard model selection and does a better job of assessing who is at high risk for a stroke.

7.
This article considers testing the significance of a regressor with a near unit root in a predictive regression model. The procedures discussed are nonparametric, so one can test the significance of a regressor without specifying a functional form; the results are used to test the null hypothesis that the entire function takes the value of zero. We show that the standardized test has a normal distribution regardless of whether there is a near unit root in the regressor. This is in contrast to tests based on linear regression for this model, where tests have a nonstandard limiting distribution that depends on nuisance parameters. Our results have practical implications for testing the significance of a regressor, since there is no need to conduct pretests for a unit root in the regressor and the same procedure can be used whether or not the regressor has a unit root. A Monte Carlo experiment explores the performance of the test for various levels of persistence of the regressors and for various linear and nonlinear alternatives; the test has superior performance against certain nonlinear alternatives. An application to stock returns shows how the test can improve inference about predictability.

8.
In this paper, we suggest a Bayesian panel (longitudinal) data approach to test for the economic growth convergence hypothesis. This approach can control for possible effects of initial income conditions, observed covariates and cross-sectional correlation of unobserved common error terms on inference procedures about the unit root hypothesis based on panel data dynamic models. Ignoring these effects can lead to spurious evidence supporting economic growth divergence. The application of our suggested approach to real gross domestic product panel data of the G7 countries indicates that the economic growth convergence hypothesis is supported by the data. Our empirical analysis shows that evidence of economic growth divergence for the G7 countries can be attributed to not accounting for the presence of exogenous covariates in the model.

9.
The existing dynamic models for realized covariance matrices do not account for an asymmetry with respect to price directions. We modify the recently proposed conditional autoregressive Wishart (CAW) model to allow for the leverage effect. In the conditional threshold autoregressive Wishart (CTAW) model and its variations the parameters governing each asset's volatility and covolatility dynamics are subject to switches that depend on signs of previous asset returns or previous market returns. We evaluate the predictive ability of the CTAW model and its restricted and extended specifications from both statistical and economic points of view. We find strong evidence that many CTAW specifications have a better in-sample fit and tend to have a better out-of-sample predictive ability than the original CAW model and its modifications.

10.
The value of statistical life in road traffic is a key indicator in the economic evaluation of road-safety projects. Using stated-choice methods and orthogonal experimental design, we constructed a route-choice survey questionnaire. Assuming that travel time and fatality risk are constants and that cost follows a lognormal distribution, we built a value-of-statistical-life evaluation model based on a Mixed Logit model. Survey data were collected from private-car travellers in Dalian, and the model was run through 150 simulation experiments with the Monte Carlo method using GAUSS software. The results show that the parameter estimates are strongly concentrated, the t-statistics are highly significant, and the model's goodness-of-fit ratio is high. The value of statistical life follows a lognormal distribution with parameters 0.922 and 0.814 and a mathematical expectation of 350,000 yuan, which can serve as reference data for the economic evaluation of road-safety projects.

11.
12.
13.
Bayesian model building techniques are developed for data with a strong time series structure and possibly exogenous explanatory variables that have strong explanatory and predictive power. The emphasis is on determining whether any explanatory variables should be included in the model when the data also have a strong time series structure. We use a time series model that is linear in past observations and that can capture stochastic and deterministic trend, seasonality and serial correlation. We propose plotting absolute predictive error against predictive standard deviation. A series of such plots is used to determine which of several nested and non-nested models is optimal in terms of minimizing the dispersion of the predictive distribution and restricting predictive outliers. We apply the techniques to modelling monthly counts of fatal road crashes in Australia, where economic, consumption and weather variables are available, and find that three such variables should be included in addition to the time series filter. The approach leads to graphical techniques for determining the strength of relationships between the dependent variable and covariates, detecting model inadequacy, and deriving useful numerical summaries.

14.
The yield spread, measured as the difference between long- and short-term interest rates, is widely regarded as one of the strongest predictors of economic recessions. In this paper, we propose an enhanced recession prediction model that incorporates trends in the value of the yield spread. We expect our model to generate stronger recession signals because a steadily declining yield spread typically indicates growing pessimism associated with reduced future business activity. We capture trends in the yield spread by considering both the level of the yield spread at a lag of 12 months and its values at each of the two quarters leading up to the forecast origin, and we evaluate its predictive ability using both logit and artificial neural network models. Our results indicate that models incorporating information from the time series of the yield spread predict future recession periods much better than models that only consider the spread value as of the forecast origin. Furthermore, the results are strongest for our artificial neural network model and for a logistic regression model that includes interaction terms, which we confirm using both a blocked cross-validation technique and an expanding estimation window approach.
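The lagged-feature construction and a logit fit of the kind described can be sketched as follows. The specific lags (12, 6 and 3 months), the interface, and the plain gradient-descent fitting routine are illustrative assumptions of ours, not the paper's exact specification:

```python
import numpy as np

def lagged_features(spread, lags=(12, 6, 3)):
    """Stack the yield spread at several monthly lags into a design matrix:
    the 12-month lag plus two quarterly values nearer the forecast origin."""
    m = max(lags)
    return np.column_stack([spread[m - l: len(spread) - l] for l in lags])

def fit_logit(X, y, steps=2000, lr=0.1):
    """Plain gradient-descent logistic regression with an intercept."""
    Z = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Z.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        w -= lr * Z.T @ (p - y) / len(y)
    return w
```

Each row of the design matrix then carries a short history of the spread, so the fitted model can respond to a declining trend and not just the current level.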

15.
In this paper a test for outliers based on externally studentized residuals is shown to be related to a test for predictive failure. The relationships between a test for outliers, a test for a correlated mean shift and a test for an intercept shift are developed. A sequential testing procedure for outliers and structural change is shown to be independent, so that the overall size of the joint test can be determined exactly. It is established that a joint test for outliers and constancy of variances cannot be performed.
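The externally studentized residuals underlying such outlier tests can be computed from a single OLS fit via the standard leave-one-out variance identity; a minimal sketch (the function name and toy data in the usage are ours, not the paper's):

```python
import numpy as np

def externally_studentized(y, X):
    """Externally studentized residuals: each residual is scaled by an error
    variance estimated with that observation deleted."""
    n, k = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T  # hat matrix
    e = y - H @ y                          # OLS residuals
    h = np.diag(H)
    s2 = e @ e / (n - k)
    # deleted variance via the leave-one-out identity
    s2_i = ((n - k) * s2 - e**2 / (1 - h)) / (n - k - 1)
    return e / np.sqrt(s2_i * (1 - h))
```

Because the i-th observation does not contribute to its own variance estimate, a single gross outlier produces a very large studentized residual instead of inflating the scale it is compared against.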

16.
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually weighted either by an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or by a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
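Information-criterion model-averaging weights of the kind contrasted here can be sketched as follows; the inputs are hypothetical model summaries (log-likelihoods and parameter counts), not the aneurysm-repair models of the application:

```python
import numpy as np

def averaging_weights(log_liks, n_params, n_obs, criterion="bic"):
    """Model-averaging weights from an information criterion: BIC-type
    weights approximate posterior model probabilities, AIC-type weights
    target predictive ability."""
    log_liks = np.asarray(log_liks, dtype=float)
    k = np.asarray(n_params, dtype=float)
    if criterion == "bic":
        ic = -2 * log_liks + k * np.log(n_obs)
    else:  # "aic"
        ic = -2 * log_liks + 2 * k
    w = np.exp(-0.5 * (ic - ic.min()))
    return w / w.sum()
```

Because BIC penalizes parameters more heavily than AIC for moderate sample sizes, BIC weights concentrate harder on parsimonious models, which is the practical difference between the two averaging approaches the abstract compares.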

17.
The behaviour of market agents has been extensively covered in the literature. Risk-averse behaviour, described by Von Neumann and Morgenstern (Theory of games and economic behavior. Princeton University Press, Princeton, 1944) via a concave utility function, is considered a cornerstone of classical economics: agents prefer a fixed profit over an uncertain choice with the same expected value. Lately, however, there has been much discussion about the empirical evidence for such risk-averse behaviour, and some authors have shown that there are regions where market utility functions are locally convex. In this paper we construct a test of the concavity of agents' utility functions by testing the monotonicity of empirical pricing kernels (EPKs). A monotonically decreasing EPK corresponds to a concave utility function, while an EPK that is not monotonically decreasing indicates non-risk-averse behaviour on one or more intervals of the utility function. We investigate the EPKs for German DAX data for the years 2000, 2002 and 2004 and find evidence of non-concave utility functions: the null hypothesis of a monotonically decreasing pricing kernel is rejected for the data under consideration. The test is based on approximations of spacings through exponential random variables. In a simulation we investigate its performance and calculate the critical values (surface).

18.
This paper examines the use of a residual bootstrap for bias correction in machine learning regression methods. Accounting for bias is an important obstacle in recent efforts to develop statistical inference for machine learning. We demonstrate empirically that the proposed bootstrap bias correction can lead to substantial improvements in both bias and predictive accuracy. In the context of ensembles of trees, we show that this correction can be approximated at only double the cost of training the original ensemble. Our method is shown to improve test set accuracy over random forests by up to 70% on example problems from the UCI repository.
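One way to realize a residual-bootstrap bias correction, sketched under assumptions: `fit_predict` is a hypothetical stand-in interface for any learner, residuals are recentred before resampling, and this is not necessarily the authors' exact algorithm:

```python
import numpy as np

def bootstrap_bias_correct(fit_predict, X, y, B=20, seed=0):
    """Residual-bootstrap bias correction: refit the learner on
    residual-resampled responses, estimate the average shift of its
    predictions, and subtract that shift from the original fit.
    fit_predict(X, y, Xnew) -> predictions at Xnew (hypothetical interface)."""
    rng = np.random.default_rng(seed)
    base = fit_predict(X, y, X)
    resid = y - base
    resid = resid - resid.mean()  # recentre: a biased fit's residuals need not average to zero
    shift = np.zeros_like(base)
    for _ in range(B):
        y_star = base + rng.choice(resid, size=len(y), replace=True)
        shift += fit_predict(X, y_star, X) - base
    return base - shift / B  # bias-corrected fitted values
```

The usage below feeds it a deliberately biased learner (a mean predictor shifted down by 1) to show the correction recovering the target level.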

19.
The rapid increase in the number of AIDS cases during the 1980s and the spread of the disease from the high-risk groups into the general population created widespread concern. In particular, assessing the accuracy of the screening tests used to detect antibodies to the HIV (AIDS) virus in donated blood and determining the prevalence of the disease in the population are fundamental statistical problems. Because the prevalence of AIDS varies widely by geographic region and data on the number of infected blood donors are published regularly, Bayesian methods, which utilize prior results and update them as new data become available, are quite useful. In this paper we develop a Bayesian procedure for estimating the prevalence of a rare disease, the sensitivity and specificity of the screening tests, and the predictive value of a positive or negative screening test. We apply the procedure to data on blood donors in the United States and in Canada. Our results augment those described in Gastwirth (1987) using classical methods. Indeed, we show that even the inclusion of sound prior knowledge into the statistical analysis does not yield sufficiently precise estimates of the predictive value of a positive test; hence confirmatory testing is needed to obtain reliable estimates. The emphasis of the Bayesian predictive paradigm on prediction intervals for future data yields a valuable insight: we demonstrate that using them might have detected a decline in the specificity of the most frequently used screening test earlier than it apparently was detected.
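The predictive values in question follow directly from Bayes' theorem. A small sketch showing why, for a rare disease, a positive screen alone is not conclusive (the numbers in the usage are illustrative, not the paper's estimates):

```python
def predictive_values(prevalence, sensitivity, specificity):
    """Positive and negative predictive values via Bayes' theorem."""
    # marginal probability of a positive screen
    p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_pos
    npv = specificity * (1 - prevalence) / (1 - p_pos)
    return ppv, npv
```

With prevalence 0.1% and a 99%-sensitive, 99%-specific screen, the PPV is only about 9%: most positives are false, which is the statistical reason the abstract emphasizes confirmatory testing.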

20.
A general inductive Bayesian classification framework is considered using a simultaneous predictive distribution for test items. We introduce a principle of generative supervised and semi-supervised classification based on marginalizing the joint posterior distribution of labels for all test items. The simultaneous and marginalized classifiers arise under different loss functions, while both acknowledge jointly all uncertainty about the labels of test items and the generating probability measures of the classes. We illustrate for data from multiple finite alphabets that such classifiers achieve higher correct classification rates than a standard marginal predictive classifier which labels all test items independently, when training data are sparse. In the supervised case for multiple finite alphabets the simultaneous and the marginal classifiers are proven to become equal under generalized exchangeability when the amount of training data increases. Hence, the marginal classifier can be interpreted as an asymptotic approximation to the simultaneous classifier for finite sets of training data. It is also shown that such convergence is not guaranteed in the semi-supervised setting, where the marginal classifier does not provide a consistent approximation.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号