Similar Literature

20 similar articles found.
1.
This paper presents an extension of mean-squared forecast error (MSFE) model averaging for integrating linear regression models computed on data frames of various lengths. The proposed method is a preferable alternative to selecting a single best model by criteria such as the Bayesian information criterion (BIC), the Akaike information criterion (AIC), F-statistics and mean-squared error (MSE), as well as to Bayesian model averaging (BMA) and the naïve simple forecast average. The method is developed to handle possibly non-nested models estimated on different numbers of observations, and it selects forecast weights by minimizing an unbiased estimator of the MSFE. The proposed method also yields forecast confidence intervals at a given significance level, which is not possible with other model averaging methods. In addition, out-of-sample simulation and empirical testing demonstrate the efficiency of this kind of averaging when forecasting economic processes.
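As a rough illustration of the weighting idea, the following sketch chooses forecast-combination weights by minimizing an estimated MSFE computed from held-out forecast errors. It is a minimal sketch under assumed inputs: the toy error matrix is made up, and the paper's unbiased MSFE estimator for models fitted on samples of different lengths, as well as its confidence intervals, are not reproduced.

```python
# Minimal sketch: choose forecast-combination weights by minimizing an
# estimated MSFE on held-out forecast errors. This illustrates the general
# idea only; the paper's unbiased MSFE estimator for models fitted on
# samples of different lengths is not implemented here.
import numpy as np
from scipy.optimize import minimize

def msfe_weights(errors):
    """errors: (T, M) array of out-of-sample forecast errors from M models."""
    S = errors.T @ errors / errors.shape[0]        # estimated MSFE matrix
    M = S.shape[0]

    def objective(w):
        return w @ S @ w                           # MSFE of the combined forecast

    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(objective, np.full(M, 1.0 / M),
                   bounds=[(0.0, 1.0)] * M, constraints=cons)
    return res.x

rng = np.random.default_rng(0)
e = rng.normal(size=(200, 3)) * np.array([1.0, 1.5, 2.0])  # toy forecast errors
print(msfe_weights(e))   # most weight goes to the lowest-variance forecast
```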

2.
We obtain the residual information criterion (RIC), a selection criterion based on the residual log-likelihood, for regression models including classical regression models, Box–Cox transformation models, weighted regression models and regression models with autoregressive moving average errors. We show that RIC is a consistent criterion, and simulation studies for each of the four models indicate that RIC provides better model order choices than the Akaike information criterion, the corrected Akaike information criterion, the final prediction error, C_p and adjusted R², except when the sample size is small and the signal-to-noise ratio is weak; in that case, none of the criteria performs well. Monte Carlo results also show that RIC is superior to the consistent Bayesian information criterion (BIC) when the signal-to-noise ratio is not weak, and comparable with BIC when the signal-to-noise ratio is weak and the sample size is large.
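For orientation, the sketch below computes the standard competing criteria (AIC, BIC, adjusted R²) for a sequence of nested regression models on simulated data; the RIC itself, which is based on the residual (restricted) log-likelihood, is not implemented here, and the data-generating setup is an assumption for illustration only.

```python
# Minimal sketch: comparing standard selection criteria (AIC, BIC, adjusted R²)
# across nested candidate regression models. The residual information
# criterion (RIC) is not implemented here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
X = rng.normal(size=(n, 4))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)   # only x0, x1 matter

for k in range(1, 5):                       # candidate models use the first k regressors
    Xk = sm.add_constant(X[:, :k])
    fit = sm.OLS(y, Xk).fit()
    print(f"k={k}: AIC={fit.aic:.1f}  BIC={fit.bic:.1f}  adjR2={fit.rsquared_adj:.3f}")
```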

3.
We consider multiple regression (MR) model averaging using the focused information criterion (FIC). Our approach is motivated by the problem of implementing a mean-variance portfolio choice rule. The usual approach is to estimate parameters ignoring the intention to use them in portfolio choice. We develop an estimation method that focuses on the trading rule of interest. Asymptotic distributions of submodel estimators in the MR case are derived using a localization framework. The localization is of both regression coefficients and error covariances. Distributions of submodel estimators are used for model selection with the FIC. This allows comparison of submodels using the risk of portfolio rule estimators. FIC model averaging estimators are then characterized. This extension further improves risk properties. We show in simulations that applying these methods in the portfolio choice case results in improved estimates compared with several competitors. An application to futures data shows superior performance as well.
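To make the focus of interest concrete, the sketch below computes a plug-in mean-variance trading rule from estimated return means and covariances. The rule w = (1/γ) Σ⁻¹ μ, the risk-aversion value and the simulated returns are generic assumptions; none of the FIC or model averaging machinery from the paper is reproduced.

```python
# Minimal sketch (assumed setup): plug-in mean-variance portfolio weights
# w = (1/gamma) * inv(Sigma) @ mu from sample estimates. The FIC-based
# submodel comparison developed in the paper is not implemented.
import numpy as np

rng = np.random.default_rng(8)
R = rng.multivariate_normal(mean=[0.02, 0.03, 0.01],
                            cov=[[0.04, 0.01, 0.00],
                                 [0.01, 0.09, 0.02],
                                 [0.00, 0.02, 0.05]],
                            size=250)            # simulated excess returns

mu_hat = R.mean(axis=0)                          # estimated mean returns
Sigma_hat = np.cov(R, rowvar=False)              # estimated covariance
gamma = 3.0                                      # assumed risk aversion
w = np.linalg.solve(Sigma_hat, mu_hat) / gamma   # plug-in portfolio weights
print("portfolio weights:", np.round(w, 3))
```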

4.
We study model selection and model averaging in semiparametric partially linear models with missing responses. An imputation method is used to estimate the linear regression coefficients and the nonparametric function. We show that the corresponding estimators of the linear regression coefficients are asymptotically normal. Then a focused information criterion and frequentist model average estimators are proposed and their theoretical properties are established. Simulation studies are performed to demonstrate the superiority of the proposed methods over the existing strategies in terms of mean squared error and coverage probability. Finally, the approach is applied to a real data case.

5.
In the context of the Cardiovascular Health Study, a comprehensive investigation into the risk factors for strokes, we apply Bayesian model averaging to the selection of variables in Cox proportional hazard models. We use an extension of the leaps-and-bounds algorithm for locating the models that are to be averaged over and make available S-PLUS software to implement the methods. Bayesian model averaging provides a posterior probability that each variable belongs in the model, a more directly interpretable measure of variable importance than a P-value. P-values from models preferred by stepwise methods tend to overstate the evidence for the predictive value of a variable and do not account for model uncertainty. We introduce the partial predictive score to evaluate predictive performance. For the Cardiovascular Health Study, Bayesian model averaging predictively outperforms standard model selection and does a better job of assessing who is at high risk for a stroke.
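A common way to operationalize such posterior model probabilities is the BIC approximation, sketched below for an all-subsets linear regression on made-up data. The exp(−BIC/2) weighting is a standard approximation rather than the exact posterior used in the study, and the leaps-and-bounds search, the Cox model and the partial predictive score are not reproduced.

```python
# Minimal sketch: Bayesian model averaging via the BIC approximation,
# with posterior model weights proportional to exp(-BIC/2) and posterior
# inclusion probabilities per covariate. Linear regression is used purely
# for illustration.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, p = 200, 4
X = rng.normal(size=(n, p))
y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=n)

models, bics = [], []
for k in range(p + 1):
    for subset in itertools.combinations(range(p), k):
        cols = list(subset)
        design = sm.add_constant(X[:, cols]) if cols else np.ones((n, 1))
        bics.append(sm.OLS(y, design).fit().bic)
        models.append(subset)

bics = np.asarray(bics)
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                    # approximate posterior model probabilities

for j in range(p):
    pip = sum(wi for wi, m in zip(w, models) if j in m)
    print(f"P(x{j} in model | data) ~ {pip:.2f}")
```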

6.
Variational Bayes (VB) estimation is a fast alternative to Markov chain Monte Carlo for performing approximate Bayesian inference. This procedure can be an efficient and effective means of analyzing large datasets. However, VB estimation is often criticised, typically on empirical grounds, for being unable to produce valid statistical inferences. In this article we refute this criticism for one of the simplest models where Bayesian inference is not analytically tractable, that is, the Bayesian linear model (for a particular choice of priors). We prove that under mild regularity conditions, VB-based estimators enjoy some desirable frequentist properties such as consistency and can be used to obtain asymptotically valid standard errors. In addition to these results we introduce two VB information criteria: the variational Akaike information criterion and the variational Bayesian information criterion. We show that the variational Akaike information criterion is asymptotically equivalent to the frequentist Akaike information criterion and that the variational Bayesian information criterion is first-order equivalent to the Bayesian information criterion in linear regression. These results motivate the potential use of the variational information criteria for more complex models. We support our theoretical results with numerical examples.
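The sketch below implements a textbook mean-field coordinate-ascent VB scheme for a Bayesian linear model with a fixed Gaussian prior on the coefficients and a gamma prior on the noise precision. This is an assumed generic formulation, not necessarily the particular prior choice analysed in the article, and the variational information criteria are not computed.

```python
# Minimal sketch: mean-field variational Bayes (CAVI) for a Bayesian linear
# model with prior beta ~ N(0, tau2 * I) and noise precision ~ Gamma(a0, b0).
# Generic textbook updates; not the specific setup studied in the article.
import numpy as np

def vb_linear(X, y, tau2=10.0, a0=1e-2, b0=1e-2, iters=100):
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    e_prec = 1.0                                              # initial E[1/sigma^2]
    for _ in range(iters):
        S = np.linalg.inv(e_prec * XtX + np.eye(p) / tau2)    # q(beta) covariance
        m = e_prec * S @ Xty                                  # q(beta) mean
        a = a0 + 0.5 * n
        b = b0 + 0.5 * (np.sum((y - X @ m) ** 2) + np.trace(XtX @ S))
        e_prec = a / b                                        # update E[1/sigma^2]
    return m, S, a, b

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.7, size=200)
m, S, a, b = vb_linear(X, y)
print("posterior mean:", np.round(m, 2), " rough sigma^2 estimate:", round(b / a, 3))
```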

7.
We study the focused information criterion and frequentist model averaging and their application to post‐model‐selection inference for weighted composite quantile regression (WCQR) in the context of additive partial linear models. With the non‐parametric functions approximated by polynomial splines, we show that, under certain conditions, the asymptotic distribution of the frequentist model averaging WCQR‐estimator of a focused parameter is a non‐linear mixture of normal distributions. This asymptotic distribution is used to construct confidence intervals that achieve the nominal coverage probability. With properly chosen weights, the focused information criterion based WCQR estimators are not only robust to outliers and non‐normal residuals but also can achieve efficiency close to the maximum likelihood estimator, without assuming the true error distribution. Simulation studies and a real data analysis are used to illustrate the effectiveness of the proposed procedure.

8.
Monte Carlo experiments are conducted to compare Bayesian and sample-theory model selection criteria in choosing between the univariate probit and logit models. We use five criteria: the deviance information criterion (DIC), the predictive deviance information criterion (PDIC), the Akaike information criterion (AIC), and weighted and unweighted sums of squared errors. The first two criteria are Bayesian while the others are sample-theory criteria. The results show that if the data are balanced, none of the model selection criteria considered in this article can distinguish the probit and logit models. If the data are unbalanced and the sample size is large, the DIC and AIC choose the correct models better than the other criteria. We show that if unbalanced binary data are generated by a leptokurtic distribution, the logit model is preferred over the probit model. The probit model is preferred if unbalanced data are generated by a platykurtic distribution. We apply the model selection criteria to probit and logit models that link the ups and downs of the returns on the S&P 500 to the crude oil price.
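For the sample-theory side of this comparison, the sketch below fits probit and logit models to the same simulated unbalanced binary data and compares their AIC values. The data-generating mechanism is an assumption for illustration; the Bayesian criteria (DIC, PDIC), which require MCMC output, are not reproduced.

```python
# Minimal sketch: fitting probit and logit models to the same binary response
# and comparing AIC. The Bayesian criteria (DIC, PDIC) used in the article
# are not implemented here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
x = rng.normal(size=n)
X = sm.add_constant(x)
y = (1.5 * x + rng.logistic(size=n) > 1.0).astype(int)   # unbalanced logistic data

logit = sm.Logit(y, X).fit(disp=0)
probit = sm.Probit(y, X).fit(disp=0)
print(f"logit AIC = {logit.aic:.1f},  probit AIC = {probit.aic:.1f}")
```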

9.
In this article, we study model selection and model averaging in quantile regression. Under general conditions, we develop a focused information criterion and a frequentist model average estimator for the parameters of a quantile regression model, and examine their theoretical properties. The new procedures provide a robust alternative to the least squares or likelihood methods; a major advantage is that when the variance of the random error is infinite, the proposed procedures still work well while the least squares method breaks down. A simulation study and a real data example show that the proposed method performs well with finite samples and is easy to use in practice.
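As a point of reference for the robustness claim, the sketch below fits a median (0.5-quantile) regression with statsmodels' QuantReg on heavy-tailed (Cauchy) errors, where ordinary least squares is unreliable. The simulated data are an assumption, and the FIC weights and frequentist model averaging of the paper are not implemented.

```python
# Minimal sketch: median regression versus OLS under infinite-variance errors.
# Only the robustness of quantile regression is illustrated; the FIC and
# model averaging procedures of the article are not implemented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 300
x = rng.normal(size=n)
X = sm.add_constant(x)
y = 1.0 + 2.0 * x + rng.standard_cauchy(size=n)   # heavy-tailed errors

ols = sm.OLS(y, X).fit()
med = sm.QuantReg(y, X).fit(q=0.5)
print("OLS slope:", round(ols.params[1], 2),
      " median-regression slope:", round(med.params[1], 2))
```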

10.
The autoregressive (AR) model is a popular choice for fitting and prediction with time-dependent data, and selecting an accurate order among the candidate orders is a crucial issue. Two commonly used selection criteria are the Akaike information criterion and the Bayesian information criterion. However, these criteria are known to be prone to overfitting and underfitting, respectively, so each performs well in some situations but poorly in others. In this paper, we propose a new criterion from the prediction perspective, based on the concept of generalized degrees of freedom, for AR order selection. We derive an approximately unbiased estimator of the mean-squared prediction error based on a data perturbation technique for selecting the order, taking into account the estimation uncertainty involved in the modeling procedure. Numerical experiments illustrate the superiority of the proposed method over some commonly used order selection criteria. Finally, the methodology is applied to a real data example to predict the weekly rate of return on the stock price of Taiwan Semiconductor Manufacturing Company, and the results indicate that the proposed method is satisfactory.
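As a baseline for the comparison described above, the sketch below fits AR(k) models of increasing order to a simulated AR(2) series and reports AIC and BIC. The series is made up, and the generalized-degrees-of-freedom criterion with data perturbation proposed in the paper is not implemented.

```python
# Minimal sketch: AR order selection with AIC and BIC on a simulated AR(2)
# series. The data-perturbation criterion proposed in the paper is not shown.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(5)
n = 500
y = np.zeros(n)
for t in range(2, n):                       # true AR(2) process
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

for k in range(1, 7):
    fit = AutoReg(y, lags=k).fit()
    print(f"order {k}: AIC={fit.aic:.1f}  BIC={fit.bic:.1f}")
```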

11.
A popular account for the demise of the U.K.’s monetary targeting regime in the 1980s blames the fluctuating predictive relationships between broad money and inflation and real output growth. Yet ex post policy analysis based on heavily revised data suggests no fluctuations in the predictive content of money. In this paper, we investigate the predictive relationships for inflation and output growth using both real-time and heavily revised data. We consider a large set of recursively estimated vector autoregressive (VAR) and vector error correction models (VECM). These models differ in terms of lag length and the number of cointegrating relationships. We use Bayesian model averaging (BMA) to demonstrate that real-time monetary policymakers faced considerable model uncertainty. The in-sample predictive content of money fluctuated during the 1980s as a result of data revisions in the presence of model uncertainty. This feature is only apparent with real-time data as heavily revised data obscure these fluctuations. Out-of-sample predictive evaluations rarely suggest that money matters for either inflation or real output. We conclude that both data revisions and model uncertainty contributed to the demise of the U.K.’s monetary targeting regime.

12.
M-estimation is a widely used technique for robust statistical inference. In this paper, we study model selection and model averaging for M-estimation to simultaneously improve the coverage probability of confidence intervals of the parameters of interest and reduce the impact of heavy-tailed errors or outliers in the response. Under general conditions, we develop robust versions of the focused information criterion and a frequentist model average estimator for M-estimation, and we examine their theoretical properties. In addition, we carry out extensive simulation studies as well as two real examples to assess the performance of our new procedure, and find that the proposed method produces satisfactory results.
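For context on M-estimation itself, the following sketch fits a Huber M-estimator with statsmodels' RLM on simulated data whose response contains gross outliers, and compares it with OLS. The data are an assumption, and the robust FIC and frequentist model average estimator developed in the paper are not reproduced.

```python
# Minimal sketch: Huber M-estimation via statsmodels RLM versus OLS when the
# response contains outliers. The robust FIC and model averaging estimators
# proposed in the paper are not implemented here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 200
x = rng.normal(size=n)
X = sm.add_constant(x)
y = 0.5 + 1.5 * x + rng.normal(size=n)
y[:10] += 15.0                                     # a handful of gross outliers

ols = sm.OLS(y, X).fit()
huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print("OLS slope:", round(ols.params[1], 2), " Huber slope:", round(huber.params[1], 2))
```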

13.
We begin by recalling the tripartite division of statistical problems into the M-closed, M-complete, and M-open classes, and then review the key ideas of introductory Shannon theory. Focusing on the related but distinct goals of model selection and prediction, we argue that different techniques for these two goals are appropriate for the three different problem classes. For M-closed problems we give a relative entropy justification that the Bayes information criterion (BIC) is appropriate for model selection and that the Bayes model average is information optimal for prediction. For M-complete problems, we discuss the principle of maximum entropy and a way to use the rate distortion function to bypass the inaccessibility of the true distribution. For prediction in the M-complete class, little work has been done on information-based model averaging, so we discuss the Akaike information criterion (AIC), its properties, and its variants.

For the M-open class, we argue that essentially only predictive criteria are suitable. Thus, as an analog to model selection, we present the key ideas of prediction along a string under a codelength criterion and propose a general form of this criterion. Since little work appears to have been done on information methods for general prediction in the M-open class of problems, we mention the field of information-theoretic learning in certain general function spaces.

14.
The theoretical price of a financial option is given by the expectation of its discounted payoff at expiry. The computation of this expectation depends on the density of the value of the underlying instrument at expiry, which in turn depends on both the parametric model assumed for the behaviour of the underlying and the values of the parameters within that model, such as volatility. However, neither the model nor the parameter values are known. Common practice when pricing options is to assume a specific model, such as geometric Brownian motion, and to use point estimates of the model parameters, thereby precisely defining a density function. We explicitly acknowledge the uncertainty of model and parameters by constructing the predictive density of the underlying as an average of model predictive densities, weighted by each model's posterior probability. A model's predictive density is constructed by integrating its transition density function over the posterior distribution of its parameters. This is an extension of Bayesian model averaging. Sampling importance-resampling and Monte Carlo algorithms implement the computation. The advantage of this method is that rather than falsely assuming the model and parameter values are known, inherent ignorance is acknowledged and dealt with in a mathematically logical manner, which utilises all information from past and current observations to generate and update option prices. Moreover, point estimates for parameters are unnecessary. We use this method to price a European call option on a share index.
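As a simplified illustration of averaging prices over parameter uncertainty, the sketch below Monte Carlo prices a European call under a single geometric Brownian motion model, averaging over draws of the volatility from an assumed posterior. All numerical inputs are made up, and the full model averaging with sampling importance-resampling described above is not reproduced.

```python
# Minimal sketch: a predictive European call price obtained by averaging
# Monte Carlo prices over volatility draws from an assumed posterior, under
# a single GBM model. The article's multi-model averaging is not shown.
import numpy as np

rng = np.random.default_rng(6)
S0, K, r, T = 100.0, 105.0, 0.02, 0.5
sigma_draws = rng.normal(0.25, 0.03, size=2000)      # assumed posterior for volatility

prices = []
for sigma in sigma_draws:
    z = rng.standard_normal(5000)
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    prices.append(np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0)))

print(f"predictive call price ~ {np.mean(prices):.2f}")
```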

15.
In order to make predictions of future values of a time series, one needs to specify a forecasting model. A popular choice is an autoregressive time‐series model, for which the order of the model is chosen by an information criterion. We propose an extension of the focused information criterion (FIC) for model‐order selection, with emphasis on a high predictive accuracy (i.e. the mean squared forecast error is low). We obtain theoretical results and illustrate by means of a simulation study and some real data examples that the FIC is a valid alternative to the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) for selection of a prediction model. We also illustrate the possibility of using the FIC for purposes other than forecasting, and explore its use in an extended model.

16.
Important progress has been made with model averaging methods over the past decades. For spatial data, however, the idea of model averaging has not been applied well. This article studies model averaging methods for the spatial geostatistical linear model. A spatial Mallows criterion is developed to choose weights for the model averaging estimator. The resulting estimator can achieve asymptotic optimality in terms of L2 loss. Simulation experiments reveal that our proposed estimator is superior to the model averaging estimator based on the Mallows criterion developed for ordinary linear models [Hansen, 2007] and to the model selection estimator using the corrected Akaike information criterion developed for geostatistical linear models [Hoeting et al., 2006].
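For reference, the sketch below implements the ordinary (non-spatial) Mallows model averaging idea in the spirit of Hansen (2007) for a set of nested least squares models, choosing weights by minimizing the Mallows criterion over the simplex. The simulated data are an assumption, and the spatial Mallows criterion developed in the article is not reproduced.

```python
# Minimal sketch: Mallows model averaging over nested OLS models, in the
# spirit of Hansen (2007). The spatial criterion of the article is not shown.
import numpy as np
import statsmodels.api as sm
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n, p = 150, 5
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.7, 0.4, 0.0, 0.0]) + rng.normal(size=n)

fits = [sm.OLS(y, sm.add_constant(X[:, : k + 1])).fit() for k in range(p)]
mu = np.column_stack([f.fittedvalues for f in fits])      # candidate fitted values
k_m = np.array([f.df_model + 1 for f in fits])            # parameters per model
sigma2 = fits[-1].mse_resid                               # variance from largest model

def mallows(w):
    resid = y - mu @ w
    return resid @ resid + 2.0 * sigma2 * (w @ k_m)

cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
res = minimize(mallows, np.full(p, 1.0 / p), bounds=[(0, 1)] * p, constraints=cons)
print("Mallows weights:", np.round(res.x, 2))
```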

17.
To analyse the risk factors of coronary heart disease (CHD), we apply Bayesian model averaging, which formalizes the model selection process and deals with model uncertainty, in a discrete-time survival model for data from the Framingham Heart Study. We also use the Alternating Conditional Expectation algorithm to transform the risk factors so that their relationships with CHD are best described, overcoming the problem of coding such variables subjectively. For the Framingham Study, the Bayesian model averaging approach, which makes inferences about the effects of covariates on CHD based on an average of the posterior distributions of the set of identified models, outperforms the stepwise method in predictive performance. We also show that age, cholesterol, and smoking are nonlinearly associated with the occurrence of CHD, and that P-values from models selected by stepwise methods tend to overestimate the evidence for the predictive value of a risk factor and ignore model uncertainty.

18.
Bayesian estimation via MCMC methods opens up new possibilities in estimating complex models. However, there is still considerable debate about how selection among a set of candidate models, or averaging over closely competing models, might be undertaken. This article considers simple approaches for model averaging and choice using predictive and likelihood criteria and associated model weights on the basis of output for models that run in parallel. The operation of such procedures is illustrated with real data sets and a linear regression with simulated data where the true model is known.

19.
This paper is concerned with model selection and model averaging procedures for partially linear single-index models. The profile least squares procedure is employed to estimate regression coefficients for the full model and submodels. We show that the estimators for submodels are asymptotically normal. Based on the asymptotic distribution of the estimators, we derive the focused information criterion (FIC), formulate frequentist model average (FMA) estimators, and construct proper confidence intervals for the FMA estimators and the FIC estimator, which is a special case of the FMA estimators. Monte Carlo studies demonstrate the superiority of the proposed method over the full model and over models chosen by AIC or BIC, in terms of coverage probability and mean squared error. Our approach is further applied to real data from a male fertility study to explore potential factors related to sperm concentration and to estimate the relationship between sperm concentration and monobutyl phthalate.

20.