Similar Literature
20 similar documents retrieved.
1.
Stock & Watson (1999) consider the relative quality of different univariate forecasting techniques. This paper extends their study of forecasting practice by comparing the forecasting performance of two popular model selection procedures, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Several questions are considered: how AIC and BIC choose lag lengths in autoregressive models fitted to actual series, how the models so selected forecast relative to a fixed AR(4) model, the effect of imposing a maximum lag on model selection, and the forecasting performance of combining the AR(4), AIC, and BIC models with equal weights.
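As a rough, self-contained sketch of the kind of comparison described above (not the paper's own code), the Python snippet below selects an AR lag length by AIC and by BIC, fits a fixed AR(4) benchmark, and combines the three one-step-ahead forecasts with equal weights. The simulated AR(2) series and the maximum lag of 8 are illustrative assumptions.

```python
import numpy as np

def fit_ar(y, p):
    """OLS fit of an AR(p) with intercept; returns (coefficients, residual variance, effective n)."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))] + [y[p - k:-k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return beta, resid @ resid / len(Y), len(Y)

def aic_bic(sigma2, n_eff, p):
    """Gaussian AIC and BIC for an AR(p) with intercept (k = p + 1 parameters)."""
    k = p + 1
    loglik = -0.5 * n_eff * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + 2 * k, -2 * loglik + k * np.log(n_eff)

def one_step_forecast(y, beta, p):
    """Point forecast of the next observation from fitted AR(p) coefficients."""
    return beta[0] + beta[1:] @ y[-1:-p - 1:-1]

rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(2, 200):                       # simulate a toy AR(2) series
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

max_lag = 8
crit = {p: aic_bic(*fit_ar(y, p)[1:], p) for p in range(1, max_lag + 1)}
p_aic = min(crit, key=lambda p: crit[p][0])   # order minimizing AIC
p_bic = min(crit, key=lambda p: crit[p][1])   # order minimizing BIC

forecasts = [one_step_forecast(y, fit_ar(y, p)[0], p) for p in (p_aic, p_bic, 4)]
combined = np.mean(forecasts)                 # equal-weight combination of the three forecasts
print(p_aic, p_bic, round(combined, 3))
```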

2.
The autoregressive model is a popular method for analysing time-dependent data, and selecting the order parameter is an essential step. Two commonly used selection criteria are the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), which are known to be prone to overfitting and underfitting, respectively. To our knowledge, no criterion in the literature performs satisfactorily across all situations. In this paper, we therefore focus on forecasting the future values of an observed time series and propose an adaptive approach, based on the concept of generalized degrees of freedom, that combines the advantages of AIC and BIC while mitigating their weaknesses. Instead of applying a fixed criterion to select the order parameter, we propose an approximately unbiased estimator of the mean squared prediction error, based on a data perturbation technique, for a fair comparison between AIC and BIC; the criterion so selected then determines the final order. Numerical experiments show the superiority of the proposed method, and a real data set, the retail price index of China from 1952 to 2008, is analysed for illustration.
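The paper's generalized-degrees-of-freedom estimator is not reproduced here; the snippet below is only a loose, hedged illustration of the underlying idea, namely using perturbed copies of the data to score the one-step prediction error incurred by each criterion and letting the better-scoring criterion pick the final order. The perturbation scale `tau`, the number of replications, the maximum lag, and the toy AR(1) series are all arbitrary assumptions, not the paper's choices.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def select_order(y, criterion, max_lag=8):
    """Pick the AR order minimizing the given criterion ('aic' or 'bic')."""
    fits = {p: AutoReg(y, lags=p).fit() for p in range(1, max_lag + 1)}
    return min(fits, key=lambda p: getattr(fits[p], criterion))

def perturbed_mspe(y, criterion, n_rep=50, tau=0.25, seed=0):
    """Rough perturbation-based score of the one-step MSPE a criterion incurs (illustrative only)."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_rep):
        t0 = int(rng.integers(len(y) // 2, len(y) - 1))       # random forecast origin
        train = y[:t0]
        y_pert = train + tau * np.std(train) * rng.standard_normal(t0)
        p = select_order(y_pert, criterion)                   # order chosen on the perturbed data
        fc = AutoReg(train, lags=p).fit().predict(start=t0, end=t0)
        errors.append((y[t0] - fc[0]) ** 2)
    return np.mean(errors)

rng = np.random.default_rng(1)
y = np.zeros(300)
for t in range(1, 300):                                       # toy AR(1) series
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()

chosen = min(("aic", "bic"), key=lambda c: perturbed_mspe(y, c))
print(chosen, select_order(y, chosen))
```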

3.
This paper presents an extension of mean-squared forecast error (MSFE) model averaging for integrating linear regression models computed on data frames of various lengths. The proposed method is a preferable alternative to selecting a single best model by efficiency criteria such as the Bayesian information criterion (BIC), the Akaike information criterion (AIC), F-statistics and the mean-squared error (MSE), as well as to Bayesian model averaging (BMA) and a naïve simple forecast average. The method is designed to handle possibly non-nested models estimated on different numbers of observations and selects forecast weights by minimizing an unbiased estimator of the MSFE. It also yields forecast confidence intervals at a given significance level, which is not possible with the other model averaging methods. In addition, out-of-sample simulations and empirical tests demonstrate the efficiency of this kind of averaging when forecasting economic processes.
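The paper's unbiased MSFE estimator for samples of different lengths is not reproduced here, but the minimal sketch below shows the generic step it feeds into: given a matrix of pseudo-out-of-sample forecast errors for the candidate models, choose combination weights that minimize the estimated MSFE of the weighted forecast subject to the weights summing to one. The closed form used here does not enforce non-negativity, and the simulated error matrix is an illustrative assumption.

```python
import numpy as np

def msfe_weights(errors):
    """Combination weights minimizing w' S w subject to sum(w) = 1, where S is the
    estimated MSFE (second-moment) matrix of the candidate models' forecast errors.
    Closed form: w proportional to S^{-1} 1; non-negativity is not enforced."""
    S = errors.T @ errors / errors.shape[0]
    w = np.linalg.solve(S, np.ones(S.shape[1]))
    return w / w.sum()

# Toy pseudo-out-of-sample forecast errors for three candidate models (rows = forecast origins).
rng = np.random.default_rng(0)
errors = rng.standard_normal((60, 3)) * np.array([1.0, 1.3, 0.9])
w = msfe_weights(errors)
print(np.round(w, 3), round(np.mean((errors @ w) ** 2), 3))
```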

4.
In order to make predictions of future values of a time series, one needs to specify a forecasting model. A popular choice is an autoregressive time-series model, for which the order of the model is chosen by an information criterion. We propose an extension of the focused information criterion (FIC) for model-order selection, with emphasis on high predictive accuracy (i.e. a low mean squared forecast error). We obtain theoretical results and illustrate, by means of a simulation study and some real data examples, that the FIC is a valid alternative to the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) for selecting a prediction model. We also illustrate the possibility of using the FIC for purposes other than forecasting, and explore its use in an extended model.

5.
Inflated data are prevalent in many situations, and a variety of inflated models with extensions have been derived to fit data with excessive counts of particular responses. The family of information criteria (IC) is used to compare the fit of competing models for selection purposes. Yet despite their common use in statistical applications, few studies have evaluated the performance of IC in inflated models. In this study, we examined the performance of IC for dual-inflated data. The new zero- and K-inflated Poisson (ZKIP) regression model and conventional models, including Poisson regression and zero-inflated Poisson (ZIP) regression, were fitted to dual-inflated data and the performance of the IC was compared. The effects of sample size and the proportion of inflated observations on selection performance were also examined. The results suggest that the Bayesian information criterion (BIC) and the consistent Akaike information criterion (CAIC) are more accurate than the Akaike information criterion (AIC) in terms of model selection when the true model is simple (i.e. Poisson regression (POI)). For more complex models, such as ZIP and ZKIP, the AIC was consistently better than the BIC and CAIC, although it did not reach high levels of accuracy when the sample size and the proportion of zero observations were small. The AIC tended to over-fit the data for the POI, whereas the BIC and CAIC tended to under-parameterize the data for the ZIP and ZKIP. Therefore, it is desirable to study other model selection criteria for dual-inflated data with small sample sizes.
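For reference, with $\hat{\ell}$ the maximized log-likelihood, $k$ the number of estimated parameters and $n$ the sample size, the three criteria compared here are standardly computed as

$$\mathrm{AIC} = -2\hat{\ell} + 2k, \qquad \mathrm{BIC} = -2\hat{\ell} + k\log n, \qquad \mathrm{CAIC} = -2\hat{\ell} + k(\log n + 1),$$

so that CAIC penalizes model size slightly more heavily than BIC, and both penalize it more heavily than AIC once $n > e^{2} \approx 7.4$.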

6.
This paper derives the Akaike information criterion (AIC), the corrected AIC, the Bayesian information criterion (BIC) and Hannan and Quinn's information criterion for approximate factor models assuming a large number of cross-sectional observations, and studies the consistency properties of these information criteria. It also reports extensive simulation results comparing the performance of the existing and new procedures for selecting the number of factors. The simulation results show the difficulty of determining which criterion performs best. In practice, it is advisable to consider several criteria at the same time, especially Hannan and Quinn's information criterion, Bai and Ng's ICp2 and BIC3, and Onatski's and Ahn and Horenstein's eigenvalue-based criteria. The model-selection criteria considered in this paper are also applied to Stock and Watson's two macroeconomic data sets. The results differ considerably depending on the model-selection criterion in use, but there is evidence suggesting five factors for the first data set and five to seven factors for the second.
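As a pointer to two of the criteria recommended above, and as commonly stated in the factor-model literature following Bai and Ng (2002) rather than quoted from this paper, let $V(k,\hat F^{k})$ be the (scaled) sum of squared residuals from a $k$-factor fit, $N$ the cross-section size, $T$ the time dimension and $\hat\sigma^{2}$ an estimate of the average idiosyncratic variance; then

$$\mathrm{IC}_{p2}(k) = \ln V(k,\hat F^{k}) + k\,\frac{N+T}{NT}\,\ln\bigl(\min\{N,T\}\bigr), \qquad \mathrm{BIC}_{3}(k) = V(k,\hat F^{k}) + k\,\hat\sigma^{2}\,\frac{(N+T-k)\ln(NT)}{NT}.$$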

7.
This paper proposes an adaptive model selection criterion with a data-driven penalty term. We treat model selection as an equality-constrained minimization problem and develop an adaptive model selection procedure based on the Lagrange optimization method. In contrast to Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and most other existing criteria, the new criterion minimizes the model size and takes a measure of lack of fit as an adaptive penalty. Both theoretical results and simulations illustrate the power of this criterion with respect to consistency and pointwise asymptotic loss efficiency in the parametric and nonparametric cases.

8.
The goal of the current paper is to compare consistent and inconsistent model selection criteria by looking at their convergence rates (defined in the first section). The prototypes of the two types of criteria are the AIC and BIC criteria, respectively. For linear regression models with normally distributed errors, we show that the convergence rates for AIC and BIC are $O(n^{-1})$ and $O((n\log n)^{-1/2})$, respectively. When the error distributions are unknown, the two criteria become indistinguishable, all having convergence rate $O(n^{-1/2})$. We also argue that the BIC criterion has a nearly optimal convergence rate. The results partially justify some of the controversial simulation results in which inconsistent criteria seem to outperform consistent ones.

9.
In medical studies, the Cox proportional hazards model is a commonly used method for dealing with right-censored survival data accompanied by many explanatory covariates. In practice, Akaike's information criterion (AIC) or the Bayesian information criterion (BIC) is usually used to select an appropriate subset of covariates. It is well known that neither the AIC nor the BIC criterion dominates in all situations. In this paper, we propose an adaptive-Cox model averaging procedure to obtain a more robust hazard estimator. First, by applying the AIC and BIC criteria to perturbed datasets, we obtain two model averaging (MA) estimated survival curves, called AIC-MA and BIC-MA. Then, based on Kullback–Leibler loss, the better estimate of the survival curve between AIC-MA and BIC-MA is chosen, which results in an adaptive-Cox estimate of the survival curve. Simulation results show the superiority of our approach, and an application of the proposed method is presented by analyzing the German Breast Cancer Study dataset.
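For the Cox model the criteria are computed from the maximized partial log-likelihood $\hat{\ell}_{p}$ rather than a full likelihood; with $k$ covariates in the candidate subset, a common convention (assumed here, since the abstract does not state one) is

$$\mathrm{AIC} = -2\hat{\ell}_{p} + 2k, \qquad \mathrm{BIC} = -2\hat{\ell}_{p} + k\log d,$$

where $d$ is the number of observed events; some implementations use the total sample size $n$ in place of $d$.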

10.
This paper deals with the application of model selection criteria to data generated by ARMA processes. The recently introduced modified divergence information criterion is used and compared with traditional selection criteria such as the Akaike information criterion (AIC) and the Schwarz information criterion (SIC). The appropriateness of the selected model is tested for one- and five-step-ahead predictions using the normalized mean squared forecast error (NMSFE).
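The NMSFE is not defined in the abstract; one common convention, assumed in the small sketch below, normalizes the mean squared forecast error by the variance of the actual values. The paper's exact normalization may differ, and the toy numbers are purely illustrative.

```python
import numpy as np

def nmsfe(actual, forecast):
    """Mean squared forecast error normalized by the variance of the actual values
    (one common convention; the paper's exact normalization may differ)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean((actual - forecast) ** 2) / np.var(actual)

print(round(nmsfe([1.2, 0.8, 1.5, 0.9], [1.0, 1.0, 1.3, 1.1]), 3))
```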

11.
Abstract: The vector autoregressive (VAR) model is one of the most commonly used methods in multivariate time series analysis. Model selection is a crucial step in building such a model. When the number of candidate models is small, a model can be chosen by comparing criterion values such as AIC, AICc, BIC or HQ across the candidates; however, when there is a very large number of candidate models, comparing the criterion value of every model one by one is infeasible. To solve this problem, this paper proposes a Gibbs-sampler-based model selection method for vector autoregressive models. The results show that the method can identify the model with the smallest criterion value accurately and efficiently from a large pool of candidates.
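The Gibbs-sampler search itself is not reproduced here; as a point of reference, the sketch below (using statsmodels on a toy bivariate series, both assumptions of this illustration) shows the brute-force alternative the paper is designed to avoid, namely fitting each candidate lag order and comparing AIC, BIC and HQ directly.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = np.zeros((300, 2))
for t in range(1, 300):                         # toy bivariate series with mild persistence
    data[t] = 0.5 * data[t - 1] + rng.standard_normal(2)

model = VAR(data)
for p in range(1, 6):                           # brute-force comparison over lag orders 1..5
    res = model.fit(p)
    print(p, round(res.aic, 3), round(res.bic, 3), round(res.hqic, 3))

print(model.select_order(maxlags=5).summary())  # statsmodels' built-in summary of the same search
```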

12.
In the problem of selecting variables in a multivariate linear regression model, we derive new Bayesian information criteria based on a prior mixing a smooth distribution and a delta distribution. Each of them can be interpreted as a fusion of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Inheriting their asymptotic properties, our information criteria are consistent in variable selection in both the large-sample and the high-dimensional asymptotic frameworks. In numerical simulations, variable selection methods based on our information criteria choose the true set of variables with high probability in most cases.

13.
Model choice is one of the most crucial aspects of any statistical data analysis. It is well known that most models are only approximations to the true data-generating process, and among such approximations it is our goal to select the 'best' one. Researchers typically consider a finite number of plausible models in statistical applications, and the related statistical inference depends on the chosen model. Hence, model comparison is required to identify the 'best' model among several candidate models. This article considers the problem of model selection for spatial data. The issue of model selection for spatial models has been addressed in the literature through traditional information-criterion-based methods, even though such criteria were developed under the assumption of independent observations. We evaluate the performance of some popular model selection criteria via Monte Carlo simulation experiments using small to moderate samples. In particular, we compare the performance of some of the most popular information criteria, such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the corrected AIC, in selecting the true model. The ability of these criteria to select the correct model is evaluated under several scenarios. This comparison is made using various spatial covariance models ranging from stationary isotropic to nonstationary models.

14.
In a recent volume of this journal, Holden [Testing the normality assumption in the Tobit model, J. Appl. Stat. 31 (2004), pp. 521–532] presents Monte Carlo evidence comparing several tests for departures from normality in the Tobit model. This study adds to the work of Holden by considering another test, and several information criteria, for detecting departures from normality in the Tobit model. The test given here is a modified likelihood ratio statistic based on a partially adaptive estimator of the censored regression model using the approach of Caudill [A partially adaptive estimator for the censored regression model based on a mixture of normal distributions, Working Paper, Department of Economics, Auburn University, 2007]. The information criteria examined include Akaike's information criterion (AIC), the consistent AIC (CAIC), the Bayesian information criterion (BIC), and Akaike's BIC (ABIC). In terms of the fewest 'rejections' of a true null, the best performance is exhibited by the CAIC and the BIC, although, like some of the statistics examined by Holden, each presents computational difficulties.

15.
Monte Carlo experiments are conducted to compare Bayesian and sample-theory model selection criteria in choosing between the univariate probit and logit models. We use five criteria: the deviance information criterion (DIC), the predictive deviance information criterion (PDIC), the Akaike information criterion (AIC), and the weighted and unweighted sums of squared errors. The first two criteria are Bayesian, while the others are sample-theory criteria. The results show that if the data are balanced, none of the model selection criteria considered in this article can distinguish between the probit and logit models. If the data are unbalanced and the sample size is large, the DIC and AIC choose the correct model better than the other criteria. We show that if unbalanced binary data are generated by a leptokurtic distribution, the logit model is preferred over the probit model; the probit model is preferred if the unbalanced data are generated by a platykurtic distribution. We apply the model selection criteria to probit and logit models that link the ups and downs of the returns on the S&P 500 to the crude oil price.
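In its standard form (Spiegelhalter et al., 2002), with deviance $D(\theta) = -2\log p(y\mid\theta)$, posterior mean deviance $\bar D = \mathrm{E}_{\theta\mid y}[D(\theta)]$ and posterior mean $\bar\theta$, the first criterion is

$$p_{D} = \bar D - D(\bar\theta), \qquad \mathrm{DIC} = \bar D + p_{D} = D(\bar\theta) + 2p_{D}.$$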

16.
Multilevel latent class analysis provides more effective information on both individual and group typologies. However, its model selection issues need further investigation. The current study examined the issue of higher-level class enumeration for a relatively complex model using the AIC, AIC3, BIC, and BIC*. A data simulation was conducted and its results were verified with empirical data. The results demonstrated that the accuracy of these criteria depends on sample size; in particular, the sample size per group plays an evident role in improving the accuracy of AIC3 and BIC. The more complex model requires a larger sample size per group to ensure accurate class enumeration.

17.
This paper investigates, by means of Monte Carlo simulation, the effects of different choices of order for the autoregressive approximation on the fully efficient parameter estimates of autoregressive moving average models. Four order selection criteria, AIC, BIC, HQ and PKK, were compared, and different model structures with varying sample sizes were used to contrast the performance of the criteria. Some asymptotic results that provide a useful guide for assessing the performance of these criteria are presented. The results of this comparison show that there are marked differences in the accuracy implied by these alternative criteria in small-sample situations, and that it is preferable in such cases to apply the BIC criterion, which leads to greater precision of the Gaussian likelihood estimates. Implications of the findings of this study for the estimation of time series models are highlighted.
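Of the four criteria, AIC and BIC take the likelihood-based forms quoted under item 5, while Hannan and Quinn's criterion replaces their penalties with a $\log\log$ term (the PKK criterion is not reproduced here):

$$\mathrm{HQ} = -2\hat{\ell} + 2k\log\log n.$$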

18.
Model selection strategies play an important, if not explicit, role in quantitative research. The inferential properties of these strategies are largely unknown; therefore, there is little basis for recommending (or avoiding) any particular set of strategies. In this paper, we evaluate several commonly used model selection procedures [the Bayesian information criterion (BIC), adjusted R², Mallows' Cp, the Akaike information criterion (AIC), AICc, and stepwise regression] using Monte Carlo simulation of model selection when the true data-generating processes (DGP) are known.

We find that the ability of these selection procedures to include important variables and exclude irrelevant variables increases with the size of the sample and decreases with the amount of noise in the model. None of the model selection procedures does well in small samples, even when the true DGP is largely deterministic; thus, data mining in small samples should be avoided entirely. Instead, the implicit uncertainty in model specification should be explicitly discussed. In large samples, BIC is better than the other procedures at correctly identifying most of the generating processes we simulated, and stepwise regression does almost as well. In the absence of strong theory, both BIC and stepwise regression appear to be reasonable model selection strategies in large samples. Under the conditions simulated, adjusted R², Mallows' Cp, AIC, and AICc are clearly inferior and should be avoided.
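For completeness, the three penalized-fit measures named above are usually written, for a linear model with $n$ observations, $p$ estimated regression coefficients (intercept included), residual sum of squares $\mathrm{SSE}_{p}$, a full-model error-variance estimate $\hat\sigma^{2}$, and $k$ parameters in the likelihood, as

$$\bar R^{2} = 1 - (1 - R^{2})\frac{n-1}{n-p}, \qquad C_{p} = \frac{\mathrm{SSE}_{p}}{\hat\sigma^{2}} - n + 2p, \qquad \mathrm{AIC}_{c} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1};$$

conventions differ on whether the intercept and error variance are counted, so the exact constants vary by source.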


19.
We derive and investigate a variant of AIC, the Akaike information criterion, for model selection in settings where the observed data are incomplete. Our variant is based on the motivation provided for the PDIO ('predictive divergence for incomplete observation models') criterion of Shimodaira (1994, in: Selecting Models from Data: Artificial Intelligence and Statistics IV, Lecture Notes in Statistics, vol. 89, Springer, New York, pp. 21–29), but differs from PDIO in its 'goodness-of-fit' term. Unlike AIC and PDIO, which require the computation of the observed-data empirical log-likelihood, our criterion can be evaluated using only complete-data tools, readily available through the EM algorithm and the SEM ('supplemented' EM) algorithm of Meng and Rubin (Journal of the American Statistical Association 86 (1991) 899–909). We compare the performance of our AIC variant to that of both AIC and PDIO in simulations where the data being modeled contain missing values. The results indicate that our criterion is less prone to overfitting than AIC and less prone to underfitting than PDIO.

20.
In time series modeling, consistent criteria such as the Bayesian information criterion (BIC) outperform loss-efficient criteria such as the Akaike information criterion (AIC) in terms of predictive ability when the data are generated by a finite-order autoregressive process, and the reverse is true when the data are generated by an infinite-order autoregressive process. Since in practice we do not know the data-generating process, it is useful to have an adaptive criterion that behaves as either a consistent or a loss-efficient criterion, whichever performs better. Here we derive such a criterion. Moreover, our criterion adapts to the effective sample size and is not sensitive to the maximum order limit determined a priori.
