Similar Literature
 20 similar documents found
1.
In semicontinuous two-part regression models, each regression component may face a large number of candidate variables, which gives rise to a variable selection problem. This paper studies variable selection for the Bernoulli-Normal two-part regression model. A variable selection method based on the Lasso penalty is proposed first; since the Lasso estimator lacks the oracle property, a second method based on the adaptive Lasso penalty is then proposed. Simulation results show that both methods can perform variable selection for the Bernoulli-Normal regression model, and that the adaptive Lasso method generally outperforms the Lasso method.
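The adaptive Lasso can be sketched with standard tools: rescale each column by data-driven weights from an initial fit, run an ordinary Lasso, then rescale the coefficients back. A minimal illustration for the continuous (Normal) part of such a model, on hypothetical simulated data and not the authors' own implementation:

```python
import numpy as np
from sklearn.linear_model import LassoCV, Ridge

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.array([2.0, -1.5, 0, 0, 1.0, 0, 0, 0, 0, 0])
y = X @ beta + rng.standard_normal(n)

# Step 1: an initial consistent estimate (ridge here) gives the adaptive weights.
beta_init = Ridge(alpha=1.0).fit(X, y).coef_
w = 1.0 / (np.abs(beta_init) + 1e-6)      # w_j = 1 / |beta_init_j|

# Step 2: ordinary Lasso on the weight-rescaled columns X_j / w_j.
lasso = LassoCV(cv=5).fit(X / w, y)

# Step 3: map the coefficients back to the original scale.
beta_adaptive = lasso.coef_ / w
print("selected variables:", np.flatnonzero(np.abs(beta_adaptive) > 1e-8))
```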

2.
Variable selection is introduced into spatial econometric models by studying the variable selection problem for the spatial autoregressive model with autoregressive errors. Under residuals that are independent and identically distributed but not necessarily normal, a spatial information criterion is proposed by maximizing information entropy, and its consistency for variable selection in this model is proved. Simulation results show that the spatial information criterion identifies both individual coefficients and the full set of coefficients well, and has a clear advantage over the classical Akaike criterion. The spatial information criterion is therefore a more effective variable selection method.

3.
Model selection is the foundation of building time series models. This paper recasts AR model selection as a multi-objective decision problem and solves it with entropy-based weighting, on which a multi-criteria approach to model selection is built, effectively resolving the selection problem. The method also extends to the selection of autoregressive moving average (ARMA) models, providing a generally applicable modelling strategy.
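One way to read the entropy-weighted multi-criteria idea: compute several selection criteria (AIC, BIC, HQIC here) for candidate AR orders, normalize each criterion across orders, weight the criteria by the entropy method, and pick the order with the best composite score. A rough sketch under these assumptions; the specific criteria and weighting details in the paper may differ:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
# Hypothetical AR(2) series used only for illustration.
y = np.zeros(300)
for t in range(2, 300):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

orders = list(range(1, 7))
crit = []
for p in orders:
    fit = AutoReg(y, lags=p).fit()
    crit.append([fit.aic, fit.bic, fit.hqic])
C = np.array(crit)                      # rows: candidate orders; columns: criteria

# Min-max normalise each criterion so that smaller value -> larger score.
S = (C.max(axis=0) - C) / (C.max(axis=0) - C.min(axis=0) + 1e-12)

# Entropy weighting across the three criteria.
P = np.clip(S / (S.sum(axis=0) + 1e-12), 1e-12, None)
k = 1.0 / np.log(len(orders))
entropy = -k * (P * np.log(P)).sum(axis=0)
w = (1 - entropy) / (1 - entropy).sum()

score = S @ w
print("selected AR order:", orders[int(np.argmax(score))])
```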

4.
Variable selection is a basic tool for high-dimensional statistical models. In regression variable selection, the SCAD penalty not only identifies the correct model well but also estimates the parameters and enjoys the oracle property; these good properties, however, rely on choosing a suitable tuning parameter. Most existing domestic work on tuning parameter selection focuses on the variable selection problem itself. For generalized linear models with the SCAD penalty, this paper uses a new method, the ERIC criterion, to select the tuning parameter, and proves that under certain conditions the model selected by this criterion is consistent. Simulation and empirical results show that ERIC outperforms the traditional CV, AIC, and BIC criteria for tuning parameter selection.

5.
This paper models and analyses time series by combining wavelet analysis with autoregressive models, an approach built on scaling-function approximation and the autoregressive model. Wavelet analysis provides a multiscale function approximation, while the autoregressive model forecasts the time series. The CPI series is decomposed with a discrete wavelet transform and reconstructed into a scale (approximation) series and a detail series for each level; an autoregressive model is then fitted to each series to predict its next value, and the predictions are summed to obtain the CPI forecast, which can be fed back into the fitted models for further forecasting. Finally, the standard deviation is used to assess the quality of the estimates.
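The decompose-forecast-recombine scheme can be sketched with PyWavelets and statsmodels: decompose the series, reconstruct an approximation series and per-level detail series of the original length, fit an AR model to each, forecast one step ahead for each component, and sum the forecasts. A minimal sketch on hypothetical data; the wavelet choice, decomposition depth, and AR orders are illustrative assumptions:

```python
import numpy as np
import pywt
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(2)
# Hypothetical "CPI-like" series: trend + seasonality + noise.
t = np.arange(256)
y = 100 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(256)

wavelet, level = "db4", 3
coeffs = pywt.wavedec(y, wavelet, level=level)

# Reconstruct one component per coefficient block (approximation + details),
# so the components sum back to the original series.
components = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, wavelet)[: len(y)])

# Fit an AR model to each component and add up the one-step-ahead forecasts.
forecast = 0.0
for comp in components:
    fit = AutoReg(comp, lags=6).fit()
    forecast += fit.predict(start=len(comp), end=len(comp))[0]

print("one-step-ahead forecast:", round(float(forecast), 2))
```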

6.
This paper extends adaptive Lasso variable selection to the time-varying parameter vector autoregressive (TVP-VAR) model. The proposed method is applied to monthly data for 2005-2014 on jet fuel prices and civil aviation freight, mail, and passenger turnover, and compared with four other methods. The results show that, compared with the constant-coefficient VAR model, the time-varying coefficient VAR substantially improves model fit and forecasting accuracy, and that the proposed adaptive Lasso time-varying coefficient model consistently outperforms the Lasso time-varying coefficient model of Belmonte, Koop and Korobilis (2014).

7.
To address the shortcomings of the traditional DEMATEL approach to constructing the IDRM, this paper applies the structural vector autoregressive (SVAR) model within DEMATEL and establishes a complete IDRM procedure. A numerical example describes the concrete steps of the SVAR-based method in detail. The results show that constructing the IDRM with an SVAR model reflects the mutual influences among the system's variables more objectively and reasonably.

8.
A forecasting model based on rough set theory and support vector regression
This paper combines rough sets with support vector regression to build a support vector regression model with rough-set data preprocessing. The model effectively overcomes the drawbacks of support vector regression, namely that it does not distinguish the importance of sample attributes and that it is slow when handling large amounts of data. The model is successfully applied to forecasting China's grain output with good predictive results.
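The second stage of such a model, support vector regression on a reduced attribute set, is straightforward with scikit-learn; rough-set attribute reduction itself is not covered by standard libraries, so the sketch below simply assumes the reduced attributes are already known. The data, kernel settings, and selected columns are all hypothetical:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.standard_normal((120, 8))        # 8 candidate attributes
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.5 * rng.standard_normal(120)

reduct = [0, 3, 5]                       # attributes kept after a (hypothetical) rough-set reduction
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:, reduct], y)

x_new = rng.standard_normal((1, 8))
print("forecast:", model.predict(x_new[:, reduct]))
```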

9.
Credit scoring is an effective credit management tool for institutions of all kinds and has broad application prospects. As quantitative techniques have developed, credit scoring methods have kept evolving, offering practitioners many choices. Six models are examined within a unified framework on a relatively large sample of individually owned businesses: two statistical methods, logistic regression and classification trees, and four artificial intelligence methods representing the current direction of credit scoring, namely the multilayer perceptron, radial basis function network, self-organizing feature map, and support vector machine. The results show that logistic regression and the support vector machine are superior in misclassification rate, stability, and applicability, with the support vector machine, one of the newest artificial intelligence scoring methods, standing out in overall performance.
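A comparison of this kind reduces to fitting each scorecard model on the same training split and comparing hold-out misclassification rates. A small sketch with two of the six models (logistic regression and an SVM) on synthetic data; the sampling scheme and features are illustrative only:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical borrower data: features -> default (1) / non-default (0).
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           weights=[0.8, 0.2], random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=4)

models = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "support vector machine": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    err = 1.0 - m.score(X_te, y_te)      # misclassification rate on the hold-out set
    print(f"{name}: misclassification rate = {err:.3f}")
```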

10.
This paper builds a vector autoregressive model and, based on the estimated parameters of the unrestricted VAR, uses impulse response functions and variance decomposition to explore the dynamic relationships among inflation, the real estate price index, the stock price index, and the money supply.
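With statsmodels, estimating an unrestricted VAR and extracting impulse responses and forecast error variance decompositions takes only a few lines. A sketch on hypothetical data; the variable names mirror the abstract, but the series are simulated:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)
# Hypothetical monthly data standing in for inflation, house prices,
# stock prices, and money supply growth.
data = pd.DataFrame(rng.standard_normal((200, 4)).cumsum(axis=0),
                    columns=["cpi", "hpi", "spi", "m2"]).diff().dropna()

res = VAR(data).fit(maxlags=12, ic="aic")   # unrestricted VAR, lag order chosen by AIC
irf = res.irf(24)                           # impulse responses over 24 periods
fevd = res.fevd(24)                         # forecast error variance decomposition

print("selected lag order:", res.k_ar)
print("orthogonalised IRF array shape:", irf.orth_irfs.shape)
print("FEVD at horizon 24:\n", fevd.decomp[:, -1, :])
```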

11.
The classical approach to statistical analysis is usually based upon finding values for model parameters that maximize the likelihood function. Model choice in this context is often also based on the likelihood function, but with the addition of a penalty term for the number of parameters. Though models may be compared pairwise by using likelihood ratio tests for example, various criteria such as the Akaike information criterion have been proposed as alternatives when multiple models need to be compared. In practical terms, the classical approach to model selection usually involves maximizing the likelihood function associated with each competing model and then calculating the corresponding criteria value(s). However, when large numbers of models are possible, this quickly becomes infeasible unless a method that simultaneously maximizes over both parameter and model space is available. We propose an extension to the traditional simulated annealing algorithm that allows for moves that not only change parameter values but also move between competing models. This transdimensional simulated annealing algorithm can therefore be used to locate models and parameters that minimize criteria such as the Akaike information criterion, but within a single algorithm, removing the need for large numbers of simulations to be run. We discuss the implementation of the transdimensional simulated annealing algorithm and use simulation studies to examine its performance in realistically complex modelling situations. We illustrate our ideas with a pedagogic example based on the analysis of an autoregressive time series and two more detailed examples: one on variable selection for logistic regression and the other on model selection for the analysis of integrated recapture–recovery data.
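The essence of the transdimensional move is that a proposal can add or drop a parameter (changing the model) as well as perturb parameter values; a temperature-controlled Metropolis acceptance rule with gradual cooling drives the search toward the criterion's minimum. The sketch below anneals over variable subsets of a linear regression and uses the profile AIC from an OLS refit at each visited model rather than joint parameter moves, a simplification of the algorithm described in the paper:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, p = 150, 8
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 2] + rng.standard_normal(n)

def aic(mask):
    """Profile AIC: refit OLS on the selected columns (intercept always kept)."""
    cols = np.flatnonzero(mask)
    Xs = sm.add_constant(X[:, cols]) if cols.size else np.ones((n, 1))
    return sm.OLS(y, Xs).fit().aic

mask = rng.integers(0, 2, p).astype(bool)     # random starting model
best_mask, best_aic = mask.copy(), aic(mask)
temp, cooling = 5.0, 0.995

for _ in range(2000):
    proposal = mask.copy()
    proposal[rng.integers(p)] ^= True         # transdimensional move: add or drop one variable
    delta = aic(proposal) - aic(mask)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        mask = proposal
        if aic(mask) < best_aic:
            best_mask, best_aic = mask.copy(), aic(mask)
    temp *= cooling                           # cool the temperature

print("selected variables:", np.flatnonzero(best_mask), "AIC:", round(best_aic, 2))
```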

12.
Model choice is one of the most crucial aspects of any statistical data analysis. It is well known that most models are just an approximation to the true data-generating process, but among such model approximations it is our goal to select the ‘best’ one. Researchers typically consider a finite number of plausible models in statistical applications, and the related statistical inference depends on the chosen model. Hence, model comparison is required to identify the ‘best’ model among several such candidate models. This article considers the problem of model selection for spatial data. The issue of model selection for spatial models has been addressed in the literature by the use of traditional information criteria-based methods, even though such criteria have been developed based on the assumption of independent observations. We evaluate the performance of some of the popular model selection criteria via Monte Carlo simulation experiments using small to moderate samples. In particular, we compare the performance of some of the most popular information criteria, such as the Akaike information criterion (AIC), Bayesian information criterion, and corrected AIC, in selecting the true model. The ability of these criteria to select the correct model is evaluated under several scenarios. This comparison is made using various spatial covariance models ranging from stationary isotropic to nonstationary models.

13.
Model selection can be defined as the task of estimating the performance of different models in order to choose the most parsimonious one among a potentially very large set of candidate statistical models. We propose a graphical representation, to be considered as an extension to the class of mixed models of the deviance plot proposed in the literature within the framework of classical and generalized linear models. Once a reduced number of models has been selected, this graphical representation makes it possible to identify important covariates by focusing only on the fixed-effects component, assuming the random part is properly specified. We also suggest a standalone figure representing the residual random variance ratio: a cross-evaluation of the two graphical representations allows some conclusions to be drawn about the random-part specification of the model and a more accurate selection of the final model.

14.
In this study, we propose a prior on restricted Vector Autoregressive (VAR) models. The prior setting permits efficient Markov Chain Monte Carlo (MCMC) sampling from the posterior of the VAR parameters and estimation of the Bayes factor. Numerical simulations show that when the sample size is small, the Bayes factor is more effective in selecting the correct model than the commonly used Schwarz criterion. We conduct Bayesian hypothesis testing of VAR models on the macroeconomic, state-, and sector-specific effects of employment growth.

15.
Lasso and other regularization procedures are attractive methods for variable selection, subject to a proper choice of shrinkage parameter. Given a set of potential subsets produced by a regularization algorithm, a consistent model selection criterion is proposed to select the best one among this preselected set. The approach leads to a fast and efficient procedure for variable selection, especially in high-dimensional settings. Model selection consistency of the suggested criterion is proven when the number of covariates d is fixed. Simulation studies suggest that the criterion still enjoys model selection consistency when d is much larger than the sample size. The simulations also show that our approach for variable selection works surprisingly well in comparison with existing competitors. The method is also applied to a real data set.
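The procedure described here can be imitated by running a regularization path, collecting the distinct supports it produces, refitting each support by least squares, and scoring the refits with a BIC-type criterion; the paper's consistent criterion may differ from the plain BIC used as a placeholder below. A rough sketch:

```python
import numpy as np
from sklearn.linear_model import lasso_path, LinearRegression

rng = np.random.default_rng(7)
n, d = 100, 20
X = rng.standard_normal((n, d))
y = 2 * X[:, 1] - 1.5 * X[:, 4] + rng.standard_normal(n)

# Step 1: candidate subsets are the distinct supports along the Lasso path.
_, coefs, _ = lasso_path(X, y, n_alphas=50)
supports = {tuple(np.flatnonzero(coefs[:, k])) for k in range(coefs.shape[1])}

# Step 2: refit each candidate by OLS and score it with a BIC-type criterion.
def bic(support):
    cols = list(support)
    if not cols:
        rss = np.sum((y - y.mean()) ** 2)
    else:
        fit = LinearRegression().fit(X[:, cols], y)
        rss = np.sum((y - fit.predict(X[:, cols])) ** 2)
    return n * np.log(rss / n) + len(cols) * np.log(n)

best = min(supports, key=bic)
print("selected support:", sorted(best))
```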

16.

This work is motivated by the need to find experimental designs which are robust under different model assumptions. We measure robustness by calculating a measure of design efficiency with respect to a design optimality criterion and say that a design is robust if it is reasonably efficient under different model scenarios. We discuss two design criteria and an algorithm which can be used to obtain robust designs. The first criterion employs a Bayesian-type approach by putting a prior or weight on each candidate model and possibly priors on the corresponding model parameters. We define the first criterion as the expected value of the design efficiency over the priors. The second design criterion we study is the minimax design which minimizes the worst value of a design criterion over all candidate models. We establish conditions when these two criteria are equivalent when there are two candidate models. We apply our findings to the area of accelerated life testing and perform sensitivity analysis of designs with respect to priors and misspecification of planning values.

17.
Generalized linear models (GLMs) are widely studied to deal with complex response variables. For the analysis of categorical dependent variables with more than two response categories, multivariate GLMs are presented to build the relationship between this polytomous response and a set of regressors. In the literature, traditional variable selection approaches have been proposed for the multivariate GLM with a canonical link function when the number of parameters is fixed. However, in many model selection problems, the number of parameters may be large and grow with the sample size. In this paper, we present a new selection criterion for models with a diverging number of parameters. Under suitable conditions, the criterion is shown to be model selection consistent. A simulation study and a real data analysis are conducted to support the theoretical findings.

18.
Model selection is the most pervasive problem in generalized linear models. A model selection criterion based on deviance, called the deviance-based criterion (DBC), is proposed. The DBC is obtained by penalizing the difference between the deviance of the fitted model and that of the full model. Under certain weak conditions, the DBC is shown to be a consistent model selection criterion in the sense that, with probability approaching one, the selected model asymptotically equals the optimal model relating response and predictors. Further, the use of the DBC in link function selection is also discussed. We compare the proposed model selection criterion with existing methods. The small-sample efficiency of the proposed criterion is evaluated in a simulation study.
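As described, the criterion compares the deviance of a candidate fit with the deviance of the full model and adds a complexity penalty; the paper's exact penalty is not reproduced here, so the sketch below uses a simple size penalty purely as a placeholder. Illustrated with a Poisson GLM in statsmodels on simulated data:

```python
import numpy as np
import statsmodels.api as sm
from itertools import combinations

rng = np.random.default_rng(8)
n, p = 200, 4
X = rng.standard_normal((n, p))
mu = np.exp(0.5 + 0.8 * X[:, 0] - 0.6 * X[:, 2])
y = rng.poisson(mu)

def deviance(cols):
    Xs = sm.add_constant(X[:, list(cols)]) if cols else np.ones((n, 1))
    return sm.GLM(y, Xs, family=sm.families.Poisson()).fit().deviance

full_dev = deviance(tuple(range(p)))

def dbc(cols, penalty=2.0):
    # Deviance gap to the full model plus a complexity penalty;
    # the paper's own penalty term would replace this placeholder.
    return (deviance(cols) - full_dev) + penalty * len(cols)

candidates = [c for k in range(p + 1) for c in combinations(range(p), k)]
print("selected covariates:", min(candidates, key=dbc))
```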

19.
We consider the problem of selecting a regression model from a large class of possible models in the case where no true model is believed to exist. In practice few statisticians, or scientists who employ statistical methods, believe that a "true" model exists, but nonetheless they seek to select a model as a proxy from which they want to predict. Unlike much of the recent work in this area we address this problem explicitly. We develop Bayesian predictive model selection techniques when proper conjugate priors are used and obtain an easily computed expression for the model selection criterion. We also derive expressions for updating the value of the statistic when a predictor is dropped from the model and apply this approach to a large well-known data set.

20.
Model selection problems arise when constructing unbiased or asymptotically unbiased estimators of measures known as discrepancies in order to find the best model. Most of the usual criteria are based on goodness of fit and parsimony; they aim to maximize a transformed version of the likelihood. For linear regression models with normally distributed errors, the situation is less clear when two models are equivalent: are they close to or far from the unknown true model? In this work, based on stochastic simulation and parametric simulation, we study the results of Vuong's test, Cox's test, Akaike's information criterion, the Bayesian information criterion, the Kullback information criterion and the bias-corrected Kullback information criterion, and the ability of these tests to discriminate between non-nested linear models.
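Vuong's test, for example, compares two non-nested models by standardizing the mean difference of their pointwise log-likelihood contributions. A minimal sketch for two non-nested normal linear regressions on hypothetical data, without the bias corrections discussed in the article:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(9)
n = 200
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = 1.0 + 0.8 * x1 + rng.standard_normal(n)

def pointwise_loglik(y, X):
    """Per-observation Gaussian log-likelihood of an OLS fit (ML variance)."""
    fit = sm.OLS(y, X).fit()
    resid = y - fit.predict(X)
    sigma2 = np.mean(resid ** 2)
    return -0.5 * (np.log(2 * np.pi * sigma2) + resid ** 2 / sigma2)

# Two non-nested candidate models: y ~ x1 versus y ~ x2.
ll1 = pointwise_loglik(y, sm.add_constant(x1))
ll2 = pointwise_loglik(y, sm.add_constant(x2))

m = ll1 - ll2
vuong_z = np.sqrt(n) * m.mean() / m.std(ddof=1)
p_value = 2 * (1 - stats.norm.cdf(abs(vuong_z)))
print(f"Vuong z = {vuong_z:.2f}, p = {p_value:.4f}")
```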
