UPPER BOUNDS ON THE MINIMUM COVERAGE PROBABILITY OF CONFIDENCE INTERVALS IN REGRESSION AFTER MODEL SELECTION
Authors: Paul Kabaila and Khageswor Giri
Institution: La Trobe University
Abstract: We consider a linear regression model in which the parameter of interest is a specified linear combination of the components of the regression parameter vector. We suppose that, as a first step, a data-based model selection procedure (e.g. preliminary hypothesis tests or minimization of the Akaike information criterion, AIC) is used to select a model. It is common statistical practice to then construct a confidence interval for the parameter of interest based on the assumption that the selected model had been given to us a priori. This assumption is false, and it can lead to a confidence interval with poor coverage properties. We provide an easily computed finite-sample upper bound (calculated by repeated numerical evaluation of a double integral) on the minimum coverage probability of this confidence interval. The bound applies to model selection by any of the following methods: minimum AIC, minimum Bayesian information criterion (BIC), maximum adjusted R^2, minimum Mallows' C_P and t-tests. The importance of this upper bound is that it delineates general categories of design matrices and model selection procedures for which this confidence interval has poor coverage properties. The upper bound is shown to be a finite-sample analogue of an earlier large-sample upper bound due to Kabaila and Leeb.
Keywords: adjusted R^2 statistic; AIC; 'best subset' regression; BIC; Mallows' criterion; t-tests
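The coverage deficiency the abstract describes can be illustrated by simulation. The sketch below is a hypothetical Monte Carlo illustration, not the paper's bound computation: it uses an invented two-regressor design with correlated covariates, a known error standard deviation, and model selection by a single preliminary t-test (a z-test, since sigma is treated as known) for dropping the second regressor. All of these choices are assumptions made for the example.

```python
import numpy as np

# Hypothetical illustration (not the paper's double-integral bound): estimate
# the coverage of the naive 95% CI for the coefficient of x1 when a
# preliminary test decides whether to keep x2 in the model.
rng = np.random.default_rng(0)

n = 20
x1 = np.linspace(-1.0, 1.0, n)
x2 = x1 + 0.5 * rng.standard_normal(n)   # deliberately correlated with x1
X_full = np.column_stack([np.ones(n), x1, x2])
X_red = X_full[:, :2]                    # reduced model: x2 dropped

beta1 = 1.0   # parameter of interest: coefficient of x1
sigma = 1.0   # error s.d., assumed known for simplicity
z = 1.96      # nominal 95% normal quantile

XtX_inv = np.linalg.inv(X_full.T @ X_full)
se_b2 = sigma * np.sqrt(XtX_inv[2, 2])   # s.e. of the x2 coefficient

def halfwidth(X, j):
    """Half-width of the naive CI for coefficient j, treating model X as fixed."""
    return z * sigma * np.sqrt(np.linalg.inv(X.T @ X)[j, j])

def simulate_coverage(beta2, reps=20000):
    """Coverage of the naive post-selection CI for beta1, at a given beta2."""
    hits = 0
    for _ in range(reps):
        y = X_full @ np.array([0.0, beta1, beta2]) + sigma * rng.standard_normal(n)
        bhat = XtX_inv @ X_full.T @ y
        if abs(bhat[2]) > z * se_b2:     # test keeps x2: use the full model
            est, hw = bhat[1], halfwidth(X_full, 1)
        else:                            # test drops x2: refit the reduced model
            bred = np.linalg.lstsq(X_red, y, rcond=None)[0]
            est, hw = bred[1], halfwidth(X_red, 1)
        hits += abs(est - beta1) <= hw
    return hits / reps
```

Scanning beta2 over a grid and taking the minimum simulated coverage gives a Monte Carlo counterpart of the minimum coverage probability that the paper bounds; in runs of this sketch with the correlated design above, the minimum falls well below the nominal 0.95, consistent with the poor coverage properties the abstract describes.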