Similar Literature
1.
In real-data analysis, deciding on the best subset of variables in a regression model is an important problem. Akaike's information criterion (AIC) is often used to select variables in many fields. When the sample size is not large, the AIC has a non-negligible bias that detrimentally affects variable selection. The present paper considers a bias correction of the AIC for selecting variables in the generalized linear model (GLM). The GLM can express a number of statistical models by changing the distribution and the link function, including the normal linear regression model, the logistic regression model, and the probit model, all of which are commonly used in applied fields. In the present study, we obtain a simple expression for a bias-corrected AIC (corrected AIC, or CAIC) in GLMs, and we provide R code based on our formula. A numerical study reveals that the CAIC performs better than the AIC for variable selection.
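The paper's CAIC formula for general GLMs is not reproduced in the abstract. As a hedged illustration of the underlying idea, the sketch below compares the AIC with the classical small-sample correction of Hurvich and Tsai (AICc) in the normal linear special case, using exhaustive subset search; Python, numpy, and the toy data are assumptions of the sketch, not the paper's setup.

```python
# Sketch: exhaustive subset selection by AIC vs. the small-sample corrected
# AICc, shown here for the normal linear model only -- the paper's CAIC for
# general GLMs has its own (different) correction term.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 6                      # small n, so the AIC bias matters
X = rng.normal(size=(n, p))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)

def gaussian_ic(y, Xs):
    """Log-likelihood based AIC and AICc for an OLS fit with intercept."""
    Z = np.column_stack([np.ones(len(y)), Xs])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / len(y)            # ML variance estimate
    k = Z.shape[1] + 1                         # coefficients + variance
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    aic = -2 * loglik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (len(y) - k - 1)
    return aic, aicc

best = {}
for size in range(1, p + 1):
    for subset in combinations(range(p), size):
        aic, aicc = gaussian_ic(y, X[:, subset])
        for name, val in (("AIC", aic), ("AICc", aicc)):
            if name not in best or val < best[name][0]:
                best[name] = (val, subset)

print(best)   # with n = 30 the two criteria can disagree on the subset
```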

2.
We study the correlation structure for a mixture of ordinal and continuous repeated measures using a Bayesian approach. We assume a multivariate probit model for the ordinal variables and a normal linear regression for the continuous variables, where the latent normal variables underlying the ordinal data are correlated with the continuous variables in the model. Because of the probit model assumption, we must sample a covariance matrix with some of its diagonal elements equal to one. The key computational idea is parameter-extended data augmentation, which applies the Metropolis-Hastings algorithm to obtain a sample from the posterior distribution of the covariance matrix under the relevant restrictions. The methodology is illustrated through a simulated example and through an application to data from the UCLA Brain Injury Research Center.
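As a minimal sketch of the restricted-covariance sampling step, the routine below works in the 2x2 case, where a covariance matrix with unit diagonal reduces to a single correlation parameter; the paper's parameter-extended data augmentation for higher-dimensional matrices is not reproduced, and the flat prior and step size are illustrative assumptions.

```python
# Metropolis-Hastings on a covariance matrix whose diagonal is restricted
# to one, in miniature: 2x2, so only the correlation rho is free.
import numpy as np

rng = np.random.default_rng(1)
true_rho = 0.6
Z = rng.multivariate_normal([0, 0], [[1, true_rho], [true_rho, 1]], size=200)

def loglik(rho, Z):
    """Bivariate normal log-likelihood with unit variances (constant dropped)."""
    q = Z[:, 0] ** 2 - 2 * rho * Z[:, 0] * Z[:, 1] + Z[:, 1] ** 2
    return -0.5 * (len(Z) * np.log(1 - rho ** 2) + q.sum() / (1 - rho ** 2))

rho, draws, step = 0.0, [], 0.15
for _ in range(5000):
    prop = rho + step * rng.normal()   # symmetric random-walk proposal
    # flat prior on (-1, 1); proposals outside the region are rejected
    if -1 < prop < 1 and np.log(rng.uniform()) < loglik(prop, Z) - loglik(rho, Z):
        rho = prop
    draws.append(rho)

print(np.mean(draws[1000:]))   # posterior mean should sit near 0.6
```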

3.
Many areas of statistical modeling are plagued by the “curse of dimensionality,” in which there are more variables than observations. This is especially true when developing functional regression models where the independent dataset is some type of spectral decomposition, such as data from near-infrared spectroscopy. While we could develop a very complex model by simply taking enough samples (such that n > p), this could prove impossible or prohibitively expensive. In addition, a regression model developed like this could turn out to be highly inefficient, as spectral data usually exhibit high multicollinearity. In this article, we propose a two-part algorithm for selecting an effective and efficient functional regression model. Our algorithm begins by evaluating a subset of discrete wavelet transformations, allowing for variation in both wavelet and filter number. Next, we perform an intermediate processing step to remove variables with low correlation to the response data. Finally, we use the genetic algorithm to perform a stochastic search through the subset regression model space, driven by an information-theoretic objective function. We allow our algorithm to develop the regression model for each response variable independently, so as to optimally model each variable. We demonstrate our method on the familiar biscuit dough dataset, which has been used in a similar context by several researchers. Our results demonstrate both the flexibility and the power of our algorithm. For each response variable, a different subset model is selected, and different wavelet transformations are used. The models developed by our algorithm show an improvement, as measured by lower mean error, over results in the published literature.
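A hedged sketch of the first two stages (wavelet decomposition and correlation screening) is given below; the PyWavelets package, the db4 wavelet, the threshold, and the toy data are all assumptions, and the genetic-algorithm search over subset models is omitted.

```python
# Sketch of the first two stages of the algorithm: a discrete wavelet
# transform of each spectrum followed by a correlation screen against the
# response; the GA search that follows these stages is not shown.
import numpy as np
import pywt

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 40, 256
spectra = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)  # smooth-ish curves
y = spectra[:, 100] + 0.1 * rng.normal(size=n_samples)                # toy response

# Stage 1: wavelet-decompose every spectrum and flatten the coefficients.
def dwt_features(row, wavelet="db4", level=3):
    return np.concatenate(pywt.wavedec(row, wavelet, level=level))

W = np.array([dwt_features(row) for row in spectra])

# Stage 2: drop coefficients with low absolute correlation to the response.
r = np.array([np.corrcoef(W[:, j], y)[0, 1] for j in range(W.shape[1])])
keep = np.abs(r) > 0.3          # threshold is an arbitrary illustration
W_screened = W[:, keep]

print(W.shape, "->", W_screened.shape)  # candidate pool for the GA search
```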

4.
Albert and Chib introduced a complete Bayesian method for analyzing data arising from the generalized linear model, using a Gibbs sampling algorithm facilitated by latent variables. Recently, Cowles proposed an alternative algorithm to accelerate the convergence of the Albert-Chib algorithm. The novelty of this latter algorithm is that it uses a Hastings step to generate the latent variables and bin-boundary parameters jointly, rather than individually from their respective full conditionals. In the same spirit, we reparameterize the cumulative-link generalized linear model to accelerate the convergence of Cowles' algorithm even further. One important advantage of our method is that for the three-bin problem it does not require the Hastings algorithm. For problems with more than three bins, where a Hastings step is required, we provide a proposal density based on the Dirichlet distribution, which is more natural than the truncated normal density used in the competing algorithm. Using diagnostic procedures recommended in the literature for Markov chain Monte Carlo algorithms (both single and multiple runs), we show that our algorithm is substantially better than the recently proposed one: it provides faster convergence and smaller autocorrelations between the iterates. Using the probit link function, extensive results are obtained for the three-bin and five-bin multinomial ordinal data problems.
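The sketch below illustrates what a Dirichlet proposal on bin boundaries can look like: cutpoints are mapped to baseline cell probabilities, perturbed by a Dirichlet draw centred at the current values, and mapped back. The concentration parameter is illustrative, the change-of-variables Jacobian needed in the full acceptance ratio is deliberately omitted, and this is not the paper's exact construction.

```python
import numpy as np
from scipy.stats import norm, dirichlet

def cutpoints_to_probs(cuts):
    """Baseline cell probabilities Phi(g_k) - Phi(g_{k-1}), g_0=-inf, g_K=+inf."""
    cdf = np.concatenate([[0.0], norm.cdf(cuts), [1.0]])
    return np.diff(cdf)

def probs_to_cutpoints(p):
    return norm.ppf(np.cumsum(p)[:-1])

def propose(cuts, c=200.0, rng=np.random.default_rng(0)):
    """Dirichlet proposal centred at the current cutpoints' cell probabilities."""
    p_old = cutpoints_to_probs(cuts)
    p_new = rng.dirichlet(c * p_old)
    # log q(old|new) - log q(new|old), one ingredient of the MH ratio;
    # NOTE: the Jacobian of the probability<->cutpoint map, also needed in
    # the full acceptance ratio on cutpoint space, is omitted from this sketch.
    log_q_ratio = (dirichlet.logpdf(p_old, c * p_new)
                   - dirichlet.logpdf(p_new, c * p_old))
    return probs_to_cutpoints(p_new), log_q_ratio

cuts = np.array([-1.0, -0.2, 0.5, 1.3])     # four cutpoints -> five bins
new_cuts, log_q = propose(cuts)
print(new_cuts, log_q)
```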

5.
This article describes a convenient method of selecting Metropolis–Hastings proposal distributions for multinomial logit models. There are two key ideas involved. The first is that multinomial logit models have a latent variable representation similar to that exploited by Albert and Chib (J Am Stat Assoc 88:669–679, 1993) for probit regression. Augmenting the latent variables replaces the multinomial logit likelihood function with the complete-data likelihood for a linear model with extreme value errors. While no conjugate prior is available for this model, a least squares estimate of the parameters is easily obtained, and its asymptotic sampling distribution is Gaussian with known variance. The second key idea is to generate a Metropolis–Hastings proposal distribution by conditioning on this estimator instead of the full data set. The resulting sampler has many of the benefits of so-called tailored or approximation Metropolis–Hastings samplers; however, because the proposal distributions are available in closed form, they can be implemented without numerical methods for exploring the posterior distribution. The algorithm is geometrically ergodic, its computational burden is minor, and it requires minimal user input. Improvements to the sampler's mixing rate are investigated. The algorithm is also applied to partial credit models describing ordinal item response data from the 1998 National Assessment of Educational Progress, and its applications to hierarchical models and Poisson regression are briefly discussed.
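The latent-variable representation can be checked directly by simulation: adding independent Gumbel (extreme value) errors to the linear utilities and taking the argmax reproduces multinomial-logit choice probabilities. The sketch below performs only this check under toy values; it is not the sampler itself.

```python
# Check of the latent-variable representation behind the sampler: with
# independent Gumbel errors added to the linear utilities, the argmax
# choice reproduces multinomial-logit probabilities.
import numpy as np

rng = np.random.default_rng(3)
eta = np.array([0.4, -0.3, 0.0])          # linear predictors for 3 choices

# Closed-form multinomial logit probabilities
p_logit = np.exp(eta) / np.exp(eta).sum()

# Latent utilities: u_j = eta_j + Gumbel error, choice = argmax_j u_j
u = eta + rng.gumbel(size=(200_000, 3))
p_latent = np.bincount(u.argmax(axis=1), minlength=3) / len(u)

print(p_logit, p_latent)                  # the two should agree closely
```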

6.
This article presents a Bayesian analysis of a multinomial probit model by building on previous work that specified priors on identified parameters. The main contribution of our article is to propose a prior on the covariance matrix of the latent utilities that permits elements of the inverse of the covariance matrix to be identically zero. This allows a parsimonious representation of the covariance matrix when such parsimony exists. The methodology is applied to both simulated and real data, and its ability to obtain more efficient estimators of the covariance matrix and regression coefficients is assessed using simulated data.

7.
I propose a Lagrange multiplier test for the multinomial logit model against the dogit model (Gaudry and Dagenais 1979) as the alternative hypothesis. In view of the well-known restrictiveness of the independence-from-irrelevant-alternatives property implied by the multinomial logit model, a specification test has much to recommend it. Finite-sample properties of the test are studied in a Monte Carlo experiment, and the test's power against the nested multinomial logit model and the multinomial probit model is investigated. The test is found to be sensitive to the values of the regression parameters of the linear random utility function.

8.
A new functional form of the response probability for a qualitative response model is proposed. The new model is flexible enough to avoid the constraint of independence from irrelevant alternatives, which is perceived as a weakness of the multinomial logit model in some applications. It is computationally simpler than the multinomial probit model and is promising for analyzing problems with a moderate number of alternatives.

9.
The regression model with an ordinal outcome is widely used in many fields because of its effectiveness. Predictors measured with error and multicollinearity are long-standing problems that often arise in regression analysis, yet there are few studies on measurement error models with a general ordinal response, and even fewer that also address multicollinearity. The purpose of this article is to estimate the parameters of ordinal probit models with measurement error and multicollinearity. First, we propose regression calibration and refined regression calibration to estimate the parameters of ordinal probit models with measurement error. Second, we develop new methods to obtain estimators in the presence of both multicollinearity and measurement error in the ordinal probit model. Furthermore, we extend all the methods to quadratic ordinal probit models and discuss the corresponding situation for ordinal logistic models. These estimators are consistent and asymptotically normally distributed under general conditions. They are easy to compute, perform well, and are robust against the normality assumption for the predictor variables in our simulation studies. The proposed methods are applied to several real datasets.
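A minimal sketch of plain regression calibration is shown below for the binary probit special case, assuming classical additive error with known error variance; the ordinal cutpoints, the refined calibration, and the multicollinearity corrections of the paper are not reproduced, and statsmodels is an assumed dependency.

```python
# Regression calibration: replace the error-prone surrogate w by E[x | w]
# under the classical additive error model, then fit the probit as usual.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, sigma_u2 = 500, 0.5
x = rng.normal(size=n)                           # true, unobserved predictor
w = x + np.sqrt(sigma_u2) * rng.normal(size=n)   # error-prone surrogate
y = (0.5 + 1.0 * x + rng.normal(size=n) > 0).astype(int)

# Calibration step: E[x | w] = mu + (sig_x^2 / (sig_x^2 + sig_u^2)) (w - mu)
sigma_x2 = w.var() - sigma_u2                    # var(w) = var(x) + sigma_u^2
x_hat = w.mean() + (sigma_x2 / (sigma_x2 + sigma_u2)) * (w - w.mean())

naive = sm.Probit(y, sm.add_constant(w)).fit(disp=0)
calibrated = sm.Probit(y, sm.add_constant(x_hat)).fit(disp=0)
print(naive.params, calibrated.params)    # calibration should reduce attenuation
```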

10.
Variable selection is an important task in regression analysis, and the performance of a statistical model depends heavily on the chosen subset of predictors. Several methods exist for selecting the most relevant variables to construct a good model. In practice, however, the dependent variable may take positive continuous values and not be normally distributed; in such situations, the gamma distribution is more suitable than the normal for building a regression model. This paper introduces a heuristic approach that performs variable selection for gamma regression models using artificial bee colony optimization. We evaluated the proposed method against classical selection methods such as backward and stepwise selection. Both simulation studies and real-data examples demonstrate the accuracy of our selection procedure.
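A highly simplified artificial-bee-colony search over predictor subsets, scored by the AIC of a gamma (log-link) GLM, might look as follows; the colony size, trial limit, merged employed/onlooker phase, and statsmodels dependency are all assumptions of the sketch, not the authors' settings.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, p = 200, 8
X = rng.normal(size=(n, p))
mu = np.exp(0.3 + 0.8 * X[:, 0] - 0.6 * X[:, 2])
y = rng.gamma(shape=2.0, scale=mu / 2.0)          # gamma response with mean mu

def fitness(mask):
    """Negative AIC of the gamma GLM on the selected columns (higher = better)."""
    if not mask.any():
        return -np.inf
    Z = sm.add_constant(X[:, mask])
    fam = sm.families.Gamma(link=sm.families.links.Log())
    return -sm.GLM(y, Z, family=fam).fit().aic

def neighbour(mask):
    """Flip the inclusion status of one randomly chosen variable."""
    new = mask.copy()
    flip = rng.integers(p)
    new[flip] = ~new[flip]
    return new

n_sources, limit = 10, 5
sources = rng.uniform(size=(n_sources, p)) < 0.5   # random initial subsets
fits = np.array([fitness(s) for s in sources])
trials = np.zeros(n_sources, dtype=int)

for _ in range(30):
    for i in range(n_sources):        # employed + onlooker phases, merged here
        cand = neighbour(sources[i])
        f = fitness(cand)
        if f > fits[i]:
            sources[i], fits[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1
        if trials[i] > limit:          # scout: abandon stale source, restart
            sources[i] = rng.uniform(size=p) < 0.5
            fits[i], trials[i] = fitness(sources[i]), 0

best = fits.argmax()
print(np.flatnonzero(sources[best]), -fits[best])  # chosen columns and AIC
```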

11.
Using a multivariate latent variable approach, this article proposes new general models to analyze correlated bounded continuous and categorical (nominal and/or ordinal) responses with and without non-ignorable missing values. First, we discuss regression methods for jointly analyzing continuous, nominal, and ordinal responses, motivated by data from studies of toxicity development. Second, using the beta and Dirichlet distributions, we extend the models so that bounded continuous responses are substituted for some of the continuous responses. The joint distribution of the bounded continuous, nominal, and ordinal variables is decomposed into a marginal multinomial distribution for the nominal variable and a conditional multivariate joint distribution for the bounded continuous and ordinal variables given the nominal variable. We estimate the regression parameters of the new general location models using the maximum likelihood method. A sensitivity analysis is also performed, studying the influence of small perturbations of the parameters of the missingness mechanisms on the maximal normal curvature. The proposed models are applied to two data sets: BMI, steatosis, and osteoporosis data, and Tehran household expenditure budgets.

12.
This paper compares different versions of the simulated counterparts of the Wald test, the score test, and the likelihood ratio test in one- and multiperiod multinomial probit models. Monte Carlo experiments show that the simple form of the simulated likelihood ratio test delivers relatively robust results for testing several multinomial probit model specifications. In contrast, including the Hessian matrix of the simulated log-likelihood function in the simulated score test and, in the multiperiod multinomial probit model, including quasi-maximum likelihood theory in the simulated likelihood ratio test leads to substantial computational problems. Combining quasi-maximum likelihood theory with the simulated Wald test or the simulated score test is not systematically superior to the other versions of these two simulated classical tests either. Neither an increase in the number of observations nor an increase in the number of random draws in the incorporated Geweke-Hajivassiliou-Keane simulator systematically leads to closer agreement between the frequencies of type I errors and the nominal significance levels. An increase in the number of observations only decreases the frequencies of type II errors, particularly for simulated classical testing of multiperiod multinomial probit model specifications.

13.
This note considers a method for estimating regression parameters from data containing measurement errors, using natural estimates of the unobserved explanatory variables. It is shown that the resulting estimator is consistent not only in the usual linear regression model but also in the probit model and in regression models with censoring or truncation. However, it fails to be consistent in nonlinear regression models except in special cases.

14.
We develop an efficient way to select the best subset autoregressive model with exogenous variables and generalized autoregressive conditional heteroscedasticity errors. One main feature of our method is that it selects important autoregressive and exogenous variables and, at the same time, estimates the unknown parameters. The proposed method uses the stochastic search idea; by adopting Markov chain Monte Carlo techniques, we can identify the best subset model from a large number of possible choices. A simulation experiment shows that the method is very effective, and misspecification in the mean equation can also be detected by our model selection method. In an application to stock-market data from seven countries, the lag-1 US return is found to have a strong influence on the other stock-market returns.

15.
Zero-inflated Poisson regression is commonly used to analyze data with excessive zeros. Although many models have been developed to fit zero-inflated data, most of them depend strongly on special features of the individual data; for example, new models are needed when dealing with data that are both truncated and inflated. In this paper, we propose a model flexible enough to handle inflation and truncation simultaneously: a mixture of a multinomial logistic and a truncated Poisson regression, in which the multinomial logistic component models the occurrence of the excessive counts and the truncated Poisson component models the counts assumed to follow a truncated Poisson distribution. The performance of our proposed model is evaluated through simulation studies, where it achieves the smallest mean absolute error and the best model fit. In the empirical example, the data are truncated with inflated values of zero and fourteen, and the results show that our model fits better than the competing models.
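The mixture pmf described above can be written down directly. In the sketch below, the three class probabilities come from a multinomial logistic (softmax) with fixed logits and the count component is a Poisson truncated to {0, ..., 14}; in the actual model both pieces would be regression functions of covariates, and the specific numbers here are purely illustrative.

```python
import numpy as np
from scipy.stats import poisson

def mixture_pmf(y, logits, lam, upper=14):
    """P(Y = y) for the zero/upper-inflated, upper-truncated Poisson mixture."""
    pi = np.exp(logits) / np.exp(logits).sum()                # multinomial logistic
    support = np.arange(upper + 1)
    tp = poisson.pmf(support, lam) / poisson.cdf(upper, lam)  # truncated Poisson
    return (pi[0] * (y == 0)            # inflation at zero
            + pi[1] * (y == upper)      # inflation at the truncation point
            + pi[2] * tp[y])            # ordinary truncated Poisson count

probs = mixture_pmf(np.arange(15), logits=np.array([0.5, -0.5, 1.0]), lam=4.0)
print(probs, probs.sum())               # sums to one over {0, ..., 14}
```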

16.
Partial linear varying coefficient models (PLVCM) are often considered for analysing longitudinal data because they strike a good balance between flexibility and parsimony. Existing estimation and variable selection methods for this model largely assume that it is known in advance which variables have linear effects and which have varying effects on the response; that is, the model structure is taken as determined. In applications, however, this is unrealistic. In this work, we propose a simultaneous structure estimation and variable selection method that performs coefficient estimation together with three types of selection: varying-effect selection, constant-effect selection, and relevant-variable selection. It can be implemented in one step via a penalized M-type regression, which uses a general loss function to treat mean, median, quantile, and robust mean regressions in a unified framework. Consistency of the three types of selection and the oracle property of the estimation are established. Simulation studies and a real data analysis also confirm the merits of our method.

17.
In panel studies, binary outcome measures together with time-stationary and time-varying explanatory variables are collected over time on the same individuals; a regression analysis for this type of data must therefore allow for the correlation among the outcomes of an individual. The multivariate probit model of Ashford and Sowden (1970) was the first regression model for multivariate binary responses. However, a likelihood analysis of the multivariate probit model with a general correlation structure in higher dimensions is intractable, owing to the maximization over high-dimensional integrals, which has severely restricted its applicability so far. Czado (1996) developed a Markov chain Monte Carlo (MCMC) algorithm to overcome this difficulty. In this paper we apply this algorithm to unemployment data from the Panel Study of Income Dynamics involving 11 waves of the panel study. In addition, we adapt Bayesian model checking techniques based on the posterior predictive distribution (see, for example, Gelman et al. (1996)) for the multivariate probit model; these help to identify mean and correlation specifications that fit the data well. C. Czado was supported by research grant OGP0089858 of the Natural Sciences and Engineering Research Council of Canada.

18.
The aim of this paper is to propose methods for the offline detection of change in the coefficients of a multinomial logistic regression model for categorical time series. The alternative to the null hypothesis of stationarity can be either that stationarity simply fails, or that there is a temporary change in the sequence. We use the efficient score vector of the partial likelihood function, which has several advantages. First, the alternative value of the parameter does not have to be estimated; hence, we have a procedure with a simple structure requiring only one parameter estimation, using all available observations. This is in contrast with generalized likelihood ratio-based change point tests. The efficient score vector is used in various ways. As a vector, its components correspond to the components of the multinomial logistic regression model's parameter vector. Using its quadratic form, a test can be defined for the presence of a change in any or all parameters; if there are too many parameters, one can test for any subset while treating the rest as nuisance parameters. Our motivating example is a DNA sequence with four categories, and our test shows that in the published data the distribution of the four categories is not stationary.
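In the simplest intercept-only case the efficient score of observation t is just the category indicator minus the fitted cell probabilities, and a quadratic-form CUSUM of the partial score sums locates the change. The sketch below illustrates only this special case on simulated data; covariates, the partial likelihood machinery, and the reference critical values (suprema of Bessel-type processes) are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
# four-category sequence with a shift in distribution halfway through
seq = np.concatenate([rng.choice(4, 300, p=[.25, .25, .25, .25]),
                      rng.choice(4, 300, p=[.40, .20, .20, .20])])
n = len(seq)

p_hat = np.bincount(seq, minlength=4) / n        # null (stationary) fit
onehot = np.eye(4)[seq][:, :3]                   # drop reference category
scores = onehot - p_hat[:3]                      # efficient scores per obs
S = np.cumsum(scores, axis=0)                    # partial score sums

V = np.diag(p_hat[:3]) - np.outer(p_hat[:3], p_hat[:3])  # score covariance
Vinv = np.linalg.inv(V)

stats = np.array([S[k] @ Vinv @ S[k] / n for k in range(n - 1)])
print(stats.argmax(), stats.max())   # peak should land near the true change
```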

19.
Variable selection is an important decision process in consumer credit scoring. With the rapid growth of the credit industry, especially after the rise of e-commerce, a huge amount of information on customer behavior has become available, providing a more informative basis for consumer credit scoring. In this study, a hybrid quadratic programming model with variable selection is proposed for consumer credit scoring problems. The model is solved with a bisection method based on the tabu search algorithm (BMTS), whose solutions provide alternative subsets of variables of different sizes. The final subset of variables used in the consumer credit scoring model is selected based on both size (the number of variables in a subset) and predictive (classification) accuracy. Simulation studies measure the performance of the proposed model, illustrating its effectiveness for simultaneous variable selection and classification.
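A minimal tabu-search sketch for the variable-selection layer is given below, with a flip-one-variable neighbourhood, a short tabu list of recently flipped indices, and in-sample logistic-regression accuracy as an illustrative score; the quadratic programming scoring model and the bisection wrapper of the paper are not reproduced, and scikit-learn is an assumed dependency.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, p = 300, 10
X = rng.normal(size=(n, p))
y = (X[:, 0] - X[:, 3] + 0.5 * rng.normal(size=n) > 0).astype(int)

def score(mask):
    """In-sample accuracy of a logistic model on the selected columns."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000).fit(X[:, mask], y)
    return clf.score(X[:, mask], y)

mask = rng.uniform(size=p) < 0.5
best_mask, best_score, tabu = mask.copy(), score(mask), []

for _ in range(40):
    # best single-flip neighbour among the non-tabu variables
    cands = []
    for j in (j for j in range(p) if j not in tabu):
        nb = mask.copy()
        nb[j] = ~nb[j]
        cands.append((score(nb), j, nb))
    s, j, nb = max(cands, key=lambda t: t[0])
    mask = nb
    tabu = (tabu + [j])[-3:]                 # tabu tenure of 3 moves
    if s > best_score:
        best_mask, best_score = nb.copy(), s

print(np.flatnonzero(best_mask), best_score)
```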
