Similar Articles
20 similar articles found.
1.
In this article, we study the stepwise AIC method for variable selection in linear regression modeling and compare it with other stepwise variable selection methods such as Partial F, Partial Correlation, and Semi-Partial Correlation. We then show mathematically that the stepwise AIC method and the other stepwise methods lead to the same procedure as Partial F. Hence, there is all the more reason to use the stepwise AIC method rather than the other stepwise methods, since it is a model selection method that is easy to manage and can be widely extended to more general models and applied to non-normally distributed data. We also treat two problems that frequently arise in applications: the validation of the selected variables and the problem of collinearity.
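As an illustration of the general idea only (not the authors' derivation), the following sketch performs forward stepwise selection by AIC with statsmodels on simulated data; the toy data, the helper aic_of, and the stopping rule are assumptions of this example.

```python
# Forward stepwise selection by AIC -- minimal sketch on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(100, 5)), columns=[f"x{i}" for i in range(5)])
y = 2.0 * X["x0"] - 1.5 * X["x2"] + rng.normal(size=100)

def aic_of(cols):
    """AIC of the OLS fit on the given columns (intercept-only model when cols is empty)."""
    design = sm.add_constant(X[cols]) if cols else np.ones((len(y), 1))
    return sm.OLS(y, design).fit().aic

selected, remaining = [], list(X.columns)
current_aic = aic_of(selected)
while remaining:
    # AIC of each candidate model obtained by adding one more variable
    scores = {c: aic_of(selected + [c]) for c in remaining}
    best = min(scores, key=scores.get)
    if scores[best] >= current_aic:      # stop when no addition improves AIC
        break
    selected.append(best)
    remaining.remove(best)
    current_aic = scores[best]

print("selected variables:", selected)
```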

2.
In order to explore and compare a finite number T of data sets by applying functional principal component analysis (FPCA) to the T associated probability density functions, we estimate these density functions by the multivariate kernel method. With the data set sizes fixed, we study the behaviour of this FPCA under the assumption that all the bandwidth matrices used in the density estimation are proportional to a common parameter h and to either the variance matrices or the identity matrix. In this context, we propose a selection criterion for the parameter h that depends only on the data and the FPCA method. Then, on simulated examples, we compare the quality of approximation of the FPCA when the bandwidth matrices are selected using either the previous criterion or two other classical bandwidth selection methods, namely a plug-in and a cross-validation method.
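A minimal sketch of the pipeline under simplifying assumptions (univariate simulated samples, a hand-picked common bandwidth factor h, and kernel variances proportional to each sample's variance): the T densities are estimated on a shared grid and a PCA is run on the resulting curves. The paper's data-driven criterion for selecting h is not reproduced here.

```python
# Estimate T kernel densities with a common bandwidth factor and run a PCA on them -- sketch only.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
samples = [rng.normal(loc=m, scale=1.0, size=200) for m in (0.0, 0.5, 1.0, 2.0)]  # T = 4 data sets
grid = np.linspace(-4.0, 6.0, 400)
h = 0.4                                   # common bandwidth factor (to be selected in practice)

# a scalar bw_method is used by gaussian_kde as the factor multiplying each sample's covariance
densities = np.vstack([gaussian_kde(s, bw_method=h)(grid) for s in samples])

pca = PCA(n_components=2)
scores = pca.fit_transform(densities)     # coordinates of the T estimated densities on the first axes
print(scores)
```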

3.
Strong orthogonal arrays (SOAs) were recently introduced and studied as a class of space-filling designs for computer experiments. An important problem that has not been addressed in the literature is that of design selection for such arrays. In this article, we conduct a systematic investigation into this problem, focusing on the most useful SOA(n, m, 4, 2+)s and SOA(n, m, 4, 2)s. We first address the problem of design selection for SOAs of strength 2+ by examining their three-dimensional projections, and both theoretical and computational results are presented. When SOAs of strength 2+ do not exist, we formulate a general framework for the selection of SOAs of strength 2 by looking at their two-dimensional projections. The approach is fruitful, as it is applicable when SOAs of strength 2+ do not exist and it gives rise to them when they do. The Canadian Journal of Statistics 47: 302–314; 2019 © 2019 Statistical Society of Canada

4.
Variable selection is an important issue in all regression analyses, and in this article we investigate simultaneous variable selection in joint location and scale models of the skew-t-normal distribution when the data set under consideration involves heavy-tailed and asymmetric outcomes. We propose a unified penalized likelihood method which can simultaneously select significant variables in the location and scale models; the proposed method therefore performs parameter estimation and variable selection in the two models at the same time. With appropriate selection of the tuning parameters, we establish the consistency and the oracle property of the regularized estimators. The proposed estimators are compared through simulation studies.

5.
In this paper we propose a new identification method, based on the residual white noise autoregressive criterion (Pukkila et al., 1990), to select the order of VARMA structures. Results from extensive simulation experiments based on different model structures with varying numbers of observations and component series are used to demonstrate the performance of this new procedure. We also use economic and business data to compare the model structures selected by this order selection method with those identified in other published studies.

6.
Stepwise methods for variable selection are frequently used to determine the predictors of an outcome in generalized linear models. Although stepwise selection is widely used within the scientific community, it is well known that tests on the explained deviance of the selected model are biased. This arises from the fact that the traditional test statistics upon which these methods are based were intended for testing pre-specified hypotheses, whereas the tested model is selected through a data-driven procedure; a multiplicity problem therefore arises. In this work, we define and discuss a nonparametric procedure to adjust the p-value of the model selected by any stepwise selection method. The unbiasedness and consistency of the method are also proved. A simulation study shows the validity of this procedure. Theoretical differences with previous works in the same field are also discussed.
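A hedged sketch of one way such an adjustment can be organised, on simulated Gaussian data with a toy forward-AIC selector (the paper's actual nonparametric procedure and test statistic may differ): the selection is redone on permuted responses and the observed explained deviance is compared with its permutation distribution.

```python
# Permutation-style adjustment of the explained deviance of a stepwise-selected model -- sketch only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, p = 80, 6
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] + rng.normal(size=n)

def stepwise_deviance(resp):
    """Forward-select columns by AIC and return the explained deviance of the final model."""
    selected, remaining = [], list(range(p))
    best_aic = sm.OLS(resp, np.ones((n, 1))).fit().aic
    while remaining:
        aics = {j: sm.OLS(resp, sm.add_constant(X[:, selected + [j]])).fit().aic for j in remaining}
        j_best = min(aics, key=aics.get)
        if aics[j_best] >= best_aic:
            break
        selected.append(j_best); remaining.remove(j_best); best_aic = aics[j_best]
    null = sm.OLS(resp, np.ones((n, 1))).fit()
    full = sm.OLS(resp, sm.add_constant(X[:, selected])).fit() if selected else null
    return null.ssr - full.ssr            # Gaussian case: explained deviance = reduction in SSR

obs = stepwise_deviance(y)
perm = np.array([stepwise_deviance(rng.permutation(y)) for _ in range(200)])
adj_p = (1 + np.sum(perm >= obs)) / (1 + len(perm))   # p-value that accounts for the selection step
print(adj_p)
```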

7.
The varying coefficient model (VCM) is an important generalization of the linear regression model, and many existing estimation procedures for the VCM are built on the L2 loss, which is popular for its mathematical beauty but is not robust to non-normal errors and outliers. In this paper, we address both the robustness and the efficiency of estimation and variable selection for the VCM by using a convex combination of the L1 and L2 losses instead of the quadratic loss alone. Using local linear modeling, the asymptotic normality of the estimator is derived, and a selection method is proposed for the weight in the combined L1 and L2 loss. The variable selection procedure is then obtained by combining local kernel smoothing with the adaptive group LASSO. With appropriate selection of the tuning parameters by the Bayesian information criterion (BIC), the theoretical properties of the new procedure, including consistency in variable selection and the oracle property in estimation, are established. The finite sample performance of the new method is investigated through simulation studies and the analysis of body fat data. Numerical studies show that the new method is better than, or at least as good as, the least-squares-based method in terms of both robustness and efficiency for variable selection.
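To show only the combined-loss idea, the sketch below fits a plain linear model by minimising a weighted sum of absolute and squared residuals; the weight w, the simulated heavy-tailed data, and the use of a generic optimiser are assumptions of this example, and the local linear smoothing and adaptive group LASSO steps of the paper are omitted.

```python
# Estimation under a convex combination of L1 and L2 losses -- sketch on a plain linear model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 3))
beta_true = np.array([1.0, 0.0, -2.0])
y = X @ beta_true + rng.standard_t(df=2, size=n)   # heavy-tailed errors

w = 0.5                                            # weight between the absolute and squared losses

def combined_loss(beta):
    r = y - X @ beta
    return w * np.sum(np.abs(r)) + (1.0 - w) * np.sum(r ** 2)

beta_hat = minimize(combined_loss, x0=np.zeros(3), method="Nelder-Mead").x
print(beta_hat)
```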

8.
The main focus of our paper is to compare the performance of different model selection criteria used for multivariate reduced rank time series. We consider one of the most commonly used reduced rank models, the reduced rank vector autoregression (RRVAR(p, r)) introduced by Velu et al. [Reduced rank models for multiple time series. Biometrika. 1986;73(1):105–118]. Our study includes the most popular model selection criteria, divided into two groups: simultaneous selection criteria and two-step selection criteria. Methods from the former group select an autoregressive order p and a rank r simultaneously, while two-step criteria first choose an optimal order p (using model selection criteria intended for the unrestricted VAR model) and then select an optimal rank r of the coefficient matrices (e.g. by means of sequential testing). The criteria considered include well-known information criteria (such as the Akaike information criterion, the Schwarz criterion, the Hannan–Quinn criterion, etc.) as well as widely used sequential tests (e.g. the Bartlett test) and the bootstrap method. An extensive simulation study is carried out in order to investigate the efficiency of all model selection criteria included in our study. The analysis covers 34 methods, namely 6 simultaneous methods and 28 two-step approaches. In order to analyse carefully how different factors affect the performance of the model selection criteria, we consider over 150 simulation settings. In particular, we investigate the influence of the following factors: time series dimension, covariance structure, level of correlation among components, and level of noise (variance). Moreover, we analyse the prediction accuracy of the RRVAR model and compare it with results obtained for the unrestricted vector autoregression. We also present a real data application of model selection criteria for the RRVAR model using Polish macroeconomic time series data observed over the period 1997–2007.
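As a small sketch of the first step of a two-step criterion only, the code below chooses the autoregressive order p of an unrestricted VAR by standard information criteria on a simulated three-dimensional series; maxlags=6 is an arbitrary choice, and neither the rank-selection step nor the reduced-rank fit itself is shown.

```python
# Step 1 of a two-step criterion: order selection for the unrestricted VAR -- sketch only.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(7)
n, k = 300, 3
eps = rng.normal(size=(n, k))
data = np.zeros((n, k))
for t in range(1, n):
    data[t] = 0.5 * data[t - 1] + eps[t]   # simple VAR(1) data

order_res = VAR(data).select_order(maxlags=6)
print(order_res.selected_orders)            # order p chosen by AIC, BIC, HQIC and FPE
```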

9.
Although the t-type estimator is a kind of M-estimator with scale optimization, it has some advantages over the M-estimator. In this article, we first propose a t-type joint generalized linear model as a robust extension of the classical joint generalized linear models for data containing extreme or outlying observations. Next, we develop a t-type pseudo-likelihood (TPL) approach, which can be viewed as a robust version of the existing pseudo-likelihood (PL) approach. To determine which variables significantly affect the variance of the response variable, we then propose a unified penalized maximum TPL method to simultaneously select significant variables for the mean and dispersion models in t-type joint generalized linear models; the method thus performs parameter estimation and variable selection in the mean and dispersion models at the same time. With appropriate selection of the tuning parameters, we establish the consistency and the oracle property of the regularized estimators. Simulation studies are conducted to illustrate the proposed methods.

10.
In this article, we propose a more general criterion, called the Sp-criterion, for subset selection in the multiple linear regression model. Many subset selection methods are based on the least squares (LS) estimator of β, but whenever the data contain an influential observation or the distribution of the error variable deviates from normality, the LS estimator performs 'poorly' and hence a method based on this estimator (for example, Mallows' Cp-criterion) tends to select a 'wrong' subset. The proposed method overcomes this drawback, and its main feature is that it can be used with any type of estimator of β (either the LS estimator or any robust estimator) without any modification of the proposed criterion. Moreover, the technique is operationally simpler to implement than other existing criteria. The method is illustrated with examples.

11.
The problem of detecting multiple undocumented change-points in a historical temperature sequence with a simple linear trend is formulated as a linear model. We apply the adaptive least absolute shrinkage and selection operator (Lasso) to estimate the number and locations of the change-points, with model selection criteria used to choose the Lasso smoothing parameter. As the adaptive Lasso may overestimate the number of change-points, we perform post-selection on the change-points detected by the adaptive Lasso using multivariate t simultaneous confidence intervals. Our method is demonstrated on annual temperature data (1902–2000) from Tuscaloosa, Alabama.
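A rough sketch of detecting change-points with a Lasso on step indicators, on a simulated series with a linear trend (the trend is projected out so that it is not penalised); the ridge pilot fit used to build adaptive weights, the fixed penalty level, and the threshold are assumptions of this example, and the model-selection choice of the smoothing parameter and the post-selection confidence intervals of the paper are not reproduced.

```python
# Change-point detection via (adaptive) Lasso on step indicators -- illustrative sketch.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(4)
n = 99
t = np.arange(n)
y = 0.01 * t + 0.8 * (t >= 40) - 0.6 * (t >= 70) + rng.normal(scale=0.3, size=n)

steps = (t[:, None] >= np.arange(1, n)[None, :]).astype(float)  # indicators 1{t >= k}, k = 1,...,n-1

# project out intercept and linear trend so that they are left unpenalised
T = np.column_stack([np.ones(n), t])
P = np.eye(n) - T @ np.linalg.lstsq(T, np.eye(n), rcond=None)[0]
y_d, S_d = P @ y, P @ steps

# adaptive weights from a ridge pilot fit; rescaling the columns turns plain Lasso into adaptive Lasso
pilot = Ridge(alpha=1.0, fit_intercept=False).fit(S_d, y_d).coef_
wts = 1.0 / (np.abs(pilot) + 1e-6)
fit = Lasso(alpha=0.05, fit_intercept=False).fit(S_d / wts, y_d)
jumps = fit.coef_ / wts

print("estimated change-point locations:", np.flatnonzero(np.abs(jumps) > 1e-8) + 1)
```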

12.
In this article, we consider the problem of selecting functional variables using L1 regularization in a functional linear regression model with a scalar response and functional predictors, in the presence of outliers. Since the LASSO is a special case of penalized least-squares regression with an L1 penalty, it suffers from heavy-tailed errors and/or outliers in the data. Recently, the least absolute deviation (LAD) and LASSO methods have been combined (the LAD-LASSO regression method) to carry out robust parameter estimation and variable selection simultaneously for the multiple linear regression model. However, variable selection of functional predictors based on the LASSO fails, since multiple parameters exist for each functional predictor. The group LASSO is therefore used to select functional predictors, since it selects grouped rather than individual variables. In this study, we propose a robust functional predictor selection method, the LAD-group LASSO, for a functional linear regression model with a scalar response and functional predictors. We illustrate the performance of the LAD-group LASSO on both simulated and real data.

13.
In this article, a robust variable selection procedure based on the weighted composite quantile regression (WCQR) is proposed. Compared with the composite quantile regression (CQR), WCQR is robust to heavy-tailed errors and outliers in the explanatory variables. For the choice of the weights in the WCQR, we employ a weighting scheme based on the principal component method. To select variables with grouping effect, we consider WCQR with SCAD-L2 penalization. Furthermore, under some suitable assumptions, the theoretical properties, including the consistency and oracle property of the estimator, are established with a diverging number of parameters. In addition, we study the numerical performance of the proposed method in the case of ultrahigh-dimensional data. Simulation studies and real examples are provided to demonstrate the superiority of our method over the CQR method when there are outliers in the explanatory variables and/or the random error is from a heavy-tailed distribution.

14.
In this work, we develop a method of adaptive non-parametric estimation based on 'warped' kernels. The aim is to estimate a real-valued function s from a sample of random couples (X, Y). We work with the transformed data (Φ(X), Y), where Φ is a one-to-one function, to build a collection of kernel estimators. The data-driven bandwidth selection is performed with a method inspired by Goldenshluger and Lepski (Ann. Statist., 39, 2011, 1608). The method can handle various problems such as additive and multiplicative regression, conditional density estimation, hazard rate estimation based on randomly right-censored data, and cumulative distribution function estimation from current-status data. The interest is threefold. First, the squared-bias/variance trade-off is realized automatically. Next, non-asymptotic risk bounds are derived. Lastly, the estimator is easily computed thanks to its simple expression; a short simulation study is presented.
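A hedged sketch of the warping idea for regression only, with the empirical CDF as Φ, a Gaussian kernel and a fixed bandwidth (the Goldenshluger-Lepski bandwidth selection and the other settings covered by the paper are not reproduced): since Φ(X) is approximately uniform, the warped estimator needs no random denominator.

```python
# Warped-kernel regression estimate with a fixed bandwidth -- illustrative sketch.
import numpy as np
from scipy.stats import rankdata, norm

rng = np.random.default_rng(8)
n = 400
X = rng.exponential(size=n)
Y = np.sin(X) + rng.normal(scale=0.2, size=n)

U = rankdata(X) / n                                # warped design Phi_hat(X_i) via the empirical CDF
h = 0.05

def s_hat(x):
    u = np.interp(x, np.sort(X), np.sort(U))       # Phi_hat(x) from the empirical CDF
    return np.mean(Y * norm.pdf((u - U) / h)) / h  # warped kernel estimator, Gaussian kernel

print(s_hat(1.0), np.sin(1.0))                     # estimate vs. true value at x = 1
```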

15.
Traditional parametric and nonparametric regression techniques encounter serious oversmoothing problems when jump-point discontinuities exist in the underlying mean function. Recently, Chu, Glad, Godtliebsen and Marron (1998) developed a method using a modified M-smoothing technique to preserve jumps and spikes while producing a smooth estimate of the mean function. The performance of Chu et al.'s (1998) method is quite sensitive to the choice of the required bandwidths g and h, and it is not obvious how to extend certain commonly used automatic bandwidth selection procedures when jumps and spikes are present. In this paper we propose a rule-of-thumb method for choosing the smoothing parameters based on asymptotic optimal bandwidth formulas and robust estimates of the unknown quantities. We also evaluate the proposed bandwidth selection method via a small simulation study.
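For illustration, a generic Silverman-type rule of thumb with a robust spread estimate is sketched below; it is not the asymptotically optimal bandwidth formula derived in the paper, only an example of combining a plug-in constant with robust estimates of the unknown quantities.

```python
# Rule-of-thumb bandwidth with a robust scale estimate -- generic sketch.
import numpy as np

def rule_of_thumb_bandwidth(x):
    """0.9 * min(sd, IQR/1.34) * n^(-1/5): a robust spread resists outliers and spikes."""
    x = np.asarray(x, dtype=float)
    n = x.size
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    spread = min(np.std(x, ddof=1), iqr / 1.34)
    return 0.9 * spread * n ** (-1 / 5)

rng = np.random.default_rng(5)
print(rule_of_thumb_bandwidth(rng.normal(size=500)))
```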

16.
The Birnbaum-Saunders (BS) distribution is an asymmetric probability model that is receiving considerable attention. In this article, we propose a methodology based on a new class of BS models generated from the Student-t distribution, and we obtain a recurrence relationship for a BS distribution based on a nonlinear skew-t distribution. Model parameter estimators are obtained by the maximum likelihood method and evaluated by Monte Carlo simulations. We illustrate the results by analyzing two real data sets; these analyses show and discuss the adequacy of the proposed model using model selection tools.

17.
In this article, we are concerned with detecting the true structure of a functional polynomial regression with autoregressive (AR) errors. The first issue is to detect which orders of the polynomial are significant in functional polynomial regression. The second issue is to detect which orders of the AR process in the AR errors are significant. We propose a shrinkage method to deal with the two problems: polynomial order selection and autoregressive order selection. Simulation studies demonstrate that the new method can identify the true structure. One empirical example is also presented to illustrate the usefulness of our method.

18.
Regression procedures are not only hindered by large p and small n, but can also suffer when outliers are present or the data-generating mechanisms are heavy-tailed. Since penalized estimates like the least absolute shrinkage and selection operator (LASSO) are equipped to deal with the large-p, small-n setting by encouraging sparsity, we combine a LASSO-type penalty with the absolute deviation loss function, instead of the standard least-squares loss, to handle the presence of outliers and heavy tails. The model is cast in a Bayesian setting, and a Gibbs sampler is derived to sample efficiently from the posterior distribution. We compare our method to existing methods in a simulation study as well as on a prostate cancer data set and a base-deficit data set from trauma patients.

19.
Lasso and other regularization procedures are attractive methods for variable selection, subject to a proper choice of the shrinkage parameter. Given a set of potential subsets produced by a regularization algorithm, a consistent model selection criterion is proposed to select the best one among this preselected set. The approach leads to a fast and efficient procedure for variable selection, especially in high-dimensional settings. Model selection consistency of the suggested criterion is proven when the number of covariates d is fixed. Simulation studies suggest that the criterion still enjoys model selection consistency when d is much larger than the sample size. The simulations also show that our approach to variable selection works surprisingly well in comparison with existing competitors. The method is also applied to a real data set.
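One simple criterion in this spirit (not necessarily the one proposed in the paper) can be sketched as follows: generate candidate subsets along a Lasso path, refit each subset by OLS, and pick the subset with the smallest BIC. The simulated data and the number of path points are assumptions of this example.

```python
# Select among Lasso-path subsets with a BIC refit -- sketch of a consistent-style criterion.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(6)
n, d = 100, 20
X = rng.normal(size=(n, d))
y = X[:, 0] - 2.0 * X[:, 3] + rng.normal(size=n)

_, coefs, _ = lasso_path(X, y, n_alphas=50)        # coefficients along the regularisation path
subsets = {tuple(np.flatnonzero(coefs[:, k])) for k in range(coefs.shape[1])}

def bic(cols):
    design = sm.add_constant(X[:, list(cols)]) if cols else np.ones((n, 1))
    return sm.OLS(y, design).fit().bic

best = min(subsets, key=bic)
print("selected subset:", best)
```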

20.
Qiu and Sheng proposed a powerful and robust two-stage procedure to compare two hazard rate functions. In this paper we improve their method by using the Fisher test to combine the asymptotically independent p-values obtained from the two stages of their procedure. In addition, we extend the procedure to situations with multiple hazard rate functions. Our comprehensive simulation study shows that the proposed method performs well in terms of controlling the type I error rate and of detection power. Three real data applications are considered to illustrate the use of the new method.
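A minimal sketch of the combination step only, using scipy's implementation of Fisher's method; the two p-values here are placeholders, not the test statistics produced by the two-stage procedure itself.

```python
# Combine two asymptotically independent stage-wise p-values with Fisher's method -- sketch only.
from scipy.stats import combine_pvalues

p_stage1, p_stage2 = 0.08, 0.03                    # placeholder p-values from the two stages
stat, p_combined = combine_pvalues([p_stage1, p_stage2], method="fisher")
print(stat, p_combined)
```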
