Similar Literature
20 similar documents found (search time: 571 ms)
1.
We consider the problem of variable selection in high-dimensional partially linear models with longitudinal data. A variable selection procedure is proposed based on the smooth-threshold generalized estimating equation (SGEE). The proposed procedure automatically eliminates inactive predictors by setting the corresponding parameters to zero, and simultaneously estimates the nonzero regression coefficients by solving the SGEE. We establish the asymptotic properties in a high-dimensional framework where the number of covariates p_n increases as the number of clusters n increases. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed variable selection procedure.
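
A rough sketch of the smooth-threshold idea, assuming the usual smooth-threshold estimating-equation construction (the paper's exact working-correlation and threshold choices are not reproduced here): with GEE score S_n(\beta) and thresholds \hat\delta_j = \min\{1, \lambda/|\tilde\beta_j|^{1+\tau}\} built from an initial estimator \tilde\beta, one solves

  (I_{p_n} - \hat\Delta)\, S_n(\beta) + \hat\Delta\, \beta = 0, \qquad \hat\Delta = \mathrm{diag}(\hat\delta_1,\dots,\hat\delta_{p_n}),

so that components with \hat\delta_j = 1 are set exactly to zero, while the remaining coefficients solve the ordinary estimating equations.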

2.
ABSTRACT

We study partially linear models in which the linear covariates are endogenous and give rise to an over-identification problem. We propose combining the profile principle with local linear approximation and the generalized method of moments (GMM) to estimate the parameters of interest. We show that the profiled GMM estimators are root-n consistent and asymptotically normally distributed. By appropriately choosing the weight matrix, the estimators can attain the efficiency bound. We further consider variable selection, using the moment restrictions imposed on the endogenous variables, when the dimension of the covariates may diverge with the sample size, and propose a penalized GMM procedure that is shown to have the sparsity property. We establish asymptotic normality of the resulting estimators of the nonzero parameters. Simulation studies are presented to assess the finite-sample performance of the proposed procedure.
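
For reference, the generic GMM criterion behind such profiled estimators (the paper's specific profiled moment functions are not reproduced here): with moment conditions E[g(Z_i,\beta_0)] = 0 and a weight matrix W_n,

  \hat\beta = \arg\min_{\beta}\ \Big[\frac{1}{n}\sum_{i=1}^{n} g(Z_i,\beta)\Big]^{\top} W_n \Big[\frac{1}{n}\sum_{i=1}^{n} g(Z_i,\beta)\Big],

and the efficiency bound is attained when W_n consistently estimates \{\mathrm{Var}[g(Z_i,\beta_0)]\}^{-1}.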

3.
This article proposes a variable selection procedure for partially linear models with right-censored data via penalized least squares. We apply the SCAD penalty to select significant variables and estimate unknown parameters simultaneously. The sampling properties of the proposed procedure are investigated. The rate of convergence and the asymptotic normality of the proposed estimators are established. Furthermore, the SCAD-penalized estimators of the nonzero coefficients are shown to have the asymptotic oracle property. In addition, an iterative algorithm is proposed to find the solution of the penalized least squares. Simulation studies are conducted to examine the finite sample performance of the proposed method.
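
As background, the SCAD penalty is standardly defined through its derivative (this is the generic definition; the paper's adjustment for right censoring is not shown):

  p'_{\lambda}(\theta) = \lambda\Big\{ I(\theta\le\lambda) + \frac{(a\lambda-\theta)_+}{(a-1)\lambda}\, I(\theta>\lambda) \Big\}, \qquad \theta\ge 0,\ a>2\ (\text{commonly } a=3.7),

and the penalized least-squares criterion then takes the generic form \sum_{i=1}^{n}\big(y_i - x_i^{\top}\beta - g(t_i)\big)^2 + n\sum_{j=1}^{p} p_{\lambda}(|\beta_j|).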

4.
We propose a penalized minimum φ-divergence estimator for parameter estimation and variable selection in logistic regression. Using an appropriate penalty function, we show that the penalized φ-divergence estimator has the oracle property: with probability tending to 1, it identifies the true model and estimates the nonzero coefficients as efficiently as if the sparsity of the true model were known in advance. The advantage of the penalized φ-divergence estimator is that it estimates the nonzero parameters more efficiently than the penalized maximum likelihood estimator when the sample size is small, and is equivalent to it for large samples. Numerical simulations confirm our findings.

5.
In this article, partially linear covariate-adjusted regression models are considered, and a penalized least-squares procedure is proposed to simultaneously select variables and estimate the parametric components. The rate of convergence and the asymptotic normality of the resulting estimators are established under some regularity conditions. With proper choices of the penalty functions and tuning parameters, it is shown that the proposed procedure can be as efficient as the oracle estimator. Some Monte Carlo simulation studies and a real data application are carried out to assess the finite sample performance of the proposed method.

6.
ABSTRACT

In this paper, we propose a new efficient and robust penalized estimation procedure for varying-coefficient single-index models based on modal regression and basis function approximations. The proposed procedure simultaneously solves two problems: separating varying and constant effects, and selecting variables with nonzero coefficients for both the nonparametric and index components, using three smoothly clipped absolute deviation (SCAD) penalties. With appropriate selection of the tuning parameters, the new method is consistent in variable selection and in the separation of varying and constant coefficients. In addition, the estimators of the varying coefficients attain the optimal convergence rate, and the estimators of the constant coefficients and index parameters have the oracle property. Finally, we investigate the finite sample performance of the proposed method through a simulation study and a real data analysis.

7.
We propose a new adaptive L1 penalized quantile regression estimator for high-dimensional sparse regression models with heterogeneous error sequences. We show that, under weaker conditions than those of alternative procedures, the adaptive L1 quantile regression selects the true underlying model with probability converging to one, and the unique estimates of the nonzero coefficients it provides have the same asymptotic normal distribution as the quantile estimator that uses only the covariates with nonzero impact on the response. Thus, the adaptive L1 quantile regression enjoys oracle properties. We propose a completely data-driven choice of the penalty level λ_n, which ensures good performance of the adaptive L1 quantile regression. Extensive Monte Carlo simulation studies are conducted to demonstrate the finite sample performance of the proposed method.
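
A minimal sketch of the adaptive L1 quantile objective, assuming the check loss \rho_\tau(u) = u\{\tau - I(u<0)\} and weights built from an initial estimator \tilde\beta:

  \hat\beta = \arg\min_{\beta}\ \sum_{i=1}^{n} \rho_\tau\big(y_i - x_i^{\top}\beta\big) + \lambda_n \sum_{j=1}^{p} \frac{|\beta_j|}{|\tilde\beta_j|},

where the coefficient-specific weights 1/|\tilde\beta_j| penalize small preliminary estimates more heavily; the paper's data-driven rule for choosing \lambda_n is not reproduced here.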

8.
Liang H, Liu X, Li R, Tsai CL. Annals of Statistics, 2010, 38(6): 3811–3836
In partially linear single-index models, we obtain the semiparametrically efficient profile least-squares estimators of the regression coefficients. We also employ the smoothly clipped absolute deviation (SCAD) penalty approach to simultaneously select variables and estimate regression coefficients. We show that the resulting SCAD estimators are consistent and possess the oracle property. Subsequently, we demonstrate that a proposed tuning parameter selector, BIC, identifies the true model consistently. Finally, we develop a linear hypothesis test for the parametric coefficients and a goodness-of-fit test for the nonparametric component, respectively. Monte Carlo studies are also presented.

9.
This paper studies sparsity selection and estimation in nonparametric additive models. The sparsity refers to two types: across variables and within variables. Sparsity across variables corresponds to the irrelevant components in the model; sparsity within variables corresponds to zero function values over sub-domains of the relevant components. To select and estimate the sparsity, I approximate each component by B-splines and propose a group bridge penalized method, which can simultaneously identify the zero functions and the zero structures of the nonzero functions. Simulation studies demonstrate the effectiveness of the proposed method in sparsity selection and estimation across and within variables.
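
For orientation, the group bridge penalty typically has the form (a standard definition; treating the B-spline coefficients of each additive component as one group is an assumption here):

  \sum_{k=1}^{K} \lambda_k \Big(\sum_{j\in A_k} |\beta_j|\Big)^{\gamma}, \qquad 0<\gamma<1,

where A_k indexes the spline coefficients of the k-th component, so an entire component and individual basis coefficients within a retained component can both be shrunk exactly to zero.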

10.
Abstract

In this article, we consider a panel data partially linear regression model with fixed effects and a nonparametric time trend function. The data can be dependent across individuals through the linear regressors and the error components. Unlike methods based on nonparametric smoothing, a difference-based method is proposed to estimate the linear regression coefficients of the model, avoiding bandwidth selection. Here the difference technique is employed to completely eliminate the effect of the nonparametric function, rather than the fixed effects, on the estimation of the linear regression coefficients. Therefore, a more efficient estimator for the parametric part is anticipated, which is shown to be true by the simulation results. For the nonparametric component, a polynomial spline technique is implemented. The asymptotic properties of the estimators for the parametric and nonparametric parts are presented. We also show how to select informative covariates in the linear part by applying smoothly clipped absolute deviation (SCAD) penalized estimation to a difference-based least-squares objective function, and the resulting estimators perform asymptotically as well as the oracle procedure in terms of selecting the correct model.

11.
In this paper, a new estimation procedure based on composite quantile regression and the functional principal component analysis (PCA) method is proposed for partially functional linear regression models (PFLRMs). The proposed method can simultaneously estimate both the parametric regression coefficients and the functional coefficient components without specifying the error distribution. It is shown empirically to be more efficient for non-normal random errors, especially Cauchy errors, and almost as efficient for normal random errors. Furthermore, based on the proposed estimation procedure, we use the penalized composite quantile regression method to study variable selection for the parametric part of the PFLRMs. Under certain regularity conditions, consistency, asymptotic normality, and the oracle property of the resulting estimators are derived. Simulation studies and a real data analysis are conducted to assess the finite sample performance of the proposed methods.
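
As a reference point, the composite quantile regression loss pools check losses over equally spaced quantile levels (a standard formulation; the functional principal component score construction for the PFLRM is not shown):

  \min_{b_1,\dots,b_K,\ \beta}\ \sum_{k=1}^{K}\sum_{i=1}^{n} \rho_{\tau_k}\big(y_i - b_k - x_i^{\top}\beta\big), \qquad \tau_k = \frac{k}{K+1},

which shares a single slope vector \beta across the K quantile levels and therefore requires no specification of the error distribution.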

12.
We propose a robust regression method called regression with outlier shrinkage (ROS) for the traditional n > p case. It improves over other robust regression methods such as least trimmed squares (LTS) in the sense that it can achieve the maximum breakdown value and full asymptotic efficiency simultaneously. Moreover, its computational complexity is no more than that of LTS. We also propose a sparse estimator, called sparse regression with outlier shrinkage (SROS), for robust variable selection and estimation. It is proven that SROS not only gives consistent selection but also estimates the nonzero coefficients with full asymptotic efficiency under the normal model. In addition, we introduce the concept of a nearly regression equivariant estimator for understanding the breakdown properties of sparse estimators, and prove that SROS achieves the maximum breakdown value among nearly regression equivariant estimators. Numerical examples are presented to illustrate our methods.

13.
In areas such as economics and biological gene expression studies, where a large number of variables are involved, the maximum sample correlation can be large as long as the dimension is high, even when the predictors are independent. Variable selection is a fundamental method for dealing with such models. Ridge regression performs well when the predictors are highly correlated, and some nonconcave penalized thresholding estimators enjoy the nice oracle property. In order to provide a satisfactory solution to the collinearity problem, in this paper we propose combined penalization (CP), mixing a nonconcave penalty with a ridge penalty, with a diverging number of parameters. It is observed that the CP estimator with a diverging number of parameters can correctly select the covariates with nonzero coefficients and estimate the parameters simultaneously in the presence of multicollinearity. Simulation studies and a real data example demonstrate the good performance of the proposed method.
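
A hedged sketch of a combined-penalization criterion of this kind (the exact form and weighting used in the paper are assumptions):

  \min_{\beta}\ \sum_{i=1}^{n}\big(y_i - x_i^{\top}\beta\big)^2 + n\sum_{j=1}^{p_n} p_{\lambda_1}(|\beta_j|) + \lambda_2 \sum_{j=1}^{p_n} \beta_j^2,

where p_{\lambda_1} is a nonconcave penalty such as SCAD, providing sparsity, and the ridge term stabilizes the estimates under multicollinearity.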

14.
ABSTRACT

In this paper, we study a novel procedure for simultaneous robust variable selection and identification of the parametric components in varying-coefficient models. The proposed estimator is based on spline approximation and two smoothly clipped absolute deviation (SCAD) penalties within rank regression, which is robust with respect to heavy-tailed errors or outliers in the response. Furthermore, when the tuning parameter is chosen by a modified BIC criterion, we show that the proposed procedure is consistent both in variable selection and in the separation of varying and constant coefficients. In addition, the estimators of the varying coefficients possess the optimal convergence rate under some assumptions, and the estimators of the constant coefficients have the same asymptotic distribution as their counterparts obtained when the true model is known. Simulation studies and a real data example are undertaken to assess the finite sample performance of the proposed variable selection procedure.

15.
High-dimensional data arise in diverse fields of science, engineering and the humanities. Variable selection plays an important role in high-dimensional statistical modelling. In this article, we study variable selection via quadratic approximation with the smoothly clipped absolute deviation (SCAD) penalty and a diverging number of parameters. We provide a unified method to select variables and estimate parameters for various high-dimensional models. Under appropriate conditions and with a proper regularization parameter, we show that the estimator is consistent and sparse, and that the estimators of the nonzero coefficients enjoy the same asymptotic normality as they would have if the zero coefficients were known in advance. In addition, under some mild conditions, we can obtain the global solution of the penalized objective function with the SCAD penalty. Numerical studies and a real data analysis are carried out to confirm the performance of the proposed method.

16.
Abstract

In this article, we propose a new penalized-likelihood method to conduct model selection for finite mixture of regression models. The penalties are imposed on the mixing proportions and the regression coefficients, and hence order selection of the mixture and variable selection in each component can be conducted simultaneously. The consistency of order selection and the consistency of variable selection are investigated. A modified EM algorithm is proposed to maximize the penalized log-likelihood function. Numerical simulations are conducted to demonstrate the finite sample performance of the estimation procedure. The proposed methodology is further illustrated via real data analysis.
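
A rough sketch of a penalized log-likelihood of this type for a K-component Gaussian mixture of regressions (the particular penalty applied to the mixing proportions is an assumption):

  \ell_p(\theta) = \sum_{i=1}^{n} \log\Big\{\sum_{k=1}^{K} \pi_k\, \phi\big(y_i;\ x_i^{\top}\beta_k,\ \sigma_k^2\big)\Big\} - \sum_{k=1}^{K} p_{\lambda_1}(\pi_k) - \sum_{k=1}^{K}\sum_{j=1}^{p} p_{\lambda_2}(|\beta_{kj}|),

so that shrinking some \hat\pi_k to zero selects the order of the mixture, while the coefficient penalties select variables within each surviving component.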

17.
In this paper, we consider the prediction problem in the multiple linear regression model in which the number of predictor variables, p, is extremely large compared to the number of available observations, n. The least-squares predictor based on a generalized inverse is not efficient. We propose six empirical Bayes estimators of the regression parameters. Three of them are shown to have uniformly lower prediction error than the least-squares predictor when the vector of regressor variables is assumed to be random with mean vector zero and covariance matrix (1/n)X^tX, where X^t = (x_1, …, x_n) is the p×n matrix of observations on the regressor vector, centered at their sample means. For the other estimators, we use simulation to show their superiority over the least-squares predictor.

18.
We consider two estimation schemes based on penalized quasi-likelihood and quasi-pseudo-likelihood in Poisson mixed models. The asymptotic bias in the regression coefficients and variance components estimated by penalized quasi-likelihood (PQL) is studied for small values of the variance components. We show that the PQL estimators of both the regression coefficients and the variance components in Poisson mixed models have a smaller order of bias than those for binomial data. Unbiased estimating equations based on quasi-pseudo-likelihood are proposed and are shown to yield consistent estimators under some regularity conditions. The finite sample performance of these two methods is compared through a simulation study.
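
For context, the Poisson mixed model in question typically takes the form (a standard specification, not taken verbatim from the paper):

  y_{ij}\mid b_i \sim \mathrm{Poisson}(\mu_{ij}), \qquad \log\mu_{ij} = x_{ij}^{\top}\beta + z_{ij}^{\top}b_i, \qquad b_i \sim N\big(0, D(\sigma)\big),

and PQL estimates \beta and the variance components \sigma by maximizing a Laplace-type approximation to the likelihood obtained after integrating out the random effects b_i.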

19.
ABSTRACT

As a compromise between parametric and nonparametric regression models, partially linear models are frequently used in statistical modelling. This paper is concerned with the estimation of the partially linear regression model in the presence of multicollinearity. Based on the profile least-squares approach, we propose a novel principal components regression (PCR) estimator for the parametric component. When some additional linear restrictions on the parametric component are available, we construct a corresponding restricted PCR estimator. Simulations are conducted to examine the performance of our proposed estimators, and the results are satisfactory. Finally, a real data example is analysed.

20.
Penalized regression methods have for quite some time been a popular choice for addressing challenges in high-dimensional data analysis. Despite their popularity, their application to time series data has been limited. This paper concerns bridge-penalized methods in a linear regression time series model. We first prove consistency, sparsity and asymptotic normality of bridge estimators under a general mixing model. Next, as a special case of mixing errors, we consider bridge regression with autoregressive and moving average (ARMA) error models and develop a computational algorithm that can simultaneously select important predictors and the orders of the ARMA models. Simulated and real data examples demonstrate the effective performance of the proposed algorithm and the improvement over ordinary bridge regression.
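
A minimal sketch of the bridge criterion in a regression with time-series errors (the ARMA whitening and order-selection steps of the proposed algorithm are not shown):

  \min_{\beta}\ \sum_{t=1}^{T}\big(y_t - x_t^{\top}\beta\big)^2 + \lambda_T \sum_{j=1}^{p} |\beta_j|^{\gamma}, \qquad 0<\gamma<1,

where the concave exponent \gamma<1 produces exact zeros and hence variable selection.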
