Laplace Error Penalty‐based Variable Selection in High Dimension

Authors: Canhong Wen, Xueqin Wang, Shaoli Wang

Affiliation: 1. Southern China Research Center of Statistical Science and Department of Statistical Science, School of Mathematics and Computational Science, Sun Yat‐Sen University; 2. Southern China Research Center of Statistical Science, Department of Statistical Science, School of Mathematics and Computational Science and Zhongshan School of Medicine, Sun Yat‐Sen University; 3. School of Statistics and Management, Shanghai University of Finance and Economics

Abstract: We propose the Laplace Error Penalty (LEP) function for variable selection in high‐dimensional regression. Unlike penalty functions constructed from piecewise splines, the LEP is built from an exponential function with two tuning parameters and is infinitely differentiable everywhere except at the origin. This construction gives the LEP‐based procedure extra flexibility in variable selection, admits a unified derivative formula in optimization, and allows the LEP to approximate the L0 penalty as closely as possible. We show that the LEP procedure can identify relevant predictors in exponentially high‐dimensional regression with normal errors, and we establish the oracle property for the LEP estimator. Although the LEP is not convex, it yields a convex penalized least squares function under mild conditions when p is no greater than n. A coordinate descent majorization‐minimization algorithm is introduced to implement the LEP procedure. In simulations and a real data analysis, the LEP methodology performs favorably among competing procedures.

Keywords: coordinate descent; majorization‐minimization; high‐dimensional regression; Laplace error penalty; oracle property; smooth penalty function; sparsity; variable selection
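The coordinate descent majorization‐minimization scheme mentioned in the abstract can be sketched in general terms. The snippet below is an illustrative sketch, not the authors' implementation: it assumes a generic exponential‐type penalty p(t) = λ(1 − e^{−|t|/κ}) as a stand‐in for the LEP (the exact LEP form is defined in the paper), and majorizes the penalty coordinate‐wise by its tangent line (a local linear approximation), so each coordinate update reduces to a weighted soft‐threshold.

```python
import numpy as np

def penalty_deriv(t, lam, kappa):
    """Derivative (for t >= 0) of the stand-in exponential penalty
    p(t) = lam * (1 - exp(-|t| / kappa)); an assumption, not the exact LEP."""
    return (lam / kappa) * np.exp(-abs(t) / kappa)

def soft_threshold(z, t):
    """Scalar soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def cd_mm(X, y, lam, kappa, n_iter=50):
    """Coordinate descent with a majorization-minimization step for the
    penalized least squares objective
        (1 / 2n) * ||y - X beta||^2 + sum_j p(|beta_j|)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate quadratic coefficients
    r = y.copy()                        # residual y - X @ beta (beta starts at 0)
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]      # remove coordinate j from the fit
            z = X[:, j] @ r / n
            # MM step: the nonconvex penalty is majorized by its tangent line
            # at the current beta_j, so the minimizer of the surrogate is a
            # weighted soft-threshold.
            w = penalty_deriv(beta[j], lam, kappa)
            beta[j] = soft_threshold(z, w) / col_sq[j]
            r -= X[:, j] * beta[j]      # restore the updated fit
    return beta
```

Because the derivative of an exponential‐type penalty decays to zero for large |t|, large coefficients are shrunk very little, which is the mechanism behind the near‐unbiasedness (oracle‐type behavior) such penalties aim for; a coefficient currently at zero faces the full threshold λ/κ.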