Similar Literature
20 similar documents found
1.
ABSTRACT

In this paper, we propose a new efficient and robust penalized estimation procedure for varying-coefficient single-index models based on modal regression and basis function approximations. The proposed procedure simultaneously solves two types of problems: separation of varying and constant effects, and selection of the variables with nonzero coefficients for both the nonparametric and index components, using three smoothly clipped absolute deviation (SCAD) penalties. With appropriate selection of the tuning parameters, the new method is consistent in variable selection and in the separation of varying and constant coefficients. In addition, the estimators of the varying coefficients attain the optimal convergence rate, and the estimators of the constant coefficients and index parameters have the oracle property. Finally, we investigate the finite-sample performance of the proposed method through a simulation study and a real data analysis.
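The SCAD penalty used above is a standard, well-documented construction (Fan and Li's three-piece penalty), so it can be sketched directly; the function and default `a = 3.7` below follow the usual convention, not this paper's specific implementation.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty evaluated elementwise: linear (lasso-like) near zero,
    quadratic blending in the middle, and constant for large |beta| so
    big coefficients are not over-shrunk. a=3.7 is the conventional default."""
    b = np.abs(beta)
    return np.where(
        b <= lam,
        lam * b,
        np.where(
            b <= a * lam,
            (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1)),
            lam ** 2 * (a + 1) / 2,
        ),
    )
```

The flat tail for |beta| > a*lam is what gives SCAD-penalized estimators the oracle property that pure lasso shrinkage lacks.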

2.
In this paper, a new estimation procedure based on composite quantile regression and the functional principal component analysis (PCA) method is proposed for partially functional linear regression models (PFLRMs). The proposed method can simultaneously estimate the parametric regression coefficients and the functional coefficient components without specifying the error distribution. It is shown empirically to be more efficient for non-normal random errors, especially Cauchy errors, and almost as efficient for normal random errors. Furthermore, based on the proposed estimation procedure, we use the penalized composite quantile regression method to study variable selection for the parametric part of the PFLRMs. Under certain regularity conditions, consistency, asymptotic normality, and the oracle property of the resulting estimators are derived. Simulation studies and a real data analysis are conducted to assess the finite-sample performance of the proposed methods.
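The composite quantile regression objective combines Koenker's check loss across several quantile levels, sharing one slope vector while giving each level its own intercept. A minimal sketch of that loss (not the authors' functional-PCA estimator; all names here are illustrative):

```python
import numpy as np

def check_loss(u, tau):
    # Pinball/check loss: rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

def cqr_objective(beta0s, beta, X, y, taus):
    """Composite quantile regression objective: a common slope `beta`
    shared across all quantile levels, one intercept per level."""
    total = 0.0
    for b0, tau in zip(beta0s, taus):
        total += np.sum(check_loss(y - b0 - X @ beta, tau))
    return total
```

Because the check loss never squares residuals, minimizing this objective stays efficient under heavy-tailed errors such as the Cauchy case mentioned above.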

3.
High-dimensional data arise in diverse fields of science, engineering, and the humanities, and variable selection plays an important role in high-dimensional statistical modelling. In this article, we study variable selection by quadratic approximation with the smoothly clipped absolute deviation (SCAD) penalty and a diverging number of parameters. We provide a unified method to select variables and estimate parameters for a variety of high-dimensional models. Under appropriate conditions and with a proper regularization parameter, we show that the estimator is consistent and sparse, and that the estimators of the nonzero coefficients enjoy the asymptotic normality they would have if the zero coefficients were known in advance. In addition, under some mild conditions, we can obtain the global solution of the penalized objective function with the SCAD penalty. Numerical studies and a real data analysis are carried out to confirm the performance of the proposed method.

4.
Beta regression models are commonly used by practitioners to model variables that take values in the standard unit interval (0, 1). In this paper, we consider variable selection for beta regression models with varying dispersion (VBRM), in which both the mean and the dispersion depend on predictor variables. Based on a penalized likelihood method, the consistency and the oracle property of the penalized estimators are established. Following the coordinate descent algorithm for generalized linear models, we develop a new variable selection procedure for the VBRM that can efficiently and simultaneously estimate and select important variables in both the mean and dispersion models. Simulation studies and a body fat data analysis are presented to illustrate the proposed methods.
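The coordinate descent idea the VBRM procedure borrows from generalized linear models is easiest to see on the plain lasso: cycle over coefficients, soft-thresholding each against its partial residual. A minimal sketch under the assumption of a standardized linear model (not the beta-likelihood version used in the paper):

```python
import numpy as np

def soft_threshold(z, lam):
    # Closed-form solution of the one-dimensional lasso subproblem
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for the lasso: update one coefficient at a
    time against the partial residual. The VBRM procedure cycles
    analogously over mean-model and dispersion-model coefficients."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]  # partial residual
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
    return beta
```

With `lam = 0` the cycle converges to ordinary least squares; increasing `lam` zeroes out weak coefficients, which is the selection mechanism.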

5.
U-estimates are defined as maximizers of objective functions that are U-statistics. As alternatives to M-estimates, U-estimates have been used extensively in linear regression, classification, survival analysis, and many other areas; they may rely on weaker data and model assumptions and so be preferred over alternatives. In this article, we investigate penalized variable selection with U-estimates. We propose smooth approximations of the objective functions, which can greatly reduce computational cost without affecting the asymptotic properties. We study penalized variable selection using penalties that have been well investigated with M-estimates, including the LASSO, adaptive LASSO, and bridge penalties, and establish their asymptotic properties. Generically applicable computational algorithms are described. The performance of the penalized U-estimates is assessed in numerical studies.
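A classic U-statistic objective is the maximum rank correlation criterion, whose pairwise indicator 1{score_i > score_j} is non-smooth; replacing it with a sigmoid of bandwidth `h` illustrates the smoothing step described above. A sketch only; the paper's exact smoothing kernel and objective may differ:

```python
import numpy as np

def smoothed_mrc(beta, X, y, h=0.1):
    """Smoothed maximum-rank-correlation objective: the indicator
    1{x_i'b > x_j'b} over concordant pairs is replaced by a sigmoid
    with bandwidth h, making the U-statistic differentiable in beta."""
    s = X @ beta
    n = len(y)
    obj = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                # weight concordance of scores by whether y_i exceeds y_j
                obj += (y[i] > y[j]) / (1.0 + np.exp(-(s[i] - s[j]) / h))
    return obj / (n * (n - 1))
```

As `h` shrinks toward zero the sigmoid approaches the indicator, which is why the smooth surrogate can share the asymptotics of the original U-estimate.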

6.
To predict stock market behaviors, we use a factor-augmented predictive regression with shrinkage to incorporate the information available across thousands of financial and economic variables. The system is constructed in terms of both expected returns and the tails of the return distribution. We establish the variable selection consistency and asymptotic normality of the estimator. To select the regularization parameter, we employ the prediction error, with the aim of predicting the behavior of the stock market. Through an analysis of the Tokyo Stock Exchange, we find that a large number of variables provide useful information for predicting stock market behaviors.

7.
Hailin Sang, Statistics, 2015, 49(1): 187–208
We propose a sparse coefficient estimation and automated model selection procedure for autoregressive (AR) processes with heavy-tailed innovations, based on penalized conditional maximum likelihood. Under mild moment conditions on the innovation processes, the penalized conditional maximum likelihood estimator satisfies strong consistency, O_P(N^{-1/2}) consistency, and the oracle properties, where N is the sample size. The weak conditions required of the penalty functions leave considerable freedom in their choice; two penalty functions, the least absolute shrinkage and selection operator (LASSO) and the smoothly clipped absolute deviation (SCAD), are compared. The proposed method provides distribution-based penalized inference for AR models, which is especially useful when other estimation methods fail or underperform for AR processes with heavy-tailed innovations [Feigin, Resnick. Pitfalls of fitting autoregressive models for heavy-tailed time series. Extremes. 1999;1:391–422]. A simulation study confirms our theoretical results. Finally, we apply our method to historical price data from the US Industrial Production Index for consumer goods, and obtain very promising results.

8.
Penalized regression methods have long been a popular choice for addressing challenges in high-dimensional data analysis. Despite their popularity, their application to time series data has been limited. This paper concerns bridge-penalized methods in a linear regression time series model. We first prove consistency, sparsity, and asymptotic normality of bridge estimators under a general mixing model. Next, as a special case of mixing errors, we consider bridge regression with autoregressive moving average (ARMA) error models and develop a computational algorithm that can simultaneously select important predictors and the orders of the ARMA model. Simulated and real data examples demonstrate the effective performance of the proposed algorithm and its improvement over ordinary bridge regression.

9.
In this paper we are concerned with variable selection and estimation in double generalized linear models, in which both the mean and the dispersion are allowed to depend on explanatory variables. We propose a maximum penalized pseudo-likelihood method for the case where the number of parameters diverges with the sample size. With appropriate selection of the tuning parameters, the consistency of the variable selection procedure and the asymptotic properties of the resulting estimators are established. Simulation studies and a real data analysis show that the proposed variable selection procedure performs satisfactorily in finite samples.

10.
Motivated by an entropy inequality, we propose for the first time a penalized profile likelihood method for simultaneously selecting significant variables and estimating the unknown coefficients in multiple linear regression models. The new method is robust to outliers and to errors with heavy tails, and works well even for errors with infinite variance. Our proposed approach outperforms the adaptive lasso in both theory and practice. The simulation studies show that (i) the new approach selects the exact model with higher probability than the least absolute deviation lasso and the adaptively penalized composite quantile regression approach, and (ii) exact model selection via our approach is robust to the error distribution. An application to a real dataset is also provided.

11.
In high-dimensional regression problems, regularization methods are a popular way to address variable selection and multicollinearity. In this paper we study bridge regression, which adaptively selects the penalty order from the data and produces flexible solutions in various settings. We implement bridge regression with local linear and local quadratic approximations to circumvent the nonconvex optimization problem. Our numerical study shows that the proposed bridge estimators are a robust choice in various circumstances compared with other penalized regression methods such as ridge, lasso, and elastic net. In addition, we propose group bridge estimators that select grouped variables, and study their asymptotic properties when the number of covariates increases with the sample size. These estimators are also applied to varying-coefficient models. Numerical examples show the superior performance of the proposed group bridge estimators in comparison with other existing methods.
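The local quadratic approximation (LQA) turns the nonconvex bridge penalty sum |b_j|^q (0 < q < 1) into a sequence of weighted ridge problems: at each iterate the penalty is majorized by a quadratic, so the update is a linear solve. A minimal sketch under that scheme; the `eps` guard and the absence of explicit thresholding are simplifications, not the paper's algorithm:

```python
import numpy as np

def bridge_lqa(X, y, lam, q=0.5, n_iter=100, eps=1e-6):
    """Bridge regression via local quadratic approximation: each step
    solves (X'X/n + lam * diag(w)) beta = X'y/n with LQA weights
    w_j = q * |beta_j|^(q-2), guarded by eps near zero."""
    n, _ = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting value
    for _ in range(n_iter):
        w = q * (np.abs(beta) + eps) ** (q - 2)
        A = X.T @ X / n + lam * np.diag(w)
        beta = np.linalg.solve(A, X.T @ y / n)
    return beta
```

In practice coefficients driven near zero are thresholded to exactly zero; the local linear approximation (LLA) variant mentioned above instead majorizes the penalty by a weighted lasso at each step.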

12.
In this paper, we investigate robust parameter estimation and variable selection for binary regression models with grouped data, using estimation procedures based on the minimum-distance approach. In particular, we employ the minimum Hellinger and minimum symmetric chi-squared distance criteria and propose regularized minimum-distance estimators. These estimators possess a certain degree of automatic robustness against model misspecification and/or potential outliers. We show that the proposed non-penalized and penalized minimum-distance estimators are efficient under the model and simultaneously have excellent robustness properties, and we study their asymptotic properties, including consistency, asymptotic normality, and the oracle property. Using Monte Carlo studies, we examine the small-sample and robustness properties of the proposed estimators and compare them with traditional likelihood estimators. Two real-data applications illustrate the methods. The numerical studies indicate satisfactory finite-sample performance of our procedures.
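The Hellinger distance criterion above compares empirical cell proportions with model probabilities; its squared form is standard and can be sketched directly (the estimator itself then minimizes this over the model parameters, which is not reproduced here):

```python
import numpy as np

def hellinger_distance_sq(p_hat, p_model):
    """Squared Hellinger distance H^2 = (1/2) * sum (sqrt(p) - sqrt(q))^2
    between empirical and model cell probabilities for grouped data."""
    return np.sum((np.sqrt(p_hat) - np.sqrt(p_model)) ** 2) / 2.0
```

Because the square roots damp the contribution of cells where the model and data disagree wildly, minimizing this distance yields the automatic robustness to outlying cells noted above.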

13.
For the semiparametric model E(y|x,t) = X^T β + f(t) with longitudinal data, a penalized quadratic inference function method is used to estimate the regression parameter β and the unknown smooth function f(t) simultaneously. The unknown smooth function is first approximated by a basis expansion in truncated power functions; in the spirit of penalized splines, a penalized quadratic inference function in the regression parameters and the basis coefficients is then constructed, and minimizing it yields the penalized quadratic inference function estimators of both. Theoretical results show that the estimators are consistent and asymptotically normal, and numerical simulations also give good results.
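The truncated power basis used for the expansion of f(t) is a standard construction: polynomial terms up to the chosen degree plus one truncated power term per knot. A sketch of just that basis-building step (the penalized quadratic inference function itself is not reproduced):

```python
import numpy as np

def truncated_power_basis(t, knots, degree=3):
    """Truncated power basis for a penalized spline:
    columns 1, t, ..., t^d followed by (t - k)_+^d for each knot k."""
    cols = [t ** d for d in range(degree + 1)]
    cols += [np.maximum(t - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)
```

Stacking this basis matrix next to X reduces the semiparametric model to a parametric one in (β, basis coefficients), which is what makes the joint penalized estimation above tractable.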

14.
This paper considers robust variable selection in semiparametric modelling for longitudinal data with an unspecified dependence structure. First, using B-spline approximation and a general formulation that treats mean, median, quantile, and robust mean regression in one setting, we propose a weighted M-type regression estimator that is robust against outliers in both the response and covariate directions and can accommodate heterogeneity; its asymptotic properties are also established. Furthermore, a penalized weighted M-type estimator is proposed, which simultaneously and robustly estimates and selects the relevant nonparametric and parametric components. Without any specification of the error distribution or the intra-subject dependence structure, the variable selection method performs well, achieving consistency in variable selection and the oracle property in estimation. Simulation studies confirm our method and theory.

15.
In many scientific investigations, a large number of input variables are available at the early stage of modelling, and identifying the variables predictive of the response is often a main purpose of such investigations. The support vector machine has recently become an important tool for classification problems in many fields, and several variants adopting different penalties in the objective function have been proposed. This paper deals with the Fisher consistency and the oracle property of support vector machines in the setting where the dimension of the inputs is fixed. First, we study the Fisher consistency of the support vector machine over the class of affine functions; it is shown that the function class of the decision functions is crucial for Fisher consistency. Second, we study the oracle property of penalized support vector machines with the smoothly clipped absolute deviation (SCAD) penalty. Once Fisher consistency over the class of affine functions has been addressed, the oracle property becomes meaningful in the context of classification. A simulation study illustrates the small-sample properties of the penalized support vector machines with the SCAD penalty.

16.
To perform regression analysis in high dimensions, lasso and ridge estimation are common choices. However, these methods have been shown not to be robust to outliers, and alternatives such as penalized M-estimation and the sparse least trimmed squares (LTS) estimator have therefore been proposed. The robustness of these regression methods can be measured with the influence function, which quantifies the effect of infinitesimal perturbations in the data and can also be used to compute the asymptotic variance and the mean squared error (MSE). In this paper we compute the influence function, the asymptotic variance, and the MSE for penalized M-estimators and the sparse LTS estimator; the asymptotic biasedness of the estimators makes the calculations non-standard. We show that only M-estimators whose loss function has a bounded derivative are robust against regression outliers. In particular, the lasso has an unbounded influence function.
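The bounded-derivative condition above is easy to make concrete: the derivative (psi function) of the squared loss behind the lasso grows without bound in the residual, while the Huber psi is clipped at a constant. A minimal illustration, with the conventional tuning constant `c = 1.345`:

```python
import numpy as np

def psi_ls(r):
    # Derivative of the squared loss: unbounded in r,
    # so a single large residual has unbounded influence (the lasso case).
    return 2.0 * r

def psi_huber(r, c=1.345):
    # Derivative of the Huber loss: clipped at +/- c,
    # so the influence of any one residual is bounded.
    return np.clip(r, -c, c)
```

An outlier with residual 10^6 contributes 2 * 10^6 through the squared-loss psi but only 1.345 through the Huber psi, which is the whole content of the boundedness criterion.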

17.
We study the least product relative error (LPRE) estimator and a test statistic for linear hypotheses about the regression parameters in the multiplicative regression model when the number of covariates increases with the sample size. Properties of the LPRE estimator and test statistic, such as consistency, the Bahadur representation, and the asymptotic distributions, are obtained. Furthermore, we extend the LPRE to a more general relative error criterion and provide its statistical properties. Numerical studies, including simulations and two real examples, show that the proposed estimator performs well.
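The LPRE criterion for a multiplicative model y = exp(x'beta) * eps multiplies the two relative errors |y - m|/y and |y - m|/m, where m = exp(x'beta); the product simplifies algebraically to y/m + m/y - 2. A sketch of the resulting objective under that standard form (names illustrative; the paper's exact criterion should be checked against the source):

```python
import numpy as np

def lpre_objective(beta, X, y):
    """LPRE criterion: sum of |y - m|/y * |y - m|/m = y/m + m/y - 2,
    with m = exp(X @ beta). Symmetric in the two relative errors."""
    m = np.exp(X @ beta)
    return np.sum(y / m + m / y - 2.0)
```

The criterion is zero exactly when the fit is perfect, and treating the two relative errors symmetrically is what distinguishes LPRE from one-sided relative error criteria.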

18.
We propose a penalized quantile regression for the partially linear varying coefficient (VC) model with longitudinal data that selects the relevant nonparametric and parametric components simultaneously. Selection consistency and the oracle property are established. Furthermore, when the linear and VC parts are unknown, we propose a new unified method that performs several types of selection, separating varying and constant effects and selecting the relevant variables, all of which can be carried out conveniently in one step. Consistency of these selections and the oracle property in estimation are established as well. Simulation studies and a real data analysis also confirm our method.

19.
王小燕 (Wang Xiaoyan) et al., 《统计研究》 (Statistical Research), 2014, 31(9): 107–112
Variable selection is a key step in statistical modelling: choosing appropriate variables yields models that are structurally simple, predictively accurate, and robust. This paper proposes a new bi-level variable selection penalty for logistic regression, the adaptive Sparse Group Lasso (adSGL), whose distinctive feature is that it screens variables according to their group structure, performing selection both within and between groups. By applying different degrees of penalization to individual coefficients and to group coefficients, the method avoids over-penalizing large coefficients and thereby improves the estimation and prediction accuracy of the model. The main computational difficulty is that the penalized likelihood is not strictly convex; the model is therefore solved by group coordinate descent, and a criterion for choosing the tuning parameters is established. Simulations show that, compared with the representative Sparse Group Lasso, Group Lasso, and Lasso methods, adSGL not only improves bi-level selection accuracy but also reduces model error. Finally, adSGL is applied to credit scoring for credit cards, where it achieves higher classification accuracy and robustness than logistic regression.
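A sparse-group-lasso penalty with adaptive weights mixes an elementwise L1 term with a groupwise L2 term; the sketch below shows one common parameterization of that penalty value. The mixing parameter `alpha` and the weight vectors `w_ind`, `w_group` are illustrative; the paper's exact adaptive weighting scheme may differ:

```python
import numpy as np

def adsgl_penalty(beta, groups, lam, alpha, w_group, w_ind):
    """Adaptive sparse-group-lasso penalty value:
    lam * [alpha * sum_j w_ind[j] * |beta_j|
           + (1 - alpha) * sum_g w_group[g] * ||beta_g||_2].
    The L1 part selects within groups, the L2 part selects whole groups."""
    lasso_part = np.sum(w_ind * np.abs(beta))
    group_part = sum(
        w_group[g] * np.linalg.norm(beta[idx])
        for g, idx in enumerate(groups)
    )
    return lam * (alpha * lasso_part + (1 - alpha) * group_part)
```

Making the weights smaller for coefficients (or groups) that are large at a pilot estimate is what lets the adaptive version avoid over-penalizing strong signals, as described above.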

20.
A regression model with skew-normal errors provides a useful extension of ordinary normal regression models when the data under consideration involve asymmetric outcomes. Variable selection is an important issue in all regression analyses, and in this paper we investigate simultaneous variable selection in joint location and scale models of the skew-normal distribution. We propose a unified penalized likelihood method that can simultaneously perform parameter estimation and select significant variables in both the location and scale models. With appropriate selection of the tuning parameters, we establish the consistency and the oracle property of the regularized estimators. Simulation studies and a real example illustrate the proposed methodologies.
