Similar Articles
1.
We study sparse high-dimensional additive model fitting via penalization with sparsity-smoothness penalties. We review several existing algorithms that have been developed for this problem in the recent literature, highlighting the connections between them, and present some computationally efficient algorithms for fitting such models. Furthermore, using reasonable assumptions and exploiting recent results on group LASSO-like procedures, we take advantage of several oracle results which yield asymptotic optimality of estimators for high-dimensional but sparse additive models. Finally, variable selection procedures are compared with some high-dimensional testing procedures available in the literature for testing the presence of additive components.
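In their simplest form, sparsity-smoothness penalties reduce to a group-lasso problem over per-covariate basis expansions. The following is an illustrative sketch only (not the authors' algorithms): a proximal-gradient solver with block soft-thresholding, where the function names and interface are hypothetical.

```python
import numpy as np

def group_soft_threshold(v, t):
    """Block soft-thresholding: shrink the whole coefficient group toward zero."""
    norm = np.linalg.norm(v)
    if norm <= t:
        return np.zeros_like(v)
    return (1 - t / norm) * v

def group_lasso_additive(X_blocks, y, lam, step=None, n_iter=500):
    """Proximal gradient for  min_b 0.5||y - sum_j X_j b_j||^2 + lam * sum_j ||b_j||.
    X_blocks: list of (n, d_j) basis matrices, one per covariate."""
    X = np.hstack(X_blocks)
    sizes = [B.shape[1] for B in X_blocks]
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        z = beta - step * grad
        out, pos = [], 0
        for d in sizes:                          # proximal step, one group at a time
            out.append(group_soft_threshold(z[pos:pos + d], step * lam))
            pos += d
        beta = np.concatenate(out)
    return np.split(beta, np.cumsum(sizes)[:-1])
```

A large penalty zeroes out entire component groups at once, which is what makes the additive fit sparse in whole covariates rather than in individual basis coefficients.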

2.
There are several procedures for fitting generalized additive models, i.e. regression models for an exponential family response where the influence of each single covariate is assumed to have unknown, potentially non-linear shape. Simulated data are used to compare a smoothing parameter optimization approach for selection of smoothness and of covariates, a stepwise approach, a mixed model approach, and a procedure based on boosting techniques. In particular it is investigated how the performance of the procedures is linked to the amount of information, the type of response, the total number of covariates, the number of influential covariates, and the extent of non-linearity. Measures for comparison are prediction performance, identification of influential covariates, and smoothness of fitted functions. One result is that the mixed model approach returns sparse fits with frequently over-smoothed functions, while for the boosting approach the functions are less smooth and variable selection is less strict. The other approaches are in between with respect to these measures. The boosting procedure is seen to perform very well when little information is available and/or when a large number of covariates is to be investigated. It is somewhat surprising that in scenarios with low information the fitting of a linear model, even with stepwise variable selection, offers little advantage over the fitting of an additive model when the true underlying structure is linear. In cases with more information the prediction performance of all procedures is very similar. So, in difficult data situations the boosting approach can be recommended; in others the procedures can be chosen conditional on the aim of the analysis.

3.
In this paper, we translate variable selection for linear regression into multiple testing, and select significant variables according to the testing result. New variable selection procedures are proposed based on the optimal discovery procedure (ODP) in multiple testing. Due to the ODP's optimality, if we guarantee the number of significant variables included, it will include fewer non-significant variables than marginal p-value based methods. Consistency of our procedures is established in theory and confirmed in simulation. Simulation results suggest that procedures based on multiple testing improve on procedures based on selection criteria, and that our new procedures perform better than marginal p-value based procedures.
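The marginal p-value baseline that the abstract compares against can be sketched as follows. This is not the ODP itself, only an illustration under simplifying assumptions: marginal correlations are converted to p-values via a crude normal approximation, then thresholded with Benjamini-Hochberg. All function names here are hypothetical.

```python
import numpy as np
from math import erfc, sqrt

def marginal_pvalues(X, y):
    """Two-sided p-values for each marginal covariate-response correlation,
    using the crude normal approximation z = |r| * sqrt(n)."""
    n = X.shape[0]
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    r = Xc.T @ yc / n                       # sample correlations
    z = np.abs(r) * sqrt(n)
    return np.array([erfc(zi / sqrt(2)) for zi in z])   # P(|Z| > z)

def bh_select(pvals, q=0.05):
    """Benjamini-Hochberg: keep variables whose sorted p-value sits under the BH line."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresh = q * np.arange(1, m + 1) / m
    below = pvals[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    return np.sort(order[:k])
```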

4.
We propose two new procedures based on multiple hypothesis testing for correct support estimation in high-dimensional sparse linear models. We conclusively prove that both procedures are powerful and do not require the sample size to be large. The first procedure tackles the atypical setting of ordered variable selection through an extension of a testing procedure previously developed in the context of a linear hypothesis. The second procedure is the main contribution of this paper. It enables data analysts to perform support estimation in the general high-dimensional framework of non-ordered variable selection. A thorough simulation study and applications to real datasets using the R package mht show that our non-ordered variable procedure produces excellent results in terms of correct support estimation as well as in terms of mean square errors and false discovery rate, when compared to common methods such as the Lasso, the SCAD penalty, forward regression or the false discovery rate procedure (FDR).

5.
Selection of the important variables is one of the most important model selection problems in statistical applications. In this article, we address variable selection in finite mixtures of generalized semiparametric models. To reduce the computational burden, we introduce a class of penalized variable selection procedures for such mixtures. The nonparametric component is estimated via multivariate kernel regression. We show that the new method is consistent for variable selection, and its performance is assessed via simulation.

6.
This paper focuses on variable selection for the semiparametric varying coefficient partially linear model when the covariates are measured with additive errors and the response is missing. An adaptive lasso estimator and, for comparison, a smoothly clipped absolute deviation estimator are proposed for the parameters. With proper selection of the regularization parameter, the sampling properties of the two procedures, including consistency and the oracle property, are established. Furthermore, the algorithms and corresponding standard error formulas are discussed. A simulation study is carried out to assess the finite sample performance of the proposed methods.

7.
Qunfang Xu, Statistics, 2017, 51(6):1280-1303
In this paper, semiparametric modelling for longitudinal data with an unstructured error process is considered. We propose a partially linear additive regression model for longitudinal data in which within-subject variances and covariances of the error process are described by unknown univariate and bivariate functions, respectively. We provide an estimating approach in which polynomial splines are used to approximate the additive nonparametric components and the within-subject variance and covariance functions are estimated nonparametrically. Both the asymptotic normality of the resulting parametric component estimators and optimal convergence rate of the resulting nonparametric component estimators are established. In addition, we develop a variable selection procedure to identify significant parametric and nonparametric components simultaneously. We show that the proposed SCAD penalty-based estimators of non-zero components have an oracle property. Some simulation studies are conducted to examine the finite-sample performance of the proposed estimation and variable selection procedures. A real data set is also analysed to demonstrate the usefulness of the proposed method.

8.
Several variable selection procedures are available for continuous time-to-event data. However, if time is measured discretely and many ties therefore occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge and lasso type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and an application to data on the birth of the first child.

9.
Based on B-spline basis functions and the smoothly clipped absolute deviation (SCAD) penalty, we present a new estimation and variable selection procedure based on modal regression for partially linear additive models. The outstanding merit of the new method is that it is robust against outliers or heavy-tailed error distributions and performs no worse than least-squares-based estimation in the normal error case. The main difference is that the standard quadratic loss is replaced by a kernel function depending on a bandwidth that can be automatically selected based on the observed data. With appropriate selection of the regularization parameters, the new method possesses consistency in variable selection and the oracle property in estimation. Finally, both a simulation study and a real data analysis are performed to examine the performance of our approach.
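The kernel-loss idea behind modal regression can be illustrated in the simpler linear setting (the paper treats partially linear additive models, which this sketch does not). A common way to maximize the kernel objective is an MEM-type iteration that alternates kernel weights with a weighted least-squares update; everything below is a hypothetical toy, not the authors' procedure.

```python
import numpy as np

def modal_regression(X, y, h=0.5, n_iter=100):
    """MEM-type iteration for linear modal regression: maximize
    sum_i phi_h(y_i - x_i' beta) for a Gaussian kernel phi_h by
    alternating kernel weights (E-step) and weighted LS (M-step)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares start
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.exp(-0.5 * (r / h) ** 2)           # Gaussian kernel weights
        Xw = w[:, None] * X
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted least squares
    return beta
```

Observations with large residuals receive near-zero kernel weight, which is the mechanism behind the robustness to outliers claimed in the abstract; with a very large bandwidth h the weights flatten and the fit approaches ordinary least squares.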

10.
This paper presents a Bayesian analysis of partially linear additive models for quantile regression. We develop a semiparametric Bayesian approach to quantile regression models using a spectral representation of the nonparametric regression functions and the Dirichlet process (DP) mixture for the error distribution. We also consider Bayesian variable selection procedures for both parametric and nonparametric components in a partially linear additive model structure based on Bayesian shrinkage priors via a stochastic search algorithm. Based on the proposed Bayesian semiparametric additive quantile regression model, referred to as BSAQ, Bayesian inference is considered for estimation and model selection. For the posterior computation, we design a simple and efficient Gibbs sampler based on a location-scale mixture of exponential and normal distributions for an asymmetric Laplace distribution, which facilitates the commonly used collapsed Gibbs sampling algorithms for DP mixture models. Additionally, we discuss the asymptotic property of the semiparametric quantile regression model in terms of consistency of the posterior distribution. Simulation studies and real data application examples illustrate the proposed method and compare it with Bayesian quantile regression methods in the literature.

11.
We propose a statistical inference framework for the component-wise functional gradient descent algorithm (CFGD) under a normality assumption for model errors, also known as $L_2$-Boosting. The CFGD is one of the most versatile tools to analyze data, because it scales well to high-dimensional data sets, allows for a very flexible definition of additive regression models and incorporates inbuilt variable selection. Due to the variable selection, we build on recent proposals for post-selection inference. However, the iterative nature of component-wise boosting, which can repeatedly select the same component to update, necessitates adaptations and extensions to existing approaches. We propose tests and confidence intervals for linear, grouped and penalized additive model components selected by $L_2$-Boosting. Our concepts also transfer to slow-learning algorithms more generally, and to other selection techniques which restrict the response space to more complex sets than polyhedra. We apply our framework to an additive model for sales prices of residential apartments and investigate the properties of our concepts in simulation studies.

12.
This paper sets out to implement the Bayesian paradigm for fractional polynomial models under the assumption of normally distributed error terms. Fractional polynomials widen the class of ordinary polynomials and offer an additive and transportable modelling approach. The methodology is based on a Bayesian linear model with a quasi-default hyper-g prior and combines variable selection with parametric modelling of additive effects. A Markov chain Monte Carlo algorithm for the exploration of the model space is presented. This theoretically well-founded stochastic search constitutes a substantial improvement to ad hoc stepwise procedures for the fitting of fractional polynomial models. The method is applied to a data set on the relationship between ozone levels and meteorological parameters, previously analysed in the literature.  相似文献   

13.
One of the standard variable selection procedures in multiple linear regression is to use a penalisation technique in least‐squares (LS) analysis. In this setting, many different types of penalties have been introduced to achieve variable selection. It is well known that LS analysis is sensitive to outliers, and consequently outliers can present serious problems for the classical variable selection procedures. Since rank‐based procedures have desirable robustness properties compared to LS procedures, we propose a rank‐based adaptive lasso‐type penalised regression estimator and a corresponding variable selection procedure for linear regression models. The proposed estimator and variable selection procedure are robust against outliers in both response and predictor space. Furthermore, since rank regression can yield unstable estimators in the presence of multicollinearity, in order to provide inference that is robust against multicollinearity, we adjust the penalty term in the adaptive lasso function by incorporating the standard errors of the rank estimator. The theoretical properties of the proposed procedures are established and their performances are investigated by means of simulations. Finally, the estimator and variable selection procedure are applied to the Plasma Beta‐Carotene Level data set.  相似文献   
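The rank-based objective underlying such estimators is Jaeckel's dispersion with Wilcoxon scores, which replaces the squared residuals of LS by score-weighted residual ranks. The toy below illustrates the idea for a single slope via grid search; real implementations minimize the dispersion by linear programming or specialized optimizers, and the penalized, adaptive-lasso version of the abstract is not shown. Function names and the grid-search device are purely illustrative.

```python
import numpy as np

def jaeckel_dispersion(b, x, y):
    """Jaeckel's rank dispersion for a single slope, using Wilcoxon scores
    a(i) = sqrt(12) * (i/(n+1) - 1/2) applied to the residual ranks."""
    e = y - b * x
    n = len(e)
    ranks = np.argsort(np.argsort(e)) + 1
    scores = np.sqrt(12) * (ranks / (n + 1) - 0.5)
    return np.sum(scores * e)

def rank_slope(x, y, lo=-10.0, hi=10.0, n_grid=2001):
    """Grid-search minimizer of the dispersion -- a toy stand-in for the
    rank-based regression estimator."""
    grid = np.linspace(lo, hi, n_grid)
    vals = [jaeckel_dispersion(b, x, y) for b in grid]
    return float(grid[int(np.argmin(vals))])
```

Because the scores are bounded, gross response outliers contribute a bounded amount to the dispersion, which is the source of the robustness that motivates the rank-based lasso procedure above.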

14.
Nonparametric additive models are powerful techniques for multivariate data analysis. Although many procedures have been developed for estimating additive components both in mean regression and quantile regression, the problem of selecting relevant components has received little attention, especially in quantile regression. We present a doubly-penalized estimation procedure for component selection in additive quantile regression models that combines basis function approximation with a ridge-type penalty and a variant of the smoothly clipped absolute deviation penalty. We show that the proposed estimator identifies relevant and irrelevant components consistently and achieves the nonparametric optimal rate of convergence for the relevant components. We also provide an accurate and efficient computation algorithm to implement the estimator and demonstrate its performance through simulation studies. Finally, we illustrate our method via a real data example to identify important body measurements to predict the percentage of body fat of an individual.

15.
This paper provides a theoretical overview of Wald tests for Granger causality in levels vector autoregressions (VAR's) and Johansen-type error correction models (ECM's). The theory is based on results in Toda and Phillips (1991a) and allows for stochastic and deterministic trends as well as arbitrary degrees of cointegration. We recommend some operational procedures for conducting Granger causality tests that are based on the Gaussian maximum likelihood estimation of ECM's. These procedures are applicable in the important practical case of testing the causal effects of one variable on another group of variables and vice versa. This paper also investigates the sampling properties of these testing procedures through simulation exercises. Three sequential causality tests in ECM's are compared with conventional causality tests in levels and differences VAR's.  相似文献   

16.
The simulation-extrapolation (SIMEX) approach of Cook and Stefanski (J. Am. Stat. Assoc. 89:1314–1328, 1994) has proved to be successful in obtaining reliable estimates if variables are measured with (additive) errors. In particular for nonlinear models, this approach has advantages compared to other procedures such as the instrumental variable approach if only variables measured with error are available. However, it has always been assumed that measurement errors for the dependent variable are not correlated with those related to the explanatory variables although such scenario is quite likely. In such a case the (standard) SIMEX suffers from misspecification even for the simple linear regression model. Our paper reports first results from a generalized SIMEX (GSIMEX) approach which takes account of this correlation. We also demonstrate in our simulation study that neglect of the correlation will lead to estimates which may be worse than those from the naive estimator which completely disregards measurement errors.  相似文献   
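The standard (uncorrelated-error) SIMEX idea can be sketched for simple linear regression: deliberately add extra measurement noise at several levels $\lambda$, refit, and extrapolate the fitted slope back to $\lambda = -1$, the no-error case. This is a minimal sketch of the classical SIMEX of Cook and Stefanski, not the GSIMEX extension proposed in the abstract; the function name and quadratic extrapolant are illustrative choices.

```python
import numpy as np

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_boot=200, seed=0):
    """SIMEX for simple linear regression with additive measurement error in w:
    at each lambda, add extra noise of variance lambda * sigma_u^2, refit the
    slope, average over simulations, then extrapolate quadratically to lambda = -1."""
    rng = np.random.default_rng(seed)
    lams = np.array([0.0] + list(lambdas))
    slopes = []
    for lam in lams:
        fits = []
        for _ in range(n_boot if lam > 0 else 1):
            w_sim = w + rng.normal(0, np.sqrt(lam) * sigma_u, size=len(w))
            fits.append(np.polyfit(w_sim, y, 1)[0])
        slopes.append(np.mean(fits))
    coef = np.polyfit(lams, slopes, 2)        # quadratic extrapolant in lambda
    return float(np.polyval(coef, -1.0))      # evaluate at lambda = -1
```

The naive slope is attenuated toward zero by the measurement error; the extrapolated SIMEX slope largely undoes that attenuation. The GSIMEX of the abstract additionally perturbs the response with correlated noise, which this sketch omits.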

17.
Kaifeng Zhao, Statistics, 2016, 50(6):1276-1289
This paper considers variable selection in additive quantile regression based on the group smoothly clipped absolute deviation (gSCAD) penalty. Although shrinkage variable selection in additive models with least-squares loss has been well studied, quantile regression is sufficiently different from mean regression to deserve a separate treatment. It is shown that the gSCAD estimator can correctly identify the significant components and at the same time maintain the usual convergence rates in estimation. Simulation studies are used to illustrate our method.
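The gSCAD penalty applies the scalar SCAD function of Fan and Li (2001) to group norms. The scalar penalty itself is piecewise: linear (lasso-like) near zero, a quadratic blend in the middle, and constant beyond $a\lambda$, so large coefficients are not shrunk. A direct transcription, for illustration:

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty (Fan & Li, 2001), applied elementwise:
    lam*|b|                                   for |b| <= lam
    (2*a*lam*|b| - b^2 - lam^2) / (2*(a-1))   for lam < |b| <= a*lam
    lam^2 * (a + 1) / 2                       for |b| > a*lam
    """
    b = np.abs(beta)
    return np.where(
        b <= lam,
        lam * b,
        np.where(
            b <= a * lam,
            (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1)),
            lam ** 2 * (a + 1) / 2,
        ),
    )
```

The flat tail is what gives SCAD-type estimators their oracle property: unlike the lasso, large nonzero coefficients incur no additional shrinkage.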

18.
This article considers, for the first time, sequential monitoring procedures that detect possible parameter changes in polynomial trend models. A new class of sequential monitoring procedures based on the generalized fluctuation testing principle is proposed. The asymptotic null distributions and the consistency of the proposed tests are derived and their asymptotic critical values are tabulated. The methods are illustrated and compared in a small simulation study. In particular, we apply the proposed tests to investigate Chinese commodity retail sales and consumer price indexes. Simulations and applications support our method.

19.
We propose a thresholding generalized method of moments (GMM) estimator for misspecified time series moment condition models. This estimator has the following oracle property: its asymptotic behavior is the same as that of an efficient GMM estimator obtained under a priori knowledge of the true model. We propose data-adaptive selection methods for the thresholding parameter using multiple testing procedures. We determine the limiting null distributions of classical parameter tests and show the consistency of the corresponding block-bootstrap tests used in conjunction with thresholding GMM inference. We present the results of a simulation study for a misspecified instrumental variable regression model and for a vector autoregressive model with measurement error. We illustrate the proposed methodology with an application to a real-world dataset.

