Similar literature (20 results)
1.
Many sufficient dimension reduction methods for univariate regression have been extended to multivariate regression. Sliced average variance estimation (SAVE) has the potential to recover more reductive information, and recent developments enable us to test the dimension and predictor effects with distributions commonly used in the literature. In this paper, we aim to extend the functionality of SAVE to multivariate regression. Toward this goal, we propose three new methods. Numerical studies and real data analysis demonstrate that the proposed methods perform well.
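To make the slicing construction concrete, here is a minimal Python sketch of the classical univariate-response SAVE estimator (not the multivariate extensions proposed in the paper); the function name, the equal-count slicing, and the default slice count are illustrative choices.

```python
import numpy as np

def save_directions(X, y, n_slices=5, d=1):
    """Minimal sketch of sliced average variance estimation (SAVE)
    for a univariate response; returns d estimated directions."""
    n, p = X.shape
    # Standardize the predictors: Z = (X - mean) Sigma^{-1/2}
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)          # assumes Sigma is positive definite
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ Sigma_inv_sqrt

    # Slice the response into roughly equal-count slices
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)

    # SAVE kernel matrix: M = sum_h p_h (I - Var(Z | slice h))^2
    M = np.zeros((p, p))
    for idx in slices:
        p_h = len(idx) / n
        V_h = np.cov(Z[idx], rowvar=False)
        A = np.eye(p) - V_h
        M += p_h * (A @ A)

    # Leading eigenvectors of M, mapped back to the original predictor scale
    w, v = np.linalg.eigh(M)
    return Sigma_inv_sqrt @ v[:, ::-1][:, :d]
```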

2.
Sliced regression is an effective dimension reduction method that replaces the original high-dimensional predictors with appropriate low-dimensional projections. It is free from any probabilistic assumption and can exhaustively estimate the central subspace. In this article, we propose to incorporate shrinkage estimation into sliced regression so that variable selection can be achieved simultaneously with dimension reduction. The new method can improve estimation accuracy and achieve better interpretability of the reduced variables. The efficacy of the proposed method is shown through both simulation and real data analysis.

3.
To estimate parameters defined by estimating equations with covariates missing at random, we consider three bias-corrected nonparametric approaches based on inverse probability weighting, regression and augmented inverse probability weighting. However, when the dimension of the covariates is not low, estimation efficiency suffers from the curse of dimensionality. To address this issue, we propose a two-stage estimation procedure that uses dimension-reduced kernel estimation in conjunction with bias-corrected estimating equations. We show that the resulting three estimators are asymptotically equivalent and achieve the desired properties. The impact of dimension reduction on the nonparametric estimation of parameters is also investigated. The finite-sample performance of the proposed estimators is studied through simulation, and an application to an automobile data set is also presented.
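As a hedged illustration of the third (augmented) approach, let $g(\cdot;\theta)$ be the full-data estimating function, $\delta_i$ the indicator that the covariates are observed, $Z_i$ the always-observed variables and $\pi(Z)$ the selection probability (notation ours, not necessarily the authors'). The AIPW estimating equation then takes the generic form

$$\sum_{i=1}^{n}\left\{\frac{\delta_i}{\hat\pi(Z_i)}\,g(Y_i,X_i;\theta)+\Big(1-\frac{\delta_i}{\hat\pi(Z_i)}\Big)\,\hat E\big[g(Y,X;\theta)\mid Z_i\big]\right\}=0,$$

where dropping the augmentation term gives the inverse probability weighting version and replacing $\delta_i/\hat\pi(Z_i)$ by $\delta_i$ gives the regression version; the two-stage proposal estimates $\hat\pi$ and $\hat E[\,\cdot\mid Z]$ nonparametrically after kernel-based dimension reduction.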

4.
Estimation of a general multi-index model comprises determining the number of linear combinations of predictors (the structural dimension) that are related to the response, estimating the loadings of each index vector, selecting the active predictors and estimating the underlying link function. These objectives are often achieved sequentially at different stages of the estimation process. In this study, we propose a unified estimation approach under a semi-parametric model framework that attains these estimation goals simultaneously. The proposed method is more efficient and stable than many existing methods, in which estimation error in the structural dimension may propagate to the estimation of the index vectors and the variable selection stages. A detailed algorithm is provided to implement the proposed method. Comprehensive simulations and a real data analysis illustrate the effectiveness of the proposed method.
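For reference, the general multi-index model referred to here can be written (in notation of our choosing) as

$$Y = g\big(\beta_1^{\top}X,\ \dots,\ \beta_d^{\top}X,\ \varepsilon\big),$$

where $d$ is the structural dimension, the $\beta_k$ are the index vectors whose loadings are to be estimated, predictors with zero loadings in every index are inactive, and $g$ is the unknown link function; the four estimation goals in the abstract correspond to $d$, the $\beta_k$, the support of the $\beta_k$, and $g$.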

5.
We propose in this article a novel dimension reduction method for varying coefficient models. The proposed method exploits the rank-reducible structure of the varying coefficients and hence performs dimension reduction and semiparametric estimation simultaneously. As a result, the new method not only improves estimation accuracy but also facilitates practical interpretation. To determine the structural dimension, a consistent BIC criterion is developed. Numerical experiments are also presented.
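A hedged sketch of the kind of rank-reducible structure being exploited (our notation, not necessarily the authors'): in the varying coefficient model

$$Y=\sum_{j=1}^{p} a_j(U)\,X_j+\varepsilon=\mathbf{a}(U)^{\top}X+\varepsilon,\qquad \mathbf{a}(U)\approx B\,g(U),$$

where $B$ is a $p\times d$ loading matrix with $d<p$ and $g(U)$ is a $d$-vector of unknown functions, only $d$ nonparametric curves need to be estimated, and the columns of $B$ provide the reduced, interpretable directions.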

6.
Traditionally, time series analysis involves building an appropriate model and using either parametric or nonparametric methods to make inference about the model parameters. Motivated by recent developments in dimension reduction for time series, this article presents an empirical application of sufficient dimension reduction (SDR) to nonlinear time series modelling. Here, we use the time series central subspace as a tool for SDR and estimate it using a mutual information index. In particular, to reduce the computational complexity in time series, we propose an efficient method for estimating the minimal dimension and lag using a modified Schwarz–Bayesian criterion when either the dimension or the lag is unknown. Through simulations and real data analysis, the approach presented in this article performs well in autoregression and volatility estimation.
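For concreteness, the time series central subspace used here can be described (in our notation) as the span of the columns of the smallest matrix $B$ such that, for lag $p$,

$$y_t \;\perp\!\!\!\perp\; \mathbf{y}_{t-1}^{(p)} \;\Big|\; B^{\top}\mathbf{y}_{t-1}^{(p)},\qquad \mathbf{y}_{t-1}^{(p)}=(y_{t-1},\dots,y_{t-p})^{\top},$$

so the modified Schwarz–Bayesian criterion jointly selects the lag $p$ and the number of columns of $B$.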

7.
The family of inverse regression estimators recently proposed by Cook and Ni has proven effective in dimension reduction by transforming the high-dimensional predictor vector to its low-dimensional projections. We propose a general shrinkage estimation strategy for the entire inverse regression estimation family that is capable of simultaneous dimension reduction and variable selection. We demonstrate that the new estimators achieve consistency in variable selection without requiring any traditional model, while retaining the root-n estimation consistency of the dimension reduction basis. We also show the effectiveness of the new estimators through both simulation and real data analysis.
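As a hedged illustration of how shrinkage yields simultaneous variable selection (one common formulation, not necessarily the authors' exact criterion): given a root-n consistent basis estimate $\widehat B\in\mathbb{R}^{p\times d}$ from an inverse regression estimator, one rescales its rows by a vector $\alpha\in\mathbb{R}^{p}$,

$$\hat{\alpha}=\arg\min_{\alpha}\ \big\|\,\widehat{B}-\operatorname{diag}(\alpha)\,\widehat{B}\,\big\|^{2}\quad\text{subject to}\quad \sum_{j=1}^{p}|\alpha_j|\le t,$$

so that $\hat\alpha_j=0$ removes predictor $j$ from every direction while the shrunk basis $\operatorname{diag}(\hat\alpha)\,\widehat B$ retains the dimension reduction.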

8.
To reduce the dimension of the predictors without loss of information on the regression, we develop in this paper a sufficient dimension reduction method which we term cumulative Hessian directions. Unlike many other existing sufficient dimension reduction methods, our proposal entirely avoids the selection of tuning parameters such as the number of slices in slicing estimation or the bandwidth in kernel smoothing. We also investigate the asymptotic properties of our proposal when the dimension of the predictors diverges. Illustrations through simulations and an application are presented to demonstrate the efficacy of our proposal and to compare it with existing methods.

9.
We consider estimation in the single-index model where the link function is monotone. For this model, a profile least-squares estimator has been proposed to estimate the unknown link function and index. Although it is natural to propose this procedure, it is still unknown whether it produces index estimates that converge at the parametric rate. We show that this holds if we solve a score equation corresponding to this least-squares problem. Using a Lagrangian formulation, we show how one can solve this score equation without any reparametrization. This makes it easy to solve the score equations in high dimensions. We also compare our method with the effective dimension reduction and the penalized least-squares estimator methods, both available on CRAN as R packages, and with link-free methods, where the covariates are elliptically symmetric.
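Written out, the profile least-squares problem in question (our notation) is

$$\hat{\alpha}=\arg\min_{\|\alpha\|=1}\ \min_{\psi\ \text{monotone}}\ \sum_{i=1}^{n}\big\{Y_i-\psi(\alpha^{\top}X_i)\big\}^{2},$$

where the inner minimization over monotone $\psi$ is an isotonic regression for fixed $\alpha$; the result above says that solving the score equation associated with this criterion, rather than minimizing the profiled criterion directly, yields index estimates that converge at the parametric rate.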

10.
Many model-free dimension reduction methods have been developed for high-dimensional regression data, but little attention has been paid to problems with non-linear confounding. In this paper, we propose an inverse-regression method of dependent-variable transformation for detecting the presence of non-linear confounding. The benefit of using geometrical information from our method is highlighted. A ratio estimation strategy is incorporated in our approach to enhance the interpretability of variable selection. The approach can be implemented not only in principal Hessian directions (PHD) but also in other recently developed dimension reduction methods. Several simulation examples are reported for illustration, and comparisons are made with sliced inverse regression and PHD applied in ignorance of the non-linear confounding. An illustrative application to a real data set is also presented.
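For reference, a minimal sketch of the classical response-based PHD estimator that this proposal builds on (standard PHD, not the transformation and ratio-estimation method of the paper); the function name and the choice of d are illustrative.

```python
import numpy as np

def phd_directions(X, y, d=1):
    """Minimal sketch of response-based principal Hessian directions (PHD)."""
    n, p = X.shape
    # Standardize predictors: Z = (X - mean) Sigma^{-1/2}
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)          # assumes Sigma is positive definite
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ Sigma_inv_sqrt

    # Sample Hessian matrix: average of (y_i - ybar) * z_i z_i^T
    r = y - y.mean()
    M = (Z * r[:, None]).T @ Z / n

    # Directions correspond to eigenvectors with largest |eigenvalue|,
    # mapped back to the original predictor scale
    w, v = np.linalg.eigh(M)
    order = np.argsort(-np.abs(w))
    return Sigma_inv_sqrt @ v[:, order[:d]]
```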

11.
The existence of a dimension reduction (DR) subspace is a common assumption in regression analysis when dealing with high-dimensional predictors. The estimation of such a DR subspace has received considerable attention in the past few years, the most popular method undoubtedly being sliced inverse regression. In this paper, we propose a new procedure for estimating the DR subspace by assuming that the joint distribution of the predictor and the response variables is a finite mixture of distributions. The new method is compared with some classical methods through a simulation study.

12.
The single-index model is a useful regression model. In this paper, we propose a nonconcave penalized least squares method to estimate both the parameters and the link function of the single-index model. Compared with other variable selection and estimation methods, the proposed method can estimate parameters and select variables simultaneously. When the dimension of the parameters in the single-index model is a fixed constant, under some regularity conditions, we demonstrate that the proposed parameter estimators have the so-called oracle property; furthermore, we establish their asymptotic normality and develop a sandwich formula to estimate their standard deviations. Simulation studies and a real data analysis are presented to illustrate the proposed methods.
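As a hedged illustration (SCAD is the canonical nonconcave penalty of Fan and Li; the paper may use this or another nonconcave choice), the penalized criterion has the generic form

$$\min_{\alpha,\,g}\ \sum_{i=1}^{n}\big\{Y_i-g(\alpha^{\top}X_i)\big\}^{2}+n\sum_{j=1}^{p}p_{\lambda}(|\alpha_j|),$$

where for SCAD the penalty is defined through its derivative $p_{\lambda}'(t)=\lambda\big\{\mathbf 1(t\le\lambda)+\tfrac{(a\lambda-t)_{+}}{(a-1)\lambda}\mathbf 1(t>\lambda)\big\}$ with $a>2$ (commonly $a=3.7$); the nonconcavity is what permits nearly unbiased estimation of large coefficients together with the oracle property.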

13.
Sliced Inverse Regression (SIR) is an effective method for dimension reduction in high-dimensional regression problems. The original method, however, requires the inversion of the predictors' covariance matrix. In the case of collinearity among the predictors, or of sample sizes that are small compared with the dimension, the inversion is not possible and a regularization technique has to be used. Our approach is based on a Fisher Lecture given by R.D. Cook, in which it is shown that SIR axes can be interpreted as solutions of an inverse regression problem. We propose to introduce a Gaussian prior distribution on the unknown parameters of the inverse regression problem in order to regularize their estimation. We show that some existing SIR regularizations fit into our framework, which permits a global understanding of these methods. Three new priors are proposed, leading to new regularizations of the SIR method. A comparison on simulated data as well as an application to the estimation of Mars surface physical properties from hyperspectral images are provided.
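A hedged illustration of the simplest regularization of this type (a ridge-type correction corresponding to an isotropic Gaussian prior; the three new priors in the paper are more structured): classical SIR takes the leading eigenvectors of $\widehat\Sigma^{-1}\widehat\Gamma$, where $\widehat\Gamma$ is the between-slice covariance of the slice means of $X$, and the regularized version replaces the inverse with

$$\hat\beta_1,\dots,\hat\beta_d\ =\ \text{leading eigenvectors of}\ \big(\widehat\Sigma+\tau I_p\big)^{-1}\widehat\Gamma,$$

which remains well defined under collinearity or when the sample size is smaller than the dimension.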

14.
Searching for an effective dimension reduction space is an important problem in regression, especially for high-dimensional data. We propose an adaptive approach based on semiparametric models, which we call the (conditional) minimum average variance estimation (MAVE) method, within quite a general setting. The MAVE method has the following advantages. Most existing methods must undersmooth the nonparametric link function estimator to achieve a faster rate of consistency for the estimator of the parameters (than for that of the nonparametric function). In contrast, a faster consistency rate can be achieved by the MAVE method even without undersmoothing the nonparametric link function estimator. The MAVE method is applicable to a wide range of models, with fewer restrictions on the distribution of the covariates, to the extent that even time series can be included. Because of the faster rate of consistency for the parameter estimators, it is possible to estimate the dimension of the space consistently. The relationship of the MAVE method with other methods is also investigated. In particular, a simple outer product gradient estimator is proposed as an initial estimator. In addition to theoretical results, we demonstrate the efficacy of the MAVE method for high-dimensional data sets through simulation. Two real data sets are analysed using the MAVE approach.
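For reference, a standard statement of the (conditional) MAVE criterion minimizes local linear approximation errors jointly over the directions and the local fits, essentially

$$\min_{B:\ B^{\top}B=I_d}\ \sum_{j=1}^{n}\ \min_{a_j,\,b_j}\ \sum_{i=1}^{n} w_{ij}\,\big\{y_i-a_j-b_j^{\top}B^{\top}(x_i-x_j)\big\}^{2},$$

where the $w_{ij}$ are kernel weights centred at $x_j$ (refined versions weight in the reduced variable $B^{\top}x$); alternating between the inner weighted least-squares fits and the update of $B$ gives the algorithm, and the outer product of the local gradient estimates provides the initial estimator mentioned above.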

15.
Andreas Artemiou, Statistics (2013), 47(5): 1037–1051
In this paper, we combine adaptively weighted large-margin classifiers with Support Vector Machine (SVM)-based dimension reduction methods to create dimension reduction methods that are robust to the presence of extreme outliers. We discuss estimation and asymptotic properties of the algorithm. The good performance of the new algorithm is demonstrated through simulations and real data analysis.

16.
Most methods for survival prediction from high-dimensional genomic data combine the Cox proportional hazards model with some technique of dimension reduction, such as partial least squares regression (PLS). Applying PLS to the Cox model is not entirely straightforward, and multiple approaches have been proposed. The method of Park et al. (Bioinformatics 18(Suppl. 1):S120–S127, 2002) uses a reformulation of the Cox likelihood as a Poisson-type likelihood, thereby enabling estimation by iteratively reweighted partial least squares for generalized linear models. We propose a modification of the method of Park et al. (2002) such that estimates of the baseline hazard and the gene effects are obtained in separate steps. The resulting method has several advantages over the method of Park et al. (2002) and other existing Cox PLS approaches, as it allows for estimation of survival probabilities for new patients, enables a less memory-demanding estimation procedure, and allows for incorporation of lower-dimensional non-genomic variables such as disease grade and tumor thickness. We also propose to combine our Cox PLS method with an initial gene selection step in which genes are ordered by their Cox score and only the highest-ranking k% of the genes are retained, obtaining a so-called supervised partial least squares regression method. In simulations, both the unsupervised and the supervised version outperform other Cox PLS methods.
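For context, the Cox model and the supervised screening step can be summarized as follows (standard notation; the Poisson reformulation and the separate baseline-hazard step are as described above and not restated here). With event times $t_i$, event indicators $\delta_i$ and risk sets $R(t_i)$, the Cox partial likelihood is

$$L(\beta)=\prod_{i:\,\delta_i=1}\frac{\exp(x_i^{\top}\beta)}{\sum_{j\in R(t_i)}\exp(x_j^{\top}\beta)},$$

and in the supervised variant each gene is first scored by a univariate Cox score (e.g. the score test statistic from fitting the Cox model to that gene alone), the genes are ranked, and only the top $k\%$ enter the PLS step.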

17.
In this article, a new method named the cumulative slicing principal fitted component (CUPFC) model is proposed to conduct sufficient dimension reduction and prediction in regression. Based on the classical PFC methods, the CUPFC avoids selecting parameters such as the specific form of the basis functions or the number of slices in slicing estimation. We develop the estimator of the central subspace in the CUPFC method under three error-term structures and establish its consistency. Simulations investigate the effectiveness of the new method in prediction and reduction estimation relative to other competitors. The results indicate that the proposed method generally outperforms the existing PFC methods no matter how the predictors are truly related to the response. An application to real data also verifies the validity of the proposed method.
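For reference, the classical PFC model that CUPFC builds on (Cook's principal fitted components; our notation) specifies the inverse regression of the predictors on the response as

$$X_{y}=\mu+\Gamma\beta\, f(y)+\varepsilon,\qquad \varepsilon\sim N(0,\Delta),$$

where $f(y)$ is a user-chosen vector of basis functions and $\Gamma\in\mathbb{R}^{p\times d}$ determines the central subspace (through $\Delta^{-1}\Gamma$ in the general case); the three error-term structures in the abstract presumably correspond to different assumptions on $\Delta$, and the cumulative-slicing idea removes the need to choose $f$ or a number of slices.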

18.
Variable selection is a very important tool when dealing with high-dimensional data. However, most popular variable selection methods are model based, and they may give misleading results when the model assumptions are not satisfied. Sufficient dimension reduction provides a general framework for model-free variable selection methods. In this paper, we propose a model-free variable selection method via sufficient dimension reduction that incorporates grouping information into the selection procedure for multi-population data. Theoretical properties of our selection methods are also discussed. Simulation studies suggest that our method greatly outperforms methods that ignore the grouping information.

19.
In this article, we investigate a new procedure for the estimation of a linear quantile regression with possibly right-censored responses. Contrary to the main literature on the subject, we propose in this context to circumvent the formulation of conditional quantiles through the so-called “check” loss function that stems from the influential work of Koenker and Bassett (1978). Instead, we suggest estimating the quantile coefficients by minimizing an alternative measure of distance. In fact, our approach can be viewed as a generalization, within a parametric regression framework, of the technique of inverting the conditional distribution of the response given the covariates. This is motivated by the fact that the main literature for censored data already relies on some nonparametric conditional distribution estimation as well. Ideas from effective dimension reduction are then exploited in order to accommodate higher-dimensional settings in this context. Extensive numerical results suggest that such an approach provides a strongly competitive alternative to the classical approaches based on the check function, for both complete and censored observations. From a theoretical perspective, both consistency and asymptotic normality of the proposed estimator for linear regression are obtained under classical regularity conditions. As a by-product, several asymptotic results on a “double-kernel” version of the conditional Kaplan–Meier distribution estimator based on effective dimension reduction, and on its corresponding density estimator, are also obtained and may be of interest in their own right. A brief application of our procedure to quasar data further highlights its relevance for quantile regression estimation with censored data.
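For reference, the check-loss formulation being avoided here is the Koenker–Bassett estimator

$$\hat\beta(\tau)=\arg\min_{\beta}\sum_{i=1}^{n}\rho_{\tau}\big(Y_i-X_i^{\top}\beta\big),\qquad \rho_{\tau}(u)=u\,\big\{\tau-\mathbf 1(u<0)\big\},$$

whereas the proposal instead builds on inverting an estimate of the conditional distribution, reading off $\hat q_{\tau}(x)=\inf\{y:\widehat F(y\mid x)\ge\tau\}$ from a dimension-reduced, double-kernel Kaplan–Meier-type estimator, and estimating the linear coefficients by minimizing an alternative measure of distance based on that inversion.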

20.
We study quantile estimation methods for distortion measurement error data, in which the variables are unobserved and are distorted by additive errors given by unknown functions of an observable confounding variable. After calibrating the error-prone variables, we propose a quantile regression estimation procedure and a composite quantile estimation procedure. Asymptotic properties of the proposed estimators are established, and we also investigate the asymptotic relative efficiency compared with the least-squares estimator. Simulation studies are conducted to evaluate the performance of the proposed methods, and a real data set is analyzed as an illustration.
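For concreteness, one common additive-distortion formulation consistent with the description above (our notation; the identification conditions in the paper may differ) observes

$$\widetilde Y=Y+\phi(U),\qquad \widetilde X_r=X_r+\psi_r(U),\quad r=1,\dots,p,\qquad E\{\phi(U)\}=E\{\psi_r(U)\}=0,$$

so that calibration amounts to nonparametrically estimating $\phi$ and the $\psi_r$ as functions of the observed confounder $U$ (e.g. by smoothing $\widetilde Y$ and $\widetilde X_r$ against $U$) and subtracting them before running the quantile or composite quantile regression on the calibrated variables.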
