Related Articles
 20 related articles found
1.
We propose a new method for dimension reduction in regression using the first two inverse moments. We develop corresponding weighted chi-squared tests for the dimension of the regression. The proposed method considers linear combinations of sliced inverse regression (SIR) and a method based on a new candidate matrix designed to recover the entire inverse second-moment subspace. The optimal combination may be selected based on the p-values derived from the dimension tests. Theoretically, the proposed method, like sliced average variance estimation (SAVE), is more capable of recovering the complete central dimension reduction subspace than SIR and principal Hessian directions (pHd). Therefore it can substitute for SIR, pHd, SAVE, or any linear combination of them at a theoretical level. A simulation study indicates that the proposed method may have consistently greater power than SIR, pHd, and SAVE.
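For orientation, a minimal Python/NumPy sketch of the basic SIR estimator that such combined methods build on; the slicing scheme, slice count, and simulated data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_directions=2):
    """Basic sliced inverse regression (SIR): estimate directions
    spanning (part of) the central subspace."""
    n, p = X.shape
    # Standardize the predictors: Z = (X - mean) @ Sigma^{-1/2}
    Xc = X - X.mean(axis=0)
    Sigma = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt
    # Slice the response into roughly equal-count slices
    order = np.argsort(y)
    # Candidate matrix: weighted outer products of within-slice means of Z
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m_h = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m_h, m_h)
    # Leading eigenvectors of M, mapped back to the original predictor scale
    _, vecs = np.linalg.eigh(M)
    return inv_sqrt @ vecs[:, ::-1][:, :n_directions]

# Illustrative use on a single-index model y = (x1 + x2)^3 + noise
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1]) ** 3 + 0.1 * rng.normal(size=500)
print(sir_directions(X, y, n_directions=1))
```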

2.
The idea of dimension reduction without loss of information can be quite helpful for guiding the construction of summary plots in regression without requiring a prespecified model. Central subspaces are designed to capture all the information for the regression and to provide a population structure for dimension reduction. Here, we introduce the central kth-moment subspace to capture information from the mean, variance and so on, up to the kth conditional moment of the regression. New methods are studied for estimating these subspaces. Connections with sliced inverse regression are established, and examples illustrating the theory are presented.
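As background, the subspaces involved can be characterized roughly as follows (a paraphrase in generic notation, not necessarily the paper's exact definitions): the central subspace is the smallest subspace \(\mathcal{S}\) satisfying the first condition, and the central kth-moment subspace relaxes it to the first k conditional moments.

\[
Y \perp\!\!\!\perp X \mid P_{\mathcal{S}} X \qquad \text{(central subspace)}
\]
\[
E(Y^{j} \mid X) = E(Y^{j} \mid P_{\mathcal{S}} X), \quad j = 1, \dots, k \qquad \text{(central $k$th-moment subspace)}
\]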

3.
Sliced average variance estimation is one of many methods for estimating the central subspace. It was shown to be more comprehensive than sliced inverse regression in the sense that it consistently estimates the central subspace under mild conditions, whereas sliced inverse regression may estimate only a proper subset of the central subspace. In this paper we extend this method to regressions with qualitative predictors. We also provide tests of dimension and a marginal coordinate hypothesis test. We apply the method to a data set concerning lakes infested by Eurasian watermilfoil, and compare this new method to the partial inverse regression estimator.

4.

Parameter reduction can enable otherwise infeasible design and uncertainty studies with modern computational science models that contain several input parameters. In statistical regression, techniques for sufficient dimension reduction (SDR) use data to reduce the predictor dimension of a regression problem. A computational scientist hoping to use SDR for parameter reduction encounters a problem: a computer prediction is best represented by a deterministic function of the inputs, so data consisting of computer simulation queries fail to satisfy the SDR assumptions. To address this problem, we interpret the SDR methods sliced inverse regression (SIR) and sliced average variance estimation (SAVE) as estimating the directions of a ridge function, which is a composition of a low-dimensional linear transformation with a nonlinear function. Within this interpretation, SIR and SAVE estimate matrices of integrals whose column spaces are contained in the span of the ridge directions; we analyze and numerically verify convergence of these column spaces as the number of computer model queries increases. Moreover, we show example functions that are not ridge functions but whose inverse conditional moment matrices are low-rank. Consequently, the computational scientist should beware when using SIR and SAVE for parameter reduction, since SIR and SAVE may mistakenly suggest that truly important directions are unimportant.
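In the sense used here (a standard formulation, paraphrased in generic notation), a ridge function takes the form

\[
f(x) = g(A^{\top} x), \qquad A \in \mathbb{R}^{m \times n}, \ n < m,
\]

where the columns of A span the ridge directions and g is a nonlinear function of the n-dimensional projection.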


5.

Sufficient dimension reduction (SDR) provides a framework for reducing the predictor space dimension in statistical regression problems. We consider SDR in the context of dimension reduction for deterministic functions of several variables such as those arising in computer experiments. In this context, SDR can reveal low-dimensional ridge structure in functions. Two algorithms for SDR—sliced inverse regression (SIR) and sliced average variance estimation (SAVE)—approximate matrices of integrals using a sliced mapping of the response. We interpret this sliced approach as a Riemann sum approximation of the particular integrals arising in each algorithm. We employ the well-known tools from numerical analysis—namely, multivariate numerical integration and orthogonal polynomials—to produce new algorithms that improve upon the Riemann sum-based numerical integration in SIR and SAVE. We call the new algorithms Lanczos–Stieltjes inverse regression (LSIR) and Lanczos–Stieltjes average variance estimation (LSAVE) due to their connection with Stieltjes’ method—and Lanczos’ related discretization—for generating a sequence of polynomials that are orthogonal with respect to a given measure. We show that this approach approximates the desired integrals, and we study the behavior of LSIR and LSAVE with two numerical examples. The quadrature-based LSIR and LSAVE eliminate the first-order algebraic convergence rate bottleneck resulting from the Riemann sum approximation, thus enabling high-order numerical approximations of the integrals when appropriate. Moreover, LSIR and LSAVE perform as well as the best-case SIR and SAVE implementations (e.g., adaptive partitioning of the response space) when low-order numerical integration methods (e.g., simple Monte Carlo) are used.
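To illustrate the convergence-rate contrast in one dimension (a toy Python sketch of Riemann-sum versus Gaussian-quadrature integration, not the LSIR/LSAVE algorithms themselves; the integrand and node counts are illustrative assumptions):

```python
import numpy as np

def riemann(f, n):
    # Left-endpoint Riemann sum on [-1, 1]: first-order accuracy
    x = -1 + np.arange(n) * (2 / n)
    return np.sum(f(x)) * (2 / n)

def gauss_legendre(f, n):
    # n-point Gauss-Legendre rule on [-1, 1]: exact for polynomials of degree <= 2n - 1
    x, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w * f(x))

f = np.exp                           # smooth test integrand
exact = np.exp(1) - np.exp(-1)       # integral of exp(x) over [-1, 1]
for n in (5, 10, 20):
    print(n, abs(riemann(f, n) - exact), abs(gauss_legendre(f, n) - exact))
```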


6.
This paper proposes a general dimension-reduction method targeting the partial central subspace recently introduced by Chiaromonte, Cook & Li. The dependence need not be confined to particular conditional moments, nor does the method place the restrictions on the predictors that are necessary for methods like partial sliced inverse regression. The paper focuses on a partially linear single-index model; however, the underlying idea is applicable more generally. Illustrative examples are presented.
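For reference, a partially linear single-index model can be written in generic notation (a paraphrase, not necessarily the paper's exact formulation) as

\[
Y = g(\alpha^{\top} X_{1}) + \beta^{\top} X_{2} + \varepsilon, \qquad E(\varepsilon \mid X_{1}, X_{2}) = 0,
\]

where g is an unknown link function applied to a single index of \(X_{1}\) and the \(X_{2}\) part enters linearly.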

7.
A new method is proposed for estimating the dimension of a regression at the outset of an analysis. A linear subspace spanned by projections of the regressor vector X, which contains part or all of the modelling information for the regression of a vector Y on X, and its dimension are estimated via parametric inverse regression. Smooth parametric curves are fitted to the p inverse regressions via a multivariate linear model. No restrictions are placed on the distribution of the regressors. The estimate of the dimension of the regression is based on optimal estimation procedures. A simulation study shows the method to be more powerful than sliced inverse regression in some situations.

8.
The existence of a dimension reduction (DR) subspace is a common assumption in regression analysis when dealing with high-dimensional predictors. The estimation of such a DR subspace has received considerable attention in the past few years, the most popular method undoubtedly being sliced inverse regression. In this paper, we propose a new procedure for estimating the DR subspace by assuming that the joint distribution of the predictor and the response variables is a finite mixture of distributions. The new method is compared with some classical methods through a simulation study.

9.

Sliced average variance estimation (SAVE) is one of the best methods for estimating the central dimension-reduction (CDR) subspace in semiparametric regression models when the covariates are normal. Recently, SAVE has been used to analyze DNA microarray data, especially for tumor classification, but its most important drawback is the normality requirement on the covariates. In this article, the asymptotic behavior of estimates of the CDR space under varying slice sizes is studied through simulations when the covariates are non-normal but satisfy the linearity condition, and when the covariates are slightly perturbed from the normal distribution; we observe that serious errors may occur when the normality assumption is violated.

10.
SAVE and PHD are effective methods for dimension reduction problems. Both methods are based on two assumptions: the linearity condition and the constant covariance condition. But when the constant covariance condition fails, even if the linearity condition holds, SAVE and PHD often pick directions outside the central subspace (CS) or the central mean subspace (CMS). In this article, we generalize SAVE and PHD under weaker conditions. This generalization makes it possible to obtain correct estimates of the CS and the CMS.
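For reference, with B a basis matrix of the target subspace, the two conditions are usually stated as follows (standard formulations, paraphrased in generic notation):

\[
\text{linearity condition: } E(X \mid B^{\top} X) \text{ is a linear function of } B^{\top} X;
\]
\[
\text{constant covariance condition: } \operatorname{Var}(X \mid B^{\top} X) \text{ is a non-random matrix.}
\]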

11.
This paper discusses visualization methods for discriminant analysis. It does not address numerical methods for classification per se, but rather focuses on graphical methods that can be viewed as pre-processors, aiding the analyst's understanding of the data and the choice of a final classifier. The methods are adaptations of recent results in dimension reduction for regression, including sliced inverse regression and sliced average variance estimation. A permutation test is suggested as a means of determining dimension, and examples are given throughout the discussion.
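As a generic illustration of how such a permutation test is organized (a minimal Python sketch; the statistic below is a placeholder, not the dimension statistic used in the paper):

```python
import numpy as np

def permutation_pvalue(statistic, X, y, n_perm=999, seed=0):
    """Generic permutation p-value: permute y to break its link with X
    and compare the observed statistic with the permutation distribution."""
    rng = np.random.default_rng(seed)
    observed = statistic(X, y)
    perm_stats = np.array([statistic(X, rng.permutation(y)) for _ in range(n_perm)])
    # One-sided p-value with the usual +1 correction
    return (1 + np.sum(perm_stats >= observed)) / (n_perm + 1)

# Illustrative statistic: squared correlation between y and the first predictor
stat = lambda X, y: np.corrcoef(X[:, 0], y)[0, 1] ** 2
X = np.random.default_rng(1).normal(size=(200, 3))
y = X[:, 0] + 0.5 * np.random.default_rng(2).normal(size=200)
print(permutation_pvalue(stat, X, y))
```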

12.
Many sufficient dimension reduction methods for univariate regression have been extended to multivariate regression. Sliced average variance estimation (SAVE) has the potential to recover more reductive information, and recent developments enable us to test the dimension and predictor effects with distributions commonly used in the literature. In this paper, we aim to extend the functionality of SAVE to multivariate regression. Toward this goal, we propose three new methods. Numerical studies and real data analysis demonstrate that the proposed methods perform well.

13.
Sliced regression is an effective dimension reduction method that replaces the original high-dimensional predictors with an appropriate low-dimensional projection. It is free from any probabilistic assumption and can exhaustively estimate the central subspace. In this article, we propose to incorporate shrinkage estimation into sliced regression so that variable selection can be achieved simultaneously with dimension reduction. The new method can improve the estimation accuracy and achieve better interpretability for the reduced variables. The efficacy of the proposed method is shown through both simulation and real data analysis.

14.
Sliced average variance estimation (SAVE) is a method for constructing sufficient summary plots in regressions with many predictors. The summary plots are designed to capture all the information about the response that is available from the predictors, and do not require a model for their construction. They can be particularly helpful for guiding the choice of a first model. Methodological aspects of SAVE are studied in this article.
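For orientation, a minimal Python/NumPy sketch of the basic SAVE candidate matrix (the slice count, slicing scheme, and data are illustrative assumptions, not this paper's methodological refinements):

```python
import numpy as np

def save_directions(X, y, n_slices=10, n_directions=2):
    """Basic sliced average variance estimation (SAVE)."""
    n, p = X.shape
    # Standardize the predictors
    Xc = X - X.mean(axis=0)
    Sigma = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt
    # Slice by the response; SAVE averages (I - Var(Z | slice))^2 over slices
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        Vh = np.cov(Z[idx], rowvar=False)
        D = np.eye(p) - Vh
        M += (len(idx) / n) * (D @ D)
    # Leading eigenvectors of M, mapped back to the original predictor scale
    _, vecs = np.linalg.eigh(M)
    return inv_sqrt @ vecs[:, ::-1][:, :n_directions]

# Example with a symmetric signal y = x1^2 that SIR misses but SAVE can recover
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=1000)
print(save_directions(X, y, n_directions=1))
```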

15.
Logistic regression using conditional maximum likelihood estimation has recently gained widespread use. Many of the applications of logistic regression have been in situations in which the independent variables are collinear. It is shown that collinearity among the independent variables seriously affects the conditional maximum likelihood estimator, in that the variance of this estimator is inflated in much the same way that collinearity inflates the variance of the least squares estimator in multiple regression. Drawing on the similarities between multiple and logistic regression, several alternative estimators, which reduce the effect of the collinearity and are easy to obtain in practice, are suggested and compared in a simulation study.
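One widely used remedy of this general kind is ridge-type (L2-penalized) logistic regression; the scikit-learn sketch below is an illustration under that assumption and is not necessarily one of the estimators compared in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)          # nearly collinear with x1
X = np.column_stack([x1, x2])
prob = 1 / (1 + np.exp(-(x1 + x2)))
y = rng.binomial(1, prob)

# A nearly unpenalized fit versus a ridge-type (L2) shrinkage fit
mle_like = LogisticRegression(C=1e6, max_iter=5000).fit(X, y)  # essentially maximum likelihood
ridge = LogisticRegression(C=1.0, max_iter=5000).fit(X, y)     # shrinkage stabilizes the coefficients
print("Near-ML coefficients:", mle_like.coef_)
print("Ridge coefficients:  ", ridge.coef_)
```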

16.
We consider a nonlinear censored regression problem with a vector of predictors. With censoring, high-dimensional regression analysis becomes much more complicated. Since censoring can cause severe bias in estimation, a modification to adjust for such bias is needed. Based on a weight adjustment, we develop a modification of sliced average variance estimation for estimating the lifetime central subspace without requiring a prespecified parametric model. Our proposed method preserves as much regression information as possible. Simulation results are reported, and comparisons are made with the sliced inverse regression of Li, Wang and Chen (1999, Dimension reduction for censored regression data, Ann. Statist. 27:1–23).

17.
Many model-free dimension reduction methods have been developed for high-dimensional regression data but have not paid much attention to problems with non-linear confounding. In this paper, we propose an inverse-regression method of dependent variable transformation for detecting the presence of non-linear confounding. The benefit of using geometrical information from our method is highlighted. A ratio estimation strategy is incorporated in our approach to enhance the interpretation of variable selection. This approach can be implemented not only in principal Hessian directions (PHD) but also in other recently developed dimension reduction methods. Several simulation examples are reported for illustration, and comparisons are made with sliced inverse regression and PHD applied in ignorance of the non-linear confounding. An illustrative application to a real data set is also presented.

18.
We investigate the asymptotic behaviour of the recursive Nadaraya–Watson estimator for the estimation of the regression function in a semiparametric regression model. On the one hand, we make use of the recursive version of the sliced inverse regression method for the estimation of the unknown parameter of the model. On the other hand, we implement a recursive Nadaraya–Watson procedure for the estimation of the regression function which takes into account the previous estimation of the parameter of the semiparametric regression model. We establish the almost sure convergence as well as the asymptotic normality for our Nadaraya–Watson estimate. We also illustrate our semiparametric estimation procedure on simulated data.
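As a generic illustration of a recursive Nadaraya–Watson update (a one-dimensional Python sketch with a Gaussian kernel and an assumed bandwidth sequence; not the authors' semiparametric procedure, which first estimates the index direction recursively):

```python
import numpy as np

def recursive_nw(x_obs, y_obs, grid, c=1.0):
    """Recursive Nadaraya-Watson estimate of E[Y | X = x] on a grid.
    Each new observation updates running sums using its own bandwidth h_i."""
    num = np.zeros_like(grid)
    den = np.zeros_like(grid)
    for i, (xi, yi) in enumerate(zip(x_obs, y_obs), start=1):
        h = c * i ** (-1 / 5)                      # shrinking bandwidth sequence
        k = np.exp(-0.5 * ((grid - xi) / h) ** 2)  # Gaussian kernel weights
        num += k * yi
        den += k
    return num / np.maximum(den, 1e-12)

# Illustrative use: recover m(x) = sin(x) from noisy data
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=1000)
y = np.sin(x) + 0.2 * rng.normal(size=1000)
grid = np.linspace(-2.5, 2.5, 11)
print(np.round(recursive_nw(x, y, grid), 2))
```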

19.
Log-normal linear regression models are popular in many fields of research. Bayesian estimation of the conditional mean of the dependent variable is problematic, as many choices of the prior for the variance (on the log-scale) lead to posterior distributions with no finite moments. We propose a generalized inverse Gaussian prior for this variance and derive the conditions on the prior parameters that yield posterior distributions of the conditional mean of the dependent variable with finite moments up to a pre-specified order. The conditions depend on one of the three parameters of the suggested prior; the other two have an influence on inferences for small and medium sample sizes. A second goal of this paper is to discuss how to choose these parameters according to different criteria, including the optimization of frequentist properties of posterior means.
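The difficulty can be traced to the log-normal conditional mean (a standard identity, written here in generic notation):

\[
\log Y \mid x \sim N(x^{\top}\beta, \ \sigma^{2})
\quad\Longrightarrow\quad
E(Y \mid x) = \exp\!\bigl(x^{\top}\beta + \tfrac{1}{2}\sigma^{2}\bigr),
\]

so posterior moments of \(E(Y \mid x)\) involve posterior expectations of \(\exp(k\sigma^{2}/2)\), which heavy-tailed priors for \(\sigma^{2}\) (for example, the usual inverse-gamma) fail to keep finite.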

20.
In the area of sufficient dimension reduction, two structural conditions are often assumed: the linearity condition, which is close to assuming ellipticity of the underlying distribution of the predictors, and the constant variance condition, which comes close to assuming multivariate normality of the predictors. Imposing these conditions is considered a necessary trade-off for overcoming the “curse of dimensionality”. However, it is very hard to check whether these conditions hold. When they are violated, methods such as marginal transformation and re-weighting have been suggested so that the data fulfill them approximately. In this article, we instead assume an independence condition between the projected predictors and their orthogonal complements, which ensures that the commonly used inverse regression methods identify the central subspace of interest. The independence condition can be checked by a gridded chi-square test. Thus, we extend the scope of many inverse regression methods and broaden their applicability in the literature. Simulation studies and an application to the car price data are presented for illustration.
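As a rough illustration of checking such an independence condition with a gridded chi-square test (a simplified Python sketch; the direction, grid size, and data are illustrative assumptions, and this is not necessarily the exact test proposed in the paper):

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
beta = np.array([1.0, 0.0])            # assumed estimated direction (illustrative)
proj = X @ beta                        # projected predictor
comp = X @ np.array([0.0, 1.0])        # a direction in the orthogonal complement

# Grid the two coordinates into a 5 x 5 contingency table and test independence
table, _, _ = np.histogram2d(proj, comp, bins=5)
chi2, pvalue, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p-value = {pvalue:.3f}")
```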
