Similar Literature
20 similar documents found.
1.
We present a novel approach to sufficient dimension reduction for the conditional kth moments in regression. The approach provides a computationally feasible test for the dimension of the central kth-moment subspace. In addition, we can test predictor effects without assuming any model. All test statistics proposed in the approach have asymptotic chi-squared distributions.

2.
Jae Keun Yoo, Statistics, 2018, 52(2): 409–425
In this paper, a model-based approach to reducing the dimension of the response variables in multivariate regression is newly proposed, following the response dimension reduction framework developed by Yoo and Cook [Response dimension reduction for the conditional mean in multivariate regression. Comput Statist Data Anal. 2008;53:334–343]. The related dimension reduction subspace is estimated by maximum likelihood, assuming an additive error. In the new approach, the linearity condition assumed for the methodological development in Yoo and Cook (2008) is understood through the covariance matrix of the random error. Numerical studies show potential advantages of the proposed approach over Yoo and Cook (2008), and a real data example is presented for illustration.

3.
To characterize the dependence of a response on covariates of interest, a monotonic structure is linked to a multivariate polynomial transformation of the central subspace (CS) directions with unknown structural degree and dimension. Under a very general semiparametric model formulation, such a sufficient dimension reduction (SDR) score is shown to exist and to be optimal and unique up to scale and location in the defined concordance probability function. In light of these properties and its single-index representation, two types of concordance-based generalized Bayesian information criteria are constructed to estimate the optimal SDR score and the maximum concordance index, and the estimation criteria are carried out by effective computational procedures. Generally speaking, the outer-product-of-gradients estimation in the first approach has an advantage in computational efficiency, while the parameterization system in the second approach greatly reduces the number of parameters to estimate. Unlike most existing SDR approaches, only one CS direction is required to be continuous in the proposals. Moreover, the consistency of the structural degree and dimension estimators and the asymptotic normality of the optimal SDR score and maximum concordance index estimators are established under suitable conditions. The performance and practicality of the methodology are investigated through simulations and empirical illustrations.

4.
Joint modeling of recurrent and terminal events has attracted considerable interest and extensive investigation by many authors. Existing studies usually assume low-dimensional covariates, an assumption that is inapplicable in many practical situations. In this paper, we consider a partial sufficient dimension reduction approach for a joint model with high-dimensional covariates. Simulations as well as three real data applications are presented to confirm and assess the performance of the proposed model and approach.

5.
Based on the theories of sliced inverse regression (SIR) and reproducing kernel Hilbert spaces (RKHS), a new approach, RDSIR (RKHS-based double SIR), is proposed for nonlinear dimension reduction with survival data. An isometric isomorphism is constructed based on the RKHS property, so that the nonlinear function in the RKHS can be represented by the inner product of two elements residing in the isomorphic feature space. Because survival data are censored, double slicing is used to estimate a weight function that adjusts for the censoring bias. The nonlinear sufficient dimension reduction (SDR) subspace is then estimated by a generalized eigendecomposition problem, and the asymptotic property of the estimator is established via perturbation theory. Finally, the performance of RDSIR is illustrated on simulated and real data. The numerical results show that RDSIR is comparable with linear SDR methods and, most importantly, can effectively extract nonlinearity from survival data.
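The RDSIR estimator itself involves double slicing and an RKHS isometry that are beyond a short example, but the core idea of running SIR on kernel features rather than on X itself can be illustrated with an approximate kernel map. The sketch below is a minimal, hypothetical illustration using scikit-learn's RBFSampler and a plain SIR step; it ignores censoring (no double slicing) and is not the authors' estimator.

```python
# Minimal sketch: nonlinear SDR by running a plain SIR on approximate RBF
# kernel features. NOT the RDSIR estimator: censoring is ignored and the
# RKHS is approximated by random Fourier features.
import numpy as np
from scipy.linalg import eigh
from sklearn.kernel_approximation import RBFSampler

def sir_on_features(Z, y, n_slices=10, d=1):
    """Basic sliced inverse regression applied to a feature matrix Z."""
    n, p = Z.shape
    Zc = Z - Z.mean(axis=0)
    Sigma = Zc.T @ Zc / n + 1e-6 * np.eye(p)       # regularized covariance
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):   # slices of the response
        m = Zc[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    evals, evecs = eigh(M, Sigma)                  # generalized eigenproblem
    return evecs[:, np.argsort(evals)[::-1][:d]]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)   # nonlinear single index

feat = RBFSampler(gamma=0.5, n_components=100, random_state=0)
Z = feat.fit_transform(X)                          # approximate kernel features
B = sir_on_features(Z, y, n_slices=10, d=1)
score = Z @ B                                      # estimated nonlinear SDR score
```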

6.
Jae Keun Yoo, Statistics, 2016, 50(5): 1086–1099
The purpose of this paper is to define the central informative predictor subspace, which contains the central subspace, and to develop methods for estimating the former. A potential advantage of the proposed methods is that they require none of the linearity, constant variance, or coverage conditions in their methodological development; the central informative predictor subspace therefore allows the central subspace to be restored exhaustively even when these conditions fail. Numerical studies confirm the theory, and real data analyses are presented.

7.
8.
Principal fitted component (PFC) models are a class of likelihood-based inverse regression methods that yield a so-called sufficient reduction of the random p-vector of predictors X given the response Y. Assuming that a large number of the predictors carry no information about Y, we aim to obtain an estimate of the sufficient reduction that 'purges' these irrelevant predictors and thus selects the most useful ones. We devise a procedure that uses observed significance values from the univariate fittings to yield a sparse PFC, a purged estimate of the sufficient reduction. The performance of the method is compared with that of penalized forward linear regression models for variable selection in high-dimensional settings.
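A rough, hypothetical sketch of the screening-then-PFC idea: regress each predictor on a polynomial basis of the response, keep predictors whose univariate F-test p-value is small, and fit an isotropic PFC on the survivors. The cubic basis, the 0.05 cutoff, and the isotropic error assumption are illustrative choices, not the authors' exact procedure.

```python
# Hypothetical sketch of a sparse PFC: screen predictors by the univariate
# significance of the inverse regressions X_j ~ f(Y), then fit an isotropic
# PFC on the retained predictors.
import numpy as np
from scipy import stats

def sparse_pfc(X, y, d=1, alpha=0.05):
    n, p = X.shape
    F = np.column_stack([y, y ** 2, y ** 3])      # basis f(Y), an illustrative choice
    F = F - F.mean(axis=0)
    Xc = X - X.mean(axis=0)
    P = F @ np.linalg.solve(F.T @ F, F.T)         # projection onto span(f(Y))
    keep = []
    for j in range(p):                            # univariate F-test per predictor
        xj = Xc[:, j]
        rss0 = xj @ xj                            # mean-only model
        rss1 = rss0 - xj @ (P @ xj)               # model with basis f(Y)
        df1, df2 = F.shape[1], n - F.shape[1] - 1
        fstat = ((rss0 - rss1) / df1) / (rss1 / df2)
        if stats.f.sf(fstat, df1, df2) < alpha:   # keep significant predictors
            keep.append(j)
    Xk = Xc[:, keep]
    fitted = P @ Xk                               # fitted inverse regressions
    evals, evecs = np.linalg.eigh(fitted.T @ fitted / n)
    Gamma = evecs[:, np.argsort(evals)[::-1][:d]]
    return keep, Gamma                            # reduction: (X[:, keep] - mean) @ Gamma
```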

9.
In this article, a new method, the cumulative slicing principal fitted component (CUPFC) model, is proposed to conduct sufficient dimension reduction and prediction in regression. Based on classical PFC methods, the CUPFC avoids selecting tuning parameters such as the specific basis function form or the number of slices in slicing estimation. We develop the estimator of the central subspace in the CUPFC method under three error-term structures and establish its consistency. Simulations investigate the effectiveness of the new method in prediction and reduction estimation relative to other competitors. The results indicate that the proposed method generally outperforms existing PFC methods no matter how the predictors are truly related to the response. An application to real data also verifies the validity of the proposed method.

10.
In this article, we propose a new method for sufficient dimension reduction when both the response and the predictor are vectors. The new method, based on distance covariance, keeps the model-free advantage and can fully recover the central subspace even when many predictors are discrete. We then extend the method to the dual central subspace, including a special case of canonical correlation analysis. We illustrate the estimators through extensive simulations and real datasets and compare them with some existing methods, showing that our estimators are competitive and robust.
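A minimal sketch of the distance-covariance idea for a single direction: compute the (biased) sample distance covariance between a projection b'X and Y via double-centered distance matrices, and maximize it numerically over unit-length b. The derivative-free optimizer and the single-direction restriction are simplifications of the paper's multi-direction, vector-response estimator.

```python
# Sketch: estimate one central-subspace direction by maximizing the (biased)
# sample squared distance covariance between b'X and Y.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

def _centered_dist(a):
    """Double-centered Euclidean distance matrix of a 1-D sample."""
    D = squareform(pdist(a.reshape(-1, 1)))
    return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()

def dcov_direction(X, y, seed=0):
    """Maximize dCov^2(b'X, Y) over unit-length b with a derivative-free optimizer."""
    n, p = X.shape
    By = _centered_dist(np.asarray(y))             # response part, computed once
    def neg_dcov2(b):
        u = X @ (b / np.linalg.norm(b))
        return -(_centered_dist(u) * By).mean()
    b0 = np.random.default_rng(seed).normal(size=p)
    res = minimize(neg_dcov2, b0, method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-10})
    return res.x / np.linalg.norm(res.x)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1]) ** 2 + 0.2 * rng.normal(size=300)
b_hat = dcov_direction(X, y)   # roughly proportional to (1, 1, 0, 0), up to sign
```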

11.
In this paper, we consider ultrahigh-dimensional sufficient dimension reduction (SDR) for censored data with measurement error in the covariates. We first propose a feature screening procedure based on the censored data and the error-prone covariates. With a suitable correction for mismeasurement, the error-contaminated variables detected by the proposed feature screening procedure coincide with the truly important variables. Based on the selected active variables, we develop an SDR method to estimate the central subspace and the structural dimension with both censoring and measurement error incorporated. Theoretical results for the proposed method are established. Simulation studies are reported to assess its performance, and the method is applied to the NKI breast cancer data.

12.
Sufficient dimension reduction methods aim to reduce the dimensionality of predictors while preserving regression information relevant to the response. In this article, we develop Minimum Average Deviance Estimation (MADE) methodology for sufficient dimension reduction. The purpose of MADE is to generalize Minimum Average Variance Estimation (MAVE) beyond its assumption of additive errors to settings where the outcome follows an exponential family distribution. As in MAVE, a local likelihood approach is used to learn the form of the regression function from the data, and the main parameter of interest is a dimension reduction subspace. To estimate this parameter within its natural space, we propose an iterative algorithm where one step utilizes optimization on the Stiefel manifold. MAVE is seen to be a special case of MADE in the case of Gaussian outcomes with a common variance. Several procedures are considered to estimate the reduced dimension and to predict the outcome for an arbitrary covariate value. Initial simulations and data analysis examples yield encouraging results and invite further exploration of the methodology.

13.
Dimension reduction with bivariate responses, especially a mix of continuous and categorical responses, can be of special interest; one immediate application is to regressions with censoring. In this paper, we propose two novel methods to reduce the dimension of the covariates of a bivariate regression via a model-free approach. Both methods enjoy a simple asymptotic chi-squared distribution for testing the dimension of the regression, and also allow the contributions of the covariates to be tested easily without pre-specifying a parametric model. The new methods outperform the current one both in simulations and in the analysis of real data. The well-known PBC data are used to illustrate the application of our method to censored regression.

14.
The existence of a dimension reduction (DR) subspace is a common assumption in regression analysis when dealing with high-dimensional predictors. The estimation of such a DR subspace has received considerable attention in the past few years, the most popular method undoubtedly being sliced inverse regression. In this paper, we propose a new estimation procedure for the DR subspace by assuming that the joint distribution of the predictor and the response variables is a finite mixture of distributions. The new method is compared with some classical methods through a simulation study.

15.
A new method for estimating the dimension of a regression at the outset of an analysis is proposed. A linear subspace spanned by projections of the regressor vector X, which contains part or all of the modelling information for the regression of a vector Y on X, and its dimension are estimated by means of parametric inverse regression. Smooth parametric curves are fitted to the p inverse regressions via a multivariate linear model, and no restrictions are placed on the distribution of the regressors. The estimate of the dimension of the regression is based on optimal estimation procedures. A simulation study shows the method to be more powerful than sliced inverse regression in some situations.
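The idea of fitting smooth parametric curves to the inverse regressions can be sketched as follows: regress the standardized predictors on a polynomial basis of Y via a multivariate linear model, then take the leading directions of the fitted inverse-regression curves as the estimated subspace. The cubic basis and the fixed dimension d below are illustrative choices, not the paper's exact procedure (which also estimates the dimension itself).

```python
# Sketch of parametric inverse regression: fit the standardized predictors on
# a polynomial basis of Y and extract the leading directions of the fitted
# inverse-regression curves.
import numpy as np

def parametric_inverse_regression(X, y, d=1, degree=3):
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(Sigma)
    Sig_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sig_inv_sqrt                           # standardized predictors
    F = np.column_stack([y ** k for k in range(1, degree + 1)])
    F = F - F.mean(axis=0)                          # centered polynomial basis of Y
    B, *_ = np.linalg.lstsq(F, Z, rcond=None)       # multivariate linear model Z ~ F
    fitted = F @ B                                  # smooth parametric inverse curves
    M = fitted.T @ fitted / n                       # covariance of the fitted curves
    evals_m, evecs_m = np.linalg.eigh(M)
    dirs_z = evecs_m[:, np.argsort(evals_m)[::-1][:d]]
    return Sig_inv_sqrt @ dirs_z                    # back-transform to the X-scale
```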

16.
Sliced inverse regression (SIR) is an effective method for dimensionality reduction in high-dimensional regression problems. However, the method imposes requirements on the distribution of the predictors that are hard to check, since they depend on unobserved variables. It has been shown that these requirements are satisfied if the distribution of the predictors is elliptical. With mixture models the ellipticity is violated and, in addition, there is no assurance of a single underlying regression model across the different components. Our approach clusters the predictor space to force the condition to hold on each cluster, and includes a merging technique to look for different underlying models in the data. A study on simulated data as well as two real applications are provided. It appears that SIR, unsurprisingly, is not capable of dealing with a mixture of Gaussians involving different underlying models, whereas our approach is able to correctly investigate the mixture.
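A skeletal version of the cluster-then-SIR idea is sketched below: partition the predictor space with k-means and run a basic SIR within each cluster. The merging step that the paper uses to identify clusters sharing an underlying model is omitted, and the numbers of clusters, slices, and directions are illustrative.

```python
# Skeletal sketch of cluster-then-SIR: a basic SIR run separately within
# k-means clusters of the predictor space (no merging step).
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def sir(X, y, n_slices=5, d=1):
    """Basic sliced inverse regression returning d directions in X-scale."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n + 1e-8 * np.eye(p)      # regularized covariance
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Xc[idx].mean(axis=0)                  # slice mean of centered X
        M += (len(idx) / n) * np.outer(m, m)
    evals, evecs = eigh(M, Sigma)                 # generalized eigenproblem
    return evecs[:, np.argsort(evals)[::-1][:d]]

def clusterwise_sir(X, y, n_clusters=2, n_slices=5, d=1, seed=0):
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X)
    return {k: sir(X[labels == k], y[labels == k], n_slices, d)
            for k in range(n_clusters)}
```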

17.
Sliced average variance estimation (SAVE) is a method for constructing sufficient summary plots in regressions with many predictors. The summary plots are designed to capture all the information about the response that is available from the predictors, and do not require a model for their construction. They can be particularly helpful for guiding the choice of a first model. Methodological aspects of SAVE are studied in this article.
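For reference, the basic SAVE estimator can be sketched in a few lines: standardize the predictors, slice the response, and take the leading eigenvectors of the weighted average of (I − V_h)², where V_h is the within-slice covariance of the standardized predictors. The number of slices and directions below are arbitrary illustrative choices.

```python
# Minimal sketch of sliced average variance estimation (SAVE).
import numpy as np

def save_directions(X, y, n_slices=5, d=2):
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(Sigma)
    Sig_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sig_inv_sqrt                           # standardized predictors
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        Zh = Z[idx] - Z[idx].mean(axis=0)
        Vh = Zh.T @ Zh / len(idx)                   # within-slice covariance
        A = np.eye(p) - Vh
        M += (len(idx) / n) * (A @ A)               # weighted average of (I - V_h)^2
    evals_m, evecs_m = np.linalg.eigh(M)
    dirs_z = evecs_m[:, np.argsort(evals_m)[::-1][:d]]
    return Sig_inv_sqrt @ dirs_z                    # back-transform to the X-scale
```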

18.
Andreas Artemiou, Statistics, 2013, 47(5): 1037–1051
In this paper, we combine adaptively weighted large margin classifiers with Support Vector Machine (SVM)-based dimension reduction methods to create dimension reduction methods robust to the presence of extreme outliers. We discuss estimation and asymptotic properties of the algorithm. The good performance of the new algorithm is demonstrated through simulations and real data analysis.
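The SVM-based dimension reduction this builds on (in the spirit of principal support vector machines) can be sketched roughly as follows: for a grid of response cutpoints, fit a linear SVM on the standardized predictors separating observations below and above the cutpoint, and take the leading eigenvectors of the aggregated normal vectors. The adaptive weighting for outliers that the paper adds is omitted here, and the cutpoint grid and regularization constant are illustrative.

```python
# Rough sketch of principal-SVM-style dimension reduction (no adaptive
# weighting): aggregate the normal vectors of linear SVMs fit at several
# response cutpoints and extract their principal directions.
import numpy as np
from sklearn.svm import LinearSVC

def psvm_directions(X, y, n_cutpoints=5, d=1, C=1.0):
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n
    evals, evecs = np.linalg.eigh(Sigma)
    Sig_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sig_inv_sqrt                           # standardized predictors
    M = np.zeros((p, p))
    for q in np.linspace(0.2, 0.8, n_cutpoints):    # dichotomize at several quantiles
        labels = (y > np.quantile(y, q)).astype(int)
        svm = LinearSVC(C=C, dual=False, max_iter=10000).fit(Z, labels)
        w = svm.coef_.ravel()                       # normal vector of the hyperplane
        M += np.outer(w, w)
    evals_m, evecs_m = np.linalg.eigh(M)
    dirs_z = evecs_m[:, np.argsort(evals_m)[::-1][:d]]
    return Sig_inv_sqrt @ dirs_z                    # back to the X-scale
```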

19.
The analysis of high-dimensional data often begins with the identification of lower dimensional subspaces. Principal component analysis is a dimension reduction technique that identifies linear combinations of variables along which most variation occurs or which best “reconstruct” the original variables. For example, many temperature readings may be taken in a production process when in fact there are just a few underlying variables driving the process. A problem with principal components is that the linear combinations can seem quite arbitrary. To make them more interpretable, we introduce two classes of constraints. In the first, coefficients are constrained to equal a small number of values (homogeneity constraint). The second constraint attempts to set as many coefficients to zero as possible (sparsity constraint). The resultant interpretable directions are either calculated to be close to the original principal component directions, or calculated in a stepwise manner that may make the components more orthogonal. A small dataset on characteristics of cars is used to introduce the techniques. A more substantial data mining application is also given, illustrating the ability of the procedure to scale to a very large number of variables.
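The homogeneity-constrained and stepwise procedures themselves are not reproduced here, but the sparsity-constrained flavor of the idea, pushing small loadings to exactly zero to aid interpretation, can be illustrated with off-the-shelf sparse PCA; this is a related technique, not the authors' algorithm, and the toy data are invented for the example.

```python
# Illustration of sparsity-constrained components with scikit-learn's
# SparsePCA, contrasted with ordinary PCA on the same toy data.
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
# Toy data: two latent factors driving six observed variables plus noise.
latent = rng.normal(size=(200, 2))
loadings = np.array([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]], dtype=float)
X = latent @ loadings + 0.1 * rng.normal(size=(200, 6))

pca = PCA(n_components=2).fit(X)
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)

print("ordinary PCA loadings:\n", np.round(pca.components_, 2))
print("sparse PCA loadings:\n", np.round(spca.components_, 2))
# The sparse loadings set many small coefficients exactly to zero, so each
# component can be read off as a simple combination of a few variables.
```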

20.
In this note, we present a theoretical result that relaxes a critical condition required by the semiparametric approach to dimension reduction. The asymptotic normality of the estimators still holds under the weaker assumptions. This improvement greatly increases the applicability of the semiparametric approach.
