Similar Documents
20 similar documents found
1.
The partial least squares (PLS) approach first constructs new explanatory variables, known as factors (or components), which are linear combinations of available predictor variables. A small subset of these factors is then chosen and retained for prediction. We study the performance of PLS in estimating single-index models, especially when the predictor variables exhibit high collinearity. We show that PLS estimates are consistent up to a constant of proportionality. We present three simulation studies that compare the performance of PLS in estimating single-index models with that of sliced inverse regression (SIR). In the first two studies, we find that PLS performs better than SIR when collinearity exists. In the third study, we learn that PLS performs well even when there are multiple dependent variables, the link function is non-linear and the shape of the functional form is not known.

2.
Biplots are useful tools for exploring the relationships among variables. In this paper, the regression relationship between a set of predictors X and a set of response variables Y, obtained by means of partial least-squares (PLS) regression, is represented. The PLS biplot provides a single graphical representation of the samples together with the predictor and response variables, as well as their interrelationships in terms of the matrix of regression coefficients.

3.
Logistic regression using conditional maximum likelihood estimation has recently gained widespread use. Many of the applications of logistic regression have been in situations in which the independent variables are collinear. It is shown that collinearity among the independent variables seriously affects the conditional maximum likelihood estimator, in that the variance of this estimator is inflated in much the same way that collinearity inflates the variance of the least squares estimator in multiple regression. Drawing on the similarities between multiple and logistic regression, several alternative estimators, which reduce the effect of the collinearity and are easy to obtain in practice, are suggested and compared in a simulation study.
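The variance inflation described above can be reproduced with a small Monte Carlo sketch. This assumes scikit-learn; the sample sizes, correlation levels, and the fixed L2 penalty are illustrative stand-ins, not the estimators studied in the paper (a large `C` approximates the unpenalized MLE, a moderate `C` is a ridge-type shrinkage estimator).

```python
# Sketch: collinearity inflates the variance of the logistic MLE,
# and ridge-type shrinkage reduces it. Settings are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def coef_sd(rho, C):
    """Std. dev. of the first slope estimate over repeated samples."""
    ests = []
    for _ in range(200):
        x1 = rng.normal(size=300)
        x2 = rho * x1 + np.sqrt(1 - rho ** 2) * rng.normal(size=300)
        X = np.column_stack([x1, x2])
        y = rng.binomial(1, 1 / (1 + np.exp(-(x1 + x2))))
        m = LogisticRegression(C=C, max_iter=1000).fit(X, y)
        ests.append(m.coef_[0, 0])
    return float(np.std(ests))

sd_low = coef_sd(0.1, C=1e6)    # near-MLE, weak collinearity
sd_high = coef_sd(0.95, C=1e6)  # near-MLE, strong collinearity
sd_ridge = coef_sd(0.95, C=1.0) # ridge-type shrinkage, strong collinearity
print(sd_low < sd_high, sd_ridge < sd_high)
```

The inflation of `sd_high` over `sd_low` mirrors the variance inflation of OLS under collinearity, and the penalized fit trades a little bias for a markedly smaller variance.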

4.
In this paper we discuss the partial least squares (PLS) prediction method. The method is compared to the predictor based on principal component regression (PCR). Both theoretical considerations and computations on artificial and real data are presented.

5.
Classification models can demonstrate apparent prediction accuracy even when there is no underlying relationship between the predictors and the response. Variable selection procedures can lead to false positive variable selections and overestimation of true model performance. A simulation study was conducted using logistic regression with forward stepwise, best subsets, and LASSO variable selection methods with varying total sample sizes (20, 50, 100, 200) and numbers of random noise predictor variables (3, 5, 10, 15, 20, 50). Using our critical values can help reduce needless follow-up on variables having no true association with the outcome.
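The false-selection phenomenon the abstract describes is easy to demonstrate. The sketch below is a simplified stand-in for stepwise selection: among pure-noise predictors it repeatedly picks the one most correlated with the response and records how often that best noise predictor clears a nominal single-test significance cutoff. The sample sizes and the |r| ≈ 0.28 cutoff (roughly the two-sided 5% point for one correlation at n = 50) are illustrative.

```python
# Sketch: with pure-noise predictors, the *best* of p candidates
# routinely looks "significant". Settings are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n, p, reps = 50, 20, 500
max_abs_corr = []
for _ in range(reps):
    X = rng.normal(size=(n, p))
    y = rng.normal(size=n)  # response unrelated to every predictor
    num = X.T @ (y - y.mean())
    den = np.linalg.norm(X - X.mean(0), axis=0) * np.linalg.norm(y - y.mean())
    max_abs_corr.append(np.abs(num / den).max())

# Fraction of runs where the best noise predictor exceeds the naive
# single-test 5% cutoff (|r| ~ 0.28 at n = 50).
frac = float(np.mean(np.array(max_abs_corr) > 0.28))
print(round(frac, 2))
```

With 20 candidates the best noise correlation clears the naive cutoff well over half the time, which is why selection procedures need multiplicity-adjusted critical values of the kind the study provides.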

6.
Approaches for regressor construction in the linear prediction problem are investigated in a framework similar to partial least squares and continuum regression, but weighted to allow for custom specification of an evaluative scheme. A cross-validatory continuum regression procedure is proposed, and shown to compare well with ordinary continuum regression in empirical demonstrations.

7.
In at least one important application of stochastic linear programming (Lavaca-Tres Palacios Estuary: A Study of the Influence of Freshwater Inflows, 1980), constraint parameters are simultaneously estimated using multiple regression with historic data for the values of the decision variables and the right-hand side of the constraint function. In this circumstance, the question immediately arises: "How stable is the linear programming (LP) solution with regard to regression issues such as sample size, magnitude of the error variance, centroids of the decision variables, and collinearity?" This paper reports a simulation designed to assess the stability of the LP solution and to compare the effectiveness of ridge regression as an alternative to ordinary least squares (OLS) regression. For the given scenario, the LP solution is consistently "biased." The amount of bias is exacerbated by small samples, large error variances, and collinearity among observations of the decision variables. The best regression criterion is a function not only of collinearity, but also of the magnitude of the error variance and the sum of the means of the decision variables relative to the right-hand side of the stochastic constraint.

In the application that motivated this research, the LP solutions were recommended fresh-water inflows from Lake Texana into the estuaries of the Gulf of Mexico. The stochastic constraint estimates commercial fish harvest as a function of seasonal fresh-water inflow. The historic data set used to estimate the parameters of the constraint comprised rainfall data and fish-harvest data prior to the construction of the Lake Texana dam, of necessity a small sample with collinear seasonal rainfall. It is not the authors' intent to solve this application, but rather to investigate, through a simpler simulated system, whether or not regression estimates in similar circumstances might introduce a systematic and predictable bias. The answer to this latter question is a qualified yes.

8.
In the presence of multicollinearity the literature points to principal component regression (PCR) as an estimation method for the regression coefficients of a multiple regression model. Due to ambiguities in interpretation, introduced by the orthogonal transformation of the set of explanatory variables, the method has not yet gained wide acceptance. Factor analysis regression (FAR) provides a model-based estimation method which is particularly tailored to overcome multicollinearity in an errors-in-variables setting. In this paper two feasible versions of a FAR estimator are compared with the OLS estimator and the PCR estimator by means of Monte Carlo simulation. While the PCR estimator performs best in cases of strong and high multicollinearity, the Thomson-based FAR estimator proves to be superior when the regressors are moderately correlated.

9.
The Granger causality test as usually applied is in fact a test of linear causality and cannot detect nonlinear causal relationships. Peguin and Terasvirta (1999) proposed a general extension based on a Taylor expansion for testing nonlinear causality, using principal component extraction to handle the multicollinearity that arises. However, extracting principal components does not deal with the multicollinearity particularly well. Lasso regression is currently one of the main methods for handling multicollinearity; compared with other methods it more readily produces sparse solutions and performs variable selection simultaneously with parameter estimation, so it can be used to resolve the multicollinearity in the test and thereby improve the test's efficiency. Simulation results for the testing procedure show that the Lasso-based test performs well.

10.
The problem of component choice in regression-based prediction has a long history. The main cases where important choices must be made are functional data analysis, and problems in which the explanatory variables are relatively high dimensional vectors. Indeed, principal component analysis has become the basis for methods for functional linear regression. In this context the number of components can also be interpreted as a smoothing parameter, and so the viewpoint is a little different from that for standard linear regression. However, arguments for and against conventional component choice methods are relevant to both settings and have received significant recent attention. We give a theoretical argument, which is applicable in a wide variety of settings, justifying the conventional approach. Although our result is of minimax type, it is not asymptotic in nature; it holds for each sample size. Motivated by the insight that is gained from this analysis, we give theoretical and numerical justification for cross-validation choice of the number of components that is used for prediction. In particular we show that cross-validation leads to asymptotic minimization of mean summed squared error, in settings which include functional data analysis.
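The cross-validation choice of the number of components advocated above can be sketched concretely for a PCR-style predictor. This assumes scikit-learn; the latent-factor data design and the grid of candidate component counts are illustrative.

```python
# Sketch: pick the number of principal components for prediction by
# cross-validation. Data design and grid are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
n, p = 120, 15
z = rng.normal(size=(n, 3))                 # three latent factors
W = rng.normal(size=(3, p))
X = z @ W + 0.1 * rng.normal(size=(n, p))   # predictors live near a 3-d space
y = z[:, 0] + 0.2 * rng.normal(size=n)

pipe = make_pipeline(PCA(), LinearRegression())
grid = GridSearchCV(pipe, {"pca__n_components": range(1, 11)},
                    scoring="neg_mean_squared_error", cv=5).fit(X, y)
best_k = grid.best_params_["pca__n_components"]
print(best_k)
```

With the signal confined to a low-dimensional factor space, cross-validation settles on a small number of components rather than the full predictor dimension, which is the behaviour the abstract's asymptotic result justifies.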

11.
12.
A challenging problem in the analysis of high-dimensional data is variable selection. In this study, we describe a bootstrap-based technique for selecting predictors in partial least-squares regression (PLSR) and principal component regression (PCR) in high-dimensional data. Using a bootstrap-based technique for significance tests of the regression coefficients, a subset of the original variables can be selected to be included in the regression, thus obtaining a more parsimonious model with smaller prediction errors. We compare the bootstrap approach with several variable selection approaches (jack-knife and sparse formulation-based methods) on PCR and PLSR in simulation and real data.

13.
Presence of collinearity among the explanatory variables results in larger standard errors of the estimated parameters. When multicollinearity is present among the explanatory variables, the ordinary least-squares (OLS) estimators tend to be unstable due to the larger variance of the estimators of the regression coefficients. As alternatives to OLS estimators, a few ridge estimators are available in the literature. This article presents some of the popular ridge estimators and attempts to provide (i) a generalized class of ridge estimators and (ii) a modified ridge estimator. The performance of the proposed estimators is investigated with the help of Monte Carlo simulation. Simulation results indicate that the suggested estimators perform better than the OLS estimator and the other estimators considered in this article.
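A minimal Monte Carlo along the lines sketched above compares OLS with a ridge estimator under strong collinearity. This assumes scikit-learn; the fixed penalty is an illustrative stand-in, not one of the generalized or modified ridge estimators the article proposes.

```python
# Sketch Monte Carlo: estimation MSE of OLS vs. a fixed-penalty ridge
# estimator under strong collinearity. Settings are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(5)
beta = np.array([1.0, 1.0])
reps = 300
mse_ols = mse_ridge = 0.0
for _ in range(reps):
    x1 = rng.normal(size=60)
    x2 = 0.98 * x1 + np.sqrt(1 - 0.98 ** 2) * rng.normal(size=60)
    X = np.column_stack([x1, x2])
    y = X @ beta + rng.normal(size=60)
    b_ols = LinearRegression(fit_intercept=False).fit(X, y).coef_
    b_rdg = Ridge(alpha=5.0, fit_intercept=False).fit(X, y).coef_
    mse_ols += np.sum((b_ols - beta) ** 2) / reps
    mse_ridge += np.sum((b_rdg - beta) ** 2) / reps
print(mse_ridge < mse_ols)
```

The ridge estimator accepts a small bias in exchange for a large variance reduction along the poorly identified direction, so its estimation MSE is far below that of OLS when the predictors are this strongly correlated.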

14.
The authors consider dimensionality reduction methods used for prediction, such as reduced rank regression, principal component regression and partial least squares. They show how it is possible to obtain intermediate solutions by estimating simultaneously the latent variables for the predictors and for the responses. They obtain a continuum of solutions that goes from reduced rank regression to principal component regression via maximum likelihood and least squares estimation. Different solutions are compared using simulated and real data.

15.
Distance-based regression is a prediction method consisting of two steps: from distances between observations we obtain latent variables which, in turn, are the regressors in an ordinary least squares linear model. Distances are computed from actually observed predictors by means of a suitable dissimilarity function. Being generally nonlinearly related with the response, their selection by the usual F tests is unavailable. In this article, we propose a solution to this predictor selection problem by defining generalized test statistics and adapting a nonparametric bootstrap method to estimate their p-values. We include a numerical example with automobile insurance data.
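The two-step scheme described above can be sketched with Euclidean distances, for which the latent variables are the classical principal coordinates. This is a minimal numpy illustration; the data design and the number of retained coordinates are illustrative, and the paper's bootstrap selection step is not shown.

```python
# Sketch of distance-based regression: distances -> latent coordinates
# (classical MDS / principal coordinates) -> OLS on the coordinates.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 80
X = rng.normal(size=(n, 4))
y = X @ np.array([1.0, -1.0, 0.5, 0.0]) + 0.1 * rng.normal(size=n)

# Squared Euclidean distance matrix and double centering (Gower).
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
J = np.eye(n) - np.ones((n, n)) / n
G = -0.5 * J @ D2 @ J
vals, vecs = np.linalg.eigh(G)
idx = vals.argsort()[::-1][:4]           # keep the top latent dimensions
Z = vecs[:, idx] * np.sqrt(vals[idx])    # principal coordinates

r2 = LinearRegression().fit(Z, y).score(Z, y)
print(round(float(r2), 2))
```

With Euclidean distances the latent coordinates reproduce the original configuration up to rotation, so the OLS fit on them matches a fit on the raw predictors; the interest of the method lies in swapping in non-Euclidean dissimilarities.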

16.
Most methods for survival prediction from high-dimensional genomic data combine the Cox proportional hazards model with some technique of dimension reduction, such as partial least squares regression (PLS). Applying PLS to the Cox model is not entirely straightforward, and multiple approaches have been proposed. The method of Park et al. (Bioinformatics 18(Suppl. 1):S120–S127, 2002) uses a reformulation of the Cox likelihood to a Poisson type likelihood, thereby enabling estimation by iteratively reweighted partial least squares for generalized linear models. We propose a modification of the method of Park et al. (2002) such that estimates of the baseline hazard and the gene effects are obtained in separate steps. The resulting method has several advantages over the method of Park et al. (2002) and other existing Cox PLS approaches, as it allows for estimation of survival probabilities for new patients, enables a less memory-demanding estimation procedure, and allows for incorporation of lower-dimensional non-genomic variables like disease grade and tumor thickness. We also propose to combine our Cox PLS method with an initial gene selection step in which genes are ordered by their Cox score and only the highest-ranking k% of the genes are retained, obtaining a so-called supervised partial least squares regression method. In simulations, both the unsupervised and the supervised version outperform other Cox PLS methods.

17.
Sliced Inverse Regression (SIR) is an effective method for dimension reduction in high-dimensional regression problems. The original method, however, requires the inversion of the predictors covariance matrix. In case of collinearity between these predictors or small sample sizes compared to the dimension, the inversion is not possible and a regularization technique has to be used. Our approach is based on a Fisher Lecture given by R.D. Cook where it is shown that SIR axes can be interpreted as solutions of an inverse regression problem. We propose to introduce a Gaussian prior distribution on the unknown parameters of the inverse regression problem in order to regularize their estimation. We show that some existing SIR regularizations can enter our framework, which permits a global understanding of these methods. Three new priors are proposed leading to new regularizations of the SIR method. A comparison on simulated data as well as an application to the estimation of Mars surface physical properties from hyperspectral images are provided.
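A minimal SIR with a ridge-type regularization of the predictor covariance illustrates why some regularization is needed when the covariance is ill-conditioned. This is a numpy sketch; the simple ridge term is a stand-in for the Gaussian-prior regularizations the paper develops, and all settings are illustrative.

```python
# Sketch: SIR with a ridge-regularized covariance on data containing a
# near-collinear predictor pair. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(8)
n, p, H = 400, 6, 10
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)   # ill-conditioned covariance
beta = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
y = (X @ beta) ** 3 + 0.5 * rng.normal(size=n)

Xc = X - X.mean(0)
Sigma = Xc.T @ Xc / n + 0.1 * np.eye(p)         # ridge-regularized covariance
slices = np.array_split(np.argsort(y), H)       # H slices of sorted y
M = sum(len(s) / n * np.outer(Xc[s].mean(0), Xc[s].mean(0)) for s in slices)

# Generalized eigenproblem M v = lam * Sigma v, solved by whitening.
w, U = np.linalg.eigh(Sigma)
S_inv_half = U @ np.diag(w ** -0.5) @ U.T
vals, vecs = np.linalg.eigh(S_inv_half @ M @ S_inv_half)
direction = S_inv_half @ vecs[:, -1]            # leading SIR direction

cos = abs(direction @ beta) / (np.linalg.norm(direction) * np.linalg.norm(beta))
print(round(float(cos), 2))
```

Without the ridge term, inverting the nearly singular covariance amplifies noise along the collinear pair; with it, the leading SIR direction still recovers the true index direction.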

18.
Variable selection methods have been widely used in the analysis of high-dimensional data, for example, gene expression microarray data and single nucleotide polymorphism data. A special feature of the genomic data is that genes participating in a common metabolic pathway or sharing a similar biological function tend to have high correlations. The collinearity naturally embedded in these data requires special handling, which cannot be provided by existing variable selection methods. In this paper, we propose a set of new methods to select variables in correlated data. The new methods follow the forward selection procedure of least angle regression (LARS) but conduct grouping and selecting at the same time. The methods are designed especially for the case in which no prior information on the group structure of the data is available. Simulations and real examples show that our proposed methods often outperform the existing variable selection methods, including LARS and elastic net, in terms of both reducing prediction error and preserving sparsity of representation.
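The grouping issue that motivates these methods can be seen with the two baselines the abstract compares against. This sketch assumes scikit-learn; the penalty values and the tiny correlated-pair design are illustrative.

```python
# Sketch: with two nearly identical relevant predictors, the lasso
# tends to concentrate weight while the elastic net shares it across
# the group. Settings are illustrative.
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(6)
n = 100
x = rng.normal(size=n)
X = np.column_stack([x, x + 0.01 * rng.normal(size=n),
                     rng.normal(size=(n, 3))])  # correlated pair + noise
y = X[:, 0] + X[:, 1] + 0.2 * rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y).coef_
enet = ElasticNet(alpha=0.1, l1_ratio=0.2).fit(X, y).coef_
print(np.round(lasso[:2], 2), np.round(enet[:2], 2))
```

The ridge component of the elastic net produces the grouping effect (both members of the correlated pair get similar nonzero coefficients); the LARS-based methods in the paper aim to achieve such grouping without prior knowledge of the group structure.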

19.
This article considers both Partial Least Squares (PLS) and Ridge Regression (RR) methods to combat the multicollinearity problem. A simulation study has been conducted to compare their performance with respect to Ordinary Least Squares (OLS). With varying degrees of multicollinearity, it is found that both the PLS and RR estimators produce significant reductions in Mean Square Error (MSE) and Prediction Mean Square Error (PMSE) over OLS. The simulation study also shows that RR performs better when the error variance is large, whereas the PLS estimator achieves its best results when the model includes more variables. An advantage of ridge regression over PLS is that it can provide 95% confidence intervals for the regression coefficients, while PLS cannot.

20.
This paper presents the results of a Monte Carlo study of OLS- and GLS-based adaptive ridge estimators for regression problems in which the independent variables are collinear and the errors are autocorrelated. It studies the effects of the degree of collinearity, the magnitude of the error variance, the orientation of the parameter vector, and serial correlation of the independent variables on the mean squared error performance of these estimators. Results suggest that such estimators produce greatly improved performance in favorable portions of the parameter space. The GLS-based methods are best when the independent variables are also serially correlated.

