Similar Literature
 20 similar documents found (search time: 93 ms)
1.
In this paper, a new method for robust principal component analysis (PCA) is proposed. PCA is a widely used tool for dimension reduction without substantial loss of information. However, classical PCA is vulnerable to outliers because it depends on the empirical covariance matrix. To avoid this weakness, several alternative approaches based on robust scatter matrices have been suggested. A popular choice is ROBPCA, which combines projection pursuit ideas with robust covariance estimation via a variance maximization criterion. Our approach rests on the fact that PCA can be formulated as a regression-type optimization problem, which is the main difference from previous approaches. The proposed robust PCA is derived by replacing the squared loss with a robust penalty, the Huber loss function. A practical algorithm is proposed to carry out the optimization, and convergence properties of the algorithm are investigated. Results from a simulation study and a real data example demonstrate the promising empirical properties of the proposed method.
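The regression view of PCA lends itself to an iteratively reweighted implementation. The sketch below is not the paper's algorithm but a minimal IRLS stand-in under our own assumptions: each observation is weighted by the Huber weight of its reconstruction-error norm (standardised by a MAD scale, our choice), and a weighted PCA is refit until the weights settle.

```python
import numpy as np

def huber_weights(r, c=1.345):
    # Huber psi(r)/r weights: 1 inside the threshold c, c/r beyond it
    w = np.ones_like(r)
    big = r > c
    w[big] = c / r[big]
    return w

def robust_pca_huber(X, k=2, n_iter=20, c=1.345):
    """IRLS sketch of regression-type robust PCA: observations with large
    reconstruction errors are downweighted via the Huber loss."""
    X = np.asarray(X, dtype=float)
    w = np.ones(len(X))
    for _ in range(n_iter):
        mu = np.average(X, axis=0, weights=w)
        Xc = X - mu
        # weighted covariance matrix and its leading k eigenvectors
        C = (Xc * w[:, None]).T @ Xc / w.sum()
        vals, vecs = np.linalg.eigh(C)
        V = vecs[:, -k:]
        # reconstruction-error norms, standardised by a robust (MAD) scale
        res = np.linalg.norm(Xc - Xc @ V @ V.T, axis=1)
        scale = np.median(res) / 0.6745 + 1e-12
        w = huber_weights(res / scale, c)
    return mu, V, w
```

On contaminated data the returned weights flag the outliers, and the fitted subspace tracks the bulk of the data rather than the contamination.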

2.
Principal component analysis is a popular dimension reduction technique often used to visualize high-dimensional data structures. In genomics, this can involve millions of variables, but only tens to hundreds of observations. Theoretically, such extreme high dimensionality will cause biased or inconsistent eigenvector estimates, but in practice, the principal component scores are used for visualization with great success. In this paper, we explore when and why the classical principal component scores can be used to visualize structures in high-dimensional data, even when there are few observations compared with the number of variables. Our argument is twofold: First, we argue that eigenvectors related to pervasive signals will have eigenvalues scaling linearly with the number of variables. Second, we prove that for linearly increasing eigenvalues, the sample component scores will be scaled and rotated versions of the population scores, asymptotically. Thus, the visual information of the sample scores will be unchanged, even though the sample eigenvectors are biased. In the case of pervasive signals, the principal component scores can be used to visualize the population structures, even in extreme high-dimensional situations.
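Both claims are easy to check by simulation. The sketch below (our own toy setup, not the paper's) plants a pervasive one-component signal, i.e. one that loads on every variable, and verifies that the leading sample eigenvalue grows roughly linearly in p while the leading sample scores stay almost perfectly correlated with the population scores.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
scores = rng.normal(size=n)  # population component scores

def leading(p):
    """Simulate a pervasive one-component signal in p variables; return the
    top sample eigenvalue and the leading sample PC scores."""
    loadings = rng.choice([-1.0, 1.0], size=p)  # signal touches every variable
    X = np.outer(scores, loadings) + rng.normal(size=(n, p))
    Xc = X - X.mean(0)
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return s[0]**2 / (n - 1), U[:, 0] * s[0]

lam1, pc1 = leading(1000)
lam2, pc2 = leading(2000)
```

Doubling p roughly doubles the top eigenvalue, and the sample scores are, up to sign and scale, the population scores, which is exactly why the visualization survives p >> n.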

3.
Principal component analysis (PCA) is widely used to analyze high-dimensional data, but it is very sensitive to outliers. Robust PCA methods seek fits that are unaffected by the outliers and can therefore be trusted to reveal them. FastHCS (high-dimensional congruent subsets) is a robust PCA algorithm suitable for high-dimensional applications, including cases where the number of variables exceeds the number of observations. After detailing the FastHCS algorithm, we carry out an extensive simulation study and three real data applications, the results of which show that FastHCS is systematically more robust to outliers than state-of-the-art methods.

4.
5.
The selection of an appropriate subset of explanatory variables to use in a linear regression model is an important aspect of a statistical analysis. Classical stepwise regression is often used with this aim, but it can be invalidated by a few outlying observations. In this paper, we introduce a robust F-test and a robust stepwise regression procedure based on weighted likelihood in order to achieve robustness against the presence of outliers. The proposed methodology is asymptotically equivalent to the classical one when no contamination is present. Examples and a simulation study are presented.

6.
The different parts (variables) of a compositional data set cannot be considered independent from each other, since only the ratios between the parts constitute the relevant information to be analysed. Practically, this information can be included in a system of orthonormal coordinates. For the task of regression of one part on other parts, a specific choice of orthonormal coordinates is proposed which allows for an interpretation of the regression parameters in terms of the original parts. In this context, orthogonal regression is appropriate since all compositional parts, also the explanatory variables, are measured with errors. Besides classical (least-squares based) parameter estimation, also robust estimation based on robust principal component analysis is employed. Statistical inference for the regression parameters is obtained by bootstrap; in the robust version the fast and robust bootstrap procedure is used. The methodology is illustrated with a data set from macroeconomics.
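The orthonormal-coordinate step can be made concrete. One standard choice consistent with the abstract (though not necessarily the authors' exact construction) is the pivot coordinate system, a special isometric log-ratio basis in which the first coordinate contrasts one part against the geometric mean of all the others. A generic sketch:

```python
import numpy as np

def pivot_coordinates(X):
    """Pivot (orthonormal ilr) coordinates of an n x D composition matrix.
    Coordinate j contrasts part j against the geometric mean of the
    parts that follow it, so D parts map to D-1 real coordinates."""
    X = np.asarray(X, dtype=float)
    n, D = X.shape
    Z = np.empty((n, D - 1))
    for j in range(D - 1):
        gm = np.exp(np.log(X[:, j + 1:]).mean(axis=1))  # geometric mean of remaining parts
        Z[:, j] = np.sqrt((D - j - 1) / (D - j)) * np.log(X[:, j] / gm)
    return Z
```

Because the coordinates depend only on ratios of parts, rescaling every row of the composition leaves them unchanged, which is the defining property the abstract appeals to.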

7.
Generalized linear mixed models (GLMMs) are widely used to analyse non-normal response data with extra-variation, but non-robust estimators are still routinely used. We propose robust methods for maximum quasi-likelihood and residual maximum quasi-likelihood estimation to limit the influence of outlying observations in GLMMs. The estimation procedure parallels the development of robust estimation methods in linear mixed models, but with adjustments in the dependent variable and the variance component. The methods proposed are applied to three data sets and a comparison is made with the nonparametric maximum likelihood approach. When applied to a set of epileptic seizure data, the methods proposed have the desired effect of limiting the influence of outlying observations on the parameter estimates. Simulation shows that one of the residual maximum quasi-likelihood proposals has a smaller bias than those of the other estimation methods. We further discuss the equivalence of two GLMM formulations when the response variable follows an exponential family. Their extensions to robust GLMMs and their comparative advantages in modelling are described. Some possible modifications of the robust GLMM estimation methods are given to provide further flexibility for applying the method.

8.
In this paper, we propose a novel robust principal component analysis (PCA) for high-dimensional data in the presence of various heterogeneities, in particular heavy tails and outliers. A transformation motivated by the characteristic function is constructed to improve the robustness of classical PCA. The suggested method has the distinct advantage of handling heavy-tailed data, whose covariances may not exist (may be infinite, for instance), in addition to the usual outliers. The proposed approach is also a special case of kernel principal component analysis (KPCA) and gains its robust, non-linear properties from a bounded and non-linear kernel function. The merits of the new method are illustrated by several statistical properties, including an upper bound on the excess error and the behaviour of the large eigenvalues under a spiked covariance model. Additionally, using a variety of simulations, we demonstrate the benefits of our approach over classical PCA. Finally, using data on protein expression in mice of various genotypes from a biological study, we apply the novel robust PCA to categorise the mice and find that our approach is more effective at identifying abnormal mice than classical PCA.
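The abstract's transformation is built from the characteristic function, which we do not reproduce here. As a stand-in, the sketch below runs the same KPCA machinery with a different bounded, non-linear kernel (the Gaussian RBF), enough to see how bounding the kernel caps any single observation's influence on the kernel matrix. All names and parameter choices are our own.

```python
import numpy as np

def kernel_pca_bounded(X, k=2, gamma=0.1):
    """KPCA sketch with a bounded kernel: entries of the RBF kernel matrix
    lie in (0, 1], so a gross outlier cannot blow up the decomposition."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # double-centre the kernel matrix in feature space
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:k]
    # component scores: eigenvectors scaled by the square roots of eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

The returned score columns are mutually orthogonal, mirroring ordinary PCA scores.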

9.
A Comparison of Principal Component Analysis and Factor Analysis: Similarities, Differences, and Applications   (cited 51 times: 0 self-citations, 51 by others)
王芳 (Wang Fang), 《统计教育》 (Statistics Education), 2003, (5): 14-17
Principal component analysis and factor analysis are both multivariate statistical methods that start from the variance-covariance structure of the variables and use a small number of new variables to explain the original ones while retaining as much of the original information as possible. In teaching practice, students are often unclear about how to apply the two methods to dimension-reduction problems. This paper compares principal component analysis and factor analysis from several angles, including their basic ideas, usage, and the interpretation of the associated statistics, and illustrates the comparison with examples.
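The key difference can be shown numerically: PCA loadings decompose total variance, while factor loadings model only the shared variance. A small sketch using principal-axis factoring (one classical FA algorithm; our choice for illustration) on a one-factor correlation matrix whose true loadings are all 0.8:

```python
import numpy as np

def pca_loadings(R, k):
    # loadings from the top-k eigenpairs of a correlation matrix
    vals, vecs = np.linalg.eigh(R)
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx] * np.sqrt(vals[idx])

def principal_axis_factoring(R, k, n_iter=50):
    """Like PCA on R, but the diagonal is replaced by iteratively
    re-estimated communalities, so only shared variance is modelled."""
    Rh = R.copy()
    for _ in range(n_iter):
        L = pca_loadings(Rh, k)
        comm = (L**2).sum(axis=1)
        np.fill_diagonal(Rh, comm)
    return pca_loadings(Rh, k)
```

On this matrix, factoring recovers the true loadings 0.8, whereas the PCA loadings (about 0.84) absorb part of the unique variance: the same data summarized with two subtly different models, which is the distinction the paper stresses.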

10.
We review and extend some statistical tools that have proved useful for analysing functional data. Functional data analysis is primarily designed for the analysis of random trajectories and infinite-dimensional data, and there is a need for adequate statistical estimation and inference techniques. While the field is in flux, some methods have proven useful. These include warping methods, functional principal component analysis, and conditioning under Gaussian assumptions for the case of sparse data. The latter is a recent development that may provide a bridge between functional and more classical longitudinal data analysis. Besides presenting a brief review of functional principal components and functional regression, we develop some concepts for estimating functional principal component scores in the sparse situation. An extension of the so-called generalized functional linear model to the case of sparse longitudinal predictors is proposed. This extension includes functional binary regression models for longitudinal data and is illustrated with data on primary biliary cirrhosis.

11.
In clustering problems where the variables are correlated, the traditional remedies are Mahalanobis distance and clustering on principal components, but these are difficult to operate or to interpret. We therefore propose a model-based clustering approach: first model the correlation structure among the variables (as auxiliary information) and then carry out the cluster analysis. The main advantages are: wider applicability, since the approach handles not only (linear) correlation but also data generated by other complex structures among the variables; the importance of each variable is reflected in the regression coefficients of the model; and the approach is more robust and practical than Mahalanobis distance, easier to interpret than principal component clustering, and algorithmically simpler and more efficient.
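A minimal illustration of the idea, under our own simplifying assumptions (two clusters, one explanatory variable, least squares as the structure model): first regress y on x to capture the correlation between the variables, then cluster on the residuals, where the group structure the regression cannot explain is left behind.

```python
import numpy as np

def kmeans_1d(r, n_iter=50):
    # two-centre Lloyd iterations with a deterministic extreme-point start
    c = np.array([r.min(), r.max()])
    for _ in range(n_iter):
        lab = np.abs(r[:, None] - c[None, :]).argmin(axis=1)
        for j in (0, 1):
            if np.any(lab == j):
                c[j] = r[lab == j].mean()
    return lab

def residual_cluster(x, y):
    """Model-assisted clustering sketch: model the dependence y ~ a + b*x
    (the auxiliary correlation structure), then cluster the residuals."""
    b, a = np.polyfit(x, y, 1)  # slope, intercept
    resid = y - (a + b * x)
    return kmeans_1d(resid)
```

With two groups sharing the same slope but shifted intercepts, the residuals separate cleanly even though the raw (x, y) clouds overlap along the regression line.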

12.
CoPlot analysis is a multivariate data-visualization technique. It consists of two graphs: the first represents the distribution of p-dimensional observations over a two-dimensional space, whereas the second shows the relations of the variables with the observations. In CoPlot analysis, multidimensional scaling (MDS) and Pearson's correlation coefficient (PCC) are used to obtain a map that displays observations and variables simultaneously. However, both MDS and PCC are sensitive to outliers. When a multidimensional dataset contains outliers, interpretation of the map obtained from classical CoPlot analysis may lead to wrong conclusions. In this study, a novel approach to classical CoPlot analysis is presented: by using robust MDS and the median absolute deviation correlation coefficient (MADCC), a robust CoPlot map is obtained. Numerical examples illustrate the merits of the proposed approach, and the results are compared with classical CoPlot analysis to demonstrate the superiority of the robust CoPlot approach.
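One common way to build a MAD-based correlation coefficient (the authors' exact MADCC definition may differ) is the Gnanadesikan-Kettenring identity with the median absolute deviation as the scale estimate, which this sketch implements:

```python
import numpy as np

def mad(x):
    # median absolute deviation, a robust scale estimate
    return np.median(np.abs(x - np.median(x)))

def madcc(x, y):
    """Robust correlation via the Gnanadesikan-Kettenring identity:
    corr = (scale(u)^2 - scale(v)^2) / (scale(u)^2 + scale(v)^2)
    for u, v the sum and difference of the MAD-standardised variables."""
    u = x / mad(x) + y / mad(y)
    v = x / mad(x) - y / mad(y)
    mu, mv = mad(u)**2, mad(v)**2
    return (mu - mv) / (mu + mv)
```

A few gross outliers can flip the sign of Pearson's coefficient while leaving this estimate close to the correlation of the bulk of the data, which is exactly the failure mode robust CoPlot guards against.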

13.
In classical factor analysis, a few outliers can bias the factor structure extracted from the relationships between manifest variables. As in least-squares regression analysis, there is no protection against deviant observations. This paper discusses estimation methods that aim to extract the "true" factor structure reflecting the relationships within the bulk of the data. Such estimation methods constitute the core of robust factor analysis. By means of a simulation study, we illustrate that an implementation of robust estimation methods can considerably improve the validity of a factor analysis.

14.
The effect of nonstationarity in the time series columns of input data in principal components analysis is examined. Nonstationarity is very common among economic indicators collected over time, which are subsequently summarized into fewer indices for monitoring purposes. Due to the simultaneous drifting of the nonstationary time series, usually caused by trend, the first component averages all the variables without necessarily reducing dimensionality. Sparse principal components analysis can be used instead, but attainment of sparsity among the loadings (and hence dimension reduction) is influenced by the choice of the penalty parameters λ1,j. Simulated data with more variables than observations and with different patterns of cross-correlations and autocorrelations are used to illustrate the advantages of sparse principal components analysis over ordinary principal components analysis. Sparse component loadings for nonstationary time series data can be achieved provided that appropriate values of λ1,j are used. We provide the range of values of λ1,j that ensures convergence of the sparse principal components algorithm and consequently achieves sparsity of the component loadings.
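The role of the penalty parameter can be seen in a bare-bones rank-one sparse PCA sketch (alternating power iterations with soft-thresholding; a generic scheme, not the authors' algorithm, with our `lam` playing the role of a λ1,j tuner):

```python
import numpy as np

def sparse_pc1(X, lam, n_iter=100):
    """Rank-one sparse PCA: alternate between the score direction u and the
    loading vector, soft-thresholding the loadings by lam each pass so that
    weak loadings are driven exactly to zero."""
    Xc = X - X.mean(0)
    v = np.linalg.svd(Xc, full_matrices=False)[2][0]  # start at ordinary PC1
    for _ in range(n_iter):
        u = Xc @ v
        u /= np.linalg.norm(u) + 1e-12
        w = Xc.T @ u
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)  # soft-threshold
        nv = np.linalg.norm(w)
        if nv == 0:
            break  # lam too large: all loadings killed
        v = w / nv
    return v
```

Too small a `lam` reproduces the dense averaging component the abstract warns about; a suitable value zeroes the noise variables while keeping the signal loadings, which is the convergence-and-sparsity trade-off the paper characterises.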

15.
Some quality characteristics are well defined when treated as response variables whose relationship to a set of independent variables is identified. This relationship is called a profile. Parametric models, such as linear models, may be used to model profiles. However, due to the complexity of many processes in practical applications, a parametric model is often inappropriate, and nonparametric methods are used instead. One of the most widely applicable nonparametric tools for modeling complicated profiles is the wavelet. Many authors have considered the wavelet transformation only for monitoring processes in phase II; the problem of estimating the in-control profile in phase I using the wavelet transformation has not been deeply addressed. Usually classical estimators are used in phase I to estimate the in-control profile, even when the wavelet transformation is used. These estimators are suitable if the data contain no outliers, but when outliers exist they cannot estimate the in-control profile properly. In this research, a robust method of estimating in-control profiles is proposed that is insensitive to the presence of outliers and can be applied when the wavelet transformation is used. The proposed estimator combines robust clustering with the S-estimator. It is compared with the classical estimator of the in-control profile in the presence of outliers. Results from a large simulation study show that with the proposed method one can estimate the in-control profile precisely, whether the data are contaminated locally or globally.

16.
Data on twins are used to infer a genetic component of variance for various quantitative human characteristics. There are several statistical approaches available to analyze twin data. Here we compare three approaches for fitting variance components models to the relationship between height and bi-iliocristal diameter across ages in a sample of male and female Polish twins aged 8-17. Two of the approaches assume a multivariate normal model for the data, with one basing the likelihood on the raw data and the other using the distribution of the sample covariance matrix. The third approach uses a robust modification of the multivariate normal log-likelihood to downweight abnormal observations. The statistical theory underlying the methods is outlined, and the implementation of the methods is discussed.

17.
In many fields of empirical research one is faced with observations arising from a functional process. If so, classical multivariate methods are often not feasible or appropriate to explore the data at hand and functional data analysis is prevailing. In this paper we present a method for joint modeling of mean and variance in longitudinal data using penalized splines. Unlike previous approaches we model both components simultaneously via rich spline bases. Estimation as well as smoothing parameter selection is carried out using a mixed model framework. The resulting smooth covariance structures are then used to perform principal component analysis. We illustrate our approach by several simulations and an application to financial interest data.

18.
Response surface methodology is useful for exploring a response over a region of factor space and for searching for extrema. Its generality makes it applicable to a variety of areas. Classical response surface methodology for a continuous response variable is generally based on least-squares fitting, and the sensitivity of least squares to outlying observations carries over to the surface procedures. To overcome this sensitivity, we propose response surface methodology based on robust procedures for continuous response variables. This robust methodology is analogous to the methodology based on least squares, while being much less sensitive to outlying observations. Results of a Monte Carlo study comparing it with classical surface methodologies for normal and contaminated normal errors are presented. The results show that, as the proportion of contamination increases, the robust methodology correctly identifies a higher proportion of extrema than the least-squares methods, and the robust estimates of the extrema tend to be closer to the true extrema.
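A minimal sketch of the substitution the abstract describes, under our own assumptions (one factor, a second-order surface, Huber M-estimation as the robust procedure): fit the quadratic by iteratively reweighted least squares, then locate the extremum of the fitted surface.

```python
import numpy as np

def huber_irls(Z, y, c=1.345, n_iter=50):
    """Huber M-estimate of regression coefficients via IRLS, a robust
    stand-in for the least-squares fit classical RSM relies on."""
    beta = np.linalg.lstsq(Z, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - Z @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale
        a = np.abs(r) / s
        w = np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(Z * sw[:, None], y * sw, rcond=None)[0]
    return beta

def stationary_point(b):
    # extremum of the fitted one-factor quadratic y = b0 + b1*x + b2*x**2
    return -b[1] / (2.0 * b[2])
```

With a handful of contaminated responses, the Huber fit keeps the estimated extremum near the true one, while an ordinary least-squares surface would be dragged toward the contamination.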

19.
王斌会 (Wang Binhui), 《统计研究》 (Statistical Research), 2007, 24(8): 72-76
Traditional multivariate methods, such as principal component analysis and factor analysis, share a common feature: they compute the sample mean vector and covariance matrix and derive other statistics from them. When the sample contains no outliers, these methods give excellent results; when it does, the results are easily distorted, because the sample mean vector and covariance matrix are not robust statistics. This paper studies the algorithm of the currently popular FAST-MCD method, constructs a robust mean vector and a robust covariance matrix, applies them to principal component analysis, and proposes improvements to address the method's shortcomings. Simulation and empirical results show that the improved method and the new robust estimators resist outliers well and greatly reduce their influence on the results.
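FAST-MCD itself uses elemental starts, nested subsampling, and selective iteration; the sketch below keeps only its core ingredient, the concentration step (C-step), applied to random h-subsets. All names and defaults are our own simplifications.

```python
import numpy as np

def mcd_cstep(X, h, n_starts=20, n_iter=30, seed=0):
    """Bare-bones MCD sketch: refine random h-subsets by C-steps (keep the h
    points with smallest Mahalanobis distance to the current fit) and return
    the mean/covariance of the subset with smallest covariance determinant."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best = (np.inf, None, None)
    for _ in range(n_starts):
        idx = rng.choice(n, size=h, replace=False)
        for _ in range(n_iter):
            mu = X[idx].mean(0)
            S = np.cov(X[idx].T)
            d = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu)
            new_idx = np.argsort(d)[:h]
            if set(new_idx) == set(idx):
                break  # C-steps have converged for this start
            idx = new_idx
        det = np.linalg.det(np.cov(X[idx].T))
        if det < best[0]:
            best = (det, X[idx].mean(0), np.cov(X[idx].T))
    return best[1], best[2]
```

On data with a contaminating cluster, the robust mean stays at the centre of the bulk while the classical mean is pulled toward the outliers, which is the behaviour the paper exploits for robust PCA.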

20.
Investigations of multivariate populations are common in applied research, and the two-way crossed factorial design is often used in the exploratory phase of industrial applications. When assumptions such as multivariate normality and covariance homogeneity are violated, the conventional wisdom is to resort to nonparametric tests for hypothesis testing. In this paper we compare the performance, and in particular the power, of some nonparametric and semi-parametric methods developed in recent years. Specifically, we examine resampling methods and robust versions of classical multivariate analysis of variance (MANOVA) tests. In a simulation study, we generate data sets with different configurations of factor effects, number of replicates, and number of response variables under the null and alternative hypotheses. The objective is to provide practical advice to practitioners regarding the sensitivity of the tests in the various configurations, the tradeoff between power and type I error, the strategic impact of increasing the number of response variables, and the favourable performance of one test when the alternative is sparse. A real case study from an industrial engineering experiment in thermoformed packaging production is used to compare and illustrate the application of the various methods.
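The resampling idea underlying such tests can be sketched in a few lines: a permutation test for a multivariate two-sample comparison, here with the squared distance between group mean vectors as the test statistic (our choice; the methods compared in the paper use more refined statistics and handle full factorial designs).

```python
import numpy as np

def perm_test_mv(X, Y, n_perm=999, seed=0):
    """Permutation test for a multivariate location difference: reshuffle
    group labels, recompute the statistic, and report the proportion of
    permuted statistics at least as extreme as the observed one."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    n1 = len(X)

    def stat(A, B):
        return np.sum((A.mean(0) - B.mean(0))**2)

    obs = stat(X, Y)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(Z))
        if stat(Z[perm[:n1]], Z[perm[n1:]]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one to avoid a zero p-value
```

No normality or covariance-homogeneity assumption is needed: the null distribution is built from the data themselves, which is what makes resampling attractive when the classical MANOVA assumptions fail.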


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号