Similar Literature
 20 similar documents found.
1.
This paper considers the use of multidimensional scaling techniques in multivariate statistical process control. Principal components analysis, multiple principal components analysis, partial least squares and PARAFAC models have already been established as useful methods for this purpose, but it should be possible to widen the portfolio of techniques to include others that come under the multidimensional scaling class. Some of these are briefly described (namely classical scaling, non-metric scaling, biplots and Procrustes analysis) and are then used on some gas transportation data provided by Transco.
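As a rough illustration of the classical scaling step mentioned above, the sketch below recovers a low-dimensional configuration from a symmetric dissimilarity matrix by double-centring the squared dissimilarities and taking the leading eigenvectors. The function name, the example distances and the choice of two dimensions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def classical_scaling(D, n_components=2):
    """Classical (Torgerson) scaling of a symmetric dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # doubly centred squared dissimilarities
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1]             # largest eigenvalues first
    eigval, eigvec = eigval[order], eigvec[:, order]
    pos = np.clip(eigval[:n_components], 0.0, None)   # guard against tiny negative eigenvalues
    return eigvec[:, :n_components] * np.sqrt(pos)

# Example: four points whose pairwise Euclidean distances are reproduced in 2-D.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(classical_scaling(D))
```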

2.
Data resulting from behavioral dental research, usually categorical or discretized and having unknown measurement and distributional characteristics, often cannot be analyzed with classical multivariate techniques. A nonlinear principal components technique called multiple correspondence analysis is presented, together with a corresponding computer program, that can handle this kind of data. The model is described as a form of multidimensional scaling. The technique is applied to establish which factors are associated with an individual's preference for preservation of the teeth.
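As a hedged sketch of the kind of computation such a program performs, multiple correspondence analysis can be obtained as correspondence analysis of the indicator (dummy) matrix. The toy dental-style data frame, the column names and the function below are invented for illustration; the paper's own program and its optimal-scaling options may differ.

```python
import numpy as np
import pandas as pd

def mca(df, n_dims=2):
    """Multiple correspondence analysis, computed as correspondence
    analysis of the indicator (dummy) matrix of the categorical data."""
    Z = pd.get_dummies(df.astype("category")).to_numpy(float)   # indicator matrix
    P = Z / Z.sum()                                              # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)                          # row / column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))           # standardised residuals
    U, sing, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U[:, :n_dims] * sing[:n_dims]) / np.sqrt(r)[:, None] # principal row coordinates
    cols = (Vt.T[:, :n_dims] * sing[:n_dims]) / np.sqrt(c)[:, None]
    return rows, cols, sing ** 2                                 # coordinates and inertias

# Invented toy data in the spirit of categorical dental-survey responses.
df = pd.DataFrame({"brushing": ["daily", "rarely", "daily", "weekly", "daily"],
                   "preference": ["preserve", "extract", "preserve", "preserve", "extract"]})
rows, cols, inertia = mca(df)
print(rows)
```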

3.
CoPlot analysis is one of the multivariate data-visualization techniques. It consists of two graphs: the first represents the distribution of p-dimensional observations over two-dimensional space, whereas the second shows the relations of the variables with the observations. In CoPlot analysis, multidimensional scaling (MDS) and Pearson’s correlation coefficient (PCC) are used to obtain a map that displays observations and variables simultaneously. However, both MDS and PCC are sensitive to outliers. When a multidimensional dataset contains outliers, interpretation of the map obtained from classical CoPlot analysis may lead to wrong conclusions. In this study, a novel approach to classical CoPlot analysis is presented. By using robust MDS and the median absolute deviation correlation coefficient (MADCC), a robust CoPlot map is obtained. Numerical examples are given to illustrate the merits of the proposed approach, and the results are compared with classical CoPlot analysis to emphasize the superiority of the robust CoPlot approach.
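The sketch below shows one common MAD-based robust correlation estimator (a robust scale applied to the sum and difference of robustly standardised variables). It is offered only as an illustration of the idea behind MADCC; the exact definition used in the paper may differ, and the simulated outlier example is invented.

```python
import numpy as np

def mad(x):
    """Median absolute deviation (unscaled)."""
    return np.median(np.abs(x - np.median(x)))

def mad_correlation(x, y):
    """Robust correlation from MADs of the sum and difference of the
    robustly standardised variables (one common MAD-based estimator)."""
    u = (x - np.median(x)) / mad(x) + (y - np.median(y)) / mad(y)
    v = (x - np.median(x)) / mad(x) - (y - np.median(y)) / mad(y)
    return (mad(u) ** 2 - mad(v) ** 2) / (mad(u) ** 2 + mad(v) ** 2)

# Outlier-contaminated example: the robust estimate stays close to the bulk.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.8 * x + 0.6 * rng.normal(size=200)
x[:5] += 20                                  # a few gross outliers
print(np.corrcoef(x, y)[0, 1], mad_correlation(x, y))
```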

4.
Poverty can be seen as a multidimensional phenomenon described by a set of indicators, the poverty components. A one-dimensional measure of poverty serving as a ranking index can be obtained by combining the component indicators via aggregation techniques. Ranking indices are thought of as supporting political decisions. This paper proposes an alternative to aggregation based on simple concepts of partial order theory and illustrates the pros and cons of this approach taking as case study a multidimensional measure of poverty comprising three components – absolute poverty, relative poverty and income – computed for the European Union regions. The analysis enables one to highlight conflicts across the components with some regions detected as controversial, with, for example, low levels of relative poverty and high levels of monetary poverty. The partial order approach enables one to point to the regions with the most severe data conflicts and to the component indicators that cause these conflicts.
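To make the partial-order idea concrete, the short sketch below flags pairs of regions that are mutually incomparable, i.e. neither region is at least as poor as the other on every component. The region names and component values are made up and do not come from the EU data analysed in the paper.

```python
import numpy as np

# Hypothetical poverty components (rows: regions; columns: absolute poverty,
# relative poverty, income-based poverty); higher means poorer on each component.
regions = ["A", "B", "C", "D"]
P = np.array([[0.2, 0.7, 0.3],
              [0.5, 0.2, 0.6],
              [0.1, 0.1, 0.1],
              [0.6, 0.8, 0.7]])

def dominates(p, q):
    """p is at least as poor as q on every component."""
    return np.all(p >= q)

# Incomparable pairs signal conflicts between the component indicators.
for i in range(len(regions)):
    for j in range(i + 1, len(regions)):
        if not dominates(P[i], P[j]) and not dominates(P[j], P[i]):
            print(f"{regions[i]} and {regions[j]} are incomparable")
```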

5.
In multivariate statistics, estimation of the covariance or correlation matrix is of crucial importance. Computational and other arguments often lead to the use of coordinate-dependent estimators, yielding matrices that are symmetric but not positive semidefinite. We briefly discuss existing methods, based on shrinking, for transforming such matrices into positive semidefinite matrices. A simple method based on eigenvalues is also considered. Taking into account the geometric structure of correlation matrices, a new method is proposed which uses techniques similar to those of multidimensional scaling.
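A minimal sketch of the simple eigenvalue-based repair mentioned above: negative eigenvalues are clipped to zero and the diagonal is rescaled back to ones. The MDS-flavoured method proposed in the paper is more involved and is not reproduced here; the example matrix is invented.

```python
import numpy as np

def clip_to_correlation(R, eps=0.0):
    """Replace negative eigenvalues of a symmetric matrix by eps and
    rescale so that the result is a valid correlation matrix."""
    eigval, eigvec = np.linalg.eigh(R)
    R_psd = eigvec @ np.diag(np.maximum(eigval, eps)) @ eigvec.T
    d = np.sqrt(np.diag(R_psd))
    R_corr = R_psd / np.outer(d, d)              # restore a unit diagonal
    return (R_corr + R_corr.T) / 2               # enforce exact symmetry

# A symmetric "correlation" matrix that is not positive semidefinite.
R = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, -0.9],
              [0.7, -0.9, 1.0]])
print(np.linalg.eigvalsh(R))                     # contains a negative eigenvalue
print(np.linalg.eigvalsh(clip_to_correlation(R)))
```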

6.
Many college courses use group work as a part of the learning and evaluation process. Class groups are often selected randomly or by allowing students to organize groups themselves. However, if it is desired to control some aspect of the group structure, such as increasing schedule compatibility within groups, multidimensional scaling can be used to form such groups. This article describes how this has been adopted in an undergraduate statistics course. Resulting groups have been more homogeneous with respect to student schedules than groups selected randomly—an example from winter quarter 2004 increased correlations between student schedules from a mean of .29 before grouping to a within-group mean of .50. Further, the exercise allows opportunities to discuss a wealth of statistical concepts in class, including surveys, association measures, multidimensional scaling, and statistical graphics.
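The sketch below illustrates the general idea under stated assumptions: hypothetical binary availability schedules, one minus the schedule correlation as the dissimilarity, a two-dimensional MDS map, and k-means on the map to form groups. The actual procedure used in the course may differ in its distance measure and grouping step.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

# Hypothetical data: rows are students, columns are weekly time slots,
# 1 = free, 0 = busy (the real course data and grouping rules may differ).
rng = np.random.default_rng(0)
schedules = rng.integers(0, 2, size=(24, 40))

# Dissimilarity: 1 - correlation between schedules, so compatible students are close.
corr = np.corrcoef(schedules)
dissim = 1.0 - corr

# Two-dimensional MDS map of students, then clusters as project groups.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)
groups = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(coords)
print(groups)
```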

7.
This article uses a comprehensive model of economic inequality to examine the impact of relative price changes on inequality in the marginal distributions of various income components in which the marginal distributions are derived from a multidimensional joint distribution. The multidimensional joint distribution function is assumed to be a member of the Pearson Type VI family; that is, it is assumed to be a beta distribution of the second kind. The multidimensional joint distribution is so called because it is a joint distribution of components of income and expenditures on various commodity groups. Gini measures of inequality are devised from the marginal distributions of the various income components. The inequality measures are shown to depend on the parameters of the multidimensional joint distribution. It is then shown that the parameters of the multidimensional joint distribution depend on the relative prices of various commodity groups and several other specified exogenous variables. Thus, knowledge of how changes in relative prices affect the parameters of the multidimensional joint distribution is deductively equivalent to knowledge of how changes in relative prices affect inequality in the marginal distributions of various components of income. It is found that relative price changes have a statistically significant impact on inequality in various components of income.
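As a small numerical aside, and not the paper's Pearson Type VI derivation, the Gini coefficient of an income component can be estimated from a sample as half the mean absolute difference divided by the mean. The beta-prime draws below merely stand in for a beta distribution of the second kind; the parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

def gini(x):
    """Empirical Gini coefficient: mean absolute difference / (2 * mean)."""
    x = np.asarray(x, dtype=float)
    mean_abs_diff = np.abs(x[:, None] - x[None, :]).mean()
    return mean_abs_diff / (2 * x.mean())

# Incomes simulated from a beta distribution of the second kind (beta prime).
rng = np.random.default_rng(42)
incomes = stats.betaprime.rvs(a=3.0, b=4.0, size=2000, random_state=rng)
print(round(gini(incomes), 3))
```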

8.
Recently, a lot of attention has been devoted to constrained estimation theory in multidimensional scaling models. So far, only equality constraints have been thoroughly studied. In this paper, the optimization theory is extended to general multidimensional scaling models with both inequality and equality constraints. A Newton-Raphson-based algorithm is developed to produce the constrained least squares estimate. To illustrate the theory, some classical color data are reanalyzed in the context of the linear Euclidean distance model.

9.
Kleiner and Hartigan (1981) introduced trees and castles (graphical techniques for representing multidimensional data) in which the variables are assigned to components of the display on the basis of hierarchical clustering. An experiment was performed to assess the efficacy of trees and castles in discrimination. The graphs were compared to two types of histograms: one with variables assigned randomly, the other with variables assigned according to hierarchical clustering. Trees tended to give the best results.

10.
Nonmetric multidimensional scaling (MDS) is adapted to give configurations of points that lie on the surface of a sphere. There are data sets where it can be argued that spherical MDS is more relevant than the usual planar MDS. The theory behind the adaptation of planar MDS to spherical MDS is outlined, and its use is then illustrated on three data sets.
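A rough sketch of the idea, not the paper's algorithm: points are placed on the unit sphere, parameterised by colatitude and longitude, and a raw stress between great-circle distances and target dissimilarities (assumed rescaled to [0, pi]) is minimised numerically. The optimiser, starting values and example data are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def spherical_mds(delta, seed=0):
    """Place n points on the unit sphere so that great-circle distances
    approximate the target dissimilarities delta (symmetric, scaled to [0, pi])."""
    n = delta.shape[0]
    rng = np.random.default_rng(seed)

    def to_xyz(angles):
        theta, phi = angles[:n], angles[n:]              # colatitude, longitude
        return np.column_stack([np.sin(theta) * np.cos(phi),
                                np.sin(theta) * np.sin(phi),
                                np.cos(theta)])

    def stress(angles):
        X = to_xyz(angles)
        d = np.arccos(np.clip(X @ X.T, -1.0, 1.0))       # great-circle distances
        iu = np.triu_indices(n, 1)
        return np.sum((d[iu] - delta[iu]) ** 2)

    start = rng.uniform(0.1, np.pi - 0.1, size=2 * n)
    res = minimize(stress, start, method="L-BFGS-B")
    return to_xyz(res.x), res.fun

# Example: recover a configuration from the great-circle distances of 8 spherical points.
rng = np.random.default_rng(1)
P = rng.normal(size=(8, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)
delta = np.arccos(np.clip(P @ P.T, -1.0, 1.0))
coords, final_stress = spherical_mds(delta)
print(round(final_stress, 4))
```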

11.
Based on the references cited in papers from 66 mechanics journals, citation analysis is used to compile and analyse the citations of these journals, yielding a citation matrix for the mechanics journal group. The citation matrix is converted into a similarity matrix via the cosine function, and multidimensional scaling is applied to translate the citation similarities between the mechanics journals into planar distances, so that the citation relationships among the journal subgroups of the different branches of mechanics are displayed intuitively in a planar map. Factor analysis is then used to reduce the complex internal relationships among the 66 mechanics journals to a few uncorrelated factors, and the composition and internal relationships of the mechanics journal group are examined, thereby exploring the structure of the discipline of mechanics.
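A minimal sketch of the pipeline described above, with a made-up citation matrix standing in for the 66 mechanics journals: cosine similarity between citation profiles is converted to dissimilarity, and a two-dimensional MDS map of the journal group is produced. The factor-analysis step is omitted.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.manifold import MDS

# Hypothetical citation matrix: rows are journals, columns are cited sources,
# entries are citation counts (a stand-in for the 66-journal data).
rng = np.random.default_rng(0)
citations = rng.poisson(lam=3.0, size=(10, 30)).astype(float)

sim = cosine_similarity(citations)          # cosine similarity of citation profiles
dissim = 1.0 - sim                          # turn similarity into dissimilarity
np.fill_diagonal(dissim, 0.0)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)
print(coords)                               # planar map of the journal group
```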

12.
Maximum likelihood estimation in many classical statistical problems is beset by multimodality. This article explores several variations of deterministic annealing that tend to avoid inferior modes and find the dominant mode. In Bayesian settings, annealing can be tailored to find the dominant mode of the log posterior. Our annealing algorithms involve essentially trivial changes to existing optimization algorithms built on block relaxation or the EM or MM principle. Our examples include estimation with the multivariate t distribution, Gaussian mixture models, latent class analysis, factor analysis, multidimensional scaling and a one‐way random effects model. In the numerical examples explored, the proposed annealing strategies significantly improve the chances for locating the global maximum.
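The sketch below is a minimal deterministic-annealing EM in the spirit of the article, applied to a two-component one-dimensional Gaussian mixture: the E-step responsibilities are tempered by an exponent beta that is raised gradually to 1. The beta schedule, the jitter and the simulated data are illustrative assumptions, not the article's algorithms.

```python
import numpy as np
from scipy.stats import norm

def annealed_em_gmm(x, betas=(0.5, 0.7, 0.85, 1.0), n_iter=50, seed=0):
    """Deterministic-annealing EM for a two-component 1-D Gaussian mixture:
    responsibilities are tempered by beta, which is raised gradually to 1."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=2, replace=False)        # initialise at two data points
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for beta in betas:
        mu = mu + rng.normal(scale=1e-3, size=2)     # jitter avoids a symmetric fixed point
        for _ in range(n_iter):
            dens = np.stack([pi[k] * norm.pdf(x, mu[k], sigma[k]) for k in range(2)])
            resp = dens ** beta                      # tempered E-step
            resp /= resp.sum(axis=0, keepdims=True)
            nk = resp.sum(axis=1)                    # M-step with tempered responsibilities
            pi = nk / nk.sum()
            mu = (resp * x).sum(axis=1) / nk
            sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    return pi, mu, sigma

# A well-separated two-component mixture; annealing helps avoid poor local maxima.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3.0, 1.0, 300), rng.normal(4.0, 1.5, 700)])
print(annealed_em_gmm(x))
```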

13.
Generalized discriminant analysis based on distances
This paper describes a method of generalized discriminant analysis based on a dissimilarity matrix to test for differences in a priori groups of multivariate observations. Use of classical multidimensional scaling produces a low‐dimensional representation of the data for which Euclidean distances approximate the original dissimilarities. The resulting scores are then analysed using discriminant analysis, giving tests based on the canonical correlations. The asymptotic distributions of these statistics under permutations of the observations are shown to be invariant to changes in the distributions of the original variables, unlike the distributions of the multi‐response permutation test statistics which have been considered by other workers for testing differences among groups. This canonical method is applied to multivariate fish assemblage data, with Monte Carlo simulations to make power comparisons and to compare theoretical results and empirical distributions. The paper proposes classification based on distances. Error rates are estimated using cross‐validation.
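A hedged sketch of the pipeline described above: classical scaling of a dissimilarity matrix, canonical correlations between the resulting scores and group membership, and a permutation test on their sum of squares. The simulated two-group data, the number of retained dimensions and the choice of test statistic are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def classical_scaling(D, n_components=2):
    """Classical scaling of a symmetric dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

def canonical_correlations(Z, labels):
    """Canonical correlations between a score matrix Z and group membership."""
    Y = pd.get_dummies(pd.Series(labels), drop_first=True).to_numpy(float)
    Qz, _ = np.linalg.qr(Z - Z.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qz.T @ Qy, compute_uv=False)

# Hypothetical data: two groups of multivariate observations and their
# Euclidean dissimilarities (standing in for an ecological dissimilarity).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(20, 5)),
               rng.normal(1.0, 1.0, size=(20, 5))])
labels = np.repeat(["g1", "g2"], 20)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

Z = classical_scaling(D, n_components=3)
observed = np.sum(canonical_correlations(Z, labels) ** 2)

# Permutation test: shuffle the group labels and recompute the statistic.
perm = [np.sum(canonical_correlations(Z, rng.permutation(labels)) ** 2)
        for _ in range(999)]
p_value = (1 + np.sum(np.array(perm) >= observed)) / (1 + len(perm))
print(observed, p_value)
```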

14.
In the past ten years, multidimensional scaling of nonmetric data has been widely applied in behavioral and business research. This paper investigates the asymmetric data matrix and develops stress distributions based upon a null hypothesis of equal likelihood in the ranking of a set of proximities.
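The sketch below generates a Monte Carlo reference distribution of nonmetric MDS stress when every ranking of the proximities is equally likely, using random rank assignments in a symmetric matrix. This is a simplified symmetric stand-in for the asymmetric setting studied in the paper, and the configuration sizes are arbitrary.

```python
import numpy as np
from sklearn.manifold import MDS

def null_stress_distribution(n_objects=10, n_sims=100, seed=0):
    """Monte Carlo distribution of nonmetric MDS stress when all rankings of
    the pairwise proximities are equally likely (random ranks, symmetric case)."""
    rng = np.random.default_rng(seed)
    n_pairs = n_objects * (n_objects - 1) // 2
    stresses = []
    for _ in range(n_sims):
        ranks = rng.permutation(np.arange(1, n_pairs + 1)).astype(float)
        D = np.zeros((n_objects, n_objects))
        D[np.triu_indices(n_objects, 1)] = ranks
        D = D + D.T                                   # random rank proximities
        mds = MDS(n_components=2, metric=False,
                  dissimilarity="precomputed", random_state=0)
        mds.fit(D)
        stresses.append(mds.stress_)                  # stress of the fitted solution
    return np.array(stresses)

stress = null_stress_distribution()
print(np.percentile(stress, [5, 50, 95]))             # reference quantiles under the null
```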

15.
A multidimensional scaling methodology (STUNMIX) for the analysis of subjects' preference/choice of stimuli that sets out to integrate the previous work in this area into a single framework, as well as to provide a variety of new options and models, is presented. Locations of the stimuli and the ideal points of derived segments of subjects on latent dimensions are estimated simultaneously. The methodology is formulated in the framework of the exponential family of distributions, whereby a wide range of different data types can be analyzed. Possible reparameterizations of stimulus coordinates by stimulus characteristics, as well as of probabilities of segment membership by subject background variables, are permitted. The models are estimated in a maximum likelihood framework. The performance of the models is demonstrated on synthetic data, and robustness is investigated. An empirical application is provided, concerning intentions to buy portable telephones.

16.
Two similarity measures are employed to compare historic stock market indices over time. The more traditional Euclidean similarity is employed to provide a reference. As a comparison, dynamic time warping is introduced as a similarity measure. Multidimensional scaling is employed to compare these dissimilarities on 15 financial indices sampled daily over a 10-year period. In addition to investigating the whole period, 1-year tranches are also considered. This analysis is compared to a recent study of Machado et al. [Analysis of stock market indices through multidimensional scaling, Commun. Nonlinear Sci. Numer. Simul. 16(12) (2011), pp. 4610–4618], who examined these same indices using correlation as a similarity measure. It is suggested that this approach may be problematic. Doubt is also cast on the efficacy of the ‘histogram’ similarity they also propose.
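A hedged sketch of the comparison described above: a textbook dynamic-time-warping distance between series, a pairwise distance matrix, and a two-dimensional MDS map. The simulated random-walk series stand in for the 15 daily financial indices.

```python
import numpy as np
from sklearn.manifold import MDS

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Hypothetical index series (stand-ins for the 15 daily financial indices).
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=(6, 250)), axis=1)   # 6 indices, about one year of days

n = len(series)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = dtw_distance(series[i], series[j])

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(coords)
```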

17.
New points can be superimposed on a Euclidean configuration obtained as a result of a metric multidimensional scaling, at coordinates given by Gower's interpolation formula. The procedure amounts to discarding a possibly nonnull coordinate along an additional dimension. We derive an analytical formula for this projection error term and, for real data problems, we describe a statistical method for testing its significance, as a cautionary device prior to further distance-based predictions.
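The sketch below uses one standard form of Gower's add-a-point formula, derived for a centred classical-scaling configuration, and then checks the interpolation empirically by comparing reproduced and given distances. The analytical projection-error formula and significance test developed in the paper are not reproduced; the example points are invented.

```python
import numpy as np

def classical_scaling(D, n_components=2):
    """Classical scaling of a symmetric dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

def gower_add_point(X, d2_new):
    """Interpolate a new point into a centred configuration X from its vector
    of squared distances d2_new to the n original points."""
    q = np.sum(X ** 2, axis=1)                      # squared norms of the rows of X
    return 0.5 * np.linalg.solve(X.T @ X, X.T @ (q - d2_new))

# Example: embed 5 points from their distances, then interpolate a 6th point.
rng = np.random.default_rng(0)
P = rng.normal(size=(6, 2))
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)

X = classical_scaling(D[:5, :5], n_components=2)     # configuration of the first 5 points
y = gower_add_point(X, D[:5, 5] ** 2)                # interpolated 6th point

# Empirical check: how far the reproduced distances are from the given ones.
reproduced = np.linalg.norm(X - y, axis=1)
print(np.max(np.abs(reproduced - D[:5, 5])))         # close to 0 here, since the data are 2-D
```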

18.
Some new results of a distance-based (DB) model for prediction with mixed variables are presented and discussed. This model can be thought of as a linear model in which the predictor variables for a response Y are obtained from the observed ones via classic multidimensional scaling. A coefficient is introduced in order to choose the most predictive dimensions, providing a solution to the problem of small variances and a very large number n of observations (the dimensionality increases with n). The problem of missing data is explored and a DB solution is proposed. It is shown that this approach can be regarded as a kind of ridge regression when the usual Euclidean distance is used.
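A minimal sketch of the distance-based idea under simplifying assumptions: the predictors are summarised by a Euclidean distance matrix, the response is regressed on the leading principal coordinates obtained by classical scaling, and the number of retained dimensions is fixed by hand rather than by the predictivity coefficient introduced in the paper. The data are simulated.

```python
import numpy as np

def classical_scaling(D, n_components):
    """Classical scaling of a symmetric dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

# Hypothetical predictors summarised by a distance matrix; a plain Euclidean
# distance on numeric predictors is used here purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.3, size=50)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Distance-based regression: regress y on the leading principal coordinates.
Z = classical_scaling(D, n_components=3)
Z1 = np.column_stack([np.ones(len(Z)), Z])
beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
fitted = Z1 @ beta
print(np.corrcoef(fitted, y)[0, 1] ** 2)       # in-sample R^2
```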

19.
The DEDICOM model is a model for analyzing square tables describing asymmetric relationships among n entities. Its importance in the asymmetric multidimensional scaling literature is due to the fact that several authors have shown a large class of models to be simply constrained versions of DEDICOM. A typical example is the Generalized GIPSCAL proposed by Kiers & Takane. In this paper we present a new algorithm capable of fitting, in the least squares sense, any constrained DEDICOM model.

20.
Principal components are a well-established tool in dimension reduction. The extension to principal curves allows for general smooth curves that pass through the middle of a multidimensional data cloud. In this paper local principal curves are introduced, which are based on the localization of principal component analysis. The proposed algorithm is able to identify closed curves as well as multiple curves, which may or may not be connected. For evaluating the performance of principal curves as a tool for data reduction, a measure of coverage is suggested. Using simulated and real data sets, the approach is compared with various alternative concepts of principal curves.
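A bare-bones sketch of a local principal curve in the spirit of the proposal: starting from a point, the algorithm repeatedly moves to the kernel-weighted local mean and steps along the first local principal component. The bandwidth, step length, single-direction tracing and half-circle example are illustrative simplifications of the full algorithm.

```python
import numpy as np

def local_principal_curve(X, h=0.5, step=0.3, max_steps=100, start=None):
    """Trace a local principal curve: alternate kernel-weighted local means
    with steps along the first local principal component."""
    x = X[0] if start is None else np.asarray(start, dtype=float)
    direction = None
    curve = []
    for _ in range(max_steps):
        w = np.exp(-0.5 * np.sum((X - x) ** 2, axis=1) / h ** 2)   # Gaussian kernel weights
        if w.sum() < 1e-8:                                         # left the data cloud
            break
        mu = (w[:, None] * X).sum(axis=0) / w.sum()                # local mean
        Xc = X - mu
        cov = (w[:, None] * Xc).T @ Xc / w.sum()                   # local covariance
        eigval, eigvec = np.linalg.eigh(cov)
        gamma = eigvec[:, -1]                                      # first local PC direction
        if direction is not None and gamma @ direction < 0:
            gamma = -gamma                                         # keep moving forward
        direction = gamma
        curve.append(mu)
        x = mu + step * gamma                                      # step along the curve
    return np.array(curve)

# Noisy half-circle: the fitted points should trace the arc from its left end.
rng = np.random.default_rng(2)
t = rng.uniform(0, np.pi, 400)
X = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(scale=0.05, size=(400, 2))
curve = local_principal_curve(X, start=X[np.argmin(X[:, 0])])
print(curve[:5])
```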
