Similar Articles (20 results)
1.
Among many classification methods, linear discriminant analysis (LDA) is a favored tool due to its simplicity, robustness, and predictive accuracy. However, when the number of genes is larger than the number of observations, it cannot be applied directly because the within-class covariance matrix is singular. Diagonal LDA (DLDA) is a simpler model than LDA and performs better in some cases, but it relies on a strong assumption of mutual independence among the features. In this article, we propose the modified LDA (MLDA). MLDA is based on independence but exploits the information in the dependence structure that affects classification performance. We suggest two approaches: one uses gene rank, and the other does not. We found that MLDA performs better than LDA, DLDA, or K-nearest neighbors and is comparable with support vector machines in a real data analysis and a simulation study.
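The diagonal variant contrasted with full LDA above can be sketched in a few lines: DLDA keeps only per-feature variances, so it stays well defined even when the number of genes exceeds the number of observations. A minimal numpy sketch of plain DLDA (function names are illustrative; this is not the authors' MLDA):

```python
import numpy as np

def dlda_fit(X, y):
    """Fit diagonal LDA: per-class means, pooled per-feature variances, priors."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    # pooled within-class variance, feature by feature (diagonal covariance)
    resid = X - means[np.searchsorted(classes, y)]
    var = resid.var(axis=0) + 1e-8          # small ridge for numerical stability
    priors = np.array([(y == c).mean() for c in classes])
    return classes, means, var, priors

def dlda_predict(model, X):
    """Assign each row to the class maximizing -0.5*sum_j (x_j-mu_kj)^2/var_j + log prior_k."""
    classes, means, var, priors = model
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2 / var).sum(axis=2)
    scores = -0.5 * d2 + np.log(priors)
    return classes[scores.argmax(axis=1)]
```

Because only p variances are estimated instead of a p-by-p covariance matrix, the rule remains computable for p much larger than n, which is exactly the regime where plain LDA breaks down.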

2.
In this article, a sequential correction of two linear methods, linear discriminant analysis (LDA) and the perceptron, is proposed. The correction relies on sequentially adding features on which the classifier is trained. These new features are posterior probabilities determined by a basic classification method such as LDA or the perceptron. In each step, we add the probabilities obtained on a slightly different data set, because the vector of added probabilities varies at each step. We therefore have many classifiers of the same type trained on slightly different data sets. Four sequential correction methods are presented, based on different combining schemes (e.g., the mean rule and the product rule). Experimental results on several data sets demonstrate that the improvements are effective: this approach outperforms classical linear methods, providing a significant reduction in the mean classification error rate.

3.
We consider the supervised classification setting, in which the data consist of p features measured on n observations, each of which belongs to one of K classes. Linear discriminant analysis (LDA) is a classical method for this problem. However, in the high-dimensional setting where p ≫ n, LDA is not appropriate for two reasons. First, the standard estimate of the within-class covariance matrix is singular, so the usual discriminant rule cannot be applied. Second, when p is large, the classification rule obtained from LDA is difficult to interpret, since it involves all p features. We propose penalized LDA, a general approach for penalizing the discriminant vectors in Fisher's discriminant problem in a way that leads to greater interpretability. The discriminant problem is not convex, so we use a minorization-maximization approach to optimize it efficiently when convex penalties are applied to the discriminant vectors. In particular, we consider the L1 and fused lasso penalties. Our proposal is equivalent to recasting Fisher's discriminant problem as a biconvex problem. We evaluate the performance of the resulting methods in a simulation study and on three gene expression data sets. We also survey past methods for extending LDA to the high-dimensional setting and explore their relationships with our proposal.

4.
We introduce a technique for extending the classical method of linear discriminant analysis (LDA) to data sets where the predictor variables are curves or functions. This procedure, which we call functional linear discriminant analysis (FLDA), is particularly useful when only fragments of the curves are observed. All the techniques associated with LDA can be extended for use with FLDA. In particular, FLDA can be used to classify new (test) curves, to estimate the discriminant function between classes, and to provide a one- or two-dimensional pictorial representation of a set of curves. We also extend the procedure to provide generalizations of quadratic and regularized discriminant analysis.

5.
In simulation studies for discriminant analysis, misclassification errors are often computed by the Monte Carlo method: a classifier is tested on large samples generated from known populations. Although large samples are expected to behave much like the underlying distributions, they may not do so in a small interval or region, and thus may lead to unexpected results. We demonstrate with an example that the LDA misclassification error computed via the Monte Carlo method may often be smaller than the Bayes error. We give a rigorous explanation and recommend a method to compute misclassification errors properly.
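The phenomenon is easy to reproduce in the simplest setting. For two univariate normal populations N(0,1) and N(Δ,1) with equal priors, the Bayes rule thresholds at Δ/2 and the Bayes error is Φ(−Δ/2), so a Monte Carlo estimate of the misclassification rate can be compared with the exact value. A stdlib-only sketch (function names are illustrative): for a finite Monte Carlo sample the estimate fluctuates around the true value and can indeed fall below the Bayes error, which is the pitfall described above.

```python
import math
import random

def bayes_error(delta):
    """Exact Bayes error for equal-prior N(0,1) vs N(delta,1): Phi(-delta/2)."""
    return 0.5 * math.erfc(delta / (2 * math.sqrt(2)))

def mc_error(delta, n, seed=0):
    """Monte Carlo misclassification rate of the Bayes rule (threshold delta/2)."""
    rng = random.Random(seed)
    errs = 0
    for _ in range(n):
        if rng.random() < 0.5:                 # draw from class 0: N(0,1)
            errs += rng.gauss(0.0, 1.0) > delta / 2
        else:                                  # draw from class 1: N(delta,1)
            errs += rng.gauss(delta, 1.0) <= delta / 2
    return errs / n
```

Running `mc_error` repeatedly with different seeds shows estimates scattered on both sides of `bayes_error(delta)`; a single run below the Bayes error is sampling noise, not a better-than-optimal classifier.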

6.
Many sparse linear discriminant analysis (LDA) methods have been proposed to overcome the major problems of classic LDA in high-dimensional settings. However, the asymptotic optimality results are limited to the case with only two classes. When there are more than two classes, the classification boundary is complicated and no explicit formulas for the classification errors exist. We consider asymptotic optimality in high-dimensional settings for a large family of linear classification rules with an arbitrary number of classes. Our main theorem provides easy-to-check criteria for the asymptotic optimality of a general classification rule in this family as dimensionality and sample size both go to infinity and the number of classes is arbitrary. We also establish the corresponding convergence rates. The general theory is applied to classic LDA and to extensions of two recently proposed sparse LDA methods to obtain their asymptotic optimality.

7.
Yanfang Li & Jing Lei, Statistics, 2018, 52(4):782–800
We study high-dimensional multigroup classification from a sparse subspace estimation perspective, unifying linear discriminant analysis (LDA) with other recent developments in high-dimensional multivariate analysis that use similar tools, such as penalization methods. We develop two two-stage sparse LDA models: in the first stage, convex relaxation converts two classical formulations of LDA to semidefinite programs, and the subspace perspective allows for straightforward regularization and estimation. After the initial convex relaxation, a refinement stage improves accuracy. For the first model, a penalized quadratic program with a group lasso penalty is used for refinement, whereas a sparse version of the power method is used for the second model. We carefully examine the theoretical properties of both methods, alongside simulations and real data analysis.

8.
This article investigates the use of our newly defined extended projection depth (abbreviated EPD) in nonparametric discriminant analysis. We propose a robust nonparametric classifier that relies on the intuitively simple notion of EPD: it assigns an observation to the population with respect to which the observation has maximum EPD. Asymptotic properties of the misclassification rates and robustness properties of the EPD-based classifier are discussed. A few simulated data sets are used to compare the performance of the EPD-based classifier with Fisher's linear discriminant rule, the quadratic discriminant rule, and the PD-based classifier. We also find that when the underlying distributions are elliptically symmetric, the EPD-based classifier is asymptotically equivalent to the optimal Bayes classifier.

9.
Classification of data consisting of both categorical and continuous variables between two groups is often handled by the sample location linear discriminant function confined to each of the locations specified by the observed values of the categorical variables. Homoscedasticity of the across-location conditional dispersion matrices of the continuous variables is often assumed, but interactions between continuous and categorical variables frequently cause across-location heteroscedasticity. In this article, we examine the effect of heterogeneous across-location conditional dispersion matrices on the overall expected and actual error rates associated with the sample location linear discriminant function. Its performance is evaluated against results for the restrictive classifier adjusted for across-location heteroscedasticity. Conclusions based on a Monte Carlo study are reported.

10.
For time series data with obvious periodicity (e.g., electric motor systems and cardiac monitors) or vague periodicity (e.g., earthquake and explosion, speech, and stock data), frequency-based techniques using spectral analysis can usually capture the features of the series. With this approach, we are able not only to reduce the data dimension in the frequency domain but also to feed these frequencies to general classification methods such as linear discriminant analysis (LDA) and k-nearest neighbors (KNN) to classify the time series, a combination of two classical approaches. However, using LDA and KNN in the frequency domain is difficult because of the excessive dimension of the data. We overcome this obstacle by using the singular value decomposition to select essential frequencies. Two data sets are used to illustrate our approach. The classification error rates of our simple approach are comparable to those of several more complicated methods.
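The pipeline just described (periodogram features, SVD-based dimension reduction, then a simple classifier) can be sketched with numpy alone. Function names are illustrative, and a 1-nearest-neighbour rule stands in for the LDA/KNN classifiers of the abstract:

```python
import numpy as np

def spectral_features(series, k=3):
    """Periodogram of each series (one row each), reduced to k dimensions
    by projecting onto the top-k right singular vectors (SVD step)."""
    P = np.abs(np.fft.rfft(series, axis=1)) ** 2       # one periodogram per row
    U, s, Vt = np.linalg.svd(P - P.mean(axis=0), full_matrices=False)
    return P @ Vt[:k].T                                # top-k frequency directions

def nn_classify(train_f, train_y, test_f):
    """1-nearest-neighbour classification in the reduced frequency space."""
    d = ((test_f[:, None, :] - train_f[None, :, :]) ** 2).sum(axis=2)
    return train_y[d.argmin(axis=1)]
```

Series dominated by different frequencies have periodograms peaking in different bins, so even this crude reduction separates them cleanly.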

11.
A classifier is developed which uses information from all pixels in a neighbourhood to classify the pixel at the centre of the neighbourhood. It is not a smoother, in that it tries to recognize boundaries, and it makes explicit use of the relative positions of pixels in the neighbourhood. It is based on a geometric probability model for the distribution of the classes in the plane. The neighbourhood-based classifier is shown to outperform linear discriminant analysis on some LANDSAT data.

12.
It is widely believed that unlabeled data are promising for improving prediction accuracy in classification problems. Although there are theoretical studies of when and how unlabeled data are beneficial, the actual prediction improvement for a finite sample has not been investigated systematically. We investigate the impact of unlabeled data in linear discriminant analysis and compare the error rates of classifiers estimated with and without unlabeled data. Our focus is the labeling mechanism that characterizes the probabilistic structure of the occurrence of labeled cases. The results imply that even an extremely small proportion of unlabeled data can have a large effect on the analysis results.

13.
赵雪艳 (Zhao Xueyan), 《统计研究》 (Statistical Research), 2020, 37(6):106–118
Correspondence analysis exhibits an "arch effect" when quantifying qualitative data, and correction methods for this effect have been studied extensively, preventing potentially erroneous analysis results; this matters for both theory and applications. Quantification Method II is a discriminant analysis method for qualitative data that has been widely applied in China and abroad. Through extensive simulation studies, this paper finds that Quantification Method II also exhibits the arch effect when quantifying qualitative data: the effect lowers the rate of correct discrimination and prevents faithful reproduction of the original data, leading to conclusions inconsistent with the original information. To correct the arch effect, a two-stage discriminant analysis is proposed, and the two methods are compared in terms of both the correct discrimination rate and fidelity to the original data. Applying the two-stage method to personal credit rating, we find that its discriminant performance is superior to that of Quantification Method II.

14.
This study considers the binary classification of functional data collected in the form of curves. In particular, we assume a situation in which the curves are highly mixed over the entire domain, so that global discriminant analysis based on the entire domain is not effective. We propose an interval-based classification method for functional data: informative intervals are selected and used to separate the curves into two classes. The proposed method, functional logistic regression with a fused lasso penalty, combines functional logistic regression as the classifier with the fused lasso for selecting discriminative segments. It automatically selects the most informative segments of the functional data by employing the fused lasso penalty and simultaneously classifies the data based on the selected segments using functional logistic regression. The effectiveness of the proposed method is demonstrated with simulated and real data examples.

15.
This article demonstrates the application of classification trees (decision trees), logistic regression (LR), and linear discriminant analysis (LDA) to classifying water quality data (i.e., whether the water is fit for drinking or not). The data were obtained from the Pakistan Council of Research in Water Resources (PCRWR) for two cities of Pakistan, one representing an industrial environment (Sialkot) and the other a non-industrial environment (Narowal). Three statistical tools were employed to classify the data: the decision tree methodology using the Gini index, LR, and LDA, implemented with R software libraries. The results obtained by the three techniques were compared using misclassification rates (the model with the minimum misclassification rate is better). LR performed better than the other two techniques, while decision trees and LDA performed equally well; however, for illustration purposes, the decision tree technique is comparatively easy to draw and interpret.

16.
This paper discusses visualization methods for discriminant analysis. It does not address numerical methods for classification per se, but rather focuses on graphical methods that can be viewed as pre-processors, aiding the analyst's understanding of the data and the choice of a final classifier. The methods are adaptations of recent results in dimension reduction for regression, including sliced inverse regression and sliced average variance estimation. A permutation test is suggested as a means of determining dimension, and examples are given throughout the discussion.

17.

There has been increasing interest in using semi-supervised learning to form a classifier. As is well known, the (Fisher) information in an unclassified feature with unknown class label is less (considerably less for weakly separated classes) than that of a classified feature with known class label. Hence, when the absence of class labels does not depend on the data, the expected error rate of a classifier formed from the classified and unclassified features in a partially classified sample is greater than if the sample were completely classified. We propose to treat the labels of the unclassified features as missing data and to introduce a framework for their missingness, as in the pioneering work of Rubin (Biometrika 63:581–592, 1976) on missingness in incomplete data analysis. An examination of several partially classified data sets in the literature suggests that the unclassified features do not occur at random in the feature space, but rather tend to be concentrated in regions of relatively high entropy. This suggests that the missingness of the labels can be modelled by representing the conditional probability of a missing label via a logistic model whose covariate depends on the entropy of the feature, or an appropriate proxy for it. We consider here the case of two normal classes with a common covariance matrix, where for computational convenience the square of the discriminant function is used as the covariate in the logistic model in place of the negative log entropy. Rather paradoxically, we show that the classifier so formed from the partially classified sample may have a smaller expected error rate than if the sample were completely classified.
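In the two-normal-classes setting just described, the missingness model is concrete: with common unit variance the linear discriminant is d(x) = (μ1 − μ0)(x − (μ0 + μ1)/2), and the probability that a label is missing is logistic in d(x)². A stdlib sketch (coefficient names are illustrative, not from the paper); a negative slope concentrates missing labels near the class boundary, i.e., in the high-entropy region:

```python
import math

def p_missing(x, beta0, beta1, mu0, mu1):
    """Logistic probability that the label of feature x is missing, using the
    squared linear discriminant as covariate (two univariate normal classes
    with common unit variance; beta0/beta1 are illustrative coefficients)."""
    d = (mu1 - mu0) * (x - (mu0 + mu1) / 2)   # linear discriminant function
    z = beta0 + beta1 * d * d
    return 1 / (1 + math.exp(-z))
```

With beta1 < 0, an observation at the midpoint of the two means (where the entropy of the class posterior is highest) has the highest chance of arriving unlabelled, matching the empirical pattern the abstract reports.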


18.
A fast Bayesian method that seamlessly fuses classification and hypothesis testing via discriminant analysis is developed. Building upon the original discriminant analysis classifier, modelling components are added to identify discriminative variables. A combination of cake priors and a novel form of variational Bayes, which we call reverse collapsed variational Bayes, gives rise to variable selection that can be posed directly as a multiple hypothesis testing approach using likelihood ratio statistics. Theoretical arguments show that Chernoff-consistency (asymptotically zero type I and type II error) is maintained across all hypotheses. We apply our method to publicly available genomics datasets and show that it performs well in practice for its computational cost. An R package, VaDA, is available on GitHub.

19.
It is well known that linear discriminant analysis (LDA) works well and is asymptotically optimal in fixed-p, large-n situations. But Bickel and Levina (2004, Bernoulli 10:989–1010) showed that LDA is as bad as random guessing when p > n. This article studies sparse discriminant analysis via Dantzig penalized least squares. Our method avoids estimating the high-dimensional covariance matrix and does not need a sparsity assumption on the inverse of the covariance matrix. We show theoretically that the new discriminant analysis is asymptotically optimal. Simulation and real data studies show that the classifier performs better than existing sparse methods.

20.
The statistical problems associated with estimating the mean responding cell density in the limiting dilution assay (LDA) have largely been ignored. We evaluate techniques for analyzing LDA data from multiple biological samples, assumed to follow either a normal or a gamma distribution. Simulated data are used to evaluate the performance of an unweighted mean, a log transform, and the weighted mean procedure described by Taswell (1987). In general, an unweighted mean with jackknife estimates produces satisfactory results; in some cases, a log transform is more appropriate. Taswell's weighted mean algorithm is unable to estimate an accurate variance. We also show that methods which pool samples, or LDA data, are invalid. In addition, we show that optimization of the variance in multiple-sample LDAs depends on the estimator, the between-organism variance, the replicate well size, and the number of biological samples. However, this optimization is generally achieved by maximizing the number of biological samples at the expense of well replicates.
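For a simple mean of per-sample density estimates, the unweighted-mean-with-jackknife approach recommended above reduces to the standard leave-one-out jackknife. A stdlib sketch (names illustrative), taking one responding-cell-density estimate per biological sample:

```python
import math

def jackknife(estimates):
    """Leave-one-out jackknife mean and standard error for a list of
    per-sample estimates (e.g., one density estimate per biological sample
    from a limiting dilution assay)."""
    n = len(estimates)
    total = sum(estimates)
    loo = [(total - e) / (n - 1) for e in estimates]      # leave-one-out means
    jmean = sum(loo) / n
    var = (n - 1) / n * sum((m - jmean) ** 2 for m in loo)
    return jmean, math.sqrt(var)
```

For the plain mean, the jackknife point estimate coincides with the sample mean and the jackknife standard error equals s/√n, so the procedure is cheap insurance that also generalizes to less tractable estimators.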
