Similar Documents
20 similar documents found.
1.
Margin-based classifiers have been popular in both machine learning and statistics for classification problems. Among numerous classifiers, some are hard classifiers while some are soft ones. Soft classifiers explicitly estimate the class conditional probabilities and then perform classification based on the estimated probabilities. In contrast, hard classifiers directly target the classification decision boundary without producing probability estimates. These two types of classifiers are based on different philosophies and each has its own merits. In this paper, we propose a novel family of large-margin classifiers, namely large-margin unified machines (LUMs), which covers a broad range of margin-based classifiers including both hard and soft ones. By offering a natural bridge from soft to hard classification, the LUM provides a unified algorithm to fit various classifiers and hence a convenient platform to compare hard and soft classification. Both the theoretical consistency and the numerical performance of LUMs are explored. Our numerical study sheds some light on the choice between hard and soft classifiers in various classification problems.
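As a concrete illustration of how a single loss family can interpolate between soft and hard classification, the sketch below fits a linear classifier under a LUM-style loss with index parameters a and c. The loss formula used here is one commonly cited parameterization (linear below a kink at c/(1+c), smoothly decaying above it) and should be treated as an assumption, since the abstract does not state it; small c behaves like a soft classifier while large c approaches the hinge loss of the SVM.

```python
import numpy as np
from scipy.optimize import minimize

def lum_loss(u, a=1.0, c=1.0):
    """LUM-style loss at functional margin u = y * f(x) (assumed parameterization).

    Below the kink at c/(1+c) the loss is linear (hinge-like); above it the
    loss decays smoothly, which lets the family move between soft (small c)
    and hard, SVM-like (large c) classification."""
    u = np.asarray(u, dtype=float)
    kink = c / (1.0 + c)
    base = np.maximum((1.0 + c) * u - c + a, 1e-12)   # clip to avoid warnings
    smooth = (1.0 / (1.0 + c)) * (a / base) ** a
    return np.where(u < kink, 1.0 - u, smooth)

def fit_linear_lum(X, y, a=1.0, c=1.0, lam=1e-2):
    """Fit f(x) = w'x + b by minimizing the regularized empirical LUM loss."""
    n, p = X.shape

    def objective(theta):
        w, b = theta[:p], theta[p]
        return lum_loss(y * (X @ w + b), a, c).mean() + lam * np.dot(w, w)

    res = minimize(objective, np.zeros(p + 1), method="L-BFGS-B")
    return res.x[:p], res.x[p]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=200) > 0, 1, -1)
    for c in (0.0, 1.0, 100.0):          # small c ~ soft, large c ~ hard
        w, b = fit_linear_lum(X, y, c=c)
        print(f"c={c:6.1f}  training accuracy = {np.mean(np.sign(X @ w + b) == y):.3f}")
```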

2.
Classical statistical approaches for multiclass probability estimation are typically based on regression techniques such as multiple logistic regression, or density estimation approaches such as linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). These methods often make certain assumptions on the form of the probability functions or on the underlying distributions of the subclasses. In this article, we develop a model-free procedure to estimate multiclass probabilities based on large-margin classifiers. In particular, the new estimation scheme solves a series of weighted large-margin classifiers and then systematically extracts the probability information from these multiple classification rules. A main advantage of the proposed probability estimation technique is that it does not impose any strong parametric assumption on the underlying distribution and can be applied to a wide range of large-margin classification methods. A general computational algorithm is developed for class probability estimation. Furthermore, we establish the asymptotic consistency of the probability estimates. Both simulated and real data examples are presented to illustrate the competitive performance of the new approach and compare it with several existing methods.
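The core construction, solving a sequence of weighted large-margin classifiers and reading the class probability off the weight at which the predicted label flips, can be sketched for the binary case as follows (the paper itself addresses the multiclass case). The grid of weights, the use of a linear SVM, and the flip-point interpolation below are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification

def probability_from_weighted_svms(X_train, y_train, X_test, pis=None, C=1.0):
    """Estimate P(Y=+1 | x) by training one weighted SVM per weight pi and
    locating where the predicted label flips from +1 to -1 as pi grows.

    With weight (1 - pi) on class +1 and pi on class -1, the weighted Bayes
    rule is sign(p(x) - pi), so the flip point along the grid brackets p(x)."""
    if pis is None:
        pis = np.linspace(0.05, 0.95, 19)
    votes = np.zeros((len(pis), X_test.shape[0]))
    for j, pi in enumerate(pis):
        clf = LinearSVC(C=C, class_weight={1: 1.0 - pi, -1: pi}, max_iter=20000)
        clf.fit(X_train, y_train)
        votes[j] = (clf.predict(X_test) == 1).astype(float)
    # Pad with pi = 0 (always +1) and pi = 1 (always -1), then take the
    # midpoint of the grid interval where each test point's label flips.
    grid = np.concatenate([[0.0], pis, [1.0]])
    padded = np.vstack([np.ones(X_test.shape[0]), votes, np.zeros(X_test.shape[0])])
    p_hat = np.empty(X_test.shape[0])
    for i in range(X_test.shape[0]):
        flip = int(np.argmax(padded[:, i] == 0))     # first weight predicting -1
        p_hat[i] = 0.5 * (grid[flip - 1] + grid[flip])
    return p_hat

if __name__ == "__main__":
    X, y01 = make_classification(n_samples=600, n_features=5, random_state=1)
    y = np.where(y01 == 1, 1, -1)
    p_hat = probability_from_weighted_svms(X[:400], y[:400], X[400:])
    print("first five estimated P(Y=+1|x):", np.round(p_hat[:5], 2))
```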

3.
Penalized logistic regression (PLR) is a powerful statistical tool for classification and has been used in many practical problems. Despite its success, the loss function of the PLR is unbounded, so the resulting classifiers can be sensitive to outliers. To build more robust classifiers, we propose the robust PLR (RPLR), which uses truncated logistic loss functions, and suggest three schemes to estimate conditional class probabilities. Connections of the RPLR with other existing work on robust logistic regression are discussed. Our theoretical results indicate that the RPLR is Fisher consistent and more robust to outliers. Moreover, we develop the estimated generalized approximate cross-validation (EGACV) criterion for tuning parameter selection. Through numerical examples, we demonstrate that truncating the loss function indeed yields better performance in terms of both classification accuracy and class probability estimation.
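To make the truncation idea concrete, the sketch below caps the logistic loss at its value at a chosen margin s, so observations with very negative margins (potential outliers) stop influencing the fit, and then minimizes the penalized empirical loss numerically. The truncation point s, the ridge penalty, and the random-restart optimizer are illustrative assumptions; this is not the RPLR algorithm or the EGACV tuning procedure from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def truncated_logistic_loss(u, s=-1.0):
    """Logistic loss log(1 + exp(-u)) capped at its value at margin u = s,
    so points with very negative margins stop contributing extra loss."""
    full = np.logaddexp(0.0, -u)          # numerically stable log(1 + e^{-u})
    cap = np.logaddexp(0.0, -s)
    return np.minimum(full, cap)

def fit_rplr(X, y, s=-1.0, lam=1e-2, n_restarts=5, seed=0):
    """Penalized linear fit under the truncated logistic loss.

    The objective is non-convex, so a few random restarts of a generic
    optimizer are used; this is only a sketch of the idea."""
    rng = np.random.default_rng(seed)
    n, p = X.shape

    def objective(theta):
        w, b = theta[:p], theta[p]
        margins = y * (X @ w + b)
        return truncated_logistic_loss(margins, s).mean() + lam * np.dot(w, w)

    best = None
    for _ in range(n_restarts):
        res = minimize(objective, rng.normal(scale=0.5, size=p + 1), method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return best.x[:p], best.x[p]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 2))
    y = np.where(X[:, 0] > 0, 1, -1)
    y[:15] *= -1                          # inject label-noise "outliers"
    w, b = fit_rplr(X, y, s=-1.0)
    print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```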

4.
The support vector machine (SVM) has been successfully applied to various classification areas with great flexibility and a high level of classification accuracy. However, the SVM is not suitable for the classification of large or imbalanced datasets because of significant computational problems and a classification bias toward the dominant class. The SVM combined with k-means clustering (KM-SVM) is a fast algorithm developed to accelerate both the training and the prediction of SVM classifiers by using the cluster centers obtained from k-means clustering. In the KM-SVM algorithm, however, the misclassification penalty is treated equally for each cluster center even though the contributions of different cluster centers to the classification can differ. In order to improve classification accuracy, we propose the WKM-SVM algorithm, which imposes different penalties for the misclassification of cluster centers by using the number of data points within each cluster as a weight. As an extension of the WKM-SVM, a recovery process based on the WKM-SVM is suggested to incorporate the information near the optimal boundary. Furthermore, the proposed WKM-SVM can be successfully applied to imbalanced datasets with an appropriate weighting strategy. Experiments show the effectiveness of our proposed methods.
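A minimal sketch of the WKM-SVM idea using scikit-learn: run k-means within each class, then train an SVM on the cluster centers with each center's misclassification penalty scaled by its cluster size via sample_weight. The number of clusters per class, the RBF kernel, and the weighting by raw counts are assumptions; the recovery step near the boundary and the imbalance-specific weighting strategy from the paper are not included.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.datasets import make_classification

def wkm_svm(X, y, n_clusters_per_class=20, C=1.0, random_state=0):
    """Weighted k-means SVM sketch: k-means within each class, then an SVM on
    the cluster centers, weighting each center by its cluster size."""
    centers, labels, weights = [], [], []
    for cls in np.unique(y):
        Xc = X[y == cls]
        k = min(n_clusters_per_class, len(Xc))
        km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(Xc)
        counts = np.bincount(km.labels_, minlength=k)
        centers.append(km.cluster_centers_)
        labels.append(np.full(k, cls))
        weights.append(counts)
    centers = np.vstack(centers)
    labels = np.concatenate(labels)
    weights = np.concatenate(weights).astype(float)
    clf = SVC(C=C, kernel="rbf", gamma="scale")
    clf.fit(centers, labels, sample_weight=weights)   # per-center penalties
    return clf

if __name__ == "__main__":
    X, y = make_classification(n_samples=5000, n_features=10,
                               weights=[0.9, 0.1], random_state=2)
    clf = wkm_svm(X[:4000], y[:4000])
    print("test accuracy:", clf.score(X[4000:], y[4000:]))
```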

5.
Classical analysis of contingency tables employs (i) fixed sample sizes and (ii) the maximum likelihood and weighted least squares approaches to parameter estimation. It is well known, however, that certain important parameters, such as the main effect and interaction parameters, can never be estimated unbiasedly when the sample size is fixed a priori. We introduce a sequential unbiased estimator for the cell probabilities subject to log-linear constraints. As a simple consequence, we show how parameters such as those mentioned above may be estimated unbiasedly. Our unbiased estimator for the vector of cell probabilities is shown to be consistent in the sense of Wolfowitz (Ann. Math. Statist. (1947) 18). We give a sufficient condition on a multinomial stopping rule for the corresponding sufficient statistic to be complete. When this condition holds, we have a unique minimum variance unbiased estimator for the cell probabilities.

6.
The authors develop empirical likelihood (EL) based methods of inference for a common mean using data from several independent but nonhomogeneous populations. For point estimation, they propose a maximum empirical likelihood (MEL) estimator and show that it is n-consistent and asymptotically optimal. For confidence intervals, they consider two EL-based methods and show that both intervals have approximately correct coverage probabilities in large samples. Finite-sample performances of the MEL estimator and the EL-based confidence intervals are evaluated through a simulation study. The results indicate that overall the MEL estimator and the weighted EL confidence interval are superior alternatives to the existing methods.

7.
Most statistical and data-mining algorithms assume that data come from a stationary distribution. However, in many real-world classification tasks, data arrive over time and the target concept to be learned from the data stream may change accordingly. Many algorithms have been proposed for learning drifting concepts. To deal with the problem of learning when the distribution generating the data changes over time, dynamic weighted majority was proposed as an ensemble method for concept drift. Unfortunately, this technique considers neither the age of the classifiers in the ensemble nor their past correct classifications. In this paper, we propose a method that takes into account each expert's age as well as its contribution to the overall accuracy of the algorithm. We evaluate the effectiveness of the proposed method using m classifiers trained on a collection of n-fold partitions of the data. Experimental results on a benchmark data set show that our method outperforms existing ones.
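The sketch below illustrates one way to weight stream experts by both age and accuracy: each chunk trains a new tree, existing experts are re-scored on the newest chunk, and an expert's voting weight is its recent accuracy discounted by a decay factor raised to its age. The decay factor, the chunk-based training, and the accuracy-times-decay weighting are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class AgeAccuracyEnsemble:
    """Chunk-based ensemble for drifting streams: each expert's voting weight
    combines recent accuracy with an age discount, so old experts fade out
    unless they keep classifying well (illustrative weighting choice)."""

    def __init__(self, max_experts=10, age_decay=0.9):
        self.max_experts = max_experts
        self.age_decay = age_decay
        self.experts = []        # list of (model, age, accuracy)

    def partial_fit_chunk(self, X_chunk, y_chunk):
        # Score and age the existing experts on the newest chunk.
        updated = [(m, age + 1, m.score(X_chunk, y_chunk))
                   for m, age, _ in self.experts]
        # Train a new expert on the chunk; its accuracy is an optimistic
        # placeholder until it is scored on the next chunk.
        new_model = DecisionTreeClassifier(max_depth=5).fit(X_chunk, y_chunk)
        updated.append((new_model, 0, 1.0))
        # Keep only the highest-weighted experts.
        updated.sort(key=lambda e: e[2] * self.age_decay ** e[1], reverse=True)
        self.experts = updated[: self.max_experts]

    def predict(self, X):
        votes = {}
        for model, age, acc in self.experts:
            w = acc * self.age_decay ** age
            for i, label in enumerate(model.predict(X)):
                votes.setdefault(i, {}).setdefault(label, 0.0)
                votes[i][label] += w
        return np.array([max(v, key=v.get) for _, v in sorted(votes.items())])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    ens = AgeAccuracyEnsemble()
    for t in range(20):                       # 20 chunks with a gradual drift
        X = rng.normal(size=(200, 3))
        y = (X[:, 0] + 0.1 * t * X[:, 1] > 0).astype(int)
        if t > 0:
            print(f"chunk {t:2d} accuracy:",
                  round(float(np.mean(ens.predict(X) == y)), 3))
        ens.partial_fit_chunk(X, y)
```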

8.
The estimated test error of a learned classifier is the most commonly reported measure of classifier performance. However, constructing a high-quality point estimator of the test error has proved to be very difficult. Furthermore, common interval estimators (e.g. confidence intervals) are based on the point estimator of the test error and thus inherit all the difficulties associated with the point estimation problem. As a result, these confidence intervals do not reliably deliver nominal coverage. In contrast, we directly construct the confidence interval by using smooth data-dependent upper and lower bounds on the test error. We prove that for linear classifiers, the proposed confidence interval automatically adapts to the non-smoothness of the test error, is consistent under fixed and local alternatives, and does not require that the Bayes classifier be linear. Moreover, the method provides nominal coverage on a suite of test problems using a range of classification algorithms and sample sizes.

9.
In this paper, we consider the classification of high-dimensional vectors based on a small number of training samples from each class. The proposed method follows the Bayesian paradigm, and it is based on a small vector which can be viewed as the regression of the new observation on the space spanned by the training samples. The classification method provides posterior probabilities that the new vector belongs to each of the classes, hence it adapts naturally to any number of classes. Furthermore, we show a direct similarity between the proposed method and the multicategory linear support vector machine introduced in Lee et al. [2004. Multicategory support vector machines: theory and applications to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association 99 (465), 67–81]. We compare the performance of the technique proposed in this paper with the SVM classifier using real-life military and microarray datasets. The study shows that the misclassification errors of both methods are very similar, and that the posterior probabilities assigned to each class are fairly accurate.

10.
In this paper, we consider the estimation problem of multiple conditional quantile functions with right censored survival data. To account for censoring in estimating a quantile function, weighted quantile regression (WQR) has been developed by using inverse-censoring-probability weights. However, the estimated quantile functions from the WQR often cross each other and consequently violate the basic properties of quantiles. To avoid quantile crossing, we propose non-crossing weighted multiple quantile regression (NWQR), which estimates multiple conditional quantile functions simultaneously. We further propose the adaptive sup-norm regularized NWQR (ANWQR) to perform simultaneous estimation and variable selection. The large sample properties of the NWQR and ANWQR estimators are established under certain regularity conditions. The proposed methods are evaluated through simulation studies and analysis of a real data set.

11.
It is often the case that high-dimensional data consist of only a few informative components. Standard statistical modeling and estimation in such a situation is prone to inaccuracies due to overfitting, unless regularization methods are practiced. In the context of classification, we propose a class of regularization methods through shrinkage estimators. The shrinkage is based on variable selection coupled with conditional maximum likelihood. Using Stein's unbiased estimator of the risk, we derive an estimator for the optimal shrinkage method within a certain class. We compare the optimal shrinkage method in the classification context with the optimal shrinkage method for estimating a mean vector under squared loss. The latter problem is extensively studied, but it seems that the results of those studies are not completely relevant for classification. We demonstrate and examine our method on simulated data and compare it to the feature annealed independence rule and Fisher's rule.

12.
In this article, we consider inverse probability weighted estimators for a single-index model with missing covariates when the selection probabilities are known or unknown. It is shown that the estimator of the index parameter that uses estimated selection probabilities has a smaller asymptotic variance than the one that uses the true selection probabilities, and is thus more efficient. The important Horvitz-Thompson property is therefore verified for the index parameter in the single-index model. However, this difference disappears for the estimators of the link function. Numerical examples and a real data application are also presented to illustrate the performance of the estimators.

13.
In survival analysis, covariate measurements often contain missing observations; ignoring this feature can lead to invalid inference. We propose a class of weighted estimating equations for right‐censored data with missing covariates under semiparametric transformation models. Time‐specific and subject‐specific weights are accommodated in the formulation of the weighted estimating equations. We establish unified results for estimating missingness probabilities that cover both parametric and non‐parametric modelling schemes. To improve estimation efficiency, the weighted estimating equations are augmented by a new set of unbiased estimating equations. The resultant estimator has the so‐called ‘double robustness’ property and is optimal within a class of consistent estimators.
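The augmentation-based double robustness mentioned above is easiest to see in a simpler setting. The sketch below uses the augmented inverse-probability-weighted (AIPW) estimator of a mean with an outcome missing at random, which is consistent if either the missingness model or the outcome regression is correctly specified. This toy problem and both working models are stand-ins; the paper's weighted estimating equations for right-censored data with missing covariates are considerably more involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def aipw_mean(y, X, observed):
    """Augmented IPW estimate of E[Y] when Y is missing at random given X:
    consistent if either the missingness model or the outcome regression is
    correct ('double robustness')."""
    # Model P(observed | X) and E[Y | X] on the respective available data.
    prop = LogisticRegression().fit(X, observed.astype(int))
    pi_hat = prop.predict_proba(X)[:, 1]
    reg = LinearRegression().fit(X[observed], y[observed])
    m_hat = reg.predict(X)
    # AIPW estimating equation: IPW term plus an augmentation term.
    y_filled = np.where(observed, y, 0.0)
    return np.mean(observed * y_filled / pi_hat
                   + (1.0 - observed / pi_hat) * m_hat)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n = 5000
    X = rng.normal(size=(n, 2))
    y = 1.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n)
    p_obs = 1.0 / (1.0 + np.exp(-(0.5 + X[:, 0])))      # MAR missingness
    observed = rng.uniform(size=n) < p_obs
    print("complete-case mean:", round(float(y[observed].mean()), 3))
    print("AIPW estimate     :", round(float(aipw_mean(y, X, observed)), 3))
    print("true mean         :", 1.0)
```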

14.
In this study, we demonstrate how generalized propensity score estimators (Imbens’ weighted estimator, the propensity score weighted estimator, and the generalized doubly robust estimator) can be used to calculate adjusted marginal probabilities for estimating three common binomial parameters: the risk difference (RD), the relative risk (RR), and the odds ratio (OR). We further conduct a simulation study to compare the estimated RD, RR, and OR using the adjusted and the unadjusted marginal probabilities in terms of bias and mean-squared error (MSE). Although there is no clear winner in terms of MSE for estimating RD, RR, and OR, the simulation results surprisingly show that the unadjusted marginal probabilities produce the smallest bias compared with the adjusted marginal probabilities in most of the estimates. Hence, in conclusion, we recommend using the unadjusted marginal probabilities to estimate RD, RR, and OR in practice.
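As a reference point for what an "adjusted marginal probability" looks like, the sketch below computes inverse-probability-weighted marginal outcome probabilities for the treated and control groups from an estimated propensity score and converts them to RD, RR, and OR. This is only one simple adjusted estimator; Imbens' weighted estimator, the propensity score weighted estimator, and the generalized doubly robust estimator compared in the article are not reproduced exactly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adjusted_marginal_effects(y, treat, X):
    """IPW marginal outcome probabilities for treated and control groups,
    and the implied RD, RR, and OR."""
    ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
    w1 = treat / ps                      # weights for the treated
    w0 = (1 - treat) / (1 - ps)          # weights for the controls
    p1 = np.sum(w1 * y) / np.sum(w1)
    p0 = np.sum(w0 * y) / np.sum(w0)
    rd = p1 - p0
    rr = p1 / p0
    orr = (p1 / (1 - p1)) / (p0 / (1 - p0))
    return p1, p0, rd, rr, orr

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n = 10000
    X = rng.normal(size=(n, 2))
    treat = (rng.uniform(size=n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
    p_out = 1 / (1 + np.exp(-(-1.0 + 0.8 * treat + 0.6 * X[:, 0])))
    y = (rng.uniform(size=n) < p_out).astype(int)
    p1, p0, rd, rr, orr = adjusted_marginal_effects(y, treat, X)
    print(f"p1={p1:.3f} p0={p0:.3f}  RD={rd:.3f}  RR={rr:.3f}  OR={orr:.3f}")
```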

15.
A new density-based classification method that uses semiparametric mixtures is proposed. Like other density-based classifiers, it first estimates the probability density function for the observations in each class, with a semiparametric mixture, and then classifies a new observation by the highest posterior probability. By making proper use of a multivariate nonparametric density estimator that has been developed recently, it is able to produce adaptively smooth and complicated decision boundaries in a high-dimensional space and can thus work well in such cases. Issues specific to classification are studied and discussed. Numerical studies using simulated and real-world data show that the new classifier performs very well compared with other commonly used classification methods.
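The sketch below shows the generic density-based classification recipe the abstract builds on: estimate a class-conditional density for each class, multiply by the class prior, and assign a new point to the class with the largest product. A plain Gaussian kernel density estimate is used here as a simpler stand-in for the paper's semiparametric mixtures, and the bandwidth is a fixed assumption rather than being chosen adaptively.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

class DensityClassifier:
    """Density-based classifier: per-class KDE times class prior, predict the
    class with the highest combined score."""

    def __init__(self, bandwidth=0.3):
        self.bandwidth = bandwidth

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.kdes_, self.log_priors_ = {}, {}
        for cls in self.classes_:
            Xc = X[y == cls]
            self.kdes_[cls] = KernelDensity(bandwidth=self.bandwidth).fit(Xc)
            self.log_priors_[cls] = np.log(len(Xc) / len(X))
        return self

    def predict(self, X):
        scores = np.column_stack([
            self.kdes_[cls].score_samples(X) + self.log_priors_[cls]
            for cls in self.classes_])
        return self.classes_[np.argmax(scores, axis=1)]

if __name__ == "__main__":
    X, y = make_moons(n_samples=1000, noise=0.25, random_state=6)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=6)
    clf = DensityClassifier(bandwidth=0.3).fit(Xtr, ytr)
    print("test accuracy:", np.mean(clf.predict(Xte) == yte))
```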

16.
In practice, it often happens that we have a number of base classification methods and are not able to clearly determine which of them is optimal in the sense of the smallest error rate. A combined method then allows us to consolidate information from multiple sources into a better classifier. I propose a different, sequential approach. Sequentiality is understood here in the sense of adding posterior probabilities to the original data set, with the data so created used during the classification process. The posterior probabilities obtained from the base classifiers are combined using all of the combining methods, and these probabilities are finally combined with a mean combining method. The resulting posterior probabilities are added to the original data set as additional features. In each step the additional probabilities are updated so as to achieve the minimum error rate among the base methods. Experimental results on different data sets demonstrate that the method is efficient and that this approach outperforms the base methods, providing a reduction in the mean classification error rate.
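A simplified rendering of the sequential idea: at each step, out-of-fold posterior probabilities from several base classifiers are averaged (a mean combiner) and appended to the feature matrix before the next step. The base classifiers, the number of steps, and the use of cross-validated probabilities on the full data set are illustrative assumptions, and the step-wise search for the minimum error rate described in the abstract is not implemented.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict, train_test_split

def sequential_posterior_features(X, y, n_steps=3, cv=5, random_state=7):
    """At each step, average out-of-fold posterior probabilities from several
    base classifiers (a mean combiner) and append them as extra features."""
    base = [make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
            GaussianNB(),
            DecisionTreeClassifier(max_depth=5, random_state=random_state)]
    X_aug = X.copy()
    for _ in range(n_steps):
        probas = [cross_val_predict(clf, X_aug, y, cv=cv, method="predict_proba")
                  for clf in base]
        X_aug = np.hstack([X_aug, np.mean(probas, axis=0)])
    return X_aug

if __name__ == "__main__":
    # Note: for brevity the augmentation is computed on the full data set,
    # which leaks a little information into the train/test comparison below.
    X, y = load_breast_cancer(return_X_y=True)
    X_aug = sequential_posterior_features(X, y)
    for name, feats in [("original", X), ("augmented", X_aug)]:
        Xtr, Xte, ytr, yte = train_test_split(feats, y, random_state=7)
        clf = DecisionTreeClassifier(max_depth=5, random_state=7).fit(Xtr, ytr)
        print(f"{name:9s} features -> test accuracy {clf.score(Xte, yte):.3f}")
```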

17.
Nonparametric estimates of the conditional distribution of a response variable given a covariate are important for data exploration purposes. In this article, we propose a nonparametric estimator of the conditional distribution function in the case where the response variable is subject to interval censoring and double truncation. Using the approach of Dehghan and Duchesne (2011), the proposed method consists in adding weights that depend on the covariate value in the self-consistency equation of Turnbull (1976), which results in a nonparametric estimator. We demonstrate by simulation that the estimator, bootstrap variance estimation and bandwidth selection all perform well in finite samples.

18.
In this paper, we construct a nonparametric estimator of the conditional distribution function by the double-kernel local linear approach for left-truncated data, from which we derive the weighted double-kernel local linear estimator of the conditional quantile. The asymptotic normality of the proposed estimators is also established. Finite-sample performance of the estimator is investigated via simulation.
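The double-kernel local linear construction can be sketched for scalar X and Y as follows: the indicator 1{Y ≤ y} is replaced by a kernel-smoothed version, and a local linear fit in X at the target point gives the conditional distribution function estimate as its intercept. The Epanechnikov kernel, the bandwidths, and in particular the omission of the left-truncation weights used in the paper are all simplifying assumptions.

```python
import numpy as np
from scipy.stats import norm

def epanechnikov(u):
    return 0.75 * np.maximum(1.0 - u ** 2, 0.0)

def smoothed_indicator(t):
    """Integrated Epanechnikov kernel: a smooth version of 1{t >= 0}."""
    t = np.clip(t, -1.0, 1.0)
    return 0.25 * (2.0 + 3.0 * t - t ** 3)

def double_kernel_ll_cdf(x0, y0, X, Y, h1=0.3, h2=0.3):
    """Double-kernel local linear estimate of F(y0 | x0) for scalar X and Y.

    The indicator 1{Y <= y0} is replaced by its kernel-smoothed version, and a
    local linear fit in X at x0 gives the estimate (its intercept).  Weights
    for left truncation, as in the paper, are not included in this sketch."""
    Ystar = smoothed_indicator((y0 - Y) / h2)        # smoothed responses
    w = epanechnikov((X - x0) / h1)                  # local weights in x
    Z = np.column_stack([np.ones_like(X), X - x0])   # local linear design
    WZ = Z * w[:, None]
    beta = np.linalg.lstsq(WZ.T @ Z, WZ.T @ Ystar, rcond=None)[0]
    return float(np.clip(beta[0], 0.0, 1.0))

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    n = 2000
    X = rng.uniform(-1, 1, size=n)
    Y = X + rng.normal(scale=0.5, size=n)            # Y | X=x ~ N(x, 0.5^2)
    est = double_kernel_ll_cdf(0.0, 0.5, X, Y)
    print("estimate:", round(est, 3), " true:", round(norm.cdf(0.5 / 0.5), 3))
```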

19.
The randomized response technique (RRT) is an important tool, commonly used to avoid biased answers in surveys on sensitive issues by preserving the respondents’ privacy. In this paper, we introduce a data collection method for surveys on sensitive issues that combines the unrelated-question RRT and the direct question design. The direct questioning method is used to obtain responses to a nonsensitive question that is related to the innocuous question from the unrelated-question RRT. These responses serve as additional information that can be used to improve the estimation of the prevalence of the sensitive behavior. Furthermore, we propose two new methods for estimating the proportion of respondents possessing the sensitive attribute under a missing data setup. More specifically, we develop the weighted estimator and the weighted conditional likelihood estimator. The performances of our estimators are studied numerically and compared with that of an existing one. Both proposed estimators are more efficient than Greenberg's estimator. We illustrate our methods using real data from a survey study on illegal use of cable TV service in Taiwan.
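For context, the sketch below implements the basic unrelated-question RRT moment estimator that the proposed estimators improve upon: with probability p the respondent answers the sensitive question and otherwise the innocuous one, so the "yes" rate satisfies E[yes] = p·pi_s + (1 - p)·pi_u, and pi_s is backed out using a direct-question estimate of pi_u. The weighted and weighted conditional likelihood estimators under the missing-data setup are not reproduced; all sample sizes and true proportions here are made up for the simulation.

```python
import numpy as np

def rrt_unrelated_question_estimate(rrt_yes, p, pi_u_hat):
    """Back out the sensitive-attribute prevalence from unrelated-question RRT
    responses: E[yes] = p * pi_s + (1 - p) * pi_u, with pi_u estimated from a
    separate direct-question sample."""
    lam_hat = np.mean(rrt_yes)
    pi_s_hat = (lam_hat - (1.0 - p) * pi_u_hat) / p
    return float(np.clip(pi_s_hat, 0.0, 1.0))

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    pi_s_true, pi_u_true, p = 0.20, 0.60, 0.7
    # Direct-question sample for the innocuous question.
    direct = rng.uniform(size=500) < pi_u_true
    pi_u_hat = direct.mean()
    # RRT sample: which question was answered is unobserved by the analyst.
    n = 2000
    asks_sensitive = rng.uniform(size=n) < p
    answers = np.where(asks_sensitive,
                       rng.uniform(size=n) < pi_s_true,
                       rng.uniform(size=n) < pi_u_true)
    print("estimated prevalence:",
          round(rrt_unrelated_question_estimate(answers, p, pi_u_hat), 3),
          " true:", pi_s_true)
```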

20.
We propose a weighted delete-one-cluster jackknife based framework for settings with few clusters and severe cluster-level heterogeneity. The proposed method estimates the mean for a condition by a weighted sum of the estimates from each of the jackknife procedures. Influence from a heterogeneous cluster can be weighted appropriately, and the conditional mean can be estimated with higher precision. An algorithm for estimating the variance of the proposed estimator is also provided, followed by a cluster permutation test for assessing the condition effect. Our simulation studies demonstrate that the proposed framework has good operating characteristics.
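A small sketch of the weighted delete-one-cluster jackknife for a condition mean: the mean is recomputed with each cluster left out in turn, and the leave-one-out means are combined with weights. The weights used here (proportional to the retained sample size) and the variance formula (the standard delete-one-group jackknife variance) are illustrative choices; the paper's own weighting and variance-estimation algorithm are not reproduced.

```python
import numpy as np

def weighted_delete_one_cluster_jackknife(values, clusters, weights=None):
    """Weighted delete-one-cluster jackknife estimate of a mean plus a simple
    jackknife-style variance."""
    values = np.asarray(values, dtype=float)
    clusters = np.asarray(clusters)
    groups = np.unique(clusters)
    # Leave-one-cluster-out means.
    g_means = np.array([values[clusters != g].mean() for g in groups])
    if weights is None:
        # Illustrative weights: proportional to the retained sample size.
        weights = np.array([(clusters != g).sum() for g in groups], dtype=float)
    weights = weights / weights.sum()
    estimate = float(np.sum(weights * g_means))
    G = len(groups)
    # Standard delete-one-group jackknife variance; the paper's own variance
    # algorithm for the weighted estimator is not reproduced here.
    variance = (G - 1) / G * float(np.sum((g_means - g_means.mean()) ** 2))
    return estimate, variance

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    sizes = [50, 50, 50, 50, 8]                 # one small, heterogeneous cluster
    means = [1.0, 1.1, 0.9, 1.0, 4.0]
    values = np.concatenate([rng.normal(m, 1.0, s) for m, s in zip(means, sizes)])
    clusters = np.concatenate([np.full(s, g) for g, s in enumerate(sizes)])
    est, var = weighted_delete_one_cluster_jackknife(values, clusters)
    print("naive mean:", round(float(values.mean()), 3))
    print("weighted jackknife estimate:", round(est, 3), " variance:", round(var, 4))
```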

