Similar Articles
20 similar articles found.
1.
Boosting is a new, powerful method for classification. It is an iterative procedure which successively classifies a weighted version of the sample, and then reweights this sample depending on how successful the classification was. In this paper we review some of the commonly used methods for performing boosting and show how they can be fitted into a Bayesian setup at each iteration of the algorithm. We demonstrate how this formulation gives rise to a new splitting criterion when using a domain-partitioning classification method such as a decision tree. Furthermore, we can improve the predictive performance of simple decision trees, known as stumps, by using a posterior weighted average of them to classify at each step of the algorithm, rather than just a single stump. The main advantage of this approach is to reduce the number of boosting iterations required to produce a good classifier, with only a minimal increase in the computational complexity of the algorithm.
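For orientation, here is a minimal sketch of the reweight-and-refit cycle the abstract describes, using discrete AdaBoost with decision stumps; this is the standard algorithm, not the paper's Bayesian reformulation or its posterior-averaged stumps, and the function names are ours.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_stumps(X, y, n_rounds=50):
    """Discrete AdaBoost with stumps; y must take values in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                       # start from uniform weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)   # a decision "stump"
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y))             # weighted training error
        if err >= 0.5:                            # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / (err + 1e-12))
        w *= np.exp(-alpha * y * pred)            # upweight misclassified points
        w /= w.sum()                              # renormalize the weights
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    score = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(score)                         # weighted vote of the stumps
```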

2.
Statistical learning is emerging as a promising field where a number of algorithms from machine learning are interpreted as statistical methods and vice versa. Owing to its good practical performance, boosting is one of the most studied machine learning techniques. We propose algorithms for multivariate density estimation and classification. They are generated by using traditional kernel techniques as weak learners in boosting algorithms. Our algorithms take the form of multistep estimators, whose first step is a standard kernel method. Some strategies for bandwidth selection are also discussed, with regard both to the standard kernel density classification problem and to our 'boosted' kernel methods. Extensive experiments, using real and simulated data, show the encouraging practical relevance of the findings. Standard kernel methods are often outperformed by the first boosting iterations, across a range of bandwidth values. In addition, the practical effectiveness of our classification algorithm is confirmed by a comparative study on two real datasets, the competitors being tree-based methods, including AdaBoost with trees.
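A hedged sketch of the general idea, with kernel density estimates acting as weak learners inside a boosting loop for two-class data; the reweighting rule below is one plausible instantiation, not necessarily the paper's (requires SciPy >= 1.2 for weighted gaussian_kde).

```python
import numpy as np
from scipy.stats import gaussian_kde

def boosted_kde_classify(x0, x1, x_test, bw_factor=0.5, n_rounds=5):
    """Two-class boosting with weighted KDEs as weak learners (1-D data)."""
    w0 = np.full(len(x0), 1.0 / len(x0))
    w1 = np.full(len(x1), 1.0 / len(x1))
    score = np.zeros(len(x_test))
    for _ in range(n_rounds):
        f0 = gaussian_kde(x0, bw_method=bw_factor, weights=w0)
        f1 = gaussian_kde(x1, bw_method=bw_factor, weights=w1)
        # weak learner: sign of the log-density ratio
        score += np.log(f1(x_test) + 1e-12) - np.log(f0(x_test) + 1e-12)
        # margins on the training points; positive favours class 1
        m0 = np.log(f1(x0) + 1e-12) - np.log(f0(x0) + 1e-12)
        m1 = np.log(f1(x1) + 1e-12) - np.log(f0(x1) + 1e-12)
        # upweight the points each density currently misclassifies
        w0 *= np.exp(np.clip(m0, 0.0, 5.0)); w0 /= w0.sum()
        w1 *= np.exp(np.clip(-m1, 0.0, 5.0)); w1 /= w1.sum()
    return (score > 0).astype(int)                # predicted labels in {0, 1}
```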

3.
With the rapid development of e-commerce, online consumer reviews play an increasingly important role in consumers' purchase decisions. Most research papers use quantitative measures of consumer reviews for statistical analysis. Here we focus on analyzing the texts of customer reviews with text mining tools. We propose a new feature selection method called maximizing the difference. Various classification methods, such as boosting, random forests and SVMs, are used to test the performance of the new method along with different evaluation criteria. Both simulation and empirical results show that it improves the effectiveness of the classifier over existing methods.

4.
There are many well-known methods applied to classification problems for linear data with both known and unknown distributions. Here, we deal with classification involving data on the torus and the cylinder. A new method involving a generalized likelihood ratio test is developed for classification into two populations using directional data. The approach assumes that one of the probabilities of misclassification is known. The procedure is constructed by applying the Gibbs sampler to the conditionally specified distribution. A parametric bootstrap approach is also presented. An application to data involving linear and circular measurements on human skulls from two tribal populations is given.

5.
A smoothed bootstrap method is presented for the purpose of bandwidth selection in nonparametric hazard rate estimation for iid data. In this context, two new bootstrap bandwidth selectors are established based on the exact expression of the bootstrap version of the mean integrated squared error of some approximations of the kernel hazard rate estimator. This is very useful since Monte Carlo approximation is no longer needed for the implementation of the two bootstrap selectors. A simulation study is carried out in order to show the empirical performance of the new bootstrap bandwidths and to compare them with other existing selectors. The methods are illustrated by applying them to a diabetes data set.
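For contrast, here is a Monte Carlo smoothed-bootstrap bandwidth selector, sketched for plain kernel density estimation as a simplified stand-in for the hazard-rate setting; the paper's exact bootstrap MISE expressions remove precisely this resampling loop.

```python
import numpy as np
from scipy.stats import gaussian_kde

def bootstrap_bandwidth(x, h_grid, B=200, seed=None):
    """Pick the bandwidth minimizing a Monte Carlo bootstrap MISE estimate."""
    rng = np.random.default_rng(seed)
    n, sd = len(x), x.std()
    g = 1.06 * sd * n ** (-0.2)                   # pilot bandwidth (rule of thumb)
    grid = np.linspace(x.min(), x.max(), 200)
    dx = grid[1] - grid[0]
    pilot = gaussian_kde(x, bw_method=g / sd)(grid)   # bootstrap "truth"
    mise = np.zeros(len(h_grid))
    for _ in range(B):
        # smoothed bootstrap: resample, then add Gaussian kernel noise
        xb = rng.choice(x, n) + rng.normal(0.0, g, n)
        for j, h in enumerate(h_grid):
            fb = gaussian_kde(xb, bw_method=h / xb.std())(grid)
            mise[j] += np.sum((fb - pilot) ** 2) * dx
    return h_grid[np.argmin(mise)]
```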

6.
Fast and robust bootstrap
In this paper we review recent developments on a bootstrap method for robust estimators which is computationally faster and more resistant to outliers than the classical bootstrap. This fast and robust bootstrap method is, under reasonable regularity conditions, asymptotically consistent. We describe the method in general and then consider its application to perform inference based on robust estimators for the linear regression and multivariate location-scatter models. In particular, we study confidence and prediction intervals and tests of hypotheses for linear regression models, inference for location-scatter parameters and principal components, and classification error estimation for discriminant analysis.

7.
In high-dimensional classification problems, a two-stage method, which first reduces the dimension of the predictors and then applies a classification method, is a natural solution and has been widely used in many fields. The consistency of the two-stage method is an important issue, since errors induced by the dimension reduction step inevitably affect the subsequent classification method. As an effective method for classification problems, boosting has been widely used in practice. In this paper, we study the consistency of the two-stage dimension-reduction-based boosting algorithm (DRB) for classification. Theoretical results show that a Lipschitz condition on the base learner is required to guarantee the consistency of DRB. These theoretical findings provide a useful guideline for applications.
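A minimal sketch of the two-stage structure studied, with PCA standing in for the unspecified dimension reduction step and off-the-shelf AdaBoost as the boosting stage.

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

# stage 1: dimension reduction; stage 2: boosting on the reduced predictors
drb = make_pipeline(
    PCA(n_components=10),
    AdaBoostClassifier(n_estimators=200),
)
# usage: drb.fit(X_train, y_train); drb.predict(X_test)
```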

8.
Kernel density classification and boosting: an L2 analysis
Kernel density estimation is a commonly used approach to classification. However, most of the theoretical results for kernel methods apply to estimation per se and not necessarily to classification. In this paper we show that when estimating the difference between two densities, the optimal smoothing parameters are increasing functions of the sample size of the complementary group, and we provide a small simulation study which examines the relative performance of kernel density methods when the final goal is classification. A relative newcomer to the classification portfolio is boosting, and this paper proposes an algorithm for boosting kernel density classifiers. We note that boosting is closely linked to a previously proposed method of bias reduction in kernel density estimation and indicate how it will enjoy similar properties for classification. We show that boosting kernel classifiers reduces the bias whilst only slightly increasing the variance, with an overall reduction in error. Numerical examples and simulations are used to illustrate the findings, and we also suggest further areas of research.
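A small sketch of the classification rule the analysis concerns: classify by the sign of the estimated prior-weighted density difference. The bandwidths are left as caller inputs because the paper's point is that the optimal bandwidth for each group grows with the other group's sample size.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_difference_classify(x0, x1, x_new, h0, h1):
    """Classify by the sign of the estimated prior-weighted density difference.

    Per the paper's L2 analysis, the optimal h0 increases with len(x1)
    and the optimal h1 with len(x0); here both are supplied by the caller.
    """
    n0, n1 = len(x0), len(x1)
    f0 = gaussian_kde(x0, bw_method=h0 / x0.std())
    f1 = gaussian_kde(x1, bw_method=h1 / x1.std())
    pi0, pi1 = n0 / (n0 + n1), n1 / (n0 + n1)     # class priors from the data
    return (pi1 * f1(x_new) - pi0 * f0(x_new) > 0).astype(int)
```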

9.
Euclidean distance k-nearest neighbor (k-NN) classifiers are simple nonparametric classification rules. Bootstrap methods, widely used for estimating the expected prediction error of classification rules, are motivated by the objective of calculating the ideal bootstrap estimate of expected prediction error. In practice, bootstrap methods use Monte Carlo resampling to estimate the ideal bootstrap estimate because exact calculation is generally intractable. In this article, we present analytical formulae for exact calculation of the ideal bootstrap estimate of expected prediction error for k-NN classifiers and propose a new weighted k-NN classifier based on resampling ideas. The resampling-weighted k-NN classifier replaces the k-NN posterior probability estimates by their expectations under resampling and predicts an unclassified covariate as belonging to the group with the largest resampling expectation. A simulation study and an application involving remotely sensed data show that the resampling-weighted k-NN classifier compares favorably to unweighted and distance-weighted k-NN classifiers.
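A Monte Carlo sketch of the resampling-weighted k-NN rule: average the k-NN class-vote proportions over bootstrap resamples and predict the class with the largest average. The paper computes these expectations exactly; simulation stands in for the analytical formulae here.

```python
import numpy as np

def rw_knn_predict(X, y, x_new, k=5, B=200, seed=None):
    """Predict the class with the largest bootstrap-averaged k-NN vote."""
    rng = np.random.default_rng(seed)
    n, classes = len(y), np.unique(y)
    votes = np.zeros(len(classes))
    for _ in range(B):
        idx = rng.integers(0, n, n)               # one bootstrap resample
        Xb, yb = X[idx], y[idx]
        d = np.linalg.norm(Xb - x_new, axis=1)    # Euclidean distances
        nn = yb[np.argsort(d)[:k]]                # labels of the k nearest
        for j, c in enumerate(classes):
            votes[j] += np.mean(nn == c)          # k-NN posterior estimate
    return classes[np.argmax(votes)]
```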

10.
Missing data are a common problem in almost all areas of empirical research. Ignoring the missing data mechanism, especially when data are missing not at random (MNAR), can result in biased and/or inefficient inference. Because an MNAR mechanism is not verifiable from the observed data, sensitivity analysis is often used to assess it. Current sensitivity analysis methods primarily assume a model for the response mechanism in conjunction with a measurement model and examine sensitivity to the missing data mechanism via the parameters of the response model. Recently, Jamshidian and Mata (Post-modelling sensitivity analysis to detect the effect of missing data mechanism, Multivariate Behav. Res. 43 (2008), pp. 432-452) introduced a new method of sensitivity analysis that does not require the difficult task of modelling the missing data mechanism. In this method, a single measurement model is fitted to all of the data and to a sub-sample of the data. The discrepancy between the parameter estimates obtained from the two data sets is used as a measure of sensitivity to the missing data mechanism. Jamshidian and Mata describe their method mainly in the context of detecting data that are missing completely at random (MCAR). They used a bootstrap-type method, which relies on heuristic input from the researcher, to test for the discrepancy of the parameter estimates. Instead of using the bootstrap, the current article obtains confidence intervals for parameter differences on two samples based on an asymptotic approximation. Because it does not use the bootstrap, the developed procedure avoids the convergence problems likely with bootstrap methods. It does not require heuristic input from the researcher and can be readily implemented in statistical software. The article also discusses methods of obtaining sub-samples that may be used to test missing at random in addition to MCAR. An application of the developed procedure to a real data set, from the first wave of an ongoing longitudinal study on aging, is presented. Simulation studies are performed as well, using two methods of missing data generation, which show promise for the proposed sensitivity method. One method of missing data generation is also new and interesting in its own right.

11.
The generalized bootstrap is a parametric bootstrap method in which the underlying distribution function is estimated by fitting a generalized lambda distribution to the observed data. In this study, the generalized bootstrap is compared with the traditional parametric and non-parametric bootstrap methods in estimating quantiles at different levels, especially high quantiles. The performance of the three methods is evaluated in terms of the coverage rate, average interval width and standard deviation of the width of the 95% bootstrap confidence intervals. Simulation results showed that the generalized bootstrap has overall better performance than the non-parametric bootstrap in high quantile estimation.
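A hedged sketch of the generalized bootstrap pipeline: fit a generalized lambda distribution (Ramberg-Schmeiser quantile form) by least-squares quantile matching, a crude stand-in for the fitting methods actually used, then draw parametric bootstrap samples from it to interval-estimate a high quantile.

```python
import numpy as np
from scipy.optimize import minimize

def gld_quantile(u, lam):
    """Ramberg-Schmeiser GLD quantile function."""
    l1, l2, l3, l4 = lam
    return l1 + (u ** l3 - (1 - u) ** l4) / l2

def fit_gld(x):
    """Crude fit: match GLD quantiles to the sorted sample."""
    xs = np.sort(x)
    u = (np.arange(1, len(x) + 1) - 0.5) / len(x)
    obj = lambda lam: np.sum((gld_quantile(u, lam) - xs) ** 2)
    return minimize(obj, x0=[np.median(x), 0.2, 0.1, 0.1],
                    method="Nelder-Mead").x

def generalized_bootstrap_ci(x, q=0.95, B=2000, level=0.95, seed=None):
    """Parametric bootstrap CI for the q-th quantile, resampling the fitted GLD."""
    rng = np.random.default_rng(seed)
    lam = fit_gld(x)
    stats = [np.quantile(gld_quantile(rng.uniform(size=len(x)), lam), q)
             for _ in range(B)]
    a = (1 - level) / 2
    return np.quantile(stats, [a, 1 - a])
```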

12.
An authentic food is one that is what it purports to be. Food processors and consumers need to be assured that, when they pay for a specific product or ingredient, they are receiving exactly what they pay for. Classification methods are an important tool in food authenticity studies, where they are used to assign food samples of unknown type to known types. A classification method is developed where the classification rule is estimated by using both the labelled and the unlabelled data, in contrast with many classical methods which use only the labelled data for estimation. This methodology models the data as arising from a Gaussian mixture model with parsimonious covariance structure, as is done in model-based clustering. A missing data formulation of the mixture model is used and the models are fitted by using the EM and classification EM algorithms. The methods are applied to the analysis of spectra of foodstuffs recorded over the visible and near infra-red wavelength range in food authenticity studies. A comparison of the performance of model-based discriminant analysis and the proposed classification method is given. The proposed classification method is shown to yield very good misclassification rates; the correct classification rate was observed to be as much as 15% higher than that of model-based discriminant analysis.
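A bare-bones sketch of the mechanism: EM for a two-component univariate Gaussian mixture in which labelled points keep fixed one-hot responsibilities while unlabelled points are updated each iteration. The paper works with multivariate spectra and parsimonious covariance structures; none of that is reproduced here.

```python
import numpy as np

def semi_supervised_em(x_lab, y_lab, x_unl, n_iter=100):
    """EM for a 2-component 1-D Gaussian mixture; labels in {0, 1} fix
    the responsibilities of the labelled points."""
    x = np.concatenate([x_lab, x_unl])
    n_lab = len(x_lab)
    r = np.zeros((len(x), 2))
    r[np.arange(n_lab), y_lab] = 1.0              # one-hot for labelled data
    r[n_lab:] = 0.5                               # vague start for unlabelled
    for _ in range(n_iter):
        # M-step: weighted mixing proportions, means and variances
        pi = r.mean(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0)
        # E-step: update responsibilities of the unlabelled points only
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(var)
        r[n_lab:] = dens[n_lab:] / dens[n_lab:].sum(axis=1, keepdims=True)
    return pi, mu, var, r[n_lab:]                 # last item classifies x_unl
```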

13.
In this paper, we propose an adaptive stochastic gradient boosting tree for classification studies with imbalanced data. Cost-sensitivity adjustment and the predictive threshold are integrated, together with a composite criterion, into the original stochastic gradient boosting tree to deal with the issues of an imbalanced data structure. A numerical study shows that the proposed method can significantly enhance the classification accuracy for the minority class with only a small loss in the true negative rate for the majority class. We use simulations to discuss the relation of cost-sensitivity to threshold manipulation. An illustrative example analysing suboptimal health-state data in traditional Chinese medicine is discussed.
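A hedged sketch of the two levers being combined: cost-sensitive sample weights inside a stochastic gradient boosting tree plus a tuned predictive threshold, via scikit-learn. The paper's composite criterion is not reproduced; G-mean is used below purely as an illustrative stand-in.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix

def fit_imbalanced_gbt(X_tr, y_tr, X_val, y_val, cost=5.0):
    """Cost-weighted stochastic gradient boosting plus a tuned threshold."""
    w = np.where(y_tr == 1, cost, 1.0)            # upweight the minority class
    clf = GradientBoostingClassifier(subsample=0.5)   # subsampling = "stochastic"
    clf.fit(X_tr, y_tr, sample_weight=w)
    p = clf.predict_proba(X_val)[:, 1]
    best_t, best_g = 0.5, -1.0
    for t in np.linspace(0.05, 0.95, 19):         # sweep candidate thresholds
        tn, fp, fn, tp = confusion_matrix(y_val, p >= t, labels=[0, 1]).ravel()
        g = np.sqrt(tp / (tp + fn) * tn / (tn + fp))  # G-mean criterion
        if g > best_g:
            best_t, best_g = t, g
    return clf, best_t
```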

14.
We present an application of subsampling and bootstrap methods for time series to determine the distribution of the zero-crossings estimator. The zero-crossings method provides an alternative estimator of the lag-1 autocorrelation coefficient that reduces data storage requirements and is more robust to outliers than the classical estimator. The main results establish, for this estimator, the consistency of subsampling, of the moving block bootstrap (MBB), of the non-overlapping block bootstrap, and of the stationary bootstrap. Theorems are formulated for Gaussian processes, elliptically symmetric processes and transformed Gaussian processes. The theoretical results are illustrated by simulations and practical data analysis. We have also shown that in practice the MBB method behaves better than the subsampling method.
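A short sketch of the estimator and of the moving block bootstrap for its distribution. For a stationary Gaussian series the sign-change probability equals arccos(ρ₁)/π, which inverts to the cosine estimator used below.

```python
import numpy as np

def zero_crossing_rho1(x):
    """Cosine estimator of lag-1 autocorrelation from the zero-crossing count."""
    s = np.sign(x - x.mean())
    d = np.sum(s[1:] != s[:-1])                   # number of sign changes
    return np.cos(np.pi * d / (len(x) - 1))

def mbb_distribution(x, block_len=20, B=1000, seed=None):
    """Moving block bootstrap draws of the zero-crossings estimator."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = np.arange(n - block_len + 1)         # all overlapping block starts
    out = np.empty(B)
    for b in range(B):
        picks = rng.choice(starts, n_blocks)
        xb = np.concatenate([x[s:s + block_len] for s in picks])[:n]
        out[b] = zero_crossing_rho1(xb)
    return out
```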

15.
Leave-one-out and the .632 bootstrap are popular data-based methods of estimating the true error rate of a classification rule, but practical applications almost exclusively quote only point estimates. Interval estimation would provide a better assessment of the future performance of the rule, but little has been published on this topic. We first review general-purpose jackknife and bootstrap methodology that can be used in conjunction with leave-one-out estimates to provide prediction intervals for true error rates of classification rules. Monte Carlo simulation is then used to investigate coverage rates of the resulting intervals for normal data, but the results are disappointing: standard intervals show considerable overinclusion, intervals based on Edgeworth approximations or random weighting do not perform well, and while a bootstrap approach provides intervals with coverage rates closer to the nominal ones, there is still marked underinclusion. We then turn to intervals constructed from .632 bootstrap estimates, and show that much better results are obtained. Although there is now some overinclusion, particularly for large training samples, the actual coverage rates are sufficiently close to the nominal rates for the method to be recommended. An application to real data illustrates the considerable variability that can arise in practical estimation of error rates.
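For reference, a compact sketch of the .632 bootstrap point estimate of the true error rate for any scikit-learn-style classifier; the article's interval constructions would be built on top of quantities like these.

```python
import numpy as np

def err632(clf, X, y, B=200, seed=None):
    """Efron's .632 bootstrap point estimate of the true error rate."""
    rng = np.random.default_rng(seed)
    n = len(y)
    app = np.mean(clf.fit(X, y).predict(X) != y)  # apparent (resubstitution) error
    eps0, m = 0.0, 0
    for _ in range(B):
        idx = rng.integers(0, n, n)               # bootstrap training indices
        oob = np.setdiff1d(np.arange(n), idx)     # points left out of the resample
        if len(oob) == 0:
            continue
        clf.fit(X[idx], y[idx])
        eps0 += np.mean(clf.predict(X[oob]) != y[oob])
        m += 1
    return 0.368 * app + 0.632 * eps0 / m
```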

16.
In this article we develop a class of stochastic boosting (SB) algorithms, which build upon the work of Holmes and Pintore (Bayesian Stat. 8, Oxford University Press, Oxford, 2007). They introduce boosting algorithms which correspond to standard boosting (e.g. Bühlmann and Hothorn, Stat. Sci. 22:477–505, 2007) except that the optimization algorithms are randomized; this idea is placed within a Bayesian framework. We show that the inferential procedure in Holmes and Pintore (Bayesian Stat. 8, Oxford University Press, Oxford, 2007) is incorrect and further develop interpretational, computational and theoretical results which allow one to assess SB's potential for classification and regression problems. To use SB, sequential Monte Carlo (SMC) methods are applied. As a result, it is found that SB can provide better predictions for classification problems than the corresponding boosting algorithm. A theoretical result is also given, which shows that the predictions of SB are not significantly worse than boosting, when the latter provides the best prediction. We also investigate the method on a real case study from machine learning.

17.
Various bootstrap methods for variance estimation and confidence intervals in complex survey data, where sampling is done without replacement, have been proposed in the literature. The oldest, and perhaps the most intuitively appealing, is the without-replacement bootstrap (BWO) method proposed by Gross (1980). Unfortunately, the BWO method is only applicable to very simple sampling situations. We first introduce extensions of the BWO method to more complex sampling designs. The performance of the BWO and two other bootstrap methods, the rescaling bootstrap (Rao and Wu 1988) and the mirror-match bootstrap (Sitter 1992), are then compared through a simulation study. Together these three methods encompass the various bootstrap proposals.

18.
Two new nonparametric common principal component model selection procedures based on bootstrap distributions of the vector correlations of all combinations of the eigenvectors from two groups are proposed. The performance of these methods is compared in a simulation study to the two parametric methods previously suggested by Flury in 1988, as well as modified versions of two nonparametric methods proposed by Klingenberg in 1996 and then by Klingenberg and McIntyre in 1998. The proposed bootstrap vector correlation distribution (BVD) method is shown to outperform all of the existing methods in most of the simulated situations considered.
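A sketch of the raw ingredient of the BVD method: bootstrap both groups and record the absolute inner products (vector correlations) between every pair of sample eigenvectors. The model selection rule built on these bootstrap distributions is not reproduced here.

```python
import numpy as np

def bootstrap_vector_correlations(X1, X2, B=500, seed=None):
    """B bootstrap draws of the p x p matrix of |u_i . v_j| between the
    sample eigenvectors of the two groups' covariance matrices."""
    rng = np.random.default_rng(seed)
    p = X1.shape[1]
    out = np.empty((B, p, p))
    for b in range(B):
        Y1 = X1[rng.integers(0, len(X1), len(X1))]
        Y2 = X2[rng.integers(0, len(X2), len(X2))]
        _, V1 = np.linalg.eigh(np.cov(Y1, rowvar=False))
        _, V2 = np.linalg.eigh(np.cov(Y2, rowvar=False))
        # eigh sorts eigenvalues ascending; flip to descending order
        out[b] = np.abs(V1[:, ::-1].T @ V2[:, ::-1])
    return out
```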

19.
A clustering method based on a weighted principal component distance
吕岩威, 李平. 《统计研究》(Statistical Research), 2016, 33(11): 102-108
High correlation among indicators, together with differences in their importance, often prevents traditional clustering methods from achieving good classification results. Building on a discussion of the limitations of traditional clustering methods and their various refinements, this paper uses mathematical tools to reconstruct the distance concept in the definition of classification. By defining an adaptively weighted principal component distance as the classification statistic, it proposes a new, improved principal component clustering method: weighted principal component distance clustering. Theoretical analysis shows that the method systematically integrates the advantages of existing clustering methods and rests on a solid theoretical foundation that guarantees its soundness. Simulation results show that weighted principal component distance clustering effectively resolves the distortions existing clustering methods suffer in certain situations, and yields better classification results.
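A hedged sketch of clustering on a weighted principal component distance; the paper derives adaptive weights, and the explained-variance weights below are purely an illustrative stand-in.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

def weighted_pc_cluster(X, n_clusters=3):
    pca = PCA().fit(X)
    Z = pca.transform(X)                          # principal component scores
    w = pca.explained_variance_ratio_             # illustrative component weights
    Zw = Z * np.sqrt(w)                           # Euclidean distance on Zw is
                                                  # the weighted PC distance
    return AgglomerativeClustering(n_clusters=n_clusters).fit_predict(Zw)
```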

20.
Traditional resampling methods for estimating sampling distributions sometimes fail, and alternative approaches are then needed. For example, if the classical central limit theorem does not hold and the naïve bootstrap fails, the m/n bootstrap, based on smaller-sized resamples, may be used as an alternative. The sufficient bootstrap, which uses only the distinct observations in a bootstrap sample, is another recently proposed alternative to the naïve bootstrap, suggested as a way to reduce the computational burden associated with bootstrapping. It works whenever the naïve bootstrap does; however, if the naïve bootstrap fails, so will the sufficient bootstrap. In this paper, we propose combining the sufficient bootstrap with the m/n bootstrap in order both to regain consistent estimation of sampling distributions and to reduce the computational burden of the bootstrap. We obtain necessary and sufficient conditions for the asymptotic normality of the proposed method, and propose new values for the resample size m. We compare the proposed method with the naïve bootstrap, the sufficient bootstrap, and the m/n bootstrap by simulation.
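A minimal sketch of the proposed combination, here for the sample mean: draw an m-out-of-n resample and evaluate the statistic on its distinct observations only, as in the sufficient bootstrap.

```python
import numpy as np

def sufficient_mn_bootstrap(x, m, B=2000, seed=None):
    """Draws of the mean under the combined sufficient m/n bootstrap."""
    rng = np.random.default_rng(seed)
    out = np.empty(B)
    for b in range(B):
        xb = x[rng.integers(0, len(x), m)]        # m-out-of-n resample, m < n
        out[b] = np.unique(xb).mean()             # keep distinct values only
    return out
```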
