Similar Documents
20 similar documents found.
1.
Boosting is a new, powerful method for classification. It is an iterative procedure which successively classifies a weighted version of the sample, and then reweights this sample depending on how successful the classification was. In this paper we review some of the commonly used methods for performing boosting and show how they can be fitted into a Bayesian setup at each iteration of the algorithm. We demonstrate how this formulation gives rise to a new splitting criterion when using a domain-partitioning classification method such as a decision tree. Further, we can improve the predictive performance of simple decision trees, known as stumps, by using a posterior-weighted average of them to classify at each step of the algorithm, rather than just a single stump. The main advantage of this approach is to reduce the number of boosting iterations required to produce a good classifier, with only a minimal increase in the computational complexity of the algorithm.
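As a point of reference for the boosting loop described above, a minimal sketch of discrete AdaBoost with decision stumps is given below; the paper's Bayesian splitting criterion and posterior-weighted stump averaging are not reproduced, and scikit-learn's DecisionTreeClassifier with max_depth=1 simply stands in for the stump learner.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_stumps(X, y, n_rounds=50):
    """Discrete AdaBoost with decision stumps; y must be coded as -1/+1."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # sample weights, updated each round
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)
        if err == 0 or err >= 0.5:       # stop if the stump is perfect or no better than chance
            break
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)   # up-weight the misclassified points
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    """Weighted vote of the fitted stumps."""
    agg = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(agg)
```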

2.
Variable selection is an important decision process in consumer credit scoring. With the rapid growth of the credit industry, especially after the rise of e-commerce, a huge amount of information on customer behavior has become available to inform consumer credit scoring. In this study, a hybrid quadratic programming model is proposed for consumer credit scoring problems with variable selection. The proposed model is solved with a bisection method based on a Tabu search algorithm (BMTS), and the solution of this model provides alternative subsets of variables of different sizes. The final subset of variables used in the consumer credit scoring model is selected based on both the size (number of variables in a subset) and the predictive (classification) accuracy rate. Simulation studies are used to measure the performance of the proposed model, illustrating its effectiveness for simultaneous variable selection and classification.
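Without reproducing the BMTS algorithm itself, a generic Tabu search over variable subsets scored by cross-validated classification accuracy, which the above builds on, might be sketched as follows; all function and parameter names here are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def tabu_subset_search(X, y, n_iter=50, tabu_len=5, rng=None):
    """Generic Tabu search over variable subsets, scored by CV accuracy."""
    rng = np.random.default_rng(rng)
    p = X.shape[1]
    current = rng.random(p) < 0.5              # random starting subset (boolean mask)
    best, best_score = current.copy(), -np.inf
    tabu = []                                  # indices of recently flipped variables
    for _ in range(n_iter):
        candidates = [j for j in range(p) if j not in tabu]
        scores = []
        for j in candidates:                   # evaluate all non-tabu single-variable flips
            trial = current.copy()
            trial[j] = ~trial[j]
            if trial.sum() == 0:
                scores.append(-np.inf)
                continue
            clf = LogisticRegression(max_iter=1000)
            scores.append(cross_val_score(clf, X[:, trial], y, cv=5).mean())
        j_best = candidates[int(np.argmax(scores))]
        current[j_best] = ~current[j_best]     # accept the best non-tabu move
        tabu = (tabu + [j_best])[-tabu_len:]   # update the tabu list
        if max(scores) > best_score:
            best, best_score = current.copy(), max(scores)
    return best, best_score
```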

3.
Traditional classification is based on the assumption that the distribution of the indicator variable X within a class is homogeneous. However, when the data in a class come from a heterogeneous distribution, the likelihood ratio of the two classes is not unique. In this paper, we construct a classifier via an ambiguity criterion for the case where the distribution of X is heterogeneous within a single class. The separate historical data from each situation are used to estimate the corresponding thresholds, and the final boundary is chosen as the maximum and minimum of the thresholds over all situations. Our approach attains the minimum ambiguity with high classification accuracy, allowing for a precise decision. In addition, a nonparametric estimate of the classification region and theoretical properties are derived. A simulation study and a real data analysis are reported to demonstrate the effectiveness of our method.

4.
Sequential regression multiple imputation has emerged as a popular approach for handling incomplete data with complex features. In this approach, imputations for each missing variable are produced from a regression model using the other variables as predictors, in a cyclic manner. A normality assumption is frequently imposed on the error distributions in the conditional regression models for continuous variables, even though it rarely holds in real scenarios. We use a simulation study to investigate the performance of several sequential regression imputation methods when the error distribution is flat or heavy-tailed. The methods evaluated include sequential normal imputation and several extensions of it that adjust for non-normal error terms. The results show that all methods perform well for estimating the marginal mean and proportion, as well as the regression coefficient, when the error distribution is flat or moderately heavy-tailed. When the error distribution is strongly heavy-tailed, all methods retain good performance for the mean and the adjusted methods perform robustly for the proportion, but all methods can perform poorly for the regression coefficient because they cannot accommodate the extreme values well. We caution against the mechanical use of sequential regression imputation without model checking and diagnostics.
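A minimal sketch of one chain of sequential (chained-equations) regression imputation under the normal-error assumption is shown below. It is a simplification: a proper multiple-imputation implementation would also draw the regression coefficients from their posterior, and the robust extensions evaluated in the paper are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def sequential_normal_imputation(X, n_cycles=10, rng=None):
    """One chain of sequential regression imputation for continuous data.
    Each missing variable is imputed from a normal-error regression on the others."""
    rng = np.random.default_rng(rng)
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)          # start from column-mean fill-in
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_cycles):                  # cycle through the variables
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            others = np.delete(X, j, axis=1)
            model = LinearRegression().fit(others[obs], X[obs, j])
            resid = X[obs, j] - model.predict(others[obs])
            sigma = resid.std(ddof=others.shape[1] + 1)
            mu = model.predict(others[miss[:, j]])
            # draw imputations from the fitted normal-error model
            X[miss[:, j], j] = mu + rng.normal(0.0, sigma, mu.shape)
    return X
```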

5.
Many fields of research need to classify individual systems based on one or more data series, which are obtained by sampling an unknown continuous curve with noise. In other words, the underlying process is an unknown function which the observed variables represent only imperfectly. Although functional logistic regression has many attractive features for this classification problem, the method is applicable only when the number of individuals to be classified (or available to estimate the model) is large compared with the number of curves sampled per individual. To overcome this limitation, we use penalized optimal scoring to construct a new method for the classification of multi-dimensional functional data. The proposed method consists of two stages. First, the series of observed discrete values available for each individual is expressed as a set of continuous curves. Next, the penalized optimal scoring model is estimated on the basis of these curves. A similar penalized optimal scoring method was described in previous work, but that model is not suitable for the analysis of continuous functions. In this paper we adopt a Gaussian kernel approach to extend the previous model. The high accuracy of the new method is demonstrated in Monte Carlo simulations, and the method is used to predict defaulting firms on the Japanese Stock Exchange.
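A minimal sketch of the two-stage idea is shown below, under the assumption that each noisy series is smoothed with a spline and that a plain regularized classifier stands in for the penalized optimal scoring model with Gaussian kernel, which is not reproduced here. All names and parameters are illustrative.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.linear_model import LogisticRegression

def curves_to_features(t_list, y_list, grid, smoothing=1.0):
    """Stage 1: express each noisy series as a smooth curve evaluated on a common grid.
    Assumes each t is sorted and has more than 4 points (cubic spline default)."""
    feats = []
    for t, y in zip(t_list, y_list):
        spline = UnivariateSpline(t, y, s=smoothing)
        feats.append(spline(grid))
    return np.vstack(feats)

# Stage 2 (illustrative stand-in for penalized optimal scoring):
# grid = np.linspace(0.0, 1.0, 50)
# X = curves_to_features(times, series, grid)
# clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, labels)
```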

6.
A standard two-arm randomised controlled trial usually compares an intervention to a control treatment, with equal numbers of patients randomised to each treatment arm and only data from within the current trial used to assess the treatment effect. Historical data are used when designing new trials and have recently been considered for use in the analysis when the required number of patients under a standard trial design cannot be achieved. Incorporating historical control data could lead to more efficient trials, reducing the number of controls required in the current study when the historical and current control data agree. However, when the data are inconsistent, there is potential for biased treatment effect estimates, inflated type I error and reduced power. We introduce two novel approaches for binary data which discount historical data based on their agreement with the current trial controls: an equivalence approach and an approach based on tail-area probabilities. An adaptive design is used in which the allocation ratio is adapted at the interim analysis, randomising fewer patients to control when there is agreement. The historical data are down-weighted in the analysis using the power prior approach with a fixed power. We compare the operating characteristics of the proposed design to historical-data methods in the literature: the modified power prior, the commensurate prior, and the robust mixture prior. The equivalence probability weight approach is intuitive and its operating characteristics can be calculated exactly. Furthermore, the equivalence bounds can be chosen to control the maximum possible inflation in type I error.
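For binary control data, down-weighting the historical controls with a fixed-power power prior has a closed conjugate form; a minimal sketch is given below. The equivalence and tail-area rules that choose the weight are not reproduced, and the power a0 is simply taken as given.

```python
from scipy.stats import beta

def power_prior_posterior(y_cur, n_cur, y_hist, n_hist, a0, a=1.0, b=1.0):
    """Beta posterior for the control response rate when the historical
    binomial data are down-weighted by a fixed power a0 in [0, 1]."""
    post_a = a + y_cur + a0 * y_hist
    post_b = b + (n_cur - y_cur) + a0 * (n_hist - y_hist)
    return beta(post_a, post_b)

# e.g. posterior mean of the control rate with half-weighted historical data:
# power_prior_posterior(y_cur=12, n_cur=40, y_hist=30, n_hist=100, a0=0.5).mean()
```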

7.
An aspect of cluster analysis that has been widely studied in recent years is the weighting and selection of variables. Procedures have been proposed which are able to identify the cluster structure present in a data matrix when that structure is confined to a subset of variables. Other methods assess the relative importance of each variable as revealed by a suitably chosen weight. But when a cluster structure is present in more than one subset of variables and differs from one subset to another, those solutions, as well as standard clustering algorithms, can lead to misleading results. Some very recent methodologies for finding consensus classifications of the same set of units can also be useful for identifying cluster structures in a data matrix, but each seems only partly satisfactory for the purpose at hand. Therefore a new, more specific procedure is proposed and illustrated by analyzing two real data sets; its performance is evaluated by means of a simulation experiment.

8.
In practice we often have a number of base classification methods and are not able to determine clearly which one is optimal in the sense of the smallest error rate. A combining method then allows us to consolidate information from multiple sources into a better classifier. I propose a different, sequential approach. Sequentiality is understood here as adding posterior probabilities to the original data set, with the data created in this way used during the classification process. We combine the posterior probabilities obtained from the base classifiers using all combining methods, and finally combine these probabilities using a mean combining method. The resulting posterior probabilities are added to the original data set as additional features. In each step we update these additional probabilities to achieve the minimum error rate among the base methods. Experimental results on different data sets demonstrate that the method is efficient and that this approach outperforms the base methods, providing a reduction in the mean classification error rate.
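A minimal sketch of the basic building block is shown below: out-of-fold posterior probabilities from base classifiers, together with their mean combination, are appended to the original data set as extra features. The iterative updating of these probabilities described above is not reproduced, and the code assumes binary labels coded 0/1.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

def augment_with_posteriors(X, y, base_models):
    """Append out-of-fold posterior probabilities from the base classifiers
    and their mean combination as extra features (binary labels assumed)."""
    probs = [cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
             for m in base_models]
    probs = np.column_stack(probs)
    mean_prob = probs.mean(axis=1, keepdims=True)   # mean combining rule
    return np.hstack([X, probs, mean_prob])

# base = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(), GaussianNB()]
# X_aug = augment_with_posteriors(X, y, base)   # then refit any classifier on X_aug
```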

9.
We consider the supervised classification setting, in which the data consist of p features measured on n observations, each of which belongs to one of K classes. Linear discriminant analysis (LDA) is a classical method for this problem. However, in the high-dimensional setting where p ≫ n, LDA is not appropriate for two reasons. First, the standard estimate of the within-class covariance matrix is singular, so the usual discriminant rule cannot be applied. Second, when p is large, it is difficult to interpret the classification rule obtained from LDA, since it involves all p features. We propose penalized LDA, a general approach for penalizing the discriminant vectors in Fisher's discriminant problem in a way that leads to greater interpretability. The discriminant problem is not convex, so we use a minorization-maximization approach to optimize it efficiently when convex penalties are applied to the discriminant vectors. In particular, we consider the use of L1 and fused lasso penalties. Our proposal is equivalent to recasting Fisher's discriminant problem as a biconvex problem. We evaluate the performance of the resulting methods on a simulation study and on three gene expression data sets. We also survey past methods for extending LDA to the high-dimensional setting, and explore their relationships with our proposal.
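As a point of reference, the penalized version of Fisher's first discriminant problem can be sketched as follows, with notation that may differ from the paper: $\hat\Sigma_b$ denotes the between-class covariance estimate and $\tilde\Sigma_w$ a positive-definite regularized within-class estimate.

\[
\hat\beta \;=\; \arg\max_{\beta}\ \Big\{ \beta^{\top}\hat\Sigma_b\,\beta \;-\; \lambda\,P(\beta) \Big\}
\quad \text{subject to} \quad \beta^{\top}\tilde\Sigma_w\,\beta \le 1,
\]

where $P(\beta)=\sum_{j}|\beta_j|$ gives the lasso ($L_1$) penalty and $P(\beta)=\sum_{j}|\beta_j| + \gamma\sum_{j}|\beta_j-\beta_{j-1}|$ a fused lasso penalty; the constraint replaces the usual within-class normalization, so that sparsity in $\beta$ translates into a classification rule involving only a few features.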

10.
Inference for state occupation probabilities, given a set of baseline covariates, is an important problem in survival analysis and time-to-event multistate data. We introduce an inverse-censoring-probability re-weighted, semi-parametric single index model based approach to estimate the conditional state occupation probabilities of a given individual in a multistate model under right-censoring. Besides obtaining a temporal regression function, we also test the potentially time-varying effect of a baseline covariate on future state occupation. We show that the proposed technique has desirable finite sample performance and is competitive with three other existing approaches. We illustrate the proposed methodology using two different data sets. First, we re-examine a well-known data set on leukemia patients undergoing bone marrow transplant with various state transitions. Our second illustration is based on data from a study involving the functional status of a set of spinal cord injured patients undergoing a rehabilitation program.
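A generic form of the inverse-censoring-probability re-weighting that the approach above builds on is sketched below; the notation is illustrative and not necessarily the paper's.

\[
w_i(t) \;=\; \frac{\mathbb{1}\{C_i \ge \min(T_i,\,t)\}}{\widehat G\big(\min(T_i,\,t)-\big)},
\]

where $T_i$ is the event time, $C_i$ the censoring time and $\widehat G$ the Kaplan–Meier estimate of the censoring survival function; observations still uncensored at the relevant time are up-weighted by the inverse of their estimated probability of remaining uncensored, so that the weighted sample mimics the fully observed population.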

11.
Here we consider a multinomial probit regression model where the number of variables substantially exceeds the sample size and only a subset of the available variables is associated with the response. Selecting a small number of relevant variables for classification has therefore received a great deal of attention. Generally, when the number of variables is substantial, sparsity-enforcing priors for the regression coefficients are called for on grounds of predictive generalization and computational ease. In this paper, we propose a sparse Bayesian variable selection method in the multinomial probit regression model for multi-class classification. The performance of our proposed method is demonstrated with one simulated data set and three well-known gene expression profiling data sets: breast cancer, leukemia, and small round blue-cell tumors. The results show that, compared with other methods, our method is able to select the relevant variables and can obtain competitive classification accuracy with a small subset of relevant genes.

12.
Model selection methods are important for identifying the best approximating model. To identify the best meaningful model, the purpose of the model should be clearly stated in advance. The focus of this paper is model selection when the modelling purpose is classification. We propose a new model selection approach designed for logistic regression when the main modelling purpose is classification. The method is based on the distance between two clustering trees. We also question and evaluate the performance of conventional model selection methods based on information-theoretic concepts in determining the best logistic regression classifier. An extensive simulation study is used to assess the finite sample performance of the cluster-tree-based and information-theoretic model selection methods. Simulations are adjusted for whether or not the true model is in the candidate set. The results show that the new approach is highly promising. Finally, the methods are applied to a real data set to select a binary model as a means of classifying subjects with respect to their risk of breast cancer.

13.
The k nearest neighbors (k-NN) classifier is one of the most popular methods for statistical pattern recognition and machine learning. In practice, the size k, the number of neighbors used for classification, is usually set arbitrarily to one or some other small number, or chosen by cross-validation. In this study, we propose a novel alternative approach to choosing k. Based on a k-NN-based multivariate multi-sample test, we assign each k a permutation-test-based Z-score, and the number of neighbors is set to the k with the highest Z-score. This approach is computationally efficient because we derive formulas for the mean and variance of the test statistic under the permutation distribution for multiple sample groups. Several simulated and real-world data sets are analyzed to investigate the performance of our approach. The usefulness of our approach is demonstrated through the evaluation of prediction accuracies using the Z-score as a criterion to select k. We also compare our approach to widely used cross-validation approaches. The results show that the size k selected by our approach yields high prediction accuracies when informative features are used for classification, whereas the cross-validation approach may fail in some cases.
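A minimal sketch of the idea is shown below, using the fraction of nearest-neighbour pairs that share a class label as the multi-sample statistic and a Monte Carlo permutation Z-score; the paper instead uses exact permutation moments, which are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_same_class_stat(X, y, k):
    """Fraction of k-nearest-neighbour pairs that share a class label."""
    y = np.asarray(y)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    idx = nn.kneighbors(X, return_distance=False)[:, 1:]   # drop the self-neighbour
    return np.mean(y[idx] == y[:, None])

def permutation_z_score(X, y, k, n_perm=200, rng=None):
    """Monte Carlo permutation Z-score for a given k."""
    rng = np.random.default_rng(rng)
    obs = knn_same_class_stat(X, y, k)
    null = np.array([knn_same_class_stat(X, rng.permutation(y), k)
                     for _ in range(n_perm)])
    return (obs - null.mean()) / null.std(ddof=1)

# choose k with the highest Z-score, e.g.:
# best_k = max(range(1, 21), key=lambda k: permutation_z_score(X, y, k))
```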

14.
Kernel smoothing methods are used to extend the Poisson log-linear approach for estimating population size from multiple lists to an open population in which the multiple lists are recorded at each time point. The data are marginal, as only the lists at each time point are available and the transitions of individuals between lists at different time points are not observable. Our analysis is motivated by, and applied to, data on the number of drug addicts in the Hong Kong Special Administrative Region.

15.
We study the problem of classification with multiple q-variate observations, with and without a time effect on each individual. We develop new classification rules for populations with certain structured and unstructured mean vectors and under certain covariance structures. The new classification rules are effective when the number of observations is not large enough to estimate the variance–covariance matrix. Computational schemes for maximum likelihood estimates of the required population parameters are given. We apply our findings to two real data sets as well as to a simulated data set.

16.
In this paper, we consider James–Stein shrinkage and pretest estimation methods for time series following generalized linear models when it is conjectured that some of the regression parameters may be restricted to a subspace. Efficient estimation strategies are developed when there are many covariates in the model and some of them are not statistically significant. Statistical properties of the pretest and shrinkage estimation methods, including asymptotic distributional bias and risk, are developed. We investigate the relative performance of the shrinkage and pretest estimators with respect to the unrestricted maximum partial likelihood estimator (MPLE). We show that the shrinkage estimators have lower relative mean squared error than the unrestricted MPLE when the number of significant covariates exceeds two. Monte Carlo simulation experiments were conducted for different combinations of inactive covariates, and the performance of each estimator was evaluated in terms of its mean squared error. The practical benefits of the proposed methods are illustrated using two real data sets.
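Without reproducing the paper's derivations, a James–Stein-type shrinkage estimator in this literature typically takes the following form (a sketch; notation illustrative):

\[
\hat\beta^{\,S} \;=\; \hat\beta^{\,R} \;+\; \Big(1 - \frac{k-2}{T_n}\Big)\big(\hat\beta^{\,U} - \hat\beta^{\,R}\big), \qquad k \ge 3,
\]

where $\hat\beta^{U}$ and $\hat\beta^{R}$ are the unrestricted and restricted maximum partial likelihood estimators, $k$ is the dimension of the restriction and $T_n$ is the test statistic for the restriction; the positive-part version replaces the shrinkage factor $1-(k-2)/T_n$ by its positive part to avoid over-shrinkage.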

17.
To draw inferences in dichotomous classification with misclassification, possibly with repeated classifications, the maximum likelihood method is commonly used, mainly because of its efficiency in obtaining parameter estimators for a mixture of two binomial distributions. A simpler, operationally easier alternative is the simple majority method, in which each of n items is classified r times as conforming or non-conforming and the final classification of the item is determined by the most frequent class. This method yields lower mean squared errors than the maximum likelihood and method-of-moments estimators and is asymptotically efficient. In this paper, we introduce a new approach in which not all r repeated classifications of each item may be needed. Each of the n items is classified sequentially as conforming or non-conforming, and the process stops when the count of conforming or non-conforming classifications reaches the integer a. We show by Monte Carlo simulation that this procedure yields a lower mean squared error than the simple majority method for a comparable number of repeated classifications.
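A minimal sketch of the proposed stopping rule is shown below, assuming a user-supplied routine that performs a single (possibly erroneous) classification of the item; at most 2a - 1 classifications are ever needed.

```python
import numpy as np

def sequential_classify(classify_once, a):
    """Classify an item repeatedly until either class has been observed a times.
    `classify_once()` returns True for 'conforming', False for 'non-conforming'."""
    counts = {"conforming": 0, "non-conforming": 0}
    n_calls = 0
    while max(counts.values()) < a:
        label = "conforming" if classify_once() else "non-conforming"
        counts[label] += 1
        n_calls += 1
    return max(counts, key=counts.get), n_calls

# e.g. a conforming item misclassified with probability 0.2 on each inspection:
# rng = np.random.default_rng(0)
# sequential_classify(lambda: rng.random() < 0.8, a=3)
```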

18.
If at least one of two serial machines that produce a specific product in a manufacturing environment malfunctions, non-conforming items will be produced. Determining the optimal time for machine maintenance is one of the major concerns. While a convenient common practice for this kind of problem is to fit a single probability distribution to the combined defect data, this does not adequately capture the fact that there are two different underlying causes of failure. A better approach is to view the defects as arising from a mixture population: one component due to failures of the first machine and the other due to failures of the second. In this article, a mixture model, together with both Bayesian inference and stochastic dynamic programming approaches, is used to find the multi-stage optimal replacement strategy. Using the posterior probability of the machines being in states λ1 and λ2 (the failure rates of defective items produced by machines 1 and 2, respectively), we first formulate the problem as a stochastic dynamic programming model. Then we derive some properties of the optimal value of the objective function and propose a solution algorithm. Finally, the application of the proposed methodology is demonstrated with a numerical example, and an error analysis is performed to evaluate the performance of the proposed procedure. The results of this analysis show that the proposed method performs satisfactorily when different numbers of observations on the times between productions of defective products are available.

19.
It is shown that the commonly used Weibull-Gamma frailty model has only a finite number of finite moments and that its marginal distribution generalizes the log-logistic distribution. In some cases there is not even a finite variance, and there are cases without a single finite moment. Upon transformation to the entire real line, generalized logistic and generalized Cauchy distributions are introduced and their connections with the previous distributions established, as well as with the extreme-value distribution. Apart from its intrinsic and classroom value, the family can be of use when formulating non-informative priors in Bayesian data analysis. Also, gauging the number of finite moments is important when checking regularity conditions in the Weibull-Gamma model. Our findings are illustrated using data on survival in cancer patients.
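The marginal survival function behind these statements can be sketched as follows, via the standard Gamma Laplace-transform argument and with notation that may differ from the paper: with conditional cumulative hazard $Z\,\lambda t^{\gamma}$ and a Gamma frailty $Z$ of mean one and variance $\theta$,

\[
S(t) \;=\; E\big[e^{-Z\lambda t^{\gamma}}\big] \;=\; \big(1 + \theta\lambda t^{\gamma}\big)^{-1/\theta},
\]

which is exactly log-logistic when $\theta = 1$ and generalizes it otherwise. Since $S(t)\sim(\theta\lambda)^{-1/\theta}t^{-\gamma/\theta}$ as $t\to\infty$, the moment $E\,T^{r}$ is finite only for $r < \gamma/\theta$, so a sufficiently large frailty variance leaves no finite variance, or even no finite moment at all.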

20.
The big data era demands new statistical analysis paradigms, since traditional methods often break down when datasets are too large to fit on a single desktop computer. Divide and Recombine (D&R) is becoming a popular approach for big data analysis, in which results are combined over subanalyses performed on separate data subsets. In this article, we consider situations where unit record data cannot be made available by data custodians due to privacy concerns, and explore the concept of statistical sufficiency and summary statistics for model fitting. The resulting approach represents a type of D&R strategy, which we refer to as summary statistics D&R, as opposed to the standard approach, which we refer to as horizontal D&R. We demonstrate the concept via an extended Gamma–Poisson model, where summary statistics are extracted from different databases and incorporated directly into the fitting algorithm without having to combine unit record data. By exploiting the natural hierarchy of the data, our approach has major benefits in terms of privacy protection. Incorporating the proposed modelling framework into data extraction tools such as TableBuilder by the Australian Bureau of Statistics allows for potential analysis at a finer geographical level, which we illustrate with a multilevel analysis of Australian unemployment data. Supplementary materials for this article are available online.
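Because the sufficient statistics of a simple Gamma–Poisson model are just total counts and total exposures, the fit can be assembled from per-custodian summaries alone; a minimal conjugate sketch is shown below (the paper's extended hierarchical model is not reproduced, and all names are illustrative).

```python
from scipy.stats import gamma

def gamma_poisson_from_summaries(summaries, a0=1.0, b0=1.0):
    """Posterior for a common Poisson rate built from per-source summary statistics.
    `summaries` is a list of (total_count, total_exposure) pairs, one per data
    custodian; no unit record data are needed."""
    a = a0 + sum(c for c, _ in summaries)   # shape: prior + total events
    b = b0 + sum(e for _, e in summaries)   # rate: prior + total exposure
    return gamma(a, scale=1.0 / b)

# e.g. three custodians release only (events, person-time) summaries:
# post = gamma_poisson_from_summaries([(12, 340.0), (7, 150.5), (30, 990.0)])
# post.mean(), post.interval(0.95)
```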

