Similar Articles
20 similar articles found.
1.
Exploring a Bayes discriminant method based on the Fisher transformation
Discriminant analysis is one of the three major methods of multivariate statistical analysis and is widely applied in many fields. Distance discriminant, Fisher discriminant and Bayes discriminant are usually regarded as three distinct discriminant methods. This study shows that distance discriminant and Bayes discriminant are the two substantive methods: the former in effect relies on percentile points or confidence intervals, while the latter relies on probabilities. The well-known Fisher discriminant merely applies a linear transformation to the discriminant variables, following the idea of analysis of variance, before distance discrimination is carried out, and so cannot be regarded as a substantive discriminant method in its own right. This paper combines the Fisher transformation with Bayes discrimination: the Fisher transformation is applied first, and Bayes discrimination is then performed by the maximum-probability principle, yielding a new discriminant procedure that can further improve discriminant efficiency. Theoretical and empirical analyses show that Bayes discrimination based on the Fisher transformation is widely applicable and achieves the highest discriminant efficiency.
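A minimal numerical sketch of the proposed route, first a Fisher transformation and then Bayes discrimination by the maximum-probability principle, might look as follows; the two-class Gaussian data, equal priors and all parameter values are illustrative assumptions, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes sharing a covariance structure.
n = 200
cov = [[1.0, 0.5], [0.5, 1.0]]
X0 = rng.multivariate_normal([0.0, 0.0], cov, n)
X1 = rng.multivariate_normal([2.0, 1.0], cov, n)

# Step 1: Fisher transformation -- project onto w = Sw^{-1} (m1 - m0).
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)
z0, z1 = X0 @ w, X1 @ w

# Step 2: Bayes discrimination on the projected scores, assuming normal
# class-conditional densities and equal prior probabilities.
def log_posterior(z, mu, sigma, prior):
    return np.log(prior) - np.log(sigma) - 0.5 * ((z - mu) / sigma) ** 2

def classify(z):
    lp0 = log_posterior(z, z0.mean(), z0.std(), 0.5)
    lp1 = log_posterior(z, z1.mean(), z1.std(), 0.5)
    return (lp1 > lp0).astype(int)

acc = float(np.mean(np.concatenate([classify(z0) == 0, classify(z1) == 1])))
print(round(acc, 3))
```

The maximum-probability rule on the one-dimensional Fisher scores replaces the usual cutoff-based distance rule.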

2.
A fundamental theorem in hypothesis testing is the Neyman-Pearson (N-P) lemma, which yields the most powerful test of simple hypotheses. In this article, we establish a Bayesian framework of hypothesis testing and extend the Neyman-Pearson lemma to create the Bayesian most powerful test of general hypotheses, thus providing an optimality theory for determining thresholds of Bayes factors. Unlike conventional Bayes tests, the proposed Bayesian test is able to control the type I error.
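A toy version of the idea for two simple hypotheses, where the Bayes factor reduces to the likelihood ratio and its rejection threshold can be calibrated to control the type I error, might look like this; the N(0,1) versus N(1,1) setup and all numbers are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simple hypotheses H0: N(0,1) vs H1: N(1,1). For simple hypotheses the
# Bayes factor equals the likelihood ratio, so rejecting when BF_10
# exceeds a calibrated threshold gives a test with controlled type I error.
def bayes_factor(x):
    # BF_10 = f1(x) / f0(x) for a single observation
    return np.exp(-0.5 * (x - 1.0) ** 2 + 0.5 * x ** 2)

alpha = 0.05
x0 = rng.normal(0.0, 1.0, 100_000)                    # draws under H0
threshold = np.quantile(bayes_factor(x0), 1 - alpha)

type1 = float(np.mean(bayes_factor(x0) > threshold))  # ~ alpha by construction
x1 = rng.normal(1.0, 1.0, 100_000)                    # draws under H1
power = float(np.mean(bayes_factor(x1) > threshold))
print(round(type1, 3), round(power, 3))
```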

3.
This article deals with Bayes factors as useful Bayesian tools in frequentist testing of a precise hypothesis. A result and several examples are included to justify the definition of the Bayes factor for point null hypotheses, without merging the initial distribution with a degenerate distribution on the null hypothesis. Of special interest are the problem of testing a proportion (together with a natural criterion to compare different tests), the possible presence of nuisance parameters, and the influence of Bayesian sufficiency on this problem. The problem of testing a precise hypothesis under a Bayesian perspective is also considered, and two alternative methods to deal with it are given.

4.
Bayes methodology provides posterior distribution functions based on parametric likelihoods adjusted for prior distributions. A distribution-free alternative to the parametric likelihood is the use of empirical likelihood (EL) techniques, well known in the context of nonparametric testing of statistical hypotheses. Empirical likelihoods have been shown to exhibit many of the properties of conventional parametric likelihoods. In this paper, we propose and examine Bayes factor (BF) methods derived via the EL ratio approach. Following Kass and Wasserman (1995), we consider Bayes-factor-type decision rules in the context of standard statistical testing techniques. We show that the asymptotic properties of the proposed procedure are similar to the classical BF's asymptotic operating characteristics. Although we focus on hypothesis testing, the proposed approach also yields confidence interval estimators of unknown parameters. Monte Carlo simulations were conducted to evaluate the theoretical results as well as to demonstrate the power of the proposed test.

5.
This article investigates the possible use of our newly defined extended projection depth (abbreviated EPD) in nonparametric discriminant analysis. We propose a robust nonparametric classifier that relies on the intuitively simple notion of EPD. The EPD-based classifier assigns an observation to the population with respect to which it has maximum EPD. Asymptotic properties of the misclassification rates and robustness properties of the EPD-based classifier are discussed. A few simulated data sets are used to compare the performance of the EPD-based classifier with Fisher's linear discriminant rule, the quadratic discriminant rule, and the PD-based classifier. It is also found that when the underlying distributions are elliptically symmetric, the EPD-based classifier is asymptotically equivalent to the optimal Bayes classifier.
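A maximum-depth classifier of the kind described can be sketched with a simple Mahalanobis-type depth standing in for the article's extended projection depth (EPD itself is not reproduced here; the data and the depth choice are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)

# Maximum-depth classification: assign x to the class in which it lies
# deepest. Mahalanobis depth stands in for the article's EPD.
def mahalanobis_depth(x, X):
    mu = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return 1.0 / (1.0 + (x - mu) @ S_inv @ (x - mu))

X0 = rng.multivariate_normal([0.0, 0.0], np.eye(2), 150)
X1 = rng.multivariate_normal([3.0, 0.0], np.eye(2), 150)

def classify(x):
    return int(mahalanobis_depth(x, X1) > mahalanobis_depth(x, X0))

T0 = rng.multivariate_normal([0.0, 0.0], np.eye(2), 200)
T1 = rng.multivariate_normal([3.0, 0.0], np.eye(2), 200)
acc = (np.mean([classify(x) == 0 for x in T0])
       + np.mean([classify(x) == 1 for x in T1])) / 2.0
print(round(float(acc), 3))
```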

6.
The problem of approximating an interval null or imprecise hypothesis test by a point null or precise hypothesis test under a Bayesian framework is considered. In the literature, some of the methods for solving this problem have used the Bayes factor for testing a point null and justified it as an approximation to the interval null. However, many authors recommend evaluating tests through the posterior odds, a Bayesian measure of evidence against the null hypothesis. It is of interest, then, to determine whether similar results hold when using the posterior odds as the primary measure of evidence. For the prior distributions under which the approximation holds with respect to the Bayes factor, it is shown that the posterior odds for testing the point null hypothesis do not approximate the posterior odds for testing the interval null hypothesis. In fact, in order to obtain convergence of the posterior odds, a number of restrictive conditions need to be placed on the prior structure. Furthermore, under a non-symmetrical prior setup, neither the Bayes factor nor the posterior odds for testing the imprecise hypothesis converges to the Bayes factor or posterior odds, respectively, for testing the precise hypothesis. To rectify this dilemma, it is shown that constraints need to be placed on the priors. In both situations, the classes of priors constructed to ensure convergence of the posterior odds are not practically useful, thus questioning, from a Bayesian perspective, the appropriateness of point null testing in a problem better represented by an interval null. The theory developed is also applied to an epidemiological data set from White et al. (Can. Veterinary J. 30 (1989) 147–149) in order to illustrate and study priors for which the point null hypothesis test approximates the interval null hypothesis test. AMS Classification: Primary 62F15; Secondary 62A15

7.
Whether or not to handle dependence in feature selection is still an open question in supervised classification problems where the number of covariates exceeds the number of observations. Some recent papers surprisingly show the superiority of naive Bayes approaches based on an obviously erroneous assumption of independence, whereas others recommend inferring the dependence structure in order to decorrelate the selection statistics. In the classical linear discriminant analysis (LDA) framework, the present paper first highlights the impact of dependence in terms of instability of feature selection. A second objective is to revisit the above issue using flexible factor modeling of the covariance. This framework introduces latent components of dependence, conditionally on which a new Bayes consistency is defined. A procedure is then proposed for the joint estimation of the expectation and variance parameters of the model. The present method is compared to recent regularized diagonal discriminant analysis approaches, which assume independence among features, and to regularized LDA procedures, in terms of both classification performance and stability of feature selection. The proposed method is implemented in the R package FADA, freely available from the R repository CRAN.
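The cost of wrongly assuming independence can be illustrated with oracle discriminant rules on equicorrelated Gaussian features; the parameter values below are assumptions for illustration, not the article's factor model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Equicorrelated features; the mean shift alternates sign, so the
# dependence matters for discrimination.
p, rho, n = 5, 0.8, 2000
Sigma = (1 - rho) * np.eye(p) + rho * np.ones((p, p))
mu0 = np.zeros(p)
mu1 = np.array([1.0, -1.0, 1.0, -1.0, 1.0])

X0 = rng.multivariate_normal(mu0, Sigma, n)
X1 = rng.multivariate_normal(mu1, Sigma, n)

def error_rate(cov):
    # oracle linear rule built from the true means and the given covariance
    w = np.linalg.solve(cov, mu1 - mu0)
    c = w @ (mu0 + mu1) / 2.0
    return float((np.mean(X0 @ w > c) + np.mean(X1 @ w < c)) / 2.0)

full = error_rate(Sigma)                       # uses the dependence
naive = error_rate(np.diag(np.diag(Sigma)))    # independence assumption
print(round(full, 4), round(naive, 4))
```

The diagonal (naive) rule pays a clear price here because the signal lies in a direction where the correlation is informative.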

8.
Kernel discriminant analysis translates the original classification problem into feature space and solves the problem with dimension and sample size interchanged. In high-dimension low sample size (HDLSS) settings, this reduces the 'dimension' to that of the sample size. For HDLSS two-class problems we modify Mika's kernel Fisher discriminant function, which in general remains ill-posed even in a kernel setting; see Mika et al. (1999). We propose a kernel naive Bayes discriminant function and its smoothed version, using first- and second-degree polynomial kernels. For fixed sample size and increasing dimension, we present asymptotic expressions for the kernel discriminant functions, discriminant directions and the error probability of our kernel discriminant functions. The theoretical calculations are complemented by simulations which show the convergence of the estimators to the population quantities as the dimension grows. We illustrate the performance of the new discriminant rules, which are easy to implement, on real HDLSS data. For such data, our results clearly demonstrate the superior performance of the new discriminant rules, and especially their smoothed versions, over Mika's kernel Fisher version, and typically also over the commonly used naive Bayes discriminant rule.

9.
There has been a considerable amount of work on the development of various default Bayes factors for model selection and hypothesis testing. Two commonly used criteria, the intrinsic Bayes factor and the fractional Bayes factor, are compared for testing two independent normal means and variances. We also derive several intrinsic priors whose Bayes factors are asymptotically equivalent to the respective default Bayes factors. We demonstrate our results on simulated datasets.

10.
A Bayesian approach is considered for studying change point problems. A hypothesis test of change versus no change is formulated using the notion of predictive distributions. Bayes factors are developed for change versus no change in the exponential families of distributions with conjugate priors. Under vague prior information, both Bayes factors and pseudo Bayes factors are considered. A new result is developed which shows how the overall Bayes factor decomposes into Bayes factors at each point. Finally, an example is provided in which the computations are performed using the concept of imaginary observations.
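For exponential data with a conjugate gamma prior, the marginal likelihoods needed for a change-versus-no-change Bayes factor are available in closed form, so a Bayes factor can be evaluated at each candidate change point; the rates, sample sizes and prior values in this sketch are assumed for illustration:

```python
import numpy as np
from math import lgamma, log

rng = np.random.default_rng(3)

# Exponential observations whose rate changes at k* = 30.
x = np.concatenate([rng.exponential(1.0, 30), rng.exponential(0.25, 30)])
n = len(x)

a, b = 1.0, 1.0  # Gamma(a, b) conjugate prior on the exponential rate

def log_marginal(seg):
    # closed-form log marginal likelihood of an exponential segment
    # under a Gamma(a, b) prior on its rate
    m, s = len(seg), float(seg.sum())
    return a * log(b) + lgamma(a + m) - lgamma(a) - (a + m) * log(b + s)

log_m_nochange = log_marginal(x)
# log Bayes factor of "change at k" versus "no change", for each k
log_bf = np.array([log_marginal(x[:k]) + log_marginal(x[k:]) - log_m_nochange
                   for k in range(5, n - 5)])
k_hat = 5 + int(np.argmax(log_bf))
print(k_hat, round(float(log_bf.max()), 2))
```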

11.
One method of testing for independence in a two-way table is based on the Bayes factor, the ratio of the likelihoods under the independence hypothesis H and the alternative hypothesis H̄. The main difficulty in this approach is the specification of prior distributions on the composite hypotheses H and H̄. A new Bayesian test statistic is constructed by using a prior distribution on H̄ that is concentrated about the "independence surface" H. Approximations are proposed which simplify the computation of the test statistic. The values of the Bayes factor are compared with values of statistics proposed by Gunel and Dickey (1974), Good and Crook (1987), and Spiegelhalter and Smith (1982) for a number of two-way tables. This investigation suggests a strong relationship between the new statistic and the p-value.
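With uniform Dirichlet priors as a simple stand-in for the priors discussed above, the marginal likelihoods under independence and under the unrestricted alternative have closed forms, and the Bayes factor can be computed directly (the table and the prior choice are illustrative assumptions):

```python
import numpy as np
from math import lgamma

# A 2x2 table with a clear association between rows and columns.
table = np.array([[30, 10], [12, 28]])

def log_marg(counts, alpha=1.0):
    # log marginal likelihood of multinomial counts under a symmetric
    # Dirichlet(alpha) prior; the shared multinomial coefficient is
    # omitted since it cancels in the Bayes factor
    c = np.asarray(counts, float).ravel()
    k, n = len(c), c.sum()
    return (lgamma(k * alpha) - lgamma(k * alpha + n)
            + sum(lgamma(alpha + ci) - lgamma(alpha) for ci in c))

# Under independence the likelihood factorises into row and column
# margins; under the alternative the full table gets its own Dirichlet.
log_m_indep = log_marg(table.sum(axis=1)) + log_marg(table.sum(axis=0))
log_m_alt = log_marg(table)

log_bf = log_m_alt - log_m_indep       # > 0: evidence against independence
print(round(log_bf, 2))
```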

12.
Recently, the field of multiple hypothesis testing has expanded greatly, largely because of new methods developed in genomics that allow scientists to process thousands of hypothesis tests simultaneously. The frequentist approach to this problem uses testing error measures that control the Type I error rate at a desired level. Alternatively, in this article, a Bayesian hierarchical model based on mixture distributions and an empirical Bayes approach are proposed in order to produce a list of rejected hypotheses that are declared significant and interesting for a more detailed follow-up analysis. In particular, we develop a straightforward implementation of a Gibbs sampling scheme in which all the conditional posterior distributions are explicit. The results are compared with the frequentist False Discovery Rate (FDR) methodology. Simulation examples show that our model improves on the FDR procedure in the sense that it reduces the percentage of false negatives while keeping an acceptable percentage of false positives.

13.
Classification of gene expression microarray data is important in the diagnosis of diseases such as cancer, but the analysis of microarray data often presents difficult challenges because the gene expression dimension is typically much larger than the sample size. Consequently, classification methods for microarray data often rely on regularization techniques to stabilize the classifier for improved classification performance. Numerous regularization techniques are available, such as covariance-matrix regularization, which in practice makes the choice of regularization method difficult. In this paper, we compare the classification performance of five covariance-matrix regularization methods applied to the linear discriminant function, using two simulated high-dimensional data sets and five well-known high-dimensional microarray data sets. In our simulation study, we found the minimum distance empirical Bayes method reported in Srivastava and Kubokawa [Comparison of discrimination methods for high dimensional data, J. Japan Statist. Soc. 37(1) (2007), pp. 123–134], and the new linear discriminant analysis reported in Thomaz, Kitani, and Gillies [A Maximum Uncertainty LDA-based approach for Limited Sample Size problems – with application to Face Recognition, J. Braz. Comput. Soc. 12(1) (2006), pp. 1–12], to perform consistently well and often outperform three other prominent regularization methods. Finally, we conclude with some recommendations for practitioners.
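A basic covariance-matrix regularization for the linear discriminant in a p > n setting, here a simple ridge-type shrinkage toward a scaled identity rather than any of the five methods compared in the paper, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(4)

# p >> n: the pooled sample covariance is singular, so the linear
# discriminant uses a ridge-shrunk covariance estimate.
p, n = 100, 25
mu1, mu2 = np.zeros(p), np.full(p, 0.5)
X1 = rng.normal(mu1, 1.0, (n, p))
X2 = rng.normal(mu2, 1.0, (n, p))

S = (np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)) / 2.0
gamma = 0.5                                   # shrinkage weight, hand-picked
S_reg = (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
w = np.linalg.solve(S_reg, m2 - m1)           # regularized discriminant direction
c = w @ (m1 + m2) / 2.0

# misclassification rate on fresh test data
T1 = rng.normal(mu1, 1.0, (500, p))
T2 = rng.normal(mu2, 1.0, (500, p))
err = float((np.mean(T1 @ w > c) + np.mean(T2 @ w < c)) / 2.0)
print(round(err, 3))
```

Without the shrinkage term, `np.linalg.solve` would face a rank-deficient matrix (rank at most 2n - 2 here).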

14.
The problem of testing a point null hypothesis involving an exponential mean is considered. It is argued that the usual interpretation of P-values as evidence against precise hypotheses is faulty. As in Berger and Delampady (1986) and Berger and Sellke (1987), lower bounds on Bayesian measures of evidence over wide classes of priors are found, emphasizing the conflict between posterior probabilities and P-values. A hierarchical Bayes approach is also considered as an alternative to computing lower bounds and "automatic" Bayesian significance tests, which further illustrates the point that P-values are highly misleading measures of evidence for tests of point null hypotheses.

15.
Hidden Markov random field models provide an appealing representation of images and other spatial problems. The drawback is that inference is not straightforward for these models as the normalisation constant for the likelihood is generally intractable except for very small observation sets. Variational methods are an emerging tool for Bayesian inference and they have already been successfully applied in other contexts. Focusing on the particular case of a hidden Potts model with Gaussian noise, we show how variational Bayesian methods can be applied to hidden Markov random field inference. To tackle the obstacle of the intractable normalising constant for the likelihood, we explore alternative estimation approaches for incorporation into the variational Bayes algorithm. We consider a pseudo-likelihood approach as well as the more recent reduced dependence approximation of the normalisation constant. To illustrate the effectiveness of these approaches we present empirical results from the analysis of simulated datasets. We also analyse a real dataset and compare results with those of previous analyses as well as those obtained from the recently developed auxiliary variable MCMC method and the recursive MCMC method. Our results show that the variational Bayesian analyses can be carried out much faster than the MCMC analyses and produce good estimates of model parameters. We also found that the reduced dependence approximation of the normalisation constant outperformed the pseudo-likelihood approximation in our analysis of real and synthetic datasets.

16.
Quantitative model validation plays an increasingly important role in performance and reliability assessment of complex systems whenever computer modelling and simulation are involved. The foci of this paper are to pursue a Bayesian probabilistic approach to quantitative model validation with non-normal data, taking data uncertainty into account, and to investigate the impact of the normality assumption on validation accuracy. The Box–Cox transformation method is employed to convert the non-normal data, with the purpose of facilitating the overall validation assessment of computational models with higher accuracy. Explicit expressions for the interval-hypothesis-testing-based Bayes factor are derived for the transformed data in the univariate and multivariate cases. A Bayesian confidence measure is presented based on the Bayes factor metric. A generalized procedure is proposed to implement the proposed probabilistic methodology for model validation of complicated systems. The classic hypothesis testing method is employed to conduct a comparison study. The impact of the data normality assumption and of decision threshold variation on model assessment accuracy is investigated using both classical and Bayesian approaches. The proposed methodology and procedure are demonstrated with a univariate stochastic damage accumulation model, a multivariate heat conduction problem and a multivariate dynamic system.
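Selecting the Box–Cox exponent by maximizing the profile log-likelihood, the usual first step before applying normal-theory Bayes factor machinery to the transformed data, can be sketched as follows (the lognormal data are an assumed example):

```python
import numpy as np

rng = np.random.default_rng(9)

# Clearly non-normal (lognormal) data.
y = rng.lognormal(mean=0.5, sigma=0.6, size=200)

def boxcox(y, lam):
    # Box-Cox transform; the lam = 0 case is the log transform
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

def profile_loglik(lam):
    # normal log-likelihood at the MLEs plus the Jacobian of the transform
    z = boxcox(y, lam)
    return -0.5 * len(y) * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()

grid = np.linspace(-2.0, 2.0, 401)
lam_hat = float(grid[np.argmax([profile_loglik(l) for l in grid])])
print(round(lam_hat, 2))   # near 0 here, so a log transform is indicated
```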

17.
In this paper, we use the Bayesian method in the application of hypothesis testing and model selection to determine the order of a Markov chain. The criteria used are based on Bayes factors with noninformative priors. Comparisons with the commonly used AIC and BIC criteria are made through an example and computer simulations. The results show that the proposed method is better than the AIC and BIC criteria, especially for Markov chains with higher orders and larger state spaces.
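A BIC-style order selection for a two-state chain, which for large samples behaves like a Bayes-factor comparison under noninformative priors, can be sketched as follows; the transition matrix and sample size are assumptions:

```python
import numpy as np
from collections import defaultdict
from math import log

rng = np.random.default_rng(5)

# Simulate a first-order two-state Markov chain.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
x = [0]
for _ in range(2000):
    x.append(int(rng.choice(2, p=P[x[-1]])))

def bic(order):
    # log-likelihood from transition counts for each length-`order` context
    counts = defaultdict(lambda: np.zeros(2))
    for t in range(order, len(x)):
        counts[tuple(x[t - order:t])][x[t]] += 1
    ll = 0.0
    for c in counts.values():
        p = c / c.sum()
        ll += float((c[c > 0] * np.log(p[c > 0])).sum())
    k = 2 ** order                  # one free parameter per context
    return ll - 0.5 * k * log(len(x))

order_hat = max(range(3), key=bic)
print(order_hat)
```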

18.
Type-I and Type-II censoring schemes are the most widely used censoring schemes for life testing experiments. A mixture of Type-I and Type-II censoring is known as a hybrid censoring scheme, and several hybrid censoring schemes have been introduced in recent years. In the last few years, progressive censoring schemes have also received considerable attention. In this article, we mainly consider the Bayesian inference of the unknown parameters of the two-parameter exponential distribution under different hybrid and progressive censoring schemes. It is observed that, in general, the Bayes estimate and the associated credible interval of any function of the unknown parameters cannot be obtained in explicit form. We propose to use a Monte Carlo sampling procedure to compute the Bayes estimate and to construct the associated credible interval. Monte Carlo simulation experiments have been performed to assess the effectiveness of the proposed method in the case of Type-I hybrid censored samples. The performance is quite satisfactory. One data analysis has been performed for illustrative purposes.
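For the one-parameter exponential case under plain Type-I censoring, conjugacy survives censoring and the Monte Carlo sampling idea reduces to drawing from a gamma posterior; this simplified sketch (not the article's two-parameter model) illustrates the credible-interval construction, with all numbers assumed:

```python
import numpy as np

rng = np.random.default_rng(6)

# n units on test, observation stops at time tau (Type-I censoring).
n, tau, true_rate = 60, 1.5, 1.0
t = rng.exponential(1.0 / true_rate, n)
obs = np.minimum(t, tau)            # recorded times (failure or censoring)
d = int((t <= tau).sum())           # number of observed failures

# With a Gamma(a, b) prior on the rate, the posterior is
# Gamma(a + d, b + sum of recorded times): conjugacy survives censoring.
a, b = 0.01, 0.01                   # weak prior, an arbitrary choice
post_rate = rng.gamma(a + d, 1.0 / (b + obs.sum()), 50_000)

# Monte Carlo Bayes estimate and 95% credible interval for the mean
# lifetime, a nonlinear function of the rate.
mean_life = 1.0 / post_rate
est = float(mean_life.mean())
lo, hi = np.quantile(mean_life, [0.025, 0.975])
print(round(est, 2), round(float(lo), 2), round(float(hi), 2))
```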

19.
In this paper, we study the empirical Bayes two-action problem under linear loss function. Upper bounds on the regret of empirical Bayes testing rules are investigated. Previous results on this problem construct empirical Bayes tests using kernel-type estimators of nonparametric functionals. Further, they have assumed specific forms, such as the continuous one-parameter exponential family, for the family of distributions {Fθ : θ ∈ Ω} of the observations. In this paper, we present a new general approach for establishing upper bounds (in terms of rate of convergence) of empirical Bayes tests for this problem. Our results are given for any family of continuous distributions and apply to empirical Bayes tests based on any type of nonparametric method of functional estimation. We show that our bounds are very sharp in the sense that they reduce to existing optimal or nearly optimal rates of convergence when applied to specific families of distributions.

20.
Several alternative Bayes factors have recently been proposed in order to solve the problem of the extreme sensitivity of the Bayes factor to the priors of the models under comparison. Specifically, the impossibility of using the Bayes factor with standard noninformative priors for model comparison has led to the introduction of new automatic criteria, such as the posterior Bayes factor (Aitkin 1991), the intrinsic Bayes factors (Berger and Pericchi 1996b) and the fractional Bayes factor (O'Hagan 1995). We derive some interesting properties of the fractional Bayes factor that provide justifications for its use in addition to the ones given by O'Hagan. We further argue that the use of the fractional Bayes factor, originally introduced to cope with improper priors, is also useful in a robust analysis. Finally, using the usual classes of priors, we compare several alternative Bayes factors for the problem of testing the point null hypothesis in the univariate normal model.
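O'Hagan's fractional Bayes factor can be illustrated for a point null on a normal mean with known variance and an improper flat prior, by comparing marginal likelihoods computed with the full likelihood and with the likelihood raised to a training fraction b (all numerical choices here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Fractional Bayes factor for H0: mu = 0 vs H1: mu unrestricted,
# N(mu, 1) data with an improper flat prior on mu under H1.
x = rng.normal(0.0, 1.0, 40)      # data simulated under H0
n = len(x)
b = 1.0 / n                       # minimal training fraction b = 1/n

grid = np.linspace(-5.0, 5.0, 4001)
dmu = grid[1] - grid[0]

def log_m(frac, null):
    # log (fractional) marginal likelihood; constants common to both
    # hypotheses are omitted since they cancel in the ratio
    if null:
        return -0.5 * frac * np.sum(x ** 2)
    ll = -0.5 * frac * np.sum((x[:, None] - grid[None, :]) ** 2, axis=0)
    return np.log(np.sum(np.exp(ll - ll.max())) * dmu) + ll.max()

# B_F(1 vs 0) = [m1(full) / m0(full)] / [m1(b) / m0(b)]
log_bf10 = (log_m(1.0, False) - log_m(1.0, True)) - (log_m(b, False) - log_m(b, True))
print(round(float(log_bf10), 2))
```

The grid integration matches the closed form log B = 0.5 log b + (1 - b) n x̄² / 2 for this model, which makes the fraction's role as an automatic prior-training device explicit.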
