Similar Documents
1.
The problem of approximating an interval null or imprecise hypothesis test by a point null or precise hypothesis test under a Bayesian framework is considered. In the literature, some of the methods for solving this problem have used the Bayes factor for testing a point null and justified it as an approximation to the interval null. However, many authors recommend evaluating tests through the posterior odds, a Bayesian measure of evidence against the null hypothesis. It is of interest then to determine whether similar results hold when using the posterior odds as the primary measure of evidence. For the prior distributions under which the approximation holds with respect to the Bayes factor, it is shown that the posterior odds for testing the point null hypothesis does not approximate the posterior odds for testing the interval null hypothesis. In fact, in order to obtain convergence of the posterior odds, a number of restrictive conditions need to be placed on the prior structure. Furthermore, under a non-symmetrical prior setup, neither the Bayes factor nor the posterior odds for testing the imprecise hypothesis converges to the Bayes factor or posterior odds respectively for testing the precise hypothesis. To rectify this dilemma, it is shown that constraints need to be placed on the priors. In both situations, the class of priors constructed to ensure convergence of the posterior odds is not practically useful, thus questioning, from a Bayesian perspective, the appropriateness of point null testing in a problem better represented by an interval null. The theories developed are also applied to an epidemiological data set from White et al. (Can. Veterinary J. 30 (1989) 147–149) in order to illustrate and study priors for which the point null hypothesis test approximates the interval null hypothesis test. AMS Classification: Primary 62F15; Secondary 62A15
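The tension between point-null and interval-null testing described above can be illustrated numerically. The sketch below is a generic illustration of the setup, not the paper's construction: it compares the Bayes factor for a point null θ = 0 against a N(0, τ²) alternative with the Bayes factor for an interval null |θ| ≤ ε under the same continuous prior, for a normal likelihood. All function names and parameter values are hypothetical.

```python
import math

def normal_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def point_null_bf(xbar, se, tau):
    """Bayes factor in favour of H0: theta = 0 versus theta ~ N(0, tau^2),
    for an observed mean xbar with standard error se."""
    return normal_pdf(xbar, 0, se) / normal_pdf(xbar, 0, math.sqrt(se**2 + tau**2))

def interval_null_bf(xbar, se, tau, eps, grid=20001, lim=50.0):
    """Bayes factor in favour of H0: |theta| <= eps under the same
    continuous N(0, tau^2) prior, by trapezoidal integration of the
    prior-averaged likelihood inside and outside the interval."""
    h = 2 * lim / (grid - 1)
    num_l = num_p = den_l = den_p = 0.0
    for i in range(grid):
        th = -lim + i * h
        w = 0.5 if i in (0, grid - 1) else 1.0
        pri = w * normal_pdf(th, 0, tau)
        lik = normal_pdf(xbar, th, se)
        if abs(th) <= eps:
            num_l += pri * lik
            num_p += pri
        else:
            den_l += pri * lik
            den_p += pri
    return (num_l / num_p) / (den_l / den_p)

# For a narrow interval the two Bayes factors nearly coincide;
# widening the interval pulls them apart.
bf_point = point_null_bf(1.96, 1.0, 2.0)
bf_narrow = interval_null_bf(1.96, 1.0, 2.0, 0.05)
bf_wide = interval_null_bf(1.96, 1.0, 2.0, 1.0)
print(bf_point, bf_narrow, bf_wide)
```

With a narrow interval (ε = 0.05) the interval-null Bayes factor tracks the point-null one closely, while a wide interval (ε = 1) gives a visibly different answer, which is the approximation question the abstract studies for posterior odds.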

2.
The problem of testing a point null hypothesis involving an exponential mean is considered. It is argued that the usual interpretation of P-values as evidence against precise hypotheses is faulty. As in Berger and Delampady (1986) and Berger and Sellke (1987), lower bounds on Bayesian measures of evidence over wide classes of priors are found, emphasizing the conflict between posterior probabilities and P-values. A hierarchical Bayes approach is also considered as an alternative to computing lower bounds and “automatic” Bayesian significance tests, which further illustrates the point that P-values are highly misleading measures of evidence for tests of point null hypotheses.
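The conflict between P-values and Bayesian measures of evidence can be made concrete with the well-known −e·p·log p lower bound of Sellke, Bayarri and Berger (2001) on the Bayes factor in favour of a point null, a bound in the same spirit as (though not identical to) those cited in this abstract. A minimal sketch, with hypothetical helper names:

```python
import math

def bayes_factor_lower_bound(p):
    """Sellke-Bayarri-Berger calibration: a lower bound on the Bayes
    factor in favour of a point null, valid for 0 < p < 1/e."""
    if not 0 < p < 1 / math.e:
        raise ValueError("bound requires 0 < p < 1/e")
    return -math.e * p * math.log(p)

def posterior_prob_null(p, prior_odds=1.0):
    """Lower bound on P(H0 | data) implied by the Bayes factor bound,
    starting from the given prior odds on H0."""
    post_odds = prior_odds * bayes_factor_lower_bound(p)
    return post_odds / (1 + post_odds)

# A p-value of 0.05 corresponds at best to odds of roughly 0.41 in favour
# of H0, i.e. P(H0 | data) can be no smaller than about 0.29 under
# equal prior odds -- far from the "1-in-20" reading of the p-value.
print(round(bayes_factor_lower_bound(0.05), 3))
print(round(posterior_prob_null(0.05), 3))
```

This is the quantitative shape of the discrepancy the abstract describes: even the most favourable prior in a wide class leaves substantially more posterior probability on the null than the P-value suggests.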

3.
In the Bayesian approach, the Behrens–Fisher problem has been posed as one of estimation for the difference of two means. No Bayesian solution to the Behrens–Fisher testing problem has yet been given due, perhaps, to the fact that the conventional priors used are improper. While default Bayesian analysis can be carried out for estimation purposes, it poses difficulties for testing problems. This paper generates sensible intrinsic and fractional prior distributions for the Behrens–Fisher testing problem from the improper priors commonly used for estimation. It allows us to compute the Bayes factor to compare the null and the alternative hypotheses. This default procedure of model selection is compared with a frequentist test and the Bayesian information criterion. We find a discrepancy in the sense that the frequentist test and the Bayesian information criterion reject the null hypothesis for data for which the Bayes factor based on intrinsic or fractional priors does not.

4.
This paper presents a Bayesian-hypothesis-testing-based methodology for model validation and confidence extrapolation under uncertainty, using limited test data. An explicit expression of the Bayes factor is derived for the interval hypothesis testing. The interval method is compared with the Bayesian point null hypothesis testing approach. The Bayesian network with Markov Chain Monte Carlo simulation and Gibbs sampling is explored for extrapolating the inference from the validated domain at the component level to the untested domain at the system level. The effect of the number of experiments on the confidence in the model validation decision is investigated. The probabilities of Type I and Type II errors in decision-making during the model validation and confidence extrapolation are quantified. The proposed methodologies are applied to a structural mechanics problem. Numerical results demonstrate that the Bayesian methodology provides a quantitative approach to facilitate rational decisions in model validation and confidence extrapolation under uncertainty.

5.
6.
Substitution of a mixed prior distribution by a continuous one for the point null hypothesis testing problem is discussed. Conditions are established in order to approximate the Bayes factors for the two problems. Moreover, through this approximation an assignment of prior probabilities is suggested.

7.
A Bayesian test for the point null testing problem in the multivariate case is developed. A procedure to get the mixed distribution using the prior density is suggested. For comparisons between the Bayesian and classical approaches, lower bounds on posterior probabilities of the null hypothesis, over some reasonable classes of prior distributions, are computed and compared with the p-value of the classical test. With our procedure, a better approximation is obtained because the p-value is in the range of the Bayesian measures of evidence.

8.
The full Bayesian significance test (FBST) was introduced by Pereira and Stern for measuring the evidence of a precise null hypothesis. The FBST requires both numerical optimization and multidimensional integration, whose computational cost may be heavy when testing a precise null hypothesis on a scalar parameter of interest in the presence of a large number of nuisance parameters. In this paper we propose a higher order approximation of the measure of evidence for the FBST, based on tail area expansions of the marginal posterior of the parameter of interest. When, in particular, the focus is on matching priors, further results are highlighted. Numerical illustrations are discussed.

9.
In this article, we deal with the problem of testing a point null hypothesis for the mean of a multivariate power exponential distribution. We study the conditions under which Bayesian and frequentist approaches can match. In this comparison it is observed that the tails of the model are the key to explain the reconciliability or irreconciliability between the two approaches.

10.
In this paper, we develop Bayes factor based testing procedures for the presence of a correlation or a partial correlation. The proposed Bayesian tests are obtained by restricting the class of the alternative hypotheses to maximize the probability of rejecting the null hypothesis when the Bayes factor is larger than a specified threshold. It turns out that they depend simply on the frequentist t-statistics with the associated critical values and can thus be easily calculated in a spreadsheet such as Excel, in fact by just adding one more step after one has performed the frequentist correlation tests. In addition, they are able to yield a decision identical to that of the frequentist paradigm, provided that the evidence threshold of the Bayesian tests is determined by the significance level of the frequentist paradigm. We illustrate the performance of the proposed procedures through simulated and real-data examples.
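The reduction to the frequentist t-statistic can be sketched as follows. The decision rule shown (reject when |t| exceeds the critical value matched to the chosen significance level) is the general form the abstract describes; the helper names and the toy data are hypothetical, and the paper's actual Bayes factor threshold calibration is not reproduced here.

```python
import math

def sample_correlation(xs, ys):
    """Pearson sample correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def correlation_t_stat(r, n):
    """t-statistic for H0: rho = 0, with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

xs = [1, 2, 3, 4, 5, 6]
ys = [1.1, 1.9, 3.2, 3.9, 5.1, 6.2]
r = sample_correlation(xs, ys)
t = correlation_t_stat(r, len(xs))
# With the Bayesian evidence threshold calibrated to the 5% level
# (two-sided t critical value 2.776 at 4 df), the Bayesian and
# frequentist tests reach the same decision on these data.
print(t > 2.776)
```

This is the "one more step" the abstract mentions: once r and t are in a spreadsheet, the Bayesian decision is a single extra comparison against a calibrated threshold.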

11.
This article addresses the problem of testing whether the vectors of regression coefficients are equal for two independent normal regression models when the error variances are unknown. This problem poses severe difficulties both to the frequentist and Bayesian approaches to statistical inference. In the former approach, normal hypothesis testing theory does not apply because of the unrelated variances. In the latter, the prior distributions typically used for the parameters are improper and hence the Bayes factor-based solution cannot be used. We propose a Bayesian solution to this problem in which no subjective input is considered. We first generate “objective” proper prior distributions (intrinsic priors) for which the Bayes factor and model posterior probabilities are well defined. The posterior probability of each model is used as a model selection tool. This consistent procedure of testing hypotheses is compared with some of the frequentist approximate tests proposed in the literature.

12.
The unit root problem plays a central role in empirical applications in the time series econometric literature. However, significance tests developed under the frequentist tradition present various conceptual problems that jeopardize the power of these tests, especially for small samples. Bayesian alternatives, although having interesting interpretations and being precisely defined, experience problems due to the fact that the hypothesis of interest in this case is sharp or precise. The Bayesian significance test used in this article, for the unit root hypothesis, is based solely on the posterior density function, without the need of imposing positive probabilities to sets of zero Lebesgue measure. Furthermore, it is conducted under strict observance of the likelihood principle. It was designed mainly for testing sharp null hypotheses and it is called FBST, for Full Bayesian Significance Test.
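The FBST evidence value is the posterior mass of the set where the posterior density does not exceed its value at the sharp null. For a posterior with a known normal form this has a closed form, which the sketch below uses; the N(mean, sd²) posterior for the autoregressive root and all parameter values are illustrative assumptions, not the article's model.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def fbst_evidence_normal(theta0, post_mean, post_sd):
    """FBST e-value for the sharp null theta = theta0 under a
    N(post_mean, post_sd^2) posterior: the posterior probability of the
    region where the density does NOT exceed the density at theta0."""
    z = abs(theta0 - post_mean) / post_sd
    return 2 * (1 - normal_cdf(z))

# Posterior centred at 0.98 with sd 0.05: a unit root (theta0 = 1)
# retains substantial evidence, while theta0 = 0.8 is firmly rejected,
# all without placing point mass on a zero-measure set.
print(round(fbst_evidence_normal(1.0, 0.98, 0.05), 3))
print(round(fbst_evidence_normal(0.8, 0.98, 0.05), 4))
```

Because the e-value is computed from the posterior density alone, no mixed prior with an atom at the null is required, which is the point the abstract emphasizes.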

13.
Bayesian counterparts of some standard tests concerning the means of a multi-normal distribution are discussed, in particular the hypothesis that the multi-normal mean is equal to a specified value and the hypothesis that the means are equal. Lower bounds on the Bayes factor in favour of the null hypothesis are obtained over the class of conjugate priors. The P-value, or observed significance level, of the standard sampling-theoretic test procedure is compared with the posterior probability. The results correspond closely with those of Good (1967), Berger & Sellke (1987), Pepple (1988) and others and illustrate the conflict between posterior probabilities and P-values as measures of evidence.

14.
The authors consider the problem of searching for activation in brain images obtained from functional magnetic resonance imaging and the corresponding functional signal detection problem. They develop a Bayesian procedure to detect signals existing within noisy images when the image is modeled as a scale space random field. Their procedure is based on the Radon‐Nikodym derivative, which is used as the Bayes factor for assessing the point null hypothesis of no signal. They apply their method to data from the Montreal Neurological Institute.

15.
An optimal Bayesian decision procedure for testing hypotheses in normal linear models based on intrinsic model posterior probabilities is considered. It is proven that these posterior probabilities are simple functions of the classical F-statistic; thus the evaluation of the procedure can be carried out analytically through the frequentist analysis of the posterior probability of the null. An asymptotic analysis proves that, under mild conditions on the design matrix, the procedure is consistent. For any hypothesis test it is also seen that there is a one-to-one mapping – which we call the calibration curve – between the posterior probability of the null hypothesis and the classical p-value. This curve adds substantial knowledge about the possible discrepancies between the Bayesian and the p-value measures of evidence for hypothesis testing. It permits a better understanding of the serious difficulties that are encountered in linear models when interpreting p-values. A specific illustration of the variable selection problem is given.

16.
Quantitative model validation is playing an increasingly important role in performance and reliability assessment of a complex system whenever computer modelling and simulation are involved. The foci of this paper are to pursue a Bayesian probabilistic approach to quantitative model validation with non-normal data, accounting for data uncertainty, and to investigate the impact of the normality assumption on validation accuracy. The Box–Cox transformation method is employed to convert the non-normal data, with the purpose of facilitating the overall validation assessment of computational models with higher accuracy. Explicit expressions for the interval hypothesis testing-based Bayes factor are derived for the transformed data in the context of univariate and multivariate cases. A Bayesian confidence measure is presented based on the Bayes factor metric. A generalized procedure is proposed to implement the proposed probabilistic methodology for model validation of complicated systems. The classical hypothesis testing method is employed to conduct a comparison study. The impact of the data normality assumption and decision threshold variation on model assessment accuracy is investigated by using both classical and Bayesian approaches. The proposed methodology and procedure are demonstrated with a univariate stochastic damage accumulation model, a multivariate heat conduction problem and a multivariate dynamic system.
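The Box–Cox step can be sketched with the standard power transform and the profile log-likelihood used to choose the exponent λ. This is the textbook transform only, not the paper's validation metric; the grid search and the toy data are illustrative assumptions.

```python
import math

def box_cox(y, lam):
    """Box-Cox power transform of a single positive observation."""
    if y <= 0:
        raise ValueError("Box-Cox requires positive data")
    return math.log(y) if lam == 0 else (y ** lam - 1) / lam

def box_cox_loglik(data, lam):
    """Profile log-likelihood of lambda under the normality assumption
    for the transformed data (up to an additive constant), including
    the Jacobian term (lam - 1) * sum(log y)."""
    n = len(data)
    z = [box_cox(y, lam) for y in data]
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / n
    return -n / 2 * math.log(var) + (lam - 1) * sum(math.log(y) for y in data)

# Roughly multiplicative (log-normal-like) data: the maximising lambda
# on a coarse grid sits near 0, pointing to a log transform.
data = [0.2, 0.5, 1.1, 2.3, 4.8, 9.9]
best = max((l / 10 for l in range(-20, 21)),
           key=lambda lam: box_cox_loglik(data, lam))
print(best)
```

After transforming, the paper's interval-hypothesis Bayes factor is applied to the (approximately normal) transformed data, which is why the quality of the λ choice feeds directly into validation accuracy.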

17.
We consider the problem of deriving Bayesian inference procedures via the concept of relative surprise. The mathematical concept of surprise has been developed by I.J. Good in a long sequence of papers. We make a modification to this development that permits the avoidance of a serious defect; namely, the change of variable problem. We apply relative surprise to the development of estimation, hypothesis testing and model checking procedures. Important advantages of the relative surprise approach to inference include the lack of dependence on a particular loss function and complete freedom to the statistician in the choice of prior for hypothesis testing problems. Links are established with common Bayesian inference procedures such as highest posterior density regions, modal estimates and Bayes factors. From a practical perspective new inference procedures arise that possess good properties.

18.
A fundamental theorem in hypothesis testing is the Neyman‐Pearson (N‐P) lemma, which creates the most powerful test of simple hypotheses. In this article, we establish a Bayesian framework of hypothesis testing, and extend the Neyman‐Pearson lemma to create the Bayesian most powerful test of general hypotheses, thus providing optimality theory to determine thresholds of Bayes factors. Unlike conventional Bayes tests, the proposed Bayesian test is able to control the type I error.

19.
Fisher succeeded early on in redefining Student’s t-distribution in geometrical terms on a central hypersphere. Intriguingly, a noncentral analytical extension for this fundamental Fisher–Student’s central hypersphere h-distribution does not exist. We therefore set out to derive the noncentral h-distribution and use it to graphically illustrate the limitations of the Neyman–Pearson null hypothesis significance testing framework and the strengths of the Bayesian statistical hypothesis analysis framework on the hypersphere polar axis, a compact nontrivial one-dimensional parameter space. Using a geometrically meaningful maximal entropy prior, we requalify the apparent failure of an important psychological science reproducibility project. We proceed to show that the Bayes factor appropriately models the two-sample t-test p-value density of a gene expression profile produced by the high-throughput genomic-scale microarray technology, and provides a simple expression for a local false discovery rate addressing the multiple hypothesis testing problem brought about by such a technology.

20.
The authors consider the correlation between two arbitrary functions of the data and a parameter when the parameter is regarded as a random variable with given prior distribution. They show how to compute such a correlation and use closed form expressions to assess the dependence between parameters and various classical or robust estimators thereof, as well as between p‐values and posterior probabilities of the null hypothesis in the one‐sided testing problem. Other applications involve the Dirichlet process and stationary Gaussian processes. Using this approach, the authors also derive a general nonparametric upper bound on Bayes risks.
