Similar Literature
20 similar documents found (search time: 31 ms)
1.
Bayesian inclusion probabilities have become a popular tool for variable assessment. From a frequentist perspective, these probabilities are often difficult to evaluate, as typically no Type I error rates are considered, nor is the power of the methods explored. This paper considers how a frequentist may evaluate Bayesian inclusion probabilities for screening predictors. The evaluation covers both unrestricted and restricted model spaces and develops a framework in which a frequentist can use inclusion probabilities that preserve Type I error rates. This framework is then applied to an analysis of Arabidopsis thaliana data to determine quantitative trait loci associated with cotyledon opening angle.
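A minimal sketch of the kind of frequentist calibration described above, under illustrative assumptions: the inclusion probability is approximated by a BIC-based model weight (a crude stand-in for a full Bayesian variable-selection posterior, not the paper's method), and its rejection threshold is read off the simulated null distribution so the Type I error rate is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)

def inclusion_prob(y, x):
    """BIC-based approximation to the posterior inclusion probability of x,
    comparing the intercept-only model with the model y ~ x."""
    n = len(y)
    rss0 = np.sum((y - y.mean()) ** 2)
    b = np.polyfit(x, y, 1)
    rss1 = np.sum((y - np.polyval(b, x)) ** 2)
    bic0 = n * np.log(rss0 / n)               # intercept-only model
    bic1 = n * np.log(rss1 / n) + np.log(n)   # one extra parameter
    w = np.exp(-0.5 * (bic1 - bic0))
    return w / (1.0 + w)

# Null calibration: simulate data where x has no effect, then take the
# (1 - alpha) quantile of the inclusion probability as the threshold.
n, alpha = 50, 0.05
null_probs = []
for _ in range(2000):
    x = rng.normal(size=n)
    y = rng.normal(size=n)        # y independent of x: the null is true
    null_probs.append(inclusion_prob(y, x))
threshold = np.quantile(null_probs, 1 - alpha)
print(f"flag x when its inclusion probability exceeds {threshold:.3f}")
```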

2.
The objective of this article is to propose and study frequentist tests that have maximum average power, averaging with respect to some specified weight function. First, some relationships between these tests, called maximum average-power (MAP) tests, and most powerful or uniformly most powerful tests are presented. Second, the existence of a maximum average-power test for any hypothesis testing problem is shown. Third, an MAP test for any hypothesis testing problem with a simple null hypothesis is constructed, including some interesting classical examples. Fourth, an MAP test for a hypothesis testing problem with a composite null hypothesis is discussed. For any one-parameter exponential family, a commonly used uniformly most powerful unbiased (UMPU) test is shown to also be an MAP test with respect to a rich class of weight functions. Finally, some remarks are given to conclude the article.
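As a toy illustration of averaging power against a weight function (not the article's construction): for the one-sided z-test of H0: mu = 0 at level alpha with n observations, the power at mu is Phi(sqrt(n) * mu - z_crit), and the average power is the integral of power times weight. The half-normal weight below is an arbitrary illustrative choice.

```python
import numpy as np
from scipy import integrate, stats

n, alpha = 25, 0.05
z_crit = stats.norm.ppf(1 - alpha)

def power(mu):
    # Power of the one-sided z-test of H0: mu = 0 at effect size mu.
    return stats.norm.sf(z_crit - np.sqrt(n) * mu)

# Illustrative weight over the alternative: half-normal on mu > 0.
weight = lambda mu: 2 * stats.norm.pdf(mu, scale=0.5)

avg_power, _ = integrate.quad(lambda mu: power(mu) * weight(mu), 0, np.inf)
print(f"average power under this weight: {avg_power:.3f}")
```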

3.
This article addresses the problem of testing whether the vectors of regression coefficients are equal for two independent normal regression models when the error variances are unknown. This problem poses severe difficulties for both the frequentist and Bayesian approaches to statistical inference. In the former approach, normal hypothesis testing theory does not apply because of the unrelated variances. In the latter, the prior distributions typically used for the parameters are improper, and hence the Bayes factor-based solution cannot be used. We propose a Bayesian solution to this problem in which no subjective input is considered. We first generate "objective" proper prior distributions (intrinsic priors) for which the Bayes factor and model posterior probabilities are well defined. The posterior probability of each model is used as a model selection tool. This consistent procedure for testing hypotheses is compared with some of the frequentist approximate tests proposed in the literature.

4.
In this paper, we develop noninformative priors for linear combinations of the means of normal populations. It turns out that, among the reference priors, the one-at-a-time reference prior satisfies a second order probability matching criterion. Moreover, the second order probability matching priors match alternative coverage probabilities up to the second order and are also HPD matching priors. Our simulation study indicates that the one-at-a-time reference prior performs better than the other reference priors in terms of matching the target coverage probabilities in a frequentist sense.
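The "frequentist matching" property can be checked by simulation: draw data repeatedly from a fixed truth, compute the posterior credible interval under the candidate prior, and record how often it covers. A generic sketch for a single normal mean under the flat prior with known variance, a toy case where matching is exact; the paper's one-at-a-time reference prior for linear combinations of means is more involved.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu_true, sigma, n, cred = 1.0, 2.0, 20, 0.95
z = stats.norm.ppf(0.5 + cred / 2)

cover, reps = 0, 5000
for _ in range(reps):
    x = rng.normal(mu_true, sigma, size=n)
    # Posterior under a flat prior (sigma known) is N(xbar, sigma^2 / n).
    half = z * sigma / np.sqrt(n)
    cover += x.mean() - half <= mu_true <= x.mean() + half
print(f"frequentist coverage of the {cred:.0%} credible interval: {cover/reps:.3f}")
```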

5.
We present a mathematical theory of objective, frequentist chance phenomena that uses as a model a set of probability measures. In this work, sets of measures are not viewed as a statistical compound hypothesis or as a tool for modeling imprecise subjective behavior. Instead we use sets of measures to model stable (although not stationary in the traditional stochastic sense) physical sources of finite time series data that have highly irregular behavior. Such models give a coarse-grained picture of the phenomena, keeping track of the range of the possible probabilities of the events. We present methods to simulate finite data sequences coming from a source modeled by a set of probability measures, and to estimate the model from finite time series data. The estimation of the set of probability measures is based on the analysis of a set of relative frequencies of events taken along subsequences selected by a collection of rules. In particular, we provide a universal methodology for finding a family of subsequence selection rules that can estimate any set of probability measures with high probability.
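The estimation idea can be illustrated on a binary series: each selection rule picks out a subsequence (for example, the positions following a 1), and the spread of the relative frequencies across rules suggests the range of probabilities in the set. The toy source and the rules below are illustrative, not the paper's universal family.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy non-stationary source: the success probability drifts over time,
# so no single measure fits, but a set of measures can bound it.
p = 0.3 + 0.4 * (np.arange(10_000) % 2000 > 1000)
x = rng.random(10_000) < p

rules = {
    "all":        lambda x: np.ones(len(x), dtype=bool),
    "after a 1":  lambda x: np.r_[False, x[:-1]],
    "even index": lambda x: np.arange(len(x)) % 2 == 0,
}
freqs = {name: x[sel(x)].mean() for name, sel in rules.items()}
print(freqs)  # the spread over rules crudely brackets the probability range
```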

6.
Since the 1960s, the Bayesian case against frequentist inference has been partly built on several "classic" examples which are devised to show how frequentist inference procedures can give rise to fallacious results; see Berger and Wolpert (1988) [2]. The primary aim of this note is to revisit one of these examples, the Berger location model, which is supposed to demonstrate the fallaciousness of frequentist Confidence Interval (CI) estimation. A closer look at the example, however, reveals that the fallacious results stem primarily from the problematic nature of the example itself, since it is based on a non-regular probability model that enables one to (indirectly) assign probabilities to the unknown parameter. Moreover, the proposed confidence set is not a proper frequentist CI in the sense that it is not defined in terms of legitimate error probabilities.

7.
Transition models are an important framework for modeling longitudinal categorical data. A relevant issue in applying these models is the condition of stationarity, or homogeneity of the transition probabilities over time. We propose two tests to assess stationarity in transition models, a Wald test and a likelihood-ratio test, which, in contrast to the classical test available in the literature, make no use of the transition probabilities themselves, relying only on the estimated parameters of the models. In this paper, we present two motivating studies with ordinal longitudinal data, to which proportional odds transition models are fitted and the two proposed tests as well as the classical test are applied. Additionally, their performances are assessed through simulation studies. The results show that the proposed tests perform well, offering better control of the Type I error, and their power functions are asymptotically equivalent. Also, the correlations between the Wald, likelihood-ratio, and classical test statistics are positive and large, indicating general concordance. Both of the proposed tests are also more flexible and can be applied in studies with qualitative and quantitative covariates.
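For contrast, here is a sketch of the classical count-based likelihood-ratio test of stationarity that the abstract refers to, in a toy two-state, two-period setup with made-up counts: compare period-specific transition matrices against a single pooled (stationary) matrix.

```python
import numpy as np
from scipy import stats

# Transition counts n[t, i, j] for period t, from state i to state j.
n = np.array([[[40, 10],      # period 1
               [12, 38]],
              [[25, 25],      # period 2: transitions look different
               [30, 20]]], dtype=float)

def loglik(counts):
    # Multinomial log-likelihood at the MLE (row-wise proportions).
    p = counts / counts.sum(axis=-1, keepdims=True)
    return np.sum(counts * np.log(np.where(counts > 0, p, 1.0)))

ll_unrestricted = loglik(n)              # a separate matrix per period
ll_restricted = loglik(n.sum(axis=0))    # one pooled, stationary matrix
lr = 2 * (ll_unrestricted - ll_restricted)
df = (2 - 1) * 2 * (2 - 1)               # (periods - 1) * states * (states - 1)
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, df):.4f}")
```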

8.
Rank correlations are shown to be generally robust in the sense that the tests for independence, which they naturally define, have weakly equicontinuous error probabilities. The ordinary sample correlation coefficient is not robust in this respect.
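As a quick sketch of a rank-based independence test of the kind covered by this result, here is Spearman's rho with a permutation null; the robustness claim concerns the error probabilities of such tests, not this particular implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(size=30)
y = rng.normal(size=30)          # independent of x in this toy example

rho_obs = stats.spearmanr(x, y)[0]
perm = [stats.spearmanr(x, rng.permutation(y))[0] for _ in range(2000)]
p_value = np.mean(np.abs(perm) >= abs(rho_obs))
print(f"Spearman rho = {rho_obs:.3f}, permutation p = {p_value:.3f}")
```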

9.
The performance of Box-Cox power transformations in classification using Hinkley's (1975) method is studied. Misclassification probabilities before and after transformation are compared. It is found that the use of Box-Cox transformations can sometimes substantially reduce the error probabilities. Estimates of error probabilities are obtained and certain properties are derived. Examples for a number of distributions are given.
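A rough sketch of the effect being studied (not Hinkley's method itself): transform positive data with a maximum-likelihood Box-Cox power, then classify with a simple nearest-mean rule and compare the error rates before and after. The skewed classes and the classifier are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Two skewed classes: lognormal with different location parameters.
x0 = rng.lognormal(mean=0.0, sigma=0.6, size=300)
x1 = rng.lognormal(mean=0.8, sigma=0.6, size=300)

def nearest_mean_error(a, b):
    # Classify by distance to the class means; return the training error rate.
    m0, m1 = a.mean(), b.mean()
    err0 = np.mean(np.abs(a - m0) > np.abs(a - m1))
    err1 = np.mean(np.abs(b - m1) > np.abs(b - m0))
    return (err0 + err1) / 2

# Estimate one Box-Cox lambda from the pooled sample, apply to both classes.
pooled, lam = stats.boxcox(np.r_[x0, x1])
t0, t1 = pooled[:len(x0)], pooled[len(x0):]

print(f"error before: {nearest_mean_error(x0, x1):.3f}")
print(f"error after : {nearest_mean_error(t0, t1):.3f}")
```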

10.
Recently, the field of multiple hypothesis testing has experienced great expansion, largely because of new methods developed in the field of genomics. These new methods allow scientists to process thousands of hypothesis tests simultaneously. The frequentist approach to this problem uses different testing error measures that allow the Type I error rate to be controlled at a desired level. Alternatively, in this article, a Bayesian hierarchical model based on mixture distributions and an empirical Bayes approach are proposed in order to produce a list of rejected hypotheses that will be declared significant and interesting for more detailed subsequent analysis. In particular, we develop a straightforward implementation of a Gibbs sampling scheme in which all the conditional posterior distributions are explicit. The results are compared with the frequentist False Discovery Rate (FDR) methodology. Simulation examples show that our model improves on the FDR procedure in the sense that it reduces the percentage of false negatives while keeping an acceptable percentage of false positives.
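A compact sketch of a Gibbs sampler of the general flavor described, a two-component normal mixture on z-scores where every conditional is explicit; the paper's hierarchical model and empirical Bayes steps are not reproduced, and all priors, hyperparameters, and starting values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy z-scores: 900 nulls from N(0,1) mixed with 100 signals from N(3,1).
z = np.r_[rng.normal(0, 1, 900), rng.normal(3, 1, 100)]
n = len(z)

pi, mu1 = 0.1, 2.0                          # illustrative starting values
for sweep in range(500):
    # 1) Latent indicators: explicit Bernoulli conditionals via Bayes' rule.
    f0 = np.exp(-0.5 * z ** 2)              # null density (constants cancel)
    f1 = np.exp(-0.5 * (z - mu1) ** 2)      # signal density
    w = pi * f1 / (pi * f1 + (1 - pi) * f0)
    g = rng.random(n) < w                   # g[i] = True if test i is a signal
    # 2) Mixing weight: Beta(1,1) prior gives an explicit Beta conditional.
    pi = rng.beta(1 + g.sum(), 1 + n - g.sum())
    # 3) Signal mean: N(0, 10^2) prior, unit-variance data, explicit normal.
    prec = g.sum() + 1 / 100
    mu1 = rng.normal(z[g].sum() / prec, 1 / np.sqrt(prec))

# w from the last sweep approximates P(signal | z); averaging w over sweeps
# would be the proper Monte Carlo estimate.
print(f"pi ~ {pi:.3f}, mu1 ~ {mu1:.2f}, flagged at w > 0.8: {(w > 0.8).sum()}")
```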

11.
In this paper, we propose a Bayesian two-stage design with a changing hypothesis test that bridges a single-arm study and a two-arm randomized trial within one phase II clinical trial, based on continuous rather than binary endpoints. The design is calibrated with respect to both frequentist and Bayesian error rates. It minimizes the Bayesian expected sample size when the new candidate treatment has low or high efficacy, subject to constraints on the error rates from both the frequentist and Bayesian perspectives. Tables of designs for various combinations of design parameters are also provided.
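A loose simulation sketch of a two-stage, continuous-endpoint rule of this general kind: stop early for futility or efficacy by thresholding a posterior probability, otherwise proceed to the second stage. The single-arm simplification, flat prior, unit variance, sample sizes, and thresholds are all illustrative assumptions, not the paper's calibrated design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def trial(mu_true, n1=20, n2=40, lo=0.1, hi=0.9):
    """One simulated trial; returns (decision, total sample size).

    Flat prior on mu, unit variance: the posterior after n observations is
    N(xbar, 1/n), so P(mu > 0 | data) = Phi(xbar * sqrt(n)).
    """
    x = rng.normal(mu_true, 1, n1)
    p_eff = stats.norm.cdf(x.mean() * np.sqrt(n1))
    if p_eff < lo:
        return "stop early: futility", n1
    if p_eff > hi:
        return "stop early: efficacy", n1
    x = np.r_[x, rng.normal(mu_true, 1, n2)]
    p_eff = stats.norm.cdf(x.mean() * np.sqrt(n1 + n2))
    return ("reject H0" if p_eff > 0.95 else "accept H0"), n1 + n2

sizes = [trial(mu_true=0.0)[1] for _ in range(4000)]
print(f"expected sample size under the null: {np.mean(sizes):.1f}")
```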

12.
Case-control studies of genetic polymorphisms and gene-environment interactions are reporting large numbers of statistically significant associations, many of which are likely to be spurious. This problem reflects the low prior probability that any one null hypothesis is false, and the large number of test results reported for a given study. In a Bayesian approach to the low prior probabilities, Wacholder et al. (2004) suggest supplementing the p-value for a hypothesis with its posterior probability given the study data. In a frequentist approach to the test multiplicity problem, Benjamini & Hochberg (1995) propose a hypothesis-rejection rule that provides greater statistical power by controlling the false discovery rate rather than the family-wise error rate controlled by the Bonferroni correction. This paper defines a Bayes false discovery rate and proposes a Bayes-based rejection rule for controlling it. The method, which combines the Bayesian approach of Wacholder et al. with the frequentist approach of Benjamini & Hochberg, is used to evaluate the associations reported in a case-control study of breast cancer risk and genetic polymorphisms of genes involved in the repair of double-strand DNA breaks.
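For reference, a sketch of the Benjamini & Hochberg step-up rule named above, together with a posterior-probability analogue: reject while the running mean of the posterior null probabilities stays below q, a common Bayesian FDR device; the paper's own Bayes false discovery rate rule may differ in detail.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Step-up rule: reject the k smallest p-values, where k is the largest
    i with p_(i) <= q * i / m. Controls the FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def bayes_fdr_rule(p_null, q=0.05):
    """Reject the hypotheses with the smallest posterior null probabilities
    while their running mean (a posterior FDR estimate) stays <= q."""
    p = np.asarray(p_null, dtype=float)
    order = np.argsort(p)
    running = np.cumsum(p[order]) / np.arange(1, len(p) + 1)
    k = np.max(np.nonzero(running <= q)[0]) + 1 if (running <= q).any() else 0
    reject = np.zeros(len(p), dtype=bool)
    reject[order[:k]] = True
    return reject

print(benjamini_hochberg([0.001, 0.012, 0.02, 0.04, 0.3]))
```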

13.
Nonparametric predictive inference (NPI) is a powerful frequentist statistical framework based only on an exchangeability assumption for future and past observations, made possible by the use of lower and upper probabilities. In this article, NPI is presented for ordinal data, which are categorical data with an ordering of the categories. The method uses a latent variable representation of the observations and categories on the real line. Lower and upper probabilities for events involving the next observation are presented and briefly compared to NPI for non-ordered categorical data. As an application, the comparison of multiple groups of ordinal data is presented.
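For threshold events about the next observation, NPI-style bounds typically take the form count/(n+1) versus (count+1)/(n+1). The sketch below computes bounds of that assumed form for the event "the next observation falls in category k or below", as an illustration rather than the article's exact derivation; the counts are made up.

```python
import numpy as np

counts = np.array([5, 9, 4, 2])   # ordinal categories c1 < c2 < c3 < c4
n = counts.sum()
cum = np.cumsum(counts)

for k, ck in enumerate(cum[:-1], start=1):
    lower = ck / (n + 1)          # assumed NPI-style lower probability
    upper = (ck + 1) / (n + 1)    # assumed NPI-style upper probability
    print(f"P(next observation <= category {k}) in [{lower:.3f}, {upper:.3f}]")
```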

14.
The unit root problem plays a central role in empirical applications in the time series econometric literature. However, significance tests developed in the frequentist tradition present various conceptual problems that jeopardize their power, especially for small samples. Bayesian alternatives, although having interesting interpretations and being precisely defined, run into problems because the hypothesis of interest in this case is sharp, or precise. The Bayesian significance test used in this article for the unit root hypothesis is based solely on the posterior density function, without the need to impose positive probabilities on sets of zero Lebesgue measure. Furthermore, it is conducted under strict observance of the likelihood principle. It was designed mainly for testing sharp null hypotheses and is called the Full Bayesian Significance Test (FBST).
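The FBST evidence value is the posterior probability of the set of parameter values whose posterior density does not exceed the supremum of the density over the null set. A minimal sketch for a sharp null mu = 0 with a stand-in normal posterior, computed from posterior draws; the article's application to the unit root hypothesis in an autoregressive model is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
post = stats.norm(loc=0.4, scale=0.2)        # stand-in posterior for mu
draws = post.rvs(100_000, random_state=rng)

dens_at_null = post.pdf(0.0)                 # sup of density over H0: mu = 0
ev = np.mean(post.pdf(draws) <= dens_at_null)  # FBST evidence value for H0
print(f"evidence in favour of H0 (e-value): {ev:.4f}")
```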

15.
We present global and local likelihood-based tests to evaluate stationarity in transition models. Three motivational studies are considered. A simulation study was carried out to assess the performance of the proposed tests. The results showed that they perform well, controlling the Type I error (especially for ordinal responses) and the Type II error (especially in the nominal case). Asymptotically, their performance is close to that of the classical test. They can be executed in a single framework without the need to estimate the transition probabilities, can incorporate both categorical and continuous covariates, and can be used to identify sources of non-stationarity.

16.
Tail probabilities from three independent hypothesis tests can be combined to form a test statistic of the form P1·P2^θ2·P3^θ3. The null distribution of the combined test statistic is presented, and critical values for α = 0.01 and 0.05 are provided. The power of this test is discussed for the special case of three independent F-tests.
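A sketch of how such a combined statistic can be used: compute P1·P2^θ2·P3^θ3 for given weights and calibrate its critical value by Monte Carlo under the null, where each independent tail probability is Uniform(0, 1). The weights and observed p-values below are arbitrary; the paper derives the null distribution and critical values exactly.

```python
import numpy as np

rng = np.random.default_rng(8)
t2, t3 = 0.8, 0.6                     # illustrative weights theta2, theta3
alpha = 0.05

# Null distribution: independent p-values are Uniform(0, 1).
u = rng.random((100_000, 3))
null_stat = u[:, 0] * u[:, 1] ** t2 * u[:, 2] ** t3
crit = np.quantile(null_stat, alpha)  # reject when the statistic is small

p_obs = np.array([0.04, 0.20, 0.11])
stat = p_obs[0] * p_obs[1] ** t2 * p_obs[2] ** t3
print(f"statistic = {stat:.4f}, critical value = {crit:.4f}, "
      f"reject: {stat < crit}")
```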

17.
This paper aims to connect Bayesian analysis and frequentist theory in the context of multiple comparisons. The authors show that when testing the equality of two sample means, the posterior probability of the one-sided alternative hypothesis, defined as a half-space, shares with the frequentist P-value the property of uniformity under the null hypothesis. Ultimately, the posterior probability may thus be used in the same spirit as a P-value in the Benjamini-Hochberg procedure, or in any of its extensions.
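The uniformity property is easy to verify by simulation in the simplest case: with a flat prior on a normal mean, the posterior probability of the one-sided alternative mu > 0 equals Phi(z), which is Uniform(0, 1) when mu = 0, so one minus it can play the role of a p-value in the Benjamini-Hochberg procedure. A sketch, with the paper's two-sample setting reduced to one normal mean:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n, reps = 30, 10_000

z = rng.normal(0, 1, (reps, n)).mean(axis=1) * np.sqrt(n)  # null: mu = 0
post_alt = stats.norm.cdf(z)       # P(mu > 0 | data) under a flat prior

# Under H0 these posterior probabilities should be Uniform(0, 1):
print(stats.kstest(post_alt, "uniform"))
```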

18.
An optimal Bayesian decision procedure for testing hypotheses in normal linear models, based on intrinsic model posterior probabilities, is considered. It is proven that these posterior probabilities are simple functions of the classical F-statistic, so the procedure can be evaluated analytically through a frequentist analysis of the posterior probability of the null. An asymptotic analysis proves that, under mild conditions on the design matrix, the procedure is consistent. For any testing problem, it is also seen that there is a one-to-one mapping, which we call the calibration curve, between the posterior probability of the null hypothesis and the classical p-value. This curve adds substantial knowledge about the possible discrepancies between the Bayesian and p-value measures of evidence for hypothesis testing. It permits a better understanding of the serious difficulties encountered in interpreting p-values in linear models. A specific illustration for the variable selection problem is given.
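The paper's calibration curve comes from its intrinsic-prior posterior probabilities. As a generic stand-in, the well-known Sellke-Bayarri-Berger lower bound -e·p·log(p) on the Bayes factor gives a similar p-value-to-posterior-probability mapping and conveys the same warning about over-interpreting p-values; this is a plainly swapped-in illustration, not the paper's formula.

```python
import numpy as np

def posterior_prob_null_lower_bound(p, prior_odds=1.0):
    """Sellke-Bayarri-Berger calibration: a lower bound on P(H0 | data)
    as a function of the p-value (valid for p < 1/e), assuming equal
    prior odds by default. Not the intrinsic-prior mapping of the paper."""
    p = np.asarray(p, dtype=float)
    bf_bound = -np.e * p * np.log(p)     # lower bound on BF of H0 vs H1
    return 1.0 / (1.0 + 1.0 / (prior_odds * bf_bound))

for p in (0.05, 0.01, 0.001):
    print(f"p = {p:<6} ->  P(H0 | data) >= {posterior_prob_null_lower_bound(p):.3f}")
```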

19.
In this paper, we develop Bayes factor based testing procedures for the presence of a correlation or a partial correlation. The proposed Bayesian tests are obtained by restricting the class of alternative hypotheses to maximize the probability of rejecting the null hypothesis when the Bayes factor is larger than a specified threshold. It turns out that they depend simply on the frequentist t-statistics and the associated critical values, and can thus be calculated easily in a spreadsheet such as Excel, in fact by just adding one more step after the frequentist correlation tests have been performed. In addition, they yield decisions identical to those of the frequentist paradigm, provided that the evidence threshold of the Bayesian tests is determined by the significance level of the frequentist paradigm. We illustrate the performance of the proposed procedures through simulated and real-data examples.
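The frequentist building block referred to above is the t-statistic of the sample correlation; a sketch of that test is below. A Bayes-factor-based test of the paper's type would then compare |t| with a critical value derived from the chosen evidence threshold, which is paper-specific and not shown here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
x = rng.normal(size=40)
y = 0.3 * x + rng.normal(size=40)

n = len(x)
r = np.corrcoef(x, y)[0, 1]
t = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)   # t-statistic of r, df = n - 2
p = 2 * stats.t.sf(abs(t), df=n - 2)
print(f"r = {r:.3f}, t = {t:.3f}, p = {p:.4f}")
```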

20.
The paper develops some objective priors for the common mean in the one-way random effects model with heterogeneous error variances. We derive first and second order matching priors and reference priors. It turns out that the second order matching prior matches the alternative coverage probabilities up to the second order and is also an HPD matching prior. However, the derived reference priors satisfy only a first order matching criterion. Our simulation studies indicate that the second order matching prior performs better than the reference prior and the Jeffreys prior in terms of matching the target coverage probabilities in a frequentist sense. We also illustrate our results using real data.
