Similar Articles (20 results)
1.
The problem of choosing the loss function in the Bayesian problem of testing many hypotheses is considered. It is shown that linear and quadratic loss functions are the most commonly used. For any loss function, the risk function in the Bayesian problem of testing many hypotheses contains errors of two kinds. The Bayesian decision rule minimizes the total effect of these errors, but the share of each in the optimal value of the risk function is unknown. In many important problems, the consequences of the two kinds of error differ significantly, so it is necessary to bound the most undesirable kind of error and to minimize the other. To this end, this article states and solves conditional Bayesian problems of testing many hypotheses. The results of a sensitivity analysis of the classical and conditional Bayesian problems are given, and their advantages and drawbacks are considered.
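As a hedged illustration of the constrained idea (not the article's general algorithm): for two simple hypotheses one can fix the threshold so that the undesirable error is bounded at a chosen level, the other error then being minimized, Neyman–Pearson style. All densities and levels below are hypothetical.

# Sketch: bound the Type I error at alpha and minimize the Type II error
# for H0: X ~ N(0,1) vs H1: X ~ N(1,1). Hypothetical example only.
from scipy.stats import norm

alpha = 0.05                       # bound on the undesirable error (Type I)
t = norm.ppf(1 - alpha, loc=0)     # reject H0 when x > t
type_I = 1 - norm.cdf(t, loc=0)    # = alpha by construction
type_II = norm.cdf(t, loc=1)       # minimized given the constraint
print(f"threshold={t:.3f}, Type I={type_I:.3f}, Type II={type_II:.3f}")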

2.
Traditionally, noninferiority hypotheses have been tested using a frequentist method with a fixed margin. Given that information for the control group is often available from previous studies, it is interesting to consider a Bayesian approach in which information is “borrowed” for the control group to improve efficiency. However, construction of an appropriate informative prior can be challenging. In this paper, we consider a hybrid Bayesian approach for testing noninferiority hypotheses in studies with a binary endpoint. To account for heterogeneity between the historical information and the current trial for the control group, a dynamic P value–based power prior parameter is proposed to adjust the amount of information borrowed from the historical data. This approach extends the simple test‐then‐pool method to allow a continuous discounting power parameter. An adjusted α level is also proposed to better control the type I error. Simulations are conducted to investigate the performance of the proposed method and to make comparisons with other methods including test‐then‐pool and hierarchical modeling. The methods are illustrated with data from vaccine clinical trials.
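A minimal sketch of the power-prior mechanics for a binary control endpoint, assuming a Beta-binomial model; the mapping from the heterogeneity p-value to the discount parameter below is a placeholder, not the paper's exact dynamic parameter, and all counts are hypothetical.

# Power prior: discount historical control data by delta in [0, 1],
# with delta driven by a two-proportion homogeneity p-value.
import numpy as np
from scipy import stats

y0, n0 = 45, 100      # historical control: events, sample size (hypothetical)
y,  n  = 40, 100      # current control arm (hypothetical)

p_pool = (y0 + y) / (n0 + n)                      # pooled rate for the z-test
se = np.sqrt(p_pool * (1 - p_pool) * (1/n0 + 1/n))
z = (y0/n0 - y/n) / se
p_het = 2 * stats.norm.sf(abs(z))                 # heterogeneity p-value

delta = p_het                  # placeholder rule: more conflict, less borrowing
a = 1 + delta * y0 + y         # Beta(1,1) prior plus discounted historical data
b = 1 + delta * (n0 - y0) + (n - y)
print(f"delta={delta:.2f}, posterior control rate ~ Beta({a:.1f}, {b:.1f})")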

3.
This article addresses the problem of testing whether the vectors of regression coefficients are equal for two independent normal regression models when the error variances are unknown. This problem poses severe difficulties both to the frequentist and the Bayesian approaches to statistical inference. In the former approach, normal hypothesis testing theory does not apply because of the unrelated variances. In the latter, the prior distributions typically used for the parameters are improper and hence the Bayes factor-based solution cannot be used. We propose a Bayesian solution to this problem in which no subjective input is considered. We first generate “objective” proper prior distributions (intrinsic priors) for which the Bayes factor and model posterior probabilities are well defined. The posterior probability of each model is used as a model selection tool. This consistent procedure of testing hypotheses is compared with some of the frequentist approximate tests proposed in the literature.

4.
The paper deals with the problem of electronic verification of people on the basis of measurement information from a fingerprint reader, and with new approaches to its solution. The proposed method guarantees that the error probabilities of both kinds are restricted to desired levels when deciding whether to permit or reject a request for service in the system. On the basis of an investigation of real data obtained from a real biometric system, the choice of distribution laws is substantiated and the corresponding parameter estimates are obtained. Using the chosen distribution laws, namely the normal distribution for measurement results of people having access to the system and the beta distribution for people having no such access, an optimal rule based on the Constrained Bayesian Method (CBM) for deciding whether to grant access to users of the system is justified. The CBM, the Neyman–Pearson and classical Bayes methods are investigated, and their strengths and weaknesses are examined. Computational results obtained by direct computation, by simulation and from real data fully confirm the suppositions made and the high quality of the resulting verification.
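A sketch of the flavour of such a rule (not the paper's CBM itself): model genuine-user scores as normal and impostor scores as beta, then choose the accept threshold so that the false-accept probability is bounded. All parameter values are hypothetical.

# Verification threshold bounding the false-accept rate (FAR).
from scipy import stats

genuine = stats.norm(loc=0.8, scale=0.05)   # fitted to enrolled users (hypothetical)
impostor = stats.beta(a=2, b=5)             # fitted to non-users (hypothetical)

far_bound = 1e-3                            # required bound on false accepts
t = impostor.ppf(1 - far_bound)             # accept when score > t
frr = genuine.cdf(t)                        # resulting false-reject rate
print(f"threshold={t:.3f}, FAR<={far_bound}, FRR={frr:.4f}")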

5.
One of the most important topics in manufacturing industries is the evaluation of the performance lifetimes of products. Based on a given lifetime performance index, this paper deals with evaluating the performance of a process subject to a given lower specification limit. We confine ourselves to progressively first-failure-censored data coming from a common Pareto distribution. With both the Bayesian and the non-Bayesian approaches investigated here, we pay particular attention to Bayesian estimators under balanced-type loss functions. The results are presented under the balanced versions of two well-known loss functions, namely the squared error loss and Varian's linear-exponential (LINEX) loss. Moreover, based on the Bayesian and the non-Bayesian approaches, the problem of testing hypotheses on the lifetime performance index is studied. A simulation study is performed to assess the obtained results. Finally, two illustrative examples are given.
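For orientation, the general form of Bayes estimators under balanced loss is simple; the sketch below states the standard balanced squared error and balanced LINEX estimators, not the paper's Pareto-specific formulae, and the numbers plugged in are hypothetical.

# Bayes estimators under balanced loss. delta0 is a target estimator
# (e.g. the MLE); omega weights closeness to delta0 vs. to theta.
import numpy as np

def balanced_squared_error(omega, delta0, post_mean):
    # Bayes estimator under balanced squared error loss.
    return omega * delta0 + (1 - omega) * post_mean

def balanced_linex(omega, a, delta0, post_mgf_neg_a):
    # Bayes estimator under balanced LINEX loss with shape a;
    # post_mgf_neg_a = E[exp(-a*theta) | data].
    return -np.log(omega * np.exp(-a * delta0)
                   + (1 - omega) * post_mgf_neg_a) / a

# Hypothetical inputs: MLE 1.2, posterior mean 1.0, E[e^{-0.5 theta}|x] = 0.62.
print(balanced_squared_error(0.3, 1.2, 1.0))
print(balanced_linex(0.3, 0.5, 1.2, 0.62))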

6.
Consider testing multiple hypotheses using tests that can only be evaluated by simulation, such as permutation tests or bootstrap tests. This article introduces MMCTest , a sequential algorithm that gives, with arbitrarily high probability, the same classification as a specific multiple testing procedure applied to ideal p‐values. The method can be used with a class of multiple testing procedures that include the Benjamini and Hochberg false discovery rate procedure and the Bonferroni correction controlling the familywise error rate. One of the key features of the algorithm is that it stops sampling for all the hypotheses that can already be decided as being rejected or non‐rejected. MMCTest can be interrupted at any stage and then returns three sets of hypotheses: the rejected, the non‐rejected and the undecided hypotheses. A simulation study motivated by actual biological data shows that MMCTest is usable in practice and that, despite the additional guarantee, it can be computationally more efficient than other methods.  相似文献   
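A greatly simplified sketch of the underlying idea (not the published algorithm): attach a confidence interval to each Monte Carlo p-value estimate and stop sampling a hypothesis once its whole interval falls on one side of the threshold, here a Bonferroni cutoff. The counts below are hypothetical.

# Classify hypotheses as rejected / non-rejected / undecided from
# Monte Carlo exceedance counts, using Clopper-Pearson intervals.
from scipy.stats import beta

def classify(exceed, draws, thresh, eps=1e-3):
    lo = beta.ppf(eps/2, exceed, draws - exceed + 1) if exceed > 0 else 0.0
    hi = beta.ppf(1 - eps/2, exceed + 1, draws - exceed)
    if hi < thresh:
        return "rejected"       # sampling for this hypothesis can stop
    if lo > thresh:
        return "non-rejected"   # likewise
    return "undecided"          # keep sampling

m, alpha = 3, 0.05              # hypothetical: 3 hypotheses, Bonferroni
for exceed, draws in [(2, 5000), (400, 5000), (80, 5000)]:
    print(classify(exceed, draws, alpha / m))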

7.
In the Bayesian approach, the Behrens–Fisher problem has been posed as one of estimating the difference of two means. No Bayesian solution to the Behrens–Fisher testing problem has yet been given, perhaps because the conventional priors used are improper. While default Bayesian analysis can be carried out for estimation purposes, it poses difficulties for testing problems. This paper generates sensible intrinsic and fractional prior distributions for the Behrens–Fisher testing problem from the improper priors commonly used for estimation. This allows us to compute the Bayes factor for comparing the null and the alternative hypotheses. This default model selection procedure is compared with a frequentist test and the Bayesian information criterion. We find a discrepancy: the frequentist test and the Bayesian information criterion reject the null hypothesis for data for which the Bayes factor under intrinsic or fractional priors does not.

8.
A common approach to analysing clinical trials with multiple outcomes is to control the probability, for the trial as a whole, of making at least one incorrect positive finding under any configuration of true and false null hypotheses. Popular approaches are Bonferroni corrections or structured approaches such as closed-test procedures. As is well known, such strategies, which control the family-wise error rate, typically reduce the type I error for some or all of the tests of the various null hypotheses to below the nominal level, with a consequent loss of power for individual tests. What is less well appreciated, perhaps, is that depending on the approach and the circumstances, this test-wise loss of power does not necessarily lead to a family-wise loss of power. In fact, it may be possible to increase the overall power of a trial by carrying out tests on multiple outcomes without increasing the probability of making at least one type I error when all null hypotheses are true. We examine two types of problem to illustrate this. Unstructured testing problems arise typically (but not exclusively) when many outcomes are being measured. We consider the case of more than two hypotheses with a Bonferroni approach, assuming for illustration that the correlations of all variables satisfy compound symmetry. Using the device of a latent variable, it is easy to show that power is not reduced as the number of variables tested increases, provided that the common correlation coefficient is not too high (say, less than 0.75). We then consider structured testing problems, where multiplicity typically arises from the comparison of more than two treatments rather than of more than one measurement. We conduct a numerical study and conclude again that power is not reduced as the number of tested variables increases.
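The latent-variable argument is easy to check numerically; a Monte Carlo sketch under hypothetical effect sizes and correlation follows. Disjunctive power (at least one rejection) with a Bonferroni-adjusted one-sided test need not drop as the number of endpoints grows.

# Disjunctive power with k correlated endpoints under compound symmetry.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def disjunctive_power(k, rho, effect, alpha=0.025, nsim=20000):
    z_crit = norm.ppf(1 - alpha / k)          # Bonferroni critical value
    # Compound symmetry via a latent factor: Z = sqrt(rho)*U + sqrt(1-rho)*E.
    u = rng.standard_normal((nsim, 1))
    e = rng.standard_normal((nsim, k))
    z = effect + np.sqrt(rho) * u + np.sqrt(1 - rho) * e
    return np.mean((z > z_crit).any(axis=1))  # P(at least one rejection)

for k in (1, 2, 5, 10):
    print(k, disjunctive_power(k, rho=0.5, effect=2.8))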

9.
In this article, it is shown how to compute, approximately, the probabilities of Type I and Type II errors of sequential Bayesian procedures for testing one-sided null hypotheses. Some theoretical results are obtained first, and an algorithm is then developed for applying them. The prior predictive density plays a central role in this study.
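A simulation-based sketch of what is being approximated (not the article's analytic method): the frequentist error rates of a sequential rule that stops when the posterior probability of either hypothesis exceeds a cutoff, in a conjugate normal model with hypothetical settings.

# Error rates of a sequential Bayesian test of H0: theta <= 0 vs
# H1: theta > 0; x_i ~ N(theta, 1), prior theta ~ N(0, 1), cutoff 0.95.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def run(theta, nmax=50, cut=0.95):
    s = 0.0
    for n in range(1, nmax + 1):
        s += theta + rng.standard_normal()
        post_var = 1 / (1 + n)                 # conjugate normal update
        post_mean = post_var * s
        p1 = norm.sf(0, post_mean, np.sqrt(post_var))  # P(theta > 0 | data)
        if p1 > cut: return "accept H1"
        if 1 - p1 > cut: return "accept H0"
    return "no decision"

sims = 2000
type1 = np.mean([run(0.0) == "accept H1" for _ in range(sims)])
type2 = np.mean([run(0.5) == "accept H0" for _ in range(sims)])
print(f"Type I ~ {type1:.3f}, Type II ~ {type2:.3f}")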

10.
Recently, the field of multiple hypothesis testing has expanded greatly, largely because of new methods developed in genomics that allow scientists to process thousands of hypothesis tests simultaneously. The frequentist approach to this problem uses various testing error measures that allow the Type I error rate to be controlled at a desired level. Alternatively, in this article, a Bayesian hierarchical model based on mixture distributions and an empirical Bayes approach are proposed in order to produce a list of rejected hypotheses that will be declared significant and interesting for more detailed subsequent analysis. In particular, we develop a straightforward implementation of a Gibbs sampling scheme in which all the conditional posterior distributions are explicit. The results are compared with the frequentist False Discovery Rate (FDR) methodology. Simulation examples show that our model improves upon the FDR procedure in that it diminishes the percentage of false negatives while keeping an acceptable percentage of false positives.
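A toy Gibbs sampler in the spirit of such mixture models, assuming z-scores from N(0,1) under the null and N(mu,1) under the alternative, with conjugate priors; the priors, data, and declaration rule are all hypothetical, not the article's exact model.

# Two-component mixture on z-scores with fully explicit conditionals.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])
n, iters, burn = z.size, 2000, 500
p, mu = 0.1, 2.0                       # initial values
keep = np.zeros(n)

for it in range(iters):
    # 1) indicators: P(non-null | z_i, p, mu)
    w1 = p * norm.pdf(z, mu, 1)
    w0 = (1 - p) * norm.pdf(z, 0, 1)
    g = rng.random(n) < w1 / (w0 + w1)
    # 2) mixing weight: conjugate Beta(1, 1) prior
    p = rng.beta(1 + g.sum(), 1 + n - g.sum())
    # 3) non-null mean: conjugate N(0, 10^2) prior
    v = 1 / (g.sum() + 1/100)
    mu = rng.normal(v * z[g].sum(), np.sqrt(v))
    if it >= burn:
        keep += g

post_nonnull = keep / (iters - burn)   # posterior non-null probabilities
print("declared non-null:", np.sum(post_nonnull > 0.9))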

11.
Quasi-optimal procedures for testing many hypotheses are described in this paper. They significantly simplify the Bayesian algorithms for hypothesis testing and for computation of the risk function. Relations for estimating the values of the average risks in the optimal problems are given. The general solutions obtained are reduced to concrete formulae for a multivariate normal probability distribution. Methods for the approximate computation of the risk functions in Bayesian problems of testing many hypotheses are offered, and the properties and interrelations of the developed methods and algorithms are investigated. A simulation study confirms the validity of the obtained results and the conclusions drawn.

12.
In this paper, we develop Bayes factor-based testing procedures for the presence of a correlation or a partial correlation. The proposed Bayesian tests are obtained by restricting the class of alternative hypotheses so as to maximize the probability of rejecting the null hypothesis when the Bayes factor is larger than a specified threshold. It turns out that they depend simply on the frequentist t-statistics and the associated critical values, and can thus be calculated easily, for example in an Excel spreadsheet, by adding just one more step after the frequentist correlation tests have been performed. In addition, they yield decisions identical to those of the frequentist paradigm, provided that the evidence threshold of the Bayesian tests is determined by the significance level of the frequentist paradigm. We illustrate the performance of the proposed procedures through simulated and real-data examples.
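The "one extra step after the frequentist test" idea can be sketched with a BIC approximation to the Bayes factor; note this is a stand-in, not the paper's restricted-alternative construction, and r and n below are hypothetical.

# From the frequentist correlation test to an approximate Bayes factor.
import numpy as np
from scipy import stats

r, n = 0.35, 50
t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)   # usual t statistic
p = 2 * stats.t.sf(abs(t), df=n - 2)         # frequentist p-value

# BIC comparison of the regression with vs. without the predictor:
# delta_BIC = n*ln(1 - r^2) + ln(n); BF01 ~ exp(delta_BIC / 2).
bf01 = np.exp((n * np.log(1 - r**2) + np.log(n)) / 2)
print(f"t={t:.2f}, p={p:.4f}, BF01~{bf01:.3f} (BF10~{1/bf01:.2f})")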

13.
The classical change-point problem is considered when the data are assumed to be correlated. The nuisance parameters in the model are the initial level μ and the common variance σ². Four cases are considered, according to which of these parameters (none, μ, σ², or both) are known. Likelihood ratio tests are obtained in each case for testing hypotheses about the change in level, δ. Following Henderson (1986), a Bayesian test is obtained for the two-sided alternative. Under the Bayesian setup, a locally most powerful unbiased test is derived for the case μ=0 and σ²=1. The exact null distribution function of the Bayesian test statistic is given an integral representation, and methods to obtain exact and approximate critical values are indicated.

14.
Simultaneously testing a family of n null hypotheses arises in many applications. A common problem in multiple hypothesis testing is controlling the Type I error. The probability of at least one false rejection, referred to as the familywise error rate (FWER), is one of the earliest error rate measures, and many FWER-controlling procedures have been proposed. The ability to control the FWER while achieving higher power is often used to evaluate the performance of a controlling procedure. However, when testing multiple hypotheses, FWER and power alone are not sufficient for evaluating a procedure's performance, which is also governed by experimental parameters such as the number of hypotheses, the sample size, the number of true null hypotheses and the data structure. This paper evaluates, under various experimental settings, the performance of some FWER-controlling procedures in terms of five indices: the FWER, the false discovery rate, the false non-discovery rate, the sensitivity and the specificity. The results can provide guidance on how to select an appropriate FWER-controlling procedure to meet a study's objective.
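A simulation sketch of such an evaluation, comparing Bonferroni and Holm on the five indices; the number of hypotheses, proportion of true nulls, and effect sizes are hypothetical.

# Estimate FWER, FDR, FNR, sensitivity and specificity by simulation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def indices(reject, is_null):
    v = np.sum(reject & is_null)                        # false rejections
    fwer = float(v > 0)
    fdr = v / max(reject.sum(), 1)
    fnr = np.sum(~reject & ~is_null) / max(np.sum(~reject), 1)
    sens = np.sum(reject & ~is_null) / max(np.sum(~is_null), 1)
    spec = np.sum(~reject & is_null) / max(np.sum(is_null), 1)
    return np.array([fwer, fdr, fnr, sens, spec])

m, m0, alpha, nsim = 100, 80, 0.05, 2000
out = {"bonferroni": np.zeros(5), "holm": np.zeros(5)}
for _ in range(nsim):
    is_null = np.arange(m) < m0
    z = rng.standard_normal(m) + np.where(is_null, 0.0, 3.0)
    p = norm.sf(z)                                      # one-sided p-values
    out["bonferroni"] += indices(p < alpha / m, is_null)
    # Holm step-down: compare ordered p-values to alpha/(m - i).
    rej = np.zeros(m, bool)
    for i, j in enumerate(np.argsort(p)):
        if p[j] > alpha / (m - i):
            break
        rej[j] = True
    out["holm"] += indices(rej, is_null)

for k, v in out.items():
    print(k, np.round(v / nsim, 3))                     # the five indices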

15.
Conditional power calculations are frequently used to guide the decision whether or not to stop a trial for futility or to modify the planned sample size. These ignore the information in short‐term endpoints and baseline covariates, and thereby do not make fully efficient use of the information in the data. We therefore propose an interim decision procedure based on the conditional power approach which exploits the information contained in baseline covariates and short‐term endpoints. We realize this by treating the estimation of the treatment effect at the interim analysis as a missing data problem, which we address by employing specific prediction models for the long‐term endpoint that enable the incorporation of baseline covariates and multiple short‐term endpoints. We show that the proposed procedure leads to an efficiency gain and a reduced sample size, without compromising the Type I error rate of the procedure, even when the adopted prediction models are misspecified. In particular, implementing our proposal in the conditional power approach enables earlier decisions relative to standard approaches, whilst controlling the probability of an incorrect decision. This time gain results in a lower expected number of recruited patients in case of stopping for futility, such that fewer patients receive the futile regimen. We explain how these methods can be used in adaptive designs with unblinded sample size re‐assessment based on the inverse normal P‐value combination method to control Type I error. We support the proposal by Monte Carlo simulations based on data from a real clinical trial.
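For reference, the baseline being improved upon is the familiar normal-approximation conditional power under the current trend; a minimal sketch (not the authors' covariate-adjusted procedure) with hypothetical inputs:

# Conditional power at information fraction t, given interim z-value.
import numpy as np
from scipy.stats import norm

def conditional_power(z_t, t, alpha=0.025):
    # Brownian-motion formulation: B(t) = z_t * sqrt(t); under the
    # current-trend assumption the drift is theta = z_t / sqrt(t).
    z_a = norm.ppf(1 - alpha)
    b_t = z_t * np.sqrt(t)
    theta = z_t / np.sqrt(t)
    return norm.sf((z_a - b_t - theta * (1 - t)) / np.sqrt(1 - t))

print(conditional_power(z_t=1.2, t=0.5))   # hypothetical interim state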

16.
This paper compares the Bayesian and frequentist approaches to testing a one-sided hypothesis about a multivariate mean. First, this paper proposes a simple way to assign a Bayesian posterior probability to one-sided hypotheses about a multivariate mean. The approach is to use (almost) the exact posterior probability under the assumption that the data have a multivariate normal distribution, under either a conjugate prior in large samples or a vague Jeffreys prior. This is also approximately the Bayesian posterior probability of the hypothesis based on a suitably flat Dirichlet process prior over an unknown distribution generating the data. Then, the Bayesian approach and a frequentist approach to testing the one-sided hypothesis are compared, with results that show a major difference between Bayesian and frequentist reasoning. The Bayesian posterior probability can be substantially smaller than the frequentist p-value. A class of examples is given where the Bayesian posterior probability is essentially 0 while the frequentist p-value is essentially 1. The Bayesian posterior probability in these examples seems to be more reasonable. Other drawbacks of the frequentist p-value as a measure of whether the one-sided hypothesis is true are also discussed.
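A sketch of the Bayesian side of the comparison, simplified to a flat prior and known covariance (an approximation to the paper's setting); the data are hypothetical.

# Posterior probability of H0: mu >= 0 componentwise, by posterior sampling.
import numpy as np

rng = np.random.default_rng(0)
n = 50
xbar = np.array([0.30, -0.28])       # sample mean (hypothetical)
sigma = np.eye(2)                     # known covariance for the sketch

# Posterior of mu under a flat prior is N(xbar, sigma/n).
draws = rng.multivariate_normal(xbar, sigma / n, size=200_000)
print("posterior P(H0) ~", np.mean((draws >= 0).all(axis=1)))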

17.
An optimal Bayesian decision procedure for testing hypotheses in normal linear models, based on intrinsic model posterior probabilities, is considered. It is proven that these posterior probabilities are simple functions of the classical F-statistic; thus the evaluation of the procedure can be carried out analytically through the frequentist analysis of the posterior probability of the null. An asymptotic analysis proves that, under mild conditions on the design matrix, the procedure is consistent. For any testing hypothesis there is also a one-to-one mapping, which we call the calibration curve, between the posterior probability of the null hypothesis and the classical p-value. This curve adds substantial knowledge about the possible discrepancies between the Bayesian and the p-value measures of evidence for hypothesis testing, and permits a better understanding of the serious difficulties encountered in interpreting p-values in linear models. A specific illustration for the variable selection problem is given.

18.
A fundamental theorem in hypothesis testing is the Neyman‐Pearson (N‐P) lemma, which yields the most powerful test of simple hypotheses. In this article, we establish a Bayesian framework for hypothesis testing and extend the Neyman‐Pearson lemma to create the Bayesian most powerful test of general hypotheses, thus providing an optimality theory for determining thresholds of Bayes factors. Unlike conventional Bayes tests, the proposed Bayesian test is able to control the type I error.
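One way to give a Bayes test frequentist Type I error control, in the spirit of (but not identical to) the paper's result, is to calibrate the Bayes-factor threshold by simulation under H0; the model and settings below are hypothetical.

# Calibrate a Bayes-factor threshold: H0: theta = 0 vs H1: theta ~ N(0,1),
# x_i ~ N(theta, 1), n = 20; reject H0 when log BF10 exceeds c.
import numpy as np

rng = np.random.default_rng(0)
n, alpha, nsim = 20, 0.05, 100_000

def log_bf10(xbar):
    # Marginals of xbar: N(0, 1/n + 1) under H1, N(0, 1/n) under H0.
    v0, v1 = 1/n, 1/n + 1
    return 0.5 * (np.log(v0 / v1) + xbar**2 * (1/v0 - 1/v1))

xbar0 = rng.normal(0, np.sqrt(1/n), nsim)        # data summaries under H0
c = np.quantile(log_bf10(xbar0), 1 - alpha)      # calibrated threshold
print("reject H0 when log BF10 >", round(c, 3))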

19.
We provide a decision-theoretic approach to the construction of a learning process in the presence of independent and identically distributed observations. Starting with a probability measure representing beliefs about a key parameter, the approach allows the measure to be updated via the solution to a well-defined decision problem. While the learning process encompasses the Bayesian approach, a necessary asymptotic consideration then implies that the Bayesian learning process is best. This conclusion follows from the requirement of posterior consistency for all models and of standardized losses between probability distributions, and is demonstrated for a specific continuous model and a very general class of discrete models.

20.
A Bayesian approach for the balanced two-way analysis of variance is derived. Four hypotheses are considered: the existence of neither block nor treatment effect, of a block but not a treatment effect, of a treatment but not a block effect, and of both block and treatment effects. Lower bounds on the posterior probability are derived over the class of conjugate priors for the normal model. The hypotheses are further combined so that they resemble the situation in classical testing, and the resulting p-values are analysed in conjunction with the lower bounds on the posterior probabilities. Decision making under quadratic loss is also considered.
