Similar Articles
20 similar articles found.
1.
We develop a Bayesian procedure for the homogeneity testing problem of r populations using r × s contingency tables. The posterior probability of the homogeneity null hypothesis is calculated using a mixed prior distribution. The methodology consists of choosing an appropriate value of π0 for the mass assigned to the null and spreading the remainder, 1 − π0, over the alternative according to a density function. With this method, a theorem which shows when the same conclusion is reached from both frequentist and Bayesian points of view is obtained. A sufficient condition under which the p-value is less than a value α and the posterior probability is also less than 0.5 is provided.
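As a rough illustration of the mixed-prior construction, the sketch below computes the posterior probability of the homogeneity null for the simplest case r = s = 2, i.e. two binomial populations, assuming conjugate Beta priors on the success probabilities. The function name, prior choices and counts are illustrative; the paper's construction spreads 1 − π0 over the alternative with a general density rather than this particular conjugate choice.

```python
# Minimal sketch: posterior probability of H0: p1 = p2 under a mixed prior
# (mass pi0 on H0, the rest spread as independent Beta(a, b) priors).
# All names, counts, and the Beta(a, b) choice are illustrative assumptions.
from math import comb
from scipy.special import betaln
import numpy as np

def posterior_prob_h0(x1, n1, x2, n2, pi0=0.5, a=1.0, b=1.0):
    """P(H0 | data) for two binomial samples under the mixed prior."""
    # Marginal likelihood under H0: common p ~ Beta(a, b) gives a
    # beta-binomial with pooled counts.
    log_m0 = (np.log(comb(n1, x1)) + np.log(comb(n2, x2))
              + betaln(a + x1 + x2, b + n1 + n2 - x1 - x2) - betaln(a, b))
    # Marginal likelihood under H1: product of two beta-binomials.
    log_m1 = (np.log(comb(n1, x1)) + betaln(a + x1, b + n1 - x1) - betaln(a, b)
              + np.log(comb(n2, x2)) + betaln(a + x2, b + n2 - x2) - betaln(a, b))
    log_odds = np.log(pi0) - np.log(1 - pi0) + log_m0 - log_m1
    return 1.0 / (1.0 + np.exp(-log_odds))

# Example: 18/40 successes in population 1 vs. 30/40 in population 2.
print(posterior_prob_h0(18, 40, 30, 40))
```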

2.
Summary
In this paper we introduce a class of prior distributions for contingency tables with given marginals. We are interested in the structure of concordance/discordance of such tables. There is a minor limitation in that the marginals are required to assume only rational values; we argue, though, that this is not a serious drawback for practical purposes. The posterior and predictive distributions given an M-sample are computed. Examples of Bayesian estimates of some classical indices of concordance are also given. Moreover, we show how to use simulation to overcome some difficulties which arise in the computation of the posterior distribution.

3.
Trend tests in dose–response studies are a central problem in medicine. The likelihood ratio test is often used to test hypotheses involving a stochastic order, and stratified contingency tables are common in practice. However, the distribution theory of the likelihood ratio test has not been fully developed for stratified tables and more than two stochastically ordered distributions. For c strata of m × r tables, this article introduces a model-free method for testing conditional independence against a simple stochastic order alternative and gives the asymptotic distribution of the test statistic, which is a chi-bar-squared distribution. A real data set concerning an ordered stratified table is used to show the validity of the test method.

4.
Abstract

This article is concerned with the comparison of Bayesian and classical testing of a point null hypothesis for the Pareto distribution when there is a nuisance parameter. In the first case, using a fixed prior distribution, the posterior probability is obtained and compared with the P-value. In the second case, lower bounds of the posterior probability of H0, under a reasonable class of prior distributions, are compared with the P-value. It is shown that, even in the presence of nuisance parameters in the model, these two approaches can lead to different results in statistical inference.

5.
ABSTRACT

In Bayesian theory, calculating a posterior probability distribution is highly important but typically difficult. Several methods have therefore been proposed to address this problem, the most popular being asymptotic expansions of posterior distributions. In this paper, we propose an alternative approach, named the random weighting method, for scaled posterior distributions, and give an ideal convergence rate, o(n^(−1/2)), which serves as a theoretical guarantee for numerical simulation methods.

6.
A Bayesian test for the point null testing problem in the multivariate case is developed. A procedure for obtaining the mixed distribution from the prior density is suggested. For comparison between the Bayesian and classical approaches, lower bounds on posterior probabilities of the null hypothesis, over some reasonable classes of prior distributions, are computed and compared with the p-value of the classical test. With this procedure, a better approximation is obtained because the p-value lies in the range of the Bayesian measures of evidence.

7.
ABSTRACT

This paper extends the classical methods of analysis of a two-way contingency table to the fuzzy environment for two cases: (1) when the available sample of observations is reported as imprecise data, and (2) when we prefer to categorize the variables based on linguistic terms rather than as crisp quantities. For this purpose, the α-cuts approach is used to extend the usual concepts of the test statistic and p-value to a fuzzy test statistic and fuzzy p-value. In addition, some measures of association are extended to fuzzy versions in order to evaluate the dependence in such contingency tables. Practical examples are provided to illustrate the applicability of the proposed methods to real-world problems.

8.
Abstract

In categorical repeated audit controls, fallible auditors classify sample elements in order to estimate the population fraction of elements in certain categories. To take possible misclassifications into account, subsequent checks are performed with a decreasing number of observations. In this paper a model is presented for a general repeated audit control system in which k subsequent auditors classify elements into r categories. Two different subsampling procedures are discussed, named “stratified” and “random” sampling. Although these two sampling methods lead to different probability distributions, it is shown that the likelihood inferences are identical. The MLEs are derived and the situations with undefined MLEs are examined in detail; it is shown that an unbiased MLE can be obtained by stratified sampling. Three different methods for constructing upper confidence limits are discussed; the Bayesian upper limit appears to be the most satisfactory. The theoretical results are applied to two cases with r = 2 and k = 2 or 3, respectively.

9.
Teresa Ledwina, Statistics (2013), 47(1): 105–118
We state some necessary and sufficient conditions for admissibility of tests of a simple and a composite null hypothesis against “one-sided” alternatives for multivariate exponential distributions with discrete support.

Admissibility of the maximum likelihood test for “one-sided” alternatives, and of the χ2 test for the independence hypothesis in r × s contingency tables, is deduced among other results.

10.
Abstract

The problem of obtaining the maximum probability 2 × c contingency table with fixed marginal sums, R = (R1, R2) and C = (C1, … , Cc), and row and column independence is equivalent to the problem of obtaining the maximum probability points (mode) of the multivariate hypergeometric distribution MH(R1; C1, … , Cc). The simplest and most general method for these problems is that of Joe (Joe, H. (1988). Extreme probabilities for contingency tables under row and column independence with application to Fisher's exact test. Commun. Statist. Theory Meth. 17(11):3677–3685). In this article we study a family of MH distributions in which a connection relationship is defined between its elements. Based on this family and on a characterization of the mode described in Requena and Martín (Requena, F., Martín, N. (2000). Characterization of maximum probability points in the multivariate hypergeometric distribution. Statist. Probab. Lett. 50:39–47), we develop a new method for the above problems which is completely general, non-recursive, very simple in practice and more efficient than Joe's method. Also, under weak conditions (which almost always hold), the proposed method provides a simple explicit solution to these problems. In addition, the well-known expression for the mode of a hypergeometric distribution is just a particular case of the method in this article.
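For orientation, a brute-force baseline (not the non-recursive method of the article) finds the mode of MH(R1; C1, … , Cc) by enumerating all admissible first rows of the table and evaluating the multivariate hypergeometric probability of each. The function name and example marginals below are illustrative, and the approach is feasible only for small tables.

```python
# Brute-force mode of MH(R1; C1, ..., Cc): enumerate all first rows x with
# sum(x) = R1 and 0 <= x_j <= C_j, and keep the most probable one.
# Illustrative baseline only; not the paper's method.
from itertools import product
import numpy as np
from scipy.stats import multivariate_hypergeom

def mh_mode_bruteforce(R1, C):
    """Mode of the multivariate hypergeometric MH(R1; C1, ..., Cc)."""
    best_x, best_p = None, -np.inf
    for x in product(*(range(min(R1, cj) + 1) for cj in C)):
        if sum(x) != R1:
            continue
        p = multivariate_hypergeom.pmf(x, m=list(C), n=R1)
        if p > best_p:
            best_x, best_p = x, p
    return best_x, best_p

# Example: maximum-probability 2 x 3 table with row sums (7, 13) and
# column sums (5, 6, 9); the second row is determined by the first.
mode, prob = mh_mode_bruteforce(7, (5, 6, 9))
print(mode, prob)
```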

11.
A Markov chain is proposed that uses the coupling-from-the-past sampling algorithm for sampling m×n contingency tables. This method is an extension of the one proposed by Kijima and Matsui (Rand. Struct. Alg., 29:243–256, 2006). It is not polynomial, as it is based upon a recursion and includes a rejection phase, but it can be used for practical purposes on small contingency tables, as illustrated with a classical 4×4 example.

12.
In assessing the area under the ROC curve for the accuracy of a diagnostic test, it is imperative to detect and locate multiple abnormalities per image. The approach presented here takes this into account by adopting a statistical model that allows for correlation between the reader scores of several regions of interest (ROIs).

The ROI method of partitioning the image is adopted. The readers give a score to each ROI in the image, and the statistical model accounts for the correlation between the scores of the ROIs of an image in estimating test accuracy. The test accuracy is given by Pr[Y > Z] + (1/2)Pr[Y = Z], where Y is an ordinal diagnostic measurement of an affected ROI and Z is the diagnostic measurement of an unaffected ROI. This way of measuring test accuracy is equivalent to the area under the ROC curve. The parameters are those of a multinomial distribution, and based on this distribution a Bayesian method of inference is adopted for estimating the test accuracy.

Using a multinomial model for the test results, a Bayesian method based on the predictive distribution of future diagnostic scores is employed to find the test accuracy. By resampling from the posterior distribution of the model parameters, samples from the posterior distribution of test accuracy are also generated. Using these samples, the posterior mean, standard deviation, and credible intervals are calculated in order to estimate the area under the ROC curve. This approach is illustrated by estimating the area under the ROC curve for a study of the diagnostic accuracy of magnetic resonance angiography for diagnosis of arterial atherosclerotic stenosis. A generalization to multiple readers and/or modalities is proposed.
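A simplified sketch of this resampling step, which ignores the within-image correlation the model accounts for and treats the affected and unaffected ROI scores as two independent multinomials with Dirichlet priors, might look as follows; the counts and variable names are illustrative.

```python
# Simplified sketch: posterior draws of Pr[Y > Z] + 0.5 * Pr[Y = Z]
# from independent multinomial models with Dirichlet(1, ..., 1) priors.
# Illustrative counts only; the paper's model also handles ROI correlation.
import numpy as np

rng = np.random.default_rng(0)
affected_counts = np.array([2, 5, 10, 18, 25])    # ordinal scores 1..5, affected ROIs
unaffected_counts = np.array([30, 15, 8, 5, 2])   # ordinal scores 1..5, unaffected ROIs

def accuracy(p_y, p_z):
    """Pr[Y > Z] + 0.5 * Pr[Y = Z] for ordinal score distributions p_y, p_z."""
    acc = 0.0
    for i, py in enumerate(p_y):
        acc += py * (p_z[:i].sum() + 0.5 * p_z[i])
    return acc

draws = []
for _ in range(5000):
    p_y = rng.dirichlet(affected_counts + 1)    # posterior draw, affected scores
    p_z = rng.dirichlet(unaffected_counts + 1)  # posterior draw, unaffected scores
    draws.append(accuracy(p_y, p_z))

draws = np.array(draws)
# Posterior mean, standard deviation, and a 95% credible interval.
print(draws.mean(), draws.std(), np.quantile(draws, [0.025, 0.975]))
```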

This Bayesian approach to estimating test accuracy is easy to perform with standard software packages and has the advantage of efficiently including information from prior related imaging studies.

13.
Just as frequentist hypothesis tests have been developed to check model assumptions, prior predictive p-values and other Bayesian p-values check prior distributions as well as other model assumptions. These model checks not only suffer from the usual threshold dependence of p-values, but also from the suppression of model uncertainty in subsequent inference. One solution is to transform Bayesian and frequentist p-values for model assessment into a fiducial distribution across the models. Averaging the Bayesian or frequentist posterior distributions with respect to the fiducial distribution can reproduce results from Bayesian model averaging or classical fiducial inference.

14.
Consider data arranged into k × 2 × 2 contingency tables. The principal result of this paper is the derivation of the likelihood ratio test and its asymptotic distribution for testing for or against an order restriction placed upon the odds ratios. We show that the limiting distributions are of chi-bar-squared type and give expressions for the weighting values.

15.
Power-divergence test statistics have been considered for testing linear-by-linear association in two-way contingency tables. These test statistics are compared on the basis of a designed simulation study and asymptotic results for 2 × 2, 2 × 3, and 3 × 3 tables. According to the results, there are test statistics with better properties than the well-known likelihood ratio test statistic for small and moderate samples.

16.
ABSTRACT

In this paper, the stress-strength reliability, R, is estimated from type II censored samples from Pareto distributions. The classical inference includes obtaining the maximum likelihood estimator, an exact confidence interval, and confidence intervals based on the Wald and signed log-likelihood ratio statistics. Bayesian inference includes obtaining the Bayes estimator, an equi-tailed credible interval, and a highest posterior density (HPD) interval under both informative and non-informative prior distributions. The Bayes estimator of R is obtained using four methods: Lindley's approximation, the Tierney–Kadane method, Monte Carlo integration, and MCMC. We also compare the proposed methods by a simulation study and provide a real example to illustrate them.

17.
A Bayesian analysis is provided for the Wilcoxon signed-rank statistic (T+). The Bayesian analysis is based on a sign-bias parameter φ on the (0, 1) interval. For the case of a uniform prior probability distribution for φ and for small sample sizes (i.e., 6 ≤ n ≤ 25), values for the statistic T+ are computed that enable probabilistic statements about φ. For larger sample sizes, approximations are provided for the asymptotic likelihood function P(T+|φ) as well as for the posterior distribution P(φ|T+). Power analyses are examined both for properly specified Gaussian sampling and for misspecified non-Gaussian models. The new Bayesian metric has high power efficiency in the range of 0.9–1 relative to a standard t test when there is Gaussian sampling. But if the sampling is from an unknown and misspecified distribution, then the new statistic still has high power; in some cases, the power can be higher than that of the t test (especially for probability mixtures and heavy-tailed distributions). The new Bayesian analysis is thus a useful and robust method for applications where the usual parametric assumptions are questionable. These properties further enable a generic Bayesian analysis for many non-Gaussian distributions that currently lack a formal Bayesian model.
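One plausible reading of the sign-bias model is that rank i contributes to T+ independently with probability φ; under that assumption (which is ours, not a statement of the paper's exact construction), the likelihood P(T+|φ) can be computed exactly by polynomial convolution and combined with a uniform prior on a grid, as in the sketch below. Variable names and the example values of n and T+ are illustrative.

```python
# Grid posterior P(phi | T+) under a uniform prior, assuming rank i enters
# T+ independently with probability phi. Illustrative sketch only.
import numpy as np

def t_plus_pmf(n, phi):
    """Exact pmf of T+ = sum of ranks carrying a positive sign (prob. phi each)."""
    pmf = np.zeros(n * (n + 1) // 2 + 1)
    pmf[0] = 1.0
    for i in range(1, n + 1):
        new = pmf * (1 - phi)        # rank i not included
        new[i:] += pmf[:-i] * phi    # rank i included: shift by i
        pmf = new
    return pmf

def posterior_phi(n, t_obs, grid=np.linspace(0.001, 0.999, 999)):
    """Posterior of phi on a grid under a uniform prior."""
    lik = np.array([t_plus_pmf(n, phi)[t_obs] for phi in grid])
    post = lik / lik.sum()
    return grid, post

grid, post = posterior_phi(n=15, t_obs=95)
print(grid[np.argmax(post)], (grid * post).sum())   # posterior mode and mean
```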

18.
Abstract

Numerous methods—based on exact and asymptotic distributions—can be used to obtain confidence intervals for the odds ratio in 2 × 2 tables. We examine ten methods for generating these intervals based on coverage probability, closeness of coverage probability to target, and length of confidence intervals. Based on these criteria, Cornfield's method, without the continuity correction, performed the best of the methods examined here. A drawback to use of this method is the significant possibility that the attained coverage probability will not meet the nominal confidence level. Use of a mid-P value greatly improves methods based on the “exact” distribution. When combined with the Wilson rule for selection of a rejection set, the resulting procedure performed very well. Crow's method, with use of a mid-P, performed well, although it was only a slight improvement over the Wilson mid-P method; its cumbersome calculations preclude its general acceptance. Woolf's (logit) method—with the Haldane–Anscombe correction—performed well, especially with regard to length of confidence intervals, and is recommended based on ease of computation.
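A minimal sketch of Woolf's (logit) interval with the Haldane–Anscombe correction, which adds 0.5 to every cell before forming the log odds ratio and its standard error, is given below; the cell counts are illustrative.

```python
# Woolf (logit) confidence interval for the odds ratio of a 2 x 2 table,
# with the Haldane-Anscombe correction. Illustrative counts.
import numpy as np
from scipy.stats import norm

def woolf_ci(a, b, c, d, conf=0.95):
    """CI for the odds ratio of the table [[a, b], [c, d]]."""
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))   # Haldane-Anscombe correction
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = norm.ppf(0.5 + conf / 2)
    return np.exp(log_or - z * se), np.exp(log_or + z * se)

print(woolf_ci(12, 5, 6, 22))
```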

19.
The restricted minimum φ-divergence estimator [Pardo, J.A., Pardo, L. and Zografos, K., 2002, Minimum φ-divergence estimators with constraints in multinomial populations. Journal of Statistical Planning and Inference, 104, 221–237] is employed to obtain estimates of the cell frequencies of an I × I contingency table under hypotheses of symmetry, marginal homogeneity or quasi-symmetry. The associated φ-divergence statistics are distributed asymptotically as chi-squared distributions under the null hypothesis. The new estimators and test statistics contain, as particular cases, the classical estimators and test statistics previously presented in the literature for the cited problems. A simulation study is presented, for the symmetry problem, to choose the best function φ2 for estimation and the best function φ1 for testing.
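As a point of reference, the classical symmetry test contained in this family as a particular case is Bowker's statistic, sketched below with illustrative counts.

```python
# Classical test of symmetry (Bowker's statistic) for an I x I table:
# sum over i < j of (n_ij - n_ji)^2 / (n_ij + n_ji), chi-squared under H0.
# Illustrative counts only.
import numpy as np
from scipy.stats import chi2

def bowker_symmetry_test(n):
    """Test H0: p_ij = p_ji for an I x I contingency table of counts n."""
    n = np.asarray(n, dtype=float)
    I = n.shape[0]
    stat, df = 0.0, 0
    for i in range(I):
        for j in range(i + 1, I):
            if n[i, j] + n[j, i] > 0:
                stat += (n[i, j] - n[j, i]) ** 2 / (n[i, j] + n[j, i])
                df += 1
    return stat, df, chi2.sf(stat, df)

table = [[20, 10, 5],
         [ 4, 30, 12],
         [ 2,  9, 25]]
print(bowker_symmetry_test(table))
```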

20.
Frequently, contingency tables are generated by multinomial sampling. The multinomial probabilities are then organized in a table assigning a probability to each cell. A probability table can be viewed as an element of the simplex. The Aitchison geometry of the simplex identifies independent probability tables as a linear subspace. An important consequence is that, given a probability table, the nearest independent table is obtained by orthogonal projection onto the independent subspace. The nearest independent table is identified as the one obtained from the product of geometric marginals, which do not coincide with the standard marginals except in the independent case. The original probability table is decomposed into orthogonal tables: the independent table and the interaction table. The underlying model is log-linear, and a procedure to test independence of a contingency table, based on a multinomial simulation, is developed. Its performance is studied on an illustrative example.
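A sketch of the decomposition described above, under the assumption that the geometric marginals are the row and column geometric means of the table: close their product to obtain the nearest independent table, then divide it out (and close again) to obtain the interaction table. The probability table and function names below are illustrative.

```python
# Nearest independent table via geometric marginals, plus the interaction
# table; illustrative sketch of the decomposition described above.
import numpy as np

def closure(t):
    """Rescale a strictly positive table so that its entries sum to 1."""
    t = np.asarray(t, dtype=float)
    return t / t.sum()

def independent_part(p):
    """Nearest independent table: product of geometric marginals, closed."""
    p = np.asarray(p, dtype=float)
    row_gm = np.exp(np.log(p).mean(axis=1))   # geometric marginal of each row
    col_gm = np.exp(np.log(p).mean(axis=0))   # geometric marginal of each column
    return closure(np.outer(row_gm, col_gm))

p = closure([[0.20, 0.10, 0.05],
             [0.05, 0.25, 0.10],
             [0.05, 0.05, 0.15]])
indep = independent_part(p)
interaction = closure(p / indep)              # orthogonal interaction table
print(indep)
print(interaction)
```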

