Similar Articles
12 similar articles retrieved.
1.
Large-sample Wilson-type confidence intervals (CIs) are derived for a parameter of interest in many clinical trial situations: the log-odds-ratio in a two-sample experiment comparing binomial success proportions, say between cases and controls. The methods cover several scenarios: (i) results embedded in a single 2 × 2 contingency table; (ii) a series of K 2 × 2 tables with a common parameter; or (iii) K tables where the parameter may change across tables under the influence of a covariate. The Wilson CI requires only simple numerical computation and can, for example, easily be carried out in Excel. The main competitor, the exact CI, has two disadvantages: it requires burdensome search algorithms in the multi-table case and exhibits strong over-coverage associated with long confidence intervals. All the application cases are illustrated through a well-known example. A simulation study then investigates how the Wilson CI performs among several competing methods. The Wilson interval is shortest, except for very large odds ratios, while maintaining coverage similar to Wald-type intervals. An alternative to the Wald CI is the Agresti-Coull CI, calculated from the Wilson and Wald CIs, which has the same length as the Wald CI but improved coverage.
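The paper's Wilson-type construction requires solving a score equation and is not reproduced in the abstract. As a minimal point of reference, the following sketch computes the standard Wald-type interval for the log-odds-ratio of a single 2 × 2 table (scenario (i)), the comparator the abstract mentions; the Haldane 0.5 correction and the example counts are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def wald_log_odds_ratio_ci(a, b, c, d, conf=0.95, haldane=True):
    """Wald-type CI for the log-odds-ratio of a single 2x2 table.

    Table layout (scenario (i) in the abstract):
                 success  failure
        cases       a        b
        controls    c        d

    This is the large-sample comparator discussed in the abstract,
    not the paper's Wilson-type construction.
    """
    if haldane:
        # Optional 0.5 correction guards against zero cells.
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = norm.ppf(1 - (1 - conf) / 2)
    return log_or - z * se, log_or + z * se

# Hypothetical data: 15/40 successes among cases vs 5/35 among controls.
lo, hi = wald_log_odds_ratio_ci(15, 25, 5, 30)
print(f"95% Wald CI for log-odds-ratio: ({lo:.3f}, {hi:.3f})")
```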

2.
This article considers Bayesian p-values for testing independence in 2 × 2 contingency tables with cell counts observed under the two-independent-binomial sampling scheme and the multinomial sampling scheme. From the frequentist perspective, Fisher's p-value (p_F) is the most commonly used, but it can be conservative for small to moderate sample sizes. From the Bayesian perspective, Bayarri and Berger (2000) proposed the partial posterior predictive p-value (p_PPOST), which avoids the double use of the data that occurs in another Bayesian p-value, the posterior predictive p-value (p_POST) proposed by Guttman (1967) and Rubin (1984). The subjective and objective Bayesian p-values in terms of p_POST and p_PPOST are derived under the beta prior and the (noninformative) Jeffreys prior, respectively. Numerical comparisons among p_F, p_POST, and p_PPOST reveal that p_PPOST performs much better than p_F and p_POST for small to moderate sample sizes from the frequentist perspective.
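The partial posterior predictive p-value requires removing the statistic's contribution from the likelihood and is not reproducible from the abstract alone. As a rough sketch of the simpler quantity it improves upon, the following Monte Carlo computation of p_POST tests p1 = p2 for two independent binomials under a common-p Beta prior; the Pearson chi-square discrepancy and the example counts are illustrative choices, not the paper's.

```python
import numpy as np

def chisq_stat(x1, n1, x2, n2):
    """Pearson chi-square discrepancy for H0: p1 = p2 (pooled estimate)."""
    p_hat = (x1 + x2) / (n1 + n2)
    exp = np.array([n1 * p_hat, n1 * (1 - p_hat), n2 * p_hat, n2 * (1 - p_hat)])
    obs = np.array([x1, n1 - x1, x2, n2 - x2])
    with np.errstate(divide="ignore", invalid="ignore"):
        contrib = np.where(exp > 0, (obs - exp) ** 2 / exp, 0.0)
    return contrib.sum()

def p_post(x1, n1, x2, n2, a=0.5, b=0.5, n_rep=20000, seed=0):
    """Monte Carlo posterior predictive p-value (p_POST) for H0: p1 = p2.

    Under H0 the two binomials share a common p with a Beta(a, b) prior;
    a = b = 0.5 corresponds to the Jeffreys prior named in the abstract.
    """
    rng = np.random.default_rng(seed)
    t_obs = chisq_stat(x1, n1, x2, n2)
    # Posterior of the common p under H0, then posterior predictive tables.
    p_draws = rng.beta(a + x1 + x2, b + n1 + n2 - x1 - x2, size=n_rep)
    x1_rep = rng.binomial(n1, p_draws)
    x2_rep = rng.binomial(n2, p_draws)
    t_rep = np.array([chisq_stat(u, n1, v, n2) for u, v in zip(x1_rep, x2_rep)])
    return np.mean(t_rep >= t_obs)

print(p_post(12, 20, 5, 25))
```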

3.
This article considers the problem of testing marginal homogeneity in a 2 × 2 contingency table. We first review some well-known conditional and unconditional p-values that have appeared in the statistical literature. We then treat the p-value as the test statistic and use the unconditional approach to obtain a modified p-value, which is shown to be valid. For a given nominal level, the rejection region of the modified p-value test contains that of the original p-value test. Some nice properties of the modified p-value are given. In particular, under mild conditions the rejection region of the modified p-value test is shown to be the Barnard convex set described by Barnard (1947). If the one-sided null hypothesis has two nuisance parameters, we show that this result reduces the dimension of the nuisance parameter space from two to one for computing modified p-values and sizes of tests. Numerical studies, including an illustrative example, are given. Numerical comparisons show that the sizes of the modified p-value tests are closer to the nominal level than those of the original p-value tests in many cases, especially for small to moderate sample sizes.
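The central device, treating an existing p-value as the test statistic and maximizing its rejection probability over the nuisance parameter, can be sketched directly. In the paired 2 × 2 setting, under marginal homogeneity the discordant-pair count D is Bin(n, 2η) and the off-diagonal count B given D is Bin(D, 1/2), which is the one-dimensional reduction the abstract alludes to. The sketch below is a rough one-sided grid version using the exact conditional (McNemar-type) p-value as the base statistic; it is not the paper's Barnard-convex-set construction.

```python
import numpy as np
from scipy.stats import binom

def conditional_pvalue(b, d):
    """One-sided exact conditional p-value P(B >= b | B + C = d) under H0,
    where B ~ Bin(d, 1/2) (the usual exact McNemar-type p-value)."""
    return binom.sf(b - 1, d, 0.5) if d > 0 else 1.0

def modified_pvalue(b, c, n, grid=200):
    """Treat the conditional p-value as the statistic and maximize the
    unconditional tail probability over the nuisance parameter eta, where
    under H0, D = B + C ~ Bin(n, 2*eta) and B | D ~ Bin(D, 1/2).

    A coarse one-sided sketch, not the paper's exact algorithm.
    """
    p_obs = conditional_pvalue(b, b + c)
    # For each possible d, mass of outcomes at least as extreme as observed.
    mass_le = np.empty(n + 1)
    for d in range(n + 1):
        bs = np.arange(d + 1)
        pvals = np.array([conditional_pvalue(x, d) for x in bs])
        mass_le[d] = binom.pmf(bs, d, 0.5)[pvals <= p_obs + 1e-12].sum()
    # Maximize over the remaining one-dimensional nuisance parameter eta.
    etas = np.linspace(1e-6, 0.5 - 1e-6, grid)
    ds = np.arange(n + 1)
    worst = max((binom.pmf(ds, n, 2 * eta) * mass_le).sum() for eta in etas)
    return p_obs, worst

print(modified_pvalue(b=8, c=2, n=30))
```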

4.
While analyzing 2 × 2 contingency tables, the log odds ratio used to measure the strength of association is often approximated by a normal distribution with some variance. We show that the expression for that variance needs to be modified in the presence of correlation between the two binomial distributions of the contingency table. In the present paper, we derive a correlation-adjusted variance of the limiting normal distribution of the log odds ratio. We also propose a correlation-adjusted test based on the standard odds ratio for analyzing matched-pair studies and any other study settings that induce correlated binary outcomes. We demonstrate that our proposed test outperforms the classical McNemar's test. Simulation studies show that the gains in power are especially pronounced when the sample size is small and strong correlation is present. Two real data sets are used to demonstrate that the proposed method may lead to conclusions significantly different from those reached using McNemar's test.
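The correlation-adjusted variance expression is given in the paper, not in the abstract, so it is not reproduced here. For orientation, the following is a minimal sketch of the classical McNemar test that the proposed method is compared against, operating on the discordant cells of a paired 2 × 2 table; the example counts are illustrative.

```python
from scipy.stats import chi2

def mcnemar_test(b, c, continuity=True):
    """Classical McNemar chi-square test for marginal homogeneity of a
    paired 2x2 table; b and c are the off-diagonal (discordant) counts.

    This is the baseline comparator named in the abstract, not the
    paper's correlation-adjusted test.
    """
    num = (abs(b - c) - 1) ** 2 if continuity else (b - c) ** 2
    stat = num / (b + c)
    return stat, chi2.sf(stat, df=1)

print(mcnemar_test(15, 5))
```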

5.
It is customary to use two groups of indices to evaluate a diagnostic method with a binary outcome: validity indices against a standard rater (sensitivity, specificity, and positive or negative predictive values) and reliability indices without a standard rater (positive, negative, and overall agreement). However, none of these classic indices is chance-corrected, and this may distort the analysis of the problem (especially in comparative studies). One way of chance-correcting these indices is to use the Delta model (an alternative to the Kappa model), but this requires a computer program to carry out the calculations. This paper gives an asymptotic version of the Delta model, allowing simple expressions to be obtained for the estimator of each of the above-mentioned chance-corrected indices (as well as for its standard error).
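The Delta model's asymptotic expressions are only available in the paper. To illustrate what "chance-corrected" means in this context, here is a sketch of Cohen's kappa, the Kappa model the abstract names as the alternative, computed from a 2 × 2 agreement table; the example table is hypothetical.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table between two raters.

    Kappa is the chance-corrected overall agreement (po - pe) / (1 - pe),
    where po is the observed agreement and pe the agreement expected by
    chance from the marginals.  The abstract's Delta model is an
    alternative chance-correction whose formulas are in the paper.
    """
    t = np.asarray(table, dtype=float)
    n = t.sum()
    po = np.trace(t) / n                                   # observed agreement
    pe = (t.sum(axis=1) * t.sum(axis=0)).sum() / n ** 2    # chance agreement
    return (po - pe) / (1 - pe)

print(cohens_kappa([[40, 5], [10, 45]]))
```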

6.
This article considers K pairs of incomplete correlated 2 × 2 tables in which the measure of interest is the risk difference between marginal and conditional probabilities. A Wald-type statistic and a score-type statistic are presented for testing the homogeneity of the risk differences across strata. Power and sample size formulae based on these two statistics are derived. Figures of required sample size against risk difference (or marginal probability) are given. A real example is used to illustrate the proposed methods.
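The exact forms of the Wald- and score-type statistics for incomplete correlated tables are in the paper. As a generic template of what a Wald-type homogeneity test looks like, the sketch below assumes stratum-specific risk-difference estimates and estimated variances are already available (inputs and numbers are hypothetical), and compares the usual precision-weighted chi-square statistic against a chi-square with K − 1 degrees of freedom.

```python
import numpy as np
from scipy.stats import chi2

def wald_homogeneity_test(deltas, variances):
    """Generic Wald-type test of H0: delta_1 = ... = delta_K given
    stratum-specific estimates and their estimated variances.

    The statistic sum_k w_k (d_k - d_bar)^2, with w_k = 1/var_k and d_bar
    the precision-weighted mean, is asymptotically chi-square with K - 1
    degrees of freedom under H0.  The paper's statistics are tailored to
    incomplete correlated tables; this is only the generic template.
    """
    d = np.asarray(deltas, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    d_bar = np.sum(w * d) / np.sum(w)
    stat = np.sum(w * (d - d_bar) ** 2)
    return stat, chi2.sf(stat, df=len(d) - 1)

print(wald_homogeneity_test([0.10, 0.14, 0.08], [0.002, 0.003, 0.0025]))
```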

7.
Consider dichotomous observations taken from T strata or tables, where within each table the effects of J > 2 doses or treatments are evaluated. The dose or treatment effect may be measured by various functions of the outcome probabilities, but it is assumed that the effect is the same in each table. Previous work on finding confidence intervals is specific to a particular function of the probabilities, based on only two doses, and limited to ML estimation of the nuisance parameters. In this paper, confidence intervals are developed based on the C(α) test, allowing a unification and generalization of previous work. A computational procedure is given that minimizes the number of iterations required. An extension of the procedure to the regression framework, suitable when there are large numbers of sparse tables, is outlined.
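Confidence intervals of this kind are obtained by test inversion: the interval collects every parameter value whose test is not rejected. The sketch below shows that inversion mechanics with a deliberately simple stand-in test (an exact test for a single binomial proportion); the paper's C(α)-type statistic for stratified tables with nuisance parameters is not reproduced, and the grid-search approach is only an illustration.

```python
import numpy as np
from scipy.stats import binom

def invert_test(pvalue_at, grid, alpha=0.05):
    """Confidence set by test inversion: keep every parameter value whose
    test is not rejected at level alpha.  The paper inverts a C(alpha)-type
    statistic; here `pvalue_at` is a placeholder test."""
    keep = [psi for psi in grid if pvalue_at(psi) >= alpha]
    return min(keep), max(keep)

# Illustrative stand-in: exact two-sided test for one binomial proportion
# (NOT the paper's stratified setting).
x, n = 7, 20

def exact_pvalue(p0):
    lower = binom.cdf(x, n, p0)
    upper = binom.sf(x - 1, n, p0)
    return min(1.0, 2 * min(lower, upper))

print(invert_test(exact_pvalue, np.linspace(0.001, 0.999, 999)))
```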

8.
This article studies the construction of a Bayesian confidence interval for the risk difference in a 2 × 2 table with a structural zero. The exact posterior distribution of the risk difference is derived under a Dirichlet prior distribution, and a tail-based interval is used to construct the Bayesian confidence interval. The frequentist performance of the tail-based interval is investigated by simulation and compared with the score-based interval. Our results show that the tail-based interval under the Jeffreys prior performs as well as or better than the score-based confidence interval.
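The paper derives the exact posterior; a Monte Carlo version of the same tail-based idea is easy to sketch. Below, the three observable cells of the structural-zero table are modelled as multinomial with a Dirichlet prior, posterior draws are mapped through a functional g of the cell probabilities, and the equal-tail quantiles form the interval. The specific risk-difference parameterization used in the paper is not given in the abstract, so g (and the example counts) are left as hypothetical placeholders.

```python
import numpy as np

def tail_based_interval(counts, g, alpha=(0.5, 0.5, 0.5), conf=0.95,
                        n_draws=50000, seed=0):
    """Equal-tail ("tail-based") Bayesian interval for a functional g of the
    cell probabilities of a table with a structural zero.

    counts : the three observable cell counts (the fourth cell is the
             structural zero), modelled as multinomial.
    g      : functional of the probability vector; the paper's risk-difference
             parameterization is not in the abstract, so g is user-supplied.
    alpha  : Dirichlet prior parameters (0.5s give a Jeffreys-type prior).
    """
    rng = np.random.default_rng(seed)
    post = rng.dirichlet(np.asarray(alpha) + np.asarray(counts), size=n_draws)
    vals = np.array([g(p) for p in post])
    lo, hi = np.quantile(vals, [(1 - conf) / 2, 1 - (1 - conf) / 2])
    return lo, hi

# Purely illustrative functional: difference of two cell probabilities.
print(tail_based_interval([30, 10, 5], g=lambda p: p[0] - p[1]))
```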

9.
This paper studies the construction of a Bayesian confidence interval for the risk ratio (RR) in a 2 × 2 table with a structural zero. Under a Dirichlet prior distribution, the exact posterior distribution of the RR is derived, and a tail-based interval is suggested for constructing the Bayesian confidence interval. The frequentist performance of this confidence interval is investigated by simulation and compared with the score-based interval in terms of mean coverage probability and mean expected width. An advantage of the Bayesian confidence interval is that it is well defined for all data structures and has a shorter expected width. Our simulation shows that the Bayesian tail-based interval under the Jeffreys prior performs as well as or better than the score-based confidence interval.
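The evaluation criteria here, coverage probability and expected width estimated by simulation, are generic and easy to illustrate. The sketch below runs such a frequentist evaluation for a simple stand-in procedure (the Jeffreys equal-tail interval for one binomial proportion) rather than the paper's RR interval; the data-generating values are hypothetical.

```python
import numpy as np
from scipy.stats import beta

def jeffreys_interval(x, n, conf=0.95):
    """Equal-tail Bayesian interval for a binomial proportion under the
    Jeffreys Beta(0.5, 0.5) prior (a stand-in for any interval procedure)."""
    a = (1 - conf) / 2
    lo = beta.ppf(a, x + 0.5, n - x + 0.5) if x > 0 else 0.0
    hi = beta.ppf(1 - a, x + 0.5, n - x + 0.5) if x < n else 1.0
    return lo, hi

def coverage_and_width(p_true, n, n_sim=5000, seed=0):
    """Estimate coverage probability and expected width by simulation --
    the two criteria the abstract uses to compare intervals."""
    rng = np.random.default_rng(seed)
    xs = rng.binomial(n, p_true, size=n_sim)
    ints = np.array([jeffreys_interval(x, n) for x in xs])
    cover = np.mean((ints[:, 0] <= p_true) & (p_true <= ints[:, 1]))
    width = np.mean(ints[:, 1] - ints[:, 0])
    return cover, width

print(coverage_and_width(0.3, 40))
```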

10.
Preliminary tests of significance on crucial assumptions are often performed before drawing the inferences of primary interest. In a factorial trial, the data may be pooled across the columns or rows for making inferences concerning the efficacy of the drugs (simple effect) in the absence of interaction. Pooling the data has the advantage of higher power due to the larger sample size. On the other hand, in the presence of interaction, such pooling may seriously inflate the type I error rate in testing for the simple effect.

A preliminary test for interaction is therefore in order. If this preliminary test is not significant at some prespecified level of significance, the data are pooled for testing the efficacy of the drugs at a specified α level; otherwise, the corresponding cell means are used for testing the efficacy of the drugs at the specified α. This paper demonstrates that this adaptive procedure may seriously inflate the overall type I error rate. Such inflation happens even in the absence of interaction.

One interesting result is that the type I error rate of the adaptive procedure depends on the interaction and the square root of the sample size only through their product. One consequence of this result is as follows. No matter how small the non-zero interaction might be, the inflation of the type I error rate of the always-pool procedure will eventually become unacceptable as the sample size increases. Therefore, in a very large study, even though the interaction is suspected to be very small but non-zero, the always-pool procedure may seriously inflate the type I error rate in testing for the simple effects.

It is concluded that the 2 × 2 factorial design is not an efficient design for detecting simple effects, unless the interaction is negligible.
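The inflation described above is easy to reproduce by simulation. The sketch below implements one version of the adaptive procedure under assumptions the abstract does not specify: normal responses with known unit variance, n observations per cell, cell means chosen so the simple effect of drug A at B = 0 is zero while the interaction is nonzero, and two-sided z-tests throughout. The estimated rejection rate under this null illustrates how the inflation grows with the product of the interaction and the square root of the sample size.

```python
import numpy as np
from scipy.stats import norm

def adaptive_type1_error(n, interaction, alpha=0.05, alpha_pre=0.05,
                         n_sim=20000, seed=0):
    """Monte Carlo estimate of the overall type I error of the adaptive
    pool / don't-pool procedure in a 2x2 factorial (sketch assumptions:
    normal responses, known unit variance, n observations per cell)."""
    rng = np.random.default_rng(seed)
    # Cell means: simple effect of A at B = 0 is zero (H0 true);
    # the interaction shifts only the (A=1, B=1) cell.
    mu = {(0, 0): 0.0, (1, 0): 0.0, (0, 1): 0.0, (1, 1): interaction}
    z_crit = norm.ppf(1 - alpha / 2)
    z_pre = norm.ppf(1 - alpha_pre / 2)
    rejections = 0
    for _ in range(n_sim):
        ybar = {k: rng.normal(m, 1.0, n).mean() for k, m in mu.items()}
        # Preliminary interaction test.
        z_int = (ybar[1, 1] - ybar[0, 1] - ybar[1, 0] + ybar[0, 0]) / np.sqrt(4 / n)
        if abs(z_int) < z_pre:
            # Pool over B: main-effect test of A (biased by interaction / 2).
            diff = (ybar[1, 0] + ybar[1, 1]) / 2 - (ybar[0, 0] + ybar[0, 1]) / 2
            z = diff / np.sqrt(1 / n)
        else:
            # Use cell means: simple effect of A at B = 0.
            z = (ybar[1, 0] - ybar[0, 0]) / np.sqrt(2 / n)
        rejections += abs(z) > z_crit
    return rejections / n_sim

print(adaptive_type1_error(n=50, interaction=0.4))
```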

11.
12.
Following Gart (1966), a test of significance for the odds ratio in a 2 × 2 table is developed, based on a semi-empirical method of approximating discrete distributions by their continuous analogues. The distribution of the test statistic W, the ratio of two independent F-variates, is derived. This approximate technique is compared with the "exact" test, the uncorrected χ² test, and a normal approximation based on ln W.
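The construction of W as a ratio of F-variates is detailed in the paper, not the abstract. The two standard comparators it names are readily available; the sketch below runs the exact (Fisher) test and the uncorrected chi-square test on a hypothetical 2 × 2 table.

```python
import numpy as np
from scipy.stats import fisher_exact, chi2_contingency

# The two comparators named in the abstract for a single 2x2 table;
# the W statistic itself is defined in the paper and not reproduced here.
table = np.array([[12, 8],
                  [5, 15]])

odds_ratio, p_exact = fisher_exact(table, alternative="two-sided")
chi2_stat, p_chi2, _, _ = chi2_contingency(table, correction=False)

print(f"sample odds ratio      : {odds_ratio:.3f}")
print(f"exact (Fisher) p-value : {p_exact:.4f}")
print(f"uncorrected chi2 p-val : {p_chi2:.4f}")
```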
