Similar Documents
19 similar documents were found.
1.
For the analysis of 2 × 2 contingency tables with one set of fixed margins, a number of authors (e.g. Woolf, 1955; Cox, 1970) have proposed the use of various modified estimators based upon the empirical logistic transform. In this paper the moments of such estimators are considered and their small-sample properties are investigated numerically.
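As an illustration only (the counts below are hypothetical, not from the paper), a minimal Python sketch of the modified empirical logistic transform with the usual one-half correction and its approximate variance, applied to two binomial rows of a 2 × 2 table:

import numpy as np

def empirical_logit(a, n):
    # Modified empirical logistic transform (Cox, 1970):
    # log((a + 0.5) / (n - a + 0.5)), finite even when a = 0 or a = n.
    return np.log((a + 0.5) / (n - a + 0.5))

def logit_variance(a, n):
    # Approximate variance of the transform: 1/(a + 0.5) + 1/(n - a + 0.5).
    return 1.0 / (a + 0.5) + 1.0 / (n - a + 0.5)

# Hypothetical 2x2 table with fixed row margins: a successes out of n per row.
a1, n1 = 12, 40
a2, n2 = 5, 38
log_or = empirical_logit(a1, n1) - empirical_logit(a2, n2)   # estimated log odds ratio
se = np.sqrt(logit_variance(a1, n1) + logit_variance(a2, n2))
print(log_or, se)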

2.
The paper is concerned with structural properties of the acceptance regions of uniformly most powerful unbiased tests (UMPU tests) for one- and two-sided hypotheses for 2×2 tables, for instance the comparison of two proportions or testing for association. These tests can be considered as randomized versions of Fisher's exact tests. A series of monotonicity and unimodality properties is proved. These properties are equivalent to a symmetry and convexity condition often required for powerful unconditional tests. Knowledge of such properties allows a fast and, in some sense, recursive calculation of the critical values of the UMPU tests, which is important if a repeated calculation of all critical values for different sample sizes or different levels is required. This is, for example, the case if the unconditional power has to be controlled over a certain subset of the alternative, or if one is interested in powerful unconditional non-randomized tests generated by a UMPU test. Our results also imply some useful properties of the two-dimensional unconditional power function. On the other hand, some less desirable properties of the UMPU tests are also found.
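As a sketch of what a randomized version of Fisher's exact test looks like in the one-sided case (the two-sided UMPU construction studied in the paper is more involved), the following hypothetical Python fragment finds the critical value c and the randomization probability gamma of the conditional test, given the margins:

from scipy.stats import hypergeom

def randomized_critical_value(n1, n2, s, alpha=0.05):
    # One-sided randomized exact test for a 2x2 table with row sizes n1, n2 and
    # fixed column sum s, conditioning on the margins: reject when the first-row
    # count A exceeds c, and with probability gamma when A equals c.
    N = n1 + n2
    support = range(max(0, s - n2), min(n1, s) + 1)
    pmf = {a: hypergeom.pmf(a, N, n1, s) for a in support}
    tail = 0.0
    for c in sorted(support, reverse=True):
        if tail + pmf[c] > alpha:
            gamma = (alpha - tail) / pmf[c]   # exact conditional level alpha
            return c, gamma
        tail += pmf[c]

print(randomized_critical_value(n1=10, n2=12, s=8, alpha=0.05))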

3.
The power of the Fisher permutation test extended to 2 × k tables is evaluated unconditionally as a function of the underlying cell probabilities in the table. These results are then applied in assessing the sensitivity of two-generation cancer bioassays in which a fixed number of pups from each litter born in the first generation are selected to continue on test in the second generation. In this case, the two rows of the table correspond to two treatment groups and the k columns correspond to the number of animals responding in a litter. The cell probabilities in this application are based on a suitable beta-binomial superpopulation model.

4.
The 2 × 2 crossover trial uses subjects as their own control to reduce the intersubject variability in the treatment comparison, and typically requires fewer subjects than a parallel design. The generalized estimating equations (GEE) methodology has been commonly used to analyze incomplete discrete outcomes from crossover trials. We propose a unified approach to the power and sample size determination for the Wald Z-test and t-test from GEE analysis of paired binary, ordinal and count outcomes in crossover trials. The proposed method allows misspecification of the variance and correlation of the outcomes, missing outcomes, and adjustment for the period effect. We demonstrate that misspecification of the working variance and correlation functions leads to no or minimal efficiency loss in GEE analysis of paired outcomes. In general, GEE requires the assumption of missing completely at random. For bivariate binary outcomes, we show by simulation that the GEE estimate is asymptotically unbiased or only minimally biased, and the proposed sample size method is suitable under missing at random (MAR) if the working correlation is correctly specified. The performance of the proposed method is illustrated with several numerical examples. Adaptation of the method to other paired outcomes is discussed.
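A minimal sketch (hypothetical simulated data, not the paper's power and sample size formulae) of a GEE analysis of paired binary outcomes from a 2 × 2 crossover, with adjustment for the period effect and an exchangeable working correlation, using statsmodels:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Hypothetical 2x2 crossover: each subject gives a binary outcome in two periods,
# sequence AB or BA; a subject-level random effect induces within-pair correlation.
n_subj = 60
subj = np.repeat(np.arange(n_subj), 2)
period = np.tile([0, 1], n_subj)
sequence = np.repeat(rng.integers(0, 2, n_subj), 2)       # 0 = AB, 1 = BA
treat = np.where(sequence == 0, period, 1 - period)       # treatment indicator
b = np.repeat(rng.normal(0.0, 1.0, n_subj), 2)            # subject effect
eta = -0.3 + 0.8 * treat + 0.2 * period + b               # assumed true logit model
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

df = pd.DataFrame({"y": y, "treat": treat, "period": period, "subject": subj})
X = sm.add_constant(df[["treat", "period"]])              # adjusts for the period effect

gee = sm.GEE(df["y"], X, groups=df["subject"],
             family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable())     # working correlation choice
print(gee.fit().summary())                                # robust (sandwich) Wald z-tests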

5.
The most common asymptotic procedure for analyzing a 2 × 2 table (under the conditioning principle) is the chi-squared test with correction for continuity (c.f.c.). Depending on how the correction is applied, four methods have been obtained to date: one for one-tailed tests (Yates') and three for two-tailed tests (those of Mantel, Conover and Haber). In this paper two further methods are defined (one for each case), the six resulting methods are grouped in families, their individual behaviour is studied and the optimal one is selected. The conclusions are established on the assumption that the method studied is applied indiscriminately (without being subjected to validity conditions), on the basis of 400,000 tables (with sample sizes n between 20 and 300 and exact P-values between 1% and 10%) and an evaluation criterion based on the percentage of times in which the approximate P-value differs from the exact one (Fisher's exact test) by an excessive amount. The optimal c.f.c. depends on n, on E (the minimum expected quantity) and on the error α to be used, but the rule of selection is not complicated, and the new methods proposed are frequently selected. The paper also studies what occurs when E ≥ 5, as well as whether the chi-squared statistic should be corrected by the factor (n-1).
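For orientation only (the table below is hypothetical, and only Yates' correction is shown, not the Mantel, Conover or Haber variants discussed in the paper), a Python comparison of the corrected and uncorrected chi-squared P-values with the exact conditional P-value:

import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table (not from the paper).
table = np.array([[8, 12],
                  [16, 4]])

chi2_c, p_corrected, dof, expected = chi2_contingency(table, correction=True)   # Yates' c.f.c.
chi2_u, p_uncorrected, _, _ = chi2_contingency(table, correction=False)
_, p_exact = fisher_exact(table, alternative='two-sided')                       # conditional exact test

print(f"corrected p = {p_corrected:.4f}, uncorrected p = {p_uncorrected:.4f}, exact p = {p_exact:.4f}")
print("minimum expected count E =", expected.min())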

6.
Consider dichotomous observations taken from T strata or tables, where within each table the effects of J > 2 doses or treatments are evaluated. The dose or treatment effect may be measured by various functions of the probability of outcomes, but it is assumed that the effect is the same in each table. Previous work on finding confidence intervals is specific to a particular function of the probabilities, based on only two doses, and limited to ML estimation of the nuisance parameters. In this paper, confidence intervals are developed based on the C(α) test, allowing for a unification and generalization of previous work. A computational procedure is given that minimizes the number of iterations required. An extension of the procedure to the regression framework, suitable when there are large numbers of sparse tables, is outlined.

7.
We consider a sequence of contingency tables whose cell probabilities may vary randomly. The distribution of cell probabilities is modelled by a Dirichlet distribution. Bayes and empirical Bayes estimates of the log odds ratio are obtained. Emphasis is placed on estimating the risks associated with the Bayes, empirical Bayes and maximum likelihood estimates of the log odds ratio.
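A minimal Monte Carlo sketch (hypothetical counts and a flat Dirichlet prior, not the paper's empirical Bayes development or its risk comparison) of a Bayes estimate of the log odds ratio for a single 2 × 2 table:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 cell counts and a symmetric Dirichlet(1,1,1,1) prior (not from the paper).
counts = np.array([15, 5, 8, 12])          # (n11, n12, n21, n22)
prior = np.array([1.0, 1.0, 1.0, 1.0])

# Posterior of the cell probabilities is Dirichlet(counts + prior).
draws = rng.dirichlet(counts + prior, size=100_000)
log_or = np.log(draws[:, 0] * draws[:, 3] / (draws[:, 1] * draws[:, 2]))

print("posterior mean of the log odds ratio:", log_or.mean())
print("maximum likelihood estimate:", np.log(15 * 12 / (5 * 8)))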

8.
Preliminary tests of significance on crucial assumptions are often done before drawing the inferences of primary interest. In a factorial trial, the data may be pooled across the columns or rows for making inferences concerning the efficacy of the drugs (simple effect) in the absence of interaction. Pooling the data has the advantage of higher power due to the larger sample size. On the other hand, in the presence of interaction, such pooling may seriously inflate the type I error rate in testing for the simple effect.

A preliminary test for interaction is therefore in order. If this preliminary test is not significant at some prespecified level of significance, the data are pooled for testing the efficacy of the drugs at a specified α level. Otherwise, the corresponding cell means are used for testing the efficacy of the drugs at the specified α. This paper demonstrates that this adaptive procedure may seriously inflate the overall type I error rate. Such inflation happens even in the absence of interaction.

One interesting result is that the type I error rate of the adaptive procedure depends on the interaction and the square root of the sample size only through their product. One consequence of this result is as follows. No matter how small the non-zero interaction might be, the inflation of the type I error rate of the always-pool procedure will eventually become unacceptable as the sample size increases. Therefore, in a very large study, even though the interaction is suspected to be very small but non-zero, the always-pool procedure may seriously inflate the type I error rate in testing for the simple effects.

It is concluded that the 2 × 2 factorial design is not an efficient design for detecting simple effects, unless the interaction is negligible.
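A rough simulation sketch (normal outcomes with known unit variance and hypothetical effect sizes, not the paper's setting) of the adaptive pool-or-not procedure described above, illustrating how a non-zero interaction inflates the type I error rate for the simple effect:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def adaptive_reject(n, gamma, alpha_pre=0.05, alpha=0.05):
    # One simulated 2x2 factorial trial, n subjects per cell.  Cell means are chosen
    # so the simple effect of drug A (B absent) is zero while the A-by-B interaction
    # equals gamma.
    y00 = rng.normal(0.0, 1.0, n)        # no A, no B
    y10 = rng.normal(0.0, 1.0, n)        # A, no B   (simple effect of A is zero)
    y01 = rng.normal(0.0, 1.0, n)        # no A, B
    y11 = rng.normal(gamma, 1.0, n)      # A, B      (interaction = gamma)

    # Preliminary interaction test: Wald test on (y11 - y01) - (y10 - y00),
    # treating the unit variance as known for simplicity.
    inter = (y11.mean() - y01.mean()) - (y10.mean() - y00.mean())
    p_inter = 2 * stats.norm.sf(abs(inter) / np.sqrt(4.0 / n))

    if p_inter > alpha_pre:              # not significant: pool across drug B
        p = stats.ttest_ind(np.r_[y10, y11], np.r_[y00, y01]).pvalue
    else:                                # significant: use the B-absent cells only
        p = stats.ttest_ind(y10, y00).pvalue
    return p < alpha

rejection_rate = np.mean([adaptive_reject(n=100, gamma=0.4) for _ in range(5000)])
print("estimated type I error of the adaptive procedure:", rejection_rate)  # typically well above 0.05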

9.
For the situation of several 2 × 2 tables, two approaches are presented to jackknife the well-known estimators of a common odds ratio proposed by Woolf (1955) and by Mantel and Haenszel (1959). These estimators are compared with respect to their bias and mean squared error by means of a Monte Carlo study for a wide range of parameters.
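For concreteness (the strata below are hypothetical, and leave-one-stratum-out is only one of several possible jackknife schemes, not necessarily either of the paper's two), a Python sketch of the Mantel-Haenszel and Woolf estimators and a jackknife bias correction of the former:

import numpy as np

# Hypothetical strata of 2x2 tables, one row (a, b, c, d) per stratum (not from the paper).
tables = np.array([[12, 8, 6, 14],
                   [20, 15, 10, 25],
                   [7, 9, 5, 11]], dtype=float)

def mantel_haenszel(t):
    a, b, c, d = t[:, 0], t[:, 1], t[:, 2], t[:, 3]
    n = t.sum(axis=1)
    return np.sum(a * d / n) / np.sum(b * c / n)

def woolf(t):
    # Inverse-variance weighted average of per-stratum log odds ratios
    # with the usual one-half correction.
    a, b, c, d = (t[:, i] + 0.5 for i in range(4))
    log_or = np.log(a * d / (b * c))
    w = 1.0 / (1 / a + 1 / b + 1 / c + 1 / d)
    return np.exp(np.sum(w * log_or) / np.sum(w))

# Leave-one-stratum-out jackknife of the Mantel-Haenszel estimator.
k = len(tables)
theta_full = mantel_haenszel(tables)
theta_loo = np.array([mantel_haenszel(np.delete(tables, i, axis=0)) for i in range(k)])
theta_jack = k * theta_full - (k - 1) * theta_loo.mean()   # bias-corrected estimate

print(theta_full, woolf(tables), theta_jack)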

10.
11.
It is customary to use two groups of indices to evaluate a diagnostic method with a binary outcome: validity indices relative to a standard rater (sensitivity, specificity, and positive or negative predictive values) and reliability indices in the absence of a standard rater (positive, negative and overall agreement). However, none of these classic indices is chance-corrected, and this may distort the analysis of the problem (especially in comparative studies). One way of chance-correcting these indices is by using the Delta model (an alternative to the Kappa model), but this requires a computer program to carry out the calculations. This paper gives an asymptotic version of the Delta model, thus allowing simple expressions to be obtained for the estimator of each of the above-mentioned chance-corrected indices (as well as for its standard error).
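For reference only, a sketch computing the classical (non-chance-corrected) indices from a hypothetical 2 × 2 agreement table, together with Cohen's kappa as a familiar chance-corrected benchmark; the Delta model itself is not reproduced here:

# Hypothetical 2x2 classification table against a standard rater (not from the paper):
#                 standard +   standard -
tp, fp = 40, 10          # test +
fn, tn = 5, 45           # test -
n = tp + fp + fn + tn

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)
overall_agreement = (tp + tn) / n

# Cohen's kappa, shown only as a familiar chance correction; the paper's Delta model
# is an alternative correction with its own (asymptotic) estimator.
p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (overall_agreement - p_e) / (1 - p_e)
print(sensitivity, specificity, ppv, npv, overall_agreement, kappa)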

12.
This article studies the construction of a Bayesian confidence interval for risk difference in a 2×2 table with structural zero. The exact posterior distribution of the risk difference is derived under the Dirichlet prior distribution, and a tail-based interval is used to construct the Bayesian confidence interval. The frequentist performance of the tail-based interval is investigated and compared with the score-based interval by simulation. Our results show that the tail-based interval at Jeffreys prior performs as well as or better than the score-based confidence interval.
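A Monte Carlo sketch (rather than the exact posterior derived in the paper) of a tail-based interval under a Dirichlet prior; the counts are hypothetical and the risk-difference definition shown is only a placeholder for whichever definition is used in the structural-zero setting:

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical counts for the three admissible cells of a 2x2 table whose remaining
# cell is a structural zero (numbers not taken from the paper).
counts = np.array([43, 10, 17])            # e.g. (n11, n12, n22), with n21 structurally 0

# Jeffreys-type Dirichlet prior on the three free cell probabilities.
prior = np.array([0.5, 0.5, 0.5])
draws = rng.dirichlet(counts + prior, size=200_000)

def risk_difference(p):
    # Placeholder definition: conditional probability p11/(p11 + p12) minus the
    # marginal probability p11 + p12.  Substitute the definition used in the paper.
    p11, p12, p22 = p[:, 0], p[:, 1], p[:, 2]
    return p11 / (p11 + p12) - (p11 + p12)

delta = risk_difference(draws)
lower, upper = np.quantile(delta, [0.025, 0.975])   # equal-tailed (tail-based) interval
print(f"95% tail-based credible interval: ({lower:.3f}, {upper:.3f})")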

13.
Following Gart (1966), a test of significance for the odds ratio in a 2×2 table is developed based on a semi-empirical method of approximating discrete distributions by their continuous analogues. The distribution of the test statistic (W), the ratio of two independent F-variates, is derived. This approximate technique is compared with the "exact" test, the uncorrected chi-squared test, and a normal approximation based on ln W.

14.
This article considers K pairs of incomplete correlated 2 × 2 tables in which the measurement of interest is the risk difference between marginal and conditional probabilities. A Wald-type statistic and a score-type statistic are presented to test the homogeneity hypothesis about risk differences across strata. Power and sample size formulae based on the above two statistics are deduced. Figures of sample size against risk difference (or marginal probability) are given. A real example is used to illustrate the proposed methods.

15.
A relevant problem in statistics is that of drawing conclusions about the shape of the distribution of an experiment from which a sample is drawn. We consider this problem when the available information from the experimental performance cannot be exactly perceived, but rather may be assimilated with fuzzy information (as defined by L.A. Zadeh, and by H. Tanaka, T. Okuda and K. Asai). If the hypothetical distribution is completely specified, the extension of the chi-square goodness-of-fit test on the basis of some concepts in Fuzzy Sets Theory does not entail difficulties. Nevertheless, if the hypothetical distribution involves unknown parameters, the extension of the chi-square goodness-of-fit test requires the estimation of those parameters from the fuzzy data. The aim of the present paper is to prove that, under certain natural assumptions, the minimum inaccuracy principle of estimation from fuzzy observations (which we suggested in a previous paper as an operative extension of the maximum likelihood principle) supplies a suitable method for this requirement.

16.
The problem of obtaining the maximum probability 2 × c contingency table with fixed marginal sums, R = (R1, R2) and C = (C1, …, Cc), and row and column independence is equivalent to the problem of obtaining the maximum probability points (mode) of the multivariate hypergeometric distribution MH(R1; C1, …, Cc). The simplest and most general method for these problems is that of Joe (Joe, H. (1988). Extreme probabilities for contingency tables under row and column independence with application to Fisher's exact test. Commun. Statist. Theory Meth. 17(11):3677–3685). In this article we study a family of MH distributions in which a connection relationship is defined between its elements. Based on this family and on a characterization of the mode described in Requena and Martín (Requena, F., Martín, N. (2000). Characterization of maximum probability points in the multivariate hypergeometric distribution. Statist. Probab. Lett. 50:39–47), we develop a new method for the above problems which is completely general, non-recursive, very simple in practice and more efficient than Joe's method. Also, under weak conditions (which almost always hold), the proposed method provides a simple explicit solution to these problems. In addition, the well-known expression for the mode of a hypergeometric distribution is just a particular case of the method in this article.
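As a brute-force check only (not the non-recursive method of the article), a Python sketch that enumerates the admissible first rows of a small 2 × c table with fixed margins and locates the mode of the corresponding multivariate hypergeometric distribution:

from itertools import product
from scipy.stats import multivariate_hypergeom

# Hypothetical margins (not from the article): row sums R and column sums C of a 2 x 3 table.
R = (7, 9)
C = (5, 6, 5)

# The first row X = (x1, ..., xc) of a table with these margins follows MH(R1; C1, ..., Cc).
# Enumerate all admissible first rows and keep the most probable one (the mode).
best_x, best_p = None, -1.0
for x in product(*(range(min(R[0], cj) + 1) for cj in C)):
    if sum(x) != R[0]:
        continue
    p = multivariate_hypergeom.pmf(x=list(x), m=list(C), n=R[0])
    if p > best_p:
        best_x, best_p = x, p

print("mode of MH(R1; C1, ..., Cc):", best_x, "probability:", best_p)
# For c = 2 this reduces to the familiar hypergeometric mode floor((R1 + 1)(C1 + 1) / (N + 2)).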

17.
We consider binomially distributed random variables whose parameters are unknown, some of which need to be estimated. We study the maximum likelihood ratio test and the maximally selected χ2-test to detect whether there is a change in the distributions among the random variables. Their limit distributions under the null hypothesis and their asymptotic distributions under the alternative hypothesis are obtained when the number of observations is fixed. We discuss the properties of the limit distribution and give an efficient way to calculate the probabilities of multivariate normal random variables. Finally, the results for both tests are applied to the Lindisfarne data and the Talipes data. Our conclusions are consistent with other researchers' findings.
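A simulation sketch (synthetic data, not the Lindisfarne or Talipes data) of the maximally selected χ2 statistic: each candidate change point splits the binomial sequence into two pooled groups, and the largest 2 × 2 chi-squared statistic locates the change:

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(5)

# Hypothetical sequence of binomial observations: the success probability changes
# after position 12.
n_trials = np.full(20, 30)
p_true = np.r_[np.full(12, 0.3), np.full(8, 0.55)]
successes = rng.binomial(n_trials, p_true)

# Maximally selected chi-square: for every candidate change point k, pool the
# observations before and after k into a 2x2 table and take the largest statistic.
stats_at_k = []
for k in range(1, len(successes)):
    s1, n1 = successes[:k].sum(), n_trials[:k].sum()
    s2, n2 = successes[k:].sum(), n_trials[k:].sum()
    table = np.array([[s1, n1 - s1], [s2, n2 - s2]])
    chi2_stat, _, _, _ = chi2_contingency(table, correction=False)
    stats_at_k.append(chi2_stat)

k_hat = int(np.argmax(stats_at_k)) + 1
print("estimated change point:", k_hat, "maximum chi-square:", max(stats_at_k))
# Note: the null distribution of the maximum is not chi-square(1); its limit is derived in the paper.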

18.
The profile likelihood of the reliability parameter θ = P(X < Y), or of the ratio of means, when X and Y are independent exponential random variables, has a simple analytical expression and is a powerful tool for making inferences. Inferences about θ can be given in terms of likelihood-confidence intervals with a simple algebraic structure, even for small and unequal samples. The case of right-censored data can also be handled in a simple way. This is in marked contrast with the complicated expressions, dependent on cumbersome numerical calculations of multidimensional integrals, required to obtain the asymptotic confidence intervals traditionally presented in the scientific literature.
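A numerical sketch with simulated exponential samples (not data from the paper): the profile likelihood of θ = P(X < Y) for uncensored samples, its maximum, and an approximate 95% likelihood-confidence interval from the usual chi-squared cutoff:

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)

# Hypothetical samples: X ~ Exp(rate 1.0), Y ~ Exp(rate 0.5), so theta = 1/1.5.
x = rng.exponential(scale=1.0, size=15)
y = rng.exponential(scale=2.0, size=12)
n, m, Sx, Sy = len(x), len(y), x.sum(), y.sum()

def log_profile(theta):
    # Profile log-likelihood of theta = P(X < Y) = lx / (lx + ly), up to a constant.
    # With psi = lx/ly = theta/(1 - theta), maximizing out the nuisance rate gives
    # lp(psi) = n*log(psi) - (n + m)*log(psi*Sx + Sy).
    psi = theta / (1.0 - theta)
    return n * np.log(psi) - (n + m) * np.log(psi * Sx + Sy)

theta_hat = (n * Sy) / (n * Sy + m * Sx)       # closed-form maximum of the profile likelihood
grid = np.linspace(1e-4, 1 - 1e-4, 10_000)
rel = np.exp(log_profile(grid) - log_profile(theta_hat))

cutoff = np.exp(-0.5 * chi2.ppf(0.95, df=1))   # about 0.147: approximate 95% likelihood interval
inside = grid[rel >= cutoff]
print("MLE of theta:", theta_hat)
print("95% likelihood-confidence interval: (%.3f, %.3f)" % (inside[0], inside[-1]))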

19.