Similar Documents
20 similar documents found (search time: 0 ms)
1.
2.
Since the squared ranks test was first proposed by Taha in 1964, it has been mentioned by several authors as a test that is easy to use, with good power in many situations. It is almost as easy to use as the Wilcoxon rank sum test, and has greater power when two populations differ in their scale parameters rather than in their location parameters. This paper discusses the versatility of the squared ranks test, introduces a test which uses squared ranks, and presents some exact tables.
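For intuition, a minimal sketch of a Taha-type squared ranks statistic for comparing scale (the function name, the use of sample means, and the simplistic tie handling are illustrative assumptions, not taken from the paper):

```python
def squared_ranks_stat(x, y):
    # Taha-type squared ranks statistic for a scale comparison:
    # rank the absolute deviations from each sample's own mean in the
    # combined sample, then sum the squared ranks from sample x.
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    dev_x = [abs(v - mx) for v in x]
    dev_y = [abs(v - my) for v in y]
    combined = sorted(dev_x + dev_y)

    def rank(d):
        return combined.index(d) + 1  # minimum rank for ties, for simplicity

    return sum(rank(d) ** 2 for d in dev_x)
```

Large (or small) values of the statistic suggest the x population is more (or less) dispersed; exact tables such as those the paper presents would supply the critical values.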

3.
Theorems are proved for the maxima and minima of ∏Ri! ∏Cj! / (T! ∏yij!) over r × c contingency tables Y = (yij) with row sums R1,…,Rr, column sums C1,…,Cc, and grand total T. These results are implemented in the network algorithm of Mehta and Patel (1983) for computing the P-value of Fisher's exact test for unordered r × c contingency tables. The decrease in the amount of computing time can be substantial when the column sums are very different.
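The quantity being bounded is the conditional (hypergeometric) probability of a table given its margins. A small sketch of that probability, with illustrative names:

```python
from math import factorial, prod

def table_prob(y):
    # Conditional probability of table y given its margins:
    # prod(Ri!) * prod(Cj!) / (T! * prod(yij!)).
    R = [sum(row) for row in y]            # row sums
    C = [sum(col) for col in zip(*y)]      # column sums
    T = sum(R)                             # grand total
    num = prod(factorial(r) for r in R) * prod(factorial(c) for c in C)
    den = factorial(T) * prod(factorial(v) for row in y for v in row)
    return num / den
```

Summing this probability over the tables at least as extreme as the observed one gives the exact P-value; the network algorithm prunes that enumeration using bounds on the product above.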

4.
Existing equivalence tests for multinomial data are valid asymptotically, but the level is not properly controlled for small and moderate sample sizes. We resolve this difficulty by developing an exact multinomial test for equivalence and an associated confidence interval procedure. We also derive a conservative version of the test that is easy to implement even for very large sample sizes. Both tests use a notion of equivalence that is based on the cumulative distribution function, with two probability vectors being considered equivalent if their partial sums never differ by more than some specified constant. We illustrate the methods by applying them to Weldon's dice data, to data on the digits of π, and to data collected by Mendel. The Canadian Journal of Statistics 37: 47–59; © 2009 Statistical Society of Canada.

5.
Pearson's chi-square (Pe), likelihood ratio (LR), and Fisher–Freeman–Halton (Fi) test statistics are commonly used to test the association of an unordered r × c contingency table. Asymptotically, these test statistics follow a chi-square distribution. For small sample cases, the asymptotic chi-square approximations are unreliable. Therefore, the exact p-value is frequently computed conditional on the row and column sums. One drawback of the exact p-value is that it is conservative. Different adjustments have been suggested, such as Lancaster's mid-p version and randomized tests. In this paper, we have considered 3×2, 2×3, and 3×3 tables and compared the exact power and significance level of these tests' standard, mid-p, and randomized versions. The mid-p and randomized test versions have approximately the same power, and higher power than that of the standard test versions. The mid-p type-I error probability seldom exceeds the nominal level. For a given set of parameters, the powers of Pe, LR, and Fi differ in approximately the same way for the standard, mid-p, and randomized test versions. Although there is no general ranking of these tests, in some situations, especially when averaged over the parameter space, Pe and Fi have the same power, and slightly higher power than LR. When the sample sizes (i.e., the row sums) are equal, the differences are small; otherwise the observed differences can be 10% or more. In some cases, perhaps characterized by poorly balanced designs, LR has the highest power.

6.
We consider testing the quasi-independence hypothesis for two-way contingency tables which contain some structural zero cells. For sparse contingency tables where the large sample approximation is not adequate, Markov chain Monte Carlo exact tests are powerful tools. To construct a connected chain over the two-way contingency tables with fixed sufficient statistics and an arbitrary configuration of structural zero cells, an algebraic algorithm proposed by Diaconis and Sturmfels [Diaconis, P. and Sturmfels, B. (1998). The Annals of Statistics, 26, pp. 363–397.] can be used. However, their algorithm does not seem to be a satisfactory answer, because the Markov basis produced by the algorithm often contains many redundant elements and is hard to interpret. We derive an explicit characterization of a minimal Markov basis, prove its uniqueness, and present an algorithm for obtaining the unique minimal basis. A computational example and a discussion of further basis reduction for the case of positive sufficient statistics are also given.
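For intuition only, the simplest moves of such a chain on two-way tables add +1/-1 around a 2×2 minor, which preserves all row and column sums; a hypothetical sketch that skips structural-zero cells (this is the naive move set, not the minimal Markov basis the paper constructs):

```python
import random

def mcmc_step(table, forbidden=frozenset()):
    # One step of a Markov chain over tables with fixed margins: pick
    # two rows and two columns and apply the +1/-1 move on the 2x2
    # minor, provided all four cells stay nonnegative and none is a
    # structural zero ('forbidden' holds (row, col) index pairs).
    i, j = random.sample(range(len(table)), 2)
    k, l = random.sample(range(len(table[0])), 2)
    eps = random.choice([1, -1])
    cells = [(i, k, eps), (i, l, -eps), (j, k, -eps), (j, l, eps)]
    if any((a, b) in forbidden for a, b, _ in cells):
        return table  # move touches a structural zero; stay put
    if all(table[a][b] + d >= 0 for a, b, d in cells):
        for a, b, d in cells:
            table[a][b] += d
    return table
```

With structural zeros present, these basic moves alone may fail to connect all tables with the given margins, which is exactly why a carefully characterized Markov basis is needed.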

7.
We propose a simple and robust algorithm for exact inference in 2 × 2 contingency tables. It is based on recursive relations allowing efficient computation of odds-ratio estimates, confidence limits and p-values for Fisher's test. A factor of 3–10 is gained in terms of computer time compared with the classical algorithm of Thomas.
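For reference, the p-value in question can be sketched directly from the hypergeometric distribution, without the recursive speed-ups the paper proposes (names and the tie tolerance are illustrative):

```python
from math import comb

def fisher_2x2_pvalue(a, b, c, d):
    # Two-sided Fisher exact p-value for the table [[a, b], [c, d]]:
    # sum the probabilities of all tables with the same margins whose
    # probability does not exceed that of the observed table.
    n1, n2, m1 = a + b, c + d, a + c
    n = n1 + n2

    def p(x):  # hypergeometric probability that cell (1,1) equals x
        return comb(n1, x) * comb(n2, m1 - x) / comb(n, m1)

    p_obs = p(a)
    lo, hi = max(0, m1 - n2), min(n1, m1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)
```

This direct enumeration is fine for small counts; the recursive relations of the paper matter when the margins are large.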

8.
9.
Several approximations to the exact distribution of the Kruskal-Wallis test statistic presently exist. These approximations can roughly be grouped into two classes: (i) computationally difficult with good accuracy, and (ii) easy to compute but not as accurate as the first class. The purpose of this paper is to introduce two new approximations (one in the latter class and one which is computationally more involved) and to compare these with other popular approximations. These comparisons use exact probabilities where available and Monte Carlo simulation otherwise.
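The statistic whose distribution is being approximated can be computed in a few lines; a minimal sketch using midranks for ties (and omitting the usual tie correction factor, for brevity):

```python
def kruskal_wallis_H(*samples):
    # Kruskal-Wallis H statistic: rank the pooled data, sum ranks per
    # group, then H = 12/(n(n+1)) * sum(R_g^2 / n_g) - 3(n+1).
    data = sorted((v, g) for g, s in enumerate(samples) for v in s)
    n = len(data)
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign midranks to runs of tied values
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1
        for t in range(i, j):
            ranks[t] = (i + 1 + j) / 2
        i = j
    rank_sums = [0.0] * len(samples)
    for (v, g), r in zip(data, ranks):
        rank_sums[g] += r
    return 12 / (n * (n + 1)) * sum(
        rs * rs / len(s) for rs, s in zip(rank_sums, samples)
    ) - 3 * (n + 1)
```

The approximations the paper compares all concern the null distribution of this H, which is only asymptotically chi-square.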

10.
Using the concept of distributional distance, a test statistic is proposed for the hypothesis of independence in multidimensional contingency tables. A Monte Carlo study is done to empirically compare the power of the proposed test to the Pearson chi-square and the likelihood ratio tests. Further, the nonnull distribution under various spike alternatives is tabulated.

11.
Friedman's test is used for assessing the independence of repeated experiments resulting in ranks, summarized as a table of integer entries ranging from 1 to k, with k columns and N rows. For its practical use, the hypothesis test can be carried out either from published tables with exact critical values for small k and N, or using an asymptotic analytical approximation valid for large N or large k. The quality of the approximation, measured as the relative difference of the true critical values with respect to those arising from the asymptotic approximation, is simply not known. The literature review shows cases where the wrong conclusion could have been drawn using it, although it may not be the only cause of opposite decisions. By Monte Carlo simulation we conclude that published tables do not cover a large enough set of (k, N) values to assure adequate accuracy. Our proposal is to systematically extend existing tables for k and N values, so that using the analytical approximation for values outside them will incur less than a prescribed relative error. For illustration purposes some of the tables have been included in the paper, but the complete set is presented as source code valid for Octave/Matlab/Scilab etc., and amenable to being ported to other programming languages.
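A minimal sketch of how such Monte Carlo critical values can be produced (default rep count and seed are illustrative choices, not the paper's settings):

```python
import random

def friedman_stat(ranks):
    # Friedman chi-square statistic for an N x k table of within-row ranks.
    N, k = len(ranks), len(ranks[0])
    col = [sum(row[j] for row in ranks) for j in range(k)]
    return 12 / (N * k * (k + 1)) * sum(c * c for c in col) - 3 * N * (k + 1)

def mc_critical_value(k, N, alpha=0.05, reps=2000, seed=1):
    # Monte Carlo estimate of the upper critical value under the null,
    # where each row is an independent uniform permutation of 1..k.
    random.seed(seed)
    base = list(range(1, k + 1))
    stats = sorted(
        friedman_stat([random.sample(base, k) for _ in range(N)])
        for _ in range(reps)
    )
    return stats[int((1 - alpha) * reps)]
```

Comparing such simulated critical values to the asymptotic chi-square quantile with k - 1 degrees of freedom is precisely the relative-error check the paper advocates.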

12.
Let F(x) be a life distribution. An exact test is given for testing H0: F is exponential, versus H1: F ∈ NBUE (NWUE), along with a table of critical values for n = 5(1)80, and n = 80(5)65. An asymptotic test is made available for large values of n, where the standardized normal table can be used for testing.

13.
In 1935, R.A. Fisher published his well-known "exact" test for 2×2 contingency tables. This test is based on the conditional distribution of a cell entry when the row and column marginal totals are held fixed. Tocher (1950) and Lehmann (1959) showed that Fisher's test, when supplemented by randomization, is uniformly most powerful among all unbiased tests (UMPU). However, since all the practical tests for 2×2 tables are nonrandomized (and therefore biased), the UMPU test is not necessarily more powerful than other tests of the same or lower size. In this work, the two-sided Fisher exact test and the UMPU test are compared with six nonrandomized unconditional exact tests with respect to their power. In both the two-binomial and double-dichotomy models, the UMPU test is often less powerful than some of the unconditional tests of the same (or even lower) size. Thus, the assertion that the Tocher-Lehmann modification of Fisher's conditional test is the optimal test for 2×2 tables is unjustified.

14.
Testing conditional symmetry against various alternative diagonals-parameter symmetry models often provides a point of departure in studies of square contingency tables with ordered categories. Typically, chi-square or likelihood-ratio tests are used for such purposes. Since these tests depend on the validity of asymptotic approximation, they may be inappropriate in small-sample situations where exact tests are required. In this paper, we apply the theory of UMP unbiased tests to develop a class of exact tests for conditional symmetry in small samples. Oesophageal cancer and longitudinal income data are used to illustrate the approach.

15.
The ordinary Wilcoxon signed rank test table provides confidence intervals for the median of one population. Adjusted Wilcoxon signed rank test tables, which can provide confidence intervals for the median and the 10th percentile of one population, are created in this paper. A base-(n + 1) number system and theorems on the symmetry of the adjusted Wilcoxon signed rank test statistic are derived for programming. Theorem 1 states that the adjusted Wilcoxon signed rank test statistic is symmetric around n(n + 1)/4. Theorem 2 states that the adjusted Wilcoxon signed rank test statistics with the same number of negative ranks m are symmetric around m(n + 1)/2. 87.5% and 85% confidence intervals of the median are given in the table for n = 12, 13,…, 29 to create approximate 95% confidence intervals of the ratio of medians for two independent populations. 95% and 92.5% confidence intervals of the 10th percentile are given in the table for n = 26, 27, 28, 29 to create approximate 95% confidence regions of the ratio of the 10th percentiles for two independent populations. Finally, two large datasets from the wood industry are partitioned to verify the correctness of the adjusted Wilcoxon signed rank test tables for small samples.
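A related standard construction (the ordinary signed rank interval, not the adjusted tables of the paper): the confidence interval for the median can be read off the ordered Walsh averages, with the depth k taken from a signed rank table for the desired confidence level. A hypothetical helper:

```python
def walsh_ci(x, k):
    # Distribution-free CI for the median from ordered Walsh averages
    # (all pairwise averages (x_i + x_j)/2 with i <= j): drop the k
    # smallest and k largest; k comes from a signed rank table.
    w = sorted((x[i] + x[j]) / 2
               for i in range(len(x)) for j in range(i, len(x)))
    return w[k], w[-k - 1]
```

The paper's adjusted tables play the role of supplying the appropriate k (and its analogue for the 10th percentile) at the stated confidence levels.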

16.
A modified transformed chi-square statistic is defined for testing hypotheses of quasi-independence in the incomplete multi-dimensional contingency table, and a simple method for determining degrees of freedom is given. A modified transformed chi-square estimator of the expected cell frequencies is given in closed form for a general class of exact linear constraints. The covariance matrix of estimated cell frequencies is derived under the assumption of a conditional Poisson distribution.

17.
The Wilcoxon rank-sum test and its variants are historically well known to be very powerful nonparametric decision rules for testing no location difference between two groups given paired data versus a shift alternative. In this article, we propose a new alternative empirical likelihood (EL) ratio approach for testing the equality of marginal distributions given that sampling is from a continuous bivariate population. We show that in various shift alternative scenarios the proposed exact test is superior to the classic nonparametric procedures, which may break down completely or are frequently inferior to the density-based EL ratio test. This is particularly true in the cases where there is a nonconstant shift under the alternative or the data distributions are skewed. An extensive Monte Carlo study shows that the proposed test has excellent operating characteristics. We apply the density-based EL ratio test to analyze real data from two medical studies.

18.
Fisher's exact test for two-by-two contingency tables has repeatedly been criticized as being too conservative. These criticisms arise most frequently in the context of a planned experiment for which the numbers of successes in each of two experimental groups are assumed to be binomially distributed. It is argued here that the binomial model is often unrealistic, and that the departures from the binomial assumptions reduce the conservatism in Fisher's exact test. Further discussion supports a recent claim of Barnard (1989) that the residual conservatism is attributable, not to any additional information used by the competing method, but to the discrete nature of the test, and can be drastically reduced through the use of Lancaster's mid-p-value. The binomial model is not recommended in that it depends on extra, questionable assumptions.
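Lancaster's mid-p idea is simple: count only half the probability of the observed outcome. A minimal one-sided sketch for a 2×2 table (names are illustrative):

```python
from math import comb

def fisher_midp_one_sided(a, b, c, d):
    # One-sided mid-p for the table [[a, b], [c, d]]: half the
    # probability of the observed table plus the full probability of
    # the more extreme tables (larger first cell), under the
    # hypergeometric distribution with fixed margins.
    n1, n2, m1 = a + b, c + d, a + c
    n = n1 + n2

    def p(x):
        return comb(n1, x) * comb(n2, m1 - x) / comb(n, m1)

    hi = min(n1, m1)
    return 0.5 * p(a) + sum(p(x) for x in range(a + 1, hi + 1))
```

Because the observed table contributes only half its mass, the mid-p-value is smaller than the ordinary exact p-value, which is the source of the reduced conservatism discussed above.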

19.
We consider a likelihood ratio test of independence for large two-way contingency tables having both structural (non-random) and sampling (random) zeros in many cells. The solution of this problem is not available using standard likelihood ratio tests. One way to bypass this problem is to remove the structural zeros from the table and implement a test on the remaining cells which incorporates the randomness in the sampling zeros; the resulting test is a test of quasi-independence of the two categorical variables. This test is based only on the positive counts in the contingency table and is valid when there is at least one sampling (random) zero. The proposed (likelihood ratio) test is an alternative to the commonly used ad hoc procedures of converting the zero cells to positive ones by adding a small constant. One practical advantage of our procedure is that there is no need to know whether a zero cell is a structural zero or a sampling zero. We model the positive counts using a truncated multinomial distribution. In fact, we have two truncated multinomial distributions: one for the null hypothesis of independence and the other for the unrestricted parameter space. We use Monte Carlo methods to obtain the maximum likelihood estimators of the parameters and also the p-value of our proposed test. To obtain the sampling distribution of the likelihood ratio test statistic, we use bootstrap methods. We discuss many examples, and also empirically compare the power function of the likelihood ratio test relative to those of some well-known test statistics.
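The bootstrap p-value machinery underlying such a procedure has a generic skeleton; a sketch under the assumption that the caller supplies the observed statistic and a function that simulates one statistic under the null (the paper's truncated-multinomial simulation would plug in as `simulate_stat`):

```python
import random

def bootstrap_pvalue(stat_obs, simulate_stat, B=999, seed=0):
    # Generic Monte Carlo / bootstrap p-value with the standard +1
    # correction, so the estimate is never exactly zero.
    random.seed(seed)
    sims = (simulate_stat(random) for _ in range(B))
    return (1 + sum(s >= stat_obs for s in sims)) / (B + 1)
```

Here `simulate_stat` receives the random number source, draws a bootstrap table under the fitted null model, and returns the likelihood ratio statistic for that table.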

20.
Practitioners of statistics are too often guilty of routinely selecting a 95% confidence level in interval estimation and ignoring the sample size and the expected size of the interval. One way to balance coverage and size is to use a loss function in a decision problem. Then either the Bayes risk or usual risk (if a pivotal quantity exists) may be minimized. It is found that some non-Bayes solutions are equivalent to Bayes results based on non-informative priors. The decision theory approach is applied to the mean and standard deviation of the univariate normal model and the mean of the multivariate normal. Tables are presented for critical values, expected size, confidence and sample size.
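To make the coverage-versus-size trade-off concrete, a toy sketch for the mean of a normal sample with known sigma, under an assumed linear loss in interval length (the loss form and weight `a` are illustrative, not the paper's):

```python
from math import erf, sqrt

def optimal_z(a, sigma, n):
    # Choose the normal quantile z minimizing a risk that trades off
    # interval length against noncoverage for the mean of a normal
    # sample: risk(z) = a * (2 z sigma / sqrt(n)) + 2 * (1 - Phi(z)).
    # 'a' is the cost per unit of expected interval length.
    def Phi(z):
        return 0.5 * (1 + erf(z / sqrt(2)))

    def risk(z):
        return a * 2 * z * sigma / sqrt(n) + 2 * (1 - Phi(z))

    grid = [i / 1000 for i in range(1, 5001)]  # crude grid search
    return min(grid, key=risk)
```

The optimum sits where the marginal cost of widening the interval equals the marginal gain in coverage, so the implied confidence level responds to the sample size instead of being fixed at 95%.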
