Similar Articles
20 similar articles found (search time: 190 ms)
1.
Responses of two groups, measured on the same ordinal scale, are compared through the column effect association model, applied to the corresponding 2 × J contingency table. Monotonic or umbrella-shaped orderings of the model's scores correspond to stochastic or umbrella ordering of the underlying response distributions, respectively. An algorithm for testing all possible hypotheses of stochastic ordering and deciding on an appropriate one is proposed.

2.
For an R×R square contingency table with nominal categories, the present paper proposes a model which indicates that the absolute value of the log of the ratio of the odds ratio for rows i and j and columns j and R to the corresponding symmetric odds ratio for rows j and R and columns i and j is constant for every i < j < R. The model is an extension of the quasi-symmetry model and describes a structure of asymmetry of odds ratios. An example is given.

3.
This paper considers a connected Markov chain for sampling 3 × 3 × K contingency tables having fixed two-dimensional marginal totals. Such sampling arises in performing various tests of the hypothesis of no three-factor interactions. A Markov chain algorithm is a valuable tool for evaluating P-values, especially for sparse datasets where large-sample theory does not work well. To construct a connected Markov chain over high-dimensional contingency tables with fixed marginals, algebraic algorithms have been proposed. These algorithms involve computations in polynomial rings using Gröbner bases. However, algorithms based on Gröbner bases do not incorporate symmetry among variables and are very time-consuming when the contingency tables are large. We construct a minimal basis for a connected Markov chain over 3 × 3 × K contingency tables. The minimal basis is unique. Some numerical examples illustrate the practicality of our algorithms.

4.
The restricted minimum φ-divergence estimator [Pardo, J.A., Pardo, L. and Zografos, K., 2002, Minimum φ-divergence estimators with constraints in multinomial populations. Journal of Statistical Planning and Inference, 104, 221–237] is employed to obtain estimates of the cell frequencies of an I×I contingency table under hypotheses of symmetry, marginal homogeneity or quasi-symmetry. The associated φ-divergence statistics are distributed asymptotically as chi-squared distributions under the null hypothesis. The new estimators and test statistics contain, as particular cases, the classical estimators and test statistics previously presented in the literature for the cited problems. A simulation study is presented, for the symmetry problem, to choose the best function φ2 for estimation and the best function φ1 for testing.

5.
We consider a 2×2 contingency table, with dichotomized qualitative characters (A, Ā) and (B, B̄), as a sample of size n drawn from a bivariate binomial (0,1) distribution. Maximum likelihood estimates p̂1, p̂2, and ρ̂ are derived for the parameters of the two marginals p1, p2 and the coefficient of correlation ρ. It is found that ρ̂ is identical to Pearson's (1904) φ = (χ²/n)^1/2, where χ² is Pearson's usual chi-square for the 2×2 table. The asymptotic variance-covariance matrix of p̂1, p̂2, and ρ̂ is also derived.
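The identity ρ̂ = φ = (χ²/n)^1/2 noted in this abstract is easy to verify numerically. Below is a minimal sketch with made-up 2×2 counts (not data from the paper), comparing φ computed from Pearson's χ² against the direct cross-product formula:

```python
import math

# Hypothetical 2x2 table of counts: rows = (A, not-A), cols = (B, not-B)
a, b, c, d = 30, 10, 15, 45
n = a + b + c + d

# Pearson's chi-square for a 2x2 table: n (ad - bc)^2 / (r1 r2 c1 c2)
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Pearson's phi coefficient: phi = sqrt(chi2 / n)
phi = math.sqrt(chi2 / n)

# Equivalent direct formula: (ad - bc) / sqrt(r1 r2 c1 c2)
phi_direct = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

print(round(phi, 6), round(phi_direct, 6))
```

The two quantities agree up to the sign, which the χ² route discards.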

6.
Consider the problem of testing for independence against stochastic order in a 2 × J contingency table, with two treatments and J ordered categories. By conditioning on the margins, the null hypothesis becomes simple. Careful selection of the conditional alternative hypothesis then allows the problem to be formulated as one of a class of problems for which the minimal complete class of admissible tests is known. The exact versions of many common tests, such as t-tests and the Smirnov test, are shown to be inadmissible, and thus the non-randomized versions are overly conservative. The proportional hazards and proportional odds tests are shown to be admissible for a given data set and size. A new result allows a proof of the admissibility of convex hull and adaptive tests for all data sets and sizes.

7.
When an r×c contingency table has many cells having very small expectations, the usual χ2 approximation to the upper tail of the Pearson χ2 goodness-of-fit statistic becomes very conservative. The alternatives considered in this paper are to use either a lognormal approximation, or to scale the usual χ2 approximation. The study involves thousands of tables with various sample sizes, and with tables whose sizes range from 2×2 through 2×10×10. Subject to certain restrictions the new scaled χ2 approximations are recommended for use with tables having an average cell expectation as small as 0·5.

8.
There are numerous situations in categorical data analysis where one wishes to test hypotheses involving a set of linear inequality constraints placed upon the cell probabilities. For example, it may be of interest to test for symmetry in k × k contingency tables against one-sided alternatives. In this case, the null hypothesis imposes a set of linear equalities on the cell probabilities (namely p_ij = p_ji for all i > j), whereas the alternative specifies directional inequalities. Another important application (Robertson, Wright, and Dykstra 1988) is testing for or against stochastic ordering between the marginals of a k × k contingency table when the variables are ordinal and independence holds. Here we extend existing likelihood-ratio results to cover more general situations. To be specific, we consider testing H0 against H1 − H0 and H1 against H2 − H1, where H0: Σ_{i=1}^k p_i x_{ji} = 0, j = 1,…, s; H1: Σ_{i=1}^k p_i x_{ji} ≥ 0, j = 1,…, s; and H2 does not impose any restrictions on p. The x_{ji}'s are known constants, and s ≤ k − 1. We show that the asymptotic distributions of the likelihood-ratio tests are of chi-bar-square type, and provide expressions for the weighting values.

9.
While analyzing 2 × 2 contingency tables, the log odds ratio for measuring the strength of association is often approximated by a normal distribution with some variance. We show that the expression of that variance needs to be modified in the presence of correlation between the two binomial distributions of the contingency table. In the present paper, we derive a correlation-adjusted variance of the limiting normal distribution of the log odds ratio. We also propose a correlation-adjusted test based on the standard odds ratio for analyzing matched-pair studies and any other study settings that induce correlated binary outcomes. We demonstrate that our proposed test outperforms the classical McNemar's test. Simulation studies show that the gains in power are especially manifest when the sample size is small and strong correlation is present. Two examples of real data sets are used to demonstrate that the proposed method may lead to conclusions significantly different from those reached using McNemar's test.
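For context, the classical (independent-samples) variance that this abstract modifies is Var(log OR) ≈ 1/a + 1/b + 1/c + 1/d. A minimal sketch of the classical Wald interval for the odds ratio follows, with made-up counts; the paper's correlation adjustment is not implemented here:

```python
import math

# Hypothetical 2x2 counts; independent binomial samples are assumed,
# which is exactly the assumption the paper relaxes.
a, b, c, d = 20, 30, 10, 40

log_or = math.log((a * d) / (b * c))
# Classical delta-method standard error of the log odds ratio
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

z = 1.959964  # ~97.5th percentile of the standard normal
ci = (math.exp(log_or - z * se), math.exp(log_or + z * se))
print(round(math.exp(log_or), 4), tuple(round(x, 4) for x in ci))
```

Under within-pair correlation, this standard error is biased, which motivates the correlation-adjusted variance the paper derives.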

10.
A Markov chain is proposed that uses the coupling-from-the-past sampling algorithm for sampling m×n contingency tables. This method is an extension of the one proposed by Kijima and Matsui (Rand. Struct. Alg., 29:243–256, 2006). It is not polynomial, as it is based upon a recursion, and includes a rejection phase, but it can be used for practical purposes on small contingency tables, as illustrated on a classical 4×4 example.

11.
The indirect mechanism of action of immunotherapy causes a delayed treatment effect, producing delayed separation of the survival curves of the treatment groups and violating the proportional hazards assumption. Therefore, using the log-rank test in immunotherapy trial design could result in a severe loss of efficiency. Few statistical methods are available for immunotherapy trial design that incorporate a delayed treatment effect; recently, Ye and Yu proposed the use of a maximin efficiency robust test (MERT) for the trial design. The MERT is a weighted log-rank test that puts less weight on early events and full weight after the delayed period. However, the weight function of the MERT involves an unknown function that has to be estimated from historical data. Here, for simplicity, we propose the use of an approximated maximin test, the V0 test, which is the sum of the log-rank test for the full data set and the log-rank test for the data beyond the lag time point. The V0 test fully uses the trial data and is more efficient than the log-rank test when a lag exists, with relatively little efficiency loss when no lag exists. The sample size formula for the V0 test is derived. Simulations are conducted to compare the performance of the V0 test to the existing tests. A real trial is used to illustrate cancer immunotherapy trial design with a delayed treatment effect.

12.
There are two common methods for statistical inference on 2 × 2 contingency tables. One is the widely taught Pearson chi-square test, which uses the well-known χ² statistic. The chi-square test is appropriate for large-sample inference, and for the 2 × 2 case it is equivalent to the Z-test based on the difference between the two sample proportions. The other method is Fisher's exact test, which evaluates the likelihood of each table with the same marginal totals. This article mathematically demonstrates that the two methods do not completely agree on which tables are extreme. Our analysis obtains one-sided and two-sided conditions under which such a disagreement can occur. We also address whether this discrepancy can lead the two tests to different conclusions when testing homogeneity or independence. Our examination of the two tests casts light on which test should be trusted when they draw different conclusions.
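The two tests being compared in this abstract can both be computed with the standard library alone. A minimal sketch with made-up counts (the disagreement conditions derived in the paper are not reproduced here):

```python
import math

def pearson_p(a, b, c, d):
    """Two-sided Pearson chi-square p-value (1 df) for a 2x2 table."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    return math.erfc(math.sqrt(chi2 / 2))

def fisher_p(a, b, c, d):
    """Two-sided Fisher exact p-value: sum the hypergeometric probabilities
    of all tables with the same margins that are no more likely than the
    observed one."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    def prob(x):  # P(first cell = x) under the hypergeometric null
        return math.comb(r1, x) * math.comb(r2, c1 - x) / math.comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts
a, b, c, d = 7, 3, 2, 8
print(round(pearson_p(a, b, c, d), 4), round(fisher_p(a, b, c, d), 4))
```

For this table the exact p-value is noticeably larger than the asymptotic one, illustrating the conservatism of the non-randomized exact test mentioned elsewhere in this listing.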

13.
Non-symmetric correspondence analysis (NSCA) is a useful technique for analysing a two-way contingency table. Frequently there is more than one predictor variable; in this paper, we consider two categorical predictor variables and one response variable. Interaction represents the joint effects of the predictor variables on the response variable. When interaction is present, the interpretation of the main effects is incomplete or misleading. To separate the main effects from the interaction term, we introduce a method that, starting from the coordinates of multiple NSCA and using a two-way analysis of variance without interaction, allows a better interpretation of the impact of the predictor variables on the response variable. The proposed method is applied to a well-known three-way contingency table of Bockenholt and Bockenholt, in which subjects are cross-classified by attitude towards abortion, number of years of education, and religion. We analyse the case where the variables education and religion influence a person's attitude towards abortion.

14.

This paper extends the classical methods of analysis of a two-way contingency table to the fuzzy environment for two cases: (1) when the available sample of observations is reported as imprecise data, and (2) the case in which we prefer to categorize the variables based on linguistic terms rather than as crisp quantities. For this purpose, the α-cuts approach is used to extend the usual concepts of the test statistic and p-value to the fuzzy test statistic and fuzzy p-value. In addition, some measures of association are extended to the fuzzy version in order to evaluate the dependence in such contingency tables. Some practical examples are provided to explain the applicability of the proposed methods in real-world problems.

15.
Pearson's chi-square (Pe), likelihood ratio (LR), and Fisher–Freeman–Halton (Fi) test statistics are commonly used to test the association in an unordered r×c contingency table. Asymptotically, these test statistics follow a chi-square distribution. For small samples, the asymptotic chi-square approximations are unreliable. Therefore, the exact p-value is frequently computed conditional on the row and column sums. One drawback of the exact p-value is that it is conservative. Different adjustments have been suggested, such as Lancaster's mid-p version and randomized tests. In this paper, we consider 3×2, 2×3, and 3×3 tables and compare the exact power and significance level of the standard, mid-p, and randomized versions of these tests. The mid-p and randomized versions have approximately the same power, which is higher than that of the standard versions. The mid-p type-I error probability seldom exceeds the nominal level. For a given set of parameters, the powers of Pe, LR, and Fi differ in approximately the same way for the standard, mid-p, and randomized versions. Although there is no general ranking of these tests, in some situations, especially when averaged over the parameter space, Pe and Fi have about the same power, slightly higher than that of LR. When the sample sizes (i.e., the row sums) are equal, the differences are small; otherwise the observed differences can be 10% or more. In some cases, perhaps characterized by poorly balanced designs, LR has the highest power.
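Lancaster's mid-p adjustment mentioned here counts the observed outcome with weight one half instead of one, which reduces the conservatism of the exact p-value. A minimal sketch for the one-sided 2×2 case with made-up counts (the paper's r×c comparisons are not reproduced):

```python
import math

def hypergeom_pmf(x, r1, r2, c1):
    """P(first cell = x) given fixed margins (hypergeometric null)."""
    return math.comb(r1, x) * math.comb(r2, c1 - x) / math.comb(r1 + r2, c1)

def one_sided_p(a, b, c, d, mid=False):
    """Upper-tail exact p-value for a 2x2 table; mid=True gives Lancaster's
    mid-p, which counts the observed table with weight 1/2."""
    r1, r2, c1 = a + b, c + d, a + c
    hi = min(r1, c1)
    tail = sum(hypergeom_pmf(x, r1, r2, c1) for x in range(a, hi + 1))
    if mid:
        tail -= 0.5 * hypergeom_pmf(a, r1, r2, c1)
    return tail

a, b, c, d = 7, 3, 2, 8
print(one_sided_p(a, b, c, d), one_sided_p(a, b, c, d, mid=True))
```

By construction the mid-p value is always smaller than the ordinary exact p-value, which is how it recovers power while rarely exceeding the nominal level.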

16.
Suppose p + 1 experimental groups correspond to increasing dose levels of a treatment and all groups are subject to right censoring. In such instances, permutation tests for trend can be performed based on statistics derived from the weighted log-rank class. This article uses saddlepoint methods to determine the mid-P-values for such permutation tests for any test statistic in the weighted log-rank class. Permutation simulations are replaced by analytical saddlepoint computations which provide extremely accurate mid-P-values that are exact for most practical purposes and almost always more accurate than normal approximations. The speed of mid-P-value computation allows for the inversion of such tests to determine confidence intervals for the percentage increase in mean (or median) survival time per unit increase in dosage. The Canadian Journal of Statistics 37: 5-16; 2009 © 2009 Statistical Society of Canada

17.
Trend tests for dose-response have been central problems in medicine. The likelihood ratio test is often used to test hypotheses involving a stochastic order. Stratified contingency tables are common in practice, but the distribution theory of the likelihood ratio test has not been fully developed for stratified tables with more than two stochastically ordered distributions. For c strata of m × r tables, for testing conditional independence against a simple stochastic order alternative, this article introduces a model-free test method and gives the asymptotic distribution of the test statistic, which is a chi-bar-squared distribution. A real data set concerning an ordered stratified table is used to show the validity of this test method.

18.
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently the 90% confidence interval for the difference in the means on the natural log scale should be within the interval (−0.2231, 0.2231). We compare the gold-standard method for calculation of the sample size, based on the non-central t distribution, with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold-standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold-standard method. Copyright © 2005 John Wiley & Sons, Ltd.
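The equivalence limits (−0.2231, 0.2231) quoted above follow directly from the ratio limits (0.80, 1.25) on the natural-log scale, and the interval is symmetric because 0.80 = 1/1.25. A one-line check:

```python
import math

# Log-scale equivalence limits implied by the ratio limits (0.80, 1.25)
upper = math.log(1.25)
lower = math.log(0.80)
print(round(lower, 4), round(upper, 4))  # symmetric about zero
```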

19.
Large-sample Wilson-type confidence intervals (CIs) are derived for a parameter of interest in many clinical-trial situations: the log-odds-ratio, in a two-sample experiment comparing binomial success proportions, say between cases and controls. The methods cover several scenarios: (i) results embedded in a single 2 × 2 contingency table; (ii) a series of K 2 × 2 tables with a common parameter; or (iii) K tables where the parameter may change across tables under the influence of a covariate. The calculation of the Wilson CI requires only simple numerical assistance and, for example, is easily carried out using Excel. The main competitor, the exact CI, has two disadvantages: it requires burdensome search algorithms for the multi-table case and exhibits strong over-coverage associated with long confidence intervals. All the application cases are illustrated through a well-known example. A simulation study then investigates how the Wilson CI performs among several competing methods. The Wilson interval is shortest, except for very large odds ratios, while maintaining coverage similar to that of Wald-type intervals. An alternative to the Wald CI is the Agresti-Coull CI, calculated from the Wilson and Wald CIs, which has the same length as the Wald CI but improved coverage.
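For background, the Wilson score interval for a single binomial proportion is the building block behind the "Wilson-type" construction named in this abstract; the paper's multi-table log-odds-ratio version is not reproduced here. A minimal sketch with made-up counts:

```python
import math

def wilson_ci(x, n, z=1.959964):
    """Wilson score interval for a binomial proportion (95% by default).
    This is the classical single-proportion interval, shown only as
    background for the Wilson-type log-odds-ratio CIs in the abstract."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical data: 8 successes out of 20 trials
lo, hi = wilson_ci(8, 20)
print(round(lo, 4), round(hi, 4))
```

Unlike the Wald interval, the Wilson interval stays inside (0, 1) and remains sensible for small counts, which is part of its appeal in the multi-table setting.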

20.
A data set in the form of a 2 × 2 × 2 contingency table is presented and analyzed in detail. For instructional purposes, the analysis of the data can be used to illustrate some basic concepts in the loglinear model approach to the analysis of multidimensional contingency tables.
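The simplest loglinear model for a 2 × 2 × 2 table is mutual independence [A][B][C], whose fitted counts have a closed form from the one-way margins. A minimal sketch with made-up counts (not the data set from the paper), computing the fitted values and the likelihood-ratio statistic G²:

```python
import itertools
import math

# Hypothetical 2x2x2 table of counts, indexed table[i][j][k]
table = [[[10, 14], [8, 20]], [[12, 6], [16, 14]]]
n = sum(table[i][j][k] for i, j, k in itertools.product(range(2), repeat=3))

# One-way marginal totals
mi = [sum(table[i][j][k] for j, k in itertools.product(range(2), repeat=2)) for i in range(2)]
mj = [sum(table[i][j][k] for i, k in itertools.product(range(2), repeat=2)) for j in range(2)]
mk = [sum(table[i][j][k] for i, j in itertools.product(range(2), repeat=2)) for k in range(2)]

# MLE fitted counts under mutual independence [A][B][C]:
# m_ijk = n * (mi/n) * (mj/n) * (mk/n)
expected = [[[mi[i] * mj[j] * mk[k] / n**2 for k in range(2)]
             for j in range(2)] for i in range(2)]

# Likelihood-ratio statistic G^2 = 2 sum O log(O/E)
g2 = 2 * sum(table[i][j][k] * math.log(table[i][j][k] / expected[i][j][k])
             for i, j, k in itertools.product(range(2), repeat=3))
print(round(g2, 4))
```

Richer loglinear models (e.g. no three-factor interaction) have no closed-form fit and are typically estimated by iterative proportional fitting, which is the kind of machinery the instructional example would illustrate.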


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)