Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper offers a new method for testing one-sided hypotheses in discrete multivariate data models. One-sided alternatives mean that there are restrictions on the multidimensional parameter space. The focus is on models for ordered categorical data; in particular, applications concern R×C contingency tables. The method has advantages over other general approaches. All tests are exact in the sense that no large-sample distribution theory is required. Testing is unconditional, although its execution is done conditionally, section by section, where a section is determined by marginal totals; this eliminates any potential nuisance-parameter issues. The power of the tests is more robust than that of the typical linear tests often recommended. Furthermore, computer programs are available to carry out the tests efficiently regardless of the sample sizes or the order of the contingency tables. Both censored and uncensored data models are discussed.
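As a minimal sketch of an exact one-sided test on the simplest R×C table (a 2×2), the snippet below uses Fisher's conditional exact test from scipy; the counts are hypothetical, and the paper's own method is unconditional and section-by-section, which this does not reproduce.

```python
# A minimal sketch of an exact one-sided test on a 2x2 table via Fisher's
# conditional exact test (hypothetical counts; not the paper's method).
from scipy.stats import fisher_exact

table = [[12, 5],
         [6, 14]]
# alternative="greater" encodes the one-sided (order-restricted) alternative;
# the p-value is exact, with no large-sample approximation.
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.3f}, exact one-sided p = {p_value:.4f}")
```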

2.
Testing for a difference in the strength of bivariate association in two independent contingency tables is an important problem with applications in various disciplines. Many commonly used tests are based on single-index measures of association: one obtains a single-index measurement of association from each table and compares them using asymptotic theory. Although such measures are easy to understand and use, much of the information contained in the data is often lost, so they fail to fully capture the association. To remedy this shortcoming, we introduce a new summary statistic measuring various types of association in a contingency table. Based on this summary statistic, we propose a likelihood ratio test comparing the strength of association in two independent contingency tables; the test examines the stochastic order between summary statistics. We derive its asymptotic null distribution and show that the least favorable distributions are chi-bar-squared distributions. We numerically compare the power of the proposed test with that of tests based on single-index measures. Finally, two examples illustrate the new summary statistics and the related tests.
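The single-index baseline the paper improves on can be sketched directly: an asymptotic z-test comparing the log odds ratios of two independent 2×2 tables. The counts are hypothetical; the variance formula is the standard sum of reciprocal cell counts.

```python
# Single-index baseline: asymptotic z-test comparing log odds ratios of two
# independent 2x2 tables (hypothetical counts).
import numpy as np
from scipy.stats import norm

def log_or_and_var(t):
    """Log odds ratio and its asymptotic variance (sum of reciprocal counts)."""
    a, b, c, d = np.asarray(t, dtype=float).ravel()
    return np.log(a * d / (b * c)), 1/a + 1/b + 1/c + 1/d

lor1, v1 = log_or_and_var([[30, 10], [15, 25]])
lor2, v2 = log_or_and_var([[20, 20], [18, 22]])
z = (lor1 - lor2) / np.sqrt(v1 + v2)
print(f"z = {z:.3f}, two-sided p = {2 * norm.sf(abs(z)):.4f}")
```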

3.
In this paper we consider rank-based tests for paired survival data, in which pair members are subject to the same right-censoring time. Linear signed-rank tests have already been developed for the two-treatment problem in which pair members receive opposite treatments. Assuming a bivariate accelerated failure time model, we extend this class of linear signed-rank tests to the case of multiple covariates, making the methodology applicable to more complicated experimental designs. These tests can be reformulated as weighted sums of contingency-table measures, giving an alternative method of computation and an intuitive view of how the tests work. A simulation study of their small-sample performance relative to other tests demonstrates that the linear signed-rank tests have greater power in cases of moderately to highly correlated data.
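A hedged sketch of the uncensored two-treatment baseline that these tests generalize: a Wilcoxon signed-rank test on within-pair differences of log survival times. Censoring and covariates, the paper's actual setting, are deliberately omitted; the data are simulated.

```python
# Uncensored two-treatment baseline: Wilcoxon signed-rank test on within-pair
# differences of log survival times (simulated data; censoring omitted).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
t_control = rng.exponential(10.0, size=25)
t_treated = rng.exponential(14.0, size=25)   # treatment lengthens survival
stat, p = wilcoxon(np.log(t_treated), np.log(t_control))
print(f"signed-rank statistic = {stat:.1f}, p = {p:.4f}")
```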

4.
Approximate chi-square tests for hypotheses concerning multinomial probabilities are considered in many textbooks. In this article, power calculations and power-based sample size determination are discussed and illustrated for the three most frequently used tests of this type. Available noncentrality parameters and existing tables permit a relatively easy solution of such problems.
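The noncentrality-based power calculation referred to here can be sketched for the Pearson goodness-of-fit test: under the alternative, the statistic is approximately noncentral chi-square with noncentrality λ = n·Σ(p₁−p₀)²/p₀. The cell probabilities below are hypothetical.

```python
# Power of the Pearson goodness-of-fit test from the noncentral chi-square
# distribution (hypothetical null and alternative cell probabilities).
import numpy as np
from scipy.stats import chi2, ncx2

p0 = np.array([0.25, 0.25, 0.25, 0.25])   # null cell probabilities
p1 = np.array([0.30, 0.30, 0.20, 0.20])   # alternative
n, alpha = 200, 0.05
df = len(p0) - 1
lam = n * np.sum((p1 - p0) ** 2 / p0)     # noncentrality parameter
crit = chi2.ppf(1 - alpha, df)            # central chi-square critical value
print(f"noncentrality = {lam:.2f}, power at n={n}: {ncx2.sf(crit, df, lam):.3f}")
```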

5.
This paper considers exponential and rational regression models that are nonlinear in some parameters. Recently, locally D-optimal designs for such models were investigated in [Melas, V. B., 2005. On the functional approach to optimal designs for nonlinear models. J. Statist. Plann. Inference 132, 93–116] based on a functional approach. In this article a similar method is applied to construct maximin efficient D-optimal designs. The approach allows the support points of the designs to be represented by Taylor series, which makes it possible to construct the designs by hand using tables of the series coefficients. Such tables are provided here for models with two nonlinear parameters, and recurrence formulas for constructing the tables for an arbitrary number of parameters are introduced.
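A brute-force sketch of a locally D-optimal design, not the paper's functional/Taylor-series approach: for the two-parameter exponential model η(x) = a·exp(−bx), the D-optimal design places equal weight on two support points, which can be found by grid search over the determinant of the information matrix. The nominal values a0, b0 and the design interval are assumptions.

```python
# Brute-force locally D-optimal two-point design for eta(x) = a*exp(-b*x)
# on [0, 5] at assumed nominal values a0, b0 (equal weights).
import numpy as np
from itertools import combinations

a0, b0 = 1.0, 1.0

def grad(x):
    """Gradient of eta(x) with respect to (a, b) at the nominal values."""
    return np.array([np.exp(-b0 * x), -a0 * x * np.exp(-b0 * x)])

def det_info(pts):
    """Determinant of the equal-weight information matrix at two points."""
    return np.linalg.det(sum(0.5 * np.outer(grad(x), grad(x)) for x in pts))

grid = np.linspace(0.0, 5.0, 201)
best = max(combinations(grid, 2), key=det_info)
print("support points:", best)   # theory gives {0, 1/b0} for this model
```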

6.
In 1935, R.A. Fisher published his well-known "exact" test for 2×2 contingency tables. This test is based on the conditional distribution of a cell entry when the row and column marginal totals are held fixed. Tocher (1950) and Lehmann (1959) showed that Fisher's test, when supplemented by randomization, is uniformly most powerful among all unbiased tests (UMPU). However, since all practical tests for 2×2 tables are nonrandomized, and therefore biased, the UMPU test is not necessarily more powerful than other tests of the same or lower size. In this work, the two-sided Fisher exact test and the UMPU test are compared with six nonrandomized unconditional exact tests with respect to power. In both the two-binomial and double-dichotomy models, the UMPU test is often less powerful than some of the unconditional tests of the same (or even lower) size. Thus, the assertion that the Tocher-Lehmann modification of Fisher's conditional test is the optimal test for 2×2 tables is unjustified.
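The conditional-versus-unconditional contrast can be sketched directly in scipy (version ≥ 1.7), which implements both Fisher's conditional test and Barnard's unconditional exact test; the table below is hypothetical.

```python
# Conditional versus unconditional exact tests on the same hypothetical 2x2
# table; barnard_exact requires scipy >= 1.7.
from scipy.stats import fisher_exact, barnard_exact

table = [[7, 2], [3, 8]]
_, p_fisher = fisher_exact(table, alternative="two-sided")
p_barnard = barnard_exact(table).pvalue   # unconditional exact p-value
print(f"Fisher p = {p_fisher:.4f}, Barnard p = {p_barnard:.4f}")
```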

7.
A method of power assessment for the problem of comparing several treatments with a control is considered. Power assessment is based on the power function of a two-sided hypothesis test that none of the treatments differs from the control. Normally distributed data and binary response data are considered. Minimum power levels are found under certain easily interpretable range conditions on the treatment and control means or success probabilities. Expressions are provided that allow simple computer evaluation of minimum guaranteed power levels, and some illustrative tables of power levels are given.
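A Monte Carlo stand-in for the paper's analytic expressions: simulated power of Bonferroni-adjusted two-sided z-tests of k treatments against a control, for normal data with known unit variance. The means, sample size, and Bonferroni adjustment are all assumptions made for illustration.

```python
# Simulated power of Bonferroni-adjusted two-sided z-tests of k treatments
# versus a control, normal data with unit variance (hypothetical settings).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
k, n, alpha = 3, 30, 0.05
mu_treat = np.array([0.5, 0.5, 0.8])        # treatment means; control mean 0
crit = norm.ppf(1 - alpha / (2 * k))        # Bonferroni cutoff
reps, rejections = 5000, 0
for _ in range(reps):
    xbar_c = rng.normal(0.0, 1 / np.sqrt(n))
    xbar_t = rng.normal(mu_treat, 1 / np.sqrt(n))
    z = (xbar_t - xbar_c) / np.sqrt(2 / n)
    rejections += np.any(np.abs(z) > crit)  # any treatment declared different
print(f"simulated power: {rejections / reps:.3f}")
```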

8.
A probability property that connects the skew-normal (SN) distribution with the normal distribution is used to propose a goodness-of-fit test for the composite null hypothesis that a random sample follows an SN distribution with unknown parameters. The sample is transformed to approximately normal random variables, and the Shapiro–Wilk test is then used to test normality. Implementing this test requires neither a parametric bootstrap nor tables indexed by the slant parameter. An additional test for the same problem, based on a property relating the gamma and SN distributions, is also introduced. The results of a power study conducted by Monte Carlo simulation show that the proposed tests compare favorably with existing tests for the same problem.
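A sketch in the same spirit, though not necessarily the authors' exact transformation: fit the SN parameters, apply the probability integral transform, map to normal scores, and run the Shapiro–Wilk test. Using estimated parameters in the transform distorts the null distribution somewhat, which is one reason the authors' specific property matters.

```python
# Transform-then-test sketch (not necessarily the authors' transformation):
# fit SN parameters, PIT to uniforms, map to normal scores, run Shapiro-Wilk.
import numpy as np
from scipy.stats import skewnorm, norm, shapiro

rng = np.random.default_rng(2)
x = skewnorm.rvs(a=4, loc=0, scale=1, size=100, random_state=rng)
a_hat, loc_hat, scale_hat = skewnorm.fit(x)          # unknown parameters
u = skewnorm.cdf(x, a_hat, loc=loc_hat, scale=scale_hat)
z = norm.ppf(np.clip(u, 1e-10, 1 - 1e-10))           # approx. normal under H0
stat, p = shapiro(z)
print(f"Shapiro-Wilk W = {stat:.4f}, p = {p:.4f}")
```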

9.
The standard Cramér-von Mises and Anderson-Darling goodness-of-fit tests require continuous underlying distributions with known parameters. In this paper, tables of critical values are generated for both tests for Weibull distributions with unknown location and scale parameters and a known shape parameter. The powers of the Cramér-von Mises, Anderson-Darling, Kolmogorov-Smirnov, and chi-square tests in this situation are investigated. The Cramér-von Mises test is most powerful when the shape parameter is 1.0, and the Anderson-Darling test is most powerful when it is 3.5. Finally, a relation between the critical value and the inverse shape parameter is presented.
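A hedged sketch of how such critical-value tables can be generated by Monte Carlo: simulate Weibull samples with the known shape, estimate location and scale, and take the upper quantile of the resulting Cramér-von Mises statistics. Requires scipy ≥ 1.6; the settings are illustrative, and maximum likelihood estimation of a Weibull location parameter can be numerically delicate.

```python
# Monte Carlo critical value for the Cramer-von Mises test of a Weibull fit
# with known shape but estimated location and scale (illustrative settings).
import numpy as np
from scipy.stats import weibull_min, cramervonmises

rng = np.random.default_rng(3)
shape, n, reps = 3.5, 30, 2000
stats = []
for _ in range(reps):
    x = weibull_min.rvs(shape, loc=0.0, scale=1.0, size=n, random_state=rng)
    c, loc, scale = weibull_min.fit(x, fc=shape)   # shape held fixed
    stats.append(cramervonmises(x, weibull_min.cdf, args=(c, loc, scale)).statistic)
print(f"approximate 95% critical value: {np.quantile(stats, 0.95):.4f}")
```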

10.
Student's t test and Wilcoxon's rank-sum test may be inefficient in situations where treatments bring about changes in both location and scale. To rectify this, O'Brien (1988, Journal of the American Statistical Association 83, 52-61) proposed two new statistics, the generalized t and generalized rank-sum procedures, which may be much more powerful than their traditional counterparts in such situations. Recently, however, Blair and Morel (1991, Statistics in Medicine, in press) showed that referencing these new statistics to standard F tables, as recommended by O'Brien, inflates Type I error rates. This paper provides tables of critical values that do not produce such inflation. Use of these new critical values yields Type I error rates near nominal levels for the generalized t statistic and slightly conservative rates for the generalized rank-sum test. In addition to the critical values, some new power results are given for the generalized tests.

11.
A test of congruence among distance matrices is described. It tests the hypothesis that several matrices, containing different types of variables about the same objects, are congruent with one another, so that they can be used jointly in statistical analysis. Raw data tables are turned into similarity or distance matrices prior to testing; they can then be compared with data that naturally come in the form of distance matrices. The proposed test can be seen as a generalization of the Mantel test of matrix correspondence to any number of distance matrices. This paper shows that the new test has the correct Type I error rate and good power. Power increases as the number of objects and the number of congruent data matrices increase, and is higher when the total number of matrices in the study is smaller. To illustrate the method, the proposed test is used to test the hypothesis that matrices representing different types of organoleptic variables (colour, nose, body, palate and finish) in single-malt Scotch whiskies are congruent.
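The two-matrix special case that the method generalizes, the Mantel test, can be sketched as a permutation test of the correlation between corresponding off-diagonal entries of two distance matrices; the data below are simulated.

```python
# Permutation Mantel test: correlation between off-diagonal entries of two
# distance matrices, with objects of one matrix permuted (simulated data).
import numpy as np

def mantel(d1, d2, n_perm=5000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    iu = np.triu_indices(d1.shape[0], k=1)       # off-diagonal entries
    obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(d1.shape[0])         # permute objects of one matrix
        count += np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1] >= obs
    return obs, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(4)
pts = rng.normal(size=(12, 2))
d1 = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
noise = rng.normal(scale=0.2, size=d1.shape)
d2 = d1 + (noise + noise.T) / 2                  # congruent, noisy copy
r, p = mantel(d1, d2, rng=rng)
print(f"Mantel r = {r:.3f}, permutation p = {p:.4f}")
```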

12.
This paper is a continuation of a 1992 paper in which the author studied the paradoxes that can arise when a nonparametric statistical test is used to give an ordering of k samples and the subsets of those samples. This article characterizes the projection paradoxes that can occur when using contingency tables, complete block designs, and tests of dichotomous behaviour of several samples. This is done by examining the "dictionaries" of possible orderings of each of these procedures. Specifically, it is shown that contingency tables and complete block designs, like the Kruskal-Wallis nonparametric test on k samples, minimize the number and kinds of projection paradoxes that can occur, whereas a test of dichotomous behaviour of several samples does not. An analysis is given of two procedures used to determine the ordering of a pair of samples from a set of k samples; it is shown that these two procedures may not have anything in common.

13.
Mixture experiments have attracted increasing attention because of their great practical value in industry and daily life, and uniform designs over irregular experimental regions have become a hot topic in experimental design over the past two decades. Noting that the experimental region of a mixture experiment with q components under constraints is in fact a (q − 1)-dimensional region, this article proposes a new method for searching for nearly uniform designs for mixture experiments with arbitrarily complex constraints. Two examples, with tables and figures, illustrate the method.
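A simple stand-in for the paper's construction, under hypothetical constraints: rejection-sample the constrained simplex, then greedily select a maximin-distance subset as a nearly uniform design. This illustrates the idea only; the paper's search method and uniformity criterion may differ.

```python
# Nearly uniform design on a constrained simplex: rejection sampling plus
# greedy maximin-distance subset selection (hypothetical constraints).
import numpy as np

rng = np.random.default_rng(5)
q, n_cand, n_design = 3, 5000, 10
cand = rng.dirichlet(np.ones(q), size=n_cand)    # uniform on the simplex
keep = (cand[:, 0] >= 0.2) & (cand[:, 1] <= 0.5) # example component constraints
cand = cand[keep]
design = [cand[0]]
for _ in range(n_design - 1):                    # greedy maximin selection
    dist = np.linalg.norm(cand[:, None] - np.array(design)[None], axis=-1)
    design.append(cand[np.argmax(dist.min(axis=1))])
print(np.round(np.array(design), 3))
```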

14.
The operating characteristics (OCs) of an indifference-zone ranking and selection procedure are derived for randomized response binomial data. The OCs include tables and figures to facilitate trade-offs between sample size and a stated probability of correct selection, i.e., of correctly identifying the binomial population (out of k ≥ 2) characterized by the largest probability of success. Measures of efficiency are provided to assist the analyst in selecting an appropriate randomized response design for data collection. A hybrid randomized response model, which includes the Warner model and the Greenberg et al. model, is introduced to facilitate comparisons among a wider range of statistical designs than previously available. An example comparing failure rates of contraceptive methods illustrates the use of these new results.
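The Warner model mentioned above admits a compact sketch: each respondent answers the sensitive question with probability P and its complement otherwise, and the sensitive proportion π is recovered from the observed proportion of "yes" answers. All numbers below are hypothetical.

```python
# Warner's randomized-response estimator (hypothetical numbers).
import numpy as np

rng = np.random.default_rng(6)
n, P, pi_true = 500, 0.7, 0.3                # pi_true: sensitive proportion
sensitive = rng.random(n) < pi_true
ask_direct = rng.random(n) < P               # spinner outcome per respondent
yes = np.where(ask_direct, sensitive, ~sensitive)
lam_hat = yes.mean()                         # observed proportion of "yes"
pi_hat = (lam_hat - (1 - P)) / (2 * P - 1)   # Warner (1965) estimator
se = np.sqrt(lam_hat * (1 - lam_hat) / (n * (2 * P - 1) ** 2))
print(f"pi_hat = {pi_hat:.3f}, s.e. = {se:.3f}")
```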

15.
This paper proposes a new model for square contingency tables. The proposed model tests the equality of the local odds ratios on either side of the main diagonal and thus represents the non-symmetric structure of a square contingency table. The proposed model is compared with twenty-five models introduced for analysing square contingency tables with both symmetric and non-symmetric structures. The results show that the proposed model fits better than the existing models for square contingency tables.
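The quantities the proposed model equates can be computed directly: the local odds ratios θ(i,j) = n(i,j)·n(i+1,j+1) / (n(i,j+1)·n(i+1,j)) of a square table, compared in mirror pairs across the main diagonal. The table below is hypothetical.

```python
# Local odds ratios of a hypothetical square table, compared in mirror pairs
# across the main diagonal.
import numpy as np

counts = np.array([[50, 20, 10],
                   [15, 60, 25],
                   [ 8, 18, 70]], dtype=float)
# theta[i, j] = n[i,j] * n[i+1,j+1] / (n[i,j+1] * n[i+1,j])
theta = (counts[:-1, :-1] * counts[1:, 1:]) / (counts[:-1, 1:] * counts[1:, :-1])
print("local odds ratios:\n", np.round(theta, 3))
print("mirror pair theta[0,1] vs theta[1,0]:",
      round(theta[0, 1], 3), "vs", round(theta[1, 0], 3))
```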

16.
Kang (2006) and Kang and Larsen (in press) used the log-likelihood function with Lagrangian multipliers for estimating cell probabilities in two-way incomplete contingency tables. This paper extends those results and simulations to three-way and multi-way tables; numerous studies cross-classify subjects by three or more categorical factors. Constraints on cell probabilities are incorporated through Lagrangian multipliers. Variances of the MLEs are derived from the matrix of second derivatives of the log likelihood with respect to the cell probabilities and the Lagrange multiplier, and Wald and likelihood ratio tests of independence are derived using the estimates and estimated variances. In the simulation results of Kang and Larsen (in press), for data missing at random, maximum likelihood estimation (MLE) produced more efficient estimates of population proportions than either multiple imputation (MI) based on data augmentation or complete-case (CC) analysis; neither MLE nor MI, however, led to an improvement over CC analysis with respect to the power of tests for independence in two-way tables. The results are extended to multidimensional tables with arbitrary patterns of missing data when the variables are recorded on individual subjects. In three-way and higher-way tables, however, partially classified observations carry information relevant for judging independence, as long as two or more variables are jointly observed. The simulations study three-dimensional tables with three patterns of association and two levels of missing information.
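For intuition on the constrained MLE, here is a hedged EM sketch for a 2×2 table in which some units report only the row variable, assuming data missing at random. The fixed point is the same MLE that the Lagrangian formulation targets, though EM is not necessarily the authors' computational route; all counts are hypothetical.

```python
# EM for a 2x2 table with some units observed only on the row variable
# (missing at random; hypothetical counts).
import numpy as np

full = np.array([[40., 10.], [20., 30.]])   # fully classified counts
row_only = np.array([25., 15.])             # column variable missing
p = np.full((2, 2), 0.25)                   # initial cell probabilities
for _ in range(200):
    # E-step: allocate row-only counts across columns in proportion to p
    alloc = row_only[:, None] * p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate cell probabilities from the completed counts
    completed = full + alloc
    p = completed / completed.sum()
print(np.round(p, 4))
```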

17.
In randomized complete block designs, a monotonic relationship among treatment groups may already be established from prior information, e.g., in a study with different dose levels of a drug. The test statistic developed by Page and that of Jonckheere and Terpstra are two unweighted rank-based tests used to detect ordered alternatives when the assumptions of the traditional two-way analysis of variance are not satisfied. We consider a new weighted rank-based test that uses a weight for each subject, based on the sample variance, in computing the test statistic. The new weighted rank-based test is compared with the two commonly used unweighted tests with regard to power under various conditions. The weighted test is generally more powerful than the two unweighted tests when the number of treatment groups is small to moderate.
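Page's statistic, one of the two unweighted tests discussed, is straightforward to compute: L = Σⱼ j·Rⱼ, with Rⱼ the rank sum of treatment j across blocks, standardized by its known null mean and variance. The data below are simulated with an increasing dose effect.

```python
# Page's L statistic for ordered alternatives in a randomized complete block
# design (simulated data with an increasing dose effect).
import numpy as np
from scipy.stats import rankdata, norm

rng = np.random.default_rng(7)
b, k = 12, 4                                          # blocks, treatments
data = rng.normal(size=(b, k)) + 0.4 * np.arange(k)   # increasing dose effect
ranks = np.apply_along_axis(rankdata, 1, data)        # rank within each block
L = np.sum(np.arange(1, k + 1) * ranks.sum(axis=0))
mean = b * k * (k + 1) ** 2 / 4                       # null mean of L
var = b * k ** 2 * (k + 1) ** 2 * (k - 1) / 144       # null variance of L
z = (L - mean) / np.sqrt(var)
print(f"L = {L:.0f}, z = {z:.3f}, one-sided p = {norm.sf(z):.4f}")
```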

18.
A new family of rank tests is proposed for testing the equality of two multivariate failure time distributions with censored observations. The tests are very simple: they are based on a transformation of the multivariate rank vectors to univariate rank scores, and the resulting statistics belong to the familiar class of weighted logrank test statistics. The new procedure is also applicable to multivariate observations in general, such as repeated measures, some of which may be missing. To investigate the performance of the proposed tests, a simulation study was conducted with bivariate exponential models for various censoring rates. The size and power of these tests against Lehmann alternatives were compared with those of two other tests (Wei and Lachin, 1984; Wei and Knuiman, 1987). In all simulations the new procedures provide relatively good power and accurate control of the test size. A real example from the National Cooperative Gallstone Study is given.
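The building block of the proposed family is the weighted logrank statistic; a minimal unweighted two-sample version (weight one at every failure time) is sketched below on simulated censored data.

```python
# Minimal two-sample logrank statistic on simulated censored data.
import numpy as np

def logrank_z(time, event, group):
    """Two-sample logrank z-statistic; group is 0/1, event 1 = failure."""
    O = E = V = 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        O, E = O + d1, E + d * n1 / n
        if n > 1:  # hypergeometric variance contribution at time t
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (O - E) / np.sqrt(V)

rng = np.random.default_rng(8)
group = np.repeat([0, 1], 30)
t_fail = rng.exponential(np.where(group == 1, 15.0, 10.0))
t_cens = rng.exponential(20.0, size=group.size)       # censoring times
time, event = np.minimum(t_fail, t_cens), (t_fail <= t_cens).astype(int)
print(f"logrank z = {logrank_z(time, event, group):.3f}")
```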

19.
In this study, we consider different sampling designs for ranked set sampling (RSS) and give empirical distribution function (EDF) estimators for each design, with comparative graphs of the EDFs. Using these EDFs, the powers of five goodness-of-fit tests are obtained by Monte Carlo simulation for Tukey's g-and-h distributions under RSS and simple random sampling (SRS), and the performance of the RSS-based tests is compared with that of the SRS-based tests. Critical values for these tests are also obtained for different set and cycle sizes.
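A hedged sketch of the basic RSS construction and its EDF under perfect ranking: in each cycle, draw m sets of size m and keep the i-th order statistic of the i-th set; the EDF is then compared with that of an SRS of the same measured size. A standard normal population stands in for the g-and-h family here.

```python
# Ranked set sampling under perfect ranking, and its EDF versus an SRS EDF
# (standard normal population used for illustration).
import numpy as np

def rss_sample(draw, m, r, rng):
    out = []
    for _ in range(r):                       # r cycles
        for i in range(m):                   # i-th judged order statistic
            out.append(np.sort(draw(m, rng))[i])
    return np.array(out)

def edf(sample, x):
    return np.mean(sample[:, None] <= x[None, :], axis=0)

rng = np.random.default_rng(9)
draw = lambda size, g: g.normal(size=size)   # standard normal population
x = np.linspace(-2, 2, 5)
print("RSS EDF:", np.round(edf(rss_sample(draw, m=4, r=10, rng=rng), x), 2))
print("SRS EDF:", np.round(edf(draw(40, rng), x), 2))
```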

20.
A bioequivalence test compares bioavailability parameters, such as the maximum observed concentration (Cmax) or the area under the concentration-time curve (AUC), for a test drug and a reference drug. Planning a bioequivalence test requires an assumption about the variance of Cmax or AUC for the sample size estimation. Since the variance is unknown, current 2-stage designs use the variance estimated from stage 1 data to determine the sample size for stage 2. However, the stage 1 variance estimate is unstable and may result in a stage 2 sample size that is too large or too small. This problem is magnified in bioequivalence tests with a serial sampling schedule, in which only one sample is collected from each individual, making a correct variance assumption even more difficult. To solve this problem, we propose 3-stage designs. Our designs increase sample sizes gradually over the stages, so that extremely large sample sizes do not occur. With one more stage of data, power is increased; moreover, the variance estimated using data from both stages 1 and 2 is more stable than that using data from stage 1 only in a 2-stage design. These features of the proposed designs are demonstrated by simulations. Significance levels are adjusted to control the overall Type I error at the same level for all the multistage designs.
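The instability that motivates the 3-stage designs can be sketched generically: a stage-2 sample size computed from the stage-1 variance estimate via a standard two-arm formula. This is a textbook re-estimation sketch with hypothetical numbers, not the authors' exact procedure.

```python
# Stage-2 sample size from the stage-1 variance estimate, using the standard
# two-arm formula n = 2*(z_{1-a} + z_{1-b})^2 * s^2 / delta^2 (hypothetical).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
alpha, beta, delta = 0.05, 0.2, 0.25          # error rates, effect to detect
n1 = 12
stage1 = rng.normal(0.0, 0.4, size=(2, n1))   # hypothetical log(Cmax) data
s2 = stage1.var(ddof=1, axis=1).mean()        # pooled variance estimate
n_needed = 2 * (norm.ppf(1 - alpha) + norm.ppf(1 - beta)) ** 2 * s2 / delta ** 2
n2 = max(0, int(np.ceil(n_needed)) - n1)      # additional per-arm size
print(f"s2 = {s2:.3f}, stage-2 per-arm sample size: {n2}")
```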
