Similar Articles
20 similar articles found (search time: 15 ms)
1.
2.
The Wilcoxon–Mann–Whitney test has dominated nonparametric analysis in the behavioral sciences for the past seven decades. Its widespread use masks the fact that there exist simple “adaptive” procedures which use data-dependent statistical decision rules to select an optimal nonparametric test. This paper discusses key adaptive approaches for testing differences in location in two-sample environments. Our Monte Carlo analysis shows that adaptive procedures often perform substantially better than t-tests, even with moderately sized samples (80 observations). We illustrate adaptive approaches using data from Gneezy and Smorodinsky (2006), and offer a Stata package to researchers interested in taking advantage of these techniques.
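As a hedged illustration of the kind of Monte Carlo comparison described above (not the authors' Stata implementation), the sketch below contrasts the power of the t-test and the WMW test under a skewed (exponential) location-shift alternative; the sample size, shift size, and distribution are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def power_comparison(n=40, shift=0.5, reps=2000, seed=0):
    """Monte Carlo power of the t-test vs. the WMW test under a
    shifted-exponential (heavy right-skew) alternative."""
    rng = np.random.default_rng(seed)
    rej_t = rej_w = 0
    for _ in range(reps):
        x = rng.exponential(1.0, n)           # control sample
        y = rng.exponential(1.0, n) + shift   # treated: pure location shift
        rej_t += stats.ttest_ind(x, y).pvalue < 0.05
        rej_w += stats.mannwhitneyu(x, y, alternative='two-sided').pvalue < 0.05
    return rej_t / reps, rej_w / reps

pt, pw = power_comparison()
```

Under this skewed alternative the rank test is considerably more powerful than the t-test, which is the phenomenon adaptive procedures exploit.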

3.
The Wilcoxon–Mann–Whitney (WMW) test is a popular rank-based two-sample testing procedure for the strong null hypothesis that the two samples come from the same distribution. A modified WMW test, the Fligner–Policello (FP) test, has been proposed for comparing the medians of two populations. A fact that may be under-appreciated among some practitioners is that the FP test can also be used to test the strong null, like the WMW. In this article, we compare the power of the WMW and FP tests for testing the strong null. Our results show that neither test is uniformly better than the other and that there can be substantial differences in power between the two choices. We propose a new, modified WMW test that combines the WMW and FP tests. Monte Carlo studies show that the combined test has good power compared to either the WMW or FP test. We provide a fast implementation of the proposed test in open-source software. Supplementary materials for this article are available online.
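The FP statistic is simple enough to sketch directly. The following is a minimal implementation based on the standard placement formulation (the function name and interface are our own; this is not the authors' combined test):

```python
import numpy as np
from scipy import stats

def fligner_policello(x, y):
    """Fligner-Policello statistic: compares medians without assuming
    the two populations have the same shape or variance."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # placements: P[i] = #{y_j < x_i}, Q[j] = #{x_i < y_j}
    P = np.array([(y < xi).sum() for xi in x], float)
    Q = np.array([(x < yj).sum() for yj in y], float)
    Pbar, Qbar = P.mean(), Q.mean()
    V1 = ((P - Pbar) ** 2).sum()
    V2 = ((Q - Qbar) ** 2).sum()
    U = (Q.sum() - P.sum()) / (2.0 * np.sqrt(V1 + V2 + Pbar * Qbar))
    pvalue = 2 * stats.norm.sf(abs(U))   # asymptotic normal reference
    return U, pvalue
```

For identical samples the statistic is exactly zero; a large positive value indicates the second sample tends to be larger.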

4.
Outliers are commonly observed in psychosocial research, generally resulting in biased estimates when group differences are compared using popular mean-based models such as the analysis of variance model. Rank-based methods such as the popular Mann–Whitney–Wilcoxon (MWW) rank sum test are more effective at addressing such outliers. However, available methods for inference are limited to cross-sectional data and cannot be applied to longitudinal studies with missing data. In this paper, we propose a generalized MWW test for comparing multiple groups with covariates in a longitudinal data setting, utilizing functional response models. Inference is based on a class of U-statistics-based weighted generalized estimating equations, providing consistent and asymptotically normal estimates under both complete and missing data. The proposed approach is illustrated with both real and simulated study data.

5.
The Kolassa method implemented in the nQuery Advisor software has been widely used for approximating the power of the Wilcoxon–Mann–Whitney (WMW) test for ordered categorical data, in which an Edgeworth approximation is used to estimate the power of an unconditional test based on the WMW U statistic. When the sample size is small or when the sizes of the two groups are unequal, Kolassa’s method may yield a quite poor approximation to the power of the conditional WMW test that is commonly implemented in statistical packages. Two modifications of Kolassa’s formula are proposed and assessed by simulation studies.
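A brute-force Monte Carlo benchmark of the kind one would use to assess such approximations can be sketched as follows (the category probabilities and group sizes are illustrative; this does not reproduce Kolassa's Edgeworth formula):

```python
import numpy as np
from scipy import stats

def wmw_power_ordinal(p_control, p_treat, n1, n2, alpha=0.05,
                      reps=2000, seed=1):
    """Monte Carlo power of the (tie-corrected, asymptotic) WMW test
    for ordered categorical outcomes with given cell probabilities."""
    rng = np.random.default_rng(seed)
    k = len(p_control)
    rej = 0
    for _ in range(reps):
        x = rng.choice(k, n1, p=p_control)   # ordinal categories 0..k-1
        y = rng.choice(k, n2, p=p_treat)
        p = stats.mannwhitneyu(x, y, alternative='two-sided',
                               method='asymptotic').pvalue
        rej += p < alpha
    return rej / reps
```

Equal cell probabilities recover the nominal size; strongly shifted probabilities give high power.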

6.
Sampling cost is a crucial factor in sample size planning, particularly when the treatment group is more expensive than the control group. To either minimize the total cost or maximize the statistical power of the test, we consider the distribution-free Wilcoxon–Mann–Whitney test for two independent samples and the van Elteren test for the randomized block design. We develop approximate sample size formulas for cases where the distribution of the data is non-normal and/or unknown. This study derives the optimal sample size allocation ratio for a given statistical power under cost constraints, so that the resulting sample sizes minimize either the total cost or the total sample size. Moreover, for a given total cost, the optimal allocation maximizes the statistical power of the test. The proposed formulas are not only new but also quick and easy to apply. We use real data from a clinical trial to illustrate how to choose the sample size for a randomized two-block design. No existing commercial software for sample size planning considers the cost factor for nonparametric methods, so the proposed methods provide important insights into the impact of cost constraints.
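The flavor of cost-constrained allocation can be seen in the simpler two-sample normal (z-approximation) setting, where minimizing the total cost c1*n1 + c2*n2 at fixed power yields the square-root rule n1/n2 = sqrt(c2/c1). A sketch under those assumptions (not the paper's WMW/van Elteren formulas):

```python
import math
from scipy.stats import norm

def optimal_allocation(c1, c2, effect, sigma=1.0, alpha=0.05, power=0.8):
    """Cost-optimal sizes for a two-sample z-comparison of means.
    Power constraint: sigma^2 * (1/n1 + 1/n2) = (effect / z)^2,
    minimized cost gives the square-root rule n1/n2 = sqrt(c2/c1)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    r = math.sqrt(c2 / c1)                       # optimal n1 / n2
    n2 = (sigma / effect) ** 2 * z ** 2 * (1 + 1 / r)
    n1 = r * n2
    return math.ceil(n1), math.ceil(n2)
```

With equal costs this reduces to the familiar balanced design; when group 2 is four times as expensive, the cheap group gets twice as many subjects.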

7.
The main idea behind the proposed class of tests is rooted in an extension of the technique used in the derivation of the Mann–Whitney–Wilcoxon test. As in the case of two-sample rank-based tests, the new class consists of tests defined through score functions. When properly selected, these score functions lead to consistent and often more powerful tests than classical goodness-of-fit tests. Theoretical results are supported by an extensive simulation study.

8.

A new method is proposed for identifying clusters in continuous data indexed by time or by space. The scan statistic we introduce is derived from the well-known Mann–Whitney statistic. It is completely nonparametric, as it relies only on the ranks of the marks. This scan test appears to be very powerful against any clustering alternative. These results have applications in various fields, such as the study of climate or socioeconomic data.
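A minimal rank-based scan of this kind can be sketched as follows: slide a fixed-length window, standardize its rank sum against the null moments, and calibrate the maximum by permutation (tie correction is omitted for simplicity; the interface and defaults are our own, not the authors'):

```python
import numpy as np
from scipy import stats

def mw_scan(series, window, n_perm=200, seed=2):
    """Rank-based scan statistic: maximum standardized rank sum over
    all windows, with a permutation p-value for the maximum."""
    rng = np.random.default_rng(seed)
    x = np.asarray(series, float)

    def max_stat(v):
        r = stats.rankdata(v)                # average ranks for ties
        n, m = len(v), window
        best = 0.0
        for s in range(n - m + 1):
            w = r[s:s + m].sum()
            mu = m * (n + 1) / 2.0           # null mean of the rank sum
            sd = np.sqrt(m * (n - m) * (n + 1) / 12.0)
            best = max(best, abs(w - mu) / sd)
        return best

    obs = max_stat(x)
    perm = [max_stat(rng.permutation(x)) for _ in range(n_perm)]
    pvalue = (1 + sum(p >= obs for p in perm)) / (n_perm + 1)
    return obs, pvalue
```

A block of elevated values in an otherwise flat series is flagged clearly.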

9.
Pretest–posttest studies are an important and popular method for assessing the effectiveness of a treatment or an intervention in many scientific fields. While the treatment effect, measured as the difference between the two mean responses, is of primary interest, testing the difference of the two distribution functions for the treatment and control groups is also an important problem. The Mann–Whitney test has been a standard tool for testing the difference of distribution functions with two independent samples. We develop empirical likelihood (EL) based methods for the Mann–Whitney test to incorporate the two unique features of pretest–posttest studies: (i) the availability of baseline information for both groups; and (ii) the structure of the data with missingness by design. Our proposed methods combine the standard Mann–Whitney test with the EL method of Huang, Qin and Follmann (2008, ‘Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest–Posttest Study’, Journal of the American Statistical Association, 103(483), 1270–1280), the imputation-based empirical likelihood method of Chen, Wu and Thompson (2015, ‘An Imputation-Based Empirical Likelihood Approach to Pretest–Posttest Studies’, The Canadian Journal of Statistics, in press), and the jackknife empirical likelihood method of Jing, Yuan and Zhou (2009, ‘Jackknife Empirical Likelihood’, Journal of the American Statistical Association, 104, 1224–1232). Theoretical results are presented, and the finite-sample performance of the proposed methods is evaluated through simulation studies.
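One building block of such jackknife-based empirical likelihood methods is the delete-one jackknife for the Mann–Whitney parameter theta = P(X < Y). A simple pooled-sample sketch (a simplification for illustration, not the proposed EL procedures):

```python
import numpy as np

def mw_theta_jackknife(x, y):
    """Point estimate and jackknife standard error for
    theta = P(X < Y), the Mann-Whitney functional."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    less = (x[:, None] < y[None, :]).astype(float)   # kernel 1{x_i < y_j}
    theta = less.mean()
    # leave-one-out estimates, deleting each observation in turn
    loo = []
    for i in range(n):
        loo.append(np.delete(less, i, axis=0).mean())
    for j in range(m):
        loo.append(np.delete(less, j, axis=1).mean())
    loo = np.array(loo)
    N = n + m
    se = np.sqrt((N - 1) / N * ((loo - loo.mean()) ** 2).sum())
    return theta, se
```

Fully separated samples give theta = 1 with zero jackknife variability; interleaved samples give an intermediate estimate with a positive standard error.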

10.
We propose a measure of interaction for factorial designs, formulated in terms of a probability similar to the effect size of the Mann–Whitney test. It is shown how asymptotic confidence intervals can be obtained for the effect size and how a statistical test can be constructed. We further show how the test relates to that proposed by Bhapkar and Gore [Sankhya A, 36:261–272 (1974)]. The results of a simulation study indicate that the test has good power properties and illustrate when the asymptotic approximations are adequate. The effect size is demonstrated on an example dataset.
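A naive plug-in estimator of such a probability-scale interaction measure for a 2×2 layout can be sketched as follows (the definition p = P(X11 − X12 > X21 − X22) is our illustrative reading of a Mann–Whitney-type interaction probability, not necessarily the authors' exact measure):

```python
import numpy as np

def interaction_effect(a1b1, a1b2, a2b1, a2b2):
    """Estimate p = P(X11 - X12 > X21 - X22) over all cross-cell
    quadruples; p = 1/2 corresponds to no interaction."""
    d1 = np.subtract.outer(np.asarray(a1b1, float),
                           np.asarray(a1b2, float)).ravel()
    d2 = np.subtract.outer(np.asarray(a2b1, float),
                           np.asarray(a2b2, float)).ravel()
    # fraction of pairs where the B-effect at level A1 exceeds that at A2
    return (d1[:, None] > d2[None, :]).mean()
```

A crossover pattern pushes the estimate to the boundary in the corresponding direction.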

11.
This paper proposes a new adaptive chart based on the Shiryaev–Roberts procedure, updating the reference value adaptively to achieve good overall performance over a range of future expected but unknown mean shifts. A two-dimensional Markov chain model is developed to analyze the run length performance, and design guidelines are given. Run length comparisons with other charts show that the proposed chart detects effectively over a range of mean shift sizes. The implementation of the new chart is illustrated with a real data example.
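The underlying Shiryaev–Roberts recursion is short; a sketch for a known shift size in unit-variance normal data (the adaptive reference-value update of the proposed chart is not reproduced here):

```python
import numpy as np

def shiryaev_roberts(obs, mu0=0.0, delta=1.0, threshold=50.0):
    """First alarm time of the Shiryaev-Roberts procedure for a mean
    shift of size delta in N(mu0, 1) data: R_n = (1 + R_{n-1}) * L_n,
    where L_n is the per-observation likelihood ratio."""
    R = 0.0
    for t, x in enumerate(obs, start=1):
        # likelihood ratio of N(mu0 + delta, 1) vs. N(mu0, 1)
        L = np.exp(delta * (x - mu0) - delta ** 2 / 2.0)
        R = (1.0 + R) * L
        if R > threshold:
            return t          # signal a change at time t
    return None               # no alarm over the observed stretch
```

In control, R drifts around a small stationary level; after a shift it grows geometrically and soon crosses the limit.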

12.
We consider a novel univariate nonparametric cumulative sum (CUSUM) control chart for detecting small shifts in the mean of a process, where the nominal value of the mean is unknown but some historical data are available. The chart is built on the Mann–Whitney statistic together with a change-point model, and requires no assumption about the underlying distribution of the process. Performance comparisons based on simulations show that the proposed control chart is slightly more effective than some related nonparametric control charts.
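The change-point form of the Mann–Whitney statistic can be sketched as follows: compare each prefix against the corresponding suffix via the standardized rank sum and take the maximum (a retrospective simplification that ignores the chart's sequential control limits):

```python
import numpy as np
from scipy import stats

def mw_changepoint(x):
    """For each candidate split t, standardized Mann-Whitney comparison
    of x[:t] vs. x[t:]; a large maximum suggests a location shift."""
    x = np.asarray(x, float)
    n = len(x)
    r = stats.rankdata(x)
    best_t, best_z = None, 0.0
    for t in range(5, n - 4):              # keep both segments non-trivial
        w = r[:t].sum()
        mu = t * (n + 1) / 2.0             # null mean of the prefix rank sum
        sd = np.sqrt(t * (n - t) * (n + 1) / 12.0)
        z = abs(w - mu) / sd
        if z > best_z:
            best_t, best_z = t, z
    return best_t, best_z
```

A clean level shift is located at the true change point with a large standardized statistic.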

13.
14.

Through simulation and regression, we study the alternative distribution of the likelihood ratio test in which the null hypothesis postulates that the data are from a normal distribution after a restricted Box–Cox transformation and the alternative hypothesis postulates that they are from a mixture of two normals after a restricted (possibly different) Box–Cox transformation. The number of observations in the sample is N. The standardized distance between components (after transformation) is D = (μ2 − μ1)/σ, where μ1 and μ2 are the component means and σ2 is their common variance. One component contains the fraction π of observations, and the other 1 − π. The simulation results demonstrate a dependence of power on the mixing proportion, with power decreasing as the mixing proportion moves away from 0.5. The alternative distribution appears to be a non-central chi-squared with approximately 2.48 + 10N^(−0.75) degrees of freedom and non-centrality parameter 0.174N(D − 1.4)^2 × [π(1 − π)]. At least 900 observations are needed for 95% power in a 5% test when D = 2. For fixed values of D, power, and significance level, substantially more observations are necessary when π ≥ 0.90 or π ≤ 0.10. We give the estimated powers for the alternatives studied and a table of sample sizes needed for 50%, 80%, 90%, and 95% power.
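The fitted approximation above translates directly into a power calculation; a sketch using the reported degrees-of-freedom and non-centrality formulas (reading the bracketed factor as multiplying by π(1 − π)):

```python
from scipy.stats import chi2, ncx2

def lrt_power(N, D, pi, alpha=0.05):
    """Approximate power of the mixture-vs-normal LRT via the fitted
    non-central chi-squared approximation from the abstract."""
    df = 2.48 + 10.0 * N ** (-0.75)
    ncp = 0.174 * N * (D - 1.4) ** 2 * pi * (1.0 - pi)
    crit = chi2.ppf(1.0 - alpha, df)      # 5%-level critical value
    return ncx2.sf(crit, df, ncp)         # power under the alternative
```

Power rises with N at fixed D, and falls as the mixing proportion moves away from 0.5, matching the qualitative findings above.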

15.
16.
We revisit the well-known Behrens–Fisher problem and apply a newly developed ‘Computational Approach Test’ (CAT) to test the equality of two population means when the populations are assumed normal with unknown and possibly unequal variances. An advantage of the CAT is that it does not require explicit knowledge of the sampling distribution of the test statistic. The CAT is then compared with three widely accepted tests—the Welch–Satterthwaite test (WST), the Cochran–Cox test (CCT), and the ‘generalized p-value’ test (GPT)—and a recently suggested jackknife-based test, the Singh–Saxena–Srivastava test (SSST). Further, the model robustness of these five tests is studied when the data actually come from t-distributions but are wrongly perceived as normal. Our detailed study, based on a comprehensive simulation, indicates some interesting results, including that the GPT is quite conservative and that the SSST is not as good as has been claimed in the literature. To the best of our knowledge, the trends observed in our study have not been reported in the existing literature.
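The WST baseline is easy to state; a sketch of the Welch–Satterthwaite t statistic with its approximate degrees of freedom, checked against scipy's implementation (the CAT itself is not reproduced):

```python
import numpy as np
from scipy import stats

def welch_test(x, y):
    """Welch-Satterthwaite test for equal means with unequal variances."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    v1, v2 = x.var(ddof=1) / n1, y.var(ddof=1) / n2
    t = (x.mean() - y.mean()) / np.sqrt(v1 + v2)
    # Satterthwaite approximate degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p
```

The statistic and p-value agree with `scipy.stats.ttest_ind(..., equal_var=False)`.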

17.
We propose a robust likelihood approach for the Birnbaum–Saunders regression model under model misspecification, which provides full likelihood inference about the regression parameters without knowledge of the true random mechanism underlying the data. Monte Carlo simulation experiments and analyses of real data sets illustrate the efficacy of the proposed robust methodology.

18.
The A-optimality problem is solved for three treatments in a row–column layout. Depending on the numbers of rows and columns, the requirements for optimality can be decidedly counterintuitive: replication numbers need not be as equal as possible, and the trace of the information matrix need not be maximal. General rules for comparing 3×3 information matrices for their A-behavior are also developed, and the A-optimality problem is also solved for three treatments in simple block designs.
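The A-criterion underlying such comparisons is the sum of reciprocals of the nonzero eigenvalues of the treatment information matrix, proportional to the average variance of pairwise treatment contrasts. A small sketch (the helper name and tolerance are our own):

```python
import numpy as np

def a_value(C):
    """A-criterion of a treatment information matrix C (symmetric, zero
    row sums): sum of reciprocal nonzero eigenvalues; smaller is better."""
    eig = np.linalg.eigvalsh(np.asarray(C, float))
    nz = eig[eig > 1e-9]          # drop the structural zero eigenvalue
    return (1.0 / nz).sum()
```

For the completely symmetric matrix c(I − J/3) the nonzero eigenvalues are both c, so the A-value is 2/c; doubling the information halves the criterion.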

19.
20.
This article reviews several techniques useful for forming point and interval predictions in regression models with Box–Cox transformed variables. The techniques reviewed—plug-in, mean squared error analysis, predictive likelihood, and stochastic simulation—take account of non-normality and parameter uncertainty in varying degrees. A Monte Carlo study examining their small-sample accuracy indicates that uncertainty about the Box–Cox transformation parameter may be relatively unimportant. For certain parameters, deterministic point predictions are biased, and plug-in prediction intervals are also biased. Stochastic simulation, as usually carried out, leads to badly biased predictions; a modification of the usual approach renders its predictions largely unbiased.
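A minimal plug-in interval of the kind reviewed can be sketched as: estimate the Box–Cox parameter, form a normal-theory interval on the transformed scale, and invert the endpoints (the data and constants are illustrative, and parameter uncertainty is ignored, which is exactly what the more refined techniques address):

```python
import numpy as np
from scipy import stats, special

# simulated positive, right-skewed response
rng = np.random.default_rng(3)
y = rng.lognormal(mean=1.0, sigma=0.4, size=200)

z, lam = stats.boxcox(y)               # transform and estimate lambda
mu, sd = z.mean(), z.std(ddof=1)
lo_z, hi_z = mu - 1.96 * sd, mu + 1.96 * sd   # interval on the z-scale
# invert the endpoints back to the original scale (monotone transform)
lo, hi = special.inv_boxcox(lo_z, lam), special.inv_boxcox(hi_z, lam)
coverage = np.mean((y >= lo) & (y <= hi))
```

Because the transformation is monotone, inverting the endpoints preserves coverage, and the empirical coverage sits near the nominal 95%.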


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号