Similar Documents (20 results)
1.
The Kruskal–Wallis test is a rank-based one-way ANOVA. Its test statistic is shown here to be a quadratic form among the Mann–Whitney or Kendall tau concordance measures between pairs of treatments. But the full set of such concordance measures has more degrees of freedom than the Kruskal–Wallis test uses, and the independent surplus is attributable to circularity, or non-transitive effects. The meaning of circularity is well illustrated by Efron dice. The cases of k = 3, 4 treatments are analysed thoroughly in this paper, which also shows how the full sum of squares among all concordance measures can be decomposed into uncorrelated transitive and non-transitive circularity effects. A multiple comparisons procedure based on patterns of transitive orderings among treatments is implemented. The testing of circularities involves non-standard asymptotic distributions. The asymptotic theory is deferred, but Monte Carlo permutation tests are easy to implement.
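
As a minimal sketch of the pairwise structure described above (not the authors' own decomposition), the following Python snippet computes the Kruskal–Wallis statistic for k = 3 treatments alongside the scaled pairwise Mann–Whitney concordance measures; the simulated data, group sizes and use of scipy are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three hypothetical treatment samples (illustrative data only)
groups = [rng.normal(loc=mu, size=20) for mu in (0.0, 0.3, 0.6)]

# Rank-based one-way ANOVA: the Kruskal-Wallis test
H, p = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {H:.3f}, p = {p:.4f}")

# Pairwise Mann-Whitney concordance measures between treatments,
# scaled to [0, 1]: an estimate of P(X_i > X_j) (plus half of any ties)
for i in range(len(groups)):
    for j in range(i + 1, len(groups)):
        U, p_ij = stats.mannwhitneyu(groups[i], groups[j], alternative="two-sided")
        n_i, n_j = len(groups[i]), len(groups[j])
        print(f"pair ({i},{j}): U/(n_i n_j) = {U / (n_i * n_j):.3f}, p = {p_ij:.4f}")
```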

2.
In the spirit of the recent work of Ahmad (1996) this paper introduces another class of Mann–Whitney–Wilcoxon test statistics. The test statistic compares the rth and sth powers of the tail probabilities of the underlying probability distributions. The choice of r + s = 4 improves the Pitman efficiency for uniform, exponential, lognormal and normal distributions and keeps the same efficiency as the Mann–Whitney–Wilcoxon test for logistic and double exponential distributions. The two-sample test is modified for the one-sample problem with symmetric underlying distribution.

3.
Given a probability measure on the unit square, the measure of the region under an empirical P–P plot defines a two-sample rank statistic. Instances include trimmed and censored versions of the Mann–Whitney–Wilcoxon statistic and a class of statistics with applications in the analysis of receiver operating characteristic (ROC) curves. A large sample distribution for such a statistic is obtained, which is valid under sampling from general populations. Explicit results are presented for comparing arbitrary quantile segments of two populations. The results are not restricted to continuous data and incorporate adjustments for tied values in the discrete case. A multivariate version of the large sample distribution extends the class of tractable statistics in ROC analysis and facilitates the use of methods based on partial areas when the data are discrete.
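
A small numerical illustration of the simplest case (the untrimmed, uncensored Mann–Whitney–Wilcoxon instance with continuous data): the area under the empirical P–P plot coincides with a pairwise proportion of the Mann–Whitney type. The data and sample sizes below are assumptions for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=30)           # sample from F (illustrative)
y = rng.normal(loc=0.4, size=40)  # sample from G (illustrative)

def ecdf(sample):
    s = np.sort(sample)
    return lambda t: np.searchsorted(s, t, side="right") / s.size

G = ecdf(y)

# Area under the empirical P-P plot of G against F: the empirical quantile
# F_m^{-1}(u) equals the i-th order statistic of x for u in ((i-1)/m, i/m],
# so the area reduces to the average of G_n evaluated at the x observations.
area = np.mean(G(x))

# The same quantity written as a two-sample rank statistic of the
# Mann-Whitney-Wilcoxon type: the proportion of pairs with y_j <= x_i.
mw_prop = np.mean(y[None, :] <= x[:, None])

print(f"P-P plot area = {area:.4f}, pairwise proportion = {mw_prop:.4f}")
```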

4.
In socioeconomic areas, functional observations may be collected with weights, called weighted functional data. In this paper, we deal with a general linear hypothesis testing (GLHT) problem in the framework of functional analysis of variance with weighted functional data. With weights taken into account, we obtain unbiased and consistent estimators of the group mean and covariance functions. For the GLHT problem, we obtain a pointwise F-test statistic and build two global tests, respectively, via integrating the pointwise F-test statistic or taking its supremum over an interval of interest. The asymptotic distributions of test statistics under the null and some local alternatives are derived. Methods for approximating their null distributions are discussed. An application of the proposed methods to density function data is also presented. Intensive simulation studies and two real data examples show that the proposed tests outperform the existing competitors substantially in terms of size control and power.

5.
A class of statistics is introduced for testing stochastic ordering between two independent distributions. This class includes as a special case the celebrated Mann–Whitney–Wilcoxon statistic. The new class is shown to be asymptotically normal both under the null and nonnull hypotheses. It is distribution-free. Using Pitman's asymptotic efficacy it is shown that for some alternatives the Mann–Whitney–Wilcoxon statistic is the member with the highest efficacy, although for others it is not, and the member with the highest efficacy is identified.

6.

In this article, the validity of procedures for testing the significance of the slope in quantitative linear models with one explanatory variable and first-order autoregressive [AR(1)] errors is analyzed in a Monte Carlo study conducted in the time domain. Two cases are considered for the regressor: fixed and trended versus random and AR(1). In addition to the classical t-test using the Ordinary Least Squares (OLS) estimator of the slope and its standard error, we consider seven t-tests with n-2 df built on the Generalized Least Squares (GLS) estimator or an estimated GLS estimator, three variants of the classical t-test with different variances of the OLS estimator, two asymptotic tests built on the Maximum Likelihood (ML) estimator, the F-test for fixed effects based on the Restricted Maximum Likelihood (REML) estimator in the mixed-model approach, two t-tests with n-2 df based on first differences (FD) and first-difference ratios (FDR), and four modified t-tests using various corrections of the number of degrees of freedom. The FDR t-test, the REML F-test and the modified t-test using Dutilleul's effective sample size are the most valid among the testing procedures that do not assume complete knowledge of the covariance matrix of the errors. However, modified t-tests are not applicable and the FDR t-test suffers from a lack of power when the regressor is fixed and trended (i.e., FDR is the same as FD in this case when observations are equally spaced), whereas the REML algorithm fails to converge at small sample sizes. The classical t-test is valid when the regressor is fixed and trended and autocorrelation among errors is predominantly negative, and when the regressor is random and AR(1), like the errors, and autocorrelation is moderately negative or positive. We discuss the results graphically, in terms of the circularity condition defined in repeated measures ANOVA and of the effective sample size used in correlation analysis with autocorrelated sample data. An example with environmental data is presented.
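
As a rough sketch of two of the many procedures compared in this study, the snippet below contrasts the classical OLS t-test for the slope with a GLS t-test that assumes the AR(1) error covariance is fully known; the simulated trended regressor, the parameter values and the known-covariance assumption are illustrative simplifications, not the authors' full Monte Carlo design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, beta0, beta1, rho, sigma = 50, 1.0, 0.2, 0.5, 1.0

# Fixed, trended regressor and stationary AR(1) errors (illustrative simulation)
x = np.arange(n, dtype=float)
e = np.empty(n)
e[0] = rng.normal(scale=sigma / np.sqrt(1 - rho**2))
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.normal(scale=sigma)
y = beta0 + beta1 * x + e

X = np.column_stack([np.ones(n), x])

def slope_t_test(X, y, V):
    """t-test for the slope given an error covariance matrix V (GLS form)."""
    Vinv = np.linalg.inv(V)
    XtVinvX = X.T @ Vinv @ X
    beta = np.linalg.solve(XtVinvX, X.T @ Vinv @ y)
    resid = y - X @ beta
    df = len(y) - X.shape[1]                       # n - 2 degrees of freedom
    s2 = (resid @ Vinv @ resid) / df
    se = np.sqrt(s2 * np.linalg.inv(XtVinvX)[1, 1])
    t_stat = beta[1] / se
    return t_stat, 2 * stats.t.sf(abs(t_stat), df)

# Classical OLS t-test: identity covariance, i.e. the autocorrelation is ignored
print("OLS:", slope_t_test(X, y, np.eye(n)))

# GLS t-test assuming the AR(1) correlation structure is completely known
idx = np.arange(n)
V_ar1 = sigma**2 / (1 - rho**2) * rho ** np.abs(np.subtract.outer(idx, idx))
print("GLS:", slope_t_test(X, y, V_ar1))
```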

7.

A new method is proposed for identifying clusters in continuous data indexed by time or by space. The scan statistic we introduce is derived from the well-known Mann–Whitney statistic. It is completely nonparametric, as it relies only on the ranks of the marks. This scan test seems to be very powerful against any clustering alternative. These results have applications in various fields, such as the study of climate data or socioeconomic data.
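
A hedged sketch of the general idea (not the authors' exact statistic): rank the marks, slide a fixed-width window along the index, take the most extreme standardized rank sum over windows, and calibrate by permutation. The window width, the standardization and the data below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, w = 200, 20                        # series length and window width (illustrative)
marks = rng.normal(size=n)
marks[80:100] += 1.5                  # planted cluster of elevated marks

def scan_stat(values, w):
    """Max over windows of the standardized rank sum of the window's marks."""
    ranks = stats.rankdata(values)
    m = len(values)
    mu = w * (m + 1) / 2                          # mean of w ranks drawn without replacement
    sigma = np.sqrt(w * (m - w) * (m + 1) / 12)   # corresponding standard deviation
    window_sums = np.convolve(ranks, np.ones(w), mode="valid")
    return np.max(np.abs(window_sums - mu) / sigma)

observed = scan_stat(marks, w)

# Permutation p-value: shuffle the marks over the index set
perm = np.array([scan_stat(rng.permutation(marks), w) for _ in range(999)])
p_value = (1 + np.sum(perm >= observed)) / (1 + len(perm))
print(f"scan statistic = {observed:.2f}, permutation p = {p_value:.3f}")
```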

8.
It is well known that, when sample observations are independent, the area under the receiver operating characteristic (ROC) curve corresponds to the Wilcoxon statistic if the area is calculated by the trapezoidal rule. Correlated ROC curves arise often in medical research and have been studied by various parametric methods. On the basis of the Mann–Whitney U-statistics for clustered data proposed by Rosner and Grove, we construct an average ROC curve and derive nonparametric methods to estimate the area under the average curve for correlated ROC curves obtained from multiple readers. For the more complicated case where, in addition to multiple readers examining results on the same set of individuals, two or more diagnostic tests are involved, we derive analytic methods to compare the areas under correlated average ROC curves for these diagnostic tests. We demonstrate our methods in an example and compare our results with those obtained by other methods. The nonparametric average ROC curve and the analytic methods that we propose are easy to explain and simple to implement.
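
The independence case mentioned in the first sentence is easy to verify numerically; the clustered-data extension of Rosner and Grove is not sketched here. The scores below are simulated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
scores_neg = rng.normal(loc=0.0, size=50)  # disease-free group (illustrative)
scores_pos = rng.normal(loc=1.0, size=40)  # diseased group (illustrative)

# Empirical ROC curve and its trapezoidal area
thresholds = np.sort(np.concatenate([scores_neg, scores_pos]))[::-1]
tpr = [np.mean(scores_pos >= c) for c in thresholds]
fpr = [np.mean(scores_neg >= c) for c in thresholds]
auc_trapezoid = np.trapz([0.0] + tpr + [1.0], [0.0] + fpr + [1.0])

# Mann-Whitney U statistic scaled by the number of pairs
U, _ = stats.mannwhitneyu(scores_pos, scores_neg, alternative="two-sided")
auc_mw = U / (len(scores_pos) * len(scores_neg))

print(f"trapezoidal AUC = {auc_trapezoid:.4f}, U/(mn) = {auc_mw:.4f}")
```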

9.
The distribution of Student's t statistic under non-normal situations is obtained. The effect of non-normality on the type I error and the power of the two-sided t-test is studied in some detail.
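
The paper derives the distribution analytically; as a purely illustrative complement, a small Monte Carlo along the following lines shows how skewness inflates the type I error of a nominal 5% two-sided one-sample t-test. The choice of distributions, sample size and replication count are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, reps, alpha = 15, 20_000, 0.05

def type1_error(samples, true_mean):
    """Fraction of replicates in which the two-sided one-sample t-test
    rejects even though the null mean equals the true mean."""
    _, p = stats.ttest_1samp(samples, popmean=true_mean, axis=1)
    return np.mean(p < alpha)

# Symmetric (normal) versus skewed (exponential) parent distributions
print("normal     :", type1_error(rng.normal(size=(reps, n)), 0.0))
print("exponential:", type1_error(rng.exponential(size=(reps, n)), 1.0))
```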

10.
Early investigations of the effects of non-normality indicated that skewness has a greater effect on the distribution of the t-statistic than does kurtosis. When the distribution is skewed, the actual p-values can be larger than the values calculated from the t-tables. Transformation of data to normality has shown good results in the case of the univariate t-test. In order to reduce the effect of skewness of the distribution on the normal-based t-test, one can transform the data and perform the t-test on the transformed scale. This method is not only a remedy for satisfying the distributional assumption, but it also turns out that one can achieve greater efficiency of the test. We investigate the efficiency of tests after a Box-Cox transformation. In particular, we consider the one-sample test of location and study the gains in efficiency for the one-sample t-test following a Box-Cox transformation. Under some conditions, we prove that the asymptotic relative efficiency of the transformed t-test and Hotelling's T²-test of multivariate location with respect to the same statistics based on untransformed data is at least one.
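
A minimal sketch of the transform-then-test idea for the one-sample location problem, assuming positive data and a hypothesized location (median) that is carried through the same monotone Box-Cox transformation; the data, the hypothesized value and the use of scipy's Box-Cox routines are illustrative assumptions, not the paper's efficiency analysis.

```python
import numpy as np
from scipy import stats
from scipy.special import boxcox as boxcox_transform

rng = np.random.default_rng(6)
x = rng.lognormal(mean=0.0, sigma=0.8, size=40)  # skewed positive data (illustrative)
med0 = 1.0  # hypothesized location (median) on the original scale (illustrative)

# One-sample t-test on the original, skewed scale
t_raw, p_raw = stats.ttest_1samp(x, popmean=med0)

# Box-Cox transform the data, then test location on the transformed scale;
# the hypothesized location is mapped through the same monotone transform.
x_bc, lam = stats.boxcox(x)
t_bc, p_bc = stats.ttest_1samp(x_bc, popmean=boxcox_transform(med0, lam))

print(f"estimated lambda = {lam:.3f}")
print(f"raw scale    : t = {t_raw:.3f}, p = {p_raw:.4f}")
print(f"Box-Cox scale: t = {t_bc:.3f}, p = {p_bc:.4f}")
```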

11.
In the last few years, two adaptive tests for paired data have been proposed. One test, proposed by Freidlin et al. [On the use of the Shapiro–Wilk test in two-stage adaptive inference for paired data from moderate to very heavy tailed distributions, Biom. J. 45 (2003), pp. 887–900], is a two-stage procedure that uses a selection statistic to determine which of three rank scores to use in the computation of the test statistic. Another statistic, proposed by O'Gorman [Applied Adaptive Statistical Methods: Tests of Significance and Confidence Intervals, Society for Industrial and Applied Mathematics, Philadelphia, 2004], uses a weighted t-test with the weights determined by the data. These two methods, and an earlier rank-based adaptive test proposed by Randles and Hogg [Adaptive Distribution-free Tests, Commun. Stat. 2 (1973), pp. 337–356], are compared with the t-test and with Wilcoxon's signed-rank test. For sample sizes between 15 and 50, the results show that the adaptive test proposed by Freidlin et al. and the adaptive test proposed by O'Gorman have higher power than the other tests over a range of moderate to long-tailed symmetric distributions. The results also show that the test proposed by O'Gorman has greater power than the other tests for short-tailed distributions. For sample sizes greater than 50 and for small sample sizes, the adaptive test proposed by O'Gorman has the highest power for most distributions.
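
The following is only a caricature of the two-stage structure (a selector statistic computed on the paired differences decides which test is then applied); it is not Freidlin et al.'s actual rank-score selection rule, nor O'Gorman's weighted t-test. The selector threshold, the candidate tests and the simulated data are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def adaptive_paired_test(before, after, alpha_select=0.05):
    """Two-stage idea in caricature: a Shapiro-Wilk selector on the paired
    differences chooses the subsequent test. NOT the published selection
    rule, only an illustration of the two-stage structure."""
    d = np.asarray(after) - np.asarray(before)
    _, p_sw = stats.shapiro(d)
    if p_sw > alpha_select:                    # differences look roughly normal
        stat, p = stats.ttest_rel(after, before)
        return "paired t-test", stat, p
    stat, p = stats.wilcoxon(after, before)    # otherwise fall back to a rank test
    return "Wilcoxon signed-rank", stat, p

rng = np.random.default_rng(7)
before = rng.standard_t(df=3, size=30)         # heavy-tailed data (illustrative)
after = before + 0.4 + 0.5 * rng.standard_t(df=3, size=30)
print(adaptive_paired_test(before, after))
```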

12.
In this article, we revisit some problems in non-parametric hypothesis testing. First, we extend the classical result of Bahadur & Savage [Ann. Math. Statist. 25 (1956) 1115] to other testing problems, and we answer a conjecture of theirs. Other examples considered are testing whether or not the mean is rational, testing goodness-of-fit, and equivalence testing. Next, we discuss the uniform behaviour of the classical t-test. For most non-parametric models, the Bahadur–Savage result yields that the size of the t-test is one for every sample size. Even if we restrict attention to the family of symmetric distributions supported on a fixed compact set, the t-test is not even uniformly asymptotically level α. However, the convergence of the rejection probability is established uniformly over a large family with a very weak uniform integrability type of condition. Furthermore, under such a restriction, the t-test possesses an asymptotic maximin optimality property.

13.
Approximating the Shapiro-Wilk W-test for non-normality
A new approximation for the coefficients required to calculate the Shapiro-Wilk W-test is derived. It is easy to calculate and applies for any sample size greater than 3. A normalizing transformation for the W statistic is given, enabling its P-value to be computed simply. The distribution of the new approximation to W agrees well with published critical points which use exact coefficients.
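
Modern statistical software implements approximations in this spirit; for example, scipy.stats.shapiro computes W from approximate coefficients and converts it to a P-value via a normalizing transformation (Royston's algorithm, which may differ in detail from the approximation described above). A minimal usage sketch with simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x = rng.exponential(size=50)   # clearly non-normal sample (illustrative)

# W statistic and its P-value from scipy's approximate Shapiro-Wilk test
W, p = stats.shapiro(x)
print(f"W = {W:.4f}, p = {p:.4g}")
```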

14.
Clinical trials are often designed to compare continuous non-normal outcomes. The conventional statistical method for such a comparison is the non-parametric Mann–Whitney test, which provides a P-value for testing the hypothesis that the distributions of both treatment groups are identical, but does not provide a simple and straightforward estimate of treatment effect. For that, Hodges and Lehmann proposed estimating the shift parameter between two populations and its confidence interval (CI). However, such a shift parameter does not have a straightforward interpretation, and its CI contains zero in some cases when the Mann–Whitney test produces a significant result. To overcome the aforementioned problems, we introduce the use of the win ratio for analysing such data. Patients in the new treatment and control groups are formed into all possible pairs. For each pair, the new treatment patient is labelled a 'winner' or a 'loser' if it is known who had the more favourable outcome. The win ratio is the total number of winners divided by the total number of losers. A 95% CI for the win ratio can be obtained using the bootstrap method. Statistical properties of the win ratio statistic are investigated using two real trial data sets and six simulation studies. Results show that the win ratio method has about the same power as the Mann–Whitney method. We recommend the use of the win ratio method for estimating the treatment effect (and CI) and the Mann–Whitney method for calculating the P-value for comparing continuous non-normal outcomes when the number of tied pairs is small.
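
A minimal sketch of the win ratio calculation and its bootstrap confidence interval for a continuous outcome where larger values are taken to be more favourable; the simulated data, the tie handling and the bootstrap settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
new = rng.exponential(scale=1.5, size=40)    # treatment outcomes (illustrative)
ctrl = rng.exponential(scale=1.0, size=40)   # control outcomes (illustrative)

def win_ratio(new, ctrl):
    # Every treatment patient is paired with every control patient;
    # here "more favourable" simply means a larger continuous outcome.
    diff = new[:, None] - ctrl[None, :]
    winners = np.sum(diff > 0)
    losers = np.sum(diff < 0)                # ties count as neither wins nor losses
    return winners / losers

wr = win_ratio(new, ctrl)

# Bootstrap 95% CI: resample patients within each arm
boot = [win_ratio(rng.choice(new, size=new.size, replace=True),
                  rng.choice(ctrl, size=ctrl.size, replace=True))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"win ratio = {wr:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```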

15.
The Kolassa method implemented in the nQuery Advisor software has been widely used for approximating the power of the Wilcoxon–Mann–Whitney (WMW) test for ordered categorical data, in which an Edgeworth approximation is used to estimate the power of an unconditional test based on the WMW U statistic. When the sample size is small or when the sizes of the two groups are unequal, Kolassa's method may yield a quite poor approximation to the power of the conditional WMW test that is commonly implemented in statistical packages. Two modifications of Kolassa's formula are proposed and assessed by simulation studies.
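
A brute-force alternative to analytic power formulas is direct simulation of the WMW test (as implemented in scipy, with an asymptotic tie-corrected P-value) on ordered categorical data, which is what approximations such as Kolassa's aim to avoid; the category probabilities, the deliberately unequal group sizes and the replication count below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
categories = np.arange(5)                               # ordered categories 0..4
p_control = np.array([0.30, 0.30, 0.20, 0.15, 0.05])    # illustrative probabilities
p_treat   = np.array([0.15, 0.25, 0.25, 0.20, 0.15])    # shifted towards higher categories
n1, n2, alpha, reps = 15, 30, 0.05, 2000                # deliberately unequal group sizes

rejections = 0
for _ in range(reps):
    x = rng.choice(categories, size=n1, p=p_control)
    y = rng.choice(categories, size=n2, p=p_treat)
    # The asymptotic method applies a tie correction, which matters because
    # ordered categorical data produce many ties.
    _, p = stats.mannwhitneyu(x, y, alternative="two-sided", method="asymptotic")
    rejections += p < alpha

print(f"simulated power of the WMW test ~ {rejections / reps:.3f}")
```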

16.
An adaptive test is proposed for the one-way layout. This test procedure uses the order statistics of the combined data to obtain estimates of percentiles, which are used to select an appropriate set of rank scores for the one-way test statistic. This test is designed to have reasonably high power over a range of distributions. The adaptive procedure proposed for a one-way layout is a generalization of an existing two-sample adaptive test procedure. In this Monte Carlo study, the power and significance level of the F-test, the Kruskal-Wallis test, the normal scores test, and the adaptive test were evaluated for the one-way layout. All tests maintained their significance level for data sets having at least 24 observations. The simulation results show that the adaptive test is more powerful than the other tests for skewed distributions if the total number of observations equals or exceeds 24. For data sets having at least 60 observations the adaptive test is also more powerful than the F-test for some symmetric distributions.

17.
Pretest–posttest studies are an important and popular method for assessing the effectiveness of a treatment or an intervention in many scientific fields. While the treatment effect, measured as the difference between the two mean responses, is of primary interest, testing the difference of the two distribution functions for the treatment and the control groups is also an important problem. The Mann–Whitney test has been a standard tool for testing the difference of distribution functions with two independent samples. We develop empirical likelihood-based (EL) methods for the Mann–Whitney test to incorporate the two unique features of pretest–posttest studies: (i) the availability of baseline information for both groups; and (ii) the structure of the data with missingness by design. Our proposed methods combine the standard Mann–Whitney test with the EL method of Huang, Qin and Follmann [(2008), 'Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest–Posttest Study', Journal of the American Statistical Association, 103(483), 1270–1280], the imputation-based empirical likelihood method of Chen, Wu and Thompson [(2015), 'An Imputation-Based Empirical Likelihood Approach to Pretest–Posttest Studies', The Canadian Journal of Statistics, accepted for publication], and the jackknife empirical likelihood method of Jing, Yuan and Zhou [(2009), 'Jackknife Empirical Likelihood', Journal of the American Statistical Association, 104, 1224–1232]. Theoretical results are presented and finite sample performances of the proposed methods are evaluated through simulation studies.

18.
Previously we used the geometry of n-dimensional space to derive the paired-samples t-test and its p-value. In the present paper we describe the 'ubiquitous' application of these results to single-degree-of-freedom linear model hypothesis tests. As examples, we derive the p- and t-values for the independent-samples t-test, for testing a contrast in an analysis of variance and for testing the slope in a simple linear regression analysis. An angle θ in n-dimensional space is again pivotal in the development of the ideas. The relationships between p, t, θ, F and the correlation coefficient are also described by using a 'statistical triangle'.
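
Two of the stated relationships are easy to check numerically for the simple linear regression example: the slope t-statistic can be recovered from the correlation coefficient as t = r·sqrt(n−2)/sqrt(1−r²), and the single-degree-of-freedom F-statistic equals t². The simulated data below are an illustrative assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 25
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)   # simple linear regression data (illustrative)

# t-statistic for the slope from the regression fit
fit = stats.linregress(x, y)
t_slope = fit.slope / fit.stderr

# The same t recovered from the correlation coefficient r
r = fit.rvalue
t_from_r = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)

# For a single-degree-of-freedom hypothesis, F = t^2
print(f"t = {t_slope:.4f}, t from r = {t_from_r:.4f}, F = t^2 = {t_slope**2:.4f}")
```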

19.
In this article, we propose a new class of semiparametric instrumental variable models with partially varying coefficients, in which the structural function has a partially linear form and the impact of endogenous structural variables can vary over different levels of some exogenous variables. We propose a three-step estimation procedure to estimate both functional and constant coefficients. The consistency and asymptotic normality of these proposed estimators are established. Moreover, a generalized F-test is developed to test whether the functional coefficients are of particular parametric forms with some underlying economic intuitions, and furthermore, the limiting distribution of the proposed generalized F-test statistic under the null hypothesis is established. Finally, we illustrate the finite sample performance of our approach with simulations and two real data examples in economics.

20.
The concept of causality is naturally defined in terms of conditional distributions; however, almost all empirical work focuses on causality in mean. This paper proposes a nonparametric statistic to test conditional independence and Granger non-causality between two variables conditionally on a third. The test statistic is based on the comparison of conditional distribution functions using an L2 metric. We use the Nadaraya–Watson method to estimate the conditional distribution functions. We establish the asymptotic size and power properties of the test statistic and we motivate the validity of the local bootstrap. We run a simulation experiment to investigate the finite sample properties of the test and we illustrate its practical relevance by examining Granger non-causality between S&P 500 Index returns and the VIX volatility index. Contrary to the conventional t-test, which is based on a linear mean regression, we find that the VIX index predicts excess returns at both short and long horizons.
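
A rough sketch of two of the ingredients (Nadaraya–Watson estimation of conditional distribution functions and an L2-type distance between them); the bandwidth, the evaluation grid, the averaging over sample points and the omission of the local bootstrap calibration are all simplifications and assumptions, not the authors' exact statistic.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 300
z = rng.normal(size=n)
x = 0.5 * z + rng.normal(size=n)              # candidate "cause" (illustrative)
y = 0.5 * z + 0.4 * x + rng.normal(size=n)    # y depends on x given z

def gauss_kernel(u, h):
    return np.exp(-0.5 * (u / h) ** 2)

def nw_cond_cdf(y_obs, cond_vars, cond_point, y_grid, h):
    """Nadaraya-Watson estimate of P(Y <= y | conditioning variables = point)."""
    w = np.ones(len(y_obs))
    for col, pt in zip(cond_vars, cond_point):
        w = w * gauss_kernel(col - pt, h)
    w = w / w.sum()
    return np.array([np.sum(w * (y_obs <= yy)) for yy in y_grid])

y_grid = np.linspace(y.min(), y.max(), 30)
h = 1.06 * n ** (-1 / 5)                      # crude bandwidth (illustrative)

# L2-type statistic: squared distance between F(y|z) and F(y|x,z),
# averaged over the y grid and over the observed (x_i, z_i) points.
stat = 0.0
for i in range(n):
    f_z  = nw_cond_cdf(y, [z], [z[i]], y_grid, h)
    f_xz = nw_cond_cdf(y, [z, x], [z[i], x[i]], y_grid, h)
    stat += np.mean((f_xz - f_z) ** 2)
stat /= n
print(f"L2 conditional-independence statistic = {stat:.4f}")
```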

