Similar documents
 20 similar documents found (search time: 187 ms)
1.
After pointing out a drawback in Bartlett's chi-square approximation, we suggest a simple modification and a Gamma approximation to improve Bartlett's M test for homogeneity of variances.

2.
Bartlett's (1937) test for equality of variances is based on a χ2 distribution approximation. This approximation deteriorates either when the sample size is small (particularly < 4) or when the number of populations is large. In a simulation study, we find a similar varying trend in the mean differences between the empirical distributions of Bartlett's statistic and their χ2 approximations. Using these mean differences to represent the distributional departure, a simple adjustment of Bartlett's statistic is proposed on the basis of an equal-mean principle. The performance before and after adjustment is extensively investigated under equal and unequal sample sizes, with the number of populations varying from 3 to 100. Compared with the traditional Bartlett's statistic, the adjusted statistic is distributed more closely to the χ2 distribution for homogeneous samples from normal populations. The type I error is well controlled and the power is slightly higher after adjustment. In conclusion, the adjustment gives good control of the type I error and higher power, and is therefore recommended for small samples and a large number of populations when the underlying distribution is normal.
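The statistic both of these papers start from can be sketched as follows: a minimal implementation of Bartlett's M with its χ2 p-value, checked against SciPy's built-in version (the papers' Gamma approximation and mean adjustment are not reproduced here).

```python
import numpy as np
from scipy import stats

def bartlett_statistic(*groups):
    """Bartlett's M statistic for homogeneity of variances (chi2_{k-1} under H0)."""
    k = len(groups)
    n = np.array([len(g) for g in groups])
    s2 = np.array([np.var(g, ddof=1) for g in groups])
    N = n.sum()
    sp2 = np.sum((n - 1) * s2) / (N - k)            # pooled variance
    num = (N - k) * np.log(sp2) - np.sum((n - 1) * np.log(s2))
    C = 1.0 + (np.sum(1.0 / (n - 1)) - 1.0 / (N - k)) / (3.0 * (k - 1))
    return num / C

rng = np.random.default_rng(0)
groups = [rng.normal(0, 1, 8), rng.normal(0, 1, 10), rng.normal(0, 2, 12)]
M = bartlett_statistic(*groups)
p = stats.chi2.sf(M, df=len(groups) - 1)
```

The correction factor C is what makes the χ2 approximation deteriorate gracefully rather than fail outright for small groups; the adjustment discussed above modifies the statistic further.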

3.
Book Reviews     
The Levene test is a widely used test for detecting differences in dispersion. The modified Levene transformation using sample medians is considered in this article. After Levene's transformation the data are not normally distributed, hence, nonparametric tests may be useful. As the Wilcoxon rank sum test applied to the transformed data cannot control the type I error rate for asymmetric distributions, a permutation test based on reallocations of the original observations rather than the absolute deviations was investigated. Levene's transformation is then only an intermediate step to compute the test statistic. Such a Levene test, however, cannot control the type I error rate when the Wilcoxon statistic is used; with the Fisher–Pitman permutation test it can be extremely conservative. The Fisher–Pitman test based on reallocations of the transformed data seems to be the only acceptable nonparametric test. Simulation results indicate that this test is on average more powerful than applying the t test after Levene's transformation, even when the t test is improved by the deletion of structural zeros.
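A minimal sketch of the recommended procedure — the exact Fisher–Pitman permutation test applied to the median-based Levene transformation — for two samples small enough to enumerate every reallocation:

```python
import itertools
import numpy as np

def fisher_pitman_levene(x, y):
    """Exact Fisher-Pitman permutation test on median-centered absolute
    deviations (Levene/Brown-Forsythe transformation), two samples."""
    zx = np.abs(x - np.median(x))          # Levene transformation, group 1
    zy = np.abs(y - np.median(y))          # Levene transformation, group 2
    z = np.concatenate([zx, zy])
    n1 = len(zx)
    obs = abs(zx.mean() - zy.mean())
    count = total = 0
    for comb in itertools.combinations(range(len(z)), n1):  # all reallocations
        mask = np.zeros(len(z), dtype=bool)
        mask[list(comb)] = True
        stat = abs(z[mask].mean() - z[~mask].mean())
        count += stat >= obs - 1e-12
        total += 1
    return count / total

a = np.array([0.1, -0.1, 0.2, -0.2, 0.05, -0.05])   # small spread
b = np.array([10.0, -10.0, 8.0, -8.0, 9.0, -9.0])   # large spread
p = fisher_pitman_levene(a, b)
```

For larger samples the full enumeration (C(n1+n2, n1) reallocations) would be replaced by random reallocations.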

4.
We revisit the problem of testing homoscedasticity (equality of variances) of several normal populations, which has applications in many statistical analyses, including design of experiments. Standard textbooks and widely used statistical packages propose a few popular tests, including Bartlett's test, Levene's test and a few adjustments of the latter. Apparently, the popularity of these tests has been based on limited simulation studies carried out a few decades ago. The traditional tests, including the classical likelihood ratio test (LRT), are asymptotic in nature and hence do not perform well for small sample sizes. In this paper we propose a simple parametric bootstrap (PB) modification of the LRT, and compare it against the other popular tests as well as their PB versions in terms of size and power. Our comprehensive simulation study dispels some popularly held myths about the commonly used tests and sheds new light on this important problem. Though most popular statistical software packages suggest using Bartlett's test, Levene's test, or the modified Levene's test, among a few others, our extensive simulation study, carried out under the normal model as well as several non-normal models, clearly shows that a PB version of the modified Levene's test (which does not use the F-distribution cut-off point as its critical value) and Loh's exact test are the “best” performers in terms of overall size as well as power.
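The parametric-bootstrap idea can be sketched as follows, assuming normality under the null; for brevity this sketch bootstraps Bartlett's M rather than the LRT or modified Levene statistic the paper studies:

```python
import numpy as np
from scipy import stats

def pb_homogeneity_test(groups, B=500, seed=1):
    """Parametric-bootstrap p-value for equality of variances.
    Sketch: Bartlett's M is used as the statistic; the paper applies the
    same scheme to the LRT and a modified Levene statistic."""
    rng = np.random.default_rng(seed)
    n = [len(g) for g in groups]
    obs = stats.bartlett(*groups).statistic
    # Fit the null model: a single common variance (means are irrelevant
    # for the statistic, so zero means are used in the resamples).
    sp2 = sum((ni - 1) * np.var(g, ddof=1)
              for ni, g in zip(n, groups)) / (sum(n) - len(n))
    count = 0
    for _ in range(B):
        boot = [rng.normal(0.0, np.sqrt(sp2), ni) for ni in n]
        count += stats.bartlett(*boot).statistic >= obs
    return (1 + count) / (1 + B)

rng = np.random.default_rng(2)
groups = [rng.normal(0, 1, 5), rng.normal(0, 1, 5), rng.normal(0, 1, 5)]
p_null = pb_homogeneity_test(groups)          # null data: p should not be tiny
groups2 = [rng.normal(0, 1, 15), rng.normal(0, 20, 15), rng.normal(0, 1, 15)]
p_alt = pb_homogeneity_test(groups2)          # grossly unequal variances
```

The point of the PB calibration is that the null distribution is simulated at the fitted null model instead of relying on the asymptotic χ2 cut-off, which is exactly where the traditional tests fail for small n.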

5.
A combination of a smooth test statistic and (an approximate) Schwarz's selection rule was proposed by Inglot, T., Kallenberg, W. C. M. and Ledwina, T. ((1997). Data-driven smooth tests for composite hypotheses. Ann. Statist. 25, 1222–1250) as a solution to the standard goodness-of-fit problem when nuisance parameters are present. In the present paper we modify that solution by proposing another analogue of Schwarz's rule and rederiving its properties and those of the resulting test statistic. To avoid technicalities we restrict our attention to the location-scale family and method-of-moments estimators of its parameters. In a parallel paper [Janic-Wróblewska, A. (2004). Data-driven smooth tests for the extreme value distribution. Statistics, in press] we illustrate an application of our solution and the advantages of the modification when testing fit to the extreme value distribution.

6.
ABSTRACT

On the basis of Csiszár's φ-divergence discrimination information, we propose a measure of discrepancy between the equilibrium distributions associated with two distributions. After proving that a distribution can be characterized by its associated equilibrium distribution, a Rényi distance between equilibrium distributions is constructed, which allows us to propose an EDF-based goodness-of-fit test for the exponential distribution. To compare the performance of the proposed test, some well-known EDF-based tests and some entropy-based tests are considered. Based on the simulation results, the proposed test has higher power than the competing entropy-based tests for alternatives with decreasing hazard rate function. The use of the proposed test is demonstrated in an illustrative example.

7.
Abstract

In this paper, we introduce a version of Hayter and Tsui's statistical test with double sampling for the mean vector of a population under the multivariate normal assumption. A study showed that this new test is as efficient as, or more efficient than, the well-known Hotelling's T2 with double sampling. Some nice features of Hayter and Tsui's test are its simplicity of implementation and its capability of identifying the errant variables when the null hypothesis is rejected. Taking that into consideration, a new control chart called HTDS is also introduced as a tool to monitor the mean vector of a multivariate process when using double sampling.
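A rough sketch of a Hayter–Tsui style max-|z| test (single sampling, known covariance, Monte Carlo critical value); the double-sampling extension and the HTDS chart are not reproduced here:

```python
import numpy as np

def hayter_tsui(X, mu0, Sigma, alpha=0.05, B=20000, seed=3):
    """Max-|z| test of H0: mu = mu0 for multivariate normal data with known
    covariance Sigma. The critical value is approximated by Monte Carlo from
    N(0, R), R the correlation matrix; variables exceeding it are 'errant'."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    sd = np.sqrt(np.diag(Sigma))
    R = Sigma / np.outer(sd, sd)                       # correlation matrix
    z = np.sqrt(n) * (X.mean(axis=0) - mu0) / sd       # per-variable z scores
    M = np.max(np.abs(z))
    sims = rng.multivariate_normal(np.zeros(p), R, size=B)
    crit = np.quantile(np.max(np.abs(sims), axis=1), 1 - alpha)
    errant = np.where(np.abs(z) > crit)[0]             # which variables reject
    return M, crit, M > crit, errant

rng = np.random.default_rng(4)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], Sigma, size=50)
M, crit, reject, errant = hayter_tsui(X, np.array([0.0, 0.0]), Sigma)
```

The "errant variables" feature mentioned above falls out for free: any coordinate whose |z| exceeds the simultaneous critical value is flagged.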

8.
In teaching the development of uniformly most powerful unbiased (UMPU) tests, one rarely discusses the performance of alternative biased tests. It is shown, through the comparison of two independent Bernoulli proportions, that a biased test (the Z test) can be more powerful than the UMPU test (Fisher's exact test—randomized) in a large region of the alternative parameter space. A more general example is also given.

9.
In this paper we present data-driven smooth tests for the extreme value distribution. These tests are based on a general idea of construction of data-driven smooth tests for composite hypotheses introduced by Inglot, T., Kallenberg, W. C. M. and Ledwina, T. [(1997). Data-driven smooth tests for composite hypotheses. Ann. Statist., 25, 1222–1250] and its modification for location-scale family proposed in Janic-Wróblewska, A. [(2004). Data-driven smooth test for a location-scale family. Statistics, in press]. Results of power simulations show that the newly introduced test performs very well for a wide range of alternatives and is competitive with other commonly used tests for the extreme value distribution.
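The underlying construction — a Neyman smooth statistic combined with a Schwarz-type selection rule — can be sketched for the simple null of uniformity; the composite, location-scale case studied above additionally plugs in estimated parameters:

```python
import numpy as np
from numpy.polynomial import legendre

def smooth_statistic(u, k):
    """Neyman smooth statistic of order k for H0: U ~ Uniform(0,1),
    built from orthonormal Legendre polynomials on [0,1]."""
    n = len(u)
    N = 0.0
    for j in range(1, k + 1):
        c = np.zeros(j + 1)
        c[j] = 1.0
        phi = np.sqrt(2 * j + 1) * legendre.legval(2 * u - 1, c)
        N += (phi.sum() / np.sqrt(n)) ** 2
    return N

def data_driven_k(u, kmax=10):
    """Schwarz-type selection rule: maximize N_k - k * log(n) over k."""
    n = len(u)
    scores = [smooth_statistic(u, k) - k * np.log(n) for k in range(1, kmax + 1)]
    return int(np.argmax(scores)) + 1

rng = np.random.default_rng(5)
u = rng.uniform(size=200)
k = data_driven_k(u)
T = smooth_statistic(u, k)
```

The penalty k·log(n) keeps the selected dimension small under the null while letting it grow when higher-order components carry signal, which is what gives data-driven smooth tests their broad power.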

10.
Fisher's exact test, difference in proportions, log odds ratio, Pearson's chi-squared, and likelihood ratio are compared as test statistics for testing independence of two dichotomous factors when the associated p values are computed by using the conditional distribution given the marginals. The statistics listed above that can be used for a one-sided alternative give identical p values. For a two-sided alternative, many of the above statistics lead to different p values. The p values are shown to differ only by which tables in the opposite tail from the observed table are considered more extreme than the observed table.
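The point about orderings can be made concrete by enumerating the conditional (hypergeometric) distribution of a 2×2 table and comparing two definitions of "more extreme" in the opposite tail:

```python
import numpy as np
from scipy import stats

def conditional_pvalues(table):
    """Two-sided conditional p-values for a 2x2 table with fixed margins,
    under two orderings: hypergeometric probability vs. chi-square value."""
    a, b = table[0]
    c, d = table[1]
    r1, c1, N = a + b, a + c, a + b + c + d
    lo, hi = max(0, r1 + c1 - N), min(r1, c1)
    support = np.arange(lo, hi + 1)
    pmf = stats.hypergeom.pmf(support, N, c1, r1)   # P(X = x | margins)

    def chi2(x):
        t = np.array([[x, r1 - x], [c1 - x, N - r1 - c1 + x]], dtype=float)
        e = np.outer(t.sum(1), t.sum(0)) / N
        return ((t - e) ** 2 / e).sum()

    obs = pmf[support == a][0]
    p_prob = pmf[pmf <= obs * (1 + 1e-9)].sum()               # Fisher ordering
    chis = np.array([chi2(x) for x in support])
    p_chi2 = pmf[chis >= chi2(a) - 1e-9].sum()                # chi-square ordering
    return p_prob, p_chi2

p_prob, p_chi2 = conditional_pvalues([[3, 7], [8, 2]])
```

Both p-values sum the same conditional probabilities in the observed tail; they can differ only through which opposite-tail tables each ordering counts as "at least as extreme", which is exactly the abstract's point.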

11.
The distribution of the sample correlation coefficient is derived when the population is a mixture of two bivariate normal distributions with zero means but different covariance matrices, with mixing proportions 1 − λ and λ respectively; λ will be called the proportion of contamination. The test of ρ = 0 based on Student's t, Fisher's z, the arcsine, or Ruben's transformation is shown numerically to be nonrobust when λ lies between 0.05 and 0.50 and the contaminated population has 9 times the variance of the standard (bivariate normal) population. These tests are also sensitive to the presence of outliers.
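A minimal sketch of one of the tests involved — Fisher's z test of ρ = 0 — applied to a scale-contaminated sample; both mixture components are taken uncorrelated here, purely to illustrate the machinery:

```python
import numpy as np
from scipy import stats

def fisher_z_pvalue(x, y):
    """Two-sided test of rho = 0 via Fisher's z transform of the sample
    correlation; z * sqrt(n - 3) is approximately N(0,1) under H0."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    z = np.arctanh(r) * np.sqrt(n - 3)
    return r, 2 * stats.norm.sf(abs(z))

def contaminated_sample(n, lam, scale, rng):
    """Mixture: N(0, I) w.p. 1-lam, N(0, scale^2 * I) w.p. lam."""
    big = rng.random(n) < lam
    x = rng.normal(0, 1, n)
    y = rng.normal(0, 1, n)
    x[big] *= scale
    y[big] *= scale
    return x, y

rng = np.random.default_rng(6)
x, y = contaminated_sample(100, lam=0.1, scale=3.0, rng=rng)  # 9x the variance
r, p = fisher_z_pvalue(x, y)
```

Repeating this over many simulated samples and recording the rejection rate is how the nonrobustness claimed above would be demonstrated numerically.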

12.
This paper investigates a new family of goodness-of-fit tests based on negative exponential disparities. This family includes the popular Pearson's chi-square as a member and is a subclass of the general class of disparity tests (Basu and Sarkar, 1994), which also contains the family of power divergence statistics. Pitman efficiency and finite-sample power comparisons between different members of this new family are made. Three asymptotic approximations of the exact null distributions of the negative exponential disparity family of tests are discussed. Some numerical results on the small-sample performance of this family of tests are presented for the symmetric null hypothesis. It is shown that the negative exponential disparity family, like the power divergence family, produces a goodness-of-fit test statistic that can be a very attractive alternative to Pearson's chi-square. Some numerical results suggest that application of this test statistic, as an alternative to Pearson's chi-square, could be preferable to the I^{2/3} statistic of Cressie and Read (1984) when chi-square critical values are used.
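The disparity construction can be sketched as follows; the quadratic G recovers Pearson's chi-square exactly, while G(δ) = e^(−δ) − 1 + δ is the assumed form of the negative exponential disparity (hedged — consult the paper for its precise definition):

```python
import numpy as np
from scipy import stats

def disparity_statistic(obs, probs, G):
    """Disparity test statistic 2n * sum_i m_i * G(delta_i), where
    delta_i = p_i / m_i - 1 is the Pearson residual of cell i."""
    n = obs.sum()
    p = obs / n
    delta = p / probs - 1.0
    return 2.0 * n * np.sum(probs * G(delta))

G_pearson = lambda d: d ** 2 / 2.0            # recovers Pearson's chi-square
G_negexp = lambda d: np.exp(-d) - 1.0 + d     # negative exponential disparity

obs = np.array([18.0, 25.0, 32.0, 25.0])      # observed counts, n = 100
probs = np.full(4, 0.25)                      # null cell probabilities
chi2 = disparity_statistic(obs, probs, G_pearson)
ned = disparity_statistic(obs, probs, G_negexp)
```

Each choice of G with G(0) = 0, G'(0) = 0, G''(0) = 1 yields a statistic that is asymptotically χ2 under the null, which is why chi-square critical values are a natural (if imperfect) default across the family.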

13.
Tests that combine p-values, such as Fisher's product test, are popular for testing the global null hypothesis H0 that each of n component null hypotheses, H1,…,Hn, is true versus the alternative that at least one of H1,…,Hn is false, since they are more powerful than classical multiple tests such as the Bonferroni test and the Simes test. Recent modifications of Fisher's product test that are popular in the analysis of large-scale genetic studies include the truncated product method (TPM) of Zaykin et al. (2002), the rank truncated product (RTP) test of Dudbridge and Koeleman (2003) and, more recently, a permutation-based test, the adaptive rank truncated product (ARTP) method of Yu et al. (2009). The TPM and RTP methods require the user to specify a truncation point. The ARTP method improves the performance of the RTP method by optimizing the selection of the truncation point over a set of pre-specified candidate points. In this paper we extend the ARTP by proposing to use all possible truncation points {1,…,n} as the candidate truncation points. Furthermore, we derive the theoretical probability distribution of the test statistic under the global null hypothesis H0. Simulations are conducted to compare the performance of the proposed test with the Bonferroni test, the Simes test, the RTP test, and Fisher's product test. The simulation results show that the proposed test has higher power than the Bonferroni test and the Simes test, as well as the RTP method. It is also significantly more powerful than Fisher's product test when the number of truly false hypotheses is small relative to the total number of hypotheses, and has comparable power to Fisher's product test otherwise.
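The RTP building block can be sketched with a Monte Carlo null — a simplification, since the paper derives the exact null distribution and optimizes over all truncation points:

```python
import numpy as np

def rtp_pvalue(pvals, k, B=2000, seed=7):
    """Rank truncated product test: statistic W = -log(product of the k
    smallest p-values); null p-value by Monte Carlo under independent
    Uniform(0,1) p-values."""
    rng = np.random.default_rng(seed)
    pvals = np.asarray(pvals)
    n = len(pvals)
    w_obs = -np.log(np.sort(pvals)[:k]).sum()
    sims = rng.uniform(size=(B, n))
    w_null = -np.log(np.sort(sims, axis=1)[:, :k]).sum(axis=1)
    return (1 + np.sum(w_null >= w_obs)) / (1 + B)

pvals = [1e-6, 1e-5, 0.4, 0.7, 0.2, 0.9, 0.55, 0.3, 0.8, 0.6]
p = rtp_pvalue(pvals, k=2)
```

Truncating at k protects the combined statistic from being diluted by the many null p-values, which is why RTP beats Fisher's product test when only a few hypotheses are truly false.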

14.
Taku Moriyama, Statistics, 2018, 52(5):1096–1115
We discuss smoothed rank statistics for testing the location-shift parameter in the two-sample problem. They are based on discrete test statistics – the median and Wilcoxon's rank sum tests. For the one-sample problem, Maesono et al. [Smoothed nonparametric tests and their properties. arXiv preprint. 2016; ArXiv:1610.02145] reported that some nonparametric discrete tests have a problem with their p-values because of their discreteness. The p-values of Wilcoxon's test are frequently smaller than those of the median test in the tail area. This leads to an arbitrary choice between the median and Wilcoxon's rank sum tests. To overcome this problem, we propose smoothed versions of those tests. The smoothed tests inherit the good properties of the original tests and are asymptotically equivalent to them. We study the significance probabilities and local asymptotic powers of the proposed tests.

15.

The problem of comparing several samples to decide whether the means and/or variances are significantly different is considered. It is shown that with very non-normal distributions even a very robust test to compare the means has poor properties when the distributions have different variances, and therefore a new testing scheme is proposed. This starts by using an exact randomization test for any significant difference (in means or variances) between the samples. If a non-significant result is obtained then testing stops. Otherwise, an approximate randomization test for mean differences (but allowing for variance differences) is carried out, together with a bootstrap procedure to assess whether this test is reliable. A randomization version of Levene's test is also carried out for differences in variation between samples. The five possible conclusions are then that (i) there is no evidence of any differences, (ii) evidence for mean differences only, (iii) evidence for variance differences only, (iv) evidence for mean and variance differences, or (v) evidence for some indeterminate differences. A simulation experiment to assess the properties of the proposed scheme is described. From this it is concluded that the scheme is useful as a robust, conservative method for comparing samples in cases where they may be from very non-normal distributions.

16.
This paper proposes a class of non‐parametric test procedures for testing the null hypothesis that two distributions, F and G, are equal versus the alternative hypothesis that F is ‘more NBU (new better than used) at specified age t0’ than G. Using Hoeffding's two‐sample U‐statistic theorem, it establishes the asymptotic normality of the test statistics and produces a class of asymptotically distribution‐free tests. Pitman asymptotic efficacies of the proposed tests are calculated with respect to the location and shape parameters. A numerical example is provided for illustrative purposes.

17.
Tests for the equality of variances are often needed in applications. In genetic studies the assumption of equal variances of continuous traits, measured in identical and fraternal twins, is crucial for heritability analysis. To test the equality of variances of traits that are non-normally distributed, Levene [H. Levene, Robust tests for equality of variances, in Contributions to Probability and Statistics, I. Olkin, ed., Stanford University Press, Palo Alto, California, 1960, pp. 278–292] suggested a method that was surprisingly robust under non-normality, and the procedure was further improved by Brown and Forsythe [M.B. Brown and A.B. Forsythe, Robust tests for the equality of variances, J. Amer. Statist. Assoc. 69 (1974), pp. 364–367]. These tests assumed independence of observations. However, twin data are clustered – observations within a twin pair may be dependent due to shared genes and environmental factors. Uncritical application of the Brown–Forsythe test to clustered data may result in much higher than nominal Type I error probabilities. To deal with clustering we developed an extended version of Levene's test, in which the ANOVA step is replaced by a regression analysis followed by a Wald-type test based on a clustered version of the robust Huber–White sandwich estimator of the covariance matrix. We studied the properties of our procedure using simulated non-normal clustered data and obtained Type I error rates close to nominal as well as reasonable power. We also applied our method to oral glucose tolerance test data obtained from a twin study of the metabolic syndrome and related components, and compared the results with those produced by the traditional approaches.

18.
For the multivariate linear model, Wilks' likelihood ratio test (LRT) constitutes one of the cornerstone tools. However, the computation of its quantiles under the null or the alternative hypothesis requires complex analytic approximations and, more importantly, these distributional approximations are feasible only for moderate dimension of the dependent variable, say p ≤ 20. On the other hand, assuming that the data dimension p as well as the number q of regression variables are fixed while the sample size n grows, several asymptotic approximations have been proposed in the literature for Wilks' Λ, including the widely used chi-square approximation. In this paper, we consider necessary modifications to Wilks' test in a high-dimensional context, specifically assuming a high data dimension p and a large sample size n. Based on recent random matrix theory, the correction we propose to Wilks' test is asymptotically Gaussian under the null hypothesis, and simulations demonstrate that the corrected LRT has very satisfactory size and power, certainly in the large-p, large-n context, but also for moderately large data dimensions such as p = 30 or p = 50. As a byproduct, we give a reason explaining why the standard chi-square approximation fails for high-dimensional data. We also introduce a new procedure for the classical multiple-sample significance test in multivariate analysis of variance which is valid for high-dimensional data.
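For contrast, the classical fixed-p machinery the paper corrects — Wilks' Λ for one-way MANOVA with Bartlett's chi-square approximation — can be sketched as follows; the RMT-based Gaussian correction itself is not reproduced here:

```python
import numpy as np
from scipy import stats

def wilks_lambda(X, labels):
    """Wilks' Lambda for one-way MANOVA and its classical chi-square
    (Bartlett) approximation, valid only for fixed, moderate dimension p."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    grand = X.mean(axis=0)
    E = np.zeros((p, p))                     # within-groups SSCP
    H = np.zeros((p, p))                     # between-groups SSCP
    for g in np.unique(labels):
        Xg = X[labels == g]
        mg = Xg.mean(axis=0)
        E += (Xg - mg).T @ (Xg - mg)
        H += len(Xg) * np.outer(mg - grand, mg - grand)
    k = len(np.unique(labels))
    lam = np.linalg.det(E) / np.linalg.det(E + H)
    # Bartlett's approximation: -(n - 1 - (p + k)/2) log(lam) ~ chi2_{p(k-1)}
    stat = -(n - 1 - (p + k) / 2.0) * np.log(lam)
    pval = stats.chi2.sf(stat, df=p * (k - 1))
    return lam, stat, pval

rng = np.random.default_rng(9)
X = rng.normal(size=(90, 4))
labels = np.repeat([0, 1, 2], 30)
lam, stat, pval = wilks_lambda(X, labels)
```

When p grows with n, log Λ no longer concentrates the way this χ2 approximation assumes, which is the failure mode the corrected, asymptotically Gaussian statistic addresses.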

19.
Econometric Reviews, 2013, 32(4):337–349
Abstract

This paper reconsiders the nonlinearity test proposed by Kočenda (Kočenda, E. (2001). An alternative to the BDS test: integration across the correlation integral. Econometric Reviews 20:337–351). When the analyzed series is non‐Gaussian, the empirical rejection rates can be much larger than the nominal size. In this context, the necessity of tabulating the empirical distribution of the statistic each time the test is computed is stressed. To that end, simple random permutation works reasonably well. This paper also shows, through Monte Carlo experiments, that Kočenda's test can be more powerful than the Brock et al. (Brock, W., Dechert, D., Scheinkman, J., LeBaron, B. (1996). A test for independence based on the correlation dimension. Econometric Reviews 15:197–235) procedure. However, more than one range of values for the proximity parameter should be used. Finally, empirical evidence on exchange rates is reassessed.

20.
The problem of testing whether two samples of possibly right-censored survival data come from the same distribution is considered. The aim is to develop a test which is capable of detection of a wide spectrum of alternatives. A new class of tests based on Neyman's embedding idea is proposed. The null hypothesis is tested against a model where the hazard ratio of the two survival distributions is expressed by several smooth functions. A data-driven approach to the selection of these functions is studied. Asymptotic properties of the proposed procedures are investigated under fixed and local alternatives. Small-sample performance is explored via simulations which show that the power of the proposed tests appears to be more robust than the power of some versatile tests previously proposed in the literature (such as combinations of weighted logrank tests, or Kolmogorov–Smirnov tests).


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号