Similar articles
20 similar articles found.
1.
ABSTRACT

A simple test based on Gini's mean difference is proposed for testing the hypothesis of equality of population variances. Using 2000 replicated samples and empirical distributions, we show that the test compares favourably with Bartlett's and Levene's tests for the normal population. It is also more powerful than Bartlett's and Levene's tests under some alternative hypotheses for some non-normal distributions, and more robust than the other two tests for large sample sizes under some alternative hypotheses. We also give an approximate distribution for the test statistic to enable one to calculate the nominal levels and P-values.
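As a minimal illustration (the abstract does not give the exact form of the proposed statistic, so only the building block is shown), Gini's mean difference for a single sample can be computed from the order statistics in O(n log n):

```python
import numpy as np

def gini_mean_difference(x):
    # Mean absolute difference over all pairs, 2/(n(n-1)) * sum_{i<j} |x_i - x_j|,
    # computed via order statistics instead of an O(n^2) double loop.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return 2.0 * np.sum((2 * i - n - 1) * x) / (n * (n - 1))

# Samples with equal spread give similar mean differences regardless of
# location, which is what makes the quantity usable as a scale measure.
rng = np.random.default_rng(0)
a, b = rng.normal(0, 1, 200), rng.normal(5, 1, 200)
print(gini_mean_difference(a), gini_mean_difference(b))
```

How the per-sample mean differences are combined into the actual test statistic is specified in the paper, not here.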

2.
ABSTRACT

This paper discusses the problem of testing the complete independence of random variables when the dimension of observations can be much larger than the sample size. It is reported that two typical tests based on, respectively, the biggest off-diagonal entry and the largest eigenvalue of the sample correlation matrix lose their control of type I error in such high-dimensional scenarios, and exhibit distinct behaviours in type II error under different types of alternative hypothesis. Given these facts, we propose a permutation test procedure by synthesizing these two extreme statistics. Simulation results show that for finite dimension and sample size the proposed test outperforms the existing methods in various cases.
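A sketch of the idea under stated assumptions: the abstract names the two extreme statistics but not the exact synthesis rule, so the Bonferroni-style combination of the two permutation p-values below is an assumption, not the paper's procedure.

```python
import numpy as np

def extreme_stats(X):
    # The two extreme statistics of the sample correlation matrix:
    # largest absolute off-diagonal entry and largest eigenvalue.
    R = np.corrcoef(X, rowvar=False)
    p = R.shape[0]
    t_max = np.abs(R - np.eye(p)).max()
    lam_max = np.linalg.eigvalsh(R)[-1]   # eigvalsh returns ascending order
    return t_max, lam_max

def independence_perm_test(X, n_perm=200, seed=0):
    # Permute each column independently to emulate complete independence,
    # then combine the two permutation p-values (Bonferroni-style here).
    rng = np.random.default_rng(seed)
    t0, l0 = extreme_stats(X)
    ge_t = ge_l = 0
    for _ in range(n_perm):
        Xp = np.column_stack([rng.permutation(c) for c in X.T])
        t, l = extreme_stats(Xp)
        ge_t += t >= t0
        ge_l += l >= l0
    p_t = (1 + ge_t) / (n_perm + 1)
    p_l = (1 + ge_l) / (n_perm + 1)
    return min(1.0, 2 * min(p_t, p_l))
```

Permuting each column independently preserves the marginals while destroying all cross-dependence, which is exactly the complete-independence null.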

3.
Uniformly most powerful Bayesian tests (UMPBTs) are a new class of Bayesian tests in which null hypotheses are rejected if their Bayes factor exceeds a specified threshold. The alternative hypotheses in UMPBTs are defined to maximize the probability that the null hypothesis is rejected. Here, we generalize the notion of UMPBTs by restricting the class of alternative hypotheses over which this maximization is performed, resulting in restricted most powerful Bayesian tests (RMPBTs). We then derive RMPBTs for linear models by restricting alternative hypotheses to g priors. For linear models, the rejection regions of RMPBTs coincide with those of usual frequentist F‐tests, provided that the evidence thresholds for the RMPBTs are appropriately matched to the size of the classical tests. This correspondence supplies default Bayes factors for many common tests of linear hypotheses. We illustrate the use of RMPBTs for ANOVA tests and t‐tests and compare their performance in numerical studies.  相似文献   

4.
Exact k-sample permutation tests for binary data for three commonly encountered hypothesis tests are presented. The tests are derived under both the population and randomization models. The generating function for the number of cases in the null distribution is obtained, and the asymptotic distributions of the test statistics are derived. Actual significance levels are computed for the asymptotic test versions. Random sampling of the null distribution is suggested as a superior alternative to the asymptotics, and an efficient computer technique for implementing the random sampling is described. Finally, some numerical examples are presented and sample size guidelines are given for computer implementation of the exact tests.
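The random-sampling approach can be sketched for one such hypothesis, equality of k success probabilities; the chi-square-type statistic below is an illustrative choice and not necessarily the one used in the paper.

```python
import numpy as np

def ksample_binary_perm_test(groups, n_perm=2000, seed=0):
    # Monte Carlo sampling of the exact permutation null distribution,
    # used in place of an asymptotic approximation.
    rng = np.random.default_rng(seed)
    sizes = np.array([len(g) for g in groups])
    pooled = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    p_hat = pooled.mean()  # invariant under permutation of the pooled data

    def stat(vec):
        parts = np.split(vec, np.cumsum(sizes)[:-1])
        return sum(len(g) * (g.mean() - p_hat) ** 2 for g in parts)

    t0 = stat(pooled)
    count = sum(stat(rng.permutation(pooled)) >= t0 for _ in range(n_perm))
    return (1 + count) / (n_perm + 1)
```

Because the data are binary, each permuted arrangement is determined by how many successes land in each group, which is what makes the exact generating-function analysis in the paper tractable.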

5.
ABSTRACT

This article examines the evidence contained in t statistics that are marginally significant in 5% tests. The bases for evaluating evidence are likelihood ratios and integrated likelihood ratios, computed under a variety of assumptions regarding the alternative hypotheses in null hypothesis significance tests. Likelihood ratios and integrated likelihood ratios provide a useful measure of the evidence in favor of competing hypotheses because they can be interpreted as representing the ratio of the probabilities that each hypothesis assigns to observed data. When they are either very large or very small, they suggest that one hypothesis is much better than the other in predicting observed data. If they are close to 1.0, then both hypotheses provide approximately equally valid explanations for observed data. I find that p-values that are close to 0.05 (i.e., that are “marginally significant”) correspond to integrated likelihood ratios that are bounded by approximately 7 in two-sided tests, and by approximately 4 in one-sided tests.

The modest magnitude of integrated likelihood ratios corresponding to p-values close to 0.05 clearly suggests that higher standards of evidence are needed to support claims of novel discoveries and new effects.
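The two-sided bound of about 7 quoted above can be reproduced under a large-sample normal approximation to the t statistic: the likelihood ratio maximized over point alternatives at z = 1.96 is exp(z²/2) ≈ 6.83.

```python
import math

def normal_pdf(z, mu=0.0):
    # Standard-deviation-1 normal density centered at mu.
    return math.exp(-0.5 * (z - mu) ** 2) / math.sqrt(2.0 * math.pi)

z = 1.96  # two-sided p ~ 0.05 under the large-sample normal approximation
# The likelihood ratio against the null mu = 0 is maximized by the
# point alternative mu = z:
lr_max = normal_pdf(z, mu=z) / normal_pdf(z, mu=0.0)
print(round(lr_max, 2))  # about 6.83, consistent with the bound of ~7
```

Integrating over a prior on the alternative (rather than maximizing) can only shrink this ratio, which is why the integrated likelihood ratios in the article are bounded by roughly the same value.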

6.

We developed an alternative estimator for the probability proportional to size with replacement sampling scheme for situations in which certain characteristics under study have low correlation with the size measure used for sample selection. The performance of the proposed estimator is compared with that of other related alternative estimators in terms of their biases and variances. Most of the alternative estimators assume knowledge of the product moment correlation coefficient; therefore, an empirical study covering a wide variety of populations has been carried out to assess their respective efficiencies when the correlation coefficient departs from its true value.

7.
ABSTRACT

Bootstrap-based unit root tests are a viable alternative to asymptotic distribution-based procedures and, in some cases, are preferable because of the serious size distortions associated with the latter tests under certain situations. While several bootstrap-based unit root tests exist for autoregressive moving average processes with homoskedastic errors, only one such test is available when the innovations are conditionally heteroskedastic. The details for the exact implementation of this procedure are currently available only for the first order autoregressive processes. Monte-Carlo results are also published only for this limited case. In this paper we demonstrate how this procedure can be extended to higher order autoregressive processes through a transformed series used in augmented Dickey–Fuller unit root tests. We also investigate the finite sample properties for higher order processes through a Monte-Carlo study. Results show that the proposed tests have reasonable power and size properties.

8.
ABSTRACT

This article argues that researchers do not need to completely abandon the p-value, the best-known significance index, but should instead stop using significance levels that do not depend on sample sizes. A testing procedure is developed using a mixture of frequentist and Bayesian tools, with a significance level that is a function of sample size, obtained from a generalized form of the Neyman–Pearson Lemma that minimizes a linear combination of α, the probability of rejecting a true null hypothesis, and β, the probability of failing to reject a false null, instead of fixing α and minimizing β. The resulting hypothesis tests do not violate the Likelihood Principle and do not require any constraints on the dimensionalities of the sample space and parameter space. The procedure includes an ordering of the entire sample space and uses predictive probability (density) functions, allowing for testing of both simple and compound hypotheses. Accessible examples are presented to highlight specific characteristics of the new tests.

9.
ABSTRACT

In this paper, the testing problem for homogeneity in the mixture exponential family is considered. The model is irregular in the sense that each interest parameter forms a part of the null hypothesis (sub-null hypothesis) and the null hypothesis is the union of the sub-null hypotheses. The generalized likelihood ratio test does not distinguish between the sub-null hypotheses. The Supplementary Score Test is proposed by combining two orthogonalized score tests obtained corresponding to the two sub-null hypotheses after proper reparameterization. The test is easy to design and performs better than the generalized likelihood ratio test and other alternative tests by numerical comparisons.

10.
This paper is concerned with the asymptotic distribution of Hotelling's T2-statistic under the elliptical distribution, both for the null hypothesis and for the local alternative. Asymptotic expansions for the distribution of T2 in the null case and under the local alternative are given up to the order N^{-1}, where N is the sample size. The percentiles of T2 and the approximate powers are calculated to evaluate the effect of the elliptical distribution for some numerical examples. Also, to evaluate the effect of a Bartlett-type adjustment to Hotelling's T2 for the local alternative, the approximate power of the adjusted T2 is calculated and compared with that of the unadjusted T2.
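For reference, the classical statistic itself can be sketched as follows; the paper's Bartlett-type adjustment and the elliptical-distribution expansions are not reproduced here.

```python
import numpy as np

def hotelling_t2(X, mu0):
    # T^2 = N (xbar - mu0)' S^{-1} (xbar - mu0), with S the unbiased
    # sample covariance matrix of the N observations (rows of X).
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    d = X.mean(axis=0) - np.asarray(mu0, dtype=float)
    S = np.cov(X, rowvar=False)
    return float(n * d @ np.linalg.solve(np.atleast_2d(S), d))
```

In the univariate case this reduces to the squared one-sample t statistic, which is a convenient sanity check on any implementation.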

11.
For location–scale families, we consider a random distance between the sample order statistics and the quasi sample order statistics derived from the null distribution as a measure of discrepancy. The conditional qth quantile and expectation of the random discrepancy on the given sample are chosen as test statistics. Simulation results of powers against various alternatives are illustrated under the normal and exponential hypotheses for moderate sample size. The proposed tests, especially the qth quantile tests with a small or large q, are shown to be more powerful than other prominent goodness-of-fit tests in most cases.

12.
Goodness of Fit via Non-parametric Likelihood Ratios
Abstract.  To test if a density f is equal to a specified f 0, one knows by the Neyman–Pearson lemma the form of the optimal test at a specified alternative f 1. Any non-parametric density estimation scheme allows an estimate of f . This leads to estimated likelihood ratios. Properties are studied of tests which for the density estimation ingredient use log-linear expansions. Such expansions are either coupled with subset selectors like the Akaike information criterion and the Bayesian information criterion regimes, or use order growing with sample size. Our tests are generalized to testing the adequacy of general parametric models, and to work also in higher dimensions. The tests are related to, but are different from, the 'smooth tests' that go back to Neyman [Skandinavisk Aktuarietidsskrift 20 (1937) 149] and that have been studied extensively in recent literature. Our tests are large-sample equivalent to such smooth tests under local alternative conditions, but different from the smooth tests and often better under non-local conditions.

13.
Abstract

Two families of test statistics are proposed for testing the null hypothesis of exponentiality against Harmonic New Better than Used in Expectation (HNBUE) alternatives. Asymptotic distributions of the test statistics are derived under the null and alternative hypotheses, and the consistency of the tests is established. Comparisons with competing tests are made in terms of Pitman Asymptotic Relative Efficiency (PARE). Simulation studies have been carried out to assess the performance of the tests. Finally, the tests have been applied to three real-life data sets described in Proschan, in Susarla and Van Ryzin, and in Engelhardt, Bain and Wright.

14.

We present correction formulae to improve likelihood ratio and score tests for testing simple and composite hypotheses on the parameters of the beta distribution. As a special case of our results, we obtain improved tests for the hypothesis that a sample is drawn from a uniform distribution on (0, 1). We present some Monte Carlo investigations showing that both corrected tests perform better than the classical likelihood ratio and score tests, at least for small sample sizes.

15.

This paper examines robust techniques for estimation and tests of hypotheses using the family of generalized Kullback-Leibler (GKL) divergences. The GKL family is a new group of density-based divergences forming a subclass of the disparities defined by Lindsay (1994). We show that the corresponding minimum divergence estimators have a breakdown point of 50% under the model. The performance of the proposed estimators and tests is investigated through an extensive numerical study involving real-data examples and simulation results. The results show that the proposed methods are attractive choices when highly efficient and robust methods are needed.

16.
ABSTRACT

In this paper, the Vasicek [A test for normality based on sample entropy. J R Stat Soc Ser B. 1976;38:54–59] entropy estimator is modified using the paired ranked set sampling (PRSS) method. Two goodness-of-fit tests using PRSS are also suggested for the inverse Gaussian and Laplace distributions. The new suggested entropy estimator and goodness-of-fit tests using PRSS are compared with their counterparts using simple random sampling (SRS) via Monte Carlo simulations. The critical values of the suggested tests are obtained, and the powers of the tests are calculated under several alternative hypotheses using SRS and PRSS. It turns out that the proposed PRSS entropy estimator is more efficient than its SRS counterpart in terms of root mean square error. Also, the proposed PRSS goodness-of-fit tests have higher power than their SRS counterparts for all alternatives considered in this study.
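Vasicek's spacing-based estimator under simple random sampling can be sketched as below; the PRSS modification changes the sampling design rather than this core formula, and the default window choice m ≈ √n is an assumption for illustration.

```python
import numpy as np

def vasicek_entropy(x, m=None):
    # Vasicek (1976) entropy estimate: the average log of scaled
    # m-spacings of the order statistics, with boundary indices clamped.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))  # window width (assumed default)
    idx = np.arange(n)
    upper = x[np.minimum(idx + m, n - 1)]
    lower = x[np.maximum(idx - m, 0)]
    return float(np.mean(np.log(n * (upper - lower) / (2.0 * m))))
```

A useful exact property for testing implementations: rescaling the data by c > 0 shifts the estimate by exactly log c, mirroring the behaviour of differential entropy.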

17.
The class of symmetric linear regression models has the normal linear regression model as a special case and includes several models that assume that the errors follow a symmetric distribution with longer-than-normal tails. An important member of this class is the t linear regression model, which is commonly used as an alternative to the usual normal regression model when the data contain extreme or outlying observations. In this article, we develop second-order asymptotic theory for score tests in this class of models. We obtain Bartlett-corrected score statistics for testing hypotheses on the regression and the dispersion parameters. The corrected statistics have chi-squared distributions with errors of order O(n^{-3/2}), n being the sample size. The corrections represent an improvement over the corresponding original Rao's score statistics, which are chi-squared distributed up to errors of order O(n^{-1}). Simulation results show that the corrected score tests perform much better than their uncorrected counterparts in samples of small or moderate size.

18.
A challenge arising in cancer immunotherapy trial design is the presence of a delayed treatment effect wherein the proportional hazard assumption no longer holds true. As a result, a traditional survival trial design based on the standard log‐rank test, which ignores the delayed treatment effect, will lead to substantial loss of statistical power. Recently, a piecewise weighted log‐rank test is proposed to incorporate the delayed treatment effect into consideration of the trial design. However, because the sample size formula was derived under a sequence of local alternative hypotheses, it results in an underestimated sample size when the hazard ratio is relatively small for a balanced trial design and an inaccurate sample size estimation for an unbalanced design. In this article, we derived a new sample size formula under a fixed alternative hypothesis for the delayed treatment effect model. Simulation results show that the new formula provides accurate sample size estimation for both balanced and unbalanced designs.

19.
For a multivariate linear model, Wilks' likelihood ratio test (LRT) constitutes one of the cornerstone tools. However, the computation of its quantiles under the null or the alternative hypothesis requires complex analytic approximations, and more importantly, these distributional approximations are feasible only for moderate dimension of the dependent variable, say p≤20. On the other hand, assuming that the data dimension p as well as the number q of regression variables are fixed while the sample size n grows, several asymptotic approximations are proposed in the literature for Wilks' Λ, including the widely used chi-square approximation. In this paper, we consider necessary modifications to Wilks' test in a high-dimensional context, specifically assuming a high data dimension p and a large sample size n. Based on recent random matrix theory, the correction we propose to Wilks' test is asymptotically Gaussian under the null hypothesis, and simulations demonstrate that the corrected LRT has very satisfactory size and power, certainly in the large-p, large-n context, but also for moderately large data dimensions such as p=30 or p=50. As a byproduct, we give a reason explaining why the standard chi-square approximation fails for high-dimensional data. We also introduce a new procedure for the classical multiple sample significance test in multivariate analysis of variance which is valid for high-dimensional data.
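For context, the classical statistic whose approximation breaks down for large p can be sketched as below; the random-matrix-theory correction itself is not reproduced here.

```python
import numpy as np

def wilks_lambda(E, H):
    # Wilks' Lambda = det(E) / det(E + H), where E and H are the error and
    # hypothesis sums-of-squares-and-cross-products (SSCP) matrices.
    E = np.asarray(E, dtype=float)
    H = np.asarray(H, dtype=float)
    return float(np.linalg.det(E) / np.linalg.det(E + H))
```

The standard chi-square approximation applies a Bartlett-type scaling to -log Λ with p held fixed as n grows; it is this fixed-p assumption that fails in the high-dimensional regime and motivates the corrected, asymptotically Gaussian statistic.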

20.
This paper investigates a new family of goodness-of-fit tests based on the negative exponential disparities. This family includes the popular Pearson's chi-square as a member and is a subclass of the general class of disparity tests (Basu and Sarkar, 1994), which also contains the family of power divergence statistics. Pitman efficiency and finite sample power comparisons between different members of this new family are made. Three asymptotic approximations of the exact null distributions of the negative exponential disparity family of tests are discussed. Some numerical results on the small sample performance of this family of tests are presented for the symmetric null hypothesis. It is shown that the negative exponential disparity family, like the power divergence family, produces goodness-of-fit test statistics that can be very attractive alternatives to Pearson's chi-square. Some numerical results suggest that application of such a test statistic, as an alternative to Pearson's chi-square, could be preferable to the I^{2/3} statistic of Cressie and Read (1984) under the use of chi-square critical values.
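The power divergence family referenced above, of which Pearson's chi-square is the λ = 1 member and the Cressie–Read I^{2/3} statistic the λ = 2/3 member, can be sketched as follows; the negative exponential disparity statistics studied in the paper are not reproduced here.

```python
import numpy as np

def power_divergence(obs, exp_, lam):
    # Cressie-Read power divergence statistic
    #   2 / (lam * (lam + 1)) * sum_i O_i * ((O_i / E_i)**lam - 1);
    # lam = 1 recovers Pearson's chi-square, lam = 2/3 the I^{2/3} statistic.
    obs = np.asarray(obs, dtype=float)
    exp_ = np.asarray(exp_, dtype=float)
    return float(2.0 / (lam * (lam + 1.0)) * np.sum(obs * ((obs / exp_) ** lam - 1.0)))
```

All members of the family share the same chi-square limiting null distribution, which is why swapping λ (or moving to a different disparity) changes finite-sample behaviour rather than the asymptotic reference distribution.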


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号