Similar Documents
20 similar documents were retrieved (search time: 31 ms)
1.
Abstract

This paper discusses Johansen’s likelihood ratio tests for determining the cointegration rank under local alternative hypotheses in vector autoregressive (VAR) models in which the drift or linear trend related to the hypotheses does not depend on the sample size, and evaluates the related asymptotic properties. We show that the test statistics diverge at the rate of the sample size even under one of the local alternative hypotheses, owing to the presence of such a deterministic term. This implies that, in our setting, the tests are far more powerful than those under the conventional local alternative hypotheses.

2.
ABSTRACT

In this paper, the testing problem for homogeneity in the mixture exponential family is considered. The model is irregular in the sense that each interest parameter forms a part of the null hypothesis (a sub-null hypothesis) and the null hypothesis is the union of the sub-null hypotheses. The generalized likelihood ratio test does not distinguish between the sub-null hypotheses. The Supplementary Score Test is proposed by combining two orthogonalized score tests obtained for the two sub-null hypotheses after proper reparameterization. The test is easy to design and, in numerical comparisons, performs better than the generalized likelihood ratio test and other alternative tests.

3.
Exact k-sample permutation tests for binary data for three commonly encountered hypothesis tests are presented. The tests are derived under both the population and randomization models. The generating function for the number of cases in the null distribution is obtained, and the asymptotic distributions of the test statistics are derived. Actual significance levels are computed for the asymptotic test versions. Random sampling of the null distribution is suggested as a superior alternative to the asymptotics, and an efficient computer technique for implementing the random sampling is described. Finally, some numerical examples are presented and sample size guidelines are given for computer implementation of the exact tests.
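A minimal sketch of the random-sampling idea for the two-sample binary case; the function name and details are illustrative assumptions, not taken from the paper:

```python
import random

def perm_test_binary(x, y, n_perm=10000, seed=0):
    """Two-sided Monte Carlo permutation p-value for equal success
    probabilities in two binary (0/1) samples (illustrative sketch)."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    n1, n2 = len(x), len(y)
    obs = sum(x) / n1 - sum(y) / n2          # observed difference in proportions
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                  # rearrange the combined sample
        d = sum(pooled[:n1]) / n1 - sum(pooled[n1:]) / n2
        if abs(d) >= abs(obs) - 1e-12:       # count as-or-more-extreme shuffles
            hits += 1
    return hits / n_perm
```

Random sampling of shuffles like this approximates the exact permutation p-value without enumerating all rearrangements.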

4.
In this paper, we propose several tests for simultaneously detecting differences in means and variances between two populations under normality. First, we propose a likelihood ratio test. We then express the likelihood ratio statistic as a product of two functions of random quantities, which can be used to test the two individual partial hypotheses of differences in means and in variances. With these individual partial tests, we propose a union-intersection test. We also consider two optimal tests obtained by combining the p-values of the two individual partial tests. To obtain null distributions, we apply the permutation principle with a Monte Carlo approach. We then compare the efficiency of the proposed tests with that of well-known ones through a simulation study. Finally, we discuss some interesting features of the simultaneous tests and resampling methods as concluding remarks.
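One generic way to combine the p-values of two independent partial tests is Fisher's method; this is an illustration of the combining idea only, since the paper's optimal combinations may differ:

```python
import math

def fisher_combine(p1, p2):
    """Combine two independent partial-test p-values with Fisher's method.
    Under H0, -2*(log p1 + log p2) is chi-square with 4 df, whose survival
    function has the closed form exp(-x/2) * (1 + x/2)."""
    stat = -2.0 * (math.log(p1) + math.log(p2))
    return math.exp(-stat / 2.0) * (1.0 + stat / 2.0)
```

For example, two partial p-values of 0.05 combine to roughly 0.017, smaller than either alone.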

5.
The parametric bootstrap tests and the asymptotic or approximate tests for detecting a difference between two Poisson means are compared. The test statistics used are the Wald statistics with and without log-transformation, the Cox F statistic and the likelihood ratio statistic. It is found that the type I error rate of an asymptotic/approximate test may deviate too much from the nominal significance level α in some situations. We recommend the parametric bootstrap tests, under which the four test statistics are similarly powerful and their type I error rates are all close to α. We apply the tests to breast cancer data and injurious motor vehicle crash data.
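A rough sketch of a parametric bootstrap test for two Poisson means, using the Wald statistic without log-transformation; the helper names and the stdlib-only Poisson sampler (Knuth's multiplication method) are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def sample_poisson(rng, lam):
    # Knuth's multiplication method; adequate for the small means used here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def wald_stat(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    se = math.sqrt(mx / len(x) + my / len(y))
    return 0.0 if se == 0 else (mx - my) / se

def poisson_boot_test(x, y, n_boot=2000, seed=1):
    """Parametric bootstrap p-value: resample both groups from the pooled
    mean (the estimate under H0) and compare |Wald| statistics."""
    rng = random.Random(seed)
    obs = wald_stat(x, y)
    lam0 = (sum(x) + sum(y)) / (len(x) + len(y))
    hits = 0
    for _ in range(n_boot):
        xb = [sample_poisson(rng, lam0) for _ in x]
        yb = [sample_poisson(rng, lam0) for _ in y]
        if abs(wald_stat(xb, yb)) >= abs(obs):
            hits += 1
    return hits / n_boot
```

The bootstrap calibrates the Wald statistic against its finite-sample null distribution instead of the normal approximation.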

6.
Two analysis-of-means-type randomization tests for testing the equality of I variances for unbalanced designs are presented. Randomization techniques for testing statistical hypotheses can be used when parametric tests are inappropriate. Suppose that I independent samples have been collected. Randomization tests are based on shuffles, or rearrangements, of the combined sample: putting each of the I samples ‘in a bowl’ forms the combined sample, and drawing samples ‘from the bowl’ forms a shuffle. Shuffles can be made with replacement (bootstrap shuffling) or without replacement (permutation shuffling). The tests presented offer two advantages: they are robust to non-normality, and they allow the user to present the results graphically via a decision chart similar to a Shewhart control chart. A Monte Carlo study verifies that the permutation versions of the tests exhibit excellent power compared with other robust tests. The Monte Carlo study also identifies circumstances under which the popular Levene's test fails.
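A minimal sketch of permutation shuffling ('drawing from the bowl' without replacement) for testing equal variances; the max/min variance-ratio statistic is a simplified stand-in, not the analysis-of-means statistic used in the paper:

```python
import random
import statistics

def var_ratio(samples):
    # max/min of the group variances as a simple heterogeneity measure
    vs = [statistics.pvariance(s) for s in samples]
    return max(vs) / max(min(vs), 1e-12)

def perm_var_test(samples, n_perm=3000, seed=0):
    """Permutation-shuffle p-value for equal variances across I groups."""
    rng = random.Random(seed)
    sizes = [len(s) for s in samples]
    pooled = [v for s in samples for v in s]   # put every sample 'in the bowl'
    obs = var_ratio(samples)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                    # draw without replacement
        groups, i = [], 0
        for n in sizes:
            groups.append(pooled[i:i + n])
            i += n
        if var_ratio(groups) >= obs:
            hits += 1
    return hits / n_perm
```

Bootstrap shuffling would differ only in drawing from the bowl with replacement.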

7.
In hypothesis testing, the robustness of the tests is an important concern. Generally, maximum likelihood-based tests are most efficient under standard regularity conditions, but they are highly non-robust even under small deviations from the assumed conditions. In this paper, we propose generalized Wald-type tests based on minimum density power divergence estimators for parametric hypotheses. This method avoids nonparametric density estimation and bandwidth selection. The trade-off between efficiency and robustness is controlled by a tuning parameter β. The asymptotic distributions of the test statistics are chi-square with appropriate degrees of freedom. The performance of the proposed tests is explored through simulations and real data analysis.

8.
Trend tests in dose-response have been central problems in medicine. The likelihood ratio test is often used to test hypotheses involving a stochastic order. Stratified contingency tables are common in practice, but the distribution theory of the likelihood ratio test has not been fully developed for stratified tables and more than two stochastically ordered distributions. For c strata of m × r tables, this article introduces a model-free method for testing conditional independence against the simple stochastic order alternative and gives the asymptotic distribution of the test statistic, which is a chi-bar-squared distribution. A real data set concerning an ordered stratified table is used to show the validity of this test method.

9.
The asymptotic distributions of two tests for sphericity, the locally most powerful invariant test and the likelihood ratio test, are derived under the general alternatives Σ ≠ σ²I. The powers of these two tests are then compared when the data are from a trivariate normal population. The bootstrap method is also used to obtain the powers, and the powers obtained by this method agree with those from the asymptotic distributions.

10.
We introduce a family of Rényi statistics of orders r ∈ ℝ for testing composite hypotheses in general exponential models, as alternatives to the previously considered generalized likelihood ratio (GLR) statistic and generalized Wald statistic. If appropriately normalized exponential models converge in a specific sense when the sample size (observation window) tends to infinity, and if the hypothesis is regular, then these statistics are shown to be χ²-distributed under the hypothesis. The corresponding Rényi tests are shown to be consistent. The exact sizes and powers of asymptotically α-size Rényi, GLR and generalized Wald tests are evaluated for a concrete hypothesis about a bivariate Lévy process and moderate observation windows. In this concrete situation the exact sizes of the Rényi test of order r = 2 practically coincide with those of the GLR and generalized Wald tests, but the exact powers of the Rényi test are on average somewhat better.

11.
In the last few years, two adaptive tests for paired data have been proposed. One test, proposed by Freidlin et al. [On the use of the Shapiro–Wilk test in two-stage adaptive inference for paired data from moderate to very heavy tailed distributions, Biom. J. 45 (2003), pp. 887–900], is a two-stage procedure that uses a selection statistic to determine which of three rank scores to use in the computation of the test statistic. Another statistic, proposed by O'Gorman [Applied Adaptive Statistical Methods: Tests of Significance and Confidence Intervals, Society for Industrial and Applied Mathematics, Philadelphia, 2004], uses a weighted t-test with the weights determined by the data. These two methods, and an earlier rank-based adaptive test proposed by Randles and Hogg [Adaptive Distribution-free Tests, Commun. Stat. 2 (1973), pp. 337–356], are compared with the t-test and with Wilcoxon's signed-rank test. For sample sizes between 15 and 50, the results show that the adaptive tests proposed by Freidlin et al. and by O'Gorman have higher power than the other tests over a range of moderate to long-tailed symmetric distributions. The results also show that the test proposed by O'Gorman has greater power than the other tests for short-tailed distributions. For sample sizes greater than 50, and for small sample sizes, the adaptive test proposed by O'Gorman has the highest power for most distributions.

12.
ABSTRACT

This article examines the evidence contained in t statistics that are marginally significant in 5% tests. The bases for evaluating evidence are likelihood ratios and integrated likelihood ratios, computed under a variety of assumptions regarding the alternative hypotheses in null hypothesis significance tests. Likelihood ratios and integrated likelihood ratios provide a useful measure of the evidence in favor of competing hypotheses because they can be interpreted as representing the ratio of the probabilities that each hypothesis assigns to observed data. When they are either very large or very small, they suggest that one hypothesis is much better than the other in predicting observed data. If they are close to 1.0, then both hypotheses provide approximately equally valid explanations for observed data. I find that p-values that are close to 0.05 (i.e., that are “marginally significant”) correspond to integrated likelihood ratios that are bounded by approximately 7 in two-sided tests, and by approximately 4 in one-sided tests.

The modest magnitude of integrated likelihood ratios corresponding to p-values close to 0.05 clearly suggests that higher standards of evidence are needed to support claims of novel discoveries and new effects.
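The order of magnitude of these bounds can be reproduced with a simpler, related quantity: the maximized (not integrated) likelihood ratio for a normal test statistic z, which equals exp(z²/2). This is an illustrative back-of-the-envelope calculation, not the article's integrated-likelihood computation:

```python
import math

def max_likelihood_ratio(z):
    """Maximized likelihood ratio of H1 over H0 for a normal statistic z:
    the alternative mean that best fits the data is z itself, which gives
    exp(z**2 / 2)."""
    return math.exp(z * z / 2.0)

# z-values for marginally significant results at p = 0.05
print(round(max_likelihood_ratio(1.96), 2))    # two-sided: about 6.8
print(round(max_likelihood_ratio(1.645), 2))   # one-sided: about 3.9
```

These maximized ratios (about 6.8 and 3.9) sit close to the integrated-likelihood bounds of roughly 7 and 4 reported above, which illustrates how modest the evidence at p ≈ 0.05 is.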

13.
Pearson’s chi-square (Pe), likelihood ratio (LR), and Fisher (Fi)–Freeman–Halton test statistics are commonly used to test the association of an unordered r×c contingency table. Asymptotically, these test statistics follow a chi-square distribution. For small sample cases, the asymptotic chi-square approximations are unreliable, so the exact p-value is frequently computed conditional on the row and column sums. One drawback of the exact p-value is that it is conservative. Different adjustments have been suggested, such as Lancaster’s mid-p version and randomized tests. In this paper, we consider 3×2, 2×3, and 3×3 tables and compare the exact power and significance level of these tests’ standard, mid-p, and randomized versions. The mid-p and randomized versions have approximately the same power, which is higher than that of the standard versions. The mid-p type I error probability seldom exceeds the nominal level. For a given set of parameters, the powers of Pe, LR, and Fi differ in approximately the same way across the standard, mid-p, and randomized versions. Although there is no general ranking of these tests, in some situations, especially when averaged over the parameter space, Pe and Fi have the same power, slightly higher than that of LR. When the sample sizes (i.e., the row sums) are equal, the differences are small; otherwise, the observed differences can be 10% or more. In some cases, perhaps characterized by poorly balanced designs, LR has the highest power.
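The mid-p adjustment itself is easy to illustrate in a simpler setting than the r×c tables studied in the paper, namely a one-sided exact binomial test: the mid-p value counts only half the probability of the observed outcome, which reduces the conservatism of the exact p-value.

```python
from math import comb

def exact_and_midp(k, n, p0=0.5):
    """One-sided exact and mid-p values for observing k successes in n
    Bernoulli(p0) trials (illustrative binomial setting, not the paper's
    contingency-table tests)."""
    pmf = [comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    exact = sum(pmf[k:])            # P(X >= k)
    midp = exact - 0.5 * pmf[k]     # P(X > k) + 0.5 * P(X = k)
    return exact, midp
```

For 8 successes in 10 fair trials the exact p-value is 56/1024 ≈ 0.055, while the mid-p value is 33.5/1024 ≈ 0.033.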

14.
The authors consider hidden Markov models (HMMs) whose latent process has m ≥ 2 states and whose state‐dependent distributions arise from a general one‐parameter family. They propose a test of the hypothesis m = 2. Their procedure is an extension to HMMs of the modified likelihood ratio statistic proposed by Chen, Chen & Kalbfleisch (2004) for testing two states in a finite mixture. The authors determine the asymptotic distribution of their test under the hypothesis m = 2 and investigate its finite‐sample properties in a simulation study. Their test is based on inference for the marginal mixture distribution of the HMM. In order to illustrate the additional difficulties due to the dependence structure of the HMM, they show how to test general regular hypotheses on the marginal mixture of HMMs via a quasi‐modified likelihood ratio. They also discuss two applications.

15.
Lehmann & Stein (1948) proved the existence of non-similar tests which can be more powerful than best similar tests. They used Student's problem of testing for a non-zero mean given a random sample from the normal distribution with unknown variance as an example. This raises the question: should we use a non-similar test instead of Student's t test? Questions like this can be answered by comparing the power of the test with the power envelope. This paper discusses the difficulties involved in computing power envelopes. It reports an empirical comparison of the power of the t test and the power envelope and finds that the two are almost identical, especially for sample sizes greater than 20. These findings suggest that, as well as being uniformly most powerful (UMP) within the class of similar tests, Student's t test is approximately UMP within the class of all tests. For practical purposes it might also be regarded as UMP when moderate or large sample sizes are involved.

16.
Lachin [1981] and Lachin and Foulkes [1986] consider two groups of independent, identically exponentially distributed random variables and four models of data sampling. The test problem they treat is to decide whether the two distributions are identical (null hypothesis H0) or not (alternative hypothesis H1). Basing the test on maximum-likelihood estimators and their asymptotic normal densities, they obtain formulae for the group sizes necessary to yield asymptotic tests with guaranteed power under a prescribed level for specified hypotheses. It is intuitively reasonable to expect the sizes to decrease the more the hypotheses differ. If the distance between H0 and H1 is measured by the difference of the exponential parameters, this assumption holds true; if the deviation of the exponential parameter ratio from unity is the measure, larger distances between the hypotheses do not necessarily lead to smaller sample sizes.

17.
A robust procedure is developed for testing the equality of means in the two-sample normal model. It is based on the weighted likelihood estimators of Basu et al. (1993). When the normal model is true, the proposed tests have the same asymptotic power as the two-sample Student's t-test in the equal variance case. However, when the normality assumptions are only approximately true, the proposed tests can be substantially more powerful than the classical tests. In a Monte Carlo study for the equal variance case under various outlier models, the proposed test using the Hellinger distance based weighted likelihood estimator compared favorably with the classical test as well as the robust test proposed by Tiku (1980).

18.
For location–scale families, we consider a random distance between the sample order statistics and the quasi sample order statistics derived from the null distribution as a measure of discrepancy. The conditional qth quantile and expectation of the random discrepancy given the sample are chosen as test statistics. Simulation results for powers against various alternatives are illustrated under the normal and exponential hypotheses for moderate sample sizes. The proposed tests, especially the qth quantile tests with a small or large q, are shown to be more powerful than other prominent goodness-of-fit tests in most cases.
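A rough sketch of the discrepancy idea under the normal hypothesis: compare standardized sample order statistics with the null quantiles at plotting positions i/(n+1). The maximum-distance statistic and the plotting positions are illustrative assumptions, not the paper's exact construction:

```python
import statistics

def order_stat_discrepancy(sample):
    """Max distance between standardized sample order statistics and the
    standard normal quantiles at plotting positions i/(n+1)."""
    n = len(sample)
    xs = sorted(sample)
    m, s = statistics.fmean(xs), statistics.stdev(xs)
    z = [(x - m) / s for x in xs]              # standardized order statistics
    nd = statistics.NormalDist()
    q = [nd.inv_cdf(i / (n + 1)) for i in range(1, n + 1)]
    return max(abs(a - b) for a, b in zip(z, q))
```

Samples far from normal shape produce larger discrepancies than roughly symmetric, bell-shaped samples.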

19.
ABSTRACT

A frequently encountered statistical problem is to determine whether the variability among k populations is heterogeneous. If the populations are measured using different scales, comparing variances may not be appropriate; in this case, coefficients of variation (CVs) can be compared because the CV is unitless. In this paper, a non-parametric test is introduced to test whether the CVs of k populations differ. Under the assumption that the populations are independent and normally distributed, the Miller test, the Feltz and Miller test, a saddlepoint-based test, the log likelihood ratio test and the proposed simulated Bartlett-corrected log likelihood ratio test are derived. Simulation results show that the simulated Bartlett-corrected log likelihood ratio test is extremely accurate if the model is correctly specified. If the model is mis-specified and the sample size is small, the proposed test still gives good results. However, with a mis-specified model and a large sample size, the non-parametric test is recommended.
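For intuition, a heuristic permutation test for equal CVs of two samples can be sketched as follows; this is not the Miller, Feltz–Miller, saddlepoint, or likelihood ratio test derived in the paper, and the mean-scaling step is an illustrative assumption:

```python
import random
import statistics

def cv(s):
    return statistics.stdev(s) / statistics.fmean(s)

def perm_cv_test(x, y, n_perm=2000, seed=0):
    """Heuristic permutation p-value for equal CVs: divide each sample by
    its own mean so the pooled values are comparable in scale, then
    shuffle group labels and compare |CV difference| statistics."""
    rng = random.Random(seed)
    sx = [v / statistics.fmean(x) for v in x]
    sy = [v / statistics.fmean(y) for v in y]
    obs = abs(cv(sx) - cv(sy))
    pooled, n1 = sx + sy, len(sx)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(cv(pooled[:n1]) - cv(pooled[n1:])) >= obs - 1e-12:
            hits += 1
    return hits / n_perm
```

Because the CV is unitless, the mean-scaled pooled values can be shuffled even when the original samples are on different scales.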

20.
The minimum disparity estimators proposed by Lindsay (1994) for discrete models form an attractive subclass of minimum distance estimators which achieve their robustness without sacrificing first-order efficiency at the model. Similarly, disparity test statistics are useful robust alternatives to the likelihood ratio test for testing hypotheses in parametric models; they are asymptotically equivalent to the likelihood ratio test statistics under the null hypothesis and contiguous alternatives. Despite their asymptotic optimality properties, the small sample performance of many of the minimum disparity estimators and disparity tests can be considerably worse than that of the maximum likelihood estimator and the likelihood ratio test, respectively. In this paper we focus on the class of blended weight Hellinger distances, a general subfamily of disparities, study the effects of combining two different distances within this class to generate the family of “combined” blended weight Hellinger distances, and identify the members of this family which generally perform well. More generally, we investigate the class of “combined and penalized” blended weight Hellinger distances; the penalty is based on reweighting the empty cells, following Harris and Basu (1994). It is shown that some members of the combined and penalized family have rather attractive properties.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)