Similar Literature
20 similar documents found (search time: 562 ms).
1.
In this paper, two new statistics based on comparing the theoretical and empirical distribution functions are proposed for testing exponentiality. Critical values are determined by Monte Carlo simulation for various sample sizes and significance levels. In an extensive simulation study, 50 selected exponentiality tests are compared over a wide collection of alternative distributions. The empirical power study leads to two conclusions: first, one of our proposals is preferable for IFR (increasing failure rate) and UFR (unimodal failure rate) alternatives, whereas the other is preferable for DFR (decreasing failure rate) and BFR (bathtub failure rate) alternatives; second, the new tests are serious and powerful competitors to existing proposals, since they perform as well as, or better than, the best tests in the statistical literature.
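The Monte Carlo calibration step described in this abstract can be illustrated with a minimal sketch. The statistic below is a generic Kolmogorov-Smirnov-type EDF distance with an ML-fitted exponential, not the paper's proposed statistics, and the function names are assumptions for illustration:

```python
import numpy as np

def ks_exp_stat(x):
    """Kolmogorov-Smirnov distance between the empirical CDF and an
    exponential CDF fitted by maximum likelihood (rate = 1/mean)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    f = 1.0 - np.exp(-x / x.mean())            # fitted exponential CDF
    d_plus = np.max(np.arange(1, n + 1) / n - f)
    d_minus = np.max(f - np.arange(0, n) / n)
    return max(d_plus, d_minus)

def mc_critical_value(n, alpha=0.05, reps=2000, seed=0):
    """Simulate the null distribution of the statistic (it is scale-free,
    so unit-rate exponential samples suffice) and return the
    1 - alpha quantile as the critical value for sample size n."""
    rng = np.random.default_rng(seed)
    stats = [ks_exp_stat(rng.exponential(size=n)) for _ in range(reps)]
    return float(np.quantile(stats, 1.0 - alpha))
```

Because the statistic is invariant to the scale of the data, a single table of simulated critical values per sample size suffices, which is the design choice the abstract alludes to.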

2.
Multinomial goodness-of-fit tests arise in a diversity of settings. The long history of the problem has spawned a multitude of asymptotic tests. When the sample size is small relative to the number of categories, the accuracy of these tests is compromised, and an exact test is the prudent option. But exact tests are computationally intensive and need efficient algorithms. This paper gives a conceptual overview and empirical comparison of two approaches, the network and fast Fourier transform (FFT) algorithms, for an exact goodness-of-fit test on a multinomial. We show that a recursive execution of a polynomial product forms the basis of both approaches. Specific details for implementing the network method, and techniques to enhance the efficiency of the FFT algorithm, are given. Our empirical comparisons show that for exact analysis with the chi-square and likelihood ratio statistics, the network-cum-polynomial-multiplication algorithm is the more efficient and accurate of the two.
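The "recursive polynomial product" idea can be sketched directly: process one category at a time, convolving a distribution over (trials consumed, statistic value) pairs. This is an illustrative dynamic program in that spirit, not the paper's optimized network or FFT implementation, and the function name is an assumption:

```python
from math import factorial
from collections import defaultdict

def exact_chisq_pvalue(n, probs, observed):
    """Exact p-value of Pearson's chi-square for a multinomial(n, probs),
    built category by category: each step convolves in one factor of the
    'polynomial' whose terms are p**c / c! weights tagged with the
    statistic contribution (c - n*p)**2 / (n*p)."""
    states = {(0, 0.0): 1.0}            # (trials used, chi-square so far) -> weight
    for p in probs:
        new = defaultdict(float)
        for (used, stat), w in states.items():
            for c in range(n - used + 1):
                contrib = (c - n * p) ** 2 / (n * p)
                new[(used + c, round(stat + contrib, 10))] += w * p ** c / factorial(c)
        states = new
    # keep complete allocations; restore the multinomial n! factor
    dist = {s: w * factorial(n) for (used, s), w in states.items() if used == n}
    obs_stat = sum((o - n * p) ** 2 / (n * p) for o, p in zip(observed, probs))
    return sum(w for s, w in dist.items() if s >= obs_stat - 1e-9)
```

The state space grows with n and the number of distinct statistic values, which is exactly the cost the network pruning and FFT convolution techniques in the paper are designed to tame.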

3.
This paper investigates how classical measurement error and additive outliers (AO) influence tests for structural change based on F-statistics. We derive theoretically the impact of general additive disturbances in the regressors on the asymptotic distribution of these tests. Monte Carlo simulations of the small-sample properties under classical measurement error and AO reveal that sizes are biased upwards and powers are reduced. Two wavelet-based denoising methods are used to reduce these distortions, and we show that both can significantly improve the performance of structural break tests.

4.
It is shown that if a binary regression function is increasing, then retrospective sampling induces a stochastic ordering of the covariate distributions between the responders, which we call cases, and the non-responders, which we call controls. We also show that if the covariate distributions are stochastically ordered, then the regression function must be increasing. Thus testing whether the regression function is monotone is equivalent to testing whether the covariate distributions are stochastically ordered. Capitalizing on these probabilistic observations, we develop two new non-parametric tests for stochastic order, based on either the maximally selected or the integrated chi-bar statistic of order one. The tests are easy to compute and interpret, and their large-sample distributions are easily found. Numerical comparisons show that they compare favorably with existing methods in both small and large samples. We emphasize that the new tests are applicable to any testing problem involving two stochastically ordered distributions.

5.
New tests are proposed for the Pareto distribution as well as its discrete version, the so-called Zipf law. In both cases the discrepancy between the empirical moment of arbitrary negative order and its theoretical counterpart is utilized in a weighted integral test statistic. If the weight function decays at an exponential rate, interesting limit statistics are obtained. The tests are shown to be consistent against fixed alternatives, and a Monte Carlo study is conducted to investigate the performance of the proposed procedures in small samples. Furthermore, a bootstrap procedure is proposed to cope with the case of an unknown shape parameter. We conclude with applications to real data.

6.
We consider the efficient outcome of a canonical economic market model involving buyers and sellers with independent and identically distributed random valuations and costs, respectively. When the number of buyers and sellers is large, we show that the joint distribution of the equilibrium quantity traded and welfare is asymptotically normal. Moreover, we bound the approximation rate. The proof proceeds by constructing, on a common probability space, a representation consisting of two independent empirical quantile processes, which in large markets can be approximated by independent Brownian bridges. The distribution of interest can then be approximated by that of a functional of a Gaussian process. This methodology applies to a variety of mechanism design problems.

7.
ABSTRACT

For two-way layouts in a between-subjects analysis of variance design, the parametric F-test is compared with seven nonparametric methods: the rank transform (RT), inverse normal transform (INT), aligned rank transform (ART), a combination of ART and INT, Puri and Sen's L statistic, the van der Waerden test, and Akritas and Brunner's ANOVA-type statistic (ATS). Type I error rates and power are computed for 16 normal and nonnormal distributions, with and without homogeneity of variances, for balanced and unbalanced designs, and for several models including the null and the full model. The aim of this study is to identify a method that is applicable without extensive preliminary testing of the attributes of the design. The van der Waerden test shows the best overall performance, though there are some situations in which it disappoints. The Puri and Sen and ATS tests generally show very low power, and these two, like the other methods, fail to keep the type I error rate under control in too many situations. Especially in the case of lognormal distributions, the use of any of the rank-based procedures can be dangerous for cell sizes above 10. As many other authors have shown, nonnormal distributions do not invalidate the parametric F-test, but unequal variances do, and heterogeneity of variances inflates the error rate, more or less, for the nonparametric methods as well. Finally, some procedures show rising error rates with increasing cell sizes: the ART, especially for discrete variables, and the RT, Puri and Sen, and ATS in cases of heteroscedasticity.
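The two simplest transformations compared in this study, RT and INT, can be sketched in a few lines; the transformed values are then fed into the ordinary parametric F-test. This sketch assumes NumPy and Python's stdlib `NormalDist`, and uses the common `(r - 0.5)/n` normal-score convention (the study may use a different one):

```python
import numpy as np
from statistics import NormalDist

def rank_transform(y):
    """RT: replace observations by their ranks, averaging over ties."""
    y = np.asarray(y, dtype=float)
    order = np.argsort(y, kind="stable")
    ranks = np.empty(len(y))
    ranks[order] = np.arange(1, len(y) + 1)
    for v in np.unique(y):                  # average ranks for tied values
        m = y == v
        ranks[m] = ranks[m].mean()
    return ranks

def inverse_normal_transform(y):
    """INT: map ranks to normal scores via Phi^{-1}((r - 0.5) / n)."""
    r = rank_transform(y)
    n = len(r)
    nd = NormalDist()
    return np.array([nd.inv_cdf((ri - 0.5) / n) for ri in r])
```

The ART method differs in that each observation is first "aligned" (residualized with respect to the effects not under test) before ranking, which is why its behavior under discretization diverges from plain RT.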

8.
For the problem of testing the equality of slopes of several regression lines against the alternative that the slopes are in increasing (decreasing) order of magnitude, two types of tests are proposed: the likelihood ratio test and a test based on a suitable linear combination of single-group statistics. Rank analogues of both tests are also examined.

9.
Score statistics utilizing historical control data have been proposed to test for an increasing trend in tumour occurrence rates in laboratory carcinogenicity studies. Novel invariance arguments are used to confirm, under slightly weaker conditions, previously established asymptotic distributions (mixtures of normal distributions) of tests unconditional on the tumour response rate in the concurrent control group. Conditioning on the control response rate, an ancillary statistic, leads to a new conditional limit theorem in which the test statistic converges to an unknown random variable; because of this, a subasymptotic approximation to the conditional limiting distribution is also considered. The adequacy of these large-sample approximations in finite samples is evaluated by computer simulation, and bootstrap methods for use in finite samples are also proposed. The application of the conditional and unconditional tests is illustrated using bioassay data taken from the literature. The results are used to formulate recommendations for the use of trend tests with historical controls in practice.

10.
This article describes testing for periodicity in the presence of fractionally differenced (FD) processes. We propose two approaches based on Fisher's test: the first uses the periodogram, divided into different parts, and the second is based on the discrete wavelet transform. Properties of the tests are illustrated by means of Monte Carlo simulations.
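The classical Fisher test that both proposals build on is compact enough to sketch: the statistic g is the largest periodogram ordinate divided by the sum of the ordinates, and under Gaussian white noise its tail probability has a closed form. This is the baseline test only, not the article's modifications for FD processes:

```python
import numpy as np
from math import comb

def fisher_g_test(x):
    """Fisher's test for a hidden periodicity. Returns (g, p-value),
    where g = max periodogram ordinate / sum of ordinates, and the
    p-value uses the exact white-noise formula
    P(g > x) = sum_j (-1)^(j-1) C(m, j) (1 - j x)^(m - 1)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # periodogram at Fourier frequencies j/n, j = 1 .. floor((n-1)/2)
    fft = np.fft.fft(x - x.mean())
    per = (np.abs(fft[1:(n - 1) // 2 + 1]) ** 2) / n
    m = len(per)
    g = per.max() / per.sum()
    p = sum((-1) ** (j - 1) * comb(m, j) * (1 - j * g) ** (m - 1)
            for j in range(1, int(1 / g) + 1) if j * g < 1)
    return g, min(max(p, 0.0), 1.0)
```

The article's first approach splits the periodogram into parts before applying this idea, so that the long-memory spectral shape of an FD process does not masquerade as a periodic component near frequency zero.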

11.
For two-way layouts in a between-subjects ANOVA design, the aligned rank transform (ART) is compared with the parametric F-test as well as six other nonparametric methods: the rank transform (RT), inverse normal transform (INT), a combination of ART and INT, Puri and Sen's L statistic, the van der Waerden test, and Akritas and Brunner's ATS. Type I error rates are computed for the uniform and exponential distributions, both continuous and in several discretized variations, for balanced and unbalanced designs as well as for several effect models. The aim of this study is to analyze the impact of discrete distributions on the error rate, and it is shown that this impact is restricted to the ART and the combined ART-INT method. There are two effects: first, with increasing cell counts their error rates rise beyond any acceptable limit, up to 20 percent and more; second, their rates rise as the number of distinct values of the dependent variable decreases. This behavior is more severe for underlying exponential distributions than for uniform ones. The recommendation is therefore not to apply the ART when the mean cell frequencies exceed 10.

12.
We consider the problem of testing for additivity and joint effects in multivariate nonparametric regression when the data are modelled as observations of an unknown response function observed on a d-dimensional (d ≥ 2) lattice and contaminated with additive Gaussian noise. We propose tests for additivity and joint effects, appropriate for both homogeneous and inhomogeneous response functions, using the particular structure of the data expanded in tensor product Fourier or wavelet bases studied recently by Amato and Antoniadis (2001) and Amato, Antoniadis and De Feis (2002). The corresponding tests are constructed by applying the adaptive Neyman truncation and wavelet thresholding procedures of Fan (1996), for testing a high-dimensional Gaussian mean, to the resulting empirical Fourier and wavelet coefficients. As a consequence, asymptotic normality of the proposed test statistics under the null hypothesis and lower bounds of the corresponding powers under a specific alternative are derived. We use several simulated examples to illustrate the performance of the proposed tests, and we make comparisons with other tests available in the literature.

13.
The power-generalized Weibull distribution is often used in survival analysis, mainly because different values of its parameters allow for various shapes of the hazard rate: monotone increasing/decreasing, ∩-shaped, ∪-shaped, or constant. Modified chi-squared tests based on maximum likelihood estimators of the parameters, which are shown to be √n-consistent, are proposed. The power of these tests against exponentiated Weibull, three-parameter Weibull, and generalized Weibull alternatives is studied using Monte Carlo simulations. It is proposed to use the left-tailed rejection region, because the tests are biased with respect to the above alternatives when the right-tailed rejection region is used. It is also shown that the power of the McCulloch test investigated can be two or three times that of the Nikulin–Rao–Robson test against the alternatives considered when expected cell frequencies are about 5.

14.
The purpose of this paper is to review recently proposed exact tests based on the Baumgartner-Weiß-Schindler statistic and its modification. Except for the generalized Behrens-Fisher problem, these tests are broadly applicable: they can be used to compare two groups irrespective of whether ties occur, and a nonparametric trend test and a trend test for binomial proportions are also possible. These exact tests are preferable to commonly applied tests, such as the Wilcoxon rank sum test, in terms of both type I error rate and power.

15.
We propose a class of goodness-of-fit tests for the gamma distribution that utilizes the empirical Laplace transform. The consistency of the tests as well as their asymptotic distribution under the null hypothesis are investigated. As the decay of the weight function tends to infinity, the test statistics approach limit values related to the first nonzero component of Neyman's smooth test for the gamma law. The new tests are compared with other omnibus tests for the gamma distribution.
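A statistic of the general form this abstract describes, a weighted L2 distance between the empirical Laplace transform and the fitted gamma Laplace transform, can be sketched with simple quadrature. The exact weight, grid, and normalization here are assumptions, and in practice the shape and rate would be estimated and critical values obtained by simulation or bootstrap:

```python
import numpy as np

def elt_gamma_stat(x, shape, rate, a=1.0, grid=400, tmax=20.0):
    """Weighted L2 statistic  n * integral of
    (psi_n(t) - psi(t))^2 * exp(-a t) dt  over [0, tmax],
    where psi_n(t) = mean(exp(-t X_i)) is the empirical Laplace
    transform and psi(t) = (rate / (rate + t))^shape is the
    Laplace transform of Gamma(shape, rate); simple Riemann sum."""
    x = np.asarray(x, dtype=float)
    t = np.linspace(0.0, tmax, grid)
    emp = np.exp(-np.outer(t, x)).mean(axis=1)     # psi_n on the grid
    theo = (rate / (rate + t)) ** shape            # gamma Laplace transform
    dt = t[1] - t[0]
    return float(len(x) * np.sum((emp - theo) ** 2 * np.exp(-a * t)) * dt)
```

The weight parameter `a` plays the role of the decay rate in the abstract: as it grows, the integral concentrates near t = 0, where only the lowest-order moment discrepancies survive.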

16.
Developments since 1960 in goodness-of-fit tests for the one- and two-parameter exponential models, using both complete and censored samples, are reviewed. Special attention is given both to the omnibus or general alternative and to specialized alternatives such as the class of distributions with increasing failure rates. The use of transformations in developing tests is also discussed.

17.
ABSTRACT

A simple and efficient goodness-of-fit test for exponentiality is developed by exploiting a characterization of the exponential distribution via the probability integral transformation. We adopt the empirical likelihood methodology in constructing the test statistic, which has a chi-square limiting distribution. For small to moderate sample sizes, Monte Carlo simulations reveal that the proposed tests are markedly superior under increasing failure rate (IFR) and bathtub (decreasing-increasing) failure rate (BFR) alternatives. Real data examples are used to demonstrate the robustness and applicability of the proposed tests in practice.

18.
A common approach to analysing clinical trials with multiple outcomes is to control the probability, for the trial as a whole, of making at least one incorrect positive finding under any configuration of true and false null hypotheses. Popular approaches are Bonferroni corrections or structured approaches such as closed-test procedures. As is well known, such strategies, which control the family-wise error rate, typically reduce the type I error for some or all tests of the various null hypotheses to below the nominal level, and in consequence there is generally a loss of power for individual tests. What is less well appreciated, perhaps, is that depending on the approach and circumstances, the test-wise loss of power does not necessarily lead to a family-wise loss of power. In fact, it may be possible to increase the overall power of a trial by carrying out tests on multiple outcomes without increasing the probability of making at least one type I error when all null hypotheses are true. We examine two types of problem to illustrate this. Unstructured testing problems arise typically (but not exclusively) when many outcomes are being measured; we consider the case of more than two hypotheses under a Bonferroni approach, assuming for illustration compound symmetry for the correlation of all variables. Using the device of a latent variable, it is easy to show that power is not reduced as the number of variables tested increases, provided that the common correlation coefficient is not too high (say, less than 0.75). We then consider structured testing problems, where multiplicity arises from the comparison of more than two treatments rather than from more than one measurement. We conduct a numerical study and conclude again that power is not reduced as the number of tested variables increases.
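The latent-variable device for the compound-symmetry case lends itself to a short Monte Carlo sketch: each standardized test statistic is built from a shared factor plus an independent one, and "power" is the probability of at least one Bonferroni rejection. This illustrates the claim, not the paper's exact numerical study; the function name and default settings are assumptions:

```python
import numpy as np
from statistics import NormalDist

def disjunctive_power(k, rho, delta, alpha=0.05, reps=20000, seed=1):
    """Monte Carlo probability of at least one rejection when k
    equicorrelated standardized test statistics (compound symmetry,
    correlation rho, built via a shared latent factor) each have mean
    delta and are compared to the Bonferroni cutoff z_{1 - alpha/k}."""
    rng = np.random.default_rng(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / k)
    common = rng.standard_normal((reps, 1))          # shared latent factor
    unique = rng.standard_normal((reps, k))          # test-specific parts
    stats = delta + np.sqrt(rho) * common + np.sqrt(1 - rho) * unique
    return float(np.mean((stats > z_crit).any(axis=1)))
```

For a moderate common correlation such as rho = 0.3, the disjunctive power at k = 5 exceeds the single-test power despite the stricter per-test cutoff, which is the point the abstract makes.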

19.
ABSTRACT

Motivated by an example in marine science, we use Fisher's method to combine independent likelihood ratio tests (LRTs), and asymptotically independent score tests, to assess the equivalence of two zero-inflated Beta populations (mixture distributions with three parameters). For each test, statistics for the three individual parameters are combined into a single statistic addressing the overall difference between the two populations. We also develop nonparametric and semiparametric permutation-based tests for simultaneously comparing two or three features of unknown populations. Simulations show that the likelihood-based tests perform well for large sample sizes, and that the statistics based on combining LRT statistics outperform those based on combining score test statistics. The permutation-based tests have better overall performance in terms of both power and type I error rate. Our methods are easy to implement and computationally efficient, and can be expanded to more than two populations and to other multiple-parameter families. The permutation tests are entirely generic and can be useful in various applications dealing with zero (or other) inflation.

20.
Permutation tests based on medians are examined for pairwise comparisons of scale. Tests found in the literature to be effective for comparing the scale of two groups are extended to all pairwise comparisons, using the Tukey-type adjustment of Richter and McCann [Multiple comparison of medians using permutation tests. J Mod Appl Stat Methods. 2007;6(2):399–412] to guarantee strong type I error rate control. Power and type I error rate estimates are computed using simulated data. A method based on the ratio of deviances performed best and appears to be the best overall test.
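The ratio-of-deviances idea can be illustrated for the two-group case. Here "deviance" is taken to be the mean absolute deviation from the sample median, a plausible median-based scale measure, but that choice, the function names, and the omission of the Tukey-type multiplicity adjustment are all simplifications of the paper's procedure:

```python
import numpy as np

def mad_from_median(x):
    """Mean absolute deviation from the sample median: a simple
    median-based scale measure standing in for the paper's deviance."""
    x = np.asarray(x, dtype=float)
    return float(np.abs(x - np.median(x)).mean())

def perm_scale_test(x, y, reps=2000, seed=0):
    """Two-sample permutation test for a scale difference. The statistic
    is the larger-over-smaller ratio of the two groups' deviations; the
    p-value is the (add-one) fraction of label permutations at least as
    extreme as the observed ratio."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([np.asarray(x, float), np.asarray(y, float)])
    n = len(x)

    def ratio(a, b):
        da, db = mad_from_median(a), mad_from_median(b)
        return max(da, db) / min(da, db)

    obs = ratio(x, y)
    exceed = sum(
        ratio(perm[:n], perm[n:]) >= obs
        for perm in (rng.permutation(pooled) for _ in range(reps))
    )
    return (exceed + 1) / (reps + 1)
```

Extending this to all pairwise comparisons, as the paper does, requires running the test for each pair and applying the Tukey-type maximum-statistic adjustment to keep the family-wise error rate controlled.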


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号