Similar Documents
20 similar documents found.
1.
ABSTRACT

Area statistics are sample versions of areas occurring in a probability plot of two distribution functions F and G. This paper presents a unified basis for five statistics of this type. They can be used for various testing problems in the framework of the two-sample problem for independent observations, such as testing equality of distributions against inequality or testing stochastic dominance of distributions in one or either direction against nondominance. Though three of the statistics considered have already been suggested in the literature, two of them are new and deserve our interest. The finite sample distributions of the statistics (under F = G) can be calculated via recursion formulae. Two tables with critical values of the new statistics are included. The asymptotic distributions of the properly normalized versions of the area statistics are those of functionals of the Brownian bridge. The distribution functions and quantiles thereof are obtained by Monte Carlo simulation. Finally, the power functions of the two new tests based on area statistics are compared to the power functions of the tests based on the corresponding supremum statistics, i.e., statistics of the Kolmogorov–Smirnov type.

2.
The Fisher exact test has been unjustly dismissed by some as ‘only conditional,’ whereas it is unconditionally the uniformly most powerful test among all unbiased tests, i.e., tests of size α whose power never falls below the nominal significance level α. The problem with this truly optimal test is that it requires randomization at the critical value(s) to be of size α. Obviously, in practice, one does not want to conclude that ‘with probability x we have a statistically significant result.’ Usually, the hypothesis is rejected only if the test statistic's outcome is more extreme than the critical value, reducing the actual size considerably.

The randomized unconditional Fisher exact test is constructed (using Neyman-structure arguments) by deriving a conditional randomized test that randomizes at critical values c(t) with probabilities γ(t), both of which depend on the total number of successes T (the complete sufficient statistic for the nuisance parameter, the common success probability) that is conditioned upon.

In this paper, the Fisher exact test is approximated by deriving nonrandomized conditional tests whose critical region includes the critical value only if γ(t) > γ0, for a fixed threshold value γ0, such that the size of the unconditional modified test is, for all values of the nuisance parameter (the common success probability), smaller than but as close as possible to α. It will be seen that this greatly improves the size of the test as compared with the conservative nonrandomized Fisher exact test.

Size, power, and p-value comparisons with the (virtual) randomized Fisher exact test, the conservative nonrandomized Fisher exact test, Pearson's chi-square test, the more competitive mid-p value, McDonald's modification, and Boschloo's modification are performed under the assumption of two binomial samples.
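As background for the comparison above, the following is a minimal sketch (not the paper's randomized or size-corrected procedure) of the conditional one-sided Fisher exact p-value and the mid-p variant mentioned in the comparison; the function name and the example counts are illustrative assumptions only.

```python
from scipy.stats import hypergeom

def fisher_one_sided(x1, n1, n2, t):
    """Conditional one-sided p-values for a 2x2 table, given the total number of successes t.

    x1 : successes in sample 1, n1/n2 : the two sample sizes, t : total successes.
    Returns the conservative exact p-value and the mid-p value for H1: p1 > p2.
    """
    rv = hypergeom(n1 + n2, n1, t)        # null (hypergeometric) distribution of successes in sample 1
    p_exact = rv.sf(x1 - 1)               # P(X >= x1): nonrandomized Fisher exact p-value
    p_mid = rv.sf(x1) + 0.5 * rv.pmf(x1)  # mid-p: only half the probability of the observed table
    return p_exact, p_mid

# Illustrative counts: 8/10 successes in sample 1 versus 3/10 in sample 2 (t = 11)
print(fisher_one_sided(8, 10, 10, 11))
```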

3.
Various exact tests for statistical inference are available for powerful and accurate decision rules provided that corresponding critical values are tabulated or evaluated via Monte Carlo methods. This article introduces a novel hybrid method for computing p-values of exact tests by combining Monte Carlo simulations and statistical tables generated a priori. To use the data from Monte Carlo generations and tabulated critical values jointly, we employ kernel density estimation within Bayesian-type procedures. The p-values are linked to the posterior means of quantiles. In this framework, we present relevant information from the Monte Carlo experiments via likelihood-type functions, whereas tabulated critical values are used to reflect prior distributions. The local maximum likelihood technique is employed to compute functional forms of prior distributions from statistical tables. Empirical likelihood functions are proposed to replace parametric likelihood functions within the structure of the posterior mean calculations to provide a Bayesian-type procedure with a distribution-free set of assumptions. We derive the asymptotic properties of the proposed nonparametric posterior means of quantiles process. Using the theoretical propositions, we calculate the minimum number of Monte Carlo resamples needed for a desired level of accuracy on the basis of distances between actual data characteristics (e.g. sample sizes) and characteristics of the data used to present corresponding critical values in a table. The proposed approach makes practical applications of exact tests simple and rapid. Implementations of the proposed technique are easily carried out via the recently developed STATA and R statistical packages.

4.
This article introduces a class of statistical tests for the hypothesis that some feature that is present in each of several variables is common to them. Features are data properties such as serial correlation, trends, seasonality, heteroscedasticity, autoregressive conditional heteroscedasticity, and excess kurtosis. A feature is detected by a hypothesis test taking no feature as the null, and a common feature is detected by a test that finds linear combinations of variables with no feature. Often, the common-feature test is simply a test of overidentifying restrictions in an instrumental variable regression, for which an exact asymptotic critical value can be obtained. This article tests for a common international business cycle.

5.
Heterogeneity in lifetime data may be modelled by multiplying an individual's hazard by an unobserved frailty. We test for the presence of frailty of this kind in univariate and bivariate data with Weibull distributed lifetimes, using statistics based on the ordered Cox–Snell residuals from the null model of no frailty. The form of the statistics is suggested by outlier testing in the gamma distribution. We find through simulation that the sum of the k largest or k smallest order statistics, for suitably chosen k, provides a powerful test when the frailty distribution is assumed to be gamma or positive stable, respectively. We provide recommended values of k for sample sizes up to 100 and simple formulae for estimated critical values for tests at the 5% level.
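The paper's recommended values of k and its critical-value formulae are not reproduced in the abstract; the sketch below only illustrates the general idea for uncensored univariate data, with hypothetical choices of n, k, and Weibull parameters: fit the no-frailty Weibull null, form Cox–Snell residuals (the fitted cumulative hazards), sum the k largest order statistics, and simulate a null critical value.

```python
import numpy as np
from scipy.stats import weibull_min

def frailty_stat(times, k=5):
    """Sum of the k largest ordered Cox-Snell residuals under the no-frailty Weibull null."""
    shape, _, scale = weibull_min.fit(times, floc=0)    # fit Weibull with the origin fixed at 0
    r = np.sort((np.asarray(times) / scale) ** shape)   # Cox-Snell residuals = fitted cumulative hazard
    return r[-k:].sum()

# Null critical value by simulation (uncensored case; n, k, and Weibull parameters are hypothetical)
rng = np.random.default_rng(1)
n, k = 50, 5
null = [frailty_stat(weibull_min.rvs(1.5, scale=2.0, size=n, random_state=rng), k)
        for _ in range(1000)]
print(round(np.quantile(null, 0.95), 3))   # estimated 5%-level critical value
```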

6.
We define the Wishart distribution on the cone of positive definite matrices and an exponential distribution on the Lorentz cone as exponential dispersion models. We show that these two distributions possess a property of exact decomposition, and we use this property to solve the following problem: given q samples (yi1, …, yiNi), i = 1, …, q, from N(μi, σi) distributions, test H: σ1 = σ2 = … = σq. Using the exact decomposition property, the classical test statistic for H, involving q parameters pi = (Ni − 1)/2, i = 1, …, q, is replaced by a sequence of q − 1 test statistics for the sequence of tests Hi: σ1 = … = σi given that Hi−1 is true, i = 2, …, q. Each one of these test statistics involves two parameters only, p·i−1 = p1 + … + pi−1 and pi. We also use the exact decomposition property to test equality of the "direction parameters" for q sample points from the exponential distribution on the Lorentz cone. We give a table of critical values for the distribution on the three-dimensional Lorentz cone. Tables of critical values in higher dimensions can easily be computed following the same method as in dimension three.

7.
The article derives Bartlett corrections for improving the chi-square approximation to the likelihood ratio statistics in a class of symmetric nonlinear regression models. This is a wide class of models which encompasses the t model and several other symmetric distributions with longer-than-normal tails. In this paper we present, in matrix notation, Bartlett corrections to likelihood ratio statistics in nonlinear regression models with errors that follow a symmetric distribution. We generalize the results obtained by Ferrari, S. L. P. and Arellano-Valle, R. B. (1996). Modified likelihood ratio and score tests in linear regression models using the t distribution. Braz. J. Prob. Statist., 10, 15–33, who considered a t distribution for the errors, and by Ferrari, S. L. P. and Uribe-Opazo, M. A. (2001). Corrected likelihood ratio tests in a class of symmetric linear regression models. Braz. J. Prob. Statist., 15, 49–67, who considered a symmetric linear regression model. The formulae derived are simple enough to be used analytically to obtain several Bartlett corrections in a variety of important models. We also present simulation results comparing the sizes and powers of the usual likelihood ratio tests and their Bartlett corrected versions.

8.
One of the multisample problems is discussed in this article. New multisample rank tests based on a k-sample Baumgartner statistic are proposed for testing the location-scale parameters. The exact critical values of the proposed statistics are calculated. Simulations are used to investigate the power of the proposed tests for various population distributions.

9.
According to ISO 5725-2 (1994), measurement results obtained in an interlaboratory experiment are inspected for consistency by plotting Mandel’s h and k statistics and for outliers by application of the Grubbs test and the Cochran test. Critical values of these statistics for significance levels α=5% and α=1% and for some numbers p of laboratories and n of repeated measurements in the laboratories are supplied in ISO 5725-2 without reference to methods for their calculation. In this paper, exact formulae for the critical values of Mandel’s h and k and approximate formulae for the critical values of the Single Grubbs test, the Double Grubbs test and the Cochran test are derived.
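The exact critical-value formulae derived in the paper are not reproduced in the abstract; as a minimal sketch, the statistics themselves follow directly from the standard ISO 5725-2 definitions (per-laboratory averages and standard deviations scaled by the between- and within-laboratory dispersions). The simulated data below are illustrative only.

```python
import numpy as np

def mandel_h_k(y):
    """Mandel's h and k consistency statistics for a (p labs) x (n replicates) data matrix."""
    y = np.asarray(y, dtype=float)
    cell_means = y.mean(axis=1)                  # per-laboratory averages
    cell_sds = y.std(axis=1, ddof=1)             # per-laboratory standard deviations
    h = (cell_means - cell_means.mean()) / cell_means.std(ddof=1)   # between-laboratory consistency
    k = cell_sds / np.sqrt(np.mean(cell_sds ** 2))                  # within-laboratory consistency
    return h, k

# Example with simulated results from p = 5 laboratories, n = 3 replicates each (hypothetical data)
rng = np.random.default_rng(0)
h, k = mandel_h_k(rng.normal(10.0, 1.0, size=(5, 3)))
print(np.round(h, 2), np.round(k, 2))
```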

10.
Exact unconditional tests for comparing two binomial probabilities are generally more powerful than conditional tests like Fisher's exact test. Their power can be further increased by the Berger and Boos confidence interval method, where a p-value is found by restricting the common binomial probability under H0 to a 1 − γ confidence interval. We studied the average test power for the exact unconditional z-pooled test for a wide range of cases with balanced and unbalanced sample sizes, and significance levels 0.05 and 0.01. The detailed results are available online. Among the values 10^−3, 10^−4, …, 10^−10, the value γ = 10^−4 gave the highest power, or close to the highest power, in all the cases we looked at, and can be given as a general recommendation as an optimal γ.
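Below is a minimal sketch of the exact unconditional z-pooled p-value with the Berger and Boos refinement, assuming the one-sided alternative p1 > p2. The maximization over the restricted nuisance-parameter range is done by a simple grid search here, so this is an illustration rather than the authors' implementation, and the example counts are hypothetical.

```python
import numpy as np
from scipy.stats import binom, beta

def z_pooled(x1, n1, x2, n2):
    """Pooled-variance z statistic; returns 0 where the pooled variance is 0 (all failures or all successes)."""
    p1, p2, pp = x1 / n1, x2 / n2, (x1 + x2) / (n1 + n2)
    se = np.sqrt(pp * (1 - pp) * (1 / n1 + 1 / n2))
    return np.where(se > 0, (p1 - p2) / np.where(se > 0, se, 1.0), 0.0)

def berger_boos_p(x1, n1, x2, n2, gamma=1e-4, grid=2000):
    """Exact unconditional p-value for the one-sided z-pooled test with the Berger-Boos refinement."""
    z_obs = z_pooled(x1, n1, x2, n2)
    # All possible outcomes and the indicator of being at least as extreme as observed
    a = np.arange(n1 + 1)[:, None]
    b = np.arange(n2 + 1)[None, :]
    extreme = z_pooled(a, n1, b, n2) >= z_obs - 1e-12
    # 1 - gamma Clopper-Pearson interval for the common success probability under H0
    s, n = x1 + x2, n1 + n2
    lo = beta.ppf(gamma / 2, s, n - s + 1) if s > 0 else 0.0
    hi = beta.ppf(1 - gamma / 2, s + 1, n - s) if s < n else 1.0
    # Maximize the tail probability over the restricted nuisance-parameter range (grid search)
    ps = np.linspace(lo, hi, grid)
    tail = np.empty(grid)
    for j, p in enumerate(ps):
        prob = binom.pmf(a, n1, p) * binom.pmf(b, n2, p)   # joint pmf of the two samples
        tail[j] = prob[extreme].sum()
    return min(1.0, tail.max() + gamma)

# Hypothetical example: 9/12 successes versus 3/12
print(round(berger_boos_p(9, 12, 3, 12), 4))
```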

11.
In partly linear models, the dependence of the response y on (x^T, t) is modeled through the relationship y = x^T β + g(t) + ε, where ε is independent of (x^T, t). We are interested in developing an estimation procedure that allows us to combine the flexibility of partly linear models, studied by several authors, while including some variables that belong to a non-Euclidean space. The motivating application of this paper deals with the explanation of atmospheric SO2 pollution incidents using these models when some of the predictive variables lie on a cylinder. In this paper, the estimators of β and g are constructed when the explanatory variables t take values on a Riemannian manifold, and the asymptotic properties of the proposed estimators are obtained under suitable conditions. We illustrate the use of this estimation approach using an environmental data set and we explore the performance of the estimators through a simulation study.

12.
In randomized clinical trials, it is often necessary to demonstrate that a new medical treatment does not substantially differ from a standard reference treatment. Formal testing of such ‘equivalence hypotheses’ is typically done by combining two one-sided tests (TOST). A quite different strand of research has demonstrated that replacing nuisance parameters with a null estimate produces P-values that are close to exact (Lloyd 2008a) and that maximizing over the residual dependence on the nuisance parameter produces P-values that are exact and optimal within a class (Röhmel & Mansmann 1999; Lloyd 2008a). The three procedures – TOST, estimation and maximization of a nuisance parameter – can each be expressed as a transformation of an approximate P-value. In this paper, we point out that TOST-based P-values will generally be conservative, even if based on exact and optimal one-sided tests. This conservatism is avoided by applying the three transforms in a certain order – estimation followed by TOST followed by maximization. We compare this procedure with existing alternatives through a numerical study of binary matched pairs where the two treatments are compared by the difference of response rates. The resulting tests are uniformly more powerful than the considered competitors, although the difference in power can range from very small to moderate.
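The paper's estimate-TOST-maximize construction is not reproduced here; purely as an illustration of the TOST building block for binary matched pairs, the sketch below applies Wald-type two one-sided tests to the difference of response rates, with a hypothetical equivalence margin and hypothetical discordant-pair counts.

```python
import numpy as np
from scipy.stats import norm

def tost_paired_proportions(n10, n01, n, delta=0.10, alpha=0.05):
    """Wald-type TOST for equivalence of paired response rates.

    n10/n01 : discordant-pair counts, n : number of pairs, delta : equivalence margin.
    """
    d_hat = (n10 - n01) / n                                # estimated difference of response rates
    se = np.sqrt(n10 + n01 - (n10 - n01) ** 2 / n) / n     # Wald standard error for paired binary data
    p_lower = 1 - norm.cdf((d_hat + delta) / se)           # one-sided test of H0: difference <= -delta
    p_upper = norm.cdf((d_hat - delta) / se)               # one-sided test of H0: difference >= +delta
    p_tost = max(p_lower, p_upper)                         # TOST p-value (conservative, as the paper notes)
    return p_tost, p_tost < alpha

# Hypothetical trial: 200 pairs, 18 versus 12 discordant pairs, margin 0.10
print(tost_paired_proportions(18, 12, 200))
```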

13.
Abstract

The development of unit root tests continues unabated, with many recent contributions using techniques such as generalized least squares (GLS) detrending and recursive detrending to improve the power of the test. In this article, the relation between the seemingly disparate tests is demonstrated by algebraically nesting all of them as ratios of quadratic forms in normal variables. By doing so, and using the exact sampling distribution of the ratio, it is straightforward to compute, examine, and compare the tests' critical values and power functions. It is shown that use of GLS detrending parameters other than those recommended in the literature can lead to substantial power improvements. The open and important question regarding the nature of the first observation is addressed. Tests with high power are proposed irrespective of the distribution of the initial observation, which should be of great use in practical applications.

14.
Let F(x) be a life distribution. An exact test is given for testing H0: F is exponential, versus H1: F ∈ NBUE (NWUE), along with a table of critical values for n = 5(1)80 and n = 80(5)65. An asymptotic test is made available for large values of n, where the standard normal table can be used for testing.

15.
ABSTRACT

In the design of CUSUM control charts, it is common to use charts, tables, or software to find an appropriate critical threshold (h). This article provides an approximate formula to calculate the threshold directly from prespecified values of the reference value (k) and the in-control average run length (ARL0). Formulas are also provided for choosing k and h from prespecified values of the in-control and out-of-control average run lengths.
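The article's own approximate formula is not reproduced in the abstract; as an illustration of the same design task, the sketch below solves Siegmund's well-known ARL approximation for h given k and a one-sided in-control ARL (for a two-sided scheme the combined ARL0 is roughly half the one-sided value). The numerical example is hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

def cusum_threshold(k, arl0_one_sided):
    """Solve Siegmund's approximation ARL0 ~ (exp(2kb) - 2kb - 1) / (2k^2), b = h + 1.166, for h."""
    def arl(h):
        b = h + 1.166
        return (np.exp(2 * k * b) - 2 * k * b - 1) / (2 * k ** 2)
    return brentq(lambda h: arl(h) - arl0_one_sided, 1e-6, 50.0)

# Hypothetical design: k = 0.5 with a one-sided in-control ARL of 500 gives h of about 4.4
print(round(cusum_threshold(0.5, 500.0), 2))
```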

16.
We derive the exact finite sample distribution of the L1-version of the Fisz–Cramér–von Mises test statistic (FCvM1). We first characterize the set of all distinct sample p-p plots for two balanced samples of size n absent ties. Next, we order this set according to the corresponding value of FCvM1. Finally, we link these values to the probabilities that the underlying p-p plots emerge. Comparing the finite sample distribution with the (known) limiting distribution shows that the latter can always be used for hypothesis testing: although for finite samples the critical percentiles of the limiting distribution differ from the exact values, this will not lead to differences in the rejection of the underlying hypothesis.

17.
Under proper conditions, two independent tests of the null hypothesis of homogeneity of means are provided by a set of sample averages. One test, with tail probability P1, relates to the variation between the sample averages, while the other, with tail probability P2, relates to the concordance of the rankings of the sample averages with the anticipated rankings under an alternative hypothesis. The quantity G = P1P2 is considered as the combined test statistic and, except for the discreteness in the null distribution of P2, would correspond to the Fisher statistic for combining probabilities. Illustration is made, for the case of four means, of how to get critical values of G or critical values of P1 for each possible value of P2, taking discreteness into account. Alternative measures of concordance considered are Spearman's ρ and Kendall's τ. The concept results, in the case of two averages, in assigning two-thirds of the test size to the concordant tail and one-third to the discordant tail.
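For reference, the correspondence with Fisher's method noted above rests on a standard fact: if P1 and P2 were continuous, they would be independent U(0,1) variables under the null, and the product G = P1P2 would satisfy the relation below; the discreteness of P2 is precisely what the article's critical values account for.

```latex
\Pr(G \le g) = \Pr(P_1 P_2 \le g) = g\,(1 - \ln g), \qquad 0 < g \le 1,
\qquad\text{equivalently}\qquad -2\ln G = -2(\ln P_1 + \ln P_2) \sim \chi^2_{4}.
```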

18.
This paper proposes an approximation to the distribution of a goodness-of-fit statistic proposed recently by Balakrishnan et al. [Balakrishnan, N., Ng, H.K.T. and Kannan, N., 2002, A test of exponentiality based on spacings for progressively Type-II censored data. In: C. Huber-Carol et al. (Eds.), Goodness-of-Fit Tests and Model Validity (Boston: Birkhäuser), pp. 89–111.] for testing exponentiality based on progressively Type-II right censored data. The moments of this statistic can be easily calculated, but its distribution is not known in an explicit form. We first obtain the exact moments of the statistic using Basu's theorem; density approximants based on these exact moments, expressed in terms of Laguerre polynomials, are then proposed. A comparative study of the proposed approximation to the exact critical values, computed by Balakrishnan and Lin [Balakrishnan, N. and Lin, C.T., 2003, On the distribution of a test for exponentiality based on progressively Type-II right censored spacings. Journal of Statistical Computation and Simulation, 73 (4), 277–283.], is carried out. This reveals that the proposed approximation is very accurate.

19.
ABSTRACT

Bootstrap-based unit root tests are a viable alternative to asymptotic distribution-based procedures and, in some cases, are preferable because of the serious size distortions associated with the latter tests in certain situations. While several bootstrap-based unit root tests exist for autoregressive moving average processes with homoskedastic errors, only one such test is available when the innovations are conditionally heteroskedastic. The details for the exact implementation of this procedure are currently available only for first-order autoregressive processes. Monte Carlo results are also published only for this limited case. In this paper we demonstrate how this procedure can be extended to higher-order autoregressive processes through a transformed series used in augmented Dickey–Fuller unit root tests. We also investigate the finite sample properties for higher-order processes through a Monte Carlo study. Results show that the proposed tests have reasonable power and size properties.

20.
In this article, we point out some interesting relations between the exact test and the score test for a binomial proportion p. Based on the properties of the tests, we propose some approximate as well as exact methods of computing sample sizes required for the tests to attain a specified power. Sample sizes required for the tests are tabulated for various values of p to attain a power of 0.80 at level 0.05. We also propose approximate and exact methods of computing sample sizes needed to construct confidence intervals with a given precision. Using the proposed exact methods, sample sizes required to construct 95% confidence intervals with various precisions are tabulated for p = .05(.05).5. The approximate methods for computing sample sizes for score confidence intervals are very satisfactory and the results coincide with those of the exact methods for many cases.
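The abstract does not spell out the exact computation; as a rough, hypothetical illustration of the kind of calculation involved, the sketch below evaluates the exact power of a non-randomized one-sided exact binomial test and searches by brute force for the smallest n reaching 0.80 power at level 0.05, for assumed null and alternative values p0 and p1.

```python
import numpy as np
from scipy.stats import binom

def exact_power_one_sided(n, p0, p1, alpha=0.05):
    """Power of the non-randomized exact binomial test of H0: p <= p0 versus H1: p > p0."""
    c = np.arange(n + 2)
    tail = binom.sf(c - 1, n, p0)           # P(X >= c) under the null
    c_crit = c[np.argmax(tail <= alpha)]    # smallest critical value meeting the size constraint
    return binom.sf(c_crit - 1, n, p1)      # P(X >= c_crit) under the alternative

def min_n(p0, p1, power=0.80, alpha=0.05, n_max=1000):
    """Smallest n whose exact test attains the target power (brute-force search, illustrative only)."""
    for n in range(2, n_max):
        if exact_power_one_sided(n, p0, p1, alpha) >= power:
            return n
    return None

# Illustrative call with assumed null p0 = 0.05 and alternative p1 = 0.15
print(min_n(0.05, 0.15))
```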
