1.
In this article, we propose a unified sequentially rejective test procedure for testing simultaneously the equality of several independent binomial proportions to a specified standard. The proposed test procedure is general enough to include some well-known multiple testing procedures such as the ordinary Bonferroni, Hochberg, and Rom procedures. It involves multiple tests of significance based on simple binomial tests (exact or approximate) that can be found in most elementary statistics textbooks. Unlike the traditional chi-square test of the overall hypothesis, the procedure can identify the subset of binomial proportions that differ from the prespecified standard while controlling the familywise Type I error rate. Moreover, the power computation of the procedure is provided, and the procedure is illustrated by two real examples from an ecological study and a carcinogenicity study.
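For orientation, here is a minimal Python sketch of one such sequentially rejective scheme: exact binomial tests of each proportion against the standard p0, combined by Hochberg's step-up rule. It illustrates the general idea only, not the authors' exact procedure, and the counts in the example are hypothetical.

```python
# Hedged sketch: exact binomial tests of H_i: p_i = p0, combined with
# Hochberg's step-up multiple testing rule (familywise error control).
import numpy as np
from scipy.stats import binomtest

def hochberg_binomial(successes, trials, p0, alpha=0.05):
    """Indices of proportions declared different from the standard p0."""
    pvals = np.array([binomtest(x, n, p0).pvalue for x, n in zip(successes, trials)])
    order = np.argsort(pvals)                  # ascending p-values
    m = len(pvals)
    reject = np.zeros(m, dtype=bool)
    # find the largest i with p_(i) <= alpha / (m - i + 1); reject H_(1), ..., H_(i)
    for i in range(m, 0, -1):
        if pvals[order[i - 1]] <= alpha / (m - i + 1):
            reject[order[:i]] = True
            break
    return np.flatnonzero(reject)

# Hypothetical example: four groups compared with a 10% standard
print(hochberg_binomial([3, 9, 15, 2], [50, 50, 50, 50], p0=0.10))
```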
2.
When estimating a proportion p by group testing, and it is desired to increase precision, it is sometimes impractical to obtain additional individuals but it is possible to retest groups formed from the individuals within the groups that test positive at the first stage. Hepworth and Watson assessed four methods of retesting, and recommended a random regrouping of individuals from the first stage. They developed an estimator of p for their proposed method, and, because of the analytic complexity, used simulation to examine its variance properties. We now provide an analytical solution to the variance of the estimator, and compare its performance with the earlier simulated results. We show that our solution gives an acceptable approximation in a reasonable range of circumstances.
3.
Gregory Campbell 《Journal of Statistical Planning and Inference》1984,10(3):311-321
Let (ψi, φi) be independent, identically distributed pairs of zero-one random variables with (possible) dependence of ψi and φi within the pair. For n pairs, both variables are observed, but for m1 additional pairs only ψi is observed and for m2 others φi is observed. If π1· = P{ψi = 1} and π·1 = P{φi = 1}, the problem is to test π1· = π·1. Maximum likelihood estimates of π1· and π·1 are obtained via the EM algorithm. A test statistic is developed whose null distribution is asymptotically chi-square with one degree of freedom (as n and either m1 or m2 tend to infinity). If m1 = m2 = 0 the statistic reduces to that of McNemar's test; if n = 0, it is equivalent to the statistic for testing equality of two independent proportions. This test is compared with other tests by means of Pitman efficiency. Examples are presented.
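As a point of reference, the two boundary cases named in the abstract can be written down directly. The sketch below (with hypothetical counts) computes McNemar's statistic for purely paired data and the usual pooled chi-square statistic for two independent proportions, each referred to a chi-square distribution with one degree of freedom; it is not the combined EM-based statistic of the paper.

```python
# Hedged sketch of the two boundary cases: McNemar's statistic (paired data
# only) and the pooled chi-square for two independent proportions (unpaired
# data only).  Both are asymptotically chi-square with 1 df.
import numpy as np
from scipy.stats import chi2

def mcnemar_stat(b, c):
    """b, c = discordant pair counts (psi=1, phi=0) and (psi=0, phi=1)."""
    return (b - c) ** 2 / (b + c)

def two_proportion_stat(x1, m1, x2, m2):
    """Pooled chi-square statistic for H0: pi_1 = pi_2 with independent samples."""
    p_pool = (x1 + x2) / (m1 + m2)
    return (x1 / m1 - x2 / m2) ** 2 / (p_pool * (1 - p_pool) * (1 / m1 + 1 / m2))

for stat in (mcnemar_stat(15, 6), two_proportion_stat(40, 100, 28, 120)):
    print(stat, chi2.sf(stat, df=1))   # p-value from chi-square with 1 df
```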
4.
In the estimation of a proportion p by group testing (pooled testing), retesting of units within positive groups has received little attention due to the minimal gain in precision compared to testing additional units. If acquisition of additional units is impractical or too expensive, and testing is not destructive, we show that retesting can be a useful option. We propose the retesting of a random grouping of units from positive groups, and compare it with nested halving procedures suggested by others. We develop an estimator of p for our proposed method, and examine its variance properties. Using simulation we compare retesting methods across a range of group testing situations, and show that for most realistic scenarios, our method is more efficient.
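The retesting estimator builds on the standard first-stage group-testing MLE; as a reminder, with n groups of common size s and y groups testing positive, the probability a group is negative is (1 - p)^s, giving the closed form below. This is the ordinary first-stage estimator, not the authors' retesting estimator, and the example values are hypothetical.

```python
# Minimal sketch of the standard (no-retesting) group-testing MLE of p:
# p_hat = 1 - (1 - y/n)^(1/s) for n groups of size s with y testing positive.
def group_testing_mle(y, n, s):
    return 1.0 - (1.0 - y / n) ** (1.0 / s)

print(group_testing_mle(y=7, n=50, s=10))   # e.g. 7 positive groups of size 10
```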
5.
Graham Hepworth 《Australian & New Zealand Journal of Statistics》2019,61(1):51-60
Group testing, in which individuals are pooled together and tested as a group, can be combined with inverse sampling to estimate the prevalence of a disease. Alternatives to the MLE are desirable because of its severe bias. We propose an estimator based on the bias correction method of Firth (1993), which is almost unbiased across the range of prevalences consistent with the group testing design. For equal group sizes, this estimator is shown to be equivalent to that derived by applying the correction method of Burrows (1987), and better than existing methods. For unequal group sizes, the problem has some intractable elements, but under some circumstances our proposed estimator can be found, and we show it to be almost unbiased. Calculation of the bias requires computer-intensive approximation because of the infinite number of possible outcomes.
6.
S. R. Paul 《Revue canadienne de statistique》1989,17(2):217-227
We derive two C(α) statistics and the likelihood-ratio statistic for testing the equality of several correlation coefficients, from k ≥ 2 independent random samples from bivariate normal populations. The asymptotic relationship of the C(α) tests, the likelihood-ratio test, and a statistic based on the normality assumption of Fisher's Z-transform of the sample correlation coefficient is established. A comparative performance study, in terms of size and power, is then conducted by Monte Carlo simulations. The likelihood-ratio statistic is often too liberal, and the statistic based on Fisher's Z-transform is conservative. The performance of the two C(α) statistics is identical. They maintain significance level well and have almost the same power as the other statistics when empirically calculated critical values of the same size are used. The C(α) statistic based on a noniterative estimate of the common correlation coefficient (based on Fisher's Z-transform) is recommended.
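Of the statistics compared above, the one based on Fisher's Z-transform is simple to state: with sample correlations r_i from samples of size n_i, X² = Σ (n_i - 3)(z_i - z̄)², where z_i = arctanh(r_i) and z̄ is the precision-weighted mean, is referred to a chi-square distribution with k - 1 degrees of freedom. A short sketch of this benchmark statistic (not the recommended C(α) statistic), with hypothetical inputs:

```python
# Hedged sketch of the Fisher Z-transform statistic for testing equality of
# k correlation coefficients: X2 = sum (n_i - 3) (z_i - z_bar)^2 ~ chi2(k - 1).
import numpy as np
from scipy.stats import chi2

def fisher_z_equality(r, n):
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                       # Fisher's Z-transform
    w = n - 3.0                             # asymptotic precision weights
    z_bar = np.sum(w * z) / np.sum(w)       # pooled estimate of the common Z
    x2 = np.sum(w * (z - z_bar) ** 2)
    return x2, chi2.sf(x2, df=len(r) - 1)

print(fisher_z_equality([0.45, 0.52, 0.31], [40, 55, 62]))
```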
7.
Assessment of non-inferiority is often performed using a one-sided statistical test through an analogous one-sided confidence limit. When the focus of attention is the difference in success rates between the test and active control groups, the lower confidence limit is computed, and many methods exist in the literature to address this objective. This paper considers methods which have been shown to be popular in the literature and have surfaced in this research as having good performance with respect to controlling Type I error at the specified level. Performance of these methods is assessed with respect to power and Type I error through simulations. Sample size considerations are also included to aid in the planning stages of non-inferiority trials focusing on the difference in proportions. Results suggest that the appropriate method to use depends on the sample size allocation of subjects in the test and active control groups.
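To make the decision rule concrete, here is a minimal sketch using the simple Wald (normal-approximation) lower limit: non-inferiority is declared if the one-sided lower limit for p_test - p_control exceeds -margin. The paper compares several better-behaved methods; the counts, margin, and alpha below are hypothetical.

```python
# Hedged sketch of a Wald-type non-inferiority rule for a difference in
# proportions (one of the simplest methods, not necessarily a recommended one).
import numpy as np
from scipy.stats import norm

def noninferior_wald(x_t, n_t, x_c, n_c, margin, alpha=0.025):
    p_t, p_c = x_t / n_t, x_c / n_c
    se = np.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    lower = (p_t - p_c) - norm.ppf(1 - alpha) * se   # one-sided lower limit
    return lower, lower > -margin                    # True = non-inferior

print(noninferior_wald(x_t=162, n_t=200, x_c=168, n_c=200, margin=0.10))
```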
8.
A Monte Carlo study was used to examine the Type I error and power values of five multivariate tests for the single-factor repeated measures model. The performance of Hotelling's T2 and four nonparametric tests, including a chi-square and an F-test version of a rank-transform procedure, was investigated for different distributions, sample sizes, and numbers of repeated measures. The results indicated that both Hotelling's T2 and the F-test version of the rank-transform performed well, producing Type I error rates which were close to the nominal value. The chi-square version of the rank-transform test, on the other hand, produced inflated Type I error rates for every condition studied. Hotelling's T2 and the F-test version of the rank-transform procedure showed similar power for moderately skewed distributions, but for strongly skewed distributions the F-test showed much better power. The performance of the other nonparametric tests depended heavily on sample size. Based on these results, the F-test version of the rank-transform procedure is recommended for the single-factor repeated measures model.
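A hedged sketch of an F-test version of the rank-transform idea (rank all observations jointly, then apply the ordinary one-factor repeated-measures ANOVA F test to the ranks); the exact ranking scheme used in the study may differ, and the data below are simulated.

```python
# Hedged sketch of a rank-transform F test for a single-factor repeated
# measures design: joint ranks, then the usual within-subjects ANOVA F.
import numpy as np
from scipy.stats import rankdata, f

def rank_transform_rm_anova(data):
    """data: (n_subjects x k_conditions) array of responses."""
    n, k = data.shape
    R = rankdata(data).reshape(n, k)            # joint ranks of all n*k values
    grand = R.mean()
    ss_treat = n * np.sum((R.mean(axis=0) - grand) ** 2)
    ss_subj = k * np.sum((R.mean(axis=1) - grand) ** 2)
    ss_error = np.sum((R - grand) ** 2) - ss_treat - ss_subj
    df1, df2 = k - 1, (n - 1) * (k - 1)
    F = (ss_treat / df1) / (ss_error / df2)
    return F, f.sf(F, df1, df2)

rng = np.random.default_rng(1)
print(rank_transform_rm_anova(rng.normal(size=(12, 4))))
```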
9.
Partial correlations can be used to statistically control for the effects of unwanted variables. Perhaps the most frequently used test of a partial correlation is the parametric F test, which requires normality of the joint distribution of observations. The possibility that this assumption may not be met in practice suggests a need for procedures that do not require normality. Unfortunately, the statistical literature provides little guidance for choosing other tests when the normality assumption is not satisfied. Several nonparametric tests of partial correlations are investigated using a computer simulation study. Recommendations are made for selecting certain tests under particular conditions.
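For reference, the parametric test referred to above can be computed by residualizing x and y on the control variables, correlating the residuals, and referring t = r * sqrt(df / (1 - r^2)), df = n - 2 - g (g = number of controlled variables), to a t distribution; t^2 gives the equivalent F. A minimal sketch with simulated data:

```python
# Minimal sketch of the parametric test of a partial correlation r_xy.Z.
import numpy as np
from scipy.stats import t as t_dist

def partial_corr_test(x, y, Z):
    """x, y: (n,) arrays; Z: (n, g) array of control variables."""
    Z1 = np.column_stack([np.ones(len(x)), Z])
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]   # residualize x on Z
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]   # residualize y on Z
    r = np.corrcoef(rx, ry)[0, 1]
    df = len(x) - 2 - Z.shape[1]
    t_stat = r * np.sqrt(df / (1 - r ** 2))
    return r, 2 * t_dist.sf(abs(t_stat), df)              # two-sided p-value

rng = np.random.default_rng(0)
Z = rng.normal(size=(60, 2))
x = Z[:, 0] + rng.normal(size=60)
y = 0.5 * x + Z[:, 1] + rng.normal(size=60)
print(partial_corr_test(x, y, Z))
```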
10.
Various methods for “Studentizing” the sample median are compared on the basis of a Monte Carlo study. Several of the methods do rather poorly while two, the bootstrap and the standardized length of a distribution-free confidence interval, behave acceptably across a wide range of sample sizes and several distributions of varying tail length. These two methods seem to agree closely with the distribution-free confidence intervals and moreover, unlike these intervals, the methods can be extended to a method of accurate inference for L1 regression.
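A minimal sketch of the bootstrap option named above: estimate the standard error of the sample median by resampling, then form an approximate Studentized interval. The heavy-tailed sample below is simulated for illustration.

```python
# Hedged sketch of bootstrap "Studentizing" of the sample median.
import numpy as np

def bootstrap_median_se(x, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    meds = np.array([np.median(rng.choice(x, size=len(x), replace=True))
                     for _ in range(B)])
    return np.median(x), meds.std(ddof=1)      # median and its bootstrap SE

x = np.random.default_rng(2).standard_t(df=3, size=40)   # heavy-tailed sample
m, se = bootstrap_median_se(x)
print(m, se, (m - 1.96 * se, m + 1.96 * se))   # approximate 95% interval
```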
11.
The problem of testing the equality of several variances arises in many areas. Several tests for the equality of variances are available in the literature, but they demonstrate only the statistical significance of the differences among the variances. In this paper, a graphical method is presented for testing the equality of variances. This method simultaneously demonstrates statistical and engineering significance. Two examples are given to illustrate the proposed graphical method, and the conclusions obtained are compared with those of existing tests.
12.
This paper examines the robustness of the Welch test, the James test, and Tan's ANOVA test (referred to here as the Fβ test) for testing parallelism of k straight lines under heteroscedasticity and nonnormality. Results of Monte Carlo studies demonstrate the robustness of all three tests with respect to departures from normality. Further, there is hardly any difference between the methods with respect to either the power or the size of the test.
13.
A.I. Khuri 《Communications in Statistics - Theory and Methods》2013,42(13):1283-1293
In many experimental situations we need to test the hypothesis concerning the equality of parameters of two or more binomial populations. Of special interest is knowledge of the sample sizes needed to detect certain differences among the parameters, for a specified power, and at a given level of significance. Al-Bayyati (1971) derived a rule of thumb for a quick calculation of the sample size needed to compare two binomial parameters. The rule is defined in terms of the difference desired to be detected between the two parameters. In this paper, we introduce a generalization of Al-Bayyati's rule to several independent proportions. The generalized rule gives a conservative estimate of the sample size needed to achieve a specified power in detecting certain differences among the binomial parameters at a given level of significance. The method is illustrated with an example.
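For context, the familiar normal-approximation sample size per group for comparing two proportions is sketched below; it is shown only for orientation and is not Al-Bayyati's rule of thumb or its generalization.

```python
# Standard normal-approximation sample size per group for detecting p1 vs p2
# at two-sided level alpha with power 1 - beta.
import math
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return math.ceil((za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

print(n_per_group(0.20, 0.35))   # about 136 per group
```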
14.
Thomas R. Willemain, Ali Allahverdi, Philip Desautels, Janine Eldredge, Ozden Gur, Gregory Panos 《Communications in Statistics - Simulation and Computation》2013,42(4):1043-1075
We compare the performance of seven robust estimators for the parameter of an exponential distribution. These include the debiased median and two optimally-weighted one-sided trimmed means. We also introduce four new estimators: the Transform, Bayes, Scaled and Bicube estimators. We make the Monte Carlo comparisons for three sample sizes and six situations. We evaluate the comparisons in terms of a new performance measure, Mean Absolute Differential Error (MADE), and a premium/protection interpretation of MADE. We organize the comparisons to enhance statistical power by making maximal use of common random deviates. The Transform estimator provides the best performance as judged by MADE. The singly-trimmed mean and Transform method define the efficient frontier of premium/protection.
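As one concrete example of the simpler robust estimators being compared, the "debiased median" can be read (our assumption, not a definition from the paper) as the sample median rescaled by ln 2, since the median of an exponential distribution with mean θ is θ ln 2:

```python
# Hedged sketch: median-based, outlier-resistant estimate of the exponential
# mean theta, using median(X) / ln 2 (our reading of the "debiased median").
import numpy as np

def debiased_median_exponential(x):
    return np.median(x) / np.log(2.0)

x = np.random.default_rng(3).exponential(scale=5.0, size=200)
print(x.mean(), debiased_median_exponential(x))   # both estimate theta = 5
```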
15.
S. P. Brooks 《Statistics and Computing》2001,11(2):179-190
When the results of biological experiments are tested for a possible difference between treatment and control groups, the inference is only valid if based upon a model that fits the experimental results satisfactorily. In dominant-lethal testing, foetal death has previously been assumed to follow a variety of models, including a Poisson, Binomial, Beta-binomial and various mixture models. However, discriminating between models has always been a particularly difficult problem. In this paper, we consider the data from 6 separate dominant-lethal assay experiments and discriminate between the competing models which could be used to describe them. We adopt a Bayesian approach and illustrate how a variety of different models may be considered, using Markov chain Monte Carlo (MCMC) simulation techniques and comparing the results with the corresponding maximum likelihood analyses. We present an auxiliary variable method for determining the probability that any particular data cell is assigned to a given component in a mixture and we illustrate the value of this approach. Finally, we show how the Bayesian approach provides a natural and unique perspective on the model selection problem via reversible jump MCMC and illustrate how probabilities associated with each of the different models may be calculated for each data set. In terms of estimation we show how, by averaging over the different models, we obtain reliable and robust inference for any statistic of interest.
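A heavily simplified, non-Bayesian stand-in for the model-discrimination idea (not the paper's RJMCMC analysis): fit a Poisson model and a binomial model to hypothetical per-litter foetal death counts by maximum likelihood and compare AIC values.

```python
# Hedged, simplified model comparison: Poisson vs binomial fits for per-litter
# foetal deaths, discriminated by AIC.  All data below are hypothetical.
import numpy as np
from scipy.stats import poisson, binom

deaths = np.array([0, 1, 0, 2, 3, 0, 1, 5, 0, 1, 2, 0])      # hypothetical litters
implants = np.array([9, 10, 8, 11, 9, 10, 12, 9, 10, 11, 9, 10])

lam_hat = deaths.mean()                        # Poisson MLE
p_hat = deaths.sum() / implants.sum()          # binomial MLE
ll_pois = poisson.logpmf(deaths, lam_hat).sum()
ll_bin = binom.logpmf(deaths, implants, p_hat).sum()
print("AIC Poisson :", 2 * 1 - 2 * ll_pois)    # one fitted parameter each
print("AIC binomial:", 2 * 1 - 2 * ll_bin)
```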
16.
In this paper, we propose a nonparametric method based on a jackknife empirical likelihood ratio to test the equality of two variances. The asymptotic distribution of the test statistic is shown to be χ2 with one degree of freedom. Simulations have been conducted to compare the Type I error and power with those of Levene's test and the F test under different distribution settings. The proposed method is applied to a real data set to illustrate the testing procedure.
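For reference, the two benchmark tests named above are easy to run; the sketch below shows Levene's test and the classical variance-ratio F test on simulated data. This is not the proposed jackknife empirical likelihood method.

```python
# Hedged sketch of the benchmark tests: Levene's test and the F ratio test.
import numpy as np
from scipy.stats import levene, f

def variance_ratio_test(x, y):
    F = np.var(x, ddof=1) / np.var(y, ddof=1)
    df1, df2 = len(x) - 1, len(y) - 1
    p_one = f.sf(F, df1, df2) if F > 1 else f.cdf(F, df1, df2)
    return F, min(1.0, 2 * p_one)              # crude two-sided p-value

rng = np.random.default_rng(4)
x, y = rng.normal(0, 1.0, 60), rng.normal(0, 1.5, 60)
print(levene(x, y))                            # robust to nonnormality
print(variance_ratio_test(x, y))               # assumes normality
```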
17.
《Journal of Statistical Computation and Simulation》2012,82(3-4):181-187
A procedure is proposed for testing the equality of k dependent correlation coefficients. The procedure is evaluated by Monte Carlo simulation, and a method for post hoc probing is also suggested.
18.
Jean-J. Fortier 《Revue canadienne de statistique》1992,20(1):23-33
It has been recognized that counting the objects allocated by a rule of classification to several unknown classes often does not provide good estimates of the true class proportions of the objects to be classified. We propose a linear transformation of these classification estimates, which minimizes the mean squared error of the transformed estimates over all possible sets of true proportions. This so-called best-linear-corrector (BLC) transformation is a function of the confusion (classification-error) matrix and of the first and second moments of the prior distribution of the vector of proportions. When the number of objects to be classified increases, the BLC tends to the inverse of the confusion matrix. The estimates that are obtained directly by this inverse-confusion corrector (ICC) are also the maximum-likelihood unbiased estimates of the probabilities that the objects originate from one or the other class, had the objects been preselected with those probabilities. But for estimating the actual proportions, the ICC estimates behave less well than the raw classification estimates for some collections. In that situation, the BLC is substantially superior to the ICC even for some large collections of objects and is always substantially superior to the raw estimates. The statistical model is applied concretely to the measurement of forest cover in remote sensing.
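The limiting inverse-confusion corrector (ICC) described above is easy to state: under the convention that C[i, j] = P(classified as j | true class i), the expected observed proportions are q = C^T p, so the corrected estimate solves C^T p = q̂. A minimal sketch with a hypothetical two-class confusion matrix:

```python
# Minimal sketch of the inverse-confusion corrector (ICC) for class proportions.
import numpy as np

C = np.array([[0.90, 0.10],      # hypothetical confusion matrix:
              [0.20, 0.80]])     # C[i, j] = P(classified as j | true class i)
counts = np.array([640, 360])    # objects allocated to each class by the rule
q_hat = counts / counts.sum()    # raw classification proportions
p_icc = np.linalg.solve(C.T, q_hat)
print(p_icc)                     # inverse-confusion-corrected proportions
```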
19.
We consider the problem of simultaneously estimating k + 1 related proportions, with a special emphasis on the estimation of Hardy-Weinberg (HW) proportions. We prove that the uniformly minimum-variance unbiased estimators (UMVUEs) of two proportions, each individually admissible under squared-error loss, are inadmissible when the proportions are estimated jointly. Furthermore, rules that dominate the UMVUE are given. A Bayesian analysis is then presented to provide insight into this inadmissibility issue: the UMVUE is undesirable because the two estimators are Bayes rules corresponding to different priors. It is also shown that there does not exist a prior which yields the maximum-likelihood estimators simultaneously. When the risks of several estimators for the HW proportions are compared, it is seen that some Bayesian estimates yield significantly smaller risks over a large portion of the parameter space for small samples. However, the differences in risks become less significant as the sample size gets larger.
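As a purely illustrative Bayes estimate of HW proportions (under a Beta prior on the allele frequency; not necessarily any estimator studied in the paper), genotype counts (n_AA, n_Aa, n_aa) give a Beta posterior for the allele frequency q, and the HW proportions follow from its posterior mean:

```python
# Hedged illustration: Bayes estimate of Hardy-Weinberg proportions via a
# Beta(a, b) prior on the allele frequency q (hypothetical genotype counts).
import numpy as np

def hw_bayes(n_AA, n_Aa, n_aa, a=1.0, b=1.0):
    n_total = n_AA + n_Aa + n_aa
    q = (a + 2 * n_AA + n_Aa) / (a + b + 2 * n_total)   # posterior mean of q
    return np.array([q ** 2, 2 * q * (1 - q), (1 - q) ** 2])

print(hw_bayes(30, 50, 20))   # estimated (p_AA, p_Aa, p_aa) under HW
```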