Similar Articles: 20 results found (search time 46 ms)
1.
A problem where one subpopulation is compared with several other subpopulations in terms of means, with the goal of estimating the smallest difference between the means, commonly arises in biology, medicine, and many other scientific fields. A generalization of the Strassburger-Bretz-Hochberg approach for two comparisons is presented for cases with three or more comparisons. The method allows constructing an interval estimator for the smallest mean difference which is compatible with the Min test. An application to a fluency-disorder study is illustrated. Simulations confirmed adequate probability coverage for normally distributed outcomes across a number of designs.

2.
A sequentially rejective (SR) testing procedure introduced by Holm (1979) and modified (MSR) by Shaffer (1986) is considered for testing all pairwise mean comparisons. For such comparisons, both the SR and MSR methods require that the observed test statistics be ordered and compared, each in turn, to appropriate percentiles of Student's t distribution. For the MSR method these percentiles are based on the maximum number of true null hypotheses remaining at each stage of the sequential procedure, given significance at previous stages. A function is developed for determining this number from the number of means being tested and the stage of the test. For a test of all pairwise comparisons, the logical implications that follow the rejection of a null hypothesis render the MSR procedure uniformly more powerful than the SR procedure. Tables of percentiles for comparing K means, 3 ≤ K ≤ 6, using the MSR method are presented. These tables use Sidak's (1967) multiplicative inequality and simplify the use of the MSR procedure. Several modifications to the MSR are suggested as a means of further increasing the power for testing the pairwise comparisons. General use of the MSR and the corresponding function for testing parameters other than the mean is discussed.
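As a sketch of the basic step-down idea, Holm's unmodified SR procedure can be implemented in a few lines (this is the plain comparison of ordered p-values against α/(m − i), not Shaffer's MSR refinement, which additionally exploits logical constraints; the function name is ours):

```python
def holm_reject(pvals, alpha=0.05):
    """Holm (1979) sequentially rejective procedure.

    Sort the p-values; compare the (i+1)-th smallest against
    alpha / (m - i).  Stop at the first non-rejection; all hypotheses
    rejected so far stay rejected, all remaining ones are retained.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for step, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - step):
            reject[idx] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

# Three pairwise hypotheses at familywise alpha = 0.05:
print(holm_reject([0.04, 0.001, 0.01]))  # [True, True, True]
```

Note that 0.04 is rejected here only because the two smaller p-values cleared their stricter thresholds first, which is exactly the sequential gain over a single-step Bonferroni test.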

3.
The comparison among m proportions can be viewed as the clustering of the means of Bernoulli trials. By introducing a distribution supported on the means of Bernoulli trials, we suggest a moment-method approach to determine the centers of the clusters. We also suggest using model selection criteria, rather than the usual hypothesis testing approach, to determine the grouping of the means. The discrepancy functions for all possible models are compared based on bootstrap results.

4.
Comparing treatment means from populations that follow independent normal distributions is a common statistical problem. Many frequentist solutions exist to test for significant differences among the treatment means. A different approach is to determine how likely it is that particular means are grouped as equal. We developed a fiducial framework for this situation. Our method provides fiducial probabilities that any number of means are equal, based on the data and the assumed normal distributions. The methodology was developed for both constant and non-constant variance across populations. Simulations suggest that our method selects the correct grouping of means at a relatively high rate for small sample sizes, and asymptotic calculations demonstrate good properties. Additionally, we demonstrate the method's flexibility in calculating the fiducial probability for any number of equal means by analyzing a simulated data set and a data set measuring the nitrogen levels of red clover plants inoculated with different treatments.

5.
Halperin et al. (1988) suggested an approach which allows for k Type I errors while using Scheffé's method of multiple comparisons for linear combinations of p means. In this paper we apply the same type of error control to Tukey's method of multiple pairwise comparisons. In fact, the variant of the Tukey (1953) approach discussed here defines the error control objective as assuring, with a specified probability, that at most one of the p(p − 1)/2 comparisons between all pairs of the treatment means is significant in two-sided tests when the overall null hypothesis (all p means are equal) is true or, from a confidence interval point of view, that at most one of a set of simultaneous confidence intervals for all of the pairwise differences of the treatment means is incorrect. The formulae which yield the critical values needed to carry out this new procedure are derived and the critical values are tabulated. A Monte Carlo study was conducted and several tables are presented to demonstrate the experimentwise Type I error rates and the gains in power furnished by the proposed procedure.

6.
We discuss the accuracy of the computation and present a Fortran program to compute the cumulative distribution function (CDF) for the analysis of means (ANOM).

7.
In this paper, we establish the optimal size of the choice sets in generic choice experiments for asymmetric attributes when estimating main effects only. We give an upper bound for the determinant of the information matrix when estimating main effects and all two-factor interactions for binary attributes. We also derive the information matrix for a choice experiment in which the choice sets are of different sizes and use this to determine the optimal sizes for the choice sets.

8.
Hahn (1977) suggested a procedure for constructing prediction intervals for the difference between the means of two future samples from normal populations having equal variance, based on past samples selected from both populations. In this paper, we extend Hahn's work by constructing simultaneous prediction intervals for all pairwise differences among the means of k ≥ 2 future samples from normal populations with equal variances, using past samples taken from each of the k populations. For k = 2, this generalization reduces to Hahn's special case. These prediction intervals may be used when one has sampled the performance of several products and wishes to simultaneously assess the differences in future sample mean performance of these products with a predetermined overall coverage probability. The use of the new procedure is demonstrated with a numerical example.

9.
10.
In this article we consider a Bayesian approach to the problem of ranking the means of normally distributed populations, which is a common problem in the biological sciences. We use a decision-theoretic approach with a straightforward loss function to determine a set of candidate rankings. This loss function allows the researcher to balance the risk of not including the correct ranking against the risk of increasing the number of rankings selected. We apply our new procedure to an example regarding the effect of zinc on the diversity of diatom species.

11.
This article considers a Bayesian hierarchical model for multiple comparisons in linear models where the population medians satisfy a simple order restriction. Representing the asymmetric Laplace distribution as a scale mixture of normals with an exponential mixing density, and using a continuous prior restricted to the order constraints, a Gibbs sampling algorithm for parameter estimation and simultaneous comparison of treatment medians is proposed. Posterior probabilities of all possible hypotheses on the equality/inequality of treatment medians are estimated using Bayes factors that are computed via the Savage-Dickey density ratios. The performance of the proposed median-based model is investigated on simulated and real datasets. The results show that the proposed method can outperform the commonly used method based on treatment means when data are from nonnormal distributions.

12.
We propose a Bayesian hierarchical model for multiple comparisons in mixed models where the repeated measures on subjects are described with subject random effects. The model facilitates inference by parameterizing the successive differences of the population means, for which we choose independent prior distributions that are mixtures of a normal distribution and a discrete distribution with its entire mass at zero. For the other parameters, we choose conjugate or vague priors. The performance of the proposed hierarchical model is investigated on a simulated and two real data sets, and the results illustrate that the proposed model can effectively conduct a global test and pairwise comparisons using the posterior probability that any two means are equal. A simulation study is performed to analyze the Type I error rate, the familywise error rate, and the test power. A Gibbs sampler is used to estimate the parameters and to calculate the posterior probabilities.

13.
The problem of selection of a subset containing the largest of several location parameters is considered, and a Gupta-type selection rule based on sample medians is investigated for normal and double exponential populations. Numerical comparisons between rules based on medians and means of small samples are made for normal and contaminated normal populations, assuming the population means to be equally spaced. It appears that the rule based on sample means loses its superiority over the rule based on sample medians when the samples are heavily contaminated. The asymptotic relative efficiency (ARE) of the medians procedure relative to the means procedure is also computed, assuming the normal means to be in a slippage configuration. The means procedure is found to be superior to the medians procedure in the sense of ARE. As in the small sample case, the situation is reversed if the normal populations are highly contaminated.

14.
We establish general conditions for the asymptotic validity of single-stage multiple-comparison procedures (MCPs) under the following general framework. There is a finite number of independent alternatives to compare, where each alternative can represent, e.g., a population, treatment, system or stochastic process. Associated with each alternative is an unknown parameter to be estimated, and the goal is to compare the alternatives in terms of the parameters. We establish the MCPs’ asymptotic validity, which occurs as the sample size of each alternative grows large, under two assumptions. First, for each alternative, the estimator of its parameter satisfies a central limit theorem (CLT). Second, we have a consistent estimator of the variance parameter appearing in the CLT. Our framework encompasses comparing means (or other moments) of independent (not necessarily normal) populations, functions of means, quantiles, steady-state means of stochastic processes, and optimal solutions of stochastic approximation by the Kiefer–Wolfowitz algorithm. The MCPs we consider are multiple comparisons with the best, all pairwise comparisons, all contrasts, and all linear combinations, and they allow for unknown and unequal variance parameters and unequal sample sizes across alternatives.
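A minimal sketch of the kind of interval those two assumptions justify: a CLT for each sample mean plus a consistent variance estimate gives asymptotic normal-theory intervals for pairwise mean differences, with unequal variances and sample sizes allowed. This shows per-comparison intervals only; a simultaneous MCP would replace the normal quantile with a larger critical value. The function name and the hardcoded quantile are ours:

```python
import math
import statistics

Z975 = 1.959963984540054  # standard-normal 0.975 quantile

def pairwise_cis(samples, z=Z975):
    """Asymptotic 95% CIs for all pairwise mean differences.

    For alternatives i and j, the interval is
        (xbar_i - xbar_j) +/- z * sqrt(s_i^2/n_i + s_j^2/n_j),
    valid as the sample sizes grow, by the CLT for each mean and the
    consistency of each sample variance.
    """
    stats_ = [(statistics.fmean(s), statistics.variance(s), len(s))
              for s in samples]
    cis = {}
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            mi, vi, ni = stats_[i]
            mj, vj, nj = stats_[j]
            half = z * math.sqrt(vi / ni + vj / nj)
            cis[(i, j)] = (mi - mj - half, mi - mj + half)
    return cis
```

With realistic sample sizes one would also want a simultaneous critical value (e.g., Bonferroni-adjusted z) to control the familywise error across all pairs.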

15.
Confidence intervals for location parameters are expanded (in either direction) to some “crucial” points and the resulting increase in the confidence coefficient is investigated. Particular crucial points are chosen to illuminate some hypothesis testing problems. Special results are derived for the normal distribution with estimated variance and, in particular, for the problem of classifying treatments as better or worse than a control. For this problem the usual two-sided Dunnett procedure is seen to be inefficient. Suggestions are made for the use of already published tables for this problem. Mention is made of the use of expanded confidence intervals for all pairwise comparisons of treatments using an “honest ordering difference” rather than Tukey's “honest significant difference”.

16.
Fisher's least significant difference (LSD) procedure is a two-step testing procedure for pairwise comparisons of several treatment groups. In the first step of the procedure, a global test is performed for the null hypothesis that the expected means of all treatment groups under study are equal. If this global null hypothesis can be rejected at the pre-specified level of significance, then in the second step of the procedure one is permitted in principle to perform all pairwise comparisons at the same level of significance (although in practice not all of them may be of primary interest). Fisher's LSD procedure is known to preserve the experimentwise Type I error rate at the nominal level of significance if (and only if) the number of treatment groups is three. The procedure may therefore be applied to phase III clinical trials comparing two doses of an active treatment against placebo in the confirmatory sense (while in this case no confirmatory comparison has to be performed between the two active treatment groups). The power properties of this approach are examined in the present paper. It is shown that the power of the first-step global test, and therefore the power of the overall procedure, may be relevantly lower than the power of the pairwise comparison between the more-favourable active dose group and placebo. Achieving a certain overall power for this comparison with Fisher's LSD procedure, irrespective of the effect size at the less-favourable dose group, may require slightly larger treatment groups than sizing the study with respect to the simple Bonferroni alpha adjustment. Therefore, if Fisher's LSD procedure is used to avoid an alpha adjustment for phase III clinical trials, the potential loss of power due to the first-step global test should be considered at the planning stage.
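The two-step gating logic described above can be sketched directly on p-values. This hypothetical helper assumes the global F-test p-value and the unadjusted pairwise t-test p-values have already been computed; it only encodes the protection rule, not the tests themselves:

```python
def protected_lsd(global_p, pairwise_p, alpha=0.05):
    """Fisher's (protected) LSD gate.

    Pairwise tests are examined only if the global F test rejects at
    level alpha; otherwise no pairwise difference is declared
    significant, regardless of the individual p-values.
    """
    if global_p >= alpha:
        return {pair: False for pair in pairwise_p}  # gate stays closed
    return {pair: (p < alpha) for pair, p in pairwise_p.items()}

# Two doses vs placebo, global F test significant:
print(protected_lsd(0.02, {("dose1", "placebo"): 0.01,
                           ("dose2", "placebo"): 0.20}))
# {('dose1', 'placebo'): True, ('dose2', 'placebo'): False}
```

The contrast with Bonferroni is visible in the second branch: LSD tests each pair at the full alpha once the gate opens, whereas Bonferroni would test each of m pairs at alpha/m with no gate, which is exactly the sample-size trade-off the abstract discusses.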

17.
A Monte Carlo simulation evaluated five pairwise multiple comparison procedures for controlling Type I error rates, any-pair power, and all-pairs power. Realistic conditions of non-normality were based on a previous survey. Variance ratios were varied from 1:1 to 64:1. Procedures evaluated included Tukey's honestly significant difference (HSD) preceded by an F test, the Hayter–Fisher, the Games–Howell preceded by an F test, the Peritz with F tests, and the Peritz with Alexander–Govern tests. Tukey's procedure shows the greatest robustness in Type I error control. Any-pair power is generally best with one of the Peritz procedures. All-pairs power is best with the Peritz F test procedure. However, Tukey's HSD preceded by the Alexander–Govern F test may provide the best combination for controlling Type I error and power rates in a variety of conditions of non-normality and variance heterogeneity.

18.
This article discusses estimation of several percentiles simultaneously, develops a simple test to compare the sizes of two test statistics, and considers the use of logit models to adjust power curves to have the same null hypothesis level.

19.
Consider k (≥ 2) normal populations whose means are all known or all unknown and whose variances are unknown. Let σ²[1] ≤ ⋯ ≤ σ²[k] denote the ordered variances. Our goal is to select a non-empty subset of the k populations whose size is at most m (1 ≤ m ≤ k − 1) so that the population associated with the smallest variance (called the best population) is included in the selected subset with a guaranteed minimum probability P* whenever σ²[2]/σ²[1] ≥ δ* > 1, where P* and δ* are specified in advance of the experiment. Based on samples of size n from each of the populations, we propose and investigate a procedure called RBCP. We also derive some asymptotic results for our procedure. Some comparisons with an earlier available procedure are presented in terms of the average subset sizes for selected slippage configurations, based on simulations. The results are illustrated by an example.

20.
Richter and McCann (2007, Journal of Modern Applied Statistical Methods 6(2): 399–412) presented a median-based multiple comparison procedure for assessing evidence of group location differences. The sampling distribution is based on the permutation distribution of the maximum median difference among all pairs, and provides strong control of the FWE. This idea is extended to develop a step-down procedure for comparing group locations. The new step-down procedure exploits logical dependencies between pairwise hypotheses and provides greater power than the single-step procedure, while still maintaining strong FWE control. The new procedure can also be a more powerful alternative to existing methods based on means, especially for heavy-tailed distributions.
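A sketch of the single-step building block: the permutation distribution of the maximum pairwise median difference, assuming exchangeability of observations across groups under the global null. The step-down extension would repeat this on nested subsets of hypotheses; function names are ours:

```python
import random
import statistics
from itertools import combinations

def max_median_diff(groups):
    """Largest absolute difference between any two group medians."""
    meds = [statistics.median(g) for g in groups]
    return max(abs(a - b) for a, b in combinations(meds, 2))

def perm_test_max_median(groups, n_perm=2000, seed=0):
    """Single-step permutation test of the max pairwise median difference.

    Group labels are shuffled n_perm times; the p-value is the fraction
    of shuffles whose max median difference reaches the observed one
    (with the +1 correction that includes the observed arrangement).
    """
    rng = random.Random(seed)
    observed = max_median_diff(groups)
    pooled = [x for g in groups for x in g]
    sizes = [len(g) for g in groups]
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm, start = [], 0
        for n in sizes:
            perm.append(pooled[start:start + n])
            start += n
        if max_median_diff(perm) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)
```

Because the reference distribution is built from the maximum over all pairs, any pairwise difference declared significant against it inherits strong FWE control, which is the property the step-down procedure then preserves while gaining power.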
