1.
Jing Xu, Communications in Statistics: Simulation and Computation, 2018, 47(2): 420-431
To study the equality of regression coefficients across several heteroscedastic regression models, we propose a fiducial-based test and theoretically examine its frequency properties. We numerically compare the performance of the proposed approach with the parametric bootstrap (PB) approach. Simulation results indicate that the fiducial approach controls the Type I error rates satisfactorily regardless of the number of regression models and the sample sizes, whereas the PB approach tends to be somewhat liberal in some scenarios. Finally, the proposed approach is applied to the analysis of a real dataset for illustration.
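The parametric-bootstrap side of this comparison can be sketched in a few lines. The sketch below is a minimal pure-Python illustration for two simple linear regressions with possibly unequal error variances, not the authors' fiducial procedure; `pb_equal_slopes` and `ols_slope` are hypothetical names, and for brevity it bootstraps the raw slope difference under a pooled slope rather than a studentized statistic, as a careful PB test would.

```python
import random
import statistics

def ols_slope(x, y):
    """Least-squares slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx

def residual_sd(x, y, b):
    """Residual standard deviation of a simple linear fit with slope b."""
    a = statistics.fmean(y) - b * statistics.fmean(x)
    residuals = [yi - a - b * xi for xi, yi in zip(x, y)]
    return statistics.stdev(residuals)

def pb_equal_slopes(x1, y1, x2, y2, n_boot=2000, seed=0):
    """Parametric-bootstrap p-value for H0: the two regression slopes
    are equal, allowing unequal error variances across the two models."""
    rng = random.Random(seed)
    b1, b2 = ols_slope(x1, y1), ols_slope(x2, y2)
    s1, s2 = residual_sd(x1, y1, b1), residual_sd(x2, y2, b2)
    t_obs = abs(b1 - b2)
    b0 = (b1 + b2) / 2.0          # pooled slope used to simulate under H0
    count = 0
    for _ in range(n_boot):
        # simulate each model under the null, with its own error variance
        yb1 = [b0 * xi + rng.gauss(0.0, s1) for xi in x1]
        yb2 = [b0 * xi + rng.gauss(0.0, s2) for xi in x2]
        if abs(ols_slope(x1, yb1) - ols_slope(x2, yb2)) >= t_obs:
            count += 1
    return count / n_boot
```

The p-value is the bootstrap proportion of slope differences at least as large as the observed one; heteroscedasticity enters only through the two separately estimated residual standard deviations.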
2.
In a searching analysis of the fiducial argument Hacking (1965) proposed the Principle of Irrelevance as a condition under which the argument is valid. His statement of the Principle was essentially non-mathematical and this paper presents a mathematical development of the Principle. The relationship with likelihood inference is explored and some of the proposed counter-examples to fiducial theory are considered. It is shown that even with the Principle of Irrelevance examples of non-uniqueness of fiducial distributions exist.
3.
This paper presents a kernel estimation of the distribution of the scale parameter of the inverse Gaussian distribution under type II censoring together with the distribution of the remaining time. Estimation is carried out via the Gibbs sampling algorithm combined with a missing data approach. Estimates and confidence intervals for the parameters of interest are also presented.
4.
This paper considers the problem of identifying which treatments are strictly worse than the best treatment or treatments in a one-way layout, which has many important applications in screening trials for new product development. A procedure is proposed that selects a subset of the treatments containing only treatments that are known to be strictly worse than the best treatment or treatments. In addition, simultaneous confidence intervals are obtained which provide upper bounds on how inferior the treatments are compared with these best treatments. In this way, the new procedure shares the characteristics of both subset selection procedures and multiple comparison procedures. Some tables of critical points are provided for implementing the new procedure, and some examples of its use are given.
5.
A. Aghamohammadi, Communications in Statistics: Simulation and Computation, 2018, 47(4): 939-953
This article considers a Bayesian hierarchical model for multiple comparisons in linear models where the population medians satisfy a simple order restriction. Representing the asymmetric Laplace distribution as a scale mixture of normals with an exponential mixing density, and using a continuous prior restricted to the order constraints, a Gibbs sampling algorithm for parameter estimation and simultaneous comparison of treatment medians is proposed. Posterior probabilities of all possible hypotheses on the equality/inequality of treatment medians are estimated using Bayes factors computed via Savage-Dickey density ratios. The performance of the proposed median-based model is investigated on simulated and real datasets. The results show that the proposed method can outperform the commonly used method based on treatment means when data come from nonnormal distributions.
6.
John D. Storey, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2002, 64(3): 479-498
Summary. Multiple-hypothesis testing involves guarding against much more complicated errors than single-hypothesis testing. Whereas we typically control the Type I error rate for a single-hypothesis test, a compound error rate is controlled for multiple-hypothesis tests. For example, controlling the false discovery rate (FDR) traditionally involves intricate sequential p-value rejection methods based on the observed data. Whereas a sequential p-value method fixes the error rate and estimates its corresponding rejection region, we propose the opposite approach: we fix the rejection region and then estimate its corresponding error rate. This new approach offers increased applicability, accuracy and power. We apply the methodology to both the positive false discovery rate (pFDR) and the FDR, and provide evidence for its benefits. It is shown that the pFDR is probably the quantity of interest over the FDR. Also discussed is the calculation of the q-value, the pFDR analogue of the p-value, which eliminates the need to set the error rate beforehand as is traditionally done. Some simple numerical examples are presented that show that this new approach can yield an increase of over eight times in power compared with the Benjamini-Hochberg FDR method.
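The fixed-rejection-region idea and the q-value it induces can be sketched as follows. This is a minimal illustration, not the paper's full methodology: `storey_qvalues` is a hypothetical name, and the proportion of true nulls pi0 is estimated with a single tuning value `lam` rather than the smoothing used in practice.

```python
def storey_qvalues(pvals, lam=0.5):
    """q-values from a fixed-rejection-region FDR estimate.
    pi0 (proportion of true nulls) is estimated from the fraction of
    p-values above the tuning value lam."""
    m = len(pvals)
    pi0 = min(1.0, sum(p > lam for p in pvals) / ((1.0 - lam) * m))
    order = sorted(range(m), key=lambda i: pvals[i])
    qvals = [0.0] * m
    running_min = 1.0
    # walk from the largest p-value down, enforcing monotone q-values
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        fdr_at_p = pi0 * m * pvals[i] / rank   # estimated FDR when
        running_min = min(running_min, fdr_at_p)  # rejecting at this p
        qvals[i] = running_min
    return qvals
```

Each q-value is the smallest estimated FDR over all rejection regions that include that test, which is why no error rate needs to be fixed in advance.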
7.
Halperin et al. (1988) suggested an approach which allows for k Type I errors while using Scheffé's method of multiple comparisons for linear combinations of p means. In this paper we apply the same type of error control to Tukey's method of multiple pairwise comparisons. In fact, the variant of the Tukey (1953) approach discussed here defines the error-control objective as assuring, with a specified probability, that at most one of the p(p-1)/2 comparisons between all pairs of the treatment means is significant in two-sided tests when the overall null hypothesis (all p means are equal) is true or, from a confidence-interval point of view, that at most one of a set of simultaneous confidence intervals for all of the pairwise differences of the treatment means is incorrect. The formulae which yield the critical values needed to carry out this new procedure are derived, and the critical values are tabulated. A Monte Carlo study was conducted, and several tables are presented to demonstrate the experimentwise Type I error rates and the gains in power furnished by the proposed procedure.
8.
Jason C. Hsu, Communications in Statistics: Theory and Methods, 2013, 42(9): 2009-2028
There are three types of multiple comparisons: all-pairwise multiple comparisons (MCA), multiple comparisons with the best (MCB), and multiple comparisons with a control (MCC). There are also three levels of multiple comparisons inference: confidence sets, subset comparisons, and tests of homogeneity. In current practice, MCA procedures dominate. Incorrect attempts at more efficient comparisons, in the form of employing lower-level MCA procedures for higher-level inference, account for the most frequent abuses in multiple comparisons. A better strategy is to choose the correct type of inference at the level of inference desired. In particular, very often the simultaneous comparison of each treatment with the best of the other treatments (MCB) suffices. Hsu (1984b) gave simultaneous confidence intervals for θi − maxj≠i θj having the simple form [−(Yi − maxj≠i Yj − C)−, (Yi − maxj≠i Yj + C)+]. Those intervals were constrained, so that even if a treatment is inferred to be the best, no positive bound on how much it is better than the rest is given, a somewhat undesirable property. In this article it is shown that by employing a slightly larger critical value, the nonpositivity constraint on the lower bound is removed.
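The two interval forms discussed here are simple to compute once a critical value is in hand. A minimal sketch, assuming the critical value times the standard error is supplied as `c` (the function and argument names are illustrative, and the sampling theory behind `c` is taken as given):

```python
def mcb_intervals(sample_means, c, constrained=True):
    """Simultaneous intervals for theta_i - max_{j != i} theta_j.
    c is the critical value times the standard error (assumed given).
    constrained=True gives the form [-(d - c)^-, (d + c)^+], whose
    lower bound is never positive; constrained=False uses the raw
    lower bound d - c, which may be positive."""
    intervals = []
    for i, yi in enumerate(sample_means):
        best_other = max(y for j, y in enumerate(sample_means) if j != i)
        d = yi - best_other
        upper = max(d + c, 0.0)                       # (d + c)^+
        lower = min(d - c, 0.0) if constrained else d - c
        intervals.append((lower, upper))
    return intervals
```

With the constrained form, a treatment whose sample mean clearly leads still gets lower bound 0; removing the constraint (at the cost of a slightly larger critical value, per the abstract) lets that bound be positive.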
9.
A sample size selection procedure for paired comparisons of means is presented which controls the half width of the confidence intervals while allowing for unequal variances of treatment means.
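Such procedures typically invert the confidence-interval half-width formula to get a sample size. A normal-approximation sketch (illustrative only; the paper's procedure handles unequal variances and is more refined than this one-shot formula, and `paired_sample_size` is a hypothetical name):

```python
import math

def paired_sample_size(sd_diff, half_width, conf=0.95):
    """Number of pairs needed so that a conf-level confidence interval
    for a mean of paired differences has half-width at most half_width,
    using the normal approximation n >= (z * sd / E)^2."""
    z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[conf]
    return math.ceil((z * sd_diff / half_width) ** 2)
```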
10.
Shu-Fei Wu, Communications in Statistics: Simulation and Computation, 2013, 42(7): 2056-2064
In this paper, modified one-stage multiple comparison procedures with a control for exponential location parameters, based on doubly censored samples under heteroscedasticity, are proposed. A simulation study is done, and the results show that the proposed procedures have shorter confidence lengths, with coverage probabilities closer to the nominal ones, than the procedure proposed in Wu (2017). Finally, an example comparing the duration of remission for four drugs used as treatments for leukemia is given to demonstrate the proposed procedures.
11.
M. Rauf Ahmad, Journal of Statistical Computation and Simulation, 2019, 89(6): 1044-1059
Multiple comparisons for two or more mean vectors are considered when the dimension of the vectors may exceed the sample size, the design may be unbalanced, populations need not be normal, and the true covariance matrices may be unequal. Pairwise comparisons, including comparisons with a control, and their linear combinations are considered. Under fairly general conditions, the asymptotic multivariate distribution of the vector of test statistics is derived, whose quantiles can be used in multiple testing. Simulations are used to show the accuracy of the tests. Real data applications are also demonstrated.
12.
The purpose of this note is to derive, by a single approach, simple testing procedures for ANOVA under heteroscedasticity that are equivalent to those in the prior literature obtained by the parametric bootstrap and the generalized fiducial approach. By a similar approach, researchers are encouraged to derive generalized tests in other applications as alternatives to parametric bootstrap and fiducial tests, including ANCOVA and MANOVA under heteroscedasticity, and especially in mixed-model applications, where the bootstrap approach fails.
13.
Julia N. Soulakova, Communications in Statistics: Theory and Methods, 2017, 46(19): 9441-9449
A problem where one subpopulation is compared with several other subpopulations in terms of means, with the goal of estimating the smallest difference between the means, commonly arises in biology, medicine, and many other scientific fields. A generalization of the Strassburger-Bretz-Hochberg approach for two comparisons is presented for cases with three or more comparisons. The method allows constructing an interval estimator for the smallest mean difference which is compatible with the Min test. An application to a fluency-disorder study is illustrated. Simulations confirmed adequate probability coverage for normally distributed outcomes for a number of designs.
14.
David R. Bickel, Revue canadienne de statistique, 2011, 39(4): 610-631
The normalized maximum likelihood (NML) is a recent penalized likelihood that has properties that justify defining the amount of discrimination information (DI) in the data supporting an alternative hypothesis over a null hypothesis as the logarithm of an NML ratio, namely, the alternative hypothesis NML divided by the null hypothesis NML. The resulting DI, like the Bayes factor but unlike the P-value, measures the strength of evidence for an alternative hypothesis over a null hypothesis such that the probability of misleading evidence vanishes asymptotically under weak regularity conditions and such that evidence can support a simple null hypothesis. Instead of requiring a prior distribution, the DI satisfies a worst-case minimax prediction criterion. Replacing a (possibly pseudo-) likelihood function with its weighted counterpart extends the scope of the DI to models for which the unweighted NML is undefined. The likelihood weights leverage side information, either in data associated with comparisons other than the comparison at hand or in the parameter value of a simple null hypothesis. Two case studies, one involving multiple populations and the other involving multiple biological features, indicate that the DI is robust to the type of side information used when that information is assigned the weight of a single observation. Such robustness suggests that very little adjustment for multiple comparisons is warranted if the sample size is at least moderate. The Canadian Journal of Statistics 39: 610-631; 2011. © 2011 Statistical Society of Canada
15.
Charles W. Dunnett, Communications in Statistics: Theory and Methods, 2013, 42(22): 2611-2629
The use of several robust estimators of location with their associated variance estimates in a modified T-method for pairwise multiple comparisons between treatment means was compared with the sample mean and variance and with the k-sample rank sum test. The methods were compared with respect to the stability of their experimentwise error rates under a variety of non-normal situations (robustness of validity) and their average confidence interval lengths (robustness of efficiency).
16.
John H. Skillings, Communications in Statistics: Simulation and Computation, 2013, 42(4): 373-387
Two questions of interest involving nonparametric multiple comparisons are considered. The first question concerns whether it is appropriate to use a multiple comparison procedure as a test of the equality of k treatments, and if it is, which procedure performs best as a test. Our results show that for smaller k values some multiple comparison procedures perform well as tests. The second question concerns whether a joint ranking or a separate ranking multiple comparison procedure performs better as a test and as a device for treatment separation. We find that the joint ranking procedure does slightly better as a test, but for treatment separation the answer depends on the situation.
17.
Strong control, conservative point estimation and simultaneous conservative consistency of false discovery rates: a unified approach
John D. Storey, Jonathan E. Taylor, David Siegmund, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2004, 66(1): 187-205
Summary. The false discovery rate (FDR) is a multiple hypothesis testing quantity that describes the expected proportion of false positive results among all rejected null hypotheses. Benjamini and Hochberg introduced this quantity and proved that a particular step-up p-value method controls the FDR. Storey introduced a point estimate of the FDR for fixed significance regions. The former approach conservatively controls the FDR at a fixed predetermined level, and the latter provides a conservatively biased estimate of the FDR for a fixed predetermined significance region. In this work, we show in both finite sample and asymptotic settings that the goals of the two approaches are essentially equivalent. In particular, the FDR point estimates can be used to define valid FDR controlling procedures. In the asymptotic setting, we also show that the point estimates can be used to estimate the FDR conservatively over all significance regions simultaneously, which is equivalent to controlling the FDR at all levels simultaneously. The main tool that we use is to translate existing FDR methods into procedures involving empirical processes. This simplifies finite sample proofs, provides a framework for asymptotic results and proves that these procedures are valid even under certain forms of dependence.
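The Benjamini-Hochberg step-up p-value method referred to here can be sketched in a few lines (illustrative; `bh_reject` is a hypothetical name):

```python
def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject the k smallest
    p-values, where k is the largest rank with p_(k) <= alpha * k / m."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank
    return {order[r] for r in range(k)}   # indices of rejected hypotheses
```

Fixing alpha and finding the rejection region this way is exactly the direction that the FDR point-estimate view runs in reverse: fix the region, then estimate its FDR.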
18.
Michael A. Seaman, Communications in Statistics: Simulation and Computation, 2013, 42(2): 687-705
A sequentially rejective (SR) testing procedure introduced by Holm (1979) and modified (MSR) by Shaffer (1986) is considered for testing all pairwise mean comparisons. For such comparisons, both the SR and MSR methods require that the observed test statistics be ordered and compared, each in turn, to appropriate percentiles of Student's t distribution. For the MSR method these percentiles are based on the maximum number of true null hypotheses remaining at each stage of the sequential procedure, given prior significance at previous stages. A function is developed for determining this number from the number of means being tested and the stage of the test. For a test of all pairwise comparisons, the logical implications which follow the rejection of a null hypothesis render the MSR procedure uniformly more powerful than the SR procedure. Tables of percentiles for comparing K means, 3 ≤ K ≤ 6, using the MSR method are presented. These tables use Sidak's (1967) multiplicative inequality and simplify the use of the MSR procedure. Several modifications to the MSR are suggested as a means of further increasing the power for testing the pairwise comparisons. General use of the MSR and the corresponding function for testing parameters other than the mean is discussed.
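Holm's SR procedure, the baseline that the MSR modifies, can be sketched as follows. The MSR additionally exploits the logical structure of pairwise hypotheses to use larger denominators at later stages, which this sketch deliberately omits; `holm_reject` is a hypothetical name.

```python
def holm_reject(pvals, alpha=0.05):
    """Holm's step-down sequentially rejective procedure: compare the
    ordered p-values, smallest first, to alpha/m, alpha/(m-1), ...,
    stopping at the first failure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    rejected = set()
    for step, i in enumerate(order):
        if pvals[i] <= alpha / (m - step):
            rejected.add(i)
        else:
            break                     # all remaining hypotheses retained
    return rejected
```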
19.
This paper considers the multiple comparisons problem for normal variances. We propose a solution based on a Bayesian model selection procedure to this problem in which no subjective input is considered. We construct the intrinsic and fractional priors for which the Bayes factors and model selection probabilities are well defined. The posterior probability of each model is used as a model selection tool. The behaviour of these Bayes factors is compared with the Bayesian information criterion of Schwarz and some frequentist tests.
20.
Based on two-sample rank order statistics, a repeated significance testing procedure for a multi-sample location problem is considered. The asymptotic distribution theory of the proposed tests is given under the null hypothesis as well as under local alternatives. A Bahadur efficiency result for the repeated significance test relative to the terminal test based solely on the target sample size is presented. In adapting the proposed tests to multiple comparisons, an asymptotically equivalent test statistic in terms of the rank estimators of the location parameters is derived, from which the Scheffé method of multiple comparisons can be obtained in a convenient way.