Similar Articles
10 similar articles found
1.
This paper considers p-value based stepwise rejection procedures for testing multiple hypotheses. Existing procedures use constants as critical values at all steps. With the intention of incorporating the exact magnitudes of the p-values at the earlier steps into the decisions at the later steps, this paper applies a different strategy in which the critical values at the later steps are determined as functions of the p-values from the earlier steps. As a result, we derive a new equality and, building on it, develop a two-step rejection procedure. The new procedure is a shortcut of a step-up procedure and is very simple to apply. In terms of power, the proposed procedure is generally comparable to the existing ones and markedly superior when the largest p-value is anticipated to be less than 0.5.
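The abstract does not spell out the new data-dependent critical-value functions, so as a point of reference here is a minimal sketch of the classical setting the paper departs from: a step-up procedure (Hochberg's) whose critical values are constants. The function name and inputs are illustrative, not the authors' code.

```python
# Hochberg's step-up procedure with constant critical values alpha/(m - i + 1);
# the classical constant-threshold baseline that a data-dependent two-step
# procedure generalizes. Illustration only.
import numpy as np

def hochberg_step_up(pvals, alpha=0.05):
    """Return a boolean rejection vector for Hochberg's step-up procedure."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                # indices of p-values, smallest first
    sorted_p = p[order]
    reject = np.zeros(m, dtype=bool)
    # Step up from the largest p-value: find the largest i with
    # p_(i) <= alpha / (m - i + 1), then reject H_(1), ..., H_(i).
    for i in range(m, 0, -1):
        if sorted_p[i - 1] <= alpha / (m - i + 1):
            reject[order[:i]] = True
            break
    return reject

print(hochberg_step_up([0.001, 0.012, 0.04, 0.3], alpha=0.05))
```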

2.
Simultaneously testing a family of n null hypotheses can arise in many applications. A common problem in multiple hypothesis testing is to control the Type-I error. The probability of at least one false rejection, referred to as the familywise error rate (FWER), is one of the earliest error rate measures. Many FWER-controlling procedures have been proposed, and the ability to control the FWER while achieving higher power is often used to evaluate their performance. However, when testing multiple hypotheses, the FWER and power alone are not sufficient for evaluating a controlling procedure's performance. Furthermore, the performance of a controlling procedure is also governed by experimental parameters such as the number of hypotheses, the sample size, the number of true null hypotheses and the data structure. This paper evaluates, under various experimental settings, the performance of several FWER-controlling procedures in terms of five indices: the FWER, the false discovery rate, the false non-discovery rate, the sensitivity and the specificity. The results can provide guidance on how to select an appropriate FWER-controlling procedure to meet a study's objective.
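The exact definitions used in the paper may differ in detail, but the following sketch shows one common way to estimate the five indices from simulation output; `rejected` and `is_true_null` are illustrative inputs, not the paper's data.

```python
# Estimate FWER, FDR, FNR, sensitivity and specificity from simulated
# rejection decisions. Both inputs are boolean arrays of shape
# (n_replications, n_hypotheses).
import numpy as np

def evaluation_indices(rejected, is_true_null):
    V = (rejected & is_true_null).sum(axis=1)        # false rejections per replication
    R = rejected.sum(axis=1)                         # total rejections
    T = (~rejected & ~is_true_null).sum(axis=1)      # missed alternatives (false non-discoveries)
    W = (~rejected).sum(axis=1)                      # total non-rejections
    n_alt = (~is_true_null).sum(axis=1)              # number of false nulls
    n_null = is_true_null.sum(axis=1)                # number of true nulls
    return {
        "FWER": np.mean(V >= 1),
        "FDR": np.mean(V / np.maximum(R, 1)),
        "FNR": np.mean(T / np.maximum(W, 1)),
        "sensitivity": np.mean((rejected & ~is_true_null).sum(axis=1) / np.maximum(n_alt, 1)),
        "specificity": np.mean((~rejected & is_true_null).sum(axis=1) / np.maximum(n_null, 1)),
    }
```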

3.
Many multiple testing procedures (MTPs) are available today, and their number is growing. Also available are many type I error rates: the family-wise error rate (FWER), the false discovery rate, the proportion of false positives, and others. Most MTPs are designed to control a specific type I error rate, and it is hard to compare different procedures. We approach the problem by studying the exact level at which threshold step-down (TSD) procedures (an important class of MTPs exemplified by the classic Holm procedure) control the generalized FWER, defined as the probability of k or more false rejections. We find that level explicitly for any TSD procedure and any k. No assumptions are made about the dependency structure of the p-values of the individual tests. We derive from our formula a criterion for unimprovability of a procedure in the class of TSD procedures controlling the generalized FWER at a given level. In turn, this criterion implies that for each k the number of such unimprovable procedures is finite and is greater than one if k > 1. Consequently, in this case the most rejective procedure in the above class does not exist.
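The paper's exact-level formula is not reproduced in the abstract; the sketch below only illustrates the TSD class itself: reject in order of increasing p-value against a fixed non-decreasing threshold sequence, stopping at the first failure. The threshold sequence shown recovers the classic Holm procedure.

```python
# Generic threshold step-down (TSD) procedure: reject H_(1), H_(2), ... as long
# as p_(i) <= t_i for a fixed non-decreasing threshold sequence; stop at the
# first non-rejection. With t_i = alpha / (m - i + 1) this is Holm's procedure.
import numpy as np

def threshold_step_down(pvals, thresholds):
    p = np.asarray(pvals, dtype=float)
    t = np.asarray(thresholds, dtype=float)
    order = np.argsort(p)
    reject = np.zeros(p.size, dtype=bool)
    for step, idx in enumerate(order):
        if p[idx] <= t[step]:
            reject[idx] = True
        else:
            break                        # step-down: stop at the first failure
    return reject

m, alpha = 5, 0.05
holm_thresholds = alpha / (m - np.arange(m))      # alpha/m, ..., alpha/1
print(threshold_step_down([0.004, 0.03, 0.011, 0.5, 0.2], holm_thresholds))
```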

4.
The issues and dangers involved in testing multiple hypotheses are well recognised within the pharmaceutical industry. In reporting clinical trials, strenuous efforts are taken to avoid inflation of the type I error, with procedures such as the Bonferroni adjustment and its many elaborations and refinements being widely employed. Typically, such methods are conservative: they tend to be accurate if the multiple test statistics involved are mutually independent, and to achieve less than the specified type I error rate if these statistics are positively correlated. An alternative approach is to estimate the correlations between the test statistics and to perform a test that is conditional on those estimates being the true correlations. In this paper, we begin by assuming that the test statistics are normally distributed and that their correlations are known. Under these circumstances, we explore several approaches to multiple testing, adapt them so that the type I error is preserved exactly, and then compare their powers over a range of true parameter values. For simplicity, the explorations are confined to the bivariate case. Having described the relative strengths and weaknesses of the approaches under study, we use simulation to assess the accuracy of the approximate theory developed when the correlations are estimated from the study data rather than being known in advance, and when data are binary so that the test statistics are only approximately normally distributed.
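As a minimal sketch of the bivariate known-correlation setting described here (not the paper's specific procedures), the code below finds a two-sided critical value that exhausts the familywise level exactly for a pair of correlated normal test statistics, and contrasts it with the conservative Bonferroni value. The correlation and level are illustrative.

```python
# Exact common critical value c with P(|Z1| > c or |Z2| > c) = alpha for
# bivariate normal test statistics with known correlation rho, found by
# inverting the rectangle probability; compared with the Bonferroni value.
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import brentq

def exact_critical_value(rho, alpha=0.05):
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

    def rect(c):   # P(|Z1| <= c, |Z2| <= c) via inclusion-exclusion on the CDF
        return (mvn.cdf([c, c]) - mvn.cdf([-c, c])
                - mvn.cdf([c, -c]) + mvn.cdf([-c, -c]))

    return brentq(lambda c: (1.0 - rect(c)) - alpha, 1.0, 4.0)

rho = 0.6
print("Bonferroni critical value:", norm.ppf(1 - 0.05 / 4))   # two two-sided tests
print("Exact critical value     :", exact_critical_value(rho))
```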

5.
We consider the problem of m simultaneous statistical test problems with composite null hypotheses. Usually, marginal p-values are computed under least favorable parameter configurations (LFCs), thus being over-conservative under non-LFCs. Our proposed randomized p-value leads to a tighter exhaustion of the marginal (local) significance level. In turn, it is stochastically larger than the LFC-based p-value under alternatives. While these distributional properties are typically nonsensical for m = 1, the exhaustion of the local significance level is extremely helpful for cases with m > 1 in connection with data-adaptive multiple tests, as we demonstrate by considering multiple one-sided tests for Gaussian means.
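The randomized construction itself is not reproduced here; the short Monte Carlo sketch below only illustrates the conservativeness it is designed to mitigate: LFC-based p-values for a one-sided Gaussian-mean test are calibrated at the boundary mu = 0 and are stochastically larger than uniform under interior null points. Sample size and null values are illustrative.

```python
# LFC-based p-values for H: mu <= 0 vs K: mu > 0, computed under mu = 0.
# At the LFC they are exact; under a non-LFC null (mu < 0) the rejection
# probability at level 0.05 drops far below 0.05 (over-conservativeness).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, reps = 25, 100_000

for mu in (0.0, -0.3):                       # LFC boundary vs interior null point
    xbar = rng.normal(mu, 1.0 / np.sqrt(n), size=reps)
    p_lfc = norm.sf(np.sqrt(n) * xbar)       # p-value evaluated at the LFC mu = 0
    print(f"mu = {mu:+.1f}: P(p <= 0.05) = {np.mean(p_lfc <= 0.05):.4f}")
```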

6.
We are concerned with three different types of multivariate chi-square distributions. Their members play important roles as limiting distributions of vectors of test statistics in several applications of multiple hypotheses testing. We explain these applications and consider the computation of multiplicity-adjusted p-values under the respective global hypothesis. By means of numerical examples, we demonstrate how much gain in level exhaustion or, equivalently, power can be achieved with the corresponding multivariate multiple tests compared with approaches that are based only on univariate marginal distributions and do not take the dependence structure among the test statistics into account. As a further contribution of independent value, we provide an overview of essentially all analytic formulas available to date for computing multivariate chi-square probabilities of the considered types. These formulas were scattered in the previous literature and are presented here in a unified manner.
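The analytic formulas surveyed in the paper are not reproduced here; the sketch below makes the same point by Monte Carlo: an adjusted p-value based on the joint distribution of correlated chi-square(1) statistics is less conservative than a Bonferroni adjustment built from the univariate marginals alone. The equicorrelation structure and the observed statistic are illustrative.

```python
# Multiplicity-adjusted p-value from the joint law of m dependent chi-square(1)
# statistics (Monte Carlo) versus a marginal, dependence-ignoring Bonferroni
# adjustment.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
m, rho, reps = 5, 0.7, 200_000
R = np.full((m, m), rho) + (1 - rho) * np.eye(m)   # equicorrelated normal vector

Z = rng.multivariate_normal(np.zeros(m), R, size=reps)
max_chisq = (Z ** 2).max(axis=1)                   # max of m dependent chi-square(1) stats

t_obs = 7.5                                        # observed test statistic (illustrative)
p_adj_joint = np.mean(max_chisq >= t_obs)          # exploits the dependence structure
p_adj_bonf = min(1.0, m * chi2.sf(t_obs, df=1))    # marginal Bonferroni adjustment
print(f"joint-based adjusted p: {p_adj_joint:.4f}, Bonferroni: {p_adj_bonf:.4f}")
```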

7.
8.
In the literature on the lifelengths of experimental units, little attention has been paid to models in which shocks to the units generate outliers. In the present article, we consider a situation where n experimental units under investigation receive shocks at several time points. The parameter values of the lifelength distribution may change due to each shock, resulting in the generation of outliers. We derive the likelihood ratio test statistic to investigate whether the shocks have significantly altered the parameter values. We also derive a likelihood ratio test under the labelled slippage alternative with multiple contaminations. Monte Carlo studies have been carried out to investigate the power of the proposed test statistics.
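The authors' model and test statistics are not given in the abstract; the sketch below only shows the generic likelihood-ratio mechanics for such a question under an assumed exponential lifelength model, testing whether the rate changes after a shock. The model choice and data are illustrative assumptions.

```python
# Hedged illustration (not the authors' model): likelihood ratio test for a
# change in an exponential lifelength rate after a shock, comparing a common
# rate (null) against separate pre- and post-shock rates (alternative).
import numpy as np
from scipy.stats import chi2

def exp_loglik(x, rate):
    # log-likelihood of an exponential sample with the given rate
    return x.size * np.log(rate) - rate * x.sum()

def lrt_rate_change(before, after):
    before, after = np.asarray(before, float), np.asarray(after, float)
    pooled = np.concatenate([before, after])
    # MLE of an exponential rate is 1 / sample mean.
    ll_null = exp_loglik(pooled, 1.0 / pooled.mean())
    ll_alt = exp_loglik(before, 1.0 / before.mean()) + exp_loglik(after, 1.0 / after.mean())
    stat = 2.0 * (ll_alt - ll_null)
    return stat, chi2.sf(stat, df=1)       # one extra free parameter under the alternative

rng = np.random.default_rng(2)
stat, p = lrt_rate_change(rng.exponential(1.0, 40), rng.exponential(2.5, 40))
print(f"LRT statistic = {stat:.2f}, p-value = {p:.4f}")
```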

9.
Censored survival data are analysed by regression models which require some assumptions on the way covariates affect the hazard function. Proportional Hazards (PH) and Accelerated Failure Time (AFT) are the hypotheses most often used in practice. A method is introduced here for testing the PH and the AFT hypotheses against a general model for the hazard function. Simulated and real data are presented to show the usefulness of the method.
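This is not the authors' test against a general hazard model; as a familiar baseline only, the snippet below runs a standard Schoenfeld-residual-based check of the PH assumption with the lifelines package and its bundled Rossi recidivism data.

```python
# Standard PH-assumption check (Schoenfeld residuals) with lifelines, shown
# purely as a point of comparison with the more general test described above.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.statistics import proportional_hazard_test

rossi = load_rossi()
cph = CoxPHFitter().fit(rossi, duration_col="week", event_col="arrest")

# Per-covariate test of time-varying effects; small p-values flag PH violations.
result = proportional_hazard_test(cph, rossi, time_transform="rank")
result.print_summary()
```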

10.
The sequential probability ratio test (SPRT) chart is a very effective tool for monitoring manufacturing processes. This paper proposes a rational SPRT chart to monitor both the process mean and variance. The chart determines the sampling interval d based on the rational subgroup concept, according to the process conditions and administrative considerations. Since rational subgrouping is widely adopted in the design and implementation of control charts, the study of the rational SPRT has practical significance. The rational SPRT chart is designed optimally to minimize the average extra quadratic loss index for the best overall performance. A systematic performance study has also been conducted. From an overall viewpoint, the rational SPRT chart is more effective than the cumulative sum chart by more than 63%. Furthermore, this article provides a design table containing the optimal parameter values of the rational SPRT charts for different specifications, which will greatly facilitate potential users in selecting an appropriate SPRT chart for their applications. Users can also justify the application of the rational SPRT chart according to the achievable enhancement in detection effectiveness.
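The rational subgrouping, sampling-interval choice and loss-index optimization of the proposed chart are not reproduced here; the sketch below only shows the underlying building block, Wald's SPRT for a shift in a normal mean with known standard deviation. Shift size and error rates are illustrative.

```python
# Wald's sequential probability ratio test for N(mu0, sigma^2) vs N(mu1, sigma^2):
# accumulate the log-likelihood ratio and stop at the first boundary crossing.
import numpy as np

def wald_sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    upper = np.log((1 - beta) / alpha)       # signal (accept H1) boundary
    lower = np.log(beta / (1 - alpha))       # in-control (accept H0) boundary
    llr = 0.0
    for i, x in enumerate(samples, start=1):
        # log-likelihood ratio increment for the normal mean-shift problem
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma ** 2
        if llr >= upper:
            return "signal", i
        if llr <= lower:
            return "in control", i
    return "undecided", len(samples)

rng = np.random.default_rng(3)
print(wald_sprt(rng.normal(0.0, 1.0, 200)))  # process on target
print(wald_sprt(rng.normal(1.0, 1.0, 200)))  # shifted process mean
```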

