Similar Documents
A total of 20 similar documents were retrieved.
1.
Abstract. We propose a confidence envelope for false discovery control when testing multiple hypotheses of association simultaneously. The method is valid under arbitrary and unknown dependence between the test statistics and allows for an exploratory approach when choosing suitable rejection regions while still retaining strong control over the proportion of false discoveries.

2.
Summary. The false discovery rate (FDR) is a multiple hypothesis testing quantity that describes the expected proportion of false positive results among all rejected null hypotheses. Benjamini and Hochberg introduced this quantity and proved that a particular step-up p-value method controls the FDR. Storey introduced a point estimate of the FDR for fixed significance regions. The former approach conservatively controls the FDR at a fixed predetermined level, and the latter provides a conservatively biased estimate of the FDR for a fixed predetermined significance region. In this work, we show in both finite sample and asymptotic settings that the goals of the two approaches are essentially equivalent. In particular, the FDR point estimates can be used to define valid FDR controlling procedures. In the asymptotic setting, we also show that the point estimates can be used to estimate the FDR conservatively over all significance regions simultaneously, which is equivalent to controlling the FDR at all levels simultaneously. The main tool that we use is to translate existing FDR methods into procedures involving empirical processes. This simplifies finite sample proofs, provides a framework for asymptotic results and proves that these procedures are valid even under certain forms of dependence.
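As a concrete reference point for the two approaches compared above, the sketch below implements the Benjamini-Hochberg step-up rule and a Storey-type point estimate of the FDR for a fixed rejection region [0, t]. It is a minimal illustration of the standard formulas, not code from the paper; the function and variable names are invented here.

```python
import numpy as np

def bh_stepup(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up: reject the k smallest p-values, where k is
    the largest i such that p_(i) <= i * alpha / m."""
    p = np.asarray(pvalues)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])      # largest index meeting the bound
        rejected[order[: k + 1]] = True
    return rejected

def storey_fdr_estimate(pvalues, t, lam=0.5):
    """Storey-type point estimate of the FDR for the fixed region [0, t]:
    pi0_hat * m * t divided by the number of rejections."""
    p = np.asarray(pvalues)
    m = len(p)
    pi0_hat = min(np.sum(p > lam) / (m * (1.0 - lam)), 1.0)  # conservative pi0 estimate
    n_reject = max(np.sum(p <= t), 1)                         # avoid division by zero
    return min(pi0_hat * m * t / n_reject, 1.0)
```

The equivalence studied in the abstract says, roughly, that choosing the largest t for which storey_fdr_estimate stays below a level alpha defines a valid FDR controlling rule of the same kind as bh_stepup.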

3.
Abstract. Controlling the false discovery rate (FDR) is a powerful approach to multiple testing, with procedures developed for applications in many areas. Dependence among the test statistics is a common problem, and many attempts have been made to extend the procedures. In this paper, we show that a certain degree of dependence is allowed among the test statistics, when the number of tests is large, with no need for any correction. We then suggest a way to conservatively estimate the proportion of false nulls, both under dependence and independence, and discuss the advantages of using such estimators when controlling the FDR.

4.
Multiple comparison, which tests tens of thousands of hypotheses simultaneously, has been developed extensively under the assumption of independence of the test statistics. In practice, however, the independence assumption, without which multiple comparison inference becomes very challenging, may not hold. Fan, Han and Gu proposed an efficient way to estimate the false discovery proportion when the test statistics are normally distributed with arbitrary covariance dependence. The normality assumption limits the applicability of their estimator. In this article, we generalize their method to cases where the test statistics are built from normally distributed samples.

5.
Abstract. A new multiple testing procedure, the generalized augmentation procedure (GAUGE), is introduced. The procedure is shown to control the false discovery exceedance and to be competitive in terms of power. It is also shown how to apply the idea of GAUGE to achieve control of other error measures. Extensions to dependence are discussed, together with a modification valid under arbitrary dependence. We present an application to an original study on prostate cancer and to a benchmark data set on colon cancer.
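GAUGE is described as a generalized augmentation procedure; the basic augmentation idea it builds on (due to van der Laan, Dudoit and Pollard) can be sketched as follows. This is only an illustration of that underlying step, with invented names, not the GAUGE procedure itself.

```python
import numpy as np

def augmentation_fdx(pvalues, fwer_rejected, gamma=0.1):
    """Basic augmentation step for false discovery exceedance control: start
    from a rejection set with familywise error control and add the
    floor(gamma * r0 / (1 - gamma)) next-smallest p-values; the false
    discovery proportion can then exceed gamma only if the initial set
    already contained a false rejection."""
    p = np.asarray(pvalues)
    fwer_rejected = np.asarray(fwer_rejected, dtype=bool)
    r0 = int(fwer_rejected.sum())
    n_extra = int(np.floor(gamma * r0 / (1.0 - gamma)))
    augmented = fwer_rejected.copy()
    # not-yet-rejected hypotheses, ordered by p-value (rejected ones pushed last)
    candidates = np.argsort(np.where(fwer_rejected, np.inf, p))
    augmented[candidates[:n_extra]] = True
    return augmented
```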

6.
Sun [2006, The Bahadur representation for sample quantiles under weak dependence. Statist. Probab. Lett. 76, 1238-1244] established the Bahadur representation for sample quantiles under strongly mixing sequences, but there are some problems in the proofs of the main results. In this paper, we further investigate the Bahadur representation for sample quantiles under strongly mixing sequences and obtain a better bound than that in Sun (2006).
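For reference, the Bahadur representation in question expresses the sample p-quantile in terms of the empirical distribution function; in standard notation (a generic statement, not a result quoted from either paper),

\[
\hat{\xi}_{p,n} \;=\; \xi_p \;+\; \frac{p - F_n(\xi_p)}{f(\xi_p)} \;+\; R_n ,
\]

where F_n is the empirical distribution function, f is the marginal density at the true quantile \xi_p, and the results concern the almost-sure order of the remainder R_n under mixing conditions.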

7.
Multiple hypothesis testing (MHT) based on the false discovery rate (FDR) has become an effective new approach to large-scale statistical inference. Taking error control as its main thread, this paper reviews the theory, methods, procedures and recent advances in error control for multiple hypothesis testing, and discusses prospects for the application of multiple testing methods in econometrics.

8.
Summary. The use of a fixed rejection region for multiple hypothesis testing has been shown to outperform standard fixed error rate approaches when applied to control of the false discovery rate. In this work it is demonstrated that, if the original step-up procedure of Benjamini and Hochberg is modified to exercise adaptive control of the false discovery rate, its performance is virtually identical to that of the fixed rejection region approach. In addition, the dependence of both methods on the proportion of true null hypotheses is explored, with a focus on the difficulties that are involved in the estimation of this quantity.
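The adaptive modification referred to here amounts to replacing the total number of hypotheses m in the Benjamini-Hochberg threshold with an estimate of the number of true nulls. A minimal sketch under that reading, with a hypothetical estimate m0_hat supplied by the caller:

```python
import numpy as np

def adaptive_bh(pvalues, alpha, m0_hat):
    """Adaptive Benjamini-Hochberg step-up: identical to the original
    procedure except that the threshold uses an estimate m0_hat of the number
    of true null hypotheses in place of the total number of hypotheses m."""
    p = np.asarray(pvalues)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= np.arange(1, m + 1) * alpha / max(m0_hat, 1)
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        rejected[order[: k + 1]] = True
    return rejected
```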

9.
Simultaneously testing a family of n null hypotheses can arise in many applications. A common problem in multiple hypothesis testing is to control the Type-I error. The probability of at least one false rejection, referred to as the familywise error rate (FWER), is one of the earliest error rate measures. Many FWER-controlling procedures have been proposed. The ability to control the FWER and achieve higher power is often used to evaluate the performance of a controlling procedure. However, when testing multiple hypotheses, FWER and power are not sufficient for evaluating a controlling procedure's performance. Furthermore, the performance of a controlling procedure is also governed by experimental parameters such as the number of hypotheses, sample size, the number of true null hypotheses and data structure. This paper evaluates, under various experimental settings, the performance of some FWER-controlling procedures in terms of five indices: the FWER, the false discovery rate, the false non-discovery rate, the sensitivity and the specificity. The results can provide guidance on how to select an appropriate FWER-controlling procedure to meet a study's objective.
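The five indices are all functions of the rejection decisions and the truth, which is known in a simulation; a small helper along these lines makes the definitions concrete. The names and the dictionary layout are mine, not the paper's.

```python
import numpy as np

def evaluation_indices(rejected, is_true_null):
    """Per-replication quantities behind the five indices: averaging
    any_false_rejection over replications gives the FWER, averaging fdp gives
    the FDR, averaging fnp gives the false non-discovery rate, and sensitivity
    and specificity are averaged in the same way."""
    rejected = np.asarray(rejected, dtype=bool)
    is_true_null = np.asarray(is_true_null, dtype=bool)
    V = np.sum(rejected & is_true_null)        # false rejections
    R = np.sum(rejected)                       # total rejections
    T = np.sum(~rejected & ~is_true_null)      # non-nulls that were not rejected
    W = np.sum(~rejected)                      # total non-rejections
    return {
        "any_false_rejection": V > 0,
        "fdp": V / max(R, 1),
        "fnp": T / max(W, 1),
        "sensitivity": np.sum(rejected & ~is_true_null) / max(np.sum(~is_true_null), 1),
        "specificity": np.sum(~rejected & is_true_null) / max(np.sum(is_true_null), 1),
    }
```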

10.
Many exploratory studies such as microarray experiments require the simultaneous comparison of hundreds or thousands of genes. In most microarray experiments, the majority of genes are not expected to be differentially expressed. Under such a setting, a procedure that is designed to control the false discovery rate (FDR) is aimed at identifying as many potential differentially expressed genes as possible. The usual FDR controlling procedure is constructed based on the number of hypotheses. However, it can become very conservative when some of the alternative hypotheses are expected to be true. The power of a controlling procedure can be improved if the number of true null hypotheses (m0) instead of the number of hypotheses is incorporated in the procedure [Y. Benjamini and Y. Hochberg, On the adaptive control of the false discovery rate in multiple testing with independent statistics, J. Edu. Behav. Statist. 25 (2000), pp. 60–83]. Nevertheless, m0 is unknown and has to be estimated. The objective of this article is to evaluate some existing estimators of m0 and discuss the feasibility of incorporating these estimators into FDR controlling procedures under various experimental settings. The results of simulations can help the investigator to choose an appropriate procedure to meet the requirement of the study.
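Two simple estimators of m0 of the kind such studies evaluate can be written down directly; the sketch below shows a Storey-style lambda-threshold estimator and an average-p-value estimator. These are generic illustrations, not necessarily the specific estimators compared in the article.

```python
import numpy as np

def m0_storey(pvalues, lam=0.5):
    """Storey-style lambda estimator: p-values above lam should come mostly
    from true nulls, which are uniform, so the count above lam is scaled up
    by 1 / (1 - lam)."""
    p = np.asarray(pvalues)
    return min(len(p), np.sum(p > lam) / (1.0 - lam))

def m0_mean_p(pvalues):
    """Average-p-value estimator: under a uniform null E[p] = 1/2, so twice
    the mean p-value is a conservative estimate of the null proportion."""
    p = np.asarray(pvalues)
    return min(len(p), 2.0 * np.mean(p) * len(p))
```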

11.
Summary. The paper considers the problem of multiple testing under dependence in a compound decision theoretic framework. The observed data are assumed to be generated from an underlying two-state hidden Markov model. We propose oracle and asymptotically optimal data-driven procedures that aim to minimize the false non-discovery rate (FNR) subject to a constraint on the false discovery rate (FDR). It is shown that the performance of a multiple-testing procedure can be substantially improved by adaptively exploiting the dependence structure among hypotheses, and hence conventional FDR procedures that ignore this structural information are inefficient. Both theoretical properties and numerical performances of the procedures proposed are investigated. It is shown that the procedures proposed control FDR at the desired level, enjoy certain optimality properties and are especially powerful in identifying clustered non-null cases. The new procedure is applied to an influenza-like illness surveillance study for detecting the timing of epidemic periods.
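Procedures of this type share a common final step: rank hypotheses by an estimated posterior probability of being null (here obtained from the fitted two-state hidden Markov model) and reject the largest prefix whose running average stays below the nominal level. The sketch below shows only that thresholding step, assuming the posterior null probabilities are already available; it illustrates the general recipe, not the authors' exact procedure.

```python
import numpy as np

def threshold_posterior_null_probs(post_null, alpha=0.05):
    """Step-up rule on posterior null probabilities: reject the k hypotheses
    with the smallest values, where k is the largest prefix whose running
    mean of posterior null probabilities is <= alpha."""
    q = np.asarray(post_null)
    order = np.argsort(q)
    running_mean = np.cumsum(q[order]) / np.arange(1, len(q) + 1)
    rejected = np.zeros(len(q), dtype=bool)
    ok = running_mean <= alpha
    if ok.any():
        k = np.max(np.nonzero(ok)[0])
        rejected[order[: k + 1]] = True
    return rejected
```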

12.
Consider testing multiple hypotheses using tests that can only be evaluated by simulation, such as permutation tests or bootstrap tests. This article introduces MMCTest, a sequential algorithm that gives, with arbitrarily high probability, the same classification as a specific multiple testing procedure applied to ideal p-values. The method can be used with a class of multiple testing procedures that includes the Benjamini and Hochberg false discovery rate procedure and the Bonferroni correction controlling the familywise error rate. One of the key features of the algorithm is that it stops sampling for all the hypotheses that can already be decided as being rejected or non-rejected. MMCTest can be interrupted at any stage and then returns three sets of hypotheses: the rejected, the non-rejected and the undecided hypotheses. A simulation study motivated by actual biological data shows that MMCTest is usable in practice and that, despite the additional guarantee, it can be computationally more efficient than other methods.
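The early-stopping idea can be illustrated for a single hypothesis with a confidence-interval rule: keep simulating until an interval for the Monte Carlo p-value lies entirely on one side of the current decision threshold. The sketch below conveys only this underlying idea, with invented names (exceed_indicator, threshold); it is not the MMCTest algorithm.

```python
import numpy as np
from scipy.stats import beta

def mc_pvalue_with_stopping(exceed_indicator, threshold, max_draws=100000,
                            batch=100, conf=1e-3):
    """Estimate a permutation/bootstrap p-value by simulation, stopping early
    once a Clopper-Pearson interval for it lies entirely on one side of the
    decision threshold. exceed_indicator() should return True when a simulated
    statistic is at least as extreme as the observed one."""
    k, n = 0, 0
    while n < max_draws:
        k += sum(exceed_indicator() for _ in range(batch))
        n += batch
        lower = beta.ppf(conf / 2, k, n - k + 1) if k > 0 else 0.0
        upper = beta.ppf(1 - conf / 2, k + 1, n - k) if k < n else 1.0
        if upper < threshold:
            return "reject", (k + 1) / (n + 1)
        if lower > threshold:
            return "non-reject", (k + 1) / (n + 1)
    return "undecided", (k + 1) / (n + 1)
```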

13.
High-throughput data analyses are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. False discovery rate (FDR) has been considered a proper type I error rate to control for discovery-based high-throughput data analysis. Various multiple testing procedures have been proposed to control the FDR. The power and stability properties of some commonly used multiple testing procedures have not been extensively investigated yet, however. Simulation studies were conducted to compare power and stability properties of five widely used multiple testing procedures at different proportions of true discoveries for various sample sizes for both independent and dependent test statistics. Storey's two linear step-up procedures showed the best performance among all tested procedures considering FDR control, power, and variance of true discoveries. Leukaemia and ovarian cancer microarray studies were used to illustrate the power and stability characteristics of these five multiple testing procedures with FDR control.

14.
In many scientific fields, it is interesting and important to determine whether an observed data stream comes from a prespecified model or not, particularly when the number of data streams is of large scale, where multiple hypothesis testing is necessary. In this article, we consider large-scale model checking under certain dependence among different data streams observed at the same time. We propose a false discovery rate (FDR) control procedure to check those unusual data streams. Specifically, we derive an approximation of false discovery and construct a point estimate of the FDR. Theoretical results show that, under some mild assumptions, our proposed estimate of the FDR is simultaneously conservatively consistent with the true FDR, and hence it is an asymptotically strong control procedure. Simulation comparisons with some competing procedures show that our proposed FDR procedure behaves better in general settings. Application of our proposed FDR procedure is illustrated by the StarPlus fMRI data.

15.
Most current false discovery rate (FDR) procedures in a microarray experiment assume restrictive dependence structures, which makes them less reliable. We present an FDR controlling procedure under suitable dependence structures based on a Poisson distributional approximation. Unlike other procedures, the distribution of the false null hypotheses is estimated by kernel density estimation, allowing for dependence structures among the genes. Furthermore, we develop an FDR framework that minimizes the false nondiscovery rate (FNR) with a constraint on the controlled level of the FDR. The performance of the proposed FDR procedure is compared with that of other existing FDR controlling procedures, with an application to a microarray study of simulated data.

16.
We consider hypothesis testing problems for low-dimensional coefficients in a high dimensional additive hazard model. A variance reduced partial profiling estimator (VRPPE) is proposed and its asymptotic normality is established, which enables us to test the significance of each single coefficient when the data dimension is much larger than the sample size. Based on the p-values obtained from the proposed test statistics, we then apply a multiple testing procedure to identify significant coefficients and show that the false discovery rate can be controlled at the desired level. The proposed method is also extended to testing a low-dimensional sub-vector of coefficients. The finite sample performance of the proposed testing procedure is evaluated by simulation studies. We also apply it to two real data sets, with one focusing on testing low-dimensional coefficients and the other focusing on identifying significant coefficients through the proposed multiple testing procedure.

17.
We consider the problem of estimating the proportion θ of true null hypotheses in a multiple testing context. The setup is classically modelled through a semiparametric mixture with two components: a uniform distribution on the interval [0,1] with prior probability θ and a non-parametric density f. We discuss asymptotic efficiency results and establish that two different cases occur according to whether f vanishes on a non-empty interval or not. In the first case, we exhibit estimators converging at a parametric rate, compute the optimal asymptotic variance and conjecture that no estimator is asymptotically efficient (i.e. attains the optimal asymptotic variance). In the second case, we prove that the quadratic risk of any estimator does not converge at a parametric rate. We illustrate those results on simulated data.
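Written out, the mixture in this abstract says that the p-value density g satisfies

\[
g(x) \;=\; \theta \,\mathbf{1}_{[0,1]}(x) \;+\; (1-\theta)\, f(x), \qquad x \in [0,1],
\]

so on any subinterval where f vanishes one has g = θ there, which is what separates the first case (parametric rate attainable) from the second.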

18.
For a specified decision rule, a general class of likelihood ratio based repeated significance tests is considered. An invariance principle for the likelihood ratio statistics is established and incorporated in the study of the asymptotic theory of the proposed tests. For comparing these tests with the conventional likelihood ratio tests, based solely on the target sample size, some Bahadur efficiency results are presented. The theory is then adapted to the study of some multiple comparison procedures.

19.
In this paper, the Bahadur representation of sample quantiles for ψ-mixing sequences is obtained under given conditions. As an application, uniform asymptotic normality is derived.

20.
We examine the issue of asymptotic efficiency of estimation for response adaptive designs of clinical trials, in which the collected data contain a dependency structure. We establish the asymptotic lower bound of exponential rates for consistent estimators. Under certain regularity conditions, we show that the maximum likelihood estimator achieves the asymptotic lower bound for response adaptive trials with dichotomous responses. Furthermore, it is shown that the maximum likelihood estimator of the treatment effect is asymptotically efficient in the Bahadur sense for response adaptive clinical trials.
