Similar literature
20 similar references found (search time: 31 ms)
1.
This paper studies the asymptotic behaviour of the false discovery and non-discovery proportions of the dynamic adaptive procedure under some dependence structure. A Bahadur-type representation of the cut point in simultaneously performing a large number of tests is presented. The asymptotic bias decompositions of the false discovery and non-discovery proportions are given under some dependence structure. Beyond the existing literature, we find that the randomness due to the dynamic selection of the tuning parameter in estimating the true null rate serves as a source of the approximation error in the Bahadur representation and enters into the asymptotic bias terms of both the false discovery proportion and the false non-discovery proportion. The theory explains to some extent why some seemingly attractive dynamic adaptive procedures do not substantially outperform the competing fixed adaptive procedures in some situations. Simulations support our theory and findings.

2.
Proactive evaluation of drug safety with systematic screening and detection is critical to protecting patients' safety, and important in regulatory approval of new drug indications and in postmarketing communications and label renewals. In recent years, quite a few statistical methodologies have been developed to better evaluate drug safety throughout the life cycle of product development. The statistical methods for flagging safety signals have been developed in two major areas: one for data collected from spontaneous reporting systems, mostly postmarketing, and the other for data from clinical trials. To our knowledge, the methods developed for one area have not so far been applied to the other. In this article, we propose to utilize all such methods for flagging safety signals in both areas, regardless of which area they were originally developed for. We therefore selected eight typical methods for systematic comparison through simulations: proportional reporting ratios, reporting odds ratios, the maximum likelihood ratio test, the Bayesian confidence propagation neural network method, the chi-square test for rate comparison, the Benjamini and Hochberg procedure, the new double false discovery rate control procedure, and the Bayesian hierarchical mixture model. The Benjamini and Hochberg procedure and the new double false discovery rate control procedure perform best overall in terms of sensitivity and false discovery rate. The likelihood ratio test also performs well when the sample sizes are large. Copyright © 2014 John Wiley & Sons, Ltd.
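As a concrete illustration of two of the spontaneous-reporting statistics listed in the abstract above, the following sketch computes the proportional reporting ratio (PRR) and reporting odds ratio (ROR) from a 2x2 table of report counts; the counts and the PRR > 2 screening heuristic are illustrative assumptions, not values from the study.

```python
def prr(a, b, c, d):
    """Proportional reporting ratio: [a/(a+b)] / [c/(c+d)], where
    a = reports with the drug and the event of interest,
    b = reports with the drug and other events,
    c = reports with other drugs and the event,
    d = reports with other drugs and other events."""
    return (a / (a + b)) / (c / (c + d))


def ror(a, b, c, d):
    """Reporting odds ratio: (a/b) / (c/d) for the same 2x2 table."""
    return (a / b) / (c / d)


# Illustrative counts, not from any real safety database; PRR here is about
# 9.8, far above the common PRR > 2 screening heuristic.
signal_prr = prr(20, 180, 100, 9700)
signal_ror = ror(20, 180, 100, 9700)
```

Both statistics compare the event's share among the target drug's reports with its share among all other reports; the ROR uses odds rather than proportions, so it exceeds the PRR whenever a signal is present.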

3.
Abstract.  A new multiple testing procedure, the generalized augmentation procedure (GAUGE), is introduced. The procedure is shown to control the false discovery exceedance and to be competitive in terms of power. It is also shown how to apply the idea of GAUGE to achieve control of other error measures. Extensions to dependence are discussed, together with a modification valid under arbitrary dependence. We present an application to an original study on prostate cancer and to a benchmark data set on colon cancer.

4.
Simultaneously testing a family of n null hypotheses arises in many applications. A common problem in multiple hypothesis testing is controlling the Type-I error. The probability of at least one false rejection, referred to as the familywise error rate (FWER), is one of the earliest error rate measures, and many FWER-controlling procedures have been proposed. The ability to control the FWER while achieving higher power is often used to evaluate a controlling procedure's performance. However, when testing multiple hypotheses, FWER and power alone are not sufficient for this evaluation. Furthermore, the performance of a controlling procedure is also governed by experimental parameters such as the number of hypotheses, the sample size, the number of true null hypotheses, and the data structure. This paper evaluates, under various experimental settings, the performance of some FWER-controlling procedures in terms of five indices: the FWER, the false discovery rate, the false non-discovery rate, the sensitivity, and the specificity. The results can provide guidance on how to select an appropriate FWER-controlling procedure to meet a study's objective.
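Four of the five indices named above can be computed from a single simulated experiment's confusion counts (the FWER itself is a probability and has to be estimated across repeated simulations). A minimal sketch, assuming the usual multiple-testing notation: V false rejections, S true rejections, T missed non-nulls, U correctly retained nulls.

```python
def indices(V, S, T, U):
    """Per-experiment error indices from multiple-testing confusion counts."""
    R = V + S             # total rejections (discoveries)
    m = V + S + T + U     # total hypotheses tested
    fdp = V / R if R else 0.0              # false discovery proportion
    fnp = T / (m - R) if m - R else 0.0    # false non-discovery proportion
    sens = S / (S + T) if S + T else 0.0   # sensitivity: true effects found
    spec = U / (U + V) if U + V else 0.0   # specificity: true nulls retained
    return fdp, fnp, sens, spec
```

Averaging these quantities over many simulated data sets yields the FDR, the false non-discovery rate, and the average sensitivity and specificity reported in comparisons like the one above.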

5.
ABSTRACT

In recent years, effective monitoring of data quality has increasingly attracted the attention of researchers in statistical process control. Among the relevant research on this topic, none has used multivariate methods to control the multidimensional data quality process; instead, multiple univariate control charts have been relied upon. Based on a novel one-sided multivariate exponentially weighted moving average (MEWMA) chart, we propose a conditional false discovery rate-adjusted scheme for on-line monitoring of the data quality of high-dimensional data streams. With thousands of input data streams, the average run length loses its usefulness, because one will likely have out-of-control signals at every time period. Hence, we first control the percentage of signals that are false alarms, and then compare the power of the proposed MEWMA scheme with that of two alternative methods. Numerical results show that the proposed MEWMA scheme has higher average power than the two competitors.

6.
Abstract.  Controlling the false discovery rate (FDR) is a powerful approach to multiple testing, with procedures developed for applications in many areas. Dependence among the test statistics is a common problem, and many attempts have been made to extend the procedures to this setting. In this paper, we show that when the number of tests is large, a certain degree of dependence is allowed among the test statistics with no need for any correction. We then suggest a way to conservatively estimate the proportion of false nulls, both under dependence and under independence, and discuss the advantages of using such estimators when controlling the FDR.

7.
Abstract. This paper is concerned with exact control of the false discovery rate (FDR) for step-up-down (SUD) tests related to the asymptotically optimal rejection curve (AORC). Since the system of equations and/or constraints for critical values and FDRs is numerically extremely sensitive, existence and computation of valid solutions is a challenging problem. We derive explicit formulas for upper bounds of the FDR and show that under a well-known monotonicity condition, control of the FDR by a step-up procedure results in control of the FDR by a corresponding SUD procedure. Various methods for adjusting the AORC to achieve finite FDR control are investigated. Moreover, we introduce alternative FDR bounding curves and study their connection to rejection curves as well as the existence of critical values for exact FDR control with respect to the underlying FDR bounding curve. Finally, we propose an iterative method for the computation of critical values.

8.
Many exploratory studies, such as microarray experiments, require the simultaneous comparison of hundreds or thousands of genes. In many microarray experiments, most genes are not expected to be differentially expressed. Under such a setting, a procedure designed to control the false discovery rate (FDR) aims to identify as many potentially differentially expressed genes as possible. The usual FDR-controlling procedure is constructed based on the total number of hypotheses. However, it can become very conservative when some of the alternative hypotheses are expected to be true. The power of a controlling procedure can be improved if the number of true null hypotheses (m0), instead of the number of hypotheses, is incorporated in the procedure [Y. Benjamini and Y. Hochberg, On the adaptive control of the false discovery rate in multiple testing with independent statistics, J. Educ. Behav. Statist. 25 (2000), pp. 60-83]. Nevertheless, m0 is unknown and has to be estimated. The objective of this article is to evaluate some existing estimators of m0 and discuss the feasibility of incorporating these estimators into FDR-controlling procedures under various experimental settings. The simulation results can help the investigator choose an appropriate procedure to meet the requirements of the study.
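One widely used m0 estimator, Storey's lambda-based estimator (a sketch under stated assumptions, not necessarily one of the estimators evaluated in this article), and its use in an adaptive step-up rule can be written as follows; lambda = 0.5 and q = 0.05 are illustrative choices.

```python
import numpy as np


def storey_m0(pvals, lam=0.5):
    # p-values above lam come mostly from true nulls, which are uniform on
    # (0, 1), so their count scaled by 1/(1 - lam) estimates m0.
    p = np.asarray(pvals, dtype=float)
    return min(float(len(p)), np.sum(p > lam) / (1.0 - lam))


def adaptive_bh(pvals, q=0.05, lam=0.5):
    # Step-up rule with the estimate m0 in place of m: reject the k smallest
    # p-values, where k is the largest i with p_(i) <= i * q / m0.
    p = np.asarray(pvals, dtype=float)
    m0 = storey_m0(p, lam)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, len(p) + 1) / m0
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(len(p), dtype=bool)
    reject[order[:k]] = True
    return reject
```

Replacing m by a smaller m0 enlarges every step-up threshold by the factor m/m0, which is the source of the power gain when many alternatives are true.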

9.
A new control chart is developed using exponentially weighted moving average (EWMA) statistics and a multiple testing procedure that controls the false discovery rate. The multiple testing procedure considers not only the current EWMA statistic but also a given number of previous statistics at the same time. Numerical simulations are conducted to evaluate the performance of the proposed control chart in terms of the average run length and the conditional expected delay. The results are compared with those of existing control charts, including the X-bar, EWMA, and cumulative sum control charts. Case studies with real data sets are also presented.
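The recursion behind the EWMA statistic used in charts like the one above is standard; a minimal sketch follows, with the smoothing constant chosen for illustration and the chart's multiple-testing layer omitted.

```python
def ewma(xs, lam=0.2, z0=0.0):
    """EWMA recursion Z_t = lam * X_t + (1 - lam) * Z_{t-1}.

    A small lam gives the statistic long memory, which is what makes EWMA
    charts sensitive to small sustained shifts; z0 is the starting value,
    typically the in-control target."""
    z = z0
    path = []
    for x in xs:
        z = lam * x + (1 - lam) * z
        path.append(z)
    return path
```

A monitoring scheme of the kind described in the abstract would compare each Z_t (and a window of previous values) against control limits, flagging via the multiple testing procedure rather than a single fixed limit.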

10.
A false discovery rate (FDR) procedure is often employed in exploratory data analysis to determine which among thousands or millions of attributes are worthy of follow-up analysis. However, these methods tend to discover the most statistically significant attributes, which need not be the most worthy of further exploration. This article provides a new FDR-controlling method that allows for the nature of the exploratory analysis to be considered when determining which attributes are discovered. To illustrate, a study in which the objective is to classify discoveries into one of several clusters is considered, and a new FDR method that minimizes the misclassification rate is developed. It is shown analytically and with simulation that the proposed method performs better than competing methods.

11.
Summary. Multiple-hypothesis testing involves guarding against much more complicated errors than single-hypothesis testing. Whereas we typically control the type I error rate for a single-hypothesis test, a compound error rate is controlled for multiple-hypothesis tests. For example, controlling the false discovery rate FDR traditionally involves intricate sequential p-value rejection methods based on the observed data. Whereas a sequential p-value method fixes the error rate and estimates its corresponding rejection region, we propose the opposite approach: we fix the rejection region and then estimate its corresponding error rate. This new approach offers increased applicability, accuracy and power. We apply the methodology to both the positive false discovery rate pFDR and FDR, and provide evidence for its benefits. It is shown that pFDR is probably the quantity of interest over FDR. Also discussed is the calculation of the q-value, the pFDR analogue of the p-value, which eliminates the need to set the error rate beforehand as is traditionally done. Some simple numerical examples are presented that show that this new approach can yield an increase of over eight times in power compared with the Benjamini-Hochberg FDR method.
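The "fix the rejection region, estimate its error rate" idea can be sketched as follows: for a threshold t, the FDR estimate is m0 * t / #{p_i <= t}, and the q-value of a p-value is the smallest estimated FDR over all thresholds at least that large. This simplified sketch takes m0 = m (the conservative choice) unless an estimate is supplied.

```python
import numpy as np


def qvalues(pvals, m0=None):
    """q-values from a simplified fixed-rejection-region FDR estimate."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    m0 = float(m) if m0 is None else float(m0)
    order = np.argsort(p)
    # Estimated FDR when the threshold is set at each ordered p-value:
    # m0 * p_(i) expected false rejections over i observed rejections.
    fdr = m0 * p[order] / np.arange(1, m + 1)
    # The q-value is the minimum estimated FDR over all larger thresholds,
    # enforced by a running minimum from the right; cap at 1.
    q = np.minimum(np.minimum.accumulate(fdr[::-1])[::-1], 1.0)
    out = np.empty(m)
    out[order] = q
    return out
```

With m0 = m this reproduces the usual BH-style adjusted p-values; a smaller m0 estimate shrinks every q-value proportionally.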

12.
The positive false discovery rate was introduced by Storey (2003, Ann. Statist. 31:2013-2035) as an alternative to the family-wise error rate for the case in which a large number of hypotheses are tested simultaneously. The positive false discovery rate has a very natural Bayesian interpretation, as shown by Storey (2003), and its robustness is analyzed here. The emphasis is on the ε-contamination class, one of the most widely used classes of priors in Bayesian robustness, and it is shown that robustness is not obtained when the base prior concentrates the probability on the null hypothesis.

13.
Case-control studies of genetic polymorphisms and gene-environment interactions are reporting large numbers of statistically significant associations, many of which are likely to be spurious. This problem reflects the low prior probability that any one null hypothesis is false, and the large number of test results reported for a given study. In a Bayesian approach to the low prior probabilities, Wacholder et al. (2004) suggest supplementing the p-value for a hypothesis with its posterior probability given the study data. In a frequentist approach to the test multiplicity problem, Benjamini & Hochberg (1995) propose a hypothesis-rejection rule that provides greater statistical power by controlling the false discovery rate rather than the family-wise error rate controlled by the Bonferroni correction. This paper defines a Bayes false discovery rate and proposes a Bayes-based rejection rule for controlling it. The method, which combines the Bayesian approach of Wacholder et al. with the frequentist approach of Benjamini & Hochberg, is used to evaluate the associations reported in a case-control study of breast cancer risk and genetic polymorphisms of genes involved in the repair of double-strand DNA breaks.

14.
Multiple hypothesis testing (MHT) based on the false discovery rate (FDR) has become an effective new approach to large-scale statistical inference. Taking error control as its main thread, this article surveys the error-control theory, methods, procedures, and recent advances in multiple hypothesis testing, and discusses prospects for applying multiple-testing methods in econometrics.

15.
Summary.  The use of a fixed rejection region for multiple hypothesis testing has been shown to outperform standard fixed error rate approaches when applied to control of the false discovery rate. In this work it is demonstrated that, if the original step-up procedure of Benjamini and Hochberg is modified to exercise adaptive control of the false discovery rate, its performance is virtually identical to that of the fixed rejection region approach. In addition, the dependence of both methods on the proportion of true null hypotheses is explored, with a focus on the difficulties that are involved in the estimation of this quantity.

16.
Multiple Hypotheses Testing with Weights
In this paper we offer a multiplicity of approaches and procedures for multiple testing problems with weights. Some rationales for incorporating weights in multiple hypotheses testing are discussed. Various type-I error rates and different possible formulations are considered, for both the intersection hypothesis testing problem and the multiple hypotheses testing problem. An optimal per-family weighted error-rate controlling procedure à la Spjøtvoll (1972) is obtained. This model serves as a vehicle for demonstrating the different implications of the approaches to weighting. Alternative approaches to that of Holm (1979) for family-wise error-rate control with weights are discussed, one involving an alternative procedure for family-wise error-rate control, and the other involving the control of a weighted family-wise error rate. Extensions and modifications of the procedures based on Simes (1986) are given. These include a test of the overall intersection hypothesis with general weights, and weighted sequentially rejective procedures for testing the individual hypotheses. The false discovery rate controlling approach and procedure of Benjamini & Hochberg (1995) are extended to allow for different weights.
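One common way to carry weights into an FDR procedure, sketched here as an assumption rather than this paper's exact rule, is to run the usual step-up rule on the weighted p-values p_i / w_i, with positive weights averaging one across the m hypotheses.

```python
import numpy as np


def weighted_bh(pvals, weights, q=0.05):
    """Step-up FDR procedure on weighted p-values p_i / w_i.

    Weights should be positive and average to 1 over the m hypotheses;
    a larger w_i makes hypothesis i easier to reject."""
    p = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    m = len(p)
    pw = p / w                                    # weighted p-values
    order = np.argsort(pw)
    below = pw[order] <= q * np.arange(1, m + 1) / m
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```

With all weights equal to one, this reduces to the unweighted step-up procedure; up-weighting a hypothesis with prior support can rescue a p-value that the unweighted rule would retain.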

17.
18.
In this paper, we study multi-class differential gene expression detection for microarray data. We propose a likelihood-based approach to estimating an empirical null distribution that incorporates gene interactions and provides more accurate false-positive control than the commonly used permutation or theoretical null distribution-based approaches. We propose to rank important genes by p-values or by the local false discovery rate based on the estimated empirical null distribution. Through simulations and an application to lung transplant microarray data, we illustrate the competitive performance of the proposed method.

19.
Statistical inference in the wavelet domain remains a vibrant area of contemporary statistical research because of the desirable properties of wavelet representations and the need of the scientific community to process, explore, and summarize massive data sets. Prime examples are biomedical, geophysical, and internet-related data. We propose two new approaches to wavelet shrinkage/thresholding.

In the spirit of Efron and Tibshirani's recent work on the local false discovery rate, we propose the Bayesian Local False Discovery Rate (BLFDR), where the underlying model on wavelet coefficients does not assume known variances. This approach to wavelet shrinkage is shown to be connected with shrinkage based on Bayes factors. The second proposal, the Bayesian False Discovery Rate (BaFDR), is based on ordering the posterior probabilities that the true wavelet coefficients are null, in Bayesian testing of multiple hypotheses.

We demonstrate that both approaches result in competitive shrinkage methods by contrasting them with some popular shrinkage techniques.

20.
Summary.  The false discovery rate (FDR) is a multiple hypothesis testing quantity that describes the expected proportion of false positive results among all rejected null hypotheses. Benjamini and Hochberg introduced this quantity and proved that a particular step-up p-value method controls the FDR. Storey introduced a point estimate of the FDR for fixed significance regions. The former approach conservatively controls the FDR at a fixed predetermined level, and the latter provides a conservatively biased estimate of the FDR for a fixed predetermined significance region. In this work, we show in both finite sample and asymptotic settings that the goals of the two approaches are essentially equivalent. In particular, the FDR point estimates can be used to define valid FDR controlling procedures. In the asymptotic setting, we also show that the point estimates can be used to estimate the FDR conservatively over all significance regions simultaneously, which is equivalent to controlling the FDR at all levels simultaneously. The main tool that we use is to translate existing FDR methods into procedures involving empirical processes. This simplifies finite sample proofs, provides a framework for asymptotic results and proves that these procedures are valid even under certain forms of dependence.
