Similar Articles
20 similar articles found.
1.
In multiple hypothesis testing, an important problem is estimating the proportion of true null hypotheses. Existing methods are mainly based on the p-values of the individual tests. In this paper, we propose two new estimators of this proportion. One is a natural extension of the commonly used p-value-based methods; the other is based on a mixed distribution. Simulations show that the first estimator is comparable with existing methods and performs better in some cases, while the estimator based on a mixed distribution remains accurate even when the variance of the data is large or the difference between the null and alternative hypotheses is very small.
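For context, here is a minimal sketch (Python, with made-up example data) of the classical λ-threshold estimator that p-value-based methods of this kind build on: under a true null, p-values are Uniform(0, 1), so the fraction of p-values above a cutoff λ estimates roughly π0(1 − λ).

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    """Estimate the proportion of true nulls pi0 from p-values.

    Under true nulls, p-values are Uniform(0, 1), so about
    pi0 * (1 - lam) of all p-values should exceed lam.
    """
    pvals = np.asarray(pvals)
    pi0 = np.mean(pvals > lam) / (1.0 - lam)
    return min(pi0, 1.0)  # pi0 is a proportion, so cap at 1

# Hypothetical example: 800 true nulls, 200 alternatives
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=800),
                        rng.beta(0.2, 1.0, size=200)])  # alternatives skew small
print(storey_pi0(pvals))  # should land near 0.8
```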

2.
Estimating the proportion of true null hypotheses, π0, has attracted much attention in the recent statistical literature. Besides its apparent relevance for a set of specific scientific hypotheses, an accurate estimate of this parameter is key to many multiple testing procedures. Most existing methods for estimating π0 are motivated by the assumption that the test statistics are independent, which is often not true in reality. Simulations indicate that most existing estimators can perform poorly in the presence of dependence among test statistics, mainly because of the increased variation in these estimators. In this paper, we propose several data-driven methods for estimating π0 that incorporate the distribution pattern of the observed p-values as a practical way to address potential dependence among test statistics. Specifically, we use a linear fit to give a data-driven estimate of the proportion of true-null p-values in (λ, 1] over the whole range [0, 1], instead of using the expected proportion at 1 − λ. We find that the proposed estimators can substantially decrease the variance of the estimated true null proportion and thus improve overall performance.
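One way to read the linear-fit idea (a sketch of our reading, not the authors' exact algorithm): since the expected fraction of true-null p-values in (λ, 1] is π0(1 − λ), regressing the observed tail fractions on (1 − λ) over a whole grid of λ values, with no intercept, yields π0 as the slope and averages out the noise that a single-λ estimator suffers from.

```python
import numpy as np

def pi0_linear_fit(pvals, grid=None):
    """Sketch of a linear-fit pi0 estimator.

    For true nulls, the expected fraction of p-values in (lam, 1]
    is pi0 * (1 - lam). Fitting a through-the-origin line of the
    tail fraction against (1 - lam) over a grid of lam values
    pools information across the whole range [0, 1].
    """
    pvals = np.asarray(pvals)
    if grid is None:
        grid = np.arange(0.05, 0.96, 0.05)
    x = 1.0 - grid                                     # predictor: 1 - lam
    y = np.array([(pvals > lam).mean() for lam in grid])  # tail fractions
    slope = np.sum(x * y) / np.sum(x * x)              # least squares, no intercept
    return min(max(slope, 0.0), 1.0)
```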

3.
The multiple hypothesis testing literature has recently seen considerable development, with particular attention to controlling the false discovery rate (FDR) based on p-values. While these are not the only methods for dealing with multiplicity, inference with small samples and large sets of hypotheses depends on the specific choice of the p-value used to control the FDR in the presence of nuisance parameters. In this paper we propose to use the partial posterior predictive p-value [Bayarri, M.J., Berger, J.O., 2000. P-values for composite null models. J. Amer. Statist. Assoc. 95, 1127–1142], which overcomes this difficulty. This choice is motivated by theoretical considerations and examples. Finally, an application to a controlled microarray experiment is presented.

4.
High-throughput data analyses are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. The false discovery rate (FDR) is considered an appropriate Type I error rate to control in discovery-based high-throughput data analysis, and various multiple testing procedures have been proposed to control it. The power and stability properties of some commonly used procedures, however, have not been extensively investigated. Simulation studies were conducted to compare the power and stability of five widely used multiple testing procedures at different proportions of true discoveries, for various sample sizes, and for both independent and dependent test statistics. Storey's two linear step-up procedures showed the best performance among all tested procedures in terms of FDR control, power, and variance of true discoveries. Leukaemia and ovarian cancer microarray studies are used to illustrate the power and stability characteristics of these five procedures under FDR control.
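As a concrete reference point, here is a sketch of the Benjamini–Hochberg linear step-up procedure, the baseline that adaptive step-up variants such as Storey's build on (the abstract does not list the five compared procedures, so this is illustrative only):

```python
import numpy as np

def bh_stepup(pvals, q=0.05):
    """Benjamini-Hochberg linear step-up procedure.

    Rejects the hypotheses with the k smallest p-values, where k is
    the largest i such that p_(i) <= i * q / m. Returns a boolean
    rejection mask aligned with the input.
    """
    pvals = np.asarray(pvals)
    m = pvals.size
    order = np.argsort(pvals)
    below = pvals[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest index meeting its threshold
        reject[order[:k + 1]] = True
    return reject
```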

5.
Simultaneously testing a family of n null hypotheses arises in many applications. A common problem in multiple hypothesis testing is controlling the Type I error. The probability of at least one false rejection, referred to as the familywise error rate (FWER), is one of the earliest error rate measures, and many FWER-controlling procedures have been proposed. The ability to control the FWER while achieving higher power is often used to evaluate a procedure's performance. When testing multiple hypotheses, however, FWER and power alone are not sufficient for evaluating a procedure, whose performance is also governed by experimental parameters such as the number of hypotheses, the sample size, the number of true null hypotheses, and the data structure. This paper evaluates, under various experimental settings, the performance of several FWER-controlling procedures in terms of five indices: the FWER, the false discovery rate, the false non-discovery rate, the sensitivity, and the specificity. The results provide guidance on selecting an FWER-controlling procedure appropriate to a study's objective.
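To make the five indices concrete, a small sketch (hypothetical simulation setup) that applies Holm's step-down procedure, one classical FWER controller, and computes the per-replication quantities whose averages give the five indices against known truth labels:

```python
import numpy as np

def holm(pvals, alpha=0.05):
    """Holm's step-down FWER-controlling procedure (boolean mask)."""
    pvals = np.asarray(pvals)
    m = pvals.size
    order = np.argsort(pvals)
    reject = np.zeros(m, dtype=bool)
    for step, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - step):
            reject[idx] = True
        else:
            break  # step-down: stop at the first acceptance
    return reject

def indices(reject, is_null):
    """Per-replication versions of the five evaluation indices."""
    v = np.sum(reject & is_null)       # false rejections
    t = np.sum(~reject & ~is_null)     # false non-discoveries
    return {"any_false_rejection": v > 0,                # averages to FWER
            "FDP": v / max(np.sum(reject), 1),           # averages to FDR
            "FNP": t / max(np.sum(~reject), 1),          # averages to FNR
            "sensitivity": np.sum(reject & ~is_null) / max(np.sum(~is_null), 1),
            "specificity": np.sum(~reject & is_null) / max(np.sum(is_null), 1)}
```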

6.
7.
8.
It is important that the proportion of true null hypotheses be estimated accurately in the multiple testing context. Current estimation methods, however, are not suitable for high-dimensional data such as microarray data. First, they do not account for the (strong) dependence between hypotheses (or genes), which leads to inaccurate estimation. Second, these methods cannot properly estimate the unknown distribution of the false null hypotheses. Third, the estimation is strongly affected by outliers. In this paper, we derive an optimal procedure for estimating the proportion of true null hypotheses under (strong) dependence based on a Dirichlet process prior. In addition, by using the minimum Hellinger distance, the estimation is robust to model misspecification and to outliers while maintaining efficiency. The results are confirmed by a simulation study, and the newly developed methodology is illustrated on a real microarray data set.
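The Dirichlet-process machinery is beyond a short sketch, but the minimum-Hellinger-distance idea can be illustrated on the simpler beta-uniform mixture f(p) = π0 + (1 − π0)·Beta(a, 1) commonly used for p-values. This is a simplified stand-in for the paper's model, not the authors' estimator:

```python
import numpy as np
from scipy import stats, optimize

def min_hellinger_pi0(pvals, bins=50):
    """Minimum Hellinger distance fit of the beta-uniform mixture
    f(p) = pi0 * 1 + (1 - pi0) * Beta(a, 1) with a < 1.

    Matching square roots of densities (rather than maximizing the
    likelihood) downweights outliers and model misspecification,
    which is the robustness property the abstract refers to.
    """
    hist, edges = np.histogram(pvals, bins=bins, range=(0, 1), density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]

    def neg_affinity(theta):
        pi0, a = theta
        f = pi0 + (1 - pi0) * stats.beta.pdf(mids, a, 1.0)
        # Hellinger distance is minimized when the affinity
        # integral of sqrt(f * g) is maximized.
        return -np.sum(np.sqrt(f * hist) * width)

    res = optimize.minimize(neg_affinity, x0=[0.8, 0.3],
                            bounds=[(0.01, 1.0), (0.01, 0.99)])
    return res.x[0]  # estimated pi0
```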

9.
The overall Type I error computed by traditional means may be inflated when many hypotheses are compared simultaneously. The family-wise error rate (FWER) and false discovery rate (FDR) are among the error rates commonly used to measure Type I error in the multiple hypothesis setting. Many FWER- and FDR-controlling procedures have been proposed and can control the desired FWER/FDR under certain scenarios. Nevertheless, these procedures become too conservative when only some of the hypotheses come from the null. Benjamini and Hochberg (J. Educ. Behav. Stat. 25:60–83, 2000) proposed an adaptive FDR-controlling procedure that adapts to the number of true null hypotheses (m0) to overcome this problem. Since m0 is unknown, estimators of m0 are needed. Benjamini and Hochberg (2000) suggested a graphical approach to constructing an estimator of m0, which has been shown to overestimate m0 (see Hwang in J. Stat. Comput. Simul. 81:207–220, 2011). Following a similar construction, this paper proposes new estimators of m0. Monte Carlo simulations are used to evaluate the accuracy and precision of the new estimators, and the feasibility of the resulting adaptive procedures is evaluated under various simulation settings.
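The adaptive idea is easy to state in code: estimate m0, then run the step-up test at the inflated level q·m/m̂0. A sketch using a λ-threshold estimate of m0 for concreteness (the paper's graphical estimators differ):

```python
import numpy as np

def adaptive_bh(pvals, q=0.05, lam=0.5):
    """Adaptive BH: plug an estimate of m0 into the step-up thresholds.

    With m0_hat < m the thresholds i * q / m0_hat exceed the plain BH
    thresholds i * q / m, recovering power when many hypotheses are
    false while still aiming at FDR level q.
    """
    pvals = np.asarray(pvals)
    m = pvals.size
    # lambda-threshold estimate; the +1 is a common finite-sample guard
    m0_hat = min(m, (np.sum(pvals > lam) + 1) / (1.0 - lam))
    order = np.argsort(pvals)
    below = pvals[order] <= q * np.arange(1, m + 1) / m0_hat
    reject = np.zeros(m, dtype=bool)
    if below.any():
        reject[order[:np.max(np.nonzero(below)[0]) + 1]] = True
    return reject
```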

10.
We consider the problem of m simultaneous statistical tests with composite null hypotheses. Usually, marginal p-values are computed under least favorable parameter configurations (LFCs) and are thus over-conservative under non-LFCs. Our proposed randomized p-value leads to a tighter exhaustion of the marginal (local) significance level. In turn, it is stochastically larger than the LFC-based p-value under alternatives. While these distributional properties are typically nonsensical for m = 1, the exhaustion of the local significance level is extremely helpful for m > 1 in connection with data-adaptive multiple tests, as we demonstrate by considering multiple one-sided tests for Gaussian means.
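The paper's construction for composite Gaussian nulls is specific, but the underlying principle, randomizing so that the p-value is exactly Uniform(0, 1) on the boundary of the null and the level is fully exhausted, is the same one used classically for discrete tests. A sketch of that classic analogue (not the paper's construction):

```python
import numpy as np
from scipy import stats

def randomized_binomial_pvalue(x, n, p0, rng):
    """Randomized one-sided p-value for H0: p <= p0 vs H1: p > p0,
    with X ~ Binomial(n, p0) on the boundary of the null.

    p = P(X > x) + U * P(X = x) is exactly Uniform(0, 1) at p = p0,
    so rejecting when p <= alpha exhausts the level instead of being
    conservative -- the property that matters when such p-values
    feed a data-adaptive multiple testing procedure.
    """
    u = rng.uniform()
    return stats.binom.sf(x, n, p0) + u * stats.binom.pmf(x, n, p0)

rng = np.random.default_rng(1)
print(randomized_binomial_pvalue(x=7, n=10, p0=0.5, rng=rng))
```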

11.
Multiple hypothesis testing (MHT) based on the false discovery rate (FDR) has become an effective new approach to large-scale statistical inference problems. Taking error control as its main thread, this paper surveys the theory, methods, procedures, and recent advances in error control for multiple hypothesis testing, and discusses prospects for applying multiple testing methods in econometrics.

12.
This paper aims to connect Bayesian analysis and frequentist theory in the context of multiple comparisons. The authors show that when testing the equality of two sample means, the posterior probability of the one-sided alternative hypothesis, defined as a half-space, shares with the frequentist p-value the property of uniformity under the null hypothesis. The posterior probability may thus be used in the same spirit as a p-value in the Benjamini–Hochberg procedure, or in any of its extensions.
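In the simplest known-variance case the correspondence is explicit: with a flat prior on the mean difference δ, the posterior of δ is normal and the posterior probability of the null half-space δ ≤ 0 equals the frequentist one-sided p-value, so it can be handed to a BH-type step-up procedure unchanged. A sketch under those assumptions (flat prior, known common σ; the paper's setting may be more general):

```python
import numpy as np
from scipy import stats

def posterior_null_prob(x, y, sigma):
    """P(delta <= 0 | data) for delta = mu_x - mu_y, flat prior,
    known common sigma. Here it equals the one-sided p-value, so it
    is Uniform(0, 1) under delta = 0 and BH-compatible as-is.
    """
    d = x.mean() - y.mean()
    se = sigma * np.sqrt(1.0 / x.size + 1.0 / y.size)
    # posterior: delta | data ~ Normal(d, se^2)
    return stats.norm.cdf(-d / se)   # posterior mass on the null half-space
```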

13.
A new multiple testing procedure, the generalized augmentation procedure (GAUGE), is introduced. The procedure is shown to control the false discovery exceedance and to be competitive in terms of power. It is also shown how the idea behind GAUGE can be applied to control other error measures. Extensions to dependence are discussed, together with a modification valid under arbitrary dependence. We present an application to an original study on prostate cancer and to a benchmark data set on colon cancer.
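The generic augmentation idea that GAUGE generalizes is compact: start from any FWER-controlling rejection set and add further rejections as long as the added fraction stays below the exceedance threshold γ. A sketch of that basic augmentation step (the van der Laan–Dudoit–Pollard style construction, stated from memory; GAUGE itself differs):

```python
import numpy as np

def augment_fdx(pvals, fwer_reject, gamma=0.1):
    """Basic augmentation for false discovery exceedance control.

    Given a rejection set of size r0 controlling FWER at alpha,
    additionally rejecting the floor(gamma * r0 / (1 - gamma))
    next-smallest p-values keeps P(FDP > gamma) <= alpha, because
    only the added hypotheses can be extra false rejections.
    """
    r0 = int(np.sum(fwer_reject))
    n_add = int(np.floor(gamma * r0 / (1.0 - gamma)))
    reject = fwer_reject.copy()
    added = 0
    for idx in np.argsort(pvals):       # next-smallest p-values first
        if added == n_add:
            break
        if not reject[idx]:
            reject[idx] = True
            added += 1
    return reject
```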

14.
In this paper, we translate variable selection for linear regression into a multiple testing problem and select significant variables according to the test results. New variable selection procedures are proposed based on the optimal discovery procedure (ODP) in multiple testing. Owing to the ODP's optimality, for a given number of significant variables included, our procedures include fewer non-significant variables than marginal p-value-based methods. Consistency of the procedures is established in theory and confirmed by simulation. Simulation results suggest that procedures based on multiple testing improve on procedures based on selection criteria, and that our new procedures outperform marginal p-value-based procedures.

15.
This paper is concerned with exact control of the false discovery rate (FDR) for step-up-down (SUD) tests related to the asymptotically optimal rejection curve (AORC). Since the system of equations and/or constraints for critical values and FDRs is numerically extremely sensitive, the existence and computation of valid solutions is a challenging problem. We derive explicit formulas for upper bounds on the FDR and show that, under a well-known monotonicity condition, control of the FDR by a step-up procedure implies control of the FDR by the corresponding SUD procedure. Various methods for adjusting the AORC to achieve finite-sample FDR control are investigated. Moreover, we introduce alternative FDR bounding curves and study their connection to rejection curves, as well as the existence of critical values for exact FDR control with respect to the underlying FDR bounding curve. Finally, we propose an iterative method for the computation of critical values.
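The AORC has the closed form f_α(t) = t / (t(1 − α) + α), which yields step-up critical values α_{i:m} = iα / (m − i(1 − α)). The finite-sample difficulty the paper addresses is visible immediately: the largest critical value equals 1. A sketch with a simple additive offset in the spirit of the adjustments studied (the exact parametrizations in the paper may differ):

```python
import numpy as np

def aorc_critical_values(m, alpha=0.05, beta=0.0):
    """Critical values alpha_{i:m} = i*alpha / (m + beta - i*(1 - alpha)).

    beta = 0 gives the raw AORC values, whose largest entry is exactly 1
    (i.e., reject everything), which is why finite-sample adjustments
    such as a positive offset are needed for exact FDR control.
    """
    i = np.arange(1, m + 1)
    return i * alpha / (m + beta - i * (1.0 - alpha))

crit = aorc_critical_values(100, alpha=0.05)
print(crit[-1])  # 1.0 -- the degenerate endpoint motivating adjustment
print(aorc_critical_values(100, alpha=0.05, beta=1.0)[-1])  # < 1
```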

16.
This paper studies the asymptotic behaviour of the false discovery and non-discovery proportions of the dynamic adaptive procedure under certain dependence structures. A Bahadur-type representation of the cut point in simultaneously performing a large number of tests is presented, and asymptotic bias decompositions of the false discovery and non-discovery proportions are given under dependence. Beyond the existing literature, we find that the randomness due to the dynamic selection of the tuning parameter in estimating the true null rate is a source of approximation error in the Bahadur representation and enters the asymptotic bias terms of both the false discovery proportion and the false non-discovery proportion. The theory explains to some extent why some seemingly attractive dynamic adaptive procedures do not substantially outperform competing fixed adaptive procedures in some situations. Simulations support the theory and findings.

17.
A new method for estimating the proportion of null effects is proposed for solving large-scale multiple comparison problems. It utilises maximum likelihood estimation of nonparametric mixtures, which also provides a density estimate of the test statistics. It overcomes the problem that the usual nonparametric maximum likelihood estimator cannot produce a positive probability at the location of the null effects when estimating a mixing distribution nonparametrically. The profile likelihood is further used to produce a range of null-proportion values for which the corresponding density estimates are all consistent. With a proper choice of threshold function on the profile likelihood ratio, the upper endpoint of this range can be shown to be a consistent estimator of the null proportion. Numerical studies show that the proposed method exhibits an apparently convergent trend in all cases studied and compares favourably with existing methods in the literature.

18.
Microarray studies are now common in human, agricultural plant, and animal research. The false discovery rate (FDR) is widely used in the analysis of large-scale microarray data to account for problems associated with multiple testing. A well-designed microarray study should have adequate statistical power to detect the differentially expressed (DE) genes while keeping the FDR acceptably low. In this paper, we use a mixture model of expression responses involving DE and non-DE genes to analyse the theoretical FDR and power for simple scenarios in which each gene has equal error variance and the gene effects are independent. A simulation study evaluates the empirical FDR and power for more complex scenarios with unequal error variances and gene dependence. Based on this approach, we present a general guide to sample size requirements at the experimental design stage for prospective microarray studies. The paper thus explicitly connects sample size with FDR and power. While the methods were developed in the context of one-sample microarray studies, they are readily applicable to two-sample studies and could be adapted to multiple-sample studies.
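The mixture-model bookkeeping behind such sample-size guides is short: with proportion π0 of non-DE genes, per-gene level α, and average power β(n) at that level, the expected false discovery proportion is roughly π0α / (π0α + (1 − π0)β(n)), so one can scan n until both the FDR and power targets are met. A sketch for two-sample comparisons under a normal approximation (the paper's one-sample setting and exact formulas differ):

```python
import numpy as np
from scipy import stats

def plan_sample_size(effect, pi0, alpha=0.001, fdr_target=0.05,
                     power_target=0.8, n_max=500):
    """Smallest per-group n meeting FDR and power targets, via the
    mixture identity FDR ~ pi0*alpha / (pi0*alpha + (1-pi0)*power).

    Two-sided two-sample z-test approximation with standardized
    effect size `effect`; equal error variances assumed.
    """
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    for n in range(2, n_max + 1):
        power = stats.norm.sf(z_alpha - effect * np.sqrt(n / 2))
        fdr = pi0 * alpha / (pi0 * alpha + (1 - pi0) * power)
        if power >= power_target and fdr <= fdr_target:
            return n, power, fdr
    return None  # targets unreachable within n_max

print(plan_sample_size(effect=1.0, pi0=0.95))  # e.g. n around 35
```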

19.
Controlling the false discovery rate (FDR) is a powerful approach to multiple testing, and FDR procedures have been developed for applications in many areas. Dependence among the test statistics is a common problem, and many attempts have been made to extend the procedures to handle it. In this paper, we show that when the number of tests is large, a certain degree of dependence among the test statistics can be tolerated with no need for any correction. We then suggest a way to conservatively estimate the proportion of false nulls, both under dependence and under independence, and discuss the advantages of using such estimators when controlling the FDR.

20.
The paper deals with the problem of electronic verification of people on the basis of measurement information from a fingerprint reader, and with new approaches to its solution. The proposed method guarantees that the error probabilities of both types are restricted to the desired levels when deciding whether to grant or reject a request for service in the system. Based on an investigation of real data obtained from a real biometric system, the choice of distribution laws is substantiated and the corresponding parameter estimates are obtained: a normal distribution for the measured characteristics of people having access to the system, and a beta distribution for people without such access. Using these distributions, an optimal decision rule based on the Constrained Bayesian Method (CBM) for granting users access to the system is justified. The CBM, the Neyman–Pearson method, and the classical Bayes method are investigated, and their strengths and weaknesses are examined. Results obtained by direct computation, by simulation, and from real data fully confirm the suppositions made and the high quality of the resulting verification.
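The flavor of the constrained rule can be shown with the two score models named in the abstract: a normal model for genuine users and a beta model for impostors. Choosing two score thresholds so that each error probability is capped leaves a "no decision" region in between, which is the characteristic feature of constrained-Bayes-style rules. A simplified sketch, not the paper's CBM; the parameter values are made up for illustration:

```python
from scipy import stats

def verification_rule(score, genuine=(0.8, 0.05), impostor=(2.0, 5.0),
                      alpha_false_reject=0.01, alpha_false_accept=0.01):
    """Two-threshold decision rule capping both error probabilities.

    Genuine match scores ~ Normal(mu, sd); impostor scores ~ Beta(a, b)
    (the distribution families identified in the paper). Scores
    between the two thresholds yield 'no decision', e.g. a request
    for another scan -- the price of constraining both error types.
    For poorly separated populations the thresholds can cross, in
    which case both targets cannot be met simultaneously.
    """
    mu, sd = genuine
    a, b = impostor
    t_accept = stats.beta.ppf(1 - alpha_false_accept, a, b)  # caps false accepts
    t_reject = stats.norm.ppf(alpha_false_reject, mu, sd)    # caps false rejects
    if score >= t_accept:
        return "accept"
    if score <= t_reject:
        return "reject"
    return "no decision"

print(verification_rule(0.85), verification_rule(0.3), verification_rule(0.69))
```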
