Similar documents (20 results)
1.
Statistical inference in the wavelet domain remains a vibrant area of contemporary statistical research because of the desirable properties of wavelet representations and the need of the scientific community to process, explore, and summarize massive data sets. Prime examples are biomedical, geophysical, and internet-related data. We propose two new approaches to wavelet shrinkage/thresholding.

In the spirit of Efron and Tibshirani's recent work on the local false discovery rate, we propose the Bayesian Local False Discovery Rate (BLFDR), in which the underlying model on wavelet coefficients does not assume known variances. This approach to wavelet shrinkage is shown to be connected with shrinkage based on Bayes factors. The second proposal, the Bayesian False Discovery Rate (BaFDR), is based on ordering the posterior probabilities that the true wavelet coefficients are null, in Bayesian testing of multiple hypotheses.

We demonstrate that both approaches result in competitive shrinkage methods by contrasting them with some popular shrinkage techniques.
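The abstract gives no formulas, but the BaFDR ordering idea can be sketched under a simple assumed model: observed coefficients d_i = theta_i + noise, with a two-point Gaussian mixture prior on theta_i. The helper name and all parameters below are illustrative assumptions, not the authors' specification.

```python
import numpy as np
from scipy.stats import norm

def bafdr_keep(d, sigma, tau, pi0, q=0.05):
    """Keep wavelet coefficients by ordering posterior null probabilities
    and stopping once their running mean (a Bayes FDR estimate) exceeds q."""
    # Marginals under the assumed mixture prior: theta = 0 w.p. pi0,
    # theta ~ N(0, tau^2) otherwise; noise is N(0, sigma^2).
    f0 = norm.pdf(d, scale=sigma)
    f1 = norm.pdf(d, scale=np.sqrt(sigma**2 + tau**2))
    p_null = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)
    order = np.argsort(p_null)
    running_mean = np.cumsum(p_null[order]) / np.arange(1, d.size + 1)
    k = np.sum(running_mean <= q)       # largest prefix with Bayes FDR <= q
    keep = np.zeros(d.size, dtype=bool)
    keep[order[:k]] = True
    return keep

rng = np.random.default_rng(0)
theta = np.where(rng.random(200) < 0.1, rng.normal(0.0, 3.0, 200), 0.0)
print(bafdr_keep(theta + rng.normal(0.0, 1.0, 200), 1.0, 3.0, 0.9).sum())
```

Coefficients not kept are set to zero, which is the thresholding step of the shrinkage rule.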

2.
Case-control studies of genetic polymorphisms and gene-environment interactions are reporting large numbers of statistically significant associations, many of which are likely to be spurious. This problem reflects the low prior probability that any one null hypothesis is false, and the large number of test results reported for a given study. In a Bayesian approach to the low prior probabilities, Wacholder et al. (2004) suggest supplementing the p-value for a hypothesis with its posterior probability given the study data. In a frequentist approach to the test multiplicity problem, Benjamini & Hochberg (1995) propose a hypothesis-rejection rule that provides greater statistical power by controlling the false discovery rate rather than the family-wise error rate controlled by the Bonferroni correction. This paper defines a Bayes false discovery rate and proposes a Bayes-based rejection rule for controlling it. The method, which combines the Bayesian approach of Wacholder et al. with the frequentist approach of Benjamini & Hochberg, is used to evaluate the associations reported in a case-control study of breast cancer risk and genetic polymorphisms of genes involved in the repair of double-strand DNA breaks.
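Wacholder et al.'s supplement to the p-value, the false-positive report probability, has a closed form worth seeing; the numbers below are hypothetical.

```python
def fprp(alpha, power, prior):
    """False-positive report probability (Wacholder et al., 2004):
    posterior probability that a significant finding is a false positive,
    given the significance level, the study power, and the prior
    probability that the association is real."""
    return alpha * (1 - prior) / (alpha * (1 - prior) + power * prior)

# Hypothetical polymorphism test: p = 0.01, 80% power, 1-in-1000 prior.
print(round(fprp(alpha=0.01, power=0.8, prior=0.001), 3))  # 0.926
```

Even a "significant" p-value of 0.01 leaves a roughly 93% chance of a false positive when the prior probability of a real association is 0.001, which is the motivation for combining the Bayesian and FDR viewpoints.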

3.
Selecting predictors to optimize outcome prediction is an important statistical problem, but conventional selection usually ignores the false positives among the selected predictors. In this article, we advocate a stepwise forward variable selection method based on the predicted residual sum of squares (PRESS), develop a positive false discovery rate (pFDR) estimate for the selected predictor subset, and derive a local pFDR estimate to prioritize the selected predictors. This pFDR estimate takes into account the existence of non-null predictors and is proved to be asymptotically conservative. In addition, we propose two views of a variable selection process: an overall test and an individual test. An interesting feature of the overall test is that its power to select non-null predictors increases with the proportion of non-null predictors among all candidate predictors. The data analysis is illustrated with an example in which genetic and clinical predictors were selected to predict the change in cholesterol level after four months of tamoxifen treatment, and the pFDR was estimated. The method's performance is evaluated through statistical simulations.
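The pFDR estimator itself is not specified in the abstract; the sketch below covers only the PRESS-based forward step, using the leave-one-out identity PRESS = sum_i (e_i / (1 - h_ii))^2 for ordinary least squares (the function names are mine).

```python
import numpy as np

def press(X, y):
    """Leave-one-out predicted residual sum of squares for OLS,
    computed from the hat-matrix diagonal instead of n refits."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    h = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)
    return np.sum((resid / (1 - h)) ** 2)

def forward_select(X, y, max_vars=10):
    """Greedy forward selection: add the predictor that most reduces
    PRESS; stop when no candidate improves on the current model."""
    n, p = X.shape
    chosen, best = [], press(np.ones((n, 1)), y)  # intercept-only start
    while len(chosen) < max_vars:
        candidates = {
            j: press(np.column_stack([np.ones(n), X[:, chosen + [j]]]), y)
            for j in range(p) if j not in chosen
        }
        j, score = min(candidates.items(), key=lambda kv: kv[1])
        if score >= best:
            break
        chosen.append(j)
        best = score
    return chosen
```

The paper's contribution is the pFDR attached to the selected set; this sketch only reproduces the selection loop it starts from.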

4.
We propose a confidence envelope for false discovery control when testing multiple hypotheses of association simultaneously. The method is valid under arbitrary and unknown dependence between the test statistics and allows for an exploratory approach when choosing suitable rejection regions while still retaining strong control over the proportion of false discoveries.

5.
Controlling the false discovery rate (FDR) is a powerful approach to multiple testing, and procedures have been developed for applications in many areas. Dependence among the test statistics is a common problem, and many attempts have been made to extend the procedures to handle it. In this paper, we show that when the number of tests is large, a certain degree of dependence among the test statistics is allowed with no need for any correction. We then suggest a way to conservatively estimate the proportion of false nulls, both under dependence and independence, and discuss the advantages of using such estimators when controlling the FDR.
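The abstract does not give its estimator; as a reference point, a standard conservative estimator of the proportion of true nulls (Storey's lambda-based estimator) illustrates what such an estimate looks like and how it plugs into an adaptive procedure.

```python
import numpy as np

def pi0_storey(pvals, lam=0.5):
    """Conservative estimate of the proportion of true null hypotheses:
    null p-values are Uniform(0,1), so p-values above lam are mostly
    nulls, and #{p > lam} / ((1 - lam) * m) slightly overestimates pi0."""
    pvals = np.asarray(pvals)
    return min(1.0, np.mean(pvals > lam) / (1.0 - lam))
```

An adaptive FDR procedure then runs, for example, Benjamini-Hochberg at the inflated level q / pi0_storey(pvals), gaining power when many nulls are false.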

6.
A new multiple testing procedure, the generalized augmentation procedure (GAUGE), is introduced. The procedure is shown to control the false discovery exceedance and to be competitive in terms of power. It is also shown how to apply the idea of GAUGE to achieve control of other error measures. Extensions to dependence are discussed, together with a modification valid under arbitrary dependence. We present an application to an original study on prostate cancer and to a benchmark data set on colon cancer.
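GAUGE itself is not spelled out in the abstract; the augmentation idea it generalizes can be sketched as follows: reject an FWER-controlled core set, then add further rejections while the added fraction of rejections stays below the exceedance level c. The Holm core and the floor(c/(1-c) * |core|) augmentation count below are standard choices, assumed here for illustration.

```python
import numpy as np

def augmented_fdx(pvals, alpha=0.05, c=0.1):
    """Augmentation control of false discovery exceedance,
    P(FDP > c) <= alpha: a Holm (FWER) core set is augmented with the
    next smallest p-values, keeping the added share of rejections <= c."""
    pvals = np.asarray(pvals)
    m = pvals.size
    order = np.argsort(pvals)
    core = 0                              # Holm step-down core
    for i, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - i):
            core += 1
        else:
            break
    extra = int(np.floor(c / (1.0 - c) * core))
    return order[:min(core + extra, m)]   # indices of rejected hypotheses
```

On the event (probability at least 1 - alpha) that the core contains no false rejections, only the added rejections can be false, so the false discovery proportion is at most extra / (core + extra) <= c.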

7.
In a breakthrough paper, Benjamini and Hochberg (J Roy Stat Soc Ser B 57:289–300, 1995) proposed a new error measure for multiple testing, the FDR, and developed a distribution-free procedure to control it under independence among the test statistics. In this paper we argue, by extensive simulation and theoretical considerations, that the assumption of independence is not needed. Along the lines of (Ann Stat 32:1035–1061, 2004b), we moreover provide a more powerful method that exploits an estimator of the number of false nulls among the tests. We propose a whole family of iterative estimators that prove robust under both dependence and independence between the test statistics. These estimators can also be used to improve classical multiple testing procedures and, more generally, to estimate the weight of a known component in a mixture distribution. The innovations are illustrated by simulations.
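The family of iterative estimators is not given in the abstract; a well-known two-stage scheme in the same spirit (Benjamini, Krieger and Yekutieli's adaptive procedure, used here as a stand-in) shows how an estimated number of false nulls feeds back into the rejection rule.

```python
import numpy as np

def bh(pvals, q):
    """Benjamini-Hochberg step-up: reject the k smallest p-values,
    where k is the largest i with p_(i) <= i*q/m."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = np.nonzero(np.sort(p) <= q * np.arange(1, m + 1) / m)[0]
    return order[:below[-1] + 1] if below.size else order[:0]

def two_stage_adaptive_bh(pvals, q=0.05):
    """Stage 1 runs BH at a deflated level to estimate the number of
    true nulls m0 = m - r1; stage 2 reruns BH at the inflated q*m/m0."""
    m = len(pvals)
    r1 = len(bh(pvals, q / (1.0 + q)))
    m0 = m - r1
    return np.arange(m) if m0 == 0 else bh(pvals, q * m / m0)
```

Iterating this update, re-estimating m0 from the new rejection count until it stabilizes, gives one member of the kind of iterative family the paper studies.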

8.
This paper is concerned with exact control of the false discovery rate (FDR) for step-up-down (SUD) tests related to the asymptotically optimal rejection curve (AORC). Since the system of equations and/or constraints for critical values and FDRs is numerically extremely sensitive, existence and computation of valid solutions is a challenging problem. We derive explicit formulas for upper bounds of the FDR and show that, under a well-known monotonicity condition, control of the FDR by a step-up procedure results in control of the FDR by a corresponding SUD procedure. Various methods for adjusting the AORC to achieve finite FDR control are investigated. Moreover, we introduce alternative FDR bounding curves and study their connection to rejection curves, as well as the existence of critical values for exact FDR control with respect to the underlying FDR bounding curve. Finally, we propose an iterative method for the computation of critical values.
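For concreteness: the AORC is f_alpha(t) = t / (alpha + (1 - alpha) t), and solving f_alpha(a_i) = i/n gives the induced critical values in closed form. With no adjustment the largest critical value equals 1, which is exactly why finite-sample adjustments are needed; the additive-beta variant below is one common form, stated here as an assumption.

```python
import numpy as np

def aorc_critical_values(n, alpha, beta=0.0):
    """Critical values a_{i:n} = i*alpha / (n + beta - i*(1 - alpha))
    induced by the AORC f_a(t) = t / (a + (1 - a)*t); beta > 0 is a
    finite-sample adjustment shrinking the values below 1."""
    i = np.arange(1, n + 1)
    return i * alpha / (n + beta - i * (1.0 - alpha))

print(aorc_critical_values(5, 0.05))            # unadjusted: last value is 1
print(aorc_critical_values(5, 0.05, beta=1.0))  # adjusted: all values < 1
```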

9.
This paper studies the asymptotic behaviour of the false discovery and non-discovery proportions of the dynamic adaptive procedure under some dependence structures. A Bahadur-type representation of the cut-off point in simultaneously performing a large number of tests is presented. The asymptotic bias decompositions of the false discovery and non-discovery proportions are given under some dependence structures. Beyond the existing literature, we find that the randomness due to the dynamic selection of the tuning parameter in estimating the true null rate serves as a source of approximation error in the Bahadur representation and enters into the asymptotic bias terms of both the false discovery and the false non-discovery proportions. The theory explains to some extent why some seemingly attractive dynamic adaptive procedures do not substantially outperform competing fixed adaptive procedures in some situations. Simulations support our theory and findings.

10.
A new process-monitoring scheme is proposed, using the Storey procedure for controlling the positive false discovery rate in multiple testing. For the 2-span control scheme, it is shown numerically that the proposed method outperforms the X-bar chart in terms of the average run length. Simulations are conducted to evaluate the performance of the proposed scheme in terms of the average run length and the conditional expected delay, and the results are compared with those of existing monitoring schemes, including the X-bar chart. The false discovery rate is also estimated and compared with the target control level.

11.
12.
The positive false discovery rate (pFDR) is the expected proportion of false rejections given that the overall number of rejections is greater than zero. Assuming that the proportion of true null hypotheses, the proportion of false positives, and the proportion of true positives all converge pointwise, the pFDR converges to a continuous limit uniformly over all significance levels. We show that the uniform convergence still holds under the weaker assumption that the proportion of true positives converges in L1.
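For intuition, under Storey's two-component mixture model with independent tests (an assumption not made in the abstract, which works under weaker conditions), the limit has a closed form:

```latex
% p-values are null w.p. \pi_0 (Uniform(0,1)) and non-null w.p. 1-\pi_0
% with c.d.f. F_1; for a fixed significance level \gamma \in (0,1]:
\operatorname{pFDR}(\gamma)
  = \frac{\pi_0\,\gamma}{\pi_0\,\gamma + (1-\pi_0)\,F_1(\gamma)} .
```

The three pointwise-convergence assumptions in the abstract correspond to the convergence of the numerator and of both terms in the denominator above.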

13.
To resolve conflicting information between the data and the prior distribution, Bayesian modelling with heavy-tailed distributions is applied. Exploiting properties of regularly varying functions and distribution functions, as well as their relationship with the finiteness of moments, we establish results for both location and shape parameter structures. As a side result, rates of convergence are also derived.

14.
The modelling process in Bayesian statistics constitutes the fundamental stage of the analysis, since the inferences may vary considerably depending on the chosen probability laws. This is particularly true when conflicts arise between two or more sources of information. For instance, inference in the presence of an outlier (which conflicts with the information provided by the other observations) can be highly dependent on the assumed sampling distribution. When heavy-tailed (e.g. t) distributions are used, outliers may be rejected, whereas this kind of robust inference is not available when we use light-tailed (e.g. normal) distributions. A long line of literature has established sufficient conditions on location-parameter models for resolving conflict in various ways. In this work, we consider a location-scale parameter structure, which is more complex than the single-parameter cases because conflicts can arise between three sources of information, namely the likelihood, the prior distribution for the location parameter, and the prior for the scale parameter. We establish sufficient conditions on the distributions in a location-scale model to resolve conflicts in different ways as a single observation tends to infinity. In addition, for each case, we explicitly give the limiting posterior distributions as the conflict becomes more extreme.
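A quick numerical illustration of the behaviour described, using made-up data, a flat prior on the location, and a grid approximation (none of which come from the paper):

```python
import numpy as np
from scipy.stats import norm, t

def posterior_mean(y, likelihood, grid=np.linspace(-20, 60, 4001)):
    """Grid approximation to the posterior mean of a location parameter
    under a flat prior: p(mu | y) is proportional to prod_i f(y_i - mu)."""
    loglik = sum(likelihood.logpdf(yi - grid) for yi in y)
    w = np.exp(loglik - loglik.max())
    return np.sum(grid * w) / np.sum(w)

y = [0.1, -0.4, 0.3, -0.2, 50.0]    # last observation conflicts with the rest
print(posterior_mean(y, norm()))     # normal model: dragged to ~10
print(posterior_mean(y, t(df=2)))    # heavy-tailed model: outlier rejected, ~0
```

With the light-tailed (normal) sampling model the posterior mean follows the sample mean toward the conflicting observation; with the heavy-tailed Student-t model the outlier is effectively discarded, which is the resolution-of-conflict behaviour the paper characterizes in general.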

15.
Most current false discovery rate (FDR) procedures for microarray experiments assume restrictive dependence structures, which makes them less reliable. We present an FDR-controlling procedure, based on a Poisson distributional approximation, that is valid under suitable dependence structures. Unlike other procedures, the distribution of the false null hypotheses is estimated by kernel density estimation, allowing for dependence structures among the genes. Furthermore, we develop an FDR framework that minimizes the false non-discovery rate (FNR) subject to a constraint on the controlled FDR level. The performance of the proposed FDR procedure is compared with that of other existing FDR-controlling procedures in an application to simulated microarray data.

16.
Recent approaches to the statistical analysis of adverse event (AE) data in clinical trials have proposed the use of groupings of related AEs, such as by system organ class (SOC). These methods have opened up the possibility of scanning large numbers of AEs while controlling for multiple comparisons, making the comparative performance of the different methods, in terms of AE detection and error rates, of interest to investigators. We apply two Bayesian models and two procedures for controlling the false discovery rate (FDR), all of which use groupings of AEs, to real clinical trial safety data. We find that while the Bayesian models are appropriate for the full data set, the error-controlling methods give results similar to the Bayesian methods only when low-incidence AEs are removed. A simulation study is used to compare the relative performance of the methods over full trial data sets and over data sets with low-incidence AEs and SOCs removed. We find that while the removal of low-incidence AEs increases the power of the error-controlling procedures, the estimated power of the Bayesian methods remains relatively constant across all data sizes. Automatic removal of low-incidence AEs does, however, affect the error rates of all the methods, and a clinically guided approach to their removal is needed. Overall, we found the Bayesian approaches particularly useful for scanning the large amounts of AE data gathered.
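One widely used grouped FDR procedure for AE data is the two-stage "Double FDR" idea of Mehrotra and Heyse: screen SOCs by their most significant AE, then test within flagged SOCs. The abstract does not say this is one of its two procedures, so the sketch below is purely illustrative.

```python
import numpy as np

def bh_mask(pvals, q):
    """Benjamini-Hochberg step-up, returned as a boolean rejection mask."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = np.nonzero(np.sort(p) <= q * np.arange(1, m + 1) / m)[0]
    mask = np.zeros(m, dtype=bool)
    if below.size:
        mask[order[:below[-1] + 1]] = True
    return mask

def double_fdr(ae_pvals_by_soc, q1=0.10, q2=0.10):
    """Stage 1: BH across SOCs on each SOC's minimum AE p-value.
    Stage 2: BH within each flagged SOC, over its individual AEs."""
    socs = list(ae_pvals_by_soc)
    stage1 = bh_mask([min(ae_pvals_by_soc[s]) for s in socs], q1)
    return {s: bh_mask(ae_pvals_by_soc[s], q2)
            for s, keep in zip(socs, stage1) if keep}
```

The grouping matters because low-incidence AEs produce sparse, discrete p-values; removing them, as the paper discusses, changes both stages of such a procedure.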

17.
When θ is a multidimensional parameter, the issue of prior dependence or independence of coordinates is a serious concern. This is especially true in robust Bayesian analysis; Lavine et al. (J. Amer. Statist. Assoc. 86, 964–971 (1991)) show that allowing a wide range of prior dependencies among coordinates can result in near-vacuous conclusions. It is sometimes possible, however, to judge confidently that the coordinates of θ are independent a priori and, when this can be done, robust Bayesian conclusions improve dramatically. In this paper, it is shown how to incorporate the independence assumption into robust Bayesian analysis involving ε-contamination and density band classes of priors. Attention is restricted to the case θ = (θ1, θ2) for clarity, although the ideas generalize.

18.
Robust estimation of parameters and identification of specific data points that are discordant with an assumed model are often treated as different statistical problems. The two aims are, however, closely interrelated, and in many cases the two analyses are required simultaneously. We present a simple diagnostic plot that connects existing robust estimators with simultaneous outlier detection and uses the concept of false discovery rates to allow for the multiple comparisons induced by considering each point as a potential outlier. It is straightforward to implement and applicable in any situation for which robust estimation procedures exist. Several examples are given.
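The paper's diagnostic plot is not reproduced here; a minimal numeric version of the underlying recipe (robust location/scale, then FDR-calibrated outlier flags; the median/MAD choice is mine) looks like this:

```python
import numpy as np
from scipy.stats import norm

def flag_outliers(x, q=0.05):
    """Fit a robust location/scale (median and MAD), convert each point's
    standardized distance into a two-sided normal p-value, and flag the
    points surviving a Benjamini-Hochberg cut at FDR level q."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))  # consistent for a normal sd
    pvals = 2.0 * norm.sf(np.abs(x - med) / mad)
    m = pvals.size
    order = np.argsort(pvals)
    below = np.nonzero(np.sort(pvals) <= q * np.arange(1, m + 1) / m)[0]
    flags = np.zeros(m, dtype=bool)
    if below.size:
        flags[order[:below[-1] + 1]] = True
    return flags

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(size=100), [6.0, -7.5]])
print(np.nonzero(flag_outliers(x))[0])   # indices of the planted outliers
```

Because the estimator is robust, the planted outliers do not inflate the scale estimate, and the FDR correction keeps the expected fraction of falsely flagged clean points controlled.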

19.
Bayesian Inference Under Partial Prior Information
Partial prior information on the marginal distribution of an observable random variable is considered. When this information is incorporated into the statistical analysis of an assumed parametric model, the posterior inference is typically non-robust, so that no inferential conclusion can be reached. To overcome this difficulty, a method based on the standard default prior associated with the model and an intrinsic procedure is proposed. Posterior robustness of the resulting inferences is analysed and some illustrative examples are provided.

20.
The false discovery rate (FDR) is a multiple hypothesis testing quantity that describes the expected proportion of false positive results among all rejected null hypotheses. Benjamini and Hochberg introduced this quantity and proved that a particular step-up p-value method controls the FDR. Storey introduced a point estimate of the FDR for fixed significance regions. The former approach conservatively controls the FDR at a fixed predetermined level, and the latter provides a conservatively biased estimate of the FDR for a fixed predetermined significance region. In this work, we show in both finite sample and asymptotic settings that the goals of the two approaches are essentially equivalent. In particular, the FDR point estimates can be used to define valid FDR controlling procedures. In the asymptotic setting, we also show that the point estimates can be used to estimate the FDR conservatively over all significance regions simultaneously, which is equivalent to controlling the FDR at all levels simultaneously. The main tool that we use is to translate existing FDR methods into procedures involving empirical processes. This simplifies finite sample proofs, provides a framework for asymptotic results and proves that these procedures are valid even under certain forms of dependence.
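The finite-sample half of the equivalence can be checked numerically: thresholding Storey's point estimate, estimated FDR(t) = pi0*m*t / #{p_i <= t}, at level q with the conservative choice pi0 = 1 picks exactly the Benjamini-Hochberg cut-off (the simulated p-values below are illustrative).

```python
import numpy as np

def bh_threshold(pvals, q):
    """Benjamini-Hochberg: largest p_(i) with p_(i) <= i*q/m."""
    p = np.sort(pvals)
    m = p.size
    ok = np.nonzero(p <= q * np.arange(1, m + 1) / m)[0]
    return p[ok[-1]] if ok.size else 0.0

def storey_threshold(pvals, q, pi0=1.0):
    """Largest candidate t with estimated FDR pi0*m*t / #{p_i <= t} <= q;
    the candidate thresholds are the observed p-values themselves."""
    p = np.sort(pvals)
    m = p.size
    fdr_hat = pi0 * m * p / np.arange(1, m + 1)
    ok = np.nonzero(fdr_hat <= q)[0]
    return p[ok[-1]] if ok.size else 0.0

rng = np.random.default_rng(2)
pv = np.concatenate([rng.uniform(size=900), rng.beta(0.1, 5.0, size=100)])
print(bh_threshold(pv, 0.1), storey_threshold(pv, 0.1))   # identical values
```

Plugging in an estimated pi0 < 1 instead of 1 gives the more powerful adaptive procedure, which is the direction the paper's equivalence makes rigorous.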
