1.
An algorithm is presented for computing the finite population parameters and the approximate probability values associated with a recently-developed class of statistical inference techniques termed multi-response randomized block permutation procedures (MRBP).
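For orientation, a generic Monte Carlo version of a randomized-block permutation test can be sketched as follows. This illustrates the family of procedures only, not the exact finite-population-moment algorithm the abstract presents; the data layout, the Euclidean distance measure, and the statistic are assumptions made for the example.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def mrbp_delta(x):
    """Average within-treatment Euclidean distance between blocks.
    x has shape (b, g, r): b blocks, g treatments, r response variables."""
    b, g, _ = x.shape
    dists = [np.linalg.norm(x[j, t] - x[k, t])
             for t in range(g) for j, k in combinations(range(b), 2)]
    return np.mean(dists)

def mrbp_perm_test(x, n_perm=999):
    """Monte Carlo p-value: permute treatment labels within each block;
    small delta indicates within-treatment agreement across blocks."""
    obs = mrbp_delta(x)
    b, g, _ = x.shape
    hits = sum(
        mrbp_delta(np.stack([x[i, rng.permutation(g)] for i in range(b)])) <= obs
        for _ in range(n_perm))
    return obs, (hits + 1) / (n_perm + 1)

# toy data: 5 blocks, 3 treatments, bivariate responses; treatment 3 shifted
x = rng.normal(size=(5, 3, 2))
x[:, 2, :] += 1.5
delta, pval = mrbp_perm_test(x)
print(f"delta = {delta:.3f}, Monte Carlo p = {pval:.4f}")
```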
2.
Summary.  The false discovery rate (FDR) is a multiple hypothesis testing quantity that describes the expected proportion of false positive results among all rejected null hypotheses. Benjamini and Hochberg introduced this quantity and proved that a particular step-up p-value method controls the FDR. Storey introduced a point estimate of the FDR for fixed significance regions. The former approach conservatively controls the FDR at a fixed predetermined level, and the latter provides a conservatively biased estimate of the FDR for a fixed predetermined significance region. In this work, we show in both finite sample and asymptotic settings that the goals of the two approaches are essentially equivalent. In particular, the FDR point estimates can be used to define valid FDR controlling procedures. In the asymptotic setting, we also show that the point estimates can be used to estimate the FDR conservatively over all significance regions simultaneously, which is equivalent to controlling the FDR at all levels simultaneously. The main tool that we use is to translate existing FDR methods into procedures involving empirical processes. This simplifies finite sample proofs, provides a framework for asymptotic results and proves that these procedures are valid even under certain forms of dependence.
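A minimal sketch of the two objects being connected, the Benjamini-Hochberg step-up rule and the fixed-region FDR point estimate; the toy data and the tuning choice lambda = 0.5 are assumptions for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy p-values: 900 true nulls (uniform) + 100 alternatives (skewed toward 0)
p = np.concatenate([rng.uniform(size=900), rng.beta(0.2, 5.0, size=100)])

def benjamini_hochberg(p, alpha=0.05):
    """Step-up rule: reject the k smallest p-values,
    where k = max{ i : p_(i) <= alpha * i / m }."""
    m = p.size
    order = np.argsort(p)
    ok = np.nonzero(np.sort(p) <= alpha * np.arange(1, m + 1) / m)[0]
    k = ok[-1] + 1 if ok.size else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

def storey_fdr_estimate(p, t, lam=0.5):
    """FDR point estimate for the fixed region [0, t]:
    pi0_hat * m * t / max(#{p <= t}, 1), pi0_hat = #{p > lam} / ((1 - lam) m)."""
    m = p.size
    pi0_hat = np.mean(p > lam) / (1 - lam)
    return pi0_hat * m * t / max(np.sum(p <= t), 1)

rej = benjamini_hochberg(p, alpha=0.05)
t = p[rej].max() if rej.any() else 0.0
print(f"BH rejections: {rej.sum()} at implied threshold t = {t:.4f}")
print(f"FDR point estimate over [0, t]: {storey_fdr_estimate(p, t):.4f}")
```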
3.
Robust Bayesian testing of point null hypotheses is considered for problems involving the presence of nuisance parameters. The robust Bayesian approach seeks answers that hold for a range of prior distributions. Three techniques for handling the nuisance parameter are studied and compared. They are (i) utilize a noninformative prior to integrate out the nuisance parameter; (ii) utilize a test statistic whose distribution does not depend on the nuisance parameter; and (iii) use a class of prior distributions for the nuisance parameter. These approaches are studied in two examples, the univariate normal model with unknown mean and variance, and a multivariate normal example.
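A hedged numerical sketch of the flavor of (i) and of the range-of-answers idea, in the univariate normal example: a Jeffreys prior integrates out the nuisance scale, and a class of normal priors on the mean, indexed by a scale tau, yields a range of Bayes factors. The grids, the tau values, and the toy data are assumptions; this is brute-force grid integration, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.4, 1.0, size=20)              # toy sample
n, sx, sx2 = x.size, x.sum(), np.sum(x ** 2)

# integration grids (ranges are assumptions sized to this toy sample)
sig = np.linspace(0.2, 4.0, 800)
mu = np.linspace(-4.0, 4.0, 801)
dsig, dmu = sig[1] - sig[0], mu[1] - mu[0]
M, S = np.meshgrid(mu, sig, indexing="ij")

def norm_lik(m, s):
    """Normal likelihood of the sample at mean m, sd s (broadcasts over grids)."""
    rss = sx2 - 2 * m * sx + n * m ** 2        # = sum((x_i - m)^2)
    return (2 * np.pi * s ** 2) ** (-n / 2) * np.exp(-rss / (2 * s ** 2))

# Jeffreys prior 1/sigma integrates out the nuisance scale; under H1 the mean
# gets a N(0, (tau * sigma)^2) prior, with tau ranging over a class of priors
m0 = np.sum(norm_lik(0.0, sig) / sig) * dsig   # marginal under H0: mu = 0
bf01 = {}
for tau in (0.5, 1.0, 2.0, 5.0):
    prior_mu = np.exp(-M ** 2 / (2 * (tau * S) ** 2)) / (np.sqrt(2 * np.pi) * tau * S)
    m1 = np.sum(norm_lik(M, S) * prior_mu / S) * dmu * dsig
    bf01[tau] = m0 / m1

print({t: round(b, 2) for t, b in bf01.items()})
print(f"robust-Bayes range over the class: BF01 in "
      f"[{min(bf01.values()):.2f}, {max(bf01.values()):.2f}]")
```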
4.
The usual concept of robustness is called "criterion" or "non-adaptive" robustness to distinguish it from "inference" or "adaptive" robustness. The former term is applied to describe relative insensitivity to changes in the parent distribution, while the latter specifically implies dependence on, and hence adaptation to, changes in the parent distribution. It is argued that knowledge of, and sensitivity to, the parent distribution is an important aspect of inference, and thus the latter concept of robustness is more relevant than the former. This focuses attention on adaptive procedures that use most of the sample information, that is, are efficient. Maximum likelihood has been criticized as depending critically on knowledge of the exact parent distribution, and hence as lacking criterion or non-adaptive robustness. This might have been justified when computation was limited; but when the assumed distribution is extended by a shape parameter to allow for uncertainty of shape, the method of maximum likelihood is shown to possess the more important requirement of being adaptive and efficient, capable of assessing the more relevant criterion of inference or adaptive robustness.
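As a concrete (assumed) instance of the enlarged-model idea, one can embed the normal model in the Student-t family, whose degrees-of-freedom parameter plays the role of the shape parameter; maximum likelihood then adapts the shape to the observed tail behaviour, so the location estimate automatically downweights gross errors.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# heavy-tailed toy sample: standard normal contaminated with gross errors
x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(0.0, 10.0, 5)])

# enlarge the normal model with a shape (degrees-of-freedom) parameter:
# ML over the Student-t family fits the shape to the data, not by assumption
df, loc, scale = stats.t.fit(x)
print(f"fitted shape df = {df:.2f} (small df => heavy tails detected)")
print(f"adaptive ML location = {loc:.3f} vs raw sample mean = {x.mean():.3f}")
```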
5.
6.
Abstract

The present note explores sources of misplaced criticisms of P-values, such as conflicting definitions of “significance levels” and “P-values” in authoritative sources, and the consequent misinterpretation of P-values as error probabilities. It then discusses several properties of P-values that have been presented as fatal flaws: That P-values exhibit extreme variation across samples (and thus are “unreliable”), confound effect size with sample size, are sensitive to sample size, and depend on investigator sampling intentions. These properties are often criticized from a likelihood or Bayesian framework, yet they are exactly the properties P-values should exhibit when they are constructed and interpreted correctly within their originating framework. Other common criticisms are that P-values force users to focus on irrelevant hypotheses and overstate evidence against those hypotheses. These problems are not however properties of P-values but are faults of researchers who focus on null hypotheses and overstate evidence based on misperceptions that p = 0.05 represents enough evidence to reject hypotheses. Those problems are easily seen without use of Bayesian concepts by translating the observed P-value p into the Shannon information (S-value or surprisal) −log2(p).
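The closing translation is mechanical; a small sketch with assumed toy P-values:

```python
import numpy as np

def s_value(p):
    """Shannon information (surprisal) of a P-value, in bits: -log2(p)."""
    return -np.log2(p)

for p in (0.05, 0.01, 0.001):
    print(f"p = {p}: S = {s_value(p):.1f} bits, roughly as surprising as "
          f"{s_value(p):.0f} heads in a row from a fair coin")
```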
7.
8.
ABSTRACT

The current concerns about reproducibility have focused attention on proper use of statistics across the sciences. This gives statisticians an extraordinary opportunity to change what are widely regarded as statistical practices detrimental to the cause of good science. However, how that should be done is enormously complex, made more difficult by the balkanization of research methods and statistical traditions across scientific subdisciplines. Working within those sciences while also allying with science reform movements, operating simultaneously on the micro and macro levels, is the key to making lasting change in applied science.
9.
When thousands of tests are performed simultaneously to detect differentially expressed genes in microarray analysis, the number of Type I errors can be immense if a multiplicity adjustment is not made. However, due to the large scale, traditional adjustment methods require very stringent significance levels for individual tests, which yield low power for detecting alterations. In this work, we describe how two omnibus tests can be used in conjunction with a gene filtration process to circumvent difficulties due to the large scale of testing. These two omnibus tests, the D-test and the modified likelihood ratio test (MLRT), can be used to investigate whether a collection of P-values has arisen from the Uniform(0,1) distribution or whether the Uniform(0,1) distribution contaminated by another Beta distribution is more appropriate. In the former case, attention can be directed to a smaller part of the genome; in the latter event, parameter estimates for the contamination model provide a frame of reference for multiple comparisons. Unlike the likelihood ratio test (LRT), both the D-test and MLRT enjoy simple limiting distributions under the null hypothesis of no contamination, so critical values can be obtained from standard tables. Simulation studies demonstrate that the D-test and MLRT are superior to the AIC, BIC, and Kolmogorov-Smirnov test. A case study illustrates omnibus testing and filtration.
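The contamination model can be sketched generically as follows. This fits the Uniform-plus-Beta mixture by maximum likelihood and reports the ordinary LRT; it is not the D-test or the MLRT themselves (the abstract does not spell out their forms), and the ordinary LRT's nonstandard null distribution at the boundary is exactly the difficulty those tests avoid. The toy data, starting values, and bounds are assumptions.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(4)
# toy "genome": 9000 null p-values plus 1000 from a Beta(0.3, 4) alternative
p = np.concatenate([rng.uniform(size=9000), rng.beta(0.3, 4.0, size=1000)])

def negloglik(theta):
    """Contamination model: (1 - gamma) * Uniform(0,1) + gamma * Beta(a, b)."""
    gamma, a, b = theta
    dens = (1 - gamma) + gamma * stats.beta.pdf(p, a, b)
    return -np.sum(np.log(dens))

res = optimize.minimize(negloglik, x0=[0.1, 0.5, 2.0], method="L-BFGS-B",
                        bounds=[(1e-6, 1 - 1e-6), (1e-3, 50.0), (1e-3, 50.0)])
gamma, a, b = res.x
# the null (gamma = 0, pure Uniform) has log-likelihood exactly 0, so:
lrt = -2 * res.fun
print(f"gamma = {gamma:.3f}, a = {a:.3f}, b = {b:.3f}, ordinary LRT = {lrt:.1f}")
# note: gamma = 0 lies on the parameter boundary, so this LRT's null
# distribution is nonstandard -- the motivation for the D-test and MLRT
```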
10.