571.
The idea of modifying, and potentially improving, classical multiple testing methods that control the familywise error rate (FWER) via an estimate of the unknown number of true null hypotheses had been around for a long time without a formal answer to whether such adaptive methods ultimately maintain strong control of the FWER, until Finner and Gontscharuk (2009) and Guo (2009) offered some answers. A class of adaptive Bonferroni and Šidák methods larger than that considered in those papers is introduced, with FWER control now proved under a weaker distributional setup. Numerical results show that there are versions of the adaptive Bonferroni and Šidák methods that can perform better under certain positive dependence situations than those previously considered. A different adaptive Holm method and its step-up analog, referred to as an adaptive Hochberg method, are also introduced, and their FWER control is proved asymptotically, as in those papers. These adaptive Holm and Hochberg methods are numerically seen to often outperform the previously considered adaptive Holm method.
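To illustrate the general idea, here is a minimal sketch of an adaptive Bonferroni procedure that plugs a Storey-type estimate of the number of true nulls into the usual Bonferroni threshold. The estimator and the tuning parameter `lam` are standard textbook assumptions, not the specific class of methods introduced in the paper:

```python
import numpy as np

def adaptive_bonferroni(pvals, alpha=0.05, lam=0.5):
    """Reject p_i <= alpha / n0_hat, where n0_hat is a Storey-type
    estimate of the number of true null hypotheses (illustrative
    sketch only; not the paper's specific construction)."""
    p = np.asarray(pvals)
    m = p.size
    # Storey-type estimator: p-values above lam are mostly true nulls
    n0_hat = min(m, (1 + np.sum(p > lam)) / (1 - lam))
    return p <= alpha / n0_hat

def bonferroni(pvals, alpha=0.05):
    """Classical Bonferroni, for comparison."""
    p = np.asarray(pvals)
    return p <= alpha / p.size
```

When many hypotheses are false, `n0_hat` is well below `m`, so the adaptive threshold `alpha / n0_hat` is less conservative than the classical `alpha / m`.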
572.
We are concerned with the problem of estimating the treatment effects at the effective doses in a dose-finding study. Under monotone dose-response, the effective doses can be identified through the estimation of the minimum effective dose, for which there is an extensive set of statistical tools. In particular, when a fixed-sequence multiple testing procedure is used to estimate the minimum effective dose, Hsu and Berger (1999) show that the confidence lower bounds for the treatment effects can be constructed without the need to adjust for multiplicity. Their method, called the dose-response method, is simple to use, but does not account for the magnitude of the observed treatment effects. As a result, the dose-response method will estimate the treatment effects at effective doses with confidence bounds invariably identical to the hypothesized value. In this paper, we propose an error-splitting method as a variant of the dose-response method to construct confidence bounds at the identified effective doses after a fixed-sequence multiple testing procedure. Our proposed method has the virtue of simplicity as in the dose-response method, preserves the nominal coverage probability, and provides sharper bounds than the dose-response method in most cases.
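A hedged sketch of the fixed-sequence idea under a normal approximation: doses are tested from the highest down, and, in the spirit of Hsu and Berger (1999), every dose declared effective receives the hypothesized value `delta0` as its lower confidence bound, which is precisely the limitation the error-splitting method is designed to address. The function name and the normal-approximation setup are illustrative assumptions, not the paper's construction:

```python
from statistics import NormalDist

def fixed_sequence_lower_bounds(estimates, ses, delta0=0.0, alpha=0.05):
    """Fixed-sequence testing from the highest dose down (illustrative
    sketch under a normal approximation).  A dose is declared effective
    when its one-sided lower bound exceeds delta0; per Hsu-Berger, its
    reported lower bound is then delta0 itself.  The first failing dose
    gets its ordinary lower bound, and testing stops."""
    z = NormalDist().inv_cdf(1 - alpha)
    bounds = []
    for est, se in zip(estimates, ses):
        lower = est - z * se
        if lower > delta0:            # dose passes: declared effective
            bounds.append(delta0)     # bound stuck at the hypothesized value
        else:                         # first failure: report bound and stop
            bounds.append(lower)
            break
    return bounds
```

Note how every effective dose reports the same bound `delta0` regardless of how large its observed effect is; this is the behavior the abstract criticizes.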
573.
Multiple hypothesis testing (MHT) based on the false discovery rate (FDR) has become an effective new approach to large-scale statistical inference. Taking error control as its main thread, this paper surveys the error-control theory, methods, procedures, and recent advances in multiple hypothesis testing, and discusses prospects for applying multiple-testing methods in econometrics.
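For concreteness, the canonical FDR-controlling procedure of Benjamini and Hochberg can be sketched as follows (a standard textbook implementation, not code from the paper):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure controlling the FDR at
    level q.  Returns a boolean rejection mask in the original order."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m   # q * i / m for i = 1..m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        # largest i with p_(i) <= q * i / m; reject all smaller p-values
        k = np.nonzero(below)[0].max()
        reject[order[:k + 1]] = True
    return reject
```

Because the cutoff grows with the rank `i`, BH typically rejects more hypotheses than FWER-controlling methods such as Bonferroni, which is what makes FDR control attractive for large-scale inference.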
574.
We propose a weighted empirical likelihood approach to inference with multiple samples, including stratified sampling, the estimation of a common mean using several independent and non-homogeneous samples and inference on a particular population using other related samples. The weighting scheme and the basic result are motivated and established under stratified sampling. We show that the proposed method can ideally be applied to the common mean problem and problems with related samples. The proposed weighted approach not only provides a unified framework for inference with multiple samples, including two-sample problems, but also facilitates asymptotic derivations and computational methods. A bootstrap procedure is also proposed in conjunction with the weighted approach to provide better coverage probabilities for the weighted empirical likelihood ratio confidence intervals. Simulation studies show that the weighted empirical likelihood confidence intervals perform better than existing ones.
575.
SiZer (SIgnificant ZERo crossing of the derivatives) is a scale-space visualization tool for statistical inference. In this paper we introduce a graphical device, based on SiZer, for testing the equality of the means of two time series. The estimation of the quantile in a confidence interval is theoretically justified by advanced distribution theory. The extension of the proposed method to the comparison of more than two time series is also carried out using residual analysis. A broad numerical study is conducted to demonstrate the finite-sample performance of the proposed tool. In addition, asymptotic properties of SiZer for the comparison of two time series are investigated.
577.
We contrast comparisons of several treatments to control in a single experiment versus separate experiments in terms of Type I error rate and power. It is shown that if no Dunnett correction is applied in the single-experiment case with relatively few treatments, the distribution of the number of Type I errors is not that different from what it would be in separate experiments with the same number of subjects in each treatment. The difference becomes more pronounced with a larger number of treatments. Extreme outcomes (either very few or very many rejections) are more likely when comparisons are made in a single experiment. When the total number of subjects is the same in the single-experiment and separate-experiment designs, power is generally higher in a single experiment even if a Dunnett adjustment is made.
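The error-rate inflation described above is easy to quantify in the idealized case of independent tests, a simplifying assumption made here only for illustration; comparisons sharing one control group are in fact positively correlated, which is what the Dunnett procedure exploits:

```python
# Chance of at least one false rejection when k treatment-vs-control
# comparisons are each tested at level alpha with no multiplicity
# adjustment, assuming independence for simplicity.  The Bonferroni
# per-test level alpha/k is shown as a crude stand-in for the (less
# conservative) Dunnett correction.
alpha = 0.05
for k in (2, 5, 10):
    fwer = 1 - (1 - alpha) ** k
    print(f"k={k:2d}: unadjusted FWER ~ {fwer:.3f}; "
          f"Bonferroni per-test level = {alpha / k:.4f}")
```

With k = 5 unadjusted comparisons the familywise error rate already exceeds 0.22, more than four times the nominal 0.05.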
578.
Several multiple time series models are developed and applied to the analysis and forecasting of the M1 and M2 money supply aggregates. These models feature a decomposition of the time series into permanent and transient influences or components. This decomposition appears to enhance forecasting accuracy and is associated with a variance-covariance allocation parameter that is also estimated from the data. Conditional maximum likelihood estimates for model parameters are presented as well as a numerical algorithm that is an adaptation of Marquardt's algorithm.
579.
In this paper we propose a latent class based multiple imputation approach for analyzing missing categorical covariate data in a highly stratified data model. In this approach, we impute the missing data assuming a latent class imputation model and we use likelihood methods to analyze the imputed data. Via extensive simulations, we study its statistical properties and make comparisons with complete case analysis, multiple imputation, saturated log-linear multiple imputation and the Expectation–Maximization approach under seven missing data mechanisms (including missing completely at random, missing at random and not missing at random). These methods are compared with respect to bias, asymptotic standard error, Type I error, and 95% coverage probabilities of parameter estimates. Simulations show that, under many missingness scenarios, latent class multiple imputation performs favorably when jointly considering these criteria. A data example from a matched case–control study of the association between multiple myeloma and polymorphisms of the interleukin-6 genes is considered.
580.
By entering the data (y_i, x_i) followed by (−y_i, −x_i), one can obtain an intercept-free regression Y = Xβ + ε from a program package that normally uses an intercept term. There is no bias in the resulting regression coefficients, but a minor post-analysis adjustment is needed to the residual variance and standard errors.
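A quick numerical check of the trick, using NumPy least squares as a stand-in for a package that always fits an intercept. The doubled data set is symmetric about the origin, so the fitted intercept is forced to zero and the slopes match a genuinely intercept-free fit:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))
beta = np.array([1.5, -2.0])
y = x @ beta + 0.1 * rng.normal(size=50)

# "Doubled" design: enter (y, x) followed by (-y, -x).
X_aug = np.vstack([x, -x])
y_aug = np.concatenate([y, -y])

# Fit WITH an intercept column, as a stock package would.
X_with_const = np.column_stack([np.ones(100), X_aug])
coef, *_ = np.linalg.lstsq(X_with_const, y_aug, rcond=None)

# Direct no-intercept fit, for comparison.
coef_direct, *_ = np.linalg.lstsq(x, y, rcond=None)

print(coef[0])                 # intercept: ~0 by symmetry
print(coef[1:], coef_direct)   # slopes agree with the direct fit
```

The post-analysis adjustment the abstract mentions is needed because the doubled fit sees 2n observations: by symmetry its residual sum of squares is twice that of the direct fit, and the reported degrees of freedom (2n − p − 1) should be n − p.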