101.
The method of ratio estimation for estimating the population mean Ȳ of a characteristic y, when auxiliary information on a characteristic x highly correlated with y is available, consists in obtaining an estimator of the population ratio R = Ȳ/X̄ and then multiplying this estimator by the known population mean X̄. Though efficient, ratio estimators are in general biased, and in this article we review some of the unbiased ratio estimators and discuss a method of constructing them. Next we present the Jackknife technique for reducing bias and show how the generalized Jackknife can be interpreted by the same method.
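As a rough numerical illustration of the two ideas in this abstract, the sketch below (Python, with made-up population data) computes the classical ratio estimate (ȳ/x̄)·X̄ and a delete-one Jackknife correction of the ratio. It is a minimal sketch of the standard textbook procedure, not of the unbiased estimators the article itself constructs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: x is the auxiliary variable, y is correlated with x.
N = 5000
x_pop = rng.gamma(shape=4.0, scale=10.0, size=N)
y_pop = 2.5 * x_pop + rng.normal(0.0, 15.0, size=N)
X_bar = x_pop.mean()                      # known population mean of x

# Simple random sample without replacement.
n = 60
idx = rng.choice(N, size=n, replace=False)
x, y = x_pop[idx], y_pop[idx]

# Classical ratio estimator of the population mean of y:
#   R_hat = y_bar / x_bar,  Y_hat = R_hat * X_bar  (biased in general).
r_hat = y.mean() / x.mean()
y_ratio = r_hat * X_bar

# Delete-one Jackknife: recompute the ratio leaving out each unit in turn,
# then combine to remove the leading O(1/n) bias term.
r_loo = np.array([np.delete(y, i).mean() / np.delete(x, i).mean()
                  for i in range(n)])
r_jack = n * r_hat - (n - 1) * r_loo.mean()
y_jack = r_jack * X_bar

print(f"true mean      : {y_pop.mean():.2f}")
print(f"ratio estimate : {y_ratio:.2f}")
print(f"jackknifed     : {y_jack:.2f}")
```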
102.
Employing certain generalized random permutation models and a general class of linear estimators of a finite population mean, it is shown that many of the conventional estimators are “optimal” in the sense of minimum average mean square error. Simple proofs are provided by using a well-known theorem on UMV estimation. The results also cover certain simple response error situations.
103.
Risk-based cleanup goals or preliminary remediation goals (PRGs) are established at hazardous waste sites when contaminant concentrations in air, soil, surface water, or groundwater exceed specified acceptable risk levels. When derived in accordance with the Environmental Protection Agency's risk assessment guidance, the PRG is intended to represent the average contaminant concentration within an exposure unit area that is left on the site following remediation. The PRG, however, has frequently been applied inconsistently at Superfund sites, with a number of remediation decisions using the PRG as a not-to-exceed concentration (NTEC). Such misapplications could result in overly conservative and unnecessarily costly remedial actions. The PRG should be applied in remedial actions in the same manner in which it was generated. Statistical methods, such as Bower's Confidence Response Goal, and mathematical methods, such as "iterative removal of hot spots," are available to assist in the development of NTECs that ensure the average postremediation contaminant concentration is at or below the PRG. These NTECs can provide the risk manager with a more practical cleanup goal. In addition, an acute PRG can be developed to ensure that contaminant concentrations left on-site following remediation are not so high as to pose an acute or short-term health risk if excessive exposure to small areas of the site should occur. A case study demonstrates cost savings of five to ten times from the more scientifically sound use of the PRG as a postremediation site average, together with development of a separate NTEC and acute PRG based on the methods referenced in this article.
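The abstract names "iterative removal of hot spots" only in passing. The sketch below shows one plausible reading of such a procedure, with the data, the replacement value, and the stopping rule all assumed for illustration: the highest remaining sample is treated as remediated until the exposure-unit average meets the PRG, and the largest untouched concentration is then taken as a candidate NTEC.

```python
import numpy as np

def ntec_by_hotspot_removal(conc, prg, replacement=None):
    """Candidate not-to-exceed concentration via iterative hot-spot removal.

    conc        : sampled contaminant concentrations in one exposure unit
    prg         : preliminary remediation goal, interpreted as an *average*
    replacement : value assumed to stand in for a remediated hot spot
                  (the PRG itself here; a real site might use background)
    """
    c = np.sort(np.asarray(conc, dtype=float))[::-1]   # high to low
    if replacement is None:
        replacement = prg
    # "Remediate" the largest remaining value until the mean meets the PRG.
    i = 0
    while c.mean() > prg and i < len(c):
        c[i] = replacement
        i += 1
    if c.mean() > prg:
        raise ValueError("PRG cannot be met even after removing all samples")
    # The largest concentration left untouched is a candidate NTEC.
    return c[i] if i < len(c) else replacement

rng = np.random.default_rng(1)
samples = rng.lognormal(mean=2.0, sigma=1.0, size=40)   # hypothetical data
print(ntec_by_hotspot_removal(samples, prg=10.0))
```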
104.
Confidence sets, p-values, maximum likelihood estimates, and other results of non-Bayesian statistical methods may be adjusted to favor sampling distributions that are simple compared to others in the parametric family. The adjustments are derived from a prior likelihood function previously used to adjust posterior distributions.
105.
A fast and accurate method of confidence interval construction for the smoothing parameter in penalised spline and partially linear models is proposed. The method is akin to a parametric percentile bootstrap where Monte Carlo simulation is replaced by saddlepoint approximation, and can therefore be viewed as an approximate bootstrap. It is applicable in a quite general setting, requiring only that the underlying estimator be the root of an estimating equation that is a quadratic form in normal random variables. This is the case under a variety of optimality criteria such as those commonly denoted by maximum likelihood (ML), restricted ML (REML), generalized cross validation (GCV) and Akaike's information criterion (AIC). Simulation studies reveal that under the ML and REML criteria, the method delivers near-exact performance with computational speeds that are an order of magnitude faster than existing exact methods, and two orders of magnitude faster than a classical bootstrap. Perhaps most importantly, the proposed method also offers a computationally feasible alternative when no known exact or asymptotic methods exist, e.g. GCV and AIC. The methodology is illustrated with an application to a well-known fossil data set. Giving a range of plausible smoothed values in this instance can help answer questions about the statistical significance of apparent features in the data.
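For context, the "classical bootstrap" the proposed method is benchmarked against can be sketched roughly as follows for a simple ridge-type penalised spline. The truncated-line basis, the GCV grid search, and the simulated data are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data and a simple truncated-line spline basis (an assumption).
n = 120
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)
knots = np.linspace(0.05, 0.95, 20)
B = np.column_stack([np.ones(n), x] + [np.maximum(x - k, 0) for k in knots])
D = np.diag([0.0, 0.0] + [1.0] * len(knots))    # penalise only the spline coefficients

def fit(y, lam):
    """Penalised least-squares fit; returns fitted values and the hat-matrix trace."""
    A = B @ np.linalg.solve(B.T @ B + lam * D, B.T)
    return A @ y, np.trace(A)

def gcv_lambda(y, grid):
    """Pick the smoothing parameter minimising GCV = n*RSS / (n - edf)^2 over a grid."""
    scores = []
    for lam in grid:
        yhat, edf = fit(y, lam)
        rss = np.sum((y - yhat) ** 2)
        scores.append(n * rss / (n - edf) ** 2)
    return grid[int(np.argmin(scores))]

grid = np.logspace(-4, 4, 60)
lam_hat = gcv_lambda(y, grid)
yhat, edf = fit(y, lam_hat)
sigma2 = np.sum((y - yhat) ** 2) / (n - edf)

# Classical parametric percentile bootstrap for the smoothing parameter:
# simulate from the fitted model, re-select lambda, take percentiles.
boot = []
for _ in range(200):
    y_star = yhat + rng.normal(0, np.sqrt(sigma2), n)
    boot.append(gcv_lambda(y_star, grid))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"GCV lambda: {lam_hat:.3g}, 95% bootstrap CI: ({lo:.3g}, {hi:.3g})")
```

Each bootstrap replicate repeats the full grid search, which is what makes the classical bootstrap slow and motivates the saddlepoint shortcut described in the abstract.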
106.
In this paper, we introduce an extension of the generalized exponential (GE) distribution that makes it more robust against possible influential observations. The new model is defined as the quotient between a GE random variable and a beta-distributed random variable with one unknown parameter. The resulting distribution has greater kurtosis than the GE distribution. Probability properties of the distribution, such as moments, asymmetry, and kurtosis, are studied. Likewise, statistical properties are investigated using the method of moments and the maximum likelihood approach. Two real-data analyses are reported, illustrating the better performance of the new model over the GE model.
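A quick simulation can illustrate the heavier-tail behaviour described here. The abstract does not give the beta parameterization, so the sketch below assumes a Beta(θ, 1) divisor and generates the GE variable by inverting its CDF F(x) = (1 - exp(-λx))^α; all parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def rvs_ge(alpha, lam, size, rng):
    """Generalized exponential draws via the inverse CDF of F(x) = (1 - exp(-lam*x))**alpha."""
    u = rng.uniform(size=size)
    return -np.log(1.0 - u ** (1.0 / alpha)) / lam

alpha, lam, theta = 2.0, 1.0, 5.0   # theta > 4 keeps the fourth moment finite
n = 200_000

x = rvs_ge(alpha, lam, n, rng)          # GE component
b = rng.beta(theta, 1.0, size=n)        # assumed Beta(theta, 1) divisor
t = x / b                               # quotient: the heavier-tailed extension

print("excess kurtosis, GE      :", stats.kurtosis(x))
print("excess kurtosis, quotient:", stats.kurtosis(t))
```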
107.
In this article, a classification model based on the majority rule sorting (MR-Sort) method is employed to evaluate the vulnerability of safety-critical systems with respect to malevolent intentional acts. The model is built on the basis of a (limited-size) set of data representing (a priori known) vulnerability classification examples. The empirical construction of the classification model introduces a source of uncertainty into the vulnerability analysis process: a quantitative assessment of the performance of the classification model (in terms of accuracy and confidence in the assignments) is thus in order. Three different approaches are here considered to this aim: (i) a model-retrieval-based approach, (ii) the bootstrap method, and (iii) the leave-one-out cross-validation technique. The analyses are presented with reference to an illustrative case study involving the vulnerability assessment of nuclear power plants.
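Of the three assessment approaches listed, (ii) and (iii) are generic enough to sketch. The snippet below uses synthetic data and an ordinary logistic classifier purely as a stand-in for the MR-Sort model, to show how leave-one-out accuracy and a bootstrap spread of accuracies can be computed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.utils import resample

rng = np.random.default_rng(4)

# Hypothetical stand-in: 30 systems scored on 4 vulnerability criteria,
# with an a priori known class label (0 = low, 1 = high vulnerability).
X = rng.uniform(0, 1, size=(30, 4))
y = (X @ np.array([0.4, 0.3, 0.2, 0.1]) > 0.5).astype(int)

clf = LogisticRegression()              # stand-in for the MR-Sort model

# (iii) Leave-one-out cross-validation: average accuracy over held-out examples.
loo_acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

# (ii) Bootstrap: refit on resampled training sets, score on the full set,
# and use the spread of the scores as a measure of confidence.
boot_acc = []
for _ in range(500):
    Xb, yb = resample(X, y)
    boot_acc.append(clf.fit(Xb, yb).score(X, y))
lo, hi = np.percentile(boot_acc, [2.5, 97.5])

print(f"LOOCV accuracy      : {loo_acc:.2f}")
print(f"bootstrap 95% range : ({lo:.2f}, {hi:.2f})")
```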
108.
Consumer confidence is a measure of the overall level of confidence expressed by consumers and of changes in that level. An empirical analysis of the raw survey data behind the mainland China consumer confidence index for the first quarter of 2009, carried out with a discrete ordered choice model, shows that consumers' confidence in future economic development is significantly influenced by their confidence in four areas: future employment, income, living conditions, and investment.
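A "discrete ordered choice model" of this kind is commonly an ordered logit. The sketch below fits one with statsmodels on synthetic stand-in data, since the 2009 survey data are not available here; the variable names and coefficients are invented for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(5)

# Synthetic stand-in for the survey: respondents rate their confidence in
# employment, income, living conditions and investment (1-5 scales), and
# report overall confidence in future economic development (ordered 1-5).
n = 800
X = pd.DataFrame(rng.integers(1, 6, size=(n, 4)),
                 columns=["employment", "income", "living", "investment"])
latent = (0.6 * X.employment + 0.5 * X.income + 0.3 * X.living
          + 0.2 * X.investment + rng.logistic(size=n))
overall = pd.cut(latent, bins=[-np.inf, 2, 4, 6, 8, np.inf],
                 labels=[1, 2, 3, 4, 5])        # ordered categorical response

# Ordered logit: overall confidence explained by the four component confidences.
model = OrderedModel(overall, X, distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
```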
109.
110.
The greatest historical contribution of the October Revolution was to open up a brand-new path of development for human society. Although the October Revolution began in Soviet Russia, its benefits reached the whole world. The path of socialism with Chinese characteristics is a continuation and an innovation of the path opened by the October Revolution. Over the past three decades and more, China's world-astonishing achievements in socialist construction have further consolidated the practical foundation of confidence in the path of socialism with Chinese characteristics. At the same time, however, there is a current of historical nihilism that smears and negates the October Revolution and denigrates the path of socialism with Chinese characteristics, and it must be resolutely refuted.