121.
Since the squared ranks test was first proposed by Taha in 1964, it has been mentioned by several authors as a test that is easy to use, with good power in many situations. It is almost as easy to use as the Wilcoxon rank sum test, and has greater power when two populations differ in their scale parameters rather than in their location parameters. This paper discusses the versatility of the squared ranks test, introduces a test which uses squared ranks, and presents some exact tables.
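As a rough illustration of the test this abstract describes, the following is a minimal sketch of the large-sample (normal-approximation) squared ranks test for a difference in scale, in the form popularized by Conover; the function name is illustrative, and the exact tables the paper presents are not reproduced here.

```python
import numpy as np
from scipy import stats

def squared_ranks_test(x, y):
    """Two-sided large-sample squared ranks test for equal scale parameters."""
    u = np.abs(x - x.mean())              # deviations from each sample's own mean
    v = np.abs(y - y.mean())
    r = stats.rankdata(np.concatenate([u, v]))   # joint ranks of the deviations
    n, m = len(x), len(y)
    N = n + m
    r2 = r ** 2
    T = r2[:n].sum()                      # sum of squared ranks for the first sample
    mean_T = n * r2.mean()
    var_T = n * m / (N * (N - 1)) * ((r2 ** 2).sum() - N * r2.mean() ** 2)
    z = (T - mean_T) / np.sqrt(var_T)
    return T, z, 2 * stats.norm.sf(abs(z))
```

For samples whose standard deviations differ substantially, the statistic rejects equality of scale even when the locations coincide, which is the situation the abstract contrasts with the Wilcoxon rank sum test.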
122.
This article develops a new cumulative sum statistic to identify aberrant behavior in a sequentially administered multiple-choice standardized examination. The examination responses can be described as finite Poisson trials, and the statistic can be used for other applications which fit this framework. The standardized examination setting uses a maximum likelihood estimate of examinee ability and an item response theory model. Aberrant and non-aberrant probabilities are computed via an odds ratio, analogous to risk-adjusted CUSUM schemes. The significance level of a hypothesis test, where the null hypothesis is non-aberrant examinee behavior, is computed with Markov chains. A smoothing process is used to spread probabilities across the Markov states. The practicality of the approach for detecting aberrant examinee behavior is demonstrated with results from both simulated and empirical data.
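The odds-ratio CUSUM idea behind this abstract can be sketched for generic Bernoulli (finite Poisson) trials. This is a simplified, hypothetical version: the item-response-theory ability estimate and the Markov-chain significance computation from the paper are not reproduced, and the function and parameter names are assumptions.

```python
import math

def cusum_bernoulli(outcomes, p0_seq, odds_ratio):
    """Risk-adjusted CUSUM for Bernoulli trials with per-trial null probabilities.

    outcomes   : sequence of 0/1 trial results
    p0_seq     : per-trial success probabilities under the null (non-aberrant) model
    odds_ratio : shift in odds under the aberrant alternative
    """
    c = 0.0
    path = []
    for y, p0 in zip(outcomes, p0_seq):
        # alternative probability from the odds ratio: odds1 = OR * p0 / (1 - p0)
        p1 = odds_ratio * p0 / (1.0 - p0 + odds_ratio * p0)
        # log-likelihood-ratio weight for this trial
        w = math.log(p1 / p0) if y else math.log((1.0 - p1) / (1.0 - p0))
        c = max(0.0, c + w)           # CUSUM resets at zero
        path.append(c)
    return path
```

A run of outcomes that is unusually likely under the shifted odds drives the statistic upward, while behavior consistent with the null keeps it pinned near zero.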
123.
These Fortran-77 subroutines provide building blocks for Generalized Cross-Validation (GCV) (Craven and Wahba, 1979) calculations in data analysis and data smoothing, including ridge regression (Golub, Heath, and Wahba, 1979), thin plate smoothing splines (Wahba and Wendelberger, 1980), deconvolution (Wahba, 1982d), smoothing of generalized linear models (O'Sullivan, Yandell, and Raynor, 1986; Green, 1984; Green and Yandell, 1985), and ill-posed problems (Nychka et al., 1984; O'Sullivan and Wahba, 1985). We present some of the types of problems for which GCV is a useful method of choosing a smoothing or regularization parameter, and we describe the structure of the subroutines. Ridge regression: a familiar example of a smoothing parameter is the ridge parameter λ in the ridge regression problem.
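For the ridge regression example the abstract mentions, the GCV criterion of Golub, Heath, and Wahba can be sketched directly from its definition: GCV(λ) = (1/n)‖(I − A(λ))y‖² / [(1/n) tr(I − A(λ))]², where A(λ) = X(XᵀX + nλI)⁻¹Xᵀ is the influence matrix. The sketch below is a plain NumPy translation of that formula, not the Fortran-77 subroutines the paper describes; the function name is an assumption.

```python
import numpy as np

def gcv_ridge(X, y, lambdas):
    """GCV score for each candidate ridge parameter; smaller is better."""
    n = len(y)
    scores = []
    for lam in lambdas:
        # influence matrix A(lam) = X (X'X + n*lam*I)^{-1} X'
        A = X @ np.linalg.solve(X.T @ X + n * lam * np.eye(X.shape[1]), X.T)
        resid = y - A @ y
        denom = (np.trace(np.eye(n) - A) / n) ** 2
        scores.append((resid @ resid / n) / denom)
    return np.array(scores)
```

In practice one evaluates the scores over a grid of λ values and keeps the minimizer as the data-driven choice of regularization parameter.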
A graphical procedure for the display of treatment means that enables one to determine the statistical significance of the observed differences is presented. It is shown that the widely used least significant difference and honestly significant difference statistics can be used to construct plots in which any two means whose uncertainty intervals do not overlap are significantly different at the assigned probability level. It is argued that these plots, because of their straightforward decision rules, are more effective than those that show the observed means with standard errors or confidence limits. Several examples of the proposed displays are included to illustrate the procedure.
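The overlap rule this abstract describes follows from centering an interval of half-width LSD/2 on each mean: two intervals fail to overlap exactly when the means differ by more than the least significant difference. A minimal sketch, assuming a balanced one-way design with a common error mean square (function and parameter names are illustrative):

```python
import math
from scipy import stats

def lsd_intervals(means, mse, n_per_group, df_error, alpha=0.05):
    """Intervals of half-width LSD/2 around each mean.

    Two intervals overlap iff the corresponding means are NOT
    significantly different at level alpha (balanced design).
    """
    t = stats.t.ppf(1.0 - alpha / 2.0, df_error)
    lsd = t * math.sqrt(2.0 * mse / n_per_group)   # least significant difference
    half = lsd / 2.0
    return [(m - half, m + half) for m in means]
```

Plotting these intervals instead of ordinary standard-error bars gives the direct visual decision rule the abstract argues for.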
127.
Statistical hypotheses and test statistics are Boolean functions that can be manipulated using the tools of Boolean algebra. These tools are particularly useful for exploring multiple comparisons or simultaneous inference theory, in which multiparameter hypotheses or multiparameter test statistics may be decomposed into combinations of uniparameter hypotheses or uniparameter tests. These concepts are illustrated with both finite and infinite decompositions of familiar multiparameter hypotheses and tests. The corresponding decompositions of acceptance regions and rejection regions are also shown. Finally, the close relationship between hypothesis and test decompositions and Roy's union-intersection principle is demonstrated by a derivation of the union-intersection test of the univariate general linear hypothesis.
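The union-intersection idea mentioned at the end of this abstract can be stated operationally: a multiparameter null that is the intersection of component nulls is rejected as soon as any component test rejects. A small sketch under independence, using a Šidák-adjusted component level so the overall size stays at alpha (this is a generic illustration, not the paper's Boolean-algebra derivation):

```python
def union_intersection_reject(component_pvalues, alpha=0.05):
    """Reject the intersection null H0 = ∩ H0_i iff any component test rejects.

    Uses a Sidak-adjusted per-component level, valid for independent components.
    """
    m = len(component_pvalues)
    adj = 1.0 - (1.0 - alpha) ** (1.0 / m)   # per-component significance level
    return any(p < adj for p in component_pvalues)
```

The dual decomposition also holds: the acceptance region of the joint test is the intersection of the component acceptance regions, mirroring the Boolean decompositions the paper develops.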
128.
Control charts are the most important statistical process control tool for monitoring variation in a process. A number of articles in the literature address the X-bar control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set sampling, double-ranked set sampling, double median-ranked set sampling, and median double-ranked set sampling. In this study, we highlight some limitations of the existing ranked set charting structures. In addition, we propose different runs rules-based control charting structures under a variety of sampling strategies. We evaluate the performance of the control charting structures using power curves as a performance criterion. We observe that the proposed merger of varying runs rules schemes with different sampling strategies significantly improves the detection ability of location control charting structures. More specifically, MRSS performs best under both single- and double-ranked set strategies with varying runs rules schemes. We also include a real-life example to explain the proposal and highlight its significance for practical data sets.
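The MRSS strategy the abstract singles out can be illustrated by simulation: for each sampling unit, draw a set of candidates, rank them, and keep only the median-ranked one. The resulting observations are less variable than simple random draws, which is what sharpens the X-bar chart's location detection. This is an illustrative sketch (in a real application the set is only ranked, not fully measured); all names are assumptions.

```python
import numpy as np

def mrss(draw, set_size, n_units, rng):
    """Median ranked set sampling: from each ranked set keep the median unit.

    draw(k, rng) should return k candidate measurements from the process.
    """
    med = set_size // 2
    return np.array([np.sort(draw(set_size, rng))[med] for _ in range(n_units)])
```

Comparing the spread of an MRSS sample with a simple random sample of the same size shows the variance reduction that the power curves in the paper quantify.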
129.
Common software release procedures based on statistical techniques try to optimize the trade-off between further testing costs and costs due to remaining errors. We propose new software release procedures where the aim is to certify with a certain confidence level that the software does not contain errors. The underlying model is a discrete time model similar to the geometric Moranda model. The decisions are based on a mix of classical and Bayesian approaches to sequential testing and do not require any assumption on the initial number of errors.
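To make the certification idea concrete, consider a much simpler rule than the paper's geometric-Moranda-based procedure: if each test run would detect a remaining error with probability p, then n consecutive failure-free runs bound the probability of missing an error by (1 − p)ⁿ. A hedged sketch of that back-of-the-envelope calculation (the constant detection probability is an assumption; the paper's sequential classical/Bayesian mix is not reproduced):

```python
import math

def runs_needed(p_detect, confidence):
    """Consecutive failure-free test runs needed so that the probability of
    missing an existing error is at most 1 - confidence, assuming each run
    detects an existing error independently with probability p_detect."""
    alpha = 1.0 - confidence
    return math.ceil(math.log(alpha) / math.log(1.0 - p_detect))
```

For example, with a 10% per-run detection probability, certifying at 95% confidence requires 29 consecutive clean runs; the geometric decay of detection probability in the Moranda-type model makes the paper's actual stopping rule more involved.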