61.
We consider the problem of density estimation when the data is in the form of a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation such as kernel density estimation are problematic. We propose a method of density estimation for massive datasets that is based upon taking the derivative of a smooth curve that has been fit through a set of quantile estimates. To achieve this, a low-storage, single-pass, sequential method is proposed for simultaneous estimation of multiple quantiles for massive datasets that form the basis of this method of density estimation. For comparison, we also consider a sequential kernel density estimator. The proposed methods are shown through simulation study to perform well and to have several distinct advantages over existing methods.
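The single-pass quantile idea can be illustrated with a Robbins-Monro-type stochastic-approximation update. This is a hypothetical minimal sketch, not the authors' algorithm: the abstract does not specify their low-storage estimator or the smooth-curve fit, so a crude finite difference of the quantile curve stands in for the derivative step.

```python
import random

def sequential_quantiles(stream, probs, c=1.0):
    """Single-pass, low-storage quantile estimates via a Robbins-Monro
    update: each estimate moves up when the new observation exceeds it
    and down otherwise, with a step size decaying as c/n."""
    q = None
    for n, x in enumerate(stream, start=1):
        if q is None:
            q = [x] * len(probs)   # initialise every estimate at the first point
            continue
        step = c / n
        for i, p in enumerate(probs):
            q[i] += step * (p - (x <= q[i]))
    return q

random.seed(0)
stream = (random.gauss(0.0, 1.0) for _ in range(200_000))
q25, q50, q75 = sequential_quantiles(stream, [0.25, 0.5, 0.75], c=2.0)

# Density from the quantile curve: the density is the reciprocal of the
# slope of the quantile function, approximated here by a finite difference
# between adjacent quantile estimates.
f_median = (0.75 - 0.25) / (q75 - q25)
```

For a standard normal stream the three estimates settle near -0.674, 0, and 0.674, and `f_median` lands near the N(0,1) density at the median; a smoother fit through many quantiles (as the abstract proposes) would sharpen this pointwise estimate.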
62.
The authors propose graphical and numerical methods for checking the adequacy of the logistic regression model for matched case‐control data. Their approach is based on the cumulative sum of residuals over the covariate or linear predictor. Under the assumed model, the cumulative residual process converges weakly to a centered Gaussian limit whose distribution can be approximated via computer simulation. The observed cumulative residual pattern can then be compared both visually and analytically to a certain number of simulated realizations of the approximate limiting process under the null hypothesis. The proposed techniques allow one to check the functional form of each covariate, the logistic link function as well as the overall model adequacy. The authors assess the performance of the proposed methods through simulation studies and illustrate them using data from a cardiovascular study.
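The cumulative-residual check can be sketched as follows. This is a deliberately simplified illustration: it fits a plain (unconditional) logistic model rather than the matched case-control likelihood, and the multiplier simulation below ignores the extra variability from parameter estimation, which the authors' limiting process accounts for.

```python
import math
import random

def fit_logistic(xs, ys, iters=200, lr=0.1):
    """Plain logistic regression via gradient ascent (intercept + slope)."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += y - p
            gb += (y - p) * x
        a += lr * ga / n
        b += lr * gb / n
    return a, b

def sup_cusum(r):
    """Supremum of the normalised cumulative-residual process."""
    s, m = 0.0, 0.0
    for v in r:
        s += v
        m = max(m, abs(s))
    return m / math.sqrt(len(r))

random.seed(1)
n = 500
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [1 if random.random() < 1 / (1 + math.exp(-(0.3 + 0.8 * x))) else 0
      for x in xs]

a, b = fit_logistic(xs, ys)
order = sorted(range(n), key=lambda i: xs[i])          # cumulate over the covariate
resid = [ys[i] - 1 / (1 + math.exp(-(a + b * xs[i]))) for i in order]
w_obs = sup_cusum(resid)

# Rough multiplier realisations of the limit: perturb each residual by an
# independent N(0,1) weight and recompute the supremum statistic.
sims = [sup_cusum([random.gauss(0, 1) * r for r in resid]) for _ in range(200)]
p_value = sum(s >= w_obs for s in sims) / len(sims)
```

A large `w_obs` relative to the simulated suprema (small `p_value`) signals a misspecified covariate form or link, mirroring the visual-plus-analytical comparison the abstract describes.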
63.
64.
To explore the projection efficiency of a design, Tsai et al. [2000. Projective three-level main effects designs robust to model uncertainty. Biometrika 87, 467–475] introduced the Q criterion to compare three-level main-effects designs for quantitative factors, allowing the consideration of interactions in addition to main effects. In this paper, we extend their method and focus on the case in which experimenters have some prior knowledge, in advance of running the experiment, about the probabilities of effects being non-negligible. A criterion which incorporates experimenters' prior beliefs about the importance of each effect is introduced to compare orthogonal, or nearly orthogonal, main effects designs with robustness to interactions as a secondary consideration. We show that this criterion, exploiting prior information about model uncertainty, can lead to more appropriate designs reflecting experimenters' prior beliefs.
65.
The paper and the special issue focus on the activity of statistical consulting and its varieties, including academic consulting, consulting to and within industry, and statistics in the public media.
66.
The authors provide an overview of optimal scaling results for the Metropolis algorithm with Gaussian proposal distribution. They address in more depth the case of high‐dimensional target distributions formed of independent, but not identically distributed components. They attempt to give an intuitive explanation as to why the well‐known optimal acceptance rate of 0.234 is not always suitable. They show how to find the asymptotically optimal acceptance rate when needed, and they explain why it is sometimes necessary to turn to inhomogeneous proposal distributions. Their results are illustrated with a simple example.
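A minimal random-walk Metropolis sampler with a Gaussian proposal makes the scaling question concrete. The target and proposal scale below are illustrative choices, not from the paper: the 0.234 guideline is an asymptotic result for high-dimensional i.i.d.-component targets, and in one dimension the optimal acceptance rate is closer to 0.44, one simple way to see that 0.234 is not universal.

```python
import math
import random

def rw_metropolis(logpi, x0, scale, n_steps, rng):
    """Random-walk Metropolis with a N(0, scale^2) proposal.
    Returns the chain and the realised acceptance rate."""
    x, lp = x0, logpi(x0)
    accepted = 0
    chain = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, scale)
        ly = logpi(y)
        if math.log(rng.random()) < ly - lp:   # accept with prob min(1, pi(y)/pi(x))
            x, lp = y, ly
            accepted += 1
        chain.append(x)
    return chain, accepted / n_steps

logpi = lambda x: -0.5 * x * x   # standard normal target, up to a constant
rng = random.Random(42)

# Proposal scale ~2.4 is near-optimal for a 1-d Gaussian target and yields
# an acceptance rate around 0.44, not 0.234.
chain, rate = rw_metropolis(logpi, 0.0, 2.4, 50_000, rng)
```

Rerunning with much smaller or larger `scale` drives the acceptance rate toward 1 or 0 respectively, and in both cases the chain mixes more slowly — the trade-off that the optimal-scaling theory quantifies.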
67.
Consider a website and the surfers visiting its pages. A typical issue of interest, for example while monitoring an advertising campaign, concerns whether a specific page has been designed successfully, i.e. whether it is able to attract surfers or direct them to other pages within the site. We assume that the surfing behaviour is fully described by the transition probabilities from one page to another, so that a clickstream (sequence of consecutively visited pages) can be viewed as a finite-state-space Markov chain. We then implement a variety of hierarchical prior distributions on the multivariate logits of the transition probabilities and define, for each page, a content effect and a link effect. The former measures the attractiveness of the page due to its contents, while the latter signals its ability to suggest further interesting links within the site. Moreover, we define an additional effect, representing overall page success, which incorporates both effects previously described. Using WinBUGS, we provide estimates and credible intervals for each of the above effects and rank pages accordingly.
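The Markov-chain view of clickstreams can be sketched with a much simpler Bayesian stand-in for the hierarchical logit priors fitted in WinBUGS: independent Dirichlet priors on each row of the transition matrix, whose posterior means are smoothed transition counts. The page names and streams below are invented for illustration.

```python
from collections import defaultdict

def transition_estimates(clickstreams, pages, alpha=1.0):
    """Posterior-mean transition probabilities under an independent
    Dirichlet(alpha, ..., alpha) prior on each row of the matrix --
    i.e. transition counts smoothed by a pseudo-count alpha."""
    counts = defaultdict(lambda: defaultdict(int))
    for stream in clickstreams:
        for a, b in zip(stream, stream[1:]):   # consecutive page pairs
            counts[a][b] += 1
    probs = {}
    for a in pages:
        total = sum(counts[a].values()) + alpha * len(pages)
        probs[a] = {b: (counts[a][b] + alpha) / total for b in pages}
    return probs

pages = ["home", "product", "cart", "checkout"]
streams = [
    ["home", "product", "cart", "checkout"],
    ["home", "product", "home", "product", "cart"],
    ["home", "cart", "checkout"],
]
P = transition_estimates(streams, pages)
```

A page's row shows where its visitors go next (a crude analogue of the link effect), while the column of probabilities pointing into a page reflects how strongly other pages feed it (closer to the content effect); the hierarchical model in the abstract separates and ranks these formally.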
68.
Minorities and females are underrepresented in the top-income quintile of law school graduates. Employing a binary logistic regression model, I examine whether this is due to a "glass ceiling" (an invisible barrier erected by third parties) or a "sticky floor" (self-imposed limitations regarding employment). My major finding is that being female, a minority, or disabled did not significantly reduce one's probability of making the top-income quintile once hours of work, experience, and other factors are taken into account. My findings directly contradict the large body of glass-ceiling literature and support the sticky-floor model. I thank the Law School Admission Council for funding this research. Helpful comments and suggestions were received from Robert Nelson of Northwestern University and the American Bar Foundation, Steven Conroy of the University of West Florida, and R. Kim Craft and Douglas Bonzo of Southern Utah University. The views expressed here are solely those of the author and do not necessarily reflect those of the institutions or persons listed above.
69.
An important deficiency in Harberger's [1962] model of corporate income taxation is its inability to consider both corporate and noncorporate production of the same good. Within-industry substitution has potentially major implications for both the excess burden and incidence of the corporate tax.
We analyze this within-industry substitution using a model in which each industry/sector contains corporate and noncorporate firms (with identical production functions) which produce goods that are close substitutes. The scope for considerable within-industry substitution of noncorporate for corporate capital leads to a very much larger excess burden than that in the Harberger model.
70.