581.
This article provides a unified methodology of meta-analysis that synthesizes medical evidence by using both available individual patient data (IPD) and published summary statistics within the framework of the likelihood principle. The most up-to-date scientific evidence on medicine is crucial information not only to consumers but also to decision makers, and can be obtained only when existing evidence from the literature and the most recent individual patient data are optimally synthesized. We propose a general linear mixed effects model to conduct meta-analyses when individual patient data are available for only some of the studies and summary statistics must be used for the rest. Our approach includes as special cases both traditional meta-analysis, in which only summary statistics are available for all studies, and the opposite extreme, in which individual patient data are available for all studies. We implement the proposed model with statistical procedures from standard computing packages, and we provide measures of heterogeneity based on the proposed model. Finally, we demonstrate the methodology through a real-life example that uses cerebrospinal fluid biomarkers to identify individuals at high risk of developing Alzheimer's disease while they are still cognitively normal.
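As a toy illustration of how IPD studies and summary-level studies can enter a single synthesis, the sketch below (the helper name `combine_ipd_and_summary` is hypothetical, and this is a fixed-effect inverse-variance simplification, not the authors' mixed-model code) reduces each IPD study to its sample mean and standard error and then pools those with the published summary estimates:

```python
import math

def combine_ipd_and_summary(ipd_studies, summary_studies):
    """Fixed-effect synthesis: each IPD study is reduced to its sample
    mean and standard error, then pooled with the published summary
    estimates by inverse-variance weighting."""
    pairs = []
    for y in ipd_studies:                       # each y: list of patient outcomes
        n = len(y)
        mean = sum(y) / n
        var = sum((v - mean) ** 2 for v in y) / (n - 1)
        pairs.append((mean, math.sqrt(var / n)))
    pairs.extend(summary_studies)               # each: (estimate, standard error)
    w = [1.0 / se ** 2 for _, se in pairs]      # inverse-variance weights
    est = sum(wi * e for wi, (e, _) in zip(w, pairs)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return est, se
```

The paper's general linear mixed effects model subsumes this common-effect pooling as its simplest special case; random study effects would add a between-study variance component to each weight.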
582.
This quarter's column features reports from the New England Library Association Annual Conference; the Pennsylvania Library Association Annual Conference; the Potomac Technical Processing Librarians Annual Meeting; the Charleston Conference; the Brick and Click Libraries Symposium; a NISO Webinar entitled “The Case of the Disappearing Journal: Solving the Title Transfer and Online Display Mystery;” and the American Library Association Midwinter Meeting.
583.
This paper proposes a new approach to equilibrium selection in repeated games with transfers, supposing that in each period the players bargain over how to play. Although the bargaining phase is cheap talk (following a generalized alternating‐offer protocol), sharp predictions arise from three axioms. Two axioms allow the players to meaningfully discuss whether to deviate from their plan; the third embodies a “theory of disagreement”—that play under disagreement should not vary with the manner in which bargaining broke down. Equilibria that satisfy these axioms exist for all discount factors and are simple to construct; all equilibria generate the same welfare. Optimal play under agreement generally requires suboptimal play under disagreement. Whether patient players attain efficiency depends on both the stage game and the bargaining protocol. The theory extends naturally to games with imperfect public monitoring and heterogeneous discount factors, and yields new insights into classic relational contracting questions.
584.
The stepwise regression algorithm in wide use is due to Efroymson. He stated that the F-to-remove value must not be greater than the F-to-enter value, but he did not show that the algorithm cannot cycle, and until now nobody appears to have shown this. To prove that the algorithm converges, an objective function is introduced. This objective function is shown to decrease, or occasionally remain constant, at each step of the algorithm, and hence the algorithm cannot cycle provided that Efroymson's condition is satisfied.
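Efroymson's enter/remove loop can be sketched as follows (a minimal no-intercept implementation with hypothetical names; the partial-F formulas are the textbook ones, not taken from the paper). Note the asserted condition `f_remove <= f_enter`, which is exactly the condition the paper shows rules out cycling:

```python
import random

def lstsq_rss(X, y):
    """Residual sum of squares of the least-squares fit of y on the
    columns in X (normal equations + Gaussian elimination)."""
    k, n = len(X), len(y)
    if k == 0:
        return sum(v * v for v in y)
    A = [[sum(X[i][t] * X[j][t] for t in range(n)) for j in range(k)] for i in range(k)]
    c = [sum(X[i][t] * y[t] for t in range(n)) for i in range(k)]
    for p in range(k):                          # elimination with partial pivoting
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        c[p], c[piv] = c[piv], c[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for j in range(p, k):
                A[r][j] -= f * A[p][j]
            c[r] -= f * c[p]
    b = [0.0] * k
    for p in range(k - 1, -1, -1):              # back substitution
        b[p] = (c[p] - sum(A[p][j] * b[j] for j in range(p + 1, k))) / A[p][p]
    fit = [sum(b[i] * X[i][t] for i in range(k)) for t in range(n)]
    return sum((y[t] - fit[t]) ** 2 for t in range(n))

def efroymson_stepwise(cols, y, f_enter=4.0, f_remove=3.9):
    """Forward/backward stepwise selection; f_remove <= f_enter is
    Efroymson's condition guaranteeing the loop cannot cycle."""
    assert f_remove <= f_enter
    n, selected = len(y), []
    while True:
        rss_cur = lstsq_rss([cols[j] for j in selected], y)
        removed = False
        for j in list(selected):                # try removing a weak variable
            rest = [c for c in selected if c != j]
            rss_rest = lstsq_rss([cols[c] for c in rest], y)
            f = (rss_rest - rss_cur) / (rss_cur / (n - len(selected)))
            if f < f_remove:
                selected.remove(j)
                removed = True
                break
        if removed:
            continue
        best, best_f = None, f_enter            # try entering the best variable
        for j in range(len(cols)):
            if j in selected:
                continue
            rss_new = lstsq_rss([cols[c] for c in selected + [j]], y)
            f = (rss_cur - rss_new) / (rss_new / (n - len(selected) - 1))
            if f >= best_f:
                best, best_f = j, f
        if best is None:
            return selected
        selected.append(best)
```

Each accepted step strictly improves (or, in the boundary case, preserves) the objective function of the convergence proof, which is why the state of selected variables can never repeat.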
586.
Results are reported from a power study of six statistics for testing that a sample is from a uniform distribution on the unit interval (0,1). The test statistics are all well known, and each was originally proposed because it should have high power against some alternative distributions. The tests considered are the Pearson probability product test, the Neyman smooth test, the Sukhatme test, the Durbin–Kolmogorov test, the Kuiper test, and the Sherman test. Results are given for each of these tests against each of four classes of alternatives. In addition, the most powerful test against each member of the first three alternative classes is obtained, and the powers of these tests are given for the same sample sizes as for the six general "omnibus" test statistics. These values constitute a "power envelope" against which all tests can be compared. The Neyman smooth tests with 2nd- and 4th-degree polynomials are found to have good power and are recommended as general tests for uniformity.
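The kind of Monte Carlo power computation behind such a study can be sketched for one omnibus statistic, the Kolmogorov distance (function names are hypothetical, and the alternative F(x) = x² stands in for the paper's alternative classes): calibrate the critical value under the uniform null, then count rejections under the alternative.

```python
import random

def kolmogorov_d(sample):
    """One-sample Kolmogorov-Smirnov distance from Uniform(0,1)."""
    xs = sorted(sample)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

def monte_carlo_power(n=50, reps=2000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    # Critical value: empirical (1 - alpha) quantile of D under the null.
    null = sorted(kolmogorov_d([rng.random() for _ in range(n)])
                  for _ in range(reps))
    crit = null[int((1 - alpha) * reps)]
    # Power against F(x) = x^2: if U ~ Uniform(0,1), sqrt(U) has CDF x^2.
    hits = sum(kolmogorov_d([rng.random() ** 0.5 for _ in range(n)]) > crit
               for _ in range(reps))
    return hits / reps
```

Repeating this for each of the six statistics and each alternative class, and adding the Neyman-Pearson most powerful test per alternative, would reproduce the "power envelope" comparison described above.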
587.
The Box–Jenkins methodology for modeling and forecasting from univariate time series models has long been considered a standard against which other forecasting techniques are compared. To a Bayesian statistician, however, the method lacks an important facet: a provision for modeling uncertainty about parameter estimates. We present a technique called sampling the future for including this feature in both the estimation and forecasting stages. Although it is relatively easy to use Bayesian methods to estimate the parameters of an autoregressive integrated moving average (ARIMA) model, there are severe difficulties in producing forecasts from such a model. The multiperiod predictive density does not have a convenient closed form, so approximations are needed. In this article, exact Bayesian forecasting is approximated by simulating the joint predictive distribution. First, parameter sets are randomly generated from the joint posterior distribution. These are then used to simulate future paths of the time series. This bundle of many possible realizations is used to project the future in several ways: highest-probability forecast regions are formed and portrayed with computer graphics, and the predictive density's shape is explored. Finally, we discuss a method that allows the analyst to subjectively modify the posterior distribution on the parameters and produce alternative forecasts.
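The sampling-the-future idea can be sketched for the simplest case, an AR(1) model with an approximate normal posterior on the autoregressive coefficient (all names and the posterior approximation are illustrative, not the article's exact ARIMA machinery): draw a parameter set, then simulate one future path per draw.

```python
import random
import statistics

def sample_the_future(series, horizon=5, draws=200, seed=42):
    """Approximate joint predictive simulation for the AR(1) model
    y_t = phi * y_{t-1} + e_t: draw phi from a normal approximation
    to its posterior, then simulate one future path per draw."""
    rng = random.Random(seed)
    x, y = series[:-1], series[1:]
    phi_hat = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    resid = [b - phi_hat * a for a, b in zip(x, y)]
    sigma_hat = statistics.stdev(resid)
    se_phi = sigma_hat / (sum(a * a for a in x) ** 0.5)
    paths = []
    for _ in range(draws):
        phi = rng.gauss(phi_hat, se_phi)   # parameter draw (approximate posterior)
        path, last = [], series[-1]
        for _ in range(horizon):           # simulate one future path
            last = phi * last + rng.gauss(0, sigma_hat)
            path.append(last)
        paths.append(path)
    return paths
```

Sorting the simulated values at each horizon and taking empirical quantiles across paths yields the highest-probability forecast regions described above; note that the bundle's spread reflects both innovation noise and uncertainty in phi.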