31.
Fixed sample size approximately similar tests for the Behrens-Fisher problem are studied and compared with various other tests suggested in current statistical methodology texts. Several four-moment approximately similar tests are developed and offered as alternatives. These tests are shown to be good practical solutions that are easily implemented.
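One of the "various other tests" found in the methodology texts is Welch's approximate t-test, which makes a convenient baseline for the comparison the abstract describes. A minimal sketch (not the paper's four-moment tests), assuming SciPy is available and using invented data:

```python
# Welch's t-test: a standard approximate solution to the Behrens-Fisher
# problem (comparing means under unequal, unknown variances).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=25)   # group 1: small variance
y = rng.normal(loc=0.5, scale=3.0, size=40)   # group 2: large variance

# equal_var=False requests the Welch statistic, whose null distribution
# is approximated by a t with Satterthwaite degrees of freedom.
t_stat, p_value = stats.ttest_ind(x, y, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.3f}")
```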
32.
In order for predictive regression tests to deliver asymptotically valid inference, account has to be taken of the degree of persistence of the predictors under test. There is also a maintained assumption that any predictability in the variable of interest is purely attributable to the predictors under test. Violation of this assumption by the omission of relevant persistent predictors renders the predictive regression invalid, and potentially also spurious, as both the finite sample and asymptotic size of the predictability tests can be significantly inflated. In response, we propose a predictive regression invalidity test based on a stationarity testing approach. To allow for an unknown degree of persistence in the putative predictors, and for heteroscedasticity in the data, we implement our proposed test using a fixed regressor wild bootstrap procedure. We demonstrate the asymptotic validity of the proposed bootstrap test by proving that the limit distribution of the bootstrap statistic, conditional on the data, is the same as the limit null distribution of the statistic computed on the original data, conditional on the predictor. This corrects a long-standing error in the bootstrap literature whereby it is incorrectly argued that, for strongly persistent regressors and test statistics akin to ours, the validity of the fixed regressor bootstrap obtains through equivalence to an unconditional limit distribution. Our bootstrap results are therefore of interest in their own right and are likely to have applications beyond the present context. An illustration is given by reexamining the results relating to U.S. stock returns data in Campbell and Yogo (2006). Supplementary materials for this article are available online.
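The mechanics of the fixed regressor wild bootstrap can be sketched generically. The statistic below is a placeholder slope t-ratio, not the paper's stationarity-based invalidity statistic, and the data are synthetic; only the resampling scheme follows the idea described above, with the predictor held fixed across draws and residuals flipped by Rademacher multipliers so the heteroscedasticity pattern is preserved:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 250
x = np.cumsum(rng.normal(size=n))                  # persistent putative predictor
y = rng.normal(size=n) * (1.0 + 0.5 * np.abs(np.sin(np.arange(n) / 25)))
# y is generated under the null of no predictability, with heteroscedastic errors

def slope_t2(y_, x_):
    """Placeholder statistic: squared t-ratio on the slope of y on x."""
    X = np.column_stack([np.ones(len(y_)), x_])
    b = np.linalg.lstsq(X, y_, rcond=None)[0]
    u = y_ - X @ b
    s2 = u @ u / (len(y_) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return b[1] ** 2 / cov[1, 1]

t_obs = slope_t2(y, x)

# Fixed regressor wild bootstrap: x is never resampled; residuals from the
# restricted (null) fit are multiplied by external Rademacher draws.
resid0 = y - y.mean()
B = 999
t_star = np.empty(B)
for b in range(B):
    w = rng.choice([-1.0, 1.0], size=n)
    y_star = y.mean() + resid0 * w
    t_star[b] = slope_t2(y_star, x)

p_value = (1 + np.sum(t_star >= t_obs)) / (B + 1)
print(f"fixed regressor wild bootstrap p-value: {p_value:.3f}")
```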
33.
Some studies generate data that can be grouped into clusters in more than one way. Consider for instance a smoking prevention study in which responses on smoking status are collected over several years in a cohort of students from a number of different schools. This yields longitudinal data that are also cross-sectionally clustered in schools. The authors present a model for analyzing binary data of this type, combining generalized estimating equations and estimation of random effects to address the longitudinal and cross-sectional dependence, respectively. The estimation procedure for this model is discussed, as are the results of a simulation study used to investigate the properties of its estimates. An illustration using data from a smoking prevention trial is given.
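A hedged sketch of the GEE half of this approach, using synthetic data and the statsmodels GEE implementation; the school-level random effect that the authors layer on top is omitted here, and all variable names are invented:

```python
# GEE for repeated binary smoking-status measurements clustered within
# students, with an exchangeable working correlation over the repeats.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_students, n_years = 300, 4
student = np.repeat(np.arange(n_students), n_years)
year = np.tile(np.arange(n_years), n_students)
u = rng.normal(scale=0.8, size=n_students)        # student-level heterogeneity
logit = -1.5 + 0.4 * year + u[student]
smoke = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"smoke": smoke, "year": year, "student": student})

model = smf.gee("smoke ~ year", groups="student", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```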
34.
There are now three essentially separate literatures on the topics of multiple systems estimation, record linkage, and missing data. But in practice the three are intimately intertwined. For example, record linkage involving multiple data sources for human populations is often carried out with the expressed goal of developing a merged database for multiple system estimation (MSE). Similarly, one way to view both the record linkage and MSE problems is as ones involving the estimation of missing data. This presentation highlights the technical nature of these interrelationships and provides a preliminary effort at their integration.
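The simplest instance of this interrelationship is two-list (dual-system) estimation, where record linkage supplies the overlap count and the never-captured cell is precisely the missing data being estimated. A toy worked example with invented counts:

```python
# Lincoln-Petersen dual-system estimator: the two-list special case of
# multiple systems estimation. Counts below are invented.
n1 = 500    # individuals appearing on list 1
n2 = 400    # individuals appearing on list 2
m  = 160    # individuals linked to both lists (the record-linkage step)

# Under independence of the two lists, m/n2 estimates the capture
# probability of list 1, giving N-hat = n1 * n2 / m.
N_hat = n1 * n2 / m
print(f"estimated population size: {N_hat:.0f}")   # 1250
```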
35.
The paper proposes two Bayesian approaches to non-parametric monotone function estimation. The first uses a hierarchical Bayes framework and a characterization of smooth monotone functions, due to Ramsay, that allows unconstrained estimation. The second uses the Bayesian regression spline model of Smith and Kohn, with a mixture of constrained normal distributions as the prior for the regression coefficients, to ensure monotonicity of the resulting function estimate. The small-sample properties of the two function estimators across a range of functions are assessed via simulation and compared with existing methods. Asymptotic results are also given, showing that the Bayesian methods provide consistent function estimators for a large class of smooth functions. An example involving economic demand functions illustrates the application of the constrained regression spline estimator in the context of a multiple-regression model where two functions are constrained to be monotone.
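The paper's estimators are Bayesian; purely as a point of contrast, isotonic regression is a standard frequentist way to obtain a monotone fit. A sketch using scikit-learn (a deliberately different technique, shown only for orientation, with synthetic data):

```python
# Isotonic regression returns the least-squares nondecreasing step fit.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, size=100))
y = np.log1p(5 * x) + rng.normal(scale=0.2, size=100)   # increasing truth

iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(x, y)
print(f"fit is nondecreasing: {bool(np.all(np.diff(y_fit) >= -1e-12))}")
```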
36.
Pre-study sample size calculations for clinical trial research protocols are now mandatory. When an investigator is designing a study to compare the outcomes of an intervention, an essential step is the calculation of sample sizes that will allow a reasonable chance (power) of detecting a pre-determined difference (effect size) in the outcome variable, at a given level of statistical significance. Frequently, studies recruit fewer patients than the initial pre-study sample size calculation suggested. Investigators are then faced with the fact that their study may be inadequately powered to detect the pre-specified treatment effect, and the statistical analysis of the collected outcome data may or may not report a statistically significant result. If the analysis produces a "non-statistically significant result", investigators are frequently tempted to ask: "Given the actual final study size, what is the power of the study, now, to detect a treatment effect or difference?" The aim of this article is to debate whether or not it is desirable to answer this question and to undertake a power calculation after the data have been collected and analysed.
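The tension can be made concrete with a hypothetical trial: planned at roughly 64 patients per arm for 80% power at a standardized effect size of 0.5, but recruiting only 40 per arm. A sketch using statsmodels (all figures invented):

```python
# Planned versus achieved power for a two-sample t-test design.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# planned sample size for 80% power, d = 0.5, two-sided alpha = 0.05
n_planned = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"planned n per arm: {n_planned:.0f}")           # about 64

# power actually achieved with the 40 per arm that were recruited
power_achieved = analysis.power(effect_size=0.5, nobs1=40, alpha=0.05)
print(f"power with 40 per arm: {power_achieved:.2f}")  # about 0.60
```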
37.
This paper considers the estimation of Cobb-Douglas production functions using panel data covering a large sample of companies observed for a small number of time periods. GMM estimators have been found to produce large finite-sample biases when the standard first-differenced estimator is used. These biases can be dramatically reduced by exploiting reasonable stationarity restrictions on the initial conditions process. Using data for a panel of R&D-performing US manufacturing companies, we find that the additional instruments used in our extended GMM estimator yield much more reasonable parameter estimates.
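The root of the first-differencing idea can be shown with the Anderson-Hsiao IV estimator for a simple dynamic panel, a precursor of the first-differenced GMM estimator the abstract discusses; the Cobb-Douglas inputs and the extended (system) estimator using initial-conditions restrictions are omitted. A synthetic-data sketch:

```python
# First-differencing removes the fixed effect eta; the differenced lag
# is endogenous, so it is instrumented with the lagged level y_{t-2}.
import numpy as np

rng = np.random.default_rng(4)
N, T, rho = 500, 6, 0.7
eta = rng.normal(size=N)                        # firm fixed effects
y = np.zeros((N, T))
y[:, 0] = eta / (1 - rho) + rng.normal(size=N)  # start near stationarity
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + eta + rng.normal(size=N)

dy  = (y[:, 2:] - y[:, 1:-1]).ravel()           # Delta y_t,   t = 2..T-1
dy1 = (y[:, 1:-1] - y[:, :-2]).ravel()          # Delta y_{t-1}
z   = y[:, :-2].ravel()                         # instrument: y_{t-2}

rho_iv = (z @ dy) / (z @ dy1)                   # just-identified IV
print(f"true rho = {rho}, Anderson-Hsiao IV estimate = {rho_iv:.3f}")
```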
38.
We show that, in the context of double-bootstrap confidence intervals, linear interpolation at the second level of the double bootstrap can reduce the simulation error component of coverage error by an order of magnitude. Intervals that are indistinguishable, in terms of coverage error, from theoretical (infinite-simulation) double-bootstrap confidence intervals may be obtained at substantially less computational expense than by the standard Monte Carlo approximation method. The intervals retain the simplicity of uniform bootstrap sampling and require no special analysis or computational techniques. Interpolation at the first level of the double bootstrap is shown to have a relatively minor effect on the simulation error.
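The baseline that the interpolation scheme accelerates is the plain Monte Carlo double bootstrap. A textbook-style nested calibration of a percentile interval for the mean (a sketch only, not the article's interpolated version):

```python
# Double-bootstrap calibration: the inner level estimates where the
# original estimate falls in each resample's bootstrap distribution,
# and those levels recalibrate the outer percentile interval.
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(size=50)              # skewed sample
theta_hat = x.mean()
B1, B2, alpha = 500, 200, 0.05

outer_means = np.empty(B1)
u = np.empty(B1)                          # prepivoted coverage levels
for b in range(B1):
    xb = rng.choice(x, size=x.size, replace=True)
    outer_means[b] = xb.mean()
    inner = np.array([rng.choice(xb, size=x.size, replace=True).mean()
                      for _ in range(B2)])
    u[b] = np.mean(inner <= theta_hat)

# calibrate the percentile levels, then read off the outer quantiles
lo, hi = np.quantile(u, [alpha / 2, 1 - alpha / 2])
ci = np.quantile(outer_means, [lo, hi])
print(f"calibrated 95% interval: ({ci[0]:.3f}, {ci[1]:.3f})")
```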
39.
This article seeks an understanding of sleeping and waking life in bed in relation to the use of handheld screen devices. The bedtime use of these devices has come to public attention through its involvement in the construction of two contemporary crises of sleep: sleep science's crisis of chronic sleep deprivation, and the wider cultural crisis of the invasion of sleep by fast capitalism, as proposed by Jonathan Crary. Visual art is employed to gain insights into two related understandings of sleep: one concerned with individual sleep self-regulation, the other with the corporeal commonality of sleep. The sociology of sleep, based on Michel Foucault's concept of 'biopower', is augmented by philosophical insights from Jean-Luc Nancy and Gilles Deleuze to reconfigure the sleeping body beyond the bounds of disciplinary regimes and permit a reassessment of the affective potential of sleep. Works of digital media art, together with ethnographic and cultural studies on the use of mobile devices, are employed to tease out the entanglement of these screen devices with waking and sleeping bedroom life. An ontology of sleep as a pre-individual affective state is proposed as an alternative basis for resisting the appropriation of sleep by the always-waking world accessed through smartphones and tablets.
40.
In nonregular problems where the conventional \(n\) out of \(n\) bootstrap is inconsistent, the \(m\) out of \(n\) bootstrap provides a useful remedy to restore consistency. Conventionally, optimal choice of the bootstrap sample size \(m\) is taken to be the minimiser of a frequentist error measure, estimation of which has posed a major difficulty hindering practical application of the \(m\) out of \(n\) bootstrap method. Relatively little attention has been paid to a stronger, stochastic, version of the optimal bootstrap sample size, defined as the minimiser of an error measure calculated directly from the observed sample. Motivated by this stronger notion of optimality, we develop procedures for calculating the stochastically optimal value of \(m\). Our procedures are shown to work under special forms of Edgeworth-type expansions which are typically satisfied by statistics of the shrinkage type. Theoretical and empirical properties of our methods are illustrated with three examples, namely the James–Stein estimator, the ridge regression estimator and the post-model-selection regression estimator.
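Why the \(m\) out of \(n\) bootstrap helps in nonregular problems can be seen with the sample maximum: resampling \(n\) out of \(n\) reproduces the sample maximum with probability roughly \(1-e^{-1}\approx 0.63\), a degeneracy that breaks the bootstrap, while \(m \ll n\) restores a nondegenerate resampling distribution. The data-driven choice of \(m\) studied in the abstract is not implemented in this sketch:

```python
# How often the bootstrap maximum ties the sample maximum, for
# resample sizes m = n versus m << n.
import numpy as np

rng = np.random.default_rng(6)
n = 1000
x = rng.uniform(size=n)
B = 2000

def boot_max_ties(m):
    """Fraction of size-m resamples whose maximum equals x.max()."""
    ties = sum(rng.choice(x, size=m, replace=True).max() == x.max()
               for _ in range(B))
    return ties / B

print(f"n out of n:        P(max* = max) ~ {boot_max_ties(n):.2f}")   # ~0.63
print(f"m = 40 out of n:   P(max* = max) ~ {boot_max_ties(40):.2f}")  # ~0.04
```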