Full-text access
Paid full text | 1,946 articles
Free | 38 articles
Subject categories
Management | 323 articles
Ethnology | 9 articles
Demography | 144 articles
Series and collected works | 6 articles
Theory and methodology | 217 articles
General | 21 articles
Sociology | 953 articles
Statistics | 311 articles
Publication year
2023 | 12 articles
2021 | 11 articles
2020 | 39 articles
2019 | 61 articles
2018 | 49 articles
2017 | 73 articles
2016 | 50 articles
2015 | 53 articles
2014 | 64 articles
2013 | 305 articles
2012 | 73 articles
2011 | 69 articles
2010 | 60 articles
2009 | 49 articles
2008 | 70 articles
2007 | 64 articles
2006 | 63 articles
2005 | 59 articles
2004 | 58 articles
2003 | 53 articles
2002 | 50 articles
2001 | 46 articles
2000 | 40 articles
1999 | 43 articles
1998 | 39 articles
1997 | 32 articles
1996 | 24 articles
1995 | 24 articles
1994 | 28 articles
1993 | 15 articles
1992 | 14 articles
1991 | 18 articles
1990 | 19 articles
1989 | 8 articles
1988 | 26 articles
1987 | 17 articles
1986 | 18 articles
1985 | 15 articles
1984 | 21 articles
1983 | 17 articles
1982 | 13 articles
1981 | 23 articles
1980 | 18 articles
1979 | 13 articles
1978 | 11 articles
1977 | 8 articles
1976 | 8 articles
1975 | 11 articles
1974 | 8 articles
1973 | 5 articles
Sort order: 1,984 results found (search time: 15 ms)
21.
Stephen Gourlay, British Journal of Management, 2004, 15(S1): S96-S99
22.
In the Brazilian Amazon during the 1980s, urban population growth outstripped rural growth, and by 1991 most of the region's population resided in urban areas. Much of this urban growth involved the establishment of unplanned housing with inadequate infrastructure, which resulted in rising pollution. This paper compares indicators of environmental quality in urban populations of the Amazon in 1980 and 1991, and among different kinds of urban populations in 1991. The results show that environmental quality in the region deteriorated during the 1980s as the production of and exposure to environmental hazards rose while resources to ward off hazards eroded. The findings also show that environmental quality was particularly poor in more rapidly growing urban centers. The urban Amazon may not afford an adequate standard of living, and this may generate out-migration from the region.
23.
This paper proposes an interpretation of the pure capital rationing problem as it is faced by many managers in decentralized firms in which budgets serve as the principal means of control. It is argued that the appropriate objective for situations such as these is the maximization of either undiscounted earnings over the planning horizon or total value of the investments at the horizon. When either objective function is used in conjunction with the frequently encountered linear programming models for the capital rationing problem, shadow prices result which give rise to discount rates that will reproduce the optimal solution using discounted cash flow as a criterion. These results are then used as a means for clarifying several confusing and misleading statements that have appeared in the literature.
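The mechanics can be illustrated with a small linear program. The sketch below is not from the paper: the two projects, two budget periods, and all cash-flow numbers are invented for illustration. It maximizes total value at the horizon subject to per-period budgets and reads off the shadow prices of the budget constraints, from which period discount rates could be derived.

```python
# Hypothetical two-project, two-period capital rationing LP (illustrative
# numbers only). Decision variables x1, x2 are the fractions of projects
# A and B accepted. The duals on the budget rows are the shadow prices.
import numpy as np
from scipy.optimize import linprog

horizon_value = np.array([60.0, 45.0])   # cash each project returns by the horizon
outlay = np.array([[30.0, 20.0],         # period-1 capital required per project
                   [10.0, 25.0]])        # period-2 capital required per project
budget = np.array([40.0, 25.0])          # capital available in each period

res = linprog(c=-horizon_value,          # linprog minimizes, so negate
              A_ub=outlay, b_ub=budget,
              bounds=[(0, 1), (0, 1)], method="highs")
shadow_prices = -res.ineqlin.marginals   # duals of the two budget constraints
print("accepted fractions:", res.x)
print("budget shadow prices:", shadow_prices)
```

Here both budgets bind at the optimum, so both shadow prices are strictly positive: an extra unit of period-1 capital is worth about 1.91 at the horizon, an extra unit of period-2 capital about 0.27.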
24.
25.
26.
Yongtao Guan, Roland Fleißner, Paul Joyce, Stephen M. Krone, Statistics and Computing, 2006, 16(2): 193-202
As the number of applications for Markov Chain Monte Carlo (MCMC) grows, the power of these methods as well as their shortcomings become more apparent. While MCMC yields an almost automatic way to sample a space according to some distribution, its implementations often fall short of this task, as they may lead to chains which converge too slowly or get trapped within one mode of a multi-modal space. Moreover, it may be difficult to determine whether a chain is only sampling a certain area of the space or whether it has indeed reached stationarity.

In this paper, we show how a simple modification of the proposal mechanism results in faster convergence of the chain and helps to circumvent the problems described above. This mechanism, which is based on an idea from the field of "small-world" networks, amounts to adding occasional "wild" proposals to any local proposal scheme. We demonstrate, through both theory and extensive simulations, that these new proposal distributions can greatly outperform traditional local proposals when it comes to exploring complex heterogeneous spaces and multi-modal distributions. Our method can easily be applied to most, if not all, problems involving MCMC, and unlike many other remedies which improve the performance of MCMC, it preserves the simplicity of the underlying algorithm.
27.
Peter Hall, Stephen M.-S. Lee & G. Alastair Young, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2000, 62(2): 479-491
We show that, in the context of double-bootstrap confidence intervals, linear interpolation at the second level of the double bootstrap can reduce the simulation error component of coverage error by an order of magnitude. Intervals that are indistinguishable in terms of coverage error with theoretical, infinite simulation, double-bootstrap confidence intervals may be obtained at substantially less computational expense than by using the standard Monte Carlo approximation method. The intervals retain the simplicity of uniform bootstrap sampling and require no special analysis or computational techniques. Interpolation at the first level of the double bootstrap is shown to have a relatively minor effect on the simulation error.
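For readers unfamiliar with the setting, the sketch below shows a plain double-bootstrap calibration of a percentile interval for a skewed mean. It uses ordinary Monte Carlo at the second level, i.e. it does not implement the interpolation refinement the paper proposes, and the sample and replication sizes are illustrative only.

```python
# Double-bootstrap calibration of a percentile interval for the mean of
# a skewed (exponential) sample. The second level asks where the original
# estimate falls within each inner bootstrap distribution, and those
# positions replace the raw nominal levels alpha/2 and 1 - alpha/2.
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(size=30)
theta = x.mean()
B1, B2, alpha = 500, 100, 0.05

u = np.empty(B1)
outer_means = np.empty(B1)
for b in range(B1):
    xb = rng.choice(x, size=x.size, replace=True)
    outer_means[b] = xb.mean()
    inner = np.array([rng.choice(xb, size=x.size, replace=True).mean()
                      for _ in range(B2)])
    u[b] = np.mean(inner <= theta)   # second-level position of theta

# Calibrated percentile interval: quantile levels are read off from u.
lo_level, hi_level = np.quantile(u, [alpha / 2, 1 - alpha / 2])
ci = np.quantile(outer_means, [lo_level, hi_level])
print("calibrated 95% CI for the mean:", ci)
```

The B1 * B2 inner resamples are what make the double bootstrap expensive; the paper's point is that interpolating at this second level sharply reduces the Monte Carlo error for a given B2.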
28.
This paper considers the estimation of Cobb-Douglas production functions using panel data covering a large sample of companies observed for a small number of time periods. GMM estimators have been found to produce large finite-sample biases when using the standard first-differenced estimator. These biases can be dramatically reduced by exploiting reasonable stationarity restrictions on the initial conditions process. Using data for a panel of R&D-performing US manufacturing companies, we find that the additional instruments used in our extended GMM estimator yield much more reasonable parameter estimates.
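The logic behind the first-differenced instruments can be sketched on a simulated AR(1) panel with firm fixed effects: differencing removes the fixed effect, and the twice-lagged level instruments the endogenous lagged difference (the Anderson-Hsiao estimator, the simplest member of this family). This is only a minimal illustration of the moment conditions; the paper's extended "system" GMM estimator adds further level moments, which are not shown here.

```python
# Anderson-Hsiao IV on a simulated dynamic panel:
#   y_it = rho * y_{i,t-1} + eta_i + eps_it
# Differencing removes eta_i; y_{i,t-2} instruments the lagged difference.
import numpy as np

rng = np.random.default_rng(3)
N, T, rho = 5000, 6, 0.5

eta = rng.normal(size=N)                 # firm fixed effects
y = np.zeros((N, T))
y[:, 0] = eta + rng.normal(size=N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + eta + rng.normal(size=N)

dy = np.diff(y, axis=1)                  # column k holds y_{k+1} - y_k
dep = dy[:, 1:].ravel()                  # dependent: delta y_t
endog = dy[:, :-1].ravel()               # endogenous: delta y_{t-1}
instr = y[:, :-2].ravel()                # instrument: level y_{t-2}

rho_hat = (instr @ dep) / (instr @ endog)
print("true rho:", rho, " IV estimate:", round(rho_hat, 3))
```

The instrument is valid because y_{t-2} is uncorrelated with the differenced error eps_t - eps_{t-1}, and relevant because it is correlated with the lagged difference; its weakness when rho is near one is precisely what motivates the additional stationarity-based instruments discussed above.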
29.
Walters SJ, Pharmaceutical Statistics, 2009, 8(2): 163-169
Pre‐study sample size calculations for clinical trial research protocols are now mandatory. When an investigator is designing a study to compare the outcomes of an intervention, an essential step is the calculation of sample sizes that will allow a reasonable chance (power) of detecting a pre‐determined difference (effect size) in the outcome variable, at a given level of statistical significance. Frequently studies will recruit fewer patients than the initial pre‐study sample size calculation suggested. Investigators are faced with the fact that their study may be inadequately powered to detect the pre‐specified treatment effect and the statistical analysis of the collected outcome data may or may not report a statistically significant result. If the data produces a “non‐statistically significant result” then investigators are frequently tempted to ask the question “Given the actual final study size, what is the power of the study, now, to detect a treatment effect or difference?” The aim of this article is to debate whether or not it is desirable to answer this question and to undertake a power calculation, after the data have been collected and analysed. Copyright © 2008 John Wiley & Sons, Ltd.
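The core of the debate can be made concrete: "observed" (post hoc) power, computed by plugging the observed effect back in as the assumed true effect, is a deterministic function of the p-value and so adds nothing beyond the significance test itself. A normal-approximation sketch with illustrative numbers:

```python
# Observed power for a two-sided z test, treating the observed z
# statistic as if it were the true standardized effect.
from scipy.stats import norm

def observed_power(z, alpha=0.05):
    """Two-sided observed power under the normal approximation."""
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(z) - z_crit) + norm.cdf(-abs(z) - z_crit)

# A result exactly at the two-sided 5% boundary always has observed
# power of essentially 50%, whatever the study measured.
print(observed_power(1.96))   # ~0.50
print(observed_power(1.00))   # a non-significant z gives low observed power
```

In particular, a non-significant result always yields low observed power, so the calculation can never tell an investigator anything their p-value did not.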
30.
There are now three essentially separate literatures on the topics of multiple systems estimation, record linkage, and missing data. But in practice the three are intimately intertwined. For example, record linkage involving multiple data sources for human populations is often carried out with the expressed goal of developing a merged database for multiple system estimation (MSE). Similarly, one way to view both the record linkage and MSE problems is as ones involving the estimation of missing data. This presentation highlights the technical nature of these interrelationships and provides a preliminary effort at their integration.
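The simplest instance of the interrelationship is two-list MSE: once record linkage has identified the overlap between two independent lists, the classical Lincoln-Petersen estimator recovers the total population size. The simulation below is illustrative and not from the presentation.

```python
# Two-list capture-recapture (multiple systems estimation) on simulated
# data: each person independently appears on list 1 w.p. 0.3 and on
# list 2 w.p. 0.4; the linked overlap m identifies the total size N.
import numpy as np

rng = np.random.default_rng(4)
N = 10_000                                # true (unknown) population size
in_list1 = rng.random(N) < 0.3
in_list2 = rng.random(N) < 0.4

n1, n2 = in_list1.sum(), in_list2.sum()
m = (in_list1 & in_list2).sum()           # overlap found by record linkage
N_hat = n1 * n2 / m                       # Lincoln-Petersen estimate
print(f"n1={n1}, n2={n2}, overlap={m}, estimated N={N_hat:.0f}")
```

Linkage errors feed directly into this estimate (an undercounted overlap inflates N_hat), which is one concrete way the record linkage and MSE problems become a joint missing-data problem.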