A total of 1,405 search results were found.
131.
Stephen G. Walker, Communications in Statistics - Simulation and Computation, 2013, 42(3): 520-527
This article presents a novel and simple approach to estimating a marginal likelihood in a Bayesian context. The estimate is based on Markov chain output that provides samples from the posterior distribution together with an additional latent variable; the mean of this latent variable provides the estimate of the marginal likelihood.
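As a point of reference for what such an estimator has to produce, the sketch below (not Walker's latent-variable construction, which is only outlined in the abstract) estimates the marginal likelihood of a hypothetical conjugate normal-normal model by naive Monte Carlo over prior draws and compares it with the closed-form value; all model settings are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Hypothetical conjugate model (sigma known): x_i ~ N(theta, sigma^2), theta ~ N(mu0, tau0^2)
sigma, mu0, tau0 = 1.0, 0.0, 2.0
x = rng.normal(0.5, sigma, size=20)
n = x.size

# Exact log marginal likelihood: x ~ N(mu0 * 1, sigma^2 I + tau0^2 J)
cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
log_m_exact = multivariate_normal(mean=np.full(n, mu0), cov=cov).logpdf(x)

# Naive Monte Carlo estimate: average the likelihood over draws from the prior
S = 100_000
theta = rng.normal(mu0, tau0, size=S)
log_lik = norm.logpdf(x[:, None], loc=theta[None, :], scale=sigma).sum(axis=0)
log_m_mc = logsumexp(log_lik) - np.log(S)

print(f"exact log m(x) = {log_m_exact:.4f}, Monte Carlo estimate = {log_m_mc:.4f}")
```

Because the model is conjugate, the exact value is available, which makes it a convenient test bed for any estimator built from Markov chain output.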
132.
Fixed sample size approximately similar tests for the Behrens-Fisher problem are studied and compared with various other tests suggested in current statistical methodology texts. Several four-moment approximately similar tests are developed and offered as alternatives. These tests are shown to be good practical solutions that are easy to implement.
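For readers who want a concrete baseline, the sketch below runs Welch's approximate t-test, the textbook practical solution to the Behrens-Fisher problem; it is not one of the four-moment approximately similar tests developed in the article, and the data are simulated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two samples with unequal variances (the Behrens-Fisher setting)
a = rng.normal(10.0, 1.0, size=15)
b = rng.normal(10.5, 3.0, size=40)

# Welch's test: scipy adjusts the degrees of freedom via the Satterthwaite approximation
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.3f}")

# The Welch-Satterthwaite degrees of freedom, computed by hand for comparison
va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
df = (va + vb) ** 2 / (va**2 / (a.size - 1) + vb**2 / (b.size - 1))
print(f"approximate df = {df:.1f}")
```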
133.
Walters SJ, Pharmaceutical Statistics, 2009, 8(2): 163-169
Pre-study sample size calculations for clinical trial research protocols are now mandatory. When an investigator is designing a study to compare the outcomes of an intervention, an essential step is the calculation of sample sizes that will allow a reasonable chance (power) of detecting a pre-determined difference (effect size) in the outcome variable, at a given level of statistical significance. Frequently, studies recruit fewer patients than the initial pre-study sample size calculation suggested. Investigators are then faced with the fact that their study may be inadequately powered to detect the pre-specified treatment effect, and the statistical analysis of the collected outcome data may or may not report a statistically significant result. If the data produce a "non-statistically significant result", investigators are frequently tempted to ask: "Given the actual final study size, what is the power of the study, now, to detect a treatment effect or difference?" The aim of this article is to debate whether it is desirable to answer this question and to undertake a power calculation after the data have been collected and analysed. Copyright © 2008 John Wiley & Sons, Ltd.
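The following sketch illustrates the two calculations the article contrasts: the pre-study sample size and the "post hoc" power at the achieved sample size, both under the usual normal approximation for a two-sample comparison of means. The effect size, standard deviation, and achieved sample size are hypothetical.

```python
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sample comparison of means."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

def achieved_power(n, delta, sigma, alpha=0.05):
    """Approximate power of a two-sample z-test with n patients per arm (upper tail only)."""
    z_a = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta / (sigma * (2 / n) ** 0.5) - z_a)

# Planned: detect a 0.5 SD difference with 80% power at the 5% significance level
print(round(n_per_group(delta=0.5, sigma=1.0)))               # about 63 per arm
# The "post hoc" question the article debates: power at an achieved size of 40 per arm
print(round(achieved_power(n=40, delta=0.5, sigma=1.0), 3))
```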
134.
This article is addressed to those interested in how Bayesian approaches can be brought to bear on research and development planning and management issues. It provides a conceptual framework for estimating the value of information to environmental policy decisions. The methodology is applied to assess the expected value of research concerning the effects of acidic deposition on forests. To calculate the expected value of research requires modeling the possible actions of policymakers under conditions of uncertainty. Information is potentially valuable only if it leads to actions that differ from the actions that would be taken without the information. The relevant issue is how research on forest effects would change choices of emissions controls from those that would be made in the absence of such research. The approach taken is to model information with a likelihood function embedded in a decision tree describing possible policy options. The value of information is then calculated as a function of information accuracy. The results illustrate how accurate the information must be to have an impact on the choice of policy options. The results also illustrate situations in which additional research can have a negative value.
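A minimal, self-contained illustration of the general idea (not the article's forest-effects model): a two-state, two-action decision tree in which a binary research signal of given accuracy is folded in through Bayes' rule, and the expected value of information is computed as a function of that accuracy. All payoffs, the prior, and the research cost are made-up numbers; with a low-accuracy signal the net value becomes negative once the research cost is subtracted, echoing the article's point.

```python
import numpy as np

# Hypothetical two-state, two-action decision: states are "severe forest damage" / "no damage",
# actions are "impose emission controls" / "no controls".  Payoffs (negative costs) are illustrative.
p_damage = 0.3                                   # prior probability of severe damage
payoff = {                                       # payoff[action][state], state 0 = damage, 1 = no damage
    "control":    np.array([-40.0, -40.0]),      # control cost is paid either way
    "no_control": np.array([-120.0,   0.0]),     # large damage cost if no controls and damage occurs
}
prior = np.array([p_damage, 1 - p_damage])

def best_expected_payoff(belief):
    """Expected payoff of the optimal action under the given state probabilities."""
    return max(float(payoff[a] @ belief) for a in payoff)

def expected_value_of_information(accuracy, research_cost=0.0):
    """Net value of a binary research signal that reports the true state with the given accuracy."""
    value_without = best_expected_payoff(prior)
    value_with = 0.0
    for signal_state in (0, 1):                  # signal says "damage" or "no damage"
        # likelihood of this signal under each true state
        lik = np.where(np.arange(2) == signal_state, accuracy, 1 - accuracy)
        joint = lik * prior
        p_signal = joint.sum()
        posterior = joint / p_signal             # Bayes update embedded in the decision tree
        value_with += p_signal * best_expected_payoff(posterior)
    return value_with - value_without - research_cost

for acc in (0.5, 0.7, 0.9, 0.99):
    print(f"accuracy {acc:.2f}: net value {expected_value_of_information(acc, research_cost=5.0):+.2f}")
```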
135.
This article reviews Bayesian inference from the perspective that the designated model is misspecified. Misspecification has implications for the interpretation of objects such as the prior distribution, which has led to recent questioning of the appropriateness of Bayesian inference in this scenario. The main focus of the article is to establish the suitability of applying the Bayes update to a misspecified model; the argument relies on representation theorems for sequences of symmetric distributions, the identification of parameter values of interest, and the construction of sequences of distributions that act as guesses as to where the next observation is coming from. The article concludes with a clear identification of the fundamental starting point for the Bayesian.
136.
The use of the correlation coefficient is suggested as a technique for summarizing and objectively evaluating the information contained in probability plots. Goodness-of-fit tests are constructed using this technique for several commonly used plotting positions for the normal distribution. Empirical sampling methods are used to construct the null distribution for these tests, which are then compared on the basis of power against certain nonnormal alternatives. Commonly used regression tests of fit are also included in the comparisons. The results indicate that use of the plotting position p_i = (i - 0.375)/(n + 0.25) yields a competitive regression test of fit for normality.
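A short sketch of the probability-plot correlation coefficient test described above, using the recommended plotting position p_i = (i - 0.375)/(n + 0.25) and an empirically simulated null distribution; the sample size, replication count, and non-normal alternative are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def ppcc(x):
    """Normal probability-plot correlation coefficient with plotting position p_i = (i - .375)/(n + .25)."""
    x = np.sort(x)
    n = x.size
    p = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
    m = norm.ppf(p)                      # approximate expected normal order statistics
    return np.corrcoef(x, m)[0, 1]

# Build the null distribution of the statistic by simulation, as in the article
n, reps = 30, 5000
null = np.array([ppcc(rng.standard_normal(n)) for _ in range(reps)])
crit = np.quantile(null, 0.05)           # reject normality when the correlation is unusually small

sample = rng.exponential(size=n)         # clearly non-normal data
r = ppcc(sample)
print(f"r = {r:.4f}, 5% critical value = {crit:.4f}, reject normality: {r < crit}")
```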
137.
This paper considers the estimation of Cobb-Douglas production functions using panel data covering a large sample of companies observed for a small number of time periods. GMM estimators have been found to produce large finite-sample biases when using the standard first-differenced estimator. These biases can be dramatically reduced by exploiting reasonable stationarity restrictions on the initial conditions process. Using data for a panel of R&D-performing US manufacturing companies, we find that the additional instruments used in our extended GMM estimator yield much more reasonable parameter estimates.
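The paper's extended GMM estimator is not reproduced here, but the simulation sketch below shows the underlying identification problem it addresses: when inputs are correlated with an unobserved firm effect, pooled OLS on the log Cobb-Douglas equation is biased, while first-differencing sweeps the firm effect out. The data-generating values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated panel: log y_it = beta_k * log k_it + beta_l * log l_it + eta_i + v_it
N, T = 500, 5
beta_k, beta_l = 0.3, 0.6
eta = rng.normal(0, 1, size=(N, 1))                  # unobserved firm effect
log_k = 0.8 * eta + rng.normal(0, 1, size=(N, T))    # inputs correlated with the firm effect
log_l = 0.5 * eta + rng.normal(0, 1, size=(N, T))
log_y = beta_k * log_k + beta_l * log_l + eta + rng.normal(0, 0.3, size=(N, T))

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Pooled OLS in levels: biased upward because the inputs pick up the firm effect
X_lev = np.column_stack([log_k.ravel(), log_l.ravel(), np.ones(N * T)])
print("levels OLS:", np.round(ols(log_y.ravel(), X_lev)[:2], 3))

# First differences sweep out eta_i, recovering (approximately) the true elasticities
dk, dl, dy = np.diff(log_k, axis=1), np.diff(log_l, axis=1), np.diff(log_y, axis=1)
X_fd = np.column_stack([dk.ravel(), dl.ravel()])
print("first-difference OLS:", np.round(ols(dy.ravel(), X_fd), 3))
```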
138.
Peter Hall, Stephen M.-S. Lee & G. Alastair Young, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2000, 62(2): 479-491
We show that, in the context of double-bootstrap confidence intervals, linear interpolation at the second level of the double bootstrap can reduce the simulation error component of coverage error by an order of magnitude. Intervals that are indistinguishable, in terms of coverage error, from theoretical (infinite-simulation) double-bootstrap confidence intervals may be obtained at substantially less computational expense than with the standard Monte Carlo approximation method. The intervals retain the simplicity of uniform bootstrap sampling and require no special analysis or computational techniques. Interpolation at the first level of the double bootstrap is shown to have a relatively minor effect on the simulation error.
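A rough sketch of double-bootstrap calibration of a one-sided percentile limit for a sample mean; it is not the authors' interpolation scheme, and the only interpolation here is np.quantile's default linear interpolation between order statistics. The data, resample counts, and nominal level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(size=30)            # skewed data, sample mean as the statistic
theta_hat = x.mean()
alpha, B1, B2 = 0.05, 1000, 100

# First level: bootstrap estimates and, for each resample, the second-level "prepivoted" value
u = np.empty(B1)
theta_star = np.empty(B1)
for b in range(B1):
    xb = rng.choice(x, size=x.size, replace=True)
    theta_star[b] = xb.mean()
    # Second level: proportion of second-level means falling below the original estimate
    inner = np.array([rng.choice(xb, size=x.size, replace=True).mean() for _ in range(B2)])
    u[b] = (inner < theta_hat).mean()

# Calibrated nominal level: quantile of the prepivoted values (linear interpolation by default)
lam = np.quantile(u, 1 - alpha)
upper_naive = np.quantile(theta_star, 1 - alpha)      # ordinary percentile limit
upper_cal = np.quantile(theta_star, lam)              # double-bootstrap calibrated limit
print(f"naive upper limit {upper_naive:.3f}, calibrated upper limit {upper_cal:.3f} (lambda = {lam:.3f})")
```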
139.
140.