22.
Yongtao Guan, Roland Fleißner, Paul Joyce & Stephen M. Krone 《Statistics and Computing》2006,16(2):193-202
As the number of applications for Markov chain Monte Carlo (MCMC) grows, the power of these methods as well as their shortcomings become more apparent. While MCMC yields an almost automatic way to sample a space according to some distribution, its implementations often fall short of this task, as they may lead to chains which converge too slowly or get trapped within one mode of a multi-modal space. Moreover, it may be difficult to determine whether a chain is only sampling a certain area of the space or whether it has indeed reached stationarity.

In this paper, we show how a simple modification of the proposal mechanism results in faster convergence of the chain and helps to circumvent the problems described above. This mechanism, which is based on an idea from the field of “small-world” networks, amounts to adding occasional “wild” proposals to any local proposal scheme. We demonstrate, through both theory and extensive simulations, that these new proposal distributions can greatly outperform the traditional local proposals when it comes to exploring complex heterogeneous spaces and multi-modal distributions. Our method can easily be applied to most, if not all, problems involving MCMC, and unlike many other remedies which improve the performance of MCMC, it preserves the simplicity of the underlying algorithm.
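The “wild proposal” idea lends itself to a short sketch. The following is a minimal illustration, not the authors' code: a random-walk Metropolis sampler on a toy bimodal target where, with small probability `p_wild`, the local Gaussian step is swapped for a much wider one. Because a mixture of two zero-mean Gaussian steps is still symmetric, the plain Metropolis acceptance ratio applies unchanged. All names and tuning values here are illustrative.

```python
import numpy as np

def log_target(x):
    # Toy bimodal target: mixture of N(-4, 1) and N(4, 1), unnormalized.
    return np.logaddexp(-0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2)

def small_world_metropolis(n_iter, sigma_local=0.5, sigma_wild=10.0,
                           p_wild=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        # Occasionally replace the local step with a long-range "wild" one.
        sigma = sigma_wild if rng.random() < p_wild else sigma_local
        y = x + rng.normal(0.0, sigma)
        # Symmetric proposal, so the plain Metropolis ratio suffices.
        if np.log(rng.random()) < log_target(y) - log_target(x):
            x = y
        chain[i] = x
    return chain

chain = small_world_metropolis(20000)
# With the wild jumps, the chain moves between both modes; a purely local
# sampler with sigma_local = 0.5 would typically stay trapped in one.
```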
23.
Peter Hall, Stephen M.-S. Lee & G. Alastair Young 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2000,62(2):479-491
We show that, in the context of double-bootstrap confidence intervals, linear interpolation at the second level of the double bootstrap can reduce the simulation error component of coverage error by an order of magnitude. Intervals that are indistinguishable, in terms of coverage error, from theoretical (infinite-simulation) double-bootstrap confidence intervals may be obtained at substantially less computational expense than by using the standard Monte Carlo approximation method. The intervals retain the simplicity of uniform bootstrap sampling and require no special analysis or computational techniques. Interpolation at the first level of the double bootstrap is shown to have a relatively minor effect on the simulation error.
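As a rough illustration of the kind of computation involved (a sketch under simplifying assumptions, not the authors' interpolation scheme), the following calibrates a one-sided percentile bound for a mean via a double bootstrap, evaluating the second-level empirical distribution by linear interpolation between order statistics rather than as a step function. Function names and tuning constants are invented for the example.

```python
import numpy as np

def interp_ecdf(sorted_vals, x):
    # Linearly interpolated empirical CDF: the smoothed second-level
    # evaluation, in place of the usual step function.
    n = len(sorted_vals)
    probs = (np.arange(1, n + 1) - 0.5) / n
    return np.interp(x, sorted_vals, probs, left=0.0, right=1.0)

def double_boot_upper(data, alpha=0.95, B=200, C=50, seed=0):
    # Double-bootstrap-calibrated upper percentile bound for the mean.
    rng = np.random.default_rng(seed)
    n = len(data)
    theta_hat = data.mean()
    theta_star = np.empty(B)
    u = np.empty(B)
    for b in range(B):
        samp = rng.choice(data, n, replace=True)
        theta_star[b] = samp.mean()
        # Second-level resamples from this first-level resample.
        inner = np.array([rng.choice(samp, n, replace=True).mean()
                          for _ in range(C)])
        u[b] = interp_ecdf(np.sort(inner), theta_hat)
    lam = np.quantile(u, alpha)          # calibrated nominal level
    return np.quantile(theta_star, lam)  # calibrated upper endpoint

rng = np.random.default_rng(1)
data = rng.normal(size=30)
ub = double_boot_upper(data)
```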
24.
John Whitehead, Susan Todd & W. J. Hall 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2000,62(4):731-745
In sequential studies, formal interim analyses are usually restricted to a consideration of a single null hypothesis concerning a single parameter of interest. Valid frequentist methods of hypothesis testing and of point and interval estimation for the primary parameter have already been devised for use at the end of such a study. However, the completed data set may warrant a more detailed analysis, involving the estimation of parameters corresponding to effects that were not used to determine when to stop, and yet correlated with those that were. This paper describes methods for setting confidence intervals for secondary parameters in a way which provides the correct coverage probability in repeated frequentist realizations of the sequential design used. The method assumes that information accumulates on the primary and secondary parameters at proportional rates. This requirement will be valid in many potential applications, but only in limited situations in survival analysis.
25.
This paper considers the estimation of Cobb-Douglas production functions using panel data covering a large sample of companies observed for a small number of time periods. GMM estimators have been found to produce large finite-sample biases when using the standard first-differenced estimator. These biases can be dramatically reduced by exploiting reasonable stationarity restrictions on the initial conditions process. Using data for a panel of R&D-performing US manufacturing companies, we find that the additional instruments used in our extended GMM estimator yield much more reasonable parameter estimates.
26.
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail.
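The sample size review described here can be illustrated with the standard normal-approximation formula for a two-arm comparison of means, n per group = 2(z_{1-α/2} + z_{1-β})² σ²/δ². A minimal sketch, assuming a simple rule in which the interim SD estimate is plugged back into the planning formula (the trial's actual re-estimation rule may differ):

```python
from math import ceil
from statistics import NormalDist

def per_group_n(sigma, delta, alpha=0.05, power=0.9):
    # Normal-approximation sample size per group for comparing two means:
    # n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
    z = NormalDist().inv_cdf
    n = 2.0 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2
    return ceil(n)

# Planning stage: assumed SD 1.0, clinically relevant difference 0.5.
n_planned = per_group_n(sigma=1.0, delta=0.5)

# Interim review: the pooled SD estimate came out larger than assumed,
# so the total sample size is revised upward.
n_revised = per_group_n(sigma=1.3, delta=0.5)
```

With these illustrative numbers, the planned 85 per group rises to 143 per group after the interim SD estimate of 1.3 replaces the assumed 1.0.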
27.
Walters SJ 《Pharmaceutical statistics》2009,8(2):163-169
Pre‐study sample size calculations for clinical trial research protocols are now mandatory. When an investigator is designing a study to compare the outcomes of an intervention, an essential step is the calculation of sample sizes that will allow a reasonable chance (power) of detecting a pre‐determined difference (effect size) in the outcome variable, at a given level of statistical significance. Frequently studies will recruit fewer patients than the initial pre‐study sample size calculation suggested. Investigators are faced with the fact that their study may be inadequately powered to detect the pre‐specified treatment effect and the statistical analysis of the collected outcome data may or may not report a statistically significant result. If the data produces a “non‐statistically significant result” then investigators are frequently tempted to ask the question “Given the actual final study size, what is the power of the study, now, to detect a treatment effect or difference?” The aim of this article is to debate whether or not it is desirable to answer this question and to undertake a power calculation, after the data have been collected and analysed. Copyright © 2008 John Wiley & Sons, Ltd.
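The debated “observed power” quantity is easy to compute, and doing so makes the central objection concrete: for a given test it is a deterministic function of the p-value, so it carries no information beyond p itself. A sketch for a two-sided z-test, treating the observed effect as if it were the true effect (an illustrative computation, not a recommendation):

```python
from statistics import NormalDist

nd = NormalDist()

def observed_power(p_value, alpha=0.05):
    # "Post hoc" power of a two-sided z-test at the observed effect size.
    # Because it depends on the data only through p_value, it can add
    # nothing to the reported p-value itself.
    z_obs = nd.inv_cdf(1 - p_value / 2)
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(z_obs - z_crit) + nd.cdf(-z_obs - z_crit)
```

A result at exactly p = 0.05 always yields an observed power of about 50%, whatever the data, and larger p-values always yield lower observed power.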
28.
There are now three essentially separate literatures on the topics of multiple systems estimation, record linkage, and missing data. But in practice the three are intimately intertwined. For example, record linkage involving multiple data sources for human populations is often carried out with the expressed goal of developing a merged database for multiple system estimation (MSE). Similarly, one way to view both the record linkage and MSE problems is as ones involving the estimation of missing data. This presentation highlights the technical nature of these interrelationships and provides a preliminary effort at their integration.
29.
Thomas S. Shively, Thomas W. Sager & Stephen G. Walker 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2009,71(1):159-175
Summary. The paper proposes two Bayesian approaches to non-parametric monotone function estimation. The first approach uses a hierarchical Bayes framework and a characterization of smooth monotone functions given by Ramsay that allows unconstrained estimation. The second approach uses a Bayesian regression spline model of Smith and Kohn with a mixture distribution of constrained normal distributions as the prior for the regression coefficients to ensure the monotonicity of the resulting function estimate. The small sample properties of the two function estimators across a range of functions are provided via simulation and compared with existing methods. Asymptotic results are also given that show that Bayesian methods provide consistent function estimators for a large class of smooth functions. An example is provided involving economic demand functions that illustrates the application of the constrained regression spline estimator in the context of a multiple-regression model where two functions are constrained to be monotone.
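The paper's hierarchical Bayes machinery is too involved for a short snippet, but the monotonicity constraint at its heart can be illustrated with the classical frequentist counterpart: the pool-adjacent-violators algorithm (PAVA), which computes the least-squares non-decreasing fit to a sequence. This is a sketch of PAVA, not of the paper's Bayesian spline method.

```python
def pava(y):
    # Pool-adjacent-violators: least-squares monotone (non-decreasing) fit.
    # Whenever a new value would break monotonicity, merge it with the
    # preceding block(s) and replace them by their weighted mean.
    blocks = []  # list of (mean, weight)
    for v in y:
        mean, w = float(v), 1.0
        while blocks and blocks[-1][0] > mean:
            m2, w2 = blocks.pop()
            mean = (mean * w + m2 * w2) / (w + w2)
            w += w2
        blocks.append((mean, w))
    fit = []
    for mean, w in blocks:
        fit.extend([mean] * int(w))
    return fit
```

For example, the violating pair in [1, 3, 2] is pooled to its mean, giving the monotone fit [1.0, 2.5, 2.5].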
30.
This article reviews Bayesian inference from the perspective that the designated model is misspecified. This misspecification has implications for the interpretation of objects, such as the prior distribution, which has been the cause of recent questioning of the appropriateness of Bayesian inference in this scenario. The main focus of this article is to establish the suitability of applying the Bayes update to a misspecified model, and relies on representation theorems for sequences of symmetric distributions; the identification of parameter values of interest; and the construction of sequences of distributions which act as guesses as to where the next observation is coming from. The article concludes with a clear identification of the fundamental starting point for the Bayesian.