581.
582.
A reversible jump algorithm for Bayesian model determination among generalised linear models, under relatively diffuse prior distributions for the model parameters, is proposed. Orthogonal projections of the current linear predictor are used so that knowledge from the current model parameters informs effective proposals. This idea is generalised to moves of a reversible jump algorithm for model determination among generalised linear mixed models. The algorithm therefore exploits the full flexibility available in the reversible jump method. It is demonstrated via two examples and compared to existing methods.
583.
This paper discusses a novel strategy for simulating rare events and an associated Monte Carlo estimation of tail probabilities. Our method uses a system of interacting particles and exploits a Feynman-Kac representation of that system to analyze their fluctuations. Our precise analysis of the variance of a standard multilevel splitting algorithm reveals an opportunity for improvement. This leads to a novel method that relies on adaptive levels and produces, in the limit of an idealized version of the algorithm, estimates with optimal variance. The motivation for this theoretical work comes from problems occurring in watermarking and fingerprinting of digital content, which represents a new field of application for rare-event simulation techniques. Some numerical results show performance close to the idealized version of our technique for these practical applications.
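The multilevel splitting idea the paper refines can be illustrated with a minimal fixed-level sketch. All parameters here are hypothetical and the target is P(X &gt; γ) for X ~ Exp(1), chosen because the memoryless property gives exact conditional resampling at each level:

```python
import numpy as np

rng = np.random.default_rng(1)

def splitting_tail_prob(gamma=10.0, n=5_000, n_levels=5):
    """Fixed-level multilevel splitting estimate of P(X > gamma) for
    X ~ Exp(1): the rare event is decomposed into a product of
    conditional probabilities, one per intermediate level."""
    levels = np.linspace(gamma / n_levels, gamma, n_levels)
    x = rng.exponential(1.0, n)
    estimate = 1.0
    for level in levels:
        # fraction of particles that reach the next level
        frac = np.mean(x > level)
        if frac == 0.0:
            return 0.0
        estimate *= frac
        # regenerate n particles conditionally on {X > level};
        # for Exp(1) this is exact: X - level is again Exp(1)
        x = level + rng.exponential(1.0, n)
    return estimate
```

The true value is exp(-10) ≈ 4.5e-5, so direct Monte Carlo with the same total number of draws would typically see only a handful of hits; splitting spends its samples on moderately rare conditional events instead. The adaptive version analysed in the paper chooses the levels on the fly rather than fixing them in advance.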
584.
Marcus C. Christiansen, AStA Advances in Statistical Analysis (2012) 96(2):155-186
We illustrate how multistate Markov and semi-Markov models can be used for the actuarial modeling of health insurance policies, focusing on health insurances that are pursued on a similar technical basis to that of life insurance. In the first part, we give an overview of the basic modeling frameworks that are commonly used and explain the calculation of prospective reserves and net premiums. In the second part, we discuss the biometric insurance risk, focusing on the calculation of implicit safety margins. We present new results on implicit margins in the semi-Markov model and on biometric estimation risk in the Markov model, and we explain why there is a need for future research concerning the systematic biometric risk.
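The reserve calculation from the first part can be sketched in a discrete-time three-state disability model (active/disabled/dead) with a backward Thiele-type recursion. All transition probabilities, benefit amounts and the premium below are made-up illustrative numbers, not values from the paper:

```python
import numpy as np

# Annual transition matrix: states 0 = active, 1 = disabled, 2 = dead
P = np.array([[0.95, 0.03, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])
v = 1 / 1.03          # annual discount factor (3% interest)
horizon = 30          # remaining policy years
benefit = np.array([0.0, 1.0, 0.0])   # annuity of 1 paid while disabled
premium = np.array([0.2, 0.0, 0.0])   # level premium paid while active

# Backward recursion for the prospective reserve:
# V_t(i) = benefit(i) - premium(i) + v * sum_j P[i, j] * V_{t+1}(j)
V = np.zeros(3)
for _ in range(horizon):
    V = benefit - premium + v * (P @ V)

print(V)  # reserve by current state at time 0
```

The reserve in the disabled state is positive (expected future benefits), while the dead state is absorbing with no cash flows and reserve zero; the semi-Markov extension discussed in the paper additionally lets the transition probabilities depend on the time already spent in the current state.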
585.
C. Devon Lin, Randy R. Sitter, Boxin Tang, Journal of Statistical Planning and Inference (2012) 142(2):445-456
We consider the problem of constructing good two-level nonregular fractional factorial designs. The criteria of minimum G and G2 aberration are used to rank designs. A general design structure is utilized to provide a solution to this practical, yet challenging, problem. With the help of this design structure, we develop an efficient algorithm for obtaining a collection of good designs based on the aforementioned two criteria. Finally, we present some results for designs of 32 and 40 runs obtained from applying this algorithmic approach.
586.
587.
The cross-entropy (CE) method is an adaptive importance sampling procedure that has been successfully applied to a diverse range of complicated simulation problems. However, recent research has shown that in some high-dimensional settings, the likelihood ratio degeneracy problem becomes severe and the importance sampling estimator obtained from the CE algorithm becomes unreliable. We consider a variation of the CE method whose performance does not deteriorate as the dimension of the problem increases. We then illustrate the algorithm via a high-dimensional estimation problem in risk management.
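A minimal sketch of the basic CE idea (not the high-dimensional variant proposed here): estimate P(X &gt; γ) for X ~ N(0,1) by adaptively shifting the mean of a normal importance density toward the rare region, one quantile level at a time. The parameter choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def ce_tail_prob(gamma=4.0, n=10_000, rho=0.1, iters=20):
    """Adaptive CE: shift the sampling mean mu until the (1 - rho)
    sample quantile exceeds gamma, then run a final importance-sampling
    pass with the tilted density N(mu, 1)."""
    mu = 0.0
    for _ in range(iters):
        x = rng.normal(mu, 1.0, n)
        level = np.quantile(x, 1 - rho)
        if level >= gamma:
            break
        # CE update for the normal family: new mu is the mean of the
        # elite samples, weighted by the likelihood ratio phi(x)/phi(x-mu)
        elite = x[x >= level]
        w = np.exp(-elite * mu + 0.5 * mu**2)
        mu = np.sum(w * elite) / np.sum(w)
    # unbiased IS estimate of P(X > gamma) under the tilted density
    x = rng.normal(mu, 1.0, n)
    w = np.exp(-x * mu + 0.5 * mu**2)
    return np.mean(w * (x >= gamma))
```

For γ = 4 the true probability is about 3.2e-5, far too small for plain Monte Carlo at this sample size. The degeneracy problem the paper addresses arises because in high dimensions the likelihood ratio w concentrates on a few samples, which this one-dimensional sketch does not exhibit.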
588.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, both under normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step. Otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error of the parametric case. Here we analyse under what conditions non-parametric charts become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly less huge (but still larger than the customary range). These corrections serve to control the in-control behaviour, so that markedly wrong outcomes of the estimates occur sufficiently rarely. The price for this protection is some loss of detection power when the process is out of control. A change point comes into view as soon as this loss can be made sufficiently small.
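The instability described above is easy to reproduce. With a reference sample of m = 1000 and a nominal false-alarm rate of 10^-3, the nonparametric upper limit is essentially the sample maximum, so its realised false-alarm rate fluctuates by its own order of magnitude across reference samples. The in-control distribution and sizes below are illustrative, not from the paper:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
p = 0.001   # nominal per-observation false-alarm rate
m = 1000    # in-control (Phase I) reference sample size

realised = []
for _ in range(500):
    # nonparametric upper control limit: empirical (1 - p)-quantile
    limit = np.quantile(rng.normal(size=m), 1 - p)
    # exact exceedance probability of that limit under N(0,1)
    realised.append(0.5 * math.erfc(limit / math.sqrt(2.0)))

realised = np.array(realised)
print(realised.mean(), realised.std())  # both of the same order as p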
589.
A finite mixture model using the multivariate t distribution has been shown to be a robust extension of normal mixtures. In this paper, we present a Bayesian approach to inference about the parameters of t-mixture models. The prior distributions are specified to be weakly informative, so as to avoid nonintegrable posterior distributions. We present two efficient EM-type algorithms for computing the joint posterior mode with the observed data and an incomplete future vector as the sample. Markov chain Monte Carlo sampling schemes are also developed to obtain the target posterior distribution of the parameters. The advantages of the Bayesian approach over the maximum likelihood method are demonstrated on a real data set.
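A stripped-down frequentist analogue conveys the mechanics: EM for a two-component univariate t-mixture with known degrees of freedom. (The paper's EM-type algorithms target the joint posterior mode instead; the data and all settings below are synthetic.)

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def t_pdf(x, mu, sigma, nu):
    """Density of the scaled/shifted t distribution with nu df."""
    z = (x - mu) / sigma
    c = math.gamma((nu + 1) / 2) / (
        math.gamma(nu / 2) * math.sqrt(nu * math.pi) * sigma)
    return c * (1 + z**2 / nu) ** (-(nu + 1) / 2)

def em_t_mixture(x, nu=4.0, iters=100):
    """EM for a two-component univariate t-mixture with known df.
    The E-step computes responsibilities r and the latent gamma
    weights u that downweight outliers -- the source of robustness."""
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        dens = np.array([pi[k] * t_pdf(x, mu[k], sigma[k], nu)
                         for k in range(2)])
        r = dens / dens.sum(axis=0)
        u = np.array([(nu + 1) / (nu + ((x - mu[k]) / sigma[k])**2)
                      for k in range(2)])
        for k in range(2):
            w = r[k] * u[k]
            mu[k] = np.sum(w * x) / np.sum(w)
            sigma[k] = math.sqrt(np.sum(w * (x - mu[k])**2) / np.sum(r[k]))
        pi = r.mean(axis=1)
    return pi, mu, sigma

# synthetic data: two t_4 components centred at 0 and 10
x = np.concatenate([rng.standard_t(4, 300), 10 + rng.standard_t(4, 200)])
pi, mu, sigma = em_t_mixture(x)
```

The Bayesian version in the paper augments this with priors on all parameters and replaces the M-step by a posterior-mode update, with MCMC used when the full posterior (rather than its mode) is wanted.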
590.
S. Landau, I. C. Ellison-Wright, E. T. Bullmore, Journal of the Royal Statistical Society, Series C (Applied Statistics) (2004) 53(1):63-82
Summary. Functional magnetic resonance imaging (FMRI) measures the physiological response of the human brain to experimentally controlled stimulation. In a periodically designed experiment it is of interest to test for a difference in the timing (phase shift) of the response between two anatomically distinct brain regions. We suggest two tests for an interregional difference in phase shift: one based on asymptotic theory and one based on bootstrapping. Whilst the two procedures differ in some of their assumptions, both tests rely on employing the large number of voxels (three-dimensional pixels) in non-activated brain regions to take account of spatial autocorrelation between voxelwise phase shift observations within the activated regions of interest. As an example we apply both tests, and their counterparts assuming spatial independence, to FMRI phase shift data that were acquired from a normal young woman during performance of a periodically designed covert verbal fluency task. We conclude that it is necessary to take account of spatial autocovariance between voxelwise FMRI time series parameter estimates such as the phase shift, and that the most promising way of achieving this is by modelling the spatial autocorrelation structure from a suitably defined base region of the image slice.