Full-text access type
Paid full text | 4373 articles |
Free | 68 articles |
Free (domestic) | 18 articles |
Subject classification
Management | 224 articles |
Labor science | 1 article |
Ethnology | 7 articles |
Demography | 40 articles |
Collected works and series | 131 articles |
Theory and methodology | 32 articles |
General | 1298 articles |
Sociology | 49 articles |
Statistics | 2677 articles |
Year of publication
2024 | 1 article |
2023 | 16 articles |
2022 | 20 articles |
2021 | 30 articles |
2020 | 60 articles |
2019 | 106 articles |
2018 | 132 articles |
2017 | 222 articles |
2016 | 84 articles |
2015 | 95 articles |
2014 | 156 articles |
2013 | 1109 articles |
2012 | 317 articles |
2011 | 143 articles |
2010 | 154 articles |
2009 | 124 articles |
2008 | 156 articles |
2007 | 155 articles |
2006 | 147 articles |
2005 | 139 articles |
2004 | 124 articles |
2003 | 100 articles |
2002 | 81 articles |
2001 | 97 articles |
2000 | 100 articles |
1999 | 81 articles |
1998 | 64 articles |
1997 | 59 articles |
1996 | 72 articles |
1995 | 59 articles |
1994 | 40 articles |
1993 | 31 articles |
1992 | 38 articles |
1991 | 12 articles |
1990 | 29 articles |
1989 | 17 articles |
1988 | 21 articles |
1987 | 3 articles |
1986 | 4 articles |
1985 | 11 articles |
1984 | 9 articles |
1983 | 9 articles |
1982 | 7 articles |
1981 | 3 articles |
1980 | 6 articles |
1979 | 6 articles |
1978 | 5 articles |
1977 | 1 article |
1976 | 1 article |
1975 | 3 articles |
Sort order: 4,459 results found; search time 0 ms
51.
High-resolution direction-finding techniques have attracted wide attention in recent years, and research in this area is very active. This paper proposes a new method for denoising the covariance matrix of the signals received by an array from multiple targets, and verifies the results by computer simulation.
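The denoising idea can be illustrated with a common eigen-decomposition approach: estimate the noise power from the smallest eigenvalues of the sample covariance matrix and retain only the signal subspace. This is a generic numpy sketch; the array geometry, source count, and thresholding rule are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_covariance(R, n_sources):
    """Illustrative eigen-decomposition denoising of an array covariance
    matrix: estimate the noise power from the smallest eigenvalues,
    subtract it, and keep only the signal subspace."""
    w, V = np.linalg.eigh(R)             # eigenvalues in ascending order
    sigma2 = w[:-n_sources].mean()       # noise power from noise subspace
    ws = np.clip(w - sigma2, 0.0, None)  # remove the noise floor
    Vs = V[:, -n_sources:]               # signal-subspace eigenvectors
    return (Vs * ws[-n_sources:]) @ Vs.conj().T

# Simulate a 6-element array observing 2 narrowband sources in white noise.
M, K, N = 6, 2, 2000
A = np.exp(1j * np.outer(np.arange(M), [0.5, 1.4]))   # steering vectors
S = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = A @ S + noise
R = X @ X.conj().T / N                   # sample covariance matrix
R_clean = denoise_covariance(R, n_sources=K)
```

The cleaned matrix is Hermitian with rank equal to the number of sources, which is the property subspace direction-finding methods rely on.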
52.
To clarify the mechanisms of human error in nuclear power plants and provide a theoretical basis for its precise prevention and control, this article applies an organization-oriented human-error analysis technique to 208 human-error event reports from nuclear power plants over 2010-2020. From the resulting sample, the main human-error types (knowledge-, rule-, and skill-based) and influencing factors (e.g., education and training, organizational design, procedures, and the human-machine interface) are identified statistically, and the principal human-error scenarios (e.g., knowledge/skill and experience, attention, stress, and attitude) are constructed from the correlations among the factors combined with factor analysis. Because the factors influence one another, the Quadratic Assignment Procedure (QAP) is used for correlation and regression analysis of the scenarios, yielding causal models of knowledge-, rule-, and skill-based human error, distinguishing the different error scenarios, and revealing the error mechanisms. The results show that the different human-error types differ significantly in both mechanism and scenario, providing a theoretical basis for the precise prevention and control of human error.
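The QAP step can be sketched as a matrix permutation test: correlate the off-diagonal cells of two square "influence" matrices, then rebuild the correlation under simultaneous row-and-column permutations of one matrix to obtain a null distribution. A minimal illustration on synthetic matrices (the data and sizes are made up, not the study's sample):

```python
import numpy as np

rng = np.random.default_rng(1)

def qap_correlation(X, Y, n_perm=2000, rng=rng):
    """Quadratic Assignment Procedure: correlate the off-diagonal cells
    of two square matrices, then permute the rows and columns of Y
    simultaneously to build a permutation null distribution."""
    n = X.shape[0]
    mask = ~np.eye(n, dtype=bool)              # drop the diagonal
    obs = np.corrcoef(X[mask], Y[mask])[0, 1]
    null = np.empty(n_perm)
    for b in range(n_perm):
        p = rng.permutation(n)
        Yp = Y[np.ix_(p, p)]                   # same permutation on rows and columns
        null[b] = np.corrcoef(X[mask], Yp[mask])[0, 1]
    pval = (np.sum(np.abs(null) >= abs(obs)) + 1) / (n_perm + 1)
    return obs, pval

# Two factor-influence matrices with a planted common structure.
n = 12
base = rng.standard_normal((n, n))
X = base + 0.1 * rng.standard_normal((n, n))
Y = base + 0.1 * rng.standard_normal((n, n))
r, p = qap_correlation(X, Y)
```

Permuting rows and columns together preserves the dependence structure within each matrix, which is why QAP gives valid inference where ordinary cell-wise tests do not.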
53.
Demand models for China's narrow money and its components (Cited 1 time: 0 self-citations, 1 by others)
This paper studies the demand functions for cash in circulation, demand deposits, and narrow money. The empirical results show that cash, demand deposits, and narrow money have a long-run equilibrium relationship with income, the price level, interest rates, and the degree of monetization. China's money supply growth rate exceeds the sum of economic growth and inflation because of China's ongoing monetization: the increase in the money stock must meet not only the needs of economic growth but also those of the monetization of the economy.
54.
Price discovery of the SME Board ETF (Cited 1 time: 0 self-citations, 1 by others)
Using 5-minute intraday high-frequency trading data, this study examines price discovery between the SME Board ETF and its underlying index through an error-correction model and variance decomposition, and thereby the process of information transmission. The empirical results show that the SME Board ETF price and the SME Board P index are cointegrated and reach a long-run equilibrium. In price discovery, the SME Board P index leads the ETF: shocks to the index from new information are larger than shocks to the ETF price, the index explains more of the forecast-error variance than the ETF price does, and the index is therefore the leading indicator in information transmission. The efficiency of China's ETF market still has room for improvement.
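The two-step error-correction logic, a cointegrating regression followed by a regression of returns on the lagged equilibrium error, can be sketched on simulated data. The "ETF" and "index" series below are synthetic, and this is plain OLS rather than the full vector error-correction model with variance decomposition:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a cointegrated pair: an "index" random walk and an "ETF"
# that error-corrects toward it (hypothetical data, not market prices).
T = 5000
index = np.cumsum(0.01 * rng.standard_normal(T)) + 10.0
etf = np.empty(T)
etf[0] = index[0]
for t in range(1, T):
    etf[t] = etf[t-1] + 0.3 * (index[t-1] - etf[t-1]) + 0.005 * rng.standard_normal()

# Step 1: cointegrating regression etf_t = a + b * index_t + u_t.
Xmat = np.column_stack([np.ones(T), index])
a, b = np.linalg.lstsq(Xmat, etf, rcond=None)[0]
ecm = etf - (a + b * index)               # equilibrium error

# Step 2: error-correction equation for the ETF's price changes.
dy = np.diff(etf)
Z = np.column_stack([np.ones(T - 1), ecm[:-1]])
const, alpha = np.linalg.lstsq(Z, dy, rcond=None)[0]
```

A significantly negative `alpha` means the ETF adjusts toward the index rather than the reverse, which is exactly the asymmetry the abstract reports: the index leads and the ETF follows.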
55.
Based on nonparametric regression, this paper proposes an omitted-variable test statistic applicable to both cross-sectional and time-series data. Compared with the existing literature, the statistic not only avoids model-misspecification bias but also has higher local power, detecting local alternatives that converge to the null hypothesis at a faster rate. A single bandwidth is used to estimate both the conditional joint expectation and the conditional marginal expectation, letting the two nonparametric estimation errors jointly determine the asymptotic distribution of the statistic; this improves the statistic's finite-sample properties and avoids the tedious work of selecting multiple bandwidths and computing multiple bias terms. Monte Carlo simulations show that the statistic has good finite-sample properties and higher power than that of Ait-Sahalia et al. An empirical analysis uses the statistic to capture a relationship between the output gap and inflation that the F-statistic cannot detect, confirming the applicability of a nonlinear output-inflation Phillips curve in China.
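The nonparametric building block, kernel regression with a single bandwidth, can be illustrated with a Nadaraya-Watson estimator. This sketch is not the paper's test statistic; the Gaussian kernel, bandwidth, and data-generating process are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def nw_regression(x, y, grid, h):
    """Nadaraya-Watson kernel estimator of E[y|x] with a Gaussian
    kernel and a single bandwidth h."""
    d = (grid[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * d**2)                 # kernel weights
    return (w @ y) / w.sum(axis=1)          # locally weighted average

# A nonlinear relation that a linear F-test could easily miss.
n = 2000
x = rng.uniform(-2, 2, n)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(n)
grid = np.array([-1.0, 0.0, 1.0])
fit = nw_regression(x, y, grid, h=0.15)
```

Because the estimator makes no functional-form assumption, it recovers the sine shape that a straight-line fit would flatten out, which is the intuition behind using it to detect omitted nonlinear structure.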
56.
Peter Hall, Stephen M.-S. Lee & G. Alastair Young, Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2000, 62(2): 479-491
We show that, in the context of double-bootstrap confidence intervals, linear interpolation at the second level of the double bootstrap can reduce the simulation error component of coverage error by an order of magnitude. Intervals that are indistinguishable in terms of coverage error with theoretical, infinite simulation, double-bootstrap confidence intervals may be obtained at substantially less computational expense than by using the standard Monte Carlo approximation method. The intervals retain the simplicity of uniform bootstrap sampling and require no special analysis or computational techniques. Interpolation at the first level of the double bootstrap is shown to have a relatively minor effect on the simulation error.
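The calibration idea, estimating coverage over a grid of nominal levels with the inner bootstrap and then interpolating linearly to hit the target level, can be sketched as follows. The design (the grid, the resample sizes, and percentile intervals for a mean) is an illustrative assumption, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: calibrate a percentile interval for the mean.
x = rng.standard_normal(30)
n, B1, B2 = len(x), 200, 100
grid = np.linspace(0.80, 0.999, 40)      # candidate nominal levels
hits = np.zeros_like(grid)
mean = x.mean()
for _ in range(B1):                      # outer bootstrap level
    xb = rng.choice(x, n, replace=True)
    inner = np.array([rng.choice(xb, n, replace=True).mean()
                      for _ in range(B2)])   # inner bootstrap level
    lo = np.quantile(inner, (1 - grid) / 2)
    hi = np.quantile(inner, 1 - (1 - grid) / 2)
    hits += (lo <= mean) & (mean <= hi)  # does each interval cover?
coverage = hits / B1                     # estimated coverage per level
# Linear interpolation picks the nominal level whose estimated
# coverage equals the 0.95 target.
adjusted = np.interp(0.95, coverage, grid)
```

The interpolation step is what lets a modest inner-bootstrap grid stand in for a much larger Monte Carlo run, which is the source of the computational saving the abstract describes.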
57.
M. Jamshidian & R. I. Jennrich, Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2000, 62(2): 257-270
The EM algorithm is a popular method for computing maximum likelihood estimates. One of its drawbacks is that it does not produce standard errors as a by-product. We consider obtaining standard errors by numerical differentiation. Two approaches are considered. The first differentiates the Fisher score vector to yield the Hessian of the log-likelihood. The second differentiates the EM operator and uses an identity that relates its derivative to the Hessian of the log-likelihood. The well-known SEM algorithm uses the second approach. We consider three additional algorithms: one that uses the first approach and two that use the second. We evaluate the complexity and precision of these three algorithms and the SEM algorithm in seven examples. The first is a single-parameter example used to give insight. The others are three examples in each of two areas of EM application: Poisson mixture models and the estimation of covariance from incomplete data. The examples show that there are algorithms that are much simpler and more accurate than the SEM algorithm. Hopefully their simplicity will increase the availability of standard error estimates in EM applications. It is shown that, as previously conjectured, a symmetry diagnostic can accurately estimate errors arising from numerical differentiation. Some issues related to the speed of the EM algorithm and algorithms that differentiate the EM operator are identified.
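The first approach, numerically differentiating the log-likelihood at the EM solution, can be sketched for a toy mixture: two Poisson components with known rates and an unknown weight. The model, step size, and sample size are illustrative choices, not the paper's examples:

```python
import math
import numpy as np

rng = np.random.default_rng(5)

# Toy mixture p*Poisson(1) + (1-p)*Poisson(5) with known rates and
# unknown weight p (hypothetical model, not one of the paper's examples).
n, p_true = 3000, 0.4
z = rng.random(n) < p_true
y = np.where(z, rng.poisson(1.0, n), rng.poisson(5.0, n))
fact = np.array([math.factorial(int(k)) for k in y], dtype=float)
f1 = np.exp(-1.0) * 1.0 ** y / fact      # Poisson(1) pmf at each y
f2 = np.exp(-5.0) * 5.0 ** y / fact      # Poisson(5) pmf at each y

def loglik(p):
    return np.log(p * f1 + (1 - p) * f2).sum()

# EM for the mixture weight: E-step responsibilities, M-step average.
p = 0.5
for _ in range(200):
    w = p * f1 / (p * f1 + (1 - p) * f2)
    p = w.mean()

# Standard error from a central-difference Hessian of the log-likelihood
# at the EM solution (the abstract's first approach).
h = 1e-4
d2 = (loglik(p + h) - 2.0 * loglik(p) + loglik(p - h)) / h ** 2
se = 1.0 / np.sqrt(-d2)
```

The EM iterations never see the Hessian; the curvature, and hence the standard error, comes entirely from the finite-difference step applied afterwards.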
58.
In 1960 Levene suggested a potentially robust test of homogeneity of variance based on an ordinary least squares analysis of variance of the absolute values of mean-based residuals. Levene's test has since been shown to have inflated levels of significance when based on the F-distribution, and to test a hypothesis other than homogeneity of variance when treatments are unequally replicated, but the incorrect formulation is now standard output in several statistical packages. This paper develops a weighted least squares analysis of variance of the absolute values of both mean-based and median-based residuals. It shows how to adjust the residuals so that tests using the F-statistic focus on homogeneity of variance for both balanced and unbalanced designs. It shows how to modify the F-statistics currently produced by statistical packages so that the distribution of the resultant test statistic is closer to an F-distribution than is currently the case. The weighted least squares approach also produces component mean squares that are unbiased irrespective of which variable is used in Levene's test. To complete this aspect of the investigation the paper derives exact second-order moments of the component sums of squares used in the calculation of the mean-based test statistic. It shows that, for large samples, both ordinary and weighted least squares test statistics are equivalent; however, they are over-dispersed compared to an F variable.
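The median-based variant (often called the Brown-Forsythe test) can be written out directly: a one-way ANOVA F-statistic computed on absolute deviations from each group's median. A self-contained sketch on synthetic groups, using ordinary rather than weighted least squares:

```python
import numpy as np

rng = np.random.default_rng(6)

def levene_median(groups):
    """Levene/Brown-Forsythe statistic: one-way ANOVA F on the absolute
    deviations from each group's median (the median-based residuals
    discussed in the abstract)."""
    z = [np.abs(g - np.median(g)) for g in groups]
    k = len(z)
    n = sum(len(zi) for zi in z)
    grand = np.concatenate(z).mean()
    ss_between = sum(len(zi) * (zi.mean() - grand) ** 2 for zi in z)
    ss_within = sum(((zi - zi.mean()) ** 2).sum() for zi in z)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three groups with equal variances, then three with one inflated variance.
equal = [rng.standard_normal(50) for _ in range(3)]
unequal = [rng.standard_normal(50) * s for s in (1.0, 1.0, 4.0)]
F_eq, F_ne = levene_median(equal), levene_median(unequal)
```

Using the median rather than the mean as the centering point is what makes this version robust to skewed and heavy-tailed data.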
59.
In this paper we investigate the asymptotic critical value behaviour of certain multiple decision procedures, such as simultaneous confidence intervals and simultaneous as well as stepwise multiple test procedures. Supposing that n hypotheses or parameters of interest are under consideration, we investigate the behaviour of the critical values as n increases. More specifically, we answer, for example, the question of by which amount the lengths of confidence intervals increase when an additional parameter is added to the statistical analysis. Furthermore, critical values of different multiple decision procedures, for instance step-down and step-up procedures, are compared. Some general theoretical results are derived and applied for various distributions.
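The flavor of these results can be seen in a small numeric experiment: for Sidak-adjusted two-sided normal confidence intervals, the critical value grows only like sqrt(2 log n) in the number of parameters n. The Sidak adjustment and alpha = 0.05 are illustrative choices, not the paper's setting:

```python
from statistics import NormalDist

# Sidak-adjusted critical value for n simultaneous two-sided normal
# intervals at familywise level alpha: use the per-comparison level
# alpha_n = 1 - (1 - alpha)^(1/n).
alpha = 0.05
nd = NormalDist()

def sidak_crit(n):
    a_n = 1 - (1 - alpha) ** (1 / n)
    return nd.inv_cdf(1 - a_n / 2)

# Critical values barely double while n grows by three orders of magnitude.
crits = {n: sidak_crit(n) for n in (1, 2, 10, 100, 1000)}
```

The diminishing increments from one n to the next are exactly the "by which amount do the intervals lengthen when a parameter is added" question the abstract raises.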
60.
Anirban Dasgupta, George Casella, Mohan Delampady, Christian Genest, William E. Strawderman & Herman Rubin, Revue canadienne de statistique, 2000, 28(4): 675-687
The authors consider the correlation between two arbitrary functions of the data and a parameter when the parameter is regarded as a random variable with given prior distribution. They show how to compute such a correlation and use closed form expressions to assess the dependence between parameters and various classical or robust estimators thereof, as well as between p‐values and posterior probabilities of the null hypothesis in the one‐sided testing problem. Other applications involve the Dirichlet process and stationary Gaussian processes. Using this approach, the authors also derive a general nonparametric upper bound on Bayes risks.
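The central quantity, a correlation between a parameter treated as random under a prior and an estimator of it, is easy to approximate by Monte Carlo. In the normal-normal toy model below (an illustration, not one of the authors' examples) the exact value is known: with theta ~ N(0,1) and x | theta ~ N(theta,1), Cov(theta, x) = 1 and Var(x) = 2, so corr(theta, x) = 1/sqrt(2):

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo estimate of corr(theta, x) when the parameter theta is
# itself random: draw theta from the prior, then the data given theta.
B = 200_000
theta = rng.standard_normal(B)          # draws from the N(0,1) prior
x = theta + rng.standard_normal(B)      # one observation per draw
r = np.corrcoef(theta, x)[0, 1]
```

The same recipe, simulating from the prior and then from the model, extends to any estimator or p-value in place of the raw observation.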