91.
The main objective of this work is to evaluate the performance of confidence intervals, built using the deviance statistic, for the hyperparameters of state space models. The first procedure is a marginal approximation to confidence regions based on the likelihood ratio test, and the second is based on the signed root deviance profile. These methods are computationally efficient and do not suffer from problems such as intervals with limits outside the parameter space, which can occur when the focus is on the variances of the errors. The procedures are compared with the usual approaches in the literature, which include the method based on the asymptotic distribution of the maximum likelihood estimator as well as bootstrap confidence intervals. The comparison is performed via a Monte Carlo study in order to establish empirically the advantages and disadvantages of each method. The results show that the methods based on the deviance statistic achieve better coverage rates than the asymptotic and bootstrap procedures.
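As an illustration of the signed root deviance interval (a minimal sketch under assumptions, not the authors' code), consider the simplest case: the variance of an i.i.d. zero-mean Gaussian sample. The interval is the set of sigma^2 for which r(sigma^2) = sign(mle - sigma^2) * sqrt(2 * (l(mle) - l(sigma^2))) lies within +/- z; both endpoints are positive by construction, so they cannot fall outside the parameter space. The sample size, bracketing interval, and bisection depth are all illustrative choices.

```python
import numpy as np

def loglik(sigma2, x):
    # Gaussian log-likelihood with known zero mean
    n = x.size
    return -0.5 * n * np.log(sigma2) - 0.5 * np.sum(x**2) / sigma2

def signed_root_deviance(sigma2, x, mle):
    dev = 2.0 * (loglik(mle, x) - loglik(sigma2, x))
    return np.sign(mle - sigma2) * np.sqrt(max(dev, 0.0))

def deviance_ci(x, z=1.96):
    """95% CI for sigma^2 from the signed root deviance, by bisection."""
    mle = np.mean(x**2)

    def solve(lo, hi, target):
        # r(sigma2) is decreasing in sigma2, so plain bisection works
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if signed_root_deviance(mid, x, mle) > target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    lower = solve(mle * 1e-3, mle, z)    # r = +z below the MLE
    upper = solve(mle, mle * 1e3, -z)    # r = -z above the MLE
    return lower, upper

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200)       # true sigma^2 = 1 (assumed setup)
lo, hi = deviance_ci(x)
```

Because the interval is read off the likelihood itself, it is asymmetric around the MLE in exactly the way the variance parameter demands.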
92.
In this paper, we analytically derive the exact formula for the mean squared error (MSE) of two weighted average (WA) estimators of each individual regression coefficient. We then carry out numerical evaluations to investigate the small-sample properties of the WA estimators and compare their MSE performance with that of other shrinkage estimators and the usual OLS estimator. Our numerical results show that (1) the WA estimators have smaller MSE than the other shrinkage estimators and the OLS estimator over a wide region of the parameter space, and (2) the range over which the relative MSE of the WA estimator is smaller than that of the OLS estimator narrows as the number of explanatory variables k increases.
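The flavour of such a Monte Carlo MSE comparison can be sketched as follows. This is not the paper's WA estimator: as a stand-in we use a simple Stein-type weighted average of OLS and zero, with an assumed design, true coefficient vector, and unit error variance. Shrinkage helps most on the coefficient whose true value is zero.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k, reps = 50, 3, 2000
beta = np.array([0.2, 0.0, 0.5])          # true coefficients (assumed)
X = rng.normal(size=(n, k))
XtX_inv = np.linalg.inv(X.T @ X)

ols_err = np.zeros(k)
wa_err = np.zeros(k)
for _ in range(reps):
    y = X @ beta + rng.normal(size=n)     # error variance 1 (assumed known)
    b_ols = XtX_inv @ X.T @ y
    # Illustrative data-driven weight: shrink hard when the signal is weak
    F = (b_ols @ (X.T @ X) @ b_ols) / k
    w = max(0.0, 1.0 - 1.0 / F)
    b_wa = w * b_ols                      # weighted average of OLS and zero
    ols_err += (b_ols - beta) ** 2
    wa_err += (b_wa - beta) ** 2

mse_ols = ols_err / reps                  # per-coefficient Monte Carlo MSE
mse_wa = wa_err / reps
```

Tabulating `mse_wa / mse_ols` over a grid of true beta values reproduces the kind of relative-MSE region the abstract describes.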
93.
Li Yi et al., 《统计研究》 (Statistical Research), 2019, 36(4): 95-105
As the sampling environment grows more complex in the big-data era, and especially as 3S technology (remote sensing, geographic information systems, and global positioning systems) matures, more and more socio-economic questions involve spatial sampling, in which the target units are relatively sparse, unevenly distributed, and locally clustered, posing serious challenges to traditional survey sampling. This paper introduces the basic principles of adaptive sampling in spatial network settings, its main operational steps, and Markov chain Monte Carlo estimation and inference. A sample survey of merchants in Tianhe District, Guangzhou serves as a case study of the practical issues involved, providing theoretical support and an empirical methodological reference for surveys of migrant populations, environmental pollution, regional economies, and related topics.
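The rare-and-clustered setting the abstract describes is the classic use case for adaptive cluster sampling. The toy sketch below (an assumption-laden illustration, not the paper's MCMC procedure) plants one dense cluster on a grid, draws an initial simple random sample, expands any hit into its whole network of positive neighbours, and applies the modified Hansen-Hurwitz estimator, where each initially sampled cell contributes its network mean.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy spatial population: a 20x20 grid of counts, clustered in one corner
N_side = 20
y = np.zeros((N_side, N_side))
y[2:5, 3:6] = rng.poisson(8, size=(3, 3))   # one dense cluster (assumed)
N = N_side * N_side

def network_of(cell, y):
    """Connected component (4-neighbour) of cells satisfying y > 0."""
    stack, seen = [cell], {cell}
    while stack:
        i, j = stack.pop()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < y.shape[0] and 0 <= nj < y.shape[1]
                    and y[ni, nj] > 0 and (ni, nj) not in seen):
                seen.add((ni, nj))
                stack.append((ni, nj))
    return seen

# Initial SRS of cells; a cell with y == 0 is its own network of size 1
n_init = 40
flat = rng.choice(N, size=n_init, replace=False)
contrib = []
for idx in flat:
    cell = (idx // N_side, idx % N_side)
    if y[cell] > 0:
        net = network_of(cell, y)
        contrib.append(np.mean([y[c] for c in net]))
    else:
        contrib.append(0.0)

mean_hat = float(np.mean(contrib))   # modified HH estimate of the mean of y
```

The adaptive expansion concentrates effort where units actually occur, which is exactly what fixed-grid designs waste in sparse populations.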
94.
This paper presents the unbiased estimator, the least squares estimator, and the maximum likelihood estimator of the unknown parameters in the normal regression model, together with their derivations; numerical simulation is used to illustrate their similarities and differences and to verify the corresponding conclusions.
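The key relationships are easy to verify numerically: under normal errors, least squares and maximum likelihood give the same coefficient estimate, while the MLE of the error variance divides by n and is biased downward relative to the unbiased divisor n - p. A small sketch (with an assumed design and true parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0, -0.5])        # true coefficients (assumed)
y = X @ beta + rng.normal(0.0, 1.5, size=n)

# Least squares and maximum likelihood coincide for the coefficients
b_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
b_ml = np.linalg.solve(X.T @ X, X.T @ y)

resid = y - X @ b_ls
sigma2_ml = resid @ resid / n            # MLE of sigma^2: biased downward
sigma2_ub = resid @ resid / (n - p)      # unbiased estimator of sigma^2
```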
95.
Using household panel data spanning more than a decade from two poor administrative villages in Nayong County, Guizhou Province, this paper quantifies and evaluates, from the perspective of poverty vulnerability, the poverty-prevention effect of participatory integrated community development in different periods and its targeting accuracy. The results show that the poverty vulnerability of rural households in this underdeveloped area fell dramatically between 1999 and 2011 (by roughly 99%), greatly strengthening their ability to withstand risk shocks. Overall, participatory integrated community development has a significant contemporaneous poverty-prevention effect, lowering the household vulnerability index by more than 5 percentage points, but its lagged effect is not pronounced. Across groups, the program exhibits some leakage and spillover effects, yet it is fairly inclusive and reasonably well targeted, reaching most severely vulnerable households as well as moderately and mildly vulnerable ones. In other words, except for marginally vulnerable households, extremely vulnerable households, and some severely vulnerable households, every vulnerability group benefits, although the higher a household's vulnerability, the smaller the protection it receives. Moreover, the targeting accuracy of this poverty-prevention effect is not sustained over time, with no evident lagged effect.
96.
This paper studies online portfolio selection based on expected-utility maximization and the L1-median. Unlike the EG (Exponential Gradient) strategy, which estimates the price trend from single-period price information only, we estimate the trend from multi-period prices to improve online performance. First, the expected price trend is obtained from multi-period price data via the L1-median. Then, by maximizing expected utility, we propose a new online strategy with linear time complexity, EGLM (Exponential Gradient via L1-Median). Measuring the distance between portfolio weight vectors with the relative-entropy function, we prove that EGLM is a universal portfolio strategy. Finally, an empirical study on historical data from six securities markets at home and abroad shows that EGLM outperforms both the UP (Universal Portfolio) and EG strategies.
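The two ingredients can be sketched together: a Weiszfeld iteration for the L1-median (geometric median) of a window of recent prices, and the standard multiplicative EG weight update applied to the resulting predicted price relative. This is an illustrative reconstruction under assumed data, window length, and learning rate, not the paper's EGLM derivation.

```python
import numpy as np

def l1_median(P, iters=100, eps=1e-9):
    """Geometric (L1) median of the rows of P via Weiszfeld iterations."""
    m = P.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(P - m, axis=1)
        w = 1.0 / np.maximum(d, eps)          # guard against zero distance
        m = (w[:, None] * P).sum(axis=0) / w.sum()
    return m

def eg_update(w, x, eta=0.05):
    """Exponential-gradient portfolio update for price-relative vector x."""
    g = np.exp(eta * x / (w @ x))
    w_new = w * g
    return w_new / w_new.sum()                # renormalize to the simplex

rng = np.random.default_rng(3)
prices = np.cumprod(1 + 0.01 * rng.standard_normal((12, 4)), axis=0)

# Multi-period trend: predicted price relative from the L1-median of a
# recent price window, instead of the last single-period relative
window = prices[-5:]
trend = l1_median(window) / prices[-1]

w = np.full(4, 0.25)                          # start from uniform weights
w = eg_update(w, trend)
```

The L1-median is robust to a single aberrant period in the window, which is the motivation for replacing the last-period relative used by plain EG.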
97.
Among the various estimators of spot (instantaneous) volatility, nonparametric estimators have been a continuing focus of research because they can measure spot volatility accurately. In practice, however, such estimators all face the problem of choosing an optimal bandwidth, which typically involves unknown parameters that are hard to estimate, making it difficult to pin down a concrete bandwidth value. Taking the kernel estimator of spot volatility as an example, and borrowing bandwidth-selection ideas from nonparametric regression, this paper constructs an algorithm that computes a concrete optimal bandwidth directly from the data. Theoretical analysis and numerical experiments show that the algorithm has good stability, adaptability, and convergence speed, paving the way for further applied research on spot volatility.
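The kernel estimator in question smooths squared increments around the target time. A minimal sketch on simulated data with known constant volatility (the path, kernel, and the hand-picked bandwidth h are assumptions; choosing h from the data is precisely the problem the paper addresses):

```python
import numpy as np

rng = np.random.default_rng(5)
n, sigma = 20000, 0.2                     # assumed true spot vol sigma = 0.2
dt = 1.0 / n
t = np.arange(1, n + 1) * dt
dX = sigma * np.sqrt(dt) * rng.standard_normal(n)   # increments of sigma*W

def spot_vol2(tau, t, dX, h):
    """Kernel estimate of sigma^2(tau) from squared increments."""
    K = np.exp(-0.5 * ((t - tau) / h) ** 2)         # Gaussian kernel
    return float((K @ dX**2) / (K.sum() * (t[1] - t[0])))

h = 0.1            # bandwidth: the tuning choice the paper automates
est = spot_vol2(0.5, t, dX, h)            # should be near sigma^2 = 0.04
```

Too small an h makes the estimate noisy; too large an h averages away genuine time variation in volatility, which is why a data-driven rule matters.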
98.
In some statistical problems a degree of explicit prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, 'θ > θ1' or 'θ < θ1', where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator θ̂ of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case in which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
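The bagging idea is simple to sketch: constrain each bootstrap replicate, then average, so the result exceeds θ1 strictly unless essentially every replicate violates the constraint. Below is a minimal illustration for a mean constrained to be positive; the data-generating parameters and number of bootstrap replicates are assumptions.

```python
import numpy as np

def bagged_constrained(x, theta1=0.0, B=500, rng=None):
    """Bagged version of max(theta_hat, theta1) for the mean of x."""
    rng = rng or np.random.default_rng(0)
    n = x.size
    reps = np.empty(B)
    for b in range(B):
        xb = x[rng.integers(0, n, size=n)]   # bootstrap resample
        reps[b] = max(np.mean(xb), theta1)   # constrain each replicate
    return reps.mean()                       # averaging smooths the kink

rng = np.random.default_rng(11)
x = rng.normal(0.05, 1.0, size=50)           # true mean barely above 0

plain = max(np.mean(x), 0.0)                 # hard-thresholded estimator
bagged = bagged_constrained(x, 0.0, rng=np.random.default_rng(12))
```

Unlike `plain`, which sticks at exactly θ1 over a whole region of data sets, `bagged` moves continuously as the data are perturbed, which is the smoothness property the abstract emphasizes.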
99.
Huber's estimator has had a long-lasting impact, particularly on robust statistics. It is well known that under certain conditions Huber's estimator is asymptotically minimax. A moderate generalization in rederiving Huber's estimator shows that it is not the only choice. We develop an alternative asymptotic minimax estimator, which we call regression with stochastically bounded noise (RSBN). Simulations demonstrate that RSBN performs slightly better, although it is unclear how to justify such an improvement theoretically. We propose two numerical solutions: an iterative solution based on the proximal point method, which is extremely easy to implement, and a solution using state-of-the-art nonlinear optimization software packages, e.g., SNOPT. Our contribution is the generalization of the variational approach, which should be useful in deriving asymptotic minimax estimators for other problems.
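For context, the baseline being generalized, Huber's regression M-estimator, is commonly computed by iteratively reweighted least squares rather than by the proximal point or SNOPT routes the paper proposes. A self-contained sketch (tuning constant, data, and outlier pattern are assumptions):

```python
import numpy as np

def huber_irls(X, y, delta=1.345, iters=50):
    """Huber M-estimator of regression coefficients via IRLS."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]       # start from OLS
    for _ in range(iters):
        r = y - X @ b
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r / s)
        w = np.where(u <= delta, 1.0, delta / u)   # Huber weights
        WX = w[:, None] * X
        b = np.linalg.solve(X.T @ WX, X.T @ (w * y))
    return b

rng = np.random.default_rng(8)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([0.0, 2.0])                        # true line (assumed)
y = X @ beta + rng.normal(scale=0.5, size=n)
y[:5] += 15.0                                      # five gross outliers

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_hub = huber_irls(X, y)
```

The outliers drag the OLS intercept upward, while the Huber fit downweights them to a weight of roughly delta/u and stays near the true line.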
100.
It is often the case that high-dimensional data consist of only a few informative components. Standard statistical modeling and estimation in such a situation is prone to inaccuracies due to overfitting unless regularization methods are applied. In the context of classification, we propose a class of regularization methods based on shrinkage estimators, where the shrinkage combines variable selection with conditional maximum likelihood. Using Stein's unbiased estimator of the risk, we derive an estimator of the optimal shrinkage method within a certain class. We compare the optimal shrinkage method in the classification context with the optimal shrinkage method for estimating a mean vector under squared loss; the latter problem has been studied extensively, but it appears that those results are not fully relevant for classification. We demonstrate and examine our method on simulated data and compare it with the feature annealed independence rule and Fisher's rule.
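A rough feel for shrinkage-plus-selection in this setting: soft-threshold the class mean differences so that noise coordinates drop out, then classify with an independence rule on the surviving coordinates. This stand-in uses a fixed threshold rather than the paper's SURE-derived optimum, and all dimensions and signal strengths are assumptions.

```python
import numpy as np

rng = np.random.default_rng(21)
n, p, k = 100, 500, 10                   # high-dimensional, sparse signal
mu = np.zeros(p)
mu[:k] = 1.0                             # only k informative coordinates

X0 = rng.normal(0.0, 1.0, size=(n, p))          # class 0 training data
X1 = rng.normal(0.0, 1.0, size=(n, p)) + mu     # class 1 training data

d = X1.mean(axis=0) - X0.mean(axis=0)    # raw mean-difference direction
lam = 0.5                                # fixed threshold (assumed, not SURE)
d_shrunk = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)   # soft threshold

# Independence rule: project onto the shrunken difference direction and
# compare with the midpoint between the class means
mid = 0.5 * (X1.mean(axis=0) + X0.mean(axis=0))
def classify(x, direction):
    return int((x - mid) @ direction > 0)

test0 = rng.normal(0.0, 1.0, size=(200, p))
test1 = rng.normal(0.0, 1.0, size=(200, p)) + mu
acc = (np.mean([1 - classify(x, d_shrunk) for x in test0])
       + np.mean([classify(x, d_shrunk) for x in test1])) / 2
```

With thresholding, nearly all of the 490 noise coordinates are zeroed out, so the classifier's variance no longer grows with p, which is the point of regularizing here.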