121.
Using more than a decade of household panel survey data from two poor administrative villages in Nayong County, Guizhou Province, this paper quantifies and evaluates, from the perspective of poverty vulnerability, the poverty-prevention effect and targeting precision of participatory integrated community development across different periods. The results show that household poverty vulnerability in this underdeveloped area fell sharply over 1999-2011 (a decline of roughly 99%), with a large gain in resilience to risk shocks. Overall, the contemporaneous poverty-prevention effect of participatory integrated community development is significant, lowering the household poverty-vulnerability index by more than 5 percentage points, but its lagged effect is not pronounced. Examined by group, the program exhibits some leakage and spillover effects, yet it is fairly inclusive and reasonably well targeted, reaching most severely vulnerable and moderately or mildly vulnerable households. In other words, apart from marginally vulnerable households, extremely vulnerable households, and some severely vulnerable households, every vulnerability group receives some protection, although the higher a household's vulnerability, the weaker the protection it receives. Moreover, the targeting precision of this poverty-prevention effect is not sustained over time and shows no clear lagged effect.
122.
This paper studies online portfolio selection based on expected-utility maximization and L1-median estimation. Unlike the EG (Exponential Gradient) strategy, which estimates the price trend from single-period price information only, we estimate the trend from multi-period prices to improve the online strategy's performance. First, the expected price trend is obtained from multi-period price data via the L1-median. Then, by maximizing expected utility, a new online strategy with linear time complexity, EGLM (Exponential Gradient via L1-Median), is proposed; defining the distance between portfolio weight vectors through the relative-entropy function, we prove that EGLM enjoys the universal portfolio property. Finally, an empirical analysis on historical data from six domestic and foreign stock markets shows that EGLM achieves better competitive performance than the UP (Universal Portfolio) and EG strategies.
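The EGLM update itself is not reproduced in the abstract, but its two ingredients can be sketched: a Weiszfeld iteration for the L1-median (geometric median) of a window of recent price vectors, and the standard EG multiplicative weight update. Function names, the learning rate `eta`, and the window handling below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def l1_median(prices, n_iter=200, tol=1e-9):
    # Weiszfeld iteration for the L1-median (geometric median)
    # of a window of recent price vectors (rows of `prices`).
    y = prices.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(prices - y, axis=1), tol)
        w = 1.0 / d
        y_new = (w[:, None] * prices).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

def eg_update(b, x, eta=0.05):
    # Exponential-gradient update of portfolio weights b given
    # (predicted) price relatives x; keeps b on the simplex.
    g = x / (b @ x)                  # gradient of log wealth
    b_new = b * np.exp(eta * g)
    return b_new / b_new.sum()
```

In an EGLM-style strategy, the predicted price relatives fed to `eg_update` would come from the L1-median of the recent price window rather than from the last period alone.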
123.
Among the various estimators of instantaneous (spot) volatility, nonparametric estimators have been a persistent research focus because of their accuracy. In practice, however, these estimators all face the problem of choosing an optimal bandwidth, and because the optimal bandwidth typically involves unknown parameters that are hard to estimate, its concrete value is difficult to determine in applications. Taking the kernel estimator of spot volatility as an example, and borrowing bandwidth-selection ideas from nonparametric regression, this paper constructs an algorithm that computes the optimal bandwidth directly from the data. Theoretical analysis and numerical verification show that the algorithm is stable, adaptive, and fast to converge, paving the way for subsequent applied research on spot volatility.
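For reference, a minimal Gaussian-kernel spot-variance estimator of the kind such a bandwidth algorithm targets might look as follows; the paper's data-driven bandwidth rule is its contribution and is not reproduced, so the bandwidth `h` is passed in by hand here.

```python
import numpy as np

def spot_variance(t0, times, log_prices, h):
    # Kernel-weighted sum of squared log-returns: an estimate of the
    # instantaneous variance sigma^2(t0); h is the bandwidth.
    dx = np.diff(log_prices)
    mid = times[:-1]
    k = np.exp(-0.5 * ((mid - t0) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return float(np.sum(k * dx ** 2))
```

On simulated Brownian motion with constant volatility sigma, the estimate concentrates around sigma squared as the sampling frequency grows and h shrinks at a suitable rate.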
124.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, ‘θ > θ1 ’ or ‘θ < θ1 ’, where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1 , which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1 , strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
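As a toy illustration of the idea (not the paper's construction), one can bag the truncated estimator max(θ̂, θ1) for a mean: resample the data, truncate on each resample, and average. The bagged estimate exceeds θ1 strictly unless essentially every bootstrap resample violates the constraint, and it varies smoothly with the data.

```python
import numpy as np

def bagged_truncated_mean(x, theta1, B=500, seed=0):
    # Bagging the truncated estimator max(mean(x*), theta1):
    # average the constrained estimate over B bootstrap resamples.
    rng = np.random.default_rng(seed)
    n = len(x)
    idx = rng.integers(0, n, size=(B, n))
    boot_means = x[idx].mean(axis=1)
    return float(np.maximum(boot_means, theta1).mean())
```

Unlike plain truncation, a small perturbation of the sample shifts the bagged estimate continuously rather than leaving it pinned at θ1.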
125.
Huber's estimator has had a long-lasting impact, particularly on robust statistics. It is well known that under certain conditions, Huber's estimator is asymptotically minimax. A moderate generalization in rederiving Huber's estimator shows that Huber's estimator is not the only choice. We develop an alternative asymptotic minimax estimator and name it regression with stochastically bounded noise (RSBN). Simulations demonstrate that RSBN performs slightly better, although it is unclear how to justify such an improvement theoretically. We propose two numerical solutions: an iterative numerical solution, which is extremely easy to implement and is based on the proximal point method; and a solution by applying state-of-the-art nonlinear optimization software packages, e.g., SNOPT. Contribution: the generalization of the variational approach is interesting and should be useful in deriving other asymptotic minimax estimators in other problems.
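For context, the classical Huber location estimate (not RSBN itself, which the paper derives) can be computed by a simple iteratively reweighted scheme; the tuning constant c = 1.345 is the conventional choice for about 95% Gaussian efficiency, and fixing the scale at the normalized MAD is an implementation assumption.

```python
import numpy as np

def huber_location(x, c=1.345, n_iter=100, tol=1e-10):
    # Huber M-estimate of location via iteratively reweighted means;
    # scale fixed at the (normalized) median absolute deviation.
    mu = float(np.median(x))
    s = float(np.median(np.abs(x - mu))) / 0.6745
    if s == 0.0:
        return mu
    for _ in range(n_iter):
        r = (x - mu) / s
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
        mu_new = float(np.sum(w * x) / np.sum(w))
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```

On contaminated data the estimate stays near the bulk of the sample while the ordinary mean is dragged toward the outlier.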
126.
It is often the case that high-dimensional data consist of only a few informative components. Standard statistical modeling and estimation in such a situation is prone to inaccuracies due to overfitting, unless regularization methods are practiced. In the context of classification, we propose a class of regularization methods through shrinkage estimators. The shrinkage is based on variable selection coupled with conditional maximum likelihood. Using Stein's unbiased estimator of the risk, we derive an estimator for the optimal shrinkage method within a certain class. A comparison of the optimal shrinkage methods in a classification context, with the optimal shrinkage method when estimating a mean vector under a squared loss, is given. The latter problem is extensively studied, but it seems that the results of those studies are not completely relevant for classification. We demonstrate and examine our method on simulated data and compare it to feature annealed independence rule and Fisher's rule.
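The paper's optimal shrinkage class is specific to classification, but the flavor of a SURE-tuned shrinkage rule can be sketched for the simpler Gaussian-mean problem it is compared against. Unit noise variance is assumed, and searching thresholds over the observed magnitudes is an implementation choice.

```python
import numpy as np

def sure_soft_threshold(x):
    # Soft-threshold x ~ N(mu, I) and pick the threshold lam that
    # minimizes Stein's unbiased risk estimate:
    #   SURE(lam) = n - 2*#{|x_i| <= lam} + sum_i min(x_i^2, lam^2)
    n = len(x)
    cand = np.sort(np.abs(x))
    risks = np.array([n - 2 * np.sum(np.abs(x) <= lam)
                      + np.sum(np.minimum(x ** 2, lam ** 2)) for lam in cand])
    lam = float(cand[np.argmin(risks)])
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0), lam
```

When the mean vector is sparse, the SURE-chosen threshold typically cuts the squared-error risk well below that of the raw observations.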
127.
In many diagnostic studies, multiple diagnostic tests are performed on each subject or multiple disease markers are available. Commonly, this information is combined to improve diagnostic accuracy. We consider the problem of comparing the discriminatory abilities between two groups of biomarkers. Specifically, this article focuses on confidence interval estimation of the difference between paired AUCs based on optimally combined markers under the assumption of multivariate normality. Simulation studies demonstrate that the proposed generalized variable approach provides confidence intervals with satisfying coverage probabilities at finite sample sizes. The proposed method can also easily provide P-values for hypothesis testing. Application to analysis of a subset of data from a study on coronary heart disease illustrates the utility of the method in practice.
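Under multivariate normality, the optimally combined marker is the classical Su-Liu linear combination a = (Σ0 + Σ1)⁻¹(μ1 − μ0), whose AUC is Φ(√(δᵀ(Σ0 + Σ1)⁻¹δ)) with δ = μ1 − μ0; the generalized-variable interval for the difference of paired AUCs is the paper's contribution and is not reproduced. A sketch of the combination step:

```python
import numpy as np
from math import erf, sqrt

def best_linear_auc(mu0, sigma0, mu1, sigma1):
    # Su-Liu optimal linear combination of normal markers and its AUC:
    #   a = (Sigma0 + Sigma1)^{-1} (mu1 - mu0)
    #   AUC = Phi( sqrt(delta' (Sigma0 + Sigma1)^{-1} delta) )
    delta = np.asarray(mu1, float) - np.asarray(mu0, float)
    a = np.linalg.solve(np.asarray(sigma0) + np.asarray(sigma1), delta)
    z = sqrt(float(delta @ a))
    return a, 0.5 * (1.0 + erf(z / sqrt(2.0)))  # Phi via erf
```

Comparing two groups of biomarkers amounts to computing this AUC for each group's combined marker and interval-estimating the difference.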
128.
The maximum likelihood estimator (MLE) and the likelihood ratio test (LRT) will be considered for making inference about the scale parameter of the exponential distribution in the case of moving extreme ranked set sampling (MERSS). The MLE and LRT cannot be written in closed form. Therefore, a modification of the MLE using the technique suggested by Mehrotra and Nanda (Biometrika 61:601–606, 1974) will be considered, and this modified estimator will be used to modify the LRT to obtain a closed-form test of a simple hypothesis against one-sided alternatives. The same idea will be used to modify the most powerful test (MPT) for testing a simple hypothesis versus a simple hypothesis, yielding a closed-form test of a simple hypothesis against one-sided alternatives. The modified estimator then appears to be a good competitor of the MLE, and the modified tests good competitors of the LRT, under both MERSS and simple random sampling (SRS).
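Since the MERSS likelihood for the exponential scale has no closed-form maximizer, the abstract's starting point can be illustrated numerically: under MERSS with maxima, a unit measured from a set of size s is the maximum of s iid exponentials, with density (s/θ)e^(−x/θ)(1 − e^(−x/θ))^(s−1), and the MLE can be found by a simple grid search. The grid bounds below are illustrative assumptions; the paper's modified closed-form estimator is not reproduced.

```python
import numpy as np

def merss_loglik(theta, x, sizes):
    # Log-likelihood when x[j] is the maximum of sizes[j] iid
    # exponential(theta) variables.
    z = x / theta
    return float(np.sum(np.log(sizes) - np.log(theta) - z
                        + (sizes - 1) * np.log1p(-np.exp(-z))))

def merss_mle_grid(x, sizes, n_grid=2000):
    # Numerical MLE of the scale theta by grid search over an
    # interval around the sample mean (an illustrative choice).
    grid = np.linspace(0.1 * x.mean(), 3.0 * x.mean(), n_grid)
    ll = [merss_loglik(t, x, sizes) for t in grid]
    return float(grid[int(np.argmax(ll))])
```

On simulated MERSS data the grid-search MLE recovers the true scale to within sampling error.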
129.
Randomized response techniques are widely employed in surveys dealing with sensitive questions to ensure interviewee anonymity and to reduce nonresponse rates and biased responses. Since Warner’s (J Am Stat Assoc 60:63–69, 1965) pioneering work, many ingenious devices have been suggested to increase respondents’ privacy protection and to better estimate the proportion of people, π_A, bearing a sensitive attribute. In spite of the massive use of auxiliary information in the estimation of non-sensitive parameters, very few attempts have been made to improve randomization strategy performance when auxiliary variables are available. Building on Zaizai’s (Model Assist Stat Appl 1:125–130, 2006) recent work, in this paper we provide a class of estimators for π_A, for a generic randomization scheme, when the mean of a supplementary non-sensitive variable is known. The minimum attainable variance bound of the class is obtained and the best estimator is identified. We prove that the best estimator acts as a regression-type estimator which is at least as efficient as the corresponding estimator evaluated without allowing for the auxiliary variable. The general results are then applied to Warner and Simmons’ model.
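The baseline device referenced here is Warner's original design: each respondent answers the sensitive statement with probability p and its complement with probability 1 − p, so P(yes) = p·π_A + (1 − p)(1 − π_A), and π_A is recovered by inverting this relation. The auxiliary-variable regression-type class is the paper's contribution and is not sketched; the clipping to [0, 1] below is a practical convention.

```python
import numpy as np

def warner_pi_hat(n_yes, n, p):
    # Warner (1965) estimator: invert P(yes) = p*pi + (1-p)*(1-pi),
    # valid for p != 1/2; clipped to [0, 1].
    lam_hat = n_yes / n
    pi_hat = (lam_hat - (1.0 - p)) / (2.0 * p - 1.0)
    return float(np.clip(pi_hat, 0.0, 1.0))
```

Because the respondent's coin flip is unobserved, individual answers reveal little, yet the aggregate proportion of the sensitive attribute remains estimable.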
130.
Output Gap Estimation Based on a Multivariate Dynamic Model
张成思 (Zhang Chengsi), 《统计研究》 (Statistical Research), 2009, 26(7): 27–33
Using the Beveridge-Nelson decomposition within a multivariate dynamic model system and Bayesian Gibbs-sampling estimation, this paper estimates China's output gap from 1985Q1 to 2008Q2 and compares it with the results of conventional univariate methods, both in statistical properties and in predictive power for monetary-policy adjustment. The empirical results show that the statistical properties of the different output-gap measures differ, and only the gap estimated from the multivariate system significantly predicts monetary policy. This indicates that the multivariate estimate more fully captures the interaction between output and related variables, carries richer information, and therefore offers a more valuable reference for macro-policy adjustment.
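The multivariate Beveridge-Nelson step can be illustrated for the simplest case of a stationary VAR(1) in demeaned growth rates, where the cycle has the closed form c_t = −A(I − A)⁻¹Δỹ_t; the paper's Bayesian Gibbs estimation of the dynamic system, and the specific variables it includes, are not reproduced here.

```python
import numpy as np

def bn_cycle_var1(dy_demeaned, A):
    # Beveridge-Nelson decomposition when demeaned growth rates dy
    # follow a stationary VAR(1), dy_t = A dy_{t-1} + e_t:
    #   trend_t = y_t + A(I-A)^{-1} dy_t
    #   cycle_t = y_t - trend_t = -A(I-A)^{-1} dy_t
    k = A.shape[0]
    M = A @ np.linalg.inv(np.eye(k) - A)
    return -(np.atleast_2d(dy_demeaned) @ M.T)
```

The output gap in this framework is the component of the cycle vector corresponding to output; richer VAR lags change only the long-run forecast matrix M.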
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号