81.
Among the various estimators of instantaneous (spot) volatility, nonparametric estimators have long been a research focus because they can measure spot volatility accurately. In practice, however, all such estimators face the problem of determining an optimal bandwidth. Because the optimal bandwidth typically involves unknown parameters that are hard to estimate, pinning down its numerical value in applications is difficult. Taking the kernel estimator of spot volatility as an example, this paper borrows the bandwidth-selection ideas of nonparametric regression and constructs an algorithm that computes a concrete optimal bandwidth directly from the data. Theoretical analysis and numerical verification show that the proposed algorithm has good stability, adaptability, and convergence speed, paving the way for further applied research on spot volatility.
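The bandwidth problem described above can be illustrated with a minimal sketch (not the paper's algorithm): a Gaussian-kernel estimator of spot variance whose bandwidth is chosen by leave-one-out cross-validation on squared returns, the bandwidth-selection idea borrowed from nonparametric regression. The simulated model and all symbols are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate n intraday log-returns with a smooth, known spot-variance path.
n = 2000
t = np.linspace(0.0, 1.0, n)
sigma2 = 0.5 + 0.4 * np.sin(2 * np.pi * t)      # true spot variance
r = rng.normal(0.0, np.sqrt(sigma2 / n))         # returns over steps of size 1/n

def spot_var(t0, h):
    """Gaussian-kernel estimate of spot variance at time t0 with bandwidth h."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)
    return n * np.sum(w * r ** 2) / np.sum(w)    # rescale squared returns by 1/dt = n

def cv_score(h):
    """Leave-one-out CV on squared returns: the nonparametric-regression criterion."""
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    np.fill_diagonal(w, 0.0)                     # leave each point out of its own fit
    fit = (w @ (n * r ** 2)) / w.sum(axis=1)
    return np.mean((n * r ** 2 - fit) ** 2)

grid = np.array([0.02, 0.05, 0.1, 0.2])
h_star = grid[np.argmin([cv_score(h) for h in grid])]
est = spot_var(0.5, h_star)                      # true value at t = 0.5 is 0.5
```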
82.
In some statistical problems a degree of explicit prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, 'θ > θ1' or 'θ < θ1', where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
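A minimal sketch of the bagging idea, assuming the parameter is a mean constrained by θ > θ1 = 0 (the paper's setting is more general): the bagged estimator averages the constrained estimator over bootstrap resamples, which smooths the kink at the boundary.

```python
import numpy as np

rng = np.random.default_rng(1)

def bagged_constrained_mean(x, theta1=0.0, B=500, rng=rng):
    """Bagging applied to the hard-thresholded mean estimator max(mean(x), theta1).

    Averaging max(boot_mean, theta1) over bootstrap resamples keeps the result
    strictly above theta1 unless the data overwhelmingly contradict the
    constraint, and it responds smoothly to perturbations of the data."""
    n = len(x)
    boot = rng.choice(x, size=(B, n), replace=True).mean(axis=1)
    return np.maximum(boot, theta1).mean()

# Data whose mean sits near the boundary theta1 = 0.
x = rng.normal(0.05, 1.0, size=100)
naive = max(x.mean(), 0.0)          # the conventional hard-thresholded estimator
bagged = bagged_constrained_mean(x)
```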
83.
Huber's estimator has had a long-lasting impact, particularly on robust statistics. It is well known that under certain conditions, Huber's estimator is asymptotically minimax. A moderate generalization in rederiving Huber's estimator shows that it is not the only choice. We develop an alternative asymptotic minimax estimator, named regression with stochastically bounded noise (RSBN). Simulations demonstrate that RSBN performs slightly better, although it is unclear how to justify such an improvement theoretically. We propose two numerical solutions: an iterative solution based on the proximal point method, which is extremely easy to implement, and a solution using state-of-the-art nonlinear optimization software packages, e.g., SNOPT. Contribution: the generalization of the variational approach is interesting and should be useful in deriving asymptotic minimax estimators in other problems.
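For context, the classical Huber M-estimator of location that this abstract generalizes can be computed by iteratively reweighted least squares; this sketch is the standard estimator, not the proposed RSBN method.

```python
import numpy as np

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted least squares.

    Points within k*scale of the current estimate get weight 1; larger
    residuals get weight k*scale/|residual|, which bounds their influence."""
    mu = np.median(x)
    s = np.median(np.abs(x - mu)) / 0.6745          # robust scale via the MAD
    for _ in range(max_iter):
        r = x - mu
        w = np.where(np.abs(r) <= k * s, 1.0,
                     k * s / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(2)
clean = rng.normal(0.0, 1.0, 95)
data = np.concatenate([clean, np.full(5, 20.0)])    # 5% gross outliers at 20
mu_huber = huber_location(data)
mu_mean = data.mean()                               # the non-robust benchmark
```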
84.
It is often the case that high-dimensional data consist of only a few informative components. Standard statistical modeling and estimation in such a situation is prone to inaccuracies due to overfitting, unless regularization methods are practiced. In the context of classification, we propose a class of regularization methods through shrinkage estimators. The shrinkage is based on variable selection coupled with conditional maximum likelihood. Using Stein's unbiased estimator of the risk, we derive an estimator for the optimal shrinkage method within a certain class. A comparison is given between the optimal shrinkage methods in a classification context and the optimal shrinkage method for estimating a mean vector under squared loss. The latter problem is extensively studied, but it seems that the results of those studies are not completely relevant for classification. We demonstrate and examine our method on simulated data and compare it to the feature-annealed independence rule and Fisher's rule.
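A minimal sketch of shrinkage tuned by Stein's unbiased risk estimate, in the mean-vector-under-squared-loss setting the abstract compares against (soft thresholding here, not necessarily the paper's estimator class):

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse mean vector observed with unit Gaussian noise: few informative components.
n = 500
theta = np.zeros(n)
theta[:25] = 4.0
y = theta + rng.normal(size=n)

def sure(lam, y):
    """Stein's unbiased risk estimate for soft-thresholding at lam (unit noise):
    SURE(lam) = n - 2*#{|y_i| <= lam} + sum(min(|y_i|, lam)^2)."""
    return (len(y) - 2.0 * np.sum(np.abs(y) <= lam)
            + np.sum(np.minimum(np.abs(y), lam) ** 2))

lams = np.linspace(0.0, 4.0, 81)
lam_star = lams[np.argmin([sure(l, y) for l in lams])]
theta_hat = np.sign(y) * np.maximum(np.abs(y) - lam_star, 0.0)

risk_shrunk = np.mean((theta_hat - theta) ** 2)
risk_raw = np.mean((y - theta) ** 2)   # unshrunk estimator, risk about 1
```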
85.
The maximum likelihood estimator (MLE) and the likelihood ratio test (LRT) are considered for inference about the scale parameter of the exponential distribution under moving extreme ranked set sampling (MERSS). Neither the MLE nor the LRT can be written in closed form. Therefore, a modification of the MLE using the technique suggested by Mehrotra and Nanda (Biometrika 61:601–606, 1974) is considered, and this modified estimator is used to modify the LRT, yielding a closed-form test of a simple hypothesis against one-sided alternatives. The same idea is used to modify the most powerful test (MPT) for a simple hypothesis versus a simple hypothesis, again yielding a closed-form test against one-sided alternatives. The modified estimator proves to be a good competitor of the MLE, and the modified tests are good competitors of the LRT under both MERSS and simple random sampling (SRS).
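A hedged numerical sketch, assuming the "maxima" variant of MERSS in which the i-th set has size i and only its maximum is measured: the exponential-scale log-likelihood of the set maxima has no closed-form maximizer (as the abstract notes), so it is profiled on a grid here. The design and all figures are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

theta_true, m, cycles = 2.0, 5, 200

# MERSS (maxima variant): each cycle draws sets of sizes 1, 2, ..., m and
# records only the maximum of each set.
sizes = np.tile(np.arange(1, m + 1), cycles)
maxima = np.array([rng.exponential(theta_true, size=k).max() for k in sizes])

def neg_loglik(theta):
    """Negative log-likelihood of the set maxima under Exp(theta).
    Max of k iid Exp(theta) has density (k/theta) e^{-x/theta} (1-e^{-x/theta})^{k-1}."""
    z = maxima / theta
    return -np.sum(np.log(sizes / theta) - z
                   + (sizes - 1) * np.log1p(-np.exp(-z)))

# Profile the likelihood on a fine grid in place of a closed-form MLE.
grid = np.linspace(0.5, 6.0, 2001)
theta_mle = grid[np.argmin([neg_loglik(t) for t in grid])]
```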
86.
Randomized response techniques are widely employed in surveys dealing with sensitive questions to ensure interviewee anonymity and to reduce nonresponse rates and biased responses. Since Warner's (J Am Stat Assoc 60:63–69, 1965) pioneering work, many ingenious devices have been suggested to increase respondents' privacy protection and to better estimate the proportion of people, π_A, bearing a sensitive attribute. Despite the massive use of auxiliary information in the estimation of nonsensitive parameters, very few attempts have been made to improve randomization-strategy performance when auxiliary variables are available. Starting from Zaizai's (Model Assist Stat Appl 1:125–130, 2006) recent work, in this paper we provide a class of estimators for π_A, for a generic randomization scheme, when the mean of a supplementary nonsensitive variable is known. The minimum attainable variance bound of the class is obtained and the best estimator is identified. We prove that the best estimator acts as a regression-type estimator that is at least as efficient as the corresponding estimator evaluated without the auxiliary variable. The general results are then applied to the Warner and Simmons models.
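For background, Warner's original device and its moment estimator of π_A can be sketched as follows (the auxiliary-variable estimator class proposed above is not implemented here; all figures are simulated):

```python
import numpy as np

rng = np.random.default_rng(5)

pi_A, p, n = 0.30, 0.70, 5000   # true prevalence, design probability, sample size

# Warner's device: with probability p the respondent answers the sensitive
# question truthfully, otherwise its complement; only "yes"/"no" is observed.
sensitive = rng.random(n) < pi_A
asked_direct = rng.random(n) < p
yes = np.where(asked_direct, sensitive, ~sensitive)

lam_hat = yes.mean()                                    # observed "yes" rate
pi_hat = (lam_hat + p - 1.0) / (2.0 * p - 1.0)          # Warner (1965) estimator
var_hat = lam_hat * (1.0 - lam_hat) / (n * (2.0 * p - 1.0) ** 2)
```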
87.
Output gap estimation based on a multivariate dynamic model (total citations: 2; self-citations: 0; other citations: 2)
Zhang Chengsi. 《统计研究》 (Statistical Research), 2009, 26(7): 27–33.
This paper applies the Beveridge–Nelson decomposition within a multivariate dynamic model system, estimated by Bayesian Gibbs sampling, to measure China's output gap from 1985Q1 to 2008Q2, and compares the result with those of traditional univariate methods in terms of statistical properties and predictive power for monetary-policy adjustment. The empirical results show that the statistical properties of the different output-gap measures differ, and that only the gap estimated from the multivariate system has significant predictive power for monetary policy. This indicates that the multivariate estimate more fully captures the interaction between output and related variables, carries richer information, and is therefore a more valuable reference for macroeconomic policy adjustment.
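A univariate illustration of the Beveridge–Nelson decomposition used above, assuming output growth follows an AR(1); the paper's multivariate system with Bayesian Gibbs estimation is not reproduced here, and all figures are simulated.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate log output whose growth dy follows an AR(1):
# dy_t = mu + phi * (dy_{t-1} - mu) + e_t.
T, mu, phi = 300, 0.02, 0.5
dy = np.empty(T)
dy[0] = mu
for t in range(1, T):
    dy[t] = mu + phi * (dy[t - 1] - mu) + rng.normal(0.0, 0.01)
y = np.cumsum(dy)

# Least-squares fit of the AR(1) growth equation.
x0, x1 = dy[:-1], dy[1:]
phi_hat = np.cov(x0, x1)[0, 1] / np.var(x0, ddof=1)
mu_hat = (x1.mean() - phi_hat * x0.mean()) / (1.0 - phi_hat)

# Beveridge-Nelson permanent component: current level plus all expected
# future growth deviations, which for an AR(1) is phi/(1-phi)*(dy_t - mu).
trend = y + (phi_hat / (1.0 - phi_hat)) * (dy - mu_hat)
gap = y - trend                    # the transitory component: the output gap
```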
88.
A census can never count 100% of the population. Many countries therefore conduct a post-enumeration survey after each census and use a dual-system estimator to obtain a separate estimate of the true national population, against which the census net omission rate is measured. China has carried out a post-enumeration survey after each of its censuses, but with two main shortcomings: the sampled units were not post-stratified, and no estimate of the "true national population" was produced. We recommend that the design of China's 2010 post-enumeration survey overcome these two shortcomings, determine the national sample size scientifically, and adopt two-step sampling, among other measures.
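The dual-system estimator mentioned above can be sketched with the classical two-list (Chandra Sekar–Deming) formula; the figures below are hypothetical, not census data.

```python
# Dual-system estimation: N1 people counted in the census, N2 in the
# post-enumeration survey, M matched in both lists. Assuming the two
# captures are independent, N_hat = N1 * N2 / M estimates the true
# population, and the census net undercount rate follows from it.
N1 = 9_500_000      # counted in the census (hypothetical)
N2 = 9_600_000      # counted in the post-enumeration survey (hypothetical)
M = 9_200_000       # matched in both lists (hypothetical)

N_hat = N1 * N2 / M
undercount_rate = (N_hat - N1) / N_hat
```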
89.
The randomized response technique (RRT) is an important tool, commonly used to avoid biased answers in surveys on sensitive issues while preserving respondents' privacy. In this paper, we introduce a data-collection method for surveys on sensitive issues that combines the unrelated-question RRT with a direct-question design. Direct questioning is used to obtain responses to a nonsensitive question related to the innocuous question of the unrelated-question RRT; these responses serve as additional information for improving the estimate of the prevalence of the sensitive behavior. Furthermore, we propose two new methods for estimating the proportion of respondents possessing the sensitive attribute under a missing-data setup: a weighted estimator and a weighted conditional likelihood estimator. The performance of our estimators is studied numerically and compared with that of an existing one; both proposed estimators are more efficient than Greenberg's estimator. We illustrate our methods using real data from a survey on illegal use of cable TV service in Taiwan.
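For background, the unrelated-question estimator that the proposed methods improve upon can be sketched as follows, assuming the innocuous-question prevalence π_Y is known (simulated data; this is not the paper's weighted or conditional likelihood estimator):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical design parameters: true sensitive prevalence pi_A, known
# innocuous prevalence pi_Y, probability p of drawing the sensitive question.
pi_A, pi_Y, p, n = 0.20, 0.40, 0.75, 4000

sensitive = rng.random(n) < pi_A
innocuous = rng.random(n) < pi_Y
pick_A = rng.random(n) < p
yes = np.where(pick_A, sensitive, innocuous)    # only "yes"/"no" is observed

lam_hat = yes.mean()                             # observed "yes" proportion
pi_A_hat = (lam_hat - (1.0 - p) * pi_Y) / p      # unrelated-question estimator
```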
90.
In this article, a new consistent estimator of Verma's entropy is introduced. We establish an entropy test based on this new information measure, namely the Verma Kullback–Leibler discrimination methodology. The results are used to construct goodness-of-fit tests for the normal and exponential distributions. Root mean square errors, critical values, and powers against selected alternatives are obtained by simulation, and the proposed test is compared with other tests.
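Entropy-based goodness-of-fit tests of this kind are typically built on spacing estimators of entropy; a sketch using Vasicek's classical Shannon-entropy estimator (not Verma's generalization) illustrates the normality-test idea.

```python
import numpy as np

def vasicek_entropy(x, m=None):
    """Vasicek's m-spacing estimator of Shannon entropy:
    mean of log(n * (x_(i+m) - x_(i-m)) / (2m)), with edge indices truncated."""
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))       # a common rule-of-thumb choice
    xs = np.sort(x)
    idx = np.arange(n)
    spacings = xs[np.minimum(idx + m, n - 1)] - xs[np.maximum(idx - m, 0)]
    return np.mean(np.log(n * spacings / (2.0 * m)))

rng = np.random.default_rng(8)
x = rng.normal(0.0, 1.0, 400)

# Normality test statistic exp(H_hat)/sigma_hat: among all distributions this
# ratio is maximized by the normal at sqrt(2*pi*e) ~ 4.13, so small values
# reject normality.
stat = np.exp(vasicek_entropy(x)) / x.std(ddof=1)
```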