91.
Robust tests for the common principal components model
When dealing with several populations, the common principal components (CPC) model assumes equal principal axes but different variances along them. In this paper, a robust log-likelihood ratio statistic for testing the null hypothesis of a CPC model against unrestricted scatter matrices is introduced. The proposal plugs robust scatter estimators into the classical log-likelihood ratio statistic. Using the same idea, a robust log-likelihood ratio statistic and a robust Wald-type statistic for testing proportionality against a CPC model are considered. Their asymptotic distributions under the null hypothesis and their partial influence functions are derived. A small simulation study compares the behavior of the classical and robust tests under normal and contaminated data.
92.
It is often the case that high-dimensional data consist of only a few informative components. Standard statistical modeling and estimation in such a situation are prone to inaccuracies due to overfitting, unless regularization methods are practiced. In the context of classification, we propose a class of regularization methods through shrinkage estimators. The shrinkage is based on variable selection coupled with conditional maximum likelihood. Using Stein's unbiased estimator of the risk, we derive an estimator for the optimal shrinkage method within a certain class. A comparison is given between the optimal shrinkage methods in a classification context and the optimal shrinkage method for estimating a mean vector under squared loss. The latter problem is extensively studied, but it seems that the results of those studies are not completely relevant for classification. We demonstrate and examine our method on simulated data and compare it to the feature annealed independence rule and Fisher's rule.
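The shrinkage idea described above can be illustrated with a minimal sketch. The code below is not the paper's SURE-derived rule: it hand-picks a soft-threshold level `delta` and applies a diagonal linear classifier to shrunken standardized mean differences; all function and variable names are hypothetical.

```python
import numpy as np

def shrunken_centroid_classifier(X0, X1, delta):
    """Diagonal linear rule with soft-thresholded (shrunken) mean differences.

    Illustrative only: the paper derives the shrinkage level via Stein's
    unbiased risk estimate; here `delta` is a hand-picked threshold.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    s = np.sqrt(0.5 * (X0.var(axis=0, ddof=1) + X1.var(axis=0, ddof=1)))
    d = (mu1 - mu0) / s                       # standardized mean difference
    d_shrunk = np.sign(d) * np.maximum(np.abs(d) - delta, 0.0)  # soft-threshold
    mid = 0.5 * (mu0 + mu1)

    def predict(X):
        # project centered, scaled observations onto the shrunken direction
        scores = ((X - mid) / s) @ d_shrunk
        return (scores > 0).astype(int)

    return predict

# Only 5 of 50 features carry signal, mimicking the sparse setting
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(100, 50))
X1 = rng.normal(0.0, 1.0, size=(100, 50))
X1[:, :5] += 3.0
predict = shrunken_centroid_classifier(X0, X1, delta=1.0)
acc = 0.5 * (1 - predict(X0)).mean() + 0.5 * predict(X1).mean()
```

Soft-thresholding zeroes out the ~45 noise coordinates while keeping the informative ones, so the rule avoids the overfitting of a full-dimensional classifier.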
93.
Huber's estimator has had a long-lasting impact, particularly on robust statistics. It is well known that, under certain conditions, Huber's estimator is asymptotically minimax. A moderate generalization in rederiving Huber's estimator shows that it is not the only choice. We develop an alternative asymptotic minimax estimator, which we name regression with stochastically bounded noise (RSBN). Simulations demonstrate that RSBN performs slightly better, although it is unclear how to justify this improvement theoretically. We propose two numerical solutions: an iterative solution based on the proximal point method, which is extremely easy to implement, and a solution using state-of-the-art nonlinear optimization software packages, e.g., SNOPT. A further contribution is the generalization of the variational approach, which should be useful in deriving asymptotic minimax estimators for other problems.
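For orientation, the classical Huber estimator that RSBN generalizes can be computed, in the location case, by iteratively reweighted least squares. This is a textbook sketch, not the RSBN estimator or the SNOPT-based solver from the paper; the tuning constant c = 1.345 is the conventional choice giving roughly 95% efficiency at the normal model.

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=200):
    """Huber M-estimate of location via iteratively reweighted least squares."""
    mu = np.median(x)                                      # robust start
    scale = np.median(np.abs(x - mu)) / 0.6745             # MAD scale estimate
    for _ in range(max_iter):
        r = (x - mu) / scale
        # Huber weights: 1 inside [-c, c], downweighted outside
        w = np.where(np.abs(r) <= c, 1.0, c / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# 5% gross outliers at 50 barely move the Huber estimate off the true value 5
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(5.0, 1.0, 95), rng.normal(50.0, 1.0, 5)])
mu_robust = huber_location(x)
```

The outliers receive weight roughly c·scale/|x − μ|, so their influence is bounded, which is the source of the minimax property the abstract refers to.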
94.
In this paper, under a nonparametric regression model, we introduce two families of robust procedures to estimate the regression function when missing data occur in the response. The first proposal is based on a local MM-functional applied to the conditional distribution function estimate adapted to the presence of missing data. The second proposal imputes the missing responses using the local MM-smoother based on the observed sample and then estimates the regression function with the completed sample. We show that the robust procedures considered are consistent and asymptotically normally distributed. A robust procedure to select the smoothing parameter is also discussed.
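The second (imputation-based) proposal can be sketched in its non-robust form: fit a local smoother on the observed pairs, impute the missing responses, then re-estimate from the completed sample. For brevity the sketch uses a classical Nadaraya-Watson smoother rather than the paper's local MM-smoother, and all names are illustrative.

```python
import numpy as np

def nw_smoother(x_train, y_train, x_eval, h):
    """Nadaraya-Watson kernel regression (Gaussian kernel, bandwidth h)."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(2)
n = 300
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, n)
miss = rng.random(n) < 0.3                  # ~30% of responses missing

# Step 1: impute missing responses from the smoother fit on observed pairs
y_imp = y.copy()
y_imp[miss] = nw_smoother(x[~miss], y[~miss], x[miss], h=0.05)

# Step 2: estimate the regression function from the completed sample
grid = np.linspace(0.1, 0.9, 9)
m_hat = nw_smoother(x, y_imp, grid, h=0.05)
```

Swapping the kernel average for a local MM-fit at each point would give the robust version the paper studies.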
95.
Output gap estimation based on a multivariate dynamic model
Zhang Chengsi (张成思), Statistical Research (《统计研究》), 2009, 26(7): 27-33
Using the Beveridge-Nelson decomposition within a multivariate dynamic model system and Bayesian Gibbs-sampling estimation, this paper estimates China's output gap from 1985 Q1 to 2008 Q2 and compares the results with those of traditional univariate methods, both in statistical properties and in predictive power for monetary-policy adjustment. The empirical results show that the statistical properties of the different output-gap estimates differ, and that only the gap estimated from the multivariate system has significant predictive power for monetary policy. This indicates that the multivariate estimate more fully captures the interaction between output and other related variables, carries richer information, and therefore provides a more valuable reference for macro-policy adjustment.
96.
The maximum likelihood estimator (MLE) and the likelihood ratio test (LRT) are considered for inference about the scale parameter of the exponential distribution under moving extreme ranked set sampling (MERSS). The MLE and LRT cannot be written in closed form. Therefore, a modification of the MLE using the technique suggested by Mehrotra and Nanda (Biometrika 61:601-606, 1974) is considered, and this modified estimator is used to modify the LRT to obtain a closed-form test of a simple hypothesis against one-sided alternatives. The same idea is used to modify the most powerful test (MPT) for a simple hypothesis versus a simple hypothesis, yielding another closed-form test of a simple hypothesis against one-sided alternatives. The modified estimator turns out to be a good competitor of the MLE, and the modified tests are good competitors of the LRT under both MERSS and simple random sampling (SRS).
97.
Randomized response techniques are widely employed in surveys dealing with sensitive questions to ensure interviewee anonymity, reduce nonresponse rates, and limit biased responses. Since Warner's (J Am Stat Assoc 60:63-69, 1965) pioneering work, many ingenious devices have been suggested to increase respondents' privacy protection and to better estimate the proportion of people, π_A, bearing a sensitive attribute. Despite the massive use of auxiliary information in the estimation of non-sensitive parameters, very few attempts have been made to improve randomization-strategy performance when auxiliary variables are available. Building on Zaizai's (Model Assist Stat Appl 1:125-130, 2006) recent work, in this paper we provide a class of estimators of π_A, for a generic randomization scheme, when the mean of a supplementary non-sensitive variable is known. The minimum attainable variance bound of the class is obtained and the best estimator is identified. We prove that the best estimator acts as a regression-type estimator that is at least as efficient as the corresponding estimator evaluated without the auxiliary variable. The general results are then applied to Warner's and Simmons' models.
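As background, Warner's original scheme admits a simple moment estimator of π_A: with design probability p ≠ 1/2, the expected proportion of "yes" answers is λ = pπ_A + (1 − p)(1 − π_A), which can be inverted. The simulation below is an illustrative sketch of that baseline (names hypothetical), not the regression-type class proposed in the paper.

```python
import numpy as np

def warner_estimate(yes, n, p):
    """Warner (1965) estimator of a sensitive proportion pi_A.

    Each respondent answers the sensitive question with probability p and
    its complement with probability 1 - p; requires p != 0.5.
    """
    lam_hat = yes / n                                   # proportion of "yes"
    pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)          # invert lambda(pi)
    var_hat = lam_hat * (1 - lam_hat) / (n * (2 * p - 1) ** 2)
    return pi_hat, var_hat

# Simulated survey: true pi_A = 0.30, design probability p = 0.7
rng = np.random.default_rng(3)
n, p, pi_A = 10_000, 0.7, 0.30
sensitive = rng.random(n) < pi_A
asked_direct = rng.random(n) < p          # which question the device selects
answers = np.where(asked_direct, sensitive, ~sensitive)
pi_hat, var_hat = warner_estimate(answers.sum(), n, p)
```

The variance inflation by the factor (2p − 1)^{-2} is the privacy cost that auxiliary-variable estimators, such as those in the paper, aim to offset.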
98.
A general method for exploring multivariate data by comparing different estimates of multivariate scatter is presented. The method is based on the eigenvalue-eigenvector decomposition of one scatter matrix relative to another. In particular, it is shown that the eigenvectors can be used to generate an affine invariant co-ordinate system for the multivariate data. Consequently, we view this method as a method for invariant co-ordinate selection. By plotting the data with respect to this new invariant co-ordinate system, various data structures can be revealed. For example, under certain independent components models, it is shown that the invariant co-ordinates correspond to the independent components. Another example pertains to mixtures of elliptical distributions. In this case, it is shown that a subset of the invariant co-ordinates corresponds to Fisher's linear discriminant subspace, even though the class identifications of the data points are unknown. Some illustrative examples are given.
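A minimal sketch of the decomposition: take S1 to be the sample covariance and S2 a fourth-moment scatter matrix (a common pairing in the invariant co-ordinate selection literature), and read the co-ordinates off the eigenvectors of S1^{-1} S2. This illustrates the idea only; it is not the authors' implementation, and any other pair of scatter functionals could be substituted.

```python
import numpy as np

def ics_coordinates(X):
    """Invariant co-ordinates from the eigendecomposition of S1^{-1} S2.

    S1 is the sample covariance; S2 is a fourth-moment ('cov4'-type)
    weighted covariance.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S1 = np.cov(Xc, rowvar=False)
    # squared Mahalanobis distances with respect to S1
    d2 = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S1), Xc)
    S2 = (Xc * d2[:, None]).T @ Xc / (n * (p + 2))
    vals, B = np.linalg.eig(np.linalg.solve(S1, S2))
    order = np.argsort(vals.real)[::-1]     # sort by generalized kurtosis
    return X @ B[:, order].real             # invariant co-ordinates Z = X B

# Mixture of two spherical Gaussians; cluster labels are unknown to the method
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 1.0, (180, 4)),
               rng.normal(4.0, 1.0, (20, 4))])
Z = ics_coordinates(X)
```

For such a mixture, the co-ordinates with extreme generalized kurtosis are the ones the abstract says recover Fisher's discriminant subspace.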
99.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, 'θ > θ1' or 'θ < θ1', where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case in which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
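A minimal sketch of the bagged constrained estimator, here for a mean with bound θ1 = 0: truncate each bootstrap replicate at the bound, then average. The function name and the choice of B are illustrative, not taken from the paper.

```python
import numpy as np

def bagged_constrained_mean(x, theta1, B=500, rng=None):
    """Bootstrap-aggregated estimate of a mean subject to theta > theta1.

    Each bootstrap replicate is truncated at theta1, then averaged; the
    result exceeds theta1 strictly unless every replicate hits the bound.
    """
    rng = rng or np.random.default_rng()
    n = len(x)
    reps = rng.choice(x, size=(B, n), replace=True).mean(axis=1)
    return np.maximum(reps, theta1).mean()

# True mean barely above the bound 0: the case where naive truncation is worst
rng = np.random.default_rng(4)
x = rng.normal(0.05, 1.0, 200)
naive = max(x.mean(), 0.0)
bagged = bagged_constrained_mean(x, 0.0, rng=rng)
```

Because some bootstrap replicates always land above the bound, the bagged estimate moves smoothly with the data instead of sitting exactly at θ1, which is the behavior the abstract describes.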