11.
Randomized response techniques are widely employed in surveys dealing with sensitive questions, to ensure interviewee anonymity and to reduce nonresponse rates and response bias. Since Warner’s (J Am Stat Assoc 60:63–69, 1965) pioneering work, many ingenious devices have been suggested to increase respondents’ privacy protection and to better estimate the proportion of people, π_A, bearing a sensitive attribute. Despite the massive use of auxiliary information in the estimation of non-sensitive parameters, very few attempts have been made to improve the performance of randomization strategies when auxiliary variables are available. Building on Zaizai’s (Model Assist Stat Appl 1:125–130, 2006) recent work, in this paper we provide a class of estimators for π_A, for a generic randomization scheme, when the mean of a supplementary non-sensitive variable is known. The minimum attainable variance bound of the class is obtained and the best estimator is identified. We prove that the best estimator acts as a regression-type estimator that is at least as efficient as the corresponding estimator evaluated without the auxiliary variable. The general results are then applied to Warner’s and Simmons’ models.
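As a concrete illustration of the basic randomization device (Warner's original scheme, not the paper's auxiliary-variable class of estimators), here is a minimal simulation sketch; the sample size, design probability, and true proportion below are all illustrative assumptions.

```python
import numpy as np

def warner_estimate(responses, p):
    """Unbiased Warner (1965) estimator of the sensitive proportion pi_A.

    Each respondent's randomizer asks the sensitive question with
    probability p and its complement with probability 1 - p (p != 0.5).
    responses: array of 0/1 "yes" answers.
    """
    lam_hat = np.mean(responses)                  # observed "yes" proportion
    return (lam_hat - (1.0 - p)) / (2.0 * p - 1.0)

# Simulated survey: true pi_A = 0.30, design probability p = 0.7.
rng = np.random.default_rng(0)
pi_A, p, n = 0.30, 0.7, 200_000
bears_trait = rng.random(n) < pi_A                # latent sensitive attribute
asks_direct = rng.random(n) < p                   # outcome of the randomizer
yes = np.where(asks_direct, bears_trait, ~bears_trait).astype(int)

print(round(warner_estimate(yes, p), 3))          # close to the true 0.30
```

The division by 2p − 1 is what undoes the deliberate response scrambling; the price is inflated variance relative to a direct survey.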
12.
Summary. Estimation of the number or proportion of true null hypotheses in multiple-testing problems has become an interesting area of research. The first important work in this field was performed by Schweder and Spjøtvoll, who, among other contributions, proposed using plug-in estimates of the proportion of true null hypotheses in multiple-test procedures to improve power. We investigate the problem of controlling the familywise error rate (FWER) when such estimators are used as plug-ins in single-step or step-down multiple-test procedures. First we consider the case of independent p-values under the null hypotheses and show that a suitable choice of plug-in estimate leads to FWER control in single-step procedures. We also investigate power and study the asymptotic behaviour of the number of false rejections. Although step-down procedures are more difficult to handle, we briefly consider a possible solution; plug-in step-down procedures, however, are not recommended here. For dependent p-values we derive a condition for asymptotic control of the FWER and provide simulations of FWER and power for various models and hypotheses.
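A hedged sketch of the plug-in idea in its simplest form (a Schweder–Spjøtvoll/Storey-type estimate of the null proportion substituted into single-step Bonferroni); this is a generic textbook version, not the specific procedures or conditions established in the paper, and the tuning value λ = 0.5 is an illustrative assumption.

```python
import numpy as np

def pi0_estimate(pvals, lam=0.5):
    """Plug-in estimate of the proportion of true nulls: p-values above
    lam should almost all come from nulls, so rescale their share by 1-lam."""
    return min(1.0, np.mean(np.asarray(pvals) > lam) / (1.0 - lam))

def adaptive_bonferroni(pvals, alpha=0.05, lam=0.5):
    """Single-step Bonferroni with m replaced by m0_hat = m * pi0_hat."""
    pvals = np.asarray(pvals)
    m0_hat = max(1.0, len(pvals) * pi0_estimate(pvals, lam))
    return pvals <= alpha / m0_hat                # boolean rejection vector

# Toy example: 90 uniform null p-values plus 10 strong signals.
rng = np.random.default_rng(1)
pvals = np.concatenate([rng.random(90), rng.random(10) * 1e-6])
print(adaptive_bonferroni(pvals).sum())
```

Because m0_hat ≤ m, the adaptive threshold is never smaller than the plain Bonferroni threshold, which is exactly where the power gain comes from.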
13.
It is often the case that high-dimensional data consist of only a few informative components. Standard statistical modeling and estimation in such a situation is prone to inaccuracies due to overfitting unless regularization methods are used. In the context of classification, we propose a class of regularization methods based on shrinkage estimators, where the shrinkage couples variable selection with conditional maximum likelihood. Using Stein's unbiased estimator of the risk, we derive the optimal shrinkage method within a certain class. We compare the optimal shrinkage methods in a classification context with the optimal shrinkage method for estimating a mean vector under squared loss. The latter problem has been studied extensively, but it appears that the results of those studies are not fully relevant for classification. We demonstrate and examine our method on simulated data and compare it to the feature annealed independence rule and Fisher's rule.
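For orientation, the classical mean-vector shrinkage problem the abstract contrasts with can be sketched with the positive-part James–Stein estimator; this is the standard squared-loss benchmark, not the paper's classification method, and the dimension and mean vector below are illustrative assumptions.

```python
import numpy as np

def james_stein(x):
    """Positive-part James-Stein shrinkage of one d-dimensional Gaussian
    observation toward 0 (unit variance, d >= 3)."""
    d = len(x)
    shrink = max(0.0, 1.0 - (d - 2) / np.dot(x, x))   # data-driven factor in [0, 1]
    return shrink * x

rng = np.random.default_rng(2)
theta = np.full(50, 0.5)                  # true mean vector (small signal)
x = theta + rng.standard_normal(50)       # one noisy observation

mle_err = np.sum((x - theta) ** 2)        # squared error of the raw MLE
js_err = np.sum((james_stein(x) - theta) ** 2)
print(js_err < mle_err)                   # shrinkage typically wins here
```

The shrinkage factor depends on all coordinates jointly, which is precisely why results for this squared-loss problem need not carry over to classification risk.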
14.
Huber's estimator has had a long lasting impact, particularly on robust statistics. It is well known that under certain conditions, Huber's estimator is asymptotically minimax. A moderate generalization in rederiving Huber's estimator shows that Huber's estimator is not the only choice. We develop an alternative asymptotic minimax estimator and name it regression with stochastically bounded noise (RSBN). Simulations demonstrate that RSBN is slightly better in performance, although it is unclear how to justify such an improvement theoretically. We propose two numerical solutions: an iterative numerical solution, which is extremely easy to implement and is based on the proximal point method; and a solution by applying state-of-the-art nonlinear optimization software packages, e.g., SNOPT. Contribution: the generalization of the variational approach is interesting and should be useful in deriving other asymptotic minimax estimators in other problems.  相似文献   
15.
For many diseases, logistic constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow‐up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age‐specific incidence rates, by exploiting the epidemiologic relationship (prevalence odds) = (incidence rate) × (mean duration), i.e., P/[1 − P] = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators of P and µ into P/[1 − P] = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
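The plug-in form of the estimator follows directly from rearranging the identity above; a minimal sketch with made-up numbers (not figures from the Canadian Study of Health and Ageing):

```python
def incidence_from_prevalence(P, mu):
    """Plug-in estimate of a constant incidence rate from the identity
    prevalence odds = incidence rate x mean duration:
        P / (1 - P) = lambda * mu   =>   lambda = P / ((1 - P) * mu)."""
    return P / ((1.0 - P) * mu)

# Illustrative inputs: prevalence 8%, mean disease duration 5 years.
lam = incidence_from_prevalence(0.08, 5.0)
print(round(lam, 4))   # 0.0174 cases per person-year
```

Substituting consistent estimators of P and µ on the right-hand side gives the natural estimator of λ that the abstract shows is also asymptotically most efficient.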
16.
In some statistical problems a degree of explicit prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, ‘θ > θ1’ or ‘θ < θ1’, where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case where θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
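A hedged sketch of the bagging idea for a constrained mean with bound θ1 = 0: average the naively constrained estimator over bootstrap resamples. This is a generic illustration under assumed settings (sample size, bound, replicate count), not the paper's precise construction.

```python
import numpy as np

def bagged_constrained_mean(x, theta1, n_boot=2000, rng=None):
    """Bootstrap-aggregated estimator under the constraint theta > theta1:
    apply the naive cutoff max(., theta1) to each bootstrap replicate of
    the sample mean, then average. The average exceeds theta1 strictly
    unless essentially every replicate lands on the bound."""
    rng = rng or np.random.default_rng()
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        reps[b] = max(np.mean(rng.choice(x, size=n, replace=True)), theta1)
    return reps.mean()

rng = np.random.default_rng(3)
x = rng.normal(loc=0.05, scale=1.0, size=100)   # true mean barely above the bound
naive = max(np.mean(x), 0.0)                    # can sit exactly on the bound
bagged = bagged_constrained_mean(x, 0.0, rng=rng)
print(naive, bagged)
```

Unlike the hard cutoff, the bagged value moves smoothly as the data are perturbed, because it averages over resamples on both sides of the bound.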
17.
The maximum likelihood estimator (MLE) and the likelihood ratio test (LRT) are considered for making inference about the scale parameter of the exponential distribution under moving extreme ranked set sampling (MERSS). Since the MLE and LRT cannot be written in closed form, a modification of the MLE using the technique suggested by Mehrotra and Nanda (Biometrika 61:601–606, 1974) is considered, and this modified estimator is used to modify the LRT to obtain a closed-form test of a simple hypothesis against one-sided alternatives. The same idea is used to modify the most powerful test (MPT) for a simple hypothesis versus a simple hypothesis, again yielding a closed-form test against one-sided alternatives. The modified estimator turns out to be a good competitor of the MLE, and the modified tests are good competitors of the LRT under both MERSS and simple random sampling (SRS).
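To make the sampling scheme concrete: in a MERSS cycle of size m, the i-th measurement is the maximum of a set of i independent draws. Since the MLE has no closed form (the motivation above), a hedged sketch simulates MERSS data and maximizes the exact log-likelihood numerically over a grid; the scale, set size, and cycle count are illustrative assumptions, and this is not the paper's modified estimator.

```python
import numpy as np

def simulate_merss(rng, theta, m, n_cycles):
    """One MERSS cycle = the maxima of sets of sizes 1..m from Exp(theta)."""
    return np.array([[rng.exponential(theta, size=i).max() for i in range(1, m + 1)]
                     for _ in range(n_cycles)])

def merss_loglik(theta, cycles):
    """Exact log-likelihood of MERSS maxima: the max of i iid Exp(theta)
    draws has density (i/theta) e^{-x/theta} (1 - e^{-x/theta})^{i-1}."""
    m = cycles.shape[1]
    i = np.arange(1, m + 1)                        # set sizes per column
    z = cycles / theta
    return np.sum(np.log(i) - np.log(theta) - z + (i - 1) * np.log1p(-np.exp(-z)))

rng = np.random.default_rng(5)
data = simulate_merss(rng, theta=2.0, m=4, n_cycles=500)
grid = np.linspace(0.5, 4.0, 2000)
theta_hat = grid[np.argmax([merss_loglik(t, data) for t in grid])]
print(round(theta_hat, 2))   # close to the true scale 2.0
```

The (1 − e^{−x/θ})^{i−1} factor is what prevents a closed-form solution of the score equation and motivates modified estimators.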
18.
Using revenue measures to explain excess stock returns is an important aspect of understanding "pricing anomalies." Working within the theoretical framework of post-earnings-announcement drift, and using data on Shanghai A-shares from 2008Q1 through 2011Q4, this paper applies a standardized unexpected revenue estimator (SURE) and a classification-test model to test empirically for post-revenue-announcement drift in stock prices during announcement windows in the Chinese stock market. The study finds that, within the earnings announcement window, unexpected revenue is negatively or insignificantly related to excess stock returns; that is, the post-revenue-announcement drift effect is not significant in the Chinese stock market. Subsequent robustness checks likewise confirm the negative or insignificant relationship, an anomaly that may be related to the weak-form efficiency of the Chinese stock market.
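A hedged sketch of a standardized-unexpected-revenue computation in the usual SUE style, under a seasonal random-walk expectation (this quarter's expected revenue equals revenue four quarters ago); the expectation model and the toy revenue series are illustrative assumptions, not the paper's exact SURE specification.

```python
import numpy as np

def standardized_unexpected_revenue(revenue):
    """Latest revenue surprise scaled by the historical standard deviation
    of surprises, where a surprise is the year-over-year (4-quarter) change."""
    revenue = np.asarray(revenue, dtype=float)
    surprises = revenue[4:] - revenue[:-4]            # seasonal random walk
    return surprises[-1] / np.std(surprises[:-1], ddof=1)

quarters = [100, 110, 105, 120, 108, 118, 112, 130, 125]   # 9 quarterly revenues
print(round(standardized_unexpected_revenue(quarters), 2))  # 13.51
```

Sorting stocks into portfolios by such a standardized surprise and tracking post-announcement returns is the standard classification-test design for drift studies.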
19.
This paper focuses on a novel method of developing one-sample confidence bands for survival functions from right-censored data. The approach is model-based, relying on a parametric model for the conditional expectation of the censoring indicator given the observed minimum, and derives its strength from easy access to a good-fitting model among the plethora of choices available for binary response data. The substantive methodological contribution is in exploiting a semiparametric estimator of the survival function to produce improved simultaneous confidence bands. To obtain critical values for computing the bands, a two-stage bootstrap approach that combines the classical bootstrap with the more recent model-based regeneration of censoring indicators is proposed, and a justification of its asymptotic validity is provided. Several different confidence bands are studied using the proposed approach. Numerical studies, including the robustness of the proposed bands to misspecification, are carried out to check efficacy. The method is illustrated using two lung cancer data sets.
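For context, a hedged baseline sketch: a Kaplan–Meier estimate with a pointwise percentile band from the classical bootstrap alone. This is a simplified stand-in for the paper's simultaneous two-stage bands (no model-based regeneration of censoring indicators); the data-generating choices below are illustrative assumptions.

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier estimate: event times and survival just after each
    (event = 1 for a failure, 0 for censoring, which only shrinks the risk set)."""
    order = np.argsort(time, kind="stable")
    t, d = np.asarray(time)[order], np.asarray(event)[order]
    s, at_risk, times, surv = 1.0, len(t), [], []
    for ti, di in zip(t, d):
        if di == 1:
            s *= 1.0 - 1.0 / at_risk
            times.append(ti)
            surv.append(s)
        at_risk -= 1
    return np.array(times), np.array(surv)

def pointwise_boot_band(time, event, grid, alpha=0.05, B=400, rng=None):
    """Pointwise (1 - alpha) percentile band via the classical bootstrap."""
    rng = rng or np.random.default_rng()
    n = len(time)
    curves = np.empty((B, len(grid)))
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        tt, ss = km_curve(time[idx], event[idx])
        # evaluate the right-continuous step function on the grid
        curves[b] = np.concatenate(([1.0], ss))[np.searchsorted(tt, grid, side="right")]
    return np.quantile(curves, alpha / 2, axis=0), np.quantile(curves, 1 - alpha / 2, axis=0)

rng = np.random.default_rng(6)
n = 200
t_event = rng.exponential(1.0, n)
t_cens = rng.exponential(2.0, n)                 # independent right censoring
time = np.minimum(t_event, t_cens)               # observed minimum
event = (t_event <= t_cens).astype(int)          # censoring indicator
grid = np.linspace(0.1, 2.0, 20)
lo, hi = pointwise_boot_band(time, event, grid, rng=rng)
```

Pointwise bands like this undercover as simultaneous statements, which is precisely the gap the paper's calibrated simultaneous bands address.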
20.
In regression, detecting anomalous observations is a significant step in the model-building process. Various influence measures, based on different motivational arguments, are designed to measure the influence of observations through different aspects of various regression models. The presence of influential observations in the data is complicated by the existence of multicollinearity. The purpose of this paper is to assess the influence of observations on the Liu [9] and modified Liu [15] estimators using the approximate case-deletion formulas suggested by Walker and Birch [14]. A numerical example using the real data set of Longley [10] and a Monte Carlo simulation illustrate the theoretical results.
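The leverage and residual ingredients behind approximate case-deletion diagnostics can be sketched with Cook's distance for ordinary least squares; this is the classical OLS diagnostic, not the Liu-estimator formulas of the paper, and the planted-outlier data are illustrative assumptions.

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's distance for OLS: combines the squared residual with the
    leverage h_i to approximate the effect of deleting case i."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
    h = np.diag(H)                                # leverages
    resid = y - H @ y
    s2 = resid @ resid / (n - p)                  # residual variance estimate
    return resid**2 * h / (p * s2 * (1 - h) ** 2)

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(20), rng.random(20)])
y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.1, size=20)
y[0] += 5.0                                       # plant one influential point
print(np.argmax(cooks_distance(X, y)))            # index 0 stands out
```

Replacing the hat matrix with its Liu-type counterpart is, in spirit, how such deletion formulas extend to biased ridge-family estimators.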