61.
In this paper, we introduce and study a class of distributions that has a linear mean residual quantile function. Various distributional properties and reliability characteristics of the class are studied, and some characterizations of the class are presented. We then generalize the class using the relationships between various quantile-based reliability measures. The method of L-moments is employed to estimate the parameters of the class, and finally the proposed class of distributions is applied to a real data set.
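To illustrate the estimation step named in the abstract, here is a minimal sketch (not the paper's own code) of the first four sample L-moments computed from probability-weighted moments; matching these to their population counterparts under the chosen parameterization, which the abstract does not spell out, would yield the parameter estimates.

```python
import numpy as np

def sample_l_moments(x):
    """First four sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    # Hosking's conversion from probability-weighted moments to L-moments
    return b0, 2*b1 - b0, 6*b2 - 6*b1 + b0, 20*b3 - 30*b2 + 12*b1 - b0
```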
62.
We consider the problem of density estimation when the data arrive as a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation, such as kernel density estimation, are problematic. We propose a method of density estimation for massive data sets based on taking the derivative of a smooth curve fitted through a set of quantile estimates. To achieve this, a low-storage, single-pass, sequential method is proposed for the simultaneous estimation of multiple quantiles of massive data sets; these estimates form the basis of the density estimator. For comparison, we also consider a sequential kernel density estimator. Simulation studies show that the proposed methods perform well and have several distinct advantages over existing methods.
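The abstract does not reproduce the paper's actual algorithm; the sketch below shows the general idea under stated assumptions: a generic Robbins-Monro quantile tracker (single pass, storage proportional to the number of tracked quantiles) followed by differentiating a smooth monotone fit of the CDF. The function names and gain constant c are illustrative.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def stream_quantiles(stream, probs, c=1.0):
    """Single-pass, low-storage quantile tracker (Robbins-Monro updates)."""
    probs = np.asarray(probs, dtype=float)
    q = None
    for n, x in enumerate(stream, start=1):
        if q is None:
            q = np.full(probs.shape, float(x))   # crude initialization at x_1
            continue
        q += (c / n) * (probs - (x <= q))        # stochastic-approximation step
    return q

def density_from_quantiles(q, probs, grid):
    """Density as the derivative of a smooth monotone curve (here a PCHIP
    fit of the CDF) through the (quantile, probability) pairs."""
    F_hat = PchipInterpolator(np.sort(q), probs)  # abscissae must be increasing
    return F_hat.derivative()(grid)               # valid between min(q) and max(q)
```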
63.
In two observational studies, one investigating the effects of minimum wage laws on employment and the other the effects of exposure to lead, we examine an estimated treatment effect's sensitivity to hidden bias. The estimate uses the combined quantile averages introduced in 1981 by B. M. Brown as simple, efficient, robust estimates of location that admit both exact and approximate confidence intervals and significance tests. Closely related to Gastwirth's estimate and Tukey's trimean, the combined quantile average has asymptotic efficiency for normal data comparable with that of a 15% trimmed mean and higher than that of the trimean, while its resistance to extreme observations, or breakdown, is comparable with that of the trimean and better than that of the 15% trimmed mean. Combined quantile averages provide consistent estimates of an additive treatment effect in a matched randomized experiment. We discuss sensitivity analyses for combined quantile averages used in a matched observational study in which treatments are not randomly assigned. In such a sensitivity analysis, subjects are assumed to differ with respect to an unobserved covariate that was not adequately controlled by the matching, so that treatments are assigned within pairs with probabilities that are unequal and unknown. The sensitivity analysis proposed here uses significance levels, point estimates and confidence intervals based on combined quantile averages and examines how these inferences change under a range of assumptions about biases due to an unobserved covariate. The procedures are applied to the studies of minimum wage laws and exposure to lead; the first example also illustrates sensitivity analysis with an instrumental variable.
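For orientation, a weighted average of sample quantiles is easy to compute; the sketch below shows the trimean and Gastwirth estimates named in the abstract. Brown's combined quantile average uses its own probabilities and weights, which the abstract does not give, so they are not reproduced here.

```python
import numpy as np

def quantile_average(x, probs, weights):
    """Location estimate formed as a weighted average of sample quantiles."""
    return float(np.dot(weights, np.quantile(x, probs)))

x = np.random.default_rng(0).standard_normal(500)
trimean   = quantile_average(x, [0.25, 0.50, 0.75], [0.25, 0.50, 0.25])
gastwirth = quantile_average(x, [1/3, 1/2, 2/3],    [0.30, 0.40, 0.30])
```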
64.
We propose kernel density estimators based on prebinned data, using generalized binning schemes whose bin edges are the quantile points of an auxiliary distribution function; taking the uniform distribution recovers the usual equal-width binning. The statistical accuracy of the resulting kernel estimators is studied: we derive mean squared error results for the closeness of these estimators both to the true density and to the kernel estimator based on the original data set. Our results show the influence of the choice of auxiliary density on the binned kernel estimators and reveal that non-uniform binning can be worthwhile.
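A minimal sketch of the scheme described, under stated assumptions: the auxiliary distribution g is a scipy.stats frozen distribution supported on [0, 1], the kernel is Gaussian, and the bandwidth bw is fixed by hand; all names are illustrative, not the paper's notation.

```python
import numpy as np
from scipy import stats

def binned_kde(x, grid, g, n_bins=50, bw=0.3):
    """Gaussian KDE computed from data prebinned at the quantile points of
    an auxiliary distribution g on [0, 1] (e.g. stats.beta(2, 2));
    g = stats.uniform() recovers ordinary equal-width binning."""
    lo, hi = np.min(x), np.max(x)
    edges = lo + (hi - lo) * g.ppf(np.linspace(0.0, 1.0, n_bins + 1))
    counts, _ = np.histogram(x, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = counts / counts.sum()
    # kernel sum over weighted bin centres in place of the raw observations
    return (w * stats.norm.pdf((grid[:, None] - centers) / bw)).sum(axis=1) / bw
```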
65.
As a result of lessons learnt from the 1991 census, a research programme was set up to seek improvements in census methodology. Underenumeration has been placed at the top of the agenda in this programme, and every effort is being made to achieve as high a coverage as possible in the 2001 census. Recognizing, however, that 100% coverage will never be achieved, the one-number census (ONC) project was established to measure the degree of underenumeration in the 2001 census and, if possible, to adjust the census outputs fully for that undercount. A key component of this adjustment process is a census coverage survey (CCS). This paper presents an overview of the ONC project, focusing on the design and analysis methodology for the CCS, and presents results that allow the reader to evaluate the robustness of this methodology.
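The abstract does not name the estimator behind the adjustment; coverage surveys of this kind are commonly analysed with dual-system (capture-recapture) estimation, so the sketch below assumes that approach, with wholly hypothetical counts for a single CCS area.

```python
def dual_system_estimate(n_census, n_survey, n_matched):
    """Lincoln-Petersen dual-system estimate of the true population size."""
    return n_census * n_survey / n_matched

# hypothetical counts for one area, for illustration only
N_hat = dual_system_estimate(n_census=9500, n_survey=400, n_matched=360)
undercount = 1.0 - 9500 / N_hat   # estimated underenumeration rate (about 10%)
```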
66.
Contamination of a sampled distribution, for example by a heavy-tailed distribution, can degrade the performance of a statistical estimator. We suggest a general approach to alleviating this problem, using a version of the weighted bootstrap. The idea is to 'tilt' away from the contaminated distribution by a given (but arbitrary) amount, in a direction that minimizes a measure of the new distribution's dispersion. This theoretical proposal has a simple empirical version, which results in each data value being assigned a weight according to an assessment of its influence on dispersion. Importantly, distance can be measured directly in terms of the likely level of contamination, without reference to an empirical measure of scale, which makes the procedure particularly attractive for use in multivariate problems. The method has several forms, depending on the definitions taken for dispersion and for the distance between distributions. Examples of dispersion measures include the variance and generalizations based on higher-order moments; practicable measures of the distance between distributions may be based on power divergence, which includes the Hellinger and Kullback-Leibler distances. The resulting location estimator has a smooth, redescending influence curve and appears to avoid the computational difficulties typically associated with redescending estimators. Its breakdown point can be located at any desired value ε ∈ (0, 1/2) simply by 'trimming' to a known distance (depending only on ε and the choice of distance measure) from the empirical distribution. The estimator has an affine-equivariant multivariate form, and the general method is applicable to a range of statistical problems, including regression.
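The paper allows general dispersion measures and power-divergence distances; the sketch below fixes one instance for concreteness, assuming variance as the dispersion and Kullback-Leibler divergence as the distance, and solves the tilting problem numerically. Function names and the bound rho are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def tilted_weights(x, rho):
    """Data weights minimizing the weighted variance, subject to a bound rho
    on the Kullback-Leibler distance from the uniform weights 1/n."""
    n = len(x)
    dispersion = lambda w: w @ (x - w @ x) ** 2          # weighted variance
    cons = ({"type": "eq",   "fun": lambda w: w.sum() - 1.0},
            {"type": "ineq", "fun": lambda w: rho - np.sum(w * np.log(n * w))})
    res = minimize(dispersion, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(1e-10, 1.0)] * n, constraints=cons)
    return res.x

rng = np.random.default_rng(0)
x = np.append(rng.normal(size=95), rng.normal(8.0, 1.0, size=5))  # 5% contamination
w = tilted_weights(x, rho=0.1)
location = w @ x   # tilted location estimate, with outliers downweighted
```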
67.
68.
I consider nonparametric identification of nonseparable instrumental variables models with continuous endogenous variables. If both the outcome and first-stage equations are strictly increasing in a scalar unobservable, then many kinds of continuous, discrete, and even binary instruments can be used to point-identify the levels of the outcome equation. This contrasts sharply with related work by Imbens and Newey (2009), which requires continuous instruments with large support. One implication is that assumptions about the dimension of heterogeneity can provide nonparametric point identification of the distribution of treatment response for a continuous treatment in a randomized controlled experiment with partial compliance.
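For the reader's orientation, a minimal statement of the model structure the abstract describes; the uniform normalizations and instrument independence are the standard assumptions in this literature, added here for clarity rather than quoted from the paper.

```latex
% Nonseparable IV model with scalar unobservables in both equations
\begin{align*}
  Y &= g(X,\varepsilon), && g(x,\cdot)\ \text{strictly increasing},\\
  X &= h(Z,\eta),        && h(z,\cdot)\ \text{strictly increasing},\\
  &\varepsilon,\ \eta \sim \mathrm{U}(0,1), && Z \perp (\varepsilon,\eta).
\end{align*}
```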
69.
This paper is the first to apply the Elastic Net, a penalization method designed for highly correlated variables, to Bayesian quantile regression for panel data. Based on the asymmetric Laplace prior distribution, we derive the posterior distributions of all parameters and construct a Gibbs sampler. To validate the model, the Bayesian Elastic Net quantile regression for panel data (BQR.EN) is compared comprehensively, under a variety of settings, with Bayesian quantile regression (BQR), Bayesian Lasso quantile regression (BLQR), and Bayesian adaptive Lasso quantile regression (BALQR) for panel data. The results show that BQR.EN is well suited to data that are highly correlated, high dimensional, and heavy tailed with sharp peaks. Further simulations under different error-term assumptions and sample sizes confirm the robustness and small-sample properties of the new method. Finally, we take the economic value added (EVA) of listed internet-finance companies as an empirical application to test the new method's parameter estimation and variable selection in a real problem; the empirical results are in line with expectations.
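The paper samples the full posterior by Gibbs, which is too long to sketch here; the following shows only the posterior-mode-style objective that the setup implies, assuming the check loss as the asymmetric Laplace likelihood kernel and an L1 + L2 (elastic net) penalty. Names and penalty weights lam1, lam2 are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile check loss, the negative log-likelihood kernel of the
    asymmetric Laplace distribution used in Bayesian quantile regression."""
    return np.sum(u * (tau - (u < 0)))

def elastic_net_qr(X, y, tau, lam1, lam2):
    """MAP-style elastic-net quantile regression: check loss plus the
    L1 + L2 penalty that the elastic-net prior induces at the mode."""
    obj = lambda b: (check_loss(y - X @ b, tau)
                     + lam1 * np.abs(b).sum() + lam2 * (b ** 2).sum())
    return minimize(obj, np.zeros(X.shape[1]), method="Powell").x
```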
70.
In wage-gap decomposition, researchers frequently encounter sample selection bias; ignoring it leads to severely biased estimates. Among the many decomposition methods, researchers tend to prefer distributional decompositions over mean decompositions. For parametric quantile regression, this paper is the first to propose sample-selection parametric quantile regression (SSPQR) models in both additive and non-additive forms and, based on these two models, gives a parametric quantile-regression decomposition of the wage-gap distribution that corrects for sample selection bias. Applying these methods, together with existing wage-distribution decomposition methods, to the 2015 urban data of the CHNS, we study the wage gap between urban men and women in China and its decomposition, and reach the following conclusions: (1) the main source of the gender wage gap is gender discrimination; (2) after correcting for sample selection bias, the actual wage gap is larger and the discrimination problem more serious; (3) the extent of the gap differs across quantiles; in other words, the gap cannot be judged simply from its average level; (4) compared with results from other existing methods, the wage gap computed by SSPQR is larger.
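To make the decomposition concrete, the sketch below shows a standard Oaxaca-style split evaluated quantile by quantile, which is the uncorrected baseline such methods build on; the paper's SSPQR additionally corrects for sample selection, which this sketch omits. Function names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def quantile_gap_decomposition(Xm, ym, Xf, yf, tau):
    """Oaxaca-style split of the male-female wage gap at quantile tau into a
    characteristics (composition) part and a coefficients part, the latter
    read as the discrimination component."""
    bm = sm.QuantReg(ym, Xm).fit(q=tau).params
    bf = sm.QuantReg(yf, Xf).fit(q=tau).params
    xm, xf = Xm.mean(axis=0), Xf.mean(axis=0)
    return (xm - xf) @ bm, xf @ (bm - bf)   # (composition, discrimination)
```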