  Paid full text   4,708 articles
  Free   167 articles
  Free (domestic access)   61 articles
Management   404 articles
Labor Science   3 articles
Ethnology   37 articles
Talent Studies   1 article
Demography   87 articles
Collected Works   396 articles
Theory and Methodology   145 articles
General   3,245 articles
Sociology   130 articles
Statistics   488 articles
  2024   29 articles
  2023   53 articles
  2022   96 articles
  2021   97 articles
  2020   129 articles
  2019   116 articles
  2018   119 articles
  2017   118 articles
  2016   135 articles
  2015   131 articles
  2014   250 articles
  2013   273 articles
  2012   305 articles
  2011   336 articles
  2010   239 articles
  2009   236 articles
  2008   277 articles
  2007   314 articles
  2006   294 articles
  2005   255 articles
  2004   238 articles
  2003   244 articles
  2002   183 articles
  2001   169 articles
  2000   93 articles
  1999   34 articles
  1998   22 articles
  1997   19 articles
  1996   16 articles
  1995   28 articles
  1994   15 articles
  1993   15 articles
  1992   14 articles
  1991   10 articles
  1990   9 articles
  1989   12 articles
  1988   4 articles
  1987   4 articles
  1985   2 articles
  1984   1 article
  1982   1 article
  1977   1 article
A total of 4,936 results were retrieved.
11.
A Decomposition Analysis of Changes in China's Production-Related Energy Consumption   Cited 28 times (self-citations: 0, citations by others: 28)
高振宇  王益 《统计研究》2007,24(3):52-57
Abstract: Energy consumption decomposition is a widely used method for examining the factors behind changes in energy consumption. This paper introduces the Logarithmic Mean Divisia Index (LMDI) method, one of the more soundly grounded decomposition methods in current research, and applies it to China's production-related energy consumption since the Sixth Five-Year Plan period, examining how changes in industrial structure and efficiency gains within industries have affected energy consumption and aggregate energy intensity. Based on the results, the authors conclude that improvements in within-industry energy efficiency have been the main source of China's energy savings, and they further recommend that the government build an "energy decomposition index system" as a basis for formulating energy policy.
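The abstract does not reproduce the underlying data, but the additive LMDI decomposition it refers to is standard and easy to illustrate. Below is a minimal Python sketch, using made-up sectoral figures rather than the paper's series, that splits a change in total energy use into activity, structure, and within-sector intensity effects.

```python
import numpy as np

def log_mean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

# Illustrative figures only (not from the paper): sectoral output and energy use
# in a base year (0) and a comparison year (T).
Q0, QT = np.array([100.0, 60.0, 40.0]), np.array([150.0, 100.0, 50.0])   # output by sector
E0, ET = np.array([80.0, 30.0, 10.0]), np.array([100.0, 40.0, 12.0])     # energy by sector

S0, ST = Q0 / Q0.sum(), QT / QT.sum()      # structure: sectoral output shares
I0, IT = E0 / Q0, ET / QT                  # intensity: energy per unit of output
w = log_mean(ET, E0)                       # LMDI weights

activity  = (w * np.log(QT.sum() / Q0.sum())).sum()   # scale of total output
structure = (w * np.log(ST / S0)).sum()                # shifts between sectors
intensity = (w * np.log(IT / I0)).sum()                # within-sector efficiency

print(f"total change : {ET.sum() - E0.sum():8.2f}")
print(f"activity     : {activity:8.2f}")
print(f"structure    : {structure:8.2f}")
print(f"intensity    : {intensity:8.2f}")
# The three effects sum (exactly, up to rounding) to the total change, which is
# the property that makes the LMDI decomposition attractive.
```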
12.
高梦滔 《统计研究》2007,24(9):69-76
Based on micro data covering 7,949 households in three cities in western China, this paper uses an endogenous treatment-effects model to estimate the return to higher-education investment for urban youth aged 20-35. The empirical results show: (1) the internal rate of return to higher education for young people is currently around 7%; the present value of the income gain over a 30-year working life is roughly 80,000 yuan; higher education raises expected monthly income by 80% on average and raises the probability of employment by about 8%; (2) viewed by gender, the return to higher education is higher for women than for men, with an internal rate of return of about 8.3% for young women (7.6% for young men), a 15.9% increase in the probability of employment (4% for men), and a 122% increase in expected monthly wages (67.5% for men); (3) if the total cost of higher education rose to an average of 60,000 yuan, then in purely cash-flow terms higher education would no longer add income value.
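The cash-flow figures in the abstract (a present value over a 30-year working life and an internal rate of return near 7%) come from the paper's micro data. The short sketch below only illustrates the arithmetic, with assumed values for the monthly income gain, discount rate, and education cost, so its output will not match the paper's numbers.

```python
import numpy as np

# Assumed, illustrative values (not taken from the paper's micro data).
monthly_gain = 500.0          # extra income per month attributed to higher education, yuan
years = 30                    # working life used in the abstract's present-value calculation
annual_rate = 0.05            # discount-rate assumption
upfront_cost = 30_000.0       # total cost of higher education, yuan

# Present value of the income gain, discounted monthly.
r_m = annual_rate / 12
months = np.arange(1, years * 12 + 1)
pv_gain = (monthly_gain / (1 + r_m) ** months).sum()
print(f"present value of income gain: {pv_gain:,.0f} yuan")

# Internal rate of return: the annual rate at which the discounted gain
# just equals the upfront cost (found by simple bisection).
def npv(rate):
    return (monthly_gain / (1 + rate / 12) ** months).sum() - upfront_cost

lo, hi = 1e-6, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
print(f"implied internal rate of return: {mid:.1%} per year")
```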
13.
Book prefaces play an important role in the dissemination of literary works. An examination of prefaces to Northern Song collections of poetry and prose shows that, in choosing which factors to use to promote a work, preface writers attended less to textual factors such as artistic merit and informational content than to non-textual factors such as the author's status and biographical anecdotes. There are two reasons. From the preface writer's operational standpoint, literary criticism is difficult to handle well within a preface; from the standpoint of the effect on readers, non-textual factors produce a more pronounced dissemination effect than commentary tied to the text itself. Non-textual factors thus became an invisible hand in literary dissemination: although they are not a lasting route of dissemination, they often played a remarkable role in it, lending vitality to the book preface as a genre.
14.
Estimated associations between an outcome variable and misclassified covariates tend to be biased when methods of estimation that ignore the classification error are applied. Available methods to account for misclassification often require a validation sample (i.e. a gold standard). In practice, however, such a gold standard may be unavailable or impractical. We propose a Bayesian approach to adjust for misclassification of a binary covariate in a random-effects logistic model when a gold standard is not available. This Markov chain Monte Carlo (MCMC) approach uses two imperfect measures of a dichotomous exposure under the assumptions of conditional independence and non-differential misclassification. A simulated numerical example and a real clinical example illustrate the proposed approach. Our results suggest that the estimated log odds of inpatient care and the corresponding standard deviation are much larger under the proposed method than under models that ignore misclassification. Ignoring misclassification produces downwardly biased estimates and underestimates uncertainty.
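The abstract describes the model only in outline. A minimal PyMC sketch of the general idea, two imperfect binary measures of a latent exposure feeding a random-intercept logistic model, might look as follows; the priors, simulated data, and variable names are our own illustrative assumptions, not the authors' specification.

```python
import numpy as np
import arviz as az
import pymc as pm

rng = np.random.default_rng(1)
n, n_clusters = 400, 40
cluster = rng.integers(0, n_clusters, n)

# Simulated data: latent true exposure x, two imperfect measures w1/w2, outcome y.
x = rng.binomial(1, 0.3, n)
sens_true, spec_true = 0.85, 0.90
w1 = np.where(x == 1, rng.binomial(1, sens_true, n), rng.binomial(1, 1 - spec_true, n))
w2 = np.where(x == 1, rng.binomial(1, sens_true, n), rng.binomial(1, 1 - spec_true, n))
u_true = rng.normal(0, 0.5, n_clusters)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.2 * x + u_true[cluster]))))

with pm.Model():
    prev = pm.Beta("prev", 1, 1)                       # exposure prevalence
    sens = pm.Beta("sens", 2, 1)                       # sensitivity of both measures
    spec = pm.Beta("spec", 2, 1)                       # specificity of both measures
    x_true = pm.Bernoulli("x_true", p=prev, shape=n)   # latent true exposure

    # Conditional independence and non-differential misclassification.
    p_pos = x_true * sens + (1 - x_true) * (1 - spec)
    pm.Bernoulli("w1_obs", p=p_pos, observed=w1)
    pm.Bernoulli("w2_obs", p=p_pos, observed=w2)

    # Random-intercept logistic model for the outcome.
    sigma_u = pm.HalfNormal("sigma_u", 1.0)
    u = pm.Normal("u", 0.0, sigma_u, shape=n_clusters)
    b0 = pm.Normal("b0", 0.0, 2.0)
    b1 = pm.Normal("b1", 0.0, 2.0)                     # adjusted log odds for the exposure
    pm.Bernoulli("y_obs", p=pm.math.invlogit(b0 + b1 * x_true + u[cluster]), observed=y)

    idata = pm.sample(1000, tune=1000, chains=2)

print(az.summary(idata, var_names=["b1", "sens", "spec", "prev"]))
```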
15.
This paper describes an evaluation of two UK Government programmes for the long-term unemployed in Great Britain, Employment Training and Employment Action, using discrete-time hazard modelling of event histories. The study design employed a closely matched comparison group and carefully chosen control variables to minimize the effect of selection bias on the conclusions. The effect of unobserved heterogeneity is investigated using standard random-effects model formulations.
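A common way to fit a discrete-time hazard model of this kind is to expand each spell into person-period records and fit a binary regression with period effects as the baseline hazard. The sketch below uses made-up spell data and a logit link to show the mechanics; the programme indicator and all parameter values are illustrative, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Illustrative spell data: months until leaving unemployment (capped at 12),
# whether an exit was observed, and a programme-participation indicator.
n = 500
programme = rng.binomial(1, 0.5, n)
duration = np.minimum(rng.geometric(0.10 + 0.04 * programme), 12)
exited = (duration < 12) | (rng.random(n) < 0.5)

# Expand to person-period format: one row per person per month at risk.
rows = []
for i in range(n):
    for t in range(1, duration[i] + 1):
        rows.append({"id": i, "period": t, "programme": programme[i],
                     "event": int((t == duration[i]) and exited[i])})
pp = pd.DataFrame(rows)

# Discrete-time hazard model: logit link with period dummies as the baseline hazard.
model = smf.glm("event ~ C(period) + programme", data=pp,
                family=sm.families.Binomial())
res = model.fit()
print(res.summary())
print("odds ratio for programme participation:",
      float(np.exp(res.params["programme"])))
```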
16.
One of the main advantages of factorial experiments is the information that they can offer on interactions. When there are many factors to be studied, some or all of this information is often sacrificed to keep the size of an experiment economically feasible. Two strategies for group screening are presented for a large number of factors, over two stages of experimentation, with particular emphasis on the detection of interactions. One approach estimates only main effects at the first stage (classical group screening), whereas the other new method (interaction group screening) estimates both main effects and key two-factor interactions at the first stage. Three criteria are used to guide the choice of screening technique, and also the size of the groups of factors for study in the first-stage experiment. The criteria seek to minimize the expected total number of observations in the experiment, the probability that the size of the experiment exceeds a prespecified target and the proportion of active individual factorial effects which are not detected. To implement these criteria, results are derived on the relationship between the grouped and individual factorial effects, and the probability distributions of the numbers of grouped factors whose main effects or interactions are declared active at the first stage. Examples are used to illustrate the methodology, and some issues and open questions for the practical implementation of the results are discussed.
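As a rough back-of-envelope illustration of the first criterion, the expected total number of observations, the sketch below evaluates classical two-stage group screening under strong simplifying assumptions of our own: factors are active independently with a common probability, stage 1 needs roughly one run more than the number of groups, and every factor in a group declared active is re-examined individually at stage 2. These assumptions are not the paper's precise criteria.

```python
import numpy as np

def expected_runs(n_factors, group_size, p_active):
    """Expected total runs for classical two-stage group screening under
    simplified assumptions (independent, equally likely active factors,
    near-saturated two-level designs at both stages)."""
    g = int(np.ceil(n_factors / group_size))            # number of groups at stage 1
    stage1 = g + 1                                       # runs for group main effects
    p_group_active = 1 - (1 - p_active) ** group_size    # a group looks active if any member is
    expected_active_groups = g * p_group_active
    stage2 = expected_active_groups * group_size + 1     # individual follow-up runs
    return stage1 + stage2

# Compare candidate group sizes for 30 factors, each active with probability 0.1.
for k in (2, 3, 5, 6, 10):
    print(f"group size {k:2d}: expected runs ~ {expected_runs(30, k, 0.1):5.1f}")
```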
17.
18.
Modeling non-normally distributed data with random effects is the major challenge in analyzing binomial data from split-plot designs. Seven methods for analyzing such data using mixed, generalized linear, or generalized linear mixed models are compared in terms of the size and power of the tests. This study shows that handling the random effects properly is more important than adjusting the analysis for non-normality. Methods based on mixed and generalized linear mixed models hold Type I error rates better than generalized linear models. Mixed-model methods tend to have higher power than generalized linear mixed models when the sample size is small.
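A generalized linear mixed model of the kind compared in the paper can be sketched with a whole-plot random intercept. The statsmodels variational fit below is one way to do this in Python; the simulated split-plot layout and factor names are illustrative assumptions, and the sketch does not reproduce the paper's seven-method comparison.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(42)

# Illustrative split-plot layout: whole-plot factor A applied to whole plots,
# subplot factor B randomized within each whole plot, binary response y.
n_wholeplots, n_subplots = 16, 4
rows = []
for wp in range(n_wholeplots):
    a = wp % 2                       # whole-plot factor level
    u = rng.normal(0, 0.8)           # whole-plot random effect
    for sp in range(n_subplots):
        b = sp % 2                   # subplot factor level
        eta = -0.5 + 0.8 * a + 0.6 * b + 0.4 * a * b + u
        rows.append({"wholeplot": wp, "A": a, "B": b,
                     "y": rng.binomial(1, 1 / (1 + np.exp(-eta)))})
df = pd.DataFrame(rows)

# GLMM: fixed effects for A, B, and A:B; random intercept for whole plots.
model = BinomialBayesMixedGLM.from_formula(
    "y ~ A * B", {"wholeplot": "0 + C(wholeplot)"}, df)
result = model.fit_vb()
print(result.summary())
```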
19.
This article is concerned with the effect of the method used to handle missing values in multivariate control charts. We discuss the complete-case, mean substitution, regression, stochastic regression, and expectation-maximization algorithm methods for handling missing values. Estimates of the mean vector and variance-covariance matrix from the treated data set are used to build the multivariate exponentially weighted moving average (MEWMA) control chart. Based on a Monte Carlo simulation study, the performance of each of the five methods is investigated in terms of its ability to obtain the nominal in-control and out-of-control average run length (ARL). We consider three sample sizes, five levels of the percentage of missing values, and three numbers of variables. Our simulation results show that imputation methods perform better than case deletion methods. The regression-based imputation methods have the best overall performance among all the competing methods.
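A minimal sketch of the chart construction being compared: impute the missing Phase I values (mean substitution here, one of the five methods named), estimate the mean vector and covariance matrix from the treated data, and run the MEWMA statistic on new observations. The simulated data, smoothing constant, and control limit are illustrative assumptions, and the asymptotic form of the MEWMA covariance is used.

```python
import numpy as np

rng = np.random.default_rng(7)
p, n_phase1 = 3, 100

# Phase I data with values missing completely at random.
phase1 = rng.multivariate_normal(np.zeros(p), np.eye(p), n_phase1)
mask = rng.random(phase1.shape) < 0.10
phase1[mask] = np.nan

# Mean substitution: replace each missing value with its column mean.
col_means = np.nanmean(phase1, axis=0)
treated = np.where(np.isnan(phase1), col_means, phase1)

mu_hat = treated.mean(axis=0)
sigma_hat = np.cov(treated, rowvar=False)

# MEWMA statistic on new (Phase II) observations.
lam, h = 0.1, 10.0                       # smoothing constant and an assumed control limit
sigma_z = (lam / (2 - lam)) * sigma_hat  # asymptotic covariance of the MEWMA vector
sigma_z_inv = np.linalg.inv(sigma_z)

z = np.zeros(p)
for i, x in enumerate(rng.multivariate_normal(np.zeros(p), np.eye(p), 20), start=1):
    z = lam * (x - mu_hat) + (1 - lam) * z
    t2 = z @ sigma_z_inv @ z
    print(f"obs {i:2d}: T^2 = {t2:6.2f}  {'SIGNAL' if t2 > h else ''}")
```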
20.
In this paper, we study the effect of estimating the vector of means and the variance-covariance matrix on the performance of two of the most widely used multivariate cumulative sum (CUSUM) control charts, the MCUSUM chart proposed by Crosier [Multivariate generalizations of cumulative sum quality-control schemes, Technometrics 30 (1988), pp. 291-303] and the MC1 chart proposed by Pignatiello and Runger [Comparisons of multivariate CUSUM charts, J. Qual. Technol. 22 (1990), pp. 173-186]. Using simulation, we investigate and compare the in-control and out-of-control performances of the competing charts in terms of the average run length measure. The in-control and out-of-control performances of the competing charts deteriorate significantly if the estimated parameters are used with control limits intended for known parameters, especially when only a few Phase I samples are used to estimate the parameters. We recommend the use of the MC1 chart over that of the MCUSUM chart if the parameters are estimated from a small number of Phase I samples.
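Crosier's MCUSUM recursion is explicit enough to simulate directly. The sketch below estimates an in-control average run length by Monte Carlo with the true parameters plugged in; the reference value k, control limit h, and dimension are illustrative choices, so the printed ARL is not calibrated to any nominal value, and the Phase I estimation effect studied in the paper is not reproduced.

```python
import numpy as np

def crosier_run_length(mu, sigma_inv, k, h, rng, max_n=20_000, shift=None):
    """Run length of Crosier's MCUSUM for one simulated sequence
    (observations drawn with identity covariance)."""
    p = len(mu)
    mean = mu if shift is None else mu + shift
    s = np.zeros(p)
    for i in range(1, max_n + 1):
        x = rng.multivariate_normal(mean, np.eye(p))
        d = s + x - mu
        c = np.sqrt(d @ sigma_inv @ d)
        s = np.zeros(p) if c <= k else d * (1 - k / c)   # shrink toward zero by k
        y = np.sqrt(s @ sigma_inv @ s)
        if y > h:
            return i
    return max_n

rng = np.random.default_rng(3)
p, k, h = 2, 0.5, 5.5          # illustrative reference value and control limit
mu, sigma_inv = np.zeros(p), np.eye(p)

runs = [crosier_run_length(mu, sigma_inv, k, h, rng) for _ in range(500)]
print("estimated in-control ARL:", np.mean(runs))
```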