Full text (fee-based): 1344 articles; free: 114; free (domestic): 3.
By subject: Management 316; Ethnology 2; Talent studies 1; Demography 37; Collected works and book series 29; Theory and methodology 32; General 164; Sociology 65; Statistics 815.
By year: 2024: 1; 2023: 5; 2022: 5; 2021: 29; 2020: 37; 2019: 64; 2018: 49; 2017: 69; 2016: 39; 2015: 56; 2014: 62; 2013: 281; 2012: 99; 2011: 65; 2010: 48; 2009: 57; 2008: 85; 2007: 35; 2006: 41; 2005: 36; 2004: 34; 2003: 17; 2002: 25; 2001: 16; 2000: 12; 1999: 12; 1998: 10; 1997: 9; 1996: 6; 1995: 12; 1994: 9; 1993: 12; 1992: 18; 1991: 16; 1990: 18; 1989: 12; 1988: 10; 1987: 1; 1986: 5; 1985: 7; 1984: 8; 1983: 2; 1982: 10; 1981: 9; 1980: 7; 1978: 1.
A total of 1461 results were found (search time: 15 ms).
1.
To address the problems in online teaching evaluation by higher vocational college students, an evaluation verification mechanism is adopted to place restrictions on unreasonable ratings. Outliers are first removed from the student evaluation data, which are then corrected and optimized separately across classes, courses, and departments to obtain final adjusted scores. This reduces the differences in evaluation data caused by the class, course, and department involved, so that students' online evaluations reflect teachers' teaching quality more accurately and effectively.
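The correction pipeline described in this abstract (outlier screening followed by per-class, per-course, and per-department adjustment) could be sketched roughly as follows in Python. The 3-sigma outlier rule, the z-score rescaling, and the column names are illustrative assumptions, not the authors' actual formulas.

import pandas as pd

def correct_scores(df, score="score", groups=("class_id", "course_id", "dept_id"),
                   target_mean=85.0, target_sd=5.0):
    # Illustrative sketch: drop extreme ratings, then standardize within each
    # grouping factor so that class/course/department differences shrink.
    mu, sd = df[score].mean(), df[score].std()
    df = df[(df[score] - mu).abs() <= 3 * sd].copy()      # assumed 3-sigma screen
    for g in groups:
        z = df.groupby(g)[score].transform(
            lambda s: (s - s.mean()) / (s.std(ddof=0) or 1.0))
        df[score] = target_mean + target_sd * z           # map back to a common scale
    return df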
2.
ABSTRACT

The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials so that synthesized evidence across trials can be utilized to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
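A minimal sketch of the kind of probability statement the authors advocate: a conjugate normal-normal update in which an estimate pooled from earlier trials serves as the prior for the Phase 3 treatment effect. All numbers, and the conjugate-normal model itself, are assumptions for illustration only.

from math import sqrt
from scipy.stats import norm

# Prior from earlier trials (e.g., pooled Phase 2 evidence) - hypothetical values.
prior_mean, prior_se = 0.30, 0.15        # treatment effect and its standard error
# Phase 3 result - also hypothetical.
trial_mean, trial_se = 0.22, 0.10

# Conjugate normal-normal update: precision-weighted average of prior and data.
w_prior, w_trial = 1 / prior_se**2, 1 / trial_se**2
post_var = 1 / (w_prior + w_trial)
post_mean = post_var * (w_prior * prior_mean + w_trial * trial_mean)

# A direct probability statement about the treatment effect instead of a p-value.
p_benefit = 1 - norm.cdf(0, loc=post_mean, scale=sqrt(post_var))
print(f"Posterior P(effect > 0) = {p_benefit:.3f}")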
3.
This research note reflects on the gaps and limitations confronting the development of ethical principles regarding the accessibility of large-scale data for civil society organizations (CSOs). Drawing upon a systematic scoping study on the use of data in United Kingdom (UK) civil society, it finds that there are twin needs to conceptualize accessibility as more than the mere availability of data, and to examine the use of data among CSOs more generally. In order to deal with the apparent “digital divide” in UK civil society – where, despite extensive government rhetoric about data openness, organizations face not only the barriers of limited time, funds, and expertise to harness data but also a lack of representation within existing data – we present a working model through which the ethical concerns accompanying data utilization by civil society may be better accounted for. This suggests there is a need for further research into the nexus of civil society and data, upon which interdisciplinary discussion about the ethical dimensions of engagement with data, particularly informed by insight from the social sciences, can be predicated.
4.
贺建风, 李宏煜. 《统计研究》(Statistical Research), 2021, 38(4): 131-144
In the era of the digital economy, social networks, as an important carrier of the digital platform economy, have attracted wide attention from scholars at home and abroad. Against the background of big data, social networks have enormous commercial application value, but because their scale is unprecedentedly large, traditional network analysis methods are no longer applicable due to excessive computational cost. Obtaining a sample network through a network sampling algorithm and then inferring the whole network can save computing resources, so the quality of the sampling algorithm directly affects the accuracy of conclusions drawn from social network analysis. Existing social network sampling algorithms suffer from shortcomings such as ignoring the internal topology of the network, easily getting trapped in local sub-networks, and low sampling efficiency. To remedy these deficiencies, this paper combines the community characteristics of big-data social networks and proposes a clustered random walk sampling algorithm. The method first uses a community clustering algorithm to partition the nodes of the original network into communities, yielding multiple community networks, and then performs random walk sampling on each community separately to obtain the sample network. Results from both numerical simulations and a case application show that the clustered random walk sampling algorithm overcomes the drawbacks of traditional network sampling algorithms and can reduce network size while preserving the structural characteristics of the original network well. In addition, the algorithm can be run in parallel, which effectively improves sampling efficiency and is of great practical significance for sampling large-scale social networks in the big-data era.
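A rough sketch of the two-stage idea (community clustering, then an independent random walk inside each community) using networkx. The modularity-based clustering, the walk length cap, and the 10% sampling fraction are placeholders rather than the authors' exact specification.

import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def clustered_random_walk_sample(G, frac=0.1, seed=0):
    # Sample roughly `frac` of the nodes of G by walking within each community.
    rng = random.Random(seed)
    sampled = set()
    for comm in greedy_modularity_communities(G):
        sub = G.subgraph(comm)
        target = max(1, int(frac * sub.number_of_nodes()))
        node = rng.choice(list(sub.nodes))
        visited, steps = {node}, 0
        while len(visited) < target and steps < 100 * target:
            steps += 1
            nbrs = list(sub.neighbors(node))
            node = rng.choice(nbrs) if nbrs else rng.choice(list(sub.nodes))
            visited.add(node)
        sampled.update(visited)   # each community could be walked in parallel
    return G.subgraph(sampled).copy()

# Example: sample 10% of a synthetic network with planted community structure.
G = nx.planted_partition_graph(4, 250, 0.05, 0.002, seed=1)
S = clustered_random_walk_sample(G, frac=0.1)
print(S.number_of_nodes(), "nodes,", S.number_of_edges(), "edges in the sample")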
5.
Empirical applications of poverty measurement often have to deal with a stochastic weighting variable such as household size. Within the framework of a bivariate distribution function defined over income and weight, I derive the limiting distributions of the decomposable poverty measures and of the ordinates of stochastic dominance curves. The poverty line is allowed to depend on the income distribution. It is shown how the results can be used to test hypotheses concerning changes in poverty. The inference procedures are briefly illustrated using Belgian data. An erratum to this article can be found at
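For reference, the decomposable poverty measures referred to here are typically of the Foster-Greer-Thorbecke form, P_alpha = sum_i w_i ((z - y_i)/z)^alpha 1{y_i < z} / sum_i w_i, with household size w_i entering as a stochastic weight. A minimal weighted implementation might look like this; the data and the half-of-mean poverty line are invented for illustration.

import numpy as np

def fgt_poverty(y, w, z, alpha=0):
    # Weighted Foster-Greer-Thorbecke index:
    # alpha = 0 headcount ratio, 1 poverty gap, 2 squared gap.
    y, w = np.asarray(y, float), np.asarray(w, float)
    poor = y < z
    gap = np.where(poor, (z - y) / z, 0.0)
    return np.sum(w * gap**alpha * poor) / np.sum(w)

# Poverty line depending on the income distribution (half the weighted mean income).
y = np.array([400.0, 900.0, 1500.0, 2200.0, 3100.0])   # incomes
w = np.array([4, 2, 3, 1, 2])                           # household sizes as weights
z = 0.5 * np.average(y, weights=w)
print(fgt_poverty(y, w, z, alpha=0), fgt_poverty(y, w, z, alpha=1))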
6.
Bayesian analysis of discrete time warranty data   (total citations: 1; self-citations: 0; citations by others: 1)
Summary.  The analysis of warranty claim data, and their use for prediction, has been a topic of active research in recent years. Field data comprising numbers of units returned under guarantee are examined, covering both situations in which the ages of the failed units are known and in which they are not. The latter case poses particular computational problems for likelihood-based methods because of the large number of feasible failure patterns that must be included as contributions to the likelihood function. For prediction of future warranty exposure, which is of central concern to the manufacturer, the Bayesian approach is adopted. For this, Markov chain Monte Carlo methodology is developed.
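As a toy illustration of the type of Bayesian prediction involved (not the authors' model), one can place a Beta prior on a per-period claim probability, treat monthly warranty returns as binomial counts, and simulate next period's exposure from the posterior. The conjugate Beta-binomial shortcut used here stands in for the Markov chain Monte Carlo machinery the paper develops, and all figures are invented.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical field data: units under warranty each month and claims observed.
units_at_risk = np.array([5000, 5200, 4800, 5100])
claims        = np.array([  42,   55,   39,   47])

# Beta(1, 1) prior on the per-period claim probability; with a binomial likelihood
# the posterior is again Beta, so it can be sampled directly.
a_post = 1 + claims.sum()
b_post = 1 + units_at_risk.sum() - claims.sum()
p_draws = rng.beta(a_post, b_post, size=20000)

# Posterior predictive distribution of next month's claims for 5000 new units.
future_claims = rng.binomial(5000, p_draws)
print("predictive mean:", future_claims.mean())
print("95% predictive interval:", np.percentile(future_claims, [2.5, 97.5]))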
7.
A game-theoretic analysis of statistical law enforcement   (total citations: 1; self-citations: 0; citations by others: 1)
In response to the currently serious distortion of statistical data in China, which has drawn widespread public concern, this paper uses game theory as an analytical tool and introduces a repeated game to study the conflict of interest between the data-reporting side and the inspecting side in statistical law enforcement. From the perspective of statistical law enforcement it reveals the main causes of statistical data distortion and proposes five corresponding countermeasures.
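A stylized version of the reporter-inspector conflict can be written as a one-shot inspection game; the payoff numbers below are purely hypothetical, and the mixed-strategy equilibrium is computed only to show the kind of analysis involved. In the repeated game studied in the paper, the threat of future inspections and penalties can lower the equilibrium falsification rate further.

# Stylized inspection game with hypothetical payoffs.
# Reporter: falsifying gains g if not inspected, costs a fine f if inspected.
# Inspector: inspecting costs c and yields benefit b when falsified data are caught.
g, f, c, b = 4.0, 10.0, 1.0, 6.0

# Mixed-strategy equilibrium: each side randomizes so the other is indifferent.
# Reporter indifferent: (1 - q) * g - q * f = 0  =>  q = g / (g + f)
# Inspector indifferent: p * b - c = 0           =>  p = c / b
q = g / (g + f)      # equilibrium inspection probability
p = c / b            # equilibrium falsification probability
print(f"inspect with probability {q:.2f}, falsify with probability {p:.2f}")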
8.
Summary.  As a part of the EUREDIT project new methods to detect multivariate outliers in incomplete survey data have been developed. These methods are the first to work with sampling weights and to be able to cope with missing values. Two of these methods are presented here. The epidemic algorithm simulates the propagation of a disease through a population and uses extreme infection times to find outlying observations. Transformed rank correlations are robust estimates of the centre and the scatter of the data. They use a geometric transformation that is based on the rank correlation matrix. The estimates are used to define a Mahalanobis distance that reveals outliers. The two methods are applied to a small data set and to one of the evaluation data sets of the EUREDIT project.
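The transformed-rank-correlation step can be sketched as follows: estimate the correlation matrix robustly from Spearman rank correlations (via the usual 2*sin(pi*r/6) transformation), combine it with robust scales, and flag observations with large Mahalanobis distances. Handling sampling weights and missing values, which is the paper's actual contribution, is omitted from this simplified sketch.

import numpy as np
from scipy.stats import spearmanr, chi2, median_abs_deviation

def trc_outliers(X, alpha=0.01):
    # Simplified transformed-rank-correlation outlier detector
    # (assumes complete, unweighted data).
    X = np.asarray(X, float)
    p = X.shape[1]
    center = np.median(X, axis=0)
    scale = median_abs_deviation(X, axis=0, scale="normal")
    r_s, _ = spearmanr(X)                       # Spearman rank correlation matrix
    rho = 2 * np.sin(np.pi * r_s / 6)           # approximate Pearson correlation
    cov = rho * np.outer(scale, scale)
    diff = X - center
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    return d2 > chi2.ppf(1 - alpha, df=p)       # True marks a suspected outlier

# Example: 200 clean observations plus 5 shifted ones.
rng = np.random.default_rng(0)
X = np.vstack([rng.multivariate_normal([0, 0, 0], np.eye(3), 200),
               rng.multivariate_normal([6, 6, 6], np.eye(3), 5)])
print(trc_outliers(X).sum(), "observations flagged")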
9.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, both under normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step. Otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error from the parametric case. Here we analyse under what conditions non-parametric charts actually become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly less huge (but still larger than the customary range). These corrections serve to control the behaviour during in-control (markedly wrong outcomes of the estimates only occur sufficiently rarely). The price for this protection will clearly be some loss of detection power during out-of-control. A change point comes into view as soon as this loss can be made sufficiently small.
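At its core, the non-parametric chart discussed here takes an extreme empirical quantile of the in-control reference sample as the control limit, and a correction inflates that limit so that estimation error only rarely produces a chart with too high a false-alarm rate. The order-statistic construction below is an assumed stand-in for the corrections derived in the paper, using the fact that the coverage of an order statistic follows a Beta distribution.

import numpy as np
from scipy.stats import beta

def nonparametric_ucl(reference, p=0.001, gamma=0.10):
    # Uncorrected limit: the empirical (1 - p)-quantile of the reference sample.
    # Corrected limit: the smallest order statistic X_(r) with
    # P(exceedance probability > p) <= gamma, using F(X_(r)) ~ Beta(r, n - r + 1).
    x = np.sort(np.asarray(reference, float))
    n = len(x)
    naive = x[int(np.ceil((1 - p) * n)) - 1]
    for r in range(1, n + 1):
        if beta.cdf(1 - p, r, n - r + 1) <= gamma:
            return naive, x[r - 1]
    return naive, None        # the sample is too small to give the guarantee

rng = np.random.default_rng(1)
reference = rng.normal(size=20000)   # a huge in-control sample, as the abstract warns is needed
print(nonparametric_ucl(reference))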
10.
Summary.  We consider a Bayesian forecasting system to predict the dispersal of contamination on a large-scale grid in the event of an accidental release of radioactivity. The statistical model is built on a physical model for atmospheric dispersion and transport called MATCH. Our spatiotemporal model is a dynamic linear model in which the state parameters are the (essentially deterministic) predictions of MATCH; the distributions of these are updated sequentially in the light of monitoring data. One of the distinguishing features of the model is that the number of these parameters is very large (typically several hundreds of thousands), and we discuss practical issues arising in its implementation as a real-time model. Our procedures have been checked against a variational approach which is used widely in the atmospheric sciences. The results of the model are applied to test data from a tracer experiment.
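The sequential updating described here is, in essence, a Kalman-filter observation step in which the deterministic MATCH predictions play the role of the prior state mean. The toy dimensions and noise levels below are assumptions; in the real system the state has hundreds of thousands of components, which is what forces the practical approximations the authors discuss.

import numpy as np

def kalman_update(m, P, y, H, R):
    # One observation update of a dynamic linear model:
    # prior state ~ N(m, P); observation y = H x + noise, noise ~ N(0, R).
    S = H @ P @ H.T + R                                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
    m_post = m + K @ (y - H @ m)
    P_post = (np.eye(len(m)) - K @ H) @ P
    return m_post, P_post

# Toy example: 6 grid cells of predicted deposition, 2 monitoring stations.
m = np.array([1.0, 0.8, 0.5, 0.3, 0.2, 0.1])             # MATCH-style prior prediction
P = 0.2 * np.eye(6)                                       # prior uncertainty
H = np.zeros((2, 6)); H[0, 1] = H[1, 4] = 1.0             # stations observe cells 1 and 4
y = np.array([1.1, 0.05])                                 # monitoring data
m_post, P_post = kalman_update(m, P, y, H, 0.01 * np.eye(2))
print(np.round(m_post, 3))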