By access type:
  Fee-based full text: 8,147 articles
  Free: 286 articles
  Free within China: 51 articles
By discipline (number of articles):
  Management: 616
  Labor science: 2
  Ethnology: 29
  Talent studies: 1
  Demography: 33
  Collected works and series: 799
  Theory and methodology: 233
  General: 4,192
  Sociology: 529
  Statistics: 2,050
By publication year (number of articles):
  2024: 8
  2023: 87
  2022: 76
  2021: 89
  2020: 129
  2019: 180
  2018: 201
  2017: 283
  2016: 193
  2015: 214
  2014: 356
  2013: 923
  2012: 531
  2011: 469
  2010: 403
  2009: 452
  2008: 444
  2007: 512
  2006: 535
  2005: 454
  2004: 457
  2003: 396
  2002: 372
  2001: 277
  2000: 147
  1999: 62
  1998: 45
  1997: 41
  1996: 21
  1995: 17
  1994: 23
  1993: 12
  1992: 17
  1991: 14
  1990: 6
  1989: 5
  1988: 7
  1987: 3
  1986: 3
  1985: 4
  1984: 3
  1983: 3
  1982: 5
  1981: 1
  1980: 2
  1979: 1
  1975: 1
Sort order: 8,484 results found in total (search time: 31 ms)
1.
Damage models for natural hazards are used for decision making on reducing and transferring risk. The damage estimates from these models depend on many variables and their complex, sometimes nonlinear, relationships with the damage. In recent years, data-driven modeling techniques have been used to capture those relationships. The data available to build such models are often limited, so in practice it is usually necessary to transfer models to a different context. In this article, we show that this implies that the samples used to build the model are often not fully representative of the situation to which the model is applied, which leads to a "sample selection bias." We enhance data-driven damage models by applying methods, not previously applied to damage modeling, that correct for this bias before the machine learning (ML) models are trained. We demonstrate this with case studies on flooding in Europe and typhoon wind damage in the Philippines. Two sample selection bias correction methods from the ML literature are applied, and one of them is also adapted to our problem. These three methods are combined with stochastic generation of synthetic damage data. For both case studies, the sample selection bias correction techniques reduce model errors; for the mean bias error in particular, the reduction can exceed 30%. The novel combination with stochastic data generation appears to further enhance these techniques. This shows that sample selection bias correction methods are beneficial for damage model transfer.
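The abstract does not name the specific correction methods used; as a generic illustration of the idea, the sketch below applies one standard technique from the ML literature, density-ratio importance weighting via a probabilistic classifier, to invented data. The feature layout, estimator choices (scikit-learn LogisticRegression and RandomForestRegressor), and all numbers are assumptions, not the article's implementation.

```python
# Minimal sketch: importance weighting to correct sample selection bias.
# Assumption: X_source, y_source are damage observations from the region where
# data are available; X_target holds (unlabeled) features from the region where
# the model will be applied. All data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_source = rng.normal(1.0, 1.0, size=(500, 3))   # e.g. hazard intensity features
y_source = X_source @ [0.5, 0.3, 0.2] + rng.normal(0, 0.1, 500)
X_target = rng.normal(1.5, 1.2, size=(300, 3))   # shifted feature distribution

# 1. Train a classifier to separate source (0) from target (1) samples.
X_both = np.vstack([X_source, X_target])
d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
clf = LogisticRegression().fit(X_both, d)

# 2. Convert class probabilities into density-ratio weights p_target / p_source.
p = clf.predict_proba(X_source)[:, 1]
weights = (p / (1 - p)) * (len(X_source) / len(X_target))

# 3. Train the damage model on re-weighted source data, then apply it to the target.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_source, y_source, sample_weight=weights)
predicted_damage = model.predict(X_target)
```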
2.
The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have an enormous impact on which treatments reach patients, when they receive them, and how they are used. The statistical framework for supporting decisions in the regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of "no treatment effect" are run routinely, and a p-value < 0.05 is often the determinant of what constitutes a "successful" trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence places the blame on deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared "unsuccessful" due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process through its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a "prior" for Phase 3 trials, so that evidence synthesized across trials can be used to compute probability statements that are valuable for understanding the magnitude of the treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making.
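As a rough illustration of the kind of Bayesian synthesis argued for here, the sketch below performs a conjugate normal-normal update: a prior for the treatment effect summarizing earlier trials is combined with a Phase 3 estimate to yield posterior probability statements. All effect sizes, standard errors, and thresholds are invented, and the conjugate model is an assumption for illustration, not the authors' proposed analysis.

```python
# Minimal sketch: normal prior from earlier trials, updated with Phase 3 data.
# All numbers are invented for illustration.
from scipy.stats import norm

# Prior for the treatment effect, synthesized from earlier / external trials.
prior_mean, prior_sd = 1.5, 1.0

# Phase 3 summary: observed effect estimate and its standard error.
obs_effect, obs_se = 2.0, 0.8

# Conjugate normal-normal posterior update.
prior_prec, obs_prec = 1 / prior_sd**2, 1 / obs_se**2
post_var = 1 / (prior_prec + obs_prec)
post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs_effect)
post_sd = post_var**0.5

# Probability statements about the magnitude of the treatment effect.
p_effect_positive = 1 - norm.cdf(0, loc=post_mean, scale=post_sd)
p_clinically_relevant = 1 - norm.cdf(1.0, loc=post_mean, scale=post_sd)
print(f"P(effect > 0)   = {p_effect_positive:.3f}")
print(f"P(effect > 1.0) = {p_clinically_relevant:.3f}")
```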
3.
The generalized half-normal (GHN) distribution and progressive type-II censoring are considered in this article for statistical inference in constant-stress accelerated life testing. The EM algorithm is used to calculate the maximum likelihood estimates. The Fisher information matrix is obtained via the missing information principle and is used to construct asymptotic confidence intervals. Interval estimation is further discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure, and the Metropolis-Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the related prediction intervals are obtained. Three optimality criteria are considered to determine the optimal stress level. A real data set is used to illustrate the value of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided along with a discussion.
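For reference, the GHN density has the form f(x; α, θ) = √(2/π) (α/x)(x/θ)^α exp{-(1/2)(x/θ)^{2α}} for x > 0. The sketch below fits this model to a complete (uncensored) simulated sample by direct likelihood maximization; it is a simplified illustration only and does not reproduce the article's EM algorithm under progressive type-II censoring.

```python
# Minimal sketch: GHN log-likelihood and its maximization for a complete
# (uncensored) sample of invented data. Not the article's censored-data EM setup.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ghn_loglik(params, x):
    """Log-likelihood of GHN(alpha, theta):
    f(x) = sqrt(2/pi) * (alpha/x) * (x/theta)**alpha * exp(-0.5*(x/theta)**(2*alpha))."""
    alpha, theta = params
    if alpha <= 0 or theta <= 0:
        return -np.inf
    z = x / theta
    return np.sum(0.5 * np.log(2 / np.pi) + np.log(alpha) - np.log(x)
                  + alpha * np.log(z) - 0.5 * z ** (2 * alpha))

# Simulate GHN data by inverting the CDF F(x) = 2 * Phi((x/theta)**alpha) - 1.
rng = np.random.default_rng(1)
alpha_true, theta_true = 1.5, 2.0
u = rng.uniform(size=200)
x = theta_true * norm.ppf((u + 1) / 2) ** (1 / alpha_true)

# Maximum likelihood estimation by direct numerical optimization.
res = minimize(lambda p: -ghn_loglik(p, x), x0=[1.0, 1.0], method="Nelder-Mead")
print("MLE (alpha, theta):", res.x)
```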
4.
The argument is made for having a positive error culture in child protection to improve decision making and risk management. This requires organizations to accept that mistakes are likely and to treat them as opportunities for learning and improving. In contrast, in many organizations a punitive reaction to errors leads workers to hide them and to develop a defensive approach to their practice with children and families. The safety management literature has shown that human error is generally not simply due to a "bad apple" but is made more or less likely by the work context that helps or hinders good performance. Improving safety requires learning about the weaknesses in the organization that contribute to poor performance. To create a learning culture, people need to feel that when they talk about mistakes or weak practice, there will be a constructive response from their organization. One aspect of reducing the blame culture is to develop a shared understanding of how practice will be judged and of how those appraising practice will avoid hindsight bias. To facilitate a positive error culture, a set of risk principles is presented that offers criteria by which practice should be appraised.
5.
Keisuke Himoto, Risk Analysis, 2020, 40(6): 1124-1138
Post-earthquake fires are high-consequence events with extensive damage potential. They are also low-frequency events, so their nature remains underinvestigated. One difficulty in modeling post-earthquake ignition probabilities is reducing the model uncertainty attributable to scarce source data. The data scarcity problem has commonly been addressed by pooling data collected indiscriminately from multiple earthquakes. However, this approach neglects inter-earthquake heterogeneity in regional and seasonal characteristics, which is indispensable for risk assessment of future post-earthquake fires. Thus, the present study analyzes the post-earthquake ignition probabilities of five major earthquakes in Japan from 1995 to 2016 (the 1995 Kobe, 2003 Tokachi-oki, 2004 Niigata–Chuetsu, 2011 Tohoku, and 2016 Kumamoto earthquakes) using a hierarchical Bayesian approach. As the ignition causes of earthquakes share a certain commonality, common prior distributions were assigned to the parameters, and samples were drawn from the target posterior distribution of the parameters by a Markov chain Monte Carlo simulation. The results of the hierarchical model were compared with those of pooled and independent models. Although the pooled and hierarchical models were both robust compared with the independent model, the pooled model underestimated the ignition probabilities of earthquakes with few data samples. Among the tested models, the hierarchical model was least affected by source-to-source variability in the data. Accounting for the heterogeneity of post-earthquake ignitions across regions and seasons has long been desired in the modeling of post-earthquake ignition probabilities but has not been properly handled by existing approaches. The presented hierarchical Bayesian approach provides a systematic and rational framework for coping with this problem, which consequently enhances the statistical reliability and stability of post-earthquake ignition probability estimates.
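To make the partial-pooling idea concrete, the sketch below fits a simplified hierarchical binomial model (per-earthquake ignition probabilities on the logit scale, sharing common hyperparameters) with a basic random-walk Metropolis sampler. The counts, priors, and sampler settings are all invented for illustration and are not the article's model or data.

```python
# Simplified sketch of hierarchical (partial-pooling) estimation of per-earthquake
# ignition probabilities. NOT the article's model: counts are invented and a basic
# random-walk Metropolis sampler stands in for the article's MCMC setup.
import numpy as np

rng = np.random.default_rng(2)

# Invented data: ignitions y out of n strongly shaken buildings, per earthquake.
y = np.array([60, 5, 8, 40, 12])
n = np.array([20000, 3000, 4000, 25000, 9000])
K = len(y)

def log_post(mu, log_tau, eta):
    """Log posterior: y_k ~ Binomial(n_k, p_k), logit(p_k) = mu + tau*eta_k,
    eta_k ~ N(0, 1), with weak N(0, 5^2) priors on mu and log_tau."""
    tau = np.exp(log_tau)
    p = 1 / (1 + np.exp(-(mu + tau * eta)))
    lp = np.sum(y * np.log(p) + (n - y) * np.log1p(-p))   # binomial kernel
    lp += -0.5 * np.sum(eta ** 2)                          # eta ~ N(0, 1)
    lp += -0.5 * (mu / 5) ** 2 - 0.5 * (log_tau / 5) ** 2  # weak hyperpriors
    return lp

# Random-walk Metropolis over (mu, log_tau, eta_1..eta_K).
theta = np.concatenate([[-6.0, 0.0], np.zeros(K)])
cur_lp = log_post(theta[0], theta[1], theta[2:])
samples = []
for it in range(20000):
    prop = theta + rng.normal(0, 0.05, size=theta.size)
    prop_lp = log_post(prop[0], prop[1], prop[2:])
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        theta, cur_lp = prop, prop_lp
    if it >= 10000:                                        # discard burn-in
        samples.append(theta.copy())

samples = np.array(samples)
p_post = 1 / (1 + np.exp(-(samples[:, :1] + np.exp(samples[:, 1:2]) * samples[:, 2:])))
print("Posterior mean ignition probability per earthquake:", p_post.mean(axis=0))
```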
6.
Can universities be agents of progressive social change? How would we know if a university was acting as an agent of social change? Drawing on four case studies, I raise a number of questions to problematize our understanding of the university as an agent of social change. I outline a number of contributing factors that appear to explain the successful cases. I conclude by arguing for the relevance of these cases to larger, and more traditional, sociological projects.
7.
Service-learning programs are not free from challenges, brought about by a lack of financial support; a lack of widespread commitment from professors, community agencies, and recipients of service; and a lack of knowledge and insight among the students directly involved in such programs. While service-learning initiatives and programs serve positive functions for organizations and individuals, rhetorical accolades for service learning can distort or omit the realities of program implementation and sustained delivery. This paper specifically explores the following challenges connected to service-learning programs: (1) pedagogical difficulties; (2) student limitations; (3) time constraints; and (4) community cooperation.
8.
Reflections on the Network-Based Management of University Offices   (Cited: 4 times; self-citations: 0; citations by others: 4)
With the development of science and technology and of higher education, networks are now widely used in the administrative work of university offices, where they play an important role.
9.
A Preliminary Study on Constructing an Examination System for Modern Open and Distance Education   (Cited: 2 times; self-citations: 0; citations by others: 2)
Modern open and distance education is built on learner-centered open learning and individualized learning. Starting from the training objectives of quality-oriented education, and drawing on modern theories of testing and measurement and on modern technology, we should establish an examination system for open and distance education that objectively and accurately evaluates learners' progress and the levels of knowledge and ability they attain, so that learners can continuously improve themselves during the learning process and develop an enterprising spirit.
10.
To optimize the combination of educational resources, the state has in recent years merged, transferred, and restructured many institutions of higher education. After a merger, a large volume of financial accounting work arises among the campuses of the merged university, such as funds transferred in and out and inter-campus settlements. This gives rise to a new type of accounting work, similar to that of an enterprise group: the consolidation and adjustment of period-end financial statements. Drawing on the author's practical experience in this line of work, this article discusses the consolidation and adjustment of period-end financial statements after university mergers.