Full-text access type
Paid full text | 1156 articles |
Free | 83 articles |
Free (domestic) | 3 articles |
Subject classification
Management | 209 articles |
Ethnology | 4 articles |
Talent studies | 1 article |
Demography | 49 articles |
Collected works | 54 articles |
Theory and methodology | 30 articles |
General | 304 articles |
Sociology | 62 articles |
Statistics | 529 articles |
Publication year
2024 | 2 articles |
2023 | 7 articles |
2022 | 6 articles |
2021 | 28 articles |
2020 | 43 articles |
2019 | 46 articles |
2018 | 48 articles |
2017 | 54 articles |
2016 | 42 articles |
2015 | 50 articles |
2014 | 63 articles |
2013 | 168 articles |
2012 | 80 articles |
2011 | 74 articles |
2010 | 52 articles |
2009 | 66 articles |
2008 | 72 articles |
2007 | 34 articles |
2006 | 58 articles |
2005 | 39 articles |
2004 | 39 articles |
2003 | 23 articles |
2002 | 33 articles |
2001 | 27 articles |
2000 | 16 articles |
1999 | 10 articles |
1998 | 12 articles |
1997 | 9 articles |
1996 | 7 articles |
1995 | 9 articles |
1994 | 4 articles |
1993 | 3 articles |
1992 | 5 articles |
1991 | 2 articles |
1990 | 3 articles |
1989 | 2 articles |
1988 | 3 articles |
1987 | 1 article |
1982 | 1 article |
1980 | 1 article |
Sort by: 1,242 results found, search time 62 ms
971.
Under China's new "dual circulation" development pattern, Chinese enterprises participating in Belt and Road cooperation must not only align with the United Nations 2030 Agenda for Sustainable Development but also build new hub nodes for a "dual circulation" form of globalization. Facing the escalation of the China-US trade war and the new post-pandemic situation, enterprises operating under the Belt and Road Initiative need to adopt new internationalization concepts, upgrade their ecological niches, adapt to the new development pattern, transform their development philosophy, and optimize the ecosystems of their international and domestic supply chains so as to keep those chains secure, stable, and controllable and to achieve sustainable development. Under the new pattern, the key to sustainable Belt and Road development is to establish a hub mechanism connecting the international and domestic circulations and, starting from the concept of a "community with a shared future for mankind", to innovate an institutionalized "symbiotic" development model. Niche optimization algorithms offer a way to analyze the effectiveness of "dual circulation" supply-chain cooperation ecosystems: to address the "island" (silo) problems and other shortcomings of current Belt and Road supply-chain cooperation, the overlap and breadth of enterprise niches are optimized algorithmically. Through such optimization and its validation, the paper proposes a new model of international "dual circulation" supply-chain cooperation that adapts to changing international and domestic conditions, builds a "dual circulation" ecosystem for Belt and Road supply chains aligned with the UN Sustainable Development Goals, lets industries act as hub nodes linking the two circulations, and raises the niches of Chinese industries and enterprises along the Belt and Road, thereby advancing the "dual circulation" supply-chain cooperation ecosystem and shaping a new pattern of Belt and Road development.
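The niche overlap and breadth measures the abstract optimizes can be made concrete with standard formulas from ecology. A minimal sketch, assuming Levins' breadth and Pianka's overlap index (the paper's own optimization algorithm is not reproduced here, and the firms' resource-use shares are invented):

```python
import numpy as np

def niche_breadth(p):
    """Levins' niche breadth B = 1 / sum(p_i^2), where p holds a firm's
    resource-use proportions; larger B means a wider niche."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return 1.0 / np.sum(p ** 2)

def niche_overlap(p, q):
    """Pianka's symmetric niche overlap index between two firms'
    resource-use vectors; 0 means disjoint niches, 1 means identical."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return np.sum(p * q) / np.sqrt(np.sum(p ** 2) * np.sum(q ** 2))

# Illustrative resource-use shares across four supply-chain segments
firm_a = [0.4, 0.3, 0.2, 0.1]
firm_b = [0.1, 0.2, 0.3, 0.4]
print(niche_breadth(firm_a))           # ~3.33: firm A uses segments unevenly
print(niche_overlap(firm_a, firm_b))   # ~0.67: partial competition
```

Reducing overlap while widening breadth is the qualitative goal such an optimization would pursue.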
972.
Building on the existing dynamic DEA model DSBM, this paper constructs a DtSBM model that optimizes period efficiency and applies it, with provinces (municipalities) as the decision making units (DMUs), to dynamically evaluate the health care service efficiency of China's 31 provinces and municipalities (excluding Hong Kong, Macao, and Taiwan) over 2008-2015. By computing each province's optimal efficiency in each period (period efficiency) and its efficiency across all periods (overall efficiency), the study verifies, from the perspective of health care service efficiency, that the "New Healthcare Reform" formally launched in China in 2009 has had a significant effect. When the provinces are grouped into eastern, central, and western regions, the eastern region shows the highest health care service efficiency, the western region the second highest, and the central region the lowest. Based on the percentage improvements required in each DMU's input and output indicators, the paper sets out directions and targets for the relatively inefficient DMUs (Shanxi, Heilongjiang, Jilin, Liaoning, and Shaanxi provinces) to improve their health care service efficiency.
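The DtSBM model itself is too involved for a short sketch, but the envelopment LP that all such DEA models build on can be illustrated. Below is the basic input-oriented CCR model solved with SciPy; it is the static textbook model, not DtSBM, and the province inputs (beds, staff) and output (treated cases) are invented:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns theta in (0, 1]:
    min theta s.t. sum_j lam_j x_j <= theta * x_o, sum_j lam_j y_j >= y_o."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1); c[0] = 1.0        # decision vars: [theta, lam_1..lam_n]
    A_ub, b_ub = [], []
    for i in range(m):                      # input constraints
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i]))); b_ub.append(0.0)
    for r in range(s):                      # output constraints
        A_ub.append(np.concatenate(([0.0], -Y[:, r]))); b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Toy data: 3 provinces, 2 inputs (beds, staff), 1 output (treated cases)
X = np.array([[100.0, 50.0], [120.0, 80.0], [90.0, 60.0]])
Y = np.array([[500.0], [450.0], [480.0]])
scores = [ccr_efficiency(X, Y, o) for o in range(3)]
print(scores)   # DMUs 0 and 2 are efficient; DMU 1 is not
```

Dynamic variants such as DSBM and DtSBM extend this by carrying link variables between periods.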
973.
Over the past decade there have been considerable developments in the use of data in the field of child and family homelessness. The development of high-quality data collection processes, including Housing Management Information Systems (HMIS), community point-in-time counts, and school district data and evaluation infrastructure, has given nonprofit and social sector leaders unprecedented access to client-level data. However, it remains a challenge for nonprofits and community-based organizations to engage in work with families experiencing homelessness and demonstrate meaningful impact across a variety of outcomes. In this policy brief, the authors discuss (1) challenges facing the field of child and family homelessness with respect to data use, (2) recent advancements in the use of data, and (3) strategies for creating an organizational culture of data that builds on these advancements and addresses the challenges facing the field. The brief argues that fostering a data culture at the organizational level can act as an organizational change agent that improves programs.
974.
T. C. Hsieh, Yu-Hsiu Wang, Yi-Shan Hsieh, Jing-Tai Ke, Chai-Kai Liu, Sue-Chuan Chen. Journal of Technology in Human Services, 2018, 36(1): 56-68
ABSTRACT: The prevention of domestic violence (DV) has raised serious concern in Taiwan because of the disparity between the number of reported DV cases, which doubled over the past decade, and the scarcity of social workers. However, the most common collaborations for DV prevention, between academic researchers and advocacy groups or through government outsourcing, often fail to produce effective prevention strategies. Hence, the Data for Social Good Initiative (D4SG) worked with the Taipei City Government to improve the efficiency of DV prevention and risk management on two levels: project collaboration and data. On the project collaboration level, we adopted a platform strategy and used public-private partnership (PPP) to connect and empower change agents across data silos, from pilot runs to actual project execution. On the data level, we helped social workers differentiate the risk level of new cases by building a repeat-victimization risk prediction model with the random forest method, using 2015 data from the Taipei City Government's DV database. The accuracy and F1-measure of our model were 96.3% and 62.8%, respectively. The project's PPP approach and quantification method successfully improved the DV prevention process, and these methodologies have also been applied as a paradigm in other fields, including firework prediction and emergency healthcare management.
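The modeling step can be sketched as follows. This is a hedged illustration on synthetic data: the feature names stand in for whatever fields the Taipei DV database actually contains, and the generative rule is invented, so the scores below are not the paper's 96.3% / 62.8%:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical case features: prior reports, victim age,
# cohabitation flag, prior-injury flag.
X = np.column_stack([
    rng.poisson(1.0, n),
    rng.integers(18, 80, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])
# Synthetic rule: repeat victimization grows with prior reports and injury
p = 1 / (1 + np.exp(-(-2.0 + 0.8 * X[:, 0] + 1.2 * X[:, 3])))
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.3f}  F1={f1_score(y_te, pred):.3f}")
```

Reporting F1 alongside accuracy matters here because repeat cases are the minority class, so accuracy alone overstates performance.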
975.
Dorris Scott, Jihwan Oh, Miriam Chappelka, Mizzani Walker-Holmes, Carl DiSalvo. Journal of Technology in Human Services, 2018, 36(1): 37-47
ABSTRACT: This project explores public opinion on the Supplemental Nutrition Assistance Program (SNAP) in news and social media outlets, and tracks elected representatives' voting records on issues relating to SNAP and food insecurity. We used machine learning, sentiment analysis, and text mining to analyze national- and state-level coverage of SNAP in order to gauge perceptions of the program over time across these outlets. Results indicate that the majority of news coverage has negative sentiment, that more partisan news outlets express more extreme sentiment, and that negative reporting on SNAP clusters in the Midwest. Our final results and tools will be displayed in an online application that the ACFB Advocacy team can use to inform its communication with relevant stakeholders.
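A toy illustration of lexicon-based sentiment scoring in the spirit of this analysis; the word lists and headlines are invented, not the authors' lexicon or corpus, and real pipelines would also handle negation, weighting, and tokenization properly:

```python
# Minimal lexicon-based sentiment: score = positive hits - negative hits.
POS = {"helps", "supports", "benefits", "expands", "protects"}
NEG = {"fraud", "abuse", "cuts", "waste", "dependency"}

def sentiment(text):
    """Return (#positive - #negative) lexicon hits for one headline."""
    words = text.lower().split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

headlines = [
    "snap expands and helps working families",
    "report alleges fraud and waste in snap",
]
print([sentiment(h) for h in headlines])   # → [2, -2]
```

Averaging such scores per outlet and per state is one simple way to surface the partisan and regional patterns the abstract describes.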
976.
Omega, 2017
The conventional Malmquist productivity index (MPI), which ignores the internal structure of a production system when measuring changes in performance between two periods, may produce misleading results. This paper thus takes the operations of the component processes into account in investigating the MPI of parallel production systems. A relational data envelopment analysis (DEA) model is developed to measure the biennial MPIs of the system and the internal processes at the same time, and it is shown that the former is a linear combination of the latter. This decomposition helps identify the processes that cause a decline in system performance. An example of 39 branches of a commercial bank, with deposits, sales, and services as the three major functions operating in parallel, is used to illustrate the approach.
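Once the DEA step has produced efficiency scores against a pooled two-period ("biennial") frontier, the biennial MPI is just their ratio, with MPI > 1 signalling productivity growth. A minimal sketch with illustrative scores (the DEA computation that would produce them, and the paper's process-level decomposition, are omitted):

```python
def biennial_mpi(eff_t, eff_t1):
    """Biennial Malmquist index: E^B(t+1) / E^B(t), where both
    efficiencies are measured against one common biennial frontier."""
    return eff_t1 / eff_t

# Illustrative biennial efficiency scores for two bank branches
branches = {"branch_A": (0.80, 0.88), "branch_B": (0.95, 0.76)}
for name, (e0, e1) in branches.items():
    mpi = biennial_mpi(e0, e1)
    verdict = "progressed" if mpi > 1 else "regressed"
    print(f"{name}: MPI={mpi:.3f} ({verdict})")
```

In the paper's parallel-system setting, the system-level MPI is additionally shown to be a linear combination of such process-level indices, which is what lets one trace a decline to a specific process.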
977.
Omega, 2017
Determining the least distance to the efficient frontier for estimating technical inefficiency, with the consequent determination of closest targets, has been one of the relevant issues in recent Data Envelopment Analysis literature. This new paradigm contrasts with traditional approaches, which yield furthest targets. Several techniques have been proposed to implement the new paradigm. One group of these techniques is based on identifying all the efficient faces of the polyhedral production possibility set and is therefore associated with solving an NP-hard problem. A second group proposes different models and particular algorithms that avoid the explicit identification of all these faces. These techniques have been applied with varying success, but the new paradigm remains unsatisfactory and incomplete in certain respects. One remaining challenge is measuring technical inefficiency in the context of oriented models, i.e., models that aim at changing inputs or outputs but not both. In this paper, we show that the existing techniques for determining the least distance without explicitly identifying the frontier structure, which were developed for graph measures that change inputs and outputs at the same time, do not work for oriented models. Consequently, a new methodology for handling these situations satisfactorily is proposed. Finally, the new approach is checked empirically on a recent PISA database of 902 schools.
978.
Omega, 2017
How to determine weights for attributes is one of the key issues in multiple attribute decision making (MADM). This paper investigates a new approach for determining attribute weights based on a data envelopment analysis model without explicit inputs (DEA-WEI) and minimax reference point optimisation. The approach first considers a set of preliminary weights and the most favourable set of weights for each alternative or decision making unit (DMU), and then aggregates these weight sets to find the best compromise weights for the attributes, taking the interests of all DMUs into account fairly and simultaneously. It is intended to support MADM problems such as performance assessment and policy analysis where (a) the preferences of decision makers (DMs) are unclear, partial, or difficult to acquire, and (b) there is a need to consider the best "will" of each DMU. Two case studies show the properties of the proposed approach and how to use it to determine attribute weights in practice: the first assesses the research strengths of the EU-28 member countries under certain measures, and the second analyses the performance of China's Project 985 universities, where the attribute weights need to be assigned in a fair and unbiased manner.
979.
The Box–Cox power transformation is a commonly used methodology for transforming the distribution of data toward normality, and it relies on a single transformation parameter. In this study, we focus on the estimation of this parameter. For this purpose, we employ seven popular goodness-of-fit tests for normality, namely the Shapiro–Wilk, Anderson–Darling, Cramér–von Mises, Pearson chi-square, Shapiro–Francia, Lilliefors, and Jarque–Bera tests, together with a searching algorithm. The searching algorithm finds the argument of the minimum or maximum of the test statistic, depending on the test: the maximum for Shapiro–Wilk and Shapiro–Francia, the minimum for the rest. The artificial covariate method of Dag et al. (2014) is also included for comparison. Simulation studies are implemented to compare the performance of the methods. Results show that Shapiro–Wilk and the artificial covariate method are more effective than the others, while Pearson chi-square performs worst. The methods are also applied to two real-life datasets, and the R package AID is proposed for their implementation.
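The searching idea can be sketched as follows, assuming a simple grid search rather than the paper's exact algorithm: pick the λ whose transformed sample maximizes the Shapiro–Wilk W statistic (one of the seven criteria above):

```python
import numpy as np
from scipy import stats

def boxcox(x, lam):
    """Box-Cox transform: (x^lam - 1)/lam for lam != 0, log(x) at lam = 0."""
    return np.log(x) if abs(lam) < 1e-12 else (x ** lam - 1) / lam

def estimate_lambda(x, grid=np.linspace(-2, 2, 401)):
    """Grid-search lambda maximizing the Shapiro-Wilk W statistic
    of the transformed sample (maximize for SW; other tests in the
    paper would be minimized instead)."""
    return max(grid, key=lambda lam: stats.shapiro(boxcox(x, lam)).statistic)

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.5, size=300)   # log-normal: true lam = 0
lam_hat = estimate_lambda(x)
print(round(lam_hat, 2))   # close to 0 for log-normal data
```

For comparison, `scipy.stats.boxcox` chooses λ by maximizing the profile log-likelihood rather than a goodness-of-fit statistic.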
980.
The purpose of this study was to use simulated data based on an ongoing randomized clinical trial (RCT) to evaluate the effects of treatment switching, with randomization as an instrumental variable (IV), at differing levels of treatment crossover, for continuous and binary outcomes. Data were analyzed using IV, intent-to-treat (ITT), and per-protocol (PP) methods. The IV method performed best: it provided the least biased point estimates and had equal or higher power and higher coverage probabilities than the ITT estimates, whereas a PP analysis can be biased because it excludes non-compliant patients.
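The contrast between the three estimators can be sketched with the Wald form of the IV estimator, using randomization as the instrument. The data-generating process below is illustrative only (a constant treatment effect of 2.0 and crossover driven by an unobserved prognostic factor), not the trial's actual simulation design:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
z = rng.integers(0, 2, n)                  # randomized arm = the instrument
u = rng.normal(0, 1, n)                    # unobserved prognostic factor
sick = 1 / (1 + np.exp(-u))                # sicker patients seek treatment
# Two-sided crossover that depends on u, so per-protocol is confounded
crossed = (z == 0) & (rng.random(n) < 0.4 * sick)
dropped = (z == 1) & (rng.random(n) < 0.2 * (1 - sick))
d = (((z == 1) & ~dropped) | crossed).astype(int)
y = 2.0 * d + u + rng.normal(0, 1, n)      # true treatment effect = 2.0

itt = y[z == 1].mean() - y[z == 0].mean()           # intent-to-treat
iv = itt / (d[z == 1].mean() - d[z == 0].mean())    # Wald / IV estimate
pp = y[d == 1].mean() - y[d == 0].mean()            # naive per-protocol
print(f"ITT={itt:.2f}  IV={iv:.2f}  PP={pp:.2f}")
```

In this setup the ITT estimate is diluted by non-compliance and the PP estimate is biased upward by the prognosis-driven switching, while the IV estimate recovers a value near the true effect.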