111.
Access management, which systematically limits opportunities for egress and ingress of vehicles to highway lanes, is critical to protecting trillions of dollars of current investment in transportation. This article addresses allocating resources for access management with incomplete and partially relevant data on crash rates, travel speeds, and other factors. While access management can be effective in avoiding crashes, reducing travel times, and increasing route capacities, the literature suggests a need for performance metrics to guide investments in resource allocation across large corridor networks and several time horizons. In this article, we describe a quantitative decision model to support an access management program via risk-cost-benefit analysis under data uncertainties from diverse sources of data and expertise. The approach quantifies the potential benefits of access management, including safety improvement and travel time savings, as well as its costs, through functional relationships among input parameters including crash rates, corridor access-point densities, and traffic volumes. Parameter uncertainties, which vary across locales and experts, are addressed via numerical interval analysis. This approach is demonstrated at several geographic scales across 7,000 kilometers of highways in a geographic region and several subregions. The demonstration prioritizes route segments that would benefit from risk management, including (i) additional data or elicitation, (ii) right-of-way purchases, (iii) restriction or closing of access points, (iv) new alignments, (v) developer proffers, and (vi) other measures. The approach should be of wide interest to analysts, planners, policymakers, and stakeholders who rely on heterogeneous data and expertise for risk management.
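The interval treatment of uncertain parameters can be sketched in a few lines of code. The fragment below is a hypothetical illustration only, not the authors' model: it propagates expert-elicited intervals for crash rates, traffic volumes, crash valuations, and treatment effectiveness through a simple safety-benefit formula, and ranks two made-up segments by the conservative (lower) bound of their benefit.

```python
# Illustrative sketch (not the authors' model): interval-based benefit screening
# of corridor segments under uncertain crash rates and traffic volumes.
# All segment names, parameter ranges, and valuation figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

def benefit_interval(crash_rate, volume, value_per_crash, effectiveness):
    """Annual safety benefit = crashes avoided x value per crash, all as intervals."""
    crashes_avoided = crash_rate * volume * effectiveness
    return crashes_avoided * value_per_crash

# Two hypothetical segments with expert-elicited parameter ranges.
segments = {
    "Segment A": benefit_interval(Interval(1.2e-6, 2.0e-6),   # crashes per vehicle-km
                                  Interval(8e6, 9e6),         # annual vehicle-km
                                  Interval(0.4e6, 0.6e6),     # dollars per crash avoided
                                  Interval(0.15, 0.30)),      # fraction of crashes avoided
    "Segment B": benefit_interval(Interval(0.8e-6, 1.1e-6),
                                  Interval(5e6, 6e6),
                                  Interval(0.4e6, 0.6e6),
                                  Interval(0.10, 0.25)),
}

# Rank segments by the lower bound of the benefit interval (a conservative rule).
for name, b in sorted(segments.items(), key=lambda kv: kv[1].lo, reverse=True):
    print(f"{name}: annual benefit in [{b.lo:,.0f}, {b.hi:,.0f}] dollars")
```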
112.
Although sustainability has recently become a key topic in the social sciences, little progress has been made toward improving the measurement of sustainability performance. This paper proposes composite corporate sustainability performance indicators based on a meta-frontier generalized directional distance function. The approach measures the efficiency of corporate social responsibility activities by benchmarking, while simultaneously accounting for industrial heterogeneity through the meta-frontier. First, we propose the concept of a meta-frontier generalized directional distance function. Second, several standardized composite indicators of corporate sustainability performance are developed. The meta-frontier directional distance function is estimated by solving a series of data envelopment analysis models. Chinese state-owned listed enterprises are then examined empirically using the proposed model. We find significant group heterogeneity in corporate sustainability performance and derive several policy implications from the empirical results.
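As a rough indication of how such distance functions are computed, the sketch below solves a basic directional distance function under constant returns to scale with scipy's linear-programming solver. It omits the paper's generalized formulation, undesirable outputs, and the group versus meta-frontier comparison; the firm data and direction vectors are hypothetical.

```python
# A minimal sketch (not the paper's full meta-frontier model): a directional
# distance function for one DMU under constant returns to scale via linprog.

import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, o, gx, gy):
    """
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs), o: index of evaluated DMU.
    gx, gy: direction vector (contract inputs by beta*gx, expand outputs by beta*gy).
    Returns the inefficiency score beta (0 = on the frontier).
    """
    m, n = X.shape
    # Decision variables: [beta, lambda_1, ..., lambda_n]; maximize beta.
    c = np.concatenate(([-1.0], np.zeros(n)))
    # Input rows:  sum_j lambda_j x_ij + beta*gx_i <= x_io
    A_in = np.hstack([gx.reshape(-1, 1), X])
    b_in = X[:, o]
    # Output rows: -sum_j lambda_j y_rj + beta*gy_r <= -y_ro
    A_out = np.hstack([gy.reshape(-1, 1), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Hypothetical data: 2 inputs, 1 output, 5 firms; direction = the firm's own mix.
X = np.array([[4.0, 6.0, 5.0, 8.0, 7.0],
              [2.0, 3.0, 2.5, 5.0, 4.0]])
Y = np.array([[10.0, 12.0, 11.0, 13.0, 9.0]])

for j in range(X.shape[1]):
    beta = directional_distance(X, Y, j, gx=X[:, j], gy=Y[:, j])
    print(f"firm {j}: inefficiency beta = {beta:.3f}")
```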
113.
In this paper we consider the use of data envelopment analysis (DEA) for assessing the efficiency of units whose output profiles exhibit specialisation. An example is found in agriculture, where a large number of different crops may be produced in a particular region but only a few farms actually produce each particular crop. Because of the large number of outputs, the use of conventional DEA models in such applications results in poor efficiency discrimination. We overcome this problem by specifying production trade-offs between different outputs, relying on the methodology of Podinovski (J Oper Res Soc 2004;55:1311–22). The main idea of our approach is to relate the various outputs to the production of the main output. We illustrate this methodology with an application of DEA to agricultural farms in different regions of Turkey. An integral part of this application is the elicitation of expert judgements to formulate the required production trade-offs. Their use in the DEA models results in a significant improvement in efficiency discrimination. The proposed methodology should also be of interest in other applications of DEA where units may exhibit specialisation, such as applications involving hospitals or bank branches.
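A minimal sketch of the trade-off idea, loosely following Podinovski's envelopment formulation, is shown below: an output-oriented CRS model in which an expert-specified trade-off between two crops enters the constraints with its own nonnegative multiplier. The farms, crops, and trade-off values are hypothetical, and the model is a simplification of the one used in the paper.

```python
# Sketch of an output-oriented CRS DEA model with a production trade-off
# in the spirit of Podinovski (2004); not the paper's full specification.

import numpy as np
from scipy.optimize import linprog

def output_expansion_with_tradeoffs(X, Y, o, P, Q):
    """
    X: (m x n) inputs, Y: (s x n) outputs, o: evaluated farm index.
    P: (m x T) input components of T trade-offs, Q: (s x T) output components.
    Returns eta >= 1: the factor by which all outputs of farm o could expand.
    """
    m, n = X.shape
    s, T = Q.shape
    # Variables: [eta, lambda_1..lambda_n, pi_1..pi_T]; maximize eta.
    c = np.concatenate(([-1.0], np.zeros(n + T)))
    # Inputs:  sum_j lambda_j x_ij + sum_t pi_t P_it <= x_io
    A_in = np.hstack([np.zeros((m, 1)), X, P])
    b_in = X[:, o]
    # Outputs: eta*y_ro - sum_j lambda_j y_rj - sum_t pi_t Q_rt <= 0
    A_out = np.hstack([Y[:, [o]], -Y, -Q])
    b_out = np.zeros(s)
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n + T), method="highs")
    return res.x[0]

# Hypothetical data: 1 input (land), 2 crops (wheat, barley), 4 farms.
X = np.array([[100.0, 120.0, 90.0, 110.0]])
Y = np.array([[300.0,   0.0, 150.0, 250.0],    # wheat
              [  0.0, 260.0, 120.0,  20.0]])   # barley
# Expert trade-off: giving up 1 tonne of wheat allows at least 0.8 tonnes of barley.
P = np.zeros((1, 1))
Q = np.array([[-1.0], [0.8]])

for j in range(X.shape[1]):
    eta = output_expansion_with_tradeoffs(X, Y, j, P, Q)
    print(f"farm {j}: output efficiency = {1.0 / eta:.3f}")
```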
114.
As environmental constraints on economic growth tighten, the allocation of Carbon Emissions Abatement (CEA) has become a significant issue that draws academic attention. In the literature, the Data Envelopment Analysis (DEA) technique has been applied to obtain CEA allocations with centralized models. Nevertheless, a centralized allocation plan is difficult to implement because the decision-making units (DMUs) must be persuaded into an agreement. In this paper, we propose a new two-step method to mitigate this difficulty. In the first step, we provide improved DEA-based centralized allocation models under the assumptions of constant returns to scale (CRS) and variable returns to scale (VRS), respectively; in the second step, two compensation schemes are developed for the centralized allocation plans. An empirical application to the countries of the Organisation for Economic Co-operation and Development (OECD) illustrates the main idea.
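To illustrate only the allocation-plus-compensation structure, not the paper's CRS/VRS centralized DEA models, the sketch below allocates a national abatement target across DMUs with a simple cost-minimizing linear program and then computes compensation transfers relative to an emissions-proportional benchmark. All costs, caps, and the carbon price are hypothetical.

```python
# Illustrative two-step sketch (not the paper's models): (1) a centralized LP that
# allocates a total abatement target at minimum aggregate cost, and (2) a simple
# compensation transfer for DMUs abating more than their proportional share.

import numpy as np
from scipy.optimize import linprog

emissions = np.array([120.0, 80.0, 60.0, 40.0])   # current emissions (Mt), hypothetical
unit_cost = np.array([30.0, 45.0, 25.0, 60.0])    # abatement cost ($/t), hypothetical
target = 60.0                                      # total abatement required (Mt)
cap = 0.5 * emissions                              # no DMU abates more than half its emissions

# Step 1: centralized allocation minimizing total abatement cost.
res = linprog(unit_cost,
              A_eq=np.ones((1, emissions.size)), b_eq=[target],
              bounds=list(zip(np.zeros_like(cap), cap)), method="highs")
allocation = res.x

# Step 2: compensation relative to an emissions-proportional benchmark.
benchmark = target * emissions / emissions.sum()
carbon_price = 40.0                                  # $/t, hypothetical
transfers = carbon_price * (allocation - benchmark)  # positive = receives compensation

for j, (a, t) in enumerate(zip(allocation, transfers)):
    print(f"DMU {j}: abates {a:5.1f} Mt, compensation {t:+8.1f} $M")
```

By construction the transfers sum to zero, so the compensation step redistributes the burden without changing the total abatement achieved.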
115.
As social problems grow increasingly complex, big data applications offer governments new ways to tackle intractable problems, improve governance capacity, and respond to challenges, but they also create a degree of adaptation difficulty. Understanding how big data can be applied soundly in China's governance practice and effectively improve governance capacity therefore requires drawing on successful experience at home and abroad and examining the question from two perspectives: opportunities and challenges. The analysis finds that big data can promote collaborative governance, science-based government decision making, and standardized data management, but that it also faces obstacles such as data silos, shortages of professional capacity, and lagging policies and regulations. The contribution of this study is to analyze how big data improves government governance capacity and to provide a reference for the Chinese government to make better use of big data technology in future governance practice.
116.
There are no practical and effective mechanisms for sharing high-dimensional data that include sensitive information, in fields such as health, financial intelligence, or socioeconomics, without either compromising the utility of the data or exposing private personal or secure organizational information. Excessive scrambling or encoding of the information makes it less useful for modelling or analytical processing, while insufficient preprocessing may compromise sensitive information and introduce a substantial risk of re-identification of individuals through various stratification techniques. To address this problem, we developed a novel statistical obfuscation method (DataSifter) for on-the-fly de-identification of structured and unstructured sensitive high-dimensional data, such as clinical data from electronic health records (EHR). DataSifter provides complete administrative control over the balance between the risk of data re-identification and the preservation of the data's information content. Simulation results suggest that DataSifter can provide privacy protection while maintaining data utility for different types of outcomes of interest. An application of DataSifter to a large autism dataset provides a realistic demonstration of its promise for practical applications.
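The utility-privacy dial described in the abstract can be illustrated with a toy obfuscation routine. The sketch below is emphatically not the DataSifter algorithm: it merely swaps a tunable fraction of each sensitive column's values between records with similar values, so column-level statistics are preserved while individual records drift away from their original values. All data are hypothetical.

```python
# Toy illustration of tunable obfuscation (NOT the DataSifter method): swap a
# fraction of each sensitive column's values between records with close values.

import numpy as np

def sift(data, sensitive_cols, level, seed=0):
    """
    data: (n x p) numeric array; sensitive_cols: indices of columns to obfuscate;
    level in [0, 1]: 0 leaves the data unchanged, larger values swap more records.
    """
    rng = np.random.default_rng(seed)
    out = data.astype(float).copy()
    n = out.shape[0]
    k = int(level * n) // 2 * 2                   # affected rows, rounded to whole pairs
    for col in sensitive_cols:
        rows = rng.choice(n, size=k, replace=False)
        rows = rows[np.argsort(out[rows, col])]   # order the selection by value so that
        pairs = rows.reshape(-1, 2)               # each row swaps with a close neighbour
        out[pairs[:, 0], col], out[pairs[:, 1], col] = (
            out[pairs[:, 1], col].copy(), out[pairs[:, 0], col].copy())
    return out

# Hypothetical records: age, income, systolic blood pressure for 8 patients.
raw = np.array([[34, 52000, 118], [51, 61000, 135], [29, 40000, 110],
                [62, 75000, 142], [45, 58000, 125], [38, 47000, 121],
                [57, 69000, 139], [41, 55000, 127]], dtype=float)

# Column means and variances are unchanged by the swaps, which is one crude sense
# in which aggregate utility is preserved while individual records are perturbed.
for level in (0.0, 0.5, 1.0):
    sifted = sift(raw, sensitive_cols=[1, 2], level=level, seed=1)
    drift = np.abs(sifted - raw).mean()
    print(f"level={level:.1f}: mean absolute change per cell = {drift:8.1f}")
```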
117.
平卫英 et al. 《统计研究》2021, 38(12): 19-29
In recent years, more and more enterprises have chosen to provide internet services to households at low or zero prices, yet the value of this output and the associated consumption behaviour are not captured in GDP accounting. The underestimation of service-sector output has long been discussed in academia; against this background, the challenge that the free business models created by the internet economy pose to traditional accounting theory is the starting point of this paper. We study the valuation of free internet services within a barter framework of "free internet services exchanged for customer value", linking the accounting of free internet services to the accounting of data assets, so that data become the bridge connecting free internet services with the production, consumption, and income accounts of national accounting. A simulated accounting example illustrates the impact of accounting for free internet services on different accounts: we suggest adding an imputed entry to each of the household sector's production, consumption, and income accounts, and an imputed entry to each of the enterprise sector's production, income, and asset accounts, with these six imputed values being equal.
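A toy numerical version of the proposed treatment is sketched below; the imputed value V is arbitrary, and the account labels in parentheses are an interpretive gloss rather than the paper's wording. A single barter value generates three equal imputed entries for the household sector and three for the enterprise sector.

```python
# Hypothetical illustration of the six equal imputed entries implied by the
# barter treatment of free internet services (labels in parentheses are a gloss).

V = 100.0  # imputed annual value of free internet services, arbitrary units

entries = {
    "household sector": {"production (data supplied to the platform)": V,
                         "consumption (free services consumed)": V,
                         "income (in-kind income from the barter)": V},
    "enterprise sector": {"production (free services provided)": V,
                          "income (data received through the barter)": V,
                          "assets (data recognised as an asset)": V},
}

# Each sector both gives up and receives value V in the barter, so the six equal
# imputations raise measured output without changing either sector's net position.
for sector, accounts in entries.items():
    for account, value in accounts.items():
        print(f"{sector:18s} | {account:45s} | {value:8.1f}")
```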
118.
王芝皓 et al. 《统计研究》2021, 38(7): 127-139
In practical data analysis, one frequently encounters zero-inflated count data as the response, associated with both a functional random predictor and a random vector of scalar predictors. This paper considers a functional partial varying-coefficient zero-inflated model (FPVCZIM), in which the infinite-dimensional slope function is approximated with a functional principal component basis and the coefficient functions are fitted with B-splines. Estimators are obtained via an EM algorithm, and their theoretical properties are studied; under some regularity conditions, convergence rates for the estimated slope function and coefficient functions are derived. Finite-sample Monte Carlo simulations and a real-data analysis illustrate the proposed method.
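A much-simplified sketch of the modelling idea, not the paper's FPVCZIM estimator, is given below: the functional predictor is projected onto its leading principal components, and a zero-inflated Poisson regression is fitted to the scores plus a scalar covariate using statsmodels. The paper's varying-coefficient (B-spline) terms and EM algorithm are omitted, and the simulated data-generating process is hypothetical.

```python
# Simplified sketch: functional principal component scores as covariates in a
# zero-inflated Poisson regression (maximum likelihood via statsmodels).

import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n, n_grid = 500, 50
t = np.linspace(0, 1, n_grid)

# Simulated functional predictor X_i(t) and scalar covariate z_i (hypothetical DGP).
scores_true = rng.normal(size=(n, 2)) * [1.0, 0.5]
X = scores_true[:, [0]] * np.sin(np.pi * t) + scores_true[:, [1]] * np.cos(np.pi * t)
z = rng.normal(size=n)

# Zero-inflated Poisson response: structural zeros occur with probability 0.3.
linpred = 0.5 + 0.8 * scores_true[:, 0] - 0.4 * scores_true[:, 1] + 0.6 * z
y = rng.poisson(np.exp(linpred)) * (rng.uniform(size=n) > 0.3)

# Functional principal component scores via SVD of the centred curves.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
fpc_scores = Xc @ Vt[:2].T            # scores on the first two components

exog = sm.add_constant(np.column_stack([fpc_scores, z]))
model = ZeroInflatedPoisson(y, exog, exog_infl=np.ones((n, 1)), inflation="logit")
result = model.fit(method="bfgs", maxiter=500, disp=0)
print(result.summary())
```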
119.
This paper draws on empirical research with NEET populations (16–24-year-olds not in education, employment or training) in the U.K. in order to engage with issues around identification, data, and the metrics produced through datalogical systems. Our aim is to bridge contemporary discourses around data, digital bureaucracy and datalogical systems with empirical material drawn from a long-term ethnographic project with NEET groups in Leeds, U.K., in order to highlight the way datalogical systems ideologically and politically shape people's lives. We argue, first, that NEET is a long-standing data category that does work and has resonance within wider datalogical systems. Second, these systems make decisions and are far from benign: they have a real impact on people's lives, not only in straightforward ways but in obscure, complex and uneven ways, which makes the potential for disruption or intervention increasingly problematic. Finally, these datalogical systems also implicate and are generated by us, even as we seek to critique them.
120.
Performance rating and comparison of a group of entities is frequently based on the values of several attributes. Such evaluations are often complicated by the absence of a natural or obvious way to weight the importance of the individual dimensions of performance. This paper proposes a framework based on nonparametric frontiers to rate entities described by multiple performance attributes and classify them into ‘performers’ and ‘underperformers’. The method is equivalent to Data Envelopment Analysis (DEA) with entities defined only by outputs. In the spirit of DEA, the weights for each attribute are selected to maximize each entity’s performance score. This approach, however, results in a new linear program that is more direct and intuitive than traditional DEA formulations. The model can be easily understood and interpreted by practitioners, since it conforms better to the practice of evaluating and comparing performance using standard specifications. We illustrate the model’s use with two examples: the first evaluates the performance of employees; the second is an application in manufacturing where multiple quality attributes are used to assess and compare the performance of different manufacturing processes.
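The output-only rating can be written as a small linear program in the benefit-of-the-doubt style; the sketch below may differ in detail from the paper's formulation, and the employee data are hypothetical. Each entity picks the nonnegative attribute weights most favourable to itself, subject to no entity's weighted score exceeding one; entities reaching a score of one are the ‘performers’.

```python
# Minimal sketch of output-only, best-weights performance rating (benefit-of-the-
# doubt style); not necessarily the paper's exact model. Data are hypothetical.

import numpy as np
from scipy.optimize import linprog

def rating(Y, o):
    """
    Y: (n entities x s attributes), larger is better. For entity o, choose
    nonnegative weights maximizing its weighted score, subject to every entity's
    weighted score being at most 1. A score of 1 marks a 'performer'.
    """
    n, s = Y.shape
    res = linprog(-Y[o],                      # maximize entity o's weighted score
                  A_ub=Y, b_ub=np.ones(n),    # no entity may exceed a score of 1
                  bounds=[(0, None)] * s, method="highs")
    return -res.fun

# Hypothetical employee evaluation: 5 employees, 3 performance attributes.
Y = np.array([[8.0, 6.0, 9.0],
              [9.0, 5.0, 6.0],
              [6.0, 9.0, 7.0],
              [7.0, 7.0, 7.0],
              [5.0, 4.0, 6.0]])

for o in range(Y.shape[0]):
    score = rating(Y, o)
    label = "performer" if score > 0.999 else "underperformer"
    print(f"employee {o}: score = {score:.3f} -> {label}")
```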