991.
While performance management (PM) is pervasive across contemporary workplaces, extant research into how performance management affects workers is often indirect or scattered across disciplinary silos. This paper reviews and synthesizes this research, identifies key gaps and explores ‘recognition theory’ as a nascent framework that can further develop this important body of knowledge. The paper develops in three main stages. The first stage reviews ‘mainstream’ human resource management (HRM) research. While this research analyses workers’ reactions to performance management in some depth, its focus on serving organizational goals marginalizes extra‐organizational impacts. The second stage reviews more critical HRM research, which interprets performance management as a disciplinary, coercive or inequitable management device. While this literature adds an important focus on organizational power, there is scope to analyse further how PM affects workers’ well‐being. To develop this strand of PM research, the third stage turns to the emerging field of recognition theory independently developed by Axel Honneth and Christophe Dejours. The authors focus especially on recognition theory's exploration of how (in)adequate acknowledgement of workers’ contributions can significantly affect their well‐being at the level of self‐conception. Although recognition theory is inherently critical, the paper argues that it can advance both mainstream and critical performance management research, and also inform broader inquiry into recognition and identity at work.
992.
Paradox and dual‐process theories are used by management and organization researchers in studying a variety of phenomena across a wide range of management sub‐fields. Cognition is a focal point of both of these theories. However, despite their growing importance and shared areas of inquiry, these two theories have developed largely in isolation from each other. To address this lack of integration, the authors conducted a review and synthesis of relevant aspects of the paradox and dual‐process literatures. Focusing bidirectionally on how paradox theory informs dual‐process theory and how dual‐process theory informs paradox theory, they highlight the ‘nestedness’ of intuition and analysis in paradox (a paradox within paradoxical thinking). On the basis of the review and synthesis, they identify four themes (epistemological and ontological assumptions in the relationship between intuition and analysis; psychological and psychometric issues in the relationship between intuition and analysis; managers’ experiences of tensions between intuition and analysis; managers’ approaches to tensions between intuition and analysis) and introduce an integrative framework that assimilates these two perspectives and sets out an agenda for future research and implications for management.
993.
A hazard is often spatially local in a network system, but its impact can spread out through network topology and become global. To qualitatively and quantitatively assess the impact of spatially local hazards on network systems, this article develops a new spatial vulnerability model by taking into account hazard location, area covered by the hazard, and impact of the hazard (including direct impact and indirect impact), and proposes an absolute spatial vulnerability index (ASVI) and a relative spatial vulnerability index (RSVI). The relationship between the new model and some relevant traditional network properties is also analyzed. A case study on the spatial vulnerability of the Chinese civil aviation network system is conducted to demonstrate the effectiveness of the model, and another case study, on the Beijing subway network system, is conducted to verify the model's relationship with traditional network properties.
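The abstract does not give the formal definitions of the ASVI and RSVI, so the sketch below (Python, using networkx) assumes an illustrative efficiency-loss reading: nodes inside the hazard's coverage area fail, and vulnerability is measured as the resulting drop in global network efficiency, in absolute and relative terms. The index formulas, function names, and toy hub-and-spoke network are all hypothetical.

```python
# A minimal sketch under an assumed definition of spatial vulnerability:
# nodes covered by the hazard fail (direct impact), and the damage is read off
# as lost global efficiency elsewhere in the network (indirect impact).
import networkx as nx

def spatial_vulnerability(g, positions, hazard_center, hazard_radius):
    """Hypothetical absolute/relative spatial vulnerability of g to a circular hazard."""
    cx, cy = hazard_center
    hit = [n for n, (x, y) in positions.items()
           if (x - cx) ** 2 + (y - cy) ** 2 <= hazard_radius ** 2]
    e_before = nx.global_efficiency(g)
    damaged = g.copy()
    damaged.remove_nodes_from(hit)            # direct impact: covered nodes fail
    e_after = nx.global_efficiency(damaged)   # indirect impact: connectivity lost elsewhere
    asvi = e_before - e_after                       # illustrative absolute index
    rsvi = asvi / e_before if e_before else 0.0     # illustrative relative index
    return asvi, rsvi

# Toy hub-and-spoke network (hub at the origin) with a hazard centred on the hub
g = nx.star_graph(5)
pos = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (-1, 0), 4: (0, -1), 5: (1, 1)}
print(spatial_vulnerability(g, pos, hazard_center=(0, 0), hazard_radius=0.5))
```

In this toy case the hazard removes the hub, so the relative index reaches 1.0, which is the kind of locally small but globally severe failure the model is meant to capture.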
994.
Autonomous vehicles (AVs) promise to make traffic safer, but their societal integration poses ethical challenges. What behavior of AVs is morally acceptable in critical traffic situations when consequences are only probabilistically known (a situation of risk) or even unknown (a situation of uncertainty)? How do people retrospectively evaluate the behavior of an AV in situations in which a road user has been harmed? We addressed these questions in two empirical studies (N = 1,638) that approximated the real‐world conditions under which AVs operate by varying the degree of risk and uncertainty of the situation. In Experiment 1, subjects learned that an AV had to decide between staying in the lane or swerving. Each action could lead to a collision with another road user, with some known or unknown likelihood. Subjects’ decision preferences and moral judgments varied considerably with specified probabilities under risk, yet less so under uncertainty. The results suggest that staying in the lane and performing an emergency stop is considered a reasonable default, even when this action does not minimize expected loss. Experiment 2 demonstrated that if an AV collided with another road user, subjects’ retrospective evaluations of the default action were also more robust against unwanted outcome and hindsight effects than the alternative swerve maneuver. The findings highlight the importance of investigating moral judgments under risk and uncertainty in order to develop policies that are societally acceptable even under critical conditions.
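The contrast the abstract draws between the in-lane default and expected-loss minimization can be made concrete with a toy calculation. The probabilities and harm values below are hypothetical and are not taken from the experiments.

```python
# A toy expected-loss comparison for the two actions described above, with
# made-up numbers chosen so that braking in the lane does NOT minimize expected
# loss, yet may still be judged the acceptable default.
p_collision_stay   = 0.8   # assumed probability of a collision when staying in the lane
p_collision_swerve = 0.3   # assumed probability of a collision when swerving
harm_stay   = 1.0          # normalized harm if the in-lane collision occurs
harm_swerve = 1.0          # normalized harm if the swerve collision occurs

expected_loss_stay   = p_collision_stay * harm_stay
expected_loss_swerve = p_collision_swerve * harm_swerve

print(f"stay in lane : expected loss = {expected_loss_stay:.2f}")
print(f"swerve       : expected loss = {expected_loss_swerve:.2f}")
# Swerving minimizes expected loss here, yet the studies report that many subjects
# still prefer, and retrospectively excuse, the in-lane emergency stop.
```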
995.
Next‐generation sequencing (NGS) data present an untapped potential to improve microbial risk assessment (MRA) through increased specificity and redefinition of the hazard. Most MRA models do not account for differences in survivability and virulence among strains. The potential of machine learning algorithms for predicting the risk/health burden at the population level while inputting large and complex NGS data was explored with Listeria monocytogenes as a case study. The Listeria data consisted of a percentage similarity matrix from genome assemblies of 38 and 207 strains of clinical and food origin, respectively. The Basic Local Alignment Search Tool (BLAST) was used to align the assemblies against a database of 136 virulence and stress resistance genes. The outcome variable was frequency of illness, which is the percentage of reported cases associated with each strain. These frequency data were discretized into seven ordinal outcome categories and used for supervised machine learning and model selection from five ensemble algorithms. There was no significant difference in accuracy between the models, and a support vector machine with a linear kernel was chosen for further inference (accuracy of 89% [95% CI: 68%, 97%]). The virulence genes FAM002725, FAM002728, FAM002729, InlF, InlJ, Inlk, IisY, IisD, IisX, IisH, IisB, lmo2026, and FAM003296 were important predictors of higher frequency of illness. InlF was uniquely truncated in the sequence type 121 strains. The most important risk predictor genes occurred at the highest prevalence among strains from ready‐to‐eat, dairy, and composite foods. We foresee that the findings and approaches described offer the potential for rethinking the current approaches in MRA.
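A minimal sketch of the supervised-learning step as described: strain-level similarity features, a seven-category ordinal outcome, and a linear-kernel support vector machine evaluated by cross-validation. The data below are randomly generated stand-ins for the BLAST percentage-similarity matrix; only the dimensions (245 strains, 136 genes, 7 categories) are taken from the abstract.

```python
# Sketch of training a linear-kernel SVM on per-strain gene-similarity features
# against a discretized frequency-of-illness outcome, using synthetic data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_strains, n_genes = 245, 136
X = rng.uniform(0, 100, size=(n_strains, n_genes))       # % similarity to each gene
freq = rng.uniform(0, 1, size=n_strains)                  # frequency of illness per strain
y = np.digitize(freq, bins=np.linspace(0, 1, 8)[1:-1])    # 7 ordinal outcome categories

clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```

On real data, the learned coefficients (or a permutation-importance analysis) would point to the predictor genes, which is how gene-level findings such as the InlF result above are typically surfaced.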
996.
The establishment of interventions to maximize maternal health requires the identification of modifiable risk factors. Toward the identification of modifiable hospital‐based factors, we analyze over 2 million births from 2005 to 2010 in Texas, employing a series of quasi‐experimental tests involving hourly, daily, and monthly circumstances where medical service quality (or clinical capital) is known to vary exogenously. Motivated by a clinician's choice model, we investigate whether maternal delivery complications (1) vary by work shift, (2) increase by the hours worked within shifts, (3) increase on weekends and holidays when hospitals are typically understaffed, and (4) are higher in July when a new cohort of residents enter teaching hospitals. We find consistent evidence of a sizable statistical relationship between deliveries during nonstandard schedules and negative patient outcomes. Delivery complications are higher during night shifts (OR = 1.21, 95% CI: 1.18–1.25), and on weekends (OR = 1.09, 95% CI: 1.04–1.14) and holidays (OR = 1.29, 95% CI: 1.04–1.60), when hospitals are understaffed and less experienced doctors are more likely to work. Within shifts, we show deterioration of occupational performance per additional hour worked (OR = 1.02, 95% CI: 1.01–1.02). We observe substantial additional risk at teaching hospitals in July (OR = 1.28, 95% CI: 1.14–1.43), reflecting a cohort‐turnover effect. All results are robust to the exclusion of noninduced births and intuitively falsified with analyses of chromosomal disorders. Results from our multiple‐test strategy indicate that hospitals can meaningfully attenuate harm to maternal health through strategic scheduling of staff.
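Odds ratios of the kind quoted above are conventionally obtained by exponentiating logistic-regression coefficients. The sketch below illustrates that step on simulated data; the variable names and effect sizes are assumptions, not the study's dataset or model specification.

```python
# Sketch: logistic regression of a complication indicator on scheduling variables,
# with exponentiated coefficients and confidence intervals reported as odds ratios.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50_000
df = pd.DataFrame({
    "night_shift": rng.integers(0, 2, n),
    "weekend": rng.integers(0, 2, n),
    "july_teaching_hospital": rng.integers(0, 2, n),
})
# Simulate complications with modest positive effects for each scheduling factor
lin = -3.0 + 0.19 * df["night_shift"] + 0.09 * df["weekend"] + 0.25 * df["july_teaching_hospital"]
df["complication"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["night_shift", "weekend", "july_teaching_hospital"]])
fit = sm.Logit(df["complication"], X).fit(disp=False)
odds_ratios = np.exp(fit.params).rename("OR")
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios, ci], axis=1))
```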
997.
In this work, we study the environmental and operational factors that influence airborne transmission of nosocomial infections. We link a deterministic zonal ventilation model for the airborne distribution of infectious material in a hospital ward with a Markovian multicompartment SIS model for the infection of individuals within this ward, in order to conduct a parametric study on ventilation rates and their effect on the epidemic dynamics. Our stochastic model includes arrival and discharge of patients, as well as detection of the outbreak by screening events or due to symptoms shown by infective patients. For each ventilation setting, we measure the infectious potential of a nosocomial outbreak in the hospital ward by means of a summary statistic: the number of infections occurring within the hospital ward until the end or declaration of the outbreak. We analytically compute the distribution of this summary statistic and carry out local and global sensitivity analyses in order to identify the particular characteristics of each ventilation regime with the largest impact on the epidemic spread. Our results show that ward ventilation can have a significant impact on the infection spread, especially under slow detection scenarios or in overoccupied wards, and that decreasing the infection risk for the whole hospital ward might increase the risk in specific areas of the health‐care facility. Moreover, the location of the initial infective individual and the protocol in place for outbreak declaration both interact with the ventilation of the ward.
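A minimal Gillespie-style sketch of a multicompartment SIS process of the type described, in which infection pressure on each zone is weighted by an airborne-coupling matrix standing in for the zonal ventilation model, and a run ends at a detection event or at extinction. The rates, the coupling matrix, and the detection rule are illustrative assumptions rather than the paper's calibrated values.

```python
# Stochastic simulation of a two-zone SIS outbreak with ventilation-driven coupling,
# counting infections until the outbreak is detected or dies out (illustrative only).
import numpy as np

rng = np.random.default_rng(2)

C = np.array([[1.0, 0.3],        # C[i, j]: infectious material reaching zone i from zone j
              [0.3, 1.0]])       # (would come from the zonal ventilation model)
beta, gamma, detect_rate = 0.08, 0.05, 0.02
N = np.array([10, 10])           # patients per zone
I = np.array([1, 0])             # initial infective placed in zone 0

t, infections = 0.0, 0
while True:
    pressure = C @ I                              # airborne exposure per zone
    inf_rates = beta * (N - I) * pressure         # new-infection rates per zone
    rec_rates = gamma * I                         # recovery rates per zone
    rates = np.concatenate([inf_rates, rec_rates, [detect_rate * I.sum()]])
    total = rates.sum()
    if total == 0:
        break                                     # outbreak died out
    t += rng.exponential(1 / total)
    event = rng.choice(len(rates), p=rates / total)
    if event < 2:                                 # infection in zone `event`
        I[event] += 1
        infections += 1
    elif event < 4:                               # recovery in zone `event - 2`
        I[event - 2] -= 1
    else:                                         # detection by screening or symptoms
        break

print(f"infections before detection/extinction: {infections} (t = {t:.1f})")
```

Repeating such runs many times (or, as in the paper, computing the distribution analytically) gives the distribution of the summary statistic for each ventilation setting.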
998.
Humans are continuously exposed to chemicals with suspected or proven endocrine‐disrupting properties (endocrine‐disrupting chemicals, EDCs). Risk management of EDCs presents a major unmet challenge because the available data on adverse health effects are generated by examining one compound at a time, whereas real‐life exposures are to mixtures of chemicals. In this work, we integrate epidemiological and experimental evidence toward a whole mixture strategy for risk assessment. To illustrate, we conduct the following four steps in a case study: (1) identification of single EDCs (“bad actors”)—measured in prenatal blood/urine in the SELMA study—that are associated with a shorter anogenital distance (AGD) in baby boys; (2) definition and construction of a “typical” mixture consisting of the “bad actors” identified in Step 1; (3) experimental testing of this mixture in an in vivo animal model to estimate a dose–response relationship and determine a point of departure (i.e., reference dose [RfD]) associated with an adverse health outcome; and (4) use of a statistical measure of “sufficient similarity” to compare the experimental RfD (from Step 3) to the exposure measured in the human population and generate a “similar mixture risk indicator” (SMRI). The objective of this exercise is to generate a proof of concept for the systematic integration of epidemiological and experimental evidence with mixture risk assessment strategies. Using a whole mixture approach, we find a higher proportion of pregnant women at risk (13%) than with more traditional additivity models (3%) or a compound‐by‐compound strategy (1.6%).
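The abstract does not spell out how the SMRI is computed, so the sketch below assumes one plausible reading: each woman's measured mixture exposure is expressed as an equivalent dose of the reference mixture and compared against the experimentally derived RfD. The exposure data, mixture recipe, and RfD are all made-up illustrative values, not SELMA data.

```python
# Hypothetical Step 4: compare measured exposures, projected onto the reference
# mixture, against the reference dose and flag the share exceeding it.
import numpy as np

rng = np.random.default_rng(3)

reference_mixture = np.array([0.5, 0.3, 0.2])   # assumed proportions of the "bad actor" EDCs
rfd_total = 10.0                                 # assumed RfD for the whole mixture (ng/mL)

# Simulated biomonitoring data: per-woman concentrations of the three EDCs (ng/mL)
exposures = rng.lognormal(mean=0.0, sigma=1.0, size=(2000, 3))

# Equivalent dose of the reference mixture contained in each woman's exposure,
# limited by the least-represented component relative to the recipe.
equivalent_dose = (exposures / reference_mixture).min(axis=1)
smri = equivalent_dose / rfd_total               # values above 1 would flag exposure above the RfD

print(f"share of women with SMRI > 1: {(smri > 1).mean():.1%}")
```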
999.
The persistent gap in flood risk awareness in Canada, and elsewhere in North America, is a continual source of worry for researchers and emergency managers; many people living in at‐risk places are simply unaware of risks and of their proximity to hazards. This study seeks to understand which residents were aware of flood risk, using unique representative survey data of Calgary residents living in the city's flood‐prone neighborhoods collected after the devastating and costly 2013 Southern Alberta Flood. The article uses logistic regression models to analyze which residents were aware of risk to their homes. Findings indicate that, in addition to various demographic predictors, many of the geographic predictors (including the elevation of one's home relative to the river) are significant predictors of awareness. Having a direct sight line to one of Calgary's two rivers is also a significant predictor in some of the models, suggesting that the visibility of hazards matters for flood risk perception, although this effect fades when many of the geographic predictors are added. Finally, the models indicate that several variables related to local, neighborhood‐based social networks are significant as well. These findings reveal that both physical surroundings and social context are important for understanding risk awareness. The article concludes by discussing the relevance for social science research on disasters and hazards, as well as for planners and emergency managers.
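The attenuation of the sight-line effect once geographic predictors are added is a standard confounding pattern: nested logistic models with and without the correlated predictor. The sketch below reproduces it on simulated data in which elevation alone drives awareness; the variable names and data-generating assumptions are hypothetical.

```python
# Sketch: a sight-line effect that is sizable on its own but fades once elevation
# relative to the river enters the model (simulated, elevation drives awareness).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 5_000
elevation = rng.normal(0, 1, n)                                              # standardized elevation above the river
sightline = (rng.random(n) < 1 / (1 + np.exp(2 * elevation))).astype(int)    # lower homes see the river more often
aware = (rng.random(n) < 1 / (1 + np.exp(2 * elevation))).astype(int)        # awareness driven by elevation only
df = pd.DataFrame({"aware": aware, "sightline": sightline, "elevation": elevation})

crude = smf.logit("aware ~ sightline", df).fit(disp=False)
adjusted = smf.logit("aware ~ sightline + elevation", df).fit(disp=False)
print("sight-line OR, crude   :", round(np.exp(crude.params["sightline"]), 2))
print("sight-line OR, adjusted:", round(np.exp(adjusted.params["sightline"]), 2))
```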
1000.
An optimization model was used to gain insight into cost‐effective monitoring plans for aflatoxins along the maize supply chain. The model was based on a typical Dutch maize chain, with maize grown in the Black Sea region and transported by ship to the Netherlands for use as an ingredient in compound feed for dairy cattle. Six different scenarios, with different aflatoxin concentrations at harvest and possible aflatoxin production during transport, were used. By minimizing the costs and using parameters such as the concentration, the variance of the sampling plan, and the monitoring and replacement costs, the model optimized the control points (CPs; e.g., after harvest, before or after transport by sea ship), the number of batches sampled at each CP, and the number of samples per batch. This optimization approach led to an end‐of‐chain aflatoxin concentration below the predetermined limit. The model showed that, when postharvest aflatoxin production was not possible, it was most cost‐effective to sample all batches and replace contaminated batches directly after harvest, since replacement costs were lowest at the origin of the chain. When aflatoxin production during storage was possible, it was most cost‐effective to sample and replace contaminated batches after storage and transport, to avoid duplicate monitoring and replacement costs before and after transport. The further along the chain a contaminated batch is detected, the more stakeholders are involved and the more expensive the replacement and possible recall costs become.
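A toy version of the cost trade-off the model captures: pick the single control point at which all batches are sampled and contaminated batches are replaced, minimizing expected cost per batch. The stage names, costs, and contamination probabilities below are hypothetical, not the paper's calibrated Dutch maize-chain parameters.

```python
# Enumerate candidate control points and compare expected cost per batch,
# including a downstream recall cost for contamination missed by sampling too early.
stages = [
    # (control point, replacement cost per contaminated batch, sampling cost per batch)
    ("after harvest",         100.0, 5.0),
    ("after sea transport",   250.0, 5.0),
    ("at compound-feed mill", 400.0, 5.0),
]
p_contaminated_at_harvest = 0.05
p_new_contamination_in_transport = 0.02   # set to 0.0 for the "no post-harvest production" scenario

best = None
for i, (name, replace_cost, sample_cost) in enumerate(stages):
    # Probability a batch is contaminated by the time it reaches this control point
    p = p_contaminated_at_harvest + (p_new_contamination_in_transport if i >= 1 else 0.0)
    expected_cost = sample_cost + p * replace_cost
    # Sampling only at harvest misses contamination that arises during transport
    missed = p_new_contamination_in_transport if i == 0 else 0.0
    recall_cost = 2000.0 * missed             # assumed expected recall cost for missed batches
    total = expected_cost + recall_cost
    print(f"{name:<22} expected cost/batch = {total:6.1f}")
    if best is None or total < best[1]:
        best = (name, total)

print("cheapest control point:", best[0])
```

With transport contamination switched off, the cheapest point moves to "after harvest", mirroring the qualitative pattern reported above.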