Similar Literature
20 similar documents found (search time: 15 ms)
1.
Although distributed teams have been researched extensively in information systems and decision science disciplines, a review of the literature suggests that the dominant focus has been on understanding the factors affecting performance at the team level. There has, however, been an increasing recognition that specific individuals within such teams are often critical to the team's performance. Consequently, existing knowledge about such teams may be enhanced by examining the factors that affect the performance of individual team members. This study attempts to address this need by identifying individuals who emerge as “stars” in globally distributed teams involved in knowledge work such as information systems development (ISD). Specifically, the study takes a knowledge‐centered view in explaining which factors lead to “stardom” in such teams. Further, it adopts a social network approach consistent with the core principles of structural/relational analysis in developing and empirically validating the research model. Data from U.S.–Scandinavia self‐managed “hybrid” teams engaged in systems development were used to deductively test the proposed model. The overall study has several implications for group decision making: (i) the study focuses on stars within distributed teams, who play an important role in shaping group decision making and emerge as a result of negotiated/consensual decision making within egalitarian teams; (ii) an examination of emergent stars from the team members’ point of view reflects the collective acceptance and support dimension of decision‐making contexts identified in prior literature; (iii) finally, the study suggests that the social network analysis technique using relational data can serve as a democratic decision‐making tool within groups.
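The abstract does not name the specific network measure behind “stardom”; as an illustrative sketch only, the code below computes normalized in‐degree centrality over hypothetical “seeks advice from” ties, one common relational indicator of emergent stars in social network analysis. All names and ties are invented.

```python
# Hedged sketch: the study's actual measures are unspecified; in-degree
# centrality over advice-seeking ties is one common proxy for "stardom".

def in_degree_centrality(edges, nodes):
    """Normalized in-degree: fraction of other members who nominate each node."""
    counts = {n: 0 for n in nodes}
    for source, target in edges:
        counts[target] += 1
    denom = len(nodes) - 1  # maximum possible nominations
    return {n: counts[n] / denom for n in nodes}

# Hypothetical advice ties in a five-person distributed team
team = ["ann", "bo", "chen", "dita", "erik"]
ties = [("ann", "chen"), ("bo", "chen"), ("dita", "chen"),
        ("erik", "chen"), ("ann", "dita"), ("bo", "dita")]

scores = in_degree_centrality(ties, team)
star = max(scores, key=scores.get)
print(star, scores[star])  # chen is nominated by all four teammates -> 1.0
```

Because every other member nominates "chen", that node receives the maximal normalized score of 1.0, illustrating how relational data alone can surface an emergent star.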

2.
We review approaches for characterizing “peak” exposures in epidemiologic studies and methods for incorporating peak exposure metrics in dose–response assessments that contribute to risk assessment. The focus was on potential etiologic relations between environmental chemical exposures and cancer risks. We searched the epidemiologic literature on environmental chemicals classified as carcinogens in which cancer risks were described in relation to “peak” exposures. These articles were evaluated to identify some of the challenges associated with defining and describing cancer risks in relation to peak exposures. We found that definitions of peak exposure varied considerably across studies. Of nine chemical agents included in our review of peak exposure, six had epidemiologic data used by the U.S. Environmental Protection Agency (US EPA) in dose–response assessments to derive inhalation unit risk values. These were benzene, formaldehyde, styrene, trichloroethylene, acrylonitrile, and ethylene oxide. All derived unit risks relied on cumulative exposure for dose–response estimation and none, to our knowledge, considered peak exposure metrics. This is not surprising, given the historical linear no‐threshold default model (generally based on cumulative exposure) used in regulatory risk assessments. With newly proposed US EPA rule language, fuller consideration of alternative exposure and dose–response metrics will be supported. “Peak” exposure has not been consistently defined and rarely has been evaluated in epidemiologic studies of cancer risks. We recommend developing uniform definitions of “peak” exposure to facilitate fuller evaluation of dose response for environmental chemicals and cancer risks, especially where mechanistic understanding indicates that the dose response is unlikely to be linear and that short‐term high‐intensity exposures increase risk.

3.
Inter‐customer interactions are important to the operation of self‐services in retail settings. More specifically, when self‐service terminals are used as part of customers’ checkout processes in retail operations without the explicit involvement of retailers as the direct service providers, inter‐customer interactions become a significant managerial issue. In this article, we examine the impact of inter‐customer interactions at retail self‐service terminals on customers’ service quality perceptions and repeat purchase intentions at retail stores. We conduct a scenario‐based experiment (N = 674) using a 2 × 2 factorial design in which inter‐customer interactions are either “positive” or “negative” and occur during either the “waiting” stage or the actual “transaction” stage of self‐service at a retail store. We use attribution theory to develop the hypotheses. The results demonstrate that, through their interactions, fellow customers can exert influences on a focal customer's quality perceptions and repeat purchasing intentions toward a retail store. Furthermore, these influences were shaped by how customers attribute blame or assign responsibility toward the retail store. Service operations managers should leverage these interactions by designing into self‐service settings the capacities and interfaces that are best suited for customers’ co‐production of their self‐service experiences.

4.
When assessing risks posed by environmental chemical mixtures, whole mixture approaches are preferred to component approaches. When toxicological data on whole mixtures as they occur in the environment are not available, Environmental Protection Agency guidance states that toxicity data from a mixture considered “sufficiently similar” to the environmental mixture can serve as a surrogate. We propose a novel method to examine whether mixtures are sufficiently similar, when exposure data and mixture toxicity study data from at least one representative mixture are available. We define sufficient similarity using equivalence testing methodology comparing the distance between benchmark dose estimates for mixtures in both data‐rich and data‐poor cases. We construct a “similar mixtures risk indicator” (SMRI) (analogous to the hazard index) on sufficiently similar mixtures linking exposure data with mixtures toxicology data. The methods are illustrated using pyrethroid mixtures occurrence data collected in child care centers (CCC) and dose‐response data examining acute neurobehavioral effects of pyrethroid mixtures in rats. Our method shows that the mixtures from 90% of the CCCs were sufficiently similar to the dose‐response study mixture. Using exposure estimates for a hypothetical child, the 95th percentile of the (weighted) SMRI for these sufficiently similar mixtures was 0.20 (i.e., where SMRI <1, less concern; >1, more concern).
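The abstract describes the SMRI as analogous to the hazard index but does not give its formula. The sketch below therefore computes a generic hazard‐index‐style ratio (total mixture exposure divided by a mixture reference dose), not the paper's actual weighted SMRI; every number and chemical name here is invented for illustration.

```python
# Hypothetical hazard-index-style indicator: total mixture exposure over
# the reference dose of a "sufficiently similar" test mixture. The paper's
# exact weighting scheme is not given in the abstract.

def smri(component_exposures, mixture_rfd):
    """SMRI < 1 suggests less concern; SMRI > 1 suggests more concern."""
    total_exposure = sum(component_exposures.values())
    return total_exposure / mixture_rfd

# Invented pyrethroid exposure estimates for one child (mg/kg-day)
exposures = {"permethrin": 0.004, "cypermethrin": 0.002, "deltamethrin": 0.001}
rfd = 0.05  # hypothetical mixture reference dose (mg/kg-day)

value = smri(exposures, rfd)
print(round(value, 2))  # 0.14 -> below 1, less concern
```

The design choice mirrors the hazard index convention: a dimensionless ratio against 1 makes screening results directly comparable across mixtures.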

5.
The semantics of the terms “sustainable development” and “corporate social responsibility” have changed over time to a point where these concepts have become two interrelated processes for ensuring the far‐reaching development of society. Their convergence has given dimension to the environmental and corporate regulation mechanisms in strong economies. This article deals with the question of how the ethos of this convergence could be incorporated into the self‐regulation of businesses in weak economies where nonlegal drivers are either inadequate or inefficient. It proposes that the policies for this incorporation should be based on the precepts of meta‐regulation, which have the potential to combine force majeure, economic incentives, and assistance‐related strategies to reach this objective from the perspective of weak economies.

6.
Two images, “black swans” and “perfect storms,” have struck the public's imagination and are used—at times indiscriminately—to describe the unthinkable or the extremely unlikely. These metaphors have been used as excuses to wait for an accident to happen before taking risk management measures, both in industry and government. These two images represent two distinct types of uncertainties (epistemic and aleatory). Existing statistics are often insufficient to support risk management because the sample may be too small and the system may have changed. Rationality as defined by the von Neumann axioms leads to a combination of both types of uncertainties into a single probability measure—Bayesian probability—and accounts only for risk aversion. Yet, the decisionmaker may also want to be ambiguity averse. This article presents an engineering risk analysis perspective on the problem, using all available information in support of proactive risk management decisions and considering both types of uncertainty. These measures involve monitoring of signals, precursors, and near‐misses, as well as reinforcement of the system and a thoughtful response strategy. It also involves careful examination of organizational factors such as the incentive system, which shape human performance and affect the risk of errors. In all cases, including rare events, risk quantification does not allow “prediction” of accidents and catastrophes. Instead, it is meant to support effective risk management rather than simply reacting to the latest events and headlines.

7.
A challenge for large‐scale environmental health investigations such as the National Children's Study (NCS), is characterizing exposures to multiple, co‐occurring chemical agents with varying spatiotemporal concentrations and consequences modulated by biochemical, physiological, behavioral, socioeconomic, and environmental factors. Such investigations can benefit from systematic retrieval, analysis, and integration of diverse extant information on both contaminant patterns and exposure‐relevant factors. This requires development, evaluation, and deployment of informatics methods that support flexible access and analysis of multiattribute data across multiple spatiotemporal scales. A new “Tiered Exposure Ranking” (TiER) framework, developed to support various aspects of risk‐relevant exposure characterization, is described here, with examples demonstrating its application to the NCS. TiER utilizes advances in informatics computational methods, extant database content and availability, and integrative environmental/exposure/biological modeling to support both “discovery‐driven” and “hypothesis‐driven” analyses. “Tier 1” applications focus on “exposomic” pattern recognition for extracting information from multidimensional data sets, whereas second and higher tier applications utilize mechanistic models to develop risk‐relevant exposure metrics for populations and individuals. In this article, “tier 1” applications of TiER explore identification of potentially causative associations among risk factors, for prioritizing further studies, by considering publicly available demographic/socioeconomic, behavioral, and environmental data in relation to two health endpoints (preterm birth and low birth weight). A “tier 2” application develops estimates of pollutant mixture inhalation exposure indices for NCS counties, formulated to support risk characterization for these endpoints. Applications of TiER demonstrate the feasibility of developing risk‐relevant exposure characterizations for pollutants using extant environmental and demographic/socioeconomic data.

8.
Ted W. Yellman, Risk Analysis, 2016, 36(6): 1072–1078
Some of the terms used in risk assessment and management are poorly and even contradictorily defined. One such term is “event,” which arguably describes the most basic of all risk‐related concepts. The author cites two contemporary textbook interpretations of “event” that he contends are incorrect and misleading. He then examines the concept of an event in A. N. Kolmogorov's probability axioms and in several more‐current textbooks. Those concepts are found to be too narrow for risk assessments and inconsistent with the actual usage of “event” by risk analysts. The author goes on to define and advocate linguistic definitions of events (as opposed to mathematical definitions)—definitions constructed from natural language. He argues that they should be recognized for what they are: the de facto primary method of defining events.

9.
Anne Chapman, Risk Analysis, 2006, 26(3): 603–616
Under current European Union legislation, action to restrict the production and use of a chemical is only justified if there is evidence that the chemical poses a risk to human health or the environment. Risk is understood as being a matter of the magnitude and probability of specifiable harms. An examination of how risks from chemicals are assessed shows the process to be fraught with uncertainty, with the result that evidence that commands agreement as to whether a chemical poses a risk or not is often not available. Hence the frequent disputes as to whether restrictions on chemicals are justified. Rather than trying to assess the risks from a chemical, I suggest that we should aim to assess how risky a chemical is in a more everyday sense, where riskiness is a matter of the possibility of harm. Risky chemicals are those where, given our state of knowledge, it is possible that they cause harm. I discuss four things that make a chemical more risky: (1) its capacity to cause harm; (2) its novelty; (3) its persistence; and (4) its mobility. Regulation of chemicals should aim to reduce the production and use of risky chemicals by requiring that the least risky substance or method is always used for any particular purpose. Any use of risky substances should be justifiable in terms of the public benefits of that use.

10.
11.
This essay deals critically with the use of the term “systemic”. Initially, three basic system‐theoretical concepts used in consulting and counseling are introduced: the problem of complexity, the construction of realities, and the paradigm of functionality. Subsequently, the author demonstrates that these principles have always been integral components of the earliest pedagogical/psychological consulting practices. The addition of the term “systemic” is thus often superfluous and misleading.

12.
This article is a reply to an essay titled “Systemic—what else!”, recently published by Claus Nowak. Despite great sympathy with Nowak's passionate plea for an appreciative use of classical approaches to counseling and consulting—especially approaches in the humanistic psychological tradition—our analysis shows that the terms “systemic” and “systems theory” open up our understanding to new phenomena. Therefore, an adequate usage of the term “systemic” and a proper utilization of “systems theory” and its principles are not superfluous.

13.
Emerging “prevention‐based” approaches to chemical regulation seek to minimize the use of toxic chemicals by mandating or directly incentivizing the adoption of viable safer alternative chemicals or processes. California and Maine are beginning to implement such programs, requiring manufacturers of consumer products containing certain chemicals of concern to identify and evaluate potential safer alternatives. In the European Union, the REACH program imposes similar obligations on manufacturers of certain substances of very high concern. Effective prevention‐based regulation requires regulatory alternatives analysis (RAA), a methodology for comparing and evaluating the regulated chemical or process and its alternatives across a range of relevant criteria. RAA has both public and private dimensions. To a significant degree, alternatives analysis is an aspect of product design; that is, the process by which private industry designs the goods it sells. Accordingly, an RAA method should reflect the attributes of well‐crafted product design tools used by businesses. But RAA adds health and environmental objectives to the mix of concerns taken into account by the product designer. Moreover, as part of a prevention‐based regulatory regime, it implicates important public values such as legitimacy, equity, public engagement, and accountability. Thus, an RAA should reflect both private standards and public values, and be evaluated against them. This article adopts that perspective, identifying an integrated set of design principles for RAA, and illustrating the application of those principles.

14.
A mass customization strategy enables a firm to match its product designs to unique consumer tastes. In a classic horizontal product‐differentiation framework, a consumer's utility is a decreasing function of the distance between their ideal taste and the taste defined by the most closely aligned product the firm offers. A consumer thus considers the taste mismatch associated with their purchased product, but otherwise the positioning of the firm's product portfolio (or, “brand image”) is immaterial. In contrast, self‐congruency theory suggests that consumers assess how well both the purchased product and its overall brand image match with their ideal taste. Therefore, we incorporate within the consumer utility function both product‐specific and brand‐level components. Mass customization has the potential to improve taste alignment with regard to a specific purchased product, but at the risk of increasing brand dilution. Absent brand dilution concerns, a firm will optimally serve all consumers’ ideal tastes at a single price. In contrast, by endogenizing dilution costs within the consumer utility model, we prove that a mass‐customizing firm optimally uses differential pricing. Moreover, we show that the firm offers reduced prices to consumers with extreme tastes (to stimulate consumer “travel”), with a higher and fixed price being offered to those consumers having more central (mainstream) tastes. Given that a continuous spectrum of prices will likely not be practical in application, we also consider the more pragmatic approach of augmenting the uniformly priced mass customization range with preset (non‐customized) outlying designs, which serve customers at the taste extremes. We prove this practical approach performs close to optimal.

15.
Humans are continuously exposed to suspected or proven endocrine‐disrupting chemicals (EDCs). Risk management of EDCs presents a major unmet challenge because the available data for adverse health effects are generated by examining one compound at a time, whereas real‐life exposures are to mixtures of chemicals. In this work, we integrate epidemiological and experimental evidence toward a whole mixture strategy for risk assessment. To illustrate, we conduct the following four steps in a case study: (1) identification of single EDCs (“bad actors”)—measured in prenatal blood/urine in the SELMA study—that are associated with a shorter anogenital distance (AGD) in baby boys; (2) definition and construction of a “typical” mixture consisting of the “bad actors” identified in Step 1; (3) experimentally testing this mixture in an in vivo animal model to estimate a dose–response relationship and determine a point of departure (i.e., reference dose [RfD]) associated with an adverse health outcome; and (4) use of a statistical measure of “sufficient similarity” to compare the experimental RfD (from Step 3) to the exposure measured in the human population and generate a “similar mixture risk indicator” (SMRI). The objective of this exercise is to generate a proof of concept for the systematic integration of epidemiological and experimental evidence with mixture risk assessment strategies. Using a whole mixture approach, we found a higher rate of pregnant women at risk (13%) than with more traditional additivity models (3%) or a compound‐by‐compound strategy (1.6%).

16.
Empirical evidence suggests that perfectionism can affect choice behavior. When striving for perfection, a person can desire to keep normatively appealing options feasible even if she persistently fails to use these options later. For instance, she can “pay not to go to the gym,” as in DellaVigna and Malmendier (2006). By contrast, some perfectionists may avoid normatively important tasks for fear of negative self‐evaluation of their performance. This paper models perfectionist behaviors in Gul and Pesendorfer's (2001) menu framework where agents may be tempted to deviate from their long‐term normative objectives. In addition to self‐control costs, I identify a utility component that reflects emotional costs and benefits of perfectionism. My model is derived from axioms imposed on preferences over menus in an essentially unique way.

17.
We examine the role of managers in controlling the positive impact of stakeholder management (SM) on firm financial performance (FP) in the long term. We develop and test competing hypotheses on whether managers act as “good citizens” or engage in “self‐dealing” when allowed greater discretion. We test our assertions using dynamic panel data analysis of a sample of 806 U.S. public firms operating in 34 industries over 5 years (2005–2009). Our results indicate a nuanced influence of managerial discretion contexts on the SM‐FP relationship. We infer that given more latitude in decision making, as long as the “going is good” managers act as good citizens, but otherwise they revert to managerial self‐dealing. In light of our results, firms designing governance mechanisms to encourage managers to balance the needs of both shareholders and stakeholders must remain cognizant of contextual contingencies.

18.
Self‐driving vehicles will affect the future of transportation, but factors that underlie perception and acceptance of self‐driving cars are yet unclear. Research on feelings as information and the affect heuristic has suggested that feelings are an important source of information, especially in situations of complexity and uncertainty. In this study (N = 1,484), we investigated how feelings related to traditional driving affect risk perception, benefit perception, and trust related to self‐driving cars as well as people's acceptance of the technology. Due to limited experiences with and knowledge of self‐driving cars, we expected that feelings related to a similar experience, namely, driving regular cars, would influence judgments of self‐driving cars. Our results support this assumption. While positive feelings of enjoyment predicted higher benefit perception and trust, negative affect predicted higher risk and higher benefit perception of self‐driving cars. Feelings of control were inversely related to risk and benefit perception, which is in line with research on the affect heuristic. Furthermore, negative affect was an important source of information for judgments of use and acceptance. Interest in using a self‐driving car was also predicted by lower risk perception, higher benefit perception, and higher levels of trust in the technology. Although people's individual experiences with advanced vehicle technologies and knowledge were associated with perceptions and acceptance, many simply have never been exposed to the technology and know little about it. In the absence of this experience or knowledge, all that is left is the knowledge, experience, and feelings they have related to regular driving.

19.
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): “objective” dependence between the (random) occurrences of different basic events (BEs) in the FT and “state‐of‐knowledge” (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects on the TE probability of objective and epistemic dependences. The well‐known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present).
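For a two‐event gate, the Fréchet bounds mentioned here have a simple closed form: max(0, pA + pB − 1) ≤ P(A AND B) ≤ min(pA, pB), and max(pA, pB) ≤ P(A OR B) ≤ min(1, pA + pB). A minimal sketch with assumed probabilities follows; the article's six‐BE fault tree and the DEnv propagation of epistemic uncertainty are beyond this illustration.

```python
# Fréchet bounds on gate probabilities when the dependence between basic
# events is unknown. Two-event gates only; probabilities are assumed.

def frechet_and(p_a, p_b):
    """Bounds on P(A and B) for arbitrary (unknown) dependence."""
    return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

def frechet_or(p_a, p_b):
    """Bounds on P(A or B) for arbitrary (unknown) dependence."""
    return max(p_a, p_b), min(1.0, p_a + p_b)

p1, p2 = 0.10, 0.20  # assumed basic-event probabilities
print(frechet_and(p1, p2))                                  # (0.0, 0.1)
print(tuple(round(b, 2) for b in frechet_or(p1, p2)))       # (0.2, 0.3)
```

The width of these intervals is exactly what "unknown objective dependence" costs: the AND-gate probability can range from 0 (when the events cannot co-occur) up to 0.1 (when one event implies the other).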

20.
It has been suggested that affect may play an important role in risk perception. Slovic et al. argued that people make use of the “affect heuristic” when assessing risks because it is easier and more efficient to rely on spontaneous affective reactions than to analyze all available information. In the present studies, we employed a single category implicit association test (SC‐IAT) to measure associations evoked by different hazards. In the first study, we tested the extent to which the SC‐IAT corresponds to the theoretical construct of affect in a risk framework. Specifically, we found that the SC‐IAT correlates with other explicit measures that claim to measure affect, as well as with a measure of trust, but not with a measure that captures a different construct (subjective knowledge). In the second study, we addressed the question of whether hazards that vary along the dread dimension of the psychometric paradigm also differ in the affect they evoke. The results of the SC‐IAT indicated that a high‐dread hazard (nuclear power) elicits negative associations. Moreover, the high‐dread hazard evoked more negative associations than a medium‐dread hazard (hydroelectric power). In contrast, a nondread hazard (home appliances) led to positive associations. The results of our study highlight the importance of affect in shaping attitudes and opinions toward risks. The results further suggest that implicit measures may provide valuable insight into people's risk perception above and beyond explicit measures.
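The abstract does not describe how the SC‐IAT was scored. One widely used convention for IAT‐family measures is a Greenwald‐style D score: the mean latency difference between pairing conditions divided by the pooled standard deviation of all latencies. The sketch below uses that convention with invented latencies; the study's actual scoring pipeline (error penalties, trial trimming, block structure) may differ.

```python
# Hypothetical D-score sketch for an implicit association test.
# All latencies (milliseconds) are invented for illustration.
from statistics import mean, stdev

def d_score(compatible_rts, incompatible_rts):
    """Greenwald-style D: latency difference scaled by the pooled SD."""
    pooled_sd = stdev(compatible_rts + incompatible_rts)
    return (mean(incompatible_rts) - mean(compatible_rts)) / pooled_sd

# Faster responses when the hazard shares a key with negative words would
# yield a positive D here, i.e., negative implicit associations.
compatible = [620, 650, 700, 640, 690]    # hazard + negative paired
incompatible = [780, 820, 760, 800, 840]  # hazard + positive paired
print(round(d_score(compatible, incompatible), 2))
```

Scaling by the pooled SD makes D comparable across respondents with different overall response speeds, which is why it is preferred over a raw latency difference.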


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号