Similar Literature
A total of 20 similar documents found (search time: 637 ms).
1.
The concepts of risk, safety, and security have received substantial academic interest. Several assumptions exist about their nature and relation. Besides academic use, the words risk, safety, and security are frequent in ordinary language, for example, in media reporting. In this article, we analyze the concepts of risk, safety, and security, and their relation, based on empirical observation of their actual everyday use. The “behavioral profiles” of the nouns risk, safety, and security and the adjectives risky, safe, and secure are coded and compared regarding lexical and grammatical contexts. The main findings are: (1) the three nouns risk, safety, and security, and the two adjectives safe and secure, have widespread use in different senses, which will make any attempt to define them in a single unified manner extremely difficult; (2) the relationship between the central risk terms is complex and only partially confirms the distinctions commonly made between the terms in specialized terminology; (3) whereas most attempts to define risk in specialized terminology have taken the term to have a quantitative meaning, nonquantitative meanings dominate in everyday language, and numerical meanings are rare; and (4) the three adjectives safe, secure, and risky are frequently used in comparative form. This speaks against interpretations that would take them as absolute, all‐or‐nothing concepts.

2.
Self‐driving vehicles (SDVs) promise to considerably reduce traffic crashes. One pressing concern facing the public, automakers, and governments is “How safe is safe enough for SDVs?” To answer this question, a new expressed‐preference approach was proposed for the first time to determine the socially acceptable risk of SDVs. In our between‐subject survey (N = 499), we determined the respondents’ risk‐acceptance rate of scenarios with varying traffic‐risk frequencies to examine the logarithmic relationships between the traffic‐risk frequency and risk‐acceptance rate. Logarithmic regression models of SDVs were compared to those of human‐driven vehicles (HDVs); the results showed that SDVs were required to be safer than HDVs. Given the same traffic‐risk‐acceptance rates for SDVs and HDVs, the associated acceptable risk frequencies were predicted and compared. Two risk‐acceptance criteria emerged: the tolerable risk criterion, which indicates that SDVs should be four to five times as safe as HDVs, and the broadly acceptable risk criterion, which suggests that half of the respondents hoped that the traffic risk of SDVs would be two orders of magnitude lower than the current estimated traffic risk. The approach and these results could provide insights for government regulatory authorities for establishing clear safety requirements for SDVs.
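To make the fitted relationship concrete, here is a minimal Python sketch of the kind of logarithmic regression the study describes: acceptance rate regressed on the log of traffic‐risk frequency, then inverted to find the frequency a given share of respondents would accept. All frequencies and acceptance rates below are invented placeholders, not the survey data.

```python
import numpy as np

# Hypothetical scenario frequencies (e.g., fatal crashes per vehicle-km)
# and the fraction of respondents accepting each scenario; placeholders only.
freq = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])
acceptance = np.array([0.90, 0.75, 0.55, 0.35, 0.15])

# Fit the logarithmic model: acceptance = a + b * log10(frequency).
b, a = np.polyfit(np.log10(freq), acceptance, deg=1)
print(f"acceptance = {a:.2f} + {b:.2f} * log10(freq)")

# Invert the fit: what risk frequency would half of the respondents accept?
freq_at_half = 10 ** ((0.50 - a) / b)
print(f"frequency at 50% acceptance: {freq_at_half:.2e}")
```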

3.
Cox LA. Risk Analysis, 2012, 32(7): 1244–1252
Simple risk formulas, such as risk = probability × impact, or risk = exposure × probability × consequence, or risk = threat × vulnerability × consequence, are built into many commercial risk management software products deployed in public and private organizations. These formulas, which we call risk indices, together with risk matrices, “heat maps,” and other displays based on them, are widely used in applications such as enterprise risk management (ERM), terrorism risk analysis, and occupational safety. But, how well do they serve to guide allocation of limited risk management resources? This article evaluates and compares different risk indices under simplifying conditions favorable to their use (statistically independent, uniformly distributed values of their components; and noninteracting risk‐reduction opportunities). Compared to an optimal (nonindex) approach, simple indices produce inferior resource allocations that for a given cost may reduce risk by as little as 60% of what the optimal decisions would provide, at least in our simple simulations. This article suggests a better risk reduction per unit cost index that achieves 98–100% of the maximum possible risk reduction on these problems for all budget levels except the smallest, which allow very few risks to be addressed. Substantial gains in risk reduction achieved for resources spent can be obtained on our test problems by using this improved index instead of simpler ones that focus only on relative sizes of risk (or of components of risk) in informing risk management priorities and allocating limited risk management resources. This work suggests the need for risk management tools to explicitly consider costs in prioritization activities, particularly in situations where budget restrictions make careful allocation of resources essential for achieving close‐to‐maximum risk‐reduction benefits.
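A hedged miniature of this comparison: rank independent, uniformly distributed risks by the simple index threat × vulnerability × consequence versus by risk reduction per unit cost, then fund them greedily under a fixed budget. The setup and numbers are illustrative, not the article's simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
threat, vuln, conseq = rng.uniform(size=(3, n))
risk = threat * vuln * conseq           # simple risk index per opportunity
cost = rng.uniform(0.5, 2.0, size=n)    # cost to eliminate each risk
budget = 20.0

def total_reduction(order):
    """Fund risks greedily in the given order until the budget runs out."""
    spent = reduced = 0.0
    for i in order:
        if spent + cost[i] <= budget:
            spent += cost[i]
            reduced += risk[i]
    return reduced

by_size = np.argsort(-risk)            # biggest risk first (simple index)
by_ratio = np.argsort(-(risk / cost))  # best risk reduction per unit cost

print(f"reduction, ranked by risk size:      {total_reduction(by_size):.2f}")
print(f"reduction, ranked by reduction/cost: {total_reduction(by_ratio):.2f}")
```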

4.
Yacov Y. Haimes. Risk Analysis, 2009, 29(12): 1647–1654
The premise of this article is that risk to a system, as well as its vulnerability and resilience, can be understood, defined, and quantified most effectively through a systems-based philosophical and methodological approach, and by recognizing the central role of the system states in this process. A universally agreed-upon definition of risk has been difficult to develop; one reason is that the concept is multidimensional and nuanced. It requires an understanding that risk to a system is inherently and fundamentally a function of the initiating event, the states of the system and of its environment, and the time frame. In defining risk, this article posits that: (a) the performance capabilities of a system are a function of its state vector; (b) a system's vulnerability and resilience vectors are each a function of the input (e.g., initiating event), its time of occurrence, and the states of the system; (c) the consequences are a function of the specificity and time of the event, the vector of the states, the vulnerability, and the resilience of the system; (d) the states of a system are time-dependent and commonly fraught with variability uncertainties and knowledge uncertainties; and (e) risk is a measure of the probability and severity of consequences. The above implies that modeling must evaluate consequences for each risk scenario as functions of the threat (initiating event), the vulnerability and resilience of the system, and the time of the event. This fundamentally complex modeling and analysis process cannot be performed correctly and effectively without relying on the states of the system being studied.

5.
Yacov Y. Haimes. Risk Analysis, 2011, 31(8): 1175–1186
This article highlights the complexity of the quantification of the multidimensional risk function, develops five systems‐based premises on quantifying the risk of terrorism to a threatened system, and advocates the quantification of vulnerability and resilience through the states of the system. The five premises are: (i) There exists interdependence between a specific threat to a system by terrorist networks and the states of the targeted system, as represented through the system's vulnerability, resilience, and criticality‐impact. (ii) A specific threat, its probability, its timing, the states of the targeted system, and the probability of consequences can be interdependent. (iii) The two questions in the risk assessment process: “What is the likelihood?” and “What are the consequences?” can be interdependent. (iv) Risk management policy options can reduce both the likelihood of a threat to a targeted system and the associated likelihood of consequences by changing the states (including both vulnerability and resilience) of the system. (v) The quantification of risk to a vulnerable system from a specific threat must be built on a systemic and repeatable modeling process, by recognizing that the states of the system constitute an essential step to construct quantitative metrics of the consequences based on intelligence gathering, expert evidence, and other qualitative information. The fact that the states of all systems are functions of time (among other variables) makes the time frame pivotal in each component of the process of risk assessment, management, and communication. Thus, risk to a system, caused by an initiating event (e.g., a threat) is a multidimensional function of the specific threat, its probability and time frame, the states of the system (representing vulnerability and resilience), and the probabilistic multidimensional consequences.

6.
In counterterrorism risk management decisions, the analyst can choose to represent terrorist decisions as defender uncertainties or as attacker decisions. We perform a comparative analysis of probabilistic risk analysis (PRA) methods including event trees, influence diagrams, Bayesian networks, decision trees, game theory, and combined methods on the same illustrative examples (container screening for radiological materials) to get insights into the significant differences in assumptions and results. A key tenet of PRA and decision analysis is the use of subjective probability to assess the likelihood of possible outcomes. For each technique, we compare the assumptions, probability assessment requirements, risk levels, and potential insights for risk managers. We find that assessing the distribution of potential attacker decisions is a complex judgment task, particularly considering the adaptation of the attacker to defender decisions. Intelligent adversary risk analysis and adversarial risk analysis are extensions of decision analysis and sequential game theory that help to decompose such judgments. These techniques explicitly show the adaptation of the attacker and the resulting shift in risk based on defender decisions.
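The core contrast, treating the attacker as a chance node versus as an adaptive decisionmaker, can be shown with a toy screening example; the payoff numbers below are invented for illustration and do not come from the article.

```python
# Expected damage for each (defender move, attacked target); values invented.
damage = {("screen", "A"): 2.0, ("screen", "B"): 5.0,
          ("no screen", "A"): 10.0, ("no screen", "B"): 5.0}

# (1) Event-tree style: the attack is a fixed 50/50 chance node,
# unaffected by the defender's decision.
for d in ("screen", "no screen"):
    ev = sum(0.5 * damage[(d, a)] for a in ("A", "B"))
    print(f"random attacker,   defender={d:9s}: expected damage {ev:.1f}")

# (2) Game-theoretic: the attacker observes the defense and hits the
# worst-case target, so risk shifts with the defender's decision.
for d in ("screen", "no screen"):
    worst = max(damage[(d, a)] for a in ("A", "B"))
    print(f"adaptive attacker, defender={d:9s}: damage {worst:.1f}")
```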

7.
Qualitative systems for rating animal antimicrobial risks using ordered categorical labels such as “high,” “medium,” and “low” can potentially simplify risk assessment input requirements used to inform risk management decisions. But do they improve decisions? This article compares the results of qualitative and quantitative risk assessment systems and establishes some theoretical limitations on the extent to which they are compatible. In general, qualitative risk rating systems that satisfy conditions found in real‐world rating systems and guidance documents, and that have been proposed as reasonable, make two types of errors: (1) Reversed rankings, i.e., assigning higher qualitative risk ratings to situations that have lower quantitative risks; and (2) Uninformative ratings, e.g., frequently assigning the most severe qualitative risk label (such as “high”) to situations with arbitrarily small quantitative risks and assigning the same ratings to risks that differ by many orders of magnitude. Therefore, despite their appealing consensus‐building properties, flexibility, and appearance of thoughtful process in input requirements, qualitative rating systems as currently proposed often do not provide sufficient information to discriminate accurately between quantitatively small and quantitatively large risks. The value of information (VOI) that they provide for improving risk management decisions can be zero if most risks are small but a few are large, since qualitative ratings may then be unable to confidently distinguish the large risks from the small. These limitations suggest that it is important to continue to develop and apply practical quantitative risk assessment methods, since qualitative ones are often unreliable.
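A small simulation in the spirit of error type (1): draw quantitative risks as probability × consequence, assign coarse ordinal ratings to each component, and count pairs whose qualitative ordering reverses the quantitative one. The category cutpoints and sample sizes are arbitrary choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def qual(x, cuts=(1/3, 2/3)):
    """Map values in [0, 1) to ordinal ratings 1 (low) .. 3 (high)."""
    return 1 + (x > cuts[0]).astype(int) + (x > cuts[1]).astype(int)

n = 10_000
prob, conseq = rng.uniform(size=(2, n))
quant = prob * conseq                 # "true" quantitative risk
rating = qual(prob) * qual(conseq)    # qualitative index rating (1..9)

# Sample random pairs and count strict reversals: higher quantitative
# risk but strictly lower qualitative rating.
i, j = rng.integers(n, size=(2, 50_000))
reversed_frac = np.mean((quant[i] > quant[j]) & (rating[i] < rating[j]))
print(f"fraction of strictly reversed pairs: {reversed_frac:.3f}")
```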

8.
This article proposes, develops, and illustrates the application of level‐k game theory to adversarial risk analysis. Level‐k reasoning, which assumes that players play strategically but have bounded rationality, is useful for operationalizing a Bayesian approach to adversarial risk analysis. It can be applied in a broad class of settings, including settings with asynchronous play and partial but incomplete revelation of early moves. Its computational and elicitation requirements are modest. We illustrate the approach with an application to a simple defend‐attack model in which the defender's countermeasures are revealed with a probability less than one to the attacker before he decides on how or whether to attack.
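A minimal level‐k sketch for a 2 × 2 defend‐attack game, assuming for simplicity a zero‐sum payoff structure: a level‐0 attacker is nonstrategic (uniform over targets), and each higher level best‐responds to the level below. The loss matrix is invented for illustration and is not the article's model.

```python
import numpy as np

# loss[d, a]: defender's loss when the defender plays d and the attacker a.
# Zero-sum assumption: the attacker's payoff equals the defender's loss.
loss = np.array([[4.0, 1.0],    # d = 0: defend site A
                 [2.0, 6.0]])   # d = 1: defend site B

def defender_best_response(p_attack):
    """Pure-strategy defender move minimizing expected loss under beliefs."""
    return np.eye(2)[np.argmin(loss @ p_attack)]

def attacker_best_response(p_defend):
    """Pure-strategy attacker move maximizing defender loss under beliefs."""
    return np.eye(2)[np.argmax(p_defend @ loss)]

# Level-0 attacker: nonstrategic, uniform over targets.
p_attack = np.array([0.5, 0.5])
for k in range(1, 4):
    p_defend = defender_best_response(p_attack)   # level-k defender
    p_attack = attacker_best_response(p_defend)   # level-(k+1) attacker
    print(f"level {k}: defend = {p_defend}, best attack reply = {p_attack}")
```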

9.
The coefficient of relative risk aversion is a key parameter for analyses of behavior toward risk, but good estimates of this parameter do not exist. A promising place for reliable estimation is rare macroeconomic disasters, which have a major influence on the equity premium. The premium depends on the probability and size distribution of disasters, gauged by proportionate declines in per capita consumption or gross domestic product. Long‐term national‐accounts data for 36 countries provide a large sample of disasters of magnitude 10% or more. A power‐law density provides a good fit to the size distribution, and the upper‐tail exponent, α, is estimated to be around 4. A higher α signifies a thinner tail and, therefore, a lower equity premium, whereas a higher coefficient of relative risk aversion, γ, implies a higher premium. The premium is finite if α > γ. The observed premium of 5% generates an estimated γ close to 3, with a 95% confidence interval of 2 to 4. The results are robust to uncertainty about the values of the disaster probability and the equity premium, and can accommodate seemingly paradoxical situations in which the equity premium may appear to be infinite.
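The tail-estimation step can be sketched as follows: the maximum‐likelihood (Hill) estimator of the power‐law exponent α above a 10% threshold, applied here to synthetic Pareto draws rather than the paper's national‐accounts sample.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic disaster sizes: Pareto(alpha_true) draws above the 10% threshold,
# standing in for proportionate declines in consumption or GDP.
alpha_true, b_min = 4.0, 0.10
sizes = b_min * rng.uniform(size=500) ** (-1.0 / alpha_true)

# Hill / maximum-likelihood estimate of the upper-tail exponent.
alpha_hat = len(sizes) / np.sum(np.log(sizes / b_min))
print(f"estimated tail exponent alpha: {alpha_hat:.2f}")

# Finiteness condition from the abstract: the equity premium is finite
# only if alpha exceeds the coefficient of relative risk aversion gamma.
gamma = 3.0
print("equity premium finite:", alpha_hat > gamma)
```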

10.
The objective of the present study was to integrate the relative risk from mercury exposure to stream biota, groundwater, and humans in the Río Artiguas (Sucio) river basin, Nicaragua, where local gold mining occurs. A hazard quotient was used as a common exchange rate in probabilistic estimations of exposure and effects by means of Monte Carlo simulations. The endpoint for stream organisms was the lethal no‐observed‐effect concentration (NOEC), for groundwater the WHO guideline and the inhibitory Hg concentrations in bacteria (IC), and for humans the tolerable daily intake (TDI) and the benchmark dose level with an uncertainty factor of 10 (BMDL0.1). Macroinvertebrates and fish in the contaminated river face a higher risk of suffering from Hg exposure than either humans eating contaminated fish or the bacteria living in the groundwater. The river sediment is the most hazardous source for the macroinvertebrates, and macroinvertebrates in turn pose the highest risk to fish. The distribution of body concentrations of Hg in fish in the mining areas of the basin may exceed the distribution of endpoint values with close to 100% probability. Similarly, the Hg concentration in cord blood of humans feeding on fish from the river was predicted to exceed the BMDL0.1 with about 10% probability. Most of the risk to the groundwater quality is confined to the vicinity of the gold refining plants and along the river, with a probability of about 20% to exceed the guideline value.
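A bare‐bones version of the Monte Carlo hazard‐quotient calculation: HQ = exposure / effect endpoint, with both sampled from probability distributions. The distribution parameters below are placeholders, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Lognormal exposure (e.g., Hg concentration in prey) and lognormal effect
# endpoint (e.g., lethal NOEC), expressed in the same units; values invented.
exposure = rng.lognormal(mean=0.0, sigma=1.0, size=n)
endpoint = rng.lognormal(mean=1.0, sigma=0.5, size=n)

hq = exposure / endpoint
# Probability that exposure exceeds the effect endpoint (HQ > 1).
print(f"P(HQ > 1) = {np.mean(hq > 1):.3f}")
```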

11.
Louis Anthony Cox Jr. Risk Analysis, 2009, 29(8): 1062–1068
Risk analysts often analyze adversarial risks from terrorists or other intelligent attackers without mentioning game theory. Why? One reason is that many adversarial situations—those that can be represented as attacker‐defender games, in which the defender first chooses an allocation of defensive resources to protect potential targets, and the attacker, knowing what the defender has done, then decides which targets to attack—can be modeled and analyzed successfully without using most of the concepts and terminology of game theory. However, risk analysis and game theory are also deeply complementary. Game‐theoretic analyses of conflicts require modeling the probable consequences of each choice of strategies by the players and assessing the expected utilities of these probable consequences. Decision and risk analysis methods are well suited to accomplish these tasks. Conversely, game‐theoretic formulations of attack‐defense conflicts (and other adversarial risks) can greatly improve upon some current risk analyses that attempt to model attacker decisions as random variables or uncertain attributes of targets (“threats”) and that seek to elicit their values from the defender's own experts. Game theory models that clarify the nature of the interacting decisions made by attackers and defenders and that distinguish clearly between strategic choices (decision nodes in a game tree) and random variables (chance nodes, not controlled by either attacker or defender) can produce more sensible and effective risk management recommendations for allocating defensive resources than current risk scoring models. Thus, risk analysis and game theory are (or should be) mutually reinforcing.

12.
Yacov Y. Haimes. Risk Analysis, 2012, 32(9): 1451–1467
This article is grounded on the premise that the complex process of risk assessment, management, and communication, when applied to systems of systems, should be guided by universal systems‐based principles. It is written from the perspective of systems engineering with the hope and expectation that the principles introduced here will be supplemented and complemented by principles from the perspectives of other disciplines. Indeed, there is no claim that the following 10 guiding principles constitute a complete set; rather, the intent is to initiate a discussion on this important subject that will incrementally lead us to a more complete set of guiding principles. The 10 principles are as follows: First Principle: Holism is the common denominator that bridges risk analysis and systems engineering. Second Principle: The process of risk modeling, assessment, management, and communication must be systemic and integrated. Third Principle: Models and state variables are central to quantitative risk analysis. Fourth Principle: Multiple models are required to represent the essence of the multiple perspectives of complex systems of systems. Fifth Principle: Meta‐modeling and subsystems integration must be derived from the intrinsic states of the system of systems. Sixth Principle: Multiple conflicting and competing objectives are inherent in risk management. Seventh Principle: Risk analysis must account for epistemic and aleatory uncertainties. Eighth Principle: Risk analysis must account for risks of low probability with extreme consequences. Ninth Principle: The time frame is central to quantitative risk analysis. Tenth Principle: Risk analysis must be holistic, adaptive, incremental, and sustainable, and it must be supported with appropriate data collection, metrics with which to measure efficacious progress, and criteria on the basis of which to act. The relevance and efficacy of each guiding principle is demonstrated by applying it to the U.S. Federal Aviation Administration complex Next Generation (NextGen) system of systems.

13.
14.
The three classic pillars of risk analysis are risk assessment (how big is the risk and how sure can we be?), risk management (what shall we do about it?), and risk communication (what shall we say about it, to whom, when, and how?). We propose two complements to these pillars: risk attribution (who or what addressable conditions actually caused an accident or loss?) and learning from experience about risk reduction (what works, and how well?). Failures in complex systems usually evoke blame, often with insufficient attention to root causes of failure, including some aspects of the situation, design decisions, or social norms and culture. Focusing on blame, however, can inhibit effective learning, instead eliciting excuses to deflect attention and perceived culpability. Productive understanding of what went wrong, and how to do better, thus requires moving past recrimination and excuses. This article identifies common blame‐shifting “lame excuses” for poor risk management. These generally contribute little to effective improvements and may leave real risks and preventable causes unaddressed. We propose principles from risk and decision sciences and organizational design to improve results. These start with organizational leadership. More specifically, they include: deliberate testing and learning—especially from near‐misses and accident precursors; careful causal analysis of accidents; risk quantification; candid expression of uncertainties about costs and benefits of risk‐reduction options; optimization of tradeoffs between gathering additional information and immediate action; promotion of safety culture; and mindful allocation of people, responsibilities, and resources to reduce risks. We propose that these principles provide sound foundations for improving successful risk management.

15.
Dose‐response models are essential to quantitative microbial risk assessment (QMRA), providing a link between levels of human exposure to pathogens and the probability of negative health outcomes. In drinking water studies, the class of semi‐mechanistic models known as single‐hit models, such as the exponential and the exact beta‐Poisson, has seen widespread use. In this work, an attempt is made to carefully develop the general mathematical single‐hit framework while explicitly accounting for variation in (1) host susceptibility and (2) pathogen infectivity. This allows a precise interpretation of the so‐called single‐hit probability and precise identification of a set of statistical independence assumptions that are sufficient to arrive at single‐hit models. Further analysis of the model framework is facilitated by formulating the single‐hit models compactly using probability generating and moment generating functions. Among the more practically relevant conclusions drawn are: (1) for any dose distribution, variation in host susceptibility always reduces the single‐hit risk compared to a constant host susceptibility (assuming equal mean susceptibilities), (2) the model‐consistent representation of complete host immunity is formally demonstrated to be a simple scaling of the response, (3) the model‐consistent expression for the total risk from repeated exposures deviates (gives lower risk) from the conventional expression used in applications, and (4) a model‐consistent expression for the mean per‐exposure dose that produces the correct total risk from repeated exposures is developed.
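For reference, here is a short sketch of the two named single‐hit models, together with the conventional repeated‐exposure expression the article critiques as deviating from the model‐consistent value; all parameter values are illustrative only.

```python
import numpy as np

def exponential(dose, r):
    """Exponential single-hit model: each ingested organism independently
    survives to initiate infection with probability r."""
    return 1.0 - np.exp(-r * dose)

def beta_poisson_approx(dose, alpha, beta):
    """Widely used approximation to the exact beta-Poisson single-hit model."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

doses = np.array([1.0, 10.0, 100.0, 1000.0])
print("exponential: ", exponential(doses, r=0.01))
print("beta-Poisson:", beta_poisson_approx(doses, alpha=0.25, beta=40.0))

# Conventional total risk over n independent exposures (the expression the
# article argues overstates the model-consistent repeated-exposure risk):
p1 = exponential(10.0, r=0.01)
print("365-exposure risk, conventional:", 1.0 - (1.0 - p1) ** 365)
```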

16.
The purpose of this article is to provide a risk‐based predictive model to assess the impact of false mussel Mytilopsis sallei invasions on hard clam Meretrix lusoria farms in the southwestern region of Taiwan. The actual spread of invasive false mussel was predicted by using analytical models based on advection‐diffusion and gravity models. The proportion of hard clam colonized and infestation by false mussel were used to characterize risk estimates. A mortality model was parameterized to assess hard clam mortality risk characterized by false mussel density and infestation intensity. The published data were reanalyzed to parameterize a predictive threshold model described by a cumulative Weibull distribution function that can be used to estimate the exceeding thresholds of proportion of hard clam colonized and infestation. Results indicated that the infestation thresholds were 2–17 ind clam⁻¹ for adult hard clams, whereas the threshold for nursery hard clams was 4 ind clam⁻¹. The average colonization thresholds were estimated to be 81–89% for cultivated and nursery hard clam farms, respectively. Our results indicated that the false mussel density and infestation intensity causing 50% hard clam mortality were estimated to be 2,812 ind m⁻² and 31 ind clam⁻¹, respectively. This study further indicated that hard clam farms close to the coastal area have at least a 50% probability of suffering 43% mortality caused by infestation. This study highlighted that a probabilistic risk‐based framework characterized by probability distributions and risk curves is an effective representation of scientific assessments for farmed hard clam in response to the nonnative false mussel invasion.
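The threshold model can be sketched as a cumulative Weibull response curve relating infestation intensity to mortality; the shape and scale values below are invented placeholders chosen only to produce plausible output, not the study's fitted parameters.

```python
import numpy as np

def weibull_cdf(x, shape, scale):
    """P(response) at stressor level x under a cumulative Weibull model."""
    return 1.0 - np.exp(-(x / scale) ** shape)

intensity = np.array([1, 5, 10, 31, 60])   # infestation, ind per clam
mortality = weibull_cdf(intensity, shape=1.5, scale=40.0)
for x, m in zip(intensity, mortality):
    print(f"infestation {x:3d} ind/clam -> predicted mortality {m:.2f}")
```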

17.
Protection motivation theory states that individuals conduct threat and coping appraisals when deciding how to respond to perceived risks. However, that model does not adequately explain today's risk culture, where engaging in recommended behaviors may create a separate set of real or perceived secondary risks. We argue for and then demonstrate the need for a new model accounting for a secondary threat appraisal, which we call secondary risk theory. In an online experiment, 1,246 participants indicated their intention to take a vaccine after reading about the likelihood and severity of side effects. We manipulated likelihood and severity in a 2 × 2 between‐subjects design and examined how well secondary risk theory predicts vaccination intention compared to protection motivation theory. Protection motivation theory performed better when the likelihood and severity of side effects were both low (R2 = 0.30) versus high (R2 = 0.15). In contrast, secondary risk theory performed similarly when the likelihood and severity of side effects were both low (R2 = 0.42) or high (R2 = 0.45). But the latter figure is a large improvement over protection motivation theory, suggesting the usefulness of secondary risk theory when individuals perceive a high secondary threat.

18.
Risk Analysis, 2018, 38(7): 1455–1473
Recently, growing earthquake activity in the northeastern Netherlands has aroused considerable concern among the 600,000 provincial inhabitants. There, at 3 km deep, the rich Groningen gas field extends over 900 km2 and still contains about 600 of the original 2,800 billion cubic meters (bcm). Particularly after 2001, earthquakes have increased in number, magnitude (M, on the logarithmic Richter scale), and damage to numerous buildings. The man‐made nature of extraction‐induced earthquakes challenges static notions of risk, complicates formal risk assessment, and questions familiar conceptions of acceptable risk. Here, a 26‐year set of 294 earthquakes with M ≥ 1.5 is statistically analyzed in relation to increasing cumulative gas extraction since 1963. Extrapolations from a fast‐rising trend over 2001–2013 indicate that—under “business as usual”—around 2021 some 35 earthquakes with M ≥ 1.5 might occur annually, including four with M ≥ 2.5 (ten‐fold stronger), and one with M ≥ 3.5 every 2.5 years. Given this uneasy prospect, annual gas extraction has been reduced from 54 bcm in 2013 to 24 bcm in 2017. This has significantly reduced earthquake activity, so far. However, when extraction is stabilized at 24 bcm per year for 2017–2021 (or 21.6 bcm, as judicially established in Nov. 2017), the annual number of earthquakes would gradually increase again, with an expected all‐time maximum M ≈ 4.5. Further safety management may best follow distinct stages of seismic risk generation, with moderation of gas extraction and massive (but late and slow) building reinforcement as outstanding strategies. Officially, “acceptable risk” is mainly approached by quantification of risk (e.g., of fatal building collapse) for testing against national safety standards, but actual (local) risk estimation remains problematic. Additionally important are societal cost–benefit analysis, equity considerations, and precautionary restraint. Socially and psychologically, deliberate attempts are made to improve risk communication, reduce public anxiety, and restore people's confidence in responsible experts and policymakers.
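The kind of trend extrapolation described above can be sketched as a log‐linear fit to annual earthquake counts, projected forward, with a Richter‐style factor of ten per magnitude unit; the counts below are synthetic stand-ins, not the Groningen catalog.

```python
import numpy as np

years = np.arange(2001, 2014)
counts = np.array([12, 13, 14, 14, 15, 16, 17, 18, 19, 21, 22, 24, 25])  # synthetic

# Log-linear trend: log(count) = a + b * (year - 2001).
b, a = np.polyfit(years - 2001, np.log(counts), deg=1)
for y in (2018, 2021):
    rate = np.exp(a + b * (y - 2001))
    print(f"{y}: projected {rate:.0f} earthquakes per year with M >= 1.5")

# Scaling implied by the abstract (one unit of M is ten-fold stronger):
# roughly a factor of ten fewer events per unit increase in magnitude.
rate_2021 = np.exp(a + b * (2021 - 2001))
print(f"2021, M >= 2.5: roughly {rate_2021 / 10:.0f} per year")
```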

19.
This paper describes and analyzes the results of a unique field experiment especially designed to test the effects of the level of commitment and information available to individuals when sharing risk. We find that limiting exogenously provided commitment is associated with less risk sharing, whereas limiting information on defections can be associated with more risk sharing. These results can be understood by distinguishing between intrinsic and extrinsic incentives, and by recognizing that social sanctions are costly to inflict or that individuals suffer from time‐inconsistent preferences. Comparing the groups formed within our experiment with the real life risk‐sharing networks in a few villages allows us to test the external validity of our experiment and suggests that the results are salient to our understanding of risk‐sharing arrangements observed in developing countries. (JEL: C93, D71, D81, O12)

20.
We design and conduct a stated‐preference survey to estimate willingness to pay (WTP) to reduce foodborne risk of acute illness and to test whether WTP is proportional to the corresponding gain in expected quality‐adjusted life years (QALYs). If QALYs measure utility for health, then economic theory requires WTP to be nearly proportional to changes in both health quality and duration of illness and WTP could be estimated by multiplying the expected change in QALYs by an appropriate monetary value. WTP is elicited using double‐bounded, dichotomous‐choice questions in which respondents (randomly selected from the U.S. general adult population, n = 2,858) decide whether to purchase a more expensive food to reduce the risk of foodborne illness. Health risks vary by baseline probability of illness, reduction in probability, duration and severity of illness, and conditional probability of mortality. The expected gain in QALYs is calculated using respondent‐assessed decrements in health‐related quality of life if ill combined with the duration of illness and reduction in probability specified in the survey. We find sharply diminishing marginal WTP for severity and duration of illness prevented. Our results suggest that individuals do not have a constant rate of WTP per QALY, which implies that WTP cannot be accurately estimated by multiplying the change in QALYs by an appropriate monetary value.
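The proportionality test has a simple arithmetic core: if WTP were a constant dollar value per QALY, then WTP divided by the expected QALY gain should be roughly constant across scenarios. The sketch below uses invented scenario numbers; the declining implied $/QALY mimics the article's finding of diminishing marginal WTP, not its data.

```python
# Scenario list: (duration of illness in days, health-quality decrement
# while ill, elicited mean WTP in dollars). All values are invented.
risk_reduction = 1e-4   # reduction in probability of illness per purchase
scenarios = [
    (1,  0.2,  5.00),
    (7,  0.2, 12.00),
    (30, 0.5, 25.00),
]

for days, decrement, wtp in scenarios:
    # Expected QALY gain = risk reduction x quality decrement x duration.
    d_qaly = risk_reduction * decrement * (days / 365.0)
    print(f"{days:3d} days ill, decrement {decrement}: "
          f"implied $/QALY = {wtp / d_qaly:,.0f}")
```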
