Similar literature
20 similar documents found.
1.
Two images, “black swans” and “perfect storms,” have struck the public's imagination and are used—at times indiscriminately—to describe the unthinkable or the extremely unlikely. These metaphors have been used as excuses to wait for an accident to happen before taking risk management measures, both in industry and government. These two images represent two distinct types of uncertainties (epistemic and aleatory). Existing statistics are often insufficient to support risk management because the sample may be too small and the system may have changed. Rationality as defined by the von Neumann axioms leads to a combination of both types of uncertainties into a single probability measure—Bayesian probability—and accounts only for risk aversion. Yet, the decisionmaker may also want to be ambiguity averse. This article presents an engineering risk analysis perspective on the problem, using all available information in support of proactive risk management decisions and considering both types of uncertainty. Such risk management measures involve monitoring of signals, precursors, and near‐misses, as well as reinforcement of the system and a thoughtful response strategy. They also involve careful examination of organizational factors, such as the incentive system, which shape human performance and affect the risk of errors. In all cases, including rare events, risk quantification does not allow “prediction” of accidents and catastrophes. Instead, it is meant to support effective risk management rather than mere reaction to the latest events and headlines.

2.
Terje Aven. Risk Analysis, 2011, 31(10): 1515-1525
Few policies for risk management have created more controversy than the precautionary principle. A main problem is the extreme number of different definitions and interpretations. Almost all definitions of the precautionary principle identify “scientific uncertainties” as the trigger or criterion for its invocation; however, the meaning of this concept is not clear. For applying the precautionary principle it is not sufficient that the threats or hazards are uncertain. A stronger requirement is needed. This article provides an in‐depth analysis of this issue. We question how the scientific uncertainties are linked to the interpretation of the probability concept, expected values, the results from probabilistic risk assessments, the common distinction between aleatory uncertainties and epistemic uncertainties, and the problem of establishing an accurate prediction model (cause‐effect relationship). A new classification structure is suggested to define what scientific uncertainties mean.

3.
The failure to foresee the catastrophic earthquakes, tsunamis, and nuclear accident of 2011 has been perceived by many in Japan as a fundamental shortcoming of modern disaster risk science. Hampered by a variety of cognitive and institutional biases, the conventional disaster risk management planning based on the “known risks” led to the cascading failures of the interlinked disaster risk management (DRM) apparatus. This realization led to a major rethinking in the use of science for policy and the incorporation of lessons learned in the country's new DRM policy. This study reviews publicly available documents on expert committee discussions and scientific articles to identify what continuities and changes have been made in the use of scientific knowledge in Japanese risk management. In general, the prior influence of cognitive bias (e.g., overreliance on documented hazard risks) has been largely recognized, and increased attention is now being paid to the incorporation of less documented but known risks. This has led to upward adjustments in estimated damages from future risks and recognition of the need for further strengthening of DRM policy. At the same time, there remains significant continuity in the way scientific knowledge is perceived to provide sufficient and justifiable grounds for the development and implementation of DRM policy. The emphasis on “evidence‐based policy” in earthquake and tsunami risk reduction measures continues, despite the critical reflections of a group of scientists who advocate for a major rethinking of the country's science‐policy institutions in light of the limitations of the current state of science.

4.
Graphs are increasingly recommended for improving decision-making and promoting risk-avoidant behaviors. Graphs that depict only the number of people affected by a risk (“foreground-only” displays) tend to increase perceived risk and risk aversion (e.g., willingness to get vaccinated), as compared to graphs that also depict the number of people at risk for harm (“foreground+background” displays). However, previous research examining these “foreground-only effects” has focused on relatively low-probability risks (<10%), limiting generalizability to communications about larger risks. In two experiments, we systematically investigated the moderating role of probability size on foreground-only effects, using a wide range of probability sizes (from 0.1% to 40%). Additionally, we examined the moderating role of the size of the risk reduction, that is, the extent to which a protective behavior reduces the risk. Across both experiments, foreground-only effects on perceived risk and risk aversion were weaker for larger probabilities. Experiment 2 also revealed that foreground-only effects were weaker for smaller risk reductions, while foreground-only displays decreased understanding of absolute risk magnitudes independently of probability size. These findings suggest that the greater effectiveness of foreground-only versus foreground+background displays for increasing perceived risk and risk aversion diminishes with larger probability sizes and smaller risk reductions. Moreover, if the goal is to promote understanding of absolute risk magnitudes, foreground+background displays should be used rather than foreground-only displays regardless of probability size. Our findings also help to refine and extend existing theoretical accounts of foreground-only effects to situations involving a wide range of probability sizes.

5.
Scott Janzwood. Risk Analysis, 2023, 43(10): 2004-2016
Outside of the field of risk analysis, an important theoretical conversation on the slippery concept of uncertainty has unfolded over the last 40 years within the adjacent field of environmental risk. This literature has become increasingly standardized behind the tripartite distinction between uncertainty location, the nature of uncertainty, and uncertainty level, popularized by the “W&H framework.” This article introduces risk theorists and practitioners to the conceptual literature on uncertainty with the goal of catalyzing further development and clarification of the uncertainty concept within the field of risk analysis. It presents two critiques of the W&H framework's dimension of uncertainty level—the dimension that attempts to define the characteristics separating greater uncertainties from lesser uncertainties. First, I argue the framework's conceptualization of uncertainty level lacks a clear and consistent epistemological position and fails to acknowledge or reconcile the tensions between Bayesian and frequentist perspectives present within the framework. This article reinterprets the dimension of uncertainty level from a Bayesian perspective, which understands uncertainty as a mental phenomenon arising from “confidence deficits” as opposed to the ill-defined notion of “knowledge deficits” present in the framework. And second, I elaborate the undertheorized concept of uncertainty “reducibility.” These critiques inform a clarified conceptualization of uncertainty level that can be integrated with risk analysis concepts and usefully applied by modelers and decisionmakers engaged in model-based decision support.

6.
Risk aversion (a second‐order risk preference) is a time‐proven concept in economic models of choice under risk. More recently, the higher order risk preferences of prudence (third‐order) and temperance (fourth‐order) also have been shown to be quite important. While a majority of the population seems to exhibit both risk aversion and these higher order risk preferences, a significant minority does not. We show how both risk‐averse and risk‐loving behaviors might be generated by a simple type of basic lottery preference for either (1) combining “good” outcomes with “bad” ones, or (2) combining “good with good” and “bad with bad,” respectively. We further show that this dichotomy is fairly robust at explaining higher order risk attitudes in the laboratory. In addition to our own experimental evidence, we take a second look at the extant laboratory experiments that measure higher order risk preferences and we find a fair amount of support for this dichotomy. Our own experiment also is the first to look beyond fourth‐order risk preferences, and we examine risk attitudes at even higher orders.
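In the standard expected-utility reading (one common formalization, not necessarily the only one used in this article), these orders of risk preference correspond to the signs of successive derivatives of the utility function u:

\[
u''(x) < 0 \;\;\text{(risk aversion)}, \qquad
u'''(x) > 0 \;\;\text{(prudence)}, \qquad
u''''(x) < 0 \;\;\text{(temperance)},
\]

and, under expected utility, these conditions are equivalent to the lottery-based preference for "combining good with bad" described in the abstract.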

7.
Wildfires present a complex applied risk management environment, but relatively little attention has been paid to behavioral and cognitive responses to risk among public agency wildfire managers. This study investigates responses to risk, including probability weighting and risk aversion, in a wildfire management context using a survey‐based experiment administered to federal wildfire managers. Respondents were presented with a multiattribute lottery‐choice experiment where each lottery is defined by three outcome attributes: expenditures for fire suppression, damage to private property, and exposure of firefighters to the risk of aviation‐related fatalities. Respondents choose one of two strategies, each of which includes “good” (low cost/low damage) and “bad” (high cost/high damage) outcomes that occur with varying probabilities. The choice task also incorporates an information framing experiment to test whether information about fatality risk to firefighters alters managers' responses to risk. Results suggest that managers exhibit risk aversion and nonlinear probability weighting, which can result in choices that do not minimize expected expenditures, property damage, or firefighter exposure. Information framing tends to result in choices that reduce the risk of aviation fatalities, but exacerbates nonlinear probability weighting.
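For readers unfamiliar with nonlinear probability weighting, one commonly used one-parameter form is Tversky and Kahneman's (1992) weighting function (the experiment above may have estimated a different specification):

\[
w(p) \;=\; \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}}, \qquad 0 < \gamma \le 1,
\]

where γ < 1 yields the typical pattern of overweighting small probabilities and underweighting large ones.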

8.
The three classic pillars of risk analysis are risk assessment (how big is the risk and how sure can we be?), risk management (what shall we do about it?), and risk communication (what shall we say about it, to whom, when, and how?). We propose two complements to these three pillars: risk attribution (who or what addressable conditions actually caused an accident or loss?) and learning from experience about risk reduction (what works, and how well?). Failures in complex systems usually evoke blame, often with insufficient attention to root causes of failure, including some aspects of the situation, design decisions, or social norms and culture. Focusing on blame, however, can inhibit effective learning, instead eliciting excuses to deflect attention and perceived culpability. Productive understanding of what went wrong, and how to do better, thus requires moving past recrimination and excuses. This article identifies common blame‐shifting “lame excuses” for poor risk management. These generally contribute little to effective improvements and may leave real risks and preventable causes unaddressed. We propose principles from risk and decision sciences and organizational design to improve results. These start with organizational leadership. More specifically, they include: deliberate testing and learning—especially from near‐misses and accident precursors; careful causal analysis of accidents; risk quantification; candid expression of uncertainties about costs and benefits of risk‐reduction options; optimization of tradeoffs between gathering additional information and immediate action; promotion of safety culture; and mindful allocation of people, responsibilities, and resources to reduce risks. We propose that these principles provide sound foundations for improving successful risk management.

9.
Since the Seveso disaster more than 40 years ago, there has been increasing awareness that similar accidents can occur in a wide range of process establishments, where the handling and production of hazardous substances pose a real threat to society and the environment. In these industrial sites, designated “Seveso sites,” an urgent need emerged for an effective strategy to handle hazardous activities and ensure safe conditions. Since then, the main research challenges have focused on how to prevent such accidents and how to mitigate their consequences, leading to the development of many risk assessment methodologies. In recent years, researchers and practitioners have tried to provide useful overviews of the existing risk assessment methodologies by proposing several reviews. However, these reviews are not exhaustive because they are either dated or focus only on one specific topic (e.g., liquefied natural gas, the domino effect, etc.). This work aims to overcome the limitations of the current reviews by providing an up-to-date and comprehensive overview of the risk assessment methodologies for handling hazardous substances within European industry. In particular, we focus on the current techniques for hazard and accident scenario identification, as well as probability and consequence analyses, for both onshore and offshore installations. We identify the research streams that have characterized the activities of researchers and practitioners over the years, and then present and discuss the available risk assessment methodologies according to the research stream to which they belong.

10.
Based on conditional value at risk (CoVaR) and single-index model (SIM) quantile regression, this study uses weekly data on 24 industry indices of the Chinese stock market from 2012 to 2018 to construct a time-varying cross-industry tail risk network, using the network topology to capture the spatial linkages and latent trends of systemic risk. In addition, an ARDL model is introduced to explore the long- and short-run effects of network structure and macroeconomic variables on stock market systemic risk, and systemic risk is then forecast. The results show that: (1) there are clear spatial linkages and contagion effects of systemic risk among industry sectors of the Chinese stock market, and the risk spillover network exhibits "small-world" characteristics; (2) the network edge concentration (HHI) shows pronounced cyclical variation; during tail events the HHI rises significantly, the risk network collapses toward a single-hub structure, and network stability deteriorates; (3) based on node risk-propagation intensity and centrality, judging a node's systemic importance from its internal attributes alone is no longer comprehensive or accurate; its position and linkages within the network should also be considered, and information technology, health care, and commercial and professional services are the most influential industries in the risk network; (4) an ARDL-ECM model shows that network edge concentration is the main driver of systemic risk and yields highly accurate forecasts of stock market systemic risk. This study can help regulators identify influential industries in the Chinese stock market, design targeted risk prevention measures based on the spillover linkages of key industries, and establish an early-warning mechanism for risk spillover effects.
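As a rough illustration of the CoVaR building block behind such a tail-risk network, the sketch below estimates a single industry-to-market CoVaR by quantile regression using statsmodels. The file name, column names, and the 5% tail level are hypothetical assumptions; the SIM single-index specification, time variation, and the full network construction of the study are not reproduced here.

```python
# Hedged sketch: CoVaR of the market conditional on one industry index being in
# distress, estimated by quantile regression. Data file and column names are
# illustrative placeholders, not from the study.
import pandas as pd
import statsmodels.api as sm

returns = pd.read_csv("industry_weekly_returns.csv", index_col=0)  # hypothetical file
market = returns["market"]        # system (composite index) returns
industry = returns["info_tech"]   # one industry index's returns
q = 0.05                          # tail quantile

# VaR of the industry alone: the q-quantile of its own return distribution
var_industry = industry.quantile(q)

# Quantile regression of market returns on industry returns at the q-th quantile
X = sm.add_constant(industry)
model = sm.QuantReg(market, X).fit(q=q)
beta0, beta1 = model.params

# CoVaR: market VaR conditional on the industry sitting at its own VaR;
# delta-CoVaR: contribution relative to the industry being at its median
covar = beta0 + beta1 * var_industry
delta_covar = beta1 * (var_industry - industry.quantile(0.5))
print(f"VaR(industry)={var_industry:.4f}  CoVaR={covar:.4f}  dCoVaR={delta_covar:.4f}")
```

Repeating such pairwise estimates across all industry pairs, period by period, is one way to populate the time-varying spillover network the abstract describes.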

11.
Terje Aven. Risk Analysis, 2010, 30(3): 354-360
It is a common perspective in risk analysis that there are two kinds of uncertainties: (i) variability resulting from heterogeneity and stochasticity (aleatory uncertainty) and (ii) partial ignorance or epistemic uncertainty resulting from systematic measurement error and lack of knowledge. Probability theory is recognized as the proper tool for treating aleatory uncertainties, but there are different views on the best approach for describing partial ignorance and epistemic uncertainties. Subjective probabilities are often used for representing this type of ignorance and uncertainty, but several alternative approaches have been suggested, including interval analysis, probability bounds analysis, and bounds based on evidence theory. It is argued that probability theory produces results that are too precise when the background knowledge of the probabilities is poor. In this article, we look more closely into this issue. We argue that this critique of probability theory rests on a conception of risk assessment as a tool to objectively report on the true risk and variabilities. If risk assessment is seen instead as a method for describing the analysts' (and possibly other stakeholders') uncertainties about unknown quantities, the alternative approaches (such as interval analysis) often fail to provide the necessary decision support.

12.
In risk analysis, the treatment of the epistemic uncertainty associated with the probability of occurrence of an event is fundamental. Traditionally, probabilistic distributions have been used to characterize the epistemic uncertainty due to imprecise knowledge of the parameters in risk models. On the other hand, it has been argued that in certain instances such uncertainty may be best accounted for by fuzzy or possibilistic distributions. This seems to be the case in particular for parameters for which the available information is scarce and of a qualitative nature. In practice, it is to be expected that a risk model contains some parameters affected by uncertainties that may be best represented by probability distributions and some other parameters that may be more properly described in terms of fuzzy or possibilistic distributions. In this article, a hybrid method that jointly propagates probabilistic and possibilistic uncertainties is considered and compared with pure probabilistic and pure fuzzy methods for uncertainty propagation. The analyses are carried out on a case study concerning the uncertainties in the probabilities of occurrence of accident sequences in an event tree analysis of a nuclear power plant.
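A minimal sketch of this kind of hybrid propagation, under strong simplifying assumptions: a toy two-parameter model in which one parameter is probabilistic (sampled by Monte Carlo) and the other is possibilistic (a triangular fuzzy number handled by alpha-cuts). The model g, the lognormal distribution, and the fuzzy support are illustrative assumptions, not the nuclear power plant case study of the article.

```python
# Hedged sketch of hybrid (Monte Carlo + fuzzy alpha-cut) uncertainty propagation.
import numpy as np

rng = np.random.default_rng(0)

def g(p, q):
    """Toy risk model: probability of an accident sequence as a product of factors."""
    return p * q

def alpha_cut(low, mode, high, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    return low + alpha * (mode - low), high - alpha * (high - mode)

alphas = np.linspace(0.0, 1.0, 11)
lower_bounds, upper_bounds = [], []

for _ in range(1000):                               # outer Monte Carlo loop: probabilistic p
    p = rng.lognormal(mean=np.log(1e-3), sigma=0.5)
    lows, highs = [], []
    for a in alphas:                                # inner fuzzy loop: possibilistic q
        q_lo, q_hi = alpha_cut(0.1, 0.3, 0.6, a)
        lows.append(min(g(p, q_lo), g(p, q_hi)))
        highs.append(max(g(p, q_lo), g(p, q_hi)))
    lower_bounds.append(lows)                       # per-alpha lower bounds for this sample
    upper_bounds.append(highs)                      # per-alpha upper bounds for this sample

# A full analysis would aggregate these per-alpha intervals into belief/plausibility
# curves; here we only report the average interval at alpha=0 (support) and alpha=1 (core).
lo = np.mean(lower_bounds, axis=0)
hi = np.mean(upper_bounds, axis=0)
print(f"alpha=0: [{lo[0]:.2e}, {hi[0]:.2e}]   alpha=1: [{lo[-1]:.2e}, {hi[-1]:.2e}]")
```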

13.
Risk Analysis, 2018, 38(1): 163-176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically‐based risk estimates based on a single statistical model selected from the scientific literature, called the “core” model. The uncertainty presented for “core” risk estimates reflects only the statistical uncertainty associated with that one model's concentration‐response function parameter estimate(s). However, epidemiologically‐based risk estimates are also subject to “model uncertainty,” which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long‐term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.
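One common way to formalize such an integrated uncertainty analysis (the article's implementation may differ in detail) is to treat the risk estimate R as a mixture over candidate models m, weighted by subjective model probabilities w_m:

\[
P(R \le r) \;=\; \sum_{m=1}^{M} w_m \, P(R \le r \mid m), \qquad \sum_{m=1}^{M} w_m = 1,
\]

so the reported uncertainty reflects both within-model statistical uncertainty and between-model (epistemic) uncertainty, rather than the statistical uncertainty of a single "core" model alone.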

14.
Regulatory impact analyses (RIAs), required for new major federal regulations, are often criticized for not incorporating epistemic uncertainties into their quantitative estimates of benefits and costs. “Integrated uncertainty analysis,” which relies on subjective judgments about epistemic uncertainty to quantitatively combine epistemic and statistical uncertainties, is often prescribed. This article identifies an additional source for subjective judgment regarding a key epistemic uncertainty in RIAs for National Ambient Air Quality Standards (NAAQS)—the regulator's degree of confidence in continuation of the relationship between pollutant concentration and health effects at varying concentration levels. An illustrative example is provided based on the 2013 decision on the NAAQS for fine particulate matter (PM2.5). It shows how the regulator's justification for setting that NAAQS was structured around the regulator's subjective confidence in the continuation of health risks at different concentration levels, and it illustrates how such expressions of uncertainty might be directly incorporated into the risk reduction calculations used in the rule's RIA. The resulting confidence-weighted quantitative risk estimates are found to be substantially different from those in the RIA for that rule. This approach for accounting for an important source of subjective uncertainty also offers the advantage of establishing consistency between the scientific assumptions underlying RIA risk and benefit estimates and the science-based judgments developed when deciding on the relevant standards for important air pollutants such as PM2.5.

15.
Modeling Ship Transportation Risk
This article presents results from the Commission of the European Communities (CEC) project 'Safety of Shipping in Coastal Waters' (SAFECO). The project was performed by ten European partners during the period 1995-1998. The principal aim of the SAFECO project was to determine the influences that could increase the safety of shipping in coastal waters by analyzing the underlying factors contributing to the marine accident risk level. The work reported here focuses on the Marine Accident Risk Calculation System (MARCS) that was further developed during the SAFECO project. This paper presents the methods used by MARCS, as well as data and results from a 'demonstration of concept' case study covering the North Sea area. The estimated accident frequencies (number of accidents per year) were compared with historical accident data, to demonstrate the validity of the modeling approach. Reasonable (within a factor of 5) to good (within a factor of 2) agreement between calculated accident frequencies and observed accident statistics was generally obtained. However, significant discrepancies were identified for some ship types and accident categories. The risk model has particular problems with estimating the accident frequency for drift grounding in general and powered grounding for ferries. It was concluded that these discrepancies are related to uncertainties in several areas, specifically in the risk model algorithms, the traffic data, the error and failure probability data, and the historical accident statistics.

16.
We perform a statistical study of risk in nuclear energy systems. This study provides and analyzes a data set that is twice the size of the previous best data set on nuclear incidents and accidents, comparing three measures of severity: the industry standard International Nuclear Event Scale, the Nuclear Accident Magnitude Scale of radiation release, and cost in U.S. dollars. The rate of nuclear accidents with cost above 20 MM 2013 USD, per reactor per year, has decreased from the 1970s until the present time. Along the way, the rate dropped significantly after Chernobyl (April 1986) and is expected to be roughly stable around a level of 0.003, suggesting an average of just over one event per year across the current global fleet. The distribution of costs appears to have changed following the Three Mile Island major accident (March 1979). The median cost became approximately 3.5 times smaller, but an extremely heavy tail emerged, being well described by a Pareto distribution with parameter α = 0.5–0.6. For instance, the cost of the two largest events, Chernobyl and Fukushima (March 2011), is equal to nearly five times the combined cost of the 173 other events. We also document a significant runaway disaster regime in both radiation release and cost data, which we associate with the “dragon‐king” phenomenon. Since the major accident at Fukushima (March 2011) occurred recently, we are unable to quantify an impact of the industry response to this disaster. Excluding such improvements, in terms of costs, our range of models suggests that there is presently a 50% chance that (i) a Fukushima event (or larger) occurs every 60–150 years, and (ii) a Three Mile Island event (or larger) occurs every 10–20 years. Further—even assuming that it is no longer possible to suffer an event more costly than Chernobyl or Fukushima—the expected annual cost and its standard error bracket the cost of a new plant. This highlights the importance of improvements not only immediately following Fukushima, but also deeper improvements to effectively exclude the possibility of “dragon‐king” disasters. Finally, we find that the International Nuclear Event Scale (INES) is inconsistent in terms of both cost and radiation released. To be consistent with cost data, the Chernobyl and Fukushima disasters would need an INES level of between 10 and 11, rather than the maximum of 7.
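A back-of-the-envelope reading of the reported tail exponent, assuming the standard Pareto survival form (the article's fitted model may differ in detail):

\[
P(C > c) \;=\; \left(\frac{c}{c_{\min}}\right)^{-\alpha}, \qquad c \ge c_{\min},
\]

has a finite mean only when α > 1. With α ≈ 0.5–0.6, the theoretical mean cost is infinite, so cumulative losses are dominated by the few largest events, which is consistent with Chernobyl and Fukushima accounting for roughly five times the combined cost of all other recorded events.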

17.
Terje Aven. Risk Analysis, 2015, 35(3): 476-483
Nassim Taleb's antifragile concept has attracted considerable interest in the media and on the Internet recently. For Taleb, the antifragile concept is a blueprint for living in a black swan world (where surprising extreme events may occur), the key being to love variation and uncertainty to some degree, and thus also errors. The antonym of “fragile” is not robustness or resilience, but “please mishandle” or “please handle carelessly,” to use an example from Taleb referring to sending a package full of glasses by post. In this article, we perform a detailed analysis of this concept, with a special focus on how the antifragile concept relates to common ideas and principles of risk management. The article argues that Taleb's antifragile concept makes an important contribution to the current practice of risk analysis through its focus on the dynamic aspects of risk and performance, and the necessity of some variation, uncertainties, and risk to achieve improvements and high performance at later stages.

18.
Unlike the traditional literature, which uses the risk reduction ratio as the measure of hedging efficiency, this paper introduces expected utility theory to compare the hedging efficiency of the minimum-variance, minimum value-at-risk (VaR), and minimum conditional value-at-risk (CVaR) hedging strategies, thereby linking investors' risk attitudes to the choice of hedging strategy so that investors with different risk attitudes can select different strategies. Using risk-neutral, quadratic, and CARA utility functions, the paper rigorously proves that, among the three strategies, the minimum-variance hedge is overly conservative and the minimum-VaR hedge is the most aggressive; strongly risk-averse investors prefer the minimum-variance hedge, risk-neutral and weakly risk-averse investors prefer the minimum-VaR hedge, and the minimum-CVaR hedge lies between the two.
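A minimal numerical sketch of two of the strategies compared above: the minimum-variance hedge ratio in closed form and a minimum-CVaR hedge ratio found by grid search on simulated returns. The joint return distribution, the 95% CVaR level, and the grid are illustrative assumptions, not the paper's analytical derivations.

```python
# Hedged sketch: minimum-variance vs. minimum-CVaR hedge ratios on simulated data.
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
futures = rng.normal(0.0, 0.02, n)                  # simulated futures returns
spot = 0.9 * futures + rng.normal(0.0, 0.01, n)     # correlated simulated spot returns

# Minimum-variance hedge ratio: h* = Cov(spot, futures) / Var(futures)
h_mv = np.cov(spot, futures)[0, 1] / np.var(futures, ddof=1)

def cvar(losses, level=0.95):
    """Average loss beyond the level-quantile (expected shortfall)."""
    threshold = np.quantile(losses, level)
    return losses[losses >= threshold].mean()

# Minimum-CVaR hedge ratio: grid search over candidate ratios
grid = np.linspace(0.0, 2.0, 201)
cvars = [cvar(-(spot - h * futures)) for h in grid]  # loss = negative hedged return
h_cvar = grid[int(np.argmin(cvars))]

print(f"min-variance h = {h_mv:.3f}, min-CVaR h = {h_cvar:.3f}")
```

For the normal returns assumed here the two ratios are close; the paper's point is precisely that they diverge in ways that matter for investors with different risk attitudes.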

19.
Risk Analysis, 2018, 38(1): 56-70
Feedback from industrial accidents is provided by various state, or even international, institutions, and lessons learned can be controversial. However, there has been little research into organizational learning at the international level. This article helps to fill the gap through an in‐depth review of official reports of the Fukushima Daiichi accident published shortly after the event. We present a new method to analyze the arguments contained in these voluminous documents. Taking an intertextual perspective, the method focuses on the accident narratives, their rationale, and links between “facts,” “causes,” and “recommendations.” The aim is to evaluate how the findings of the various reports are consistent with (or contradict) “institutionalized knowledge,” and identify the social representations that underpin them. We find that although the scientific controversy surrounding the results of the various inquiries reflects different ethical perspectives, these perspectives are integrated into the same utopian ideal. The involvement of multiple actors in this controversy raises questions about the public construction of epistemic authority, and we highlight the special status given to the International Atomic Energy Agency in this regard.

20.
In decision theory the concept denoted variously as “risk aversion increment” or “risk premium” has not been fully exploited, although it is neither new nor complex. In this paper we will show how the concept of the risk aversion increment can be used for developing an alternative to the explicit use of the utility function. For most people the use of a risk aversion increment provides a better conceptual reference than does the use of a utility function. To illustrate the usefulness of the concept as a basis for gaining insight into problem statements and their analysis, the following applications are developed: (1) general results for the exponential utility function; (2) estimation of utility functions; (3) general results for various combinations of utility functions and probability distributions; (4) use in sequential decisions; and (5) application in the theory of incentives.
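A standard worked example consistent with application (1), assuming exponential (CARA) utility u(x) = -e^{-ax} and a normally distributed risk X ~ N(μ, σ²) (the normality assumption is for illustration only; the paper's general results need not rely on it). The certainty equivalent is CE(X) = μ - aσ²/2, so the risk aversion increment (risk premium) is

\[
\pi \;=\; \mathbb{E}[X] - \mathrm{CE}(X) \;=\; \tfrac{1}{2}\, a\, \sigma^{2},
\]

that is, a quantity in monetary units that grows linearly in both the coefficient of absolute risk aversion a and the variance of the risk, which is what makes it a convenient substitute for working with the utility function directly.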
