Similar Literature
20 similar documents found (search time: 31 ms)
1.
Terje Aven. Risk Analysis, 2013, 33(2): 270-280
The Funtowicz and Ravetz model for classifying problem‐solving strategies into applied sciences, professional consultancy, and postnormal sciences is well known in the social science risk literature. The model is illustrated in a diagram based on the two axes: (i) decision stakes—the value dimension (costs, benefits) and (ii) the system uncertainties—the knowledge dimension. These axes resemble the same two dimensions that characterize some recently developed risk perspectives: (a) consequences and the severity of these consequences and (b) associated uncertainties. In this article, we make a detailed comparison of these two types of risk frameworks. We point to similarities and differences in motivation and use. A main conclusion of the article is that these risk perspectives all provide adequate scientific bases for the Funtowicz and Ravetz model. New insights are provided on the understanding of what the outcome stakes/consequences and uncertainty dimensions really capture in these perspectives and frameworks.

2.
Louis Anthony Cox, Jr. Risk Analysis, 2007, 27(1): 27-43
This article discusses a concept of concern-driven risk management, in which qualitative expert judgments about whether concerns warrant specified risk management interventions are used in preference to quantitative risk assessment (QRA) to guide risk management decisions. Where QRA emphasizes formal quantitative assessment of the probable consequences caused by the recommended actions, and comparison to the probable consequences of alternatives, including the status quo, concern-driven risk management instead emphasizes perceived urgency or severity of the situation motivating recommended interventions. In many instances, especially those involving applications of the precautionary principle, no formal quantification or comparison of probable consequences for alternative decisions is seen as being necessary (or, perhaps, possible or desirable) prior to implementation of risk management measures. Such concern-driven risk management has been recommended by critics of QRA in several areas of applied risk management. Based on case studies and psychological literature on the empirical performance of judgment-based approaches to decision making under risk and uncertainty, we conclude that, although concern-driven risk management has several important potential political and psychological advantages over QRA, it is not clear that it performs better than (or as well as) QRA in identifying risk management interventions that successfully protect human health or achieve other desired consequences. Therefore, those who advocate replacing QRA with concern-driven alternatives, such as expert judgment and consensus decision processes, should assess whether their recommended alternatives truly outperform QRA, by the criterion of producing preferred consequences, before rejecting the QRA paradigm for practical applications.

3.
Louis Anthony Cox, Jr. Risk Analysis, 2009, 29(8): 1062-1068
Risk analysts often analyze adversarial risks from terrorists or other intelligent attackers without mentioning game theory. Why? One reason is that many adversarial situations—those that can be represented as attacker‐defender games, in which the defender first chooses an allocation of defensive resources to protect potential targets, and the attacker, knowing what the defender has done, then decides which targets to attack—can be modeled and analyzed successfully without using most of the concepts and terminology of game theory. However, risk analysis and game theory are also deeply complementary. Game‐theoretic analyses of conflicts require modeling the probable consequences of each choice of strategies by the players and assessing the expected utilities of these probable consequences. Decision and risk analysis methods are well suited to accomplish these tasks. Conversely, game‐theoretic formulations of attack‐defense conflicts (and other adversarial risks) can greatly improve upon some current risk analyses that attempt to model attacker decisions as random variables or uncertain attributes of targets (“threats”) and that seek to elicit their values from the defender's own experts. Game theory models that clarify the nature of the interacting decisions made by attackers and defenders and that distinguish clearly between strategic choices (decision nodes in a game tree) and random variables (chance nodes, not controlled by either attacker or defender) can produce more sensible and effective risk management recommendations for allocating defensive resources than current risk scoring models. Thus, risk analysis and game theory are (or should be) mutually reinforcing.
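A minimal sketch of the sequential structure this abstract describes, solved by backward induction: the defender commits an allocation first, the attacker observes it and best-responds. The single defensive unit, the three targets, and all values and success probabilities below are invented for illustration, not taken from the article.

```python
# Illustrative two-stage attacker-defender game (hypothetical numbers).
# The defender places one defensive unit; the attacker then hits the
# target with the highest expected damage given that allocation.

targets = ["A", "B", "C"]
value = {"A": 10.0, "B": 6.0, "C": 3.0}   # damage if an attack succeeds
p_success = {0: 0.9, 1: 0.3}              # success probability vs. defense level

def expected_damage(target, defense):
    return value[target] * p_success[defense.get(target, 0)]

best_defense, best_loss = None, float("inf")
for defended in targets:                  # defender's strategic choice (decision node)
    defense = {defended: 1}
    # attacker best-responds to the observed allocation; attack success
    # itself remains a chance node
    loss = max(expected_damage(t, defense) for t in targets)
    if loss < best_loss:
        best_defense, best_loss = defended, loss

print(f"Defend {best_defense}; attacker's best response yields expected damage {best_loss}")
```

Note how the attacker's choice is modeled as a decision (a maximization), not as a random variable with elicited probabilities, which is the distinction the abstract draws.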

4.
The purpose of this article is to discuss the role of quantitative risk assessments for characterizing risk and uncertainty and delineating appropriate risk management options. Our main concern is situations (risk problems) with large potential consequences, large uncertainties, and/or ambiguities (related to the relevance, meaning, and implications of the decision basis; or related to the values to be protected and the priorities to be made), in particular terrorism risk. We look into the scientific basis of the quantitative risk assessments and the boundaries of the assessments in such a context. Based on a risk perspective that defines risk as uncertainty about and severity of the consequences (or outcomes) of an activity with respect to something that humans value, we advocate a broad risk assessment approach characterizing uncertainties beyond probabilities and expected values. Key features of this approach are qualitative uncertainty assessment and scenario building instruments.

5.
Adam M. Finkel. Risk Analysis, 2014, 34(10): 1785-1794
If exposed to an identical concentration of a carcinogen, every human being would face a different level of risk, determined by his or her genetic, environmental, medical, and other uniquely individual characteristics. Various lines of evidence indicate that this susceptibility variable is distributed rather broadly in the human population, with perhaps a factor of 25‐ to 50‐fold between the center of this distribution and either of its tails, but cancer risk assessment at the EPA and elsewhere has always treated every (adult) human as identically susceptible. The National Academy of Sciences “Silver Book” concluded that EPA and the other agencies should fundamentally correct their mis‐computation of carcinogenic risk in two ways: (1) adjust individual risk estimates upward to provide information about the upper tail; and (2) adjust population risk estimates upward (by about sevenfold) to correct an underestimation due to a mathematical property of the interindividual distribution of human susceptibility, in which the susceptibility averaged over the entire (right‐skewed) population exceeds the median value for the typical human. In this issue of Risk Analysis, Kenneth Bogen disputes the second adjustment and endorses the first, though he also relegates the problem of underestimated individual risks to the realm of “equity concerns” that he says should have little if any bearing on risk management policy. In this article, I show why the basis for the population risk adjustment that the NAS recommended is correct—that current population cancer risk estimates, whether they are derived from animal bioassays or from human epidemiologic studies, likely provide estimates of the median with respect to human variation, which in turn must be an underestimate of the mean. If cancer risk estimates have larger “conservative” biases embedded in them, a premise I have disputed in many previous writings, such a defect would not excuse ignoring this additional bias in the direction of underestimation. I also demonstrate that sensible, legally appropriate, and ethical risk policy must not only inform the public when the tail of the individual risk distribution extends into the “high‐risk” range, but must alter benefit‐cost balancing to account for the need to try to reduce these tail risks preferentially.
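The median-versus-mean argument can be made concrete with a worked example. Assuming, purely for illustration, a lognormal interindividual susceptibility distribution (a form the article itself does not commit to):

```latex
\frac{\text{mean}}{\text{median}}
  = \frac{e^{\mu + \sigma^{2}/2}}{e^{\mu}}
  = e^{\sigma^{2}/2},
\qquad\text{so}\qquad
e^{\sigma^{2}/2} = 7
\;\Longrightarrow\;
\sigma = \sqrt{2\ln 7} \approx 1.97 .
```

For any right-skewed distribution of this kind the mean strictly exceeds the median whenever the spread is nonzero, so a risk estimate centered on the typical (median) person necessarily understates the population-average risk.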

6.
The consequences that climate change could have on infrastructure systems are potentially severe but highly uncertain. This should make risk analysis a natural framework for climate adaptation in infrastructure systems. However, many aspects of climate change, such as weak background knowledge and societal controversy, make it an emerging risk where traditional approaches for risk assessment and management cannot be confidently employed. A number of research developments aimed at addressing these issues have emerged in recent years, such as the development of probabilistic climate projections, climate services, and robust decision frameworks. However, additional research is needed to improve the suitability of these methods for infrastructure planning. In this perspective, we outline some of the challenges in addressing climate change risks to infrastructure and summarize new developments aimed at meeting these challenges. We end by highlighting needs for future research, many of which could be well‐served by expertise within the risk analysis community.

7.
Royce A. Francis. Risk Analysis, 2015, 35(11): 1983-1995
This article argues that “game‐changing” approaches to risk analysis must focus on “democratizing” risk analysis in the same way that information technologies have democratized access to, and production of, knowledge. This argument is motivated by the author's reading of Goble and Bier's analysis, “Risk Assessment Can Be a Game‐Changing Information Technology—But Too Often It Isn't” (Risk Analysis, 2013; 33: 1942–1951), in which living risk assessments are shown to be “game changing” in probabilistic risk analysis. In this author's opinion, Goble and Bier's article focuses on living risk assessment's potential for transforming risk analysis from the perspective of risk professionals—yet, the game‐changing nature of information technologies has typically achieved a much broader reach. Specifically, information technologies change who has access to, and who can produce, information. From this perspective, the author argues that risk assessment is not a game‐changing technology in the same way as the printing press or the Internet because transformative information technologies reduce the cost of production of, and access to, privileged knowledge bases. The author argues that risk analysis does not reduce these costs. The author applies Goble and Bier's metaphor to the chemical risk analysis context, and in doing so proposes key features that transformative risk analysis technology should possess. The author also discusses the challenges and opportunities facing risk analysis in this context. These key features include: clarity in information structure and problem representation, economical information dissemination, increased transparency to nonspecialists, democratized manufacture and transmission of knowledge, and democratic ownership, control, and interpretation of knowledge. The chemical safety decision‐making context illustrates the impact of changing the way information is produced and accessed in the risk context. Ultimately, the author concludes that although new chemical safety regulations do transform access to risk information, they do not transform the costs of producing this information—rather, they change the bearer of these costs. The need for further risk assessment transformation continues to motivate new practical and theoretical developments in risk analysis and management.

8.
Land subsidence risk assessment (LSRA) is a multi‐attribute decision analysis (MADA) problem and is often characterized by both quantitative and qualitative attributes with various types of uncertainty. Therefore, the problem needs to be modeled and analyzed using methods that can handle uncertainty. In this article, we propose an integrated assessment model based on the evidential reasoning (ER) algorithm and fuzzy set theory. The assessment model is structured as a hierarchical framework that regards land subsidence risk as a composite of two key factors: hazard and vulnerability. These factors can be described by a set of basic indicators defined by assessment grades with attributes for transforming both numerical data and subjective judgments into a belief structure. The factor‐level attributes of hazard and vulnerability are combined using the ER algorithm, which is based on the information from a belief structure calculated by the Dempster‐Shafer (D‐S) theory, and a distributed fuzzy belief structure calculated by fuzzy set theory. The results from the combined algorithms yield distributed assessment grade matrices. The application of the model to the Xixi‐Chengnan area, China, illustrates its usefulness and validity for LSRA. The model utilizes a combination of all types of evidence, including all assessment information—quantitative or qualitative, complete or incomplete, and precise or imprecise—to provide assessment grades that define risk assessment on the basis of hazard and vulnerability. The results will enable risk managers to apply different risk prevention measures and mitigation planning based on the calculated risk states.
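As a concrete reference point for the combination step, here is a minimal sketch of Dempster's rule from D-S theory, the evidence-combination operation underlying ER-style aggregation. The assessment grades and mass assignments are hypothetical, not the Xixi-Chengnan values.

```python
# Dempster's rule of combination for two mass functions over
# frozensets of assessment grades (hypothetical masses).

from itertools import product

def combine(m1, m2):
    """Combine two Dempster-Shafer mass functions."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

grades = frozenset({"low", "medium", "high"})
m_hazard = {frozenset({"high"}): 0.6, frozenset({"medium", "high"}): 0.3, grades: 0.1}
m_vuln = {frozenset({"medium"}): 0.5, frozenset({"high"}): 0.2, grades: 0.3}

print(combine(m_hazard, m_vuln))
```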

9.
Research on a credit risk assessment and prediction model for commercial banks
Yu Liyong (于立勇). Management Science (《管理科学》), 2003, 6(5): 46-52, 98
Starting from the nature of commercial bank credit risk, this paper argues that credit risk assessment should fully account for the uncertainty in the safety coefficient of credit funds and the relative character of credit risk. Taking the "credit risk degree" as the system output, it constructs a credit risk assessment and prediction model based on artificial neural networks, laying a foundation for moving beyond the classification-based mode of credit risk assessment and for providing more comprehensive credit decision support.
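A hedged sketch of the general idea, assuming a small feedforward network that maps borrower indicators to a continuous credit risk degree; the features, architecture, and synthetic data below are invented for illustration and are not the paper's model.

```python
# Toy ANN credit risk degree model on synthetic borrower data.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # e.g., liquidity, leverage, profitability, turnover
# synthetic "credit risk degree" target in (0, 1)
risk_degree = 1 / (1 + np.exp(-(0.8 * X[:, 1] - 0.6 * X[:, 2])))

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X), risk_degree)

new_borrower = scaler.transform([[0.1, 1.2, -0.5, 0.3]])
print("predicted credit risk degree:", model.predict(new_borrower)[0])
```

A continuous output of this kind supports graded lending decisions rather than the hard accept/reject classes the paper argues against.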

10.
Jan F. Van Impe. Risk Analysis, 2011, 31(8): 1295-1307
The aim of quantitative microbiological risk assessment is to estimate the risk of illness caused by the presence of a pathogen in a food type, and to study the impact of interventions. Because of inherent variability and uncertainty, risk assessments are generally conducted stochastically, and if possible it is advised to characterize variability separately from uncertainty. Sensitivity analysis indicates to which of the input variables the outcome of a quantitative microbiological risk assessment is most sensitive. Although a number of methods exist to apply sensitivity analysis to a risk assessment with probabilistic input variables (such as contamination, storage temperature, storage duration, etc.), it is challenging to perform sensitivity analysis in the case where a risk assessment includes a separate characterization of variability and uncertainty of input variables. A procedure is proposed that focuses on the relation between risk estimates obtained by Monte Carlo simulation and the location of pseudo‐randomly sampled input variables within the uncertainty and variability distributions. Within this procedure, two methods are used—that is, an ANOVA‐like model and Sobol sensitivity indices—to obtain and compare the impact of variability and of uncertainty of all input variables, and of model uncertainty and scenario uncertainty. As a case study, this methodology is applied to a risk assessment to estimate the risk of contracting listeriosis due to consumption of deli meats.
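As an illustration of the variance-based measure named above, here is a crude first-order Sobol index estimator for a toy storage model. The model form, input ranges, and sample sizes are invented, and the estimator is the textbook Var(E[Y|Xi])/Var(Y) construction, not the article's ANOVA-like procedure.

```python
# Crude first-order Sobol indices for a toy pathogen growth model.

import numpy as np

rng = np.random.default_rng(1)
N, M = 2000, 200                      # outer and inner Monte Carlo sample sizes

def model(temp, duration):
    """Toy log-growth as a function of storage temperature and duration."""
    return 0.3 * temp + 0.05 * temp * duration

def first_order_index(which):
    cond_means = []
    for _ in range(N):
        temp = rng.uniform(2, 12)     # storage temperature (deg C)
        dur = rng.uniform(1, 10)      # storage duration (days)
        if which == "temp":           # freeze temp, vary the other input
            inner = model(temp, rng.uniform(1, 10, M))
        else:                         # freeze duration instead
            inner = model(rng.uniform(2, 12, M), dur)
        cond_means.append(inner.mean())
    total = model(rng.uniform(2, 12, N), rng.uniform(1, 10, N))
    return np.var(cond_means) / np.var(total)   # Var(E[Y|Xi]) / Var(Y)

for name in ("temp", "dur"):
    print(name, round(first_order_index(name), 2))
```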

11.
Risk‐informed decision making is often accompanied by the specification of an acceptable level of risk. Such a target level is compared against the value of a risk metric, usually computed through a probabilistic safety assessment model, to decide about the acceptability of a given design, the launch of a space mission, etc. Importance measures complement the decision process with information about the risk/safety significance of events. However, importance measures do not tell us whether the occurrence of an event can change the overarching decision. By linking value of information and importance measures for probabilistic risk assessment models, this work obtains a value‐of‐information‐based importance measure that brings together the risk metric, risk importance measures, and the risk threshold in one expression. The new importance measure does not impose additional computational burden because it can be calculated from our knowledge of the risk achievement and risk reduction worth, and complements the insights delivered by these importance measures. Several properties are discussed, including the joint decision worth of basic event groups. The application to the large loss of coolant accident sequence of the Advanced Test Reactor illustrates the risk analysis insights.
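The article's combined measure is not reproduced here, but the two ingredients it builds on have standard PRA definitions, sketched below for a toy two-train system with invented probabilities: risk achievement worth (RAW) sets a basic event to certain, risk reduction worth (RRW) sets it to impossible.

```python
# Standard RAW and RRW importance measures for a toy system whose
# risk metric R is the top event probability (hypothetical numbers).

def top_event_prob(pA, pB, pC):
    """Top event: C fails OR both redundant trains A and B fail."""
    return 1 - (1 - pC) * (1 - pA * pB)

base = dict(pA=0.01, pB=0.02, pC=0.001)
R0 = top_event_prob(**base)

for event in base:
    RAW = top_event_prob(**{**base, event: 1.0}) / R0   # event certain to occur
    RRW = R0 / top_event_prob(**{**base, event: 0.0})   # event eliminated
    print(f"{event}: RAW = {RAW:.1f}, RRW = {RRW:.2f}")
```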

12.
Occupational risk rates per hour of exposure have been quantified for 63 occupational accident types for the Dutch working population. Data were obtained from the analysis of more than 9,000 accidents that occurred over a period of six years in the Netherlands and resulted in three types of reportable consequences under Dutch law: (a) fatal injury, (b) permanent injury, and (c) serious recoverable injury requiring at least one day of hospitalization. A Bayesian uncertainty assessment on the value of the risk rates has been performed. Annual risks for each of the 63 occupational accident types have been calculated, including the variability in the annual exposure of the working population to the corresponding hazards. The suitability of three risk measures—individual risk rates, individual annual risk, and number of accidents—is examined and discussed.
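A minimal sketch of the kind of Bayesian uncertainty assessment described, assuming a conjugate Gamma-Poisson model for an accident rate per exposure hour; the counts, exposure hours, and prior below are hypothetical, not the Dutch data.

```python
# Gamma-Poisson Bayesian update of an accident rate per exposure hour.

from scipy.stats import gamma

k = 42                       # observed accidents of one type
T = 1.8e8                    # person-hours of exposure in the study period
a0, b0 = 0.5, 1e4            # vague Gamma(shape, rate) prior

a_post, b_post = a0 + k, b0 + T
posterior = gamma(a_post, scale=1.0 / b_post)

mean = posterior.mean()
lo, hi = posterior.ppf([0.05, 0.95])
print(f"rate per hour: mean {mean:.2e}, 90% CI [{lo:.2e}, {hi:.2e}]")
```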

13.
The rapid industrialization occurring in developing regions of the world brings not only economic benefits, but changes in the types and severity of health and environmental problems that each region experiences. As the industrialized world moves toward the use of risk assessment methodologies to aid in problem evaluation and regulatory and policy decision analysis, it seems inevitable that these methodologies will be applied globally. The changes brought about by rapid industrialization, however, must be viewed within the context of societies that are still struggling with the more traditional and basic environmental problems associated with urban and rural poverty. The urgency of development and the lack of adequate resources for characterizing health and environmental changes, often present under these circumstances, offer special challenges to the application of risk assessment methodologies.

14.
Natural hazards, human-induced accidents, and malicious acts have caused great losses and disruptions to society. After September 11, 2001, critical infrastructure protection has become a national focus in the United States and is likely to remain one for the foreseeable future. Damage to the infrastructures and assets could be mitigated through predisaster planning and actions. A systematic methodology was developed to assess and rank the risks from these multiple hazards in a community of 20,000 people. It is an interdisciplinary study that includes probabilistic risk assessment (PRA), decision analysis, and expert judgment. Scenarios are constructed to show how the initiating events evolve into undesirable consequences. A value tree, based on multi-attribute utility theory (MAUT), is used to capture the decisionmaker's preferences about the impacts on the infrastructures and other assets. The risks from random failures are ranked according to their expected performance index (PI), which is the product of frequency, probabilities, and consequences of a scenario. Risks from malicious acts are ranked according to their PI, as the frequency of attack is not available. A deliberative process is used to capture the factors that could not be addressed in the analysis and to scrutinize the results. This methodology provides a framework for the development of a risk-informed decision strategy. Although this study uses the Massachusetts Institute of Technology campus as a case study of a real project, it is a general methodology that could be used by other similar communities and municipalities.
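A hedged sketch of the PI ranking described above, assuming PI is the product of initiating-event frequency, conditional scenario probability, and a MAUT-weighted consequence score; the weights and scenarios are invented, not the MIT case-study values.

```python
# Performance index (PI) ranking of hypothetical hazard scenarios.

weights = {"people": 0.5, "infrastructure": 0.3, "mission": 0.2}  # value-tree weights

scenarios = [
    # (name, frequency per yr, conditional probability, impact scores in [0, 1])
    ("flood -> power loss", 0.10, 0.4, {"people": 0.2, "infrastructure": 0.7, "mission": 0.5}),
    ("fire -> lab damage", 0.05, 0.6, {"people": 0.4, "infrastructure": 0.5, "mission": 0.3}),
]

def performance_index(freq, prob, impacts):
    consequence = sum(weights[a] * impacts[a] for a in weights)  # MAUT aggregation
    return freq * prob * consequence

ranked = sorted(scenarios, key=lambda s: -performance_index(*s[1:]))
for name, f, p, imp in ranked:
    print(f"{name}: PI = {performance_index(f, p, imp):.4f}")
```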

15.
The paper addresses the question of how operations research (OR) ought to handle decision problems that involve value conflicts. First, we note that early OR was considered essentially value free within the OR community, with a mechanistic systems perspective, although some voiced concern that an analyst should not detach herself from the consequences of her work. Then we propose a value conflict scale, which we use to assess the conflict levels in a small sample of OR applications. We then turn to value identification. In practice, organizational value statements include many kinds of values, and we discuss how values can be sorted out according to ethical categories, which helps in identifying consequentialistic decision criteria. The next question is how values can be enacted in a decision process. We review findings in neuroscience, which indicate that intra-personal decision-making takes place in a field of tension between deliberation and affect. The implication is that low-level conflicts may leave decision-makers too cold for values to be enacted and therefore need an infusion of emotion. On the other hand, emotions in high-level conflicts may run too high to give reason a chance. Emotions, therefore, need to be tempered, and this can be achieved through at least two strategies: a focus on consequences rather than virtues and rules, and discourse ethics. These are the subjects of the last two parts of the paper. We conclude by proposing five ethical rules for OR analysis of value conflicts. An analyst should not regard herself as being detached from the decisions that are made, should be conscious that good decision-making requires temperate emotions that balance affect and deliberation, should promote focus on consequences, should promote the view that stakeholders have intrinsic value and should not be treated instrumentally, and should encourage fair processes to identify stakeholder values.

16.
Risk analysts frequently view the regulation of risks as being largely a matter of decision theory. According to this view, risk analysis methods provide information on the likelihood and severity of various possible outcomes; this information should then be assessed using a decision‐theoretic approach (such as cost/benefit analysis) to determine whether the risks are acceptable, and whether additional regulation is warranted. However, this view ignores the fact that in many industries (particularly industries that are technologically sophisticated and employ specialized risk and safety experts), risk analyses may be done by regulated firms, not by the regulator. Moreover, those firms may have more knowledge about the levels of safety at their own facilities than the regulator does. This creates a situation in which the regulated firm has both the opportunity—and often also the motive—to provide inaccurate (in particular, favorably biased) risk information to the regulator, and hence the regulator has reason to doubt the accuracy of the risk information provided by regulated parties. Researchers have argued that decision theory can deal with many such strategic interactions just as well as game theory can. This is especially true in two‐player, two‐stage games in which the follower has a unique best strategy in response to the leader's strategy, as appears to be the case in the situation analyzed in this article. However, even in such cases, we agree with Cox that game‐theoretic methods and concepts can still be useful. In particular, the tools of mechanism design, and especially the revelation principle, can simplify the analysis of such games because the revelation principle provides rigorous assurance that it is sufficient to analyze only games in which licensees truthfully report their risk levels, making the problem more manageable. Without that, it would generally be necessary to consider much more complicated forms of strategic behavior (including deception), to identify optimal regulatory strategies. Therefore, we believe that the types of regulatory interactions analyzed in this article are better modeled using game theory rather than decision theory. In particular, the goals of this article are to review the relevant literature in game theory and regulatory economics (to stimulate interest in this area among risk analysts), and to present illustrative results showing how the application of game theory can provide useful insights into the theory and practice of risk‐informed regulation.

17.
Many large organizations accomplish their various functions through interactions across their major components. Here, “components” refers to functional entities within a large complex organization, such as business sectors, academic departments, or regional divisions. The dependency between the various components can cause risk to propagate through their overall system. This article presents a risk assessment framework that integrates risk across a diverse set of components to the overall organization functions. This project addresses three major challenges: aggregating risk, estimating component interdependencies including cycles of dependencies, and propagating risk across components. The framework aggregates risk assessments through a value function for severity that is evaluated at the expected outcome of accomplishing planned goals in terms of performance, schedule, and resources. The value function, which represents risk tolerance, scales between defined points corresponding to failure and success. Different risk assessments may be aggregated together. This article presents a novel approach to establishing relationships between the various components. This article develops and compares three network risk propagation models that characterize the overall organizational risk. The U.S. Air Force has applied this risk framework to evaluate success in hypothetical future wars. The analysts employing this risk framework have informed billions of dollars of strategic investment decisions.
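One way to make the propagation idea concrete (this is an illustrative linear model, not necessarily one of the article's three): intrinsic component risks are pushed through a dependency matrix by fixed-point iteration, which also accommodates the cycles of dependencies the abstract mentions. All numbers are hypothetical.

```python
# Fixed-point risk propagation over a dependency graph with a cycle.

import numpy as np

components = ["sector A", "sector B", "sector C"]
intrinsic = np.array([0.30, 0.10, 0.05])   # standalone risk of each component

# D[i, j]: fraction of component j's risk inherited by component i
D = np.array([
    [0.0, 0.2, 0.0],
    [0.3, 0.0, 0.4],
    [0.0, 0.5, 0.0],                       # note the B <-> C cycle
])

risk = intrinsic.copy()
for _ in range(100):                       # iterate to a fixed point
    new = np.clip(intrinsic + D @ risk, 0.0, 1.0)
    if np.allclose(new, risk, atol=1e-9):
        break
    risk = new

for name, r in zip(components, risk):
    print(f"{name}: propagated risk = {r:.3f}")
```

The iteration converges here because the dependency matrix damps influence on each pass; stronger couplings would require a different propagation scheme.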

18.
Prediction of natural disasters and their consequences is difficult due to the uncertainties and complexity of multiple related factors. This article explores the use of domain knowledge and spatial data to construct a Bayesian network (BN) that facilitates the integration of multiple factors and quantification of uncertainties within a consistent system for assessment of catastrophic risk. A BN is chosen due to its advantages such as merging multiple source data and domain knowledge in a consistent system, learning from the data set, inference with missing data, and support of decision making. A key advantage of our methodology is the combination of domain knowledge and learning from the data to construct a robust network. To improve the assessment, we employ spatial data analysis and data mining to extend the training data set, select risk factors, and fine‐tune the network. Another major advantage of our methodology is the integration of an optimal discretizer, informative feature selector, learners, search strategies for local topologies, and Bayesian model averaging. These techniques all contribute to a robust prediction of risk probability of natural disasters. In the flood disaster case study, our methodology achieved a better probability of detection of high risk, a better precision, and a better ROC area compared with other methods, using both cross‐validation and prediction of catastrophic risk based on historical data. Our results suggest that BN is a good alternative for risk assessment and as a decision tool in the management of catastrophic risk.
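A minimal discrete BN sketch in the spirit of the flood study, with an invented Rainfall -> Flood <- Elevation structure and made-up conditional probability tables, using brute-force enumeration for inference.

```python
# Tiny Bayesian network: P(flood | evidence) by enumeration.

from itertools import product

p_rain = {True: 0.2, False: 0.8}
p_low_elev = {True: 0.3, False: 0.7}
p_flood = {  # P(flood | rain, low elevation)
    (True, True): 0.9, (True, False): 0.4,
    (False, True): 0.2, (False, False): 0.01,
}

def posterior_flood(evidence):
    """Sum out unobserved parent nodes, keeping states consistent with evidence."""
    num = den = 0.0
    for rain, low in product([True, False], repeat=2):
        if any(evidence.get(k) not in (None, v)
               for k, v in (("rain", rain), ("low_elev", low))):
            continue
        joint = p_rain[rain] * p_low_elev[low]
        num += joint * p_flood[(rain, low)]
        den += joint
    return num / den

print("P(flood)        =", round(posterior_flood({}), 3))
print("P(flood | rain) =", round(posterior_flood({"rain": True}), 3))
```

Real applications replace the hand-written tables with structures and parameters learned from spatial data, which is the article's focus.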

19.
A significant majority of hazardous materials (hazmat) shipments are moved via the highway and railroad networks, wherein the latter mode is generally preferred for long distances. Although the characteristics of highway transportation make trucks the most dominant surface transportation mode, should it be preferred for hazmat whose accidental release can cause catastrophic consequences? We answer this question by first developing a novel and comprehensive assessment methodology—which incorporates the sequence of events leading to hazmat release from the derailed railcars and the resulting consequence—to measure rail transport risk, and second making use of the proposed assessment methodology to analyze hazmat transport risk resulting from meeting the demand for chlorine and ammonia in six distinct corridors in North America. We demonstrate that rail transport will reduce risk, irrespective of the risk measure and the transport corridor, and that every attempt must be made to use railroads to transport these shipments.
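A back-of-the-envelope version of the mode comparison, assuming expected consequence per shipment equals incident rate times conditional release probability times exposed population, accumulated over corridor length. Every parameter value below is invented, not the article's estimate.

```python
# Hypothetical road vs. rail expected-consequence comparison.

def corridor_risk(p_incident_per_km, p_release_given_incident, exposure, km):
    """Expected persons affected per shipment over a corridor."""
    return km * p_incident_per_km * p_release_given_incident * exposure

km = 1200                                     # corridor length
road = corridor_risk(4e-7, 0.08, exposure=3000, km=km)
rail = corridor_risk(1e-7, 0.05, exposure=3000, km=km)

print(f"road: {road:.4f} expected persons affected per shipment")
print(f"rail: {rail:.4f} expected persons affected per shipment")
print(f"rail/road risk ratio: {rail / road:.2f}")
```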

20.
Traditional probabilistic risk assessment (PRA), of the type originally developed for engineered systems, is still proposed for terrorism risk analysis. We show that such PRA applications are unjustified in general. The capacity of terrorists to seek and use information and to actively research different attack options before deciding what to do raises unique features of terrorism risk assessment that are not adequately addressed by conventional PRA for natural and engineered systems—in part because decisions based on such PRA estimates do not adequately hedge against the different probabilities that attackers may eventually act upon. These probabilities may differ from the defender's (even if the defender's experts are thoroughly trained, well calibrated, unbiased probability assessors) because they may be conditioned on different information. We illustrate the fundamental differences between PRA and terrorism risk analysis, and suggest use of robust decision analysis for risk management when attackers may know more about some attack options than we do.
