Similar Documents
1.
Behavioral decision research has demonstrated that judgments and decisions of ordinary people and experts are subject to numerous biases. Decision and risk analysis were designed to improve judgments and decisions and to overcome many of these biases. However, when eliciting model components and parameters from decisionmakers or experts, analysts often face the very biases they are trying to help overcome. When these inputs are biased they can seriously reduce the quality of the model and resulting analysis. Some of these biases are due to faulty cognitive processes; some are due to motivations for preferred analysis outcomes. This article identifies the cognitive and motivational biases that are relevant for decision and risk analysis because they can distort analysis inputs and are difficult to correct. We also review and provide guidance about the existing debiasing techniques to overcome these biases. In addition, we describe some biases that are less relevant because they can be corrected by using logic or decomposing the elicitation task. We conclude the article with an agenda for future research.

2.
How can risk analysts help to improve policy and decision making when the correct probabilistic relation between alternative acts and their probable consequences is unknown? This practical challenge of risk management with model uncertainty arises in problems from preparing for climate change to managing emerging diseases to operating complex and hazardous facilities safely. We review constructive methods for robust and adaptive risk analysis under deep uncertainty. These methods are not yet as familiar to many risk analysts as older statistical and model‐based methods, such as the paradigm of identifying a single “best‐fitting” model and performing sensitivity analyses for its conclusions. They provide genuine breakthroughs for improving predictions and decisions when the correct model is highly uncertain. We demonstrate their potential by summarizing a variety of practical risk management applications.

3.
E. S. Levine, Risk Analysis, 2012, 32(2): 294-303
Many analyses conducted to inform security decisions depend on estimates of the conditional probabilities of different attack alternatives. These probabilities are difficult to estimate since analysts have limited access to the adversary and limited knowledge of the adversary’s utility function, so subject matter experts often provide the estimates through direct elicitation. In this article, we describe a method of using uncertainty in utility function value tradeoffs to model the adversary’s decision process and solve for the conditional probabilities of different attacks in closed form. The conditional probabilities are suitable to be used as inputs to probabilistic risk assessments and other decision support techniques. The process we describe is an extension of value‐focused thinking and is broadly applicable, including in general business decision making. We demonstrate the use of this technique with simple examples.
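The article derives the conditional probabilities in closed form; the same idea can be sketched more crudely by simulation: treat the adversary's value tradeoff weight as uncertain, and take each attack's probability to be the chance that it maximizes the adversary's weighted value. The two attack attributes, their scores, and the uniform weight distribution below are illustrative assumptions, not Levine's model:

```python
import random

def attack_probabilities(scores, n_draws=10_000, seed=0):
    """Estimate conditional attack probabilities from an uncertain
    value tradeoff weight.

    scores: dict attack -> (damage_score, feasibility_score), each in [0, 1].
    A weight w on damage (1 - w on feasibility) is drawn uniformly; the
    adversary is assumed to choose the attack maximizing the weighted
    value. Each attack's probability is the fraction of draws in which
    it is optimal.
    """
    rng = random.Random(seed)
    counts = {a: 0 for a in scores}
    for _ in range(n_draws):
        w = rng.random()
        best = max(scores, key=lambda a: w * scores[a][0] + (1 - w) * scores[a][1])
        counts[best] += 1
    return {a: c / n_draws for a, c in counts.items()}

# Attack "A" is damaging but hard; "B" is modest but easy.
probs = attack_probabilities({"A": (0.9, 0.2), "B": (0.3, 0.8)})
```

With these (hypothetical) scores, "A" is optimal exactly when w > 0.5, so both probabilities come out near 0.5 and sum to 1, making them usable directly as chance-node inputs in a probabilistic risk assessment.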

4.
Qualitative systems for rating animal antimicrobial risks using ordered categorical labels such as “high,” “medium,” and “low” can potentially simplify risk assessment input requirements used to inform risk management decisions. But do they improve decisions? This article compares the results of qualitative and quantitative risk assessment systems and establishes some theoretical limitations on the extent to which they are compatible. In general, qualitative risk rating systems satisfying conditions found in real‐world rating systems and guidance documents and proposed as reasonable make two types of errors: (1) Reversed rankings, i.e., assigning higher qualitative risk ratings to situations that have lower quantitative risks; and (2) Uninformative ratings, e.g., frequently assigning the most severe qualitative risk label (such as “high”) to situations with arbitrarily small quantitative risks and assigning the same ratings to risks that differ by many orders of magnitude. Therefore, despite their appealing consensus‐building properties, flexibility, and appearance of thoughtful process in input requirements, qualitative rating systems as currently proposed often do not provide sufficient information to discriminate accurately between quantitatively small and quantitatively large risks. The value of information (VOI) that they provide for improving risk management decisions can be zero if most risks are small but a few are large, since qualitative ratings may then be unable to confidently distinguish the large risks from the small. These limitations suggest that it is important to continue to develop and apply practical quantitative risk assessment methods, since qualitative ones are often unreliable.
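Both error types are easy to reproduce with a toy rating scheme. The thresholds and the "worse of the two labels" combination rule below are assumptions chosen for illustration, not taken from any real guidance document, but they resemble rules found in practice:

```python
def label(x):
    # illustrative ordered-category bins for a probability or a normalized consequence
    return "high" if x >= 0.7 else "medium" if x >= 0.3 else "low"

def qualitative_rating(p, c):
    # assumed combination rule: overall rating is the worse of the two labels
    order = {"low": 0, "medium": 1, "high": 2}
    return max(label(p), label(c), key=order.get)

# (1) Reversed ranking: the "high"-rated situation has the smaller quantitative risk.
r_high = qualitative_rating(0.9, 0.01)   # quantitative risk p*c = 0.009
r_med = qualitative_rating(0.4, 0.4)     # quantitative risk p*c = 0.16

# (2) Uninformative rating: risks differing by orders of magnitude share one label.
same_label = qualitative_rating(0.9, 1e-6) == qualitative_rating(0.9, 0.9)
```

Here `r_high` is "high" while `r_med` is "medium", even though the "medium" situation carries roughly 18 times the quantitative risk, and `same_label` is true despite the two risks differing by about five orders of magnitude.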

5.
Modern technology, together with an advanced economy, can provide a good or service in myriad ways, giving us choices on what to produce and how to produce it. To make those choices more intelligently, society needs to know not only the market price of each alternative, but the associated health and environmental consequences. A fair comparison requires evaluating the consequences across the whole "life cycle"--from the extraction of raw materials and processing to manufacture/construction, use, and end-of-life--of each alternative. Focusing on only one stage (e.g., manufacture) of the life cycle is often misleading. Unfortunately, analysts and researchers still have only rudimentary tools to quantify the materials and energy inputs and the resulting damage to health and the environment. Life cycle assessment (LCA) provides an overall framework for identifying and evaluating these implications. Since the 1960s, considerable progress has been made in developing methods for LCA, especially in characterizing, qualitatively and quantitatively, environmental discharges. However, few of these analyses have attempted to assess the quantitative impact on the environment and health of material inputs and environmental discharges. Risk analysis and LCA are connected closely. While risk analysis has characterized and quantified the health risks of exposure to a toxicant, the policy implications have not been clear. Inferring that an occupational or public health exposure carries a nontrivial risk is only the first step in formulating a policy response. A broader framework, including LCA, is needed to see which response is likely to lower the risk without creating high risks elsewhere. Even more important, LCA has floundered at the stage of translating an inventory of environmental discharges into estimates of impact on health and the environment.
Without the impact analysis, policymakers must revert to some simple rule, such as that all discharges, regardless of which chemical, which medium, and where they are discharged, are equally toxic. Thus, risk analysts should seek LCA guidance in translating a risk analysis into policy conclusions or even advice to those at risk. LCA needs the help of risk analysis to go beyond simplistic assumptions about the implications of a discharge inventory. We demonstrate the need and rationale for LCA, present a brief history of LCA, present examples of the application of this tool, note the limitations of LCA models, and present several methods for incorporating risk assessment into LCA. However, we warn the reader not to expect too much. A comprehensive comparison of the health and environmental implications of alternatives is beyond the state of the art. LCA is currently not able to provide risk analysts with detailed information on the chemical form and location of the environmental discharges that would allow detailed estimation of the risks to individuals due to toxicants. For example, a challenge for risk analysts is to estimate health and other risks where the location and chemical speciation are not characterized precisely. Providing valuable information to decisionmakers requires advances in both LCA and risk analysis. These two disciplines should be closely linked, since each has much to contribute to the other.

6.
Louis Anthony Cox, Jr., Risk Analysis, 2009, 29(8): 1062-1068
Risk analysts often analyze adversarial risks from terrorists or other intelligent attackers without mentioning game theory. Why? One reason is that many adversarial situations—those that can be represented as attacker‐defender games, in which the defender first chooses an allocation of defensive resources to protect potential targets, and the attacker, knowing what the defender has done, then decides which targets to attack—can be modeled and analyzed successfully without using most of the concepts and terminology of game theory. However, risk analysis and game theory are also deeply complementary. Game‐theoretic analyses of conflicts require modeling the probable consequences of each choice of strategies by the players and assessing the expected utilities of these probable consequences. Decision and risk analysis methods are well suited to accomplish these tasks. Conversely, game‐theoretic formulations of attack‐defense conflicts (and other adversarial risks) can greatly improve upon some current risk analyses that attempt to model attacker decisions as random variables or uncertain attributes of targets (“threats”) and that seek to elicit their values from the defender's own experts. Game theory models that clarify the nature of the interacting decisions made by attackers and defenders and that distinguish clearly between strategic choices (decision nodes in a game tree) and random variables (chance nodes, not controlled by either attacker or defender) can produce more sensible and effective risk management recommendations for allocating defensive resources than current risk scoring models. Thus, risk analysis and game theory are (or should be) mutually reinforcing.
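The sequential attacker-defender structure described above can be sketched by simple enumeration: the defender tries each feasible allocation, anticipates the attacker's best response (strike the target with the highest remaining expected loss), and keeps the allocation that minimizes that worst loss. The target values, the budget, and the 1/(1 + d) success-probability form are all assumed purely for illustration:

```python
from itertools import product

def best_defense(values, budget, step=1):
    """Backward induction for a toy attacker-defender game.

    values: dict target -> loss if the target is attacked undefended.
    A defense level d on a target scales expected loss by 1/(1 + d)
    (an assumed, illustrative form). The attacker observes the
    allocation and hits the target with the highest expected loss;
    the defender minimizes that worst-case expected loss.
    """
    targets = list(values)
    levels = range(0, budget + 1, step)
    best_alloc, best_loss = None, float("inf")
    for alloc in product(levels, repeat=len(targets)):
        if sum(alloc) > budget:
            continue
        # attacker's best response: maximum expected loss over targets
        loss = max(values[t] / (1 + d) for t, d in zip(targets, alloc))
        if loss < best_loss:
            best_alloc, best_loss = dict(zip(targets, alloc)), loss
    return best_alloc, best_loss

alloc, loss = best_defense({"port": 10, "plant": 4}, budget=4)
```

Note what the enumeration captures that a static "threat score" cannot: the optimal allocation is not proportional to target value, because the defender must anticipate the attacker shifting to whichever target is left most exposed.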

7.
This article presents a process for an integrated policy analysis that combines risk assessment and benefit-cost analysis. This concept, which explicitly combines the two types of related analyses, seems to contradict the long-accepted risk analysis paradigm of separating risk assessment and risk management since benefit-cost analysis is generally considered to be a part of risk management. Yet that separation has become a problem because benefit-cost analysis uses risk assessment results as a starting point and considerable debate over the last several years focused on the incompatibility of the use of upper bounds or "safe" point estimates in many risk assessments with benefit-cost analysis. The problem with these risk assessments is that they ignore probabilistic information. As advanced probabilistic techniques for risk assessment emerge and economic analysts receive distributions of risks instead of point estimates, the artificial separation between risk analysts and the economic/decision analysts complicates the overall analysis. In addition, recent developments in countervailing risk theory suggest that combining the risk and benefit-cost analyses is required to fully understand the complexity of choices and tradeoffs faced by the decisionmaker. This article also argues that the separation of analysis and management is important, but that benefit-cost analysis has been wrongly classified into the risk management category and that the analytical effort associated with understanding the economic impacts of risk reduction actions needs to be part of a broader risk assessment process.

8.
9.
Improving Risk Communication
This paper explores reasons for difficulties in communicating risks among analysts, the lay public, the media, and regulators. Formulating risk communication problems as decisions involving objectives and alternatives helps to identify strategies for overcoming these difficulties. Several strategies are suggested to achieve risk communication objectives like improving public knowledge about risks and risk management, encouraging risk reduction behavior, understanding public values and concerns, and increasing trust and credibility.

10.
The bounding analysis methodology described by Ha-Duong et al. (this issue) is logically incomplete and invites serious misuse and misinterpretation, as their own example and interpretation illustrate. A key issue is the extent to which these problems are inherent in their methodology, and resolvable by a logically complete assessment (such as Monte Carlo or Bayesian risk assessment), as opposed to being general problems in any risk-assessment methodology. I here attempt to apportion the problems between those inherent in the proposed bounding analysis and those that are more general, such as reliance on questionable expert elicitations. I conclude that the specific methodology of Ha-Duong et al. suffers from logical gaps in the definition and construction of inputs, and hence should not be used in the form proposed. Furthermore, the labor required to do a sound bounding analysis is great enough so that one may as well skip that analysis and carry out a more logically complete probabilistic analysis, one that will better inform the consumer of the appropriate level of uncertainty. If analysts insist on carrying out a bounding analysis in place of more thorough assessments, extensive analyses of sensitivity to inputs and assumptions will be essential to display uncertainties, arguably more essential than it would be in full probabilistic analyses.

11.
Risk Analysis, 2018, 38(5): 876-888
To solve real‐life problems—such as those related to technology, health, security, or climate change—and make suitable decisions, risk is nearly always a main issue. Different types of sciences are often supporting the work, for example, statistics, natural sciences, and social sciences. Risk analysis approaches and methods are also commonly used, but risk analysis is not broadly accepted as a science in itself. A key problem is the lack of explanatory power and large uncertainties when assessing risk. This article presents an emerging new risk analysis science based on novel ideas and theories on risk analysis developed in recent years by the risk analysis community. It builds on a fundamental change in thinking, from the search for accurate predictions and risk estimates, to knowledge generation related to concepts, theories, frameworks, approaches, principles, methods, and models to understand, assess, characterize, communicate, and (in a broad sense) manage risk. Examples are used to illustrate the importance of this distinct risk analysis science for solving risk problems, supporting science in general and other disciplines in particular.

12.
Most attacker–defender games consider players as risk neutral, whereas in reality attackers and defenders may be risk seeking or risk averse. This article studies the impact of players' risk preferences on their equilibrium behavior and its effect on the notion of deterrence. In particular, we study the effects of risk preferences in a single‐period, sequential game where a defender has a continuous range of investment levels that could be strategically chosen to potentially deter an attack. This article presents analytic results related to the effect of attacker and defender risk preferences on the optimal defense effort level and their impact on the deterrence level. Numerical illustrations and some discussion of the effect of risk preferences on deterrence and the utility of using such a model are provided, as well as sensitivity analysis of continuous attack investment levels and uncertainty in the defender's beliefs about the attacker's risk preference. A key contribution of this article is the identification of specific scenarios in which the defender using a model that takes into account risk preferences would be better off than a defender using a traditional risk‐neutral model. This study provides insights that could be used by policy analysts and decisionmakers involved in investment decisions in security and safety.
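One way to see why the attacker's risk preference matters for deterrence is a minimal numerical sketch (not the article's model): give the attacker an exponential utility function and find the smallest defensive investment at which attacking becomes worse than not attacking. The success-probability form exp(-k*x), the payoffs, and all parameter values are assumptions chosen for illustration:

```python
import math

def min_deterring_investment(a, gain=10.0, cost=2.0, k=0.5):
    """Smallest defensive investment x that deters an attacker with
    exponential utility u(y) = (1 - exp(-a*y))/a (a = 0: risk neutral,
    a > 0: risk averse). An attack succeeds with probability exp(-k*x),
    costs `cost` to mount, and yields `gain` on success; all of these
    forms and values are illustrative assumptions.
    """
    def u(y):
        return y if a == 0 else (1 - math.exp(-a * y)) / a
    for i in range(0, 2001):              # grid search over x in [0, 20]
        x = i * 0.01
        p = math.exp(-k * x)
        # deterred once attacking is worth less than not attacking (u(0) = 0)
        if p * u(gain - cost) + (1 - p) * u(-cost) < 0:
            return x
    return None

x_neutral = min_deterring_investment(a=0.0)
x_averse = min_deterring_investment(a=0.5)   # risk-averse attacker
```

In this sketch the risk-averse attacker is deterred at a much lower investment level than the risk-neutral one, illustrating the article's point that a defender using a risk-neutral model of the attacker may substantially over- or under-invest.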

13.
A number of investigators have explored the use of value of information (VOI) analysis to evaluate alternative information collection procedures in diverse decision-making contexts. This paper presents an analytic framework for determining the value of toxicity information used in risk-based decision making. The framework is specifically designed to explore the trade-offs between cost, timeliness, and uncertainty reduction associated with different toxicity-testing methodologies. The use of the proposed framework is demonstrated by two illustrative applications which, although based on simplified assumptions, show the insights that can be obtained through the use of VOI analysis. Specifically, these results suggest that timeliness of information collection has a significant impact on estimates of the VOI of chemical toxicity tests, even in the presence of smaller reductions in uncertainty. The framework introduces the concept of the expected value of delayed sample information, as an extension to the usual expected value of sample information, to accommodate the reductions in value resulting from delayed decision making. Our analysis also suggests that lower cost and higher throughput testing also may be beneficial in terms of public health benefits by increasing the number of substances that can be evaluated within a given budget. When the relative value is expressed in terms of return-on-investment per testing strategy, the differences can be substantial.
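The effect of delay on information value can be sketched with a deliberately simplified model (perfect rather than sample information, and a toy payoff structure, so this is not the paper's framework): until the test result arrives, the decisionmaker must act on the prior; afterwards, on the revealed state. The value of the delayed test is then proportional to the periods remaining after the delay:

```python
def ev_delayed_perfect_info(p_toxic, benefit, loss, horizon, delay):
    """Expected value of delayed (perfect) toxicity information.

    Per-period payoffs (illustrative assumptions): allowing the substance
    yields `benefit` if safe and -`loss` if toxic; banning yields 0.
    For the first `delay` periods the decisionmaker acts on the prior;
    for the remaining periods, on the revealed state. Returns the value
    of the delayed test relative to never testing.
    """
    prior_value = max(0.0, (1 - p_toxic) * benefit - p_toxic * loss)
    perfect_value = (1 - p_toxic) * benefit   # ban when toxic, allow when safe
    evpi_per_period = perfect_value - prior_value
    return (horizon - delay) * evpi_per_period

# Same uncertainty reduction, different timeliness:
fast = ev_delayed_perfect_info(0.3, benefit=1.0, loss=5.0, horizon=10, delay=1)
slow = ev_delayed_perfect_info(0.3, benefit=1.0, loss=5.0, horizon=10, delay=6)
```

Even with identical uncertainty reduction, the slower test is worth far less here, which is the qualitative point: timeliness can dominate small differences in how much a test reduces uncertainty.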

14.
In counterterrorism risk management decisions, the analyst can choose to represent terrorist decisions as defender uncertainties or as attacker decisions. We perform a comparative analysis of probabilistic risk analysis (PRA) methods including event trees, influence diagrams, Bayesian networks, decision trees, game theory, and combined methods on the same illustrative examples (container screening for radiological materials) to get insights into the significant differences in assumptions and results. A key tenet of PRA and decision analysis is the use of subjective probability to assess the likelihood of possible outcomes. For each technique, we compare the assumptions, probability assessment requirements, risk levels, and potential insights for risk managers. We find that assessing the distribution of potential attacker decisions is a complex judgment task, particularly considering the adaptation of the attacker to defender decisions. Intelligent adversary risk analysis and adversarial risk analysis are extensions of decision analysis and sequential game theory that help to decompose such judgments. These techniques explicitly show the adaptation of the attacker and the resulting shift in risk based on defender decisions.

15.
Risk Analysis, 2018, 38(8): 1529-1533
In the field of risk analysis, the normative value systems underlying accepted methodology are rarely explicitly discussed. This perspective provides a critique of the various ethical frameworks that can be used in risk assessments and risk management decisions. The goal is to acknowledge philosophical weaknesses that should be considered and communicated in order to improve the public acceptance of the work of risk analysts.

16.
Coastal flood risk is expected to increase as a result of climate change effects, such as sea level rise, and socioeconomic growth. To support policymakers in making adaptation decisions, accurate flood risk assessments that account for the influence of complex adaptation processes on the developments of risks are essential. In this study, we integrate the dynamic adaptive behavior of homeowners within a flood risk modeling framework. Focusing on building-level adaptation and flood insurance, the agent-based model (DYNAMO) is benchmarked with empirical data for New York City, USA. The model simulates the National Flood Insurance Program (NFIP) and frequently proposed reforms to evaluate their effectiveness. The model is applied to a case study of Jamaica Bay, NY. Our results indicate that risk-based premiums can improve insurance penetration rates and the affordability of insurance compared to the baseline NFIP market structure. While a premium discount for disaster risk reduction incentivizes more homeowners to invest in dry-floodproofing measures, it does not significantly improve affordability. A low interest rate loan for financing risk-mitigation investments improves the uptake and affordability of dry-floodproofing measures. The benchmark and sensitivity analyses demonstrate how the behavioral component of our model matches empirical data and provides insights into the underlying theories and choices that autonomous agents make.

17.
In Science and Decisions: Advancing Risk Assessment, the National Research Council recommends improvements in the U.S. Environmental Protection Agency's approach to risk assessment. The recommendations aim to increase the utility of these assessments, embedding them within a new risk‐based decision‐making framework. The framework involves first identifying the problem and possible options for addressing it, conducting related analyses, then reviewing the results and making the risk management decision. Experience with longstanding requirements for regulatory impact analysis provides insights into the implementation of this framework. First, neither the Science and Decisions framework nor the framework for regulatory impact analysis should be viewed as a static or linear process, where each step is completed before moving on to the next. Risk management options are best evaluated through an iterative and integrative procedure. The extent to which a hazard has been previously studied will strongly influence analysts’ ability to identify options prior to conducting formal analyses, and these options will be altered and refined as the analysis progresses. Second, experience with regulatory impact analysis suggests that legal and political constraints may limit the range of options assessed, contrary to both existing guidance for regulatory impact analysis and the Science and Decisions recommendations. Analysts will need to work creatively to broaden the range of options considered. Finally, the usefulness of regulatory impact analysis has been significantly hampered by the inability to quantify many health impacts of concern, suggesting that the scientific improvements offered within Science and Decisions will fill a crucial research gap.

18.
Risk‐benefit analyses are introduced as a new paradigm for old problems. However, in many cases it is not always necessary to perform a full comprehensive and expensive quantitative risk‐benefit assessment to solve the problem, nor is it always possible, given the lack of required data. The choice to continue from a more qualitative to a full quantitative risk‐benefit assessment can be made using a tiered approach. In this article, this tiered approach for risk‐benefit assessment will be addressed using a decision tree. The tiered approach described uses the same four steps as the risk assessment paradigm: hazard and benefit identification, hazard and benefit characterization, exposure assessment, and risk‐benefit characterization, albeit in a different order. For the purpose of this approach, the exposure assessment has been moved upward and the dose‐response modeling (part of hazard and benefit characterization) is moved to a later stage. The decision tree includes several stop moments, depending on the situation where the gathered information is sufficient to answer the initial risk‐benefit question. The approach has been tested for two food ingredients. The decision tree presented in this article is useful to assist on a case‐by‐case basis a risk‐benefit assessor and policymaker in making informed choices when to stop or continue with a risk‐benefit assessment.

19.
Concern about the degree of uncertainty and potential conservatism in deterministic point estimates of risk has prompted researchers to turn increasingly to probabilistic methods for risk assessment. With Monte Carlo simulation techniques, distributions of risk reflecting uncertainty and/or variability are generated as an alternative. In this paper the compounding of conservatism(1) between the level associated with point estimate inputs selected from probability distributions and the level associated with the deterministic value of risk calculated using these inputs is explored. Two measures of compounded conservatism are compared and contrasted. The first measure considered, F, is defined as the ratio of the risk value, R_d, calculated deterministically as a function of n inputs each at the jth percentile of its probability distribution, and the risk value, R_j, that falls at the jth percentile of the simulated risk distribution (i.e., F = R_d/R_j). The percentile of the simulated risk distribution that corresponds to the deterministic value, R_d, serves as a second measure of compounded conservatism. Analytical results for simple products of lognormal distributions are presented. In addition, a numerical treatment of several complex cases is presented using five simulation analyses from the literature to illustrate. Overall, there are cases in which conservatism compounds dramatically for deterministic point estimates of risk constructed from upper percentiles of input parameters, as well as those for which the effect is less notable. The analytical and numerical techniques discussed are intended to help analysts explore the factors that influence the magnitude of compounding conservatism in specific cases.
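The lognormal-product case admits a direct simulation check of the first measure. The sketch below (standard lognormal inputs with median 1, fixed j = 0.95; parameter choices are illustrative) computes F as the ratio of the deterministic product of 95th-percentile inputs to the simulated 95th percentile of the risk distribution:

```python
import math
import random

def compounding_factor(n, sigma=1.0, n_sims=200_000, seed=1):
    """Compounded conservatism F = R_d/R_j for a risk that is the product
    of n independent lognormal inputs (median 1, given sigma), j = 0.95.

    R_d: deterministic product with every input at its 95th percentile.
    R_j: 95th percentile of the simulated risk distribution.
    """
    rng = random.Random(seed)
    z95 = 1.6448536269514722                 # 95th-percentile standard normal
    r_d = math.exp(n * sigma * z95)          # every input at its 95th percentile
    sims = sorted(
        math.exp(sum(rng.gauss(0.0, sigma) for _ in range(n)))
        for _ in range(n_sims)
    )
    r_j = sims[int(0.95 * n_sims)]           # 95th percentile of simulated risk
    return r_d / r_j
```

Since the product of n such inputs is lognormal with scale sigma*sqrt(n), F = exp(z*sigma*(n - sqrt(n))) analytically: about 8 for n = 3 with sigma = 1, and exactly 1 for n = 1, so the compounding grows rapidly with the number of upper-percentile inputs.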

20.
Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean‐variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean‐variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean‐variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann-Morgenstern utility theory.
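The flavor of the argument can be shown with a two-line numerical example (the linear mean-variance score and its parameters below are assumptions for illustration, not the note's proof): a prospect with a positive probability of gain and zero probability of loss can still score below the status quo once the stake is large enough, because variance grows with the square of the gain.

```python
def mv_score(p, gain, k=1.0):
    """Mean-variance score mu - k*var for a prospect paying `gain`
    with probability p and 0 otherwise (so loss is impossible)."""
    mu = p * gain
    var = p * (1 - p) * gain ** 2
    return mu - k * var

# A free lottery with no downside can be scored below the status quo (score 0):
small = mv_score(0.5, 1.0)    # 0.5 - 0.25  -> accepted
large = mv_score(0.5, 10.0)   # 5.0 - 25.0  -> rejected, despite zero loss risk
```

Rejecting `large` means turning down a free 50% chance at a gain with no possibility of loss, which is exactly the dominance violation the note exploits, and a reason HS&E analysts prefer probability-consequence descriptions of risk over variance alone.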

