Similar documents
 20 similar documents found (search time: 23 ms)
1.
2.
We develop and apply a judgment‐based approach to selecting robust alternatives, which are defined here as reasonably likely to achieve objectives, over a range of uncertainties. The intent is to develop an approach that is more practical in terms of data and analysis requirements than current approaches, informed by the literature and experience with probability elicitation and judgmental forecasting. The context involves decisions about managing forest lands that have been severely affected by mountain pine beetles in British Columbia, a pest infestation that is climate‐exacerbated. A forest management decision was developed as the basis for the context, objectives, and alternatives for land management actions, to frame and condition the judgments. A wide range of climate forecasts, taken to represent the 10–90% levels on cumulative distributions for future climate, were developed to condition judgments. An elicitation instrument was developed, tested, and revised to serve as the basis for eliciting probabilistic three‐point distributions regarding the performance of selected alternatives, over a set of relevant objectives, in the short and long term. The elicitations were conducted in a workshop comprising 14 regional forest management specialists. We employed the concept of stochastic dominance to help identify robust alternatives. We used extensive sensitivity analysis to explore the patterns in the judgments, and also considered the preferred alternatives for each individual expert. The results show that two alternatives that are more flexible than the current policies are judged, in terms of stochastic dominance, more likely to perform better on average than the current alternatives. The results suggest that judgmental approaches to robust decision making deserve greater attention and testing.
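The stochastic-dominance screen described above can be illustrated with a small sketch. The code below checks first-order stochastic dominance between two discretized performance distributions; the alternative names, scores, and 0.25/0.50/0.25 weights on the three elicited points are hypothetical placeholders, not values from the study.

```python
import numpy as np

def first_order_dominates(values_a, probs_a, values_b, probs_b):
    """Return True if alternative A first-order stochastically dominates B:
    A's CDF lies at or below B's CDF at every performance level, and strictly
    below at some level (higher performance is better)."""
    values_a, probs_a = np.asarray(values_a, float), np.asarray(probs_a, float)
    values_b, probs_b = np.asarray(values_b, float), np.asarray(probs_b, float)
    grid = np.union1d(values_a, values_b)
    cdf_a = np.array([probs_a[values_a <= x].sum() for x in grid])
    cdf_b = np.array([probs_b[values_b <= x].sum() for x in grid])
    return bool(np.all(cdf_a <= cdf_b + 1e-12) and np.any(cdf_a < cdf_b - 1e-12))

# Hypothetical three-point elicitations (10th/50th/90th percentile performance scores),
# weighted 0.25/0.50/0.25 as a simple discretization.
flexible_vals, flexible_probs = [40, 60, 85], [0.25, 0.50, 0.25]
current_vals,  current_probs  = [30, 55, 80], [0.25, 0.50, 0.25]
print(first_order_dominates(flexible_vals, flexible_probs,
                            current_vals, current_probs))   # True: "flexible" dominates
```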

3.
Louis Anthony Cox, Jr. Risk Analysis, 2009, 29(8):1062-1068.
Risk analysts often analyze adversarial risks from terrorists or other intelligent attackers without mentioning game theory. Why? One reason is that many adversarial situations—those that can be represented as attacker‐defender games, in which the defender first chooses an allocation of defensive resources to protect potential targets, and the attacker, knowing what the defender has done, then decides which targets to attack—can be modeled and analyzed successfully without using most of the concepts and terminology of game theory. However, risk analysis and game theory are also deeply complementary. Game‐theoretic analyses of conflicts require modeling the probable consequences of each choice of strategies by the players and assessing the expected utilities of these probable consequences. Decision and risk analysis methods are well suited to accomplish these tasks. Conversely, game‐theoretic formulations of attack‐defense conflicts (and other adversarial risks) can greatly improve upon some current risk analyses that attempt to model attacker decisions as random variables or uncertain attributes of targets (“threats”) and that seek to elicit their values from the defender's own experts. Game theory models that clarify the nature of the interacting decisions made by attackers and defenders and that distinguish clearly between strategic choices (decision nodes in a game tree) and random variables (chance nodes, not controlled by either attacker or defender) can produce more sensible and effective risk management recommendations for allocating defensive resources than current risk scoring models. Thus, risk analysis and game theory are (or should be) mutually reinforcing.

4.
Risk Analysis, 2018, 38(5):1070-1084.
Human exposure to bacteria resistant to antimicrobials and transfer of related genes is a complex issue and occurs, among other pathways, via meat consumption. In a context of limited resources, the prioritization of risk management activities is essential. Since the antimicrobial resistance (AMR) situation differs substantially between countries, prioritization should be country specific. The objective of this study was to develop a systematic and transparent framework to rank combinations of bacteria species resistant to selected antimicrobial classes found in meat, based on the risk they represent for public health in Switzerland. A risk assessment model from slaughter to consumption was developed following the Codex Alimentarius guidelines for risk analysis of foodborne AMR. Using data from the Swiss AMR monitoring program, 208 combinations of animal species/bacteria/antimicrobial classes were identified as relevant hazards. Exposure assessment and hazard characterization scores were developed and combined using multicriteria decision analysis. The effect of changing weights of scores was explored with sensitivity analysis. Attributing equal weights to each score, poultry‐associated combinations represented the highest risk. In particular, contamination with extended‐spectrum β‐lactamase/plasmidic AmpC‐producing Escherichia coli in poultry meat ranked high for both exposure and hazard characterization. Tetracycline‐ or macrolide‐resistant Enterococcus spp., as well as fluoroquinolone‐ or macrolide‐resistant Campylobacter jejuni, ranked among combinations with the highest risk. This study provides a basis for prioritizing future activities to mitigate the risk associated with foodborne AMR in Switzerland. A user‐friendly version of the model was provided to risk managers; it can easily be adjusted to the constantly evolving knowledge on AMR.
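As a rough sketch of the multicriteria aggregation step described above, the snippet below combines exposure and hazard-characterization scores with criterion weights and ranks the resulting risks. The hazard names, scores, and equal weights are hypothetical stand-ins, not values from the Swiss monitoring data.

```python
import numpy as np

# Hypothetical animal species / bacterium / antimicrobial class combinations.
hazards = [
    "poultry meat / E. coli / ESBL-AmpC beta-lactams",
    "poultry meat / C. jejuni / fluoroquinolones",
    "pork / Enterococcus spp. / tetracyclines",
]
# Scores on a 0-1 scale: [exposure, hazard characterization] for each combination.
scores = np.array([
    [0.90, 0.80],
    [0.70, 0.70],
    [0.50, 0.40],
])
weights = np.array([0.5, 0.5])   # equal weights on the two criteria (the baseline case)

risk = scores @ weights          # weighted additive aggregation of the criterion scores
for r, name in sorted(zip(risk, hazards), reverse=True):
    print(f"{r:.2f}  {name}")
```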

5.
Researchers in judgment and decision making have long debunked the idea that we are economically rational optimizers. However, problematic assumptions of rationality remain common in studies of agricultural economics and climate change adaptation, especially those that involve quantitative models. Recent movement toward more complex agent‐based modeling provides an opportunity to reconsider the empirical basis for farmer decision making. Here, we reconceptualize farmer decision making from the ground up, using an in situ mental models approach to analyze weather and climate risk management. We assess how large‐scale commercial grain farmers in South Africa (n = 90) coordinate decisions about weather, climate variability, and climate change with those around other environmental, agronomic, economic, political, and personal risks that they manage every day. Contrary to common simplifying assumptions, we show that these farmers tend to satisfice rather than optimize as they face intractable and multifaceted uncertainty; they make imperfect use of limited information; they are differently averse to different risks; they make decisions on multiple time horizons; they are cautious in responding to changing conditions; and their diverse risk perceptions contribute to important differences in individual behaviors. We find that they use two important nonoptimizing strategies, which we call cognitive thresholds and hazy hedging, to make practical decisions under pervasive uncertainty. These strategies, evident in farmers' simultaneous use of conservation agriculture and livestock to manage weather risks, are the messy in situ performance of naturalistic decision‐making techniques. These results may inform continued research on such behavioral tendencies in narrower lab‐ and modeling‐based studies.

6.
Behavioral decision research has demonstrated that judgments and decisions of ordinary people and experts are subject to numerous biases. Decision and risk analysis were designed to improve judgments and decisions and to overcome many of these biases. However, when eliciting model components and parameters from decisionmakers or experts, analysts often face the very biases they are trying to help overcome. When these inputs are biased they can seriously reduce the quality of the model and resulting analysis. Some of these biases are due to faulty cognitive processes; some are due to motivations for preferred analysis outcomes. This article identifies the cognitive and motivational biases that are relevant for decision and risk analysis because they can distort analysis inputs and are difficult to correct. We also review and provide guidance about the existing debiasing techniques to overcome these biases. In addition, we describe some biases that are less relevant because they can be corrected by using logic or decomposing the elicitation task. We conclude the article with an agenda for future research.

7.
Research suggests that hurricane‐related risk perception is a critical predictor of behavioral response, such as evacuation. Less is known, however, about the precursors of these subjective risk judgments, especially when time has elapsed from a focal event. Drawing broadly from the risk communication, social psychology, and natural hazards literature, and specifically from concepts adapted from the risk information seeking and processing model and the protective action decision model, we examine how individuals’ distant recollections, including attribution of responsibility for the effects of a storm, attitude toward relevant information, and past hurricane experience, relate to risk judgment for a future, similar event. The present study reports on a survey involving U.S. residents in Connecticut, New Jersey, and New York (n = 619) impacted by Hurricane Sandy. While some results confirm past findings, such as that hurricane experience increases risk judgment, others suggest additional complexity, such as how various types of experience (e.g., having evacuated vs. having experienced losses) may heighten or attenuate individual‐level judgments of responsibility. We suggest avenues for future research, as well as implications for federal agencies involved in severe weather/natural hazard forecasting and communication with public audiences.

8.
We develop results for the use of Lasso and post‐Lasso methods to form first‐stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or post‐Lasso in the first stage is root‐n consistent and asymptotically normal when the first stage is approximately sparse, that is, when the conditional expectation of the endogenous variables given the instruments can be well‐approximated by a relatively small set of variables whose identities may be unknown. We also show that the estimator is semiparametrically efficient when the structural error is homoscedastic. Notably, our results allow for imperfect model selection, and do not rely upon the unrealistic “beta‐min” conditions that are widely used to establish validity of inference following model selection (see also Belloni, Chernozhukov, and Hansen (2011b)). In simulation experiments, the Lasso‐based IV estimator with a data‐driven penalty performs well compared to recently advocated many‐instrument robust procedures. In an empirical example dealing with the effect of judicial eminent domain decisions on economic outcomes, the Lasso‐based IV estimator outperforms an intuitive benchmark. Optimal instruments are conditional expectations. In developing the IV results, we establish a series of new results for Lasso and post‐Lasso estimators of nonparametric conditional expectation functions which are of independent theoretical and practical interest. We construct a modification of Lasso designed to deal with non‐Gaussian, heteroscedastic disturbances that uses a data‐weighted ℓ1‐penalty function. By innovatively using moderate deviation theory for self‐normalized sums, we provide convergence rates for the resulting Lasso and post‐Lasso estimators that are as sharp as the corresponding rates in the homoscedastic Gaussian case under the condition that log p = o(n^(1/3)). We also provide a data‐driven method for choosing the penalty level that must be specified in obtaining Lasso and post‐Lasso estimates and establish its asymptotic validity under non‐Gaussian, heteroscedastic disturbances.
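A minimal sketch of a Lasso first stage in IV estimation is shown below on simulated data. It uses cross-validated penalty selection rather than the paper's data-driven penalty, omits the post-Lasso refit, and all data-generating values are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 500, 200                                  # many instruments, p comparable to n
Z = rng.normal(size=(n, p))
v = rng.normal(size=n)
u = 0.5 * v + rng.normal(size=n)                 # endogeneity: u correlated with v
d = Z[:, :3] @ np.array([1.0, 0.7, 0.5]) + v     # approximately sparse first stage
y = 1.0 * d + u                                  # true structural coefficient = 1.0

# First stage: Lasso selects relevant instruments and predicts the endogenous regressor.
d_hat = LassoCV(cv=5).fit(Z, d).predict(Z)

# Second stage: IV estimate using the Lasso fit as the (estimated) optimal instrument.
beta_iv = np.dot(d_hat, y) / np.dot(d_hat, d)
print(f"IV estimate of the structural coefficient: {beta_iv:.3f}")   # close to 1.0
```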

9.
Decision biases can distort cost‐benefit evaluations of uncertain risks, leading to risk management policy decisions with predictably high retrospective regret. We argue that well‐documented decision biases encourage learning aversion, or predictably suboptimal learning and premature decision making in the face of high uncertainty about the costs, risks, and benefits of proposed changes. Biases such as narrow framing, overconfidence, confirmation bias, optimism bias, ambiguity aversion, and hyperbolic discounting of the immediate costs and delayed benefits of learning, contribute to deficient individual and group learning, avoidance of information seeking, underestimation of the value of further information, and hence needlessly inaccurate risk‐cost‐benefit estimates and suboptimal risk management decisions. In practice, such biases can create predictable regret in selection of potential risk‐reducing regulations. Low‐regret learning strategies based on computational reinforcement learning models can potentially overcome some of these suboptimal decision processes by replacing aversion to uncertain probabilities with actions calculated to balance exploration (deliberate experimentation and uncertainty reduction) and exploitation (taking actions to maximize the sum of expected immediate reward, expected discounted future reward, and value of information). We discuss the proposed framework for understanding and overcoming learning aversion and for implementing low‐regret learning strategies using regulation of air pollutants with uncertain health effects as an example.
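The exploration/exploitation balance invoked above can be sketched with a simple epsilon-greedy bandit. The three "policies", their expected net benefits, and the reward noise are hypothetical, and the paper's fuller value-of-information treatment is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
true_benefit = np.array([0.30, 0.50, 0.45])   # hypothetical expected net benefit per policy
counts = np.zeros(3)
estimates = np.zeros(3)
epsilon, total_reward = 0.1, 0.0

for t in range(5000):
    # Explore with probability epsilon, otherwise exploit the current best estimate.
    arm = rng.integers(3) if rng.random() < epsilon else int(np.argmax(estimates))
    reward = rng.normal(true_benefit[arm], 0.2)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # incremental mean update
    total_reward += reward

regret = 5000 * true_benefit.max() - total_reward
print(f"estimated benefits: {np.round(estimates, 3)}, cumulative regret: {regret:.1f}")
```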

10.
We present a method for forecasting sales using financial market information and test this method on annual data for US public retailers. Our method is motivated by the permanent income hypothesis in economics, which states that the amount of consumer spending and the mix of spending between discretionary and necessity items depend on the returns achieved on equity portfolios held by consumers. Taking as input forecasts from other sources, such as equity analysts or time‐series models, we construct a market‐based forecast by augmenting the input forecast with one additional variable, lagged return on an aggregate financial market index. For this, we develop and estimate a martingale model of joint evolution of sales forecasts and the market index. We show that the market‐based forecast achieves an average 15% reduction in mean absolute percentage error compared with forecasts given by equity analysts at the same time instant on out‐of‐sample data. We extensively analyze the performance improvement using alternative model specifications and statistics. We also show that equity analysts do not incorporate lagged financial market returns in their forecasts. Our model yields correlation coefficients between retail sales and market returns for all firms in the data set. Besides forecasting, these results can be applied in risk management and hedging.
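A much-simplified sketch of the market-based adjustment is given below: an input (analyst) forecast is augmented with one additional regressor, the lagged return on a market index, via least squares on simulated data. This is only an in-sample illustration of the idea; it does not implement the paper's martingale model, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 40                                            # years of annual data (simulated)
market_return_lag = rng.normal(0.06, 0.15, T)     # lagged aggregate market return
analyst_forecast = rng.normal(100, 10, T)
# Simulated "actual" sales: the analyst forecast plus a market-sensitive component.
actual_sales = analyst_forecast * (1 + 0.8 * market_return_lag) + rng.normal(0, 3, T)

# Market-based forecast: regress actuals on the input forecast and the lagged return.
X = np.column_stack([np.ones(T), analyst_forecast, market_return_lag])
coef, *_ = np.linalg.lstsq(X, actual_sales, rcond=None)
market_based = X @ coef

mape = lambda f: np.mean(np.abs((actual_sales - f) / actual_sales)) * 100
print(f"MAPE analyst: {mape(analyst_forecast):.1f}%  market-based: {mape(market_based):.1f}%")
```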

11.
In risk assessment, the moment‐independent sensitivity analysis (SA) technique for reducing the model uncertainty has attracted a great deal of attention from analysts and practitioners. It aims at measuring the relative importance of an individual input, or a set of inputs, in determining the uncertainty of model output by looking at the entire distribution range of model output. In this article, along the lines of Plischke et al., we point out that the original moment‐independent SA index (also called delta index) can also be interpreted as the dependence measure between model output and input variables, and introduce another moment‐independent SA index (called extended delta index) based on copula. Then, nonparametric methods for estimating the delta and extended delta indices are proposed. Both methods need only a set of samples to compute all the indices; thus, they overcome the “curse of dimensionality.” Finally, an analytical test example, a risk assessment model, and the Level E model are employed for comparing the delta and the extended delta indices and testing the two calculation methods. Results show that the delta and the extended delta indices produce the same importance ranking in these three test examples. It is also shown that these two proposed calculation methods dramatically reduce the computational burden.
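A given-data estimate of the delta (moment-independent) index can be sketched as below: partition the input sample into equal-frequency bins, compare the conditional and unconditional output densities via histograms, and average the L1 distances. The test function, bin counts, and sample size are arbitrary choices for illustration; the copula-based extended delta index is not shown.

```python
import numpy as np

def delta_index(x, y, x_bins=10, y_bins=30):
    """Given-data estimate of the delta index: half the expected L1 distance between
    the unconditional density of Y and its density conditional on X."""
    edges = np.histogram_bin_edges(y, bins=y_bins)
    width = np.diff(edges)
    f_y, _ = np.histogram(y, bins=edges, density=True)
    # Partition x into equal-frequency bins and compare conditional densities of y.
    bin_id = np.digitize(x, np.quantile(x, np.linspace(0, 1, x_bins + 1))[1:-1])
    delta = 0.0
    for b in range(x_bins):
        mask = bin_id == b
        f_y_cond, _ = np.histogram(y[mask], bins=edges, density=True)
        delta += mask.mean() * np.sum(np.abs(f_y - f_y_cond) * width)
    return 0.5 * delta

rng = np.random.default_rng(3)
x1, x2 = rng.normal(size=20000), rng.normal(size=20000)
y = 3 * x1 + 0.5 * x2                     # x1 should matter far more than x2
print(delta_index(x1, y), delta_index(x2, y))
```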

12.
In recent years, there have been growing concerns regarding risks in federal information technology (IT) supply chains in the United States that protect cyber infrastructure. A critical need faced by decisionmakers is to prioritize investment in security mitigations to maximally reduce risks in IT supply chains. We extend existing stochastic expected budgeted maximum multiple coverage models that identify “good” solutions on average that may be unacceptable in certain circumstances. We propose three alternative models that consider different robustness methods that hedge against worst‐case risks, including models that maximize the worst‐case coverage, minimize the worst‐case regret, and maximize the average coverage in the (1 − α) worst cases (conditional value at risk). We illustrate the solutions to the robust methods with a case study and discuss the insights their solutions provide into mitigation selection compared to an expected‐value maximizer. Our study provides valuable tools and insights for decisionmakers with different risk attitudes to manage cybersecurity risks under uncertainty.
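The expected-value, worst-case, and conditional-value-at-risk criteria mentioned above can be contrasted on a toy example. The snippet below scores three hypothetical mitigation portfolios across five scenarios; it evaluates fixed candidates rather than solving the coverage optimization models, and all numbers are invented.

```python
import numpy as np

# Hypothetical risk coverage achieved by three mitigation portfolios in five scenarios.
coverage = np.array([
    [0.70, 0.65, 0.60, 0.20, 0.75],   # portfolio A: good on average, poor worst case
    [0.55, 0.60, 0.58, 0.50, 0.57],   # portfolio B: flatter profile
    [0.62, 0.55, 0.65, 0.45, 0.50],   # portfolio C
])
alpha = 0.6                            # focus on the (1 - alpha) = 40% worst scenarios

def cvar(values, alpha):
    """Average coverage over the (1 - alpha) fraction of worst-coverage scenarios."""
    k = max(1, int(np.ceil((1 - alpha) * len(values))))
    return np.sort(values)[:k].mean()

expected   = coverage.mean(axis=1)
worst_case = coverage.min(axis=1)
cvar_cov   = np.array([cvar(row, alpha) for row in coverage])

for name, e, w, c in zip("ABC", expected, worst_case, cvar_cov):
    print(f"portfolio {name}: expected {e:.2f}, worst case {w:.2f}, CVaR {c:.2f}")
# The expected-value criterion prefers A; the worst-case and CVaR criteria prefer B.
```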

13.
In risk analysis problems, the decision‐making process is supported by the utilization of quantitative models. Assessing the relevance of interactions is essential information in the interpretation of model results. With such knowledge, analysts and decisionmakers are able to understand whether risk is apportioned by individual factor contributions or by their joint action. However, models are oftentimes large, requiring a high number of input parameters, and complex, with individual model runs being time consuming. Computational complexity leads analysts to utilize one‐parameter‐at‐a‐time sensitivity methods, which prevent one from assessing interactions. In this work, we illustrate a methodology to quantify interactions in probabilistic safety assessment (PSA) models by varying one parameter at a time. The method is based on a property of the functional ANOVA decomposition of a finite change that allows one to determine exactly the relevance of factors when considered individually or together with their interactions with all other factors. A set of test cases illustrates the technique. We apply the methodology to the analysis of the core damage frequency of the large loss of coolant accident of a nuclear reactor. Numerical results reveal the nonadditive model structure, allow us to quantify the relevance of interactions, and identify the direction of change (increase or decrease in risk) implied by individual factor variations and by their cooperation.
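The finite-change decomposition described above can be illustrated on a toy nonadditive function: the gap between the total finite change and the sum of the one-parameter-at-a-time changes measures the contribution of interactions. The model and factor values below are purely illustrative, not a real PSA model.

```python
def toy_risk_model(a, b, c):
    """Toy nonadditive risk model (purely illustrative)."""
    return a * b + 0.5 * c

x0 = dict(a=1.0, b=2.0, c=4.0)   # baseline factor values (hypothetical)
x1 = dict(a=1.5, b=3.0, c=2.0)   # changed factor values (hypothetical)

g0 = toy_risk_model(**x0)
g1 = toy_risk_model(**x1)
total_change = g1 - g0

# One-at-a-time finite changes: move a single factor to its new value, keep the rest at baseline.
individual = {}
for k in x0:
    shifted = dict(x0, **{k: x1[k]})
    individual[k] = toy_risk_model(**shifted) - g0

interaction = total_change - sum(individual.values())
print(f"total change {total_change:.2f}, individual {individual}, interaction {interaction:.2f}")
```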

14.
Mathematical programming and multicriteria approaches to classification and discrimination are reviewed, with an emphasis on preference disaggregation. The latter include the UTADIS family and a new method, Multigroup Hierarchical DIScrimination (MHDIS). They are used to assess investing risk in 51 countries that have stock exchanges, according to 27 criteria. These criteria include quantitative and qualitative measures of market risk (volatility and currency fluctuations); range of investment opportunities; quantity and quality of market information; investor protection (security regulations, treatment of minority shareholders); and administrative “headaches” (custody, settlement, and taxes). The model parameters are determined so that the results best match the risk level assigned to those countries by experienced international investment managers commissioned by The Wall Street Journal. Among the six evaluation models developed, one (MHDIS) correctly classifies all countries into the appropriate groups. Thus, this model is able to consistently reproduce the evaluation of the expert investment analysts. The most significant criteria and their weights for assessing global investing risk are also presented, along with their marginal utilities, leading to identifiers of risk groups and global utilities portraying the strength of each country's risk classification. The same method, MHDIS, outperformed the other five methods in a 10‐fold validation experiment. These results are promising for the study of emerging new markets in fast‐growing regions, which present fertile areas for investment growth but also…

15.
Ted W. Yellman. Risk Analysis, 2016, 36(6):1072-1078.
Some of the terms used in risk assessment and management are poorly and even contradictorily defined. One such term is “event,” which arguably describes the most basic of all risk‐related concepts. The author cites two contemporary textbook interpretations of “event” that he contends are incorrect and misleading. He then examines the concept of an event in A. N. Kolmogorov's probability axioms and in several more‐current textbooks. Those concepts are found to be too narrow for risk assessments and inconsistent with the actual usage of “event” by risk analysts. The author goes on to define and advocate linguistic definitions of events (as opposed to mathematical definitions)—definitions constructed from natural language. He argues that they should be recognized for what they are: the de facto primary method of defining events.

16.
Cox LA. Risk Analysis, 2012, 32(7):1244-1252.
Simple risk formulas, such as risk = probability × impact, or risk = exposure × probability × consequence, or risk = threat × vulnerability × consequence, are built into many commercial risk management software products deployed in public and private organizations. These formulas, which we call risk indices, together with risk matrices, “heat maps,” and other displays based on them, are widely used in applications such as enterprise risk management (ERM), terrorism risk analysis, and occupational safety. But, how well do they serve to guide allocation of limited risk management resources? This article evaluates and compares different risk indices under simplifying conditions favorable to their use (statistically independent, uniformly distributed values of their components; and noninteracting risk‐reduction opportunities). Compared to an optimal (nonindex) approach, simple indices produce inferior resource allocations that for a given cost may reduce risk by as little as 60% of what the optimal decisions would provide, at least in our simple simulations. This article suggests a better risk reduction per unit cost index that achieves 98–100% of the maximum possible risk reduction on these problems for all budget levels except the smallest, which allow very few risks to be addressed. Substantial gains in risk reduction achieved for resources spent can be obtained on our test problems by using this improved index instead of simpler ones that focus only on relative sizes of risk (or of components of risk) in informing risk management priorities and allocating limited risk management resources. This work suggests the need for risk management tools to explicitly consider costs in prioritization activities, particularly in situations where budget restrictions make careful allocation of resources essential for achieving close‐to‐maximum risk‐reduction benefits.
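The gap between ranking by risk size and ranking by risk reduction per unit cost can be seen in a small greedy-allocation sketch. The opportunities, costs, and effectiveness fractions below are hypothetical numbers chosen to show the effect, not results from the article's simulations.

```python
# Hypothetical opportunities: (name, current risk, mitigation cost, fraction of risk removed).
opportunities = [
    ("A", 100.0, 25.0, 0.10),
    ("B",  60.0, 10.0, 0.60),
    ("C",  40.0, 15.0, 0.90),
    ("D",  20.0,  5.0, 0.80),
]
budget = 30.0

def greedy_reduction(order, budget):
    """Fund opportunities in the given order until the budget is exhausted."""
    spent = reduction = 0.0
    for _name, risk, cost, frac in order:
        if spent + cost <= budget:
            spent += cost
            reduction += risk * frac
    return reduction

by_risk_size  = sorted(opportunities, key=lambda o: -o[1])                  # "biggest risk first"
by_efficiency = sorted(opportunities, key=lambda o: -(o[1] * o[3]) / o[2])  # reduction per unit cost

print("risk reduction, ranked by risk size:         ", greedy_reduction(by_risk_size, budget))
print("risk reduction, ranked by reduction per cost:", greedy_reduction(by_efficiency, budget))
# With the same budget, the reduction-per-unit-cost ranking removes far more risk (88 vs. 26).
```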

17.
18.
Louis Anthony Cox, Jr. Risk Analysis, 2008, 28(6):1749-1761.
Several important risk analysis methods now used in setting priorities for protecting U.S. infrastructures against terrorist attacks are based on the formula: Risk = Threat × Vulnerability × Consequence. This article identifies potential limitations in such methods that can undermine their ability to guide resource allocations to effectively optimize risk reductions. After considering specific examples for the Risk Analysis and Management for Critical Asset Protection (RAMCAP™) framework used by the Department of Homeland Security, we address more fundamental limitations of the product formula. These include its failure to adjust for correlations among its components, nonadditivity of risks estimated using the formula, inability to use risk‐scoring results to optimally allocate defensive resources, and intrinsic subjectivity and ambiguity of Threat, Vulnerability, and Consequence numbers. Trying to directly assess probabilities for the actions of intelligent antagonists instead of modeling how they adaptively pursue their goals in light of available information and experience can produce ambiguous or mistaken risk estimates. Recent work demonstrates that two‐level (or few‐level) hierarchical optimization models can provide a useful alternative to Risk = Threat × Vulnerability × Consequence scoring rules, and also to probabilistic risk assessment (PRA) techniques that ignore rational planning and adaptation. In such two‐level optimization models, the defender predicts the attacker's best response to the defender's own actions, and then chooses those actions taking these best responses into account. Such models appear valuable as practical approaches to antiterrorism risk analysis.
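A two-level (defender-attacker) model of the kind advocated above can be sketched by enumeration: for each defensive allocation, compute the attacker's best response, then pick the allocation that minimizes the damage from that response. The targets, consequences, and success probabilities below are invented for illustration.

```python
import itertools

# Hypothetical targets: consequence and attack-success probability with/without defense.
targets = {
    "port": dict(consequence=100.0, p_undefended=0.8, p_defended=0.3),
    "grid": dict(consequence=80.0,  p_undefended=0.9, p_defended=0.2),
    "rail": dict(consequence=60.0,  p_undefended=0.7, p_defended=0.4),
}
defense_units = 1   # the defender can harden only one target

def attacker_best_response(defended):
    """The attacker observes the defenses and attacks the target with highest expected damage."""
    def expected_damage(name):
        t = targets[name]
        p = t["p_defended"] if name in defended else t["p_undefended"]
        return p * t["consequence"]
    best = max(targets, key=expected_damage)
    return best, expected_damage(best)

# The defender anticipates the attacker's best response and minimizes the resulting damage.
best_plan = min(
    (frozenset(combo) for combo in itertools.combinations(targets, defense_units)),
    key=lambda defended: attacker_best_response(defended)[1],
)
print("defend:", set(best_plan), "-> attacker's best response:", attacker_best_response(best_plan))
```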

19.
Qualitative systems for rating animal antimicrobial risks using ordered categorical labels such as “high,” “medium,” and “low” can potentially simplify risk assessment input requirements used to inform risk management decisions. But do they improve decisions? This article compares the results of qualitative and quantitative risk assessment systems and establishes some theoretical limitations on the extent to which they are compatible. In general, qualitative risk rating systems satisfying conditions found in real‐world rating systems and guidance documents and proposed as reasonable make two types of errors: (1) Reversed rankings, i.e., assigning higher qualitative risk ratings to situations that have lower quantitative risks; and (2) Uninformative ratings, e.g., frequently assigning the most severe qualitative risk label (such as “high”) to situations with arbitrarily small quantitative risks and assigning the same ratings to risks that differ by many orders of magnitude. Therefore, despite their appealing consensus‐building properties, flexibility, and appearance of thoughtful process in input requirements, qualitative rating systems as currently proposed often do not provide sufficient information to discriminate accurately between quantitatively small and quantitatively large risks. The value of information (VOI) that they provide for improving risk management decisions can be zero if most risks are small but a few are large, since qualitative ratings may then be unable to confidently distinguish the large risks from the small. These limitations suggest that it is important to continue to develop and apply practical quantitative risk assessment methods, since qualitative ones are often unreliable.
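The "reversed rankings" failure mode described above is easy to reproduce: bin probability and consequence into qualitative labels with fixed cutoffs and compare the result with the quantitative product. The cutoffs and hazard values below are hypothetical.

```python
def qualitative(value, cutoffs=(0.33, 0.66)):
    """Map a 0-1 score to a qualitative label using hypothetical, fixed cutoffs."""
    return "low" if value < cutoffs[0] else ("medium" if value < cutoffs[1] else "high")

# Two hypothetical hazards with probability and consequence on 0-1 scales.
hazard_1 = dict(p=0.70, c=0.35)
hazard_2 = dict(p=0.40, c=0.64)

for name, h in [("hazard 1", hazard_1), ("hazard 2", hazard_2)]:
    quantitative_risk = h["p"] * h["c"]
    labels = (qualitative(h["p"]), qualitative(h["c"]))
    print(f"{name}: quantitative risk {quantitative_risk:.3f}, qualitative rating {labels}")
# Hazard 1 receives the more severe qualitative rating ('high', 'medium') even though its
# quantitative risk (0.245) is lower than hazard 2's (0.256): a reversed ranking.
```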

20.
In Science and Decisions: Advancing Risk Assessment, the National Research Council recommends improvements in the U.S. Environmental Protection Agency's approach to risk assessment. The recommendations aim to increase the utility of these assessments, embedding them within a new risk‐based decision‐making framework. The framework involves first identifying the problem and possible options for addressing it, conducting related analyses, then reviewing the results and making the risk management decision. Experience with longstanding requirements for regulatory impact analysis provides insights into the implementation of this framework. First, neither the Science and Decisions framework nor the framework for regulatory impact analysis should be viewed as a static or linear process, where each step is completed before moving on to the next. Risk management options are best evaluated through an iterative and integrative procedure. The extent to which a hazard has been previously studied will strongly influence analysts’ ability to identify options prior to conducting formal analyses, and these options will be altered and refined as the analysis progresses. Second, experience with regulatory impact analysis suggests that legal and political constraints may limit the range of options assessed, contrary to both existing guidance for regulatory impact analysis and the Science and Decisions recommendations. Analysts will need to work creatively to broaden the range of options considered. Finally, the usefulness of regulatory impact analysis has been significantly hampered by the inability to quantify many health impacts of concern, suggesting that the scientific improvements offered within Science and Decisions will fill a crucial research gap.
