Similar Documents
20 similar documents found.
1.
This article proposes, develops, and illustrates the application of level‐k game theory to adversarial risk analysis. Level‐k reasoning, which assumes that players play strategically but have bounded rationality, is useful for operationalizing a Bayesian approach to adversarial risk analysis. It can be applied in a broad class of settings, including settings with asynchronous play and partial but incomplete revelation of early moves. Its computational and elicitation requirements are modest. We illustrate the approach with an application to a simple defend‐attack model in which the defender's countermeasures are revealed with a probability less than one to the attacker before he decides on how or whether to attack.
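The level‐k recursion described above can be sketched in a few lines: a level‐0 player acts non‐strategically (here, attacking uniformly at random), and each higher level best‐responds to the level below it. The loss matrix and target count below are hypothetical illustrations, not taken from the article.

```python
# Hypothetical defender losses: loss[d][a] = expected loss when the defender
# protects target d and the attacker strikes target a.
loss = [[1.0, 8.0, 6.0],
        [9.0, 2.0, 6.0],
        [9.0, 8.0, 1.0]]

def level_k_defense(loss, k):
    """Defender's level-k choice (k >= 1) against a level-0 attacker
    who attacks uniformly at random."""
    n = len(loss)
    attack = [1.0 / n] * n  # level-0 attacker: non-strategic
    defend = 0
    for _ in range(k):
        # Defender best-responds to the current attacker distribution.
        defend = min(range(n),
                     key=lambda d: sum(p * loss[d][a] for a, p in enumerate(attack)))
        # The next-level attacker best-responds to that defense.
        worst = max(range(n), key=lambda a: loss[defend][a])
        attack = [float(a == worst) for a in range(n)]
    return defend

print(level_k_defense(loss, 1))  # 0: against a uniform attacker, defend target 0
print(level_k_defense(loss, 2))  # 1: anticipating the attacker's shift to target 1
```

Deeper levels alternate best responses in the same way, which keeps both the computational and the elicitation burden modest, as the abstract notes.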

2.
Risk Analysis, 2018, 38(5): 1036–1051
Risks of allergic contact dermatitis (ACD) from consumer products intended for extended (nonpiercing) dermal contact are regulated by E.U. Directive EN 1811, which limits released Ni to a weekly equivalent dermal load of ≤0.5 μg/cm2. Similar approaches for thousands of known organic sensitizers are hampered by the inability to quantify respective ACD‐elicitation risk levels. To help address this gap, normalized values of cumulative risk for eliciting a positive (“≥+”) clinical patch test response reported in 12 studies for a total of n = 625 Ni‐sensitized patients were modeled in relation to observed ACD‐eliciting Ni loads, yielding an approximate lognormal (LN) distribution with a geometric mean and standard deviation of GMNi = 15 μg/cm2 and GSDNi = 8.0, respectively. Such data for five sensitizers (including formaldehyde and 2‐hydroxyethyl methacrylate) were also ∼LN distributed, but with a common GSD value equal to GSDNi and with heterogeneous sensitizer‐specific GM values each defining a respective ACD‐eliciting potency GMNi/GM relative to Ni. Such potencies were also estimated for nine (meth)acrylates by applying this general LN ACD‐elicitation risk model to respective sets of fewer data. ACD‐elicitation risk patterns observed for Cr(VI) (n = 417) and Cr(III) (n = 78) were fit to mixed‐LN models in which ∼30% and ∼40% of the most sensitive responders, respectively, were estimated to exhibit a LN response also governed by GSDNi. The observed common LN‐response shape parameter GSDNi may reflect a common underlying ACD mechanism and suggests a common interim approach to quantitative ACD‐elicitation risk assessment based on available clinical data.
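The lognormal elicitation-risk model above is straightforward to evaluate: the probability of a ≥+ response at dermal load d is the lognormal CDF with the reported GM and GSD. A minimal sketch for the Ni parameters (GM = 15 μg/cm2, GSD = 8.0):

```python
from math import erf, log, sqrt

def acd_elicitation_risk(load_ug_cm2, gm=15.0, gsd=8.0):
    """Cumulative probability of a positive patch-test response at a given
    dermal load, under a lognormal model (defaults: the reported Ni values)."""
    z = log(load_ug_cm2 / gm) / log(gsd)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

print(round(acd_elicitation_risk(15.0), 2))  # 0.5: at the GM, half of sensitized patients respond
print(round(acd_elicitation_risk(0.5), 3))   # 0.051: risk at the EN 1811 limit of 0.5 ug/cm2
```

The same function, with a sensitizer-specific GM and the shared GSDNi, would give the interim risk estimate the abstract proposes for other sensitizers.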

3.
Louis Anthony Cox, Jr. Risk Analysis, 2009, 29(8): 1062–1068
Risk analysts often analyze adversarial risks from terrorists or other intelligent attackers without mentioning game theory. Why? One reason is that many adversarial situations—those that can be represented as attacker‐defender games, in which the defender first chooses an allocation of defensive resources to protect potential targets, and the attacker, knowing what the defender has done, then decides which targets to attack—can be modeled and analyzed successfully without using most of the concepts and terminology of game theory. However, risk analysis and game theory are also deeply complementary. Game‐theoretic analyses of conflicts require modeling the probable consequences of each choice of strategies by the players and assessing the expected utilities of these probable consequences. Decision and risk analysis methods are well suited to accomplish these tasks. Conversely, game‐theoretic formulations of attack‐defense conflicts (and other adversarial risks) can greatly improve upon some current risk analyses that attempt to model attacker decisions as random variables or uncertain attributes of targets (“threats”) and that seek to elicit their values from the defender's own experts. Game theory models that clarify the nature of the interacting decisions made by attackers and defenders and that distinguish clearly between strategic choices (decision nodes in a game tree) and random variables (chance nodes, not controlled by either attacker or defender) can produce more sensible and effective risk management recommendations for allocating defensive resources than current risk scoring models. Thus, risk analysis and game theory are (or should be) mutually reinforcing.

4.
We design experiments to jointly elicit risk and time preferences for the adult Danish population. Since subjects are generally risk averse, we find that joint elicitation provides estimates of discount rates that are significantly lower than those found in previous studies and more in line with what would be considered as a priori reasonable rates. The statistical specification relies on a theoretical framework that involves a latent trade‐off between long‐run optimization and short‐run temptation. Estimation of this specification is undertaken using structural, maximum likelihood methods. Our main results based on exponential discounting are robust to alternative specifications such as hyperbolic discounting. These results have direct implications for attempts to elicit time preferences, as well as debates over the appropriate domain of the utility function when characterizing risk aversion and time consistency.
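The core mechanism, that ignoring risk aversion inflates elicited discount rates, can be illustrated with a simple CRRA-utility sketch. The functional form, the money amounts, and the coefficient values below are illustrative assumptions, not the paper's structural estimates.

```python
from math import log

def crra(c, r):
    """CRRA utility u(c) = c^(1-r)/(1-r); log utility at r = 1."""
    return log(c) if abs(r - 1.0) < 1e-12 else c ** (1 - r) / (1 - r)

def implied_discount_rate(sooner, later, horizon_years, r):
    """Annual exponential rate delta solving u(sooner) = exp(-delta*h) * u(later),
    i.e., the rate that rationalizes indifference between the two payments."""
    return log(crra(later, r) / crra(sooner, r)) / horizon_years

# Hypothetical subject indifferent between 3000 now and 3300 in one year:
print(round(implied_discount_rate(3000, 3300, 1.0, 0.0), 3))  # 0.095 under linear utility
print(round(implied_discount_rate(3000, 3300, 1.0, 0.7), 3))  # 0.029 once risk aversion is allowed
```

With curvature (r > 0), the same observed choice implies a much lower discount rate, which is the direction of the paper's headline result.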

5.
Seda Erdem, Dan Rigby. Risk Analysis, 2013, 33(9): 1728–1748
This research proposes and implements a new approach to the elicitation and analysis of perceptions of risk. We use best worst scaling (BWS) to elicit the levels of control respondents believe they have over risks and the level of concern those risks prompt. The approach seeks perceptions of control and concern over a large risk set and the elicitation method is structured so as to reduce the cognitive burden typically associated with ranking over large sets. The BWS approach is designed to yield strong discrimination over items. Further, the approach permits derivation of individual‐level values, in this case of perceptions of control and worry, and analysis of how these vary over observable characteristics, through estimation of random parameter logit models. The approach is implemented for a set of 20 food and nonfood risks. The results show considerable heterogeneity in perceptions of control and worry, that the degree of heterogeneity varies across the risks, and that women systematically consider themselves to have less control over the risks than men.

6.
We develop and apply a judgment‐based approach to selecting robust alternatives, which are defined here as reasonably likely to achieve objectives, over a range of uncertainties. The intent is to develop an approach that is more practical in terms of data and analysis requirements than current approaches, informed by the literature and experience with probability elicitation and judgmental forecasting. The context involves decisions about managing forest lands that have been severely affected by mountain pine beetles in British Columbia, a pest infestation that is climate‐exacerbated. A forest management decision was developed as the basis for the context, objectives, and alternatives for land management actions, to frame and condition the judgments. A wide range of climate forecasts, taken to represent the 10–90% levels on cumulative distributions for future climate, were developed to condition judgments. An elicitation instrument was developed, tested, and revised to serve as the basis for eliciting probabilistic three‐point distributions regarding the performance of selected alternatives, over a set of relevant objectives, in the short and long term. The elicitations were conducted in a workshop comprising 14 regional forest management specialists. We employed the concept of stochastic dominance to help identify robust alternatives. We used extensive sensitivity analysis to explore the patterns in the judgments, and also considered the preferred alternatives for each individual expert. The results show that two alternatives that are more flexible than the current policies are judged more likely to perform better than the current alternatives on average in terms of stochastic dominance. The results suggest judgmental approaches to robust decision making deserve greater attention and testing.
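Checking stochastic dominance between two elicited distributions is mechanical once both are placed on a common outcome grid (worst to best): alternative A first-order dominates B when A's CDF lies at or below B's everywhere and strictly below somewhere. The three-point probabilities below are invented for illustration, not the workshop's judgments.

```python
from itertools import accumulate

def fosd(pmf_a, pmf_b, eps=1e-12):
    """True if A first-order stochastically dominates B.
    pmf_a, pmf_b: probabilities over the same outcome grid, worst to best."""
    cdf_a = list(accumulate(pmf_a))
    cdf_b = list(accumulate(pmf_b))
    return (all(a <= b + eps for a, b in zip(cdf_a, cdf_b))
            and any(a < b - eps for a, b in zip(cdf_a, cdf_b)))

# Hypothetical low / most-likely / high judgments for two alternatives:
flexible = [0.1, 0.4, 0.5]  # more probability mass on the better outcomes
current  = [0.3, 0.4, 0.3]
print(fosd(flexible, current))  # True: the flexible alternative dominates
```

In practice each expert's three-point distributions would be compared this way per objective, with sensitivity analysis over the climate-conditioned scenarios.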

7.
Listeria monocytogenes is among the foodborne pathogens with the highest death toll in the United States. Ready‐to‐eat foods contaminated at retail are an important source of infection. Environmental sites in retail deli operations can be contaminated. However, commonly contaminated sites are unlikely to come into direct contact with food and the public health relevance of environmental contamination has remained unclear. To identify environmental sites that may pose a considerable cross‐contamination risk, to elucidate potential transmission pathways, and to identify knowledge gaps, we performed a structured expert elicitation of 41 experts from state regulatory agencies and the food retail industry with practical experience in retail deli operations. Following the “Delphi” method, the elicitation was performed in three consecutive steps: questionnaire, review and discussion of results, second questionnaire. Hands and gloves were identified as important potential contamination sources. However, bacterial transfers to and from hands or gloves represented a major data gap. Experts agreed about transfer probabilities from cutting boards, scales, deli cases, and deli preparation sinks to product, and about transfer probabilities from floor drains, walk‐in cooler floors, and knife racks to food contact surfaces. Comparison of experts' opinions to observational data revealed a tendency among experts with certain demographic characteristics and professional opinions to overestimate prevalence. Experts’ votes clearly clustered into separate groups not defined by place of employment, even though industry experts may have been somewhat overrepresented in one cluster. Overall, our study demonstrates the value and caveats of expert elicitation to identify data gaps and prioritize research efforts.

8.
Elicitation of expert opinion is important for risk analysis when only limited data are available. Expert opinion is often elicited in the form of subjective confidence intervals; however, these are prone to substantial overconfidence. We investigated the influence of elicitation question format, in particular the number of steps in the elicitation procedure. In a 3‐point elicitation procedure, an expert is asked for a lower limit, upper limit, and best guess, the two limits creating an interval of some assigned confidence level (e.g., 80%). In our 4‐step interval elicitation procedure, experts were also asked for a realistic lower limit, upper limit, and best guess, but no confidence level was assigned; the fourth step was to rate their anticipated confidence in the interval produced. In our three studies, experts made interval predictions of rates of infectious diseases (Study 1, n = 21 and Study 2, n = 24: epidemiologists and public health experts), or marine invertebrate populations (Study 3, n = 34: ecologists and biologists). We combined the results from our studies using meta‐analysis, which found average overconfidence of 11.9%, 95% CI [3.5, 20.3] (a hit rate of 68.1% for 80% intervals)—a substantial decrease in overconfidence compared with previous studies. Studies 2 and 3 suggest that the 4‐step procedure is more likely to reduce overconfidence than the 3‐point procedure (Cohen's d = 0.61, [0.04, 1.18]).

9.
David M. Stieb. Risk Analysis, 2012, 32(12): 2133–2151
The monetized value of avoided premature mortality typically dominates the calculated benefits of air pollution regulations; therefore, characterization of the uncertainty surrounding these estimates is key to good policymaking. Formal expert judgment elicitation methods are one means of characterizing this uncertainty. They have been applied to characterize uncertainty in the mortality concentration‐response function, but have yet to be used to characterize uncertainty in the economic values placed on avoided mortality. We report the findings of a pilot expert judgment study for Health Canada designed to elicit quantitative probabilistic judgments of uncertainties in Value‐per‐Statistical‐Life (VSL) estimates for use in an air pollution context. The two‐stage elicitation addressed uncertainties in both a base case VSL for a reduction in mortality risk from traumatic accidents and in benefits transfer‐related adjustments to the base case for an air quality application (e.g., adjustments for age, income, and health status). Results for each expert were integrated to develop example quantitative probabilistic uncertainty distributions for VSL that could be incorporated into air quality models.

10.
Self‐driving vehicles (SDVs) promise to considerably reduce traffic crashes. One pressing concern facing the public, automakers, and governments is “How safe is safe enough for SDVs?” To answer this question, a new expressed‐preference approach was proposed for the first time to determine the socially acceptable risk of SDVs. In our between‐subject survey (N = 499), we determined the respondents’ risk‐acceptance rate of scenarios with varying traffic‐risk frequencies to examine the logarithmic relationships between the traffic‐risk frequency and risk‐acceptance rate. Logarithmic regression models of SDVs were compared to those of human‐driven vehicles (HDVs); the results showed that SDVs were required to be safer than HDVs. Given the same traffic‐risk‐acceptance rates for SDVs and HDVs, their associated acceptable risk frequencies of SDVs and HDVs were predicted and compared. Two risk‐acceptance criteria emerged: the tolerable risk criterion, which indicates that SDVs should be four to five times as safe as HDVs, and the broadly acceptable risk criterion, which suggests that half of the respondents hoped that the traffic risk of SDVs would be two orders of magnitude lower than the current estimated traffic risk. The approach and these results could provide insights for government regulatory authorities for establishing clear safety requirements for SDVs.
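The logarithmic relationship described above can be sketched as an ordinary least-squares fit on log-transformed risk frequency; inverting the fitted curve then gives the frequency acceptable to a given share of respondents. All survey numbers below are made up for illustration, not the study's data.

```python
from math import exp, log

def fit_log_model(freqs, acceptance):
    """Least-squares fit of acceptance = a + b * ln(freq); returns (a, b)."""
    xs = [log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(acceptance) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, acceptance))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def acceptable_frequency(a, b, target_rate):
    """Invert the fitted curve: frequency at which target_rate accept the risk."""
    return exp((target_rate - a) / b)

# Hypothetical survey: fatality frequency vs. share of respondents accepting it.
freqs      = [1e-9, 1e-8, 1e-7, 1e-6]
acceptance = [0.80, 0.62, 0.47, 0.30]
a, b = fit_log_model(freqs, acceptance)
print(acceptable_frequency(a, b, 0.5))  # frequency half the respondents would accept
```

Fitting one such curve for SDVs and one for HDVs, then comparing the inverted frequencies at equal acceptance rates, mirrors how the "four to five times as safe" ratio is derived.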

11.
Cryptosporidium human dose‐response data from seven species/isolates are used to investigate six models of varying complexity that estimate infection probability as a function of dose. Previous models attempt to explicitly account for virulence differences among C. parvum isolates, using three or six species/isolates. Four (two new) models assume species/isolate differences are insignificant and three of these (all but exponential) allow for variable human susceptibility. These three human‐focused models (fractional Poisson, exponential with immunity, and beta‐Poisson) are relatively simple yet fit the data significantly better than the more complex isolate‐focused models. Among these three, the one‐parameter fractional Poisson model is the simplest but assumes that all Cryptosporidium oocysts used in the studies were capable of initiating infection. The exponential with immunity model does not require such an assumption and includes the fractional Poisson as a special case. The fractional Poisson model is an upper bound of the exponential with immunity model and applies when all oocysts are capable of initiating infection. The beta‐Poisson model does not allow an immune human subpopulation; thus infection probability approaches 100% as dose becomes huge. All three of these models predict significantly (>10×) greater risk at the low doses that consumers might receive if exposed through drinking water or other environmental exposure (e.g., 72% vs. 4% infection probability for a one oocyst dose) than previously predicted. This new insight into Cryptosporidium risk suggests additional inactivation and removal via treatment may be needed to meet any specified risk target, such as a suggested 10⁻⁴ annual risk of Cryptosporidium infection.
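The human-focused models named above have simple closed forms, sketched below. The parameter values are illustrative assumptions chosen to echo the abstract's numbers, not the article's fitted estimates.

```python
from math import exp

def exponential(dose, r):
    """One-parameter exponential: every host susceptible."""
    return 1 - exp(-r * dose)

def exp_with_immunity(dose, r, f):
    """f = susceptible fraction of the population; 1 - f are immune."""
    return f * (1 - exp(-r * dose))

def fractional_poisson(dose, f):
    """Special case r = 1: every oocyst capable of initiating infection;
    an upper bound on exp_with_immunity for r <= 1."""
    return exp_with_immunity(dose, 1.0, f)

def beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson: no immune subpopulation, so P -> 1 as dose grows."""
    return 1 - (1 + dose / beta) ** -alpha

# Hypothetical susceptible fraction echoing the abstract's ~72% low-dose figure:
f = 0.72
print(round(fractional_poisson(1.0, f), 3))  # 0.455 for a Poisson-distributed mean dose of 1
```

Note the distinction the upper-bound argument turns on: with exactly one ingested infectious oocyst the fractional Poisson probability is simply f, whereas a Poisson-distributed dose with mean 1 gives f(1 − e⁻¹).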

12.
From a cognitive perspective, mental models held by individuals are thought to guide interactions with objects or systems, including interpersonal interactions. Frameworks that categorize types of interactions in organizations suggest that they are guided by cultures and mental models that range from the egoistic to the cosmos-centric. From a behavioral perspective, what the cognitive approach calls mental models are sets of verbal rules. Therefore, we suggest that behavior analysis could be used to reconceptualize the mental model literature, generating new research questions and more rigorous experimentation. Cognitive constructs such as more expansive mental models may simply be a function of an individual’s or group’s increased attention to interlocking contingencies. Applying behavioral interventions such as acceptance and commitment therapy could be a way to examine the utility of a behavior analytic approach.

13.
Resilient infrastructure systems are essential for cities to withstand and rapidly recover from natural and human‐induced disasters, yet electric power, transportation, and other infrastructures are highly vulnerable and interdependent. New approaches for characterizing the resilience of sets of infrastructure systems are urgently needed, at community and regional scales. This article develops a practical approach for analysts to characterize a community's infrastructure vulnerability and resilience in disasters. It addresses key challenges of incomplete incentives, partial information, and few opportunities for learning. The approach is demonstrated for Metro Vancouver, Canada, in the context of earthquake and flood risk. The methodological approach is practical and focuses on potential disruptions to infrastructure services. In spirit, it resembles probability elicitation with multiple experts; however, it elicits disruption and recovery over time, rather than uncertainties regarding system function at a given point in time. It develops information on regional infrastructure risk and engages infrastructure organizations in the process. Information sharing, iteration, and learning among the participants provide the basis for more informed estimates of infrastructure system robustness and recovery that incorporate the potential for interdependent failures after an extreme event. Results demonstrate the vital importance of cross‐sectoral communication to develop shared understanding of regional infrastructure disruption in disasters. For Vancouver, specific results indicate that in a hypothetical M7.3 earthquake, virtually all infrastructures would suffer severe disruption of service in the immediate aftermath, with many experiencing moderate disruption two weeks afterward. Electric power, land transportation, and telecommunications are identified as core infrastructure sectors.

14.
Risk Perception by Offshore Oil Personnel During Bad Weather Conditions
This article presents the results of analyses of employee subjective risk assessments caused by platform movements on an offshore oil installation in the Norwegian sector of the North Sea. The results are based on a self-completion questionnaire survey conducted among 179 respondents covering three shifts on the platform. The data collection was carried out during the spring of 1994. A minority expressed worry due to platform movements. A greater proportion of the personnel stated worry about the construction of the platform. Personnel felt less safe when assessing their own safety attitudes toward specific, potentially hazardous consequences of platform movements. Two approaches aimed at modeling worry and concern caused by platform movements were tested: the mental imagery approach and the rationalistic approach. The rationalistic and mental imagery models fitted equally well. Implications of the results for risk communication are discussed.

15.
Louis Anthony Cox, Jr. Risk Analysis, 2008, 28(6): 1749–1761
Several important risk analysis methods now used in setting priorities for protecting U.S. infrastructures against terrorist attacks are based on the formula: Risk = Threat × Vulnerability × Consequence. This article identifies potential limitations in such methods that can undermine their ability to guide resource allocations to effectively optimize risk reductions. After considering specific examples for the Risk Analysis and Management for Critical Asset Protection (RAMCAP™) framework used by the Department of Homeland Security, we address more fundamental limitations of the product formula. These include its failure to adjust for correlations among its components, nonadditivity of risks estimated using the formula, inability to use risk‐scoring results to optimally allocate defensive resources, and intrinsic subjectivity and ambiguity of Threat, Vulnerability, and Consequence numbers. Trying to directly assess probabilities for the actions of intelligent antagonists instead of modeling how they adaptively pursue their goals in light of available information and experience can produce ambiguous or mistaken risk estimates. Recent work demonstrates that two‐level (or few‐level) hierarchical optimization models can provide a useful alternative to Risk = Threat × Vulnerability × Consequence scoring rules, and also to probabilistic risk assessment (PRA) techniques that ignore rational planning and adaptation. In such two‐level optimization models, the defender predicts the attacker's best response to the defender's own actions, and then chooses his or her own actions taking these best responses into account. Such models appear valuable as practical approaches to antiterrorism risk analysis.
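A two-level optimization of the kind described at the end can be sketched by brute force: for each feasible defensive allocation, compute the attacker's best response, then pick the allocation whose worst case is smallest. Target names and consequence numbers below are hypothetical.

```python
from itertools import combinations

# Hypothetical targets: (consequence if undefended, consequence if hardened).
targets = {"A": (10.0, 2.0), "B": (8.0, 1.0), "C": (6.0, 0.5)}

def defender_choice(targets, budget):
    """Bottom level: the attacker hits the worst remaining target.
    Top level: the defender hardens the target set minimizing that worst case."""
    best_set, best_damage = None, float("inf")
    for protected in combinations(sorted(targets), budget):
        damage = max(hardened if name in protected else soft
                     for name, (soft, hardened) in targets.items())
        if damage < best_damage:
            best_set, best_damage = set(protected), damage
    return best_set, best_damage

print(defender_choice(targets, 1))  # ({'A'}, 8.0): harden A; the attacker then hits B
```

Note how this differs from Threat × Vulnerability × Consequence scoring: the attacker's "threat" is not a fixed input but an endogenous best response that shifts with the defender's allocation.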

16.
Worldwide, more than 50 million cases of dengue fever are reported every year in at least 124 countries, and it is estimated that approximately 2.5 billion people are at risk for dengue infection. In Bangladesh, the recurrence of dengue has become a growing public health threat. Notably, knowledge and perceptions of dengue disease risk, particularly among the public, are not well understood. Recognizing the importance of assessing risk perception, we adopted a comparative approach to examine a generic methodology to assess diverse sets of beliefs related to dengue disease risk. Our study mapped existing knowledge structures regarding the risk associated with dengue virus, its vector (Aedes mosquitoes), water container use, and human activities in the city of Dhaka, Bangladesh. “Public mental models” were developed from interviews and focus group discussions with diverse community groups; “expert mental models” were formulated based on open‐ended discussions with experts in the pertinent fields. A comparative assessment of the public's and experts’ knowledge and perception of dengue disease risk has revealed significant gaps in the perception of: (a) disease risk indicators and measurements; (b) disease severity; (c) control of disease spread; and (d) the institutions responsible for intervention. This assessment further identifies misconceptions in public perception regarding: (a) causes of dengue disease; (b) dengue disease symptoms; (c) dengue disease severity; (d) dengue vector ecology; and (e) dengue disease transmission. Based on these results, recommendations are put forward for improving communication of dengue risk and practicing local community engagement and knowledge enhancement in Bangladesh.

17.
Listeria monocytogenes is a leading cause of hospitalization, fetal loss, and death due to foodborne illnesses in the United States. A quantitative assessment of the relative risk of listeriosis associated with the consumption of 23 selected categories of ready‐to‐eat foods, published by the U.S. Department of Health and Human Services and the U.S. Department of Agriculture in 2003, has been instrumental in identifying the food products and practices that pose the greatest listeriosis risk and has guided the evaluation of potential intervention strategies. Dose‐response models, which quantify the relationship between an exposure dose and the probability of adverse health outcomes, were essential components of the risk assessment. However, because of data gaps and limitations in the available data and modeling approaches, considerable uncertainty existed. Since publication of the risk assessment, new data have become available for modeling L. monocytogenes dose‐response. At the same time, recent advances in the understanding of L. monocytogenes pathophysiology and strain diversity have warranted a critical reevaluation of the published dose‐response models. To discuss strategies for modeling L. monocytogenes dose‐response, the Interagency Risk Assessment Consortium (IRAC) and the Joint Institute for Food Safety and Applied Nutrition (JIFSAN) held a scientific workshop in 2011 (details available at http://foodrisk.org/irac/events/). The main findings of the workshop and the most current and relevant data identified during the workshop are summarized and presented in the context of L. monocytogenes dose‐response. This article also discusses new insights on dose‐response modeling for L. monocytogenes and research opportunities to meet future needs.

18.
Royce A. Francis. Risk Analysis, 2015, 35(11): 1983–1995
This article argues that “game‐changing” approaches to risk analysis must focus on “democratizing” risk analysis in the same way that information technologies have democratized access to, and production of, knowledge. This argument is motivated by the author's reading of Goble and Bier's analysis, “Risk Assessment Can Be a Game‐Changing Information Technology—But Too Often It Isn't” (Risk Analysis, 2013; 33: 1942–1951), in which living risk assessments are shown to be “game changing” in probabilistic risk analysis. In this author's opinion, Goble and Bier's article focuses on living risk assessment's potential for transforming risk analysis from the perspective of risk professionals—yet, the game‐changing nature of information technologies has typically achieved a much broader reach. Specifically, information technologies change who has access to, and who can produce, information. From this perspective, the author argues that risk assessment is not a game‐changing technology in the same way as the printing press or the Internet because transformative information technologies reduce the cost of production of, and access to, privileged knowledge bases. The author argues that risk analysis does not reduce these costs. The author applies Goble and Bier's metaphor to the chemical risk analysis context, and in doing so proposes key features that transformative risk analysis technology should possess. The author also discusses the challenges and opportunities facing risk analysis in this context. These key features include: clarity in information structure and problem representation, economical information dissemination, increased transparency to nonspecialists, democratized manufacture and transmission of knowledge, and democratic ownership, control, and interpretation of knowledge. The chemical safety decision‐making context illustrates the impact of changing the way information is produced and accessed in the risk context. 
Ultimately, the author concludes that although new chemical safety regulations do transform access to risk information, they do not transform the costs of producing this information—rather, they change the bearer of these costs. The need for further risk assessment transformation continues to motivate new practical and theoretical developments in risk analysis and management.

19.
Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio‐scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low‐probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real‐world risk analysis recently conducted for DEFRA (the U.K. Department for Environment, Food and Rural Affairs): the prioritization of animal health threats.
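The article's maximum-entropy algorithm is not reproduced here, but the general idea of turning a pure ranking into numbers can be illustrated with a related, simpler rule from the same multicriteria-decision-analysis tradition: rank-order-centroid (ROC) values, the centroid of all probability vectors consistent with the stated order. This is an illustrative stand-in, not the algorithm used in the paper.

```python
def rank_order_centroid(n):
    """Centroid of the simplex region {p1 >= p2 >= ... >= pn, sum = 1}:
    p_i = (1/n) * sum_{j=i..n} 1/j."""
    return [sum(1.0 / j for j in range(i, n + 1)) / n for i in range(1, n + 1)]

# Four events ranked from most to least likely:
weights = rank_order_centroid(4)
print([round(w, 3) for w in weights])  # [0.521, 0.271, 0.146, 0.062]
```

Like the paper's rule, this asks the expert only for an ordering; unlike it, ROC cannot incorporate the richer constraints (independence, low-probability anchors) the maximum-entropy formulation handles.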

20.
Next‐generation sequencing (NGS) data present an untapped potential to improve microbial risk assessment (MRA) through increased specificity and redefinition of the hazard. Most of the MRA models do not account for differences in survivability and virulence among strains. The potential of machine learning algorithms for predicting the risk/health burden at the population level while inputting large and complex NGS data was explored with Listeria monocytogenes as a case study. Listeria data consisted of a percentage similarity matrix from genome assemblies of 38 and 207 strains of clinical and food origin, respectively. The Basic Local Alignment Search Tool (BLAST) was used to align the assemblies against a database of 136 virulence and stress resistance genes. The outcome variable was frequency of illness, which is the percentage of reported cases associated with each strain. These frequency data were discretized into seven ordinal outcome categories and used for supervised machine learning and model selection from five ensemble algorithms. There was no significant difference in accuracy between the models, and support vector machine with linear kernel was chosen for further inference (accuracy of 89% [95% CI: 68%, 97%]). The virulence genes FAM002725, FAM002728, FAM002729, InlF, InlJ, Inlk, IisY, IisD, IisX, IisH, IisB, lmo2026, and FAM003296 were important predictors of higher frequency of illness. InlF was uniquely truncated in the sequence type 121 strains. Most important risk predictor genes occurred at highest prevalence among strains from ready‐to‐eat, dairy, and composite foods. We foresee that the findings and approaches described offer the potential for rethinking the current approaches in MRA.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号