Similar Literature
20 similar documents found.
1.
Various approaches have been proposed for determining scenario probabilities to facilitate long-range planning and decision making. These include microlevel approaches based on the analysis of relevant underlying events and their interrelations, and direct macrolevel examination of the scenarios. These procedures place excessive consistency and time demands on the expert, and they often do not guarantee a unique solution. We propose an interactive information maximizing scenario probability query procedure (IMQP) that exploits the desirable features of existing methods while circumventing their drawbacks. The approach requires elicitation of cardinal probability assessments and bounds for only marginal and first-order conditional events, as well as ordinal probability comparisons (probability orderings or rankings) of carefully selected scenario subsets determined using concepts of information theory. Guidelines for implementation based on simulation results are also developed. A goal program for handling inconsistent ordinal probability responses is also integrated into the procedure. Behavioral experiments comparing our approach with Expert Choice showed that the IMQP is viable, compares favorably in terms of ease of use and time requirements, and works best for problems with a large number of scenarios. Design modifications to IMQP learned from the experiments, such as incorporating interactive graphics, are also in progress.
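The abstract does not spell out the goal program. Below is a minimal sketch of one way inconsistent ordinal probability responses could be reconciled, assuming three hypothetical scenarios with elicited probability bounds and a cyclic (inconsistent) set of orderings, posed as a linear goal program and solved with SciPy; this is an illustration, not the article's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: x = [p1, p2, p3, s12, s23, s31]
#   p_i  : scenario probabilities
#   s_jk : slack (goal deviation) for the elicited ordering "p_j >= p_k"
c = np.array([0, 0, 0, 1, 1, 1], dtype=float)   # minimize total violation

# Elicited (inconsistent) ordinal responses: p1 >= p2, p2 >= p3, p3 >= p1.
# Each becomes  -p_j + p_k - s_jk <= 0.
A_ub = np.array([
    [-1,  1,  0, -1,  0,  0],
    [ 0, -1,  1,  0, -1,  0],
    [ 1,  0, -1,  0,  0, -1],
], dtype=float)
b_ub = np.zeros(3)

A_eq = np.array([[1, 1, 1, 0, 0, 0]], dtype=float)   # probabilities sum to 1
b_eq = np.array([1.0])

# Hypothetical elicited bounds on the marginal scenario probabilities.
bounds = [(0.40, 0.60), (0.20, 0.40), (0.05, 0.25),
          (0, None), (0, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("probabilities:", np.round(res.x[:3], 3),
      "total violation:", round(res.fun, 3))
```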

2.
This article tries to clarify the potential role to be played by uncertainty theories such as imprecise probabilities, random sets, and possibility theory in the risk analysis process. Rather than opposing an objective bounding analysis, in which only statistically founded probability distributions are taken into account, to a full-fledged probabilistic approach that exploits expert subjective judgment, we advocate the idea that both analyses are useful and should be articulated with one another. Moreover, the idea that risk analysis under incomplete information is purely objective is misconceived. The use of uncertainty theories cannot be reduced to a choice between probability distributions and intervals. Indeed, they offer representation tools that are more expressive than either of the latter approaches and can capture expert judgments while being faithful to their limited precision. Consequences of this thesis are examined for uncertainty elicitation, propagation, and the decision-making step.

3.
The Department of Homeland Security (DHS) characterized and prioritized the physical cross-border threats and hazards to the nation stemming from terrorism, market-driven illicit flows of people and goods (illegal immigration, narcotics, funds, counterfeits, and weaponry), and other nonmarket concerns (movement of diseases, pests, and invasive species). These threats and hazards pose a wide diversity of consequences with very different combinations of magnitudes and likelihoods, making it very challenging to prioritize them. This article presents the approach that was used at DHS to arrive at a consensus regarding the threats and hazards that stand out from the rest based on the overall risk they pose. Due to time constraints for the decision analysis, it was not feasible to apply multiattribute methodologies like multiattribute utility theory or the analytic hierarchy process. A holistic approach was considered instead, such as the deliberative method for ranking risks first published in this journal. However, an ordinal ranking alone does not indicate relative or absolute magnitude differences among the risks. Therefore, the deliberative method for ranking risks is not sufficient for deciding whether there is a material difference between the top-ranked and bottom-ranked risks, let alone deciding what the stand-out risks are. To address this limitation of ordinal rankings, the deliberative method for ranking risks was augmented with an additional step that transforms the ordinal ranking into a ratio scale ranking. This additional step enabled the selection of stand-out risks to help prioritize further analysis.
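The abstract does not describe the added step in detail. Below is a minimal sketch of one way an ordinal ranking could be turned into a ratio-scale ranking, assuming hypothetical risk names, hypothetical "times riskier than the bottom-ranked risk" judgments, and a made-up rule for flagging stand-out risks; none of this is the DHS procedure itself.

```python
# Hypothetical illustration: turning an ordinal ranking of cross-border risks
# into a ratio-scale ranking and flagging "stand-out" risks.

# Ordinal ranking from a deliberative exercise, highest risk first (made up).
ordinal_ranking = ["terrorism", "narcotics", "counterfeits", "invasive_species"]

# Elicited ratio judgments: how many times riskier each item is than the
# bottom-ranked risk, which is assigned 1.0 (all values hypothetical).
ratio_judgments = {
    "terrorism": 40.0,
    "narcotics": 25.0,
    "counterfeits": 3.0,
    "invasive_species": 1.0,
}

# Normalize to a ratio scale (shares of total judged risk).
total = sum(ratio_judgments.values())
ratio_scale = {name: value / total for name, value in ratio_judgments.items()}

# A simple "stand-out" rule: find the largest ratio between consecutive risks
# and flag everything above that break point.
ratios = [ratio_judgments[a] / ratio_judgments[b]
          for a, b in zip(ordinal_ranking, ordinal_ranking[1:])]
break_at = ratios.index(max(ratios)) + 1
standouts = ordinal_ranking[:break_at]

print({name: round(share, 3) for name, share in ratio_scale.items()})
print("stand-out risks:", standouts)
```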

4.
In many situations where normative decision-aiding techniques could be usefully applied, historical data are inadequate for estimating the required outcome probabilities, and economic methodologies are inadequate for estimating the aggregate utility derived from the several outcome attributes. In such cases it is often useful to obtain the required estimates in the form of expert judgments, i.e. to obtain subjective probabilities and multi-attribute utilities. Similarly, in many situations where behavioral decision processes are to be studied, it is necessary to scale the expectations and perceived values of the decision makers. This article describes the methods for eliciting subjective probabilities and multi-attribute utilities whose usefulness has been empirically studied and reported in the research literature. It also contains summary guidelines concerning the elicitation and use of such judgments.

5.
Andrea Herrmann. Risk Analysis, 2013, 33(8): 1510–1531.
How well can people estimate IT-related risk? Although estimating risk is a fundamental activity in software management and risk is the basis for many decisions, little is known about how well IT-related risk can be estimated at all. Therefore, we executed a risk estimation experiment with 36 participants. They estimated the probabilities of IT-related risks, and we investigated the effect of the following factors on the quality of the risk estimation: the estimator's age, work experience in computing, (self-reported) safety awareness and previous experience with the risk in question, the absolute value of the risk's probability, and the effect of knowing the estimates of the other participants (as in the Delphi method). Our main findings are as follows: risk probabilities are difficult to estimate. Younger and inexperienced estimators were not significantly worse than older and more experienced estimators, but the older and more experienced subjects made better use of the knowledge gained from the other estimators' results. Persons with higher safety awareness tend to overestimate risk probabilities, but can better estimate the ordinal ranks of risk probabilities. Previous own experience with a risk leads to an overestimation of its probability (unlike in other fields such as medicine or disasters, where experience with a disease leads to more realistic probability estimates and lack of experience to an underestimation).

6.
Risk Analysis, 2018, 38(10): 2128–2143.
Subjective probabilities are central to risk assessment, decision making, and risk communication efforts. Surveys measuring probability judgments have traditionally used open-ended response modes, asking participants to generate a response between 0% and 100%. A typical finding is the seemingly excessive use of 50%, perhaps as an expression of "I don't know." In an online survey with a nationally representative sample of the Dutch population, we examined the effect of response modes on the use of 50% and other focal responses, predictive validity, and respondents' survey evaluations. Respondents assessed the probability of dying, getting the flu, and experiencing other health-related events. They were randomly assigned to a traditional open-ended response mode, a visual linear scale ranging from 0% to 100%, or a version of that visual linear scale on which a magnifier emerged after clicking on it. We found that, compared to the open-ended response mode, the visual linear and magnifier scale each reduced the use of 50%, 0%, and 100% responses, especially among respondents with low numeracy. Responses given with each response mode were valid, in terms of significant correlations with health behavior and outcomes. Where differences emerged, the visual scales seemed to have slightly better validity than the open-ended response mode. Both high-numerate and low-numerate respondents' evaluations of the surveys were highest for the visual linear scale. Our results have implications for subjective probability elicitation and survey design.

7.
E. S. Levine. Risk Analysis, 2012, 32(2): 294–303.
Many analyses conducted to inform security decisions depend on estimates of the conditional probabilities of different attack alternatives. These probabilities are difficult to estimate since analysts have limited access to the adversary and limited knowledge of the adversary's utility function, so subject matter experts often provide the estimates through direct elicitation. In this article, we describe a method of using uncertainty in utility function value tradeoffs to model the adversary's decision process and solve for the conditional probabilities of different attacks in closed form. The conditional probabilities are suitable for use as inputs to probabilistic risk assessments and other decision support techniques. The process we describe is an extension of value-focused thinking and is broadly applicable, including in general business decision making. We demonstrate the use of this technique with simple examples.
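The article derives these probabilities in closed form; the sketch below is only a numerical stand-in for the same idea, assuming three hypothetical attack alternatives scored on two value attributes and an adversary whose attribute weight is uncertain (uniform on [0, 1]). The conditional probability of each attack is approximated as the share of weight draws under which that attack maximizes utility.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical attack alternatives scored on two value attributes
# (e.g., expected casualties and economic damage), each scaled to [0, 1].
attacks = {
    "attack_A": np.array([0.9, 0.2]),
    "attack_B": np.array([0.3, 0.8]),
    "attack_C": np.array([0.6, 0.5]),
}

# Uncertain tradeoff: weight w on the first attribute, (1 - w) on the second.
w = rng.uniform(0.0, 1.0, size=100_000)
weights = np.column_stack([w, 1.0 - w])      # one weight vector per draw

scores = np.array(list(attacks.values()))    # shape (n_attacks, 2)
utilities = weights @ scores.T               # shape (n_draws, n_attacks)
best = utilities.argmax(axis=1)              # preferred attack per draw

names = list(attacks.keys())
cond_prob = {names[i]: float(np.mean(best == i)) for i in range(len(names))}
print(cond_prob)   # approximate conditional attack probabilities
```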

8.
Communicating the rationale for allocating resources to manage policy priorities and their risks is challenging. Here, we demonstrate that environmental risks have diverse attributes and locales in their effects that may drive disproportionate responses among citizens. When 2,065 survey participants deployed summary information and their own understanding to assess 12 policy-level environmental risks individually, their assessment differed from a prior expert assessment. However, participants provided rankings similar to those of experts when these same 12 risks were considered as a group, allowing comparison between the different risks. Following this, when individuals were shown the prior expert assessment of this portfolio, they expressed a moderate level of confidence in the combined expert analysis. These are important findings for the comprehension of policy risks that may be subject to augmentation by climate change, their representation alongside other threats within national risk assessments, and interpretations of agency for public risk management by citizens and others.

9.
Yifan Zhang. Risk Analysis, 2013, 33(1): 109–120.
Expert judgment (or expert elicitation) is a formal process for eliciting judgments from subject-matter experts about the value of a decision-relevant quantity. Judgments in the form of subjective probability distributions are obtained from several experts, raising the question of how best to combine information from multiple experts. A number of algorithmic approaches have been proposed, of which the most commonly employed is the equal-weight combination (the average of the experts' distributions). We evaluate the properties of five combination methods (equal-weight, best-expert, performance, frequentist, and copula) using simulated expert-judgment data for which we know the process generating the experts' distributions. We examine cases in which two well-calibrated experts are of equal or unequal quality and their judgments are independent, positively dependent, or negatively dependent. In this setting, the copula, frequentist, and best-expert approaches perform better, and the equal-weight combination method performs worse, than the alternative approaches.
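For reference, the equal-weight combination mentioned above is simply the average of the experts' distributions (a linear opinion pool). Below is a minimal sketch, assuming two hypothetical experts who each express their judgment as a normal distribution; the expert parameters and grid are made up.

```python
import numpy as np
from scipy import stats

x = np.linspace(-5, 15, 501)

# Hypothetical judgments from two experts, each a subjective probability
# density for the same decision-relevant quantity.
expert_1 = stats.norm(loc=4.0, scale=1.5)
expert_2 = stats.norm(loc=7.0, scale=2.5)

# Equal-weight (linear opinion pool) combination: the average of the densities.
combined_pdf = 0.5 * expert_1.pdf(x) + 0.5 * expert_2.pdf(x)
combined_cdf = 0.5 * expert_1.cdf(x) + 0.5 * expert_2.cdf(x)

# Median of the combined distribution, read off the pooled CDF.
median = x[np.searchsorted(combined_cdf, 0.5)]
print("combined median:", round(float(median), 2))
```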

10.
Autonomous underwater vehicles (AUVs) are used increasingly to explore hazardous marine environments. Risk assessment for such complex systems is based on subjective judgment and expert knowledge as much as on hard statistics. Here, we describe the use of a risk management process tailored to AUV operations, the implementation of which requires the elicitation of expert judgment. We conducted a formal judgment elicitation process where eight world experts in AUV design and operation were asked to assign a probability of AUV loss given the emergence of each fault or incident from the vehicle's life history of 63 faults and incidents. After discussing methods of aggregation and analysis, we show how the aggregated risk estimates obtained from the expert judgments were used to create a risk model. To estimate AUV survival with mission distance, we adopted a statistical survival function based on the nonparametric Kaplan-Meier estimator. We present theoretical formulations for the estimator, its variance, and confidence limits. We also present a numerical example where the approach is applied to estimate the probability that the Autosub3 AUV would survive a set of missions under Pine Island Glacier, Antarctica in January–March 2009.
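The Kaplan-Meier estimator itself is standard, with distance travelled playing the role usually taken by time. Below is a minimal sketch with hypothetical mission distances and fault indicators, including Greenwood's formula for the variance; the Autosub3 data are not reproduced here.

```python
import numpy as np

# Hypothetical AUV record: distance (km) at which each mission ended, and
# whether it ended in a loss-critical fault (1) or was completed/censored (0).
distance = np.array([12.0, 30.0, 45.0, 45.0, 60.0, 75.0, 90.0, 120.0])
event    = np.array([1,    0,    1,    1,    0,    1,    0,    0])

# Kaplan-Meier: S(d) = prod over fault distances d_i <= d of (1 - e_i / n_i),
# where n_i missions are still "at risk" just before d_i and e_i fail there.
fault_d = np.unique(distance[event == 1])
surv, var_sum = 1.0, 0.0
for d in fault_d:
    n_i = np.sum(distance >= d)                # missions still at risk
    e_i = np.sum((distance == d) & (event == 1))
    surv *= 1.0 - e_i / n_i
    var_sum += e_i / (n_i * (n_i - e_i))       # Greenwood's formula accumulator
    se = surv * np.sqrt(var_sum)
    print(f"S({d:6.1f} km) = {surv:.3f}  (s.e. ~ {se:.3f})")
```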

11.
Listeria monocytogenes is among the foodborne pathogens with the highest death toll in the United States. Ready-to-eat foods contaminated at retail are an important source of infection. Environmental sites in retail deli operations can be contaminated. However, commonly contaminated sites are unlikely to come into direct contact with food and the public health relevance of environmental contamination has remained unclear. To identify environmental sites that may pose a considerable cross-contamination risk, to elucidate potential transmission pathways, and to identify knowledge gaps, we performed a structured expert elicitation of 41 experts from state regulatory agencies and the food retail industry with practical experience in retail deli operations. Following the "Delphi" method, the elicitation was performed in three consecutive steps: questionnaire, review and discussion of results, second questionnaire. Hands and gloves were identified as important potential contamination sources. However, bacterial transfers to and from hands or gloves represented a major data gap. Experts agreed about transfer probabilities from cutting boards, scales, deli cases, and deli preparation sinks to product, and about transfer probabilities from floor drains, walk-in cooler floors, and knife racks to food contact surfaces. Comparison of experts' opinions to observational data revealed a tendency among experts with certain demographic characteristics and professional opinions to overestimate prevalence. Experts' votes clearly clustered into separate groups not defined by place of employment, even though industry experts may have been somewhat overrepresented in one cluster. Overall, our study demonstrates the value and caveats of expert elicitation to identify data gaps and prioritize research efforts.

12.
The Constrained Extremal Distribution Selection Method
Engineering design and policy formulation often involve the assessment of the likelihood of future events, commonly expressed through a probability distribution. Determination of these distributions is based, when possible, on observational data. Unfortunately, these data are often incomplete, biased, and/or incorrect. These problems are exacerbated when policy formulation involves the risk of extreme events: situations of low likelihood and high consequences. Usually, observational data simply do not exist for such events. Therefore, determination of probabilities that characterize extreme events must utilize all available knowledge, be it subjective or observational, so as to most accurately reflect the likelihood of such events. Extending previous work on the statistics of extremes, the Constrained Extremal Distribution Selection Method assists in the selection of probability distributions that characterize the risk of extreme events, using expert opinion to constrain the feasible values of the parameters that explicitly define a distribution. An extremal distribution is then "fit" to observational data, subject to the condition that the selected parameters do not violate any constraints. Using a random search technique (genetic algorithms), parameters are estimated that minimize a measure of fit between a hypothesized distribution and the observational data. The Constrained Extremal Distribution Selection Method is applied to a real-world policy problem faced by the U.S. Environmental Protection Agency. The selected distributions characterize the likelihood of extreme, fatal hazardous material accidents in the United States. These distributions are used to characterize the risk of large-scale accidents with numerous fatalities.
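A minimal sketch of the idea follows, assuming a Gumbel distribution as the extremal family, made-up observational data, hypothetical expert bounds on the location and scale parameters, and SciPy's differential evolution as a stand-in for the article's genetic algorithm; the measure of fit is the squared distance between the empirical and hypothesized CDFs.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
# Hypothetical observations of an extreme quantity (e.g., fatalities per accident).
data = np.sort(rng.gumbel(loc=10.0, scale=3.0, size=40))
ecdf = (np.arange(1, data.size + 1) - 0.5) / data.size   # empirical CDF at the data

def gumbel_cdf(x, mu, beta):
    return np.exp(-np.exp(-(x - mu) / beta))

def misfit(params):
    mu, beta = params
    return np.sum((gumbel_cdf(data, mu, beta) - ecdf) ** 2)

# Expert-derived constraints on feasible parameter values (hypothetical bounds).
bounds = [(5.0, 15.0),   # location mu
          (1.0, 6.0)]    # scale beta

# Evolutionary search for the best-fitting parameters within the constraints.
result = differential_evolution(misfit, bounds, seed=1)
mu_hat, beta_hat = result.x
print(f"mu = {mu_hat:.2f}, beta = {beta_hat:.2f}, misfit = {result.fun:.4f}")
```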

13.
Risk-related knowledge gained from past construction projects is regarded as potentially extremely useful in risk management. This article describes a proposed approach to capture and integrate risk-related knowledge to support decision making in construction projects. To ameliorate the problem of the scarcity of risk information often encountered in construction projects, Bayesian Belief Networks are used and expert judgment is elicited to augment the available information. In particular, the article provides an overview of judgment-based biases that can appear in the elicitation of judgments for constructing Bayesian Networks and the provisions that can be made to minimize these types of bias. The proposed approach is successfully applied to develop six models for top risks in tunnel works. More than 30 tunneling experts in the Netherlands and Germany were involved in the investigation to provide information on identifying relevant scenarios that can lead to failure events associated with tunneling risks. The article provides an illustration of the applicability of the developed approach for the case of "face instability in soft soils using slurry shields."
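The article's models were built with dedicated Bayesian-network tooling and expert-elicited probability tables; the sketch below only illustrates the underlying computation in plain Python, assuming a hypothetical two-parent network for the face-instability event with made-up probabilities.

```python
from itertools import product

# Hypothetical expert-elicited marginals for two parent factors.
p_poor_ground  = {True: 0.30, False: 0.70}   # unexpectedly soft soil
p_low_pressure = {True: 0.10, False: 0.90}   # slurry pressure below target

# Hypothetical expert-elicited conditional probability table for face instability,
# indexed by (poor ground, low pressure).
p_instability = {
    (True,  True):  0.40,
    (True,  False): 0.08,
    (False, True):  0.05,
    (False, False): 0.005,
}

# Marginal probability of face instability: sum over parent configurations.
p_fail = sum(
    p_poor_ground[g] * p_low_pressure[l] * p_instability[(g, l)]
    for g, l in product([True, False], repeat=2)
)
print("P(face instability) =", round(p_fail, 4))
```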

14.
15.
Researchers and commissions contend that the risk of human extinction is high, but none of their estimates are based upon a rigorous methodology suitable for estimating existential risks. This article evaluates several methods that could be used to estimate the probability of human extinction. Traditional methods evaluated include: simple elicitation; whole evidence Bayesian; evidential reasoning using imprecise probabilities; and Bayesian networks. Three innovative methods are also considered: influence modeling based on environmental scans; simple elicitation using extinction scenarios as anchors; and computationally intensive possible-worlds modeling. Evaluation criteria include: level of effort required by the probability assessors; level of effort needed to implement the method; ability of each method to model the human extinction event; ability to incorporate scientific estimates of contributory events; transparency of the inputs and outputs; acceptability to the academic community (e.g., with respect to intellectual soundness, familiarity, verisimilitude); credibility and utility of the outputs of the method to the policy community; difficulty of communicating the method's processes and outputs to nonexperts; and accuracy in other contexts. The article concludes by recommending that researchers assess the risks of human extinction by combining these methods.

16.
Risk Analysis, 2016, 36(2): 191–202.
We live in an age that increasingly calls for national or regional management of global risks. This article discusses the contributions that expert elicitation can bring to efforts to manage global risks and identifies challenges faced in conducting expert elicitation at this scale. In doing so, it draws on lessons learned from conducting an expert elicitation as part of the World Health Organization's (WHO) initiative to estimate the global burden of foodborne disease, a study commissioned by the Foodborne Disease Epidemiology Reference Group (FERG). Expert elicitation is designed to fill gaps in data and research using structured, transparent methods. Such gaps are a significant challenge for global risk modeling. Experience with the WHO FERG expert elicitation shows that it is feasible to conduct an expert elicitation at a global scale, but that challenges do arise, including: defining an informative, yet feasible, geographical structure for the elicitation; defining what constitutes expertise in a global setting; structuring international, multidisciplinary expert panels; and managing demands on experts' time in the elicitation. This article was written as part of a workshop, "Methods for Research Synthesis: A Cross-Disciplinary Approach," held at the Harvard Center for Risk Analysis on October 13, 2013.

17.
A key justification to support plant health regulations is the ability of quarantine services to conduct pest risk analyses (PRA). Despite the supranational nature of biological invasions and the close proximity and connectivity of Southeast Asian countries, PRAs are conducted at the national level. Furthermore, some countries have limited experience in the development of PRAs, which may result in inadequate phytosanitary responses that put their plant resources at risk to pests vectored via international trade. We review existing decision support schemes for PRAs and, following international standards for phytosanitary measures, propose new methods that adapt existing practices to suit the unique characteristics of Southeast Asia. Using a formal written expert elicitation survey, a panel of regional scientific experts was asked to identify and rate unique traits of Southeast Asia with respect to PRA. Subsequently, an expert elicitation workshop with plant protection officials was conducted to verify the potential applicability of the developed methods. Rich biodiversity, shortage of trained personnel, social vulnerability, tropical climate, agriculture-dependent economies, high rates of land-use change, and difficulties in implementing risk management options were identified as challenging Southeast Asian traits. The developed methods emphasize local Southeast Asian conditions and could help support authorities responsible for carrying out PRAs within the region. These methods could also facilitate the creation of other PRA schemes in low- and middle-income tropical countries.

18.
Ali Mosleh. Risk Analysis, 2012, 32(11): 1888–1900.
Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. This potential exposure is measured in terms of the probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century; the agencies provide their assessments of the probability of default and the transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses techniques developed in the physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and the associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis.
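A minimal sketch of the kind of updating such a framework performs, assuming a single obligor, a base-rate prior, and two hypothetical agencies whose historical accuracy is reduced to simple hit and false-alarm rates treated as conditionally independent evidence; all numbers are made up, and the article's actual model is richer than this.

```python
# Prior probability that the obligor defaults within the horizon (base rate).
prior_default = 0.05

# Hypothetical historical accuracy of two agencies:
# P(signal = "default" | true default) and P(signal = "default" | no default).
agencies = {
    "agency_A": {"hit_rate": 0.80, "false_alarm": 0.10, "signal": "default"},
    "agency_B": {"hit_rate": 0.60, "false_alarm": 0.05, "signal": "no_default"},
}

# Bayesian update, treating the agencies' signals as conditionally
# independent evidence given the true state of the obligor.
like_default, like_no_default = 1.0, 1.0
for a in agencies.values():
    if a["signal"] == "default":
        like_default *= a["hit_rate"]
        like_no_default *= a["false_alarm"]
    else:
        like_default *= 1.0 - a["hit_rate"]
        like_no_default *= 1.0 - a["false_alarm"]

posterior = (prior_default * like_default) / (
    prior_default * like_default + (1.0 - prior_default) * like_no_default
)
print("posterior default probability:", round(posterior, 4))
```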

19.
The risk of an event generally relates to its expected severity and the perceived probability of its occurrence. In risk research, however, there is no standard measure for subjective probability estimates. In this study, we compared five commonly used measurement formats—two rating scales, a visual analog scale, and two numeric measures—in terms of their ability to assess subjective probability judgments when objective probabilities are available. We varied the probabilities (low vs. moderate) and severity (low vs. high) of the events to be judged, as well as the presentation mode of objective probabilities (sequential presentation of singular events vs. graphical presentation of aggregated information). We employed two complementary goodness-of-fit criteria: the correlation between objective and subjective probabilities (sensitivity), and the root mean square deviation of subjective probabilities from objective values (accuracy). The numeric formats generally outperformed all other measures. The severity of events had no effect on performance. Generally, a rise in probability led to decreases in performance. This effect, however, depended on how the objective probabilities were encoded: pictographs ensured perfect information, which improved goodness of fit for all formats and diminished this negative effect on performance. Differences in performance between scales are thus caused only in part by characteristics of the scales themselves—they also depend on the process of encoding. Consequently, researchers should take the source of probability information into account before selecting a measure.
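The two goodness-of-fit criteria are simple to compute; below is a minimal sketch, assuming hypothetical objective probabilities for the judged events and one respondent's subjective estimates.

```python
import numpy as np

# Hypothetical objective probabilities of the judged events and one
# respondent's subjective estimates for the same events (all in [0, 1]).
objective  = np.array([0.05, 0.10, 0.20, 0.30, 0.50])
subjective = np.array([0.10, 0.10, 0.25, 0.40, 0.45])

# Sensitivity: correlation between objective and subjective probabilities.
sensitivity = np.corrcoef(objective, subjective)[0, 1]

# Accuracy: root mean square deviation of subjective from objective values.
rmsd = np.sqrt(np.mean((subjective - objective) ** 2))

print(f"sensitivity (r) = {sensitivity:.3f}, accuracy (RMSD) = {rmsd:.3f}")
```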

20.
In human reliability analysis (HRA), dependence analysis refers to assessing the influence of the failure of the operators to perform one task on the failure probabilities of subsequent tasks. A commonly used approach is the technique for human error rate prediction (THERP). The assessment of the dependence level in THERP is a highly subjective judgment based on general rules for the influence of five main factors. A frequently used alternative method extends the THERP model with decision trees. Such trees should increase the repeatability of the assessments but they simplify the relationships among the factors and the dependence level. Moreover, the basis for these simplifications and the resulting tree is difficult to trace. The aim of this work is a method for dependence assessment in HRA that captures the rules used by experts to assess dependence levels and incorporates this knowledge into an algorithm and software tool to be used by HRA analysts. A fuzzy expert system (FES) underlies the method. The method and the associated expert elicitation process are demonstrated with a working model. The expert rules are elicited systematically and converted into a traceable, explicit, and computable model. Anchor situations are provided as guidance for the HRA analyst's judgment of the input factors. The expert model and the FES-based dependence assessment method make the expert rules accessible to the analyst in a usable and repeatable way, with an explicit and traceable basis.
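The article's rule base is elicited from experts and is not reproduced in the abstract; the sketch below only illustrates generic fuzzy-expert-system mechanics (triangular membership functions, min rule firing, and a weighted-average defuzzification) with two made-up factors and two made-up rules, not the authors' actual model.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical input factors, both scaled to [0, 1]:
time_gap   = 0.2   # closeness in time of the two tasks (0 = immediately after)
similarity = 0.8   # similarity of the tasks / performers

# Membership in coarse linguistic terms (made-up shapes).
close_in_time = tri(time_gap,   -0.5, 0.0, 0.5)
far_in_time   = tri(time_gap,    0.5, 1.0, 1.5)
very_similar  = tri(similarity,  0.5, 1.0, 1.5)
dissimilar    = tri(similarity, -0.5, 0.0, 0.5)

# Two made-up rules; each fires with the min of its antecedents and points
# to a representative dependence level on a 0 (zero) to 1 (complete) scale.
rules = [
    (min(close_in_time, very_similar), 0.9),   # -> high/complete dependence
    (min(far_in_time,   dissimilar),   0.1),   # -> low/zero dependence
]

# Weighted-average defuzzification of the fired rules.
num = sum(strength * level for strength, level in rules)
den = sum(strength for strength, _ in rules) or 1.0
print("assessed dependence level:", round(num / den, 3))
```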
