Similar Articles (20 results)
1.
Quantitative risk analysis (QRA) is a systematic approach for evaluating the likelihood, consequences, and risk of adverse events. QRA based on event tree analysis (ETA) and fault tree analysis (FTA) employs two basic assumptions. The first concerns the likelihood values of input events, and the second concerns interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; to deal with uncertainties, probability distributions of the input event likelihoods are assumed instead. These probability distributions are often hard to come by, and even when available they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework for a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods, and a method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed.
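As a rough illustration of combining fuzzy likelihoods with a dependency coefficient, the sketch below represents likelihoods as triangular fuzzy numbers and blends the independent-case gate formula with the perfectly dependent case. The interpolation rule and the event likelihoods are assumptions for illustration, not necessarily the article's exact formulation.

```python
# Sketch: fuzzy AND/OR gates with a dependency coefficient d in [0, 1].
# Triangular fuzzy numbers are (low, mode, high). The linear blend between
# independence (d = 0) and perfect dependence (d = 1) is an assumed rule.

def tfn_and(p, q, d=0.0):
    """AND gate: componentwise blend of the independent product and min."""
    return tuple((1 - d) * a * b + d * min(a, b) for a, b in zip(p, q))

def tfn_or(p, q, d=0.0):
    """OR gate: componentwise blend of 1-(1-p)(1-q) and max."""
    return tuple((1 - d) * (a + b - a * b) + d * max(a, b)
                 for a, b in zip(p, q))

pump = (0.01, 0.02, 0.04)    # hypothetical basic-event likelihoods
valve = (0.005, 0.01, 0.02)

top_and = tfn_and(pump, valve)        # fully independent AND gate
top_or = tfn_or(pump, valve, d=0.5)   # partially dependent OR gate
```

At d = 0 the gates reduce to the usual independence formulas; at d = 1 the AND gate returns the smaller likelihood and the OR gate the larger one.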

2.
Today, chemical risk and safety assessments rely heavily on the estimation of environmental fate by models. The key compound-related properties in such models describe partitioning and reactivity. Uncertainty in determining these properties can be separated into random and systematic (incompleteness) components, which require different types of representation. Here, we evaluate two approaches that can also treat systematic error: fuzzy arithmetic and probability bounds analysis. When a best estimate (mode) and a range can be computed for an input parameter, the uncertainty can be characterized by a triangular fuzzy number (possibility distribution) or by a corresponding probability box bounded by two uniform distributions. We use a five-compartment Level I fugacity model and empirical data reported in the literature for three well-known environmental pollutants (benzene, pyrene, and DDT) as illustrative cases for this evaluation. Propagation of uncertainty by discrete probability calculus or interval arithmetic can be done at low computational cost and gives maximum flexibility in applying different approaches. Our evaluation suggests that the difference between fuzzy arithmetic and probability bounds analysis is small, at least for this specific case. The fuzzy arithmetic approach can, however, be regarded as less conservative than probability bounds analysis if the assumption of independence is removed. Both approaches are sensitive to repeated parameters, which may inflate the uncertainty estimate. Uncertainty described by probability boxes was therefore also propagated through the model by Monte Carlo simulation to show how this problem can be avoided.
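The fuzzy arithmetic side of this comparison can be sketched with alpha-cut interval propagation: each alpha level of a triangular fuzzy number is an interval, and (for positive quantities) multiplication is done endpoint by endpoint. The input numbers below are illustrative, not the paper's data.

```python
# Minimal alpha-cut propagation for triangular fuzzy numbers (low, mode, high),
# assuming strictly positive quantities. Inputs are illustrative placeholders.

def alpha_cut(tfn, alpha):
    """Interval of values whose membership is at least `alpha`."""
    low, mode, high = tfn
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def fuzzy_mul(x, y, alphas=(0.0, 0.5, 1.0)):
    """Interval multiplication at each alpha level (positive intervals only)."""
    out = {}
    for a in alphas:
        (x1, x2), (y1, y2) = alpha_cut(x, a), alpha_cut(y, a)
        out[a] = (x1 * y1, x2 * y2)
    return out

partition = (100.0, 130.0, 180.0)   # hypothetical partition coefficient
emission = (0.5, 1.0, 1.5)          # hypothetical emission mass
cuts = fuzzy_mul(partition, emission)
```

At alpha = 1 the result collapses to the product of the modes; at alpha = 0 it spans the widest interval, mirroring how a probability box would bound the output.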

3.
Risk Analysis, 2018, 38(8):1576-1584
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear to be attractive, practical alternatives for evaluating uncertainty in the output of fault trees and similar multilinear models.
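The core of the lognormal approximation can be illustrated on a single minimal cut set: the product of independent lognormal basic-event probabilities is itself exactly lognormal, with log-means and log-variances adding. The parameters below are illustrative, and the full article handles sums of cut sets approximately.

```python
import math

# One minimal cut set: AND of two independent basic events whose failure
# probabilities are lognormal(mu_i, sigma_i). The cut-set probability is
# lognormal(sum(mu_i), sqrt(sum(sigma_i^2))). Parameters are illustrative.

mus = [math.log(1e-3), math.log(5e-4)]   # log-medians of basic events
sigmas = [0.5, 0.7]                      # log-standard deviations

mu_top = sum(mus)
sigma_top = math.sqrt(sum(s * s for s in sigmas))

median = math.exp(mu_top)                      # median cut-set probability
p95 = math.exp(mu_top + 1.645 * sigma_top)     # approximate 95th percentile
```

This gives percentiles in closed form, with no sampling, which is exactly the computational advantage the abstract describes.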

4.
In risk analysis, the treatment of the epistemic uncertainty associated with the probability of occurrence of an event is fundamental. Traditionally, probability distributions have been used to characterize the epistemic uncertainty due to imprecise knowledge of the parameters in risk models. On the other hand, it has been argued that in certain instances such uncertainty may be best accounted for by fuzzy or possibilistic distributions. This seems to be the case in particular for parameters for which the available information is scarce and qualitative in nature. In practice, a risk model can be expected to contain some parameters whose uncertainties are best represented by probability distributions and others that are more properly described in terms of fuzzy or possibilistic distributions. In this article, a hybrid method that jointly propagates probabilistic and possibilistic uncertainties is considered and compared with pure probabilistic and pure fuzzy methods for uncertainty propagation. The analyses are carried out on a case study concerning the uncertainties in the probabilities of occurrence of accident sequences in an event tree analysis of a nuclear power plant.

5.
A wide range of uncertainties is inevitably introduced during a safety assessment of engineering systems. The impact of all these uncertainties must be addressed if the analysis is to serve as a tool in the decision-making process. Uncertainties in the model components (input parameters or basic events) are propagated to quantify their impact on the final results. Several methods are available in the literature, namely, the method of moments, discrete probability analysis, Monte Carlo simulation, fuzzy arithmetic, and Dempster-Shafer theory. These methods differ both in how uncertainty is characterized at the component level and in how it is propagated to the system level, and each has desirable and undesirable features that make it more or less useful in different situations. In the most widely used probabilistic framework, a probability distribution is used to characterize uncertainty. However, in situations in which one cannot specify (1) parameter values for input distributions, (2) precise probability distributions (shapes), or (3) dependencies between input parameters, these methods have limitations and are not effective. To address some of these limitations, this article presents uncertainty analysis in the context of level-1 probabilistic safety assessment (PSA) based on a probability bounds (PB) approach. PB analysis combines probability theory and interval arithmetic to produce probability boxes (p-boxes), structures that allow uncertainty to be propagated comprehensively and rigorously through calculations. A practical case study is also carried out with a code developed on the PB approach, and the results are compared with those of a two-phase Monte Carlo simulation.
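One common way to compute with p-boxes is to discretize each box into focal intervals of equal mass (a Dempster-Shafer approximation) and combine them over the Cartesian product. The sketch below shows this for the sum of two independent quantities; the intervals are illustrative and this is only one of several discretization schemes.

```python
from itertools import product

# Each p-box is discretized as n focal intervals (lo, hi), each carrying
# mass 1/n. Under independence, the sum of two p-boxes is obtained from
# the Cartesian product of focal elements, added endpoint by endpoint.

def pbox_sum(a, b):
    """Combine two lists of (lo, hi) focal intervals; masses multiply."""
    combos = [(x1 + y1, x2 + y2) for (x1, x2), (y1, y2) in product(a, b)]
    return sorted(combos)

a = [(0.0, 1.0), (1.0, 2.0)]   # each interval carries mass 1/2
b = [(0.0, 0.5), (0.5, 1.0)]
focal = pbox_sum(a, b)          # 4 intervals, mass 1/4 each
```

Bounding CDFs for the output are recovered by accumulating the masses of the lower and upper endpoints separately.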

6.
System unavailabilities for large complex systems such as nuclear power plants are often evaluated through use of fault tree analysis. The system unavailability is obtained from a Boolean representation of a system fault tree. Even after truncation of higher order terms these expressions can be quite large, involving thousands of terms. A general matrix notation is proposed for the representation of Boolean expressions which facilitates uncertainty and sensitivity analysis calculations.
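A minimal sketch of the underlying Boolean evaluation: the top event fires if any minimal cut set has all its basic events failed, and a first-order (rare-event) bound sums the cut-set products. The cut sets and probabilities are hypothetical, and the article's matrix notation is not reproduced here.

```python
# Boolean top-event evaluation from minimal cut sets, plus the rare-event
# upper bound used when truncating higher-order terms. Data are hypothetical.

def top_event(state, cut_sets):
    """state: dict basic event -> failed? TE occurs if any cut set fires."""
    return any(all(state[e] for e in cs) for cs in cut_sets)

def rare_event_bound(probs, cut_sets):
    """Sum of cut-set products: an upper bound for independent events."""
    total = 0.0
    for cs in cut_sets:
        p = 1.0
        for e in cs:
            p *= probs[e]
        total += p
    return total

cuts = [("A", "B"), ("C",)]
fired = top_event({"A": True, "B": True, "C": False}, cuts)
bound = rare_event_bound({"A": 1e-3, "B": 1e-2, "C": 1e-4}, cuts)
```

Uncertainty analysis then amounts to re-evaluating `rare_event_bound` over sampled basic-event probabilities.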

7.
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): “objective” dependence between the (random) occurrences of different basic events (BEs) in the FT, and “state-of-knowledge” (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects of objective and epistemic dependences on the TE probability. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present).
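For two events with known marginal probabilities, the Fréchet bounds mentioned above have a simple closed form, valid for any (unknown) dependence between the events:

```python
# Fréchet bounds on conjunction and disjunction probabilities for two
# events with marginals pa and pb, with no assumption about dependence.

def frechet_and(pa, pb):
    """Bounds on P(A and B)."""
    return (max(0.0, pa + pb - 1.0), min(pa, pb))

def frechet_or(pa, pb):
    """Bounds on P(A or B)."""
    return (max(pa, pb), min(1.0, pa + pb))

and_lo, and_hi = frechet_and(0.2, 0.7)
or_lo, or_hi = frechet_or(0.2, 0.7)
```

The independent-case values (0.14 for AND, 0.76 for OR) always fall inside these bounds, which is why the bounds model "all kinds of possibly unknown" objective dependence.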

8.
Risk Analysis, 2018, 38(4):694-709
Subsurface energy activities entail the risk of induced seismicity, including low-probability high-consequence (LPHC) events. For designing the corresponding risk communication, the scientific literature lacks empirical evidence of how the public reacts to different written risk communication formats about such LPHC events and to related uncertainty or expert confidence. This study presents findings from an online experiment (N = 590) that empirically tested the public's responses to risk communication about induced seismicity and to different technology frames, namely deep geothermal energy (DGE) and shale gas (between-subject design). Three incrementally different formats of written risk communication were tested: (i) qualitative, (ii) qualitative and quantitative, and (iii) qualitative and quantitative with risk comparison. Respondents found the latter two the easiest to understand and the most exact, and liked them the most. Adding uncertainty and expert confidence statements made the risk communication less clear and less easy to understand, and increased concern. Above all, the technology for which risks are communicated, and its acceptance, mattered strongly: respondents in the shale gas condition found the identical risk communication less trustworthy and more concerning than those in the DGE condition, and they also liked the risk communication less overall. For practitioners in DGE or shale gas projects, the study shows that the public would appreciate efforts to describe LPHC risks with numbers and, optionally, risk comparisons. However, there seems to be a trade-off between aiming for transparency by disclosing uncertainty and limited expert confidence, and thereby decreasing clarity and increasing concern in the view of the public.

9.
Currently, there is a trend away from the use of single (often conservative) estimates of risk to summarize the results of risk analyses in favor of stochastic methods that provide a more complete characterization of risk. The use of such stochastic methods leads to a distribution of possible values of risk, taking into account both uncertainty and variability in all of the factors affecting risk. In this article, we propose a general framework for the analysis of uncertainty and variability in the commonly encountered case of multiplicative risk models, in which risk may be expressed as a product of two or more risk factors. Our analytical methods facilitate the evaluation of overall uncertainty and variability in risk assessment, as well as the contributions of individual risk factors to both uncertainty and variability, a task that is cumbersome using Monte Carlo methods. The use of these methods is illustrated in the analysis of potential cancer risks due to the ingestion of radon in drinking water.
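The analytical convenience of multiplicative models can be sketched as follows: if risk is a product of independent lognormal factors, the log-variances add, so each factor's contribution to overall spread is simply its share of the total log-variance. The factor names and values below are illustrative, not the radon assessment's inputs.

```python
import math

# R = X1 * X2 * X3 with independent lognormal factors: on the log scale
# variances add, so each factor's share of total uncertainty is its
# log-variance over the sum. Names and sigmas are illustrative.

sigmas = {"concentration": 0.8, "intake_rate": 0.3, "potency": 0.5}

total_var = sum(s * s for s in sigmas.values())
shares = {k: s * s / total_var for k, s in sigmas.items()}

gsd_total = math.exp(math.sqrt(total_var))   # geometric std. dev. of R
```

Here the concentration factor dominates, and the decomposition is exact, with no Monte Carlo sampling required.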

10.
11.
Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio‐scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low‐probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real‐world risk analysis recently conducted for DEFRA (the U.K. Department for the Environment, Farming and Rural Affairs): the prioritization of animal health threats.
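To show the flavor of turning a pure ranking into numbers, the sketch below uses rank-order centroid (ROC) weights, a well-known surrogate from multicriteria decision analysis. This is not the article's maximum-entropy algorithm, only a simple illustration of ranking-to-number conversion.

```python
# Rank-order centroid weights: given only a ranking of n mutually
# exclusive events (most likely first), assign normalized weights
# w_i = (1/n) * sum_{k=i..n} 1/k. A surrogate, not the paper's method.

def roc_weights(n):
    return [sum(1.0 / k for k in range(i, n + 1)) / n
            for i in range(1, n + 1)]

w = roc_weights(4)   # e.g., four ranked animal health threats
```

The weights sum to one and strictly decrease with rank, so they respect the expert's ordinal statement without demanding ratio-scale judgments.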

12.
Risk Analysis, 2018, 38(1):163-176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically‐based risk estimates based on a single statistical model selected from the scientific literature, called the “core” model. The uncertainty presented for “core” risk estimates reflects only the statistical uncertainty associated with that one model's concentration‐response function parameter estimate(s). However, epidemiologically‐based risk estimates are also subject to “model uncertainty,” which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long‐term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.

13.
Jan F. Van Impe, Risk Analysis, 2011, 31(8):1295-1307
The aim of quantitative microbiological risk assessment is to estimate the risk of illness caused by the presence of a pathogen in a food type and to study the impact of interventions. Because of inherent variability and uncertainty, risk assessments are generally conducted stochastically, and where possible it is advised to characterize variability separately from uncertainty. Sensitivity analysis indicates to which input variables the outcome of a quantitative microbiological risk assessment is most sensitive. Although a number of methods exist for applying sensitivity analysis to a risk assessment with probabilistic input variables (such as contamination, storage temperature, storage duration, etc.), it is challenging to perform sensitivity analysis when a risk assessment includes a separate characterization of the variability and uncertainty of input variables. A procedure is proposed that focuses on the relation between risk estimates obtained by Monte Carlo simulation and the location of pseudo-randomly sampled input variables within the uncertainty and variability distributions. Within this procedure, two methods, an ANOVA-like model and Sobol sensitivity indices, are used to obtain and compare the impact of the variability and the uncertainty of all input variables, as well as of model uncertainty and scenario uncertainty. As a case study, this methodology is applied to a risk assessment estimating the risk of contracting listeriosis from the consumption of deli meats.
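A first-order Sobol index can be estimated with a simple pick-freeze scheme: correlate the model output with a second output in which only the variable of interest is kept fixed. The toy model below (y = x1 + 2*x2, for which S1 = 0.2 analytically) is illustrative, not the listeriosis assessment.

```python
import numpy as np

# Pick-freeze Monte Carlo estimate of the first-order Sobol index S1 for
# y = x1 + 2*x2 with independent uniform inputs. Analytically,
# S1 = Var(x1) / (Var(x1) + 4*Var(x2)) = 1/5 = 0.2.

rng = np.random.default_rng(0)
n = 200_000
x1, x2, x2b = (rng.uniform(size=n) for _ in range(3))

y = x1 + 2 * x2      # original sample
y1 = x1 + 2 * x2b    # x1 "frozen", x2 independently resampled

s1 = np.cov(y, y1)[0, 1] / np.var(y, ddof=1)
```

The same scheme works per-variable and can be applied separately within the uncertainty and variability dimensions of a two-dimensional Monte Carlo simulation.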

14.
Risk Analysis, 2018, 38(2):410-424
This article proposes a rigorous mathematical approach, named a reliability‐based capability approach (RCA), to quantify the societal impact of a hazard. The starting point of the RCA is a capability approach in which capabilities refer to the genuine opportunities open to individuals to achieve valuable doings and beings (such as being mobile and being sheltered) called functionings. Capabilities depend on what individuals have and what they can do with what they have. The article develops probabilistic predictive models that relate the value of each functioning to a set of easily predictable or measurable quantities (regressors) in the aftermath of a hazard. The predicted values of selected functionings for an individual collectively determine the impact of a hazard on his/her state of well‐being. The proposed RCA integrates the predictive models of functionings into a system reliability problem to determine the probability that the state of well‐being is acceptable, tolerable, or intolerable. Importance measures are defined to quantify the contribution of each functioning to the state of well‐being. The information from the importance measures can inform decisions on optimal allocation of limited resources for risk mitigation and management.

15.
Uncertainty analyses and the reporting of their results can be misinterpreted when these analyses are conditional on a set of assumptions generally intended to bring some conservatism into the decisions. In this paper, two cases of conditional uncertainty analysis are examined. The first case includes studies that result, for instance, in a family of risk curves representing percentiles of the probability distribution of the future frequency of exceeding specified consequence levels conditional on a set of hypotheses. The second case involves analyses that result in an interval of outcomes estimated on the basis of conservative assumptions. Both types of results are difficult to use because they are sometimes misinterpreted as if they represented the output of a full uncertainty analysis. In the first case, the percentiles shown on each risk curve may be taken at face value when in reality (in marginal terms) they are lower if the chosen hypotheses are conservative. In the second case, the fact that some segments of the resulting interval are highly unlikely, or that some more benign segments outside the range of results are quite possible, does not appear. Also, these results are difficult to compare to those of analyses of other risks, possibly competing for the same risk management resources, and the decision criteria have to be adapted to the conservatism of the hypotheses. This paper focuses on the first type (conditional risk curves) more than on the second, and the discussion is illustrated by the case of the performance assessment of the Waste Isolation Pilot Plant in New Mexico. For policy-making purposes, however, the problems of interpretation, comparison, and use of the results are similar.

16.
This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid-1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of “model risk” in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark that represents the state of knowledge of the authority responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks.
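The basic objects involved can be sketched in a few lines: an empirical Value-at-Risk quantile from a profit-and-loss sample, and a capital add-on taken relative to a benchmark quantile. The simulated data, the benchmark value, and the simple "difference" adjustment are illustrative assumptions, not the article's calibrated methodology.

```python
import numpy as np

# Empirical 99% VaR from a simulated daily P&L sample, plus a naive
# model-risk adjustment against a benchmark quantile. All numbers are
# illustrative; the article's adjustment is benchmark- and data-driven.

rng = np.random.default_rng(42)
pnl = rng.normal(0.0, 1.0, size=100_000)   # stand-in for daily P&L

var99 = -np.quantile(pnl, 0.01)            # loss quantile, reported positive
benchmark = 2.326                           # 99% normal quantile as benchmark
add_on = max(0.0, benchmark - var99)        # capital add-on if model understates
```

With a correctly specified model the add-on hovers near zero; a biased model would produce a systematically positive add-on, which is the quantity regulators would capitalize.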

17.
In recent years, various types of terrorist attacks have occurred, causing worldwide catastrophes. According to the Global Terrorism Database (GTD), among all attack tactics, bombing attacks happened most frequently, followed by armed assaults. In this article, a model for analyzing and forecasting the conditional probability of bombing attacks (CPBA) based on time-series methods is developed. In addition, intervention analysis is used to analyze the sudden increase in the time-series process. The results show that the CPBA increased dramatically at the end of 2011. During that time, the CPBA increased by 16.0% over a two-month period to reach its peak value, and remains 9.0% greater than the predicted level after the temporary effect gradually decays. By contrast, no significant fluctuation can be found in the conditional probability process of armed assault. It can be inferred that social unrest, such as America's troop withdrawal from Afghanistan and Iraq, could have led to the increase of the CPBA in Afghanistan, Iraq, and Pakistan. The integrated time-series and intervention model is used to forecast the monthly CPBA in 2014 and through 2064. The average relative error compared with the real data in 2014 is 3.5%. The model is also applied to the total number of attacks recorded by the GTD between 2004 and 2014.
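The quantity being modeled can be sketched directly: the monthly CPBA is the share of bombings among all attacks, and a naive moving average gives a baseline forecast. The monthly counts below are made up, and the article fits a proper time-series model with an intervention term rather than this simple average.

```python
# Conditional probability of bombing attacks (CPBA) per month, computed
# as bombings / all attacks, with a naive moving-average forecast.
# Counts are invented for illustration; the article uses GTD data.

bombings = [40, 42, 38, 55, 60, 58]
all_attacks = [100, 105, 95, 110, 115, 112]

cpba = [b / t for b, t in zip(bombings, all_attacks)]

def forecast_next(series, window=3):
    """Mean of the last `window` observations as a baseline forecast."""
    return sum(series[-window:]) / window

next_month = forecast_next(cpba)
```

An intervention model would add a step or pulse regressor at the suspected change point (end of 2011) on top of such a baseline.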

18.
The error estimate of Borgonovo's moment-independent importance index is considered, and it shows that the possible computational complexity of the index is mainly due to the estimation of the probability density function (PDF), because PDF estimation is an ill-posed problem with a quite slow convergence rate. This motivates computing Borgonovo's index by other means. To avoid the PDF estimate, the index, which is defined through the PDF, is first approximately represented in terms of the cumulative distribution function (CDF); the CDF estimate is well posed and its convergence rate is always faster than that of the PDF estimate. From this representation, a stable approach is proposed to compute the index with an adaptive procedure. Since a small-probability multidimensional integral needs to be computed in this procedure, a computational strategy named asymptotic space integration is introduced to reduce the high-dimensional integral to a one-dimensional one, which can then be evaluated by adaptive numerical integration in one dimension with an improved convergence rate. A comparison of the numerical error analysis of several examples shows that the proposed method is an effective approach to computing uncertainty importance measures.

19.
Probabilistic risk analysis (PRA) can be an effective tool to assess risks and uncertainties and to set priorities among safety policy options. Based on systems analysis and Bayesian probability, PRA has been applied to a wide range of cases, three of which are briefly presented here: the maintenance of the tiles of the space shuttle, the management of patient risk in anesthesia, and the choice of seismic provisions of building codes for the San Francisco Bay Area. In the quantification of a risk, a number of problems arise in the public sector where multiple stakeholders are involved. In this article, I describe different approaches to the treatments of uncertainties in risk analysis, their implications for risk ranking, and the role of risk analysis results in the context of a safety decision process. I also discuss the implications of adopting conservative hypotheses before proceeding to what is, in essence, a conditional uncertainty analysis, and I explore some implications of different levels of "conservatism" for the ranking of risk mitigation measures.

20.
Land subsidence risk assessment (LSRA) is a multi‐attribute decision analysis (MADA) problem and is often characterized by both quantitative and qualitative attributes with various types of uncertainty. Therefore, the problem needs to be modeled and analyzed using methods that can handle uncertainty. In this article, we propose an integrated assessment model based on the evidential reasoning (ER) algorithm and fuzzy set theory. The assessment model is structured as a hierarchical framework that regards land subsidence risk as a composite of two key factors: hazard and vulnerability. These factors can be described by a set of basic indicators defined by assessment grades with attributes for transforming both numerical data and subjective judgments into a belief structure. The factor‐level attributes of hazard and vulnerability are combined using the ER algorithm, which is based on the information from a belief structure calculated by the Dempster‐Shafer (D‐S) theory, and a distributed fuzzy belief structure calculated by fuzzy set theory. The results from the combined algorithms yield distributed assessment grade matrices. The application of the model to the Xixi‐Chengnan area, China, illustrates its usefulness and validity for LSRA. The model utilizes a combination of all types of evidence, including all assessment information—quantitative or qualitative, complete or incomplete, and precise or imprecise—to provide assessment grades that define risk assessment on the basis of hazard and vulnerability. The results will enable risk managers to apply different risk prevention measures and mitigation planning based on the calculated risk states.
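The Dempster-Shafer combination underlying evidential reasoning can be sketched for the simple case of masses on singleton grades plus the whole frame (ignorance). The grade labels and mass values below are illustrative, not the Xixi-Chengnan data, and the full ER algorithm uses a weighted variant of this rule.

```python
# Dempster's rule of combination for two basic probability assignments
# whose focal elements are singleton grades plus the full frame "T"
# (representing ignorance). Masses are illustrative placeholders.

def dempster_combine(m1, m2, frame="T"):
    """Combine two mass functions; conflict is renormalized away."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            if a == b or a == frame or b == frame:
                key = b if a == frame else a   # intersection of focal sets
                combined[key] = combined.get(key, 0.0) + wa * wb
            else:
                conflict += wa * wb            # empty intersection
    k = 1.0 - conflict
    return {key: w / k for key, w in combined.items()}

m_hazard = {"low": 0.5, "high": 0.3, "T": 0.2}
m_vulnerability = {"low": 0.6, "high": 0.2, "T": 0.2}
fused = dempster_combine(m_hazard, m_vulnerability)
```

The fused masses again sum to one, and any residual mass on the frame "T" quantifies remaining ignorance after combining the two pieces of evidence.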

