Similar Literature (20 results)
1.
A wide range of uncertainties is inevitably introduced when performing a safety assessment of engineering systems. The impact of all these uncertainties must be addressed if the analysis is to serve as a tool in the decision-making process. Uncertainties in the components of a model (input parameters or basic events) are propagated to quantify their impact on the final results. Several methods are available in the literature, namely, the method of moments, discrete probability analysis, Monte Carlo simulation, fuzzy arithmetic, and Dempster-Shafer theory. These methods differ both in how they characterize uncertainty at the component level and in how they propagate it to the system level, and each has desirable and undesirable features that make it more or less useful in different situations. In the probabilistic framework, which is the most widely used, uncertainty is characterized by a probability distribution. However, in situations in which one cannot specify (1) parameter values for input distributions, (2) precise probability distributions (shape), or (3) dependencies between input parameters, these methods have limitations and are found to be ineffective. To address some of these limitations, this article presents uncertainty analysis in the context of level-1 probabilistic safety assessment (PSA) based on a probability bounds (PB) approach. PB analysis combines probability theory and interval arithmetic to produce probability boxes (p-boxes), structures that allow comprehensive and rigorous propagation through calculation. A practical case study is also carried out with a code developed on the PB approach, and the results are compared with those of a two-phase Monte Carlo simulation.
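To make the probability-bounds idea concrete, here is a minimal sketch (not the authors' code; the failure-rate interval is assumed for illustration) that builds a p-box for an exponential time-to-failure whose rate is known only to lie in an interval. Because the exponential CDF is monotone in the rate, the two endpoint CDFs bound every admissible distribution:

```python
# Minimal p-box sketch: exponential failure-time CDF with an
# interval-valued rate parameter (illustrative values, not from the paper).
import numpy as np

lam_lo, lam_hi = 0.01, 0.05   # assumed interval for the failure rate [1/h]
t = np.linspace(0.0, 200.0, 401)

# F(t) = 1 - exp(-lam * t) is increasing in lam, so the interval
# endpoints give the lower and upper bounding CDFs of the p-box.
cdf_lower = 1.0 - np.exp(-lam_lo * t)
cdf_upper = 1.0 - np.exp(-lam_hi * t)

# Bounds on P(T <= 100 h): every distribution consistent with the
# interval assigns a probability inside [lower, upper].
i = np.searchsorted(t, 100.0)
print(f"P(T <= 100 h) in [{cdf_lower[i]:.3f}, {cdf_upper[i]:.3f}]")
```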

2.
Governments are responsible for making policy decisions, often in the face of severe uncertainty about the factors involved. Expert elicitation can be used to fill information gaps where data are not available, cannot be obtained, or where there is no time for a full-scale study and analysis. Various features of distributions for variables may be elicited, for example, the mean, standard deviation, and quantiles, but uncertainty about these values is not always recorded. Distributional and dependence assumptions often have to be made in models and although these are sometimes elicited from experts, modelers may also make assumptions for mathematical convenience (e.g., assuming independence between variables). Probability boxes (p-boxes) provide a flexible methodology to analyze elicited quantities without having to make assumptions about the distribution shape. If information about distribution shape(s) is available, p-boxes can provide bounds around the results given these possible input distributions. P-boxes can also be used to combine variables without making dependence assumptions. This article aims to illustrate how p-boxes may help to improve the representation of uncertainty for analyses based on elicited information. We focus on modeling elicited quantiles with nonparametric p-boxes, modeling elicited quantiles with parametric p-boxes where the elicited quantiles do not match the elicited distribution shape, and modeling elicited interval information.
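As an illustration of the nonparametric case, the sketch below (with hypothetical elicited quantiles, not values from the article) bounds the CDF using only the constraint that P(X <= x_i) = p_i at each elicited point:

```python
# Nonparametric p-box from elicited quantiles (hypothetical elicitation).
# If an expert states P(X <= x_i) = p_i, then for any x the CDF is bounded
# from below by the largest p_i with x_i <= x and from above by the
# smallest p_i with x_i >= x.
xs = [2.0, 5.0, 9.0, 14.0]        # elicited quantile values (assumed)
ps = [0.05, 0.50, 0.75, 0.95]     # corresponding cumulative probabilities

def cdf_bounds(x):
    """Return (lower, upper) bounds on P(X <= x) implied by the quantiles."""
    lower, upper = 0.0, 1.0
    for xi, pi in zip(xs, ps):
        if xi <= x:
            lower = max(lower, pi)
        if xi >= x:
            upper = min(upper, pi)
    return lower, upper

for x in (1.0, 6.0, 12.0, 20.0):
    lo, hi = cdf_bounds(x)
    print(f"x={x:5.1f}: P(X<=x) in [{lo:.2f}, {hi:.2f}]")
```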

3.
This article discusses how an analyst's or expert's beliefs about the credibility and quality of models can be assessed and incorporated into the uncertainty assessment of an unknown of interest. The proposed methodology is a specialization of the Bayesian framework for the assessment of model uncertainty presented in an earlier paper. This formalism treats models as sources of information in assessing the uncertainty of an unknown, and it allows the use of predictions from multiple models as well as experimental validation data about the models' performances. In this article, the methodology is extended to incorporate additional types of subjective information about a model, namely, its credibility and its applicability when used outside its intended domain of application. An example in the context of fire risk modeling is also provided.

4.
Safety systems are important components of high-consequence systems that are intended to prevent the unintended operation of the system and thus the potentially significant negative consequences that could result from such operation. This presentation investigates and illustrates formal procedures for assessing the uncertainty in the probability that a safety system will fail to operate as intended in an accident environment. Probability theory and evidence theory are introduced as possible mathematical structures for representing the epistemic uncertainty associated with the performance of safety systems, and a representation of this type is illustrated with a hypothetical safety system involving one weak link and one strong link exposed to a high-temperature fire environment. Topics considered include (1) the nature of diffuse uncertainty information involving a system and its environment, (2) the conversion of diffuse uncertainty information into the mathematical structures associated with probability theory and evidence theory, and (3) the propagation of these uncertainty structures through a model for a safety system to obtain representations, in the context of probability theory and evidence theory, of the uncertainty in the probability that the safety system will fail to operate as intended. The results suggest that evidence theory provides a potentially valuable representational tool for displaying the implications of significant epistemic uncertainty in inputs to complex analyses.
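A minimal sketch of the evidence-theory side of such a representation, with hypothetical interval-valued evidence about a failure probability (the focal elements and masses are invented for illustration):

```python
# Evidence-theory sketch: belief and plausibility that a safety-system
# failure probability lies below a threshold, given interval-valued
# evidence. Each focal element is an interval [a, b] with a basic
# probability assignment m; the masses sum to 1.
focal = [((0.00, 0.10), 0.5),
         ((0.05, 0.30), 0.3),
         ((0.20, 0.60), 0.2)]

def bel_pl(threshold):
    """Bel and Pl of the event {failure probability <= threshold}."""
    bel = sum(m for (a, b), m in focal if b <= threshold)  # subset of event
    pl  = sum(m for (a, b), m in focal if a <= threshold)  # intersects event
    return bel, pl

for thr in (0.1, 0.3, 0.6):
    bel, pl = bel_pl(thr)
    print(f"P_fail <= {thr}: Bel={bel:.2f}, Pl={pl:.2f}")
```

Belief and plausibility bracket the probability that any single distribution consistent with the evidence would assign to the event.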

5.
Land subsidence risk assessment (LSRA) is a multi-attribute decision analysis (MADA) problem and is often characterized by both quantitative and qualitative attributes with various types of uncertainty. Therefore, the problem needs to be modeled and analyzed using methods that can handle uncertainty. In this article, we propose an integrated assessment model based on the evidential reasoning (ER) algorithm and fuzzy set theory. The assessment model is structured as a hierarchical framework that regards land subsidence risk as a composite of two key factors: hazard and vulnerability. These factors can be described by a set of basic indicators defined by assessment grades with attributes for transforming both numerical data and subjective judgments into a belief structure. The factor-level attributes of hazard and vulnerability are combined using the ER algorithm, which is based on the information from a belief structure calculated by the Dempster-Shafer (D-S) theory, and a distributed fuzzy belief structure calculated by fuzzy set theory. The results from the combined algorithms yield distributed assessment grade matrices. The application of the model to the Xixi-Chengnan area, China, illustrates its usefulness and validity for LSRA. The model utilizes a combination of all types of evidence, including all assessment information—quantitative or qualitative, complete or incomplete, and precise or imprecise—to provide assessment grades that define risk assessment on the basis of hazard and vulnerability. The results will enable risk managers to apply different risk prevention measures and mitigation planning based on the calculated risk states.
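The combination step can be illustrated with a bare-bones version of Dempster's rule over a set of assessment grades (the belief structures below are hypothetical; the ER algorithm used in the article is a refinement of this classical rule):

```python
# Sketch of Dempster's rule, combining two belief structures over
# discrete assessment grades (masses are hypothetical).
from itertools import product

# Masses over subsets of the grade set {low, medium, high}; frozensets
# allow set intersection. The full set carries residual ignorance.
m1 = {frozenset({"low"}): 0.6, frozenset({"low", "medium"}): 0.3,
      frozenset({"low", "medium", "high"}): 0.1}
m2 = {frozenset({"medium"}): 0.5, frozenset({"low"}): 0.3,
      frozenset({"low", "medium", "high"}): 0.2}

combined, conflict = {}, 0.0
for (a, ma), (b, mb) in product(m1.items(), m2.items()):
    inter = a & b
    if inter:
        combined[inter] = combined.get(inter, 0.0) + ma * mb
    else:
        conflict += ma * mb           # mass assigned to the empty set

# Normalize by the non-conflicting mass (classical Dempster rule).
combined = {s: v / (1.0 - conflict) for s, v in combined.items()}
print(f"conflict K = {conflict:.3f}")
for s, v in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(set(s), round(v, 3))
```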

6.
Proposed applications of increasingly sophisticated biologically-based computational models, such as physiologically-based pharmacokinetic models, raise the issue of how to evaluate whether the models are adequate for proposed uses, including safety or risk assessment. A six-step process for model evaluation is described. It relies on multidisciplinary expertise to address the biological, toxicological, mathematical, statistical, and risk assessment aspects of the modeling and its application. The first step is to have a clear definition of the purpose(s) of the model in the particular assessment; this provides critical perspectives on all subsequent steps. The second step is to evaluate the biological characterization described by the model structure based on the intended uses of the model and available information on the compound being modeled or related compounds. The next two steps review the mathematical equations used to describe the biology and their implementation in an appropriate computer program. At this point, the values selected for the model parameters (i.e., model calibration) must be evaluated. Thus, the fifth step is a combination of evaluating the model parameterization and calibration against data and evaluating the uncertainty in the model outputs. The final step is to evaluate specialized analyses that were done using the model, such as modeling of population distributions of parameters leading to population estimates for model outcomes or inclusion of early pharmacodynamic events. The process also helps to define the kinds of documentation that would be needed for a model to facilitate its evaluation and implementation.

7.
A simple and useful characterization of many predictive models is in terms of model structure and model parameters. Accordingly, uncertainties in model predictions arise from uncertainties in the values assumed by the model parameters (parameter uncertainty) and from the uncertainties and errors associated with the structure of the model (model uncertainty). When assessing uncertainty, one is interested in identifying, at some level of confidence, the range of possible and then probable values of the unknown of interest. All sources of uncertainty and variability need to be considered. Although parameter uncertainty assessment has been extensively discussed in the literature, model uncertainty is a relatively new topic of discussion in the scientific community, despite often being the major contributor to the overall uncertainty. This article describes a Bayesian methodology for the assessment of model uncertainties, in which models are treated as sources of information on the unknown of interest. The general framework is then specialized for the case where models provide point estimates about a single-valued unknown and where information about the models is available in the form of homogeneous and nonhomogeneous performance data (pairs of experimental observations and model predictions). Several example applications for physical models used in fire risk analysis are also provided.
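A stylized instance of this idea, assuming each model's prediction equals the unknown times a lognormal error with known log-scale standard deviation (a simplification of the article's framework; all numbers are invented):

```python
# Stylized Bayesian fusion of point predictions from several models,
# assuming each model i reports y_i = x * e_i with lognormal error e_i of
# known log-std sigma_i (illustrative; the article's framework is richer).
import math

predictions = [120.0, 180.0, 95.0]   # model point estimates (hypothetical)
sigmas      = [0.30, 0.50, 0.40]     # log-scale error std dev per model

# With a flat prior on log(x), the posterior of log(x) is normal with
# precision-weighted mean of the log predictions.
weights = [1.0 / s**2 for s in sigmas]
mu_post = sum(w * math.log(y) for w, y in zip(weights, predictions)) / sum(weights)
sd_post = math.sqrt(1.0 / sum(weights))

print(f"posterior median of x : {math.exp(mu_post):.1f}")
print(f"95% interval          : "
      f"[{math.exp(mu_post - 1.96*sd_post):.1f}, "
      f"{math.exp(mu_post + 1.96*sd_post):.1f}]")
```

The precision weighting means that models with better demonstrated performance (smaller error spread) dominate the fused estimate.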

8.
Jan F. Van Impe, Risk Analysis, 2011, 31(8): 1295-1307.
The aim of quantitative microbiological risk assessment (QMRA) is to estimate the risk of illness caused by the presence of a pathogen in a food type and to study the impact of interventions. Because of inherent variability and uncertainty, risk assessments are generally conducted stochastically, and where possible it is advised to characterize variability separately from uncertainty. Sensitivity analysis identifies the input variables to which the outcome of a QMRA is most sensitive. Although a number of methods exist for applying sensitivity analysis to a risk assessment with probabilistic input variables (such as contamination, storage temperature, storage duration, etc.), it is challenging to perform sensitivity analysis when a risk assessment includes a separate characterization of the variability and uncertainty of input variables. A procedure is proposed that focuses on the relation between risk estimates obtained by Monte Carlo simulation and the location of pseudo-randomly sampled input variables within the uncertainty and variability distributions. Within this procedure, two methods are used—an ANOVA-like model and Sobol sensitivity indices—to obtain and compare the impact of the variability and the uncertainty of all input variables, and of model uncertainty and scenario uncertainty. As a case study, this methodology is applied to a risk assessment estimating the risk of contracting listeriosis from the consumption of deli meats.
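For readers unfamiliar with Sobol indices, the following toy sketch (not the listeriosis model) estimates a first-order index by brute-force double-loop Monte Carlo, i.e., the variance of the conditional expectation divided by the total variance:

```python
# First-order Sobol index by double-loop Monte Carlo on a toy model.
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    # Toy model standing in for the QMRA model.
    return x1 + x2**2

n_outer, n_inner = 2000, 200
x1 = rng.normal(0.0, 1.0, n_outer)

# E[Y | X1 = x1] estimated by averaging over fresh draws of X2.
cond_mean = np.array([
    model(v, rng.uniform(0.0, 1.0, n_inner)).mean() for v in x1
])

# Total variance from an independent crude Monte Carlo run.
y = model(rng.normal(0.0, 1.0, 200_000), rng.uniform(0.0, 1.0, 200_000))
s1 = cond_mean.var() / y.var()
print(f"first-order Sobol index of X1 ~= {s1:.2f}")
```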

9.
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context.
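A compact sketch of hybrid propagation for a two-event OR gate, with one probabilistic and one possibilistic basic event (the distributions and numbers are illustrative assumptions, not the article's case study):

```python
# Hybrid Monte Carlo / fuzzy-interval sketch for a two-event fault tree
# (OR gate): p1 is probabilistic, p2 possibilistic with a triangular
# possibility distribution. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_mc = 1000
alphas = np.linspace(0.0, 1.0, 21)

# Triangular possibility for p2: core 0.02, support [0.005, 0.05].
lo0, core, hi0 = 0.005, 0.02, 0.05

def alpha_cut(a):
    """Interval of p2 values with possibility >= a."""
    return lo0 + a * (core - lo0), hi0 - a * (hi0 - core)

p1 = rng.beta(2.0, 50.0, n_mc)          # probabilistic basic event
top_lo = np.empty((n_mc, alphas.size))
top_hi = np.empty((n_mc, alphas.size))

for j, a in enumerate(alphas):
    p2_lo, p2_hi = alpha_cut(a)
    # OR-gate top event 1-(1-p1)(1-p2) is increasing in p2, so the
    # alpha-cut endpoints map directly to output-interval endpoints.
    top_lo[:, j] = 1.0 - (1.0 - p1) * (1.0 - p2_lo)
    top_hi[:, j] = 1.0 - (1.0 - p1) * (1.0 - p2_hi)

# Event of interest: top-event probability <= threshold. Averaging the
# interval indicators over alpha levels and Monte Carlo samples gives
# belief (pessimistic) and plausibility (optimistic) bounds.
thr = 0.05
pl = (top_lo <= thr).mean()
bel = (top_hi <= thr).mean()
print(f"Bel = {bel:.3f} <= P(top <= {thr}) <= Pl = {pl:.3f}")
```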

10.
The aging domestic oil production infrastructure represents a high risk to the environment because of the type of fluids being handled (oil and brine) and the potential for accidental release of these fluids into sensitive ecosystems. Currently, no quantitative risk model is directly applicable to onshore oil exploration and production (E&P) facilities. We report on a probabilistic reliability model created for such facilities. Reliability theory, failure modes and effects analysis (FMEA), and event trees were used to develop model estimates of the failure probability of typical oil production equipment. Monte Carlo simulation was used to translate uncertainty in input parameter values into uncertainty in the model output. The predicted failure rates were calibrated to available failure rate information by adjusting the parameters of the probability density functions used as random variates in the Monte Carlo simulations; the mean and standard deviation of the normal distributions from which the Weibull characteristic life was drawn served as the adjustable calibration parameters. The model was applied to oil production leases in the Tallgrass Prairie Preserve, Oklahoma. We present the estimated failure probability due to the combination of the most significant failure modes associated with each type of equipment (pumps, tanks, and pipes). The results show that the estimated probability of failure for tanks is about the same as that for pipes, but that pumps have a much lower failure probability. The model can provide the equipment reliability information needed for proactive risk management at the lease level, supplying quantitative grounds for allocating maintenance resources to high-risk equipment so as to minimize both lost production and ecosystem damage.
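A minimal sketch of the sampling scheme described (all parameter values are assumed for illustration): the Weibull characteristic life is itself drawn from a normal distribution whose mean and standard deviation play the role of the calibration knobs:

```python
# Monte Carlo reliability sketch in the spirit of the described model:
# the Weibull characteristic life eta is drawn from a normal distribution
# whose mean/std are the calibration parameters (values assumed).
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
beta = 1.8                      # Weibull shape (assumed)
eta = rng.normal(12.0, 2.0, n)  # characteristic life in years (assumed)
eta = np.clip(eta, 0.1, None)   # guard against non-physical draws

horizon = 5.0                   # years of operation
# Weibull CDF gives the probability of failure before the horizon.
p_fail = 1.0 - np.exp(-(horizon / eta) ** beta)

print(f"mean failure probability over {horizon:.0f} y : {p_fail.mean():.3f}")
print(f"90% interval: [{np.percentile(p_fail, 5):.3f}, "
      f"{np.percentile(p_fail, 95):.3f}]")
```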

11.
The treatment of uncertainties associated with modeling and risk assessment has recently attracted significant attention. The methodology and guidance for dealing with parameter uncertainty have been fairly well developed, and quantitative tools such as Monte Carlo modeling are often recommended. However, the issue of model uncertainty is still rarely addressed in practical applications of risk assessment. The use of several alternative models to derive a range of model outputs or risks is one of the few available techniques. This article addresses the often-overlooked issue of what we call "modeler uncertainty," i.e., differences in problem formulation, model implementation, and parameter selection originating from subjective interpretation of the problem at hand. This study uses results from the Fruit Working Group, which was created under the International Atomic Energy Agency (IAEA) BIOMASS program (BIOsphere Modeling and ASSessment). The model-model and model-data intercomparisons reviewed in this study were conducted by the working group for three different scenarios. The greatest uncertainty was found to result from the modelers' interpretation of the scenarios and the approximations they made. In scenarios that were unclear to modelers, the initial differences in model predictions were as high as seven orders of magnitude. Only after several meetings and discussions about specific assumptions did the predictions of the various models converge. Our study shows that parameter uncertainty (as evaluated by a probabilistic Monte Carlo assessment) may have contributed more than one order of magnitude to the overall modeling uncertainty. The final model predictions ranged over one to three orders of magnitude, depending on the specific scenario. This study illustrates the importance of problem formulation and of implementing an analytic-deliberative process in risk characterization.

12.
This paper examines cognitive considerations in developing model management systems (MMSs). First, two approaches to MMS design are reviewed briefly: one based on database theory and one based on knowledge-representation techniques. Then three major cognitive issues—human limitations, information storage and retrieval, and problem-solving strategies—and their implications for MMS design are discussed. Evidence indicates that automatic modeling, which generates more complicated models by integrating existing models automatically, is a critical function of model management systems. In order to discuss issues pertinent to automatic modeling, a graph-based framework for integrating models is introduced. The framework captures some aspects of the processes by which human beings develop models as route selections on a network of all possible alternatives. Based on this framework, three issues are investigated: (1) What are proper criteria for evaluating a model formulated by an MMS? (2) If more than one criterion is chosen for evaluation, how can evaluations on each of the criteria be combined to get an overall evaluation of the model? (3) When should a model be evaluated? Finally, examples are presented to illustrate various modeling strategies.

13.
The last few decades have seen increasingly widespread use of risk assessment and management techniques as aids in making complex decisions. However, despite the progress that has been made in risk science, there remain numerous examples of risk-based decisions and conclusions that have caused great controversy. In particular, there is a great deal of debate surrounding risk assessment: the role of values, ethics, and other extra-scientific factors; the efficacy of quantitative versus qualitative analysis; and the role of uncertainty and incomplete information. Many of the epistemological and methodological issues confronting risk assessment have been explored in general systems theory, where techniques exist to manage such issues. However, the use of systems theory and systems analysis tools is still not widespread in risk management. This article builds on the Alachlor risk assessment case study of Brunk, Haworth, and Lee to present a systems-based view of the risk assessment process. The details of the case study are reviewed and the authors' original conclusions regarding the effects of extra-scientific factors on risk assessment are discussed. Concepts from systems theory are introduced to provide a mechanism with which to illustrate these extra-scientific effects. The role of a systems study within a risk assessment is explained, resulting in an improved view of the problem formulation process. The consequences regarding the definition of risk and its role in decision making are then explored.

14.
The aim of this article is to investigate some implications of complexity in workplace risk assessment. The workplace is examined as a complex system, and some of its attributes and aspects of its behavior are investigated. The failure probability of various workplace elements is examined as a time variable, and interference phenomena among these probabilities are presented. Potential inefficiencies of common perceptions in applying probabilistic risk assessment models are also discussed. This investigation is conducted through mathematical modeling and qualitative examples of workplace situations. A mathematical model is developed for simulating the evolution of workplace accident probability over time; its findings are then translated into real-world terms and discussed through simple examples of workplace situations. The mathematical model indicates that the workplace is likely to exhibit unpredictable behavior, which raises issues about usual key assumptions for the workplace, such as aggregation. Chaotic phenomena (nonlinear feedback mechanisms) are also investigated in simple workplace systems. The main conclusions are (1) that time is an important variable for risk assessment, since behavior patterns are complex and unpredictable in the long term, and (2) that workplace risk identification should take a holistic view (not proceed work post by work post).
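A toy recursion in the spirit of the article's argument (the map and parameters are illustrative only, not the authors' model) shows how nonlinear feedback on the risk level makes long-term behavior unpredictable:

```python
# Toy nonlinear-feedback recursion for a time-varying accident
# probability, illustrating sensitivity to initial conditions.
r = 3.9                        # feedback strength in the chaotic regime

def trajectory(p0, steps=40):
    p = p0
    for _ in range(steps):
        p = r * p * (1.0 - p)  # logistic-type feedback on the risk level
    return p

p_a, p_b = 0.200, 0.201        # nearly identical starting risk levels
print(f"after 40 steps: {trajectory(p_a):.4f} vs {trajectory(p_b):.4f}")
```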

15.
Model uncertainty is a primary source of uncertainty in the assessment of the performance of repositories for the disposal of nuclear wastes, due to the complexity of the system and the large spatial and temporal scales involved. This work considers multiple assumptions on the system behavior and corresponding alternative plausible modeling hypotheses. To characterize the uncertainty in the correctness of the different hypotheses, the opinions of different experts are treated probabilistically or, alternatively, by the belief and plausibility functions of Dempster-Shafer theory. A comparison is made with reference to a flow model for the evaluation of the hydraulic head distributions present at a radioactive waste repository site. Three experts are assumed available for the evaluation of the uncertainties associated with the hydrogeological properties of the repository and the groundwater flow mechanisms.

16.
Topics in Microbial Risk Assessment: Dynamic Flow Tree Process
Microbial risk assessment is emerging as a new discipline in risk assessment. A systematic approach to microbial risk assessment is presented that employs data analysis for developing parsimonious models and accounts formally for the variability and uncertainty of model inputs using analysis of variance and Monte Carlo simulation. The purpose of the paper is to raise and examine issues in conducting microbial risk assessments. The enteric pathogen Escherichia coli O157:H7 was selected as an example for this study due to its significance to public health. The framework for our work is consistent with the risk assessment components described by the National Research Council in 1983 (hazard identification, exposure assessment, dose-response assessment, and risk characterization). Exposure assessment focuses on hamburgers cooked to a range of temperatures from rare to well done, the latter typical of fast food restaurants. Features of the model include predictive microbiology components that account for stochastic growth and death of organisms in hamburger. For dose-response modeling, Shigella data from human feeding studies were used as a surrogate for E. coli O157:H7. Risks were calculated using a threshold model and an alternative nonthreshold model. The 95% probability intervals for the risk of illness from product cooked to a given internal temperature spanned five orders of magnitude for these models. The existence of even a small threshold has a dramatic impact on the estimated risk.
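The threshold effect mentioned in the final sentence can be illustrated with a simple dose-response sketch; the beta-Poisson parameters below are invented, not the fitted Shigella surrogate values used in the study:

```python
# Dose-response sketch: a nonthreshold beta-Poisson model next to a
# simple threshold variant (parameters are illustrative only).
alpha, beta = 0.2, 40.0        # beta-Poisson parameters (assumed)

def beta_poisson(dose):
    """P(illness) with no threshold: any dose carries some risk."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def threshold_model(dose, t=10.0):
    """Same curve, but zero risk below a minimum infectious dose t."""
    return 0.0 if dose < t else beta_poisson(dose)

for d in (1.0, 5.0, 50.0, 500.0):
    print(f"dose {d:6.1f}: nonthreshold {beta_poisson(d):.4f}, "
          f"threshold {threshold_model(d):.4f}")
```

At low doses the two models disagree completely (zero versus small but nonzero risk), which is why even a small threshold changes the estimated risk dramatically.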

17.
The vulnerability of human beings exposed to a catastrophic disaster is affected by multiple factors, including hazard intensity, environment, and individual characteristics. The traditional approach to vulnerability assessment, based on the aggregate-area method and unsupervised learning, cannot incorporate spatial information; thus, vulnerability can only be roughly assessed. In this article, we propose Bayesian network (BN) and spatial analysis techniques to mine spatial data sets for evaluating the vulnerability of human beings. In our approach, spatial analysis is leveraged to preprocess the data; for example, kernel density analysis (KDA) and accumulative road cost surface modeling (ARCSM) are employed to quantify the influence of geofeatures on vulnerability and relate this influence to spatial distance. The knowledge- and data-based BN provides a consistent platform for integrating a variety of factors, including those extracted by KDA and ARCSM, to model vulnerability uncertainty. We also consider model uncertainty, using Bayesian model averaging and Occam's window to average the multiple models obtained by our approach for robust prediction of risk and vulnerability. We compare our approach with other probabilistic models in a case study of seismic risk and conclude that it is a good means of mining spatial data sets for evaluating vulnerability.
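A bare-bones sketch of Bayesian model averaging with Occam's window (the model posteriors and vulnerability predictions are hypothetical): models far less probable than the best one are discarded before the weighted average is taken:

```python
# Bayesian model averaging with Occam's window (hypothetical numbers).
posts = {"BN-A": 0.46, "BN-B": 0.31, "BN-C": 0.22, "BN-D": 0.01}  # model posteriors
preds = {"BN-A": 0.62, "BN-B": 0.58, "BN-C": 0.71, "BN-D": 0.20}  # vulnerability scores

cutoff = 1.0 / 20.0                        # Occam's window ratio
best = max(posts.values())
kept = {m: p for m, p in posts.items() if p / best >= cutoff}

# Renormalize the surviving posteriors and average the predictions.
total = sum(kept.values())
bma = sum((p / total) * preds[m] for m, p in kept.items())
print(f"models kept: {sorted(kept)}; averaged vulnerability = {bma:.3f}")
```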

18.
Richard A. Canady, Risk Analysis, 2010, 30(11): 1663-1670.
A September 2008 workshop sponsored by the Society for Risk Analysis on risk assessment methods for nanoscale materials explored "nanotoxicology" in risk assessment. A general conclusion of the workshop was that, while research indicates that some nanoscale materials are toxic, the information presented at the workshop does not indicate the need for a conceptually different approach to risk assessment for nanoscale materials compared to other materials. However, the toxicology discussions did identify areas of uncertainty that present a challenge for the assessment of nanoscale materials. These areas include novel metrics, the characterization of multivariate dynamic mixtures, the identification of toxicologically relevant properties and "impurities" for nanoscale characteristics, and the characterization of persistence, toxicokinetics, and weight of evidence in light of the dynamic nature of the mixtures. The discussion also considered "nanomaterial uncertainty factors" for health risk values such as the Environmental Protection Agency's reference dose (RfD). Consistent with the general view on risk assessment, participants expressed that completing a toxicity data set, or extrapolating between species, sensitive individuals, or durations of exposure, are not qualitatively different considerations for nanoscale materials than for chemicals in general; therefore, a blanket "nanomaterial uncertainty factor" for all nanomaterials does not seem appropriate. However, the quantitative challenges may require new methods and approaches to integrate the information and the uncertainty.

19.
Timely warning communication and decision making are critical for reducing harm from flash flooding. To help understand and improve extreme weather risk communication and management, this study uses a mental models research approach to investigate the flash flood warning system and its risk decision context. Data were collected in the Boulder, Colorado area from mental models interviews with forecasters, public officials, and media broadcasters, who each make important interacting decisions in the warning system, and from a group modeling session with forecasters. Analysis of the data informed development of a decision-focused model of the flash flood warning system that integrates the professionals' perspectives. Comparative analysis of individual and group data with this model characterizes how these professionals conceptualize flash flood risks and associated uncertainty; create and disseminate flash flood warning information; and perceive how warning information is (and should be) used in their own and others' decisions. The analysis indicates that warning system functioning would benefit from professionals developing a clearer, shared understanding of flash flood risks and the warning system, across their areas of expertise and job roles. Given the challenges in risk communication and decision making for complex, rapidly evolving hazards such as flash floods, another priority is development of improved warning content to help members of the public protect themselves when needed. Also important is professional communication with members of the public about allocation of responsibilities for managing flash flood risks, as well as improved system-wide management of uncertainty in decisions.

20.
Recent work in the assessment of risk in maritime transportation systems has used simulation-based probabilistic risk assessment techniques. In the Prince William Sound and Washington State Ferries risk assessments, the studies' recommendations were backed by estimates of their impact made using such techniques, and all recommendations were implemented. However, the level of uncertainty in these estimates was not available, leaving decisionmakers unsure whether the evidence was sufficient to assess specific risks and benefits. The first step toward assessing the impact of uncertainty in maritime risk assessments is to model the uncertainty in the simulation models used. In this article, a study of the impact of proposed ferry service expansions in San Francisco Bay is used as a case study to demonstrate the use of Bayesian simulation techniques to propagate uncertainty throughout the analysis. The conclusions drawn in the original study are shown, in this case, to be robust to the inherent uncertainties. The main intellectual merit of this work is the development of a Bayesian simulation technique for modeling uncertainty in the assessment of maritime risk; until now, Bayesian simulations have been implemented only as theoretical demonstrations. Their use in a large, complex system may be considered state of the art in the field of computational sciences.
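The essence of such a Bayesian simulation can be sketched in a few lines (the incident counts and prior are hypothetical, not the ferry-study data): parameter uncertainty is propagated by drawing the rate from its posterior before each simulation replication:

```python
# Bayesian-simulation sketch: uncertainty about an incident rate is
# carried through the simulation by drawing the rate from its posterior
# (Gamma, conjugate to Poisson counts) before each replication.
import numpy as np

rng = np.random.default_rng(3)

obs_incidents, obs_years = 6, 10          # hypothetical historical record
a_post, b_post = 1.0 + obs_incidents, 1.0 + obs_years  # Gamma(1,1) prior

n_rep = 20_000
lam = rng.gamma(a_post, 1.0 / b_post, n_rep)   # posterior rate draws
incidents = rng.poisson(lam * 5.0)             # simulate the next 5 years

print(f"expected incidents in 5 y: {incidents.mean():.2f}")
print(f"P(zero incidents)        : {(incidents == 0).mean():.3f}")
```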
