Similar Documents
1.
The authors of this article have developed six probabilistic causal models for critical risks in tunnel works. The details of the models' development and evaluation were reported in two earlier publications of this journal. Accordingly, as a remaining step, this article investigates the use of these models in a real case study project. Using the models is challenging given the need to provide information on risks that are usually both project and context dependent. The latter is of particular concern in underground construction projects. Tunnel risks are the consequences of interactions between site‐ and project‐specific factors. Large variations and uncertainties in ground conditions, as well as project singularities, give rise to particular risk factors with very specific impacts. These circumstances mean that existing risk information, gathered from previous projects, is extremely difficult to use in other projects. This article considers these issues and addresses the extent to which prior risk‐related knowledge, in the form of causal models such as those developed for this investigation, can provide useful risk information for the case study project. In this article, risk‐related knowledge is taken to mean the identification and characterization of the causes and conditions that lead to failures, their interactions, and the associated probabilistic information. It is shown that, irrespective of existing constraints on using information and knowledge from past experiences, construction risk‐related knowledge can be transferred from project to project in the form of comprehensive models based on probabilistic‐causal relationships. The article also shows that the developed models provide guidance on the use of specific remedial measures by identifying critical risk factors, and therefore support risk management decisions. A number of limitations of the models are also discussed.

2.
Land subsidence risk assessment (LSRA) is a multi‐attribute decision analysis (MADA) problem and is often characterized by both quantitative and qualitative attributes with various types of uncertainty. Therefore, the problem needs to be modeled and analyzed using methods that can handle uncertainty. In this article, we propose an integrated assessment model based on the evidential reasoning (ER) algorithm and fuzzy set theory. The assessment model is structured as a hierarchical framework that regards land subsidence risk as a composite of two key factors: hazard and vulnerability. These factors can be described by a set of basic indicators defined by assessment grades with attributes for transforming both numerical data and subjective judgments into a belief structure. The factor‐level attributes of hazard and vulnerability are combined using the ER algorithm, which is based on the information from a belief structure calculated by the Dempster‐Shafer (D‐S) theory, and a distributed fuzzy belief structure calculated by fuzzy set theory. The results from the combined algorithms yield distributed assessment grade matrices. The application of the model to the Xixi‐Chengnan area, China, illustrates its usefulness and validity for LSRA. The model utilizes a combination of all types of evidence, including all assessment information—quantitative or qualitative, complete or incomplete, and precise or imprecise—to provide assessment grades that define risk assessment on the basis of hazard and vulnerability. The results will enable risk managers to apply different risk prevention measures and mitigation planning based on the calculated risk states.
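As an illustration of the Dempster‐Shafer combination step named in this abstract, here is a minimal stand‐alone sketch of Dempster's rule for fusing two bodies of evidence over assessment grades. The grade names and mass values are hypothetical, not taken from the article:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets of grades to
    masses) with Dempster's rule, normalizing out the conflicting mass."""
    combined = {}
    conflict = 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q  # mass that would go to the empty set
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

# Hypothetical assessment grades and basic probability assignments:
grades = frozenset({"low", "medium", "high"})
m_hazard = {frozenset({"low"}): 0.6, frozenset({"low", "medium"}): 0.4}
m_vuln = {frozenset({"low"}): 0.5, frozenset({"medium", "high"}): 0.3, grades: 0.2}
fused = dempster_combine(m_hazard, m_vuln)
```

Note that the real ER algorithm used in the article is a refinement of this rule (it handles weighted attributes and incomplete assessments); the sketch only shows the underlying evidence-combination idea.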

3.
Pesticide risk assessment for food products involves combining information from consumption and concentration data sets to estimate a distribution for the pesticide intake in a human population. Using this distribution one can obtain probabilities of individuals exceeding specified levels of pesticide intake. In this article, we present a probabilistic, Bayesian approach to modeling the daily consumptions of the pesticide Iprodione through multiple food products. Modeling data on food consumption and pesticide concentration poses a variety of problems, such as the large proportions of consumptions and concentrations that are recorded as zero, and correlation between the consumptions of different foods. We consider daily food consumption data from the Netherlands National Food Consumption Survey and concentration data collected by the Netherlands Ministry of Agriculture. We develop a multivariate latent‐Gaussian model for the consumption data that allows for correlated intakes between products. For the concentration data, we propose a univariate latent‐t model. We then combine predicted consumptions and concentrations from these models to obtain a distribution for individual daily Iprodione exposure. The latent‐variable models allow for both skewness and large numbers of zeros in the consumption and concentration data. The use of a probabilistic approach is intended to yield more robust estimates of high percentiles of the exposure distribution than an empirical approach. Bayesian inference is used to facilitate the treatment of data with a complex structure.
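The latent‐variable idea behind such models can be sketched for a single food with hypothetical parameters: a latent Gaussian draw below a threshold yields a recorded zero, and a draw above it yields a lognormal (skewed) consumption. This is only a one‐dimensional caricature of the article's multivariate model:

```python
import math
import random

def sample_consumptions(mu, sigma, threshold, n, seed=0):
    """Toy latent-Gaussian consumption model: a day's consumption is zero
    when the latent value falls below `threshold`, else exp(latent value)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        z = rng.gauss(mu, sigma)
        draws.append(math.exp(z) if z > threshold else 0.0)
    return draws

samples = sample_consumptions(mu=0.0, sigma=1.0, threshold=0.5, n=5000)
zero_share = sum(1 for s in samples if s == 0.0) / len(samples)
```

With these illustrative parameters roughly 69% of days are zeros (the normal CDF at the threshold), mimicking the large zero proportions the article describes.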

4.
In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time‐dependent critical risk level within a finite‐time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented.
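Where the explicit expressions the article derives are unavailable, the failure probability defined above — the chance that a risk process reaches a critical level within a finite horizon — can be approximated by simulation. A sketch with a hypothetical Gaussian‐increment risk process and a constant critical level (not one of the article's two models):

```python
import random

def failure_probability(n_paths, n_steps, drift, vol, level, seed=42):
    """Monte Carlo estimate of P(risk process crosses `level` within n_steps)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x += drift + vol * rng.gauss(0.0, 1.0)
            if x >= level:  # first passage over the critical risk level
                failures += 1
                break
    return failures / n_paths

p_low = failure_probability(2000, 50, drift=0.1, vol=1.0, level=5.0)
p_high = failure_probability(2000, 50, drift=0.1, vol=1.0, level=20.0)
```

As expected, raising the critical level lowers the estimated failure probability.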

5.
This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value‐at‐Risk) have been the cornerstone of banking risk management since the mid‐1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value‐at‐Risk model risk and compute the required regulatory capital add‐on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value‐at‐Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks.
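As a concrete anchor for the quantile estimates discussed above: a historical‐simulation Value‐at‐Risk is just an empirical quantile of the loss distribution, and a capital add‐on can be read as the shortfall of the model's quantile relative to a benchmark's. This sketch uses a hypothetical P&L series and benchmark, not the article's adjustment methodology:

```python
def historical_var(pnl, alpha=0.01):
    """Empirical 100*(1-alpha)% quantile of losses (loss = -P&L)."""
    losses = sorted(-x for x in pnl)
    idx = int((1.0 - alpha) * (len(losses) - 1))
    return losses[idx]

def model_risk_addon(model_var, benchmark_var):
    """Extra loss coverage the benchmark demands, floored at zero."""
    return max(0.0, benchmark_var - model_var)

# Hypothetical daily P&L: losses of 1..100 currency units.
pnl = [-float(i) for i in range(1, 101)]
var99 = historical_var(pnl, alpha=0.01)
```

Here `var99` is the 99th percentile of the 100 hypothetical losses, i.e. 99.0.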

6.
Risk Analysis, 2018, 38(6): 1258–1278
Although individual behavior plays a major role in community flood risk, traditional flood risk models generally do not capture information on how community policies and individual decisions impact the evolution of flood risk over time. The purpose of this study is to improve the understanding of the temporal aspects of flood risk through a combined analysis of the behavioral, engineering, and physical hazard aspects of flood risk. Additionally, the study aims to develop a new modeling approach for integrating behavior, policy, flood hazards, and engineering interventions. An agent‐based model (ABM) is used to analyze the influence of flood protection measures, individual behavior, and the occurrence of floods and near‐miss flood events on community flood risk. The ABM focuses on the following decisions and behaviors: dissemination of flood management information, installation of community flood protection, elevation of household mechanical equipment, and elevation of homes. The approach is place based, with a case study area in Fargo, North Dakota, but is focused on generalizable insights. Generally, community mitigation results in reduced future damage, and individual action, including mitigation and movement into and out of high‐risk areas, can have a significant influence on community flood risk. The results of this study provide useful insights into the interplay between individual and community actions and how it affects the evolution of flood risk. This study lends insight into priorities for future work, including the development of more in‐depth behavioral and decision rules at the individual and community level.
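A heavily stripped‐down sketch of such an agent‐based loop, with hypothetical probabilities and damage values (nothing here is calibrated to the Fargo case study): households elevate their homes with some probability after experiencing a flood or a near miss, and community damage accumulates over the simulated years:

```python
import random

def simulate_flood_risk(n_households=100, years=30, flood_prob=0.05,
                        near_miss_prob=0.10, adopt_prob=0.3,
                        damage_raw=10.0, damage_mitigated=2.0, seed=7):
    """Toy flood ABM: returns cumulative damage and mitigation uptake."""
    rng = random.Random(seed)
    mitigated = [False] * n_households
    total_damage = 0.0
    for _ in range(years):
        flood = rng.random() < flood_prob
        near_miss = (not flood) and rng.random() < near_miss_prob
        if flood:
            for m in mitigated:
                total_damage += damage_mitigated if m else damage_raw
        if flood or near_miss:  # experience triggers mitigation decisions
            for i in range(n_households):
                if not mitigated[i] and rng.random() < adopt_prob:
                    mitigated[i] = True
    return total_damage, sum(mitigated)

damage, adopters = simulate_flood_risk()
```

Even this toy version reproduces the qualitative point of the abstract: near‐miss events raise mitigation uptake, which reduces damage from later floods.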

7.
Management of contaminated sites is a critical environmental issue around the world due to the human health risk involved for many sites and scarcity of funding. Moreover, clean‐up costs of all contaminated sites to their background levels with existing engineering technologies may be financially infeasible and demand extended periods of operation time. Given these constraints, to achieve optimal utilization of available funds and prioritization of contaminated sites that need immediate attention, health‐risk‐based soil quality guidelines should be preferred over the traditional soil quality standards. For these reasons, traditional soil quality standards are being replaced by health‐risk‐based ones in many countries, including Turkey. The need for health‐risk‐based guidelines is clear, but developing these guidelines and implementing them in contaminated site management is not straightforward. The goal of this study is to highlight the problems that are encountered at various stages of the development process of risk‐based soil quality guidelines for Turkey and how they are dealt with. The use of different definitions and methodologies in different countries, the existence of inconsistent risk assessment tools, difficulties in accessing relevant documents and reports, and the lack of specific data required for Turkey are among these problems. We believe that Turkey's experience may help other countries that are planning to develop health‐risk‐based guidelines achieve their goals in a more efficient manner.

8.
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic‐possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility‐probability (probability‐possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context.
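One simple possibility‐to‐probability transformation (there are several in the literature; the article does not necessarily use this one) samples an α level uniformly and then samples uniformly within the α‐cut of the possibility distribution. A sketch for a triangular possibility distribution with core c and support [a, b]; the numbers are illustrative:

```python
import random

def alpha_cut(a, c, b, alpha):
    """Alpha-cut [lo, hi] of a triangular possibility distribution (a, c, b)."""
    return a + alpha * (c - a), b - alpha * (b - c)

def possibility_to_samples(a, c, b, n=20000, seed=0):
    """Sample a probability distribution consistent with the possibility
    distribution: draw a uniform alpha level, then draw uniformly
    within the corresponding alpha-cut."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        lo, hi = alpha_cut(a, c, b, rng.random())
        samples.append(rng.uniform(lo, hi))
    return samples

samples = possibility_to_samples(a=0.0, c=1.0, b=2.0)
```

For a symmetric triangle the sampled distribution is centered on the core, and every sample stays inside the support, as a consistent transformation requires.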

9.
Risk Analysis, 2018, 38(2): 410–424
This article proposes a rigorous mathematical approach, named a reliability‐based capability approach (RCA), to quantify the societal impact of a hazard. The starting point of the RCA is a capability approach in which capabilities refer to the genuine opportunities open to individuals to achieve valuable doings and beings (such as being mobile and being sheltered) called functionings. Capabilities depend on what individuals have and what they can do with what they have. The article develops probabilistic predictive models that relate the value of each functioning to a set of easily predictable or measurable quantities (regressors) in the aftermath of a hazard. The predicted values of selected functionings for an individual collectively determine the impact of a hazard on his/her state of well‐being. The proposed RCA integrates the predictive models of functionings into a system reliability problem to determine the probability that the state of well‐being is acceptable, tolerable, or intolerable. Importance measures are defined to quantify the contribution of each functioning to the state of well‐being. The information from the importance measures can inform decisions on optimal allocation of limited resources for risk mitigation and management.

10.
This article presents a flood risk analysis model that considers the spatially heterogeneous nature of flood events. The basic concept of this approach is to generate a large sample of flood events that can be regarded as temporal extrapolation of flood events. These are combined with cumulative flood impact indicators, such as building damages, to finally derive time series of damages for risk estimation. Therefore, a multivariate modeling procedure that is able to take into account the spatial characteristics of flooding, the regionalization method top‐kriging, and three different impact indicators are combined in a model chain. Eventually, the expected annual flood impact (e.g., expected annual damages) and the flood impact associated with a low probability of occurrence are determined for a study area. The risk model has the potential to augment the understanding of flood risk in a region and thereby contribute to enhanced risk management of, for example, risk analysts and policymakers or insurance companies. The modeling framework was successfully applied in a proof‐of‐concept exercise in Vorarlberg (Austria). The results of the case study show that risk analysis has to be based on spatially heterogeneous flood events in order to estimate flood risk adequately.

11.
Dose‐response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose‐response model parameters are estimated using limited epidemiological data is rarely quantified. Second‐order risk characterization approaches incorporating uncertainty in dose‐response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta‐Poisson dose‐response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta‐Poisson dose‐response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta‐Poisson dose‐response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta‐Poisson model are proposed, and simple algorithms to evaluate actual beta‐Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta‐Poisson dose‐response model parameters is attributable to the absence of low‐dose data. This region includes beta‐Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility.
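The two dose‐response forms named above have simple closed expressions. A sketch with hypothetical parameter values; the second function is the conventional approximation whose validity criteria the article examines, not the exact beta‐Poisson model:

```python
import math

def exponential_dr(dose, r):
    """Exponential dose-response: each organism independently
    initiates infection with probability r."""
    return 1.0 - math.exp(-r * dose)

def beta_poisson_approx(dose, alpha, beta):
    """Conventional approximation to the beta-Poisson model
    (generally cited as valid only when beta >> 1 and beta >> alpha)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

p_exp = exponential_dr(dose=100.0, r=0.005)
p_bp = beta_poisson_approx(dose=100.0, alpha=0.2, beta=50.0)
```

Both curves start at zero probability for zero dose and saturate toward one at high dose; the beta‐Poisson form rises more slowly because susceptibility is heterogeneous.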

12.
Terrorism could be treated as a hazard for design purposes. For instance, the terrorist hazard could be analyzed in a manner similar to the way that seismic hazard is handled. No matter how terrorism is dealt with in the design of systems, predictions of the frequency and magnitude of the hazard will be required. And, if the human‐induced hazard is to be designed for in a manner analogous to natural hazards, then the predictions should be probabilistic in nature. The model described in this article is a prototype model that used agent‐based modeling (ABM) to analyze terrorist attacks. The basic approach in this article of using ABM to model human‐induced hazards has been preliminarily validated in the sense that the attack magnitudes seem to be power‐law distributed and attacks occur mostly in regions where high levels of wealth pass through, such as transit routes and markets. The model developed in this study indicates that ABM is a viable approach to modeling socioeconomic‐based infrastructure systems for engineering design to deal with human‐induced hazards.

13.
Dose–response modeling of biological agents has traditionally focused on describing laboratory‐derived experimental data. Limited consideration has been given to understanding those factors that are controlled in a laboratory, but are likely to occur in real‐world scenarios. In this study, a probabilistic framework is developed that extends Brookmeyer's competing‐risks dose–response model to allow for variation in factors such as dose‐dispersion, dose‐deposition, and other within‐host parameters. With data sets drawn from dose–response experiments of inhalational anthrax, plague, and tularemia, we illustrate how for certain cases, there is the potential for overestimation of infection numbers arising from models that consider only the experimental data in isolation.

14.
Security risk management is essential for ensuring effective airport operations. This article introduces AbSRiM, a novel agent‐based modeling and simulation approach to perform security risk management for airport operations that uses formal sociotechnical models that include temporal and spatial aspects. The approach contains four main steps: scope selection, agent‐based model definition, risk assessment, and risk mitigation. The approach is based on traditional security risk management methodologies, but uses agent‐based modeling and Monte Carlo simulation at its core. Agent‐based modeling is used to model threat scenarios, and Monte Carlo simulations are then performed with this model to estimate security risks. The use of the AbSRiM approach is demonstrated with an illustrative case study. This case study includes a threat scenario in which an adversary attacks an airport terminal with an improvised explosive device. The approach provides a promising way to include important elements, such as human aspects and spatiotemporal aspects, in the assessment of risk. More research is still needed to better identify the strengths and weaknesses of the AbSRiM approach in different case studies, but results demonstrate the feasibility of the approach and its potential.

15.
Knowledge on failure events and their associated factors, gained from past construction projects, is regarded as potentially extremely useful in risk management. However, a number of circumstances are constraining its wider use. Such knowledge is usually scarce, seldom documented, and even unavailable when it is required. Further, there exists a lack of proven methods to integrate and analyze it in a cost‐effective way. This article addresses possible options to overcome these difficulties. Focusing on limited but critical potential failure events, the article demonstrates how knowledge on a number of important potential failure events in tunnel works can be integrated. The problem of unavailable or incomplete information was addressed by gathering judgments from a group of experts. The elicited expert knowledge consisted of failure scenarios and associated probabilistic information. This information was integrated using Bayesian belief‐networks‐based models that were first customized in order to deal with the expected divergence in judgments caused by epistemic uncertainty of risks. The work described in the article shows that the developed models that integrate risk‐related knowledge provide guidance as to the use of specific remedial measures.
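The kind of Bayesian‐belief‐network reasoning described above can be illustrated with a two‐parent toy network; all node names and probabilities here are hypothetical, not the elicited values. Inference by enumeration gives the marginal failure probability, and Bayes's rule gives the diagnostic posterior that supports targeting remedial measures:

```python
# Toy network: GroundQuality -> Failure <- RemedialMeasure
p_poor = 0.3          # P(ground = poor)
p_measure = 0.5       # P(remedial measure applied)
p_fail = {            # conditional probability table P(failure | ground, measure)
    ("poor", True): 0.10, ("poor", False): 0.40,
    ("good", True): 0.01, ("good", False): 0.05,
}

def marginal_failure():
    """Sum out both parents to get P(failure)."""
    total = 0.0
    for g, pg in (("poor", p_poor), ("good", 1.0 - p_poor)):
        for m, pm in ((True, p_measure), (False, 1.0 - p_measure)):
            total += pg * pm * p_fail[(g, m)]
    return total

def posterior_poor_given_failure():
    """Diagnostic query P(ground = poor | failure) via Bayes's rule."""
    joint = sum(p_poor * pm * p_fail[("poor", m)]
                for m, pm in ((True, p_measure), (False, 1.0 - p_measure)))
    return joint / marginal_failure()
```

With these illustrative numbers, observing a failure raises the probability of poor ground from 0.30 to about 0.78, the sort of shift that points risk managers toward ground‐related remedial measures.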

16.
This article proposes an intertemporal risk‐value (IRV) model that integrates probability‐time tradeoff, time‐value tradeoff, and risk‐value tradeoff into one unified framework. We obtain a general probability‐time tradeoff, which yields a formal representation form to reflect the psychological distance of a decisionmaker in evaluating a temporal lottery. This intuition of probability‐time tradeoff is supported by robust empirical findings as well as by psychological theory. Through an explicit formalization of probability‐time tradeoff, an IRV model taking into account three fundamental dimensions, namely, value, probability, and time, is established. The object of evaluation in our framework is a complex lottery. We also give some insights into the structure of the IRV model using a wildcatter problem.

17.
The purpose of this article is to provide a risk‐based predictive model to assess the impact of false mussel Mytilopsis sallei invasions on hard clam Meretrix lusoria farms in the southwestern region of Taiwan. The actual spread of invasive false mussel was predicted by using analytical models based on advection‐diffusion and gravity models. The proportion of hard clam colonized and infestation by false mussel were used to characterize risk estimates. A mortality model was parameterized to assess hard clam mortality risk characterized by false mussel density and infestation intensity. The published data were reanalyzed to parameterize a predictive threshold model described by a cumulative Weibull distribution function that can be used to estimate the exceeding thresholds of proportion of hard clam colonized and infestation. Results indicated that the infestation thresholds were 2–17 ind clam−1 for adult hard clams, whereas the threshold was 4 ind clam−1 for nursery hard clams. The average colonization thresholds were estimated to be 81–89% for cultivated and nursery hard clam farms, respectively. Our results indicated that the false mussel density and infestation intensity that caused 50% hard clam mortality were estimated to be 2,812 ind m−2 and 31 ind clam−1, respectively. This study further indicated that hard clam farms that are close to the coastal area have at least a 50% probability of 43% mortality caused by infestation. This study highlighted that a probabilistic risk‐based framework characterized by probability distributions and risk curves is an effective representation of scientific assessments for farmed hard clam in response to the nonnative false mussel invasion.
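The cumulative Weibull threshold model referred to above has the standard form F(x) = 1 − exp(−(x/λ)^k). A sketch with illustrative scale and shape values (not the article's fitted parameters), including the inverse used to read off, e.g., the intensity that produces 50% response:

```python
import math

def weibull_cdf(x, scale, shape):
    """Probability of response (e.g., mortality) as a cumulative
    Weibull function of infestation intensity x."""
    return 1.0 - math.exp(-((x / scale) ** shape))

def intensity_at(p, scale, shape):
    """Inverse CDF: the intensity producing response probability p."""
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

# Hypothetical parameters; the median-effect intensity follows directly.
half = intensity_at(0.5, scale=30.0, shape=1.5)
```

This is the same mechanics by which the article derives its median‐mortality estimates from the fitted distribution.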

18.
This article focuses on conceptual and methodological developments allowing the integration of physical and social dynamics leading to model forecasts of circumstance‐specific human losses during a flash flood. To reach this objective, a random forest classifier is applied to assess the likelihood of fatality occurrence for a given circumstance as a function of representative indicators. Here, vehicle‐related circumstance is chosen as the literature indicates that most fatalities from flash flooding fall in this category. A database of flash flood events, with and without human losses from 2001 to 2011 in the United States, is supplemented with other variables describing the storm event, the spatial distribution of the sensitive characteristics of the exposed population, and built environment at the county level. The catastrophic flash floods of May 2015 in the states of Texas and Oklahoma are used as a case study to map the dynamics of the estimated probabilistic human risk on a daily scale. The results indicate the importance of time‐ and space‐dependent human vulnerability and risk assessment for short‐fuse flood events. The need for more systematic human impact data collection is also highlighted to advance impact‐based predictive models for flash flood casualties using machine‐learning approaches in the future.

19.
Microbial food safety risk assessment models can often be simplified by eliminating the need to integrate a complex dose‐response relationship across a distribution of exposure doses. This is possible if exposure pathways consistently lead to exposure doses with a small probability of causing illness. In this situation, the probability of illness will follow an approximately linear function of dose. Consequently, the predicted probability of illness per serving across all exposures is linear with respect to the expected value of dose. The majority of dose‐response functions are approximately linear when the dose is low. Nevertheless, what constitutes "low" is dependent on the parameters of the dose‐response function for a particular pathogen. In this study, a method is proposed to determine an upper bound of the exposure distribution for which the use of a linear dose‐response function is acceptable. If this upper bound is substantially larger than the expected value of exposure doses, then a linear approximation for probability of illness is reasonable. If conditions are appropriate for using the linear dose‐response approximation, for example, the expected value for exposure doses is two to three log10 smaller than the upper bound of the linear portion of the dose‐response function, then predicting the risk‐reducing effectiveness of a proposed policy is trivial. Simple examples illustrate how this approximation can be used to inform policy decisions and improve an analyst's understanding of risk.
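The upper‐bound idea can be sketched directly: for the approximate beta‐Poisson model the linear approximation's slope at zero dose is α/β, and since the relative error of the linear form grows with dose, a doubling search finds the largest dose at which it stays within a chosen tolerance. The parameters and tolerance are hypothetical, and this is not necessarily the article's own search procedure:

```python
def beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson probability of illness."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def linear_dr(dose, alpha, beta):
    """Linear low-dose approximation: slope alpha/beta at dose zero."""
    return alpha * dose / beta

def linear_upper_bound(alpha, beta, rel_tol=0.05):
    """Doubling search for the largest dose where the linear form is
    within rel_tol relative error of the exact curve."""
    dose = beta * 1e-6
    while True:
        exact = beta_poisson(2.0 * dose, alpha, beta)
        if abs(linear_dr(2.0 * dose, alpha, beta) - exact) / exact > rel_tol:
            return dose
        dose *= 2.0

ub = linear_upper_bound(alpha=0.2, beta=50.0)
```

If the expected exposure dose sits two to three log10 below `ub`, the abstract's argument says risk per serving can be treated as linear in mean dose.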

20.
Risk Analysis, 2018, 38(8): 1718–1737
We developed a probabilistic mathematical model for the postharvest processing of leafy greens focusing on Escherichia coli O157:H7 contamination of fresh‐cut romaine lettuce as the case study. Our model can (i) support the investigation of cross‐contamination scenarios, and (ii) evaluate and compare different risk mitigation options. We used an agent‐based modeling framework to predict the pathogen prevalence and levels in bags of fresh‐cut lettuce and quantify spread of E. coli O157:H7 from contaminated lettuce to surface areas of processing equipment. Using an unbalanced factorial design, we were able to propagate combinations of random values assigned to model inputs through different processing steps and ranked statistically significant inputs with respect to their impacts on selected model outputs. Results indicated that whether contamination originated on incoming lettuce heads or on the surface areas of processing equipment, pathogen prevalence among bags of fresh‐cut lettuce and batches was most significantly impacted by the level of free chlorine in the flume tank and frequency of replacing the wash water inside the tank. Pathogen levels in bags of fresh‐cut lettuce were most significantly influenced by the initial levels of contamination on incoming lettuce heads or surface areas of processing equipment. The influence of surface contamination on pathogen prevalence or levels in fresh‐cut bags depended on the location of that surface relative to the flume tank. This study demonstrates that developing a flexible yet mathematically rigorous modeling tool, a "virtual laboratory," can provide valuable insights into the effectiveness of individual and combined risk mitigation options.
