Similar Documents
Found 20 similar documents.
1.
By building on a genetic-inspired attribute-based conceptual framework for safety risk analysis, we propose a novel approach to define, model, and simulate univariate and bivariate construction safety risk at the situational level. Our fully data-driven techniques provide construction practitioners and academicians with an easy and automated way of getting valuable empirical insights from attribute-based data extracted from unstructured textual injury reports. By applying our methodology on a data set of 814 injury reports, we first show the frequency-magnitude distribution of construction safety risk to be very similar to that of many natural phenomena such as precipitation or earthquakes. Motivated by this observation, and drawing on state-of-the-art techniques in hydroclimatology and insurance, we then introduce univariate and bivariate nonparametric stochastic safety risk generators based on kernel density estimators and copulas. These generators enable the user to produce large numbers of synthetic safety risk values faithful to the original data, allowing safety-related decision making under uncertainty to be grounded on extensive empirical evidence. One of the implications of our study is that like natural phenomena, construction safety may benefit from being studied quantitatively by leveraging empirical data rather than strictly being approached through a managerial perspective using subjective data, which is the current industry standard. Finally, a side but interesting finding is that in our data set, attributes related to high energy levels (e.g., machinery, hazardous substance) and to human error (e.g., improper security of tools) emerge as strong risk shapers.
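A minimal sketch of the kind of generator this abstract describes: univariate resampling via a Gaussian kernel density estimate, and bivariate sampling via a copula with empirical marginals. A Gaussian copula stands in here for the paper's nonparametric copulas, and the input data are synthetic placeholders, not the 814-report data set.

```python
# Sketch of a nonparametric stochastic safety risk generator (assumptions noted above).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
risk = rng.lognormal(mean=1.0, sigma=0.8, size=814)   # stand-in for observed risk values
severity = risk * rng.lognormal(0.0, 0.3, 814)        # correlated second variable

# Univariate generator: resample from a Gaussian KDE fitted to the data.
kde = stats.gaussian_kde(risk)
synthetic_uni = kde.resample(10_000, seed=1).ravel()

# Bivariate generator: Gaussian copula on rank-transformed data + empirical marginals.
u = stats.rankdata(risk) / (len(risk) + 1)            # pseudo-observations in (0, 1)
v = stats.rankdata(severity) / (len(severity) + 1)
rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]

z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=10_000)
uu, vv = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])
synthetic_bi = np.column_stack([np.quantile(risk, uu),     # invert empirical marginals
                                np.quantile(severity, vv)])
print(synthetic_uni[:3], synthetic_bi[:3])
```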

2.
A challenge for large-scale environmental health investigations such as the National Children's Study (NCS) is characterizing exposures to multiple, co-occurring chemical agents with varying spatiotemporal concentrations and consequences modulated by biochemical, physiological, behavioral, socioeconomic, and environmental factors. Such investigations can benefit from systematic retrieval, analysis, and integration of diverse extant information on both contaminant patterns and exposure-relevant factors. This requires development, evaluation, and deployment of informatics methods that support flexible access and analysis of multiattribute data across multiple spatiotemporal scales. A new "Tiered Exposure Ranking" (TiER) framework, developed to support various aspects of risk-relevant exposure characterization, is described here, with examples demonstrating its application to the NCS. TiER utilizes advances in informatics computational methods, extant database content and availability, and integrative environmental/exposure/biological modeling to support both "discovery-driven" and "hypothesis-driven" analyses. "Tier 1" applications focus on "exposomic" pattern recognition for extracting information from multidimensional data sets, whereas second and higher tier applications utilize mechanistic models to develop risk-relevant exposure metrics for populations and individuals. In this article, "tier 1" applications of TiER explore identification of potentially causative associations among risk factors, for prioritizing further studies, by considering publicly available demographic/socioeconomic, behavioral, and environmental data in relation to two health endpoints (preterm birth and low birth weight). A "tier 2" application develops estimates of pollutant mixture inhalation exposure indices for NCS counties, formulated to support risk characterization for these endpoints. Applications of TiER demonstrate the feasibility of developing risk-relevant exposure characterizations for pollutants using extant environmental and demographic/socioeconomic data.

3.
This study presents a new multidimensional methodology for tsunami vulnerability assessment that combines the morphological, structural, social, and tax components of vulnerability. This new approach can be distinguished from previous methodologies that focused primarily on the evaluation of potentially affected buildings and did not use tsunami numerical modeling. The methodology was applied to the Figueira da Foz and Vila do Bispo municipalities in Portugal. For each area, the potential tsunami-inundated areas were calculated considering the 1755 Lisbon tsunami, the greatest disaster caused by natural hazards ever to have occurred in Portugal. Furthermore, the four components of vulnerability were calculated to obtain a composite vulnerability index. The methodology differentiates the vulnerability of the two areas, highlighting the characteristics of the territorial components. It can be a starting point for the creation of a local assessment framework at the municipal scale related to tsunami risk. In addition, the methodology provides important support for the different local stakeholders.
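The abstract does not specify how the composite index is built; the sketch below shows one plausible reading, min-max normalizing the four vulnerability components and combining them with assumed equal weights. All zone scores and weights are invented for illustration.

```python
# Hypothetical composite tsunami vulnerability index (equal weights assumed).
import numpy as np

components = {  # per-zone raw scores for the four components named in the abstract
    "morphological": np.array([0.8, 0.3, 0.5]),
    "structural":    np.array([0.6, 0.7, 0.2]),
    "social":        np.array([0.4, 0.9, 0.6]),
    "tax":           np.array([0.5, 0.2, 0.8]),
}
weights = {k: 0.25 for k in components}  # equal weights; the paper's weights are not given here

def minmax(x):
    """Min-max normalize a vector of raw component scores to [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

composite = sum(w * minmax(components[k]) for k, w in weights.items())
print(composite)  # one composite vulnerability value per zone
```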

4.
Prediction of natural disasters and their consequences is difficult due to the uncertainties and complexity of the multiple related factors. This article explores the use of domain knowledge and spatial data to construct a Bayesian network (BN) that facilitates the integration of multiple factors and the quantification of uncertainties within a consistent system for assessment of catastrophic risk. A BN is chosen due to its advantages, such as merging multiple source data and domain knowledge in a consistent system, learning from the data set, inference with missing data, and support of decision making. A key advantage of our methodology is the combination of domain knowledge and learning from the data to construct a robust network. To improve the assessment, we employ spatial data analysis and data mining to extend the training data set, select risk factors, and fine-tune the network. Another major advantage of our methodology is the integration of an optimal discretizer, an informative feature selector, learners, search strategies for local topologies, and Bayesian model averaging. These techniques all contribute to a robust prediction of the risk probability of natural disasters. In a flood disaster case study, our methodology achieved a higher probability of detecting high risk, better precision, and a larger ROC area than other methods, using both cross-validation and prediction of catastrophic risk based on historical data. Our results suggest that BNs are a good alternative for risk assessment and as a decision tool in the management of catastrophic risk.
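As a toy illustration of the learning step, the sketch below estimates a conditional probability table for a discretized risk node from synthetic training data by maximum likelihood. The variables, discretization, and structure are hypothetical and far simpler than the article's learned network.

```python
# Estimate P(risk | rain, elev) by observed frequencies from (synthetic) data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5_000
rain = rng.choice(["low", "high"], n, p=[0.7, 0.3])      # discretized risk factors
elev = rng.choice(["valley", "hill"], n, p=[0.4, 0.6])
p_flood = 0.05 + 0.5 * (rain == "high") * (elev == "valley")
risk = np.where(rng.random(n) < p_flood, "high", "low")
data = pd.DataFrame({"rain": rain, "elev": elev, "risk": risk})

# Maximum-likelihood CPT: normalized counts per parent configuration.
cpt = (data.groupby(["rain", "elev"])["risk"]
           .value_counts(normalize=True)
           .unstack(fill_value=0.0))
print(cpt.loc[("high", "valley"), "high"])  # ~0.55: P(high risk | high rain, valley)
```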

5.
This article presents an iterative six-step risk analysis methodology based on hybrid Bayesian networks (BNs). In typical risk analysis, systems are usually modeled as discrete and Boolean variables with constant failure rates via fault trees. Nevertheless, in many cases, it is not possible to perform an efficient analysis using only discrete and Boolean variables. The approach put forward by the proposed methodology makes use of BNs and incorporates recent developments that facilitate the use of continuous variables whose values may have any probability distributions. Thus, this approach makes the methodology particularly useful in cases where the available data for quantification of hazardous events probabilities are scarce or nonexistent, there is dependence among events, or when nonbinary events are involved. The methodology is applied to the risk analysis of a regasification system of liquefied natural gas (LNG) on board an FSRU (floating, storage, and regasification unit). LNG is becoming an important energy source option and the world's capacity to produce LNG is surging. Large reserves of natural gas exist worldwide, particularly in areas where the resources exceed the demand. Thus, this natural gas is liquefied for shipping and the storage and regasification process usually occurs at onshore plants. However, a new option for LNG storage and regasification has been proposed: the FSRU. As very few FSRUs have been put into operation, relevant failure data on FSRU systems are scarce. The results show the usefulness of the proposed methodology for cases where the risk analysis must be performed under considerable uncertainty.
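A hybrid BN of this kind can be evaluated by Monte Carlo forward sampling, as in the toy network below, where a discrete parent (seal state) drives a continuous child (leak rate) with an arbitrary distribution. All probabilities and distributions are invented, not FSRU failure data.

```python
# Toy hybrid Bayesian network evaluated by forward Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

seal_degraded = rng.random(n) < 0.02          # discrete node; P(degraded) assumed
leak = np.where(seal_degraded,
                rng.lognormal(1.5, 0.5, n),   # continuous node: leak rate if degraded (kg/s)
                rng.lognormal(-2.0, 0.3, n))  # leak rate if intact (kg/s)
release = leak > 3.0                          # hazardous event: leak exceeds threshold

print("P(hazardous release) =", release.mean())
print("P(degraded | release) =", seal_degraded[release].mean())  # diagnostic inference
```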

6.
Research suggests that hurricane-related risk perception is a critical predictor of behavioral response, such as evacuation. Less is known, however, about the precursors of these subjective risk judgments, especially when time has elapsed from a focal event. Drawing broadly from the risk communication, social psychology, and natural hazards literature, and specifically from concepts adapted from the risk information seeking and processing model and the protective action decision model, we examine how individuals' distant recollections, including attribution of responsibility for the effects of a storm, attitude toward relevant information, and past hurricane experience, relate to risk judgment for a future, similar event. The present study reports on a survey involving U.S. residents in Connecticut, New Jersey, and New York (n = 619) impacted by Hurricane Sandy. While some results confirm past findings, such as that hurricane experience increases risk judgment, others suggest additional complexity, such as how various types of experience (e.g., having evacuated vs. having experienced losses) may heighten or attenuate individual-level judgments of responsibility. We suggest avenues for future research, as well as implications for federal agencies involved in severe weather/natural hazard forecasting and communication with public audiences.

7.
Risk analysis involves people with different roles and competences. The validity of the outcome depends on their ability to communicate: ideally with one another directly, but at least with or via a risk analyst. The CORAS risk modeling language has been developed to facilitate communication between stakeholders involved in the various stages of risk analysis. This article reports the results of an empirical investigation among professionals whose purpose was to investigate how graphical effects (size, color, shape) and text labels introduced in the CORAS risk modeling language affected understanding. The results indicate that if graphical effects are used to illustrate important information, they should also be accompanied by informative textual labels.

8.
Helicobacter pylori is a microaerophilic, gram-negative bacterium that is linked to adverse health effects including ulcers and gastrointestinal cancers. The goal of this analysis is to develop the necessary inputs for a quantitative microbial risk assessment (QMRA) needed to develop a potential guideline for drinking water at the point of ingestion (e.g., a maximum contaminant level, or MCL) that would be protective of human health to an acceptable level of risk while considering sources of uncertainty. Using infection and gastric cancer as two discrete endpoints, and calculating dose-response relationships from experimental data on humans and monkeys, we perform both a forward and a reverse risk assessment to determine the risk from currently reported surface water concentrations of H. pylori and an acceptable concentration of H. pylori at the point of ingestion. This approach represents a synthesis of available information on human exposure to H. pylori via drinking water. A lifetime risk of cancer model suggests that an MCL be set at <1 organism/L given 5-log removal treatment, because we cannot exclude the possibility that current levels of H. pylori in environmental source waters pose a potential public health risk. Research gaps include pathogen occurrence in source and finished water, treatment removal rates, and determination of H. pylori risks from other water sources such as groundwater and recreational water.
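The forward/reverse logic can be illustrated with an exponential dose-response model, P(inf) = 1 - exp(-r · dose). The r value, water intake, and acceptable risk level below are assumptions for illustration, not the article's fitted parameters.

```python
# Forward and reverse QMRA with an exponential dose-response model (all inputs assumed).
import numpy as np

r = 1e-3                 # hypothetical dose-response parameter (per organism)
intake_L = 1.0           # assumed daily unboiled tap-water ingestion, L/day
log_removal = 5          # treatment performance quoted in the abstract

def annual_infection_risk(conc_source):
    """Annual infection risk from a source-water concentration (organisms/L)."""
    dose = conc_source * 10**-log_removal * intake_L
    p_day = 1 - np.exp(-r * dose)
    return 1 - (1 - p_day) ** 365

# Forward: risk at an assumed source concentration of 1 organism/L.
print(annual_infection_risk(1.0))

# Reverse: source concentration meeting an acceptable annual risk of 1e-4.
p_day_target = 1 - (1 - 1e-4) ** (1 / 365)
dose_target = -np.log(1 - p_day_target) / r
print(dose_target * 10**log_removal / intake_L, "organisms/L at the source")
```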

9.
The recent occurrence of severe major accidents has brought to light flaws and limitations of the hazard identification (HAZID) processes performed for safety reports, as in the accidents at Toulouse (France) and Buncefield (UK), where the accident scenarios that occurred were not captured by HAZID techniques. This study focuses on this type of atypical accident scenario deviating from normal expectations. The main purpose is to analyze the examples of atypical accidents mentioned and to attempt to identify them through the application of a well-known methodology such as bow-tie analysis. To these ends, the concept of an atypical event is precisely defined. Early warnings, causes, consequences, and occurrence mechanisms of the specific events are studied in depth, and general failures of risk assessment, management, and governance are isolated. These activities contribute to outlining a set of targeted recommendations, addressing transversal common deficiencies and also demonstrating how better management of knowledge from the study of past events can support future risk assessment processes in the identification of atypical accident scenarios. Thus, a new methodology is not suggested; rather, a specific approach coordinating a more effective use of experience and available information is described, to suggest that lessons to be learned from past accidents can be effectively translated into actions of prevention.

10.
11.
Risk Analysis, 2018, 38(6): 1279-1305
Modern infrastructures are becoming increasingly dependent on electronic systems, leaving them more vulnerable to electrical surges or electromagnetic interference. Electromagnetic disturbances appear in nature, e.g., lightning and solar wind; however, they may also be generated by man-made technology to maliciously damage or disturb electronic equipment. This article presents a systematic risk assessment framework for identifying possible, consequential, and plausible intentional electromagnetic interference (IEMI) attacks on an arbitrary distribution network infrastructure. In the absence of available data on IEMI occurrences, we find that a systems-based risk assessment is more useful than a probabilistic approach. We therefore modify the often applied definition of risk, i.e., a set of triplets containing scenario, probability, and consequence, to a set of quadruplets: scenario, resource requirements, plausibility, and consequence. Probability is "replaced" by resource requirements and plausibility, where the former is the minimum amount and type of equipment necessary to successfully carry out an attack scenario and the latter is a subjective assessment of the extent of the existence of attackers who possess the motivation, knowledge, and resources necessary to carry out the scenario. We apply the concept of intrusion areas and classify electromagnetic source technology according to key attributes. Worst-case scenarios are identified for different quantities of attacker resources. The most plausible and consequential of these are deemed the most important scenarios and should provide useful decision support in a countermeasures effort. Finally, an example of the proposed risk assessment framework, based on notional data, is provided on a hypothetical water distribution network.
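The proposed quadruplet is straightforward to render as a data structure. The sketch below ranks invented scenarios by plausibility × consequence; the scoring scales and example entries are hypothetical, not the article's notional data.

```python
# Data-structure sketch of the (scenario, resources, plausibility, consequence) quadruplet.
from dataclasses import dataclass

@dataclass
class IEMIScenario:
    name: str
    resources: int      # minimum attacker equipment class, 1 (cheap) .. 5 (exotic)
    plausibility: int   # subjective 1 (implausible) .. 5 (highly plausible)
    consequence: int    # 1 (negligible) .. 5 (severe)

scenarios = [
    IEMIScenario("parking-lot jammer vs. SCADA radio link", 2, 4, 3),
    IEMIScenario("vehicle-borne HPM source at pump station", 4, 2, 5),
    IEMIScenario("briefcase UWB source in visitor area", 3, 3, 4),
]

# Rank by plausibility x consequence; worst cases per resource class would be
# examined separately, as the framework suggests.
for s in sorted(scenarios, key=lambda s: s.plausibility * s.consequence, reverse=True):
    print(s.name, "->", s.plausibility * s.consequence)
```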

12.
New features of natural disasters have been observed over the last several years. The factors that influence the disasters' formation mechanisms, regularity of occurrence, and main characteristics have been revealed to be more complicated and diverse in nature than previously thought. As the uncertainty involved increases, the variables need to be examined further. This article discusses the importance of, and the shortage of, multivariate analysis of natural disasters and presents a method to estimate the joint probability of the return periods and perform a risk analysis. Severe dust storms from 1990 to 2008 in Inner Mongolia were used as a case study to test this new methodology, as they are normal and recurring climatic phenomena on Earth. Based on the 79 investigated events and according to a bivariate definition of dust storms, the joint probability distribution of severe dust storms was established using observed data on maximum wind speed and duration. The joint return periods of severe dust storms were calculated, and the relevant risk was analyzed according to the joint probability. The copula function is able to simulate severe dust storm disasters accurately. The joint return periods generated are closer to those observed in reality than the univariate return periods and thus have more value for severe dust storm disaster mitigation, strategy making, program design, and improvement of risk management. This research may prove useful in risk-based decision making. The exploration of multivariate analysis methods can also lay the foundation for further applications in natural disaster risk analysis.
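The joint return period computation can be made concrete with a Gumbel copula, C(u, v) = exp(-[(-ln u)^θ + (-ln v)^θ]^(1/θ)): for the "AND" case, T = μ / (1 - u - v + C(u, v)), where μ is the mean interarrival time (here 19 years / 79 events, taken from the abstract). The marginal probabilities and θ below are assumed, not the fitted Inner Mongolia values.

```python
# Bivariate joint return periods via a Gumbel copula (parameters assumed).
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v) with dependence parameter theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta)))

theta = 2.0            # hypothetical dependence parameter
mu = 19 / 79           # mean interarrival time in years: 79 events in 1990-2008

u = 0.95               # assumed F(max wind speed) at the event of interest
v = 0.90               # assumed F(duration) at the event of interest

t_and = mu / (1 - u - v + gumbel_copula(u, v, theta))  # both thresholds exceeded
t_or = mu / (1 - gumbel_copula(u, v, theta))           # either threshold exceeded
print(f"T_and = {t_and:.1f} yr, T_or = {t_or:.1f} yr")
```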

13.
Losses due to natural hazard events can be extraordinarily high and difficult to cope with. Therefore, there is considerable interest in estimating the potential impact of current and future extreme events at all scales in as much detail as possible. As hazards typically spread over wider areas, risk assessment must take into account interrelations between regions. Neglecting such interdependencies can lead to a severe underestimation of potential losses, especially for extreme events. This underestimation of extreme risk can lead to the failure of risk management strategies when they are most needed, namely, in times of unprecedented events. In this article, we suggest a methodology to incorporate such interdependencies in risk via the use of copulas. We demonstrate that by coupling losses, dependencies can be incorporated in risk analysis, avoiding the underestimation of risk. Based on maximum discharge data of river basins and stream networks, we present and discuss different ways to couple loss distributions of basins while explicitly incorporating tail dependencies. We distinguish between coupling methods that require river structure data for the analysis and those that do not. For the latter approach we propose a minimax algorithm to choose coupled basin pairs so that the underestimation of risk is avoided and the use of river structure data is not needed. The proposed methodology is especially useful for large-scale analysis and we motivate and apply our method using the case of Romania. The approach can be easily extended to other countries and natural hazards.
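The effect the article warns about, underestimating extremes by ignoring dependence, can be demonstrated in a few lines: compare a high quantile of summed basin losses under independence and under strong dependence. A Gaussian copula with Pareto marginals stands in for the article's tail-dependent constructions; all parameters are invented.

```python
# Dependence inflates the tail of aggregated losses (illustrative parameters only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 200_000

def summed_losses(rho):
    """Total loss of two basins coupled by a Gaussian copula with correlation rho."""
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    u = stats.norm.cdf(z)
    # Heavy-tailed (Pareto) marginal losses for each basin.
    return stats.pareto.ppf(u[:, 0], b=2.5) + stats.pareto.ppf(u[:, 1], b=2.5)

for rho in (0.0, 0.8):
    print(f"rho={rho}: 99.9% total loss = {np.quantile(summed_losses(rho), 0.999):.1f}")
```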

14.
A methodology is suggested for the estimation of the mass density and the cumulative ground deposition of a nonvolatile, nonneutrally buoyant air pollutant (liquid or solid) released from a polluted column (following an explosion caused during routine operation in, e.g., the chemical industry, or due to any kind of hostile act) and deposited on the ground via gravitational settling. In many cases, the deposited mass due to gravitational settling constitutes a significant fraction of the original inventory released from the source. Implementation of the methodology in preliminary risk assessments can serve as an efficient tool for emergency planning for both immediate and long-term measures such as evacuation and decontamination. The methodology considers, inter alia, an estimation of the critical particle diameter and the particle size and mass distributions along the polluted column. The methodology was developed for application in rural regions, since the relevant meteorological input data can be properly applied mainly in such areas.
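In the Stokes regime, the settling velocity is v_s = (ρ_p - ρ_air) g d² / (18 μ), so a particle released at height H in wind U travels roughly x = U H / v_s before deposition; inverting gives a critical diameter for deposition within a given distance. The sketch below uses generic physical constants and assumed release conditions, not the article's model.

```python
# Back-of-the-envelope critical particle diameter via Stokes settling (assumptions above).
import numpy as np

g, mu_air, rho_air = 9.81, 1.8e-5, 1.2   # gravity, air viscosity, air density (SI)
rho_p = 2000.0                            # assumed particle density, kg/m^3
H, U = 500.0, 5.0                         # assumed column height (m) and wind speed (m/s)

def settling_velocity(d):
    """Stokes settling velocity (m/s) for particle diameter d (m)."""
    return (rho_p - rho_air) * g * d**2 / (18 * mu_air)

def critical_diameter(x):
    """Smallest diameter (m) that deposits within downwind distance x (m)."""
    return np.sqrt(18 * mu_air * U * H / ((rho_p - rho_air) * g * x))

for x in (1_000, 10_000, 50_000):
    print(f"x = {x:>6} m -> d_crit = {critical_diameter(x) * 1e6:.0f} um")
```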

15.
Although cumulative risk assessment by definition evaluates the joint effects of chemical and nonchemical stressors, studies to date have not considered both dimensions, in part because toxicological studies cannot capture many stressors of interest. Epidemiology can potentially include all relevant stressors, but developing and extracting the necessary information is challenging given some of the inherent limitations of epidemiology. In this article, I propose a conceptual framework within which epidemiological studies could be evaluated for their inclusion into cumulative risk assessment, including a problem formulation/planning and scoping step that focuses on stressors meaningful for risk management decisions, extension of the chemical mixtures framework to include nonchemical stressors, and formal consideration of vulnerability characteristics of the population. In the long term, broadening the applicability and informativeness of cumulative risk assessment will require enhanced communication and collaboration between epidemiologists and risk assessors, in which the structure of social and environmental epidemiological analyses may be informed in part by the needs of cumulative risk assessment.

16.
Partly because of the poor quality of exposure information on humans, most lifetime carcinogenic risk assessments have been based on animal data. There are, however, surrogate measures for exposure that have not been fully utilized. One of these is duration of exposure where data on mean exposure levels are available. A method is presented for the use of such data, and the method is illustrated by developing a risk assessment from the available epidemiologic literature on gasoline and kidney cancer. This risk assessment is fairly consistent across studies and close to a risk assessment based upon an experiment with rats. While there needs to be much improvement in the quality of environmental data available to epidemiologists, it is possible that a number of risk assessments can be made from existing epidemiologic data and efforts directed away from extrapolation from animal data.
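A worked toy example of the duration surrogate: cumulative exposure = mean level × duration, with lifetime excess risk extrapolated linearly from a slope fitted across studies. The slope and exposure level below are invented, not the gasoline/kidney-cancer estimates.

```python
# Duration-of-exposure surrogate: linear excess risk per unit cumulative exposure.
slope = 2.0e-6            # assumed excess lifetime risk per ppm-year
mean_level_ppm = 5.0      # assumed mean occupational exposure level

for years in (1, 10, 40):
    cumulative = mean_level_ppm * years          # cumulative exposure, ppm-years
    print(f"{years:>2} yr -> lifetime excess risk = {slope * cumulative:.1e}")
```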

17.
Statistical source attribution approaches for food-related zoonoses can generally be based on reported diagnosed human cases and surveillance results from different food sources or reservoirs of bacteria. The attribution model, or probabilistic classifier, can thus be based on (sub)typing information enabling comparison between human infections and samples derived from source surveillance. Time series of both data sets allow temporal patterns to be analyzed, providing a repeated natural experiment. A Bayesian approach combining both sources of information over a long time series is presented for the case of Campylobacter in Finland and Norway. The full model is transparently presented and derived from the Bayes theorem. Previous statistical source attribution approaches are here advanced (1) by explicitly modeling the cases not associated with any of the sources under surveillance over time, (2) by modeling uncertain prevalence in a food source by bacteria type over time, and (3) by implementing formal model fit assessment using posterior predictive discrepancy functions. A large proportion of all campylobacteriosis cases can be attributed to broilers, but considerable uncertainty remains over time. The source attribution is inherently incomplete if only the sources under surveillance are included in the model. All statistical source attribution approaches should include a model fit assessment for judgment of model performance with respect to relevant quantities of interest. This is especially relevant when the model aims at a synthesis of several incomplete information sources under significant uncertainty in the explanatory variables.
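A heavily simplified, non-Bayesian stand-in for the attribution idea: attribute each human case of subtype i to source j in proportion to the surveyed prevalence of that subtype in that source. The article instead places priors on the uncertain prevalences, models time, and includes an unattributed component; all counts below are invented.

```python
# Proportional source attribution from subtype surveillance counts (toy data).
import numpy as np

sources = ["broiler", "cattle", "water"]

# Rows: sources; columns: subtype counts from source surveillance (invented).
source_counts = np.array([[30,  5,  1],
                          [ 4, 12,  2],
                          [ 1,  3, 10]], dtype=float)
human_cases = np.array([50, 20, 8], dtype=float)       # human cases per subtype

prev = source_counts / source_counts.sum(axis=1, keepdims=True)  # P(subtype | source)
weights = prev / prev.sum(axis=0, keepdims=True)                 # P(source | subtype), flat prior
attributed = weights @ human_cases                               # expected cases per source

for s, a in zip(sources, attributed):
    print(f"{s:8s} {a:5.1f} cases ({a / human_cases.sum():.0%})")
```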

18.
Ali Mosleh, Risk Analysis, 2012, 32(11): 1888-1900
Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. This potential exposure is measured in terms of the probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessments of the probability of default and the transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses techniques developed in the physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and the associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis.
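One simple way to weight agencies by historical accuracy, loosely in the spirit of the article, is an accuracy-weighted pooling of their default-probability estimates; the sketch below uses inverse Brier scores as weights. This is not the article's full Bayesian framework, and all numbers are invented.

```python
# Accuracy-weighted pooling of agency default-probability estimates (toy numbers).
import numpy as np

estimates = np.array([0.020, 0.035, 0.015])   # agencies' P(default) for one obligor
brier = np.array([0.010, 0.025, 0.018])       # each agency's historical Brier score

weights = (1 / brier) / (1 / brier).sum()     # better past accuracy -> larger weight
p_default = weights @ estimates
print(f"weights = {np.round(weights, 3)}, combined P(default) = {p_default:.4f}")
```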

19.
Point source pollution is one of the main threats to regional environmental health. Based on a water quality model, a methodology to assess the regional risk of point source pollution is proposed. The assessment procedure includes five parts: (1) identifying risk source units and estimating source emissions using Monte Carlo algorithms; (2) observing hydrological and water quality data of the assessed area and evaluating the selected water quality model; (3) screening the assessment endpoints and analyzing receptor vulnerability with the Choquet fuzzy integral algorithm; (4) using the water quality model introduced in the second step to predict pollutant concentrations for various source emission scenarios and analyzing the hazards of risk sources; and finally, (5) using the source hazard values and receptor vulnerability scores to estimate overall regional risk. The proposed method, based on the Water Quality Analysis Simulation Program (WASP), was applied in the region of the Taipu River, which is in the Taihu Basin, China. Results of the source hazard and receptor vulnerability analysis allowed us to describe aquatic ecological, human health, and socioeconomic risks individually, as well as integrated risks in the Taipu region, from a series of risk curves. Risk contributions of sources to receptors were ranked, and the spatial distribution of risk levels was presented. By changing the input conditions, we were able to estimate risks for a range of scenarios. Thus, the proposed procedure may also be used by decision makers for long-term dynamic risk prediction.
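Steps (1) and (5) of the procedure lend themselves to a short sketch: Monte Carlo sampling of uncertain source emissions, then combination of the resulting hazard with a receptor vulnerability score. A trivial dilution factor stands in for the WASP transport model of steps (2)-(4); all parameters are invented.

```python
# Monte Carlo source emissions -> exceedance hazard -> hazard x vulnerability.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Step 1: uncertain emissions (kg/day) of two point sources.
emis = np.column_stack([rng.lognormal(3.0, 0.4, n),
                        rng.lognormal(2.5, 0.6, n)])

dilution = np.array([1e-4, 2e-4])        # stand-in for WASP transport, (mg/L)/(kg/day)
conc = emis @ dilution                   # receptor concentration per MC draw

vulnerability = 0.7                      # assumed receptor vulnerability score, 0..1
threshold = 0.005                        # assumed water-quality standard, mg/L

exceed = conc > threshold
risk = exceed.mean() * vulnerability     # regional risk index: P(exceedance) x vulnerability
print(f"P(exceedance) = {exceed.mean():.3f}, regional risk index = {risk:.3f}")
```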

20.
Land subsidence risk assessment (LSRA) is a multi-attribute decision analysis (MADA) problem and is often characterized by both quantitative and qualitative attributes with various types of uncertainty. Therefore, the problem needs to be modeled and analyzed using methods that can handle uncertainty. In this article, we propose an integrated assessment model based on the evidential reasoning (ER) algorithm and fuzzy set theory. The assessment model is structured as a hierarchical framework that regards land subsidence risk as a composite of two key factors: hazard and vulnerability. These factors can be described by a set of basic indicators defined by assessment grades with attributes for transforming both numerical data and subjective judgments into a belief structure. The factor-level attributes of hazard and vulnerability are combined using the ER algorithm, which is based on the information from a belief structure calculated by the Dempster-Shafer (D-S) theory, and a distributed fuzzy belief structure calculated by fuzzy set theory. The results from the combined algorithms yield distributed assessment grade matrices. The application of the model to the Xixi-Chengnan area, China, illustrates its usefulness and validity for LSRA. The model utilizes a combination of all types of evidence, including all assessment information—quantitative or qualitative, complete or incomplete, and precise or imprecise—to provide assessment grades that define risk assessment on the basis of hazard and vulnerability. The results will enable risk managers to apply different risk prevention measures and mitigation planning based on the calculated risk states.
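The information-fusion step underlying the ER algorithm can be illustrated with Dempster's rule of combination over a small frame of risk grades; the mass assignments below are illustrative only.

```python
# Dempster's rule of combination over assessment grades (toy mass functions).
def combine(m1, m2):
    """Fuse two mass functions whose focal elements are frozensets of grades."""
    fused, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                 # mass on empty intersections
    return {k: v / (1 - conflict) for k, v in fused.items()}  # normalize out conflict

G = frozenset  # shorthand for focal elements
hazard = {G({"high"}): 0.6, G({"high", "medium"}): 0.3,
          G({"low", "medium", "high"}): 0.1}        # residual ignorance
vuln = {G({"medium"}): 0.5, G({"high", "medium"}): 0.4,
        G({"low", "medium", "high"}): 0.1}

for grades, mass in sorted(combine(hazard, vuln).items(), key=lambda kv: -kv[1]):
    print(set(grades), round(mass, 3))
```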
