Similar Literature (20 results)
1.
In work environments, the main aim of occupational safety risk assessment (OSRA) is to improve the safety level of an installation or site by either preventing accidents and injuries or minimizing their consequences. To this end, it is of paramount importance to identify all sources of hazards and assess their potential to cause problems in the respective context. If the OSRA process is inadequate and/or not applied effectively, it results in an ineffective safety prevention program and inefficient use of resources. An appropriate OSRA is an essential component of the occupational safety risk management process in industries. In this article, we performed a survey to elicit the relative importance of identified OSRA tasks to enable an in-depth evaluation of the quality of risk assessments related to occupational safety aspects on industrial sites. The survey involved defining a questionnaire with the most important elements (tasks) for OSRA quality assessment, which was then presented to safety experts in the mining, electrical power production, transportation, and petrochemical industries. With this work, we expect to contribute to the central question of OSRA in industries: "What constitutes a good occupational safety risk assessment?" The results obtained from the questionnaire showed that experts agree with the proposed decomposition of the OSRA process into steps and tasks (a taxonomy), and also with the importance of assigning weights to capture the relevance of each OSRA task. The knowledge gained will enable us, in the near future, to build a framework to evaluate OSRA quality for industrial sites.
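The weighting idea described above can be sketched as a simple weighted-sum quality score. The task names, weights, and site scores below are purely illustrative assumptions, not the tasks or weights elicited in the study:

```python
# Hypothetical sketch: aggregating expert-elicited task weights into an
# OSRA quality score. All names and numbers are illustrative assumptions.

# Expert-elicited weights for OSRA tasks (normalized inside osra_quality)
task_weights = {
    "hazard_identification": 0.35,
    "risk_estimation": 0.25,
    "risk_evaluation": 0.20,
    "documentation": 0.10,
    "review_and_update": 0.10,
}

# Assessor scores for one site's risk assessment (0 = absent, 1 = fully adequate)
site_scores = {
    "hazard_identification": 0.9,
    "risk_estimation": 0.6,
    "risk_evaluation": 0.8,
    "documentation": 0.5,
    "review_and_update": 0.4,
}

def osra_quality(weights, scores):
    """Weighted sum of task scores; higher means a better-quality OSRA."""
    total_w = sum(weights.values())
    return sum(weights[t] * scores[t] for t in weights) / total_w

quality = osra_quality(task_weights, site_scores)
```

A framework of this kind makes the relative importance of tasks explicit, so two assessments can be compared on the same scale.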

2.
To aid in their safety oversight of large-scale, potentially dangerous energy and water infrastructure and transportation systems, public utility regulatory agencies increasingly seek to use formal risk assessment models. Yet some of the approaches to risk assessment used by utilities and their regulators may be less useful for this purpose than is supposed. These approaches often do not reflect the current state of the art in risk assessment strategy and methodology. This essay explores why utilities and regulatory agencies might embrace risk assessment techniques that do not sufficiently assess organizational and managerial factors as drivers of risk, or adequately represent important uncertainties surrounding risk calculations. Further, it describes why, in the special legal, political, and administrative world of the typical public utility regulator, strategies to identify and mitigate formally specified risks might actually diverge from the regulatory promotion of "safety." Some improvements are suggested that can be made in risk assessment approaches to support more fully the safety oversight objectives of public regulatory agencies, with examples from "high-reliability organizations" (HROs) that have successfully merged the management of safety with the management of risk. Finally, given the limitations of their current risk assessments and the lessons from HROs, four specific assurances are suggested that regulatory agencies should seek for themselves and the public as objectives in their safety oversight of public utilities.

3.
J. W. Owens, Risk Analysis, 1997, 17(3): 359–365
A life-cycle approach takes a cradle-to-grave perspective of a product's numerous activities from raw material extraction to final disposal. There have been recent efforts to develop life-cycle assessment (LCA) to assess both environmental and human health issues. The question then arises: what are the capabilities of LCA, especially in relation to risk assessment? To address this question, this paper first describes the LCA mass-based accounting system and then analyzes the use of this approach for environmental and human health assessment. The key LCA limitations in this respect are loss of spatial, temporal, dose-response, and threshold information. These limitations affect LCA's capability to assess several environmental issues, and human health in particular. This leads to the conclusion that LCA impact assessment does not predict or measure actual effects, quantify risks, or address safety. Instead, LCA uses mass loadings with simplifying assumptions and subjective judgments to add independent effects and exposures into an overall score. As a result, LCA identifies possible human health issues on a systemwide basis from a worst-case, hypothetical hazard perspective. Ideally, the identified issues would then be addressed by more detailed assessment methods, such as risk assessment.

4.
Sensitivity analysis (SA) methods are a valuable tool for identifying critical control points (CCPs), which is one of the important steps in the hazard analysis and CCP approach used to ensure safe food. There are many SA methods in use across various disciplines. Furthermore, food safety process risk models pose challenges because they often are highly nonlinear, contain thresholds, and have discrete inputs. Therefore, it is useful to compare and evaluate SA methods based on their application to an example food safety risk model. Ten SA methods were applied to a draft Vibrio parahaemolyticus (Vp) risk assessment model developed by the Food and Drug Administration. The model was modified so that all inputs were independent. Rankings of key inputs from different methods were compared. Inputs such as water temperature, number of oysters per meal, and the distributional assumption for the unrefrigerated time were the most important inputs, whereas time on water, fraction of pathogenic Vp, and the distributional assumption for the weight of oysters were the least important inputs. Most of the methods gave a similar ranking of key inputs even though the methods differed in terms of being graphical, mathematical, or statistical, accounting for individual effects or the joint effect of inputs, and being model dependent or model independent. A key recommendation is that methods be further compared by application to different and more complex food safety models. Model-independent methods, such as ANOVA, the mutual information index, and scatter plots, are expected to be more robust than the others evaluated.
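A minimal statistical SA of the kind compared above can be sketched by sampling a toy model's inputs and ranking them by correlation with the output. The toy risk function below is invented for illustration; only the input names echo the abstract, and the real Vp model is far richer:

```python
import random

random.seed(1)

# Toy nonlinear "food safety" model: risk depends strongly on water
# temperature (threshold + squared term), weakly on oyster weight.
def toy_risk(water_temp, n_oysters, oyster_weight):
    growth = max(0.0, water_temp - 10.0) ** 2  # threshold nonlinearity
    return growth * n_oysters + 0.01 * oyster_weight

samples = []
for _ in range(2000):
    x = {
        "water_temp": random.uniform(5, 30),
        "n_oysters": random.uniform(1, 20),
        "oyster_weight": random.uniform(10, 50),
    }
    samples.append((x, toy_risk(**x)))

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5

# Rank inputs by the magnitude of their correlation with the output
ranking = sorted(
    ("water_temp", "n_oysters", "oyster_weight"),
    key=lambda name: -abs(pearson([s[0][name] for s in samples],
                                  [s[1] for s in samples])),
)
```

Correlation-based ranking is one of the simpler statistical SA methods; for a thresholded, nonlinear model like this one, methods such as ANOVA or mutual information would typically be preferred, as the abstract notes.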

5.
This study assesses the fire safety risks associated with compressed natural gas (CNG) vehicle systems, comprising primarily a typical school bus and supporting fuel infrastructure. The study determines the sensitivity of the results to variations in component failure rates and consequences of fire events. The components and subsystems that contribute most to fire safety risk are determined. Finally, the results are compared to fire risks of the present generation of diesel-fueled school buses. Direct computation of the safety risks associated with diesel-powered vehicles is possible because these are mature technologies for which historical performance data are available. Because of limited experience, fatal accident data for CNG bus fleets are minimal. Therefore, this study uses the probabilistic risk assessment (PRA) approach to model and predict the fire safety risk of CNG buses. Generic failure data, engineering judgments, and assumptions are used in this study. This study predicts the mean fire fatality risk for typical CNG buses as approximately 0.23 fatalities per 100 million miles for all people involved, including bus passengers, and estimates a mean value of 0.16 fatalities per 100 million miles for bus passengers only. Based on historical data, the diesel school bus mean fire fatality risk is 0.091 and 0.0007 per 100 million miles for all people and bus passengers, respectively. One can therefore conclude that the fire fatality risk of CNG buses is about 2.5 times that of diesel buses, with bus passengers being more at risk by over two orders of magnitude. The study estimates a mean fire risk frequency of 2.2 × 10^-5 fatalities/bus per year, with 5% and 95% uncertainty bounds of 9.1 × 10^-6 and 4.0 × 10^-5, respectively. The risk result was found to be affected most by the failure rates of pressure relief valves, CNG cylinders, and fuel piping.
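The uncertainty-propagation step of a PRA like this can be sketched with a Monte Carlo loop over uncertain component failure rates. The component names echo the abstract, but the lognormal parameters, the conditional fatality probability, and the additive risk model are all invented for illustration and do not reproduce the study's numbers:

```python
import math
import random

random.seed(42)

# Illustrative-only sketch of propagating failure-rate uncertainty in a PRA.
# (mu, sigma) parameterize ln(failure rate per bus-year); values are assumed.
components = {
    "pressure_relief_valve": (math.log(1e-5), 0.8),
    "cng_cylinder": (math.log(5e-6), 0.6),
    "fuel_piping": (math.log(8e-6), 0.7),
}

def sample_fatality_frequency():
    """One Monte Carlo draw of the fire fatality frequency per bus-year."""
    p_fatal_given_fire = 0.05  # assumed conditional fatality probability
    total_fire_freq = sum(random.lognormvariate(mu, sigma)
                          for mu, sigma in components.values())
    return total_fire_freq * p_fatal_given_fire

draws = sorted(sample_fatality_frequency() for _ in range(10000))
mean = sum(draws) / len(draws)
p5 = draws[int(0.05 * len(draws))]    # 5% uncertainty bound
p95 = draws[int(0.95 * len(draws))]   # 95% uncertainty bound
```

Reporting the mean together with the 5%/95% bounds, as the abstract does, conveys how much the limited failure data widen the credible range around the point estimate.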

6.
Risk Analysis, 2018, 38(9): 1972–1987
Weed risk assessments (WRA) are used to identify plant invaders before introduction. Unfortunately, very few incorporate uncertainty ratings or evaluate the effects of uncertainty, a fundamental risk component. We developed a probabilistic model to quantitatively evaluate the effects of uncertainty on the outcomes of a question-based WRA tool for the United States. In our tool, the uncertainty of each response is rated as Negligible, Low, Moderate, or High. We developed the model by specifying the likelihood of a response changing for each uncertainty rating. The simulations determine if responses change, select new responses, and sum the scores to determine the risk rating. The simulated scores reveal potential variation in WRA risk ratings. In testing with 204 species assessments, the ranges of simulated risk scores increased with greater uncertainty, and analyses for most species produced simulated risk ratings that differed from the baseline WRA rating. Still, the most frequent simulated rating matched the baseline rating for every High Risk species, and for 87% of all tested species. The remaining 13% primarily involved ambiguous Low Risk results. Changing final ratings based on the uncertainty analysis results was not justified here because accuracy (match between WRA tool and known risk rating) did not improve. Detailed analyses of three species assessments indicate that assessment uncertainty may be best reduced by obtaining evidence for unanswered questions, rather than obtaining additional evidence for questions with responses. This analysis represents an advance in interpreting WRA results, and has enhanced our regulation and management of potential weed species.
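The simulation logic described above — flip each response with a probability tied to its uncertainty rating, re-sum the scores, and tally the resulting ratings — can be sketched as follows. The flip probabilities, question scores, and rating thresholds are assumptions for illustration, not the tool's actual values:

```python
import random

random.seed(0)

# Assumed probability that a response changes on re-assessment, by rating
FLIP_PROB = {"Negligible": 0.01, "Low": 0.10, "Moderate": 0.30, "High": 0.60}

# Each question: (score if response kept, score if it changes, uncertainty)
questions = [
    (2, 0, "Low"), (1, -1, "Moderate"), (3, 1, "Negligible"),
    (0, 2, "High"), (1, 0, "Low"),
]

def simulate_score():
    """One simulated total score with uncertainty-driven response changes."""
    total = 0
    for kept, alternative, unc in questions:
        total += alternative if random.random() < FLIP_PROB[unc] else kept
    return total

def rating(score):
    # Assumed thresholds for mapping a score to a risk rating
    return "High Risk" if score >= 6 else "Low Risk" if score <= 3 else "Evaluate Further"

baseline_score = sum(q[0] for q in questions)  # score with no changes
sim_ratings = [rating(simulate_score()) for _ in range(5000)]
most_frequent = max(set(sim_ratings), key=sim_ratings.count)
```

Comparing `most_frequent` against `rating(baseline_score)` mirrors the abstract's check of whether the modal simulated rating matches the baseline WRA rating.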

7.
Quantitative Risk Assessment for Developmental Neurotoxic Effects (total citations: 4; self-citations: 0; other citations: 4)
Developmental neurotoxicity concerns the adverse health effects of exogenous agents acting on neurodevelopment. Because human brain development is a delicate process involving many cellular events, the developing fetus is rather susceptible to compounds that can alter the structure and function of the brain. Today, there is clear evidence that early exposure to many neurotoxicants can severely damage the developing nervous system. Although much attention has been given in recent years to model development and risk assessment procedures for developmental toxicants, the area of developmental neurotoxicity has been largely ignored. Here, we consider the problem of risk estimation for developmental neurotoxicants from animal bioassay data. Since most responses from developmental neurotoxicity experiments are nonquantal in nature, an adverse health effect is defined as a response that occurs with very small probability in unexposed animals. Using a two-stage hierarchical normal dose-response model, upper confidence limits on the excess risk due to a given level of added exposure are derived. Equivalently, the model is used to obtain lower confidence limits on dose for a small, negligible level of risk. Our method is based on the asymptotic distribution of the likelihood ratio statistic (cf. Crump, 1995). An example is used to provide further illustration.

8.
We developed a quantitative risk assessment model using a discrete event framework to quantify and study the risk associated with norovirus transmission to consumers through food contaminated by infected food employees in a retail food setting. This study focused on the impact of ill food workers experiencing symptoms of diarrhea and vomiting and on potential control measures for the transmission of norovirus to foods. The model examined the behavior of food employees regarding exclusion from work while ill and after symptom resolution, and preventive measures limiting food contamination during preparation. The mean numbers of infected customers estimated for 21 scenarios were compared to the estimate for a baseline scenario representing current practices. Results show that the prevention strategies examined could not prevent norovirus transmission to food when a symptomatic employee was present in the food establishment. Compliance with exclusion from work of symptomatic food employees is thus critical, with an estimated range of 75–226% of the baseline mean for full to no compliance, respectively. Results also suggest that efficient handwashing, handwashing frequency associated with gloving compliance, and elimination of contact between hands, faucets, and door handles in restrooms reduced the mean number of infected customers to 58%, 62%, and 75% of the baseline, respectively. This study provides quantitative data to evaluate the relative efficacy of policy and practices at retail to reduce norovirus illnesses and provides new insights into the interactions and interplay of prevention strategies and compliance in reducing transmission of foodborne norovirus.

9.
Quantitative microbial risk assessment was used to predict the likelihood and spatial organization of Mycobacterium tuberculosis (Mtb) transmission in a commercial aircraft. Passenger exposure was predicted via a multizone Markov model in four scenarios: seated or moving infectious passengers, with or without filtration of recirculated cabin air. The traditional exponential (k = 1) and a new exponential (k = 0.0218) dose-response function were used to compute infection risk. Emission variability was included by Monte Carlo simulation. Infection risks were higher nearer and aft of the source; steady-state airborne concentration levels were not attained. Expected incidence was low to moderate, with the central 95% ranging from 10^-6 to 10^-1 per 169 passengers in the four scenarios. Emission rates used were low compared to measurements from active TB patients in wards; thus a "superspreader" emitting 44 quanta/h could produce 6.2 cases or more under these scenarios. Use of respiratory protection by the infectious source and/or susceptible passengers reduced infection incidence by up to one order of magnitude.
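The two dose-response parameters quoted above plug into the standard exponential dose-response model, P(infection) = 1 − exp(−k·dose). The k values come from the abstract; the example dose is an arbitrary assumption:

```python
import math

def exponential_dose_response(dose, k):
    """Probability of infection for a given inhaled dose (exponential model)."""
    return 1.0 - math.exp(-k * dose)

# The two parameter values reported in the abstract
k_traditional = 1.0
k_new = 0.0218

dose = 0.5  # hypothetical inhaled dose (quanta) for one passenger
risk_traditional = exponential_dose_response(dose, k_traditional)
risk_new = exponential_dose_response(dose, k_new)
```

Because the new k is roughly 46 times smaller, the same exposure translates into a much lower per-passenger infection probability, which is why the choice of dose-response function dominates the predicted incidence.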

10.
Risk characterization in a study population relies on cases of disease or death that are causally related to the exposure under study. The number of such cases, so-called "excess" cases, is not just an indicator of the impact of the risk factor in the study population, but also an important determinant of statistical power for assessing aspects of risk such as age-time trends and susceptible subgroups. In determining how large a population to study and/or how long to follow a study population to accumulate sufficient excess cases, it is necessary to predict future risk. In this study, focusing on models involving excess risk with possible effect modification, we describe a method for predicting the expected magnitude of numbers of excess cases and assess the uncertainty in those predictions. We do this by extending Bayesian APC (age-period-cohort) models for rate projection to include exposure-related excess risk with possible effect modification by, e.g., age at exposure and attained age. The method is illustrated using the follow-up study of Japanese Atomic-Bomb Survivors, one of the primary bases for determining long-term health effects of radiation exposure and assessment of risk for radiation protection purposes. Using models selected by a predictive-performance measure obtained on test data reserved for cross-validation, we project excess counts due to radiation exposure and lifetime risk measures (risk of exposure-induced deaths (REID) and loss of life expectancy (LLE)) associated with cancer and noncancer disease deaths in the A-Bomb survivor cohort.

11.
The current quantitative risk assessment model followed the framework proposed by the Codex Alimentarius to provide an estimate of the risk of human salmonellosis due to consumption of chicken breasts which were bought from Canadian retail stores and prepared in Canadian domestic kitchens. The model simulated the level of Salmonella contamination on chicken breasts throughout the retail-to-table pathway. The model used Canadian input parameter values, where available, to represent risk of salmonellosis. From retail until consumption, changes in the concentration of Salmonella on each chicken breast were modeled using equations for growth and inactivation. The model predicted an average of 318 cases of salmonellosis per 100,000 consumers per year. Potential reasons for this overestimation were discussed. A sensitivity analysis showed that concentration of Salmonella on chicken breasts at retail and food hygienic practices in private kitchens such as cross-contamination due to not washing cutting boards (or utensils) and hands after handling raw meat along with inadequate cooking contributed most significantly to the risk of human salmonellosis. The outcome from this model emphasizes that responsibility for protection from Salmonella hazard on chicken breasts is a shared responsibility. Data needed for a comprehensive Canadian Salmonella risk assessment were identified for future research.
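The growth and inactivation bookkeeping along a retail-to-table pathway can be sketched in log10 units. The rate parameters, D-value, times, and initial concentration below are assumptions for illustration, not the model's Canadian inputs:

```python
# Illustrative sketch of concentration changes along the pathway.
# Concentrations are in log10 CFU per chicken breast; all numbers assumed.

def growth(log_count, rate_log10_per_h, hours, max_log=9.0):
    """Log-linear growth, capped at an assumed maximum population density."""
    return min(max_log, log_count + rate_log10_per_h * hours)

def inactivation(log_count, d_value_min, minutes):
    """Thermal inactivation: each D-value of heating cuts the count tenfold."""
    return log_count - minutes / d_value_min

c_retail = 2.0                         # log10 CFU at retail (assumed)
c_home = growth(c_retail, 0.05, 4.0)   # mild temperature abuse in transit
c_cooked = inactivation(c_home, 0.5, 3.0)  # cooking: D = 0.5 min, 3 min hot
```

Chaining simple growth and inactivation steps per serving, then sampling the inputs, is how such models propagate contamination from retail to the consumed dose.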

12.
Risk Assessment and Control of the Agricultural Product Supply Chain in the Internet of Things Environment (total citations: 2; self-citations: 0; other citations: 2)
With the rapid development of Internet of Things (IoT) technology, IoT-based agricultural product supply chain models are maturing, and China's agricultural product supply chains are at a critical stage of transition from traditional to modern operation. New technologies and new supply chain operating models raise the value of agricultural supply chains and reduce traditional supply chain risks, but they also introduce new risks. First, based on the operating model of the agricultural product supply chain in an IoT environment, risks across the entire supply chain are identified along the three layers of the IoT architecture; the identified risk factors comprise perception-layer risks, network-layer risks, application-layer risks, and other risks. Next, the OWA (ordered weighted averaging) operator is used to quantitatively evaluate and rank the risk factors, and, based on the evaluation results, a supply chain risk diffusion-convergence model is used to derive quantitative indicators of risk fluctuation in the supply chain. Finally, based on the model results, measures and recommendations for managing and controlling agricultural product supply chain risk in the IoT environment are proposed.
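The OWA aggregation step can be sketched directly: sort the argument values in descending order, then apply position weights. The expert scores and weight vector below are illustrative assumptions, not the paper's data:

```python
# Sketch of OWA (ordered weighted averaging) aggregation for ranking
# risk factors. Scores and weights are illustrative only.

def owa(values, weights):
    """OWA: sort values in descending order, then take the weighted sum
    with position weights (weights must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Four experts score each risk factor on a 0-10 scale (assumed numbers);
# the weight vector discounts the extreme highest and lowest opinions.
position_weights = [0.1, 0.4, 0.4, 0.1]
scores = {
    "perception_layer": owa([7, 9, 6, 8], position_weights),
    "network_layer": owa([5, 6, 4, 5], position_weights),
}

ranking = sorted(scores.items(), key=lambda kv: -kv[1])
```

Unlike a plain weighted average, OWA weights positions in the sorted order rather than specific experts, which dampens outlier judgments before the risk factors are ranked.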

13.
The Monte Carlo (MC) simulation approach is traditionally used in food safety risk assessment to study quantitative microbial risk assessment (QMRA) models. When experimental data are available, performing Bayesian inference is a good alternative approach that allows backward calculation in a stochastic QMRA model to update the experts' knowledge about the microbial dynamics of a given food-borne pathogen. In this article, we propose a complex example where Bayesian inference is applied to a high-dimensional second-order QMRA model. The case study is a farm-to-fork QMRA model considering genetic diversity of Bacillus cereus in a cooked, pasteurized, and chilled courgette purée. Experimental data are Bacillus cereus concentrations measured in packages of courgette purées stored at different time-temperature profiles after pasteurization. To perform a Bayesian inference, we first built an augmented Bayesian network by linking a second-order QMRA model to the available contamination data. We then ran a Markov chain Monte Carlo (MCMC) algorithm to update all the unknown concentrations and unknown quantities of the augmented model. About 25% of the prior beliefs are strongly updated, leading to a reduction in uncertainty. Some updates interestingly question the QMRA model.

14.
Probabilistic risk assessment (PRA) is a relatively new tool in the nuclear industry. The Reactor Safety Study started the present trend of conducting PRAs for nuclear power plants when it was published in 1975. Now, nine years later, those in the industry currently using PRA techniques are frequently asked the same question: Why should the nuclear utility industry, with so many accepted analytical tools already available, invest the time and manpower to develop a new technique with so many uncertainties?

15.
A model for the assessment of exposure to Listeria monocytogenes from cold-smoked salmon consumption in France was presented in the first of this pair of articles (Pouillot et al., 2007, Risk Analysis, 27: 683–700). In the present study, the exposure model output was combined with an internationally accepted hazard characterization model, adapted to the French situation, to assess the risk of invasive listeriosis from cold-smoked salmon consumption in France in a second-order Monte Carlo simulation framework. The annual number of cases of invasive listeriosis due to cold-smoked salmon consumption in France is estimated to be 307, with a very large credible interval ([10; 12,453]), reflecting data uncertainty. This uncertainty is mainly associated with the dose-response model. Despite the significant uncertainty associated with the predictions, this model provides a scientific base for risk managers and food business operators to manage the risk linked to cold-smoked salmon contaminated with L. monocytogenes. Under the modeling assumptions, risk would be efficiently reduced through a decrease in the prevalence of L. monocytogenes or better control of the last steps of the cold chain (shorter and/or colder storage during the consumer step), whereas reduction of the initial contamination levels of the contaminated products and improvement in the first steps of the cold chain do not seem to be promising strategies. An attempt to apply the recent risk-based concept of FSO (food safety objective) to this example underlines the ambiguity in practical implementation of the risk management metrics and the need for further elaboration of these concepts.
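A second-order Monte Carlo separates uncertainty from variability with two nested loops: the outer loop samples uncertain parameters, the inner loop samples variability across servings. The sketch below illustrates only that structure; the dose distribution, dose-response form, and all parameter values are invented, not those of the listeriosis model:

```python
import random

random.seed(11)

# Illustrative second-order Monte Carlo. All numbers are assumptions.

def annual_cases(r_dose_response, n_servings=2000):
    """Inner loop: variability in dose across contaminated servings."""
    cases = 0
    for _ in range(n_servings):
        dose = random.lognormvariate(2.0, 1.5)  # organisms per serving
        # crude single-hit dose-response form
        p_ill = 1.0 - (1.0 - r_dose_response) ** dose
        if random.random() < min(1.0, p_ill):
            cases += 1
    return cases

# Outer loop: uncertainty about the dose-response parameter r
outer = sorted(annual_cases(random.lognormvariate(-12.0, 1.0))
               for _ in range(200))
median_cases = outer[len(outer) // 2]
credible_90 = (outer[int(0.05 * len(outer))], outer[int(0.95 * len(outer))])
```

Because the dose-response parameter varies only in the outer loop, the spread of `outer` is a credible interval that reflects parameter uncertainty, exactly the kind of wide interval ([10; 12,453]) the abstract reports.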

16.
Modeling Logistic Performance in Quantitative Microbial Risk Assessment (total citations: 1; self-citations: 0; other citations: 1)
In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain, with its queues (storages, shelves) and mechanisms for ordering products, is usually not taken into account. As a consequence, storage times, which are mutually dependent across successive steps in the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.

17.
Modeling Microbial Growth Within Food Safety Risk Assessments (total citations: 5; self-citations: 0; other citations: 5)
Risk estimates for food-borne infection will usually depend heavily on numbers of microorganisms present on the food at the time of consumption. As these data are seldom available directly, attention has turned to predictive microbiology as a means of inferring exposure at consumption. Codex guidelines recommend that microbiological risk assessment should explicitly consider the dynamics of microbiological growth, survival, and death in foods. This article describes predictive models and resources for modeling microbial growth in foods, and their utility and limitations in food safety risk assessment. We also aim to identify tools, data, and knowledge sources, and to provide an understanding of the microbial ecology of foods so that users can recognize model limits, avoid modeling unrealistic scenarios, and thus be able to appreciate the levels of confidence they can have in the outputs of predictive microbiology models. The microbial ecology of foods is complex. Developing reliable risk assessments involving microbial growth in foods will require the skills of both microbial ecologists and mathematical modelers. Simplifying assumptions will need to be made, but because of the potential for apparently small errors in growth rate to translate into very large errors in the estimate of risk, the validity of those assumptions should be carefully assessed. Quantitative estimates of absolute microbial risk within narrow confidence intervals do not yet appear to be possible. Nevertheless, the expression of microbial ecology knowledge in "predictive microbiology" models does allow decision support using the tools of risk assessment.

18.
Standard experimental designs for conducting developmental toxicity studies typically include three or four dose levels in addition to a control group. Some researchers have suggested that designs with more exposure groups would improve dose-response characterization and risk estimation. Such proposals have not, however, been supported by the results of simulation studies, which instead back the use of fewer dose levels. This discrepancy is partly due to using a known dose-response pattern to generate data, making model choice obvious. While the carcinogenicity literature has explored the implications of different study designs, little attention has been given to the role of design in developmental toxicity risk assessment (or noncancer toxicology in general). In this research, we explore the implications of various experimental designs for developmental toxicity by resampling data from a large study of 2,4,5-trichlorophenoxyacetic acid in mice. We compare the properties of benchmark dose (BMD) estimation for different design strategies by randomly selecting animals within particular dose groups from the entire 2,4,5-T database of over 77,000 birth outcomes to create smaller "pseudo-studies" that are representative of standard bioassay sample sizes. Our results show that experimental designs which include more dose levels have advantages in terms of risk characterization and estimation.

19.
Research on Supply-Chain-Based Enterprise Credit Risk Assessment (total citations: 1; self-citations: 0; other citations: 1)
An enterprise credit risk assessment indicator system based on the supply chain is proposed, which comprehensively evaluates a loan-applicant enterprise's repayment capacity and overcomes a shortcoming of current enterprise credit risk assessment, namely that applicant enterprises are evaluated in isolation. Using a BP (backpropagation) neural network, a mathematical risk assessment model was developed on the basis of the newly proposed indicator system. The results of a case study show that the model is practical and can evaluate enterprise loan applications effectively and accurately.
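A minimal BP network of the kind mentioned above can be sketched in pure Python: a single hidden layer trained by backpropagation on indicator vectors. The three toy indicators, the labels, and the network size are invented for illustration; the paper's indicator system is far richer:

```python
import math
import random

random.seed(7)

# Minimal backpropagation (BP) network: 3 indicator inputs, one hidden
# layer, one sigmoid output in [0, 1] read as creditworthiness.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

N_IN, N_HID = 3, 4
w1 = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
b1 = [0.0] * N_HID
w2 = [random.uniform(-1, 1) for _ in range(N_HID)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    out = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, out

def train_step(x, target, lr=0.5):
    """One stochastic gradient step on the squared-error loss."""
    global b2
    h, out = forward(x)
    delta_out = (out - target) * out * (1 - out)
    for j in range(N_HID):
        delta_h = delta_out * w2[j] * h[j] * (1 - h[j])  # before w2 update
        w2[j] -= lr * delta_out * h[j]
        for i in range(N_IN):
            w1[j][i] -= lr * delta_h * x[i]
        b1[j] -= lr * delta_h
    b2 -= lr * delta_out

# Toy data: (liquidity, supply-chain stability, repayment history) -> label
data = [([0.9, 0.8, 0.9], 1.0), ([0.2, 0.3, 0.1], 0.0),
        ([0.8, 0.9, 0.7], 1.0), ([0.1, 0.2, 0.3], 0.0)]

def mean_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

loss_before = mean_loss()
for _ in range(500):
    for x, y in data:
        train_step(x, y)
loss_after = mean_loss()
```

After training, the network's output can be thresholded to accept or reject a loan application; in practice one would use a proper library, held-out validation data, and the full indicator set.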

20.
As part of its preparation to review a potential license application from the U.S. Department of Energy (DOE), the U.S. Nuclear Regulatory Commission (NRC) is examining the performance of the proposed Yucca Mountain nuclear waste repository. In this regard, we evaluated postclosure repository performance using Monte Carlo analyses with an NRC-developed system model that has 950 input parameters, of which 330 are sampled to represent system uncertainties. The quantitative compliance criterion for dose was established by NRC to protect inhabitants who might be exposed to any releases from the repository. The NRC criterion limits the peak-of-the-mean dose, which in our analysis is estimated by averaging the potential exposure at any instant in time over all Monte Carlo realizations, and then determining the maximum value of the mean curve within 10,000 years, the compliance period. This procedure contrasts in important ways with a more common measure of risk based on the mean of the ensemble of peaks from each Monte Carlo realization. The NRC chose the former (peak-of-the-mean) because it more correctly represents the risk to an exposed individual. Procedures for calculating risk in the expected case of slow repository degradation differ from those for low-probability cases of disruption by external forces such as volcanism. We also explored the possibility of risk dilution (i.e., lower calculated risk) that could result from arbitrarily defining wide probability distributions for certain parameters. Finally, our sensitivity analyses to identify influential parameters used two approaches: (1) the ensemble of doses from each Monte Carlo realization at the time of the peak risk (i.e., peak-of-the-mean), and (2) the ensemble of peak doses calculated from each realization within 10,000 years. The latter measure appears to have more discriminatory power than the former for many parameters (based on the greater magnitude of the sensitivity coefficient), but can yield different rankings, especially for parameters that influence the timing of releases.
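The contrast between the two risk measures can be sketched directly: average across realizations first and then take the maximum (peak-of-the-mean), versus take each realization's maximum first and then average (mean-of-peaks). The dose-history model below is invented for illustration; the point is only that when release timing varies between realizations, the mean-of-peaks is the larger measure:

```python
import math
import random

random.seed(3)

n_real, n_times = 200, 50

def dose_history():
    """One Monte Carlo realization: a dose pulse whose timing and
    magnitude vary between realizations (illustrative model)."""
    peak_time = random.randint(10, 40)
    scale = random.lognormvariate(0, 1)
    return [scale * math.exp(-((t - peak_time) / 5.0) ** 2)
            for t in range(n_times)]

histories = [dose_history() for _ in range(n_real)]

# Peak of the mean: average across realizations at each time, then take the max
mean_curve = [sum(h[t] for h in histories) / n_real for t in range(n_times)]
peak_of_mean = max(mean_curve)

# Mean of the peaks: take each realization's maximum, then average
mean_of_peaks = sum(max(h) for h in histories) / n_real
```

Since the maximum of an average can never exceed the average of the maxima, `peak_of_mean <= mean_of_peaks` always holds, with the gap growing as the peak times spread out; this is why the two measures can rank influential parameters differently.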


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号