20 similar documents found (search time: 93 ms)
1.
The individual plant analyses in the U.S. Nuclear Regulatory Commission's reassessment of the risk from commercial nuclear power plants (NUREG-1150) consist of four parts: systems analysis, accident-progression analysis, source-term analysis, and consequence analysis. Careful definition of the interfaces between these parts is necessary for both information flow and computational efficiency. This paper describes the procedure used to define the interface between the source-term analysis and the consequence analysis. This interface is accomplished by forming groups of source terms with similar properties and then performing one set of MACCS calculations for each group.
2.
William Cannell. Risk Analysis, 1987, 7(3): 311-319
Although unpublicized, the use of quantitative safety goals and probabilistic reliability analysis for licensing nuclear reactors has become a reality in the United Kingdom. This conclusion results from an examination of the process leading to the licensing of the Sizewell B PWR in England. The licensing process for this reactor has substantial implications for nuclear safety standards in Britain, and is examined in the context of the growing trend towards quantitative safety goals in the United States.
3.
Probabilistic seismic risk analysis is a well-established method in the insurance industry for modeling portfolio losses from earthquake events. In this context, precise exposure locations are often unknown. So far, however, location uncertainty has received little research attention. In this article, we propose a novel framework for the treatment of location uncertainty. As a case study, a large number of synthetic portfolios resembling typical real-world cases were created. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or proportion of risk items with unknown coordinates on the variability of loss frequency estimations. The results indicate that, due to loss aggregation effects and spatial hazard variability, location uncertainty in isolation and in conjunction with ground motion uncertainty can induce significant variability in probabilistic loss results, especially for portfolios with a small number of risks. After quantifying its effect, we conclude that location uncertainty should not be neglected when assessing probabilistic seismic risk; it should be treated stochastically, and the resulting variability should be visualized and interpreted carefully.
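The effect this abstract describes can be illustrated with a toy Monte Carlo sketch. Everything here is invented for illustration (the 1-D region, the hazard decay function, the portfolio sizes and values); it is not the article's model, only a minimal demonstration that resampling unknown locations spreads the expected-annual-loss estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D region [0, 100] km; the annual rate of damaging shaking decays
# with distance from a fault at x = 0. All numbers are illustrative.
def annual_damage_rate(x):
    return 0.02 * np.exp(-x / 30.0)

n_risks = 20
values = rng.lognormal(mean=0.0, sigma=1.0, size=n_risks)  # insured values
known_x = rng.uniform(0, 100, size=n_risks)

unknown_frac = 0.5                        # proportion with unknown coordinates
unknown = rng.random(n_risks) < unknown_frac

# Re-evaluate the expected annual loss (EAL) for many realizations of the
# unknown locations, sampled uniformly over the region.
eal_samples = []
for _ in range(1000):
    x = known_x.copy()
    x[unknown] = rng.uniform(0, 100, size=unknown.sum())
    eal_samples.append(np.sum(values * annual_damage_rate(x)))

eal_samples = np.array(eal_samples)
cv = eal_samples.std() / eal_samples.mean()  # spread induced by location uncertainty
print(f"EAL mean = {eal_samples.mean():.3f}, CV = {cv:.2f}")
```

Shrinking the portfolio or increasing `unknown_frac` widens the spread, consistent with the abstract's finding that small portfolios are most affected.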
4.
Bernard L. Cohen. Risk Analysis, 1983, 3(4): 237-243
The differences between probabilistic risk assessment (PRA) and safety analysis (SA) are discussed, and it is shown that PRA is more suitable than SA for determining the acceptability of a technology. Since a PRA by the fault tree-event tree analysis method used for reactor safety studies does not seem to be practical for buried waste, an alternative approach is suggested using geochemical analogs. This method is illustrated for the cases of high-level and low-level radioactive waste and for chemical carcinogens released in coal burning.
5.
Probabilistic Risk Analysis and Game Theory
Kjell Hausken. Risk Analysis, 2002, 22(1): 17-27
The behavioral dimension matters in Probabilistic Risk Analysis (PRA) since players throughout a system incur costs to increase system reliability interpreted as a public good. Individual strategies at the subsystem level generally conflict with collective desires at the system level. Game theory, the natural tool to analyze individual-collective conflicts that affect risk, is integrated into PRA. Conflicts arise in series, parallel, and summation systems over which player(s) prefer(s) to incur the cost of risk reduction. Frequently, the series, parallel, and summation systems correspond to the four most common games in game theory, i.e., the coordination game, the battle of the sexes and the chicken game, and prisoner's dilemma, respectively.
6.
Sensitivity Analysis of a Two-Dimensional Probabilistic Risk Assessment Model Using Analysis of Variance
This article demonstrates application of sensitivity analysis to risk assessment models with two-dimensional probabilistic frameworks that distinguish between variability and uncertainty. A microbial food safety process risk (MFSPR) model is used as a test bed. The process of identifying key controllable inputs and key sources of uncertainty using sensitivity analysis is challenged by typical characteristics of MFSPR models such as nonlinearity, thresholds, interactions, and categorical inputs. Among many available sensitivity analysis methods, analysis of variance (ANOVA) is evaluated in comparison to commonly used methods based on correlation coefficients. In a two-dimensional risk model, the identification of key controllable inputs that can be prioritized with respect to risk management is confounded by uncertainty. However, as shown here, ANOVA provided robust insights regarding controllable inputs most likely to lead to effective risk reduction despite uncertainty. ANOVA appropriately selected the top six important inputs, while correlation-based methods provided misleading insights. Bootstrap simulation is used to quantify uncertainty in ranks of inputs due to sampling error. For the selected sample size, differences in F values of 60% or more were associated with clear differences in rank order between inputs. Sensitivity analysis results identified inputs related to the storage of ground beef servings at home as the most important. Risk management recommendations are suggested in the form of a consumer advisory for better handling and storage practices.
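Why ANOVA-based ranking can beat correlation coefficients for the nonlinear, thresholded models described above can be shown with a small synthetic sketch. The model, inputs, and bin count below are invented for illustration and have nothing to do with the MFSPR model itself: a threshold input that correlation almost misses is ranked first by a hand-rolled one-way ANOVA F value:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Two illustrative uncertain inputs: x1 acts through a symmetric threshold
# (strong but non-monotonic), x2 acts weakly but linearly.
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = (np.abs(x1) > 0.5).astype(float) * 2.0 + 0.3 * x2 + rng.normal(0, 0.1, n)

def anova_f(x, y, n_bins=10):
    """One-way ANOVA F value of y across equal-count bins of x."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    groups = [y[idx == k] for k in range(n_bins)]
    grand = y.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (n_bins - 1)) / (ss_within / (len(y) - n_bins))

f1, f2 = anova_f(x1, y), anova_f(x2, y)
r1, r2 = abs(np.corrcoef(x1, y)[0, 1]), abs(np.corrcoef(x2, y)[0, 1])
# ANOVA ranks the dominant threshold input x1 first; correlation nearly
# misses it because the |x1| > 0.5 effect is symmetric around zero.
print(f"ANOVA F: x1={f1:.0f}, x2={f2:.0f};  |corr|: x1={r1:.2f}, x2={r2:.2f}")
```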
7.
A model is constructed for the failure frequency of underground pipelines per kilometer-year, as a function of pipe and environmental characteristics. The parameters in the model were quantified, with uncertainty, using historical data and structured expert judgment. Fifteen experts from institutes in the Netherlands, the United Kingdom, Italy, France, Germany, Belgium, Denmark, and Canada participated in the study.
8.
Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probability phenomenon. The risk estimates are calculated based on individuals and then, using uncertainty factors (UFs), are extrapolated to the population that is characterized by variability. Regulatory agencies have recommended the quantification of the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework that deals with the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically-based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor UFH-TK was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical at the target tissue organ (i.e., the UFH-TK) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2x) and toxicodynamics (3.2x). This article deals exclusively with the toxicokinetic component of the UF.
The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose, rather than applied dose. It allows for the replacement of the default adult and children intraspecies UF with toxicokinetic data-derived values and provides accurate chemical-specific estimates for their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.
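The core computation behind a data-derived UFH-TK, as described above, is a percentile ratio of a simulated tissue-dose distribution. The sketch below is a deliberately schematic stand-in for the article's physiologically-based model: a steady-state one-compartment surrogate (tissue concentration proportional to dose rate over clearance) with invented lognormal population spreads, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Schematic surrogate: at steady state, average target-tissue concentration
# ~ dose_rate / clearance. Both population PDFs are illustrative lognormals.
dose_rate = rng.lognormal(mean=np.log(1.0), sigma=0.3, size=n)  # mg/kg-day
clearance = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n)  # L/kg-day

tissue_conc = dose_rate / clearance   # population distribution of the dose metric

p50, p95 = np.percentile(tissue_conc, [50, 95])
uf_tk = p95 / p50                     # data-derived intraspecies TK factor
print(f"P50 = {p50:.3f}, P95 = {p95:.3f}, UF_H-TK = {uf_tk:.2f}")
```

With these assumed spreads the ratio lands below the default 3.2x toxicokinetic half of the 10-fold factor; real chemical-specific distributions could land on either side, which is exactly why the framework replaces the default with a data-derived value.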
9.
Wout Slob, Martine I. Bakker, Jan Dirk te Biesebeek, Bas G. H. Bokkers. Risk Analysis, 2014, 34(8): 1401-1422
Current methods for cancer risk assessment result in single values, without any quantitative information on the uncertainties in these values. Therefore, single risk values could easily be overinterpreted. In this study, we discuss a full probabilistic cancer risk assessment approach in which all the generally recognized uncertainties in both exposure and hazard assessment are quantitatively characterized and probabilistically evaluated, resulting in a confidence interval for the final risk estimate. The methodology is applied to three example chemicals (aflatoxin, N-nitrosodimethylamine, and methyleugenol). These examples illustrate that the uncertainty in a cancer risk estimate may be huge, making single value estimates of cancer risk meaningless. Further, a risk based on linear extrapolation tends to be lower than the upper 95% confidence limit of a probabilistic risk estimate, and in that sense it is not conservative. Our conceptual analysis showed that there are two possible basic approaches for cancer risk assessment, depending on the interpretation of the dose-incidence data measured in animals. However, it remains unclear which of the two interpretations is the more adequate one, adding an additional uncertainty to the already huge confidence intervals for cancer risk estimates.
10.
Integration of Probabilistic Exposure Assessment and Probabilistic Hazard Characterization
A method is proposed for integrated probabilistic risk assessment where exposure assessment and hazard characterization are both included in a probabilistic way. The aim is to specify the probability that a random individual from a defined (sub)population will have an exposure high enough to cause a particular health effect of a predefined magnitude, the critical effect size (CES). The exposure level that results in exactly that CES in a particular person is that person's individual critical effect dose (ICED). Individuals in a population typically show variation, both in their individual exposure (IEXP) and in their ICED. Both the variation in IEXP and the variation in ICED are quantified in the form of probability distributions. Assuming independence between both distributions, they are combined (by Monte Carlo) into a distribution of the individual margin of exposure (IMoE). The proportion of the IMoE distribution below unity is the probability of critical exposure (PoCE) in the particular (sub)population. Uncertainties involved in the overall risk assessment (i.e., both regarding exposure and effect assessment) are quantified using Monte Carlo and bootstrap methods. This results in an uncertainty distribution for any statistic of interest, such as the probability of critical exposure (PoCE). The method is illustrated based on data for the case of dietary exposure to the organophosphate acephate. We present plots that concisely summarize the probabilistic results, retaining the distinction between variability and uncertainty. We show how the relative contributions from the various sources of uncertainty involved may be quantified.
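The IMoE/PoCE construction described in this abstract reduces to a short Monte Carlo recipe. The distributions below are illustrative lognormals, not the acephate data, and the bootstrap merely illustrates how an uncertainty distribution for the PoCE statistic can be produced:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Illustrative variability distributions (units: mg/kg bw/day):
# IEXP = individual exposure, ICED = individual critical effect dose.
iexp = rng.lognormal(mean=np.log(0.5), sigma=0.8, size=n)
iced = rng.lognormal(mean=np.log(20.0), sigma=0.6, size=n)

imoe = iced / iexp             # individual margin of exposure
poce = np.mean(imoe < 1.0)     # probability of critical exposure

# Bootstrap resampling to illustrate an uncertainty distribution for PoCE.
boot = np.array([np.mean(rng.choice(imoe, size=10_000) < 1.0)
                 for _ in range(200)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"PoCE = {poce:.5f} (bootstrap 95% interval: {lo:.5f} to {hi:.5f})")
```

The independence assumption between IEXP and ICED is exactly the one the abstract states; a dependence structure would require sampling the two jointly instead.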
11.
Challenges to the Acceptance of Probabilistic Risk Analysis
This paper discusses a number of the key challenges to the acceptance and application of probabilistic risk analysis (PRA). Those challenges include: (a) the extensive reliance on subjective judgment in PRA, requiring the development of guidance for the use of PRA in risk-informed regulation, and possibly the development of robust or reference prior distributions to minimize the reliance on judgment; and (b) the treatment of human performance in PRA, including not only human error per se but also management and organizational factors more broadly. All of these areas are seen as presenting interesting research challenges at the interface between engineering and other disciplines.
12.
There exists a growing desire to base safety criteria in different fields on the same principles. The current approach of the International Commission on Radiological Protection (ICRP) to controlling radiation exposure touches many aspects, such as social, psychological, or economic factors, that are important for such principles. This paper attempts to further explore possible ways of defining a common basis for dealing with radiation risks and other safety problems. Specifically, it introduces the following issues: (1) different types of risk are judged differently; to account for this, the concept of risk categories is introduced. (2) The dimension of time may play an important role: there is a difference between an immediate death and a death occurring 20 years after exposure to radiation. Effects such as reduced quality of life after exposure and reduced life expectancy are discussed. The paper suggests introducing an individual risk equivalent that allows risks defined in various fields to be compared. Furthermore, it suggests the use of risk acceptance criteria that depend on the different categories of risk.
13.
Edidiong Ekaette, Robert C. Lee, David L. Cooke, Sandra Iftody, Peter Craighead. Risk Analysis, 2007, 27(6): 1395-1410
14.
Julius Goodman. Risk Analysis, 1986, 6(2): 235-244
The comparison and ranking of risks is very important for safety and cost-benefit analysis. Risks are most often presented in the form of probability distributions. Different ranking criteria for probability distributions are considered. It is demonstrated that significantly overlapping distributions lead to ambiguous results. For this reason, criteria of insignificant overlap between distributions are proposed. The first criterion uses an information theory approach; the second uses a statistical tests approach. Both approaches can be applied in decision theory to avoid questionable decisions based on statistically insignificant differences between two risks.
15.
Site-Specific Applications of Probabilistic Health Risk Assessment: Review of the Literature Since 2000
Whether and to what extent contaminated sites harm ecologic and human health are topics of considerable interest, but also considerable uncertainty. Several federal and state agencies have approved the use of some or many aspects of probabilistic risk assessment (PRA), but its site-specific application has often been limited to high-profile sites and large projects. Nonetheless, times are changing: newly developed software tools, and recent federal and state guidance documents formalizing PRA procedures, now make PRA a readily available method of analysis for even small-scale projects. This article presents and discusses a broad review of PRA literature published since 2000.
16.
This article presents a framework for using probabilistic terrorism risk modeling in regulatory analysis. We demonstrate the framework with an example application involving a regulation under consideration, the Western Hemisphere Travel Initiative for the Land Environment (WHTI-L). First, we estimate annualized loss from terrorist attacks with the Risk Management Solutions (RMS) Probabilistic Terrorism Model. We then estimate the critical risk reduction, which is the risk-reducing effectiveness of WHTI-L needed for its benefit, in terms of reduced terrorism loss in the United States, to exceed its cost. Our analysis indicates that the critical risk reduction depends strongly not only on uncertainties in the terrorism risk level, but also on uncertainty in the cost of regulation and how casualties are monetized. For a terrorism risk level based on the RMS standard risk estimate, the baseline regulatory cost estimate for WHTI-L, and a range of casualty cost estimates based on the willingness-to-pay approach, our estimate for the expected annualized loss from terrorism ranges from $2.7 billion to $5.2 billion. For this range in annualized loss, the critical risk reduction for WHTI-L ranges from 7% to 13%. Basing results on a lower risk level that results in halving the annualized terrorism loss would double the critical risk reduction (14-26%), and basing the results on a higher risk level that results in a doubling of the annualized terrorism loss would cut the critical risk reduction in half (3.5-6.6%). Ideally, decisions about terrorism security regulations and policies would be informed by true benefit-cost analyses in which the estimated benefits are compared to costs. Such analyses for terrorism security efforts face substantial impediments stemming from the great uncertainty in the terrorist threat and the very low recurrence interval for large attacks.
Several approaches can be used to estimate how a terrorism security program or regulation reduces the distribution of risks it is intended to manage. But continued research to develop additional tools and data is necessary to support application of these approaches. These include refinement of models and simulations, engagement of subject matter experts, implementation of program evaluation, and estimation of the costs of casualties from terrorism events.
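The critical-risk-reduction arithmetic in the abstract above reduces to a one-line break-even formula, r* = C / L: the regulation pays for itself when it cuts the expected annualized terrorism loss L by at least its annualized cost C. The cost figure below is a hypothetical placeholder (chosen so the outputs fall in the abstract's 7-13% band), not the article's WHTI-L estimate:

```python
# Break-even risk reduction: benefit r* x L equals annualized cost C, so
# r* = C / L. Halving L doubles r*; doubling L halves it, which is exactly
# the scaling the abstract reports (14-26% and 3.5-6.6%).
def critical_risk_reduction(cost: float, annual_loss: float) -> float:
    return cost / annual_loss

C = 0.36e9  # hypothetical annualized regulatory cost (USD), not from the article
for L in (2.7e9, 5.2e9):
    r = critical_risk_reduction(C, L)
    print(f"L = ${L / 1e9:.1f}B -> critical risk reduction = {100 * r:.1f}%")
```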
17.
Klaus Schneeberger, Matthias Huttenlau, Benjamin Winter, Thomas Steinberger, Stefan Achleitner, Johann Stötter. Risk Analysis, 2019, 39(1): 125-139
This article presents a flood risk analysis model that considers the spatially heterogeneous nature of flood events. The basic concept of this approach is to generate a large sample of flood events that can be regarded as a temporal extrapolation of observed flood events. These are combined with cumulative flood impact indicators, such as building damages, to derive time series of damages for risk estimation. To this end, a multivariate modeling procedure that is able to take into account the spatial characteristics of flooding, the regionalization method top-kriging, and three different impact indicators are combined in a model chain. Eventually, the expected annual flood impact (e.g., expected annual damages) and the flood impact associated with a low probability of occurrence are determined for a study area. The risk model has the potential to augment the understanding of flood risk in a region and thereby contribute to enhanced risk management by, for example, risk analysts, policymakers, or insurance companies. The modeling framework was successfully applied in a proof-of-concept exercise in Vorarlberg (Austria). The results of the case study show that risk analysis has to be based on spatially heterogeneous flood events in order to estimate flood risk adequately.
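The last step of the model chain described above, turning a synthetic event set into risk metrics, can be sketched generically. The event frequencies and damage distribution below are invented stand-ins for the article's spatially explicit event generator and impact indicators; only the reduction from a damage time series to an expected annual damage and a low-probability impact is illustrated:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic event set: Poisson number of flood events per year, each event
# contributing a heavy-tailed aggregated damage (in M EUR). All parameters
# are illustrative.
n_years = 10_000
events_per_year = rng.poisson(lam=0.8, size=n_years)
annual_damage = np.array([
    rng.lognormal(mean=np.log(5.0), sigma=1.2, size=k).sum()
    for k in events_per_year
])

ead = annual_damage.mean()                      # expected annual damage
d100 = np.quantile(annual_damage, 1 - 1 / 100)  # ~100-year annual damage
print(f"EAD = {ead:.1f} M, 100-year annual damage = {d100:.1f} M")
```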
18.
Olivier Catelinois, Dominique Laurier, Pierre Verger, Agnès Rogel, Marc Colonna, Marianne Ignasiak, Denis Hémon, Margot Tirmarche. Risk Analysis, 2005, 25(2): 243-252
The increase in the thyroid cancer incidence in France observed over the last 20 years has raised public concern about its association with the 1986 nuclear power plant accident at Chernobyl. At the request of French authorities, a first study sought to quantify the possible risk of thyroid cancer associated with the Chernobyl fallout in France. This study suffered from two limitations. The first involved the lack of knowledge of spontaneous thyroid cancer incidence rates (in the absence of exposure), which was especially necessary to take their trends into account for projections over time; the second was the failure to consider the uncertainties. The aim of this article is to enhance the initial thyroid cancer risk assessment for the period 1991-2007 in the area of France most exposed to the fallout (i.e., eastern France) and thereby mitigate these limitations. We consider the changes over time in the incidence of spontaneous thyroid cancer and conduct both uncertainty and sensitivity analyses. The number of spontaneous thyroid cancers was estimated from French cancer registries on the basis of two scenarios: one with a constant incidence, the other using the trend observed. Thyroid doses were estimated from all available data about contamination in France from Chernobyl fallout. Results from a 1995 pooled analysis published by Ron et al. were used to determine the dose-response relation. Depending on the scenario, the number of spontaneous thyroid cancer cases ranges from 894 (90% CI: 869-920) to 1,716 (90% CI: 1,691-1,741). The number of excess thyroid cancer cases predicted ranges from 5 (90% UI: 1-15) to 63 (90% UI: 12-180). All of the assumptions underlying the thyroid cancer risk assessment are discussed.
19.
The life cycle assessment (LCA) framework has established itself as the leading tool for assessing the environmental impact of products. Several works have established the need to integrate the LCA and risk analysis methodologies, given their several common aspects. One way to reach such integration is to guarantee that uncertainties in LCA modeling are carefully treated. It has been claimed that more attention should be paid to quantifying the uncertainties present in the various phases of LCA. Though the topic has been attracting increasing attention from practitioners and experts in LCA, there is still a lack of understanding and a limited use of the available statistical tools. In this work, we introduce a protocol for conducting global sensitivity analysis in LCA. The article focuses on the life cycle impact assessment (LCIA), and particularly on the relevance of global techniques for the development of trustworthy impact assessment models. We use a novel characterization model developed for quantifying the impacts of noise on humans as a test case. We show that global sensitivity analysis is fundamental to guarantee that the modeler has a complete understanding of (i) the structure of the model and (ii) the importance of uncertain model inputs and the interactions among them.
20.
Risk Analysis, 2018, 38(1): 163-176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically-based risk estimates based on a single statistical model selected from the scientific literature, called the "core" model. The uncertainty presented for "core" risk estimates reflects only the statistical uncertainty associated with that one model's concentration-response function parameter estimate(s). However, epidemiologically-based risk estimates are also subject to "model uncertainty," which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long-term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.