Similar Documents
20 similar documents found.
1.
In this work, we study the effect of epistemic uncertainty on the ranking and categorization of elements of probabilistic safety assessment (PSA) models. We show that, while in a deterministic setting a PSA element belongs unambiguously to a given category, in the presence of epistemic uncertainty a PSA element belongs to a given category only with a certain probability. We propose an approach to estimate these probabilities, showing that their knowledge allows one to appreciate "the sensitivity of component categorizations to uncertainties in the parameter values" (U.S. NRC Regulatory Guide 1.174). We investigate the meaning and utilization of an assignment method based on the expected value of importance measures. We discuss the problem of evaluating changes in quality assurance, maintenance activity prioritization, etc., in the presence of epistemic uncertainty. We show that the inclusion of epistemic uncertainty in the evaluation makes it necessary to evaluate changes through their effect on PSA model parameters. We propose a categorization of parameters based on the Fussell-Vesely and differential importance measure (DIM). In addition, when evaluating changes affecting groups of components, issues arise in the calculation of the expected value of the joint importance measure; we illustrate how this problem can be solved using DIM. A numerical application to a case study concludes the work.

2.
Probabilistic seismic risk analysis is a well-established method in the insurance industry for modeling portfolio losses from earthquake events. In this context, precise exposure locations are often unknown. So far, however, location uncertainty has received relatively little research attention. In this article, we propose a novel framework for the treatment of location uncertainty. As a case study, a large number of synthetic portfolios resembling typical real-world cases were created. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or the proportion of risk items with unknown coordinates on the variability of loss frequency estimations. The results indicate that, due to loss aggregation effects and spatial hazard variability, location uncertainty in isolation and in conjunction with ground motion uncertainty can introduce significant variability into probabilistic loss results, especially for portfolios with a small number of risks. After quantifying its effect, we conclude that location uncertainty should not be neglected when assessing probabilistic seismic risk; it should be treated stochastically, and the resulting variability should be visualized and interpreted carefully.

3.
In risk analysis, the treatment of the epistemic uncertainty associated with the probability of occurrence of an event is fundamental. Traditionally, probability distributions have been used to characterize the epistemic uncertainty due to imprecise knowledge of the parameters in risk models. On the other hand, it has been argued that in certain instances such uncertainty may be best accounted for by fuzzy or possibilistic distributions. This seems to be the case in particular for parameters for which the available information is scarce and of a qualitative nature. In practice, it is to be expected that a risk model contains some parameters affected by uncertainties that may be best represented by probability distributions and some other parameters that may be more properly described in terms of fuzzy or possibilistic distributions. In this article, a hybrid method that jointly propagates probabilistic and possibilistic uncertainties is considered and compared with pure probabilistic and pure fuzzy methods of uncertainty propagation. The analyses are carried out on a case study concerning the uncertainties in the probabilities of occurrence of accident sequences in an event tree analysis of a nuclear power plant.

4.
A method is proposed for integrated probabilistic risk assessment in which exposure assessment and hazard characterization are both treated probabilistically. The aim is to specify the probability that a random individual from a defined (sub)population will have an exposure high enough to cause a particular health effect of a predefined magnitude, the critical effect size (CES). The exposure level that results in exactly that CES in a particular person is that person's individual critical effect dose (ICED). Individuals in a population typically show variation, both in their individual exposure (IEXP) and in their ICED. Both the variation in IEXP and the variation in ICED are quantified in the form of probability distributions. Assuming independence between the two distributions, they are combined (by Monte Carlo) into a distribution of the individual margin of exposure (IMoE). The proportion of the IMoE distribution below unity is the probability of critical exposure (PoCE) in the particular (sub)population. Uncertainties involved in the overall risk assessment (i.e., regarding both exposure and effect assessment) are quantified using Monte Carlo and bootstrap methods. This results in an uncertainty distribution for any statistic of interest, such as the PoCE. The method is illustrated based on data for the case of dietary exposure to the organophosphate acephate. We present plots that concisely summarize the probabilistic results, retaining the distinction between variability and uncertainty. We show how the relative contributions from the various sources of uncertainty involved may be quantified.
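The IMoE construction above lends itself to a short simulation sketch. The lognormal parameters below are illustrative assumptions, not values from the acephate case study; the point is only the mechanics of combining an IEXP and an ICED distribution into a PoCE estimate:

```python
import random

random.seed(42)

N = 100_000
# Hypothetical lognormal parameters (mu, sigma on the log scale) --
# illustrative only, not taken from the acephate case study.
MU_IEXP, SD_IEXP = -2.0, 1.0   # individual exposure, log scale
MU_ICED, SD_ICED = 0.5, 0.8    # individual critical effect dose, log scale

def poce(n=N):
    """Fraction of individuals whose margin of exposure ICED/IEXP < 1."""
    below = 0
    for _ in range(n):
        iexp = random.lognormvariate(MU_IEXP, SD_IEXP)
        iced = random.lognormvariate(MU_ICED, SD_ICED)
        if iced / iexp < 1.0:
            below += 1
    return below / n

print(f"PoCE ~ {poce():.4f}")
```

With these assumed parameters, log(IMoE) is Normal(2.5, sqrt(1.64)), so the analytical PoCE is about 0.025; the simulation should reproduce that.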

5.
Current methods for cancer risk assessment result in single values, without any quantitative information on the uncertainties in these values. Therefore, single risk values could easily be overinterpreted. In this study, we discuss a full probabilistic cancer risk assessment approach in which all the generally recognized uncertainties in both exposure and hazard assessment are quantitatively characterized and probabilistically evaluated, resulting in a confidence interval for the final risk estimate. The methodology is applied to three example chemicals (aflatoxin, N‐nitrosodimethylamine, and methyleugenol). These examples illustrate that the uncertainty in a cancer risk estimate may be huge, making single value estimates of cancer risk meaningless. Further, a risk based on linear extrapolation tends to be lower than the upper 95% confidence limit of a probabilistic risk estimate, and in that sense it is not conservative. Our conceptual analysis showed that there are two possible basic approaches for cancer risk assessment, depending on the interpretation of the dose‐incidence data measured in animals. However, it remains unclear which of the two interpretations is the more adequate one, adding an additional uncertainty to the already huge confidence intervals for cancer risk estimates.

6.
Most public health risk assessments assume and combine a series of average, conservative, and worst-case values to derive a conservative point estimate of risk. This procedure has major limitations. This paper demonstrates a new methodology for extended uncertainty analyses in public health risk assessments using Monte Carlo techniques. The extended method begins, as some conventional methods do, with the preparation of a spreadsheet to estimate exposure and risk. This method, however, continues by modeling key inputs as random variables described by probability density functions (PDFs). Overall, the technique provides a quantitative way to estimate the probability distributions for exposure and health risks within the validity of the model used. As an example, this paper presents a simplified case study of children playing in soils contaminated with benzene and benzo(a)pyrene (BaP).

7.
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic‐possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility‐probability (probability‐possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context.

8.
Concern about the degree of uncertainty and potential conservatism in deterministic point estimates of risk has prompted researchers to turn increasingly to probabilistic methods for risk assessment. With Monte Carlo simulation techniques, distributions of risk reflecting uncertainty and/or variability are generated as an alternative. In this paper, the compounding of conservatism(1) between the level associated with point estimate inputs selected from probability distributions and the level associated with the deterministic value of risk calculated using these inputs is explored. Two measures of compounded conservatism are compared and contrasted. The first measure, F, is defined as the ratio of the risk value R_d, calculated deterministically as a function of n inputs, each at the jth percentile of its probability distribution, to the risk value R_j that falls at the jth percentile of the simulated risk distribution (i.e., F = R_d/R_j). The percentile of the simulated risk distribution that corresponds to the deterministic value R_d serves as a second measure of compounded conservatism. Analytical results for simple products of lognormal distributions are presented. In addition, a numerical treatment of several complex cases is presented, using five simulation analyses from the literature as illustration. Overall, there are cases in which conservatism compounds dramatically for deterministic point estimates of risk constructed from upper percentiles of input parameters, as well as cases for which the effect is less notable. The analytical and numerical techniques discussed are intended to help analysts explore the factors that influence the magnitude of compounding conservatism in specific cases.
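For the lognormal case the first measure has a closed form: if the n inputs are independent lognormals with log-scale standard deviations sigma_i, then R_d = exp(sum mu_i + z * sum sigma_i) while R_j = exp(sum mu_i + z * sqrt(sum sigma_i^2)), so F = exp(z * (sum sigma_i - sqrt(sum sigma_i^2))). A minimal sketch, with hypothetical sigma values rather than those of the cited simulation analyses:

```python
from math import exp, sqrt

# z-score for the chosen percentile (95th -> z ~ 1.645)
Z95 = 1.6449

def compounding_factor(sigmas, z=Z95):
    """F = R_d / R_j for a product of independent lognormals.

    R_d: deterministic risk with every input at its jth percentile.
    R_j: true jth percentile of the risk distribution.
    For lognormals both have closed forms, so F is analytical:
    the log-means cancel in the ratio."""
    s_sum = sum(sigmas)
    s_rss = sqrt(sum(s * s for s in sigmas))
    return exp(z * (s_sum - s_rss))

# Hypothetical example: three inputs with log-scale sigmas 0.5, 0.8, 1.0
print(compounding_factor([0.5, 0.8, 1.0]))
```

With a single input F = 1 (no compounding); with the three sigmas above, F is roughly 4.6, i.e., the deterministic "95th percentile" estimate sits well above the true 95th percentile of the risk distribution.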

9.
This paper demonstrates a new methodology for probabilistic public health risk assessment using the first-order reliability method. The method provides the probability that incremental lifetime cancer risk exceeds a threshold level, and the probabilistic sensitivity quantifying the relative impact of considering the uncertainty of each random variable on the exceedance probability. The approach is applied to a case study given by Thompson et al. (1) on cancer risk caused by ingestion of benzene-contaminated soil, and the results are compared to that of the Monte Carlo method. Parametric sensitivity analyses are conducted to assess the sensitivity of the probabilistic event with respect to the distribution parameters of the basic random variables, such as the mean and standard deviation. The technique is a novel approach to probabilistic risk assessment, and can be used in situations when Monte Carlo analysis is computationally expensive, such as when the simulated risk is at the tail of the risk probability distribution.

10.
Probabilistic safety analysis (PSA) has been used in nuclear, chemical, petrochemical, and several other industries. The probability and/or frequency results of most PSAs are based on average component unavailabilities during the mission of interest. While these average results are useful, they provide no indication of the significance of the facility's current status when one or more components are known to be out of service. Recently, several interactive computational models have been developed for nuclear power plants to allow the user to specify the plant's status at a particular time (i.e., to specify equipment known to be out of service) and then to receive updated PSA information. As with conventional PSA results, there are uncertainties associated with the numerical updated results. These uncertainties stem from a number of sources, including parameter uncertainty (uncertainty in equipment failure rates and human error probabilities). This paper presents an analysis of the impact of parameter uncertainty on updated PSA results.

11.
Risks from exposure to contaminated land are often assessed with the aid of mathematical models. The current probabilistic approach is a considerable improvement on previous deterministic risk assessment practices, in that it attempts to characterize uncertainty and variability. However, some inputs continue to be assigned as precise numbers, while others are characterized as precise probability distributions. Such precision is hard to justify, and we show in this article how rounding errors and distribution assumptions can affect an exposure assessment. The outcomes of traditional deterministic point estimates and Monte Carlo simulations were compared to probability bounds analyses. Assigning all scalars as imprecise numbers (intervals prescribed by significant digits) added about one order of magnitude of uncertainty to the deterministic point estimate. Similarly, representing probability distributions as probability boxes added several orders of magnitude to the uncertainty of the probabilistic estimate. This indicates that the size of the uncertainty in such assessments is actually much greater than currently reported. The article suggests that full disclosure of the uncertainty may facilitate decision making by opening up a negotiation window. In the risk analysis process, it is also an ethical obligation to clarify the boundary between the scientific and social domains.

12.
Major industrial accidents occurring at so-called major hazard installations may cause domino accidents, which are among the most destructive industrial accidents known at present. As there may be many hazard installations in an area, a primary accident scenario may potentially propagate from one installation to another, and correlations exist in probability calculations of domino effects. In addition, during the propagation of a domino effect, accidents of diverse types may occur, some of them having a synergistic effect, while others do not. These characteristics make the analytical formulation of domino accidents very complex. In this work, a simple matrix-based modeling approach for domino effect analysis is proposed. Matrices can be used to represent the mutual influences of different escalation vectors between installations. On this basis, an approach for analyzing accident propagation, as well as a simulation-based algorithm for calculating the probabilities of accidents and accident levels, is provided. The applicability and flexibility of the approach are discussed by applying it to estimate domino probabilities in a case study.

13.
The uncertainty associated with estimates should be taken into account in quantitative risk assessment. Each input's uncertainty can be characterized through a probability distribution for use in Monte Carlo simulations. In this study, the sampling uncertainty associated with estimating a low proportion on the basis of a small sample size was considered. A common application in microbial risk assessment is the estimation of a prevalence (the proportion of contaminated food products) on the basis of a few tested units. Three Bayesian approaches (based on beta(0, 0), beta(1/2, 1/2), and beta(1, 1) priors) and one frequentist approach (based on the frequentist confidence distribution) were compared and evaluated on the basis of simulations. For small samples, we demonstrated some differences between the four tested methods. We concluded that the best choice of method depends on the true proportion of contaminated products, which is by definition unknown in common practice. When no prior information is available, we recommend the beta(1/2, 1/2) prior or the confidence distribution. To illustrate the importance of these differences, the four methods were used in an applied example. We performed two-dimensional Monte Carlo simulations to estimate the proportion of cold smoked salmon packs contaminated by Listeria monocytogenes, one dimension representing within-factory uncertainty, modeled by each of the four studied methods, and the other dimension representing variability between companies.
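The effect of the prior in small samples is easy to see from beta-binomial conjugacy: a beta(a, b) prior combined with k positives out of n tested units gives a beta(a + k, b + n - k) posterior, whose mean is (a + k)/(a + b + n). A minimal sketch with hypothetical data (1 contaminated unit in 20 tested), comparing the three priors discussed above:

```python
def posterior_mean(k, n, a, b):
    """Posterior mean of a prevalence under a beta(a, b) prior
    after observing k contaminated units out of n tested."""
    return (k + a) / (n + a + b)

k, n = 1, 20  # hypothetical: 1 contaminated unit in 20 tested
for a, b, name in [(0.0, 0.0, "beta(0, 0)"),
                   (0.5, 0.5, "beta(1/2, 1/2) (Jeffreys)"),
                   (1.0, 1.0, "beta(1, 1) (uniform)")]:
    print(f"{name:26s} posterior mean = {posterior_mean(k, n, a, b):.4f}")
```

For this sample the three posterior means are 0.050, 0.071, and 0.091, a difference that would be negligible for large n but is material when only a few units are tested.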

14.
Uncertainty Analysis in Multiplicative Models
Wout Slob, Risk Analysis, 1994, 14(4): 571-576
Uncertainties are usually evaluated by Monte Carlo analysis. However, multiplicative models with lognormal uncertainties, which are ubiquitous in risk assessments, allow for a simple and quick analytical uncertainty analysis. The necessary formulae are given, which may be evaluated by a desk calculator. Two examples illustrate the method.
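The analytical shortcut rests on the fact that a product of independent lognormals is itself lognormal: the log-means add, and the squared log-standard deviations (equivalently, squared ln GSD values) add. A sketch under assumed GM/GSD values (not the paper's examples), with a Monte Carlo cross-check:

```python
import random
from math import exp, log, sqrt

random.seed(1)

# Hypothetical multiplicative model: R = A * B * C, each lognormal,
# specified by geometric mean (GM) and geometric SD (GSD).
inputs = [(2.0, 1.5), (0.3, 2.0), (10.0, 1.8)]  # (GM, GSD) pairs

# Analytical: ln R is Normal with mean = sum ln GM, var = sum (ln GSD)^2
mu = sum(log(gm) for gm, _ in inputs)
sd = sqrt(sum(log(gsd) ** 2 for _, gsd in inputs))
Z95 = 1.6449
p95_analytical = exp(mu + Z95 * sd)

# Monte Carlo cross-check of the 95th percentile
draws = sorted(
    exp(sum(random.gauss(log(gm), log(gsd)) for gm, gsd in inputs))
    for _ in range(50_000)
)
p95_mc = draws[int(0.95 * len(draws))]

print(p95_analytical, p95_mc)
```

The two estimates should agree to within simulation noise, which is the paper's point: for this model class the Monte Carlo run is unnecessary.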

15.
Treatment of Uncertainty in Performance Assessments for Complex Systems
When viewed at a high level, performance assessments (PAs) for complex systems involve two types of uncertainty: stochastic uncertainty, which arises because the system under study can behave in many different ways, and subjective uncertainty, which arises from a lack of knowledge about quantities required within the computational implementation of the PA. Stochastic uncertainty is typically incorporated into a PA with an experimental design based on importance sampling and leads to the final results of the PA being expressed as a complementary cumulative distribution function (CCDF). Subjective uncertainty is usually treated with Monte Carlo techniques and leads to a distribution of CCDFs. This presentation discusses the use of the Kaplan/Garrick ordered triple representation for risk in maintaining a distinction between stochastic and subjective uncertainty in PAs for complex systems. The topics discussed include (1) the definition of scenarios and the calculation of scenario probabilities and consequences, (2) the separation of subjective and stochastic uncertainties, (3) the construction of CCDFs required in comparisons with regulatory standards (e.g., 40 CFR Part 191, Subpart B for the disposal of radioactive waste), and (4) the performance of uncertainty and sensitivity studies. Results obtained in a preliminary PA for the Waste Isolation Pilot Plant, an uncertainty and sensitivity analysis of the MACCS reactor accident consequence analysis model, and the NUREG-1150 probabilistic risk assessments are used for illustration.

16.
A general probabilistically-based approach is proposed for both cancer and noncancer risk/safety assessments. The familiar framework of the original ADI/RfD formulation is used, substituting in the numerator a benchmark dose derived from a hierarchical pharmacokinetic/pharmacodynamic model and in the denominator a unitary uncertainty factor derived from a hierarchical animal/average human/sensitive human model. The empirical probability distributions of the numerator and denominator can be combined to produce an empirical human-equivalent distribution for an animal-derived benchmark dose in external-exposure units.

17.
The aging domestic oil production infrastructure represents a high risk to the environment because of the type of fluids being handled (oil and brine) and the potential for accidental release of these fluids into sensitive ecosystems. Currently, there is no quantitative risk model directly applicable to onshore oil exploration and production (E&P) facilities. We report on a probabilistic reliability model created for such facilities. Reliability theory, failure modes and effects analysis (FMEA), and event trees were used to develop model estimates of the failure probability of typical oil production equipment. Monte Carlo simulation was used to translate uncertainty in input parameter values into uncertainty in the model output. The predicted failure rates were calibrated to available failure rate information by adjusting the probability density function parameters used as random variates in the Monte Carlo simulations. The mean and standard deviation of the normal distributions from which the Weibull characteristic life was drawn were used as adjustable parameters in the model calibration. The model was applied to oil production leases in the Tallgrass Prairie Preserve, Oklahoma. We present the estimated failure probability due to the combination of the most significant failure modes associated with each type of equipment (pumps, tanks, and pipes). The results show that the estimated probability of failure for tanks is about the same as that for pipes, but pumps have a much lower failure probability. The model can provide the equipment reliability information needed for proactive risk management at the lease level, supplying a quantitative basis for allocating maintenance resources to high-risk equipment so as to minimize both lost production and ecosystem damage.

18.
Monte Carlo simulation requires a pseudo-random number generator with good statistical properties. Linear congruential generators (LCGs) are the most popular and well-studied computer method for generating the pseudo-random numbers used in Monte Carlo studies. High-quality LCGs are available with sufficient statistical quality to satisfy all but the most demanding needs of risk assessors. However, because of the discrete, deterministic nature of LCGs, it is important to evaluate the randomness and uniformity of the specific pseudo-random number subsequences used in important risk assessments. Recommended statistical tests for uniformity and randomness include the Kolmogorov-Smirnov test, the extreme values test, and the runs test, including the runs-above-the-mean and runs-below-the-mean tests. Risk assessors should evaluate the stability of their risk model's output statistics, paying particular attention to instabilities in the mean and variance. When such instabilities are observed, more stable statistics, e.g., percentiles, should be reported. Analyses should be repeated using several non-overlapping pseudo-random number subsequences. More simulations than those traditionally used are also recommended for each analysis.
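Two of the recommended checks are simple to run on any subsequence. The sketch below computes the one-sample Kolmogorov-Smirnov statistic against Uniform(0,1) and a runs-about-the-mean count for a subsequence from Python's built-in generator (a Mersenne Twister, not an LCG, so this illustrates the tests rather than the generator class discussed above):

```python
import random

random.seed(2024)
u = [random.random() for _ in range(10_000)]

def ks_uniform(xs):
    """One-sample Kolmogorov-Smirnov statistic against the Uniform(0,1) CDF."""
    xs = sorted(xs)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

def runs_about_mean(xs, mean=0.5):
    """Count maximal runs of values on the same side of the mean."""
    signs = [x > mean for x in xs]
    return 1 + sum(a != b for a, b in zip(signs, signs[1:]))

d = ks_uniform(u)
r = runs_about_mean(u)
print(f"KS statistic D = {d:.4f}  (5% critical ~ 1.36/sqrt(n) = {1.36 / len(u) ** 0.5:.4f})")
print(f"runs about the mean = {r}  (expected ~ n/2 + 1 = {len(u) // 2 + 1})")
```

For a well-behaved subsequence, D stays below the 5% critical value and the run count stays within a few standard deviations (about sqrt(n)/2) of n/2 + 1.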

19.
The application of an ISO standard procedure (the Guide to the Expression of Uncertainty in Measurement, GUM) to quantifying uncertainty in human risk estimation under chronic exposure to hazardous chemical compounds is discussed here. The procedure was previously applied to a simple model; in this article a much more complex model is used, involving multiple compounds and multiple exposure pathways. Risk was evaluated using the usual methodologies: the deterministic reasonable maximum exposure (RME) and the statistical Monte Carlo method. In both cases, the procedures to evaluate uncertainty in risk values are detailed. Uncertainties were evaluated by different methodologies to reflect the specific information available for each variable. The GUM procedure enables the ranking of variables by their contribution to uncertainty, and thus provides a criterion for choosing variables for deeper analysis. The results show that the GUM procedure is an easy and straightforward way to quantify the uncertainty and variability of risk estimates. The health risk estimation is based on literature data for a water table contaminated by three volatile organic compounds; daily intake was considered by either ingestion of water or inhalation during showering. The results single out one of the substances as the main contaminant and give a criterion to identify the key component around which treatment selection and treatment process design may be carried out in order to reduce risk.

20.
Peanut allergy is a public health concern, owing to its high prevalence in France and the severity of the reactions. Despite avoidance diets that exclude peanut-containing products, a risk may remain due to the adventitious presence of peanut allergens in a wide range of food products: peanut is not mentioned in their ingredients lists, but precautionary labeling is often present. A method of quantifying the risk of allergic reactions following the consumption of such products is developed, taking the example of peanut in chocolate tablets. The occurrence of adventitious peanut proteins in chocolate and the dose-response relationship are estimated with a Bayesian approach using available published data. The consumption pattern is described by the French individual consumption survey INCA2. Risk simulations are performed using second-order Monte Carlo simulation, which separately propagates the variability and uncertainty of the model input variables. Peanut allergens occur in approximately 36% of the chocolates, leading to a mean exposure level of 0.2 mg of peanut proteins per eating occasion. The estimated risk of reaction averages 0.57% per eating occasion for peanut-allergic adults. The 95% values of the risk lie between 0 and 3.61%, which illustrates the risk variability. The uncertainty, represented by 95% credible intervals, is concentrated around these risk estimates. Results for children are similar. The conclusion is that adventitious peanut allergens induce a risk of reaction for part of the French peanut-allergic population. The method developed can be generalized to assess the risk due to the consumption of any foodstuff potentially contaminated by allergens.
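The two-dimensional (second-order) structure can be sketched in a few lines: an outer loop draws uncertain parameters, and an inner loop simulates variability between eating occasions given those parameters. All distributions and numbers below are illustrative assumptions (a hypothetical beta prevalence centered near 36% and an assumed lognormal dose with a fixed reaction threshold), not the paper's fitted values:

```python
import random
import statistics

random.seed(7)

OUTER, INNER = 200, 2_000  # outer = uncertainty draws, inner = variability draws

def risk_given_params(p_contam, mu_dose, threshold=1.0):
    """Inner (variability) loop: fraction of eating occasions with a
    peanut-protein dose (mg) above the reaction threshold, for fixed params."""
    hits = 0
    for _ in range(INNER):
        if random.random() < p_contam:                 # occasion involves a contaminated pack
            dose = random.lognormvariate(mu_dose, 1.0)  # assumed dose variability
            if dose > threshold:
                hits += 1
    return hits / INNER

# Outer (uncertainty) loop: uncertain prevalence and uncertain mean log-dose
risks = []
for _ in range(OUTER):
    p_contam = random.betavariate(36, 64)  # hypothetical prevalence uncertainty, mean 0.36
    mu_dose = random.gauss(-2.0, 0.3)      # hypothetical uncertainty on mean log-dose
    risks.append(risk_given_params(p_contam, mu_dose))

risks.sort()
print(f"median risk per eating occasion = {statistics.median(risks):.4f}")
print(f"95% interval of the risk = ({risks[4]:.4f}, {risks[194]:.4f})")
```

Because variability stays in the inner loop and uncertainty in the outer loop, the spread of `risks` is a pure uncertainty statement about the per-occasion risk, which is what the credible intervals in the abstract express.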
