Similar Articles (20 results)
1.
A. E. Ades, G. Lu. Risk Analysis, 2003, 23(6): 1165-1172
Monte Carlo simulation has become the accepted method for propagating parameter uncertainty through risk models. It is widely appreciated, however, that correlations between input variables must be taken into account if models are to deliver correct assessments of uncertainty in risk. Various two-stage methods have been proposed that first estimate a correlation structure and then generate Monte Carlo simulations, which incorporate this structure while leaving marginal distributions of parameters unchanged. Here we propose a one-stage alternative, in which the correlation structure is estimated from the data directly by Bayesian Markov Chain Monte Carlo methods. Samples from the posterior distribution of the outputs then correctly reflect the correlation between parameters, given the data and the model. Besides its computational simplicity, this approach utilizes the available evidence from a wide variety of structures, including incomplete data and correlated and uncorrelated repeat observations. The major advantage of a Bayesian approach is that, rather than assuming the correlation structure is fixed and known, it captures the joint uncertainty induced by the data in all parameters, including variances and covariances, and correctly propagates this through the decision or risk model. These features are illustrated with examples on emissions of dioxin congeners from solid waste incinerators.
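The two-stage approach the abstract contrasts with can be sketched in a few lines: draw correlated standard normals via a Cholesky factor, then apply a monotone transform to each margin so the marginal families are preserved. The 0.7 correlation and the lognormal parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical target correlation between two uncertain inputs
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])

# Stage 1: correlated standard normals via the Cholesky factor.
# Stage 2: a monotone transform per margin (here lognormal), which
# keeps each marginal distribution's family and preserves the
# rank-correlation structure.
L = np.linalg.cholesky(corr)
z = rng.standard_normal((10_000, 2)) @ L.T
x = np.exp(0.1 + 0.5 * z)  # lognormal marginals for both inputs

emp_corr = np.corrcoef(z, rowvar=False)[0, 1]
```

The one-stage Bayesian alternative the authors propose would instead place a prior on the covariance matrix and sample it by MCMC along with the other parameters.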

2.
An integrated, quantitative approach to incorporating both uncertainty and interindividual variability into risk prediction models is described. Individual risk R is treated as a variable distributed in both an uncertainty dimension and a variability dimension, whereas population risk I (the number of additional cases caused by R) is purely uncertain. I is shown to follow a compound Poisson-binomial distribution, which in low-level risk contexts can often be approximated well by a corresponding compound Poisson distribution. The proposed analytic framework is illustrated with an application to cancer risk assessment for a California population exposed to 1,2-dibromo-3-chloropropane from ground water.
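The Poisson-binomial distribution of population risk I, and the compound Poisson approximation that holds at low risk levels, can be compared with a small simulation. The population size and the individual risks below are hypothetical stand-ins, not the DBCP case study's numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-level individual risks R_i for a population of 2,000
n = 2_000
risks = rng.uniform(1e-5, 1e-4, size=n)

# Exact population risk I: a Poisson-binomial count, i.e. one
# Bernoulli(R_i) trial per individual, simulated by thresholding uniforms
m = 5_000
cases_exact = (rng.random((m, n)) < risks).sum(axis=1)

# Low-risk approximation: I ~ Poisson(sum of the R_i)
cases_approx = rng.poisson(risks.sum(), size=m)
```

With all R_i this small, the two simulated distributions of I are practically indistinguishable, which is the regime the abstract describes.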

3.
Methods to Approximate Joint Uncertainty and Variability in Risk
As interest in quantitative analysis of joint uncertainty and interindividual variability (JUV) in risk grows, so does the need for related computational shortcuts. To quantify JUV in risk, Monte Carlo methods typically require nested sampling of JUV in distributed inputs, which is cumbersome and time-consuming. Two approximation methods proposed here allow simpler and more rapid analysis. The first consists of new upper-bound JUV estimators that involve only uncertainty or variability, not both, and so never require nested sampling to calculate. The second is a discrete-probability-calculus procedure that uses only the mean and one upper-tail mean for each input in order to estimate mean and upper-bound risk, a procedure that is simpler and more intuitive than similar ones in use. Application of these methods is illustrated in an assessment of cancer risk from residential exposures to chloroform in Kanawha Valley, West Virginia. Because each of the multiple exposure pathways considered in this assessment had separate modeled sources of uncertainty and variability, the assessment illustrates a realistic case where a standard Monte Carlo approach to JUV analysis requires nested sampling. In the illustration, the first proposed method quantified JUV in cancer risk much more efficiently than corresponding nested Monte Carlo calculations. The second proposed method also nearly duplicated JUV-related and other estimates of risk obtained using Monte Carlo methods. Both methods were thus found adequate to obtain basic risk estimates accounting for JUV in a realistically complex risk assessment. These methods make routine JUV analysis more convenient and practical.
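The nested sampling that these shortcuts avoid looks like the following: an outer loop over uncertain parameters, an inner loop over interindividual variability, so the sample count multiplies. All distributions here are invented for illustration, not the chloroform assessment's inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nested ("two-dimensional") Monte Carlo: outer loop samples uncertain
# parameters, inner loop samples interindividual variability.
n_unc, n_var = 200, 1_000
risk = np.empty((n_unc, n_var))
for i in range(n_unc):
    slope = rng.lognormal(mean=-2.0, sigma=0.3)              # uncertain potency
    intake = rng.lognormal(mean=0.0, sigma=0.8, size=n_var)  # variable dose
    risk[i] = slope * intake

# Each outer iteration yields one "population" distribution of risk;
# percentiles taken across the outer loop express uncertainty about
# any chosen variability percentile.
pop_p95 = np.percentile(risk, 95, axis=1)  # variability percentile, per outer draw
ub_p95 = np.percentile(pop_p95, 97.5)      # uncertainty bound on that percentile
```

Note the cost: 200 × 1,000 model evaluations for a single output, which is exactly why the abstract's non-nested upper-bound estimators are attractive.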

4.
The performance of a probabilistic risk assessment (PRA) for a nuclear power plant is a complex undertaking, involving the assembly of an accident frequency analysis, an accident progression analysis, a source term analysis, and a consequence analysis. Each of these analyses is, in itself, quite complex. Uncertainties enter into a PRA from each of these analyses. An important focus in recent PRAs has been to incorporate these uncertainties at each stage of the analysis, propagate the subsequent uncertainties through the entire analysis, and include uncertainty in the final results. Monte Carlo procedures based on Latin hypercube sampling provide one way to perform propagations of this type. In this paper, the results of two complete and independent Monte Carlo calculations for a recently completed PRA for a nuclear power plant are compared as a means of providing empirical evidence on the repeatability of uncertainty and sensitivity analyses for large-scale PRA calculations. These calculations use the same variables and analysis structure with two independently generated Latin hypercube samples. The results of the two calculations show a high degree of repeatability for the analysis of a very complex system.
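Latin hypercube sampling itself is compact to implement: one draw per equal-probability stratum in each dimension, with the strata shuffled independently per dimension. The repeatability idea can then be probed by running the same model on two independently generated samples; the quadratic model below is an arbitrary stand-in for a PRA, not anything from the paper.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One uniform draw per equal-probability stratum, per dimension."""
    # Column j holds (r + i)/n for strata i = 0..n-1, then the rows of
    # each column are permuted independently to decouple the dimensions.
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return u

rng = np.random.default_rng(7)

# Two independently generated LHS designs, as in the repeatability study
u1 = latin_hypercube(100, 3, rng)
u2 = latin_hypercube(100, 3, rng)

# A toy stand-in for the PRA model; compare the two output means
y1 = (u1 ** 2).sum(axis=1).mean()
y2 = (u2 ** 2).sum(axis=1).mean()
```

Because every stratum of every input is hit exactly once, estimates of means stabilize at much smaller sample sizes than with simple random sampling, which is what makes the two independent runs agree so closely.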

5.
Most public health risk assessments assume and combine a series of average, conservative, and worst-case values to derive a conservative point estimate of risk. This procedure has major limitations. This paper demonstrates a new methodology for extended uncertainty analyses in public health risk assessments using Monte Carlo techniques. The extended method begins, as some conventional methods do, with the preparation of a spreadsheet to estimate exposure and risk. This method, however, continues by modeling key inputs as random variables described by probability density functions (PDFs). Overall, the technique provides a quantitative way to estimate the probability distributions for exposure and health risks within the validity of the model used. As an example, this paper presents a simplified case study for children playing in soils contaminated with benzene and benzo(a)pyrene (BaP).
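A toy version of the extended method: replace the spreadsheet's point estimates with draws from PDFs and propagate. The pathway (incidental soil ingestion), the distributions, and every number below are hypothetical illustrations, not the benzene/BaP inputs of the actual case study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Key inputs as random variables (all values illustrative)
n = 50_000
conc = rng.lognormal(np.log(1.0), 0.5, n)       # soil concentration, mg/kg
ingest = rng.triangular(20, 50, 200, n) * 1e-6  # soil ingested, kg/day
ef = rng.uniform(100, 350, n) / 365             # exposure frequency, fraction of days
bw = rng.normal(30, 5, n).clip(15, None)        # body weight, kg (child)
slope = 0.03                                    # potency, (mg/kg-day)^-1, held fixed

dose = conc * ingest * ef / bw                  # mg/kg-day
risk = 1.0 - np.exp(-slope * dose)              # lifetime excess risk

p50, p95 = np.percentile(risk, [50, 95])
```

Instead of one conservative point estimate, the output is a full distribution, from which any percentile of risk can be reported.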

6.
7.
We investigate, through modeling, the impact of interindividual heterogeneity in the metabolism of 4-aminobiphenyl (ABP) and in physiological factors on human cancer risk. A physiological pharmacokinetic model was used to quantify the time course of the formation of the proximate carcinogen, N-hydroxy-4-ABP, and the DNA-binding of the active species in the bladder. The metabolic and physiologic model parameters were randomly varied, via Monte Carlo simulations, to reproduce interindividual variability. The sampling means for most parameters were scaled from values developed by Kadlubar et al. (Cancer Res., 51: 4371, 1991) for dogs; variances were obtained primarily from published human data (e.g., measurements of ABP N-oxidation and arylamine N-acetylation in human liver tissue). In 500 simulations, theoretically representing 500 humans, DNA-adduct levels in the bladder of the most susceptible individuals are ten thousand times higher than for the least susceptible, and the 5th and 95th percentiles differ by a factor of 160. DNA binding for the most susceptible individual (with low urine pH, low N-acetylation and high N-oxidation activities) is theoretically one million-fold higher than for the least susceptible (with high urine pH, high N-acetylation and low N-oxidation activities). The simulations also suggest that the four factors contributing most significantly to interindividual differences in DNA-binding of ABP in human bladder are urine pH, ABP N-oxidation, ABP N-acetylation, and urination frequency.

8.
A general discussion of knowledge dependence in risk calculations shows that the assumption of independence underlying standard Monte Carlo simulation in uncertainty analysis is frequently violated. A model is presented for performing Monte Carlo simulation when the variabilities of the component failure probabilities are either negatively or positively coupled. The model is applied to examples in human reliability analysis and the results are compared to the results of Sandia Laboratories as published in the Peer Review Study and to recalculations using more recent methods of uncertainty analysis.

9.
Terje Aven. Risk Analysis, 2010, 30(3): 354-360
It is a common perspective in risk analysis that there are two kinds of uncertainties: i) variability resulting from heterogeneity and stochasticity (aleatory uncertainty) and ii) partial ignorance or epistemic uncertainty resulting from systematic measurement error and lack of knowledge. Probability theory is recognized as the proper tool for treating the aleatory uncertainties, but there are different views on what is the best approach for describing partial ignorance and epistemic uncertainties. Subjective probabilities are often used for representing this type of ignorance and uncertainty, but several alternative approaches have been suggested, including interval analysis, probability bounds analysis, and bounds based on evidence theory. It is argued that probability theory generates too precise results when the background knowledge of the probabilities is poor. In this article, we look more closely into this issue. We argue that this critique of probability theory is based on a conception of risk assessment as a tool to objectively report on the true risk and variabilities. If risk assessment is seen instead as a method for describing the analysts' (and possibly other stakeholders') uncertainties about unknown quantities, the alternative approaches (such as interval analysis) often fail to provide the necessary decision support.

10.
The application of an ISO standard procedure (Guide to the Expression of Uncertainty in Measurement (GUM)) is here discussed to quantify uncertainty in human risk estimation under chronic exposure to hazardous chemical compounds. The procedure was previously applied to a simple model; in this article a much more complex model is used, i.e., multiple compounds and multiple exposure pathways. Risk was evaluated using the usual methodologies: the deterministic reasonable maximum exposure (RME) and the statistical Monte Carlo method. In both cases, the procedures to evaluate uncertainty on risk values are detailed. Uncertainties were evaluated by different methodologies to account for the peculiarity of information about each variable. The GUM procedure enables the ranking of variables by their contribution to uncertainty; it provides a criterion for choosing variables for deeper analysis. The obtained results show that the GUM procedure offers an easy and straightforward way to quantify the uncertainty and variability of risk estimates. Health risk estimation is based on literature data on a water table contaminated by three volatile organic compounds. Daily intake was considered by either ingestion of water or inhalation during showering. The results indicate one of the substances as the main contaminant, and give a criterion to identify the key component on which the treatment selection may be performed and the treatment process may be designed in order to reduce risk.
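The core of the GUM procedure is first-order propagation of standard uncertainties, u_c² = Σ (∂R/∂x_i)² u(x_i)², which also yields the ranking of variables by their contribution. The single-pathway product model and all numbers below are illustrative assumptions, far simpler than the paper's multi-compound, multi-pathway model.

```python
import numpy as np

# First-order GUM propagation for a product model R = C * IR / BW
C, u_C = 2.0, 0.4      # concentration and its standard uncertainty
IR, u_IR = 1.5, 0.2    # intake rate
BW, u_BW = 70.0, 5.0   # body weight

R = C * IR / BW
sens = np.array([IR / BW,            # dR/dC
                 C / BW,             # dR/dIR
                 -C * IR / BW**2])   # dR/dBW
u = np.array([u_C, u_IR, u_BW])

# Combined standard uncertainty: quadrature sum of sensitivity * u
u_c = np.sqrt(np.sum((sens * u) ** 2))

# Fractional contribution of each input -- the GUM ranking criterion
contrib = (sens * u) ** 2 / u_c**2
```

Here the concentration dominates the combined uncertainty, so it would be the variable selected for deeper analysis.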

11.
This paper presents a general model for exposure to homegrown foods that is used with a Monte Carlo analysis to determine the relative contributions of variability (Type A uncertainty) and true uncertainty (Type B uncertainty) to the overall variance in prediction of the dose-to-concentration ratio. Although classification of exposure inputs as uncertain or variable is somewhat subjective, food consumption rates and exposure duration are judged to have a predicted variance that is dominated by variability among individuals by age, income, culture, and geographical region. Biotransfer factors and partition factors, in contrast, are inputs that largely involve uncertainty. Using ingestion of fruits, vegetables, grains, dairy products, meat, and soils assumed to be contaminated by hexachlorobenzene (HCB) and benzo(a)pyrene (BaP) as case studies, a Monte Carlo analysis is used to explore the relative contribution of uncertainty and variability to overall variance in the estimated distribution of potential dose within the population that consumes homegrown foods. It is found that, when soil concentrations are specified, variances in ratios of dose-to-concentration for HCB are equally attributable to uncertainty and variability, whereas for BaP, variance in these ratios is dominated by true uncertainty.

12.
Currently, there is a trend away from the use of single (often conservative) estimates of risk to summarize the results of risk analyses in favor of stochastic methods which provide a more complete characterization of risk. The use of such stochastic methods leads to a distribution of possible values of risk, taking into account both uncertainty and variability in all of the factors affecting risk. In this article, we propose a general framework for the analysis of uncertainty and variability for use in the commonly encountered case of multiplicative risk models, in which risk may be expressed as a product of two or more risk factors. Our analytical methods facilitate the evaluation of overall uncertainty and variability in risk assessment, as well as the contributions of individual risk factors to both uncertainty and variability, a task that is cumbersome using Monte Carlo methods. The use of these methods is illustrated in the analysis of potential cancer risks due to the ingestion of radon in drinking water.
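The analytical convenience of multiplicative models comes from the log scale: if each factor is lognormal, ln R is a sum of normals, so per-factor variance contributions simply add and no Monte Carlo is needed. The sketch below checks this analytic result against a simulation; the three factors and their parameters are arbitrary illustrations, not the radon assessment's inputs.

```python
import numpy as np

rng = np.random.default_rng(11)

# Multiplicative model R = X1 * X2 * X3 with lognormal factors
mus = np.array([-1.0, 0.5, -0.2])   # means of ln X_i
sigmas = np.array([0.3, 0.6, 0.2])  # sds of ln X_i (per-factor spread)

# Analytic result: ln R ~ Normal(sum of mus, sqrt of summed variances),
# so each factor's contribution to overall spread is sigma_i^2 directly
mu_R = mus.sum()
sigma_R = np.sqrt((sigmas ** 2).sum())

# Monte Carlo cross-check of the analytic result
x = rng.lognormal(mus, sigmas, size=(100_000, 3)).prod(axis=1)
mc_mu, mc_sigma = np.log(x).mean(), np.log(x).std()
```

Reading off contributions (here 0.36 of the total log-variance of 0.49 comes from the second factor) is exactly the factor-level attribution that is cumbersome to extract from a raw Monte Carlo run.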

13.
Indirect exposures to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and other toxic materials released in incinerator emissions have been identified as a significant concern for human health. As a result, regulatory agencies and researchers have developed specific approaches for evaluating exposures from indirect pathways. This paper presents a quantitative assessment of the effect of uncertainty and variation in exposure parameters on the resulting estimates of TCDD dose rates received by individuals indirectly exposed to incinerator emissions through the consumption of home-grown beef. The assessment uses a nested Monte Carlo model that separately characterizes uncertainty and variation in dose rate estimates. Uncertainty resulting from limited data on the fate and transport of TCDD is evaluated, and variations in estimated dose rates in the exposed population that result from location-specific parameters and individuals' behaviors are characterized. The analysis indicates that lifetime average daily dose rates for individuals living within 10 km of a hypothetical incinerator range over three orders of magnitude. In contrast, the uncertainty in the dose rate distribution appears to vary by less than one order of magnitude, based on the sources of uncertainty included in this analysis. Current guidance for predicting exposures from indirect exposure pathways was found to overestimate the intakes for typical and high-end individuals.

14.
Probabilistic seismic risk analysis is a well-established method in the insurance industry for modeling portfolio losses from earthquake events. In this context, precise exposure locations are often unknown. However, so far, location uncertainty has not been the focus of a large amount of research. In this article, we propose a novel framework for treatment of location uncertainty. As a case study, a large number of synthetic portfolios resembling typical real-world cases were created. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or proportion of risk items with unknown coordinates on the variability of loss frequency estimations. The results indicate that due to loss aggregation effects and spatial hazard variability, location uncertainty in isolation and in conjunction with ground motion uncertainty can induce significant variability to probabilistic loss results, especially for portfolios with a small number of risks. After quantifying its effect, we conclude that location uncertainty should not be neglected when assessing probabilistic seismic risk, but should be treated stochastically, and the resulting variability should be visualized and interpreted carefully.

15.
Much attention has been paid to the treatment of dependence and to the characterization of uncertainty and variability (including the issue of dependence among inputs) in performing risk assessments to avoid misleading results. However, with relatively little progress in communicating about the effects and implications of dependence, the effort involved in performing relatively sophisticated risk analyses (e.g., two-dimensional Monte Carlo analyses that separate variability from uncertainty) may be largely wasted, if the implications of those analyses are not clearly understood by decisionmakers. This article emphasizes that epistemic uncertainty can introduce dependence among related risks (e.g., risks to different individuals, or at different facilities), and illustrates the potential importance of such dependence in the context of two important types of decisions: evaluations of risk acceptability for a single technology, and comparisons of the risks for two or more technologies. We also present some preliminary ideas on how to communicate the effects of dependence to decisionmakers in a clear and easily comprehensible manner, and suggest future research directions in this area.

16.
Information on exposure factors used in quantitative risk assessments has previously been compiled and reported for U.S. and European populations. However, due to the advancement of science and knowledge, these reports are in continuous need of updating with new data. Equally important is the change over time of many exposure factors related to both physiological characteristics and human behavior. Body weight, skin surface, time use, and dietary habits are some of the most obvious examples covered here. A wealth of data is available from literature not primarily gathered for the purpose of risk assessment. Here we review a number of key exposure factors and compare these factors between northern Europe (here represented by Sweden) and the United States. Many previous compilations of exposure factor data focus on interindividual variability and variability between sexes and age groups, while uncertainty is mainly dealt with in a qualitative way. In this article variability is assessed along with uncertainty. As estimates of central tendency and interindividual variability, the mean, standard deviation, skewness, kurtosis, and multiple percentiles were calculated, while uncertainty was characterized using 95% confidence intervals for these parameters. The presented statistics are appropriate for use in deterministic analyses using point estimates for each input parameter as well as in probabilistic assessments.

17.
We propose 14 principles of good practice to assist people in performing and reviewing probabilistic or Monte Carlo risk assessments, especially in the context of the federal and state statutes concerning chemicals in the environment. Monte Carlo risk assessments for hazardous waste sites that follow these principles will be easier to understand, will explicitly distinguish assumptions from data, and will consider and quantify effects that could otherwise lead to misinterpretation of the results. The proposed principles are neither mutually exclusive nor collectively exhaustive. We think and hope that these principles will evolve as new ideas arise and come into practice.

18.
Standard statistical methods understate the uncertainty one should attach to effect estimates obtained from observational data. Among the methods used to address this problem are sensitivity analysis, Monte Carlo risk analysis (MCRA), and Bayesian uncertainty assessment. Estimates from MCRAs have been presented as if they were valid frequentist or Bayesian results, but examples show that they need not be either in actual applications. It is concluded that both sensitivity analyses and MCRA should begin with the same type of prior specification effort as Bayesian analysis.

19.
Variability arises due to differences in the value of a quantity among different members of a population. Uncertainty arises due to lack of knowledge regarding the true value of a quantity for a given member of a population. We describe and evaluate two methods for quantifying both variability and uncertainty. These methods, bootstrap simulation and a likelihood-based method, are applied to three datasets. The datasets include a synthetic sample of 19 values from a lognormal distribution, a sample of nine values obtained from measurements of the PCB concentration in leafy produce, and a sample of five values for the partitioning of chromium in the flue gas desulfurization system of coal-fired power plants. For each of these datasets, we employ the two methods to characterize uncertainty in the arithmetic mean and standard deviation, cumulative distribution functions based upon fitted parametric distributions, the 95th percentile of variability, and the 63rd percentile of uncertainty for the 81st percentile of variability. The latter is intended to show that it is possible to describe any point within the uncertain frequency distribution by specifying an uncertainty percentile and a variability percentile. Using the bootstrap method, we compare results based upon use of the method of matching moments and the method of maximum likelihood for fitting distributions to data. Our results indicate that with only 5-19 data points as in the datasets we have evaluated, there is substantial uncertainty based upon random sampling error. Both the bootstrap and likelihood-based approaches yield comparable uncertainty estimates in most cases.
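Bootstrap simulation of the kind described resamples the small dataset with replacement many times and reads uncertainty intervals off the resampled statistics, for example the mean and a high variability percentile. The nine values below are synthetic stand-ins of a similar size to the PCB sample, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic small dataset (n = 9, like the PCB sample; values invented)
data = np.array([0.8, 1.1, 1.4, 1.9, 2.3, 2.8, 3.5, 4.6, 6.2])

# Resample with replacement B times, recomputing each statistic per replicate
B = 10_000
boot = rng.choice(data, size=(B, len(data)), replace=True)
boot_mean = boot.mean(axis=1)
boot_p95 = np.percentile(boot, 95, axis=1)

# 90% confidence (uncertainty) intervals around each variability statistic
ci_mean = np.percentile(boot_mean, [5, 95])
ci_p95 = np.percentile(boot_p95, [5, 95])
```

With only nine observations the intervals are wide, which is the paper's point about substantial random sampling error at n = 5-19; describing, say, the 63rd uncertainty percentile of the 81st variability percentile just means taking a different pair of percentiles from the same bootstrap arrays.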

20.
Although the parameters for contaminant bioaccumulation models most likely vary over time, lack of data makes it impossible to quantify this variability. As a consequence, Monte Carlo models of contaminant bioaccumulation often treat all parameters as having fixed true values that are unknown. This can lead to biased distributions of predicted contaminant concentrations. This article demonstrates this phenomenon with a case study of selenium accumulation in the mussel Mytilus edulis in San Francisco Bay. "Ignorance-only" simulations (in which phytoplankton and bioavailable selenium concentrations are constant over time, but sampled from distributions of field measurements taken at different times), which an analyst might be forced to use due to lack of data, were compared with "variability and ignorance" simulations (sampling phytoplankton and bioavailable selenium concentrations each month). It was found that ignorance-only simulations may underestimate or overestimate the median predicted contaminant concentration at any time, relative to variability and ignorance simulations. However, over a long enough time period (such as the complete seasonal cycle in a seasonal model), treating temporal variability as if it were ignorance at least gave a range of predicted concentrations that enclosed the range predicted by explicit treatment of temporal variability. Comparing the temporal variability in field data with that predicted by simulations may indicate whether the right amount of temporal variability is being included in input variables. Sensitivity analysis combined with biological knowledge suggests which parameters might make important contributions to temporal variability. Temporal variability is potentially more complicated to deal with than other types of stochastic variability, because of the range of time scales over which parameters may vary.
