Similar Articles
20 similar articles found (search time: 15 ms)
1.
It has recently been suggested that "standard" data distributions for key exposure variables should be developed wherever appropriate for use in probabilistic or "Monte Carlo" exposure analyses. Soil-on-skin adherence estimates represent an ideal candidate for development of a standard data distribution: There are several readily available studies which offer a consistent pattern of reported results, and more importantly, soil adherence to skin is likely to vary little from site to site. In this paper, we thoroughly review each of the published soil adherence studies with respect to study design, sampling and analytical methods, and level of confidence in the reported results. Based on these studies, probability density functions (PDFs) of soil adherence values were examined for different age groups and different sampling techniques. The soil adherence PDF developed from adult data was found to resemble closely the soil adherence PDF based on child data in terms of both central tendency (mean = 0.49 and 0.63 mg-soil/cm²-skin, respectively) and 95th percentile values (1.6 and 2.4 mg-soil/cm²-skin, respectively). Accordingly, a single, "standard" PDF is presented based on all data collected for all age groups. This standard PDF is lognormally distributed; the arithmetic mean and standard deviation are 0.52 ± 0.9 mg-soil/cm²-skin. Since our review of the literature indicates that soil adherence under environmental conditions will be minimally influenced by age, sex, soil type, or particle size, this PDF should be considered applicable to all settings. The 50th and 95th percentile values of the standard PDF (0.25 and 1.7 mg-soil/cm²-skin, respectively) are very similar to recent U.S. EPA estimates of "average" and "upper-bound" soil adherence (0.2 and 1.0 mg-soil/cm²-skin, respectively).
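As a hedged illustration of how such a standard PDF might feed a Monte Carlo exposure analysis, the sketch below converts the reported arithmetic mean and standard deviation (0.52 and 0.9 mg-soil/cm²-skin) into lognormal parameters by moment matching and checks the implied 50th and 95th percentiles against the values quoted above; the conversion and all variable names are ours, not the authors'.

```python
import numpy as np

# Reported arithmetic mean and standard deviation of the "standard"
# soil-adherence PDF (mg soil per cm^2 of skin).
mean, sd = 0.52, 0.9

# Method-of-moments conversion to lognormal parameters.
sigma2 = np.log(1.0 + (sd / mean) ** 2)
mu = np.log(mean) - 0.5 * sigma2

rng = np.random.default_rng(1)
adherence = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=100_000)

p50, p95 = np.percentile(adherence, [50, 95])
print(f"implied median ~ {p50:.2f} mg/cm^2 (abstract: 0.25)")
print(f"implied 95th   ~ {p95:.2f} mg/cm^2 (abstract: 1.7)")
```

The sampled percentiles land close to the reported 0.25 and 1.7, which suggests the moment-matched lognormal is an adequate reading of the published summary statistics.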

2.
Monte Carlo simulations are commonplace in quantitative risk assessments (QRAs). Designed to propagate the variability and uncertainty associated with each individual exposure input parameter in a quantitative risk assessment, Monte Carlo methods statistically combine the individual parameter distributions to yield a single, overall distribution. Critical to such an assessment is the representativeness of each individual input distribution. The authors performed a literature review to collect and compare the distributions used in published QRAs for the parameters of body weight, food consumption, soil ingestion rates, breathing rates, and fluid intake. To provide a basis for comparison, all estimated exposure parameter distributions were evaluated with respect to four properties: consistency, accuracy, precision, and specificity. The results varied depending on the exposure parameter. Even where extensive, well-collected data exist, investigators used a variety of different distributional shapes to approximate these data. Where such data do not exist, investigators have collected their own data, often leading to substantial disparity in parameter estimates and subsequent choice of distribution. The present findings indicate that more attention must be paid to the data underlying these distributional choices. More emphasis should be placed on sensitivity analyses, quantifying the impact of assumptions, and on discussion of sources of variation as part of the presentation of any risk assessment results. If such practices and disclosures are followed, it is believed that Monte Carlo simulations can greatly enhance the accuracy and appropriateness of specific risk assessments. Without such disclosures, researchers will be increasing the size of the risk assessment "black box," a concern already raised by many critics of more traditional risk assessments.
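The abstract does not reproduce a specific exposure equation, so the following minimal sketch is only a generic illustration of the Monte Carlo combination step it describes: assumed body-weight, intake, and concentration distributions (our choices, not those from the reviewed QRAs) are propagated through a simple dose calculation, followed by a crude rank-correlation sensitivity check of the kind the authors recommend.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 100_000

# Illustrative input distributions (not taken from the reviewed QRAs):
body_weight = rng.normal(70.0, 12.0, n).clip(min=30.0)   # kg
water_intake = rng.lognormal(np.log(1.4), 0.5, n)         # L/day
concentration = rng.triangular(0.001, 0.01, 0.05, n)      # mg/L

# Each draw combines one value from every input, so the spread of `dose`
# reflects the combined variability/uncertainty of the inputs.
dose = concentration * water_intake / body_weight          # mg/kg-day
print("median dose:", np.percentile(dose, 50))
print("95th pct   :", np.percentile(dose, 95))

# Sensitivity check: rank correlation of each input with the output.
for name, x in [("body_weight", body_weight),
                ("water_intake", water_intake),
                ("concentration", concentration)]:
    rho, _ = spearmanr(x, dose)
    print(f"{name:14s} rho = {rho:+.2f}")
```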

3.
Recently developed large sample inference procedures for least absolute value (LAV) regression are examined via Monte Carlo simulation to determine when sample sizes are large enough for the procedures to work effectively. A variety of different experimental settings were created by varying the disturbance distribution, the number of explanatory variables and the way the explanatory variables were generated. Necessary sample sizes range from as small as 20 when disturbances are normal to as large as 200 in extreme outlier-producing distributions.

4.
There is a need for plant-specific distributions of incidence and failure rates rather than distributions from pooled data which are based on the "common incidence rate" assumption. The so-called superpopulation model satisfies this need through a practically appealing approach that accounts for the variability over the population of plants. Unfortunately, the chosen order in which the integrals with respect to the individual plant rates λ_i (i = 0, 1, …, m) and the parameters α, β of the Γ-population distribution are solved seems to drive the solution close to the common incidence rate distribution. It is shown that the solution obtained from interchanging the order and solving the integrals with respect to the individual plant rates by Monte Carlo simulation very quickly provides the plant-specific distribution. This differing solution behaviour may be due to the lack of uniform convergence over the (α, β, λ_i (i = 1, …, m)) space. Examples illustrate the difference that may be observed.
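A heavily simplified, hypothetical sketch of the distinction at issue: with the Γ-population parameters (α, β) fixed at assumed values (the full superpopulation treatment would also integrate over their uncertainty), Monte Carlo draws of each plant's rate λ_i from its own Gamma posterior differ visibly from the pooled "common incidence rate" posterior. All counts and parameter values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical plant data: event counts x_i over exposure times t_i (years).
x = np.array([0, 1, 4, 0, 2])
t = np.array([8.0, 10.0, 12.0, 6.0, 9.0])

# Assumed (fixed) gamma-population parameters for the rates lambda_i.
alpha, beta = 1.0, 5.0          # shape and rate; illustrative values only

# Plant-specific posteriors: lambda_i | x_i ~ Gamma(alpha + x_i, beta + t_i).
for i, (xi, ti) in enumerate(zip(x, t)):
    lam = rng.gamma(shape=alpha + xi, scale=1.0 / (beta + ti), size=50_000)
    print(f"plant {i}: mean {lam.mean():.3f}, 95th pct {np.percentile(lam, 95):.3f}")

# Pooled "common incidence rate" posterior, for contrast.
lam_pool = rng.gamma(shape=alpha + x.sum(), scale=1.0 / (beta + t.sum()), size=50_000)
print(f"pooled : mean {lam_pool.mean():.3f}, 95th pct {np.percentile(lam_pool, 95):.3f}")
```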

5.
Applications of Monte Carlo simulation methods to quantitative risk assessment are becoming increasingly popular. With this methodology, investigators have become concerned about correlations among input variables which might affect the resulting distribution of risk. We show that the choice of input distributions in these simulations likely has a larger effect on the resultant risk distribution than does the inclusion or exclusion of correlations. Previous investigators have studied the effect of correlated input variables for the addition of variables with any underlying distribution and for the product of lognormally distributed variables. The effects in the main part of the distribution are small unless the correlation and variances are large. We extend this work by considering addition, multiplication and division of two variables with assumed normal, lognormal, uniform and triangular distributions. For all possible pairwise combinations, we find that the effects of correlated input variables are similar to those observed for lognormal distributions, and thus relatively small overall. The effect of using different distributions, however, can be large.
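A small, hedged sketch of the kind of comparison described: the product of two lognormal inputs is simulated with and without correlation on the underlying normals, and then one input is swapped for a uniform distribution matched on mean and variance. All parameter values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
sig1, sig2 = 0.5, 0.5            # log-scale standard deviations (illustrative)

def lognormal_product_quantiles(rho):
    # Correlated underlying normals via a joint normal draw, then exponentiate.
    cov = [[sig1**2, rho * sig1 * sig2], [rho * sig1 * sig2, sig2**2]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    prod = np.exp(z[:, 0]) * np.exp(z[:, 1])
    return np.percentile(prod, [50, 95])

for rho in (0.0, 0.3, 0.8):
    print(f"lognormal x lognormal, rho = {rho:.1f}:", lognormal_product_quantiles(rho))

# Changing the distribution instead: replace one lognormal with a uniform
# matched on mean and variance, keeping the inputs independent.
m = np.exp(sig1**2 / 2)
v = (np.exp(sig1**2) - 1) * np.exp(sig1**2)
half_width = np.sqrt(3 * v)
u = rng.uniform(m - half_width, m + half_width, n)
prod_mixed = u * np.exp(rng.normal(0.0, sig2, n))
print("uniform x lognormal, rho = 0.0:", np.percentile(prod_mixed, [50, 95]))
```

Printing the quantile pairs side by side lets one compare the shift produced by correlation with the shift produced by changing a distributional form, in the spirit of the comparison the abstract reports.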

6.
In quantitative uncertainty analysis, it is essential to define rigorously the endpoint or target of the assessment. Two distinctly different approaches using Monte Carlo methods are discussed: (1) the endpoint is a fixed but unknown value (e.g., the maximally exposed individual, the average individual, or a specific individual) or (2) the endpoint is an unknown distribution of values (e.g., the variability of exposures among unspecified individuals in the population). In the first case, values are sampled at random from distributions representing various "degrees of belief" about the unknown "fixed" values of the parameters to produce a distribution of model results. The distribution of model results represents a subjective confidence statement about the true but unknown assessment endpoint. The important input parameters are those that contribute most to the spread in the distribution of the model results. In the second case, Monte Carlo calculations are performed in two dimensions, producing numerous alternative representations of the true but unknown distribution. These alternative distributions permit subjective confidence statements to be made from two perspectives: (1) for the individual exposure occurring at a specified fractile of the distribution or (2) for the fractile of the distribution associated with a specified level of individual exposure. The relative importance of input parameters will depend on the fractile or exposure level of interest. The quantification of uncertainty for the simulation of a true but unknown distribution of values represents the state-of-the-art in assessment modeling.
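A minimal two-dimensional Monte Carlo sketch of the second case, using a toy exposure model of our own (one uncertain parameter, one variability distribution; all numbers hypothetical): the outer loop samples a degree-of-belief distribution for the unknown parameter, the inner loop generates one alternative representation of the variability distribution, and confidence statements can then be read off either at a fixed fractile or at a fixed exposure level.

```python
import numpy as np

rng = np.random.default_rng(3)
n_outer, n_inner = 500, 2_000      # uncertainty loop, variability loop

# Uncertain parameter: the true (but unknown) geometric standard deviation
# of exposure, described by a degree-of-belief distribution (illustrative).
gsd_belief = rng.uniform(1.5, 3.0, n_outer)

p95_exposure = np.empty(n_outer)   # exposure at the 95th fractile, per outer draw
frac_above_1 = np.empty(n_outer)   # fraction of the population above 1 unit

for k, gsd in enumerate(gsd_belief):
    # One alternative representation of the true variability distribution.
    exposures = rng.lognormal(mean=np.log(0.3), sigma=np.log(gsd), size=n_inner)
    p95_exposure[k] = np.percentile(exposures, 95)
    frac_above_1[k] = np.mean(exposures > 1.0)

# Perspective 1: confidence about the exposure occurring at a specified fractile.
print("exposure at the 95th fractile, 90% subjective CI:",
      np.percentile(p95_exposure, [5, 95]))
# Perspective 2: confidence about the fractile associated with a specified exposure.
print("fraction exceeding 1.0 unit, 90% subjective CI  :",
      np.percentile(frac_above_1, [5, 95]))
```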

7.
Roger Cooke. Risk Analysis, 2010, 30(3): 330–339.
The practice of uncertainty factors as applied to noncancer endpoints in the IRIS database harkens back to traditional safety factors. In the era before risk quantification, these were used to build in a “margin of safety.” As risk quantification takes hold, the safety factor methods yield to quantitative risk calculations to guarantee safety. Many authors believe that uncertainty factors can be given a probabilistic interpretation as ratios of response rates, and that the reference values computed according to the IRIS methodology can thus be converted to random variables whose distributions can be computed with Monte Carlo methods, based on the distributions of the uncertainty factors. Recent proposals from the National Research Council echo this view. Based on probabilistic arguments, several authors claim that the current practice of uncertainty factors is overprotective. When interpreted probabilistically, uncertainty factors entail very strong assumptions on the underlying response rates. For example, the factor for extrapolating from animal to human is the same whether the dosage is chronic or subchronic. Together with independence assumptions, these assumptions entail that the covariance matrix of the logged response rates is singular. In other words, the accumulated assumptions entail a log-linear dependence between the response rates. This in turn means that any uncertainty analysis based on these assumptions is ill-conditioned; it effectively computes uncertainty conditional on a set of zero probability. The practice of uncertainty factors is due for a thorough review. Two directions are briefly sketched, one based on standard regression models, and one based on nonparametric continuous Bayesian belief nets.
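To make the singularity argument concrete, here is a small numeric check of a stylized version of the stated assumptions (our construction, not Cooke's exact model): if the animal-to-human factor is the same random quantity for chronic and subchronic dosing and the factors are independent, the four logged response rates are linear combinations of only three independent terms, so their covariance matrix cannot have full rank.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Independent log-scale quantities (distributions are illustrative):
log_animal_chronic = rng.normal(0.0, 1.0, n)
log_sub_to_chronic = rng.normal(0.5, 0.3, n)    # subchronic-to-chronic factor
log_animal_to_human = rng.normal(1.0, 0.4, n)   # same factor for both regimes

# The four logged response rates implied by these assumptions:
logged_rates = np.stack([
    log_animal_chronic,
    log_animal_chronic + log_sub_to_chronic,                        # animal, subchronic
    log_animal_chronic + log_animal_to_human,                       # human, chronic
    log_animal_chronic + log_sub_to_chronic + log_animal_to_human,  # human, subchronic
])

cov = np.cov(logged_rates)
print("covariance matrix rank:", np.linalg.matrix_rank(cov), "of", cov.shape[0])
# Rank 3 of 4: the rows satisfy the exact log-linear relation
# row0 - row1 - row2 + row3 = 0, so the joint distribution is singular.
```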

8.
Discrete Probability Distributions for Probabilistic Fracture Mechanics
Recently, discrete probability distributions (DPDs) have been suggested for use in risk analysis calculations to simplify the numerical computations which must be performed to determine failure probabilities. Specifically, DPDs have been developed to investigate probabilistic functions, that is, functions whose exact form is uncertain. The analysis of defect growth in materials by probabilistic fracture mechanics (PFM) models provides an example in which probabilistic functions play an important role. This paper compares and contrasts Monte Carlo simulation and DPDs as tools for calculating material failure due to fatigue crack growth. For the problem studied, the DPD method takes approximately one third the computation time of the Monte Carlo approach for comparable accuracy. It is concluded that the DPD method has considerable promise in low-failure-probability calculations of importance in risk assessment. In contrast to Monte Carlo, the computation time for the DPD approach is relatively insensitive to the magnitude of the probability being estimated.
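A hedged sketch of the contrast on a toy load-versus-capacity failure model of our own devising (not the paper's fatigue-crack-growth model): the DPD route discretizes each input into a modest number of (value, probability) pairs and combines them exhaustively, while Monte Carlo samples the inputs; the DPD cost is fixed by the number of bins and does not grow as the failure probability shrinks.

```python
import numpy as np
from scipy import stats

# Toy failure criterion: failure occurs if load L exceeds capacity C.
load = stats.lognorm(s=0.4, scale=1.0)
capacity = stats.lognorm(s=0.2, scale=3.0)

def dpd(dist, n_bins=30):
    """Discrete probability distribution: equal-probability bins, midpoint values
    (tails beyond the 0.1% / 99.9% quantiles are truncated in this sketch)."""
    edges = dist.ppf(np.linspace(0.001, 0.999, n_bins + 1))
    values = 0.5 * (edges[:-1] + edges[1:])
    probs = np.full(n_bins, 1.0 / n_bins)
    return values, probs

lv, lp = dpd(load)
cv, cp = dpd(capacity)
# Exhaustive combination of the two DPDs: sum bin-probability products
# over every (load, capacity) pair that violates the criterion.
p_fail_dpd = np.sum(np.outer(lp, cp)[np.subtract.outer(lv, cv) > 0])

rng = np.random.default_rng(5)
n = 1_000_000
p_fail_mc = np.mean(load.rvs(n, random_state=rng) > capacity.rvs(n, random_state=rng))

print(f"DPD estimate        : {p_fail_dpd:.2e}  ({len(lv) * len(cv)} combinations)")
print(f"Monte Carlo estimate: {p_fail_mc:.2e}  ({n:,} samples)")
```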

9.
Accuracy of the Pearson-Tukey three-point approximation is measured in units of standard deviation and compared with that of Monte Carlo simulation. Using a variety of well-known distributions, comparisons are made for the mean of a random variable and for common functions of one and two random variables. Comparisons are also made for the mean of an assortment of risk-analysis (Monte Carlo) models drawn from the literature. The results suggest that the Pearson-Tukey approximation is a useful alternative to simulation in risk-analysis situations.
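A concrete, hedged rendering of the comparison: the Pearson-Tukey three-point approximation puts weights 0.185, 0.630, and 0.185 on the 5th, 50th, and 95th percentiles. The sketch below applies it to the mean of a lognormal variable and to a simple ratio of two random variables (both examples are ours, not those from the paper) and compares against Monte Carlo.

```python
import numpy as np
from scipy import stats

W = np.array([0.185, 0.630, 0.185])          # Pearson-Tukey weights
P = np.array([0.05, 0.50, 0.95])             # ...applied at these percentiles

def pt_mean(dist):
    """Three-point approximation to E[X]."""
    return float(np.dot(W, dist.ppf(P)))

x = stats.lognorm(s=0.8, scale=2.0)          # illustrative distribution
print("Pearson-Tukey mean:", round(pt_mean(x), 3), "  exact mean:", round(x.mean(), 3))

# Function of two variables (illustrative): combine the 3 x 3 grid of
# percentile values using product weights.
y = stats.uniform(loc=1.0, scale=4.0)
xg, yg = x.ppf(P), y.ppf(P)
approx = sum(W[i] * W[j] * (xg[i] / yg[j]) for i in range(3) for j in range(3))

rng = np.random.default_rng(11)
mc = np.mean(x.rvs(500_000, random_state=rng) / y.rvs(500_000, random_state=rng))
print("E[x / y]  three-point:", round(approx, 3), "  Monte Carlo:", round(mc, 3))
```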

10.
Risk Analysis, 2018, 38(8): 1576–1584.
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
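As a hedged sketch of the comparison (a toy fault tree of our own, not the article's model, and a moment-matched stand-in for its closed-form lognormal approximation): the top event is taken as TOP = (A AND B) OR C with the rare-event approximation p_top ≈ p_A·p_B + p_C and lognormal uncertainty on each basic event. The script contrasts full Monte Carlo, a lognormal approximation fitted to the analytic mean and variance of p_top, and a Wilks 95/95 bound from just 59 samples.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Lognormal uncertainty on the basic-event probabilities (illustrative values).
med = np.array([1e-3, 2e-3, 1e-4])     # medians of pA, pB, pC
sig = np.array([0.7, 0.7, 1.0])        # log-scale standard deviations

def sample_p_top(n):
    p = np.exp(np.log(med) + sig * rng.standard_normal((n, 3)))
    return p[:, 0] * p[:, 1] + p[:, 2]          # rare-event approximation

# Full Monte Carlo reference.
mc = sample_p_top(200_000)
print("Monte Carlo 95th percentile:", np.percentile(mc, 95))

# Moment-matched lognormal approximation to p_top (a sketch of the closed-form idea).
m = med * np.exp(sig**2 / 2)                     # means of pA, pB, pC
v = (np.exp(sig**2) - 1) * m**2                  # variances
mean_top = m[0] * m[1] + m[2]
var_top = (v[0] + m[0]**2) * (v[1] + m[1]**2) - (m[0] * m[1])**2 + v[2]
s2 = np.log(1 + var_top / mean_top**2)
mu = np.log(mean_top) - s2 / 2
print("lognormal approx, 95th pct :", np.exp(mu + 1.645 * np.sqrt(s2)))

# Wilks 95/95: with 59 runs, the sample maximum bounds the 95th percentile
# with 95% confidence (smallest n such that 1 - 0.95**n >= 0.95).
print("Wilks 95/95 upper bound    :", sample_p_top(59).max())
```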

11.
Concern about the degree of uncertainty and potential conservatism in deterministic point estimates of risk has prompted researchers to turn increasingly to probabilistic methods for risk assessment. With Monte Carlo simulation techniques, distributions of risk reflecting uncertainty and/or variability are generated as an alternative. In this paper the compounding of conservatism(1) between the level associated with point estimate inputs selected from probability distributions and the level associated with the deterministic value of risk calculated using these inputs is explored. Two measures of compounded conservatism are compared and contrasted. The first measure considered, F, is defined as the ratio of the risk value, R_d, calculated deterministically as a function of n inputs each at the jth percentile of its probability distribution, and the risk value, R_j, that falls at the jth percentile of the simulated risk distribution (i.e., F = R_d/R_j). The percentile of the simulated risk distribution which corresponds to the deterministic value, R_d, serves as a second measure of compounded conservatism. Analytical results for simple products of lognormal distributions are presented. In addition, a numerical treatment of several complex cases is presented using five simulation analyses from the literature to illustrate. Overall, there are cases in which conservatism compounds dramatically for deterministic point estimates of risk constructed from upper percentiles of input parameters, as well as those for which the effect is less notable. The analytical and numerical techniques discussed are intended to help analysts explore the factors that influence the magnitude of compounding conservatism in specific cases.
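A hedged numerical sketch of both measures for a simple product of lognormal inputs (the number of inputs and their spread are illustrative): every input is set at its 95th percentile to obtain R_d, the simulated risk distribution supplies R_j (here the 95th percentile), and the percentile at which R_d falls is read off directly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(99)
n_inputs, j = 4, 95                      # number of inputs, percentile used
inputs = [stats.lognorm(s=0.6, scale=1.0) for _ in range(n_inputs)]  # illustrative

# Deterministic risk: every input at its j-th percentile.
R_d = np.prod([d.ppf(j / 100) for d in inputs])

# Simulated risk distribution: product of independent random draws.
draws = np.prod([d.rvs(200_000, random_state=rng) for d in inputs], axis=0)
R_j = np.percentile(draws, j)

print(f"first measure  F = R_d / R_j = {R_d / R_j:.2f}")
print(f"second measure: R_d falls at the {100 * np.mean(draws <= R_d):.2f}th percentile")
```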

12.
The aging domestic oil production infrastructure represents a high risk to the environment because of the type of fluids being handled (oil and brine) and the potential for accidental release of these fluids into sensitive ecosystems. Currently, there is no quantitative risk model directly applicable to onshore oil exploration and production (E&P) facilities. We report on a probabilistic reliability model created for onshore E&P facilities. Reliability theory, failure modes and effects analysis (FMEA), and event trees were used to develop the model estimates of the failure probability of typical oil production equipment. Monte Carlo simulation was used to translate uncertainty in input parameter values to uncertainty in the model output. The predicted failure rates were calibrated to available failure rate information by adjusting probability density function parameters used as random variates in the Monte Carlo simulations. The mean and standard deviation of normal variate distributions from which the Weibull distribution characteristic life was chosen were used as adjustable parameters in the model calibration. The model was applied to oil production leases in the Tallgrass Prairie Preserve, Oklahoma. We present the estimated failure probability due to the combination of the most significant failure modes associated with each type of equipment (pumps, tanks, and pipes). The results show that the estimated probability of failure for tanks is about the same as that for pipes, but that pumps have much lower failure probability. The model can provide necessary equipment reliability information for proactive risk management at the lease level by providing quantitative information on which to base the allocation of maintenance resources to high-risk equipment, minimizing both lost production and ecosystem damage.
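A minimal sketch of the uncertainty-propagation step described above, with entirely hypothetical numbers: the Weibull characteristic life of a piece of equipment is drawn from a normal distribution (the adjustable calibration parameters mentioned in the abstract), and Monte Carlo simulation converts that input uncertainty into uncertainty in the probability of failure over a fixed time horizon.

```python
import numpy as np

rng = np.random.default_rng(17)
n = 50_000

# Hypothetical calibration parameters: mean and standard deviation of the
# normal distribution from which the Weibull characteristic life (years) is drawn.
eta_mean, eta_sd = 15.0, 4.0
beta_shape = 1.8                  # assumed Weibull shape parameter
t = 5.0                           # time horizon of interest (years)

eta = rng.normal(eta_mean, eta_sd, n).clip(min=1.0)

# Probability of failure by time t for a Weibull(beta, eta) life distribution.
p_fail = 1.0 - np.exp(-(t / eta) ** beta_shape)

print("mean failure probability:", round(float(p_fail.mean()), 3))
print("90% uncertainty interval:", np.percentile(p_fail, [5, 95]).round(3))
```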

13.
Bayesian Monte Carlo (BMC) decision analysis adopts a sampling procedure to estimate likelihoods and distributions of outcomes, and then uses that information to calculate the expected performance of alternative strategies, the value of information, and the value of including uncertainty. These decision analysis outputs are therefore subject to sample error. The standard error of each estimate and its bias, if any, can be estimated by the bootstrap procedure. The bootstrap operates by resampling (with replacement) from the original BMC sample, and redoing the decision analysis. Repeating this procedure yields a distribution of decision analysis outputs. The bootstrap approach to estimating the effect of sample error upon BMC analysis is illustrated with a simple value-of-information calculation along with an analysis of a proposed control structure for Lake Erie. The examples show that the outputs of BMC decision analysis can have high levels of sample error and bias.
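A hedged sketch of the bootstrap idea on a toy two-strategy decision problem of our own (not the Lake Erie application): the expected value of perfect information is estimated from a Bayesian Monte Carlo sample, and resampling that sample with replacement yields its standard error and bias.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 2_000                                    # BMC sample size (deliberately modest)

# Toy uncertain state and the payoffs of two alternative strategies (illustrative).
theta = rng.lognormal(mean=0.0, sigma=0.8, size=n)
payoff = np.column_stack([10.0 - 2.0 * theta,     # strategy A
                          8.0 - 1.0 * theta])     # strategy B

def evpi(p):
    # Value of perfect information: E[max over actions] - max over actions of E[payoff].
    return np.mean(p.max(axis=1)) - p.mean(axis=0).max()

point = evpi(payoff)

# Bootstrap: resample the BMC draws with replacement and redo the analysis.
boot = np.array([evpi(payoff[rng.integers(0, n, n)]) for _ in range(2_000)])

print(f"EVPI estimate  : {point:.3f}")
print(f"bootstrap SE   : {boot.std(ddof=1):.3f}")
print(f"bootstrap bias : {boot.mean() - point:+.3f}")
```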

14.
Fish consumption rates play a critical role in the assessment of human health risks posed by the consumption of fish from chemically contaminated water bodies. Based on data from the 1989 Michigan Sport Anglers Fish Consumption Survey, we examined total fish consumption, consumption of self-caught fish, and consumption of Great Lakes fish for all adults, men, women, and certain higher risk subgroups such as anglers. We present average daily consumption rates as compound probability distributions consisting of a Bernoulli trial (to distinguish those who ate fish from those who did not) combined with a distribution (both empirical and parametric) for those who ate fish. We found that the average daily consumption rates for adults who ate fish are reasonably well fit by lognormal distributions. The compound distributions may be used as input variables for Monte Carlo simulations in public health risk assessments.
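A brief sketch of the compound-distribution construction, with hypothetical parameter values rather than the survey's fitted ones: a Bernoulli trial decides whether an individual ate fish at all, and consumers' average daily rates are drawn from a lognormal distribution; non-consumers contribute zeros.

```python
import numpy as np

rng = np.random.default_rng(1989)
n = 100_000

# Hypothetical parameters (not the fitted 1989 Michigan survey values):
p_consumer = 0.7                     # probability an adult ate any fish
median_g_day, gsd = 15.0, 2.5        # lognormal for consumers, grams/day

ate_fish = rng.random(n) < p_consumer
rate = np.where(ate_fish,
                rng.lognormal(np.log(median_g_day), np.log(gsd), n),
                0.0)                 # non-consumers contribute zero intake

print("fraction with zero intake:", round(float(np.mean(rate == 0.0)), 3))
print("mean rate (all adults)   :", round(float(rate.mean()), 1), "g/day")
print("95th percentile          :", round(float(np.percentile(rate, 95)), 1), "g/day")
```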

15.
Interest in examining both the uncertainty and variability in environmental health risk assessments has led to increased use of methods for propagating uncertainty. While a variety of approaches have been described, the advent of both powerful personal computers and commercially available simulation software has led to increased use of Monte Carlo simulation. Although most analysts and regulators are encouraged by these developments, some are concerned that Monte Carlo analysis is being applied uncritically. The validity of any analysis is contingent on the validity of the inputs to the analysis. In the propagation of uncertainty or variability, it is essential that the statistical distributions of the input variables are properly specified. Furthermore, any dependencies among the input variables must be considered in the analysis. In light of the potential difficulty in specifying dependencies among input variables, it is useful to consider whether there exist rules of thumb as to when correlations can be safely ignored (i.e., when little overall precision is gained by an additional effort to improve upon an estimation of correlation). We make use of well-known error propagation formulas to develop expressions intended to aid the analyst in situations wherein normally and lognormally distributed variables are linearly correlated.
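A compact sketch of the resulting rule of thumb for two linearly correlated terms (equivalently, the log of a product of correlated lognormals), under assumed standard deviations: the covariance term rescales the variance by 1 + 2ρσ₁σ₂ / (σ₁² + σ₂²), so ignoring the correlation costs little whenever ρ is small or one variance dominates.

```python
import numpy as np

def variance_ratio(s1, s2, rho):
    """Var(X1 + X2) with correlation rho, relative to the independent case."""
    return (s1**2 + s2**2 + 2.0 * rho * s1 * s2) / (s1**2 + s2**2)

# Illustrative grid: how far off is the variance if correlation is ignored?
for rho in (0.1, 0.3, 0.6, 0.9):
    for s1, s2 in ((1.0, 1.0), (1.0, 0.3)):
        print(f"rho = {rho:.1f}, sigmas = ({s1}, {s2}): "
              f"variance understated by factor {variance_ratio(s1, s2, rho):.2f}")
```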

16.
Probabilistic risk assessments are enjoying increasing popularity as a tool to characterize the health hazards associated with exposure to chemicals in the environment. Because probabilistic analyses provide much more information to the risk manager than standard “point” risk estimates, this approach has generally been heralded as one which could significantly improve the conduct of health risk assessments. The primary obstacles to replacing point estimates with probabilistic techniques include a general lack of familiarity with the approach and a lack of regulatory policy and guidance. This paper discusses some of the advantages and disadvantages of the point estimate vs. probabilistic approach. Three case studies are presented which contrast and compare the results of each. The first addresses the risks associated with household exposure to volatile chemicals in tapwater. The second evaluates airborne dioxin emissions which can enter the food chain. The third illustrates how to derive health-based cleanup levels for dioxin in soil. It is shown that, based on the results of Monte Carlo analyses of probability density functions (PDFs), the point estimate approach required by most regulatory agencies will nearly always overpredict the risk for the 95th percentile person by a factor of up to 5. When the assessment requires consideration of 10 or more exposure variables, the point estimate approach will often predict risks representative of the 99.9th percentile person rather than the 50th or 95th percentile person. This paper recommends a number of data distributions for various exposure variables that we believe are now sufficiently well understood to be used with confidence in most exposure assessments. A list of exposure variables that may require additional research before adequate data distributions can be developed is also discussed.
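To illustrate the last quantitative claim with a hedged numerical sketch (generic lognormal exposure variables, not the tapwater, dioxin, or soil case studies): as the number of inputs fixed at their 95th percentiles grows, the percentile of the simulated risk distribution at which the point estimate falls climbs rapidly toward and beyond the 99.9th.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(95)
gsd = 2.0                         # illustrative geometric standard deviation per input
sigma = np.log(gsd)

for n_vars in (2, 5, 10, 15):
    # Point-estimate risk: every input at its 95th percentile (medians set to 1).
    point = np.exp(n_vars * stats.norm.ppf(0.95) * sigma)
    # Monte Carlo risk distribution: product of independent lognormal inputs.
    draws = np.exp(sigma * rng.standard_normal((200_000, n_vars))).prod(axis=1)
    pct = 100.0 * np.mean(draws <= point)
    print(f"{n_vars:2d} variables: point estimate sits at the {pct:.2f}th percentile")
```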

17.
An Alternative Approach to Dietary Exposure Assessment
The method of dietary exposure assessment currently used by the Environmental Protection Agency (EPA), the Dietary Residue Evaluation System (DRES), combines a consumption distribution derived from the United States Department of Agriculture (USDA) 1977-1978 Nationwide Food Consumption Survey (NFCS) with a single estimate of residue level. The National Academy of Sciences(1) recommended that EPA incorporate both the distribution of residues and the distribution of consumption into their exposure assessment methodology and proposed using a Monte Carlo approach. This paper presents an alternative method, the Joint Distributional Analysis (JDA), that combines the consumption and residue distributions, without relying on random sampling or fitting theoretical distributions like the Monte Carlo method. This method permits simultaneous analysis of the entire diet, including assessing exposure from residues in different foods.

18.
Methods for Uncertainty Analysis: A Comparative Survey
This paper presents a survey and comparative evaluation of methods which have been developed for the determination of uncertainties in accident consequences and probabilities, for use in probabilistic risk assessment. The methods considered are: analytic techniques, Monte Carlo simulation, response surface approaches, differential sensitivity techniques, and evaluation of classical statistical confidence bounds. It is concluded that only the response surface and differential sensitivity approaches are sufficiently general and flexible for use as overall methods of uncertainty analysis in probabilistic risk assessment. The other methods considered, however, are very useful in particular problems.

19.
In this article, Bayesian networks are used to model semiconductor lifetime data obtained from a cyclic stress test system. The data of interest are a mixture of log-normal distributions, representing two dominant physical failure mechanisms. Moreover, the data can be censored due to limited test resources. For a better understanding of the complex lifetime behavior, interactions between test settings, geometric designs, material properties, and physical parameters of the semiconductor device are modeled by a Bayesian network. Statistical toolboxes in MATLAB® have been extended and applied to find the best structure of the Bayesian network and to perform parameter learning. Due to censored observations, Markov chain Monte Carlo (MCMC) simulations are employed to determine the posterior distributions. For model selection the automatic relevance determination (ARD) algorithm and goodness-of-fit criteria such as marginal likelihoods, Bayes factors, posterior predictive density distributions, and sum of squared errors of prediction (SSEP) are applied and evaluated. The results indicate that the application of Bayesian networks to semiconductor reliability provides useful information about the interactions between the significant covariates and serves as a reliable alternative to currently applied methods.

20.
Motivated by the stochastic volatility and infinite-jump features of forward LIBOR rates, and by the limitations of the standard LIBOR market model (LMM) and the stochastic-volatility LIBOR market model (SV-LMM), this paper introduces a Levy infinite-jump process and builds a multi-factor, non-standard Levy-jump stochastic-volatility LIBOR market model (SVLEVY-LMM). On this basis, under a nonparametric correlation-matrix assumption, the model's local volatility and instantaneous correlation parameters are calibrated to the market using the main calibration instruments, such as swaptions (Swaption) and interest rate caps (Cap), together with Monte Carlo simulation, while the Levy-jump and stochastic-volatility parameters are estimated with an adaptive Markov chain Monte Carlo (A-MCMC) method. The empirical results indicate that, for calibrating forward-rate volatility, a piecewise-constant volatility structure better matches market conditions; for calibrating the forward-rate correlation matrix, the nonparametric correlation matrix yields the smallest estimation error and the best market fit; and the SVLEVY-LMM provides the best fit to forward LIBOR rates.
