Similar Literature
19 similar records found
1.
Accuracy of the Pearson-Tukey three-point approximation is measured in units of standard deviation and compared with that of Monte Carlo simulation. Using a variety of well-known distributions, comparisons are made for the mean of a random variable and for common functions of one and two random variables. Comparisons are also made for the mean of an assortment of risk-analysis (Monte Carlo) models drawn from the literature. The results suggest that the Pearson-Tukey approximation is a useful alternative to simulation in risk-analysis situations.
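A minimal sketch of the comparison (not the paper's code): the Pearson-Tukey rule estimates a mean from three percentiles, E[X] ~= 0.185*P05 + 0.630*P50 + 0.185*P95, checked here against a Monte Carlo estimate for an illustrative lognormal distribution.

```python
import numpy as np
from scipy import stats

def pearson_tukey_mean(ppf):
    """Pearson-Tukey three-point approximation to E[X] from the 5th, 50th, and 95th percentiles."""
    return 0.185 * ppf(0.05) + 0.630 * ppf(0.50) + 0.185 * ppf(0.95)

dist = stats.lognorm(s=0.5, scale=1.0)  # illustrative skewed distribution
pt_est = pearson_tukey_mean(dist.ppf)
mc_est = dist.rvs(size=100_000, random_state=0).mean()  # Monte Carlo benchmark
print(f"Pearson-Tukey {pt_est:.4f}  Monte Carlo {mc_est:.4f}  exact {dist.mean():.4f}")
```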

2.
Traditionally, microbial risk assessors have used point estimates to evaluate the probability that an individual will become infected. We developed a quantitative approach that shifts the risk characterization perspective from point estimate to distributional estimate, and from individual to population. To this end, we first designed and implemented a dynamic model that tracks traditional epidemiological variables, such as the numbers of susceptible, infected, diseased, and immune individuals, and environmental variables, such as pathogen density. Second, we used a simulation methodology that explicitly acknowledges the uncertainty and variability associated with the data. Specifically, the approach consists of assigning probability distributions to each parameter, sampling from these distributions for Monte Carlo simulations, and using a binary classification to assess the output of each simulation. A case study is presented that explores the uncertainties in assessing the risk of giardiasis when swimming in a recreational impoundment using reclaimed water. Using literature-based information to assign parameter ranges, our analysis demonstrated that the parameter describing the shedding of pathogens by infected swimmers contributed most to the uncertainty in risk. The importance of other parameters depended on reducing the a priori range of this shedding parameter. When the shedding parameter was constrained to its lower subrange, treatment efficiency was the parameter most important in predicting whether a simulation resulted in prevalences above or below nonoutbreak levels, whereas parameters associated with human exposure were important when the shedding parameter was constrained to its higher subrange. This Monte Carlo simulation technique identified conditions in which outbreaks and/or nonoutbreaks are likely and identified the parameters that contributed most to the uncertainty associated with a risk prediction.
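A toy sketch of the simulate-and-classify loop described above; the dynamics, parameter names, and ranges below are hypothetical placeholders, not the authors' giardiasis model.

```python
import numpy as np

rng = np.random.default_rng(1)
OUTBREAK_PREVALENCE = 0.05  # hypothetical classification threshold

def run_model(shedding, treatment_eff, days=100, n_pop=1000):
    """Toy susceptible-infected dynamics driven by pathogen density in the water."""
    infected, density = 10, 0.0
    for _ in range(days):
        density = 0.8 * density + shedding * infected * (1 - treatment_eff)
        p_inf = 1 - np.exp(-1e-4 * density)  # toy dose-response relation
        infected += rng.binomial(n_pop - infected, p_inf) - rng.binomial(infected, 0.1)
    return infected / n_pop

# Monte Carlo over uncertain parameters, then binary classification of each run
results = []
for _ in range(500):
    shedding = rng.uniform(0.01, 10.0)       # a priori range (hypothetical)
    treatment_eff = rng.uniform(0.5, 0.999)  # a priori range (hypothetical)
    results.append(run_model(shedding, treatment_eff) > OUTBREAK_PREVALENCE)
print(f"fraction of simulations classified as outbreaks: {np.mean(results):.2f}")
```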

3.
DETECT     
DETECT is an inexpensive, easy to use, general-purpose, Monte Carlo simulation program for IBM and compatible personal computers. It can be used to quickly analyze fault trees or functions of random variables. DETECT provides a wide variety of input distributions to choose from and a dependency (correlation) option. The result of the analysis is a probability distribution over the variable of interest. We look forward to further improvements (e.g., graphics, full-screen editing, ability to inspect intermediate results) that will make DETECT even more useful and attractive.

4.
Most public health risk assessments assume and combine a series of average, conservative, and worst-case values to derive a conservative point estimate of risk. This procedure has major limitations. This paper demonstrates a new methodology for extended uncertainty analyses in public health risk assessments using Monte Carlo techniques. The extended method begins, as some conventional methods do, with the preparation of a spreadsheet to estimate exposure and risk. This method, however, continues by modeling key inputs as random variables described by probability density functions (PDFs). Overall, the technique provides a quantitative way to estimate the probability distributions for exposure and health risks within the validity of the model used. As an example, this paper presents a simplified case study for children playing in soils contaminated with benzene and benzo(a)pyrene (BaP).
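A minimal sketch of the extended method under stated assumptions: the intake equation below is the generic average-daily-dose form, and the input distributions are illustrative, not the paper's benzene/BaP case-study values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Key inputs modeled as random variables (illustrative PDFs)
conc = rng.lognormal(mean=np.log(5.0), sigma=0.8, size=n)  # soil concentration, mg/kg
intake = rng.triangular(50, 100, 200, size=n)              # soil ingestion, mg/day
body_weight = rng.normal(20, 3, size=n).clip(10, None)     # child body weight, kg
exp_freq = rng.uniform(100, 350, size=n)                   # exposure days per year

# Average daily dose, mg/(kg*day); 1e-6 converts mg of soil to kg of soil
dose = conc * intake * 1e-6 * exp_freq / (body_weight * 365)

for q in (50, 90, 95, 99):
    print(f"{q}th percentile dose: {np.percentile(dose, q):.2e} mg/(kg*day)")
```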

5.
A Monte Carlo method is presented to study the effect of systematic and random errors on computer models that deal mainly with experimental data. A common assumption in such models (linear and nonlinear regression, and nonregression computer models) involving experimental measurements is that the error sources are mainly random and independent, with no constant background (systematic) errors. However, comparisons of different experimental data sources often reveal evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for the output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions of the random and systematic errors. The main objectives are to detect which error source has stochastic dominance in the uncertainty propagation and to assess their combined effect on the output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has a more significant effect on the performance of the model. It was also found that systematic or calibration errors, if present, cannot be neglected in uncertainty analysis of models dependent on experimental measurements, such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety-factor selection, modeling, experimental data measurement, and experimental design.
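A sketch of the two-error-type Monte Carlo, with an illustrative model and error magnitudes: each replicate draws a systematic bias once (held fixed within the replicate) plus independent random errors, and the output distributions with and without the bias term are compared.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    return 2.0 * x ** 1.5  # stand-in for a regression or property model

n_rep, n_meas, x_true = 5000, 20, 3.0
outputs_rand, outputs_both = [], []
for _ in range(n_rep):
    bias = rng.normal(0.0, 0.05 * x_true)            # systematic (calibration) error
    noise = rng.normal(0.0, 0.02 * x_true, n_meas)   # random, independent errors
    outputs_rand.append(model((x_true + noise).mean()))
    outputs_both.append(model((x_true + bias + noise).mean()))

# Compare empirical output spreads: the systematic term dominates here
print("90% interval, random only:", np.percentile(outputs_rand, [5, 95]))
print("90% interval, random + systematic:", np.percentile(outputs_both, [5, 95]))
```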

6.
The aging domestic oil production infrastructure represents a high risk to the environment because of the type of fluids being handled (oil and brine) and the potential for accidental release of these fluids into sensitive ecosystems. Currently, no quantitative risk model is directly applicable to onshore oil exploration and production (E&P) facilities. We report on a probabilistic reliability model created for such facilities. Reliability theory, failure modes and effects analysis (FMEA), and event trees were used to develop the model's estimates of the failure probability of typical oil production equipment. Monte Carlo simulation was used to translate uncertainty in input parameter values into uncertainty in the model output. The predicted failure rates were calibrated to available failure rate information by adjusting the probability density function parameters used as random variates in the Monte Carlo simulations: the mean and standard deviation of the normal distributions from which the Weibull characteristic life was drawn served as the adjustable calibration parameters. The model was applied to oil production leases in the Tallgrass Prairie Preserve, Oklahoma. We present the estimated failure probability due to the combination of the most significant failure modes associated with each type of equipment (pumps, tanks, and pipes). The results show that the estimated probability of failure for tanks is about the same as that for pipes, but that pumps have a much lower failure probability. The model can provide the equipment reliability information needed for proactive risk management at the lease level, supplying quantitative grounds for allocating maintenance resources to high-risk equipment so as to minimize both lost production and ecosystem damage.
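A compact sketch of the calibration device described above, with illustrative numbers rather than the paper's fitted values: the Weibull characteristic life is itself drawn from a normal distribution whose mean and standard deviation serve as the adjustable parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def failure_probability(t_years, eta_mean, eta_sd, shape=2.0, n=20_000):
    """P(failure before t) for Weibull(shape, eta) with eta ~ Normal(eta_mean, eta_sd)."""
    eta = rng.normal(eta_mean, eta_sd, n).clip(1e-6, None)  # characteristic life draws
    return (1.0 - np.exp(-(t_years / eta) ** shape)).mean()

# e.g., tanks vs. pumps with different (hypothetical) characteristic lives
print("tank 10-yr failure prob:", failure_probability(10, eta_mean=25, eta_sd=5))
print("pump 10-yr failure prob:", failure_probability(10, eta_mean=60, eta_sd=10))
```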

7.
This paper demonstrates a new methodology for probabilistic public health risk assessment using the first-order reliability method (FORM). The method provides the probability that incremental lifetime cancer risk exceeds a threshold level, together with a probabilistic sensitivity quantifying the relative impact of the uncertainty in each random variable on the exceedance probability. The approach is applied to a case study given by Thompson et al. (1) on cancer risk caused by ingestion of benzene-contaminated soil, and the results are compared with those of the Monte Carlo method. Parametric sensitivity analyses are conducted to assess the sensitivity of the probabilistic event with respect to the distribution parameters of the basic random variables, such as the mean and standard deviation. The technique is a novel approach to probabilistic risk assessment and can be used when Monte Carlo analysis is computationally expensive, such as when the simulated risk lies in the tail of the risk probability distribution.
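A generic FORM sketch (not the paper's implementation): find the design point minimizing ||u|| on the limit-state surface in standard-normal space; with the origin in the safe region, the exceedance probability is approximately Phi(-beta). The toy risk model and threshold are illustrative.

```python
import numpy as np
from scipy import optimize, stats

# Toy risk model in standard-normal space u: lognormal dose and slope factor
def risk(u):
    dose = np.exp(-6.0 + 0.8 * u[0])   # illustrative lognormal dose
    slope = np.exp(-3.0 + 0.5 * u[1])  # illustrative lognormal slope factor
    return dose * slope

THRESHOLD = 1e-3  # risk level whose exceedance probability we want

def g(u):  # limit-state function: g < 0 means the threshold is exceeded
    return THRESHOLD - risk(u)

# Design point: the closest point to the origin on the surface g(u) = 0
res = optimize.minimize(lambda u: np.dot(u, u), x0=np.array([1.0, 1.0]),
                        constraints={"type": "eq", "fun": g}, method="SLSQP")
beta = np.sqrt(res.fun)  # reliability index (origin assumed in the safe region)
print(f"beta = {beta:.3f}, P(risk > {THRESHOLD:g}) ~= {stats.norm.cdf(-beta):.3e}")
```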

8.
Discrete Probability Distributions for Probabilistic Fracture Mechanics
Recently, discrete probability distributions (DPDs) have been suggested for use in risk analysis calculations to simplify the numerical computations which must be performed to determine failure probabilities. Specifically, DPDs have been developed to investigate probabilistic functions, that is, functions whose exact form is uncertain. The analysis of defect growth in materials by probabilistic fracture mechanics (PFM) models provides an example in which probabilistic functions play an important role. This paper compares and contrasts Monte Carlo simulation and DPDs as tools for calculating material failure due to fatigue crack growth. For the problem studied, the DPD method takes approximately one third the computation time of the Monte Carlo approach for comparable accuracy. It is concluded that the DPD method has considerable promise in low-failure-probability calculations of importance in risk assessment. In contrast to Monte Carlo, the computation time for the DPD approach is relatively insensitive to the magnitude of the probability being estimated.
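A small sketch of DPD arithmetic under stated assumptions: two toy DPDs are combined through a crack-growth-style function by enumerating outcome pairs, then the result is condensed back to a few probability-weighted bins so repeated operations stay cheap. The values are illustrative, not a real PFM model.

```python
import numpy as np

def combine(xs, px, ys, py, func):
    """All pairwise outcomes of func(x, y) with product probabilities."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    return func(X, Y).ravel(), np.outer(px, py).ravel()

def condense(vals, probs, n_bins=5):
    """Condense a DPD to at most n_bins probability-weighted bin means."""
    order = np.argsort(vals)
    vals, probs = vals[order], probs[order]
    edges = np.linspace(0, 1, n_bins + 1)
    idx = np.searchsorted(edges[1:-1], np.cumsum(probs))  # bin index per outcome
    keep = [b for b in range(n_bins) if np.any(idx == b)]
    out_v = np.array([np.average(vals[idx == b], weights=probs[idx == b]) for b in keep])
    out_p = np.array([probs[idx == b].sum() for b in keep])
    return out_v, out_p

# Toy example: crack growth per cycle = C * (stress range)^3, both factors uncertain
c_vals, c_p = np.array([1e-12, 2e-12, 4e-12]), np.array([0.25, 0.5, 0.25])
s_vals, s_p = np.array([80.0, 100.0, 120.0]), np.array([0.3, 0.4, 0.3])
growth, gp = condense(*combine(c_vals, c_p, s_vals, s_p, lambda c, s: c * s ** 3.0))
for v, p in zip(growth, gp):
    print(f"growth {v:.2e} per cycle with probability {p:.3f}")
```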

9.
A. E. Ades, G. Lu. Risk Analysis, 2003, 23(6): 1165-1172
Monte Carlo simulation has become the accepted method for propagating parameter uncertainty through risk models. It is widely appreciated, however, that correlations between input variables must be taken into account if models are to deliver correct assessments of uncertainty in risk. Various two-stage methods have been proposed that first estimate a correlation structure and then generate Monte Carlo simulations, which incorporate this structure while leaving marginal distributions of parameters unchanged. Here we propose a one-stage alternative, in which the correlation structure is estimated from the data directly by Bayesian Markov Chain Monte Carlo methods. Samples from the posterior distribution of the outputs then correctly reflect the correlation between parameters, given the data and the model. Besides its computational simplicity, this approach utilizes the available evidence from a wide variety of structures, including incomplete data and correlated and uncorrelated repeat observations. The major advantage of a Bayesian approach is that, rather than assuming the correlation structure is fixed and known, it captures the joint uncertainty induced by the data in all parameters, including variances and covariances, and correctly propagates this through the decision or risk model. These features are illustrated with examples on emissions of dioxin congeners from solid waste incinerators.
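A stripped-down illustration of the one-stage idea, assuming toy bivariate data (not the dioxin-congener application): the correlation is sampled by random-walk Metropolis from a bivariate-normal likelihood with a flat prior, and each posterior draw then generates correlated inputs for a stand-in risk model.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=50)  # observed pairs

def log_lik(rho, z):
    """Bivariate standard-normal log-likelihood as a function of correlation rho."""
    if not -0.99 < rho < 0.99:
        return -np.inf
    q = (z[:, 0] ** 2 - 2 * rho * z[:, 0] * z[:, 1] + z[:, 1] ** 2) / (1 - rho ** 2)
    return -0.5 * (q.sum() + len(z) * np.log(1 - rho ** 2))

rho, samples = 0.0, []
ll = log_lik(rho, data)
for _ in range(20_000):  # random-walk Metropolis over rho
    prop = rho + rng.normal(0, 0.1)
    llp = log_lik(prop, data)
    if np.log(rng.uniform()) < llp - ll:
        rho, ll = prop, llp
    samples.append(rho)
post = np.array(samples[5000:])  # drop burn-in

# Propagate: each posterior rho generates correlated inputs for a toy risk model
risk = [np.prod(rng.multivariate_normal([1, 1], [[0.04, 0.04 * r], [0.04 * r, 0.04]]))
        for r in post[::100]]
print(f"posterior mean rho = {post.mean():.2f}, mean risk = {np.mean(risk):.3f}")
```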

10.
This article presents the methodology and simulation results of a quantitative assessment of human dietary exposure in France to ochratoxin A (OA), a fungal toxin found in food. We show that it is possible to provide reliable calculations of exposure to OA by combining a nonparametric simulation method, a parametric simulation method, and bootstrap confidence intervals. Within the Monte Carlo simulation, the nonparametric method takes consumption and contamination into account only via the raw data, whereas the parametric method relies on random sampling from distribution functions fitted to the consumption and contamination data. Our conclusions are based on only eight types of food; nevertheless, they are meaningful because of the major importance of these foodstuffs in human nourishment in France. The methodology can be applied to any food contaminant (pesticides, other mycotoxins, cadmium, etc.) for which data are available.
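A sketch of the nonparametric branch: exposure is simulated by resampling raw consumption and contamination records, and a bootstrap confidence interval is placed on a high percentile of exposure. The data arrays are synthetic stand-ins, not the French survey data.

```python
import numpy as np

rng = np.random.default_rng(5)
consumption = rng.gamma(2.0, 30.0, size=300)        # g/day records (stand-in data)
contamination = rng.lognormal(-1.0, 0.9, size=120)  # ug OA/kg records (stand-in data)

def simulate_exposure(cons, cont, n=5_000):
    """Nonparametric Monte Carlo: resample raw records directly."""
    return rng.choice(cons, n) * rng.choice(cont, n) / 1000.0  # ug/day

p95 = np.percentile(simulate_exposure(consumption, contamination), 95)

boot = []
for _ in range(200):  # bootstrap the underlying datasets, then re-simulate
    cons_b = rng.choice(consumption, consumption.size)
    cont_b = rng.choice(contamination, contamination.size)
    boot.append(np.percentile(simulate_exposure(cons_b, cont_b), 95))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95th-percentile exposure {p95:.2f} ug/day, bootstrap 95% CI [{lo:.2f}, {hi:.2f}]")
```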

11.
Omega, 2014, 42(6): 998-1007
We consider a robust optimization model for determining a joint optimal bundle of price and order quantity for a retailer in a two-stage supply chain under uncertainty in the parameters of the demand and purchase cost functions. Demand is modeled as a decreasing power function of product price, and unit purchase cost is modeled as a decreasing power function of order quantity and demand. While the general forms of the power functions are given, the parameters defining the two power functions are assumed to involve a certain degree of uncertainty, with their possible values characterized by ellipsoids. We show that the robust optimization problem can be transformed into an equivalent convex optimization problem that can be solved efficiently and effectively using interior-point methods. In addition, we propose a practical implementation of the model, in which the stochastic characteristics of the parameters are obtained from regression analysis on past sales and production data, and ellipsoidal representations of the parameter uncertainties are obtained from a combined use of a genetic algorithm and Monte Carlo simulation. An illustrative example is provided to demonstrate the model and its implementation.
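Not the authors' model, but a minimal generic example of the key reduction they rely on: a constraint with ellipsoidal parameter uncertainty rewritten as a second-order-cone constraint that interior-point solvers handle directly (sketched here with cvxpy; all coefficients are illustrative).

```python
import cvxpy as cp
import numpy as np

a0 = np.array([3.0, 2.0])                 # nominal constraint coefficients
P = np.array([[0.5, 0.1], [0.1, 0.3]])    # ellipsoid shape (uncertainty size)
b, c = 10.0, np.array([4.0, 3.0])

x = cp.Variable(2, nonneg=True)
# Require a^T x <= b for every a in {a0 + P u : ||u||_2 <= 1}; the worst case
# over the ellipsoid equals a0^T x + ||P^T x||_2, giving a cone constraint.
robust_constraint = a0 @ x + cp.norm(P.T @ x, 2) <= b
prob = cp.Problem(cp.Maximize(c @ x), [robust_constraint])
prob.solve()
print("robust optimal x:", np.round(x.value, 3), "objective:", round(prob.value, 3))
```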

12.
William B. Mills, Christine S. Lew, Cheng Y. Hung. Risk Analysis, 1999, 19(3): 511-525
This paper describes the application of two multimedia models, PRESTO and MMSOILS, to predict contaminant migration from a landfill that contains an organic chemical (methylene chloride) and a radionuclide (uranium-238). Exposure point concentrations and human health risks are predicted, and distributions of those predictions are generated using Monte Carlo techniques. Analysis of exposure point concentrations shows that predictions of uranium-238 in groundwater differ by more than one order of magnitude between the models. These differences occur mainly because PRESTO simulates uranium-238 transport through the groundwater using a one-dimensional algorithm and vertically mixes the plume over an effective mixing depth, whereas MMSOILS uses a three-dimensional algorithm and simulates a plume that resides near the surface of the aquifer. A sensitivity analysis, using stepwise multiple linear regression, is performed to evaluate which of the random variables are most important in producing the predicted distributions of exposure point concentrations and health risks. The sensitivity analysis shows that the predicted distributions can be accurately reproduced using a small subset of the random variables. Simple regression techniques applied to the same scenarios, for comparison, give similar results. The practical implication of this analysis is the ability to distinguish important from unimportant random variables in terms of their influence on selected endpoints.
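A sketch of the sensitivity step under stated assumptions: the Monte Carlo output is regressed on standardized inputs and the variables are ranked by the magnitude of their standardized regression coefficients. The stepwise selection of the paper is reduced to a single ranking pass, and the toy model and input names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
X = np.column_stack([rng.lognormal(0, 0.5, n),   # e.g., source term
                     rng.uniform(0.5, 2.0, n),   # e.g., mixing depth
                     rng.normal(1.0, 0.05, n)])  # e.g., minor parameter
y = 2.0 * X[:, 0] / X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.05, n)  # toy model

Z = (X - X.mean(0)) / X.std(0)                   # standardize inputs and output
yz = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Z, yz, rcond=None)    # standardized regression coefficients

for name, b in sorted(zip(["source", "depth", "minor"], beta), key=lambda t: -abs(t[1])):
    print(f"{name}: SRC = {b:+.3f}")
```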

13.
This paper establishes that instruments enable the identification of nonparametric regression models in the presence of measurement error by providing a closed-form solution for the regression function in terms of Fourier transforms of conditional expectations of observable variables. For parametrically specified regression functions, we propose a root-n consistent and asymptotically normal estimator that takes the familiar form of a generalized method of moments estimator with a plugged-in nonparametric kernel density estimate. Both the identification and the estimation methodologies rely on Fourier analysis and on the theory of generalized functions. The finite-sample properties of the estimator are investigated through Monte Carlo simulations.

14.
On Modeling Correlated Random Variables in Risk Assessment
Charles N. Haas. Risk Analysis, 1999, 19(6): 1205-1214
Monte Carlo methods in risk assessment are finding increasingly widespread application. With the recognition that inputs may be correlated, the incorporation of such correlations into the simulation has become important. Most implementations rely upon the method of Iman and Conover for generating correlated random variables. In this work, alternative methods using copulas are presented for deriving correlated random variables. It is further shown that the particular algorithm or assumption used may have a substantial effect on the output results, due to differences in higher order bivariate moments.
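A minimal Gaussian-copula sketch of the alternative discussed here: draw correlated standard normals, map them to uniforms, then through the inverse CDFs of the desired marginals. The marginals and correlation are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
rho, n = 0.7, 20_000
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)  # uniforms whose dependence is a Gaussian copula

dose = stats.lognorm(s=0.6, scale=2.0).ppf(u[:, 0])    # marginal 1 (illustrative)
potency = stats.gamma(a=2.0, scale=0.5).ppf(u[:, 1])   # marginal 2 (illustrative)

risk = dose * potency
r = np.corrcoef(dose, potency)[0, 1]
print(f"induced correlation {r:.2f}; risk 95th percentile {np.percentile(risk, 95):.2f}")
```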

15.
R. Li, J. D. Englehardt, X. Li. Risk Analysis, 2012, 32(2): 345-359
Multivariate probability distributions, such as may be used for mixture dose-response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose-response biomarker and genetic information. In this article, a new two-stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn-in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose-response function (DRF). Results are shown for the five-parameter common-mode and seven-parameter dissimilar-mode models, based on published data for eight benzene-toluene dose pairs. The common-mode conditional DRF is obtained with a 21-fold reduction in data requirement versus MCMC. Example common-mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126-PCB 153 mixture. Applicability is analyzed and discussed. Matlab® computer programs are provided.
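A toy version of the two-stage idea, with a one-dimensional normal-mean posterior standing in for the paper's high-dimensional dose-response models: stage 1 finds the posterior mode by gradient-based optimization; stage 2 starts Metropolis sampling at that mode instead of burning in from an arbitrary point.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(8)
data = rng.normal(2.0, 1.0, size=30)  # synthetic observations

def neg_log_post(mu):
    """Negative log posterior: unit-variance likelihood, N(0, 10^2) prior on mu."""
    return 0.5 * np.sum((data - mu) ** 2) + 0.5 * mu ** 2 / 100.0

# Stage 1: posterior mode estimate (PME) via a gradient-based optimizer
mode = optimize.minimize(lambda v: neg_log_post(v[0]), x0=[0.0], method="BFGS").x[0]

# Stage 2: Metropolis initialized at the mode -- no burn-in discarded here
mu, samples = mode, []
for _ in range(5000):
    prop = mu + rng.normal(0, 0.3)
    if np.log(rng.uniform()) < neg_log_post(mu) - neg_log_post(prop):
        mu = prop
    samples.append(mu)
print(f"mode = {mode:.3f}, posterior mean = {np.mean(samples):.3f}")
```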

16.
Today, chemical risk and safety assessments rely heavily on model-based estimation of environmental fate. The key compound-related properties in such models describe partitioning and reactivity. Uncertainty in determining these properties can be separated into random and systematic (incompleteness) components, which require different types of representation. Here, we evaluate two approaches that can also treat systematic errors: fuzzy arithmetic and probability bounds analysis. When a best estimate (mode) and a range can be computed for an input parameter, its uncertainty can be characterized by a triangular fuzzy number (possibility distribution) or by a corresponding probability box bounded by two uniform distributions. We use a five-compartment Level I fugacity model and empirical data reported in the literature for three well-known environmental pollutants (benzene, pyrene, and DDT) as illustrative cases for this evaluation. Propagation of uncertainty by discrete probability calculus or interval arithmetic can be done at low computational cost and gives maximum flexibility in applying different approaches. Our evaluation suggests that the difference between fuzzy arithmetic and probability bounds analysis is small, at least for this specific case. The fuzzy arithmetic approach can, however, be regarded as less conservative than probability bounds analysis if the assumption of independence is removed. Both approaches are sensitive to repeated parameters, which may inflate the uncertainty estimate. Uncertainty described by probability boxes was therefore also propagated through the model by Monte Carlo simulation to show how this problem can be avoided.
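A tiny sketch of the fuzzy-arithmetic side: triangular fuzzy numbers are propagated through a product by interval arithmetic on alpha-cuts. The numbers are illustrative, not the benzene/pyrene/DDT property values.

```python
import numpy as np

def alpha_cut(lo, mode, hi, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

def fuzzy_product(a, b, alphas=np.linspace(0, 1, 5)):
    """Alpha-cut intervals of the product of two positive triangular fuzzy numbers."""
    cuts = []
    for alpha in alphas:
        (a_lo, a_hi), (b_lo, b_hi) = alpha_cut(*a, alpha), alpha_cut(*b, alpha)
        cuts.append((alpha, a_lo * b_lo, a_hi * b_hi))  # monotone for positive values
    return cuts

kow = (1e3, 5e3, 2e4)       # partition coefficient (min, best, max), illustrative
emission = (0.5, 1.0, 3.0)  # emission rate, illustrative units
for alpha, lo, hi in fuzzy_product(kow, emission):
    print(f"alpha = {alpha:.2f}: [{lo:.3g}, {hi:.3g}]")
```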

17.
Risk analysis often depends on complex, computer-based models to describe links between policies (e.g., required emission-control equipment) and consequences (e.g., probabilities of adverse health effects). Appropriate specification of many model aspects is uncertain, including details of the model structure; transport, reaction-rate, and other parameters; and application-specific inputs such as pollutant-release rates. Because these uncertainties preclude calculation of the precise consequences of a policy, it is important to characterize the plausible range of effects. In principle, a probability distribution function for the effects can be constructed using Monte Carlo analysis, but the combinatorics of multiple uncertainties and the often high cost of model runs quickly exhaust available resources. This paper presents and applies a method for choosing sets of input conditions (scenarios) that efficiently represent knowledge about the joint probability distribution of inputs. A simple score function that approximately relates inputs to a policy-relevant output (in this case, globally averaged stratospheric ozone depletion) is developed. The probability density function for the score-function value is analytically derived from a subjective joint probability density for the inputs. Scenarios are defined by selected quantiles of the score function. Using this method, scenarios can be systematically selected in terms of the approximate probability distribution function for the output of concern, and probability intervals for the joint effect of the inputs can be readily constructed.
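A sketch of the scenario-selection recipe with illustrative inputs and weights: sample the joint input distribution, evaluate a cheap score function standing in for the expensive model, and take the input vectors sitting at chosen quantiles of the score.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 100_000
inputs = np.column_stack([rng.normal(1.0, 0.2, n),     # e.g., release rate
                          rng.lognormal(0.0, 0.3, n),  # e.g., transport factor
                          rng.uniform(0.8, 1.2, n)])   # e.g., reaction-rate scaling

score = inputs @ np.array([0.6, 0.3, 0.1])  # cheap linear surrogate of the model

for q in (5, 50, 95):  # scenarios defined by selected quantiles of the score
    idx = np.argmin(np.abs(score - np.percentile(score, q)))
    print(f"{q}th-percentile scenario: inputs = {np.round(inputs[idx], 3)}")
```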

18.
A wide range of uncertainties is inevitably introduced during the safety assessment of engineering systems. The impact of all these uncertainties must be addressed if the analysis is to serve as a tool in the decision-making process. Uncertainties in the model components (input parameters or basic events) are propagated to quantify their impact on the final results. Several methods are available in the literature, namely the method of moments, discrete probability analysis, Monte Carlo simulation, fuzzy arithmetic, and Dempster-Shafer theory. These methods differ both in how uncertainty is characterized at the component level and in how it is propagated to the system level, and each has desirable and undesirable features that make it more or less useful in different situations. In the most widely used probabilistic framework, probability distributions are used to characterize uncertainty. However, in situations in which one cannot specify (1) parameter values for input distributions, (2) precise probability distributions (shapes), or (3) dependencies between input parameters, these methods are limited and less effective. To address some of these limitations, the article presents uncertainty analysis in the context of level-1 probabilistic safety assessment (PSA) based on a probability bounds (PB) approach. PB analysis combines probability theory and interval arithmetic to produce probability boxes (p-boxes), structures that allow comprehensive and rigorous propagation through calculations. A practical case study is carried out with code developed for the PB approach, and the results are compared with those of a two-phase Monte Carlo simulation.
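A compact two-phase Monte Carlo sketch of the comparison case, with illustrative numbers rather than the level-1 PSA case study: the outer loop samples an epistemic (imprecise) failure-rate parameter, the inner loop samples aleatory variability, and the envelope of the resulting CDFs plays the role of an empirical p-box.

```python
import numpy as np

rng = np.random.default_rng(10)
grid = np.linspace(0, 5e-3, 200)  # axis for the system failure probability
cdfs = []
for _ in range(100):                        # epistemic (outer) loop
    lam_mean = rng.uniform(1e-4, 1e-3)      # imprecise component failure rate
    lam = rng.lognormal(np.log(lam_mean), 0.5, size=2000)  # aleatory (inner) loop
    p_sys = 1 - (1 - lam) ** 3              # toy 3-component series system
    cdfs.append([np.mean(p_sys <= g) for g in grid])

cdfs = np.array(cdfs)
lower, upper = cdfs.min(axis=0), cdfs.max(axis=0)  # empirical p-box bounds
i = np.searchsorted(grid, 1e-3)
print(f"P(system failure prob <= 1e-3) lies in [{lower[i]:.2f}, {upper[i]:.2f}]")
```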

19.
This paper proposes a short-term load forecasting model based on a Bayesian neural network (BNN). Using sample data on meteorological factors and electrical load, the weight parameters of the BNN were learned with the hybrid Monte Carlo (HMC) algorithm, for two cases of the prior distribution on the weight vector: normal and Cauchy. BNNs trained by the HMC and Laplace algorithms, together with a conventional neural network trained by the BP algorithm, were used to forecast the load at every full hour for 25 days in each of April (spring), August (summer), October (autumn), and January (winter). The input layer of these networks has 11 nodes, corresponding to the meteorological factors at the current and previous full hours and to time variables; the output layer has a single node corresponding to the load. The experimental results show that the mean absolute percentage error (MAPE) and root mean square error (RMSE) of the forecasts from the HMC-trained BNN are far smaller than those of the Laplace-trained BNN and the BP-trained artificial neural network, and that the MAPE and RMSE of the HMC-trained BNN differ only slightly between the test and training sets. These results demonstrate that the HMC-trained BNN has high forecasting accuracy and strong generalization ability.
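A compact hybrid (Hamiltonian) Monte Carlo sketch on a Bayesian linear model, standing in for the BNN weight posterior, which would not fit comfortably in a short example: Gaussian prior on the weights, Gaussian likelihood, analytic gradients, and a leapfrog integrator with a Metropolis accept step.

```python
import numpy as np

rng = np.random.default_rng(11)
X = np.column_stack([np.ones(40), rng.uniform(-1, 1, 40)])  # toy "load vs. temperature"
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.3, 40)

def U(w):
    """Negative log posterior (potential energy): noise sd 0.3, prior N(0, I)."""
    r = y - X @ w
    return 0.5 * r @ r / 0.09 + 0.5 * w @ w

def grad_U(w):
    return -X.T @ (y - X @ w) / 0.09 + w

def hmc_step(w, eps=0.01, L=20):
    p = rng.normal(size=w.size)                  # resample momentum
    w_new, p_new = w.copy(), p - 0.5 * eps * grad_U(w)
    for _ in range(L):                           # leapfrog integration
        w_new = w_new + eps * p_new
        p_new = p_new - eps * grad_U(w_new)
    p_new = p_new + 0.5 * eps * grad_U(w_new)    # restore the final half step
    dH = (U(w) + 0.5 * p @ p) - (U(w_new) + 0.5 * p_new @ p_new)
    return w_new if np.log(rng.uniform()) < dH else w  # Metropolis accept/reject

w, draws = np.zeros(2), []
for _ in range(2000):
    w = hmc_step(w)
    draws.append(w)
print("posterior mean weights:", np.round(np.mean(draws, axis=0), 3))
```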
