Similar Documents
20 similar documents found (search time: 656 ms).
1.
Estimation of uncertainties associated with model predictions is an important component of the application of environmental and biological models. "Traditional" methods for propagating uncertainty, such as standard Monte Carlo and Latin hypercube sampling, however, often require performing a prohibitive number of model simulations, especially for complex, computationally intensive models. Here, a computationally efficient method for uncertainty propagation, the Stochastic Response Surface Method (SRSM), is coupled with another method, the Automatic Differentiation of FORTRAN (ADIFOR). The SRSM is based on series expansions of model inputs and outputs in terms of a set of "well-behaved" standard random variables. The ADIFOR method is used to transform the model code into one that calculates the derivatives of the model outputs with respect to inputs or transformed inputs. The calculated model outputs and the derivatives at a set of sample points are used to approximate the unknown coefficients in the series expansions of outputs. A framework for the coupling of the SRSM and ADIFOR is developed and presented here. Two case studies are presented, involving (1) a physiologically based pharmacokinetic model for perchloroethylene for humans, and (2) an atmospheric photochemical model, the Reactive Plume Model. The results obtained agree closely with those of traditional Monte Carlo and Latin hypercube sampling methods, while reducing the required number of model simulations by about two orders of magnitude.
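As a hedged illustration of the SRSM idea (not the authors' implementation, and far simpler than a PBPK or plume model), the sketch below expands the output of a toy one-input model in Hermite polynomials of a standard normal variable and fits the coefficients from three model runs plus analytically coded derivatives, which stand in for ADIFOR-generated derivative code; the fitted moments are then checked against brute-force Monte Carlo. All model and parameter choices are invented.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "model": output y = f(x) with a lognormal input x = exp(mu + sigma*Z), Z ~ N(0, 1).
    mu, sigma = 0.0, 0.5
    f  = lambda x: np.sqrt(x) + 0.1 * x**2        # model output
    df = lambda x: 0.5 / np.sqrt(x) + 0.2 * x     # analytic dy/dx (stand-in for ADIFOR)

    # Second-order expansion in probabilists' Hermite polynomials: y ~ a0 + a1*Z + a2*(Z^2 - 1).
    # Fit a0, a1, a2 by least squares from three sample points, using both the model
    # values and the chain-rule derivatives dy/dZ = (dy/dx) * (dx/dZ).
    z_pts = np.array([-1.5, 0.0, 1.5])
    x_pts = np.exp(mu + sigma * z_pts)
    y_pts = f(x_pts)
    dy_dz = df(x_pts) * sigma * x_pts             # dx/dZ = sigma * x for this transformation

    A = np.vstack([np.column_stack([np.ones_like(z_pts), z_pts, z_pts**2 - 1]),                # value rows
                   np.column_stack([np.zeros_like(z_pts), np.ones_like(z_pts), 2 * z_pts])])   # derivative rows
    b = np.concatenate([y_pts, dy_dz])
    a0, a1, a2 = np.linalg.lstsq(A, b, rcond=None)[0]

    # Output moments follow directly from the orthogonal expansion ...
    srsm_mean = a0
    srsm_var  = a1**2 + 2 * a2**2

    # ... and are compared with brute-force Monte Carlo on the true model.
    z = rng.standard_normal(200_000)
    y = f(np.exp(mu + sigma * z))
    print(f"SRSM sketch : mean={srsm_mean:.3f}  var={srsm_var:.3f}  (3 model runs + derivatives)")
    print(f"Monte Carlo : mean={y.mean():.3f}  var={y.var():.3f}  (200,000 model runs)")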

2.
Methods to Approximate Joint Uncertainty and Variability in Risk
As interest in quantitative analysis of joint uncertainty and interindividual variability (JUV) in risk grows, so does the need for related computational shortcuts. To quantify JUV in risk, Monte Carlo methods typically require nested sampling of JUV in distributed inputs, which is cumbersome and time-consuming. Two approximation methods proposed here allow simpler and more rapid analysis. The first consists of new upper-bound JUV estimators that involve only uncertainty or variability, not both, and so never require nested sampling to calculate. The second is a discrete-probability-calculus procedure that uses only the mean and one upper-tail mean for each input in order to estimate mean and upper-bound risk; this procedure is simpler and more intuitive than similar ones in use. Application of these methods is illustrated in an assessment of cancer risk from residential exposures to chloroform in the Kanawha Valley, West Virginia. Because each of the multiple exposure pathways considered in this assessment had separate modeled sources of uncertainty and variability, the assessment illustrates a realistic case where a standard Monte Carlo approach to JUV analysis requires nested sampling. In the illustration, the first proposed method quantified JUV in cancer risk much more efficiently than corresponding nested Monte Carlo calculations. The second proposed method also nearly duplicated JUV-related and other estimates of risk obtained using Monte Carlo methods. Both methods were thus found adequate to obtain basic risk estimates accounting for JUV in a realistically complex risk assessment. These methods make routine JUV analysis more convenient and practical.
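To make the nested structure concrete, here is a minimal sketch (with an invented lognormal intake-times-potency model and made-up parameter values) of the two-dimensional Monte Carlo calculation that the proposed shortcuts are designed to avoid: the outer loop samples uncertain parameters, the inner loop samples interindividual variability, and each outer draw yields a full population distribution of risk.

    import numpy as np

    rng = np.random.default_rng(1)
    n_unc, n_var = 500, 2000          # outer (uncertainty) and inner (variability) sample sizes

    pop_means, pop_p95 = [], []
    for _ in range(n_unc):
        # Outer loop: epistemic uncertainty in a cancer potency factor and in the
        # population-median dose (both lognormal here, purely for illustration).
        potency     = rng.lognormal(np.log(2e-2), 0.6)     # (mg/kg-day)^-1
        median_dose = rng.lognormal(np.log(1e-3), 0.3)     # mg/kg-day

        # Inner loop: interindividual variability in dose around the uncertain median.
        dose = median_dose * rng.lognormal(0.0, 0.8, size=n_var)
        risk = potency * dose

        pop_means.append(risk.mean())               # population mean risk for this outer draw
        pop_p95.append(np.quantile(risk, 0.95))     # 95th-percentile individual risk

    # Each population statistic now has its own uncertainty distribution.
    print(f"mean individual risk    : median {np.median(pop_means):.2e}, "
          f"95% upper bound {np.quantile(pop_means, 0.95):.2e}")
    print(f"95th-pct individual risk: median {np.median(pop_p95):.2e}, "
          f"95% upper bound {np.quantile(pop_p95, 0.95):.2e}")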

3.
谢建辉  李勇军  梁樑  吴记 《管理科学》2018,21(11):50-60
Traditional DEA models assume that the observed inputs and outputs of the sample units are deterministic, which limits DEA in practical applications. The multi-input, multi-output stochastic nonparametric envelopment of data method based on pseudo-likelihood estimation (PLE-StoNED) proposed in this paper relaxes this assumption and can estimate the production frontier in a stochastic environment. We prove that, under the assumptions on the production possibility set, the frontier can be represented by a function subject to concavity and monotonicity constraints. Compared with earlier StoNED methods, the proposed method can estimate the frontier of multi-input, multi-output decision-making units (DMUs) in a stochastic environment. Monte Carlo experiments verify the effectiveness of the multi-input, multi-output PLE-StoNED method and show that it corrects the bias produced by DEA and other traditional methods. Finally, an empirical study applies the proposed method to estimate the production frontier and efficiency of commercial banks in mainland China. The method remedies DEA's lack of a statistical foundation and can support decisionmakers in evaluating the productivity and efficiency of multi-input, multi-output DMUs in stochastic environments.

4.
A. E. Ades  G. Lu 《Risk analysis》2003,23(6):1165-1172
Monte Carlo simulation has become the accepted method for propagating parameter uncertainty through risk models. It is widely appreciated, however, that correlations between input variables must be taken into account if models are to deliver correct assessments of uncertainty in risk. Various two-stage methods have been proposed that first estimate a correlation structure and then generate Monte Carlo simulations, which incorporate this structure while leaving marginal distributions of parameters unchanged. Here we propose a one-stage alternative, in which the correlation structure is estimated from the data directly by Bayesian Markov Chain Monte Carlo methods. Samples from the posterior distribution of the outputs then correctly reflect the correlation between parameters, given the data and the model. Besides its computational simplicity, this approach utilizes the available evidence from a wide variety of structures, including incomplete data and correlated and uncorrelated repeat observations. The major advantage of a Bayesian approach is that, rather than assuming the correlation structure is fixed and known, it captures the joint uncertainty induced by the data in all parameters, including variances and covariances, and correctly propagates this through the decision or risk model. These features are illustrated with examples on emissions of dioxin congeners from solid waste incinerators.
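A minimal sketch of the one-stage idea under simplifying assumptions: a conjugate normal-inverse-Wishart model (used here so the posterior can be sampled directly, standing in for the article's MCMC) is fitted to a small invented set of paired input measurements, and every posterior draw of the mean vector and covariance matrix, with its own correlation, is pushed through a toy risk model. The data, prior settings, and risk model are all illustrative.

    import numpy as np
    from scipy.stats import invwishart

    rng = np.random.default_rng(2)

    # Imaginary paired measurements of two model inputs (e.g., log-emission factors for
    # two congeners); their correlation is estimated from the data, not assumed.
    data = rng.multivariate_normal([0.0, 1.0], [[0.30, 0.18], [0.18, 0.40]], size=12)
    n, d = data.shape
    xbar = data.mean(axis=0)
    S = (data - xbar).T @ (data - xbar)

    # Weak conjugate normal-inverse-Wishart prior; with conjugacy the joint posterior of
    # the mean vector and covariance matrix can be sampled directly.
    mu0, k0, nu0, Psi0 = np.zeros(d), 0.01, d + 2, np.eye(d)
    kn, nun = k0 + n, nu0 + n
    mun  = (k0 * mu0 + n * xbar) / kn
    Psin = Psi0 + S + (k0 * n / kn) * np.outer(xbar - mu0, xbar - mu0)

    # One-stage propagation: every posterior draw carries its own correlation structure,
    # so the risk-model output reflects the joint uncertainty in all parameters.
    outputs = []
    for _ in range(5000):
        Sigma = invwishart.rvs(df=nun, scale=Psin, random_state=rng)
        mu = rng.multivariate_normal(mun, Sigma / kn)
        x = rng.multivariate_normal(mu, Sigma)        # one realization of the correlated inputs
        outputs.append(np.exp(x).sum())               # toy risk model: sum of exponentiated inputs
    print(f"posterior predictive risk: mean {np.mean(outputs):.2f}, "
          f"95% interval ({np.quantile(outputs, 0.025):.2f}, {np.quantile(outputs, 0.975):.2f})")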

5.
Bayesian Forecasting via Deterministic Model
Rational decision making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of two Bayesian methods for producing a probabilistic forecast via any deterministic model. The Bayesian Processor of Forecast (BPF) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution. The BFS is compared with Monte Carlo simulation and the ensemble forecasting technique, neither of which can alone produce a probabilistic forecast that quantifies the total uncertainty, but each can serve as a component of the BFS.
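A minimal Gaussian sketch of the BPF logic, with invented numbers: the prior (climatic) distribution of the predictand is combined with a normal-linear likelihood fitted to past (model output, observation) pairs, giving a posterior that quantifies the total uncertainty conditional on a new model output.

    import numpy as np

    rng = np.random.default_rng(3)

    # Prior (climatic) distribution of the predictand W, e.g., a river stage in metres.
    m, s = 2.0, 0.8                                  # prior mean and standard deviation

    # Past pairs (model output x_i, observation w_i) used to fit the likelihood
    # x = a + b*w + eps, eps ~ N(0, sigma2): the deterministic model is informative but imperfect.
    w_hist = rng.normal(m, s, size=40)
    x_hist = 0.3 + 0.9 * w_hist + rng.normal(0.0, 0.5, size=40)
    b, a = np.polyfit(w_hist, x_hist, 1)             # fitted slope and intercept
    sigma2 = np.var(x_hist - (a + b * w_hist), ddof=2)

    # Posterior of W given a new model output x0 (normal-normal conjugacy).
    x0 = 3.1
    post_var  = 1.0 / (1.0 / s**2 + b**2 / sigma2)
    post_mean = post_var * (m / s**2 + b * (x0 - a) / sigma2)
    print(f"model output {x0:.2f} -> posterior W ~ N({post_mean:.2f}, sd {np.sqrt(post_var):.2f})")
    print(f"prior sd {s:.2f} shrinks to {np.sqrt(post_var):.2f}: the forecast quantifies total "
          f"uncertainty, not just the model's point estimate")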

6.
The conceptual and computational structure of a performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) is described. Important parts of this structure are (1) maintenance of a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the many possible disruptions that could occur over the 10,000-year regulatory period that applies to the WIPP, and subjective uncertainty arising from the imprecision with which many of the quantities required in the analysis are known, (2) use of Latin hypercube sampling to incorporate the effects of subjective uncertainty, (3) use of Monte Carlo (i.e., random) sampling to incorporate the effects of stochastic uncertainty, and (4) efficient use of the necessarily limited number of mechanistic calculations that can be performed to support the analysis. The WIPP is under development by the U.S. Department of Energy (DOE) for the geologic (i.e., deep underground) disposal of transuranic (TRU) waste, with the indicated PA supporting a Compliance Certification Application (CCA) by the DOE to the U.S. Environmental Protection Agency (EPA) in October 1996 for the necessary certifications for the WIPP to begin operation. The EPA certified the WIPP for the disposal of TRU waste in May 1998, with the result that the WIPP will be the first operational facility in the United States for the geologic disposal of radioactive waste.
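A minimal sketch of the two-loop structure described above, with an invented release model and parameter ranges: Latin hypercube sampling over epistemic (subjective) parameters in the outer loop, random sampling over aleatory (stochastic) futures in the inner loop, and one exceedance curve (CCDF) per epistemic sample.

    import numpy as np
    from scipy.stats import qmc

    rng = np.random.default_rng(4)
    n_lhs, n_futures = 50, 1000              # epistemic (outer) and aleatory (inner) sample sizes

    # Outer loop: Latin hypercube sample over two imprecisely known quantities:
    # a drilling-intrusion rate (events per 10,000 yr) and log10 of the release per intrusion.
    unit = qmc.LatinHypercube(d=2, seed=5).random(n_lhs)
    scaled = qmc.scale(unit, [0.5, -3.0], [3.0, -1.0])
    intrusion_rate, log10_release = scaled[:, 0], scaled[:, 1]

    release_grid = np.logspace(-4, 1, 51)
    ccdfs = np.empty((n_lhs, release_grid.size))
    for i in range(n_lhs):
        # Inner loop: random (aleatory) futures over the regulatory period:
        # a Poisson number of intrusions, each with a lognormally varying release.
        n_intr = rng.poisson(intrusion_rate[i], size=n_futures)
        totals = np.array([rng.lognormal(np.log(10.0 ** log10_release[i]), 1.0, size=k).sum()
                           for k in n_intr])
        ccdfs[i] = [(totals > r).mean() for r in release_grid]   # one CCDF per epistemic element

    # Aleatory uncertainty appears within each CCDF; epistemic uncertainty appears as the spread
    # across the family of CCDFs, summarized here by the mean and 95th-percentile curves.
    idx = np.argmin(np.abs(release_grid - 0.1))
    print(f"P(normalized release > 0.1): mean {ccdfs.mean(axis=0)[idx]:.3f}, "
          f"95th percentile {np.quantile(ccdfs, 0.95, axis=0)[idx]:.3f}")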

7.
The treatment of uncertainties associated with modeling and risk assessment has recently attracted significant attention. The methodology and guidance for dealing with parameter uncertainty have been fairly well developed, and quantitative tools such as Monte Carlo modeling are often recommended. However, the issue of model uncertainty is still rarely addressed in practical applications of risk assessment. The use of several alternative models to derive a range of model outputs or risks is one of a few available techniques. This article addresses the often-overlooked issue of what we call "modeler uncertainty," i.e., differences in problem formulation, model implementation, and parameter selection originating from subjective interpretation of the problem at hand. This study uses results from the Fruit Working Group, which was created under the International Atomic Energy Agency (IAEA) BIOMASS program (BIOsphere Modeling and ASSessment). Model-model and model-data intercomparisons reviewed in this study were conducted by the working group for a total of three different scenarios. The greatest uncertainty was found to result from modelers' interpretation of scenarios and from approximations made by modelers. In scenarios that were unclear for modelers, the initial differences in model predictions were as high as seven orders of magnitude. Only after several meetings and discussions about specific assumptions did the predictions of the various models converge. Our study shows that parameter uncertainty (as evaluated by a probabilistic Monte Carlo assessment) may have contributed over one order of magnitude to the overall modeling uncertainty. The final model predictions ranged between one and three orders of magnitude, depending on the specific scenario. This study illustrates the importance of problem formulation and implementation of an analytic-deliberative process in risk characterization.

8.
This paper presents a general model for exposure to homegrown foods that is used with a Monte Carlo analysis to determine the relative contributions of variability (Type A uncertainty) and true uncertainty (Type B uncertainty) to the overall variance in prediction of the dose-to-concentration ratio. Although classification of exposure inputs as uncertain or variable is somewhat subjective, food consumption rates and exposure duration are judged to have a predicted variance that is dominated by variability among individuals by age, income, culture, and geographical region, whereas biotransfer factors and partition factors are inputs that, to a large extent, involve uncertainty. Using ingestion of fruits, vegetables, grains, dairy products, meat, and soils assumed to be contaminated by hexachlorobenzene (HCB) and benzo(a)pyrene (BaP) as case studies, a Monte Carlo analysis is used to explore the relative contribution of uncertainty and variability to overall variance in the estimated distribution of potential dose within the population that consumes homegrown foods. It is found that, when soil concentrations are specified, variances in ratios of dose-to-concentration for HCB are equally attributable to uncertainty and variability, whereas for BaP, variance in these ratios is dominated by true uncertainty.
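A minimal sketch of the variance split, using the law of total variance on a nested Monte Carlo design; the lognormal inputs and their classification as variable (intake, body weight) or uncertain (biotransfer factor) are illustrative assumptions, not the paper's values.

    import numpy as np

    rng = np.random.default_rng(6)
    n_u, n_v = 400, 400                       # uncertainty (outer) and variability (inner) samples

    ratios = np.empty((n_u, n_v))
    for i in range(n_u):
        # "Uncertain" input: fixed for everyone but imperfectly known (e.g., biotransfer factor).
        biotransfer = rng.lognormal(np.log(0.05), 0.8)
        # "Variable" inputs: differ from person to person (consumption rate, body weight).
        intake = rng.lognormal(np.log(200.0), 0.5, size=n_v)     # g/day of homegrown produce
        bw     = rng.lognormal(np.log(70.0), 0.2, size=n_v)      # kg body weight
        # Dose-to-soil-concentration ratio for each simulated individual.
        ratios[i] = biotransfer * intake / bw * 1e-3             # (mg/kg-day) per (mg/kg soil)

    # Law of total variance: Var(R) = E_u[Var_v(R|u)] + Var_u[E_v(R|u)].
    within  = ratios.var(axis=1).mean()       # variability (Type A) contribution
    between = ratios.mean(axis=1).var()       # uncertainty (Type B) contribution
    total = within + between
    print(f"variability share: {within / total:.0%}   uncertainty share: {between / total:.0%}")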

9.
The uncertainty associated with estimates should be taken into account in quantitative risk assessment. Each input's uncertainty can be characterized through a probabilistic distribution for use under Monte Carlo simulations. In this study, the sampling uncertainty associated with estimating a low proportion on the basis of a small sample size was considered. A common application in microbial risk assessment is the estimation of a prevalence, i.e., the proportion of contaminated food products, on the basis of few tested units. Three Bayesian approaches (based on beta(0, 0), beta(1/2, 1/2), and beta(1, 1) priors) and one frequentist approach (based on the frequentist confidence distribution) were compared and evaluated on the basis of simulations. For small samples, we demonstrated some differences between the four tested methods. We concluded that the best method depends on the true proportion of contaminated products, which is by definition unknown in common practice. When no prior information is available, we recommend the beta(1/2, 1/2) prior or the confidence distribution. To illustrate the importance of these differences, the four methods were used in an applied example. We performed two-dimensional Monte Carlo simulations to estimate the proportion of cold smoked salmon packs contaminated by Listeria monocytogenes, one dimension representing within-factory uncertainty, modeled by each of the four studied methods, and the other dimension representing variability between companies.
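A minimal sketch of the three Bayesian estimators being compared: with k positive units among n tested, a beta(a, b) prior gives a beta(a + k, b + n - k) posterior, so the priors can be compared directly on a small invented sample (the frequentist confidence-distribution alternative is not reproduced here).

    import numpy as np
    from scipy.stats import beta

    k, n = 1, 25                                  # 1 contaminated unit found among 25 tested
    priors = {"beta(0, 0)":    (0.0, 0.0),        # improper Haldane prior
              "beta(1/2, 1/2)": (0.5, 0.5),       # Jeffreys prior
              "beta(1, 1)":    (1.0, 1.0)}        # uniform prior

    for name, (a, b) in priors.items():
        post = beta(a + k, b + n - k)             # conjugate update
        lo, hi = post.ppf([0.025, 0.975])
        print(f"{name:<15s} posterior mean {post.mean():.3f}   95% CrI ({lo:.4f}, {hi:.3f})")

    # Draws from the chosen posterior feed the "uncertainty" dimension of a
    # two-dimensional Monte Carlo simulation, as in the smoked-salmon example.
    prevalence_draws = beta(0.5 + k, 0.5 + n - k).rvs(size=10_000, random_state=1)
    print(f"Jeffreys posterior draws: median prevalence {np.median(prevalence_draws):.3f}")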

10.
A quantitative assessment of the exposure to Listeria monocytogenes from cold-smoked salmon (CSS) consumption in France is developed. The general framework is a second-order (or two-dimensional) Monte Carlo simulation, which characterizes the uncertainty and variability of the exposure estimate. The model takes into account the competitive bacterial growth between L. monocytogenes and the background competitive flora from the end of the production line to the consumer phase. An original algorithm is proposed to integrate this growth under varying temperature conditions. As part of a more general project led by the French Food Safety Agency (Afssa), specific data were acquired and modeled for this quantitative exposure assessment model, particularly time-temperature profiles, prevalence data, and contamination-level data. The sensitivity analysis shows that the mean temperature in household refrigerators and the prevalence of contaminated CSS have the greatest influence on the exposure level. The outputs of this model can be used as inputs for further risk assessment.

11.
Currently, there is a trend away from the use of single (often conservative) estimates of risk to summarize the results of risk analyses in favor of stochastic methods, which provide a more complete characterization of risk. The use of such stochastic methods leads to a distribution of possible values of risk, taking into account both uncertainty and variability in all of the factors affecting risk. In this article, we propose a general framework for the analysis of uncertainty and variability for use in the commonly encountered case of multiplicative risk models, in which risk may be expressed as a product of two or more risk factors. Our analytical methods facilitate the evaluation of overall uncertainty and variability in risk assessment, as well as the contributions of individual risk factors to both uncertainty and variability, a task that is cumbersome using Monte Carlo methods. The use of these methods is illustrated in the analysis of potential cancer risks due to the ingestion of radon in drinking water.
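A minimal sketch of the analytical idea for multiplicative models: if risk is a product of independent lognormal factors, log-risk is normal with mean and variance equal to the sums of the factors' log-means and log-variances, so each factor's contribution can be read off directly and checked against Monte Carlo. The radon-style factors and their uncertainty/variability labels below are invented.

    import numpy as np

    rng = np.random.default_rng(7)

    # Risk = concentration * intake * dose conversion * unit risk (independent lognormals).
    # Each factor is given as (geometric mean, geometric standard deviation, type).
    factors = {"water concentration": (5.0, 2.5, "variability"),
               "water intake":        (1.2, 1.6, "variability"),
               "dose conversion":     (1e-4, 1.8, "uncertainty"),
               "unit risk":           (3e-3, 2.0, "uncertainty")}

    # Analytic: log-risk is normal; the log-means and log-variances simply add.
    mu  = sum(np.log(gm) for gm, gsd, _ in factors.values())
    var = sum(np.log(gsd) ** 2 for gm, gsd, _ in factors.values())
    for name, (gm, gsd, kind) in factors.items():
        share = np.log(gsd) ** 2 / var
        print(f"{name:<20s} ({kind}) contributes {share:.0%} of log-risk variance")
    print(f"analytic:    median risk {np.exp(mu):.2e}, "
          f"95th percentile {np.exp(mu + 1.645 * np.sqrt(var)):.2e}")

    # Monte Carlo check (not needed for the result, just to confirm the algebra).
    samples = np.ones(100_000)
    for gm, gsd, _ in factors.values():
        samples = samples * rng.lognormal(np.log(gm), np.log(gsd), size=100_000)
    print(f"Monte Carlo: median risk {np.median(samples):.2e}, "
          f"95th percentile {np.quantile(samples, 0.95):.2e}")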

12.
Quasiextinction Probabilities as a Measure of Impact on Population Growth
A probabilistic language based on stochastic models of population growth is proposed as a standard language to be used in environmental assessment. Environmental impact on a population is measured by the probability of quasiextinction. Density-dependent and density-independent models are discussed. A review of one-dimensional stochastic population growth models, the implications of environmental autocorrelation, finite versus "infinite" time results, age-structured models, and Monte Carlo simulations is included. The finite time probability of quasiextinction is presented for the logistic model. The sensitivity of the result with respect to the mean growth rate and the amplitude of environmental fluctuations is examined. Stochastic models of population growth form a basis for formulating reasonable criteria for environmental impact estimates.
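A minimal sketch, with illustrative parameter values, of estimating a finite-time quasiextinction probability for a discrete-time stochastic logistic model by Monte Carlo simulation, together with the kind of sensitivity sweep over the mean growth rate and noise amplitude described in the abstract.

    import numpy as np

    rng = np.random.default_rng(8)

    def quasiextinction_prob(r, K, sigma, N0, threshold, T, n_paths=20_000):
        """Monte Carlo estimate of P(population falls below the threshold within T steps)
        for a discrete-time stochastic logistic model with lognormal environmental noise."""
        N = np.full(n_paths, float(N0))
        hit = np.zeros(n_paths, dtype=bool)
        for _ in range(T):
            N = N * np.exp(r * (1.0 - N / K) + sigma * rng.standard_normal(n_paths))
            hit |= N < threshold
        return hit.mean()

    # Baseline case (all values illustrative): mean growth rate r, carrying capacity K,
    # environmental noise sigma, initial size N0, quasiextinction threshold, horizon T.
    base = dict(r=0.15, K=1000.0, sigma=0.6, N0=300.0, threshold=100.0, T=50)
    print(f"baseline quasiextinction probability: {quasiextinction_prob(**base):.3f}")

    # Sensitivity to the mean growth rate and to the amplitude of environmental fluctuations.
    for r in (0.10, 0.15, 0.25):
        print(f"r = {r:.2f}:     {quasiextinction_prob(**{**base, 'r': r}):.3f}")
    for s in (0.4, 0.6, 0.8):
        print(f"sigma = {s:.2f}: {quasiextinction_prob(**{**base, 'sigma': s}):.3f}")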

13.
This paper extends the non-Gaussian OU stochastic volatility model using CGMY and GIG processes, builds a non-Gaussian OU stochastic volatility model driven by continuously superposed Lévy processes, and gives the shot-noise representation and approximation of the model. On this basis, to capture the dependence structure of volatility, the retrospective sampling method is extended to the non-Gaussian OU stochastic volatility model driven by continuously superposed Lévy processes, and a Bayesian parameter inference method is designed for the Lévy-driven non-Gaussian OU stochastic volatility model. Finally, actual financial market data are used to validate and compare the different models and estimation methods. Both the theoretical and empirical studies show that, after the non-Gaussian OU stochastic volatility model is extended with CGMY and GIG processes, model performance improves markedly and better reflects the volatility dynamics of financial asset returns; the Bayesian inference method designed here is also efficient and overcomes shortcomings of earlier studies. In addition, the empirical study finds jump features in the returns and volatility of the Shanghai Composite Index, as well as pronounced long-memory behavior in the volatility series.

14.
This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed; the emphasis is on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction in the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure to achieve a targeted 50% reduction of the model output variance.
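The article's unbiased single-set estimators are not reproduced here; instead, a minimal sketch of the quantity they estimate: the ratio of the output mean and variance obtained when one input's variance is rescaled, computed by reusing one common set of underlying standard normal samples so that only that input's standard deviation changes. The nonlinear test function and distributions are invented.

    import numpy as np

    rng = np.random.default_rng(9)
    n = 200_000

    # Nonlinear model of two normal inputs; baseline means and standard deviations.
    g = lambda x1, x2: np.sin(x1) + 0.5 * x2**2 + 0.3 * x1 * x2
    mu1, s1 = 1.0, 0.8
    mu2, s2 = 0.0, 0.5

    # One common set of standard-normal samples is reused for every variance level,
    # so changing Var(X1) only rescales the same underlying draws.
    z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
    y_base = g(mu1 + s1 * z1, mu2 + s2 * z2)
    m0, v0 = y_base.mean(), y_base.var()

    print("scale of Var(X1)   mean ratio   variance ratio")
    for scale in (0.25, 0.5, 1.0, 1.5):
        y = g(mu1 + np.sqrt(scale) * s1 * z1, mu2 + s2 * z2)
        print(f"{scale:>16.2f}   {y.mean() / m0:>10.3f}   {y.var() / v0:>14.3f}")
    # Reading the table tells the analyst how much Var(X1) must be reduced to hit a
    # targeted reduction in the output mean or variance (cf. the 50% target in the article).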

15.
Quantitative risk assessment (QRA) models are used to estimate the risks of transporting dangerous goods and to assess the merits of introducing alternative risk reduction measures for different transportation scenarios and assumptions. A comprehensive QRA model was recently developed in Europe for application to road tunnels. This model can assess the merits of a limited number of "native safety measures." In this article, we introduce a procedure for extending its scope to include the treatment of a number of important "nonnative safety measures" of interest to tunnel operators and decisionmakers. Nonnative safety measures were not included in the original model specification. The suggested procedure makes use of expert judgment and Monte Carlo simulation methods to model uncertainty in the revised risk estimates. The results of a case study application are presented that involve the risks of transporting a given volume of flammable liquid through a 10-km road tunnel.

16.
《Risk analysis》2018,38(8):1576-1584
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
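A minimal sketch, for a small invented fault tree with top event P = P1*P2 + P3*P4 and lognormal basic events, of one simple closed-form route (not necessarily the article's derivation): products of lognormals are again lognormal because log-parameters add, the sum of the two cut-set terms is approximated as lognormal by matching its first two moments, and a standard first-order Wilks rule uses the maximum of 59 random samples as a 95%/95% upper bound on the 95th percentile.

    import numpy as np

    rng = np.random.default_rng(10)

    # Lognormal basic events given as (median, error factor); EF = 95th percentile / median,
    # so the log standard deviation is ln(EF)/1.645.
    basic = {"P1": (1e-3, 3.0), "P2": (2e-2, 5.0), "P3": (5e-4, 3.0), "P4": (1e-1, 10.0)}
    logpar = {k: (np.log(med), np.log(ef) / 1.645) for k, (med, ef) in basic.items()}

    def ln_moments(mu, s2):
        """Mean and variance of a lognormal with log-mean mu and log-variance s2."""
        m = np.exp(mu + s2 / 2.0)
        return m, (np.exp(s2) - 1.0) * m**2

    # Cut sets P1*P2 and P3*P4: products of lognormals are lognormal (log-parameters add).
    cut_logpar = [(logpar["P1"][0] + logpar["P2"][0], logpar["P1"][1]**2 + logpar["P2"][1]**2),
                  (logpar["P3"][0] + logpar["P4"][0], logpar["P3"][1]**2 + logpar["P4"][1]**2)]
    moments = [ln_moments(mu, s2) for mu, s2 in cut_logpar]
    M = sum(m for m, v in moments)            # mean of the top event (sum of cut sets)
    V = sum(v for m, v in moments)            # variance (independence assumed)

    # Approximate the sum as lognormal by matching its first two moments.
    s2_top = np.log(1.0 + V / M**2)
    mu_top = np.log(M) - s2_top / 2.0
    p95_closed = np.exp(mu_top + 1.645 * np.sqrt(s2_top))

    # Monte Carlo reference and a Wilks 95%/95% upper bound from only 59 samples.
    draws = {k: rng.lognormal(mu, s, size=100_000) for k, (mu, s) in logpar.items()}
    top = draws["P1"] * draws["P2"] + draws["P3"] * draws["P4"]
    p95_mc = np.quantile(top, 0.95)
    p95_wilks = np.max(top[:59])              # max of 59 samples bounds the 95th pct at 95% confidence

    print(f"closed-form lognormal 95th pct: {p95_closed:.3e}")
    print(f"Monte Carlo 95th pct          : {p95_mc:.3e}")
    print(f"Wilks 95/95 upper bound (n=59): {p95_wilks:.3e}")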

17.
The performance of a probabilistic risk assessment (PRA) for a nuclear power plant is a complex undertaking, involving the assembly of an accident frequency analysis, an accident progression analysis, a source term analysis, and a consequence analysis. Each of these analyses is, in itself, quite complex. Uncertainties enter into a PRA from each of these analyses. An important focus in recent PRAs has been to incorporate these uncertainties at each stage of the analysis, propagate the subsequent uncertainties through the entire analysis, and include uncertainty in the final results. Monte Carlo procedures based on Latin hypercube sampling provide one way to perform propagations of this type. In this paper, the results of two complete and independent Monte Carlo calculations for a recently completed PRA for a nuclear power plant are compared as a means of providing empirical evidence on the repeatability of uncertainty and sensitivity analyses for large-scale PRA calculations. These calculations use the same variables and analysis structure with two independently generated Latin hypercube samples. The results of the two calculations show a high degree of repeatability for the analysis of a very complex system.
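A minimal sketch of the repeatability check on a toy multiplicative "PRA" model: two independently seeded Latin hypercube samples of the same variables are pushed through the same model and their summary statistics compared. The model and parameter ranges are invented.

    import numpy as np
    from scipy.stats import qmc

    def analysis(seed, n=100):
        """One complete uncertainty analysis of a toy risk model using an LHS sample."""
        sample = qmc.LatinHypercube(d=3, seed=seed).random(n)
        lo = np.array([1e-4, 0.1, 1.0])
        hi = np.array([1e-2, 2.0, 10.0])
        x = qmc.scale(sample, lo, hi)            # frequency, progression factor, source term
        risk = x[:, 0] * x[:, 1] * x[:, 2]       # toy multiplicative consequence model
        return np.mean(risk), np.quantile(risk, 0.95)

    # Two independently generated Latin hypercube samples, same variables and model.
    for seed in (101, 202):
        m, q95 = analysis(seed)
        print(f"replicate seed {seed}: mean risk {m:.3e}, 95th percentile {q95:.3e}")
    # Close agreement between the two replicates is the kind of repeatability evidence
    # the paper reports for a far more complex PRA calculation.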

18.
Bayesian methods are presented for updating the uncertainty in the predictions of an integrated Environmental Health Risk Assessment (EHRA) model. The methods allow the estimation of posterior uncertainty distributions based on the observation of different model outputs along the chain of the linked assessment framework. Analytical equations are derived for the case of the multiplicative lognormal risk model where the sequential log outputs (log ambient concentration, log applied dose, log delivered dose, and log risk) are each normally distributed. Given observations of a log output made with a normally distributed measurement error, the posterior distributions of the log outputs remain normal, but with modified means and variances, and induced correlations between successive log outputs and log inputs. The analytical equations for forward and backward propagation of the updates are generally applicable to sums of normally distributed variables. The Bayesian Monte Carlo (BMC) procedure is presented to provide an approximate, but more broadly applicable, method for numerically updating uncertainty with concurrent backward and forward propagation. Illustrative examples, presented for the multiplicative lognormal model, demonstrate agreement between the analytical and BMC methods, and show how uncertainty updates can propagate through a linked EHRA. The Bayesian updating methods facilitate the pooling of knowledge encoded in predictive models with that transmitted by research outcomes (e.g., field measurements), and thereby support the practice of iterative risk assessment and value of information appraisals.
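A minimal numerical sketch of the BMC idea for a multiplicative lognormal chain, with all distributions and the measurement invented: prior samples are drawn for each link, an observation of one intermediate log output is converted into importance weights through its likelihood, and the weighted samples give updated (posterior) summaries both downstream and upstream of the measured quantity.

    import numpy as np

    rng = np.random.default_rng(11)
    n = 200_000

    # Prior samples for a multiplicative lognormal chain: risk = concentration * intake * potency.
    log_conc    = rng.normal(np.log(2.0),  0.7, n)     # ambient concentration (mg/m^3)
    log_intake  = rng.normal(np.log(0.5),  0.4, n)     # intake factor
    log_potency = rng.normal(np.log(1e-3), 0.6, n)     # unit risk
    log_dose = log_conc + log_intake
    log_risk = log_dose + log_potency

    # Field measurement of log-concentration with normal measurement error.
    obs, obs_sd = np.log(1.2), 0.2
    weights = np.exp(-0.5 * ((obs - log_conc) / obs_sd) ** 2)
    weights /= weights.sum()                           # self-normalized importance weights

    def wmean(x, w): return np.sum(w * x)
    def wsd(x, w):   return np.sqrt(np.sum(w * (x - wmean(x, w)) ** 2))

    # The update propagates forward to risk and backward to the measured input.
    for name, x in [("log concentration", log_conc), ("log dose", log_dose), ("log risk", log_risk)]:
        print(f"{name:<18s} prior {x.mean():6.2f} +/- {x.std():.2f}   "
              f"posterior {wmean(x, weights):6.2f} +/- {wsd(x, weights):.2f}")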

19.
Motivated by the stochastic volatility and infinite-activity jump features of forward LIBOR rates, and by the limitations of the standard LIBOR market model (LMM) and the stochastic volatility LIBOR market model (SV-LMM), this paper introduces infinite-activity Lévy jump processes and builds a multifactor, non-standard Lévy-jump stochastic volatility LIBOR market model (SVLEVY-LMM). On this basis, under a nonparametric correlation matrix assumption, the local volatility and instantaneous correlation parameters are calibrated to the market using swaptions, interest rate caps, and other main calibration instruments together with Monte Carlo simulation; an adaptive Markov chain Monte Carlo (A-MCMC) method is applied to estimate the Lévy jump and stochastic volatility parameters. The empirical results show that, for forward rate volatility calibration, a piecewise-constant volatility structure best matches actual market conditions; for the forward rate correlation matrix, the nonparametric correlation matrix has the smallest estimation error and the best fit to the market; and the SVLEVY-LMM provides the best fit to forward LIBOR rates.

20.
Most public health risk assessments assume and combine a series of average, conservative, and worst-case values to derive a conservative point estimate of risk. This procedure has major limitations. This paper demonstrates a new methodology for extended uncertainty analyses in public health risk assessments using Monte Carlo techniques. The extended method begins, as do some conventional methods, with the preparation of a spreadsheet to estimate exposure and risk. This method, however, continues by modeling key inputs as random variables described by probability density functions (PDFs). Overall, the technique provides a quantitative way to estimate the probability distributions for exposure and health risks within the validity of the model used. As an example, this paper presents a simplified case study for children playing in soils contaminated with benzene and benzo(a)pyrene (BaP).
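A minimal sketch of the extended approach: the same kind of intake equation a point-estimate assessment would place in a spreadsheet, but with key inputs modeled as PDFs and evaluated by Monte Carlo to yield a distribution of dose and risk. The generic soil-ingestion equation, all distributions, and the slope factor are illustrative assumptions, not the case study's values.

    import numpy as np

    rng = np.random.default_rng(12)
    n = 100_000

    # Incidental soil ingestion by a child playing in contaminated soil:
    # dose (mg/kg-day) = Cs * IngR * EF * ED / (BW * AT), with inputs as PDFs instead of points.
    Cs   = rng.lognormal(np.log(5.0), 0.6, n)            # BaP in soil, mg/kg
    IngR = rng.lognormal(np.log(100.0), 0.5, n) * 1e-6   # soil ingestion, mg/day -> kg/day
    EF   = rng.triangular(100, 180, 350, n)              # exposure frequency, days/yr
    ED   = rng.uniform(3, 6, n)                          # exposure duration, yr
    BW   = rng.normal(16.0, 2.5, n)                      # body weight, kg
    AT   = 70.0 * 365.0                                  # lifetime averaging time, days

    dose = Cs * IngR * EF * ED / (BW * AT)
    risk = dose * 7.3                                    # times an (assumed) BaP slope factor, (mg/kg-day)^-1

    for q in (0.50, 0.95, 0.999):
        print(f"{q:.1%} dose {np.quantile(dose, q):.2e} mg/kg-day   risk {np.quantile(risk, q):.2e}")
    # A single conservative point estimate would sit far into the upper tail of this distribution.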
