Similar Articles
1.
This article demonstrates application of sensitivity analysis to risk assessment models with two-dimensional probabilistic frameworks that distinguish between variability and uncertainty. A microbial food safety process risk (MFSPR) model is used as a test bed. The process of identifying key controllable inputs and key sources of uncertainty using sensitivity analysis is challenged by typical characteristics of MFSPR models such as nonlinearity, thresholds, interactions, and categorical inputs. Among many available sensitivity analysis methods, analysis of variance (ANOVA) is evaluated in comparison to commonly used methods based on correlation coefficients. In a two-dimensional risk model, the identification of key controllable inputs that can be prioritized with respect to risk management is confounded by uncertainty. However, as shown here, ANOVA provided robust insights regarding controllable inputs most likely to lead to effective risk reduction despite uncertainty. ANOVA appropriately selected the top six important inputs, while correlation-based methods provided misleading insights. Bootstrap simulation is used to quantify uncertainty in ranks of inputs due to sampling error. For the selected sample size, differences in F values of 60% or more were associated with clear differences in rank order between inputs. Sensitivity analysis results identified inputs related to the storage of ground beef servings at home as the most important. Risk management recommendations are suggested in the form of a consumer advisory for better handling and storage practices.
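To make the procedure concrete, here is a minimal Python sketch (not the authors' code) of ANOVA-based sensitivity ranking with a bootstrap check on rank stability; the toy model, the number of inputs, and the sample sizes are illustrative assumptions.

```python
# A minimal sketch of ANOVA-based sensitivity ranking with bootstrap
# uncertainty on the ranks. Model, input count, and sample sizes are
# illustrative assumptions, not taken from the article.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(size=(n, 3))                     # three hypothetical inputs
y = (4 * X[:, 0] + np.where(X[:, 1] > 0.8, 3.0, 0.0)  # nonlinearity, threshold
     + 0.1 * X[:, 2] + rng.normal(scale=0.2, size=n))

def anova_f(x, y, levels=5):
    """One-way ANOVA F value of y across equal-count bins of input x."""
    edges = np.quantile(x, np.linspace(0, 1, levels + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, levels - 1)
    return f_oneway(*[y[idx == k] for k in range(levels)]).statistic

def rank_inputs(X, y):
    f_vals = np.array([anova_f(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(-f_vals), f_vals

ranks, f_vals = rank_inputs(X, y)
print("F values:", np.round(f_vals, 1), "ranking:", ranks)

# Bootstrap: resample (X, y) pairs to quantify sampling error in the ranks.
boot_top = np.empty(200, dtype=int)
for b in range(200):
    i = rng.integers(0, n, size=n)
    boot_top[b] = rank_inputs(X[i], y[i])[0][0]
print(f"top rank stable in {(boot_top == ranks[0]).mean():.0%} of replicates")
```

The binning step is what lets one-way ANOVA handle continuous inputs alongside categorical ones, and the bootstrap frequency gives a direct read on whether the top rank survives sampling error.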

2.
Uncertainty importance measures are quantitative tools aiming at identifying the contribution of uncertain inputs to output uncertainty. Their application ranges from food safety (Frey & Patil, 2002) to hurricane losses (Iman et al., 2005a, 2005b). The results and indications an analyst derives depend on the method selected for the study. In this work, we investigate the assumptions at the basis of various indicator families to discuss the information they convey to the analyst/decisionmaker. We start with nonparametric techniques, and then present variance-based methods. By means of an example we show that output variance does not always reflect a decisionmaker's state of knowledge of the inputs. We then examine the use of moment-independent approaches to global sensitivity analysis, i.e., techniques that look at the entire output distribution without a specific reference to its moments. Numerical results demonstrate that both moment-independent and variance-based indicators agree in identifying noninfluential parameters. However, differences in the ranking of the most relevant factors show that inputs that influence variance the most are not necessarily the ones that influence the output uncertainty distribution the most.
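The contrast between variance-based and moment-independent (density-based) indicators can be illustrated with a short sketch; the test function, sample size, and binning choices below are assumptions rather than the authors' example.

```python
# A hedged sketch contrasting a variance-based first-order index with a
# moment-independent (Borgonovo-style) delta, both estimated from a single
# sample by binning. The test function is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
n, m = 50_000, 20                      # sample size, number of input bins
X = rng.normal(size=(n, 2))
y = X[:, 0] + 0.3 * X[:, 1] ** 3       # heavy-tailed contribution from X2

def first_order_index(x, y, m=20):
    """S_i ~ Var(E[Y|X_i]) / Var(Y), inner mean taken per equal-count bin."""
    edges = np.quantile(x, np.linspace(0, 1, m + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, m - 1)
    cond_means = np.array([y[idx == k].mean() for k in range(m)])
    return cond_means.var() / y.var()

def delta_index(x, y, m=20, bins_y=40):
    """delta_i ~ 0.5 * E_x[ integral |f_Y - f_{Y|X_i}| dy ], via histograms."""
    y_edges = np.quantile(y, np.linspace(0, 1, bins_y + 1))
    f_y, _ = np.histogram(y, bins=y_edges, density=True)
    widths = np.diff(y_edges)
    edges = np.quantile(x, np.linspace(0, 1, m + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, m - 1)
    sep = 0.0
    for k in range(m):
        f_cond, _ = np.histogram(y[idx == k], bins=y_edges, density=True)
        sep += (idx == k).mean() * np.sum(np.abs(f_cond - f_y) * widths)
    return 0.5 * sep

for j in range(2):
    print(f"X{j+1}: S = {first_order_index(X[:, j], y):.3f}, "
          f"delta = {delta_index(X[:, j], y):.3f}")
```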

3.
Measures of sensitivity and uncertainty have become an integral part of risk analysis. Many such measures have a conditional probabilistic structure, for which a straightforward Monte Carlo estimation procedure has a double-loop form. Recently, a more efficient single-loop procedure has been introduced, and consistency of this procedure has been demonstrated separately for particular measures, such as those based on variance, density, and information value. In this work, we give a unified proof of single-loop consistency that applies to any measure satisfying a common rationale. This proof is not only more general but invokes less restrictive assumptions than heretofore in the literature, allowing for the presence of correlations among model inputs and of categorical variables. We examine numerical convergence of such an estimator under a variety of sensitivity measures. We also examine its application to a published medical case study.
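A minimal sketch of the single-loop ("given data") idea, under assumed toy inputs: one input/output sample supports a conditional measure by partitioning the input range, instead of re-running the model inside a nested double loop.

```python
# Generic single-loop estimator: partition on the conditioning input, apply
# an inner statistic to each conditional slice of y, then aggregate the
# per-slice values with an outer statistic. The statistics and the toy
# model below are illustrative choices, not the paper's case study.
import numpy as np

def single_loop(x, y, inner, outer, m=25):
    edges = np.quantile(x, np.linspace(0, 1, m + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, m - 1)
    vals = np.array([inner(y[idx == k]) for k in range(m)])
    weights = np.array([(idx == k).mean() for k in range(m)])
    return outer(vals, weights)

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(size=(2, 100_000))
y = np.sin(2 * np.pi * x1) + 0.5 * x2

# Variance-based measure: Var(E[Y|X]) / Var(Y), using a weighted variance
# of the conditional means as the outer statistic.
wvar = lambda v, w: np.sum(w * (v - np.sum(w * v)) ** 2)
s1 = single_loop(x1, y, np.mean, wvar) / y.var()
print(f"first-order index of x1 ~ {s1:.3f}")
```

Swapping the `inner` and `outer` statistics (e.g., a conditional density distance instead of a conditional mean) yields other measures from the same sample, which mirrors the common-rationale structure the unified proof addresses.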

4.
Risk analysis often depends on complex, computer-based models to describe links between policies (e.g., required emission-control equipment) and consequences (e.g., probabilities of adverse health effects). Appropriate specification of many model aspects is uncertain, including details of the model structure; transport, reaction-rate, and other parameters; and application-specific inputs such as pollutant-release rates. Because these uncertainties preclude calculation of the precise consequences of a policy, it is important to characterize the plausible range of effects. In principle, a probability distribution function for the effects can be constructed using Monte Carlo analysis, but the combinatorics of multiple uncertainties and the often high cost of model runs quickly exhaust available resources. This paper presents and applies a method to choose sets of input conditions (scenarios) that efficiently represent knowledge about the joint probability distribution of inputs. A simple score function approximately relating inputs to a policy-relevant output—in this case, globally averaged stratospheric ozone depletion—is developed. The probability density function for the score-function value is analytically derived from a subjective joint probability density for the inputs. Scenarios are defined by selected quantiles of the score function. Using this method, scenarios can be systematically selected in terms of the approximate probability distribution function for the output of concern, and probability intervals for the joint effect of the inputs can be readily constructed.
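A hedged sketch of quantile-based scenario selection: a linear score function stands in for the full model, its distribution is sampled (it could equally be derived analytically, as in the article), and scenarios are read off at chosen quantiles. The coefficients and input distributions are hypothetical.

```python
# Scenario selection via an approximate score function: scenarios are the
# input vectors whose score lies at chosen quantiles of the score
# distribution. All coefficients and distributions are assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
inputs = rng.normal(loc=[1.0, 0.5, 2.0], scale=[0.2, 0.1, 0.5], size=(n, 3))
coeffs = np.array([0.6, -0.3, 0.1])      # hypothetical input sensitivities
score = inputs @ coeffs                   # linear score function

for q in (0.05, 0.50, 0.95):
    target = np.quantile(score, q)
    scenario = inputs[np.argmin(np.abs(score - target))]
    print(f"q={q:.2f}: score={target:+.3f}, inputs={np.round(scenario, 3)}")
```

Because each scenario is pinned to a known quantile, probability intervals for the joint effect of the inputs follow directly from the quantile levels chosen.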

5.
This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction of the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators; thus, the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance.

6.
This paper addresses the use of data for identifying and characterizing uncertainties in model parameters and predictions. The Bayesian Monte Carlo method is formally presented and elaborated, and applied to the analysis of the uncertainty in a predictive model for global mean sea level change. The method uses observations of output variables, made with an assumed error structure, to determine a posterior distribution of model outputs. This is used to derive a posterior distribution for the model parameters. Results demonstrate the resolution of the uncertainty that is obtained as a result of the Bayesian analysis and also indicate the key contributors to the uncertainty in the sea level rise model. While the technique is illustrated with a simple, preliminary model, the analysis provides an iterative framework for model refinement. The methodology developed in this paper provides a mechanism for the incorporation of ongoing data collection and research in decision-making for problems involving uncertain environmental change.
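A minimal sketch (not the paper's code) of the Bayesian Monte Carlo mechanics: prior parameter samples are pushed through a toy sea-level model, weighted by the likelihood of observations under an assumed Gaussian error structure, and the weights define posterior distributions for both parameters and predictions. All numbers are hypothetical.

```python
# Bayesian Monte Carlo: importance-weight prior samples by the likelihood
# of observed outputs. Model, observations, and error scale are assumed.
import numpy as np

rng = np.random.default_rng(4)
n = 20_000
sens = rng.uniform(1.0, 6.0, n)        # prior on a sensitivity-like parameter
model = lambda s, t: 0.3 * s * t       # toy model: rise (cm) vs time (decades)

t_obs = np.array([1.0, 2.0, 3.0])      # hypothetical observation times
obs = np.array([1.1, 2.3, 3.2])        # hypothetical observed rise (cm)
sigma = 0.3                             # assumed observation error (cm)

resid = obs[None, :] - model(sens[:, None], t_obs[None, :])
log_like = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
w = np.exp(log_like - log_like.max())
w /= w.sum()                            # posterior weights on prior samples

post_mean = np.sum(w * sens)
post_sd = np.sqrt(np.sum(w * (sens - post_mean) ** 2))
print(f"prior mean 3.50 -> posterior {post_mean:.2f} +/- {post_sd:.2f}")
print(f"posterior mean prediction at t=5: {np.sum(w * model(sens, 5.0)):.2f} cm")
```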

7.
Pest risk maps can provide useful decision support in invasive species management, but most do not adequately consider the uncertainty associated with predicted risk values. This study explores how increased uncertainty in a risk model's numeric assumptions might affect the resultant risk map. We used a spatial stochastic model, integrating components for entry, establishment, and spread, to estimate the risks of invasion and their variation across a two-dimensional landscape for Sirex noctilio, a nonnative woodwasp recently detected in the United States and Canada. Here, we present a sensitivity analysis of the mapped risk estimates to variation in key model parameters. The tested parameter values were sampled from symmetric uniform distributions defined by a series of nested bounds (±5%, …, ±40%) around the parameters' initial values. The results suggest that the maximum annual spread distance, which governs long-distance dispersal, was by far the most sensitive parameter. At ±15% or larger variability bound increments for this parameter, there were noteworthy shifts in map risk values, but no other parameter had a major effect, even at wider bounds of variation. The methodology presented here is generic and can be used to assess the impact of uncertainties on the stability of pest risk maps as well as to identify geographic areas for which management decisions can be made confidently, regardless of uncertainty.

8.
How can risk analysts help to improve policy and decision making when the correct probabilistic relation between alternative acts and their probable consequences is unknown? This practical challenge of risk management with model uncertainty arises in problems from preparing for climate change to managing emerging diseases to operating complex and hazardous facilities safely. We review constructive methods for robust and adaptive risk analysis under deep uncertainty. These methods are not yet as familiar to many risk analysts as older statistical and model-based methods, such as the paradigm of identifying a single "best-fitting" model and performing sensitivity analyses for its conclusions. They provide genuine breakthroughs for improving predictions and decisions when the correct model is highly uncertain. We demonstrate their potential by summarizing a variety of practical risk management applications.

9.
In risk assessment, the moment-independent sensitivity analysis (SA) technique for reducing the model uncertainty has attracted a great deal of attention from analysts and practitioners. It aims at measuring the relative importance of an individual input, or a set of inputs, in determining the uncertainty of model output by looking at the entire distribution range of model output. In this article, along the lines of Plischke et al., we point out that the original moment-independent SA index (also called the delta index) can also be interpreted as a dependence measure between model output and input variables, and introduce another moment-independent SA index (called the extended delta index) based on copulas. Then, nonparametric methods for estimating the delta and extended delta indices are proposed. Both methods need only a single set of samples to compute all the indices; thus, they overcome the "curse of dimensionality." Finally, an analytical test example, a risk assessment model, and the Level E model are employed for comparing the delta and extended delta indices and testing the two calculation methods. Results show that the delta and extended delta indices produce the same importance ranking in these three test examples. It is also shown that the two proposed calculation methods dramatically reduce the computational burden.

10.
Introduction of classical swine fever virus (CSFV) is a continuing threat to the pig production sector in the European Union. A scenario tree model was developed to obtain more insight into the main risk factors determining the probability of CSFV introduction (P(CSFV)). As this model contains many uncertain input parameters, sensitivity analysis was used to indicate which of these parameters influence model results most. Group screening combined with the statistical techniques of design of experiments and meta-modeling was applied to detect the most important uncertain input parameters among a total of 257 parameters. The response variable chosen was the annual probability of CSFV introduction into the Netherlands. Only 128 scenario calculations were needed to specify the final meta-model. A subsequent one-at-a-time sensitivity analysis was performed with the main effects of this meta-model to explore their impact on the ranking of risk factors contributing most to the annual P(CSFV). The results indicated that model outcome is most sensitive to the uncertain input parameters concerning the expected number of classical swine fever epidemics in Germany, Belgium, and the United Kingdom and the probability that CSFV survives in an empty livestock truck traveling over a distance of 0-900 km.

11.
Marc Kennedy, Andy Hart. Risk Analysis, 2009, 29(10): 1427-1442
We propose new models for dealing with various sources of variability and uncertainty that influence risk assessments for dietary exposure. The uncertain or random variables involved can interact in complex ways, and the focus is on methodology for integrating their effects and on assessing the relative importance of including different uncertainty model components in the calculation of dietary exposures to contaminants, such as pesticide residues. The combined effect is reflected in the final inferences about the population of residues and subsequent exposure assessments. In particular, we show how measurement uncertainty can have a significant impact on results and discuss novel statistical options for modeling this uncertainty. The effect of measurement error is often ignored, perhaps because the laboratory process conforms to the relevant international standards, for example; or it is treated in an ad hoc way. These issues are common to many dietary risk analysis problems, and the methods could be applied to any food and chemical of interest. An example is presented using data on carbendazim in apples and consumption surveys of toddlers.

12.
Biomagnification of organochlorine and other persistent organic contaminants by higher trophic level organisms represents one of the most significant sources of uncertainty and variability in evaluating potential risks associated with disposal of dredged materials. While it is important to distinguish between population variability (e.g., true population heterogeneity in fish weight and lipid content) and uncertainty (e.g., measurement error), they can be operationally difficult to define separately in probabilistic estimates of human health and ecological risk. We propose a disaggregation of uncertain and variable parameters based on: (1) availability of supporting data; (2) the specific management and regulatory context (in this case, of the U.S. Army Corps of Engineers/U.S. Environmental Protection Agency tiered approach to dredged material management); and (3) professional judgment and experience in conducting probabilistic risk assessments. We describe and quantitatively evaluate several sources of uncertainty and variability in estimating risk to human health from trophic transfer of polychlorinated biphenyls (PCBs) using a case study of sediments obtained from the New York-New Jersey Harbor and being evaluated for disposal at an open-water, off-shore disposal site within the northeast region. The estimates of PCB concentrations in fish and dietary doses of PCBs to humans ingesting fish are expressed as distributions of values, of which the arithmetic mean or mode represents a particular fractile. The distribution of risk values is obtained using a food chain biomagnification model developed by Gobas by specifying distributions for input parameters disaggregated to represent either uncertainty or variability. Only those sources of uncertainty that could be quantified were included in the analysis. Results for several different two-dimensional Latin Hypercube analyses are provided to evaluate the influence of the uncertain versus variable disaggregation of model parameters. The analysis suggests that variability in human exposure parameters is greater than the uncertainty bounds on any particular fractile, given the described assumptions.

13.
A better understanding of the uncertainty that exists in models used for seismic risk assessment is critical to improving risk-based decisions pertaining to earthquake safety. Current models estimating the probability of collapse of a building do not consider comprehensively the nature and impact of uncertainty. This article presents a model framework to enhance seismic risk assessment and thus gives decisionmakers a fuller understanding of the nature and limitations of the estimates. This can help ensure that risks are not over- or underestimated and the value of acquiring accurate data is appreciated fully. The methodology presented provides a novel treatment of uncertainties in input variables, their propagation through the model, and their effect on the results. The study presents ranges of possible annual collapse probabilities for different case studies on buildings in different parts of the world, exposed to different levels of seismicity, and with different vulnerabilities. A global sensitivity analysis was conducted to determine the significance of uncertain variables. Two key outcomes are (1) that the uncertainty in ground-motion conversion equations has the largest effect on the uncertainty in the calculation of annual collapse probability; and (2) the vulnerability of a building appears to have an effect on the range of annual collapse probabilities produced, i.e., the level of uncertainty in the estimate of annual collapse probability, with less vulnerable buildings having a smaller uncertainty.

14.
Treatment of Uncertainty in Performance Assessments for Complex Systems
When viewed at a high level, performance assessments (PAs) for complex systems involve two types of uncertainty: stochastic uncertainty, which arises because the system under study can behave in many different ways, and subjective uncertainty, which arises from a lack of knowledge about quantities required within the computational implementation of the PA. Stochastic uncertainty is typically incorporated into a PA with an experimental design based on importance sampling and leads to the final results of the PA being expressed as a complementary cumulative distribution function (CCDF). Subjective uncertainty is usually treated with Monte Carlo techniques and leads to a distribution of CCDFs. This presentation discusses the use of the Kaplan/Garrick ordered triple representation for risk in maintaining a distinction between stochastic and subjective uncertainty in PAs for complex systems. The topics discussed include (1) the definition of scenarios and the calculation of scenario probabilities and consequences, (2) the separation of subjective and stochastic uncertainties, (3) the construction of CCDFs required in comparisons with regulatory standards (e.g., 40 CFR Part 191, Subpart B for the disposal of radioactive waste), and (4) the performance of uncertainty and sensitivity studies. Results obtained in a preliminary PA for the Waste Isolation Pilot Plant, an uncertainty and sensitivity analysis of the MACCS reactor accident consequence analysis model, and the NUREG-1150 probabilistic risk assessments are used for illustration.
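The two-level structure described above can be sketched as follows, with an assumed toy consequence model: an outer Monte Carlo loop over subjective (epistemic) parameters produces, for each realization, a CCDF over the stochastic (aleatory) dimension, yielding a distribution of CCDFs.

```python
# Distribution of CCDFs: outer loop over subjective uncertainty, inner
# sampling of stochastic variation. The consequence model is assumed.
import numpy as np

rng = np.random.default_rng(5)
n_epi, n_alea = 50, 5_000
grid = np.linspace(0.0, 10.0, 200)            # consequence levels

ccdfs = np.empty((n_epi, grid.size))
for i in range(n_epi):
    rate = rng.lognormal(mean=0.0, sigma=0.5)            # subjective draw
    conseq = rng.exponential(scale=rate, size=n_alea)    # stochastic draws
    ccdfs[i] = (conseq[:, None] > grid[None, :]).mean(axis=0)

# Summaries comparable to a regulatory limit: pointwise mean and 95th
# percentile of the CCDF family at each consequence level.
mean_ccdf = ccdfs.mean(axis=0)
p95_ccdf = np.percentile(ccdfs, 95, axis=0)
k = np.searchsorted(grid, 5.0)
print(f"P(consequence > 5): mean {mean_ccdf[k]:.4f}, 95th pct {p95_ccdf[k]:.4f}")
```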

15.
Cox LA. Risk Analysis, 2011, 31(10): 1530-1533; discussion 1538-1542
Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.

16.
The life cycle assessment (LCA) framework has established itself as the leading tool for the assessment of the environmental impact of products. Several works have established the need for integrating the LCA and risk analysis methodologies, given their many common aspects. One way to achieve such integration is to guarantee that uncertainties in LCA modeling are carefully treated. It has been claimed that more attention should be paid to quantifying the uncertainties present in the various phases of LCA. Though the topic has been attracting increasing attention of practitioners and experts in LCA, there is still a lack of understanding and a limited use of the available statistical tools. In this work, we introduce a protocol to conduct global sensitivity analysis in LCA. The article focuses on the life cycle impact assessment (LCIA), and particularly on the relevance of global techniques for the development of trustable impact assessment models. We use a novel characterization model developed for the quantification of the impacts of noise on humans as a test case. We show that global SA is fundamental to guarantee that the modeler has a complete understanding of (i) the structure of the model and (ii) the importance of uncertain model inputs and the interactions among them.

17.
A probabilistic model (SHEDS-Wood) was developed to examine children's exposure and dose to chromated copper arsenate (CCA)-treated wood, as described in Part 1 of this two-part article. This Part 2 article discusses sensitivity and uncertainty analyses conducted to assess the key model inputs and areas of needed research for children's exposure to CCA-treated playsets and decks. The following types of analyses were conducted: (1) sensitivity analyses using a percentile scaling approach and multiple stepwise regression; and (2) uncertainty analyses using the bootstrap and two-stage Monte Carlo techniques. The five most important variables, based on both sensitivity and uncertainty analyses, were: wood surface residue-to-skin transfer efficiency; wood surface residue levels; fraction of hand surface area mouthed per mouthing event; average fraction of nonresidential outdoor time a child plays on/around CCA-treated public playsets; and frequency of hand washing. In general, parameter uncertainty produced a factor of 8 of uncertainty in predicted population dose estimates at the 5th and 95th percentiles, and a factor of 4 at the 50th percentile. Data were available for most of the key model inputs identified with sensitivity and uncertainty analyses; however, there were few or no data for some key inputs. To evaluate and improve the accuracy of model results, future measurement studies should obtain longitudinal time-activity diary information on children, spatial and temporal measurements of residue and soil concentrations on or near CCA-treated playsets and decks, and key exposure factors. Future studies should also address other sources of uncertainty in addition to parameter uncertainty, such as scenario and model uncertainty.

18.
This paper demonstrates a new methodology for probabilistic public health risk assessment using the first-order reliability method. The method provides the probability that incremental lifetime cancer risk exceeds a threshold level, and the probabilistic sensitivity quantifying the relative impact of considering the uncertainty of each random variable on the exceedance probability. The approach is applied to a case study given by Thompson et al. (1) on cancer risk caused by ingestion of benzene-contaminated soil, and the results are compared to that of the Monte Carlo method. Parametric sensitivity analyses are conducted to assess the sensitivity of the probabilistic event with respect to the distribution parameters of the basic random variables, such as the mean and standard deviation. The technique is a novel approach to probabilistic risk assessment, and can be used in situations when Monte Carlo analysis is computationally expensive, such as when the simulated risk is at the tail of the risk probability distribution.
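A minimal first-order reliability method (FORM) sketch under assumed lognormal inputs and a hypothetical two-factor risk model (risk = concentration x intake rate), not the paper's benzene case study: the limit state "risk exceeds a threshold" is expressed in standard normal space, the design point is found by constrained optimization, and the exceedance probability is approximated by Phi(-beta).

```python
# FORM sketch: design point search in standard normal space; all model
# parameters and distributions are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

mu = np.array([np.log(1e-3), np.log(2.0)])   # log-means: concentration, intake
sd = np.array([0.8, 0.4])                    # log-standard deviations
threshold = 1e-2                              # risk threshold of interest

def g(u):
    """Limit state in standard normal space: g(u) <= 0 means exceedance."""
    x = np.exp(mu + sd * u)                  # map u to lognormal inputs
    return np.log(threshold) - np.log(x[0] * x[1])

# Design point: the point on the surface g(u) = 0 closest to the origin.
res = minimize(lambda u: u @ u, x0=np.ones(2),
               constraints={"type": "eq", "fun": g})
beta = np.sqrt(res.fun)                      # reliability index
print(f"beta = {beta:.3f}, P(risk > {threshold:g}) ~ {norm.cdf(-beta):.2e}")
```

Because this toy limit state is linear in the standard normal variables, FORM is exact here; in general it approximates a tail probability without the many model runs a Monte Carlo estimate of a small exceedance probability would require.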

19.
Andrea Saltelli, Stefano Tarantola, Karen Chan. Risk Analysis, 1998, 18(6): 799-803
The motivation of the present work is to provide an auxiliary tool for the decision-maker (DM) faced with predictive model uncertainty. The tool is especially suited for the allocation of R&D resources. When making decisions under uncertainty using the output of mathematical or computational models, the DM might be helped if the uncertainty in model predictions were decomposed in a quantitative, rather than qualitative, fashion, apportioning uncertainty according to source. This would allow optimal use of resources to reduce the imprecision in the prediction. For complex models, such a decomposition of the uncertainty into constituent elements could be impractical as such, due to the large number of parameters involved. If instead parameters could be grouped into logical subsets, then the analysis could be more useful, also because the decision maker is likely to have different perceptions (and degrees of acceptance) for different kinds of uncertainty. For instance, the decomposition into groups could involve one subset of factors for each constituent module of the model; or one set for the weights and one for the factors in a multicriteria analysis; or phenomenological parameters of the model vs. factors driving the model configuration/structure (aggregation level, etc.); finally, one might imagine that a partition of the uncertainty could be sought between stochastic (or aleatory) and subjective (or epistemic) uncertainty. The present note shows how to compute a rigorous decomposition of the output's variance with grouped parameters, and how this approach may be beneficial for the efficiency and transparency of the analysis.
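A hedged sketch of variance decomposition with grouped parameters, using a pick-freeze estimator: the first-order index of a group is obtained by holding the group's columns fixed while resampling everything outside it. The test model and the grouping are illustrative assumptions.

```python
# Grouped first-order Sobol indices via pick-freeze: freeze the group's
# columns from sample A, take all other columns from independent sample B.
# Model and grouping are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
n, d = 100_000, 4
model = lambda X: X[:, 0] * X[:, 1] + 2.0 * X[:, 2] + 0.5 * X[:, 3]

A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))               # independent resample
yA, yB = model(A), model(B)

def group_index(group):
    """S_G ~ Var(E[Y | X_G]) / Var(Y), by the pick-freeze identity
    E[yA * (yAB - yB)] = Var(E[Y | X_G])."""
    AB = B.copy()
    AB[:, group] = A[:, group]
    return np.mean(yA * (model(AB) - yB)) / yA.var()

print("S of group {0,1}:", round(group_index([0, 1]), 3))  # interacting pair
print("S of group {2,3}:", round(group_index([2, 3]), 3))  # additive pair
```

Here the two group indices sum to roughly one because the toy model has no interaction across groups; a shortfall from one would signal cross-group interaction terms.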

20.
Jan F. Van Impe. Risk Analysis, 2011, 31(8): 1295-1307
The aim of quantitative microbiological risk assessment is to estimate the risk of illness caused by the presence of a pathogen in a food type, and to study the impact of interventions. Because of inherent variability and uncertainty, risk assessments are generally conducted stochastically, and where possible it is advised to characterize variability separately from uncertainty. Sensitivity analysis indicates to which of the input variables the outcome of a quantitative microbiological risk assessment is most sensitive. Although a number of methods exist to apply sensitivity analysis to a risk assessment with probabilistic input variables (such as contamination, storage temperature, storage duration, etc.), it is challenging to perform sensitivity analysis when a risk assessment includes a separate characterization of the variability and uncertainty of input variables. A procedure is proposed that focuses on the relation between risk estimates obtained by Monte Carlo simulation and the location of pseudo-randomly sampled input variables within the uncertainty and variability distributions. Within this procedure, two methods, an ANOVA-like model and Sobol sensitivity indices, are used to obtain and compare the impact of variability and of uncertainty of all input variables, and of model uncertainty and scenario uncertainty. As a case study, this methodology is applied to a risk assessment estimating the risk of contracting listeriosis due to consumption of deli meats.
