Similar Literature
20 similar documents retrieved.
1.
Congress is currently considering adopting a mathematical formula to assign shares in cancer causation to specific doses of radiation, for use in establishing liability and compensation awards. The proposed formula, if it were sound, would allow difficult problems in tort law and public policy to be resolved by reference to tabulated "probabilities of causation." This article examines the statistical and conceptual bases for the proposed methodology. We find that the proposed formula is incorrect as an expression for "probability of causation," that it implies hidden, debatable policy judgments in its treatment of factor interactions and uncertainties, and that it cannot in general be quantified with sufficient precision to be useful. Three generic sources of statistical uncertainty are identified--sampling variability, population heterogeneity, and error propagation--that prevent accurate quantification of "assigned shares." These uncertainties arise whenever aggregate epidemiological or risk data are used to draw causal inferences about individual cases.
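For context, a minimal sketch of the kind of formula at issue: the "assigned share" or probability of causation is commonly tabulated from relative-risk estimates. The expression below is the standard epidemiological form, not necessarily the exact formula in the proposed legislation.

```latex
% Assigned share / probability of causation as commonly tabulated
% (standard epidemiological form; an illustrative assumption, not the statutory formula)
\[
  \mathrm{PC}(d) \;=\; \frac{RR(d) - 1}{RR(d)}
             \;=\; \frac{ERR(d)}{1 + ERR(d)},
\]
where $RR(d)$ is the relative risk and $ERR(d) = RR(d) - 1$ the excess relative
risk at radiation dose $d$. For example, $RR = 1.25$ gives $\mathrm{PC} = 0.2$,
i.e., a 20\% assigned share.
```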

2.
The role of the risk analyst is critical in understanding and managing uncertainty. However, there is another type of uncertainty that is rarely discussed: the legal, social, and reputational liabilities of the risk analyst. Recent events have shown that professionals participating in risk analysis can be held personally liable. It is therefore timely and important to ask: How can risk science guide risk analysis with consideration of those liabilities, particularly in response to emerging and unprecedented risks? This paper studies this topic by (1) categorizing how professionals with risk analysis responsibilities have historically been held liable, and (2) developing a framework to address uncertainty related to those potential liabilities. The resulting framework will enable individual analysts and organizations to investigate and manage the expectations of risk analysts and others as they apply risk principles and methods. This paper will be of interest to risk researchers, risk professionals, and industry professionals who seek maturity within their risk programs.

3.
This article demonstrates application of sensitivity analysis to risk assessment models with two-dimensional probabilistic frameworks that distinguish between variability and uncertainty. A microbial food safety process risk (MFSPR) model is used as a test bed. The process of identifying key controllable inputs and key sources of uncertainty using sensitivity analysis is challenged by typical characteristics of MFSPR models such as nonlinearity, thresholds, interactions, and categorical inputs. Among many available sensitivity analysis methods, analysis of variance (ANOVA) is evaluated in comparison to commonly used methods based on correlation coefficients. In a two-dimensional risk model, the identification of key controllable inputs that can be prioritized with respect to risk management is confounded by uncertainty. However, as shown here, ANOVA provided robust insights regarding controllable inputs most likely to lead to effective risk reduction despite uncertainty. ANOVA appropriately selected the top six important inputs, while correlation-based methods provided misleading insights. Bootstrap simulation is used to quantify uncertainty in ranks of inputs due to sampling error. For the selected sample size, differences in F values of 60% or more were associated with clear differences in rank order between inputs. Sensitivity analysis results identified inputs related to the storage of ground beef servings at home as the most important. Risk management recommendations are suggested in the form of a consumer advisory for better handling and storage practices.
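As a rough illustration of the ANOVA-based sensitivity analysis and bootstrap ranking described above (a sketch with an invented toy risk model and inputs, not the authors' MFSPR code), one can bin each sampled input into quantile groups, compute a one-way F statistic for the risk output across those groups, and bootstrap the resulting input ranks:

```python
import numpy as np
from scipy.stats import f_oneway, rankdata

rng = np.random.default_rng(0)
n = 5000

# Hypothetical inputs (stand-ins for storage time, temperature, cooking, ...)
X = {
    "storage_time_h": rng.lognormal(2.0, 0.5, n),
    "storage_temp_C": rng.normal(7.0, 3.0, n),
    "cooking_log_reduction": rng.uniform(3.0, 7.0, n),
}
# Toy nonlinear risk model with a threshold, standing in for the MFSPR model
risk = (np.exp(0.3 * X["storage_time_h"] / 10)
        * np.where(X["storage_temp_C"] > 5, 2.0, 1.0)
        / 10 ** X["cooking_log_reduction"])

def f_value(x, y, bins=10):
    """One-way ANOVA F statistic of y across quantile bins of x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    labels = np.digitize(x, edges[1:-1])
    return f_oneway(*[y[labels == g] for g in np.unique(labels)]).statistic

names = list(X)
F = np.array([f_value(X[k], risk) for k in names])
print(dict(zip(names, np.round(F, 1))))

# Bootstrap the ranking to quantify sampling error in input ranks
ranks = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    Fb = [f_value(X[k][idx], risk[idx]) for k in names]
    ranks.append(rankdata(-np.asarray(Fb)))
print("mean bootstrap rank:", dict(zip(names, np.mean(ranks, axis=0).round(2))))
```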

4.
M. C. Kennedy, Risk Analysis, 2011, 31(10): 1597-1609
Two-dimensional Monte Carlo simulation is frequently used to implement probabilistic risk models, as it allows for uncertainty and variability to be quantified separately. In many cases, we are interested in the proportion of individuals from a variable population exceeding a critical threshold, together with uncertainty about this proportion. In this article we introduce a new method that can accurately estimate these quantities much more efficiently than conventional algorithms. We also show how those model parameters having the greatest impact on the probabilities of rare events can be quickly identified via this method. The algorithm combines elements from well-established statistical techniques in extreme value theory and Bayesian analysis of computer models. We demonstrate the practical application of these methods with a simple example, in which the true distributions are known exactly, and also with a more realistic model of microbial contamination of milk with seven parameters. For the latter, sensitivity analysis (SA) is shown to identify the two inputs explaining the majority of variation in distribution tail behavior. In the subsequent prediction of probabilities of large contamination events, similar results are obtained using the new approach, which takes 43 seconds, and the conventional simulation, which requires more than 3 days.
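For comparison, a minimal sketch of the conventional two-dimensional Monte Carlo baseline that such a method improves on, using made-up distributions: an outer loop samples uncertain parameters and an inner loop samples the variable population to estimate the proportion exceeding a threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
threshold = 100.0           # critical exposure threshold (illustrative units)
n_unc, n_var = 200, 10_000  # outer (uncertainty) and inner (variability) samples

proportions = np.empty(n_unc)
for i in range(n_unc):
    # Outer loop: draw uncertain parameters (here, the mean and sd of a
    # lognormal exposure distribution) from their uncertainty distributions.
    mu = rng.normal(3.0, 0.2)
    sigma = abs(rng.normal(0.8, 0.1))
    # Inner loop: simulate the variable population given those parameters.
    exposure = rng.lognormal(mu, sigma, n_var)
    proportions[i] = np.mean(exposure > threshold)

# Uncertainty about the population proportion exceeding the threshold
print("median exceedance fraction:", np.median(proportions))
print("95% uncertainty interval:", np.percentile(proportions, [2.5, 97.5]))
```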

5.
Standard statistical methods understate the uncertainty one should attach to effect estimates obtained from observational data. Among the methods used to address this problem are sensitivity analysis, Monte Carlo risk analysis (MCRA), and Bayesian uncertainty assessment. Estimates from MCRAs have been presented as if they were valid frequentist or Bayesian results, but examples show that they need not be either in actual applications. It is concluded that both sensitivity analyses and MCRA should begin with the same type of prior specification effort as Bayesian analysis.
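A stylized example of what such an MCRA of an observational estimate can look like (the bias model, prior distributions, and numbers are invented for illustration): bias parameters for an unmeasured confounder are drawn from prior distributions and used to adjust the observed relative risk.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sim = 50_000

# Observed (possibly confounded) relative risk and its sampling error, log scale
log_rr_obs = rng.normal(np.log(1.8), 0.15, n_sim)

# "Prior" distributions for bias parameters of a hypothetical unmeasured confounder
p1 = rng.uniform(0.3, 0.6, n_sim)               # prevalence among exposed
p0 = rng.uniform(0.1, 0.4, n_sim)               # prevalence among unexposed
rr_cd = rng.lognormal(np.log(2.0), 0.3, n_sim)  # confounder-disease relative risk

# Classic external-adjustment bias factor for unmeasured confounding
bias = (p1 * (rr_cd - 1) + 1) / (p0 * (rr_cd - 1) + 1)
rr_adj = np.exp(log_rr_obs) / bias

print("adjusted RR median:", np.round(np.median(rr_adj), 2))
print("95% simulation interval:", np.round(np.percentile(rr_adj, [2.5, 97.5]), 2))
```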

6.
Risk Analysis, 2018, 38(1): 163-176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically-based risk estimates based on a single statistical model selected from the scientific literature, called the "core" model. The uncertainty presented for "core" risk estimates reflects only the statistical uncertainty associated with that one model's concentration-response function parameter estimate(s). However, epidemiologically-based risk estimates are also subject to "model uncertainty," which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long-term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.
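A hedged sketch of the integrated-uncertainty idea (the model set, weights, coefficients, and baseline numbers below are illustrative assumptions, not EPA's): draw a concentration-response model according to a subjective weight, then draw that model's uncertain coefficient, and propagate both sources of uncertainty into the risk estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Competing concentration-response models (illustrative weights and coefficients);
# beta is a log-linear coefficient per ppb of ozone.
models = {
    "single-city, no threshold": dict(weight=0.4, beta_mean=0.0040, beta_se=0.0010),
    "multi-city, no threshold":  dict(weight=0.4, beta_mean=0.0025, beta_se=0.0008),
    "threshold at 40 ppb":       dict(weight=0.2, beta_mean=0.0010, beta_se=0.0010),
}

baseline_deaths = 10_000   # hypothetical baseline respiratory deaths
delta_ozone_ppb = 5.0      # hypothetical reduction from a tighter NAAQS

# Integrated uncertainty analysis: sample a model according to its weight,
# then sample that model's uncertain coefficient.
names = list(models)
weights = np.array([models[m]["weight"] for m in names])
choice = rng.choice(len(names), size=n, p=weights)
beta_mean = np.array([models[m]["beta_mean"] for m in names])[choice]
beta_se = np.array([models[m]["beta_se"] for m in names])[choice]
beta = rng.normal(beta_mean, beta_se)

# Avoided deaths under the hypothetical concentration reduction
avoided = baseline_deaths * (1.0 - np.exp(-beta * delta_ozone_ppb))
print("mean avoided deaths:", round(float(avoided.mean())))
print("95% interval:", np.percentile(avoided, [2.5, 97.5]).round(0))
```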

7.
Interest in examining both the uncertainty and variability in environmental health risk assessments has led to increased use of methods for propagating uncertainty. While a variety of approaches have been described, the advent of both powerful personal computers and commercially available simulation software has led to increased use of Monte Carlo simulation. Although most analysts and regulators are encouraged by these developments, some are concerned that Monte Carlo analysis is being applied uncritically. The validity of any analysis is contingent on the validity of the inputs to the analysis. In the propagation of uncertainty or variability, it is essential that the statistical distributions of input variables are properly specified. Furthermore, any dependencies among the input variables must be considered in the analysis. In light of the potential difficulty in specifying dependencies among input variables, it is useful to consider whether there exist rules of thumb as to when correlations can be safely ignored (i.e., when little overall precision is gained by additional effort to estimate the correlation more accurately). We make use of well-known error propagation formulas to develop expressions intended to aid the analyst in situations wherein normally and lognormally distributed variables are linearly correlated.
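A standard form of these error-propagation expressions (the article develops more detailed rules of thumb) is the first-order variance of a linear combination of correlated inputs:

```latex
% First-order (delta-method) propagation for linearly correlated inputs
\[
  \operatorname{Var}(aX + bY)
    \;=\; a^{2}\sigma_X^{2} + b^{2}\sigma_Y^{2}
        + 2ab\,\rho_{XY}\,\sigma_X \sigma_Y .
\]
```

For a product of lognormal variables the same rule applies on the log scale, since $\ln(XY) = \ln X + \ln Y$. With $a=b=1$ and $\sigma_X=\sigma_Y=\sigma$, the variance is $2\sigma^{2}(1+\rho_{XY})$, so ignoring a correlation of $\rho_{XY}=0.3$ understates the variance of the sum by about 23% (and the standard deviation by about 12%).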

8.
Advocates of quantitative uncertainty analysis (QUA) have invested substantial effort in explaining why uncertainty is a crucial aspect of risk and yet have devoted much less effort to explaining how QUA can improve the risk manager's performance. This paper develops a teaching example, using a personal decision problem with subtle parallels to societal risk management, to show how choices made with increasing appreciation of uncertainty are superior ones. In the hypothetical, five analysts explain the same uncertain prospect (whether to invest in a volatile stock issue), with increasing attention to the nuances of uncertainty. The path through these different perspectives on the decision demonstrates four general points applicable to environmental risk management: (1) Various point estimates with equal claim to being "best estimates" can differ markedly from each other and lead to diametrically different choices; (2) "conservatism" has both relative and absolute meanings, with different implications for decision-making; (3) both inattention to and fixation on "outliers" in the uncertainty distribution can lead the manager astray; and (4) the best QUA is one that helps discriminate among real options, that points to optimum pathways toward new information, and that spurs on the iterative search for new decision options that may outperform any of the initial ones offered.

9.
Brand, Kevin P., Rhomberg, Lorenz, Evans, John S., Risk Analysis, 1999, 19(2): 295-308
The prominent role of animal bioassay evidence in environmental regulatory decisions compels a careful characterization of extrapolation uncertainties. In noncancer risk assessment, uncertainty factors are incorporated to account for each of several extrapolations required to convert a bioassay outcome into a putative subthreshold dose for humans. Measures of relative toxicity taken between different dosing regimens, different endpoints, or different species serve as a reference for establishing the uncertainty factors. Ratios of no observed adverse effect levels (NOAELs) have been used for this purpose; statistical summaries of such ratios across sets of chemicals are widely used to guide the setting of uncertainty factors. Given the poor statistical properties of NOAELs, the informativeness of these summary statistics is open to question. To evaluate this, we develop an approach to calibrate the ability of NOAEL ratios to reveal true properties of a specified distribution for relative toxicity. A priority of this analysis is to account for dependencies of NOAEL ratios on experimental design and other exogenous factors. Our analysis of NOAEL ratio summary statistics finds (1) that such dependencies are complex and produce pronounced systematic errors and (2) that sampling error associated with typical sample sizes (50 chemicals) is non-negligible. These uncertainties strongly suggest that NOAEL ratio summary statistics cannot be taken at face value; conclusions based on such ratios reported in well over a dozen published papers should be reconsidered.

10.
This article reviews five published "second-order" risk comparisons from the past four decades that implied precise understanding, and hence clear relationships or orderings, of the underlying risks. "Second order" here refers to efforts that extract information from original sources with the goal of relating diverse findings. All five of these publications have frequently been cited in the peer-reviewed literature and/or in risk regulatory debate in the United States. Each is associated with at least one contemporaneous critique that the findings were excessively precise. None of these critiques suggested that an alternative relationship or ordering of the risks evaluated was more appropriate. Instead, each critique concluded that alternative, contradictory relationships were at least as plausible given data and/or analytical limitations. In one case, the critique led to the withdrawal of the original publication. The original findings have been propagated or used uncritically in subsequent literature, including in political support for cost-effectiveness analysis. In other cases, the critiques have been used to discredit quantitative risk analysis in general, especially in the cases of nuclear power and cost-benefit analysis. Both of these outcomes are undesirable. Future risk comparisons should avoid excessive precision, include explicit discussion of uncertainty, and differentiate between plausible estimates and expected values.

11.
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix.
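The article's worked examples are in MATLAB and R; the sketch below shows the same embarrassingly parallel pattern in Python, with a toy calculation standing in for a real simulation run.

```python
import numpy as np
from multiprocessing import Pool

def one_replication(seed: int) -> float:
    """One independent simulation replication; replications share no state,
    which is what makes the workload embarrassingly parallel."""
    rng = np.random.default_rng(seed)
    x = rng.lognormal(0.0, 1.0, 100_000)   # stand-in for a risk model run
    return float(np.mean(x > 5.0))         # quantity of interest

if __name__ == "__main__":
    seeds = range(1_000)                   # independent replications
    with Pool() as pool:                   # one worker per available core
        results = pool.map(one_replication, seeds)
    print("mean:", np.mean(results),
          "95% interval:", np.percentile(results, [2.5, 97.5]))
```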

12.
13.
Scott Janzwood, Risk Analysis, 2023, 43(10): 2004-2016
Outside of the field of risk analysis, an important theoretical conversation on the slippery concept of uncertainty has unfolded over the last 40 years within the adjacent field of environmental risk. This literature has become increasingly standardized behind the tripartite distinction between uncertainty location, the nature of uncertainty, and uncertainty level, popularized by the "W&H framework." This article introduces risk theorists and practitioners to the conceptual literature on uncertainty with the goal of catalyzing further development and clarification of the uncertainty concept within the field of risk analysis. It presents two critiques of the W&H framework's dimension of uncertainty level--the dimension that attempts to define the characteristics separating greater uncertainties from lesser uncertainties. First, I argue the framework's conceptualization of uncertainty level lacks a clear and consistent epistemological position and fails to acknowledge or reconcile the tensions between Bayesian and frequentist perspectives present within the framework. This article reinterprets the dimension of uncertainty level from a Bayesian perspective, which understands uncertainty as a mental phenomenon arising from "confidence deficits" as opposed to the ill-defined notion of "knowledge deficits" present in the framework. And second, I elaborate the undertheorized concept of uncertainty "reducibility." These critiques inform a clarified conceptualization of uncertainty level that can be integrated with risk analysis concepts and usefully applied by modelers and decisionmakers engaged in model-based decision support.

14.
Guikema, S., Risk Analysis, 2012, 32(7): 1117-1121
Intelligent adversary modeling has become increasingly important for risk analysis, and a number of different approaches have been proposed for incorporating intelligent adversaries in risk analysis models. However, these approaches are based on a range of often-implicit assumptions about the desirable properties of intelligent adversary models. This "Perspective" paper aims to further risk analysis for situations involving intelligent adversaries by fostering a discussion of the desirable properties for these models. A set of four basic necessary conditions for intelligent adversary models is proposed and discussed. These are: (1) behavioral accuracy to the degree possible, (2) computational tractability to support decision making, (3) explicit consideration of uncertainty, and (4) ability to gain confidence in the model. It is hoped that these suggested necessary conditions foster discussion about the goals and assumptions underlying intelligent adversary modeling in risk analysis.

15.
This article responds to the call to advance risk science as an independent research field by introducing a conceptual model for risk analysis based on distributed sensemaking. Significant advances in recent decades have expanded the use of risk analysis to almost every organization globally. Continued improvements have been made to our understanding of risk, placing a wide range of contexts under organizational control. This article argues that four dimensions are central in how organizations make sense of uncertainty in their context and hence do risk analysis: the activities the organization engages in, its sensory systems, the role and competence of individuals, and the ability to coordinate information through organizational structures. The structure enables insight into the decision-making process and the dimensions contributing to how organizations perceive risks and uncertainty in a given context. Three examples from the Arctic context illustrate the network risk analysis model's practical application and how it will expose weaknesses in these organizations' risk analysis and decision-making processes. Finally, the article discusses sensemaking in network risk analysis and how such an approach supports organizations' ability to perceive, collect, process, and decide on changes in context.

16.
Marc Kennedy, Andy Hart, Risk Analysis, 2009, 29(10): 1427-1442
We propose new models for dealing with various sources of variability and uncertainty that influence risk assessments for dietary exposure. The uncertain or random variables involved can interact in complex ways, and the focus is on methodology for integrating their effects and on assessing the relative importance of including different uncertainty model components in the calculation of dietary exposures to contaminants, such as pesticide residues. The combined effect is reflected in the final inferences about the population of residues and subsequent exposure assessments. In particular, we show how measurement uncertainty can have a significant impact on results and discuss novel statistical options for modeling this uncertainty. The effect of measurement error is often ignored, perhaps because the laboratory process conforms to the relevant international standards, or it is treated in an ad hoc way. These issues are common to many dietary risk analysis problems, and the methods could be applied to any food and chemical of interest. An example is presented using data on carbendazim in apples and consumption surveys of toddlers.
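A simplified sketch of the general idea (a two-dimensional simulation with a multiplicative measurement-error component; all distributions and numbers are invented assumptions, not the authors' carbendazim model):

```python
import numpy as np

rng = np.random.default_rng(4)
n_unc, n_var = 200, 20_000   # outer uncertainty loop, inner variability loop

p95_exposure = np.empty(n_unc)
for i in range(n_unc):
    # Uncertainty: parameters of the "true" residue distribution, inferred from
    # measurements that carry multiplicative laboratory/measurement error.
    meas_cv = rng.uniform(0.1, 0.3)            # uncertain measurement CV
    mu_true = rng.normal(np.log(0.05), 0.1)    # mg/kg, log-scale mean
    # Remove the measurement-error contribution from the observed log-scale spread
    sd_true = np.sqrt(max(0.5 ** 2 - np.log(1 + meas_cv ** 2), 1e-6))

    # Variability: residue in the food item eaten and the toddler's consumption.
    residue = rng.lognormal(mu_true, sd_true, n_var)       # mg/kg
    consumption = rng.lognormal(np.log(8.0), 0.6, n_var)   # g/kg bw/day
    exposure = residue * consumption / 1000.0              # mg/kg bw/day
    p95_exposure[i] = np.percentile(exposure, 95)

print("95th-percentile exposure, median and 95% uncertainty interval:")
print(np.median(p95_exposure), np.percentile(p95_exposure, [2.5, 97.5]))
```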

17.
Biomagnification of organochlorine and other persistent organic contaminants by higher trophic level organisms represents one of the most significant sources of uncertainty and variability in evaluating potential risks associated with disposal of dredged materials. While it is important to distinguish between population variability (e.g., true population heterogeneity in fish weight and lipid content) and uncertainty (e.g., measurement error), they can be operationally difficult to define separately in probabilistic estimates of human health and ecological risk. We propose a disaggregation of uncertain and variable parameters based on: (1) availability of supporting data; (2) the specific management and regulatory context (in this case, of the U.S. Army Corps of Engineers/U.S. Environmental Protection Agency tiered approach to dredged material management); and (3) professional judgment and experience in conducting probabilistic risk assessments. We describe and quantitatively evaluate several sources of uncertainty and variability in estimating risk to human health from trophic transfer of polychlorinated biphenyls (PCBs), using a case study of sediments obtained from the New York-New Jersey Harbor and being evaluated for disposal at an open water off-shore disposal site within the northeast region. The estimates of PCB concentrations in fish and dietary doses of PCBs to humans ingesting fish are expressed as distributions of values, of which the arithmetic mean or mode represents a particular fractile. The distribution of risk values is obtained using a food chain biomagnification model developed by Gobas by specifying distributions for input parameters disaggregated to represent either uncertainty or variability. Only those sources of uncertainty that could be quantified were included in the analysis. Results for several different two-dimensional Latin Hypercube analyses are provided to evaluate the influence of the uncertain versus variable disaggregation of model parameters. The analysis suggests that variability in human exposure parameters is greater than the uncertainty bounds on any particular fractile, given the described assumptions.

18.
Several major risk studies have been performed in recent years in the maritime transportation domain. These studies have had significant impact on management practices in the industry. The first, the Prince William Sound risk assessment, was reviewed by the National Research Council and found to be promising but incomplete, as the uncertainty in its results was not assessed. The difficulty in assessing this uncertainty lies in the different techniques that need to be used to model risk in this dynamic and data-scarce application area. In previous articles, we have developed the two pieces of methodology necessary to assess uncertainty in maritime risk assessment, a Bayesian simulation of the occurrence of situations with accident potential and a Bayesian multivariate regression analysis of the relationship between factors describing these situations and expert judgments of accident risk. In this article, we combine the methods to perform a full-scale assessment of risk and uncertainty for two case studies. The first is an assessment of the effects of proposed ferry service expansions in San Francisco Bay. The second is an assessment of risk for the Washington State Ferries, the largest ferry system in the United States.

19.
20.
Recent work in the assessment of risk in maritime transportation systems has used simulation-based probabilistic risk assessment techniques. In the Prince William Sound and Washington State Ferries risk assessments, the studies' recommendations were backed up by estimates of their impact made using such techniques, and all recommendations were implemented. However, the level of uncertainty about these estimates was not available, leaving the decisionmakers unsure whether the evidence was sufficient to assess specific risks and benefits. The first step toward assessing the impact of uncertainty in maritime risk assessments is to model the uncertainty in the simulation models used. In this article, a study of the impact of proposed ferry service expansions in San Francisco Bay is used as a case study to demonstrate the use of Bayesian simulation techniques to propagate uncertainty throughout the analysis. The conclusions drawn in the original study are shown, in this case, to be robust to the inherent uncertainties. The main intellectual merit of this work is the development of a Bayesian simulation technique to model uncertainty in the assessment of maritime risk. However, Bayesian simulation has previously been implemented only in theoretical demonstrations; its use in a large, complex system may be considered state of the art in the field of computational sciences.
