Similar Documents
A total of 20 similar documents were found.
1.
Nanomaterials are finding application in many different environmentally relevant products and processes due to enhanced catalytic, antimicrobial, and oxidative properties of materials at this scale. As the market share of nano‐functionalized products increases, so too does the potential for environmental exposure and contamination. This study presents some exposure ranking methods that consider potential metallic nanomaterial surface water exposure and fate, due to nano‐functionalized products, through a number of exposure pathways. These methods take into account the limited and disparate data currently available for metallic nanomaterials and apply variability and uncertainty principles, together with qualitative risk assessment principles, to develop a scientific ranking. Three exposure scenarios with three different nanomaterials were considered to demonstrate these assessment methods: photo‐catalytic exterior paint (nano‐scale TiO2), antimicrobial food packaging (nano‐scale Ag), and particulate‐reducing diesel fuel additives (nano‐scale CeO2). Data and hypotheses from literature relating to metallic nanomaterial aquatic behavior (including the behavior of materials that may relate to nanomaterials in aquatic environments, e.g., metals, pesticides, surfactants) were used together with commercial nanomaterial characteristics and Irish natural aquatic environment characteristics to rank the potential concentrations, transport, and persistence behaviors within subjective categories. These methods, and the applied scenarios, reveal where data critical to estimating exposure and risk are lacking. As research into the behavior of metallic nanomaterials in different environments emerges, the influence of material and environmental characteristics on nanomaterial behavior within these exposure‐ and risk‐ranking methods may be redefined on a quantitative basis.

2.
Ten years ago, the National Academy of Sciences released its risk assessment/risk management (RA/RM) "paradigm" that served to crystallize much of the early thinking about these concepts. By defining RA as a four-step process, operationally independent from RM, the paradigm has presented society with a scheme, or a conceptually common framework, for addressing many risky situations (e.g., carcinogens, noncarcinogens, and chemical mixtures). The procedure has facilitated decision-making in a wide variety of situations and has identified the most important research needs. The past decade, however, has revealed areas where additional progress is needed. These include addressing the appropriate interaction (not isolation) between RA and RM, improving the methods for assessing risks from mixtures, dealing with "adversity of effect," deciding whether "hazard" should imply an exposure to environmental conditions or to laboratory conditions, and evolving the concept to include both health and ecological risk. Interest in and expectations of risk assessment are increasing rapidly. The emerging concept of "comparative risk" (i.e., distinguishing between large risks and smaller risks that may be qualitatively different) is at a level of development comparable to that of the concept of "risk" just 10 years ago. Comparative risk stands in need of a paradigm of its own, especially given the current economic limitations. "Times are tough; Brother, can you paradigm?"

3.
Since the 1997 EC – Hormones decision, World Trade Organization (WTO) Dispute Settlement Panels have wrestled with the question of what constitutes a negligible risk under the Sanitary and Phytosanitary Agreement. More recently, the 2010 WTO Australia – Apples Panel focused considerable attention on the appropriate quantitative model for a negligible probability in a risk assessment. The 2006 Australian Import Risk Analysis for Apples from New Zealand translated narrative probability statements into quantitative ranges. The uncertainty about a "negligible" probability was characterized as a uniform distribution with a minimum value of zero and a maximum value of 10⁻⁶. The Australia – Apples Panel found that the use of this distribution would tend to overestimate the likelihood of "negligible" events and indicated that a triangular distribution with a most probable value of zero and a maximum value of 10⁻⁶ would correct the bias. The Panel observed that the midpoint of the uniform distribution is 5 × 10⁻⁷ but did not consider that the triangular distribution has an expected value of 3.3 × 10⁻⁷. Therefore, if this triangular distribution is the appropriate correction, the magnitude of the bias found by the Panel appears modest. The Panel's detailed critique of the Australian risk assessment, and the conclusions of the WTO Appellate Body about the materiality of flaws found by the Panel, may have important implications for the standard of review for risk assessments under the WTO SPS Agreement.
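The Panel's arithmetic is easy to verify. A minimal sketch (plain Python, no dependencies) computing the means of the two candidate distributions over the support [0, 10⁻⁶]:

```python
# Means of the two candidate "negligible probability" distributions:
# uniform(0, 1e-6) vs. triangular(min=0, mode=0, max=1e-6).
maximum = 1e-6

mean_uniform = (0 + maximum) / 2          # (a + b) / 2       -> 5.0e-7
mean_triangular = (0 + 0 + maximum) / 3   # (a + mode + b) / 3 -> ~3.3e-7

print(mean_uniform, mean_triangular)      # 5e-07  3.333e-07
```

The two means differ by a factor of only 1.5, which is the sense in which the bias identified by the Panel appears modest.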

4.
The Petroleum Safety Authority Norway (PSA‐N) has recently adopted a new definition of risk: "the consequences of an activity with the associated uncertainty." The PSA‐N has also been using "deficient risk assessment" for some time as a basis for assigning nonconformities in audit reports. This creates an opportunity to study the link between risk perspective and risk assessment quality in a regulatory context, and, in the present article, we take a hard look at the term "deficient risk assessment" both normatively and empirically. First, we perform a conceptual analysis of how a risk assessment can be deficient in light of a particular risk perspective consistent with the new PSA‐N risk definition. Then, we examine the usages of the term "deficient" in relation to risk assessments in PSA‐N audit reports and classify these into a set of categories obtained from the conceptual analysis. At an overall level, we were able to identify which aspects of the risk assessment the PSA‐N focuses on and where deficiencies are being identified in regulatory practice. A key observation is that there is diversity in how the agency officials approach the risk assessments in audits. Hence, we argue that improving the conceptual clarity of what the authorities characterize as "deficient" in relation to the uncertainty‐based risk perspective may contribute to the development of supervisory practices and, eventually, strengthen the learning outcome of the audit reports.

5.
Qualitative systems for rating animal antimicrobial risks using ordered categorical labels such as "high," "medium," and "low" can potentially simplify risk assessment input requirements used to inform risk management decisions. But do they improve decisions? This article compares the results of qualitative and quantitative risk assessment systems and establishes some theoretical limitations on the extent to which they are compatible. In general, qualitative risk rating systems that satisfy conditions found in real‐world rating systems and guidance documents, and that have been proposed as reasonable, make two types of errors: (1) reversed rankings, i.e., assigning higher qualitative risk ratings to situations that have lower quantitative risks; and (2) uninformative ratings, e.g., frequently assigning the most severe qualitative risk label (such as "high") to situations with arbitrarily small quantitative risks and assigning the same ratings to risks that differ by many orders of magnitude. Therefore, despite their appealing consensus‐building properties, flexibility, and appearance of thoughtful process in input requirements, qualitative rating systems as currently proposed often do not provide sufficient information to discriminate accurately between quantitatively small and quantitatively large risks. The value of information (VOI) that they provide for improving risk management decisions can be zero if most risks are small but a few are large, since qualitative ratings may then be unable to confidently distinguish the large risks from the small. These limitations suggest that it is important to continue to develop and apply practical quantitative risk assessment methods, since qualitative ones are often unreliable.
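To make the two failure modes concrete, here is a minimal sketch of a hypothetical banded rating matrix (the cutoffs and the four risks are invented for illustration, not taken from the article):

```python
def qualitative_rating(p, c):
    """Rate risk by coarse probability and consequence bands (hypothetical cutoffs)."""
    p_band = "high" if p >= 0.1 else "low"
    c_band = "high" if c >= 100 else "low"
    matrix = {("high", "high"): "high",
              ("high", "low"): "medium",
              ("low", "high"): "medium",
              ("low", "low"): "low"}
    return matrix[(p_band, c_band)]

risks = {
    "A": (0.20, 100),     # rated "high";   expected loss 20
    "B": (0.09, 10_000),  # rated "medium"; expected loss 900  -> reversed ranking
    "C": (0.10, 100),     # rated "high";   expected loss 10
    "D": (0.90, 1e6),     # rated "high";   expected loss 900,000 -> uninformative
}
for name, (p, c) in risks.items():
    print(name, qualitative_rating(p, c), "expected loss:", p * c)
```

Risk B carries 45 times the expected loss of risk A yet receives a lower label (a reversed ranking), while C and D share the label "high" despite expected losses nearly five orders of magnitude apart (an uninformative rating).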

6.
With the increasing use of nanomaterials incorporated into consumer products, there is a need for developing approaches to establish "quantitative structure‐activity relationships" (QSARs). These relationships could be used to predict various biological responses after exposure to nanomaterials for the purposes of risk analysis. This risk analysis is applicable to manufacturers of nanomaterials in an effort to determine potential hazards. Because metal oxide materials are some of the most widely applicable and studied nanoparticle types for incorporation into cosmetics, food packaging, and paints and coatings, we focused on comparing different approaches for establishing QSARs for this class of materials. Metal oxide nanoparticles are believed, by some, to cause alterations in cellular function due to their size and/or surface area. Others have said that these nanomaterials, because of the oxidized state of the metal, do not induce stress in biological test systems. This controversy highlights the need to systematically develop structure‐activity relationships (i.e., the relationship between physicochemical features and cellular responses) and tools for predicting potential biological effects after a metal oxide nanomaterial exposure. Here, we attempt to identify a set of properties of two specific metal oxide nanomaterials—TiO2 and ZnO—that could be used to characterize and predict the induced cellular membrane damage of immortalized human lung epithelial cells. We adopt a mathematical modeling approach that uses the engineered nanomaterial size characterized as a dry nanopowder, together with the nanomaterial behavior in ultrapure water, phosphate buffer, and cell culture media, to predict nanomaterial‐induced cellular membrane damage (via lactate dehydrogenase release). Results of these studies provide insights on how engineered nanomaterial features influence cellular responses and thereby outline possible approaches for developing and applying predictive computational models for biological responses caused by exposure to nanomaterials.
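The abstract does not give the model form; as one plausible sketch of the general approach, a linear model mapping dry primary size and suspension behavior in the three media to LDH release might look like the following (all feature values and responses are invented, not measurements from the study):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical characterization data: dry primary size (nm) plus hydrodynamic
# size in ultrapure water, phosphate buffer, and cell-culture media (nm).
X = np.array([
    # dry, water, buffer, media
    [25, 110, 420, 180],   # TiO2, sample 1
    [25, 130, 510, 200],   # TiO2, sample 2
    [20,  95, 640, 260],   # ZnO, sample 1
    [20, 105, 700, 300],   # ZnO, sample 2
])
y = np.array([4.0, 5.5, 18.0, 22.0])   # % LDH release (invented)

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)
print(model.predict([[22, 100, 550, 240]]))  # predicted damage for a new sample
```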

7.
8.
Scientists, activists, industry, and governments have raised concerns about health and environmental risks of nanoscale materials. The Society for Risk Analysis convened experts in September 2008 in Washington, DC to deliberate on issues relating to the unique attributes of nanoscale materials that raise novel concerns about health risks. This article reports on the overall themes and findings of the workshop, uncovering the underlying issues that recur across these topics. The attributes of nanoscale particles and other nanomaterials that present novel issues for risk analysis are evaluated in a risk analysis framework, identifying challenges and opportunities for risk analysts and others seeking to assess and manage the risks from emerging nanoscale materials and nanotechnologies. Workshop deliberations and recommendations for advancing the risk analysis and management of nanotechnologies are presented.

9.
In expected utility theory, risk attitudes are modeled entirely in terms of utility. In the rank‐dependent theories, a new dimension is added: chance attitude, modeled in terms of nonadditive measures or nonlinear probability transformations that are independent of utility. Most empirical studies of chance attitude assume probabilities given and adopt parametric fitting for estimating the probability transformation. Only a few qualitative conditions have been proposed or tested as yet, usually quasi‐concavity or quasi‐convexity in the case of given probabilities. This paper presents a general method of studying qualitative properties of chance attitude such as optimism, pessimism, and the "inverse‐S shape" pattern, both for risk and for uncertainty. These qualitative properties can be characterized by permitting appropriate, relatively simple, violations of the sure‐thing principle. In particular, this paper solves a hitherto open problem: the preference axiomatization of convex ("pessimistic" or "uncertainty averse") nonadditive measures under uncertainty. The axioms of this paper preserve the central feature of rank‐dependent theories, i.e., the separation of chance attitude and utility.
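For orientation, the standard rank-dependent form that separates chance attitude (the transformation w) from utility (u) — textbook notation, not necessarily the paper's own — is:

```latex
% Outcomes ranked x_1 >= ... >= x_n with probabilities p_i; the weight on x_i
% is a difference of transformed cumulative probabilities, so w carries chance
% attitude while u carries utility.
RDU(x_1,p_1;\ldots;x_n,p_n)
  = \sum_{i=1}^{n}\left[ w\!\left(\sum_{j\le i} p_j\right)
      - w\!\left(\sum_{j<i} p_j\right) \right] u(x_i)
```

A convex w places disproportionate weight on worse-ranked outcomes, which is the pessimistic ("uncertainty averse") attitude whose preference axiomatization the paper provides.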

10.
Kenneth T. Bogen, Risk Analysis, 2014, 34(10):1795–1806
The National Research Council 2009 "Silver Book" panel report included a recommendation that the U.S. Environmental Protection Agency (EPA) should increase all of its chemical carcinogen (CC) potency estimates by ~7‐fold to adjust for a purported median‐vs.‐mean bias that I recently argued does not exist (Bogen KT, "Does EPA underestimate cancer risks by ignoring susceptibility differences?," Risk Analysis, 2014; 34(10):1780–1784). In this issue of the journal, my argument is critiqued for having flaws concerning: (1) intent, bias, and conservatism of EPA estimates of CC potency; (2) bias in potency estimates derived from epidemiology; and (3) human‐animal CC‐potency correlation. However, my argument remains valid, for the following reasons. (1) EPA's default approach to estimating CC risks has correctly focused on bounding average (not median) individual risk under a genotoxic mode‐of‐action (MOA) assumption, although pragmatically the approach leaves both inter‐individual variability in CC susceptibility, and widely varying CC‐specific magnitudes of fundamental MOA uncertainty, unquantified. (2) CC risk estimates based on large epidemiology studies are not systematically biased downward due to limited sampling from broad, lognormal susceptibility distributions. (3) A good, quantitative correlation is exhibited between upper bounds on CC‐specific potency estimated from human vs. animal studies (n = 24, r = 0.88, p = 2 × 10⁻⁸). It is concluded that protective upper‐bound estimates of individual CC risk that account for heterogeneity in susceptibility, as well as risk comparisons informed by best predictions of average‐individual and population risk that address CC‐specific MOA uncertainty, should each be used as separate, complementary tools to improve regulatory decisions concerning low‐level, environmental CC exposures.
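The reported correlation and p-value are mutually consistent; a quick check (a sketch using scipy's t-approximation for Pearson's r, not the author's own computation):

```python
from math import sqrt
from scipy import stats

r, n = 0.88, 24
t = r * sqrt(n - 2) / sqrt(1 - r**2)   # t-statistic for a Pearson correlation
p = 2 * stats.t.sf(t, df=n - 2)        # two-sided p-value, 22 degrees of freedom
print(f"t = {t:.2f}, p = {p:.1e}")     # ~1e-8, the same order as the reported 2e-8
```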

11.
Risk Analysis, 2018, 38(1):163–176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to derive epidemiologically‐based risk estimates from a single statistical model selected from the scientific literature, called the "core" model. The uncertainty presented for "core" risk estimates reflects only the statistical uncertainty associated with that one model's concentration‐response function parameter estimate(s). However, epidemiologically‐based risk estimates are also subject to "model uncertainty," which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long‐term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.
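One simple way to operationalize an IUA is to sample over model specifications as well as over each model's parameters. A hedged sketch (the slopes, standard errors, weights, and the 5 ppb increment are all invented for illustration, not values from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concentration-response slopes (per ppb of ozone), standard
# errors, and weights for three plausible model specifications.
models = [
    {"beta": 0.0040, "se": 0.0010, "weight": 0.5},  # "core" model
    {"beta": 0.0025, "se": 0.0012, "weight": 0.3},  # alternative specification
    {"beta": 0.0010, "se": 0.0008, "weight": 0.2},  # regional/copollutant variant
]

draws = 100_000
idx = rng.choice(len(models), size=draws, p=[m["weight"] for m in models])
beta = rng.normal(np.array([m["beta"] for m in models])[idx],
                  np.array([m["se"] for m in models])[idx])

delta_conc = 5.0                        # illustrative 5 ppb reduction in ozone
risk_reduction = beta * delta_conc      # per-person change in mortality risk
print(np.percentile(risk_reduction, [2.5, 50, 97.5]))
```

The resulting interval reflects both within-model statistical uncertainty and between-model uncertainty, and is typically wider than the "core"-model interval alone.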

12.
The present study investigates U.S. Department of Agriculture inspection records in the Agricultural Quarantine Activity System database to estimate the probability of quarantine pests on propagative plant materials imported from various countries of origin and to develop a methodology for ranking the risk of country–commodity combinations based on quarantine pest interceptions. Data collected from October 2014 to January 2016 were used for developing predictive models and for a validation study. A generalized linear model with Bayesian inference and a generalized linear mixed effects model were used to compare the interception rates of quarantine pests on different country–commodity combinations. The prediction ability of the generalized linear mixed effects models was greater than that of the generalized linear models. The estimated pest interception probability and confidence interval for each country–commodity combination were categorized into one of four compliance levels—"High," "Medium," "Low," and "Poor/Unacceptable"—using K‐means clustering analysis. This study presents a risk‐based categorization for each country–commodity combination based on the probability of quarantine pest interceptions and the uncertainty in that assessment.
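A minimal sketch of the categorization step (the counts, the shrinkage estimator, and the cluster-to-label mapping are illustrative assumptions standing in for the study's GLM/GLMM estimates):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical (inspections, interceptions) counts per country-commodity pair.
data = {
    ("A", "orchid"): (1200, 3),
    ("B", "orchid"): (400, 14),
    ("A", "rose"):   (2500, 1),
    ("C", "bamboo"): (150, 9),
    ("B", "rose"):   (800, 2),
    ("C", "orchid"): (600, 30),
}

# Shrunken interception-probability estimates (Jeffreys-style), as a stand-in
# for the study's model-based posterior estimates.
probs = np.array([(k + 0.5) / (n + 1.0) for n, k in data.values()]).reshape(-1, 1)

# Cluster log-probabilities into four compliance levels.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(np.log(probs))
labels = ["High", "Medium", "Low", "Poor/Unacceptable"]   # low prob -> "High"
order = np.argsort(km.cluster_centers_.ravel())
level = {cluster: labels[rank] for rank, cluster in enumerate(order)}

for key, cluster in zip(data, km.labels_):
    print(key, level[cluster])
```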

13.
Terje Aven, Risk Analysis, 2015, 35(3):476–483
Nassim Taleb's antifragile concept has recently attracted considerable interest in the media and on the Internet. For Taleb, the antifragile concept is a blueprint for living in a black swan world (where surprising extreme events may occur), the key being to love variation and uncertainty to some degree, and thus also errors. The antonym of "fragile" is not robustness or resilience, but "please mishandle" or "please handle carelessly," to use an example from Taleb referring to sending a package full of glasses by post. In this article, we perform a detailed analysis of this concept, with a special focus on how the antifragile concept relates to common ideas and principles of risk management. The article argues that Taleb's antifragile concept adds an important contribution to the current practice of risk analysis through its focus on the dynamic aspects of risk and performance, and the necessity of some variation, uncertainties, and risk to achieve improvements and high performance at later stages.

14.
"Modest doubt is call'd the beacon of the wise."—William Shakespeare, Troilus and Cressida. Although the character Hector warns his fellow Trojans with this line not to engage in war against the Greeks, Shakespeare's works are replete with characters who do not incorporate modest doubt, or any consideration of uncertainty, in their risk decisions. Perhaps Shakespeare was simply a keen observer of human nature. Although risk science has developed tremendously over the last five decades (and scientific inquiry over five centuries), the human mind still frequently defaults to conviction about certain beliefs, absent sufficient scientific evidence—which has effects not just on individual lives, but on policy decisions that affect many. This perspective provides background on the Shakespearean quote in its literary and historical context. Then, as this quote is the theme of the 2023 Society for Risk Analysis Annual Meeting, we describe how "modest doubt"—incorporating the notion of uncertainty into risk analysis for individual and policy decisions—is still the "beacon of the wise" today.

15.
The purpose of this article is to discuss the role of quantitative risk assessments for characterizing risk and uncertainty and delineating appropriate risk management options. Our main concern is situations (risk problems) with large potential consequences, large uncertainties, and/or ambiguities (related to the relevance, meaning, and implications of the decision basis, or related to the values to be protected and the priorities to be made), in particular terrorism risk. We look into the scientific basis of the quantitative risk assessments and the boundaries of the assessments in such a context. Based on a risk perspective that defines risk as uncertainty about and severity of the consequences (or outcomes) of an activity with respect to something that humans value, we advocate a broad risk assessment approach characterizing uncertainties beyond probabilities and expected values. Key features of this approach are qualitative uncertainty assessment and scenario building instruments.

16.
Weinstein and Yildiz (2007) have shown that in static games, only very weak predictions are robust to perturbations of higher order beliefs. These predictions are precisely those provided by interim correlated rationalizability (ICR). This negative result is obtained under the assumption that agents have no information on payoffs. This assumption is unnatural in many settings. It is therefore natural to ask whether Weinstein and Yildiz's results remain true under more general information structures. This paper characterizes the "robust predictions" in static and dynamic games, under arbitrary information structures. This characterization is provided by an extensive form solution concept: interim sequential rationalizability (ISR). In static games, ISR coincides with ICR and does not depend on the assumptions on agents' information. Hence the "no information" assumption entails no loss of generality in these settings. This is not the case in dynamic games, where ISR refines ICR and depends on the details of the information structure. In these settings, the robust predictions depend on the assumptions on agents' information. This reveals a hitherto neglected interaction between information and higher order uncertainty, raising novel questions of robustness.

17.
Scott Janzwood, Risk Analysis, 2023, 43(10):2004–2016
Outside of the field of risk analysis, an important theoretical conversation on the slippery concept of uncertainty has unfolded over the last 40 years within the adjacent field of environmental risk. This literature has become increasingly standardized behind the tripartite distinction between uncertainty location, the nature of uncertainty, and uncertainty level, popularized by the "W&H framework." This article introduces risk theorists and practitioners to the conceptual literature on uncertainty with the goal of catalyzing further development and clarification of the uncertainty concept within the field of risk analysis. It presents two critiques of the W&H framework's dimension of uncertainty level—the dimension that attempts to define the characteristics separating greater uncertainties from lesser uncertainties. First, I argue that the framework's conceptualization of uncertainty level lacks a clear and consistent epistemological position and fails to acknowledge or reconcile the tensions between Bayesian and frequentist perspectives present within the framework. This article reinterprets the dimension of uncertainty level from a Bayesian perspective, which understands uncertainty as a mental phenomenon arising from "confidence deficits" as opposed to the ill-defined notion of "knowledge deficits" present in the framework. Second, I elaborate the undertheorized concept of uncertainty "reducibility." These critiques inform a clarified conceptualization of uncertainty level that can be integrated with risk analysis concepts and usefully applied by modelers and decisionmakers engaged in model-based decision support.

18.
The qualitative and quantitative evaluation of risk in developmental toxicology has been discussed in several recent publications.(1–3) A number of issues remain to be resolved in this area. The qualitative evaluation and interpretation of end points in developmental toxicology depend on an understanding of the biological events leading to the end points observed, the relationships among end points, and their relationship to dose and to maternal toxicity. The interpretation of these end points is also affected by the statistical power of the experiments used for detecting the various end points observed. The quantitative risk assessment attempts to estimate human risk for developmental toxicity as a function of dose. The current approach is to apply safety (uncertainty) factors to the no observed effect level (NOEL). An alternative presented and discussed here is to model the experimental data and apply a safety factor to an estimated risk level to achieve an "acceptable" level of risk. In cases where the dose-response curve bends upward, this approach provides a conservative estimate of risk. This procedure does not preclude the existence of a threshold dose. More research is needed to develop appropriate dose-response models that can provide better estimates for low-dose extrapolation of developmental effects.
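The alternative the authors describe — fit a dose-response model and apply the safety factor to the dose at an estimated risk level rather than to the NOEL — can be sketched as follows (the log-logistic form, parameter values, 1% risk level, and 100-fold factor are all illustrative assumptions):

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative log-logistic dose-response model for a developmental effect:
# P(d) = p0 + (1 - p0) * logistic(a + b*log(d)); all parameter values invented.
p0, a, b = 0.02, -6.0, 1.5

def extra_risk(d):
    p = p0 + (1 - p0) / (1 + np.exp(-(a + b * np.log(d))))
    return (p - p0) / (1 - p0)      # extra risk over background

# Dose at an "acceptable" 1% extra risk, then a 100-fold safety factor.
bmd_01 = brentq(lambda d: extra_risk(d) - 0.01, 1e-6, 1e6)
rfd = bmd_01 / 100.0
print(f"dose at 1% extra risk ≈ {bmd_01:.3g}, reference dose ≈ {rfd:.3g}")
```

Because the anchor is an estimated risk level rather than an experimental design point, the result uses the full dose-response information instead of depending on dose spacing the way a NOEL does.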

19.
Attitudes toward risk and uncertainty have been shown to be highly context‐dependent and sensitive to the measurement technique employed. We present data collected in controlled experiments with 2,939 subjects in 30 countries, measuring risk and uncertainty attitudes through incentivized measures as well as survey questions. Our data show clearly that measures correlate not only within decision contexts or measurement methods, but also across contexts and methods. This points to the existence of one underlying "risk preference," which influences attitudes independently of the measurement method or choice domain. We furthermore find that answers to a general and a financial survey question correlate with incentivized lottery choices in most countries. Incentivized and survey measures also correlate significantly between countries. This opens the possibility of conducting cultural comparisons on risk attitudes using survey instruments.
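The paper's interpretation — one latent "risk preference" driving correlated answers across methods — can be illustrated with simulated data (the latent-trait model, sample size, and noise levels here are invented, not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 300
latent = rng.normal(size=n)                        # one "risk preference" per subject
lottery = latent + rng.normal(scale=0.8, size=n)   # incentivized measure + noise
survey = latent + rng.normal(scale=0.8, size=n)    # survey measure + noise

r, p = pearsonr(lottery, survey)
print(f"r = {r:.2f}, p = {p:.2g}")   # positive correlation driven by the latent trait
```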

20.
Humans are continuously exposed to chemicals with suspected or proven endocrine‐disrupting properties (endocrine‐disrupting chemicals, EDCs). Risk management of EDCs presents a major unmet challenge because the available data for adverse health effects are generated by examining one compound at a time, whereas real‐life exposures are to mixtures of chemicals. In this work, we integrate epidemiological and experimental evidence toward a whole‐mixture strategy for risk assessment. To illustrate, we conduct the following four steps in a case study: (1) identification of single EDCs ("bad actors")—measured in prenatal blood/urine in the SELMA study—that are associated with a shorter anogenital distance (AGD) in baby boys; (2) definition and construction of a "typical" mixture consisting of the "bad actors" identified in Step 1; (3) experimental testing of this mixture in an in vivo animal model to estimate a dose–response relationship and determine a point of departure (i.e., a reference dose [RfD]) associated with an adverse health outcome; and (4) use of a statistical measure of "sufficient similarity" to compare the experimental RfD (from Step 3) to the exposure measured in the human population and generate a "similar mixture risk indicator" (SMRI). The objective of this exercise is to generate a proof of concept for the systematic integration of epidemiological and experimental evidence with mixture risk assessment strategies. Using this whole‐mixture approach, we find a substantially higher proportion of pregnant women at risk (13%) than is indicated by more traditional additivity models (3%) or by a compound‐by‐compound strategy (1.6%).
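Step 4 amounts to comparing measured total exposure against the experimentally derived mixture RfD. A hedged sketch (the RfD value and the lognormal exposure distribution are invented placeholders, not SELMA data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented placeholders: an experimentally derived mixture RfD and a lognormal
# distribution of total mixture exposure across pregnant women.
mixture_rfd = 12.0
exposure = rng.lognormal(mean=1.0, sigma=1.2, size=10_000)

# Similar-mixture risk indicator: measured exposure relative to the mixture RfD.
smri = exposure / mixture_rfd
print(f"fraction of the population above the mixture RfD: {(smri > 1).mean():.1%}")
```

A compound-by-compound hazard quotient would instead compare each chemical to its own RfD, which is the contrast behind the 13% vs. 1.6% figures reported above.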
