Similar Documents
Found 20 similar documents (search time: 281 ms)
1.
Kenneth T. Bogen. Risk Analysis, 2014, 34(10): 1795–1806
The National Research Council 2009 “Silver Book” panel report included a recommendation that the U.S. Environmental Protection Agency (EPA) should increase all of its chemical carcinogen (CC) potency estimates by ~7‐fold to adjust for a purported median‐vs.‐mean bias that I recently argued does not exist (Bogen KT. “Does EPA underestimate cancer risks by ignoring susceptibility differences?,” Risk Analysis, 2014; 34(10):1780–1784). In this issue of the journal, my argument is critiqued for having flaws concerning: (1) intent, bias, and conservatism of EPA estimates of CC potency; (2) bias in potency estimates derived from epidemiology; and (3) human‐animal CC‐potency correlation. However, my argument remains valid, for the following reasons. (1) EPA's default approach to estimating CC risks has correctly focused on bounding average (not median) individual risk under a genotoxic mode‐of‐action (MOA) assumption, although pragmatically the approach leaves both inter‐individual variability in CC susceptibility, and widely varying CC‐specific magnitudes of fundamental MOA uncertainty, unquantified. (2) CC risk estimates based on large epidemiology studies are not systematically biased downward due to limited sampling from broad, lognormal susceptibility distributions. (3) A good, quantitative correlation is exhibited between upper bounds on CC‐specific potency estimated from human vs. animal studies (n = 24, r = 0.88, p = 2 × 10⁻⁸). It is concluded that protective upper‐bound estimates of individual CC risk that account for heterogeneity in susceptibility, as well as risk comparisons informed by best predictions of average‐individual and population risk that address CC‐specific MOA uncertainty, should each be used as separate, complementary tools to improve regulatory decisions concerning low‐level, environmental CC exposures.
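The human–animal correlation cited above (n = 24, r = 0.88) is an ordinary Pearson correlation computed on log-scaled potency upper bounds, since potencies span many orders of magnitude. A minimal stdlib-Python sketch on hypothetical potency values (not the paper's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical upper-bound potency estimates (risk per unit dose) for a few
# chemicals, one value from human studies and one from animal studies each.
human = [2e-3, 5e-2, 1e-4, 8e-1, 3e-3, 6e-5]
animal = [1e-3, 9e-2, 4e-4, 5e-1, 1e-3, 2e-5]

# Correlate on the log10 scale, as is conventional for potency comparisons.
r = pearson_r([math.log10(v) for v in human],
              [math.log10(v) for v in animal])
```

With a real data set of 24 chemicals, the same calculation (plus a t-test on r) would reproduce the kind of r and p values quoted in the abstract.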

2.
Kenneth T. Bogen. Risk Analysis, 2014, 34(10): 1780–1784
A 2009 report of the National Research Council (NRC) recommended that the U.S. Environmental Protection Agency (EPA) increase its estimates of increased cancer risk from exposure to environmental agents by ~7‐fold, due to an approximately 25‐fold typical ratio between the median and upper 95th‐percentile persons' cancer sensitivity, assuming approximately lognormally distributed sensitivities. EPA inaction on this issue has raised concerns that cancer risks to environmentally exposed populations remain systematically underestimated. This concern is unwarranted, however, because EPA point estimates of cancer risk have always pertained to the average, not the median, person in each modeled exposure group. Nevertheless, EPA has yet to explain clearly how its risk characterization and risk management policies concerning individual risks from environmental chemical carcinogens do appropriately address the broad variability in human cancer susceptibility that has been a focus of two major NRC reports to EPA concerning its risk assessment methods.

3.
In 2002, the U.S. Environmental Protection Agency (EPA) released an “Interim Policy on Genomics,” stating a commitment to developing guidance on the inclusion of genetic information in regulatory decision making. This statement was followed in 2004 by a document exploring the potential implications. Genetic information can play a key role in understanding and quantifying human susceptibility, an essential step in many of the risk assessments used to shape policy. For example, the federal Clean Air Act (CAA) requires EPA to set National Ambient Air Quality Standards (NAAQS) for criteria pollutants at levels to protect even sensitive populations from adverse health effects with an adequate margin of safety. Asthmatics are generally regarded as a sensitive population, yet substantial research gaps in understanding genetic susceptibility and disease have hindered quantitative risk analysis. This case study assesses the potential role of genomic information regarding susceptible populations in the NAAQS process for fine particulate matter (PM2.5) under the CAA. In this initial assessment, we model the contribution of a single polymorphism to asthma risk and mortality risk; however, multiple polymorphisms and interactions (gene‐gene and gene‐environment) are known to play key roles in the disease process. We show that the impact of new information about susceptibility on estimates of population risk or average risk derived from large epidemiological studies depends on the circumstances. We also suggest that analysis of a single polymorphism, or another risk factor such as health status, may or may not change estimates of individual risk enough to alter a particular regulatory decision, depending on specific characteristics of the decision and risk information. We also show how new information about susceptibility in the context of the NAAQS for PM2.5 could have a large impact on the estimated distribution of individual risk. This would occur if a group were consequently identified (based on genetic and/or disease status) that accounted for a disproportionate share of observed effects. Our results highlight certain conditions under which genetic information is likely to have an impact on risk estimates and the balance of costs and benefits within groups, and highlight critical research needs. As future studies explore more fully the relationship between exposure, genetic makeup, and disease status, the opportunity for genetic information and disease status to play pivotal roles in regulation can only increase.

4.
We review approaches for characterizing “peak” exposures in epidemiologic studies and methods for incorporating peak exposure metrics in dose–response assessments that contribute to risk assessment. The focus was on potential etiologic relations between environmental chemical exposures and cancer risks. We searched the epidemiologic literature on environmental chemicals classified as carcinogens in which cancer risks were described in relation to “peak” exposures. These articles were evaluated to identify some of the challenges associated with defining and describing cancer risks in relation to peak exposures. We found that definitions of peak exposure varied considerably across studies. Of nine chemical agents included in our review of peak exposure, six had epidemiologic data used by the U.S. Environmental Protection Agency (US EPA) in dose–response assessments to derive inhalation unit risk values. These were benzene, formaldehyde, styrene, trichloroethylene, acrylonitrile, and ethylene oxide. All derived unit risks relied on cumulative exposure for dose–response estimation and none, to our knowledge, considered peak exposure metrics. This is not surprising, given the historical linear no‐threshold default model (generally based on cumulative exposure) used in regulatory risk assessments. With newly proposed US EPA rule language, fuller consideration of alternative exposure and dose–response metrics will be supported. “Peak” exposure has not been consistently defined and rarely has been evaluated in epidemiologic studies of cancer risks. We recommend developing uniform definitions of “peak” exposure to facilitate fuller evaluation of dose response for environmental chemicals and cancer risks, especially where mechanistic understanding indicates that the dose response is unlikely to be linear and that short‐term high‐intensity exposures increase risk.

5.
Risk Analysis, 2018, 38(1): 163–176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically‐based risk estimates based on a single statistical model selected from the scientific literature, called the “core” model. The uncertainty presented for “core” risk estimates reflects only the statistical uncertainty associated with that one model's concentration‐response function parameter estimate(s). However, epidemiologically‐based risk estimates are also subject to “model uncertainty,” which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long‐term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.

6.
A California Environmental Protection Agency (Cal/EPA) report concluded that a reasonable and likely explanation for the increased lung cancer rates in numerous epidemiological studies is a causal association between diesel exhaust exposure and lung cancer. A version of the present analysis, based on a retrospective study of a U.S. railroad worker cohort, provided the Cal/EPA report with some of its estimates of lung cancer risk associated with diesel exhaust. The individual data for that cohort study furnish information on age, employment, and mortality for 56,000 workers over 22 years. Related studies provide information on exposure concentrations. Other analyses of the original cohort data reported finding no relation between measures of diesel exhaust and lung cancer mortality, while a Health Effects Institute report found the data unsuitable for quantitative risk assessment. None of those three works used multistage models, which this article uses in finding a likely quantitative, positive relation between lung cancer and diesel exhaust. A seven-stage model that has the last or next-to-last stage sensitive to diesel exhaust provides best estimates of the increase in annual mortality rate due to each unit of concentration, for bracketing assumptions on exposure. Using relative increases of risk and multiplying by the background lung cancer mortality rates for California, the 95% upper confidence limit of the 70-year unit risks for lung cancer is estimated to be in the range 2.1 × 10⁻⁴ (µg/m³)⁻¹ to 5.5 × 10⁻⁴ (µg/m³)⁻¹. These risks constitute the low end of those in the Cal/EPA report and are below those reported by previous investigators whose estimates were positive using human data.
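The final unit-risk step described in this abstract is simple arithmetic: a relative increase in the annual mortality rate per unit concentration, multiplied by the background lung-cancer mortality rate and accumulated over a 70-year lifetime. A sketch with hypothetical inputs (not the cohort analysis's actual estimates):

```python
# Illustrative arithmetic only; both inputs below are hypothetical stand-ins,
# not values from the railroad-worker cohort analysis.
background_annual_rate = 5e-4   # hypothetical background lung-cancer deaths per person-year
rel_increase_per_unit = 1e-2    # hypothetical relative rate increase per µg/m³
lifetime_years = 70

# 70-year unit risk: extra lifetime mortality risk per µg/m³ of exposure.
unit_risk = rel_increase_per_unit * background_annual_rate * lifetime_years
# With these stand-in numbers, unit_risk is 3.5e-4 per µg/m³, the same
# order of magnitude as the abstract's 2.1-5.5 × 10⁻⁴ range.
```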

7.
We examine whether the risk characterization estimated by catastrophic loss projection models is sensitive to the revelation of new information regarding risk type. We use commercial loss projection models from two widely employed modeling firms to estimate the expected hurricane losses of Florida Atlantic University's building stock, both including and excluding secondary information regarding hurricane mitigation features that influence damage vulnerability. We then compare the results of the models without and with this revealed information and find that the revelation of additional, secondary information influences modeled losses for the windstorm‐exposed university building stock, primarily evidenced by meaningful percent differences in the loss exceedance output indicated after secondary modifiers are incorporated in the analysis. Secondary risk characteristics for the data set studied appear to have substantially greater impact on probable maximum loss estimates than on average annual loss estimates. While it may be intuitively expected for catastrophe models to indicate that secondary risk characteristics hold value for reducing modeled losses, the finding that the primary value of secondary risk characteristics is in reduction of losses in the “tail” (low probability, high severity) events is less intuitive, and therefore especially interesting. Further, we address the benefit‐cost tradeoffs that commercial entities must consider when deciding whether to undergo the data collection necessary to include secondary information in modeling. Although we assert the long‐term benefit‐cost tradeoff is positive for virtually every entity, we acknowledge short‐term disincentives to such an effort.

8.
Modern theories in cognitive psychology and neuroscience indicate that there are two fundamental ways in which human beings comprehend risk. The “analytic system” uses algorithms and normative rules, such as probability calculus, formal logic, and risk assessment. It is relatively slow, effortful, and requires conscious control. The “experiential system” is intuitive, fast, mostly automatic, and not very accessible to conscious awareness. The experiential system enabled human beings to survive during their long period of evolution and remains today the most natural and most common way to respond to risk. It relies on images and associations, linked by experience to emotion and affect (a feeling that something is good or bad). This system represents risk as a feeling that tells us whether it is safe to walk down this dark street or drink this strange‐smelling water. Proponents of formal risk analysis tend to view affective responses to risk as irrational. Current wisdom disputes this view. The rational and the experiential systems operate in parallel and each seems to depend on the other for guidance. Studies have demonstrated that analytic reasoning cannot be effective unless it is guided by emotion and affect. Rational decision making requires proper integration of both modes of thought. Both systems have their advantages, biases, and limitations. Now that we are beginning to understand the complex interplay between emotion and reason that is essential to rational behavior, the challenge before us is to think creatively about what this means for managing risk. On the one hand, how do we apply reason to temper the strong emotions engendered by some risk events? On the other hand, how do we infuse needed “doses of feeling” into circumstances where lack of experience may otherwise leave us too “coldly rational”? This article addresses these important questions.

9.
Based on conditional value-at-risk (CoVaR) and single-index-model (SIM) quantile regression, this study uses weekly data on 24 industry indices of the Chinese stock market over 2012–2018 to construct a time-varying cross-industry tail-risk network, whose topology captures the spatial linkages of systemic risk and their evolving trends. An ARDL model is further introduced to examine the short- and long-run effects of network structure and macroeconomic variables on stock-market systemic risk, and systemic risk is then forecast. The results show: (1) clear spatial linkages and contagion effects of systemic risk exist among industry sectors of the Chinese stock market, and the risk-spillover network exhibits “small-world” characteristics; (2) the network's edge concentration (HHI) varies cyclically; during tail events the HHI rises significantly, the network collapses toward a single central hub, and network stability deteriorates; (3) judging by nodes' risk-transmission intensity and degree of centrality, a node's systemic importance cannot be assessed comprehensively or accurately from its internal attributes alone, but should be judged jointly with its position and linkages within the network; information technology, health care, and commercial and professional services are the most influential industries in the risk network; (4) an ARDL-ECM model identifies edge concentration as the main driver of systemic risk and yields highly accurate forecasts of stock-market systemic risk. The study offers regulators a reference for identifying influential industries in the Chinese stock market, for designing targeted risk-prevention measures based on the spillover linkages of key industries, and for establishing an early-warning mechanism for risk-spillover effects.
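The paper estimates CoVaR with single-index-model quantile regression; a cruder but self-contained alternative conditions the system's empirical quantile on an industry being in its own tail. A stdlib-Python sketch of that empirical ΔCoVaR on synthetic weekly returns (all numbers illustrative, not the study's method or data):

```python
import random

def quantile(xs, q):
    """Empirical quantile with linear interpolation."""
    s = sorted(xs)
    pos = q * (len(s) - 1)
    lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def delta_covar(industry, system, q=0.05):
    """ΔCoVaR of the system w.r.t. one industry: system VaR conditional on
    the industry being in its q-tail, minus system VaR conditional on the
    industry being near its median."""
    var_i = quantile(industry, q)
    med_lo, med_hi = quantile(industry, 0.45), quantile(industry, 0.55)
    tail = [s for i, s in zip(industry, system) if i <= var_i]
    mid = [s for i, s in zip(industry, system) if med_lo <= i <= med_hi]
    return quantile(tail, q) - quantile(mid, q)

# Synthetic weekly returns: the "system" loads on the industry, so industry
# distress drags the system's tail down (ΔCoVaR comes out negative).
rng = random.Random(42)
industry = [rng.gauss(0, 0.02) for _ in range(2000)]
system = [0.6 * i + rng.gauss(0, 0.01) for i in industry]

dc = delta_covar(industry, system)
```

Computing such a ΔCoVaR for every ordered industry pair yields the weighted edges of the tail-risk network whose concentration (HHI) the paper tracks over time.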

10.
11.
The Environmental Benefits Mapping and Analysis Program (BenMAP) is a software tool developed by the U.S. Environmental Protection Agency (EPA) that is widely used inside and outside of EPA to produce quantitative estimates of public health risks from fine particulate matter (PM2.5). This article discusses the purpose and appropriate role of a risk analysis tool to support risk management deliberations, and evaluates the functions of BenMAP in this context. It highlights the importance in quantitative risk analyses of characterization of epistemic uncertainty, or outright lack of knowledge, about the true risk relationships being quantified. This article describes and quantitatively illustrates sensitivities of PM2.5 risk estimates to several key forms of epistemic uncertainty that pervade those calculations: the risk coefficient, shape of the risk function, and the relative toxicity of individual PM2.5 constituents. It also summarizes findings from a review of U.S.‐based epidemiological evidence regarding the PM2.5 risk coefficient for mortality from long‐term exposure. That review shows that the set of risk coefficients embedded in BenMAP substantially understates the range in the literature. We conclude that BenMAP would more usefully fulfill its role as a risk analysis support tool if its functions were extended to better enable and prompt its users to characterize the epistemic uncertainties in their risk calculations. This requires expanded automatic sensitivity analysis functions and more recognition of the full range of uncertainty in risk coefficients.

12.
Infrequently, it seems that a significant accident precursor or, worse, an actual accident, involving a commercial nuclear power reactor occurs to remind us of the need to reexamine the safety of this important electrical power technology from a risk perspective. Twenty‐five years since the major core damage accident at Chernobyl in the Ukraine, the Fukushima reactor complex in Japan experienced multiple core damages as a result of an earthquake‐induced tsunami beyond either the earthquake or tsunami design basis for the site. Although the tsunami itself killed tens of thousands of people and left the area devastated and virtually uninhabitable, much concern still arose from the potential radioactive releases from the damaged reactors, even though there was little population left in the area to be affected. As a lifelong probabilistic safety analyst in nuclear engineering, even I must admit to a recurrence of the doubt regarding nuclear power safety after Fukushima that I had experienced after Three Mile Island and Chernobyl. This article is my attempt to “recover” my personal perspective on acceptable risk by examining both the domestic and worldwide history of commercial nuclear power plant accidents and attempting to quantify the risk in terms of the frequency of core damage that one might glean from a review of operational history.

13.
14.
This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value‐at‐Risk) have been the cornerstone of banking risk management since the mid‐1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of “model risk” in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value‐at‐Risk model risk and compute the required regulatory capital add‐on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value‐at‐Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks.
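The core objects in this methodology are quantile (Value-at-Risk) estimates and a capital add-on that closes the gap to a benchmark model. A minimal sketch, in which the regulator's benchmark is stood in by a hypothetical fat-tailed rescaling of the bank's P&L (this is an illustrative stand-in, not the authors' adjustment procedure):

```python
import random

def empirical_var(pnl, alpha=0.01):
    """alpha-level 1-day Value-at-Risk from a P&L sample: the alpha-quantile
    loss, reported as a positive number."""
    s = sorted(pnl)
    k = max(0, int(alpha * len(s)) - 1)
    return -s[k]

# Synthetic daily P&L for the bank's (thin-tailed) model.
rng = random.Random(0)
pnl = [rng.gauss(0, 1.0) for _ in range(5000)]

model_var = empirical_var(pnl)                          # bank's model VaR
benchmark_var = empirical_var([x * 1.3 for x in pnl])   # fatter-tailed benchmark

# Model-risk capital add-on: the adjustment needed to raise the model's
# quantile to the benchmark's.
add_on = max(0.0, benchmark_var - model_var)
```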

15.
The value of a statistical life (VSL) is a widely used measure for the value of mortality risk reduction. As VSL should reflect preferences and attitudes to risk, there are reasons to believe that it varies depending on the type of risk involved. It has been argued that cancer should be considered a “dread disease,” which supports the use of a “cancer premium.” The objective of this study is to investigate the existence of a cancer premium (for pancreatic cancer and multiple myeloma) in relation to road traffic accidents, sudden cardiac arrest, and amyotrophic lateral sclerosis (ALS). Data were collected from 500 individuals in the Swedish general population of 50–74‐year‐olds using a web‐based questionnaire. Preferences were elicited using the contingent valuation method, and a split‐sample design was applied to test scale sensitivity. VSL differs significantly between contexts, being highest for ALS and lowest for road traffic accidents. A premium (92–113%) for cancer was found in relation to road traffic accidents. The premium was higher for cancer with a shorter time from diagnosis to death. A premium was also found for sudden cardiac arrest (73%) and ALS (118%) in relation to road traffic accidents. Eliminating risk was associated with a premium of around 20%. This study provides additional evidence that a dread premium and a risk‐elimination premium exist. These factors should be considered when searching for an appropriate value for economic evaluation and health technology assessment.
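The premiums reported in this abstract are proportional differences between context-specific VSLs relative to the road-traffic baseline. Worked through with hypothetical VSL values (not the study's estimates):

```python
# Hypothetical VSLs (in millions of any currency) to show how a context
# premium is computed: the proportional increase over the road-traffic
# baseline, expressed in percent.
vsl_road = 4.0      # baseline context: road traffic accident
vsl_cancer = 8.2    # hypothetical cancer-context VSL

cancer_premium_pct = (vsl_cancer / vsl_road - 1) * 100
# ≈ 105%, which would fall inside the 92-113% range reported above
```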

16.
Humans are continuously exposed to suspected or proven endocrine‐disrupting chemicals (EDCs). Risk management of EDCs presents a major unmet challenge because the available data for adverse health effects are generated by examining one compound at a time, whereas real‐life exposures are to mixtures of chemicals. In this work, we integrate epidemiological and experimental evidence toward a whole‐mixture strategy for risk assessment. To illustrate, we conduct the following four steps in a case study: (1) identification of single EDCs (“bad actors”)—measured in prenatal blood/urine in the SELMA study—that are associated with a shorter anogenital distance (AGD) in baby boys; (2) definition and construction of a “typical” mixture consisting of the “bad actors” identified in Step 1; (3) experimental testing of this mixture in an in vivo animal model to estimate a dose–response relationship and determine a point of departure (i.e., reference dose [RfD]) associated with an adverse health outcome; and (4) use of a statistical measure of “sufficient similarity” to compare the experimental RfD (from Step 3) to the exposure measured in the human population and generate a “similar mixture risk indicator” (SMRI). The objective of this exercise is to generate a proof of concept for the systematic integration of epidemiological and experimental evidence with mixture risk assessment strategies. Using the whole‐mixture approach, we find a substantially higher proportion of pregnant women at risk (13%) than is indicated by more traditional additivity models (3%) or a compound‐by‐compound strategy (1.6%).

17.
Royce A. Francis. Risk Analysis, 2015, 35(11): 1983–1995
This article argues that “game‐changing” approaches to risk analysis must focus on “democratizing” risk analysis in the same way that information technologies have democratized access to, and production of, knowledge. This argument is motivated by the author's reading of Goble and Bier's analysis, “Risk Assessment Can Be a Game‐Changing Information Technology—But Too Often It Isn't” (Risk Analysis, 2013; 33: 1942–1951), in which living risk assessments are shown to be “game changing” in probabilistic risk analysis. In this author's opinion, Goble and Bier's article focuses on living risk assessment's potential for transforming risk analysis from the perspective of risk professionals—yet, the game‐changing nature of information technologies has typically achieved a much broader reach. Specifically, information technologies change who has access to, and who can produce, information. From this perspective, the author argues that risk assessment is not a game‐changing technology in the same way as the printing press or the Internet because transformative information technologies reduce the cost of production of, and access to, privileged knowledge bases. The author argues that risk analysis does not reduce these costs. The author applies Goble and Bier's metaphor to the chemical risk analysis context, and in doing so proposes key features that transformative risk analysis technology should possess. The author also discusses the challenges and opportunities facing risk analysis in this context. These key features include: clarity in information structure and problem representation, economical information dissemination, increased transparency to nonspecialists, democratized manufacture and transmission of knowledge, and democratic ownership, control, and interpretation of knowledge. The chemical safety decision‐making context illustrates the impact of changing the way information is produced and accessed in the risk context. Ultimately, the author concludes that although new chemical safety regulations do transform access to risk information, they do not transform the costs of producing this information; rather, they change the bearer of these costs. The need for further risk assessment transformation continues to motivate new practical and theoretical developments in risk analysis and management.

18.
Risk aversion (a second‐order risk preference) is a time‐proven concept in economic models of choice under risk. More recently, the higher order risk preferences of prudence (third‐order) and temperance (fourth‐order) also have been shown to be quite important. While a majority of the population seems to exhibit both risk aversion and these higher order risk preferences, a significant minority does not. We show how both risk‐averse and risk‐loving behaviors might be generated by a simple type of basic lottery preference for either (1) combining “good” outcomes with “bad” ones, or (2) combining “good with good” and “bad with bad,” respectively. We further show that this dichotomy is fairly robust at explaining higher order risk attitudes in the laboratory. In addition to our own experimental evidence, we take a second look at the extant laboratory experiments that measure higher order risk preferences and we find a fair amount of support for this dichotomy. Our own experiment also is the first to look beyond fourth‐order risk preferences, and we examine risk attitudes at even higher orders.

19.
Health care professionals are a major source of risk communications, but their estimation of risks may be compromised by systematic biases. We examined fuzzy-trace theory's predictions of professionals' biases in risk estimation for sexually transmitted infections (STIs) linked to: knowledge deficits (producing underestimation of STI risk, re-infection, and gender differences), gist-based mental representation of risk categories (producing overestimation of condom effectiveness for psychologically atypical but prevalent infections), retrieval failure for risk knowledge (producing greater risk underestimation when STIs are not specified), and processing interference involving combining risk estimates (producing biases in post-test estimation of infection, regardless of knowledge). One hundred seventy-four subjects (experts attending a national workshop, physicians, other health care professionals, and students) estimated the risk of teenagers contracting STIs, re-infection rates for males and females, and condom effectiveness in reducing infection risk. Retrieval was manipulated by asking estimation questions in two formats: a specific format that "unpacked" the STI category (infection types) and a global format that did not provide specific cues. Processing biases were assessed by requesting estimates of infection risk after the relevant knowledge was directly provided, thereby isolating processing effects. As predicted, all groups of professionals underestimated the risk of STI transmission, re-infection, and gender differences, and overestimated the effectiveness of condoms, relative to published estimates. However, when questions provided better retrieval supports (specified format), estimation bias decreased. All groups of professionals also suffered from the predicted processing biases. Although knowledge deficits contribute to estimation biases, the research showed that biases are also linked to fuzzy representations, retrieval failures, and processing errors. Hence, interventions designed to improve risk perception among professionals must incorporate more than knowledge dissemination. They should also provide support for information representation, effective retrieval, and accurate processing.

20.
This article considers all 87 attacks worldwide against air and rail transport systems that killed at least two passengers over the 30‐year period of 1982–2011. The data offer strong and statistically significant evidence that successful acts of terror have “gone to ground” in recent years: attacks against aviation were concentrated early in the three decades studied whereas those against rail were concentrated later. Recent data are used to make estimates of absolute and comparative risk for frequent flyers and subway/rail commuters. Point estimates in the “status quo” case imply that mortality risk from successful acts of terror was very low on both modes of transportation and that, whereas risk per trip is higher for air travelers than subway/rail commuters, the rail commuters experience greater risk per year than the frequent flyers.
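The closing comparison, higher risk per trip for air but higher risk per year for rail, follows from multiplying per-trip risk by trip frequency. A sketch with hypothetical numbers (not the article's point estimates):

```python
# Illustrative only: how a higher per-trip risk can coexist with a lower
# annual risk. All four inputs are hypothetical.
risk_per_trip_air = 1e-8      # frequent flyer, mortality risk per flight
risk_per_trip_rail = 2e-9     # subway/rail commuter, risk per trip

trips_per_year_air = 50       # a frequent flyer's flights per year
trips_per_year_rail = 500     # roughly two commute trips per working day

annual_air = risk_per_trip_air * trips_per_year_air
annual_rail = risk_per_trip_rail * trips_per_year_rail

# Per trip, air is riskier; per year, the rail commuter bears more risk,
# because trip frequency dominates the comparison.
```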


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)