Similar literature
20 similar records retrieved.
1.
Regulatory impact analyses (RIAs), required for new major federal regulations, are often criticized for not incorporating epistemic uncertainties into their quantitative estimates of benefits and costs. “Integrated uncertainty analysis,” which relies on subjective judgments about epistemic uncertainty to quantitatively combine epistemic and statistical uncertainties, is often prescribed. This article identifies an additional source for subjective judgment regarding a key epistemic uncertainty in RIAs for National Ambient Air Quality Standards (NAAQS)—the regulator's degree of confidence in continuation of the relationship between pollutant concentration and health effects at varying concentration levels. An illustrative example is provided based on the 2013 decision on the NAAQS for fine particulate matter (PM2.5). It shows how the regulator's justification for setting that NAAQS was structured around the regulator's subjective confidence in the continuation of health risks at different concentration levels, and it illustrates how such expressions of uncertainty might be directly incorporated into the risk reduction calculations used in the rule's RIA. The resulting confidence-weighted quantitative risk estimates are found to be substantially different from those in the RIA for that rule. This approach for accounting for an important source of subjective uncertainty also offers the advantage of establishing consistency between the scientific assumptions underlying RIA risk and benefit estimates and the science-based judgments developed when deciding on the relevant standards for important air pollutants such as PM2.5.  相似文献   
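Below is a minimal Python sketch of the confidence-weighting idea summarized above; the concentration bins, attributable-death counts, and confidence weights are hypothetical placeholders, not values from the 2013 PM2.5 RIA.

```python
# Illustrative confidence weighting of PM2.5 risk-reduction estimates.
# All numbers below are hypothetical placeholders, not values from the 2013 RIA.

# Estimated deaths avoided by the rule, broken out by the ambient
# concentration range (ug/m3) in which the reduction occurs.
deaths_avoided_by_bin = {
    "12-15 ug/m3": 1200,
    "10-12 ug/m3": 800,
    "8-10 ug/m3":  500,
    "<8 ug/m3":    300,
}

# Regulator's subjective confidence that the concentration-response
# relationship continues to hold within each range (1.0 = fully confident).
confidence_by_bin = {
    "12-15 ug/m3": 1.0,
    "10-12 ug/m3": 0.8,
    "8-10 ug/m3":  0.5,
    "<8 ug/m3":    0.2,
}

unweighted = sum(deaths_avoided_by_bin.values())
weighted = sum(deaths_avoided_by_bin[b] * confidence_by_bin[b]
               for b in deaths_avoided_by_bin)

print(f"Unweighted estimate          : {unweighted:,.0f} deaths avoided")
print(f"Confidence-weighted estimate : {weighted:,.0f} deaths avoided")
```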

2.
Kenneth T. Bogen. Risk Analysis, 2014, 34(10): 1795–1806.
The National Research Council 2009 “Silver Book” panel report included a recommendation that the U.S. Environmental Protection Agency (EPA) should increase all of its chemical carcinogen (CC) potency estimates by ~7-fold to adjust for a purported median-vs.-mean bias that I recently argued does not exist (Bogen KT. “Does EPA underestimate cancer risks by ignoring susceptibility differences?,” Risk Analysis, 2014; 34(10):1780–1784). In this issue of the journal, my argument is critiqued for having flaws concerning: (1) intent, bias, and conservatism of EPA estimates of CC potency; (2) bias in potency estimates derived from epidemiology; and (3) human-animal CC-potency correlation. However, my argument remains valid, for the following reasons. (1) EPA's default approach to estimating CC risks has correctly focused on bounding average (not median) individual risk under a genotoxic mode-of-action (MOA) assumption, although pragmatically the approach leaves both inter-individual variability in CC susceptibility and widely varying CC-specific magnitudes of fundamental MOA uncertainty unquantified. (2) CC risk estimates based on large epidemiology studies are not systematically biased downward due to limited sampling from broad, lognormal susceptibility distributions. (3) A good, quantitative correlation is exhibited between upper bounds on CC-specific potency estimated from human vs. animal studies (n = 24, r = 0.88, p = 2 × 10^−8). It is concluded that protective upper-bound estimates of individual CC risk that account for heterogeneity in susceptibility, as well as risk comparisons informed by best predictions of average-individual and population risk that address CC-specific MOA uncertainty, should each be used as separate, complementary tools to improve regulatory decisions concerning low-level, environmental CC exposures.

3.
This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value‐at‐Risk) have been the cornerstone of banking risk management since the mid 1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of “model risk” in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark which represents the state of knowledge of the authority that is responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value‐at‐Risk model risk and compute the required regulatory capital add‐on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value‐at‐Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks.  相似文献   
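A rough Python sketch of the general idea of benchmarking a reported Value-at-Risk against an alternative quantile estimate and computing a capital add-on; the simulated data, the empirical benchmark, and the add-on rule are illustrative assumptions, not the authors' framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily profit-and-loss and the bank's own reported 1% VaR
# (both would be real time series in practice).
pnl = rng.standard_t(df=5, size=1000) * 1e6   # heavy-tailed daily P&L
reported_var = np.full(1000, 2.3e6)           # bank model's 1% VaR (loss, positive)

# Benchmark: the empirical 1% quantile of realized losses.
benchmark_var = -np.quantile(pnl, 0.01)

# Simple model-risk adjustment: the shortfall of the reported VaR
# relative to the benchmark, floored at zero.
add_on = max(benchmark_var - reported_var.mean(), 0.0)

print(f"Reported mean VaR : {reported_var.mean():,.0f}")
print(f"Benchmark VaR     : {benchmark_var:,.0f}")
print(f"Capital add-on    : {add_on:,.0f}")
```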

4.
Over time, concerns have been raised regarding the potential for human exposure and risk from asbestos in cosmetic‐talc–containing consumer products. In 1985, the U.S. Food and Drug Administration (FDA) conducted a risk assessment evaluating the potential inhalation asbestos exposure associated with the cosmetic talc consumer use scenario of powdering an infant during diapering, and found that risks were below levels associated with background asbestos exposures and risk. However, given the scope and age of the FDA's assessment, it was unknown whether the agency's conclusions remained relevant to current risk assessment practices, talc application scenarios, and exposure data. This analysis updates the previous FDA assessment by incorporating the current published exposure literature associated with consumer use of talcum powder and using the current U.S. Environmental Protection Agency's (EPA) nonoccupational asbestos risk assessment approach to estimate potential cumulative asbestos exposure and risk for four use scenarios: (1) infant exposure during diapering; (2) adult exposure from infant diapering; (3) adult exposure from face powdering; and (4) adult exposure from body powdering. The estimated range of cumulative asbestos exposure potential for all scenarios (assuming an asbestos content of 0.1%) ranged from 0.0000021 to 0.0096 f/cc‐yr and resulted in risk estimates that were within or below EPA's acceptable target risk levels. Consistent with the original FDA findings, exposure and corresponding health risk in this range were orders of magnitude below upper‐bound estimates of cumulative asbestos exposure and risk at ambient levels, which have not been associated with increased incidence of asbestos‐related disease.  相似文献   
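A back-of-the-envelope Python sketch of how a cumulative exposure in f/cc-yr can be converted into an excess lifetime risk under a linear unit-risk approach; the concentration, use pattern, and unit risk value are assumptions for illustration, not figures from the study.

```python
# Illustrative cumulative-exposure and risk calculation for a powdering scenario.
# All inputs are hypothetical placeholders, not values from the study.

conc_during_use = 0.01    # airborne asbestos concentration during use, f/cc (assumed)
minutes_per_event = 5     # duration of one application
events_per_week = 7
years_of_use = 2

# Cumulative exposure in fiber/cc-years (time-weighted over the exposure period).
hours_per_year = minutes_per_event / 60 * events_per_week * 52
cumulative_fcc_yr = conc_during_use * hours_per_year * years_of_use / (24 * 365)

# Lifetime average continuous concentration over an assumed 70-year lifetime.
lifetime_avg_conc = cumulative_fcc_yr / 70

unit_risk = 0.23          # excess lifetime cancer risk per f/cc continuous exposure (assumed)
excess_risk = unit_risk * lifetime_avg_conc

print(f"Cumulative exposure : {cumulative_fcc_yr:.2e} f/cc-yr")
print(f"Excess lifetime risk: {excess_risk:.2e}")
```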

5.
In 2002, the U.S. Environmental Protection Agency (EPA) released an “Interim Policy on Genomics,” stating a commitment to developing guidance on the inclusion of genetic information in regulatory decision making. This statement was followed in 2004 by a document exploring the potential implications. Genetic information can play a key role in understanding and quantifying human susceptibility, an essential step in many of the risk assessments used to shape policy. For example, the federal Clean Air Act (CAA) requires EPA to set National Ambient Air Quality Standards (NAAQS) for criteria pollutants at levels to protect even sensitive populations from adverse health effects with an adequate margin of safety. Asthmatics are generally regarded as a sensitive population, yet substantial research gaps in understanding genetic susceptibility and disease have hindered quantitative risk analysis. This case study assesses the potential role of genomic information regarding susceptible populations in the NAAQS process for fine particulate matter (PM2.5) under the CAA. In this initial assessment, we model the contribution of a single polymorphism to asthma risk and mortality risk; however, multiple polymorphisms and interactions (gene‐gene and gene‐environment) are known to play key roles in the disease process. We show that the impact of new information about susceptibility on estimates of population risk or average risk derived from large epidemiological studies depends on the circumstances. We also suggest that analysis of a single polymorphism, or other risk factor such as health status, may or may not change estimates of individual risk enough to alter a particular regulatory decision, but this depends on specific characteristics of the decision and risk information. We also show how new information about susceptibility in the context of the NAAQS for PM2.5 could have a large impact on the estimated distribution of individual risk. This would occur if a group were consequently identified (based on genetic and/or disease status), that accounted for a disproportionate share of observed effects. Our results highlight certain conditions under which genetic information is likely to have an impact on risk estimates and the balance of costs and benefits within groups, and highlight critical research needs. As future studies explore more fully the relationship between exposure, genetic makeup, and disease status, the opportunity for genetic information and disease status to play pivotal roles in regulation can only increase.  相似文献   
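The sketch below illustrates, with entirely hypothetical numbers, how a single susceptibility polymorphism can leave the population-average risk unchanged while concentrating a disproportionate share of cases in a small subgroup.

```python
# Illustrative partition of a population-average risk into genotype-specific risks.
# All numbers are hypothetical and not taken from the case study.

population = 1_000_000
avg_excess_risk = 1e-4        # average individual excess risk from an epidemiological study
susceptible_fraction = 0.10   # carriers of the hypothetical risk polymorphism
relative_risk = 5.0           # excess risk in carriers relative to non-carriers

# Solve avg = f*RR*r + (1 - f)*r for the non-carrier risk r.
noncarrier_risk = avg_excess_risk / (susceptible_fraction * relative_risk
                                     + (1 - susceptible_fraction))
carrier_risk = relative_risk * noncarrier_risk

expected_cases = population * avg_excess_risk
cases_in_carriers = population * susceptible_fraction * carrier_risk

print(f"Non-carrier excess risk : {noncarrier_risk:.2e}")
print(f"Carrier excess risk     : {carrier_risk:.2e}")
print(f"Share of cases in 10% of population: {cases_in_carriers / expected_cases:.0%}")
```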

6.
The Environmental Protection Agency's (EPA's) estimates of the benefits of improved air quality, especially from reduced mortality associated with reductions in fine particle concentrations, constitute the largest category of benefits from all federal regulation over the last decade. EPA develops such estimates, however, using an approach little changed since a 2002 report by the National Research Council (NRC), which was critical of EPA's methods and recommended a more comprehensive uncertainty analysis incorporating probability distributions for major sources of uncertainty. Consistent with the NRC's 2002 recommendations, we explore alternative assumptions and probability distributions for the major variables used to calculate the value of mortality benefits. For metropolitan Philadelphia, we show that uncertainty in air quality improvements and in baseline mortality have only modest effects on the distribution of estimated benefits. We analyze the effects of alternative assumptions regarding the value of reducing mortality risk, whether the toxicity is above or below the average for fine particles, and whether there is a threshold in the concentration‐response relationship, and show these assumptions all have large effects on the distribution of benefits.  相似文献   
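A compact Monte Carlo sketch in Python of the kind of probabilistic benefits calculation described; the population size, distributions, and parameter values are hypothetical stand-ins, not the Philadelphia inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo draws

# Hypothetical inputs for a metropolitan area (not the study's values).
population = 5_000_000
baseline_mortality = rng.normal(0.008, 0.0005, n)         # annual deaths per person
delta_pm = rng.normal(2.0, 0.3, n)                         # reduction in PM2.5, ug/m3
beta = rng.normal(0.006, 0.002, n)                         # C-R coefficient per ug/m3
vsl = rng.lognormal(mean=np.log(8e6), sigma=0.5, size=n)   # value of a statistical life, $

# Log-linear concentration-response: deaths avoided, then monetized benefits.
deaths_avoided = population * baseline_mortality * (1 - np.exp(-beta * delta_pm))
benefits = deaths_avoided * vsl

print(f"Mean benefits      : ${benefits.mean() / 1e9:.1f} billion")
print(f"5th-95th percentile: ${np.percentile(benefits, 5) / 1e9:.1f}"
      f" - ${np.percentile(benefits, 95) / 1e9:.1f} billion")
```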

7.
Communities are concerned over pollution levels and seek methods to systematically identify and prioritize the environmental stressors in their communities. Geographic information system (GIS) maps of environmental information can be useful tools for communities in their assessment of environmental‐pollution‐related risks. Databases and mapping tools that supply community‐level estimates of ambient concentrations of hazardous pollutants, risk, and potential health impacts can provide relevant information for communities to understand, identify, and prioritize potential exposures and risk from multiple sources. An assessment of existing databases and mapping tools was conducted as part of this study to explore the utility of publicly available databases, and three of these databases were selected for use in a community‐level GIS mapping application. Queried data from the U.S. EPA's National‐Scale Air Toxics Assessment, Air Quality System, and National Emissions Inventory were mapped at the appropriate spatial and temporal resolutions for identifying risks of exposure to air pollutants in two communities. The maps combine monitored and model‐simulated pollutant and health risk estimates, along with local survey results, to assist communities with the identification of potential exposure sources and pollution hot spots. Findings from this case study analysis will provide information to advance the development of new tools to assist communities with environmental risk assessments and hazard prioritization.  相似文献   

8.
We consider the problem of managing demand risk in tactical supply chain planning for a particular global consumer electronics company. The company follows a deterministic replenishment‐and‐planning process despite considerable demand uncertainty. As a possible way to formally address uncertainty, we provide two risk measures, “demand‐at‐risk” (DaR) and “inventory‐at‐risk” (IaR) and two linear programming models to help manage demand uncertainty. The first model is deterministic and can be used to allocate the replenishment schedule from the plants among the customers as per the existing process. The other model is stochastic and can be used to determine the “ideal” replenishment request from the plants under demand uncertainty. The gap between the output of the two models as regards requested replenishment and the values of the risk measures can be used by the company to reallocate capacity among different products and to thus manage demand/inventory risk.  相似文献   
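A toy Python sketch of how "demand-at-risk" and "inventory-at-risk" style measures could be computed as quantiles of simulated outcomes under a fixed replenishment plan; the single-product setting and the exact definitions used here are this illustration's assumptions, not the paper's linear programming models.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical single-product, single-period setting.
planned_replenishment = 1_000              # units shipped under the deterministic plan
starting_inventory = 200
demand = rng.normal(1_050, 150, 100_000)   # uncertain demand scenarios

ending_inventory = starting_inventory + planned_replenishment - demand

# Demand-at-risk: a high quantile of demand (exposure to under-supply).
dar_95 = np.quantile(demand, 0.95)
# Inventory-at-risk: a high quantile of leftover stock (exposure to over-supply).
iar_95 = np.quantile(np.maximum(ending_inventory, 0), 0.95)

print(f"95% demand-at-risk   : {dar_95:,.0f} units")
print(f"95% inventory-at-risk: {iar_95:,.0f} units")
```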

9.
How can risk analysts help to improve policy and decision making when the correct probabilistic relation between alternative acts and their probable consequences is unknown? This practical challenge of risk management with model uncertainty arises in problems from preparing for climate change to managing emerging diseases to operating complex and hazardous facilities safely. We review constructive methods for robust and adaptive risk analysis under deep uncertainty. These methods are not yet as familiar to many risk analysts as older statistical and model‐based methods, such as the paradigm of identifying a single “best‐fitting” model and performing sensitivity analyses for its conclusions. They provide genuine breakthroughs for improving predictions and decisions when the correct model is highly uncertain. We demonstrate their potential by summarizing a variety of practical risk management applications.  相似文献   

10.
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose‐response models. Current approaches do not explicitly address model uncertainty, and there is an existing need to more fully inform health risk assessors in this regard. In this study, a Bayesian model averaging (BMA) BMD estimation method taking model uncertainty into account is proposed as an alternative to current BMD estimation approaches for continuous data. Using the “hybrid” method proposed by Crump, two strategies of BMA, including both “maximum likelihood estimation based” and “Markov Chain Monte Carlo based” methods, are first applied as a demonstration to calculate model averaged BMD estimates from real continuous dose‐response data. The outcomes from the example data sets examined suggest that the BMA BMD estimates have higher reliability than the estimates from the individual models with highest posterior weight in terms of higher BMDL and smaller 90th percentile intervals. In addition, a simulation study is performed to evaluate the accuracy of the BMA BMD estimator. The results from the simulation study recommend that the BMA BMD estimates have smaller bias than the BMDs selected using other criteria. To further validate the BMA method, some technical issues, including the selection of models and the use of bootstrap methods for BMDL derivation, need further investigation over a more extensive, representative set of dose‐response data.  相似文献   
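A toy Python sketch of Bayesian model averaging over benchmark dose estimates, using BIC-based posterior weights as a simple stand-in for the weighting schemes discussed; the models, likelihoods, and BMD values are hypothetical.

```python
import numpy as np

# Toy example: three fitted dose-response models with their maximized
# log-likelihoods, parameter counts, and BMD estimates (all hypothetical).
models = [
    {"name": "Hill",        "loglik": -102.1, "k": 4, "bmd": 1.8},
    {"name": "Exponential", "loglik": -103.0, "k": 3, "bmd": 2.4},
    {"name": "Power",       "loglik": -104.5, "k": 3, "bmd": 3.1},
]

n_obs = 50  # number of dose-response observations

# Approximate posterior model weights from BIC (equal prior model probabilities).
bic = np.array([-2 * m["loglik"] + m["k"] * np.log(n_obs) for m in models])
weights = np.exp(-0.5 * (bic - bic.min()))
weights /= weights.sum()

bmd_ma = sum(w * m["bmd"] for w, m in zip(weights, models))

for w, m in zip(weights, models):
    print(f"{m['name']:12s} weight={w:.2f}  BMD={m['bmd']}")
print(f"Model-averaged BMD: {bmd_ma:.2f}")
```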

11.
The Petroleum Safety Authority Norway (PSA‐N) has recently adopted a new definition of risk: “the consequences of an activity with the associated uncertainty.” The PSA‐N has also been using “deficient risk assessment” for some time as a basis for assigning nonconformities in audit reports. This creates an opportunity to study the link between risk perspective and risk assessment quality in a regulatory context, and, in the present article, we take a hard look at the term “deficient risk assessment” both normatively and empirically. First, we perform a conceptual analysis of how a risk assessment can be deficient in light of a particular risk perspective consistent with the new PSA‐N risk definition. Then, we examine the usages of the term “deficient” in relation to risk assessments in PSA‐N audit reports and classify these into a set of categories obtained from the conceptual analysis. At an overall level, we were able to identify on what aspects of the risk assessment the PSA‐N is focusing and where deficiencies are being identified in regulatory practice. A key observation is that there is a diversity in how the agency officials approach the risk assessments in audits. Hence, we argue that improving the conceptual clarity of what the authorities characterize as “deficient” in relation to the uncertainty‐based risk perspective may contribute to the development of supervisory practices and, eventually, potentially strengthen the learning outcome of the audit reports.  相似文献   

12.
Ali Mosleh. Risk Analysis, 2012, 32(11): 1888–1900.
Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from “nominal predictions” due to “upsetting events” such as the 2008 global banking crisis.  相似文献   
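A much-simplified Python sketch of pooling default-probability estimates from multiple rating agencies, with each agency's historical accuracy encoded as Beta pseudo-counts; this is an illustrative stand-in, not the article's framework, and all numbers are hypothetical.

```python
# Simplified Bayesian pooling of default-probability estimates from two
# rating agencies, weighted by their historical accuracy (hypothetical values).

agencies = [
    # (estimated annual default probability, effective sample size reflecting
    #  how much the agency's past predictive accuracy is trusted)
    {"name": "Agency A", "pd": 0.020, "pseudo_obs": 400},
    {"name": "Agency B", "pd": 0.035, "pseudo_obs": 150},
]

# Encode each estimate as Beta pseudo-counts (alpha = pd*n, beta = (1-pd)*n)
# and pool them on top of a weak uniform prior Beta(1, 1).
alpha, beta = 1.0, 1.0
for a in agencies:
    alpha += a["pd"] * a["pseudo_obs"]
    beta += (1 - a["pd"]) * a["pseudo_obs"]

posterior_mean = alpha / (alpha + beta)
print(f"Pooled default probability estimate: {posterior_mean:.3%}")
```

The agency with the larger pseudo-observation count (i.e., the better historical track record in this toy encoding) pulls the pooled estimate toward its own figure.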

13.
Richard A. Canady. Risk Analysis, 2010, 30(11): 1663–1670.
A September 2008 workshop sponsored by the Society for Risk Analysis( 1 ) on risk assessment methods for nanoscale materials explored “nanotoxicology” in risk assessment. A general conclusion of the workshop was that, while research indicates that some nanoscale materials are toxic, the information presented at the workshop does not indicate the need for a conceptually different approach for risk assessment on nanoscale materials, compared to other materials. However, the toxicology discussions did identify areas of uncertainty that present a challenge for the assessment of nanoscale materials. These areas include novel metrics, characterizing multivariate dynamic mixtures, identification of toxicologically relevant properties and “impurities” for nanoscale characteristics, and characterizing persistence, toxicokinetics, and weight of evidence in consideration of the dynamic nature of the mixtures. The discussion also considered “nanomaterial uncertainty factors” for health risk values like the Environmental Protection Agency's reference dose (RfD). Similar to the general opinions for risk assessment, participants expressed that completing a data set regarding toxicity, or extrapolation between species, sensitive individuals, or durations of exposure, were not qualitatively different considerations for nanoscale materials in comparison to all chemicals, and therefore, a “nanomaterial uncertainty factor” for all nanomaterials does not seem appropriate. However, the quantitative challenges may require new methods and approaches to integrate the information and the uncertainty.  相似文献   

14.
Royce A. Francis. Risk Analysis, 2015, 35(11): 1983–1995.
This article argues that “game‐changing” approaches to risk analysis must focus on “democratizing” risk analysis in the same way that information technologies have democratized access to, and production of, knowledge. This argument is motivated by the author's reading of Goble and Bier's analysis, “Risk Assessment Can Be a Game‐Changing Information Technology—But Too Often It Isn't” (Risk Analysis, 2013; 33: 1942–1951), in which living risk assessments are shown to be “game changing” in probabilistic risk analysis. In this author's opinion, Goble and Bier's article focuses on living risk assessment's potential for transforming risk analysis from the perspective of risk professionals—yet, the game‐changing nature of information technologies has typically achieved a much broader reach. Specifically, information technologies change who has access to, and who can produce, information. From this perspective, the author argues that risk assessment is not a game‐changing technology in the same way as the printing press or the Internet because transformative information technologies reduce the cost of production of, and access to, privileged knowledge bases. The author argues that risk analysis does not reduce these costs. The author applies Goble and Bier's metaphor to the chemical risk analysis context, and in doing so proposes key features that transformative risk analysis technology should possess. The author also discusses the challenges and opportunities facing risk analysis in this context. These key features include: clarity in information structure and problem representation, economical information dissemination, increased transparency to nonspecialists, democratized manufacture and transmission of knowledge, and democratic ownership, control, and interpretation of knowledge. The chemical safety decision‐making context illustrates the impact of changing the way information is produced and accessed in the risk context. Ultimately, the author concludes that although new chemical safety regulations do transform access to risk information, they do not transform the costs of producing this information—rather, they change the bearer of these costs. The need for further risk assessment transformation continues to motivate new practical and theoretical developments in risk analysis and management.  相似文献   

15.
Use of variability of profits and other accounting‐based ratios in order to estimate a firm's risk of insolvency is a well‐established concept in management and economics. We argue that these measures fail to approximate the true level of risk accurately because managers consider other strategic choices and goals when making risky decisions. Instead, we propose an econometric model that incorporates current and past strategic choices to estimate risk from the profit function. Specifically, we extend the well‐established multiplicative error model to allow for the endogeneity of the uncertainty component. We demonstrate the power of the model using a large sample of US banks and show that our estimates predict the accelerated bank risk that led to the subprime crisis in 2007. Our measure of risk also predicts the probability of bank default both in the period of the default but also well in advance of this default and before conventional measures of bank risk.  相似文献   

16.
Scott Janzwood. Risk Analysis, 2023, 43(10): 2004–2016.
Outside of the field of risk analysis, an important theoretical conversation on the slippery concept of uncertainty has unfolded over the last 40 years within the adjacent field of environmental risk. This literature has become increasingly standardized behind the tripartite distinction between uncertainty location, the nature of uncertainty, and uncertainty level, popularized by the “W&H framework.” This article introduces risk theorists and practitioners to the conceptual literature on uncertainty with the goal of catalyzing further development and clarification of the uncertainty concept within the field of risk analysis. It presents two critiques of the W&H framework's dimension of uncertainty level—the dimension that attempts to define the characteristics separating greater uncertainties from lesser uncertainties. First, I argue the framework's conceptualization of uncertainty level lacks a clear and consistent epistemological position and fails to acknowledge or reconcile the tensions between Bayesian and frequentist perspectives present within the framework. This article reinterprets the dimension of uncertainty level from a Bayesian perspective, which understands uncertainty as a mental phenomenon arising from “confidence deficits” as opposed to the ill-defined notion of “knowledge deficits” present in the framework. And second, I elaborate the undertheorized concept of uncertainty “reducibility.” These critiques inform a clarified conceptualization of uncertainty level that can be integrated with risk analysis concepts and usefully applied by modelers and decisionmakers engaged in model-based decision support.  相似文献   

17.
Pricing below cost is often classified as “dumping” in international trade and as “predatory pricing” in local markets. It is legally prohibited from practice because of earlier findings that it leads to predatory behavior by either eliminating competition or stealing market share. This study shows that a stochastic exchange rate can create incentives for a profit‐minded monopoly firm to set price below marginal cost. Our result departs from earlier findings because the optimal pricing decision is based on a rational behavior that does not exhibit any malicious intent against the competition to be considered as violating anti‐trust laws. The finding is a robust result, because our analysis demonstrates that this behavior occurs under various settings such as when the firm (i) is risk‐averse, (ii) can postpone prices until after exchange rates are realized, (iii) is capable of manufacturing in multiple countries, and (iv) operates under demand uncertainty in addition to the random exchange rate.  相似文献   

18.
Researchers have long recognized that subjective perceptions of risk are better predictors of choices over risky outcomes than science‐based or experts’ assessments of risk. More recent work suggests that uncertainty about risks also plays a role in predicting choices and behavior. In this article, we develop and estimate a formal model for an individual's perceived health risks associated with arsenic contamination of his or her drinking water. The modeling approach treats risk as a random variable, with an estimable probability distribution whose variance reflects uncertainty. The model we estimate uses data collected from a survey given to a sample of people living in arsenic‐prone areas in the United States. The findings from this article support the fact that scientific information is essential to explaining the mortality rate perceived by the individuals, but uncertainty about the probability remains significant.  相似文献   
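One way to make the "risk as a random variable" idea concrete is sketched below, treating a respondent's perceived risk as a Beta distribution whose spread reflects uncertainty; the elicited values and the confidence-to-spread mapping are hypothetical, not the article's estimated model.

```python
from scipy import stats

# Illustrative representation of one respondent's perceived arsenic-related
# mortality risk as a Beta-distributed random variable (hypothetical values).

best_guess = 0.002   # respondent's stated probability of a fatal outcome
confidence = 500     # effective "sample size": larger means less subjective uncertainty

alpha = best_guess * confidence
beta = (1 - best_guess) * confidence
perceived_risk = stats.beta(alpha, beta)

print(f"Mean perceived risk : {perceived_risk.mean():.4f}")
print(f"Std (uncertainty)   : {perceived_risk.std():.4f}")
print(f"90% credible range  : {perceived_risk.ppf(0.05):.5f} - {perceived_risk.ppf(0.95):.5f}")
```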

19.
A challenge for large‐scale environmental health investigations such as the National Children's Study (NCS), is characterizing exposures to multiple, co‐occurring chemical agents with varying spatiotemporal concentrations and consequences modulated by biochemical, physiological, behavioral, socioeconomic, and environmental factors. Such investigations can benefit from systematic retrieval, analysis, and integration of diverse extant information on both contaminant patterns and exposure‐relevant factors. This requires development, evaluation, and deployment of informatics methods that support flexible access and analysis of multiattribute data across multiple spatiotemporal scales. A new “Tiered Exposure Ranking” (TiER) framework, developed to support various aspects of risk‐relevant exposure characterization, is described here, with examples demonstrating its application to the NCS. TiER utilizes advances in informatics computational methods, extant database content and availability, and integrative environmental/exposure/biological modeling to support both “discovery‐driven” and “hypothesis‐driven” analyses. “Tier 1” applications focus on “exposomic” pattern recognition for extracting information from multidimensional data sets, whereas second and higher tier applications utilize mechanistic models to develop risk‐relevant exposure metrics for populations and individuals. In this article, “tier 1” applications of TiER explore identification of potentially causative associations among risk factors, for prioritizing further studies, by considering publicly available demographic/socioeconomic, behavioral, and environmental data in relation to two health endpoints (preterm birth and low birth weight). A “tier 2” application develops estimates of pollutant mixture inhalation exposure indices for NCS counties, formulated to support risk characterization for these endpoints. Applications of TiER demonstrate the feasibility of developing risk‐relevant exposure characterizations for pollutants using extant environmental and demographic/socioeconomic data.  相似文献   

20.
Subjective probability distributions constitute an important part of the input to decision analysis and other decision aids. The long list of persistent biases associated with human judgments under uncertainty [16] suggests, however, that these biases can be translated into the elicited probabilities, which, in turn, may be reflected in the output of the decision aids, potentially leading to biased decisions. This experiment studies the effectiveness of three debiasing techniques in the elicitation of subjective probability distributions. It is hypothesized that the Socratic procedure [18] and the devil's advocate approach [6] [7] [31] [32] [33] [34] will increase subjective uncertainty and thus help assessors overcome a persistent bias called “overconfidence.” Mental encoding of the frequency of the observed instances into prespecified intervals, however, is expected to decrease subjective uncertainty and to help assessors better capture, mentally, the location and skewness of the observed distribution. The assessors' ratings of uncertainty confirm these hypotheses related to subjective uncertainty, but three other measures based on the dispersion of the elicited subjective probability distributions do not. Possible explanations are discussed. An intriguing explanation is that debiasing may affect what some have called “second order” uncertainty. While uncertainty ratings may include this second component, the measures based on the elicited distributions relate only to “first order” uncertainty.
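A small simulated Python example of how "overconfidence" shows up in elicited distributions: intervals that are too narrow cover the truth far less often than their stated probability. The data and the degree of narrowing are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated calibration check: assessors state 90% credible intervals that are
# only half as wide as a calibrated interval would be (the hallmark of overconfidence).
n_questions = 200
truth = rng.normal(0, 10, n_questions)
center = truth + rng.normal(0, 10, n_questions)   # assessor's best guess, with error (sd = 10)
half_width = 0.5 * 1.645 * 10                      # half of a calibrated 90% half-width

hits = np.mean((truth >= center - half_width) & (truth <= center + half_width))
print(f"Fraction of true values inside stated 90% intervals: {hits:.0%}")
```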

