Similar Literature
20 similar documents retrieved (search time: 187 ms)
1.
Risk Analysis, 2018, 38(1): 163-176
The U.S. Environmental Protection Agency (EPA) uses health risk assessment to help inform its decisions in setting national ambient air quality standards (NAAQS). EPA's standard approach is to make epidemiologically‐based risk estimates based on a single statistical model selected from the scientific literature, called the “core” model. The uncertainty presented for “core” risk estimates reflects only the statistical uncertainty associated with that one model's concentration‐response function parameter estimate(s). However, epidemiologically‐based risk estimates are also subject to “model uncertainty,” which is a lack of knowledge about which of many plausible model specifications and data sets best reflects the true relationship between health and ambient pollutant concentrations. In 2002, a National Academies of Sciences (NAS) committee recommended that model uncertainty be integrated into EPA's standard risk analysis approach. This article discusses how model uncertainty can be taken into account with an integrated uncertainty analysis (IUA) of health risk estimates. It provides an illustrative numerical example based on risk of premature death from respiratory mortality due to long‐term exposures to ambient ozone, which is a health risk considered in the 2015 ozone NAAQS decision. This example demonstrates that use of IUA to quantitatively incorporate key model uncertainties into risk estimates produces a substantially altered understanding of the potential public health gain of a NAAQS policy decision, and that IUA can also produce more helpful insights to guide that decision, such as evidence of decreasing incremental health gains from progressive tightening of a NAAQS.
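The integrated uncertainty analysis described above can be sketched as a two-stage Monte Carlo: first sample which concentration-response model is "true" according to subjective model probabilities, then sample that model's slope. The candidate slopes, model probabilities, exposure change, and baseline deaths below are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical candidate C-R models: log-linear slopes per ppb of ozone,
# their standard errors, and subjective model probabilities.
betas = np.array([0.0040, 0.0020, 0.0005])
ses   = np.array([0.0010, 0.0008, 0.0015])
probs = np.array([0.5, 0.3, 0.2])

delta_c = 5.0        # assumed ozone reduction (ppb) from the policy
baseline = 10_000.0  # assumed baseline respiratory deaths

# "Core" approach: statistical uncertainty from the single core model only.
core_beta = rng.normal(betas[0], ses[0], n)
core_avoided = baseline * (1.0 - np.exp(-core_beta * delta_c))

# IUA: model uncertainty layered on top of statistical uncertainty.
m = rng.choice(len(betas), size=n, p=probs)
iua_beta = rng.normal(betas[m], ses[m])
iua_avoided = baseline * (1.0 - np.exp(-iua_beta * delta_c))

lo, hi = np.percentile(iua_avoided, [2.5, 97.5])
```

The integrated distribution is both wider and centered differently than the core-only one, which is the "substantially altered understanding" the abstract refers to.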

2.
Regulatory impact analyses (RIAs), required for new major federal regulations, are often criticized for not incorporating epistemic uncertainties into their quantitative estimates of benefits and costs. “Integrated uncertainty analysis,” which relies on subjective judgments about epistemic uncertainty to quantitatively combine epistemic and statistical uncertainties, is often prescribed. This article identifies an additional source for subjective judgment regarding a key epistemic uncertainty in RIAs for National Ambient Air Quality Standards (NAAQS)—the regulator's degree of confidence in continuation of the relationship between pollutant concentration and health effects at varying concentration levels. An illustrative example is provided based on the 2013 decision on the NAAQS for fine particulate matter (PM2.5). It shows how the regulator's justification for setting that NAAQS was structured around the regulator's subjective confidence in the continuation of health risks at different concentration levels, and it illustrates how such expressions of uncertainty might be directly incorporated into the risk reduction calculations used in the rule's RIA. The resulting confidence-weighted quantitative risk estimates are found to be substantially different from those in the RIA for that rule. This approach for accounting for an important source of subjective uncertainty also offers the advantage of establishing consistency between the scientific assumptions underlying RIA risk and benefit estimates and the science-based judgments developed when deciding on the relevant standards for important air pollutants such as PM2.5.
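The confidence-weighting idea above can be sketched in a few lines: partition the avoided-mortality estimate by concentration bin, and down-weight each bin by the regulator's subjective confidence that the concentration-response relationship continues there. The bins, counts, and confidence values are hypothetical, not the rule's numbers.

```python
# Hypothetical PM2.5 concentration bins, the avoided-mortality counts an RIA
# might attribute to each, and the regulator's subjective confidence that the
# concentration-response relationship continues in that bin.
bins = [
    {"ugm3": "12-15", "avoided_deaths": 1200.0, "confidence": 0.95},
    {"ugm3": "10-12", "avoided_deaths": 800.0,  "confidence": 0.50},
    {"ugm3": "8-10",  "avoided_deaths": 500.0,  "confidence": 0.10},
]

unweighted = sum(b["avoided_deaths"] for b in bins)
weighted = sum(b["avoided_deaths"] * b["confidence"] for b in bins)
print(f"unweighted: {unweighted:.0f}, confidence-weighted: {weighted:.0f}")
# -> unweighted: 2500, confidence-weighted: 1590
```

Because confidence falls with concentration, most of the discrepancy between the two totals comes from the lowest bins, which mirrors the abstract's finding that weighted estimates differ substantially from the RIA's.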

3.
David M. Stieb. Risk Analysis, 2012, 32(12): 2133-2151
The monetized value of avoided premature mortality typically dominates the calculated benefits of air pollution regulations; therefore, characterization of the uncertainty surrounding these estimates is key to good policymaking. Formal expert judgment elicitation methods are one means of characterizing this uncertainty. They have been applied to characterize uncertainty in the mortality concentration‐response function, but have yet to be used to characterize uncertainty in the economic values placed on avoided mortality. We report the findings of a pilot expert judgment study for Health Canada designed to elicit quantitative probabilistic judgments of uncertainties in Value‐per‐Statistical‐Life (VSL) estimates for use in an air pollution context. The two‐stage elicitation addressed uncertainties in both a base case VSL for a reduction in mortality risk from traumatic accidents and in benefits transfer‐related adjustments to the base case for an air quality application (e.g., adjustments for age, income, and health status). Results for each expert were integrated to develop example quantitative probabilistic uncertainty distributions for VSL that could be incorporated into air quality models.
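One common way to integrate elicited judgments across experts, as the last sentence describes, is equal-weight linear pooling: draw an equal-sized sample from each expert's distribution and combine them. The per-expert lognormal parameters below are hypothetical stand-ins, not the study's elicited values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical expert judgments: each expert's VSL uncertainty encoded as a
# lognormal via (median in $ millions, sigma of the log).
experts = [(5.0, 0.4), (7.0, 0.6), (4.0, 0.3)]

# Equal-weight linear pooling: concatenate equal-sized samples per expert.
pooled = np.concatenate(
    [rng.lognormal(np.log(med), sig, n) for med, sig in experts]
)
q05, q50, q95 = np.percentile(pooled, [5, 50, 95])
```

The pooled sample can then feed directly into an air quality benefits model as a probabilistic VSL input; other pooling rules (performance weights, logarithmic pooling) are possible design choices.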

4.
Cox LA. Risk Analysis, 2012, 32(5): 816-829
Recent proposals to further reduce permitted levels of air pollution emissions are supported by high projected values of resulting public health benefits. For example, the Environmental Protection Agency recently estimated that the 1990 Clean Air Act Amendment (CAAA) will produce human health benefits in 2020, from reduced mortality rates, valued at nearly $2 trillion per year, compared to compliance costs of $65 billion ($0.065 trillion). However, while compliance costs can be measured, health benefits are unproved: they depend on a series of uncertain assumptions. Among these are that additional life expectancy gained by a beneficiary (with median age of about 80 years) should be valued at about $80,000 per month; that there is a 100% probability that a positive, linear, no-threshold, causal relation exists between PM2.5 concentration and mortality risk; and that progress in medicine and disease prevention will not greatly diminish this relationship. We present an alternative uncertainty analysis that assigns a positive probability of error to each assumption. This discrete uncertainty analysis suggests (with probability >90% under plausible alternative assumptions) that the costs of CAAA exceed its benefits. Thus, instead of suggesting to policymakers that CAAA benefits are almost certainly far larger than its costs, we believe that accuracy requires acknowledging that the costs purchase a relatively uncertain, possibly much smaller, benefit. The difference between these contrasting conclusions is driven by different approaches to uncertainty analysis, that is, excluding or including discrete uncertainties about the main assumptions required for nonzero health benefits to exist at all.
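A discrete uncertainty analysis of this kind can be sketched by treating each required assumption as a Bernoulli event: benefits are nonzero only when every assumption holds. The probabilities and the benefit distribution below are illustrative assumptions, not the article's elicited values.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical subjective probabilities that each assumption required for
# nonzero benefits actually holds (illustrative, not the article's numbers).
p_causal = 0.5     # positive, linear, no-threshold causal PM2.5 relation
p_persists = 0.7   # relation not diminished by medical progress

cost = 0.065                                          # $ trillions per year
benefit_if_true = rng.lognormal(np.log(1.0), 0.5, n)  # $ trillions, if all hold

holds = (rng.random(n) < p_causal) & (rng.random(n) < p_persists)
benefit = np.where(holds, benefit_if_true, 0.0)
p_costs_exceed_benefits = np.mean(benefit < cost)
```

With these illustrative inputs, the probability that costs exceed benefits is dominated by the chance that some necessary assumption fails, exactly the mechanism the abstract describes.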

5.
Despite the key role that subjective probabilities play in decisions made under conditions of uncertainty, little is known about the ability of probability assessors in developing these estimates. A literature survey is followed by a review of results from a continuing series of experiments designed to investigate the external accuracy of subjectively assessed probability distributions. Initial findings confirm that probability assessments provided by untrained assessors are of questionable value in predicting the distribution of actual outcomes of uncertain events. Particular difficulty is encountered when subjects attempt to quantify the extremes of their subjective distributions. The impact of extended assessor training and hypotheses regarding the effects of variation in the assessor's information level and the complexity of the assessment task are explored. Implications for applied decision making are drawn, and directions for future investigations are suggested.

6.
We propose a distribution‐free entropy‐based methodology to calculate the expected value of an uncertainty reduction effort and present our results within the context of reducing demand uncertainty. In contrast to existing techniques, the methodology does not require a priori assumptions regarding the underlying demand distribution, does not require sampled observations to be the mechanism by which uncertainty is reduced, and provides an expectation of information value as opposed to an upper bound. In our methodology, a decision maker uses his existing knowledge combined with the maximum entropy principle to model both his present and potential future states of uncertainty as probability densities over all possible demand distributions. Modeling uncertainty in this way provides for a theoretically justified and intuitively satisfying method of valuing an uncertainty reduction effort without knowing the information to be revealed. We demonstrate the methodology's use in three different settings: (i) a newsvendor valuing knowledge of expected demand, (ii) a short life cycle product supply manager considering the adoption of a quick response strategy, and (iii) a revenue manager making a pricing decision with limited knowledge of the market potential for his product.

7.
Since motor vehicles are a major air pollution source, urban designs that decrease private automobile use could improve air quality and decrease air pollution health risks. Yet, the relationships among urban form, air quality, and health are complex and not fully understood. To explore these relationships, we model the effects of three alternative development scenarios on annual average fine particulate matter (PM2.5) concentrations in ambient air and associated health risks from PM2.5 exposure in North Carolina's Raleigh‐Durham‐Chapel Hill area. We integrate transportation demand, land‐use regression, and health risk assessment models to predict air quality and health impacts for three development scenarios: current conditions, compact development, and sprawling development. Compact development slightly decreases (−0.2%) point estimates of regional annual average PM2.5 concentrations, while sprawling development slightly increases (+1%) concentrations. However, point estimates of health impacts are in opposite directions: compact development increases (+39%) and sprawling development decreases (−33%) PM2.5‐attributable mortality. Furthermore, compactness increases local variation in PM2.5 concentrations and increases the severity of local air pollution hotspots. Hence, this research suggests that while compact development may improve air quality from a regional perspective, it may also increase the concentration of PM2.5 in local hotspots and increase population exposure to PM2.5. Health effects may be magnified if compact neighborhoods and PM2.5 hotspots are spatially co‐located. We conclude that compactness alone is an insufficient means of reducing the public health impacts of transportation emissions in automobile‐dependent regions. Rather, additional measures are needed to decrease automobile dependence and the health risks of transportation emissions.

8.
Many environmental data sets, such as for air toxic emission factors, contain several values reported only as below detection limit. Such data sets are referred to as "censored." Typical approaches to dealing with the censored data sets include replacing censored values with arbitrary values of zero, one-half of the detection limit, or the detection limit. Here, an approach to quantification of the variability and uncertainty of censored data sets is demonstrated. Empirical bootstrap simulation is used to simulate censored bootstrap samples from the original data. Maximum likelihood estimation (MLE) is used to fit parametric probability distributions to each bootstrap sample, thereby specifying alternative estimates of the unknown population distribution of the censored data sets. Sampling distributions for uncertainty in statistics such as the mean, median, and percentile are calculated. The robustness of the method was tested by application to different degrees of censoring, sample sizes, coefficients of variation, and numbers of detection limits. Lognormal, gamma, and Weibull distributions were evaluated. The reliability of using this method to estimate the mean is evaluated by averaging the best estimated means of 20 cases for small sample size of 20. The confidence intervals for distribution percentiles estimated with bootstrap/MLE method compared favorably to results obtained with the nonparametric Kaplan-Meier method. The bootstrap/MLE method is illustrated via an application to an empirical air toxic emission factor data set.
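The bootstrap/MLE approach above can be sketched for a lognormal fit with a single detection limit: censored observations contribute P(X < DL) to the likelihood, and resampling plus refitting yields a sampling distribution for the mean. The data set and detection limit below are hypothetical, and only the lognormal case of the three distributions the study evaluates is shown.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)

# Hypothetical emission-factor data: 8 detected values plus 4 results
# reported only as below a single detection limit of 0.5.
detected = np.array([0.8, 1.1, 1.9, 2.4, 3.0, 4.2, 5.1, 6.6])
values = np.concatenate([detected, np.full(4, 0.5)])
censored = np.concatenate([np.zeros(8, bool), np.ones(4, bool)])

def fit_lognormal(x, cens):
    """MLE of a lognormal; censored points contribute log P(X < DL)."""
    def nll(p):
        mu, sig = p
        if sig <= 0:
            return np.inf
        ll = stats.lognorm.logpdf(x[~cens], s=sig, scale=np.exp(mu)).sum()
        ll += stats.lognorm.logcdf(x[cens], s=sig, scale=np.exp(mu)).sum()
        return -ll
    return optimize.minimize(nll, [0.5, 1.0], method="Nelder-Mead").x

# Empirical bootstrap: resample (value, flag) pairs, refit by MLE, and
# collect the implied population mean exp(mu + sig^2 / 2).
boot_means = []
for _ in range(200):
    i = rng.integers(0, len(values), len(values))
    mu, sig = fit_lognormal(values[i], censored[i])
    boot_means.append(np.exp(mu + sig**2 / 2))
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

The percentile interval (lo, hi) is the bootstrap confidence interval for the mean; the same loop can collect medians or upper percentiles, as in the study.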

9.
Starting from the view that the uncertainty inherent in probabilistic and statistical models is essential and cannot be eliminated, this article studies a continuous-time principal-agent problem under Knightian uncertainty, focusing on how the agent's moral hazard affects the execution and survival of the contract. First, dynamic equations are established for the agent's continuation value and the principal's expected profit. Second, the stochastic optimality principle under sublinear expectation is applied to characterize the economic behavior of the principal and the agent more precisely, yielding the Hamilton-Jacobi-Bellman (HJB) equation for the principal's value function together with expressions for the principal's optimal payment to the agent and the agent's optimal effort level. Finally, numerical simulation of the theoretical solution is used to analyze how Knightian uncertainty affects the optimal strategies of the principal and the agent and the optimal contract.

10.
Expert judgments expressed as subjective probability distributions provide an appropriate means of incorporating technical uncertainty in some quantitative policy studies. Judgments and distributions obtained from several experts allow one to explore the extent to which the conclusions reached in such a study depend on which expert one talks to. For the case of sulfur air pollution from coal-fired power plants, estimates of sulfur mass balance as a function of plume flight time are shown to vary little across the range of opinions of leading atmospheric scientists while estimates of possible health impacts are shown to vary widely across the range of opinions of leading scientists in air pollution health effects.

11.
Concern about the degree of uncertainty and potential conservatism in deterministic point estimates of risk has prompted researchers to turn increasingly to probabilistic methods for risk assessment. With Monte Carlo simulation techniques, distributions of risk reflecting uncertainty and/or variability are generated as an alternative. In this paper the compounding of conservatism(1) between the level associated with point estimate inputs selected from probability distributions and the level associated with the deterministic value of risk calculated using these inputs is explored. Two measures of compounded conservatism are compared and contrasted. The first measure considered, F, is defined as the ratio of the risk value, Rd, calculated deterministically as a function of n inputs each at the jth percentile of its probability distribution, and the risk value, Rj, that falls at the jth percentile of the simulated risk distribution (i.e., F = Rd/Rj). The percentile of the simulated risk distribution which corresponds to the deterministic value, Rd, serves as a second measure of compounded conservatism. Analytical results for simple products of lognormal distributions are presented. In addition, a numerical treatment of several complex cases is presented using five simulation analyses from the literature to illustrate. Overall, there are cases in which conservatism compounds dramatically for deterministic point estimates of risk constructed from upper percentiles of input parameters, as well as those for which the effect is less notable. The analytical and numerical techniques discussed are intended to help analysts explore the factors that influence the magnitude of compounding conservatism in specific cases.
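Both measures of compounded conservatism can be computed directly for the product-of-lognormals case the abstract analyzes. The input parameters below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Risk modeled as a product of three lognormal inputs (illustrative
# parameters, not the article's).
mus = np.array([0.0, 1.0, -1.0])     # means of ln(X_i)
sigmas = np.array([0.5, 0.4, 0.6])   # std devs of ln(X_i)
j = 0.95
z = stats.norm.ppf(j)

# Deterministic risk with every input at its 95th percentile:
Rd = np.exp(np.sum(mus + sigmas * z))

# Simulated risk distribution and its 95th percentile:
n = 200_000
risk = np.prod([rng.lognormal(m, s, n) for m, s in zip(mus, sigmas)], axis=0)
Rj = np.percentile(risk, 100 * j)

F = Rd / Rj                            # first measure: F = Rd / Rj
pct_of_Rd = 100 * np.mean(risk <= Rd)  # second measure: percentile of Rd
```

For lognormal products the compounding is also analytic: F = exp(z(Σσᵢ − √(Σσᵢ²))) > 1 whenever more than one input is uncertain, so stacking 95th-percentile inputs lands the deterministic estimate well above the 95th percentile of the risk distribution.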

12.
In environmental risk management, there are often interests in maximizing public health benefits (efficiency) and addressing inequality in the distribution of health outcomes. However, both dimensions are not generally considered within a single analytical framework. In this study, we estimate both total population health benefits and changes in quantitative indicators of health inequality for a number of alternative spatial distributions of diesel particulate filter retrofits across half of an urban bus fleet in Boston, Massachusetts. We focus on the impact of emissions controls on primary fine particulate matter (PM2.5) emissions, modeling the effect on PM2.5 concentrations and premature mortality. Given spatial heterogeneity in baseline mortality rates, we apply the Atkinson index and other inequality indicators to quantify changes in the distribution of mortality risk. Across the different spatial distributions of control strategies, the public health benefits varied by more than a factor of two, related to factors such as mileage driven per day, population density near roadways, and baseline mortality rates in exposed populations. Changes in health inequality indicators varied across control strategies, with the subset of optimal strategies considering both efficiency and equality generally robust across different parametric assumptions and inequality indicators. Our analysis demonstrates the viability of formal analytical approaches to jointly address both efficiency and equality in risk assessment, providing a tool for decisionmakers who wish to consider both issues.
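The Atkinson index used above is a one-line formula: one minus the ratio of the "equally distributed equivalent" to the mean, with an inequality-aversion parameter ε. The per-tract risk vectors below are hypothetical, meant only to show a strategy that both lowers total risk and lowers the index.

```python
import numpy as np

def atkinson(x, eps=0.75):
    """Atkinson inequality index for a vector of risks x (requires eps != 1)."""
    x = np.asarray(x, dtype=float)
    # Equally distributed equivalent level of risk:
    ede = np.mean(x ** (1 - eps)) ** (1 / (1 - eps))
    return 1 - ede / np.mean(x)

# Hypothetical per-tract mortality risks (deaths per 10,000) before and
# after a targeted retrofit strategy.
baseline = np.array([1.0, 1.2, 2.5, 4.0, 0.8])
after    = np.array([0.9, 1.1, 1.8, 2.6, 0.75])
```

A strategy can then be scored on both axes at once: total risk reduction (efficiency) and the change in `atkinson(...)` (equality), which is the joint framework the abstract argues for.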

13.
Subjective probability distributions constitute an important part of the input to decision analysis and other decision aids. The long list of persistent biases associated with human judgments under uncertainty [16] suggests, however, that these biases can be translated into the elicited probabilities which, in turn, may be reflected in the output of the decision aids, potentially leading to biased decisions. This experiment studies the effectiveness of three debiasing techniques in elicitation of subjective probability distributions. It is hypothesized that the Socratic procedure [18] and the devil's advocate approach [6] [7] [31] [32] [33] [34] will increase subjective uncertainty and thus help assessors overcome a persistent bias called “overconfidence.” Mental encoding of the frequency of the observed instances into prespecified intervals, however, is expected to decrease subjective uncertainty and to help assessors better capture, mentally, the location and skewness of the observed distribution. The assessors' ratings of uncertainty confirm these hypotheses related to subjective uncertainty but three other measures based on the dispersion of the elicited subjective probability distributions do not. Possible explanations are discussed. An intriguing explanation is that debiasing may affect what some have called “second order” uncertainty. While uncertainty ratings may include this second component, the measures based on the elicited distributions relate only to “first order” uncertainty.

14.
As part of its periodic re-evaluation of particulate matter (PM) standards, the U.S. Environmental Protection Agency estimated the health risk reductions associated with attainment of alternative PM standards in two locations in the United States with relatively complete air quality data: Philadelphia and Los Angeles. PM standards at the time of the analysis were defined for particles of aerodynamic diameter less than or equal to 10 µm, denoted as PM-10. The risk analyses estimated the risk reductions that would be associated with changing from attainment of the PM-10 standards then in place to attainment of alternative standards using an indicator measuring fine particles, defined as those particles of aerodynamic diameter less than or equal to 2.5 µm and denoted as PM-2.5. Annual average PM-2.5 standards of 12.5, 15, and 20 µg/m³ were considered in various combinations with daily PM-2.5 standards of 50 and 65 µg/m³. Attainment of a standard or set of standards was simulated by a proportional rollback of "as is" daily PM concentrations to daily PM concentrations that would just meet the standard(s). The predicted reductions in the incidence of health effects varied from zero, for those alternative standards already being met, to substantial reductions of over 88% of all PM-associated incidence (e.g., in mortality associated with long-term exposures in Los Angeles, under attainment of an annual standard of 12.5 µg/m³). Sensitivity analyses and integrated uncertainty analyses assessed the multiple-source uncertainty surrounding estimates of risk reduction.
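The proportional rollback step can be sketched in a few lines: scale every day's concentration by the single factor that brings the annual average just to the standard. The synthetic monitor data, the standard level, and the choice to leave a background concentration unadjusted (a common refinement) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical year of daily PM-2.5 concentrations (µg/m³) at a monitor.
daily = rng.lognormal(mean=np.log(18.0), sigma=0.45, size=365)

annual_std = 15.0  # annual-average standard being simulated
background = 5.0   # assumed background, not rolled back

# Proportional rollback: scale the above-background increment of every day
# by the one factor that brings the annual mean just to the standard.
factor = (annual_std - background) / (daily.mean() - background)
rolled = background + factor * (daily - background)
```

The rolled-back series then feeds the concentration-response functions; days already below the standard are reduced too, which is the defining (and debatable) feature of proportional rollback versus peak-shaving approaches.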

15.
16.
This article reports on a study to quantify expert beliefs about the explosion probability of unexploded ordnance (UXO). Some 1,976 sites at closed military bases in the United States are contaminated with UXO and are slated for cleanup, at an estimated cost of $15–140 billion. Because no available technology can guarantee 100% removal of UXO, information about explosion probability is needed to assess the residual risks of civilian reuse of closed military bases and to make decisions about how much to invest in cleanup. This study elicited probability distributions for the chance of UXO explosion from 25 experts in explosive ordnance disposal, all of whom have had field experience in UXO identification and deactivation. The study considered six different scenarios: three different types of UXO handled in two different ways (one involving children and the other involving construction workers). We also asked the experts to rank by sensitivity to explosion 20 different kinds of UXO found at a case study site at Fort Ord, California. We found that the experts do not agree about the probability of UXO explosion, with significant differences among experts in their mean estimates of explosion probabilities and in the amount of uncertainty that they express in their estimates. In three of the six scenarios, the divergence was so great that the average of all the expert probability distributions was statistically indistinguishable from a uniform (0, 1) distribution—suggesting that the sum of expert opinion provides no information at all about the explosion risk. The experts' opinions on the relative sensitivity to explosion of the 20 UXO items also diverged. The average correlation between rankings of any pair of experts was 0.41, which, statistically, is barely significant (p = 0.049) at the 95% confidence level. Thus, one expert's rankings provide little predictive information about another's rankings.
The lack of consensus among experts suggests that empirical studies are needed to better understand the explosion risks of UXO.
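The pairwise rank agreement reported above (an average of 0.41 across expert pairs) is a Spearman rank correlation over the 20 ranked items. The two hypothetical rankings below are more concordant than the study's typical pair; with no ties, the statistic reduces to the classic d² formula.

```python
import numpy as np
from scipy import stats

n = 20
# Hypothetical sensitivity rankings of 20 UXO items by two experts
# (1 = most sensitive); illustrative, not the study's data.
expert_a = np.arange(1, n + 1)
expert_b = np.array([2, 1, 5, 3, 4, 8, 6, 10, 7, 9,
                     14, 11, 13, 12, 16, 15, 20, 17, 19, 18])

rho, p = stats.spearmanr(expert_a, expert_b)

# With no ties, Spearman's rho equals 1 - 6*sum(d^2) / (n*(n^2 - 1)):
d = expert_a - expert_b
manual = 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))
```

Averaging `rho` over all expert pairs, as the study does, gives a single consensus measure; values near the study's 0.41 with p near 0.05 indicate that one expert's ranking barely predicts another's.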

17.
Risks from exposure to contaminated land are often assessed with the aid of mathematical models. The current probabilistic approach is a considerable improvement on previous deterministic risk assessment practices, in that it attempts to characterize uncertainty and variability. However, some inputs continue to be assigned as precise numbers, while others are characterized as precise probability distributions. Such precision is hard to justify, and we show in this article how rounding errors and distribution assumptions can affect an exposure assessment. The outcome of traditional deterministic point estimates and Monte Carlo simulations were compared to probability bounds analyses. Assigning all scalars as imprecise numbers (intervals prescribed by significant digits) added uncertainty to the deterministic point estimate of about one order of magnitude. Similarly, representing probability distributions as probability boxes added several orders of magnitude to the uncertainty of the probabilistic estimate. This indicates that the size of the uncertainty in such assessments is actually much greater than currently reported. The article suggests that full disclosure of the uncertainty may facilitate decision making in opening up a negotiation window. In the risk analysis process, it is also an ethical obligation to clarify the boundary between the scientific and social domains.
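The "intervals prescribed by significant digits" idea can be sketched with plain interval arithmetic: each scalar reported to limited precision becomes a bracket, and the brackets are propagated through a stylized exposure ratio. The bounds below are illustrative, not the article's site data.

```python
# Scalars reported to limited significant digits become intervals; here a
# stylized exposure metric C * IR / BW is propagated with interval
# arithmetic. (Illustrative bounds, not the article's data.)
C  = (0.25, 0.35)   # soil concentration, reported as "0.3"
IR = (95.0, 105.0)  # ingestion rate, reported as "100"
BW = (65.0, 75.0)   # body weight, reported as "70"

def imul(a, b):
    """Interval multiplication: bounds come from the four corner products."""
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def idiv(a, b):
    """Interval division, valid only when the divisor excludes zero."""
    assert b[0] > 0, "divisor interval must exclude zero"
    return imul(a, (1.0 / b[1], 1.0 / b[0]))

dose = idiv(imul(C, IR), BW)
point = 0.3 * 100.0 / 70.0  # the deterministic point estimate
```

Even for this three-input ratio the interval spans nearly a factor of two around the point estimate; a full probability-bounds analysis does the analogous propagation on entire distribution functions (p-boxes) rather than scalars.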

18.
The U.S. Environmental Protection Agency undertook a case study in the Detroit metropolitan area to test the viability of a new multipollutant risk‐based (MP/RB) approach to air quality management, informed by spatially resolved air quality, population, and baseline health data. The case study demonstrated that the MP/RB approach approximately doubled the human health benefits achieved by the traditional approach while increasing cost less than 20%—moving closer to the objective of Executive Order 12866 to maximize net benefits. Less well understood is how the distribution of health benefits from the MP/RB and traditional strategies affect the existing inequalities in air‐pollution‐related risks in Detroit. In this article, we identify Detroit populations that may be both most susceptible to air pollution health impacts (based on local‐scale baseline health data) and most vulnerable to air pollution (based on fine‐scale PM2.5 air quality modeling and socioeconomic characteristics). Using these susceptible/vulnerable subpopulation profiles, we assess the relative impacts of each control strategy on risk inequality, applying the Atkinson Index (AI) to quantify health risk inequality at baseline and with either risk management approach. We find that the MP/RB approach delivers greater air quality improvements among these subpopulations while also generating substantial benefits among lower‐risk populations. Applying the AI, we confirm that the MP/RB strategy yields less PM2.5 mortality and asthma hospitalization risk inequality than the traditional approach. We demonstrate the value of this approach to policymakers as they develop cost‐effective air quality management plans that maximize risk reduction while minimizing health inequality.

19.
PERT, as an aid in planning for project managers, has been widely accepted, but as yet there appears to be a wide gap between the user's apparent impression of the underlying assumptions and the theoretical assumptions. This paper points out some of the more common misconceptions and their implications upon the total project. The Beta distribution as the underlying probability distribution is evaluated as to first, the overall effects of the inherent errors that are imposed by the basic PERT assumptions, and second, the effects, from a probability distribution viewpoint, of some of the more common PERT misconceptions. Finally, several alternatives to the basic PERT methodology are explored, both from the theoretical and practical viewpoints.
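One of the inherent errors the abstract alludes to is easy to exhibit: the classic PERT moment formulas mean = (a + 4m + b)/6 and sd = (b − a)/6 are only approximations to the Beta distribution usually matched to the three-point estimate. The activity durations below are hypothetical.

```python
from scipy import stats

# Hypothetical three-point estimate for an activity (days).
a, m, b = 2.0, 5.0, 14.0   # optimistic, most likely, pessimistic

# Classic PERT moment approximations:
pert_mean = (a + 4 * m + b) / 6   # = 6.0
pert_sd = (b - a) / 6             # = 2.0

# The Beta distribution commonly matched to (a, m, b), with mode at m:
alpha = 1 + 4 * (m - a) / (b - a)
beta = 1 + 4 * (b - m) / (b - a)
dist = stats.beta(alpha, beta, loc=a, scale=b - a)
```

With this standard parameterization the means agree exactly, but the Beta's standard deviation exceeds (b − a)/6 whenever the mode is off-center, so summing PERT variances along a critical path understates schedule uncertainty for skewed activities.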

20.
Roger Cooke. Risk Analysis, 2010, 30(3): 330-339
The practice of uncertainty factors as applied to noncancer endpoints in the IRIS database harkens back to traditional safety factors. In the era before risk quantification, these were used to build in a “margin of safety.” As risk quantification takes hold, the safety factor methods yield to quantitative risk calculations to guarantee safety. Many authors believe that uncertainty factors can be given a probabilistic interpretation as ratios of response rates, and that the reference values computed according to the IRIS methodology can thus be converted to random variables whose distributions can be computed with Monte Carlo methods, based on the distributions of the uncertainty factors. Recent proposals from the National Research Council echo this view. Based on probabilistic arguments, several authors claim that the current practice of uncertainty factors is overprotective. When interpreted probabilistically, uncertainty factors entail very strong assumptions on the underlying response rates. For example, the factor for extrapolating from animal to human is the same whether the dosage is chronic or subchronic. Together with independence assumptions, these assumptions entail that the covariance matrix of the logged response rates is singular. In other words, the accumulated assumptions entail a log‐linear dependence between the response rates. This in turn means that any uncertainty analysis based on these assumptions is ill‐conditioned; it effectively computes uncertainty conditional on a set of zero probability. The practice of uncertainty factors is due for a thorough review. Two directions are briefly sketched, one based on standard regression models, and one based on nonparametric continuous Bayesian belief nets.
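The singularity argument can be made concrete in a small simulation: if the same animal-to-human factor applies to chronic and subchronic endpoints, the four logged response rates are linear combinations of only three independent sources, so their covariance matrix is rank-deficient. The factor distributions and the factor algebra below are a stylized stand-in for the IRIS-style bookkeeping, not the article's model.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Independent lognormal uncertainty factors (so their logs are normal). The
# SAME animal-to-human factor is applied to chronic and subchronic
# endpoints, mirroring the assumption criticized in the abstract.
log_uf_animal = rng.normal(np.log(10), 0.3, n)  # animal -> human
log_uf_sub    = rng.normal(np.log(3), 0.2, n)   # chronic -> subchronic
log_animal_chronic = rng.normal(np.log(100), 0.5, n)  # base logged rate

# Four logged response rates implied by the factor algebra:
rates = np.stack([
    log_animal_chronic,                               # animal, chronic
    log_animal_chronic + log_uf_sub,                  # animal, subchronic
    log_animal_chronic - log_uf_animal,               # human, chronic
    log_animal_chronic + log_uf_sub - log_uf_animal,  # human, subchronic
])
cov = np.cov(rates)
rank = np.linalg.matrix_rank(cov, tol=1e-8)  # 3 < 4: singular
```

The fourth rate is exactly the second plus the third minus the first, so the 4×4 covariance matrix has rank 3: all probability mass sits on a hyperplane, which is the "conditioning on a set of zero probability" the abstract describes.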


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号