Similar Documents
20 similar documents retrieved.
1.
Decisionmakers are often presented with explicit likelihood assessments (e.g., there is a 10% chance that an attack will occur over the next three months) together with supporting narrative evidence in forecasting and risk communication domains. Decisionmakers are thought to rely on both numerical and narrative information to the extent that they perceive the information to be diagnostic, accurate, and trustworthy. In two studies, we explored how lay decisionmakers varying in numeracy evaluated and used likelihood assessments and narrative evidence in forecasts. Overall, the less numerate reported higher risk and likelihood perceptions. In simple probabilistic forecasts without narrative evidence, decisionmakers at all levels of numeracy were able to use the stated likelihood information, although the risk perceptions of the less numerate were more affected by likelihood format. When a forecast included narrative evidence, decisionmakers were better able to use the stated likelihood when it was expressed as a percentage rather than in a frequency or verbal format. The more numerate relied more on the stated likelihood in their evaluations, whereas the less numerate focused more on the narrative evidence. These results have important implications for risk analysts and forecasters who need to report the results of their analyses to decisionmakers: decisionmakers varying in numerical ability may evaluate forecasts in different ways depending on the types of information they find easiest to evaluate.

2.
3.
Biomagnification of organochlorine and other persistent organic contaminants by higher-trophic-level organisms represents one of the most significant sources of uncertainty and variability in evaluating potential risks associated with disposal of dredged materials. While it is important to distinguish between population variability (e.g., true population heterogeneity in fish weight and lipid content) and uncertainty (e.g., measurement error), the two can be operationally difficult to separate in probabilistic estimates of human health and ecological risk. We propose a disaggregation of uncertain and variable parameters based on: (1) the availability of supporting data; (2) the specific management and regulatory context (in this case, the U.S. Army Corps of Engineers/U.S. Environmental Protection Agency tiered approach to dredged material management); and (3) professional judgment and experience in conducting probabilistic risk assessments. We describe and quantitatively evaluate several sources of uncertainty and variability in estimating risk to human health from trophic transfer of polychlorinated biphenyls (PCBs), using a case study of sediments obtained from the New York-New Jersey Harbor and under evaluation for disposal at an open-water offshore disposal site within the northeast region. The estimates of PCB concentrations in fish and of dietary doses of PCBs to humans ingesting fish are expressed as distributions of values, of which the arithmetic mean or mode represents a particular fractile. The distribution of risk values is obtained by specifying distributions for input parameters of a food chain biomagnification model developed by Gobas, disaggregated to represent either uncertainty or variability. Only those sources of uncertainty that could be quantified were included in the analysis. Results for several different two-dimensional Latin Hypercube analyses are provided to evaluate the influence of the uncertain-versus-variable disaggregation of model parameters. The analysis suggests that variability in human exposure parameters is greater than the uncertainty bounds on any particular fractile, given the described assumptions.
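As a concrete illustration of the two-dimensional approach, the sketch below nests a variability loop inside a Latin Hypercube sample over an uncertain parameter. It is not the Gobas model; the sediment concentration, the BSAF distribution, and the exposure distributions are all invented for illustration.

```python
import numpy as np
from scipy.stats import qmc, lognorm

rng = np.random.default_rng(0)
C_SED = 50.0  # PCB concentration in sediment (ng/g), assumed

# Outer loop: uncertainty in a biota-sediment accumulation factor (BSAF).
n_unc, n_var = 100, 1000
lhs = qmc.LatinHypercube(d=1, seed=0)
bsaf = lognorm.ppf(lhs.random(n_unc).ravel(), s=0.5, scale=1.0)  # assumed distribution

p95_doses = []
for b in bsaf:
    c_fish = C_SED * b  # ng/g PCB in fish tissue
    # Inner loop: variability in human exposure (assumed distributions).
    ir = rng.lognormal(mean=np.log(20.0), sigma=0.6, size=n_var)     # g fish/day
    bw = rng.normal(loc=70.0, scale=12.0, size=n_var).clip(min=30)   # body weight, kg
    dose = c_fish * ir / bw  # dietary dose, ng/kg-day
    p95_doses.append(np.percentile(dose, 95))

# Uncertainty bounds on the 95th percentile (a fractile) of the variability distribution.
print(f"95th-percentile dose: {np.percentile(p95_doses, 5):.1f}"
      f" to {np.percentile(p95_doses, 95):.1f} ng/kg-day (90% uncertainty interval)")
```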

4.
Given the prevalence of uncertainty and variability in estimates of environmental health risks, it is important to know how citizens interpret information representing uncertainty in risk estimates. Ranges of risk estimates from a hypothetical industry source elicited divergent evaluations of risk assessors' honesty and competence among New Jersey residents living within one mile of one or more factories. A plurality saw ranges of risk estimates as both honest and competent, but most judged such ranges as deficient on one or both dimensions. Respondents wanted definitive conclusions about safety, tended to believe the high end of the range was more likely to be an accurate estimate of the risk, and believed that institutions only discuss risks when they are "high." Acknowledgment of scientific, as opposed to self-interested, reasons for uncertainty and disputes among experts was low. Attitude toward local industry seemed associated with, if not a cause of, attitudes about ranges of risk estimates. These reactions by industry neighbors appear to replicate the findings of Johnson and Slovic (1995, 1998), despite the hypothetical producer of the risk estimates being industry rather than government. Respondents were older and less educated on average than the earlier samples, but more diverse. Regression analyses suggested that attitude toward industry was a major factor in these reactions, although other explanations (e.g., level of scientific understanding independent of general education) were not tested in this study.

5.
Limited time and resources usually characterize environmental decision making at policy organizations such as the U.S. Environmental Protection Agency. In these climates, addressing uncertainty, usually considered a flaw in scientific analyses, is often avoided. However, ignoring uncertainties can result in unpleasant policy surprises. Furthermore, it is important for decisionmakers to know how defensible a chosen policy option is relative to other options once the uncertainties of the data are considered. The purpose of this article is to suggest an approach that is unique in that it considers uncertainty in two specific ways: the uncertainty of stakeholder values within a particular decision context, and data uncertainty in light of the decision-contextual relationship between data and values. The premise of this article is that the interaction between data and stakeholder values is critical to how decision options are viewed and determines the effect of data uncertainty on the relative acceptability of those options, making an understanding of this interaction important to decisionmakers and other stakeholders. The approach utilizes a recently developed decision analysis framework and process, multi-criteria integrated resource assessment (MIRA). This article specifically addresses how MIRA can help decisionmakers better understand the effect of uncertainty on the specific (i.e., decision-contextual) environmental policy options they are deliberating.

6.
A better understanding of the uncertainty in models used for seismic risk assessment is critical to improving risk-based decisions pertaining to earthquake safety. Current models estimating the probability of collapse of a building do not consider the nature and impact of uncertainty comprehensively. This article presents a model framework to enhance seismic risk assessment and thus give decisionmakers a fuller understanding of the nature and limitations of the estimates. This can help ensure that risks are neither over- nor underestimated and that the value of acquiring accurate data is fully appreciated. The methodology provides a novel treatment of uncertainties in input variables, their propagation through the model, and their effect on the results. The study presents ranges of possible annual collapse probabilities for case studies on buildings in different parts of the world, exposed to different levels of seismicity, and with different vulnerabilities. A global sensitivity analysis was conducted to determine the significance of the uncertain variables. Two key outcomes are: (1) the uncertainty in ground-motion conversion equations has the largest effect on the uncertainty in the calculated annual collapse probability; and (2) a building's vulnerability appears to affect the range of annual collapse probabilities produced, i.e., the level of uncertainty in the estimate, with less vulnerable buildings having a smaller uncertainty.
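A variance-based global sensitivity analysis of the kind described can be sketched as follows. The three-input collapse-probability model and all parameter ranges are invented, and the Saltelli/Jansen pick-freeze estimator stands in for whatever method the study actually used.

```python
import numpy as np

rng = np.random.default_rng(1)

def collapse_prob(x):
    # Toy model: annual collapse probability from three uncertain inputs
    # (ground-motion conversion error, vulnerability, seismicity rate). Assumed form.
    gm_err, vuln, rate = x[:, 0], x[:, 1], x[:, 2]
    return rate * vuln * np.exp(gm_err)

def sample(n):
    return np.column_stack([
        rng.normal(0.0, 0.4, n),      # ground-motion conversion error (log units)
        rng.uniform(0.01, 0.10, n),   # vulnerability (collapse fraction given event)
        rng.uniform(0.001, 0.01, n),  # annual rate of damaging events
    ])

n = 100_000
A, B = sample(n), sample(n)
fA, fB = collapse_prob(A), collapse_prob(B)
var_total = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(["ground-motion error", "vulnerability", "event rate"]):
    AB = A.copy()
    AB[:, i] = B[:, i]  # freeze all inputs except input i
    s1 = np.mean(fB * (collapse_prob(AB) - fA)) / var_total  # first-order Sobol index
    print(f"S1[{name}] = {s1:.2f}")
```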

7.
Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty in computed risks that arises because dose-response model parameters are estimated from limited epidemiological data is rarely quantified. Second-order risk characterization approaches that incorporate uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. The objective of this work is therefore to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models, using Bayes's theorem and Markov chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. It is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens; criteria to validate use of the conventional approximation to the beta-Poisson model are proposed; and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape of questionable plausibility.
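A minimal sketch of the Bayesian machinery for the simpler exponential model, P(infection | dose) = 1 - exp(-r * dose), is shown below using a hand-rolled random-walk Metropolis sampler in place of OpenBUGS; the dose-response data set and the prior bounds are invented.

```python
import numpy as np
from scipy.stats import binom

# Hypothetical dose-response data: dose, subjects exposed, subjects infected.
dose = np.array([3e3, 3e4, 3e5, 3e6])
n_exp = np.array([10, 10, 10, 10])
n_inf = np.array([1, 3, 7, 9])

def log_post(log_r):
    # Exponential dose-response: P(infection | dose) = 1 - exp(-r * dose),
    # binomial likelihood, flat prior on log10(r) over a wide assumed range.
    if not (-12 < log_r < 0):
        return -np.inf
    p = 1.0 - np.exp(-(10.0 ** log_r) * dose)
    return binom.logpmf(n_inf, n_exp, p).sum()

rng = np.random.default_rng(2)
chain, cur = [], -5.0
cur_lp = log_post(cur)
for _ in range(20_000):
    prop = cur + rng.normal(0, 0.3)  # random-walk proposal on log10(r)
    prop_lp = log_post(prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    chain.append(cur)

post = np.array(chain[5_000:])  # discard burn-in
print(f"posterior median r = {10 ** np.median(post):.2e}, "
      f"95% CrI = ({10 ** np.percentile(post, 2.5):.2e}, "
      f"{10 ** np.percentile(post, 97.5):.2e})")
```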

8.
Setting Risk Priorities: A Formal Model
This article presents a model designed to capture the major aspects of setting priorities among risks, a common task in government and industry. The model has both design features, under the control of the rankers (e.g., how success is evaluated), and context features, properties of the situations that they are trying to understand (e.g., how quickly uncertainty can be reduced). The model is demonstrated in terms of two extreme ranking strategies. The first, sequential risk ranking, devotes all its resources in a given period to learning more about a single risk and its place in the overall ranking. This strategy characterizes the process for a society (or organization or individual) that throws itself completely into dealing with one risk after another. The other extreme strategy, simultaneous risk ranking, spreads available resources equally across all risks. It characterizes the most methodical of ranking exercises. Given ample ranking resources, simultaneous risk ranking will eventually provide an accurate set of priorities, whereas sequential ranking might never get to some risks. Resource constraints, however, may prevent simultaneous rankers from examining any risk very thoroughly. The model is intended to clarify the nature of ranking tasks, predict the efficacy of alternative strategies, and improve their design.
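As a rough illustration only (not the authors' formal model), the two extreme strategies can be simulated by letting estimation noise shrink with the effort spent on each risk and scoring the resulting priority order against the truth; every number below is invented.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(3)
n_risks, budget = 10, 12                 # units of study effort available (assumed)
true = rng.lognormal(0.0, 1.0, n_risks)  # true risk magnitudes (assumed)

def estimate(effort):
    # Estimation noise shrinks with effort spent on a risk (assumed 1/sqrt law);
    # an unstudied risk keeps a very noisy prior guess.
    sd = 1.0 / np.sqrt(np.maximum(effort, 0.04))
    return true * rng.lognormal(0.0, sd)

seq_effort = np.zeros(n_risks)
seq_effort[: budget // 4] = 4.0                  # sequential: 4 units per risk, one at a time
sim_effort = np.full(n_risks, budget / n_risks)  # simultaneous: spread evenly

for name, eff in [("sequential", seq_effort), ("simultaneous", sim_effort)]:
    taus = [kendalltau(true, estimate(eff))[0] for _ in range(2000)]
    print(f"{name:>12}: mean rank agreement (Kendall tau) = {np.mean(taus):.2f}")
```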

9.
Currently, there is a trend away from the use of single (often conservative) estimates of risk to summarize the results of risk analyses, in favor of stochastic methods that provide a more complete characterization of risk. The use of such stochastic methods leads to a distribution of possible values of risk, taking into account both uncertainty and variability in all of the factors affecting risk. In this article, we propose a general framework for the analysis of uncertainty and variability in the commonly encountered case of multiplicative risk models, in which risk may be expressed as a product of two or more risk factors. Our analytical methods facilitate the evaluation of overall uncertainty and variability in risk assessment, as well as the contributions of individual risk factors to both uncertainty and variability, a task that is cumbersome using Monte Carlo methods. The use of these methods is illustrated in the analysis of potential cancer risks due to the ingestion of radon in drinking water.
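The multiplicative structure is what makes such analysis tractable: the log of a product of lognormal factors is a sum of normals, so overall percentiles and each factor's variance contribution follow in closed form. A minimal sketch with invented parameters, checked against Monte Carlo:

```python
import numpy as np
from scipy.stats import norm

# Risk = F1 * F2 * F3, each lognormal with assumed ln-scale mean and SD.
mu = np.array([-2.0, 0.5, -1.0])   # means of ln(F_i)
sigma = np.array([0.4, 0.8, 0.3])  # SDs of ln(F_i)

mu_r, sd_r = mu.sum(), np.sqrt((sigma ** 2).sum())
p95 = np.exp(mu_r + norm.ppf(0.95) * sd_r)  # analytic 95th percentile of risk
print(f"analytic 95th percentile of risk: {p95:.3f}")

# Each factor's contribution to the variance of ln(risk):
for i, share in enumerate(sigma ** 2 / (sigma ** 2).sum(), start=1):
    print(f"factor {i} contributes {share:.0%} of the variance of ln(risk)")

# Monte Carlo check of the analytic percentile.
rng = np.random.default_rng(4)
risk = np.exp(rng.normal(mu, sigma, size=(1_000_000, 3)).sum(axis=1))
print(f"Monte Carlo 95th percentile: {np.percentile(risk, 95):.3f}")
```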

10.
The selection among distributional forms for inputs into uncertainty and variability (e.g., Monte Carlo) analyses is an important task. This paper examines the importance of distributional selection by comparing the overall and tail behavior of the lognormal, Weibull, gamma, and inverse Gaussian distributions. It concludes that at low relative standard deviation (RSD, below 1), the upper-tail behaviors of the distributions differ less than at higher RSD values. Sample sizes in excess of 200 are required to reliably distinguish between distributional forms at the higher RSD values. The likelihood statistic appears to offer a reasonable approach to distributional discrimination, and it, or a similar approach, should be incorporated into the distributional fitting procedures used in risk analysis.
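The likelihood-based discrimination the paper recommends can be sketched with maximum-likelihood fits of the four candidate forms; the data below are synthetic, and fixing the location at zero is an assumption made for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.lognormal(mean=0.0, sigma=1.0, size=300)  # synthetic sample, RSD > 1

candidates = {
    "lognormal": stats.lognorm,
    "Weibull": stats.weibull_min,
    "gamma": stats.gamma,
    "inverse Gaussian": stats.invgauss,
}

for name, dist in candidates.items():
    params = dist.fit(data, floc=0)            # MLE with location fixed at zero
    loglik = dist.logpdf(data, *params).sum()  # likelihood statistic
    print(f"{name:>17}: log-likelihood = {loglik:.1f}")
# The form with the highest log-likelihood is preferred; with small samples
# or low RSD the differences shrink, as the abstract notes.
```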

11.
Modern technology, together with an advanced economy, can provide a good or service in myriad ways, giving us choices about what to produce and how to produce it. To make those choices more intelligently, society needs to know not only the market price of each alternative, but also the associated health and environmental consequences. A fair comparison requires evaluating the consequences across the whole "life cycle" of each alternative: from the extraction of raw materials and processing, through manufacture/construction and use, to end-of-life. Focusing on only one stage (e.g., manufacture) of the life cycle is often misleading. Unfortunately, analysts and researchers still have only rudimentary tools to quantify the materials and energy inputs and the resulting damage to health and the environment. Life cycle assessment (LCA) provides an overall framework for identifying and evaluating these implications. Since the 1960s, considerable progress has been made in developing methods for LCA, especially in characterizing, qualitatively and quantitatively, environmental discharges. However, few of these analyses have attempted to assess the quantitative impact on the environment and health of material inputs and environmental discharges. Risk analysis and LCA are closely connected. While risk analysis has characterized and quantified the health risks of exposure to a toxicant, the policy implications have not been clear. Inferring that an occupational or public health exposure carries a nontrivial risk is only the first step in formulating a policy response. A broader framework, including LCA, is needed to see which response is likely to lower the risk without creating high risks elsewhere. Even more important, LCA has floundered at the stage of translating an inventory of environmental discharges into estimates of impact on health and the environment. Without the impact analysis, policymakers must revert to some simple rule, such as treating all discharges, regardless of which chemical, which medium, and where they are discharged, as equally toxic. Thus, risk analysts should seek LCA guidance in translating a risk analysis into policy conclusions or even advice to those at risk, and LCA needs the help of risk analysis to go beyond simplistic assumptions about the implications of a discharge inventory. We demonstrate the need and rationale for LCA, present a brief history of LCA, present examples of the application of this tool, note the limitations of LCA models, and present several methods for incorporating risk assessment into LCA. However, we warn the reader not to expect too much. A comprehensive comparison of the health and environmental implications of alternatives is beyond the state of the art. LCA is currently not able to provide risk analysts with detailed information on the chemical form and location of environmental discharges that would allow detailed estimation of the risks to individuals from toxicants. A challenge for risk analysts, for example, is to estimate health and other risks where the location and chemical speciation are not characterized precisely. Providing valuable information to decisionmakers requires advances in both LCA and risk analysis. These two disciplines should be closely linked, since each has much to contribute to the other.

12.
Variability and Uncertainty Meet Risk Management and Risk Communication
In the past decade, the use of probabilistic risk analysis techniques to quantitatively address variability and uncertainty in risks has increased in popularity, as recommended in the 1994 National Research Council report Science and Judgment in Risk Assessment. Under the 1996 Food Quality Protection Act, for example, the U.S. EPA supported the development of tools that produce distributions of risk demonstrating the variability and/or uncertainty in the results. This paradigm shift away from the use of point estimates creates new challenges for risk managers, who now struggle with decisions about how to use distributions in decision making. The challenges for risk communication, however, have only been minimally explored. This presentation uses case studies of variability in the risks of dying on the ground from a crashing airplane and from the deployment of motor vehicle airbags to demonstrate how better characterization of variability and uncertainty in the risk assessment leads to better risk communication. Analogies to food safety and environmental risks are also discussed. The presentation demonstrates that probabilistic risk assessment has an impact on both risk management and risk communication, and highlights the remaining research issues associated with using improved sensitivity and uncertainty analyses in risk assessment.

13.
14.
Quantitative risk assessment (QRA) models are used to estimate the risks of transporting dangerous goods and to assess the merits of introducing alternative risk reduction measures under different transportation scenarios and assumptions. A comprehensive QRA model was recently developed in Europe for application to road tunnels. This model can assess the merits of a limited number of "native" safety measures. In this article, we introduce a procedure for extending its scope to include a number of important "nonnative" safety measures of interest to tunnel operators and decisionmakers; nonnative safety measures were not included in the original model specification. The suggested procedure uses expert judgment and Monte Carlo simulation to model uncertainty in the revised risk estimates. The results of a case study application are presented, involving the risks of transporting a given volume of flammable liquid through a 10-km road tunnel.
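A sketch of the suggested procedure's core step, propagating expert judgment about a nonnative measure through Monte Carlo simulation, appears below; the baseline risk distribution, the triangular elicitation, and all numbers are invented rather than taken from the case study.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Baseline societal risk for the tunnel transport scenario (expected
# fatalities/year), itself uncertain (assumed lognormal).
baseline = rng.lognormal(mean=np.log(0.02), sigma=0.5, size=n)

# Expert-elicited effectiveness of a nonnative safety measure, expressed as a
# fractional risk reduction with min/mode/max from judgment (assumed values).
reduction = rng.triangular(left=0.10, mode=0.30, right=0.60, size=n)

revised = baseline * (1.0 - reduction)
for q in (5, 50, 95):
    print(f"revised risk, {q}th percentile: "
          f"{np.percentile(revised, q):.4f} fatalities/yr")
```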

15.
There is increasing interest in integrating quantitative risk analysis with benefit-cost and cost-effectiveness methods to evaluate environmental health policy making and to perform comparative analyses. However, combined use has revealed deficiencies in the available methods, and the lack of useful analytical frameworks currently constrains the utility of comparative risk and policy analyses. A principal issue in integrating risk and economic analysis is the lack of common performance metrics, particularly when conducting comparative analyses of regulations with disparate health endpoints (e.g., cancer and noncancer effects, or risk-benefit analysis) and quantitative estimation of cumulative risk, whether from exposure to single agents with multiple health impacts or from exposure to mixtures. We propose a general quantitative framework and examine the assumptions required for performing analyses of health risks and policies. We review existing and proposed risk and health-impact metrics for evaluating policies designed to protect public health from environmental exposures, and identify their strengths and weaknesses with respect to use in a general comparative risk and policy analysis framework. Case studies demonstrate applications of this framework to risk-benefit and air pollution risk analyses. Through this analysis, we hope to generate discussion of the data requirements, analytical approaches, and assumptions required for general models to be used in comparative risk and policy analysis.

16.
In recent years, there have been growing concerns regarding risks in the federal information technology (IT) supply chains that protect cyber infrastructure in the United States. A critical need faced by decisionmakers is to prioritize investment in security mitigations so as to maximally reduce risks in IT supply chains. We extend existing stochastic expected budgeted maximum multiple coverage models, which identify solutions that are "good" on average but may be unacceptable in certain circumstances. We propose three alternative models that use different robustness methods to hedge against worst-case risks: models that maximize the worst-case coverage, minimize the worst-case regret, and maximize the average coverage in the (1 - α) worst cases (conditional value at risk). We illustrate the solutions of the robust methods with a case study and discuss the insights they provide into mitigation selection compared to an expected-value maximizer. Our study provides valuable tools and insights for decisionmakers with different risk attitudes who must manage cybersecurity risks under uncertainty.
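The contrast among the robustness criteria can be illustrated on a toy budgeted-coverage instance; the coverage matrix, costs, budget, and exhaustive enumeration below are all stand-ins for the article's data and optimization models.

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)
n_mit, n_scen, budget, alpha = 6, 50, 2.0, 0.9
cost = rng.uniform(0.5, 1.5, n_mit)
# coverage[i, s]: risk coverage mitigation i provides in scenario s (assumed).
coverage = rng.uniform(0, 1, (n_mit, n_scen))

def scenario_coverage(portfolio):
    # Multiple coverage: scenario coverage capped at 1 (assumed aggregation).
    return np.minimum(coverage[list(portfolio)].sum(axis=0), 1.0)

best = {"expected": None, "worst-case": None, "CVaR": None}
for k in range(1, n_mit + 1):
    for p in itertools.combinations(range(n_mit), k):
        if cost[list(p)].sum() > budget:
            continue  # over budget
        cov = scenario_coverage(p)
        tail = np.sort(cov)[: max(1, int(round((1 - alpha) * n_scen)))]
        scores = {"expected": cov.mean(), "worst-case": cov.min(),
                  "CVaR": tail.mean()}  # average coverage in the (1 - alpha) worst cases
        for crit, s in scores.items():
            if best[crit] is None or s > best[crit][0]:
                best[crit] = (s, p)

for crit, (score, p) in best.items():
    print(f"{crit:>10}: portfolio {p}, score {score:.2f}")
```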

17.
Risk Perception and the Value of Safety
This paper examines the relationship between perceived risk and willingness-to-pay (WTP) for increased safety from technological hazards, in both conceptual and empirical terms. A conceptual model is developed in which a given household's WTP for risk reductions is a function of traditional socioeconomic variables (i.e., income and base level of risk) and perceived characteristics of the hazards (i.e., dread, knowledge, and exposure). Data to estimate the model are obtained through a combined contingent valuation and risk perception survey covering 10 technological hazards, five of which are well-defined (e.g., death rates are known and the risks are relatively common) and five of which are less well-defined. Econometric results, using Tobit estimation procedures, support the importance of both types of variables in explaining WTP across all 10 hazards. When the risks are split into two groups, the results show that WTP for well-defined hazards is most influenced by perceived personal exposure, while WTP for less well-defined risks is most influenced by levels of dread and severity.

18.
Influenza remains a significant threat to public health, yet there is significant uncertainty about the routes of influenza transmission from an infectious source through the environment to a receptor, and about their relative risks. Herein, data pertaining to factors that influence the environmental mediation of influenza transmission are critically reviewed, including: the frequency, magnitude, and size distribution of virus expiration; virus inactivation rates; environmental- and self-contact rates; and viral transfer efficiencies during contacts. Where appropriate, two-stage Monte Carlo uncertainty analysis is used to characterize variability and uncertainty in the reported data. Significant uncertainties are present in most factors, due to limitations in instrumentation or study realism, lack of documentation of data variability, or lack of study. These analyses, and future experimental work, will improve the parameterization of influenza transmission and risk models, facilitating more robust characterization of the magnitude and uncertainty of infection risk.
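A two-stage Monte Carlo of the type used in the review separates uncertainty (an outer loop over poorly known parameters) from variability (an inner loop over event-to-event variation). The surface-contact-route sketch below uses entirely invented parameter values:

```python
import numpy as np

rng = np.random.default_rng(8)
n_unc, n_var = 200, 2000

risk_medians = []
for _ in range(n_unc):
    # Outer loop (uncertainty): poorly known parameters drawn once per iteration.
    inact_rate = rng.lognormal(np.log(1.0), 0.5)    # /hour, surface inactivation (assumed)
    dose_resp_r = rng.lognormal(np.log(5e-5), 0.7)  # exponential dose-response parameter (assumed)

    # Inner loop (variability): event-to-event variation given those parameters.
    surf_load = rng.lognormal(np.log(1e3), 1.0, n_var)  # virions on touched surface
    transfer = rng.beta(2, 8, n_var)                    # surface-to-hand transfer efficiency
    delay = rng.uniform(0, 4, n_var)                    # hours between contamination and contact
    dose = surf_load * transfer * np.exp(-inact_rate * delay) * 0.1  # hand-to-mucosa factor, assumed
    risk = 1.0 - np.exp(-dose_resp_r * dose)
    risk_medians.append(np.median(risk))

print(f"median infection risk: {np.percentile(risk_medians, 5):.2e} "
      f"to {np.percentile(risk_medians, 95):.2e} (90% uncertainty interval)")
```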

19.
Exposure guidelines for potentially toxic substances are often based on a reference dose (RfD) determined by dividing a no-observed-adverse-effect level (NOAEL), lowest-observed-adverse-effect level (LOAEL), or benchmark dose (BD) corresponding to a low level of risk by a product of uncertainty factors. The uncertainty factors for animal-to-human extrapolation, variable sensitivities among humans, extrapolation from measured subchronic effects to unknown results for chronic exposures, and extrapolation from a LOAEL to a NOAEL can be thought of as random variables that vary from chemical to chemical. Selected databases that provide distributions across chemicals of inter- and intraspecies effects, ratios of LOAELs to NOAELs, and differences in acute and chronic effects are examined to illustrate the determination of percentiles for uncertainty factors. The distributions of uncertainty factors tend to be approximately lognormal. The logarithm of the product of independent uncertainty factors is approximately distributed as the sum of normally distributed variables, making it possible to estimate percentiles for the product. Hence, the size of the product of uncertainty factors can be selected to provide adequate safety for a large percentage (e.g., approximately 95%) of RfDs. For the databases used to describe the distributions of uncertainty factors, using values of 10 appears to be reasonable and conservative. For the databases examined, the following simple "Rule of 3s," which exceeds the estimated 95th percentile of the product of uncertainty factors, is suggested: if only a single uncertainty factor is required, use 33; for any two uncertainty factors, use 3 x 33 ≈ 100; for any three uncertainty factors, use a combined factor of 3 x 100 = 300; and if all four uncertainty factors are needed, use a total factor of 3 x 300 = 900. If something near the 99th percentile is desired, apply another factor of 3. An additional factor may be needed for inadequate data, or a modifying factor for other uncertainties (e.g., different routes of exposure) not covered above.
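The percentile arithmetic is easy to reproduce: the log of a product of independent lognormal uncertainty factors is normal, with summed mean and variance. The factor distribution assumed below (median 2, 95th percentile 10 for every factor) is invented solely to illustrate the calculation, not taken from the article's databases.

```python
import numpy as np
from scipy.stats import norm

# Assume each uncertainty factor (UF) is lognormal with median 2 and 95th
# percentile 10 (illustrative values only).
z95 = norm.ppf(0.95)
mu = np.log(2.0)                   # mean of ln(UF)
sigma = (np.log(10.0) - mu) / z95  # SD of ln(UF) chosen so that P95 = 10

rule_of_3s = {1: 33, 2: 100, 3: 300, 4: 900}
for k, rule in rule_of_3s.items():
    # Product of k independent UFs: ln(product) ~ Normal(k*mu, sigma*sqrt(k)).
    p95 = np.exp(k * mu + z95 * sigma * np.sqrt(k))
    print(f"{k} UFs: 95th percentile of product = {p95:6.0f}  (Rule of 3s: {rule})")
```

With these assumed distributions, the computed 95th percentiles (roughly 10, 39, 130, and 400) stay below the 33/100/300/900 rule values, consistent with the abstract's claim that the rule exceeds the estimated 95th percentile.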

20.
As part of its periodic re-evaluation of particulate matter (PM) standards, the U.S. Environmental Protection Agency estimated the health risk reductions associated with attainment of alternative PM standards in two U.S. locations with relatively complete air quality data: Philadelphia and Los Angeles. PM standards at the time of the analysis were defined for particles of aerodynamic diameter less than or equal to 10 µm, denoted PM-10. The risk analyses estimated the risk reductions that would be associated with changing from attainment of the PM-10 standards then in place to attainment of alternative standards using an indicator for fine particles, defined as particles of aerodynamic diameter less than or equal to 2.5 µm and denoted PM-2.5. Annual average PM-2.5 standards of 12.5, 15, and 20 µg/m3 were considered in various combinations with daily PM-2.5 standards of 50 and 65 µg/m3. Attainment of a standard or set of standards was simulated by a proportional rollback of "as is" daily PM concentrations to daily PM concentrations that would just meet the standard(s). The predicted reductions in the incidence of health effects varied from zero, for those alternative standards already being met, to substantial reductions of over 88% of all PM-associated incidence (e.g., in mortality associated with long-term exposures in Los Angeles, under attainment of an annual standard of 12.5 µg/m3). Sensitivity analyses and integrated uncertainty analyses assessed the multiple-source uncertainty surrounding estimates of risk reduction.
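The proportional-rollback simulation can be sketched in a few lines; the synthetic air quality series, the standard levels, the daily-standard form, and the concentration-response slope below are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)
daily = rng.lognormal(np.log(18.0), 0.45, 365)  # synthetic "as is" daily PM-2.5 (ug/m3)

ANNUAL_STD, DAILY_STD = 15.0, 65.0  # candidate annual and daily PM-2.5 standards (ug/m3)
# Proportional rollback: scale every day so the binding standard is just met.
factor = min(1.0,
             ANNUAL_STD / daily.mean(),
             DAILY_STD / np.percentile(daily, 98))  # daily form as a 98th percentile (assumed)
rolled = daily * factor

# Health impact through a log-linear concentration-response function (assumed slope).
beta = 0.006             # per ug/m3 of annual mean (assumed)
baseline_cases = 10_000  # annual PM-associated incidence before rollback (assumed)
avoided = baseline_cases * (1.0 - np.exp(-beta * (daily.mean() - rolled.mean())))
print(f"rollback factor: {factor:.2f}")
print(f"estimated avoided cases: {avoided:.0f} of {baseline_cases}")
```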

