Similar Literature
20 similar records retrieved.
1.
Slovic, Paul. Risk Analysis, 1999, 19(4): 689-701.
Risk management has become increasingly politicized and contentious. Polarized views, controversy, and conflict have become pervasive. Research has begun to provide a new perspective on this problem by demonstrating the complexity of the concept "risk" and the inadequacies of the traditional view of risk assessment as a purely scientific enterprise. This paper argues that danger is real, but risk is socially constructed. Risk assessment is inherently subjective and represents a blending of science and judgment with important psychological, social, cultural, and political factors. In addition, our social and democratic institutions, remarkable as they are in many respects, breed distrust in the risk arena. Whoever controls the definition of risk controls the rational solution to the problem at hand. If risk is defined one way, then one option will rise to the top as the most cost-effective or the safest or the best. If it is defined another way, perhaps incorporating qualitative characteristics and other contextual factors, one will likely get a different ordering of action solutions. Defining risk is thus an exercise in power. Scientific literacy and public education are important, but they are not central to risk controversies. The public is not irrational. Their judgments about risk are influenced by emotion and affect in a way that is both simple and sophisticated. The same holds true for scientists. Public views are also influenced by worldviews, ideologies, and values; so are scientists' views, particularly when they are working at the limits of their expertise. The limitations of risk science, the importance and difficulty of maintaining trust, and the complex, sociopolitical nature of risk point to the need for a new approach, one that focuses upon introducing more public participation into both risk assessment and risk decision making in order to make the decision process more democratic, improve the relevance and quality of technical analysis, and increase the legitimacy and public acceptance of the resulting decisions.

2.
The risk of catastrophic failures, for example in the aviation and aerospace industries, can be approached from different angles (e.g., statistics when they exist, or a detailed probabilistic analysis of the system). Each new accident carries information that has already been included in the experience base or constitutes new evidence that can be used to update a previous assessment of the risk. In this paper, we take a different approach and consider the risk and the updating from the investor's point of view. Based on the market response to past airplane accidents, we examine which ones created a surprise response and which ones were considered part of the risk of the airline business as previously assessed. To do so, we quantify the magnitude and the timing of the observed market response to catastrophic accidents, and we compare it to an estimate of the response that would be expected based on the actual cost of the accident, including direct and indirect costs (the full-cost information response). First, we develop a method based on stock market data to measure the actual market response to an accident, and we construct an estimate of the full-cost information response to such an event. We then compare the two figures for the immediate and the long-term response of the market for the affected firm, as well as for the whole industry group to which the firm belongs. As an illustration, we analyze a sample of ten fatal accidents experienced by major US domestic airlines during the last seven years. In four cases, we observed an abnormal market response. In these instances, it seems that the shareholders may have updated their estimates of the probability of a future accident in the affected airlines or, more generally, of the firm's future business prospects. This market reaction is not always easy to explain, much less to anticipate, a fact that management should bear in mind when planning a firm's response to such an event.

3.
In a systematic process of project risk management, after risk assessment is implemented, risk analysts face the phase of assessing and selecting project risk response actions (RA). As many researchers have noted, there are few systematic, well-developed solutions in the area of risk response assessment and selection. The present article introduces a methodology, including a modeling approach, for selecting a set of RA that minimizes the undesirable deviation from the project scope. The objective function comprises the three key success criteria of a project, namely, time, quality, and cost. Our model integrates overall project management into project risk response planning (P2RP). Furthermore, the proposed model places equal importance on both "risk" and "response." We believe that the proposed model helps the project risk analyst deal with complicated RA selection problems effectively and efficiently. The model was applied to projects in the construction industry, where it yielded substantial time, cost, and quality improvements.

4.
Quantitative risk assessment often begins with an estimate of the exposure or dose associated with a particular risk level, from which exposure levels posing low risk to populations can be extrapolated. For continuous exposures, this value, the benchmark dose, is often defined by a specified increase (or decrease) from the median or mean response at no exposure. This method of calculating the benchmark dose does not take into account the response distribution and, consequently, cannot be interpreted in terms of probability statements about the target population. We investigate quantile regression as an alternative to median or mean regression. By defining the dose–response quantile relationship and an impairment threshold, we specify a benchmark dose as the dose associated with a specified probability that the population will have a response equal to or more extreme than the specified impairment threshold. In addition, in an effort to minimize model uncertainty, we use Bayesian monotonic semiparametric regression to define the exposure–response quantile relationship, which gives the model flexibility in estimating the quantile dose–response function. We describe this methodology and apply it to both epidemiology and toxicology data.
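A minimal numeric sketch of the quantile-based benchmark dose idea in this abstract, assuming a hypothetical normal response model that decreases with dose; the article's Bayesian monotonic semiparametric fit is not reproduced here, and all parameter values are invented for illustration:

    from scipy.optimize import brentq
    from scipy.stats import norm

    def response_quantile(dose, q, beta0=100.0, beta1=-2.0, sigma=5.0):
        """Hypothetical q-th quantile of a continuous response at a given dose.

        Assumes response ~ Normal(beta0 + beta1 * dose, sigma), decreasing in
        dose, so lower quantiles correspond to more impaired individuals.
        """
        return beta0 + beta1 * dose + sigma * norm.ppf(q)

    def benchmark_dose(threshold, prob, dose_max=50.0):
        """Dose at which P(response <= impairment threshold) equals prob."""
        return brentq(lambda d: response_quantile(d, prob) - threshold,
                      0.0, dose_max)

    # BMD: dose at which 5% of the population scores at or below 80.
    print(f"benchmark dose: {benchmark_dose(80.0, 0.05):.2f}")

Unlike a mean-based definition, the answer here is a statement about a population quantile: at the printed dose, 5% of subjects are expected to fall at or below the impairment threshold.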

5.
We present a critical assessment of the benchmark dose (BMD) method introduced by Crump(1) as an alternative method for setting a characteristic dose level for toxicant risk assessment. The no-observed-adverse-effect-level (NOAEL) method has been criticized because it does not use all of the data and because the characteristic dose level obtained depends on the dose levels and the statistical precision (sample sizes) of the study design. Defining the BMD in terms of a confidence bound on a point estimate results in a characteristic dose that also varies with the statistical precision and still depends on the study dose levels.(2) Indiscriminate choice of the benchmark response level may result in a BMD that reflects little of the dose-response behavior available from using all of the data. Another concern is that the definition of the BMD for the quantal response case differs from that for the continuous response case. Specifically, defining the BMD for continuous data using a ratio of increased effect divided by the background response results in an arbitrary dependence on the natural background for the endpoint being studied, making comparisons among endpoints less meaningful and standards more arbitrary. We define a modified benchmark dose as a point estimate using the ratio of increased effect divided by the full adverse response range, which enables consistent placement of the benchmark response level and provides a BMD with a more consistent relationship to the shape of the dose-response curve.
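A minimal numeric sketch contrasting the two benchmark-response definitions discussed above; the Hill-type curve and its parameters are hypothetical, not taken from the article:

    from scipy.optimize import brentq

    def mean_response(dose, background=10.0, max_effect=60.0, d50=20.0):
        """Hypothetical increasing dose-response curve with nonzero background."""
        return background + max_effect * dose / (d50 + dose)

    bg = mean_response(0.0)
    full_range = 60.0  # background-to-maximum adverse response range

    def bmd(bmr, denom):
        """Dose where (m(d) - m(0)) / denom equals the benchmark response bmr."""
        return brentq(lambda d: (mean_response(d) - bg) / denom - bmr, 1e-9, 1e4)

    # Standard continuous definition: increase relative to the background mean,
    # so the BMD shifts with the endpoint's natural background level.
    # Modified definition: increase relative to the full adverse response range,
    # which removes that background dependence.
    print("BMD, ratio to background :", bmd(0.10, bg))
    print("BMD, ratio to full range :", bmd(0.10, full_range))

Rerunning with a different background value moves the first BMD but not the second, which is the consistency property the modified definition is after.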

6.
Skin Cancer and Inorganic Arsenic: Uncertainty Status of Risk
The current U.S. EPA standard for inorganic arsenic in drinking water is 50 ppb (μg/L), dating to the National Interim Primary Drinking Water Regulation of 1976. The current EPA risk analysis predicts an increased lifetime skin cancer risk on the order of 3 or 4 per 1000 from chronic exposure at that concentration. Revision of the standard to only a few ppb, perhaps even less than 1 ppb, may be indicated by the EPA analysis to reduce the lifetime risk to an acceptable level. The cost to water utilities, and ultimately to their consumers, of conforming to such a large reduction in the standard could easily reach several billion dollars, so it is particularly important to assess accurately the current risk and the risk reduction that would be achieved by a lower standard. This article addresses the major sources of uncertainty in the EPA analysis with respect to this objective. Specifically, it focuses on uncertainty and variability in the exposure estimates for the landmark study of Tseng and colleagues in Taiwan, analyzed using a reconstruction of their exposure data. It is concluded that while the available dataset is suitable for establishing the hazard of skin cancer, it is too highly summarized for reliable dose-response assessment. A new epidemiologic study, designed for the requirements of dose-response assessment, is needed.

7.
Risk‐informed decision making is often accompanied by the specification of an acceptable level of risk. Such a target level is compared against the value of a risk metric, usually computed through a probabilistic safety assessment model, to decide about the acceptability of a given design, the launch of a space mission, etc. Importance measures complement the decision process with information about the risk/safety significance of events. However, importance measures do not tell us whether the occurrence of an event can change the overarching decision. By linking value of information and importance measures for probabilistic risk assessment models, this work obtains a value‐of‐information‐based importance measure that brings together the risk metric, risk importance measures, and the risk threshold in one expression. The new importance measure does not impose additional computational burden because it can be calculated from our knowledge of the risk achievement and risk reduction worth, and it complements the insights delivered by these importance measures. Several properties are discussed, including the joint decision worth of basic event groups. An application to the large loss‐of‐coolant accident sequence of the Advanced Test Reactor illustrates the risk analysis insights.
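The article's exact value-of-information expression is not given in the abstract; the sketch below, with hypothetical numbers, illustrates only the underlying threshold logic it builds on: conditioning the risk metric on an event's occurrence (risk achievement worth, RAW, times the baseline metric) shows whether that occurrence can flip a threshold-based decision.

    def decision_flips(baseline_risk, raw, threshold):
        """True if conditioning on the event's occurrence crosses the threshold.

        RAW is defined as R(event occurred) / R(baseline), so the conditional
        risk metric is raw * baseline_risk.
        """
        conditional_risk = raw * baseline_risk
        return (baseline_risk <= threshold) != (conditional_risk <= threshold)

    baseline = 2.0e-6   # hypothetical baseline risk metric
    threshold = 1.0e-5  # hypothetical acceptable-risk target
    for name, raw in [("valve-A", 2.0), ("pump-B", 8.0)]:
        flips = decision_flips(baseline, raw, threshold)
        print(f"{name}: RAW={raw:.1f}, decision changes on occurrence: {flips}")

Here valve-A is risk-significant but cannot change the accept/reject decision, while pump-B can, which is precisely the distinction a plain importance measure does not make.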

8.
For the vast majority of chemicals that have cancer potency estimates on IRIS, the underlying database is deficient with respect to early-life exposures. This data gap has prevented derivation of cancer potency factors that are relevant to this time period, and so assessments may not fully address children's risks. This article provides a review of juvenile animal bioassay data in comparison to adult animal data for a broad array of carcinogens. This comparison indicates that short-term exposures in early life are likely to yield a greater tumor response than short-term exposures in adults, but similar tumor response when compared to long-term exposures in adults. This evidence is brought into a risk assessment context by proposing an approach that: (1) does not prorate children's exposures over the entire life span or mix them with exposures that occur at other ages; (2) applies the cancer slope factor from adult animal or human epidemiology studies to the children's exposure dose to calculate the cancer risk associated with the early-life period; and (3) adds the cancer risk for young children to that for older children/adults to yield a total lifetime cancer risk. The proposed approach allows for the unique exposure and pharmacokinetic factors associated with young children to be fully weighted in the cancer risk assessment. It is very similar to the approach currently used by U.S. EPA for vinyl chloride. The current analysis finds that the database of early life and adult cancer bioassays supports extension of this approach from vinyl chloride to other carcinogens of diverse mode of action. This approach should be enhanced by early-life data specific to the particular carcinogen under analysis whenever possible.
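A back-of-the-envelope sketch of the three-step summation described above; the slope factor, doses, and age bins are hypothetical, and the duration weights follow the standard lifetime-averaging convention rather than anything specified in the abstract:

    slope_factor = 0.05   # cancer slope factor from adult data, (mg/kg-day)^-1
    child_dose   = 0.002  # average daily dose over ages 0-6, mg/kg-day
    adult_dose   = 0.0005 # average daily dose over ages 7-70, mg/kg-day

    # Steps 1-2: keep the child dose separate (no mixing with other ages) and
    # apply the adult-derived slope factor to it directly; each life stage is
    # weighted by its duration as a fraction of a 70-year lifetime.
    risk_child = slope_factor * child_dose * (6.0 / 70.0)
    risk_adult = slope_factor * adult_dose * (64.0 / 70.0)

    # Step 3: total lifetime cancer risk is the sum over life stages.
    print(f"early-life risk: {risk_child:.2e}")
    print(f"later-life risk: {risk_adult:.2e}")
    print(f"total lifetime cancer risk: {risk_child + risk_adult:.2e}")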

9.
Increasing identification of transmissions of emerging infectious diseases (EIDs) by blood transfusion has raised the question of which of these EIDs poses the highest risk to blood safety. For a number of the EIDs that are perceived to be a threat to blood safety, evidence on actual disease or transmission characteristics is lacking, which might render measures against such EIDs disputable. On the other hand, the fact that we call them “emerging” implies almost by definition that we are uncertain about at least some of their characteristics. So what is the relative importance of various disease and transmission characteristics, and how are these influenced by the degree of uncertainty associated with their actual values? We identified the likelihood of transmission by blood transfusion, the presence of an asymptomatic phase of infection, the prevalence of infection, and the disease impact as the main characteristics of the perceived risk of disease transmission by blood transfusion. A group of experts in the field of infectious diseases and blood transfusion ranked sets of (hypothetical) diseases with varying degrees of uncertainty associated with their disease characteristics, and probabilistic inversion was used to obtain probability distributions for the weight of each of these risk characteristics. The resulting weight distributions can be used to rank both existing and newly emerging infectious diseases with (partially) known characteristics. Analyses show that when data concerning disease characteristics are lacking, uncertainty concerning the asymptomatic phase and the disease impact are the most important drivers of the perceived risk. On the other hand, if disease characteristics are well established, it is the prevalence of infection and the transmissibility of the disease by blood transfusion that drive the perceived risk. The derived risk prioritization model provides a rational, easily obtained expert assessment of the relative importance of an (emerging) infectious disease, requiring only a limited amount of information. Such a model might be used to justify a rational and proportional response to an emerging infectious disease, especially in situations where little or no specific information is available.

10.
Risk‐benefit analyses are introduced as a new paradigm for old problems. However, it is not always necessary to perform a full, comprehensive, and expensive quantitative risk‐benefit assessment to solve the problem, nor is it always possible, given the lack of required data. The choice to continue from a more qualitative to a full quantitative risk‐benefit assessment can be made using a tiered approach. In this article, this tiered approach to risk‐benefit assessment is addressed using a decision tree. The tiered approach described uses the same four steps as the risk assessment paradigm: hazard and benefit identification, hazard and benefit characterization, exposure assessment, and risk‐benefit characterization, albeit in a different order. For the purpose of this approach, the exposure assessment has been moved earlier and the dose‐response modeling (part of hazard and benefit characterization) is moved to a later stage. The decision tree includes several stopping points, reached when the information gathered is sufficient to answer the initial risk‐benefit question. The approach has been tested on two food ingredients. The decision tree presented in this article can assist risk‐benefit assessors and policymakers, on a case‐by‐case basis, in making informed choices about when to stop or continue with a risk‐benefit assessment.

11.
Toxicologists are often interested in assessing the joint effect of an exposure on multiple reproductive endpoints, including early loss, fetal death, and malformation. Exposures that occur prior to mating or extremely early in development can adversely affect the number of implantation sites or fetuses that form within each dam and may even prevent pregnancy. A simple approach for assessing overall adverse effects in such studies is to consider fetuses or implants that fail to develop due to exposure as missing data. The missing data can be imputed, and standard methods for the analysis of quantal response data can then be used for quantitative risk assessment or testing. In this article, a new bias-corrected imputation procedure is proposed and evaluated. The procedure is straightforward to implement in standard statistical packages and has excellent operating characteristics when used in combination with a marginal model fit with generalized estimating equations. The methods are applied to data from a reproductive toxicity study of Nitrofurazone conducted by the National Toxicology Program.
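The bias-corrected procedure itself is not described in the abstract; the sketch below shows only a naive version of the imputation idea with hypothetical litter data, where implants that failed to form relative to the control average are imputed as adverse outcomes before a standard quantal summary:

    # Hypothetical litter data; the article's bias correction and GEE model
    # fit are not reproduced here.
    control_mean_implants = 12

    def impute(implants, affected):
        """Count implants missing relative to the control mean as adverse."""
        missing = max(0, control_mean_implants - implants)
        return affected + missing, implants + missing

    # (implantation sites, adversely affected fetuses) per exposed dam
    dams = [(12, 2), (9, 3), (10, 4), (8, 2)]
    adverse = total = 0
    for implants, affected in dams:
        a, n = impute(implants, affected)
        adverse, total = adverse + a, total + n

    # The imputed counts would then feed a standard quantal analysis.
    print(f"imputed adverse proportion: {adverse}/{total} = {adverse/total:.2f}")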

12.
The Homeland Security Act mandates the development of a national, risk-based system to support planning for, response to, and recovery from emergency situations involving large-scale toxic exposures. To prepare for and manage consequences effectively, planners and responders need not only to identify zones of potentially elevated individual risk but also to predict expected casualties. Emergency response support systems now define "consequences" by mapping areas in which toxic chemical concentrations do or may exceed Acute Exposure Guideline Levels (AEGLs) or similar guidelines. However, because AEGLs do not estimate expected risks, current unqualified claims that such maps support consequence management are misleading. Intentionally protective, AEGLs incorporate various safety/uncertainty factors depending on the scope and quality of chemical-specific toxicity data. Some of these factors are irrelevant, and others need to be modified, whenever resource constraints or exposure-scenario complexities require responders to make critical trade-off (triage) decisions in order to minimize expected casualties. AEGL-exceedance zones cannot consistently be aggregated, compared, or used to calculate expected casualties, and so may seriously misguide emergency response triage decisions. Methods and tools that are well established and readily available for environmental health protection have not yet been developed for chemically related environmental health triage. Effective triage decisions involving chemical risks require a new assessment approach that focuses on best estimates of likely casualties rather than on upper plausible bounds of individual risk. If risk-based consequence management is to become a reality, federal agencies tasked with supporting emergency response must actively coordinate to foster new methods that can support effective environmental health triage.

13.
The choice of a dose-response model is decisive for the outcome of quantitative risk assessment. Since their introduction, single-hit models have played a prominent role in dose-response assessment for pathogenic microorganisms. Hit theory models are based on a few simple concepts that are attractive for their clarity and plausibility. These models, in particular the Beta Poisson model, are used for extrapolation of experimental dose-response data to low doses, such as those often present in drinking water or food products. Unfortunately, the Beta Poisson model, as it is used throughout the microbial risk literature, is an approximation whose validity is not widely known. The exact functional relation is numerically complex, especially for use in optimization or uncertainty analysis. Here it is shown that although the discrepancy between the Beta Poisson formula and the exact function is not very large for many data sets, the differences are greatest at low doses, the region of interest for many risk applications. Errors may become very large, however, in the results of uncertainty analysis, or when the data contain little low-dose information. One striking property of the exact single-hit model is that it has a maximum risk curve, limiting the upper confidence level of the dose-response relation. This is because the risk cannot exceed the probability of exposure, a property that is not retained in the Beta Poisson approximation. This maximum possible response curve is important for uncertainty analysis, and for risk assessment of pathogens with unknown properties.
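A minimal sketch of the comparison discussed above, under the convention common in this literature that the exact single-hit (beta-Poisson) dose-response is 1 − 1F1(α, α + β, −dose), with 1 − (1 + dose/β)^(−α) as its approximation; the parameter values are hypothetical and deliberately violate the approximation's usual validity conditions (β ≫ 1, α ≪ β) so the low-dose discrepancy is visible:

    from scipy.special import hyp1f1

    alpha, beta = 0.25, 0.5  # hypothetical pathogen parameters

    def beta_poisson_approx(dose):
        """Widely used approximation: 1 - (1 + dose/beta)**(-alpha)."""
        return 1.0 - (1.0 + dose / beta) ** (-alpha)

    def single_hit_exact(dose):
        """Exact form via the confluent hypergeometric function 1F1."""
        return 1.0 - hyp1f1(alpha, alpha + beta, -dose)

    for dose in (0.01, 0.1, 1.0, 10.0, 100.0):
        a, e = beta_poisson_approx(dose), single_hit_exact(dose)
        print(f"dose={dose:7.2f}  approx={a:.4e}  exact={e:.4e}  ratio={a/e:.2f}")

At low doses the approximate slope is α/β while the exact slope is α/(α + β), so the printed ratio approaches (α + β)/β, matching the abstract's point that the discrepancy is largest in exactly the region risk assessments care about.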

14.
A quantitative risk analysis was conducted to evaluate the design of the VX neutralization subsystem and related support facilities of the U.S. Army Newport Chemical Agent Disposal Facility. Three major incident categories (agent release, personnel injury, and system loss) were studied using fault tree analysis methodology. Each incident was assigned a risk assessment code based on the severity level and probability of occurrence of the incident. Safety mitigations or design changes were recommended to bring "undesired" risk levels (typical of agent release events) down to "acceptable with controls" or "acceptable."
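A toy illustration of the fault tree arithmetic behind such an analysis; the gate structure and basic-event probabilities are hypothetical and bear no relation to the actual Newport facility model:

    def p_or(*ps):
        """OR gate: top event occurs if any input occurs (independent events)."""
        q = 1.0
        for p in ps:
            q *= 1.0 - p
        return 1.0 - q

    def p_and(*ps):
        """AND gate: top event occurs only if all inputs occur (independent)."""
        q = 1.0
        for p in ps:
            q *= p
        return q

    # Hypothetical tree: agent release requires a containment breach AND
    # (seal failure OR valve failure).
    p_release = p_and(1e-3, p_or(2e-2, 5e-3))
    print(f"P(agent release) = {p_release:.2e}")  # ~2.5e-5

The resulting top-event probability, together with a severity level, is what a risk assessment code of the kind described above would be assigned from.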

15.
16.
Statements such as "80% of the employees do 20% of the work" or "the richest 1% of society controls 10% of its assets" are commonly used to describe the distribution or concentration of a variable characteristic within a population. Analogous statements can be constructed to reflect the relationship between probability and concentration for unvarying quantities surrounded by uncertainty. Both kinds of statements represent specific usages of a general relationship, the "mass density function," which is not widely exploited in risk analysis and management. This paper derives a simple formula for the mass density function when the uncertainty and/or the variability in a quantity is lognormally distributed; the formula gives the risk analyst an exact, "back-of-the-envelope" method for determining the fraction of the total amount of a quantity contained within any portion of its distribution. For example, if exposures to a toxicant are lognormally distributed with σ_ln x = 2, 50% of all the exposure is borne by the 2.3% of persons most heavily exposed. Implications of this formula for various issues in risk assessment are explored, including: (1) the marginal benefits of risk reduction; (2) distributional equity and risk perception; (3) accurate confidence intervals for the population mean when a limited set of data is available; (4) the possible biases introduced by the uncritical assumption that extreme "outliers" exist; and (5) the calculation of the value of new information.
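A short check of the quoted lognormal example (σ_ln x = 2), using the standard result that the exposure-weighted ("mass") distribution of a lognormal is again lognormal with its log-mean shifted by σ²; this reconstruction is an assumption about the paper's formula, not a quotation of it:

    from scipy.stats import norm

    sigma = 2.0  # sigma of ln(x), as in the example above

    def share_borne_by_top(p):
        """Fraction of total exposure borne by the top fraction p of persons."""
        z = norm.ppf(1.0 - p)             # threshold in log-standard units
        return 1.0 - norm.cdf(z - sigma)  # tail mass of the shifted lognormal

    top = 1.0 - norm.cdf(sigma)  # population share bearing half of all exposure
    print(f"top {top:.1%} of persons bear {share_borne_by_top(top):.0%} of exposure")

This prints that the top 2.3% of persons bear 50% of the exposure, reproducing the figure in the abstract.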

17.
Ethylene oxide (EO) research has increased significantly since the 1980s, when regulatory risk assessments were last completed on the basis of the animal cancer chronic bioassays. In tandem with the new scientific understanding, there have been evolutionary changes in regulatory risk assessment guidelines that encourage flexibility and greater use of scientific information. The results of an updated meta-analysis of the findings from 10 unique EO study cohorts from five countries, including nearly 33,000 workers and over 800 cancers, are presented, indicating that EO does not cause increased risk of cancers overall or of brain, stomach, or pancreatic cancers. The findings for leukemia and non-Hodgkin's lymphoma (NHL) are inconclusive. Two studies with the requisite attributes of size, individual exposure estimates, and follow-up are the basis for dose-response modeling and added lifetime risk predictions under environmental and occupational exposure scenarios and a variety of plausible alternative assumptions. A point-of-departure analysis, with various margins of exposure, is also illustrated using human data. The two datasets produce remarkably similar leukemia added risk predictions, orders of magnitude lower than prior animal-based predictions under conservative, default assumptions, with risks on the order of 1 × 10^-6 or lower for exposures in the low ppb range. Inconsistent results for lymphoid tumors, a non-standard grouping using histologic information from death certificates, are discussed. This assessment demonstrates the applicability of the current risk assessment paradigm to epidemiological data.

18.
19.
Nanotechnology involves the fabrication, manipulation, and control of materials at the atomic level and may also bring novel uncertainties and risks. Potential parallels with other controversial technologies mean there is a need to develop a comprehensive understanding of the processes of public perception of nanotechnology uncertainties, risks, and benefits, alongside related communication issues. Study of perceptions at so early a stage in the development trajectory of a technology is probably unique in the risk perception and communication field. As such, it also brings new methodological and conceptual challenges. These include: dealing with the inherent diversity of the nanotechnology field itself; the unfamiliar and intangible nature of the concept, with few analogies to anchor mental models or risk perceptions; and the ethical and value questions underlying many nanotechnology debates. Viewed through the lens of social amplification of risk, and drawing upon the various contributions to this special issue of Risk Analysis on Nanotechnology Risk Perceptions and Communication, nanotechnology may at present be an attenuated hazard. The generic idea of "upstream public engagement" for emerging technologies such as nanotechnology is also discussed, alongside its importance for future work with emerging technologies in the risk communication field.

20.
Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment are facilitated by the availability of a set of experimental studies that span a range of dose‐response patterns observed in practice. We describe the construction of such a historical database, focusing on quantal data in chemical risk assessment, and we employ this database to develop priors in Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose‐response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose‐response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose‐response functions that are more stable and precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.
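A minimal sketch of how historical-data priors might enter a quantal dose-response fit, via a maximum a posteriori estimate for a two-parameter log-logistic model; the prior means and standard deviations, like the data, are hypothetical stand-ins for values that would actually be derived from the curated database:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    doses = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
    n     = np.array([50, 50, 50, 50, 50])  # animals per dose group
    y     = np.array([1, 3, 8, 20, 41])     # responders per dose group

    def prob(theta):
        a, b = theta  # intercept and log-dose slope
        return 1.0 / (1.0 + np.exp(-(a + b * np.log1p(doses))))

    def neg_log_posterior(theta):
        p = np.clip(prob(theta), 1e-10, 1.0 - 1e-10)
        loglik = np.sum(y * np.log(p) + (n - y) * np.log(1.0 - p))
        # Historical-data prior: independent normals on the two parameters,
        # standing in for distributions fit to the 733 curated data sets.
        logprior = (norm.logpdf(theta[0], -3.0, 2.0)
                    + norm.logpdf(theta[1], 1.0, 1.0))
        return -(loglik + logprior)

    fit = minimize(neg_log_posterior, x0=[-3.0, 1.0], method="Nelder-Mead")
    print("MAP estimate (intercept, log-dose slope):", fit.x)

With sparse data the prior term keeps the slope from drifting to extreme values, which is the stabilization effect the abstract reports.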

