Similar Articles
A total of 20 similar articles were found (search time: 15 ms).
1.
A question has been raised in recent years as to whether the risk field, including analysis, assessment, and management, ought to be considered a discipline on its own. As suggested by Terje Aven, unification of the risk field would require a common understanding of basic concepts, such as risk and probability; hence, more discussion is needed of what he calls “foundational issues.” In this article, we show that causation is a foundational issue of risk, and that a proper understanding of it is crucial. We propose that some old ideas about the nature of causation must be abandoned in order to overcome certain persisting challenges facing risk experts over the last decade. In particular, we discuss the challenge of including causally relevant knowledge from the local context when studying risk. Although it is uncontroversial that the receptor plays an important role for risk evaluations, we show how the implementation of receptor‐based frameworks is hindered by methodological shortcomings that can be traced back to Humean orthodoxies about causation. We argue that the first step toward the development of frameworks better suited to make realistic risk predictions is to reconceptualize causation, by examining a philosophical alternative to the Humean understanding. Finally, we show how our preferred account, causal dispositionalism, offers a different perspective on how risk is evaluated and understood.

2.
Congress is currently considering adopting a mathematical formula to assign shares in cancer causation to specific doses of radiation, for use in establishing liability and compensation awards. The proposed formula, if it were sound, would allow difficult problems in tort law and public policy to be resolved by reference to tabulated "probabilities of causation." This article examines the statistical and conceptual bases for the proposed methodology. We find that the proposed formula is incorrect as an expression for "probability of causation," that it implies hidden, debatable policy judgments in its treatment of factor interactions and uncertainties, and that it cannot in general be quantified with sufficient precision to be useful. Three generic sources of statistical uncertainty are identified (sampling variability, population heterogeneity, and error propagation) that prevent accurate quantification of "assigned shares." These uncertainties arise whenever aggregate epidemiological or risk data are used to draw causal inferences about individual cases.
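For context, the tabulated "probability of causation" the formula is meant to express is conventionally computed from the relative risk RR, i.e., as the excess-risk fraction among the exposed. A standard statement of that form (the target of the article's critique, not its endorsement) is:

```latex
PC \;=\; \frac{RR - 1}{RR} \;=\; \frac{R_E - R_0}{R_E},
```

where \(R_E\) is the disease risk among those exposed at the given dose and \(R_0\) is the background risk.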

3.
Graphs are increasingly recommended for improving decision-making and promoting risk-avoidant behaviors. Graphs that depict only the number of people affected by a risk (“foreground-only” displays) tend to increase perceived risk and risk aversion (e.g., willingness to get vaccinated), as compared to graphs that also depict the number of people at risk for harm (“foreground+background” displays). However, previous research examining these “foreground-only effects” has focused on relatively low-probability risks (<10%), limiting generalizability to communications about larger risks. In two experiments, we systematically investigated the moderating role of probability size on foreground-only effects, using a wide range of probability sizes (from 0.1% to 40%). Additionally, we examined the moderating role of the size of the risk reduction, that is, the extent to which a protective behavior reduces the risk. Across both experiments, foreground-only effects on perceived risk and risk aversion were weaker for larger probabilities. Experiment 2 also revealed that foreground-only effects were weaker for smaller risk reductions, while foreground-only displays decreased understanding of absolute risk magnitudes independently of probability size. These findings suggest that the greater effectiveness of foreground-only versus foreground+background displays for increasing perceived risk and risk aversion diminishes with larger probability sizes and smaller risk reductions. Moreover, if the goal is to promote understanding of absolute risk magnitudes, foreground+background displays should be used rather than foreground-only displays regardless of probability size. Our findings also help to refine and extend existing theoretical accounts of foreground-only effects to situations involving a wide range of probability sizes.

4.
A large share of accidental and nonaccidental poisonings is caused by household cleaning and washing products, such as drain cleaner or laundry detergent. The main goal of this article was to investigate consumers’ risk perception and misconceptions of a variety of cleaning and washing products in order to inform future risk communication efforts. For this, a sorting task including 33 commonly available household cleaning and washing products was implemented. A total of 60 female consumers were asked to place the cleaning and washing products on a reference line 3 m in length with the poles “dangerous” and “not dangerous.” The gathered data were analyzed qualitatively and by means of multidimensional scaling, cluster analysis, and linear regression. The dimensionality of the sorting data suggests that participants applied both analytically driven risk judgments (i.e., based on written and graphical hazard notes and perceived effectiveness) and intuitively driven ones (i.e., eco vs. regular products). Furthermore, results suggest the presence of misconceptions, particularly related to consumers’ perceptions of eco cleaning products, which were generally regarded as safer than their regular counterparts. Future risk communication should aim at dispelling these misconceptions and promoting accurate risk perceptions of particular household cleaning and washing products.

5.
Ali Mosleh, Risk Analysis, 2012, 32(11): 1888-1900
Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest, measured in terms of the probability of default. Many models have been developed to estimate credit risk; rating agencies, whose work dates back to the 19th century, publish their assessments of default probabilities and transition probabilities for various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses techniques developed in the physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimates of all the individual models. Moreover, the methodology accounts for potentially significant departures from “nominal predictions” due to “upsetting events” such as the 2008 global banking crisis.
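As an illustration of the kind of accuracy-weighted pooling the abstract describes, here is a minimal sketch in Python. The pseudo-count weighting scheme, the function name, and all the numbers are assumptions for demonstration only, not the authors' actual model.

```python
def pooled_default_probability(estimates, accuracies, prior_a=0.1, prior_b=0.1):
    """Pool agency default-probability estimates into a Beta posterior,
    weighting each estimate by the agency's historical accuracy.
    Illustrative pseudo-count scheme, not the paper's actual model."""
    a, b = prior_a, prior_b            # weak Beta prior
    for p, acc in zip(estimates, accuracies):
        n = 100.0 * acc                # historical accuracy sets the effective sample size
        a += p * n                     # pseudo-defaults implied by this estimate
        b += (1.0 - p) * n             # pseudo-non-defaults
    return a / (a + b)                 # posterior mean default probability

# Two agencies report 2.0% and 3.5% default risk; the first has been more accurate,
# so the pooled estimate (~0.027) sits closer to 2.0% than a simple average would.
print(pooled_default_probability([0.020, 0.035], [0.9, 0.6]))
```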

6.
Qualitative systems for rating animal antimicrobial risks using ordered categorical labels such as “high,” “medium,” and “low” can potentially simplify risk assessment input requirements used to inform risk management decisions. But do they improve decisions? This article compares the results of qualitative and quantitative risk assessment systems and establishes some theoretical limitations on the extent to which they are compatible. In general, qualitative risk rating systems satisfying conditions found in real‐world rating systems and guidance documents and proposed as reasonable make two types of errors: (1) Reversed rankings, i.e., assigning higher qualitative risk ratings to situations that have lower quantitative risks; and (2) Uninformative ratings, e.g., frequently assigning the most severe qualitative risk label (such as “high”) to situations with arbitrarily small quantitative risks and assigning the same ratings to risks that differ by many orders of magnitude. Therefore, despite their appealing consensus‐building properties, flexibility, and appearance of thoughtful process in input requirements, qualitative rating systems as currently proposed often do not provide sufficient information to discriminate accurately between quantitatively small and quantitatively large risks. The value of information (VOI) that they provide for improving risk management decisions can be zero if most risks are small but a few are large, since qualitative ratings may then be unable to confidently distinguish the large risks from the small. These limitations suggest that it is important to continue to develop and apply practical quantitative risk assessment methods, since qualitative ones are often unreliable.
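The "reversed rankings" failure mode is easy to reproduce with a toy rating rule. The sketch below (hypothetical cutpoints and a worse-of-the-two-bands rule, not drawn from any specific guidance document) assigns a higher qualitative label to the risk with the smaller quantitative expected loss:

```python
CUTS = (0.05, 0.5)                   # hypothetical band cutpoints
ORDER = {"low": 0, "medium": 1, "high": 2}

def band(x):
    """Map a value in [0, 1] to an ordered qualitative label."""
    return "low" if x < CUTS[0] else "medium" if x < CUTS[1] else "high"

def qualitative_rating(p, severity):
    # Toy rule of thumb: overall rating = the worse of the two bands.
    return max(band(p), band(severity), key=ORDER.get)

# Risk A: unlikely and nearly harmless; quantitative risk = 0.04 * 0.04  = 0.0016.
# Risk B: likely but negligible harm;   quantitative risk = 0.6  * 0.001 = 0.0006.
print(qualitative_rating(0.04, 0.04))   # -> "low"  (the LARGER quantitative risk)
print(qualitative_rating(0.60, 0.001))  # -> "high" (the SMALLER quantitative risk)
```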

7.
Wildfire is a persistent and growing threat across much of the western United States. Understanding how people living in fire‐prone areas perceive this threat is essential to the design of effective risk management policies. Drawing on the social amplification of risk framework, we develop a conceptual model of wildfire risk perceptions that incorporates the social processes that likely shape how individuals in fire‐prone areas come to understand this risk, highlighting the role of information sources and social interactions. We classify information sources as expert or nonexpert, and group social interactions according to two dimensions: formal versus informal, and generic versus fire‐specific. Using survey data from two Colorado counties, we empirically examine how information sources and social interactions relate to the perceived probability and perceived consequences of a wildfire. Our results suggest that social amplification processes play a role in shaping how individuals in this area perceive wildfire risk. A key finding is that both “vertical” (i.e., expert information sources and formal social interactions) and “horizontal” (i.e., nonexpert information and informal interactions) interactions are associated with perceived risk of experiencing a wildfire. We also find evidence of perceived “risk interdependency”—that is, homeowners’ perceptions of risk are higher when vegetation on neighboring properties is perceived to be dense. Incorporating social amplification processes into community‐based wildfire education programs and evaluating these programs’ effectiveness constitutes an area for future inquiry.

8.
A major issue in all risk communication efforts is the distinction between the terms “risk” and “hazard.” The potential to harm a target such as human health or the environment is normally defined as a hazard, whereas risk also encompasses the probability of exposure and the extent of damage. What can be observed again and again in risk communication processes are misunderstandings and communication gaps related to these crucial terms. We asked a sample of 53 experts from public authorities, business and industry, and environmental and consumer organizations in Germany to outline their understanding and use of these terms, using both expert interviews and focus groups. The empirical study made clear that the terms risk and hazard are perceived and used very differently in risk communication depending on the perspective of the stakeholders; their own worldviews played a major role in their specific use of the two terms. Several factors can be identified, such as responsibility for hazard avoidance, economic interest, or a watchdog role. Thus, communication gaps can be reduced to a four‐fold problem matrix comprising a semantic, conceptual, strategic, and control problem.

9.
Modern theories in cognitive psychology and neuroscience indicate that there are two fundamental ways in which human beings comprehend risk. The “analytic system” uses algorithms and normative rules, such as probability calculus, formal logic, and risk assessment. It is relatively slow, effortful, and requires conscious control. The “experiential system” is intuitive, fast, mostly automatic, and not very accessible to conscious awareness. The experiential system enabled human beings to survive during their long period of evolution and remains today the most natural and most common way to respond to risk. It relies on images and associations, linked by experience to emotion and affect (a feeling that something is good or bad). This system represents risk as a feeling that tells us whether it is safe to walk down this dark street or drink this strange‐smelling water. Proponents of formal risk analysis tend to view affective responses to risk as irrational. Current wisdom disputes this view. The rational and the experiential systems operate in parallel and each seems to depend on the other for guidance. Studies have demonstrated that analytic reasoning cannot be effective unless it is guided by emotion and affect. Rational decision making requires proper integration of both modes of thought. Both systems have their advantages, biases, and limitations. Now that we are beginning to understand the complex interplay between emotion and reason that is essential to rational behavior, the challenge before us is to think creatively about what this means for managing risk. On the one hand, how do we apply reason to temper the strong emotions engendered by some risk events? On the other hand, how do we infuse needed “doses of feeling” into circumstances where lack of experience may otherwise leave us too “coldly rational”? This article addresses these important questions.

10.
Yacov Y. Haimes, Risk Analysis, 2011, 31(8): 1175-1186
This article highlights the complexity of the quantification of the multidimensional risk function, develops five systems‐based premises on quantifying the risk of terrorism to a threatened system, and advocates the quantification of vulnerability and resilience through the states of the system. The five premises are: (i) There exists interdependence between a specific threat to a system by terrorist networks and the states of the targeted system, as represented through the system's vulnerability, resilience, and criticality‐impact. (ii) A specific threat, its probability, its timing, the states of the targeted system, and the probability of consequences can be interdependent. (iii) The two questions in the risk assessment process: “What is the likelihood?” and “What are the consequences?” can be interdependent. (iv) Risk management policy options can reduce both the likelihood of a threat to a targeted system and the associated likelihood of consequences by changing the states (including both vulnerability and resilience) of the system. (v) The quantification of risk to a vulnerable system from a specific threat must be built on a systemic and repeatable modeling process, by recognizing that the states of the system constitute an essential step to construct quantitative metrics of the consequences based on intelligence gathering, expert evidence, and other qualitative information. The fact that the states of all systems are functions of time (among other variables) makes the time frame pivotal in each component of the process of risk assessment, management, and communication. Thus, risk to a system, caused by an initiating event (e.g., a threat) is a multidimensional function of the specific threat, its probability and time frame, the states of the system (representing vulnerability and resilience), and the probabilistic multidimensional consequences.

11.
Two images, “black swans” and “perfect storms,” have struck the public's imagination and are used—at times indiscriminately—to describe the unthinkable or the extremely unlikely. These metaphors have been used as excuses to wait for an accident to happen before taking risk management measures, both in industry and government. These two images represent two distinct types of uncertainties (epistemic and aleatory). Existing statistics are often insufficient to support risk management because the sample may be too small and the system may have changed. Rationality as defined by the von Neumann axioms leads to a combination of both types of uncertainties into a single probability measure—Bayesian probability—and accounts only for risk aversion. Yet, the decisionmaker may also want to be ambiguity averse. This article presents an engineering risk analysis perspective on the problem, using all available information in support of proactive risk management decisions and considering both types of uncertainty. These measures involve monitoring of signals, precursors, and near‐misses, as well as reinforcement of the system and a thoughtful response strategy. It also involves careful examination of organizational factors such as the incentive system, which shape human performance and affect the risk of errors. In all cases, including rare events, risk quantification does not allow “prediction” of accidents and catastrophes. Instead, it is meant to support effective risk management rather than simply reacting to the latest events and headlines.

12.
Recently, Kasperson et al.(6) have proposed a conceptual framework, “The Social Amplification of Risk,” as a beginning step in developing a comprehensive theory of public experience of risk. A central goal of their effort is to systematically link technical assessments of risk with the growing findings from social scientific research. A key and growing domain of public risk experience is “desired” risk, but this is virtually neglected in the framework. This paper evaluates the scope of the “Social Amplification of Risk Framework,” asking whether it is applicable to desired risks, such as risk recreation (hang gliding, mountain climbing, and so forth). The analysis supports the framework's applicability to the domain of desired risk.

13.
Wildfires present a complex applied risk management environment, but relatively little attention has been paid to behavioral and cognitive responses to risk among public agency wildfire managers. This study investigates responses to risk, including probability weighting and risk aversion, in a wildfire management context using a survey‐based experiment administered to federal wildfire managers. Respondents were presented with a multiattribute lottery‐choice experiment where each lottery is defined by three outcome attributes: expenditures for fire suppression, damage to private property, and exposure of firefighters to the risk of aviation‐related fatalities. Respondents choose one of two strategies, each of which includes “good” (low cost/low damage) and “bad” (high cost/high damage) outcomes that occur with varying probabilities. The choice task also incorporates an information framing experiment to test whether information about fatality risk to firefighters alters managers' responses to risk. Results suggest that managers exhibit risk aversion and nonlinear probability weighting, which can result in choices that do not minimize expected expenditures, property damage, or firefighter exposure. Information framing tends to result in choices that reduce the risk of aviation fatalities, but exacerbates nonlinear probability weighting.
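To make "nonlinear probability weighting" concrete, here is a minimal sketch using the Prelec (1998) weighting function. The lotteries, the alpha value, and the direct outcome-by-outcome application of the weights are illustrative simplifications, not the estimates or the exact model from this study:

```python
import math

def prelec(p, alpha=0.65):
    """Prelec (1998) weighting: overweights small probabilities, underweights large ones."""
    return math.exp(-((-math.log(p)) ** alpha))

# Two strategies as (probability, loss in $M): a "safe" one and a cheaper-on-average
# "risky" one that carries a small chance of a very bad outcome.
safe  = [(0.90, 10), (0.10, 30)]     # expected loss = 12.00
risky = [(0.95, 5), (0.05, 100)]     # expected loss =  9.75 (lower on average)

def weighted_loss(lottery, alpha=0.65):
    # Simplified: weights applied per outcome rather than cumulatively (rank-dependent).
    return sum(prelec(p, alpha) * loss for p, loss in lottery)

print(weighted_loss(safe))    # ~13.3
print(weighted_loss(risky))   # ~17.3 -> the weighting manager picks "safe", a choice
                              # that does not minimize expected expenditures
```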

14.
Microbial food safety risk assessment models can often be simplified by eliminating the need to integrate a complex dose‐response relationship across a distribution of exposure doses. This is possible if exposure pathways consistently lead to exposure doses that have a small probability of causing illness. In this situation, the probability of illness is an approximately linear function of dose. Consequently, the predicted probability of illness per serving across all exposures is linear with respect to the expected value of dose. The majority of dose‐response functions are approximately linear when the dose is low. Nevertheless, what constitutes “low” depends on the parameters of the dose‐response function for a particular pathogen. In this study, a method is proposed to determine an upper bound of the exposure distribution for which the use of a linear dose‐response function is acceptable. If this upper bound is substantially larger than the expected value of exposure doses, then a linear approximation for the probability of illness is reasonable. If conditions are appropriate for using the linear dose‐response approximation, for example, if the expected value of exposure doses is two to three log10 smaller than the upper bound of the linear portion of the dose‐response function, then predicting the risk‐reducing effectiveness of a proposed policy is trivial. Simple examples illustrate how this approximation can be used to inform policy decisions and improve an analyst's understanding of risk.
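As a sketch of the idea (using an exponential dose-response model and a hypothetical parameter r, not the paper's actual method for setting the bound), one can locate the largest dose at which the linear approximation r*d stays within a chosen relative error of the exact probability of illness:

```python
import numpy as np

R = 5e-4  # hypothetical dose-response parameter

def p_ill(dose, r=R):
    """Exponential dose-response model: P(ill | d) = 1 - exp(-r d), which is ~ r*d at low dose."""
    return 1.0 - np.exp(-r * dose)

def linear_upper_bound(r=R, rel_err=0.10):
    """Largest dose at which the linear approximation r*d overstates
    the exact probability of illness by less than rel_err."""
    doses = np.logspace(-2, 6, 200_000)
    exact = p_ill(doses, r)
    ok = (r * doses - exact) / exact <= rel_err   # relative error grows with dose
    return doses[ok].max()

d_max = linear_upper_bound()
# If the expected exposure dose sits 2-3 log10 below d_max, per-serving risk is
# simply r * E[dose], and comparing policies reduces to comparing mean doses.
print(d_max)   # ~390 for these toy numbers
```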

15.
This mixed‐methods study investigated consumers’ knowledge of chemicals in terms of basic principles of toxicology and then related this knowledge, in addition to other factors, to their fear of chemical substances (i.e., chemophobia). Both qualitative interviews and a large‐scale online survey were conducted in the German‐speaking part of Switzerland. A Mokken scale was developed to measure laypeople's toxicological knowledge. The results indicate that most laypeople are unaware of the similarities between natural and synthetic chemicals in terms of certain toxicological principles. Furthermore, their associations with the term “chemical substances” and the self‐reported affect prompted by these associations are mostly negative. The results also suggest that knowledge of basic principles of toxicology, self‐reported affect evoked by the term “chemical substances,” risk‐benefit perceptions concerning synthetic chemicals, and trust in regulation processes are all negatively associated with chemophobia, while general health concerns are positively related to chemophobia. Thus, to enhance informed consumer decisionmaking, it might be necessary to tackle the stigmatization of the term “chemical substances” as well as address and clarify prevalent misconceptions.

16.
Dr. Yellman proposes to define frequency as “a time‐rate of events of a specified type over a particular time interval.” We review why no definition of frequency, including this one, can satisfy both of two conditions: (1) the definition should agree with the ordinary meaning of frequency, such as that less frequent events are less likely to occur than more frequent events, over any particular time interval for which the frequencies of both are defined; and (2) the definition should be applicable not only to exponentially distributed times between (or until) events, but also to some nonexponential (e.g., uniformly distributed) times. We make the simple point that no definition can satisfy (1) and (2) by showing that any definition that determines which of any two uniformly distributed times has the higher “frequency” (or that determines that they have the same “frequency,” if neither is higher) must assign a higher frequency number to the distribution with the lower probability of occurrence over some time intervals. Dr. Yellman's proposed phrase, “time‐rate of events … over a particular time interval” is profoundly ambiguous in such cases, as the instantaneous failure rates vary over an infinitely wide range (e.g., from one to infinity), making it unclear which value is denoted by the phrase “time‐rate of events.”
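The point about instantaneous failure rates can be made concrete with the standard hazard-rate formula. For a time to event distributed uniformly on [0, 1]:

```latex
h(t) \;=\; \frac{f(t)}{1 - F(t)} \;=\; \frac{1}{1 - t}, \qquad t \in [0, 1),
```

which rises from h(0) = 1 toward infinity as t approaches 1, so no single number can serve as "the" time-rate of events over the interval.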

17.
The dose–response relationship between folate levels and cognitive impairment among individuals with vitamin B12 deficiency is an essential component of a risk-benefit analysis approach to regulatory and policy recommendations regarding folic acid fortification. Epidemiological studies provide data that are potentially useful for addressing this research question, but the lack of analysis and reporting of data in a manner suitable for dose–response purposes hinders the application of the traditional evidence synthesis process. This study aimed to estimate a quantitative dose–response relationship between folate exposure and the risk of cognitive impairment among older adults with vitamin B12 deficiency using “probabilistic meta-analysis,” a novel approach for synthesizing data from observational studies. Second-order multistage regression was identified as the best-fit model for the association between the probability of cognitive impairment and serum folate levels based on data generated by randomly sampling probabilistic distributions with parameters estimated based on summarized information reported in relevant publications. The findings indicate a “J-shape” effect of serum folate levels on the occurrence of cognitive impairment. In particular, an excessive level of folate exposure is predicted to be associated with a higher risk of cognitive impairment, albeit with greater uncertainty than the association between low folate exposure and cognitive impairment. This study directly contributes to the development of a practical solution to synthesize observational evidence for dose–response assessment purposes, which will help strengthen future nutritional risk assessments for the purpose of informing decisions on nutrient fortification in food.

18.
The present study investigates U.S. Department of Agriculture inspection records in the Agricultural Quarantine Activity System database to estimate the probability of quarantine pests on propagative plant materials imported from various countries of origin and to develop a methodology ranking the risk of country–commodity combinations based on quarantine pest interceptions. Data collected from October 2014 to January 2016 were used for developing predictive models and a validation study. A generalized linear model with Bayesian inference and a generalized linear mixed effects model were used to compare the interception rates of quarantine pests on different country–commodity combinations. The prediction ability of generalized linear mixed effects models was greater than that of generalized linear models. The estimated pest interception probability and confidence interval for each country–commodity combination were categorized into one of four compliance levels (“High,” “Medium,” “Low,” and “Poor/Unacceptable”) using K‐means clustering analysis. This study presents a risk‐based categorization for each country–commodity combination based on the probability of quarantine pest interceptions and the uncertainty in that assessment.
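As a rough illustration of the categorization step described (toy counts, a beta-binomial shortcut standing in for the study's generalized linear mixed effects model, and the cluster count fixed at four), the pipeline might look like:

```python
import numpy as np
from scipy.stats import beta
from sklearn.cluster import KMeans

# Toy (interceptions, inspections) per country-commodity combination.
counts = np.array([[2, 500], [40, 800], [1, 2000], [15, 300], [0, 900], [90, 600]])

# Beta(1, 1) prior -> posterior mean interception probability and 95% interval width.
a = counts[:, 0] + 1
b = counts[:, 1] - counts[:, 0] + 1
p_hat = a / (a + b)
ci_width = beta.ppf(0.975, a, b) - beta.ppf(0.025, a, b)

# Cluster (probability, uncertainty) pairs into four compliance levels.
features = np.column_stack([p_hat, ci_width])
levels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(levels)   # cluster indices; relabel as High/Medium/Low/Poor by mean p_hat
```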

19.
We analyze a decentralized supply chain with a single risk‐averse retailer and multiple risk‐averse suppliers under a Conditional Value at Risk objective. We define coordinating contracts and show that the supply chain is coordinated only when the least risk‐averse agent bears the entire risk and the lowest‐cost supplier handles all production. However, due to competition, not all coordinating contracts are stable. Thus, we introduce the notion of the contract core, which reflects the agents' “bargaining power” and restricts the set of coordinating contracts to a subset that is “credible.” We also study the concept of contract equilibrium, which helps to characterize contracts that are immune to opportunistic renegotiation. We show that the concept of contract core imposes conditions on the share of profit among different agents, while the concept of contract equilibrium provides conditions on how the payment changes with the order quantity.
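For readers unfamiliar with the objective, here is a minimal empirical CVaR computation in Python (toy demand and contract numbers; the paper's contracts involve transfer payments between retailer and suppliers, which this sketch abstracts away):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional Value at Risk: mean loss in the worst (1 - alpha) tail."""
    losses = np.sort(np.asarray(losses))
    return losses[int(np.ceil(alpha * len(losses))):].mean()

rng = np.random.default_rng(0)
demand = rng.poisson(100, 10_000)
order = 100

# Two toy contracts seen from the retailer's side: loss per demand realization.
loss_a = np.maximum(0, order - demand) * 5.0   # retailer bears overstock cost alone
loss_b = np.abs(order - demand) * 3.0          # over/understock cost shared at a lower rate

print(cvar(loss_a), cvar(loss_b))   # a risk-averse retailer compares tail losses, not means
```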

20.
Ten years ago, the National Academy of Sciences released its risk assessment/risk management (RA/RM) “paradigm” that served to crystallize much of the early thinking about these concepts. By defining RA as a four-step process, operationally independent from RM, the paradigm has presented society with a scheme, or a conceptually common framework, for addressing many risky situations (e.g., carcinogens, noncarcinogens, and chemical mixtures). The procedure has facilitated decision-making in a wide variety of situations and has identified the most important research needs. The past decade, however, has revealed that additional progress is needed. These areas include addressing the appropriate interaction (not isolation) between RA and RM, improving the methods for assessing risks from mixtures, dealing with “adversity of effect,” deciding whether “hazard” should imply an exposure to environmental conditions or to laboratory conditions, and evolving the concept to include both health and ecological risk. Interest in and expectations of risk assessment are increasing rapidly. The emerging concept of “comparative risk” (i.e., distinguishing between large risks and smaller risks that may be qualitatively different) is at a level comparable to that held by the concept of “risk” just 10 years ago. Comparative risk stands in need of a paradigm of its own, especially given the current economic limitations. “Times are tough; Brother, can you paradigm?”
