Similar Documents
20 similar documents found (search time: 31 ms)
1.
《Risk Analysis》2018, 38(6): 1183-1201
In assessing environmental health risks, the risk characterization step synthesizes information gathered in evaluating exposures to stressors together with dose–response relationships, characteristics of the exposed population, and external environmental conditions. This article summarizes key steps of a cumulative risk assessment (CRA) followed by a discussion of considerations for characterizing cumulative risks. Cumulative risk characterizations differ considerably from single chemical‐ or single source‐based risk characterization. First, CRAs typically focus on a specific population instead of a pollutant or pollutant source and should include an evaluation of all relevant sources contributing to the exposures in the population and other factors that influence dose–response relationships. Second, CRAs may include influential environmental and population‐specific conditions, involving multiple chemical and nonchemical stressors. Third, a CRA could examine multiple health effects, reflecting joint toxicity and the potential for toxicological interactions. Fourth, the complexities often necessitate simplifying methods, including judgment‐based and semi‐quantitative indices that collapse disparate data into numerical scores. Fifth, because of the higher dimensionality and potentially large number of interactions, information needed to quantify risk is typically incomplete, necessitating an uncertainty analysis. Three approaches that could be used for characterizing risks in a CRA are presented: the multiroute hazard index, stressor grouping by exposure and toxicity, and indices for screening multiple factors and conditions. Other key roles of the risk characterization in CRAs are also described, mainly the translational aspect of including a characterization summary for lay readers (in addition to the technical analysis), and placing the results in the context of the likely risk‐based decisions.
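The multiroute hazard index named above can be sketched in a few lines: a hazard quotient (exposure dose over reference dose) is computed per route and the quotients are summed, with a sum above 1 flagging potential concern. All dose and reference-dose (RfD) values below are hypothetical illustrative numbers, not data from the article.

```python
# Hedged sketch of a multiroute hazard index, one of the three CRA
# characterization approaches described above. All exposure doses and
# reference doses (RfDs) are hypothetical illustrative values.

def hazard_quotient(dose, rfd):
    """Hazard quotient for one route: exposure dose / reference dose."""
    return dose / rfd

def multiroute_hazard_index(exposures):
    """Sum hazard quotients across exposure routes."""
    return sum(hazard_quotient(d, rfd) for d, rfd in exposures.values())

# (dose, RfD) pairs in mg/kg-day, purely illustrative
exposures = {
    "oral":       (0.02, 0.10),
    "inhalation": (0.01, 0.05),
    "dermal":     (0.005, 0.10),
}
hi = multiroute_hazard_index(exposures)
print(f"Hazard index = {hi:.2f}; concern flagged: {hi > 1.0}")
```

A hazard index is a screening-level summary: it collapses disparate route-specific data into one score, which is exactly the kind of simplifying device the abstract's fourth point describes.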

2.
Currently, there is a trend away from the use of single (often conservative) estimates of risk to summarize the results of risk analyses in favor of stochastic methods which provide a more complete characterization of risk. The use of such stochastic methods leads to a distribution of possible values of risk, taking into account both uncertainty and variability in all of the factors affecting risk. In this article, we propose a general framework for the analysis of uncertainty and variability for use in the commonly encountered case of multiplicative risk models, in which risk may be expressed as a product of two or more risk factors. Our analytical methods facilitate the evaluation of overall uncertainty and variability in risk assessment, as well as of the contributions of individual risk factors to both uncertainty and variability, an evaluation that is cumbersome using Monte Carlo methods. The use of these methods is illustrated in the analysis of potential cancer risks due to the ingestion of radon in drinking water.
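The analytical convenience of multiplicative models can be sketched directly: if risk R = X1·X2·…·Xn with independent lognormal factors, then ln R is normal, its mean and variance are sums of the component terms, and each factor's share of Var(ln R) quantifies its contribution without any simulation. The factor names and parameter values below are illustrative assumptions, not the article's radon inputs.

```python
import math

# Sketch of the analytical idea for multiplicative risk models: for
# R = X1 * X2 * ... * Xn with independent lognormal factors, ln(R) is
# normal with summed ln-means and ln-variances, so each factor's share of
# Var(ln R) measures its contribution to overall uncertainty/variability
# with no Monte Carlo. Names and parameters are illustrative only.

factors = {  # name: (geometric mean, geometric standard deviation)
    "water intake": (1.4, 1.3),
    "radon conc.":  (200.0, 2.5),
    "dose factor":  (1e-4, 1.8),
}

mu = sum(math.log(gm) for gm, gsd in factors.values())
var = sum(math.log(gsd) ** 2 for gm, gsd in factors.values())
print(f"Risk is lognormal: GM = {math.exp(mu):.3g}, "
      f"GSD = {math.exp(math.sqrt(var)):.2f}")
for name, (gm, gsd) in factors.items():
    share = math.log(gsd) ** 2 / var
    print(f"  {name}: {100 * share:.0f}% of Var(ln R)")
```

Because the variance shares are exact, ranking factors by contribution is a closed-form computation here, whereas a Monte Carlo analysis would need repeated conditional simulations to obtain the same decomposition.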

3.
A Scale of Risk     
This article proposes a conceptual framework for ranking the relative gravity of diverse risks. This framework identifies the moral considerations that should inform the evaluation and comparison of diverse risks. A common definition of risk includes two dimensions: the probability of occurrence and the associated consequences of a set of hazardous scenarios. This article first expands this definition to include a third dimension: the source of a risk. The source of a risk refers to the agents involved in the creation or maintenance of a risk and captures a central moral concern about risks. Then, a scale of risk is proposed to categorize risks along a multidimensional ranking, based on a comparative evaluation of the consequences, probability, and source of a given risk. A risk is ranked higher on the scale the larger the consequences, the greater the probability, and the more morally culpable the source. The information from the proposed comparative evaluation of risks can inform the selection of priorities for risk mitigation.

4.
The abandoned mine legacy is critical in many countries around the world, where mine cave-ins and surface subsidence disruptions are perpetual risks that can affect the population, infrastructure, historical legacies, land use, and the environment. This article establishes abandoned metal mine failure risk evaluation approaches and quantification techniques based on the Canadian mining experience. These utilize clear geomechanics considerations such as failure mechanisms, which are dependent on well-defined rock mass parameters. Quantified risk is computed as the probability of failure (derived probabilistically from limit-equilibrium factors of safety or from applicable numerical-modeling factor-of-safety quantifications) times a consequence impact value. Semi-quantified risk can be based on empirical data from failure case studies used in calculating probability of failure, and personal experience can provide qualified hazard and impact consequence assessments. The article provides outlines for land use and selection of remediation measures based on risk.
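The quantified-risk recipe above (probability of failure times a consequence impact value) can be sketched with a first-order reliability calculation: treating the factor of safety (FS) as normally distributed, the reliability index gives P(FS < 1). The FS statistics, normality assumption, and consequence score are all illustrative assumptions, not values from the article.

```python
import math

# Minimal sketch of "risk = probability of failure x consequence impact":
# a limit-equilibrium factor of safety (FS) with uncertainty yields a
# probability of failure via a first-order reliability index, assuming a
# normally distributed FS. All numbers are illustrative assumptions.

def prob_failure(fs_mean, fs_std):
    """P(FS < 1) for a normally distributed factor of safety."""
    beta = (fs_mean - 1.0) / fs_std          # reliability index
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

fs_mean, fs_std = 1.3, 0.2                   # crown-pillar FS, illustrative
consequence = 8.0                            # impact score, illustrative
pf = prob_failure(fs_mean, fs_std)
print(f"P(failure) = {pf:.3f}, risk = {pf * consequence:.2f}")
```

The same structure accommodates the article's semi-quantified variant: replace `prob_failure` with an empirical frequency from failure case studies and keep the multiplication by consequence unchanged.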

5.
Large parts of the Netherlands are below sea level. Therefore, it is important to have insight into the possible consequences and risks of flooding. In this article, an analysis of the risks due to flooding of the dike ring area South Holland in the Netherlands is presented. For different flood scenarios the potential number of fatalities is estimated. Results indicate that a flood event in this area can affect large and densely populated areas and result in hundreds to thousands of fatalities. Evacuation of South Holland before a coastal flood will be difficult due to the large amount of time required for evacuation and the limited time available. By combination with available information regarding the probability of occurrence of different flood scenarios, the flood risks have been quantified. The probability of death for a person in South Holland due to flooding, the so‐called individual risk, is small. The probability of a flood disaster with many fatalities, the so‐called societal risk, is relatively large in comparison with the societal risks in other sectors in the Netherlands, such as the chemical sector and aviation. The societal risk of flooding appears to be unacceptable according to some of the existing risk limits that have been proposed in literature. These results indicate the necessity of a further societal discussion on the acceptable level of flood risk in the Netherlands and the need for additional risk reducing measures.

6.
Concern about the degree of uncertainty and potential conservatism in deterministic point estimates of risk has prompted researchers to turn increasingly to probabilistic methods for risk assessment. With Monte Carlo simulation techniques, distributions of risk reflecting uncertainty and/or variability are generated as an alternative. In this paper the compounding of conservatism(1) between the level associated with point estimate inputs selected from probability distributions and the level associated with the deterministic value of risk calculated using these inputs is explored. Two measures of compounded conservatism are compared and contrasted. The first measure considered, F, is defined as the ratio of the risk value, Rd, calculated deterministically as a function of n inputs, each at the jth percentile of its probability distribution, to the risk value, Rj, that falls at the jth percentile of the simulated risk distribution (i.e., F = Rd/Rj). The percentile of the simulated risk distribution that corresponds to the deterministic value, Rd, serves as a second measure of compounded conservatism. Analytical results for simple products of lognormal distributions are presented. In addition, a numerical treatment of several complex cases is presented using five simulation analyses from the literature to illustrate. Overall, there are cases in which conservatism compounds dramatically for deterministic point estimates of risk constructed from upper percentiles of input parameters, as well as those for which the effect is less notable. The analytical and numerical techniques discussed are intended to help analysts explore the factors that influence the magnitude of compounding conservatism in specific cases.
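Both measures of compounded conservatism can be demonstrated numerically for the simple lognormal-product case the abstract mentions: set every input at its 95th percentile, compare the resulting deterministic risk Rd with the 95th percentile Rj of the simulated risk distribution, and report F = Rd/Rj and the percentile at which Rd falls. The number of inputs and the common geometric standard deviation below are illustrative choices.

```python
import math
import random

# Numerical illustration of compounded conservatism for a product of n
# independent lognormal inputs (GM = 1, common GSD). F = Rd / Rj is the
# first measure; the percentile of the simulated distribution at which the
# deterministic value Rd falls is the second. Parameters are illustrative.

random.seed(7)
n, j = 4, 0.95               # number of inputs, percentile used for inputs
gsd = 2.0                    # common geometric standard deviation
z95 = 1.6449                 # standard normal 95th percentile

# Deterministic risk: every input at its own 95th percentile
r_d = math.exp(n * z95 * math.log(gsd))

# Simulated risk distribution: product of n lognormal draws
sims = sorted(
    math.exp(sum(random.gauss(0.0, math.log(gsd)) for _ in range(n)))
    for _ in range(200_000)
)
r_j = sims[int(j * len(sims))]          # 95th percentile of simulated risk
f = r_d / r_j
pct = sum(s <= r_d for s in sims) / len(sims)
print(f"F = {f:.2f}; deterministic value sits at the {100 * pct:.2f}th percentile")
```

With four inputs, F is already near 10 and the deterministic estimate lands beyond the 99.9th percentile of the simulated distribution, which is the "dramatic compounding" case the abstract describes; with fewer inputs or smaller GSDs the effect is milder.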

7.
Given the lack of research on how the intraday US dollar exchange rate affects overnight risk in other exchange-rate markets, this article builds on the AS and SAV specifications of the CAViaR model to propose overnight-AS and overnight-SAV models for measuring overnight exchange-rate risk, and conducts an empirical analysis of the Japanese yen, Chinese yuan, and Hong Kong dollar exchange rates over 2009 to 2014. The results show that the overnight-AS and overnight-SAV models both outperform the original AS and SAV models, and that the overnight-AS model in turn outperforms the overnight-SAV model. The overnight risk of all three exchange rates is affected by lagged risk, with the yuan rate affected most; fluctuations in the US dollar index increase overnight risk in all three markets; the dollar's impact on the yen and Hong Kong dollar rates exceeds its impact on the yuan rate; and a weakening dollar has a larger effect on overnight risk in the three markets than a strengthening one. These findings provide new methods and perspectives for managing overnight exchange-rate risk in China.
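The two baseline recursions that the article extends can be sketched directly. In the SAV (symmetric absolute value) specification the conditional quantile evolves as VaR_t = b0 + b1·VaR_{t-1} + b2·|r_{t-1}|, while the AS (asymmetric slope) specification weights positive and negative lagged returns separately; the overnight variants add a lagged US-dollar term to these equations. The coefficients and returns below are illustrative, not estimated values from the article.

```python
# Sketch of the two baseline CAViaR recursions the article extends.
# Coefficients and the return series are illustrative, not estimated.

def sav_caviar(returns, b0=0.05, b1=0.85, b2=0.25, var0=1.0):
    """Symmetric absolute value: VaR_t = b0 + b1*VaR_{t-1} + b2*|r_{t-1}|."""
    var = [var0]
    for r in returns[:-1]:
        var.append(b0 + b1 * var[-1] + b2 * abs(r))
    return var

def as_caviar(returns, b0=0.05, b1=0.85, b2=0.15, b3=0.30, var0=1.0):
    """Asymmetric slope: negative lagged returns weighted more heavily (b3 > b2)."""
    var = [var0]
    for r in returns[:-1]:
        var.append(b0 + b1 * var[-1] + b2 * max(r, 0.0) + b3 * max(-r, 0.0))
    return var

rets = [0.4, -1.2, 0.3, -0.8, 2.1, -0.5]
print("SAV VaR path:", [round(v, 3) for v in sav_caviar(rets)])
print("AS  VaR path:", [round(v, 3) for v in as_caviar(rets)])
```

In the full CAViaR framework these coefficients are estimated by minimizing the quantile-regression loss; the sketch only shows how the risk measure propagates once coefficients are given.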

8.
陆静 《管理工程学报》2012,26(3):136-145
Although the Advanced Measurement Approach is favored by most commercial banks for its computational accuracy and the regulatory capital it saves, there is no consensus on which method best characterizes the low-frequency, high-severity tail of operational risk data. Following the Basel Committee's principles for operational risk measurement, this article applies the block maxima method with probability-weighted-moment parameter estimation to operational risk data from Chinese commercial banks over 1990 to 2009. Both graphical and numerical tests indicate that the estimated parameters achieve a good fit: the model captures the tail distribution of extreme operational losses well and provides a valuable reference for commercial banks computing operational risk capital.
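The estimation recipe named above can be sketched end to end: group losses into blocks, take block maxima (assumed to follow a generalized extreme value distribution), and estimate the GEV shape by probability-weighted moments using Hosking's approximation. The simulated exponential losses below are an illustrative stand-in for operational loss data, not the article's dataset.

```python
import math
import random

# Sketch of block maxima + probability-weighted-moment (PWM) estimation:
# block maxima are assumed GEV and the shape is estimated via Hosking's
# PWM approximation. Simulated exponential losses are illustrative only.

random.seed(9)
losses = [random.expovariate(1.0) for _ in range(5000)]
block = 50
maxima = sorted(max(losses[i:i + block]) for i in range(0, len(losses), block))

n = len(maxima)
b0 = sum(maxima) / n                                   # sample PWMs b0, b1, b2
b1 = sum(j * x for j, x in enumerate(maxima)) / (n * (n - 1))
b2 = sum(j * (j - 1) * x for j, x in enumerate(maxima)) / (n * (n - 1) * (n - 2))

c = (2 * b1 - b0) / (3 * b2 - b0) - math.log(2) / math.log(3)
shape = 7.8590 * c + 2.9554 * c * c    # Hosking's approximation (k = -xi)
print(f"estimated GEV shape parameter k = {shape:.3f}")
# near 0 here, since maxima of exponential blocks are approximately Gumbel
```

For heavy-tailed operational losses the estimated shape would move away from zero, signaling the "low-frequency, high-severity" tail the abstract emphasizes.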

9.
林宇  魏宇  程宏伟 《管理评论》2012,(1):18-25,51
To address the asymmetric structure observed in financial markets, this article takes the Shanghai Composite Index (SSEC), representing an emerging market (mainland China), and the Standard & Poor's index (S&P 500), representing a mature market, as representative research objects. The skewed Student distribution (SKST) is used to capture the skewed, asymmetric distributional form of financial returns; APARCH-type models are used to capture the asymmetric conditional volatility of returns; and risk measurement is carried out on this basis. Backtesting with the likelihood ratio test (LRT) and dynamic quantile regression (DQR) is then used to check the accuracy of the risk measures. The empirical results show that no conditional asymmetric-volatility model has absolutely superior risk measurement ability. In the distribution of standardized returns, the emerging-market SSEC and the mature-market S&P 500 differ markedly: the Normal distribution is unsuitable for the SSEC but suitable for the S&P 500; the Student distribution (ST) fits the SSEC but not the S&P 500; and the SKST fits both markets. For the S&P 500, the Normal distribution performs best at the high 99% confidence level, while the SKST performs best at the 95% level; for the SSEC, the SKST is the most accurate at both levels.

10.
Decades of research identify risk perception as a largely intuitive and affective construct, in contrast to the more deliberative assessments of probability and consequences that form the foundation of risk assessment. However, a review of the literature reveals that many of the risk perception measures employed in survey research with human subjects are either generic in nature, not capturing any particular affective, probabilistic, or consequential dimension of risk; or focused solely on judgments of probability. The goal of this research was to assess a multidimensional measure of risk perception across multiple hazards to identify a measure that will be broadly useful for assessing perceived risk moving forward. Our results support the idea of risk perception being multidimensional, but largely a function of individual affective reactions to the hazard. We also find that our measure of risk perception holds across multiple types of hazards, ranging from those that are behavioral in nature (e.g., health and safety behaviors), to those that are technological (e.g., pollution), or natural (e.g., extreme weather). We suggest that a general, unidimensional measure of risk may accurately capture one's perception of the severity of the consequences, and the discrete emotions that are felt in response to those potential consequences. However, such a measure is not likely to capture the perceived probability of experiencing the outcomes, nor will it be as useful at understanding one's motivation to take mitigation action.

11.
Upon shutting down operations in early 2020 due to the COVID-19 pandemic, the movie industry assembled teams of experts to help develop guidelines for returning to operation. It resulted in a joint report, The Safe Way Forward, which was created in consultation with union members and provided the basis for negotiations with the studios. A centerpiece of the report was a set of heatmaps displaying SARS-CoV-2 risks for a shoot, as a function of testing rate, community infection prevalence, community transmission rate (R0), and risk measure (either expected number of cases or probability of at least one case). We develop and demonstrate a methodology for evaluating such complex displays, in terms of how well they inform potential users, in this case, workers deciding whether the risks of a shoot are acceptable. We ask whether individuals making hypothetical return-to-work decisions can (a) read display entries, (b) compare display entries, and (c) make inferences based on display entries. Generally speaking, respondents recruited through the Amazon MTurk platform could interpret the display information accurately and make coherent decisions, suggesting that heatmaps can communicate complex risks to lay audiences. Although these heatmaps were created for practical, rather than theoretical, purposes, these results provide partial support for theoretical accounts of visual information processing and identify challenges in applying them to complex settings.
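The two risk measures shown in the report's heatmaps can be sketched with a toy binomial model: each of n crew members independently arrives infected with probability p. This deliberately ignores testing and on-set transmission (R0), which the actual Safe Way Forward model accounts for, so the numbers below are purely illustrative.

```python
# Toy binomial sketch of the two heatmap risk measures described above.
# It omits testing and on-set transmission (R0), which the real model
# includes; n and p are illustrative assumptions.

def expected_cases(n, p):
    """Expected number of infected arrivals among n crew members."""
    return n * p

def prob_at_least_one(n, p):
    """Probability that at least one of n crew members arrives infected."""
    return 1.0 - (1.0 - p) ** n

n, p = 50, 0.01   # illustrative: 50-person shoot, 1% community prevalence
print(f"Expected cases: {expected_cases(n, p):.2f}")
print(f"P(at least one case): {prob_at_least_one(n, p):.3f}")
```

The contrast between the two measures is itself a display-design question: the expected count stays below one person here, while the probability of at least one case is already substantial, and a reader asked to judge acceptability may reach different conclusions depending on which cell of the heatmap they consult.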

12.
Differences in the conceptual frameworks of scientists and nonscientists may create barriers to risk communication. This article examines two such conceptual problems. First, the logic of "direct inference" from group statistics to probabilities about specific individuals suggests that individuals might be acting rationally in refusing to apply to themselves the conclusions of regulatory risk assessments. Second, while regulators and risk assessment scientists often use an "objectivist" or "relative frequency" interpretation of probability statements, members of the public are more likely to adopt a "subjectivist" or "degree of confidence" interpretation when estimating their personal risks, and either misunderstand or significantly discount the relevance of risk assessment conclusions. If these analyses of inference and probability are correct, there may be a conceptual gulf at the center of risk communication that cannot be bridged by additional data about the magnitude of group risk. Suggestions are made for empirical studies that might help regulators deal with this conceptual gulf.

13.
Unlike the traditional literature, which uses the percentage reduction in risk as the measure of hedging efficiency, this article introduces expected utility theory to compare the efficiency of the minimum-variance, minimum-VaR (Value-at-Risk), and minimum-CVaR (Conditional Value-at-Risk) hedging strategies, thereby linking investors' risk attitudes to the choice of hedging strategy so that investors with different risk attitudes can select different strategies. Using a risk-neutral utility function, a quadratic utility function, and a CARA utility function, the article rigorously proves that among the three strategies the minimum-variance hedge is overly conservative and the minimum-VaR hedge the most aggressive: strongly risk-averse investors prefer the minimum-variance strategy, while risk-neutral and mildly risk-averse investors prefer the minimum-VaR strategy, and the minimum-CVaR strategy lies between the two.
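The most conservative of the three strategies, the minimum-variance hedge, has a closed form worth sketching: the hedge ratio h* = Cov(dS, dF) / Var(dF) minimizes the variance of the hedged position dS - h·dF. The simulated spot and futures returns below are illustrative stand-ins, not data from the article.

```python
import random
import statistics

# Sketch of the minimum-variance hedge discussed above:
# h* = Cov(dS, dF) / Var(dF) minimizes Var(dS - h * dF).
# Spot/futures returns are simulated illustrative data.

random.seed(3)
common = [random.gauss(0, 1) for _ in range(5000)]
spot = [c + random.gauss(0, 0.4) for c in common]      # spot returns
fut = [c + random.gauss(0, 0.3) for c in common]       # futures returns

ms, mf = statistics.mean(spot), statistics.mean(fut)
cov = sum((s - ms) * (f - mf) for s, f in zip(spot, fut)) / (len(spot) - 1)
h = cov / statistics.variance(fut)                     # minimum-variance ratio
hedged = [s - h * f for s, f in zip(spot, fut)]
print(f"h* = {h:.3f}")
print(f"variance: unhedged {statistics.variance(spot):.3f} "
      f"-> hedged {statistics.variance(hedged):.3f}")
```

The minimum-VaR and minimum-CVaR ratios replace the variance objective with a quantile or tail-expectation objective and generally have no closed form, which is part of why the article's utility-based comparison of the three is informative.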

14.
Correlations among bank risks allow risks to transform into and influence one another, causing total risk to be amplified or attenuated and significantly affecting the accuracy of bank risk measurement. Bank risk aggregation aims to measure bank risk accurately while fully accounting for these correlations. However, bank risks exhibit many types of correlation, complex forms of dependence, and poor data availability, which pose substantial challenges for aggregate measurement. This article reviews research on bank risk aggregation under correlation, organized systematically at three levels: the objects, the methods, and the data of aggregation. It first analyzes bank risks and the many types of correlation they embody, then examines the complex characteristics these correlations exhibit and classifies and compares aggregation methods by their ability to capture those characteristics, and finally summarizes the various channels for obtaining bank risk aggregation data. On this basis, it further analyzes the difficulties and future directions of bank risk aggregation research.

15.
Extreme Risk Measurement Based on the EVT-POT-SV-MT Model
To address abnormal movements in financial asset returns, the SV-MT model is used to apply a risk compensation to the expected return on risky assets and to capture features of the return series such as heavy tails and the heteroscedasticity of volatility, transforming the return series into a standardized residual series. The tail distribution of the standardized residuals is then fitted by combining the SV-MT model with extreme value theory, yielding a new financial risk measurement model: a dynamic VaR model based on EVT-POT-SV-MT. An empirical analysis of the Shanghai Composite Index with this model shows that it measures the risk of the index's returns reasonably and effectively.
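The peaks-over-threshold (POT) step in the pipeline above can be sketched on its own: exceedances of the standardized residuals over a threshold u are fitted with a generalized Pareto distribution (GPD), and VaR at level q follows the standard POT formula. Here a method-of-moments GPD fit and simulated heavy-tailed "residuals" stand in for the article's SV-MT-filtered series; all choices are illustrative.

```python
import math
import random

# Sketch of the POT step: fit a GPD to threshold exceedances of the
# standardized residuals, then read off VaR from the standard POT formula
# VaR_q = u + (beta/xi) * (((n/N_u) * (1-q))**(-xi) - 1).
# Method-of-moments fitting and simulated t(5)-like residuals are
# illustrative stand-ins for the article's SV-MT-filtered series.

random.seed(1)
resid = [random.gauss(0, 1) / math.sqrt(random.gammavariate(2.5, 1) / 2.5)
         for _ in range(20_000)]                 # heavy-tailed residuals
losses = [-r for r in resid]                     # work with losses

u = sorted(losses)[int(0.90 * len(losses))]      # 90th-percentile threshold
exc = [x - u for x in losses if x > u]
m = sum(exc) / len(exc)
v = sum((e - m) ** 2 for e in exc) / (len(exc) - 1)
xi = 0.5 * (1 - m * m / v)                       # method-of-moments GPD fit
beta = 0.5 * m * (m * m / v + 1)

q = 0.99
var_q = u + (beta / xi) * (((len(losses) / len(exc)) * (1 - q)) ** (-xi) - 1)
emp = sorted(losses)[int(q * len(losses))]
print(f"POT VaR(99%) = {var_q:.3f} vs empirical quantile {emp:.3f}")
```

In the dynamic model the same tail fit is applied to standardized residuals and then rescaled each period by the SV-MT conditional volatility, which is what makes the resulting VaR time-varying.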

16.
A major issue in all risk communication efforts is the distinction between the terms “risk” and “hazard.” The potential to harm a target such as human health or the environment is normally defined as a hazard, whereas risk also encompasses the probability of exposure and the extent of damage. What can be observed again and again in risk communication processes are misunderstandings and communication gaps related to these crucial terms. We asked a sample of 53 experts from public authorities, business and industry, and environmental and consumer organizations in Germany to outline their understanding and use of these terms using both the methods of expert interviews and focus groups. The empirical study made clear that the terms risk and hazard are perceived and used very differently in risk communication depending on the perspective of the stakeholders. Several factors can be identified, such as responsibility for hazard avoidance, economic interest, or a watchdog role. Thus, communication gaps can be reduced to a four‐fold problem matrix comprising a semantic, conceptual, strategic, and control problem. The empirical study made clear that risks and hazards are perceived very differently depending on the stakeholders’ perspective. Their own worldviews played a major role in their specific use of the two terms hazards and risks in communication.

17.
Qualitative systems for rating animal antimicrobial risks using ordered categorical labels such as “high,” “medium,” and “low” can potentially simplify risk assessment input requirements used to inform risk management decisions. But do they improve decisions? This article compares the results of qualitative and quantitative risk assessment systems and establishes some theoretical limitations on the extent to which they are compatible. In general, qualitative risk rating systems satisfying conditions found in real‐world rating systems and guidance documents and proposed as reasonable make two types of errors: (1) Reversed rankings, i.e., assigning higher qualitative risk ratings to situations that have lower quantitative risks; and (2) Uninformative ratings, e.g., frequently assigning the most severe qualitative risk label (such as “high”) to situations with arbitrarily small quantitative risks and assigning the same ratings to risks that differ by many orders of magnitude. Therefore, despite their appealing consensus‐building properties, flexibility, and appearance of thoughtful process in input requirements, qualitative rating systems as currently proposed often do not provide sufficient information to discriminate accurately between quantitatively small and quantitatively large risks. The value of information (VOI) that they provide for improving risk management decisions can be zero if most risks are small but a few are large, since qualitative ratings may then be unable to confidently distinguish the large risks from the small. These limitations suggest that it is important to continue to develop and apply practical quantitative risk assessment methods, since qualitative ones are often unreliable.
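The "reversed rankings" error is easy to demonstrate with a toy rating scheme: probability and consequence are each binned into low/medium/high and combined by taking the worst label, while the quantitative risk is simply probability times consequence. The bin edges and the two example situations below are illustrative assumptions, not taken from the article.

```python
# Toy demonstration of the "reversed rankings" error described above.
# Bin edges and example situations are illustrative assumptions.

def label(x, edges=(1e-4, 1e-2)):
    """Bin a value into low / medium / high using illustrative cut points."""
    return "low" if x < edges[0] else "medium" if x < edges[1] else "high"

RANK = {"low": 0, "medium": 1, "high": 2}

def qualitative(p, c):
    """Worst-case combination of the probability and consequence labels."""
    return max(label(p), label(c), key=RANK.get)

# Situation A: tiny probability, consequence in the "high" bin
# Situation B: moderate probability and moderate consequence
a = (1e-6, 0.02)    # quantitative risk 2e-8, yet rated "high"
b = (5e-3, 5e-3)    # quantitative risk 2.5e-5, yet rated "medium"
for name, (p, c) in {"A": a, "B": b}.items():
    print(f"{name}: qualitative={qualitative(p, c)}, quantitative={p * c:.1e}")
```

Situation A receives the more severe qualitative label despite a quantitative risk three orders of magnitude smaller than B's, which is precisely the ranking reversal the abstract identifies.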

18.
The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1992 Nobel Prize in Economics. A typical approach in measuring a portfolio's expected return is based on the historical returns of the assets included in a portfolio. On the other hand, portfolio risk is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that on October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001 that led to a four-day suspension of trading on the New York Stock Exchange (NYSE) are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of an extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of the possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model. However, under extremely unfavorable market conditions, results indicate that f(4) can be a more valid measure of risk than volatility.
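The extreme-risk metric f(4) can be sketched in a few lines: it is the conditional expectation of portfolio return given that the return falls in a lower-tail partition, here taken as everything below the alpha-quantile. The simulated normal returns and the choice of alpha are illustrative stand-ins for historical portfolio returns.

```python
import random
import statistics

# Sketch of the f(4) metric described above: the conditional expectation of
# portfolio return, given that it falls in the lower-tail partition (below
# the alpha-quantile). Simulated normal returns are illustrative only.

random.seed(42)
returns = [random.gauss(0.05, 0.15) for _ in range(100_000)]
alpha = 0.05                              # lower-tail partition point

cutoff = sorted(returns)[int(alpha * len(returns))]
tail = [r for r in returns if r <= cutoff]
f4 = statistics.mean(tail)                # expected return, given a tail event
print(f"mean return {statistics.mean(returns):.3f}, "
      f"f4 (worst {alpha:.0%} tail) {f4:.3f}")
```

Because f(4) averages over the tail rather than summarizing the whole distribution, two portfolios with identical volatility can have very different f(4) values, which is what lets the multiobjective formulation trade expected return against extreme risk specifically.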

19.
Following the 2013 Chelyabinsk event, the risks posed by asteroids attracted renewed interest, from both the scientific and policy‐making communities. It reminded the world that impacts from near‐Earth objects (NEOs), while rare, have the potential to cause great damage to cities and populations. Point estimates of the risk (such as mean numbers of casualties) have been proposed, but because of the low‐probability, high‐consequence nature of asteroid impacts, these averages provide limited actionable information. While more work is needed to further refine its input distributions (e.g., NEO diameters), the probabilistic model presented in this article allows a more complete evaluation of the risk of NEO impacts because the results are distributions that cover the range of potential casualties. This model is based on a modularized simulation that uses probabilistic inputs to estimate probabilistic risk metrics, including those of rare asteroid impacts. Illustrative results of this analysis are presented for a period of 100 years. As part of this demonstration, we assess the effectiveness of civil defense measures in mitigating the risk of human casualties. We find that they are likely to be beneficial but not a panacea. We also compute the probability—but not the consequences—of an impact with global effects (“cataclysm”). We conclude that there is a continued need for NEO observation, and for analyses of the feasibility and risk‐reduction effectiveness of space missions designed to deflect or destroy asteroids that threaten the Earth.

20.
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decisionmaker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location‐scale families (including the log‐normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications.
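The article's core claim can be illustrated numerically: when the threshold meant to cap failure probability at a nominal level is set from estimated parameters, the realized (expected) failure frequency exceeds the nominal level. Normally distributed log-losses (i.e., lognormal losses, a member of the location-scale class named above), the sample size, and the nominal level are illustrative choices.

```python
import random
import statistics

# Numerical sketch of the effect of parameter uncertainty on failure
# probability: a plug-in threshold built from estimated location/scale
# parameters is exceeded more often than the nominal level suggests.
# Lognormal losses (normal log-losses) and all sizes are illustrative.

random.seed(0)
nominal, n_sample, trials = 0.01, 30, 20_000
z99 = 2.3263                        # standard normal 99th percentile
failures = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n_sample)]   # ln(losses)
    mu_hat = statistics.mean(sample)
    sd_hat = statistics.stdev(sample)
    threshold = mu_hat + z99 * sd_hat       # plug-in 99% control level
    failures += random.gauss(0, 1) > threshold   # next loss exceeds it?
freq = failures / trials
print(f"nominal failure prob {nominal:.3f}, realized {freq:.4f}")
```

For location-scale families the realized frequency depends only on the sample size and the nominal level, not on the unknown true parameters, which is what makes the article's exact corrections (approach 1) computable in advance.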
