Similar Documents
20 similar documents found.
1.
Regulatory agencies often perform microbial risk assessments to evaluate the change in the number of human illnesses as the result of a new policy that reduces the level of contamination in the food supply. These agencies generally have regulatory authority over the production and retail sectors of the farm‐to‐table continuum. Any predicted change in contamination that results from new policy that regulates production practices occurs many steps prior to consumption of the product. This study proposes a framework for conducting microbial food‐safety risk assessments; this framework can be used to quantitatively assess the annual effects of national regulatory policies. Advantages of the framework are that estimates of human illnesses are consistent with national disease surveillance data (which are usually summarized on an annual basis) and some of the modeling steps that occur between production and consumption can be collapsed or eliminated. The framework leads to probabilistic models that include uncertainty and variability in critical input parameters; these models can be solved using a number of different Bayesian methods. The Bayesian synthesis method performs well for this application and generates posterior distributions of parameters that are relevant to assessing the effect of implementing a new policy. An example, based on Campylobacter and chicken, estimates the annual number of illnesses avoided by a hypothetical policy; this output could be used to assess the economic benefits of a new policy. Empirical validation of the policy effect is also examined by estimating the annual change in the numbers of illnesses observed via disease surveillance systems.
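The annual framing of the framework lends itself to simple conjugate updating. Below is a minimal sketch, not the paper's Bayesian synthesis method: a Gamma-Poisson update of the annual illness rate from surveillance counts, followed by the illnesses avoided under a policy that scales contamination. All counts, the prior, the 20% reduction, and the assumption that illnesses scale linearly with contamination are hypothetical.

```python
import random

def posterior_illness_rate(prior_shape, prior_rate, annual_counts):
    """Conjugate Gamma-Poisson update: Gamma(shape, rate) prior on the
    annual illness rate, with Poisson-distributed annual surveillance counts."""
    return prior_shape + sum(annual_counts), prior_rate + len(annual_counts)

def illnesses_avoided(shape, rate, contamination_reduction, draws=50_000, seed=1):
    """Posterior-mean estimate of annual illnesses avoided, assuming
    illnesses scale linearly with contamination (a strong assumption)."""
    rng = random.Random(seed)
    mean = sum(rng.gammavariate(shape, 1.0 / rate) for _ in range(draws)) / draws
    return contamination_reduction * mean

# Hypothetical surveillance data: three years of national case counts.
shape, rate = posterior_illness_rate(1.0, 0.001, [13200, 12800, 13500])
avoided = illnesses_avoided(shape, rate, contamination_reduction=0.20)
```

Because the posterior is expressed on an annual scale, `avoided` is directly comparable with disease-surveillance summaries, which is the consistency the framework aims for.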

2.
A better understanding of the uncertainty that exists in models used for seismic risk assessment is critical to improving risk-based decisions pertaining to earthquake safety. Current models estimating the probability of collapse of a building do not consider comprehensively the nature and impact of uncertainty. This article presents a model framework to enhance seismic risk assessment and thus gives decisionmakers a fuller understanding of the nature and limitations of the estimates. This can help ensure that risks are not over- or underestimated and the value of acquiring accurate data is appreciated fully. The methodology presented provides a novel treatment of uncertainties in input variables, their propagation through the model, and their effect on the results. The study presents ranges of possible annual collapse probabilities for different case studies on buildings in different parts of the world, exposed to different levels of seismicity, and with different vulnerabilities. A global sensitivity analysis was conducted to determine the significance of uncertain variables. Two key outcomes are (1) that the uncertainty in ground-motion conversion equations has the largest effect on the uncertainty in the calculation of annual collapse probability; and (2) the vulnerability of a building appears to have an effect on the range of annual collapse probabilities produced, i.e., the level of uncertainty in the estimate of annual collapse probability, with less vulnerable buildings having a smaller uncertainty.

3.
Over the past decade, terrorism risk has become a prominent consideration in protecting the well‐being of individuals and organizations. More recently, there has been interest in not only quantifying terrorism risk, but also placing it in the context of an all‐hazards environment in which consideration is given to accidents and natural hazards, as well as intentional acts. This article discusses the development of a regional terrorism risk assessment model designed for this purpose. The approach taken is to model terrorism risk as a dependent variable, expressed in expected annual monetary terms, as a function of attributes of population concentration and critical infrastructure. This allows for an assessment of regional terrorism risk in and of itself, as well as in relation to man‐made accident and natural hazard risks, so that mitigation resources can be allocated in an effective manner. The adopted methodology incorporates elements of two terrorism risk modeling approaches (event‐based models and risk indicators), producing results that can be utilized at various jurisdictional levels. The validity, strengths, and limitations of the model are discussed in the context of a case study application within the United States.

4.
The effective use of evidence and its resultant knowledge is increasingly recognized as critical in risk analysis. This, in turn, has led to a growing concern over issues of epistemology in risk communication, and, in particular, interest in how knowledge is constructed and employed by the key players in risk: scientists, policy makers, and the public. This article uses a critical theoretical approach to explore how evidence is recognized and validated, and how limits are placed on knowledge by scientists, policy makers, and the public. It brings together developments in the sociology of science, policy and policy development, public understandings of science, and risk communication and analysis to explicate the differing forms of rationality employed by each group. The work concludes that each group employs different, although equally legitimate, forms of rationality when evaluating evidence and generating knowledge around risky environmental and health issues. Scientists, policy makers, and the public employ scientific, political, and social rationality, respectively. These differing forms of rationality reflect underlying epistemological distances from which considerable misunderstandings and misinterpretations can develop.

5.
This article presents a flood risk analysis model that considers the spatially heterogeneous nature of flood events. The basic concept of this approach is to generate a large sample of flood events that can be regarded as temporal extrapolation of flood events. These are combined with cumulative flood impact indicators, such as building damages, to finally derive time series of damages for risk estimation. Therefore, a multivariate modeling procedure that is able to take into account the spatial characteristics of flooding, the regionalization method top‐kriging, and three different impact indicators are combined in a model chain. Eventually, the expected annual flood impact (e.g., expected annual damages) and the flood impact associated with a low probability of occurrence are determined for a study area. The risk model has the potential to augment the understanding of flood risk in a region and thereby contribute to enhanced risk management of, for example, risk analysts and policymakers or insurance companies. The modeling framework was successfully applied in a proof‐of‐concept exercise in Vorarlberg (Austria). The results of the case study show that risk analysis has to be based on spatially heterogeneous flood events in order to estimate flood risk adequately.

6.
Catastrophic events, such as floods, earthquakes, hurricanes, and tsunamis, are rare, yet the cumulative risk of each event occurring at least once over an extended time period can be substantial. In this work, we assess the perception of cumulative flood risks, how those perceptions affect the choice of insurance, and whether perceptions and choices are influenced by cumulative risk information. We find that participants' cumulative risk judgments are well represented by a bimodal distribution, with a group that severely underestimates the risk and a group that moderately overestimates it. Individuals who underestimate cumulative risks make more risk‐seeking choices compared to those who overestimate cumulative risks. Providing explicit cumulative risk information for relevant time periods, as opposed to annual probabilities, is an inexpensive and effective way to improve both the perception of cumulative risk and the choices people make to protect against that risk.
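The cumulative risk in question is the standard complement rule for independent years. For example, a "100-year" flood (1% annual probability) has roughly a one-in-four chance of striking at least once during a 30-year mortgage, which is exactly the kind of reframing the study finds effective:

```python
def cumulative_risk(annual_prob: float, years: int) -> float:
    """Probability of at least one event in `years` independent years,
    each with the same annual occurrence probability."""
    return 1.0 - (1.0 - annual_prob) ** years

# A 1%-per-year flood over a 30-year mortgage:
risk_30yr = cumulative_risk(0.01, 30)  # roughly 0.26
```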

7.
Critical infrastructure systems must be both robust and resilient in order to ensure the functioning of society. To improve the performance of such systems, we often use risk and vulnerability analysis to find and address system weaknesses. A critical component of such analyses is the ability to accurately determine the negative consequences of various types of failures in the system. Numerous mathematical and simulation models exist that can be used to this end. However, there are relatively few studies comparing the implications of using different modeling approaches in the context of comprehensive risk analysis of critical infrastructures. In this article, we suggest a classification of these models, which span from simple topologically‐oriented models to advanced physical‐flow‐based models. Here, we focus on electric power systems and present a study aimed at understanding the tradeoffs between simplicity and fidelity in models used in the context of risk analysis. Specifically, the purpose of this article is to compare performance estimates achieved with a spectrum of approaches typically used for risk and vulnerability analysis of electric power systems and evaluate if more simplified topological measures can be combined using statistical methods to be used as a surrogate for physical flow models. The results of our work provide guidance as to appropriate models or combinations of models to use when analyzing large‐scale critical infrastructure systems, where simulation times quickly become insurmountable when using more advanced models, severely limiting the extent of analyses that can be performed.

8.
Risk of Extreme Events Under Nonstationary Conditions
The concept of the return period is widely used in the analysis of the risk of extreme events and in engineering design. For example, a levee can be designed to protect against the 100-year flood, the flood which on average occurs once in 100 years. Use of the return period typically assumes that the probability of occurrence of an extreme event in the current or any future year is the same. However, there is evidence that potential climate change may affect the probabilities of some extreme events such as floods and droughts. In turn, this would affect the level of protection provided by the current infrastructure. For an engineering project, the risk of an extreme event in a future year could greatly exceed the average annual risk over the design life of the project. An equivalent definition of the return period under stationary conditions is the expected waiting time before failure. This paper examines how this definition can be adapted to nonstationary conditions. Designers of flood control projects should be aware that alternative definitions of the return period imply different risk under nonstationary conditions. The statistics of extremes and extreme value distributions are useful to examine extreme event risk. This paper uses a Gumbel Type I distribution to model the probability of failure under nonstationary conditions. The probability of an extreme event under nonstationary conditions depends on the rate of change of the parameters of the underlying distribution.
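The expected-waiting-time definition can be sketched numerically. Below, the annual exceedance probability comes from a Gumbel Type I distribution whose location parameter drifts linearly each year; the drift rate and the unit location/scale are illustrative choices, not the paper's case values:

```python
import math

def gumbel_exceedance(threshold, loc=0.0, scale=1.0):
    """Annual exceedance probability P(annual maximum > threshold)
    for a Gumbel Type I distribution."""
    return 1.0 - math.exp(-math.exp(-(threshold - loc) / scale))

def expected_waiting_time(threshold, loc0=0.0, trend=0.0, scale=1.0, horizon=5000):
    """Expected number of years until the first exceedance when the Gumbel
    location parameter drifts by `trend` per year (nonstationary conditions)."""
    survive, expected = 1.0, 0.0
    for t in range(1, horizon + 1):
        p = gumbel_exceedance(threshold, loc0 + trend * (t - 1), scale)
        expected += t * survive * p   # first exceedance occurs in year t
        survive *= 1.0 - p
    return expected

# Design level with exactly a 1% annual exceedance probability when stationary:
design = -math.log(-math.log(0.99))
stationary = expected_waiting_time(design)            # the classic 100-year wait
drifting = expected_waiting_time(design, trend=0.02)  # upward climate-like drift
```

Under the stationary case the expected waiting time reproduces the 100-year return period; with an upward trend in the location parameter it shortens, illustrating why the two definitions diverge under nonstationarity.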

9.
Large-group emergency decision making faces numerous sources of risk whose influence on decisions cannot be ignored. This paper systematically identifies the risk factors in large-group emergency decision making from both the individual and the group perspective, relates each risk factor to two types of group effects (cognitive conflict and relationship conflict), and builds a causal system of decision risk for large-group emergency decision making. On this basis, simulation variables covering individual approval, group structure, communication mode, decision strategy, and external influence are defined, and a multi-agent simulation model of the risk causation is built with the NetLogo tool based on opinion dynamics. Case simulations then yield general laws governing the causal mechanism of each risk factor. The simulation results show that controlling the proportion of highly endorsed decision makers, increasing interaction between clusters, and taking the necessary anticipatory measures help reduce decision risk, speed up decision consensus, and cope with a highly dynamic decision environment. The study helps clarify the composition and influence patterns of risk factors in large-group emergency decision making and offers a reference for guiding emergency decision strategies.

10.
Global supplier development is a multi-criterion decision problem that includes both qualitative and quantitative factors. The global supplier selection problem is more complex than the domestic one and needs more critical analysis. The aim of this paper is to identify and discuss some of the important and critical decision criteria, including risk factors, for the development of an efficient system for global supplier selection. A fuzzy extended analytic hierarchy process (FEAHP) based methodology is discussed to tackle the different decision criteria such as cost, quality, service performance, and supplier's profile, including the risk factors involved in the selection of a global supplier in the current business scenario. FEAHP is an efficient tool to handle the fuzziness of the data involved in deciding the preferences of different decision variables. The linguistic comparisons produced by the customers and experts are captured in the form of triangular fuzzy numbers to construct fuzzy pairwise comparison matrices. The implementation of the system is demonstrated by a problem with a four-level hierarchy containing different criteria and attributes from a wider perspective. The proposed model not only provides a framework for the organization to select the global supplier but is also capable of deploying the organization's strategy to its suppliers.
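The building blocks of such a method can be sketched briefly. The snippet below implements triangular fuzzy numbers and the degree-of-possibility comparison used in Chang's extent analysis, on which FEAHP-style methods are commonly based; the two linguistic ratings are hypothetical examples, not values from the paper:

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number (l, m, u) capturing a linguistic comparison."""
    l: float
    m: float
    u: float

    def __add__(self, other):
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)

def possibility(a: TFN, b: TFN) -> float:
    """Degree of possibility V(a >= b), as used in Chang's extent analysis."""
    if a.m >= b.m:
        return 1.0
    if b.l >= a.u:
        return 0.0
    return (b.l - a.u) / ((a.m - a.u) - (b.m - b.l))

# Hypothetical ratings: cost judged "moderately important", quality "weakly".
cost = TFN(2, 3, 4)
quality = TFN(1, 2, 3)
```

In a full FEAHP, these pairwise degrees of possibility are aggregated across a fuzzy comparison matrix and normalized into crisp priority weights for each criterion.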

11.
Longitudinal studies are the gold standard of empirical work and stress research whenever experiments are not plausible. Frequently, scales are used to assess risk factors and their consequences, and cross-lagged effects are estimated to determine possible risks. Methods to translate cross-lagged effects into risk ratios to facilitate risk assessment do not yet exist, which creates a divide between psychological and epidemiological work stress research. The aim of the present paper is to demonstrate how cross-lagged effects can be used to assess the risk ratio of different levels of psychosocial safety climate (PSC) in organisations, an important psychosocial risk for the development of depression. We used available longitudinal evidence from the Australian Workplace Barometer (N = 1905) to estimate cross-lagged effects of PSC on depression. We applied continuous time modelling to obtain time-scalable cross effects. These were further investigated in a 4-year Monte Carlo simulation, which translated them into 4-year incident rates. Incident rates were determined by relying on clinically relevant 2-year periods of depression. We suggest a critical value of PSC = 26 (corresponding to -1.4 SD), which is indicative of more than 100% increased incidents of persistent depressive disorder in 4-year periods compared to average levels of PSC across 4 years.

12.
Tim Bedford. Risk Analysis, 2013, 33(10): 1884-1898
Group risk is usually represented by FN curves showing the frequency of different accident sizes for a given activity. Many governments regulate group risk through FN criterion lines, which define the tolerable location of an FN curve. However, to compare different risk reduction alternatives, one must be able to rank FN curves. The two main problems in doing this are that the FN curve contains multiple frequencies, and that there are usually large epistemic uncertainties about the curve. Since the mid-1970s, a number of authors have used the concept of "disutility" to summarize FN curves, defining a family of disutility functions with a single parameter controlling the degree of "risk aversion." Here, we show this family to be risk neutral, disaster averse, and insensitive to epistemic uncertainty in accident frequencies. A new approach is outlined that has a number of attractive properties. The formulation allows us to distinguish between risk aversion and disaster aversion, two concepts that have been confused in the literature until now. A two‐parameter family of disutilities generalizing the previous approach is defined, where one parameter controls risk aversion and the other disaster aversion. The family is sensitive to epistemic uncertainties. Such disutilities may, for example, be used to compare the impact of system design changes on group risks, or might form the basis for valuing reductions in group risk in a cost‐benefit analysis.
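Ranking FN curves by expected disutility can be illustrated schematically. The functional form below (a power-law consequence weight) is a generic textbook choice, not the family defined in the paper; it shows why two activities with identical expected fatalities are ranked differently once large accidents are weighted more heavily:

```python
def expected_disutility(fn_curve, alpha=1.2):
    """Expected disutility of an FN curve given as (N, frequency) pairs,
    with illustrative consequence weight d(N) = N**alpha; alpha > 1
    penalizes large accidents (disaster aversion)."""
    return sum(freq * n ** alpha for n, freq in fn_curve)

# Two activities with equal expected fatalities (0.1 per year):
frequent_small = [(1, 0.1)]      # one fatality at 0.1/yr
rare_large = [(100, 0.001)]      # hundred fatalities at 0.001/yr
```

With `alpha = 1` the two curves tie (risk neutrality in N); with `alpha > 1` the rare, large accident is ranked worse, which is the disaster-aversion behavior the article formalizes and separates from risk aversion.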

13.
The authors of this article have developed six probabilistic causal models for critical risks in tunnel works. The details of the models' development and evaluation were reported in two earlier publications of this journal. Accordingly, as a remaining step, this article is focused on the investigation into the use of these models in a real case study project. The use of the models is challenging given the need to provide information on risks that usually are both project and context dependent. The latter is of particular concern in underground construction projects. Tunnel risks are the consequences of interactions between site‐ and project‐specific factors. Large variations and uncertainties in ground conditions as well as project singularities give rise to particular risk factors with very specific impacts. These circumstances mean that existing risk information, gathered from previous projects, is extremely difficult to use in other projects. This article considers these issues and addresses the extent to which prior risk‐related knowledge, in the form of causal models such as those developed for the investigation, can be used to provide useful risk information for the case study project. The identification and characterization of the causes and conditions that lead to failures and their interactions, as well as their associated probabilistic information, are treated as risk‐related knowledge in this article. It is shown that, irrespective of existing constraints on using information and knowledge from past experiences, construction risk‐related knowledge can be transferred and used from project to project in the form of comprehensive models based on probabilistic‐causal relationships. The article also shows that the developed models provide guidance as to the use of specific remedial measures by means of the identification of critical risk factors, and therefore they support risk management decisions. A number of limitations of the models are also discussed.

14.
A Semiparametric Approach to Risk Assessment for Quantitative Outcomes
Characterizing the dose-effect relationship and estimating acceptable exposure levels are the primary goals of quantitative risk assessment. A semiparametric approach is proposed for risk assessment with continuously measured or quantitative outcomes; it has advantages over existing methods by requiring fewer assumptions. The approach is based on pairwise ranking between the response values in the control group and those in the exposed groups. The work generalizes the rank-based Wilcoxon-Mann-Whitney test, which for the two-group comparison is effectively a test of whether a response from the control group is different from (larger than) a response in an exposed group. We develop a regression framework that naturally extends this metric to model the dose effect in terms of a risk function. Parameters of the regression model can be estimated with standard software. However, inference requires an additional step to estimate the variance structure of the estimated parameters. An effective dose (ED) and associated lower confidence limit (LED) are easily calculated. The method is supported by a simulation study and is illustrated with a study on the effects of aconiazide. The method offers flexible modeling of the dose effect, and since it is rank-based, it is more resistant to outliers, nonconstant variance, and other departures from normality than previously described approaches.
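The pairwise-ranking metric at the heart of the approach is the Mann-Whitney exceedance probability. A minimal sketch (with made-up response values), estimating P(control response < exposed response) with ties counted as one half:

```python
def exceedance_probability(control, exposed):
    """Mann-Whitney-style estimate of P(control < exposed),
    counting tied pairs as 1/2."""
    pairs = [(x, y) for x in control for y in exposed]
    wins = sum(1.0 if x < y else 0.5 if x == y else 0.0 for x, y in pairs)
    return wins / len(pairs)

# Hypothetical responses: exposure shifts the outcome upward.
control = [10, 12, 11, 13]
exposed = [12, 14, 15, 13]
p_hat = exceedance_probability(control, exposed)
```

A value near 0.5 indicates no dose effect; the paper's regression framework models how this probability moves away from 0.5 as dose increases.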

15.
The abandoned mine legacy is critical in many countries around the world, where mine cave-ins and surface subsidence disruptions are perpetual risks that can affect the population, infrastructure, historical legacies, land use, and the environment. This article establishes abandoned metal mine failure risk evaluation approaches and quantification techniques based on the Canadian mining experience. These utilize clear geomechanics considerations such as failure mechanisms, which are dependent on well-defined rock mass parameters. Quantified risk is computed as the probability of failure (derived probabilistically from limit-equilibrium factors of safety or from applicable numerical-modeling quantifications of the factor of safety) times a consequence impact value. Semi-quantified risk can be based on empirical data from failure case studies used to calculate the probability of failure, while personal experience can provide qualitative hazard and impact-consequence assessments. The article provides outlines for land use and selection of remediation measures based on risk.
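The quantified-risk calculation can be sketched under one common simplification: treat the factor of safety (FS) as lognormally distributed and take the probability of failure as P(FS < 1). The mean FS, coefficient of variation, and consequence value below are hypothetical, and the lognormal assumption is ours, not necessarily the article's:

```python
import math

def probability_of_failure(fs_mean, fs_cov):
    """P(FS < 1) assuming a lognormal factor of safety with the given
    mean and coefficient of variation (a common simplification)."""
    sigma_ln = math.sqrt(math.log(1.0 + fs_cov ** 2))
    mu_ln = math.log(fs_mean) - 0.5 * sigma_ln ** 2
    z = (0.0 - mu_ln) / sigma_ln          # ln(1) = 0
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def quantified_risk(fs_mean, fs_cov, consequence):
    """Risk = probability of failure x consequence impact value."""
    return probability_of_failure(fs_mean, fs_cov) * consequence

# Hypothetical crown pillar: mean FS 1.5, 20% COV, $10M consequence.
risk = quantified_risk(1.5, 0.20, 10_000_000)
```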

16.
Country risk is the risk that economic agents face from other countries when conducting international business, and a deeper study of its intrinsic characteristics is important for understanding and grasping the dynamic evolution of country risk. Given the complex and volatile nature of country risk, this paper proposes a multiscale feature extraction and identification framework based on the idea of "decomposition and reconstruction." The Ensemble EMD method is used to decompose the original country-risk series into short-, medium-, and long-term scales, and the variance contribution rate, correlation coefficient, and Shapley value are introduced to characterize, for each scale, its volatility features, modal features, and global importance relative to the original country-risk series. Using 12 OPEC oil-exporting countries as a sample, the empirical results show that the modal and volatility features of each scale support a consistent classified management of the sample countries' risk, and that the global importance of the scales obtained from Shapley values exhibits a consistent and stable intrinsic pattern across all sample countries: the short-, medium-, and long-term "contributions" to country risk stand at roughly 1:1:3. This not only provides richer dynamic feature information for country risk management, but also offers a new research method for more comprehensive identification, monitoring, and forecasting of country-risk characteristics.
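The EEMD decomposition itself requires a dedicated library, but the downstream scale features are simple to compute. The sketch below calculates the variance contribution rate and the correlation coefficient using synthetic sine/trend components as stand-ins for the short-, medium-, and long-term modes (all series are fabricated for illustration):

```python
import math

def variance(c):
    m = sum(c) / len(c)
    return sum((x - m) ** 2 for x in c) / len(c)

def variance_contribution(components):
    """Share of total component variance carried by each scale."""
    v = [variance(c) for c in components]
    total = sum(v)
    return [vi / total for vi in v]

def correlation(x, y):
    """Pearson correlation between a scale component and the original series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(variance(x))
    sy = math.sqrt(variance(y))
    return cov / (sx * sy)

# Synthetic stand-ins for EEMD scales over 200 time steps:
t = [i / 10 for i in range(200)]
short = [0.3 * math.sin(8 * x) for x in t]        # fast oscillation
medium = [0.7 * math.sin(2 * x) for x in t]       # slower cycle
long_term = [0.1 * x for x in t]                  # trend
series = [a + b + c for a, b, c in zip(short, medium, long_term)]
shares = variance_contribution([short, medium, long_term])
```

In this toy construction the long-term trend dominates the variance shares, loosely mirroring the paper's roughly 1:1:3 short/medium/long contribution pattern.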

17.
The World Trade Organization introduced the concept of appropriate level of protection (ALOP) as a public health target. For this public health objective to be interpretable by the actors in the food chain, the concept of food safety objective (FSO) was proposed by the International Commission on Microbiological Specifications for Foods and adopted later by the Codex Alimentarius Food Hygiene Committee. The way to translate an ALOP into an FSO is still in debate. The purpose of this article is to develop a methodological tool to derive an FSO from an ALOP expressed as a maximal annual marginal risk. We explore the different models relating the annual marginal risk to the parameters of the FSO, depending on whether variability in the survival probability and in the pathogen concentration is considered. If it is not, determination of the FSO is straightforward. If it is, we propose using stochastic Monte Carlo simulation models and logistic discriminant analysis to determine which sets of parameters are compatible with the ALOP. The logistic discriminant function was chosen such that the kappa coefficient is maximized. We illustrate this method by the example of the risks of listeriosis and salmonellosis in one type of soft cheese. We conclude that the definition of the FSO should integrate three dimensions: the prevalence of contamination, the average concentration per contaminated typical serving, and the dispersion of the concentration among those servings.
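The "straightforward" case, where variability is ignored, can be sketched directly: fix an exponential dose-response model, compute the annual marginal risk from the per-serving risk, and solve for the largest concentration meeting the ALOP. The consumption pattern, dose-response slope, and ALOP value below are hypothetical placeholders, not the article's listeriosis figures:

```python
import math

def annual_marginal_risk(log10_conc, servings_per_year, serving_g, r):
    """Annual risk from `servings_per_year` contaminated servings, with an
    exponential dose-response model P(ill) = 1 - exp(-r * dose) (assumed)."""
    dose = (10 ** log10_conc) * serving_g          # cfu per serving
    p_serving = 1.0 - math.exp(-r * dose)
    return 1.0 - (1.0 - p_serving) ** servings_per_year

def fso_for_alop(alop, servings_per_year, serving_g, r):
    """Largest log10 cfu/g meeting the ALOP, found by bisection
    (variability in concentration and survival ignored)."""
    lo, hi = -12.0, 6.0                            # lo meets the ALOP, hi does not
    for _ in range(100):
        mid = (lo + hi) / 2
        if annual_marginal_risk(mid, servings_per_year, serving_g, r) <= alop:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical inputs: ALOP of 1e-6 per person-year, 20 servings/yr of 30 g,
# dose-response slope r = 1e-12 per cfu.
fso = fso_for_alop(1e-6, 20, 30.0, 1e-12)
```

Once variability in concentration is added, this single threshold no longer suffices, which is why the article's FSO definition also carries the prevalence and dispersion dimensions.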

18.
Wang Fangjun, Chang Hua, Luo Zhen. Chinese Journal of Management (管理学报), 2008, 5(5): 769-772
Using 306 observations from 102 A-share energy companies listed on the Shanghai and Shenzhen stock exchanges between 2004 and 2006, this study examines the relationships among corporate performance, financial risk, and the timeliness of information disclosure. The results show that corporate performance is significantly positively related to the timeliness of annual report disclosure, while financial risk is significantly negatively related to it; financial risk can affect the timeliness of annual report disclosure through corporate performance; and financial risk markedly perturbs the relationship between corporate performance and the timeliness of annual report disclosure.

19.
This article presents a framework for using probabilistic terrorism risk modeling in regulatory analysis. We demonstrate the framework with an example application involving a regulation under consideration, the Western Hemisphere Travel Initiative for the Land Environment (WHTI‐L). First, we estimate annualized loss from terrorist attacks with the Risk Management Solutions (RMS) Probabilistic Terrorism Model. We then estimate the critical risk reduction, which is the risk‐reducing effectiveness of WHTI‐L needed for its benefit, in terms of reduced terrorism loss in the United States, to exceed its cost. Our analysis indicates that the critical risk reduction depends strongly not only on uncertainties in the terrorism risk level, but also on uncertainty in the cost of regulation and how casualties are monetized. For a terrorism risk level based on the RMS standard risk estimate, the baseline regulatory cost estimate for WHTI‐L, and a range of casualty cost estimates based on the willingness‐to‐pay approach, our estimate for the expected annualized loss from terrorism ranges from $2.7 billion to $5.2 billion. For this range in annualized loss, the critical risk reduction for WHTI‐L ranges from 7% to 13%. Basing results on a lower risk level that results in halving the annualized terrorism loss would double the critical risk reduction (14–26%), and basing the results on a higher risk level that results in a doubling of the annualized terrorism loss would cut the critical risk reduction in half (3.5–6.6%). Ideally, decisions about terrorism security regulations and policies would be informed by true benefit‐cost analyses in which the estimated benefits are compared to costs. Such analyses for terrorism security efforts face substantial impediments stemming from the great uncertainty in the terrorist threat and the very low recurrence interval for large attacks. Several approaches can be used to estimate how a terrorism security program or regulation reduces the distribution of risks it is intended to manage. But continued research to develop additional tools and data is necessary to support application of these approaches. These include refinement of models and simulations, engagement of subject matter experts, implementation of program evaluation, and estimating the costs of casualties from terrorism events.

20.
Terje Aven. Risk Analysis, 2007, 27(2): 303-312
To protect people from hazards, the common safety regulation regime in many industries is based on the use of minimum standards formulated as risk acceptance or tolerability limits. The limits are seen as absolute, and in principle these should be met regardless of costs. The justification is ethical: people should not be exposed to a risk level exceeding certain limits. In this article, we discuss this approach to safety regulation and its justification. We argue that the use of such limits is based on some critical assumptions: that a low accident risk has a value in itself, that risk can be accurately measured, and that the authorities can specify the limits. However, these assumptions are not in general valid, and hence the justification of the approach can be questioned. In the article, we look more closely at these issues, and we conclude that there is a need for rethinking this regulation approach; its ethical justification is not stronger than for alternative approaches. Essential for the analysis is the distinction between ethics of the mind and ethics of the consequences, which has several implications that are discussed.

