Similar Literature
20 similar documents found
1.
In risk analysis problems, the decision-making process is supported by quantitative models. Assessing the relevance of interactions is essential to interpreting model results: with such knowledge, analysts and decision makers can understand whether risk is apportioned by individual factor contributions or by their joint action. However, models are often large, requiring many input parameters, and complex, with individual model runs being time consuming. This computational burden leads analysts to use one-parameter-at-a-time sensitivity methods, which prevent them from assessing interactions. In this work, we illustrate a methodology to quantify interactions in probabilistic safety assessment (PSA) models while still varying one parameter at a time. The method is based on a property of the functional ANOVA decomposition of a finite change that makes it possible to determine exactly the relevance of factors considered individually or together with their interactions with all other factors. A set of test cases illustrates the technique. We apply the methodology to the analysis of the core damage frequency for the large loss-of-coolant accident of a nuclear reactor. Numerical results reveal the nonadditive model structure, quantify the relevance of interactions, and identify the direction of change (increase or decrease in risk) implied by individual factor variations and by their joint action.
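A minimal sketch of the one-parameter-at-a-time finite-change decomposition described above, assuming a toy three-factor model rather than the article's PSA model; the function and parameter values are illustrative:

```python
import numpy as np

def finite_change_decomposition(model, x_base, x_new):
    """Decompose the total finite change of a model output into individual
    one-at-a-time effects plus a residual interaction term."""
    x_base, x_new = np.asarray(x_base, float), np.asarray(x_new, float)
    y_base = model(x_base)
    individual = np.empty(len(x_base))
    for i in range(len(x_base)):
        x = x_base.copy()
        x[i] = x_new[i]                      # vary one parameter at a time
        individual[i] = model(x) - y_base    # individual effect of factor i
    total = model(x_new) - y_base            # total change of the output
    interaction = total - individual.sum()   # joint action of the factors
    return individual, total, interaction

# toy nonadditive "risk" model with three input factors
risk_model = lambda x: x[0] * x[1] + 0.5 * x[2]
indiv, total, inter = finite_change_decomposition(risk_model, [1.0, 2.0, 3.0], [1.5, 2.5, 3.0])
print(indiv, total, inter)   # a nonzero interaction term reveals the nonadditive structure
```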

2.
3.
Most attacker–defender games treat players as risk neutral, whereas in reality attackers and defenders may be risk seeking or risk averse. This article studies the impact of players' risk preferences on their equilibrium behavior and its effect on the notion of deterrence. In particular, we study the effects of risk preferences in a single-period, sequential game in which a defender has a continuous range of investment levels that can be chosen strategically to potentially deter an attack. The article presents analytic results on how attacker and defender risk preferences affect the optimal defense effort level and the resulting level of deterrence. Numerical illustrations and a discussion of the effect of risk preferences on deterrence and of the utility of using such a model are provided, along with sensitivity analyses of continuous attack investment levels and of uncertainty in the defender's beliefs about the attacker's risk preference. A key contribution of this article is the identification of specific scenarios in which a defender using a model that accounts for risk preferences would be better off than one using a traditional risk-neutral model. The study provides insights that could be used by policy analysts and decision makers involved in security and safety investment decisions.
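A hedged sketch of a single-period, defender-first sequential game in this spirit; the success-probability form, payoff numbers, and power-utility risk-preference parameterization are illustrative assumptions, not the article's model:

```python
import numpy as np

def p_success(defense):
    """Probability an attack succeeds, decreasing in the defender's investment."""
    return np.exp(-0.5 * defense)

def attacker_expected_utility(defense, risk_pref):
    """Power-utility attacker: payoff 10 on success, attack cost 2.
    risk_pref < 1 risk averse, = 1 risk neutral, > 1 risk seeking."""
    return p_success(defense) * 10.0 ** risk_pref - 2.0 ** risk_pref

def defender_best_investment(risk_pref, loss_if_hit=50.0):
    """Defender moves first; the attacker attacks only if expected utility > 0."""
    grid = np.linspace(0.0, 20.0, 2001)
    expected_cost = [
        c + (p_success(c) * loss_if_hit if attacker_expected_utility(c, risk_pref) > 0 else 0.0)
        for c in grid
    ]
    return grid[int(np.argmin(expected_cost))]

for rp in (0.5, 1.0, 1.5):   # risk-averse, risk-neutral, risk-seeking attacker
    print(f"attacker risk preference {rp}: optimal defense investment {defender_best_investment(rp):.2f}")
```

In this toy setup the investment needed to deter rises as the attacker becomes more risk seeking, illustrating how risk preferences change the deterrence level.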

4.
Operators of long field-life systems such as airplanes face hazards in the supply of spare parts. If the original manufacturers or suppliers of parts end their supply, this can have a large impact on the operating costs of firms that need these parts. Existing end-of-supply evaluation methods focus mostly on the downstream supply chain, which is of interest mainly to spare part manufacturers. Firms that purchase spare parts have limited information on part sales, and indicators of end-of-supply risk can also be found in the upstream supply chain. This article proposes a methodology for firms purchasing spare parts to manage end-of-supply risk using proportional hazards models based on the supply chain conditions of the parts. The risk indicators considered fall into four main categories: two related to supply (price and lead time) and two related to demand (cycle time and throughput). The methodology is demonstrated using data on about 2,000 spare parts collected from a maintenance repair organization in the aviation industry. Cross-validation results and out-of-sample risk assessments show good performance of the method in identifying spare parts with high end-of-supply risk. Further validation is provided by survey results obtained from the maintenance repair organization, which show strong agreement between the firm's and the model's identification of high-risk spare parts.
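A sketch of how a proportional hazards model could score parts for end-of-supply risk, assuming the third-party lifelines package and synthetic supply-chain indicators in place of the maintenance repair organization's data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # third-party survival-analysis package

rng = np.random.default_rng(0)
n = 500
covariates = ["price_trend", "lead_time", "cycle_time", "throughput"]   # supply- and demand-side indicators
parts = pd.DataFrame(rng.normal(size=(n, 4)), columns=covariates)

# synthetic observation times and end-of-supply events (stand-ins for real data)
parts["months_observed"] = rng.exponential(24, n) * np.exp(-0.5 * parts["lead_time"])
parts["supply_ended"] = rng.integers(0, 2, n)      # 1 = end of supply was observed

cph = CoxPHFitter()
cph.fit(parts, duration_col="months_observed", event_col="supply_ended")
cph.print_summary()

# rank parts by estimated hazard to flag those at high end-of-supply risk
parts["risk_score"] = cph.predict_partial_hazard(parts[covariates])
print(parts.nlargest(5, "risk_score")[covariates + ["risk_score"]])
```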

5.
Longitudinal data are important in exposure and risk assessments, especially for pollutants with long half-lives in the human body and where chronic exposure to current environmental levels raises concerns about human health effects. It is usually difficult and expensive to obtain large longitudinal data sets for human exposure studies. This article reports a new simulation method for generating longitudinal data with flexible numbers of subjects and days. Mixed models are used to describe the variance-covariance structure of the input longitudinal data. Based on the estimated model parameters, simulated data are generated with statistical characteristics similar to those of the input data. Three criteria are used to judge similarity: the overall mean and standard deviation, the percentages of the variance components, and the average autocorrelation coefficients. Following a discussion of mixed models, a simulation procedure is presented and numerical results are illustrated with one human exposure study. Simulations of three sets of exposure data successfully meet the above criteria. In particular, the simulations always retain the correct weights of inter- and intrasubject variances found in the input data, and autocorrelations are also reproduced well. Compared with other simulation algorithms, the new method retains more information about the overall input distribution and thus satisfies the multiple statistical criteria above. In addition, it can generate values from numerous data sources and simulates continuous observed variables better than existing methods. The method also offers flexible options in both the modeling and simulation procedures to meet various user requirements.
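A minimal sketch of this kind of simulation, assuming a simple random-intercept-plus-AR(1) structure with illustrative parameter values rather than those estimated from the exposure study:

```python
import numpy as np

def simulate_longitudinal(n_subjects, n_days, mean, var_between, var_within, rho, seed=0):
    """Generate exposure-like data with a subject random effect (between-subject
    variance) and AR(1) day-to-day errors (within-subject variance, lag-1
    autocorrelation rho)."""
    rng = np.random.default_rng(seed)
    subject_effect = rng.normal(0.0, np.sqrt(var_between), size=(n_subjects, 1))
    eps = np.empty((n_subjects, n_days))
    eps[:, 0] = rng.normal(0.0, np.sqrt(var_within), n_subjects)
    for t in range(1, n_days):
        innovation = rng.normal(0.0, np.sqrt(var_within * (1.0 - rho**2)), n_subjects)
        eps[:, t] = rho * eps[:, t - 1] + innovation      # stationary AR(1) errors
    return mean + subject_effect + eps

data = simulate_longitudinal(n_subjects=50, n_days=7, mean=2.0,
                             var_between=0.4, var_within=0.6, rho=0.3)

# the three similarity criteria named above, computed on the simulated data
print("overall mean / SD:", data.mean(), data.std())
print("between-subject share of variance:", data.mean(axis=1).var() / data.var())
print("lag-1 autocorrelation:", np.corrcoef(data[:, :-1].ravel(), data[:, 1:].ravel())[0, 1])
```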

6.
To better understand the risk of exposure to food allergens, food challenge studies are designed to slowly increase the dose of an allergen delivered to allergic individuals until an objective reaction occurs. These dose-to-failure studies are used to determine acceptable intake levels and are analyzed using parametric failure time models. Though these models can provide estimates of the survival curve and risk, their parametric form may misrepresent the survival function for doses of interest. Different models that describe the data similarly may produce different dose-to-failure estimates. Motivated by predictive inference, we developed a Bayesian approach to combine survival estimates based on posterior predictive stacking, where the weights are formed to maximize posterior predictive accuracy. The approach defines a model space that is much larger than traditional parametric failure time modeling approaches. In our case, we use the approach to include random effects accounting for frailty components. The methodology is investigated in simulation, and is used to estimate allergic population eliciting doses for multiple food allergens.

7.
How can risk analysts help to improve policy and decision making when the correct probabilistic relation between alternative acts and their probable consequences is unknown? This practical challenge of risk management with model uncertainty arises in problems from preparing for climate change to managing emerging diseases to operating complex and hazardous facilities safely. We review constructive methods for robust and adaptive risk analysis under deep uncertainty. These methods are not yet as familiar to many risk analysts as older statistical and model-based methods, such as the paradigm of identifying a single "best-fitting" model and performing sensitivity analyses for its conclusions. They provide genuine breakthroughs for improving predictions and decisions when the correct model is highly uncertain. We demonstrate their potential by summarizing a variety of practical risk management applications.

8.
Risk analysts frequently view the regulation of risks as largely a matter of decision theory. According to this view, risk analysis methods provide information on the likelihood and severity of various possible outcomes; this information should then be assessed using a decision-theoretic approach (such as cost/benefit analysis) to determine whether the risks are acceptable and whether additional regulation is warranted. However, this view ignores the fact that in many industries (particularly those that are technologically sophisticated and employ specialized risk and safety experts), risk analyses may be done by regulated firms, not by the regulator. Moreover, those firms may know more about the levels of safety at their own facilities than the regulator does. This creates a situation in which the regulated firm has both the opportunity, and often also the motive, to provide inaccurate (in particular, favorably biased) risk information to the regulator, and hence the regulator has reason to doubt the accuracy of the risk information provided by regulated parties. Researchers have argued that decision theory can deal with many such strategic interactions as well as game theory can, especially in two-player, two-stage games in which the follower has a unique best strategy in response to the leader's strategy, as appears to be the case in the situation analyzed in this article. However, even in such cases, we agree with Cox that game-theoretic methods and concepts can still be useful. In particular, the tools of mechanism design, and especially the revelation principle, can simplify the analysis of such games: the revelation principle provides rigorous assurance that it is sufficient to analyze only games in which licensees truthfully report their risk levels, making the problem more manageable. Without it, much more complicated forms of strategic behavior (including deception) would generally have to be considered to identify optimal regulatory strategies. We therefore believe that the types of regulatory interactions analyzed in this article are better modeled using game theory than decision theory. In particular, the goals of this article are to review the relevant literature in game theory and regulatory economics (to stimulate interest in this area among risk analysts) and to present illustrative results showing how game theory can provide useful insights into the theory and practice of risk-informed regulation.

9.
Recent headlines and scientific articles projecting significant human health benefits from changes in exposures too often depend on unvalidated subjective expert judgments and modeling assumptions, especially about the causal interpretation of statistical associations. Some of these assessments are demonstrably biased toward false positives and inflated effect estimates. More objective, data-driven methods of causal analysis are available to risk analysts. These can help to reduce bias and increase the credibility and realism of health effects risk assessments and causal claims. For example, quasi-experimental designs and analyses allow alternative (noncausal) explanations for associations to be tested, and refuted if appropriate. Panel data studies examine empirical relations between changes in hypothesized causes and effects. Intervention and change-point analyses identify effects (e.g., significant changes in health effects time series) and estimate their sizes. Granger causality tests, conditional independence tests, and counterfactual causality models test whether a hypothesized cause helps to predict its presumed effects, and quantify exposure-specific contributions to response rates in differently exposed groups, even in the presence of confounders. Causal graph models let causal mechanistic hypotheses be tested and refined using biomarker data. These methods can potentially revolutionize the study of exposure-induced health effects, helping to overcome pervasive false-positive biases and move the health risk assessment scientific community toward more accurate assessments of the impacts of exposures and interventions on public health.

10.
Cryptosporidium human dose-response data from seven species/isolates are used to investigate six models of varying complexity that estimate infection probability as a function of dose. Previous models attempt to account explicitly for virulence differences among C. parvum isolates, using three or six species/isolates. Four models (two of them new) assume that species/isolate differences are insignificant, and three of these (all but the exponential) allow for variable human susceptibility. These three human-focused models (fractional Poisson, exponential with immunity, and beta-Poisson) are relatively simple yet fit the data significantly better than the more complex isolate-focused models. Among the three, the one-parameter fractional Poisson model is the simplest but assumes that all Cryptosporidium oocysts used in the studies were capable of initiating infection. The exponential-with-immunity model does not require such an assumption and includes the fractional Poisson as a special case. The fractional Poisson model is an upper bound of the exponential-with-immunity model and applies when all oocysts are capable of initiating infection. The beta-Poisson model does not allow an immune human subpopulation; thus infection probability approaches 100% as the dose becomes very large. All three of these models predict significantly (>10x) greater risk at the low doses that consumers might receive through drinking water or other environmental exposure (e.g., 72% vs. 4% infection probability for a one-oocyst dose) than previously predicted. This new insight into Cryptosporidium risk suggests that additional inactivation and removal via treatment may be needed to meet any specified risk target, such as a suggested 10^-4 annual risk of Cryptosporidium infection.
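The dose-response forms named above written out in code, with illustrative parameter values rather than the article's fitted estimates:

```python
import numpy as np

# standard single-hit dose-response forms discussed above; parameter values are illustrative
def exponential(dose, r):
    return 1.0 - np.exp(-r * dose)

def beta_poisson(dose, alpha, beta):
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def exponential_with_immunity(dose, p_susceptible, r):
    return p_susceptible * (1.0 - np.exp(-r * dose))

def fractional_poisson(dose, p_susceptible):
    # special case of exponential-with-immunity with r = 1 (every oocyst can initiate infection)
    return exponential_with_immunity(dose, p_susceptible, 1.0)

doses = np.array([1.0, 10.0, 100.0, 1000.0])
print("exponential       :", exponential(doses, r=0.04))
print("beta-Poisson      :", beta_poisson(doses, alpha=0.3, beta=100.0))
print("fractional Poisson:", fractional_poisson(doses, p_susceptible=0.7))
```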

11.
Steven M. Quiring, Risk Analysis, 2011, 31(12): 1897-1906
This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Accurate predictions of power outage durations are valuable because utility companies can use the information to plan their restoration efforts more efficiently. The information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes against the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage durations caused by Hurricane Ivan in 2004. The methods compared include regression models (accelerated failure time (AFT) and Cox proportional hazards (Cox PH) models) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate adaptive regression splines). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy.

12.
The authors of this article have developed six probabilistic causal models for critical risks in tunnel works. The details of the models' development and evaluation were reported in two earlier publications in this journal. As a remaining step, this article focuses on the use of these models in a real case study project. Using the models is challenging given the need to provide information on risks that are usually both project and context dependent; the latter is of particular concern in underground construction projects. Tunnel risks are the consequences of interactions between site- and project-specific factors. Large variations and uncertainties in ground conditions, as well as project singularities, give rise to particular risk factors with very specific impacts. These circumstances make existing risk information gathered from previous projects extremely difficult to use in other projects. This article considers these issues and addresses the extent to which prior risk-related knowledge, in the form of causal models such as those developed in this investigation, can provide useful risk information for the case study project. The identification and characterization of the causes and conditions that lead to failures, their interactions, and the associated probabilistic information are treated as risk-related knowledge in this article. It is shown that, irrespective of existing constraints on using information and knowledge from past experience, construction risk-related knowledge can be transferred from project to project in the form of comprehensive models based on probabilistic causal relationships. The article also shows that the developed models provide guidance on the use of specific remedial measures by identifying critical risk factors, and therefore they support risk management decisions. A number of limitations of the models are also discussed.

13.
Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, the models used to characterize the risk may carry large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment are facilitated by the availability of a set of experimental studies spanning the range of dose-response patterns observed in practice. We describe the construction of such a historical database, focusing on quantal data in chemical risk assessment, and we employ this database to develop priors for Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose-response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose-response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize point estimates, producing dose-response functions that are more stable and more precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.
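A hedged sketch of how historically informed priors can stabilize a quantal dose-response fit; the data set, the logistic form, and the prior hyperparameters are illustrative, not drawn from the database described above:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# one hypothetical quantal data set: dose, animals tested, animals responding
dose      = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
n_tested  = np.array([50, 50, 50, 50, 50])
n_respond = np.array([1, 3, 8, 21, 40])

# prior on (intercept, slope) of a logistic dose-response; in practice these
# hyperparameters would be derived from the curated historical database
prior_mean = np.array([-2.0, 2.0])
prior_sd   = np.array([2.0, 1.5])

def neg_log_posterior(theta):
    intercept, slope = theta
    p = expit(intercept + slope * np.log10(dose + 1.0))
    p = np.clip(p, 1e-9, 1.0 - 1e-9)
    log_lik = np.sum(n_respond * np.log(p) + (n_tested - n_respond) * np.log(1.0 - p))
    log_prior = -0.5 * np.sum(((theta - prior_mean) / prior_sd) ** 2)
    return -(log_lik + log_prior)

fit = minimize(neg_log_posterior, x0=prior_mean, method="Nelder-Mead")
print("MAP intercept, slope:", fit.x)   # prior acts as a stabilizing penalty on the fit
```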

14.
This article develops a methodology for quantifying model risk in quantile risk estimates. The application of quantile estimates to risk assessment has become common practice in many disciplines, including hydrology, climate change, statistical process control, and insurance and actuarial science, and the uncertainty surrounding these estimates has long been recognized. Our work is particularly important in finance, where quantile estimates (called Value-at-Risk) have been the cornerstone of banking risk management since the mid-1980s. A recent amendment to the Basel II Accord recommends additional market risk capital to cover all sources of "model risk" in the estimation of these quantiles. We provide a novel and elegant framework whereby quantile estimates are adjusted for model risk, relative to a benchmark that represents the state of knowledge of the authority responsible for model risk. A simulation experiment in which the degree of model risk is controlled illustrates how to quantify Value-at-Risk model risk and compute the required regulatory capital add-on for banks. An empirical example based on real data shows how the methodology can be put into practice, using only two time series (daily Value-at-Risk and daily profit and loss) from a large bank. We conclude with a discussion of potential applications to nonfinancial risks.
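A simplified sketch of the two-series idea (daily profit and loss alongside daily reported VaR); the synthetic data, the benchmark, and the add-on rule are illustrative assumptions, not the article's framework:

```python
import numpy as np

rng = np.random.default_rng(1)
pnl = rng.standard_t(df=5, size=1000) * 1.0e6          # hypothetical daily profit and loss
reported_var = np.full(1000, 2.0e6)                    # the bank's (possibly misspecified) 1% VaR

# benchmark quantile standing in for the authority's "state of knowledge"
benchmark_var = -np.quantile(pnl, 0.01)

exceedance_rate = np.mean(pnl < -reported_var)         # backtest: should be close to 1%
capital_addon = max(benchmark_var - reported_var.mean(), 0.0)

print(f"exceedance rate: {exceedance_rate:.2%}")
print(f"model-risk capital add-on: {capital_addon:,.0f}")
```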

15.
We present a method for forecasting sales using financial market information and test this method on annual data for US public retailers. Our method is motivated by the permanent income hypothesis in economics, which states that the amount of consumer spending and the mix of spending between discretionary and necessity items depend on the returns achieved on equity portfolios held by consumers. Taking as input forecasts from other sources, such as equity analysts or time-series models, we construct a market-based forecast by augmenting the input forecast with one additional variable, the lagged return on an aggregate financial market index. To do this, we develop and estimate a martingale model of the joint evolution of sales forecasts and the market index. We show that the market-based forecast achieves an average 15% reduction in mean absolute percentage error, on out-of-sample data, compared with forecasts given by equity analysts at the same time. We extensively analyze the performance improvement using alternative model specifications and statistics. We also show that equity analysts do not incorporate lagged financial market returns in their forecasts. Our model yields correlation coefficients between retail sales and market returns for all firms in the data set. Beyond forecasting, these results can be applied in risk management and hedging.
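A simplified stand-in for the idea, using ordinary least squares rather than the article's martingale model and entirely synthetic data; the variable names and coefficients are illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 200
analyst_forecast = rng.normal(0.08, 0.02, n)            # hypothetical input sales-growth forecasts
lagged_market_return = rng.normal(0.07, 0.15, n)        # lagged aggregate market index return
actual_growth = analyst_forecast + 0.2 * lagged_market_return + rng.normal(0.0, 0.01, n)

X = np.column_stack([analyst_forecast, lagged_market_return])
model = LinearRegression().fit(X[:150], actual_growth[:150])   # estimate on the first 150 firm-years

mae = lambda y, yhat: np.mean(np.abs(y - yhat))                # absolute error used here for simplicity
print("analyst forecast alone:", mae(actual_growth[150:], analyst_forecast[150:]))
print("market-augmented      :", mae(actual_growth[150:], model.predict(X[150:])))
```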

16.
Risk Analysis, 2018, 38(2): 255-271
Most risk analysis approaches are static, failing to capture evolving conditions. A blowout, the most feared accident during a drilling operation, is a complex and dynamic event. Traditional risk analysis methods are useful in the early design stage of a drilling operation but fall short in evolving operational decision making. A new dynamic risk analysis approach is presented to capture evolving situations through dynamic probability and consequence models. The dynamic consequence models, the focus of this study, are developed in terms of loss functions. These models are then integrated with the probability models to estimate operational risk, providing a real-time risk analysis. The evolving situation is taken to depend on the changing bottom-hole pressure as drilling progresses. The application of the methodology and models is demonstrated with a case study of an offshore drilling operation evolving toward a blowout.
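A hedged sketch of combining an evolving probability with a loss-function consequence model along a drilling trajectory; the pressure profiles, probability link, and loss function below are illustrative stand-ins, not the article's models:

```python
import numpy as np

depth = np.linspace(1000.0, 4000.0, 61)                  # drilling progress, m
bottom_hole_pressure = 12.0 + 0.0100 * depth             # MPa, from the mud column
pore_pressure        = 8.0 + 0.0115 * depth              # MPa, formation pore pressure

margin = bottom_hole_pressure - pore_pressure            # shrinking margin -> kick/blowout hazard
p_blowout = 1.0 / (1.0 + np.exp(8.0 * margin))           # probability rises as the margin closes
loss = 1.0e6 * (1.0 + np.maximum(-margin, 0.0) ** 2)     # consequence (loss function) grows with underbalance

operational_risk = p_blowout * loss                      # real-time risk profile as drilling progresses
worst = operational_risk.argmax()
print(f"peak risk {operational_risk[worst]:.3g} at depth {depth[worst]:.0f} m")
```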

17.
Critical infrastructure systems must be both robust and resilient in order to ensure the functioning of society. To improve the performance of such systems, we often use risk and vulnerability analysis to find and address system weaknesses. A critical component of such analyses is the ability to accurately determine the negative consequences of various types of failures in the system. Numerous mathematical and simulation models exist that can be used to this end. However, there are relatively few studies comparing the implications of using different modeling approaches in the context of comprehensive risk analysis of critical infrastructures. In this article, we suggest a classification of these models, which span from simple topologically oriented models to advanced physical-flow-based models. We focus on electric power systems and present a study aimed at understanding the tradeoffs between simplicity and fidelity in models used for risk analysis. Specifically, the purpose of this article is to compare performance estimates obtained with a spectrum of approaches typically used for risk and vulnerability analysis of electric power systems, and to evaluate whether simpler topological measures can be combined, using statistical methods, to serve as a surrogate for physical flow models. The results of our work provide guidance on appropriate models, or combinations of models, to use when analyzing large-scale critical infrastructure systems, where simulation times quickly become insurmountable with the more advanced models, severely limiting the extent of the analyses that can be performed.
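A sketch of the simplest end of that model spectrum, assuming the networkx package and a toy grid topology; a physical flow model would replace the pure connectivity measure used here:

```python
import networkx as nx

# toy transmission-grid topology: nodes are buses, edges are lines (illustrative only)
grid = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 5), (2, 5), (5, 6), (6, 7), (3, 7)])

def topological_consequence(graph, failed_lines):
    """Purely topological surrogate for consequence: the fraction of buses left
    outside the largest connected component after the listed lines fail."""
    damaged = graph.copy()
    damaged.remove_edges_from(failed_lines)
    largest = max(nx.connected_components(damaged), key=len)
    return 1.0 - len(largest) / graph.number_of_nodes()

for scenario in [[(2, 3)], [(2, 3), (2, 5)], [(5, 6), (3, 7)]]:
    print(scenario, topological_consequence(grid, scenario))
```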

18.
Behavioral economics has captured the interest of scholars and the general public by demonstrating ways in which individuals make decisions that appear irrational. While increasing attention is being focused on the implications of this research for the design of risk-reducing policies, less attention has been paid to how it affects the economic valuation of policy consequences. This article considers the latter issue, reviewing the behavioral economics literature and discussing its implications for the conduct of benefit-cost analysis, particularly in the context of environmental, health, and safety regulations. We explore three concerns: using estimates of willingness to pay or willingness to accept compensation for valuation, considering the psychological aspects of risk when valuing mortality-risk reductions, and discounting future consequences. In each case, we take the perspective that analysts should avoid making judgments about whether values are "rational" or "irrational." Instead, they should make every effort to rely on well-designed studies, using ranges, sensitivity analysis, or probabilistic modeling to reflect uncertainty. More generally, behavioral research has led some to argue for a more paternalistic approach to policy analysis. We argue instead for continued focus on describing the preferences of those affected, while working to ensure that these preferences are based on knowledge and careful reflection.

19.
Mark R. Powell, Risk Analysis, 2015, 35(12): 2172-2182
Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers.
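A minimal sketch of the estimation-error point, assuming a toy covariance structure in which the equal-allocation portfolio is in fact optimal, so estimation noise can only hurt the "optimized" weights:

```python
import numpy as np

rng = np.random.default_rng(2)
n_assets, n_obs = 8, 60          # short estimation window relative to the number of assets

# true (unknown) covariance: equal variances and a mild common correlation
true_cov = 0.04 * (0.3 * np.ones((n_assets, n_assets)) + 0.7 * np.eye(n_assets))
sample = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=n_obs)

def min_variance_weights(cov):
    w = np.linalg.solve(cov, np.ones(len(cov)))
    return w / w.sum()

w_optimized = min_variance_weights(np.cov(sample, rowvar=False))   # built on noisy estimates
w_equal = np.full(n_assets, 1.0 / n_assets)                        # simple 1/N heuristic

for name, w in [("mean-variance optimized", w_optimized), ("equal allocation", w_equal)]:
    print(name, "out-of-sample variance:", float(w @ true_cov @ w))
```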

20.
Risk Analysis, 2018, 38(6): 1183-1201
In assessing environmental health risks, the risk characterization step synthesizes the information gathered in evaluating exposures to stressors together with dose–response relationships, characteristics of the exposed population, and external environmental conditions. This article summarizes the key steps of a cumulative risk assessment (CRA), followed by a discussion of considerations for characterizing cumulative risks. Cumulative risk characterizations differ considerably from single-chemical- or single-source-based risk characterizations. First, CRAs typically focus on a specific population instead of a pollutant or pollutant source and should include an evaluation of all relevant sources contributing to the exposures in the population, as well as other factors that influence dose–response relationships. Second, CRAs may include influential environmental and population-specific conditions, involving multiple chemical and nonchemical stressors. Third, a CRA could examine multiple health effects, reflecting joint toxicity and the potential for toxicological interactions. Fourth, the complexities often necessitate simplifying methods, including judgment-based and semi-quantitative indices that collapse disparate data into numerical scores. Fifth, because of the higher dimensionality and potentially large number of interactions, the information needed to quantify risk is typically incomplete, necessitating an uncertainty analysis. Three approaches that could be used for characterizing risks in a CRA are presented: the multiroute hazard index, stressor grouping by exposure and toxicity, and indices for screening multiple factors and conditions. Other key roles of the risk characterization in CRAs are also described, notably the translational aspect of including a characterization summary for lay readers (in addition to the technical analysis) and placing the results in the context of the likely risk-based decisions.
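A minimal sketch of the multiroute hazard index named above; the exposure and reference-dose values are purely illustrative:

```python
# estimated average daily doses by exposure route (mg/kg-day) and route-specific
# reference doses; hazard quotients are summed into a multiroute hazard index
exposures = {"oral": 0.002, "inhalation": 0.0005, "dermal": 0.0001}
reference_doses = {"oral": 0.005, "inhalation": 0.002, "dermal": 0.001}

hazard_quotients = {route: exposures[route] / reference_doses[route] for route in exposures}
hazard_index = sum(hazard_quotients.values())

print(hazard_quotients)
print(f"multiroute hazard index = {hazard_index:.2f}")   # HI > 1 flags potential concern
```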

