Similar Articles
20 similar articles retrieved (search time: 53 ms)
1.
Yacov Y. Haimes 《Risk analysis》2011,31(8):1175-1186
This article highlights the complexity of the quantification of the multidimensional risk function, develops five systems‐based premises on quantifying the risk of terrorism to a threatened system, and advocates the quantification of vulnerability and resilience through the states of the system. The five premises are: (i) There exists interdependence between a specific threat to a system by terrorist networks and the states of the targeted system, as represented through the system's vulnerability, resilience, and criticality‐impact. (ii) A specific threat, its probability, its timing, the states of the targeted system, and the probability of consequences can be interdependent. (iii) The two questions in the risk assessment process: “What is the likelihood?” and “What are the consequences?” can be interdependent. (iv) Risk management policy options can reduce both the likelihood of a threat to a targeted system and the associated likelihood of consequences by changing the states (including both vulnerability and resilience) of the system. (v) The quantification of risk to a vulnerable system from a specific threat must be built on a systemic and repeatable modeling process, by recognizing that the states of the system constitute an essential step to construct quantitative metrics of the consequences based on intelligence gathering, expert evidence, and other qualitative information. The fact that the states of all systems are functions of time (among other variables) makes the time frame pivotal in each component of the process of risk assessment, management, and communication. Thus, risk to a system, caused by an initiating event (e.g., a threat) is a multidimensional function of the specific threat, its probability and time frame, the states of the system (representing vulnerability and resilience), and the probabilistic multidimensional consequences.  相似文献   
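For readers who want a compact handle on the closing statement, the multidimensional risk function can be sketched in notation; the symbols below are illustrative choices made for this note, not the article's own notation.

% R(t): risk to the system at time t; T: the specific threat (initiating event)
% p(T): its probability; s(t): the state vector of the system, from which
% vulnerability V and resilience \rho are derived; C: the probabilistic,
% multidimensional consequences
R(t) = f\big(T,\, p(T),\, t,\, \mathbf{s}(t),\, \mathbf{C}\big),
\qquad V = V(\mathbf{s}(t)), \qquad \rho = \rho(\mathbf{s}(t))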

2.
Terje Aven 《Risk analysis》2011,31(4):515-522
Recently, considerable attention has been paid to a systems‐based approach to risk, vulnerability, and resilience analysis. It is argued that risk, vulnerability, and resilience are inherently and fundamentally functions of the states of the system and its environment. Vulnerability is defined as the manifestation of the inherent states of the system that can be subjected to a natural hazard or be exploited to adversely affect that system, whereas resilience is defined as the ability of the system to withstand a major disruption within acceptable degradation parameters and to recover within an acceptable time, and composite costs, and risks. Risk, on the other hand, is probability based, defined by the probability and severity of adverse effects (i.e., the consequences). In this article, we look more closely into this approach. It is observed that the key concepts are inconsistent in the sense that the uncertainty (probability) dimension is included for the risk definition but not for vulnerability and resilience. In the article, we question the rationale for this inconsistency. The suggested approach is compared with an alternative framework that provides a logically defined structure for risk, vulnerability, and resilience, where all three concepts are incorporating the uncertainty (probability) dimension.  相似文献   

3.
Resilient infrastructure systems are essential for cities to withstand and rapidly recover from natural and human‐induced disasters, yet electric power, transportation, and other infrastructures are highly vulnerable and interdependent. New approaches for characterizing the resilience of sets of infrastructure systems are urgently needed, at community and regional scales. This article develops a practical approach for analysts to characterize a community's infrastructure vulnerability and resilience in disasters. It addresses key challenges of incomplete incentives, partial information, and few opportunities for learning. The approach is demonstrated for Metro Vancouver, Canada, in the context of earthquake and flood risk. The methodological approach is practical and focuses on potential disruptions to infrastructure services. In spirit, it resembles probability elicitation with multiple experts; however, it elicits disruption and recovery over time, rather than uncertainties regarding system function at a given point in time. It develops information on regional infrastructure risk and engages infrastructure organizations in the process. Information sharing, iteration, and learning among the participants provide the basis for more informed estimates of infrastructure system robustness and recovery that incorporate the potential for interdependent failures after an extreme event. Results demonstrate the vital importance of cross‐sectoral communication to develop shared understanding of regional infrastructure disruption in disasters. For Vancouver, specific results indicate that in a hypothetical M7.3 earthquake, virtually all infrastructures would suffer severe disruption of service in the immediate aftermath, with many experiencing moderate disruption two weeks afterward. Electric power, land transportation, and telecommunications are identified as core infrastructure sectors.  相似文献   

4.
The increased frequency of extreme events in recent years highlights the emerging need for the development of methods that could contribute to the mitigation of the impact of such events on critical infrastructures, as well as boost their resilience against them. This article proposes an online spatial risk analysis capable of providing an indication of the evolving risk of power systems regions subject to extreme events. A Severity Risk Index (SRI) with the support of real‐time monitoring assesses the impact of the extreme events on the power system resilience, with application to the effect of windstorms on transmission networks. The index considers the spatial and temporal evolution of the extreme event, system operating conditions, and the degraded system performance during the event. SRI is based on probabilistic risk by condensing the probability and impact of possible failure scenarios while the event is spatially moving across a power system. Due to the large number of possible failures during an extreme event, a scenario generation and reduction algorithm is applied in order to reduce the computation time. SRI provides the operator with a probabilistic assessment that could lead to effective resilience‐based decisions for risk mitigation. The IEEE 24‐bus Reliability Test System has been used to demonstrate the effectiveness of the proposed online risk analysis, which was embedded in a sequential Monte Carlo simulation for capturing the spatiotemporal effects of extreme events and evaluating the effectiveness of the proposed method.  相似文献   
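A minimal sketch of the core idea of condensing scenario probabilities and impacts into a time-varying index is given below; the exposure profile, the scenario generator, and every number are assumptions made for illustration, not the authors' SRI formulation or scenario-reduction algorithm.

import random

def severity_risk_index(scenarios):
    """Condense a set of failure scenarios, each a (probability, impact)
    pair, into a single probability-weighted impact value."""
    return sum(p * impact for p, impact in scenarios)

def windstorm_sri_profile(n_steps=6, n_scenarios=200, seed=1):
    """Toy temporal profile: scenario probabilities and impacts rise and fall
    as the storm traverses the network (exposure peaks mid-event)."""
    rng = random.Random(seed)
    profile = []
    for step in range(n_steps):
        exposure = max(0.0, 1.0 - abs(step - n_steps / 2) / (n_steps / 2))
        scenarios = []
        for _ in range(n_scenarios):
            p_scenario = exposure * rng.uniform(0.0005, 0.005)  # this failure set occurs...
            lost_load_mw = exposure * rng.uniform(50.0, 450.0)  # ...with this impact
            scenarios.append((p_scenario, lost_load_mw))
        profile.append(severity_risk_index(scenarios))
    return profile

print([round(v, 1) for v in windstorm_sri_profile()])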

5.
6.
Quantitative risk analysis (QRA) is a systematic approach for evaluating likelihood, consequences, and risk of adverse events. QRA based on event (ETA) and fault tree analyses (FTA) employs two basic assumptions. The first assumption is related to likelihood values of input events, and the second assumption is regarding interdependence among the events (for ETA) or basic events (for FTA). Traditionally, FTA and ETA both use crisp probabilities; however, to deal with uncertainties, the probability distributions of input event likelihoods are assumed. These probability distributions are often hard to come by and even if available, they are subject to incompleteness (partial ignorance) and imprecision. Furthermore, both FTA and ETA assume that events (or basic events) are independent. In practice, these two assumptions are often unrealistic. This article focuses on handling uncertainty in a QRA framework of a process system. Fuzzy set theory and evidence theory are used to describe the uncertainties in the input event likelihoods. A method based on a dependency coefficient is used to express interdependencies of events (or basic events) in ETA and FTA. To demonstrate the approach, two case studies are discussed.  相似文献   
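The gist of combining imprecise likelihoods under partial dependence can be sketched as follows; the triangular fuzzy representation, the vertex-style arithmetic, and the interpolation between independence (d = 0) and perfect positive dependence (d = 1) are simplifying assumptions made for this note, not the article's own expressions.

from dataclasses import dataclass

@dataclass
class TriFuzzy:
    lo: float   # smallest plausible probability
    mid: float  # most likely probability
    hi: float   # largest plausible probability

def _combine(a, b, f):
    # vertex-style approximation: apply f to corresponding vertices
    return TriFuzzy(f(a.lo, b.lo), f(a.mid, b.mid), f(a.hi, b.hi))

def and_gate(a, b, d=0.0):
    # (1 - d) * p1 * p2  +  d * min(p1, p2)
    return _combine(a, b, lambda x, y: (1 - d) * x * y + d * min(x, y))

def or_gate(a, b, d=0.0):
    # same interpolation applied to the complements
    return _combine(a, b, lambda x, y: 1 - ((1 - d) * (1 - x) * (1 - y)
                                            + d * min(1 - x, 1 - y)))

# two basic events feeding the top event of a tiny fault tree
pump_fails = TriFuzzy(0.01, 0.02, 0.05)
valve_fails = TriFuzzy(0.02, 0.04, 0.08)
print(and_gate(pump_fails, valve_fails, d=0.0))  # independent basic events
print(and_gate(pump_fails, valve_fails, d=0.5))  # partially dependent events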

7.
The concept of resilience and its relevance to disaster risk management has increasingly gained attention in recent years. It is common for risk and resilience studies to model system recovery by analyzing a single or aggregated measure of performance, such as economic output or system functionality. However, the history of past disasters and recent risk literature suggest that a single-dimension view of relevant systems is not only insufficient, but can compromise the ability to manage risk for these systems. In this article, we explore how multiple dimensions influence the ability of complex systems to function and effectively recover after a disaster. In particular, we compile evidence from the many competing resilience perspectives to identify the most critical resilience dimensions across several academic disciplines, applications, and disaster events. The findings demonstrate the need for a conceptual framework that decomposes resilience into six primary dimensions: workforce/population, economy, infrastructure, geography, hierarchy, and time (WEIGHT). These dimensions are not typically addressed holistically in the literature; often they are either modeled independently or in piecemeal combinations. The current research is the first to provide a comprehensive discussion of each resilience dimension and discuss how these dimensions can be integrated into a cohesive framework, suggesting that no single dimension is sufficient for a holistic analysis of disaster risk management. Through this article, we also aim to spark discussions among researchers and policymakers to develop a multicriteria decision framework for evaluating the efficacy of resilience strategies. Furthermore, the WEIGHT dimensions may also be used to motivate the generation of new approaches for data analytics of resilience-related knowledge bases.

8.
Recent studies in system resilience have proposed metrics to understand the ability of systems to recover from a disruptive event, often offering a qualitative treatment of resilience. This work provides a quantitative treatment of resilience and focuses specifically on measuring resilience in infrastructure networks. Inherent cost metrics are introduced: loss of service cost and total network restoration cost. Further, “costs” of network resilience are often shared across multiple infrastructures and industries that rely upon those networks, particularly when such networks become inoperable in the face of disruptive events. As such, this work integrates the quantitative resilience approach with a model describing the regional, multi‐industry impacts of a disruptive event to measure the interdependent impacts of network resilience. The approaches discussed in this article are deployed in a case study of an inland waterway transportation network, the Mississippi River Navigation System.  相似文献   
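As a rough illustration of the two inherent cost metrics named above, here is a short sketch; the service profile, repair schedule, and unit penalty are invented numbers, and the regional multi-industry impact model is omitted.

def loss_of_service_cost(capacity, delivered, unit_penalty, dt=1.0):
    """Cost of undelivered service accumulated over the disruption horizon."""
    return sum((c - d) * unit_penalty * dt for c, d in zip(capacity, delivered))

def restoration_cost(repair_schedule):
    """Total cost of the restoration activities themselves."""
    return sum(cost for _day, cost in repair_schedule)

# daily barge tonnage: nominal capacity vs. what the disrupted network delivers
capacity  = [100, 100, 100, 100, 100, 100]
delivered = [ 20,  35,  60,  80, 100, 100]
repairs   = [(1, 5000.0), (2, 8000.0), (3, 3000.0)]   # (day, cost) pairs

print(loss_of_service_cost(capacity, delivered, unit_penalty=40.0))
print(restoration_cost(repairs))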

9.
Recent natural and man‐made catastrophes, such as the Fukushima nuclear power plant accident, flooding caused by Hurricane Katrina, the Deepwater Horizon oil spill, the Haiti earthquake, and the mortgage derivatives crisis, have renewed interest in the concept of resilience, especially as it relates to complex systems vulnerable to multiple or cascading failures. Although the meaning of resilience is contested in different contexts, in general resilience is understood to mean the capacity to adapt to changing conditions without catastrophic loss of form or function. In the context of engineering systems, this has sometimes been interpreted as the probability that system conditions might exceed an irrevocable tipping point. However, we argue that this approach improperly conflates resilience and risk perspectives by expressing resilience exclusively in risk terms. In contrast, we describe resilience as an emergent property of what an engineering system does, rather than a static property the system has. Therefore, resilience cannot be measured at the systems scale solely from examination of component parts. Instead, resilience is better understood as the outcome of a recursive process that includes sensing, anticipation, learning, and adaptation. In this approach, resilience analysis can be understood as differentiable from, but complementary to, risk analysis, with important implications for the adaptive management of complex, coupled engineering systems. Management of the 2011 flooding in the Mississippi River Basin is discussed as an example of the successes and challenges of resilience‐based management of complex natural systems that have been extensively altered by engineered structures.

10.
Yacov Y. Haimes 《Risk analysis》2012,32(9):1451-1467
This article is grounded on the premise that the complex process of risk assessment, management, and communication, when applied to systems of systems, should be guided by universal systems‐based principles. It is written from the perspective of systems engineering with the hope and expectation that the principles introduced here will be supplemented and complemented by principles from the perspectives of other disciplines. Indeed, there is no claim that the following 10 guiding principles constitute a complete set; rather, the intent is to initiate a discussion on this important subject that will incrementally lead us to a more complete set of guiding principles. The 10 principles are as follows: First Principle: Holism is the common denominator that bridges risk analysis and systems engineering. Second Principle: The process of risk modeling, assessment, management, and communication must be systemic and integrated. Third Principle: Models and state variables are central to quantitative risk analysis. Fourth Principle: Multiple models are required to represent the essence of the multiple perspectives of complex systems of systems. Fifth Principle: Meta‐modeling and subsystems integration must be derived from the intrinsic states of the system of systems. Sixth Principle: Multiple conflicting and competing objectives are inherent in risk management. Seventh Principle: Risk analysis must account for epistemic and aleatory uncertainties. Eighth Principle: Risk analysis must account for risks of low probability with extreme consequences. Ninth Principle: The time frame is central to quantitative risk analysis. Tenth Principle: Risk analysis must be holistic, adaptive, incremental, and sustainable, and it must be supported with appropriate data collection, metrics with which to measure efficacious progress, and criteria on the basis of which to act. The relevance and efficacy of each guiding principle is demonstrated by applying it to the U.S. Federal Aviation Administration complex Next Generation (NextGen) system of systems.  相似文献   

11.
Probabilistic risk assessment (PRA) is a useful tool to assess complex interconnected systems. This article leverages the capabilities of PRA tools developed for industrial and nuclear risk analysis in community resilience evaluations by modeling the food security of a community in terms of its built environment as an integrated system. To this end, we model the performance of Gilroy, CA, a moderate‐size town, with regard to disruptions in its food supply caused by a severe earthquake. The food retailers of Gilroy, along with the electrical power network, water network elements, and bridges, are considered as components of a system. Fault and event trees are constructed to model the requirements for continuous food supply to community residents and are analyzed efficiently using binary decision diagrams (BDDs). The study also identifies shortcomings in approximate classical system analysis methods in assessing community resilience. Importance factors are used to rank the contributions of various factors to the overall risk of food insecurity. Finally, the study considers the impact of various sources of uncertainty in hazard modeling and infrastructure performance on food security measures. The methodology is applicable to any existing critical infrastructure system and can be extended to other hazards.
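To make the fault-tree idea concrete, here is a toy "food supply disrupted" top event evaluated by brute-force enumeration; the component list and failure probabilities are invented for this note, and an analysis at realistic scale would use binary decision diagrams rather than enumeration.

from itertools import product

p_fail = {"power": 0.30, "water": 0.20, "bridge": 0.25,
          "store_1": 0.40, "store_2": 0.40}   # assumed post-earthquake failure probabilities

def food_supply_disrupted(state):
    """Top event: residents cannot be supplied. A store serves the community
    only if it, the power network, the water network, and the access bridge
    all survive (True in `state` means the component has failed)."""
    def store_ok(s):
        return not (state[s] or state["power"] or state["water"] or state["bridge"])
    return not (store_ok("store_1") or store_ok("store_2"))

def top_event_probability():
    comps = list(p_fail)
    total = 0.0
    for failures in product([True, False], repeat=len(comps)):
        state = dict(zip(comps, failures))
        weight = 1.0
        for c in comps:
            weight *= p_fail[c] if state[c] else 1 - p_fail[c]
        if food_supply_disrupted(state):
            total += weight
    return total

print(round(top_event_probability(), 4))   # probability of food insecurity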

12.
Given the ubiquitous nature of infrastructure networks in today's society, there is a global need to understand, quantify, and plan for the resilience of these networks to disruptions. This work defines network resilience along dimensions of reliability, vulnerability, survivability, and recoverability, and quantifies network resilience as a function of component and network performance. The treatment of vulnerability and recoverability as random variables leads to stochastic measures of resilience, including time to total system restoration, time to full system service resilience, and time to a specific α% resilience. Ultimately, a means to optimize network resilience strategies is discussed, primarily through an adaptation of the Copeland Score for nonparametric stochastic ranking. The measures of resilience and optimization techniques are applied to inland waterway networks, an important mode in the larger multimodal transportation network upon which we rely for the flow of commodities. We provide a case study analyzing and planning for the resilience of commodity flows along the Mississippi River Navigation System to illustrate the usefulness of the proposed metrics.
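One of the stochastic measures listed above, time to a specific α% resilience, can be estimated from simulated recovery trajectories as sketched below; the trajectory model and all parameters are placeholders, and the Copeland-score ranking step is omitted.

import random

def recovery_trajectory(rng, steps=30):
    """Performance drops after the disruption, then recovers stochastically."""
    perf, path = rng.uniform(0.2, 0.5), []
    for _ in range(steps):
        perf = min(1.0, perf + rng.uniform(0.0, 0.08))
        path.append(perf)
    return path

def time_to_alpha_resilience(path, alpha):
    """First time step at which performance reaches the alpha fraction of
    nominal service; None if not reached within the horizon."""
    for t, perf in enumerate(path):
        if perf >= alpha:
            return t
    return None

rng = random.Random(7)
times = [time_to_alpha_resilience(recovery_trajectory(rng), alpha=0.90)
         for _ in range(1000)]
reached = [t for t in times if t is not None]
print("mean time to 90% resilience:", sum(reached) / len(reached))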

13.
Reliability and higher levels of safety are thought to be achieved by using systematic approaches to managing risks. The assessment of risks has produced a range of different approaches to assessing these uncertainties, presenting models for how risks affect individuals or organizations. Contemporary risk assessment tools based on this approach have proven difficult for practitioners to use as tools for tactical and operational decision making. This article presents an alternative to these assessments by utilizing a resilience perspective, arguing that complex systems are inclined to variety and uncertainty regarding the results they produce and are therefore prone to systemic failures. A continuous improvement approach is a source of reliability when managing complex systems and is necessary to manage varieties and uncertainties. For an organization to understand how risk events occur, it is necessary to define what is believed to be the equilibrium of the system in time and space. By applying a resilience engineering (RE) perspective to risk assessment, it is possible to manage this complexity by assessing the ability to respond, monitor, learn, and anticipate risks, and in so doing to move away from the flawed frequency and consequences approach. Using a research station network in the Arctic as an example illustrates how an RE approach qualifies assessments by bridging risk assessments with value-creation processes. The article concludes by arguing that a resilience-based risk assessment can improve on current practice, including for organizations located outside the Arctic region.  相似文献   

14.
Yacov Y. Haimes 《Risk analysis》2012,32(11):1834-1845
Natural and human‐induced disasters affect organizations in myriad ways because of the inherent interconnectedness and interdependencies among human, cyber, and physical infrastructures, but more importantly, because organizations depend on the effectiveness of people and on the leadership they provide to the organizations they serve and represent. These human–organizational–cyber–physical infrastructure entities are termed systems of systems. Given the multiple perspectives that characterize them, they cannot be modeled effectively with a single model. The focus of this article is: (i) the centrality of the states of a system in modeling; (ii) the efficacious role of shared states in modeling systems of systems, in identification, and in the meta‐modeling of systems of systems; and (iii) the contributions of the above to strategic preparedness, response to, and recovery from catastrophic risk to such systems. Strategic preparedness connotes a decision‐making process and its associated actions. These must be: implemented in advance of a natural or human‐induced disaster, aimed at reducing consequences (e.g., recovery time, community suffering, and cost), and/or controlling their likelihood to a level considered acceptable (through the decisionmakers’ implicit and explicit acceptance of various risks and tradeoffs). The inoperability input‐output model (IIM), which is grounded on Leontief's input/output model, has enabled the modeling of interdependent subsystems. Two separate modeling structures are introduced. These are: phantom system models (PSM), where shared states constitute the essence of modeling coupled systems; and the IIM, where interdependencies among sectors of the economy are manifested by the Leontief matrix of technological coefficients. This article demonstrates the potential contributions of these two models to each other, and thus to more informative modeling of systems of systems schema. The contributions of shared states to this modeling and to systems identification are presented with case studies.  相似文献   
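Since this entry turns on the inoperability input-output model, a compact numerical sketch of its commonly cited demand-reduction form may help; the three sectors and all coefficients below are invented for illustration and are not taken from the article.

import numpy as np

# normalized interdependency matrix A*: entry (i, j) is the inoperability
# sector i suffers per unit of inoperability in sector j (invented values)
A_star = np.array([
    [0.0, 0.3, 0.2],   # power
    [0.4, 0.0, 0.1],   # telecommunications
    [0.3, 0.2, 0.0],   # transportation
])
c_star = np.array([0.10, 0.0, 0.05])   # direct demand-side perturbation

# q = A* q + c*   =>   q = (I - A*)^{-1} c*
q = np.linalg.solve(np.eye(3) - A_star, c_star)
print(dict(zip(["power", "telecom", "transport"], q.round(3))))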

15.
Max Boholm 《Risk analysis》2019,39(6):1243-1261
In risk analysis and research, the concept of risk is often understood quantitatively. For example, risk is commonly defined as the probability of an unwanted event or as its probability multiplied by its consequences. This article addresses (1) to what extent and (2) how the noun risk is actually used quantitatively. Uses of the noun risk are analyzed in four linguistic corpora, both Swedish and English (mostly American English). In total, over 16,000 uses of the noun risk are studied in 14 random (n = 500) or complete samples (where n ranges from 173 to 5,144) of, for example, news and magazine articles, fiction, and websites of government agencies. In contrast to the widespread definition of risk as a quantity, a main finding is that the noun risk is mostly used nonquantitatively. Furthermore, when used quantitatively, the quantification is seldom numerical, instead relying on less precise expressions of quantification, such as high risk and increased risk. The relatively low frequency of quantification in a wide range of language material suggests a quantification bias in many areas of risk theory, that is, overestimation of the importance of quantification in defining the concept of risk. The findings are also discussed in relation to fuzzy‐trace theory. Findings of this study confirm, as suggested by fuzzy‐trace theory, that vague representations are prominent in quantification of risk. The application of the terminology of fuzzy‐trace theory for explaining the patterns of language use is discussed.

16.
17.
In the Australian policy context, there has recently been a discernible shift in the discourse used when considering responses to the impacts of current weather extremes and future climate change. Commonly used terminology, such as climate change impacts and vulnerability, is now being increasingly replaced by a preference for language with more positive connotations as represented by resilience and a focus on the ‘strengthening’ of local communities. However, although this contemporary shift in emphasis has largely political roots, the scientific conceptual underpinning for resilience, and its relationship with climate change action, remains contested. To contribute to this debate, the authors argue that how adaptation is framed—in this case by the notion of resilience—can have an important influence on agenda setting, on the subsequent adaptation pathways that are pursued and on eventual adaptation outcomes. Drawing from multi-disciplinary adaptation research carried out in three urban case studies in the State of Victoria, Australia (‘Framing multi-level and multi-actor adaptation responses in the Victorian context’, funded by the Victorian Centre for Climate Change Adaptation Research (2010–2012)), this article is structured according to three main discussion points. Firstly, the importance of being explicit when framing adaptation; secondly, this study reflects on how resilience is emerging as part of adaptation discourse and narratives in different scientific, research and policy-making communities; and finally, the authors reflect on the implications of resilience framing for evolving adaptation policy and practice.  相似文献   

18.
We propose a definition of infrastructure resilience that is tied to the operation (or function) of an infrastructure as a system of interacting components and that can be objectively evaluated using quantitative models. Specifically, for any particular system, we use quantitative models of system operation to represent the decisions of an infrastructure operator who guides the behavior of the system as a whole, even in the presence of disruptions. Modeling infrastructure operation in this way makes it possible to systematically evaluate the consequences associated with the loss of infrastructure components, and leads to a precise notion of “operational resilience” that facilitates model verification, validation, and reproducible results. Using a simple example of a notional infrastructure, we demonstrate how to use these models for (1) assessing the operational resilience of an infrastructure system, (2) identifying critical vulnerabilities that threaten its continued function, and (3) advising policymakers on investments to improve resilience.  相似文献   
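A deliberately tiny sketch of the "operator model" idea follows: re-dispatch a notional two-path supply system after losing a component and report the share of demand still served. The topology, capacities, and the trivial dispatch rule are assumptions standing in for the article's optimization models of system operation.

def served_demand(path_capacities, demand):
    """Operator model: serve as much demand as the surviving paths allow."""
    return min(demand, sum(path_capacities.values()))

def operational_resilience(path_capacities, demand, lost_component):
    """Fraction of demand still served after losing one component."""
    degraded = {k: (0.0 if k == lost_component else v)
                for k, v in path_capacities.items()}
    return served_demand(degraded, demand) / demand

paths = {"pipeline": 60.0, "rail": 50.0}   # deliverable capacity per path
demand = 100.0

for component in paths:
    print(component, operational_resilience(paths, demand, component))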

19.
《Risk analysis》2018,38(8):1601-1617
Resilience is the capability of a system to adjust its functionality during a disturbance or perturbation. The present work attempts to quantify resilience as a function of reliability, vulnerability, and maintainability. The approach assesses proactive and reactive defense mechanisms along with operational factors to respond to unwanted disturbances and perturbation. This article employs a Bayesian network format to build a resilience model. The application of the model is tested on hydrocarbon‐release scenarios during an offloading operation in a remote and harsh environment. The model identifies requirements for robust recovery and adaptability during an unplanned scenario related to a hydrocarbon release. This study attempts to relate the resilience capacity of a system to the system's absorptive, adaptive, and restorative capacities. These factors influence predisaster and postdisaster strategies that can be mapped to enhance the resilience of the system.  相似文献   
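A hand-rolled sketch of the kind of discrete Bayesian network the abstract describes, with resilience conditioned on reliability, vulnerability, and maintainability; the priors and conditional probabilities are placeholder values, not the article's elicited numbers.

from itertools import product

priors = {"reliable": 0.9, "vulnerable": 0.3, "maintainable": 0.8}   # parent priors

def cpt_resilient(reliable, vulnerable, maintainable):
    """P(system recovers acceptably | parent states) -- placeholder values."""
    p = 0.2 + (0.35 if reliable else 0.0) \
            - (0.25 if vulnerable else 0.0) \
            + (0.35 if maintainable else 0.0)
    return min(max(p, 0.05), 0.95)

def p_resilient(**evidence):
    """Marginal P(resilient), optionally conditioning parents on evidence,
    e.g. p_resilient(vulnerable=True)."""
    total = 0.0
    for states in product([True, False], repeat=len(priors)):
        named = dict(zip(priors, states))
        if any(named[k] != v for k, v in evidence.items()):
            continue
        weight = 1.0
        for k, s in named.items():
            if k not in evidence:                    # evidence nodes weigh 1
                weight *= priors[k] if s else 1 - priors[k]
        total += weight * cpt_resilient(**named)
    return total

print(round(p_resilient(), 3))                   # baseline
print(round(p_resilient(vulnerable=True), 3))    # after a hydrocarbon release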

20.
In counterterrorism risk management decisions, the analyst can choose to represent terrorist decisions as defender uncertainties or as attacker decisions. We perform a comparative analysis of probabilistic risk analysis (PRA) methods including event trees, influence diagrams, Bayesian networks, decision trees, game theory, and combined methods on the same illustrative examples (container screening for radiological materials) to get insights into the significant differences in assumptions and results. A key tenet of PRA and decision analysis is the use of subjective probability to assess the likelihood of possible outcomes. For each technique, we compare the assumptions, probability assessment requirements, risk levels, and potential insights for risk managers. We find that assessing the distribution of potential attacker decisions is a complex judgment task, particularly considering the adaptation of the attacker to defender decisions. Intelligent adversary risk analysis and adversarial risk analysis are extensions of decision analysis and sequential game theory that help to decompose such judgments. These techniques explicitly show the adaptation of the attacker and the resulting shift in risk based on defender decisions.
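To illustrate why modeling the attacker as adaptive shifts the risk picture, here is a toy screening comparison; all probabilities and the damage figure are invented, and the example merely stands in for the richer event-tree and game-theoretic analyses compared in the article.

targets = ["port_A", "port_B"]
attack_prob = {"port_A": 0.6, "port_B": 0.4}   # fixed "defender uncertainty" view
damage = 100.0

def detection(screening, target):
    """Probability the attack is intercepted, given where screening is placed."""
    return 0.8 if screening == target else 0.2

def expected_loss_fixed(screening):
    """Attacker modeled as a static probability distribution over targets."""
    return sum(attack_prob[t] * (1 - detection(screening, t)) * damage
               for t in targets)

def expected_loss_adaptive(screening):
    """Attacker observes the screening decision and picks the weaker target."""
    worst = max(targets, key=lambda t: 1 - detection(screening, t))
    return (1 - detection(screening, worst)) * damage

for s in targets:
    print(s, expected_loss_fixed(s), expected_loss_adaptive(s))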
