Similar Articles
20 similar articles found (search time: 31 ms).
1.
Bin Li, Ming Li, Carol Smidts. Risk Analysis, 2005, 25(4): 1061-1077
Probabilistic risk assessment (PRA) is a methodology for assessing the probability of failure or success of a system's operation. PRA has proven to be a systematic, logical, and comprehensive technique for risk assessment. Software plays an increasing role in modern safety-critical systems, and a significant number of failures can be attributed to software. Unfortunately, current PRA concentrates on representing the behavior of hardware systems, humans, and (to a limited extent) their contributions to risk, but neglects the contributions of software because software failure phenomena are poorly understood. It is thus imperative to model the impact of software in order to reflect the risk in current and future systems. The objective of our research is to develop a methodology that accounts for the impact of software on system failure and can be used in the classical PRA process. A test-based approach for integrating software into PRA is discussed in this article. The approach includes identifying the software functions to be modeled in the PRA and modeling the software contributions in the event sequence diagrams (ESDs) and fault trees. It also introduces the concepts of input trees and output trees and proposes a quantification strategy that uses a software safety-testing technique. The method is applied to an example system, PACS.
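As a minimal sketch of the quantification step, the snippet below folds a test-based software failure probability into a toy fault tree. The gate structure, probabilities, and test counts are hypothetical illustrations, not the article's PACS model.

```python
# Sketch: a software failure probability estimated from safety testing
# (failures observed / test cases drawn from the operational profile)
# enters a fault tree as a basic event. All numbers are illustrative.

def or_gate(*probs):
    """P(at least one input event occurs), assuming independence."""
    p = 1.0
    for q in probs:
        p *= 1.0 - q
    return 1.0 - p

def and_gate(*probs):
    """P(all input events occur), assuming independence."""
    p = 1.0
    for q in probs:
        p *= q
    return p

sw_failures, sw_tests = 3, 10_000
p_software = sw_failures / sw_tests   # test-based software basic event

p_hardware = 1e-4                     # assumed hardware basic event
p_operator = 5e-3                     # assumed human-error basic event

# Top event: system fails if the software fails OR both the hardware
# and the operator fail.
p_top = or_gate(p_software, and_gate(p_hardware, p_operator))
print(f"P(top event) = {p_top:.6f}")
```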

2.
In counterterrorism risk management decisions, the analyst can choose to represent terrorist decisions as defender uncertainties or as attacker decisions. We perform a comparative analysis of probabilistic risk analysis (PRA) methods, including event trees, influence diagrams, Bayesian networks, decision trees, game theory, and combined methods, on the same illustrative examples (container screening for radiological materials) to gain insight into the significant differences in assumptions and results. A key tenet of PRA and decision analysis is the use of subjective probability to assess the likelihood of possible outcomes. For each technique, we compare the assumptions, probability assessment requirements, risk levels, and potential insights for risk managers. We find that assessing the distribution of potential attacker decisions is a complex judgment task, particularly considering the adaptation of the attacker to defender decisions. Intelligent adversary risk analysis and adversarial risk analysis are extensions of decision analysis and sequential game theory that help to decompose such judgments. These techniques explicitly show the adaptation of the attacker and the resulting shift in risk based on defender decisions.
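The contrast between treating attacker behavior as a fixed uncertainty and as an adaptive decision can be seen in a few lines. Everything here (targets, damages, probabilities) is hypothetical, chosen only to show how the risk estimate shifts when the attacker responds to the defense.

```python
# Sketch contrasting two treatments of the attacker (illustrative numbers):
# (a) defender uncertainty: attack target drawn from an elicited distribution;
# (b) attacker decision: attacker picks the target maximizing expected damage
#     after observing the defense.

damage = {"port_A": 100, "port_B": 60}        # consequence if attack succeeds
p_success = {"port_A": 0.2, "port_B": 0.8}    # after defender screens port_A

# (a) PRA-style: elicited attack probabilities, fixed regardless of defense.
elicited = {"port_A": 0.7, "port_B": 0.3}
risk_pra = sum(elicited[t] * p_success[t] * damage[t] for t in damage)

# (b) Game-style: the adaptive attacker shifts to the best remaining option.
best = max(damage, key=lambda t: p_success[t] * damage[t])
risk_game = p_success[best] * damage[best]

print(f"PRA-style risk:         {risk_pra:.1f}")
print(f"Adaptive-attacker risk: {risk_game:.1f} (attacks {best})")
```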

3.
Traditional probabilistic risk assessment (PRA), of the type originally developed for engineered systems, is still proposed for terrorism risk analysis. We show that such PRA applications are unjustified in general. The capacity of terrorists to seek and use information and to actively research different attack options before deciding what to do raises unique features of terrorism risk assessment that are not adequately addressed by conventional PRA for natural and engineered systems, in part because decisions based on such PRA estimates do not adequately hedge against the different probabilities that attackers may eventually act upon. These probabilities may differ from the defender's (even if the defender's experts are thoroughly trained, well-calibrated, unbiased probability assessors) because they may be conditioned on different information. We illustrate the fundamental differences between PRA and terrorism risk analysis, and suggest the use of robust decision analysis for risk management when attackers may know more about some attack options than we do.

4.
The Impact of Relative Performance on the Risk-Taking Behavior of Investment Funds   (Cited: 1; self-citations: 1; citations by others: 1)
This paper models the investment fund market as a series of "tournaments" and builds a game-theoretic model to study how relative performance affects the risk-taking behavior of investment funds. In the model, two funds with different midyear performance compete to attract more new fund inflows, and hence higher compensation, at year-end, when the "tournament" concludes. Contrary to intuition, we find that at year-end the fund with the better midyear performance is more likely than the midyear laggard to choose a high-risk portfolio. Moreover, the larger the midyear performance gap, the higher the return of the risky asset, and the lower its volatility, the greater the probability that the midyear leader chooses the riskier portfolio at year-end, and correspondingly, the smaller the probability that the midyear laggard does so. Finally, we interpret these results using game theory and behavioral finance.
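A toy tournament along these lines can be tabulated directly. The model below is a deliberately simplified stand-in, not the paper's game: two funds separated by a midyear gap each choose a safe return or a binary risky return, and whichever finishes the year ahead is assumed to win the new inflows. All parameter values are illustrative.

```python
# Toy two-fund tournament: the leader is ahead by `gap` at midyear; each fund
# then picks a safe return r, or a risky return of +u / -d with prob. q / 1-q.

import itertools

gap, r, u, d, q = 0.05, 0.04, 0.10, 0.06, 0.6   # all values illustrative

def p_leader_wins(leader_risky, laggard_risky):
    """Probability that the midyear leader finishes the year ahead."""
    L = [(q, u), (1 - q, -d)] if leader_risky else [(1.0, r)]
    G = [(q, u), (1 - q, -d)] if laggard_risky else [(1.0, r)]
    win = 0.0
    for pl, xl in L:
        for pg, xg in G:
            if gap + xl > xg:          # leader keeps the lead
                win += pl * pg
    return win

for lead, lag in itertools.product([False, True], repeat=2):
    print(f"leader risky={lead}, laggard risky={lag}: "
          f"P(leader wins) = {p_leader_wins(lead, lag):.2f}")
```

Reading the printed table row by row gives each fund's best response to the other's risk choice, which is the equilibrium logic the paper studies in a richer setting.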

5.
Risk Analysis, 2018, 38(1): 118-133
In security check systems, tighter screening increases the security level but also causes more congestion, which can lengthen wait times; heavier congestion in the lines can also create problems for the screeners. The Transportation Security Administration (TSA) Precheck Program was introduced to create fast lanes in airports, with the goal of expediting passengers whom the TSA does not deem to be threats; in this lane, the TSA imposes fewer restrictions in order to speed up screening. Motivated by the TSA Precheck Program, we study parallel queueing imperfect screening systems in which potential normal and adversary participants/applicants decide whether or not to apply to the Precheck Program. Approved participants are assigned to a faster screening channel according to a screening policy set by an approver, who balances passenger safety against line congestion. There are three types of optimal application strategies for normal applicants, depending on whether the marginal payoff is negative or positive, or whether the marginal benefit equals the marginal cost. An adversary applicant does not apply when the screening policy is sufficiently strict or the number of utilized benefits is sufficiently small. The basic model is extended by considering (1) applicants' parameters that follow different distributions and (2) applicants with risk levels, where the approver determines the threshold value needed to qualify for Precheck. This article integrates game theory and queueing theory to study the optimal screening policy and provides insights into imperfect parallel queueing screening systems.
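The normal applicant's marginal trade-off can be sketched with two M/M/1 queues. All parameters below (service rates, arrival rate, time value, application cost) are assumed for illustration and are not taken from the article.

```python
# Sketch of the application trade-off: joining the fast lane buys a shorter
# M/M/1 wait at an application cost c. The payoff's sign change as more
# passengers apply hints at where an equilibrium fraction would sit.

def mm1_wait(lam, mu):
    """Mean time in an M/M/1 system; requires lam < mu for stability."""
    assert lam < mu, "queue must be stable"
    return 1.0 / (mu - lam)

mu_fast, mu_slow = 12.0, 8.0    # service rates (passengers/min), assumed
lam_total = 7.0                 # total arrival rate, assumed
value_of_time, c = 1.0, 0.3     # $/min of waiting, application cost, assumed

def applicant_payoff(frac_apply):
    """Marginal payoff of applying when a fraction frac_apply already applies."""
    lam_fast = frac_apply * lam_total
    lam_slow = lam_total - lam_fast
    saved = mm1_wait(lam_slow, mu_slow) - mm1_wait(lam_fast, mu_fast)
    return value_of_time * saved - c    # marginal benefit minus marginal cost

for f in (0.1, 0.3, 0.5, 0.7):
    print(f"fraction applying = {f:.1f}: payoff = {applicant_payoff(f):+.3f}")
```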

6.
This article presents ongoing research on the efficient allocation of defense resources to minimize the damage inflicted on a spatially distributed physical network (such as a pipeline, water system, or power distribution system) by an active adversary. It recognizes the fundamental difference between preparing for natural disasters (hurricanes, earthquakes, or even accidental system failures) and allocating resources to defend against an opponent who is aware of, and anticipating, the defender's efforts to mitigate the threat. Our approach uses a combination of integer programming and agent-based modeling to allocate the defensive resources. We conceptualize the problem as a Stackelberg "leader-follower" game in which the defender first places his assets to defend key areas of the network, and the attacker then seeks to inflict the maximum damage possible within the constraints of resources and network structure. The criticality of arcs in the network is estimated by a deterministic network interdiction formulation, which then informs an evolutionary agent-based simulation. The simulation is used to determine the allocation of resources for attackers and defenders that results in evolutionarily stable strategies, where actions by either side alone cannot increase its share of victories. We demonstrate these techniques on an example network, comparing the evolutionary agent-based results to a more traditional probabilistic risk analysis (PRA) approach. Our results show that the agent-based approach yields a greater percentage of defender victories than does the PRA-based approach.
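The leader-follower structure can be shown by brute force on a tiny network. The graph, budgets, and binary damage measure below are hypothetical stand-ins for the article's interdiction formulation.

```python
# Stackelberg sketch: the defender hardens k arcs first; the attacker then
# cuts m unhardened arcs to disconnect source "s" from sink "t".

import itertools

arcs = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t"), ("a", "b")]  # toy net
k, m = 2, 2   # defense and attack budgets, assumed

def connected(removed):
    """True if t is still reachable from s after `removed` arcs are cut."""
    live = [a for a in arcs if a not in removed]
    seen, stack = set(), ["s"]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(v for (x, v) in live if x == u)
    return "t" in seen

def worst_case_damage(defended):
    """Attacker's best response: 1 if some feasible cut disconnects s from t."""
    targets = [a for a in arcs if a not in defended]
    return max((0 if connected(cut) else 1
                for cut in itertools.combinations(targets, m)), default=0)

# Leader moves first, anticipating the attacker's best response.
defense = min(itertools.combinations(arcs, k), key=worst_case_damage)
print("harden:", defense, "worst-case damage:", worst_case_damage(defense))
```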

7.
Risk Analysis, 2018, 38(10): 2055-2072
Four dimensions of the precautionary principle (PP), involving threat, uncertainty, action, and command, are formalized at the level of set theory and at the level of individual players and natural and technological factors. Flow and decision diagrams with a feedback loop are developed to open up a new research agenda. The role of strategic interaction and games in the PP is underdeveloped or nonexistent in today's literature. To rectify this deficiency, six kinds of games are identified across the four PP dimensions. The games can be interlinked, since player sets can overlap, and their characteristics are illustrated. Accounting for strategic interaction, the article illustrates uncertainty in the PP regarding which game is played, which players participate in which game, strategy sets, payoffs, incomplete information, risk attitudes, and bounded rationality. The insurance and lottery games analyzed earlier for the safe minimum standard (SMS) for species extinction are revisited and placed into a broader context illustrating strategic interaction. Uncertainty about payoffs illustrates transformations back and forth between the chicken game, battle of the sexes, assurance game, and prisoner's dilemma.
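The payoff-driven transformations among those 2x2 games can be made concrete with a small classifier. The payoff orderings are the standard textbook definitions; the numeric perturbations are illustrative only (battle of the sexes is asymmetric and omitted from this symmetric sketch).

```python
# Classify a symmetric 2x2 game from the row player's payoffs and show how a
# perturbation of one payoff (e.g., uncertain damage from unilateral action)
# moves the interaction from one game to another.

def classify(R, S, T, P):
    """R=both cooperate, S=sucker, T=temptation, P=both defect."""
    if T > R > P > S:
        return "prisoner's dilemma"
    if T > R > S > P:
        return "chicken"
    if R > T > P > S:
        return "assurance (stag hunt)"
    return "other"

print(classify(R=3, S=0, T=5, P=1))   # prisoner's dilemma
print(classify(R=3, S=2, T=5, P=1))   # raising S turns it into chicken
print(classify(R=5, S=0, T=3, P=1))   # raising R turns it into assurance
```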

8.
A large number of PRA studies have been completed for specific plants at specific sites. From these studies, taken individually or collectively, many significant insights have emerged about items important to risk and safety. This paper is based primarily on material contained in the EPRI-funded review of five PRA studies: Big Rock Point, Zion, Limerick, Grand Gulf, and Arkansas Nuclear One. The first three were the utility-sponsored studies publicly available at the time of project initiation, while the other two were deemed representative of the NRC's RSSMAP and IREP programs, respectively. The results of PRA studies are usually expressed in terms of core melt frequencies, radionuclide release frequencies, and frequencies of occurrence of different reactor accident consequences (e.g., early and latent fatalities), depending on the level of PRA. These subjects are addressed prominently in this paper. One result of a PRA study is the identification of a relatively small number of accident sequences that represent the dominant contributors to core melt. An analysis of the salient features of the dominant accident sequences from eleven PRAs yielded a characterization of accident sequence categories, which is discussed at some length. The impact of external events is discussed briefly. Next to an explicit quantification of public risk or core melt frequency, the identification of specific safety concerns and the evaluation of possible solutions for risk management are probably the best-recognized and most widely used applications of PRA. Several illustrative examples are briefly discussed. Human interactions are extremely important contributors to plant safety and reliability. A review of PRA studies concluded that it is necessary to account for five types of human interactions, some of which may mitigate and others exacerbate an accident sequence.

9.
Risk Analysis, 2018, 38(8): 1559-1575
System security is typically interdependent: the security risks of one part affect other parts, and threats spread through the vulnerable links in the network, so the risks of the systems can be mitigated through investments in the security of the interconnecting links. This article takes an innovative look at the problem of nodes' security investment in their vulnerable links in a given contagion network, formulated as a game-theoretic model that can be applied to a variety of settings, including information systems. In the proposed game, each node computes its risk from the value of its assets, its vulnerabilities, and the threats it faces, and determines the optimal level of security investment in its external links subject to its limited budget. Direct and indirect nonlinear influences of a node's security investment on the risks of other nodes are also considered. The existence and uniqueness of the game's Nash equilibrium are proved. Further analysis of the model in a practical case revealed that, by taking advantage of the investment effects of other players, perfectly rational players (i.e., those who use the utility function of the proposed game model) make more cost-effective decisions than selfish nonrational or semirational players.
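One way to approximate such an equilibrium numerically is best-response iteration. The functional form of the risk (exponential decay in own and neighbors' investment), the network, and every parameter below are assumptions for illustration, not the article's model.

```python
# Best-response iteration for interdependent security investment: node i's
# expected loss falls with its own investment and, more weakly (beta < alpha),
# with its neighbors' investments.

import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # contagion-network adjacency, assumed
asset = np.array([10.0, 6.0, 8.0])       # value at risk per node, assumed
alpha, beta, budget = 1.0, 0.4, 2.0      # own effect, spillover, budget cap

def node_cost(i, x):
    """Expected loss plus own investment for node i."""
    exposure = alpha * x[i] + beta * (A[i] @ x)   # protection: self + links
    return asset[i] * np.exp(-exposure) + x[i]

x = np.zeros(3)
grid = np.linspace(0.0, budget, 201)
for _ in range(100):                     # iterate best responses to convergence
    for i in range(3):
        trials = [node_cost(i, np.concatenate([x[:i], [g], x[i + 1:]]))
                  for g in grid]
        x[i] = grid[int(np.argmin(trials))]
print("approximate Nash investments:", x.round(2))
```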

10.
Louis Anthony Cox, Jr. Risk Analysis, 2009, 29(8): 1062-1068
Risk analysts often analyze adversarial risks from terrorists or other intelligent attackers without mentioning game theory. Why? One reason is that many adversarial situations (those that can be represented as attacker-defender games, in which the defender first chooses an allocation of defensive resources to protect potential targets, and the attacker, knowing what the defender has done, then decides which targets to attack) can be modeled and analyzed successfully without using most of the concepts and terminology of game theory. However, risk analysis and game theory are also deeply complementary. Game-theoretic analyses of conflicts require modeling the probable consequences of each choice of strategies by the players and assessing the expected utilities of these probable consequences; decision and risk analysis methods are well suited to accomplish these tasks. Conversely, game-theoretic formulations of attack-defense conflicts (and other adversarial risks) can greatly improve upon some current risk analyses that attempt to model attacker decisions as random variables or uncertain attributes of targets ("threats") and that seek to elicit their values from the defender's own experts. Game theory models that clarify the nature of the interacting decisions made by attackers and defenders, and that distinguish clearly between strategic choices (decision nodes in a game tree) and random variables (chance nodes, not controlled by either attacker or defender), can produce more sensible and effective risk management recommendations for allocating defensive resources than current risk scoring models. Thus, risk analysis and game theory are (or should be) mutually reinforcing.

11.
Computer systems are constantly under threat of attack, and in many cases these attacks succeed; today's networked systems are therefore built to be intrusion tolerant. At a large scale, the compromise of the networked system and the recovery of the damage proceed in parallel, allowing services to continue (at a degraded level). A key problem in the restoration procedure concerns the resource allocation strategy and its associated cost; specifically, a minimal cost is desired. In this paper we model the cost as the sum of the service loss and the resource expense incurred during the restoration procedure. We investigate the achievable minimal total cost and the corresponding resource allocation strategy for different situations, including both constant and time-varying compromise and recovery rates. We also account for the fact that the restoration rate is constrained by the resources allocated; the relationship can be either linear or subject to the law of diminishing marginal utility. We present both analytical and numerical results, which show the impact of various system parameters on the critical conditions for a successful system restoration and on the minimal cost.
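For the constant-rate case, the trade-off can be explored numerically. The dynamics, the square-root (diminishing-returns) restoration rate, and all constants below are assumed for illustration rather than taken from the paper.

```python
# Sketch: compromised fraction d grows at rate a and is repaired at rate mu(r);
# total cost = service-loss cost + resource expense, minimized over r.

import numpy as np

a = 0.3                        # compromise rate, assumed
loss_rate, price = 5.0, 1.0    # cost per unit damage, resource price, assumed
T, dt = 20.0, 0.01             # horizon and step for the numeric integration

def total_cost(r):
    """Service loss plus resource expense over [0, T] at resource level r."""
    mu = 0.8 * np.sqrt(r)      # diminishing marginal utility: mu(r) ~ sqrt(r)
    d, cost = 0.2, 0.0         # initial compromised fraction
    for _ in range(int(T / dt)):
        d = min(1.0, max(0.0, d + (a * (1 - d) - mu * d) * dt))
        cost += (loss_rate * d + price * r) * dt
    return cost

rs = np.linspace(0.1, 10.0, 100)
best = rs[int(np.argmin([total_cost(r) for r in rs]))]
print(f"cost-minimizing resource level: r = {best:.2f}")
```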

12.
This paper studies the efficiency evaluation of a class of typical parallel systems in which: (1) the decision-making unit (DMU) consists of two parallel subunits; (2) within the overall system, one subunit is dominant and the other subordinate; and (3) the two subunits partially share input resources, and the allocation of these shared resources between the subunits cannot be clearly distinguished. Building on an analysis of the overall efficiency of the DMU and the efficiencies of its internal subunits, and drawing on the leader-follower game idea, we propose an evaluation method that simultaneously determines the overall system efficiency and the subunit efficiencies; while evaluating system efficiency, the method also achieves an effective allocation of the shared resources. Finally, a case study illustrates the reasonableness and effectiveness of the proposed method.
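A stripped-down numerical sketch of the leader-follower decomposition follows, with a single shared input and invented data: the leader's ratio efficiency is maximized first over the shared-input split (keeping both efficiencies at most 1), and the follower takes the residual share. This is a toy, not the paper's model.

```python
# Leader-follower split of a shared input between two parallel subunits.

import numpy as np

shared = 10.0                               # shared input, allocation unknown
own_in = {"leader": 4.0, "follower": 6.0}   # dedicated inputs, assumed
out = {"leader": 8.0, "follower": 9.0}      # outputs, assumed

def eff(unit, lam):
    """Ratio efficiency if the leader receives fraction lam of the shared input."""
    share = lam if unit == "leader" else 1.0 - lam
    return out[unit] / (own_in[unit] + share * shared)

grid = np.linspace(0.0, 1.0, 1001)
feasible = [l for l in grid if eff("leader", l) <= 1 and eff("follower", l) <= 1]
lam_star = max(feasible, key=lambda l: eff("leader", l))   # leader moves first

print(f"shared-input fraction to leader: {lam_star:.2f}")
print(f"leader eff = {eff('leader', lam_star):.2f}, "
      f"follower eff = {eff('follower', lam_star):.2f}")
print(f"system eff = {sum(out.values()) / (sum(own_in.values()) + shared):.2f}")
```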

13.
The performance of a probabilistic risk assessment (PRA) for a nuclear power plant is a complex undertaking, involving the assembly of an accident frequency analysis, an accident progression analysis, a source term analysis, and a consequence analysis. Each of these analyses is, in itself, quite complex. Uncertainties enter into a PRA from each of these analyses. An important focus in recent PRAs has been to incorporate these uncertainties at each stage of the analysis, propagate the subsequent uncertainties through the entire analysis, and include uncertainty in the final results. Monte Carlo procedures based on Latin hypercube sampling provide one way to perform propagations of this type. In this paper, the results of two complete and independent Monte Carlo calculations for a recently completed PRA for a nuclear power plant are compared as a means of providing empirical evidence on the repeatability of uncertainty and sensitivity analyses for large-scale PRA calculations. These calculations use the same variables and analysis structure with two independently generated Latin hypercube samples. The results of the two calculations show a high degree of repeatability for the analysis of a very complex system.
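The repeatability experiment can be miniaturized: run the same toy model through two independently seeded Latin hypercube samples and compare summary statistics. The three-input model and its distributions are invented for illustration.

```python
# Two independent Latin hypercube propagations of the same toy risk model.

import numpy as np
from scipy.stats import qmc

def model(x):
    # Stand-in for the PRA chain: frequency * release factor * consequence scale
    return x[:, 0] * np.exp(x[:, 1]) * (1 + x[:, 2])

def lhs_run(seed, n=1000):
    u = qmc.LatinHypercube(d=3, seed=seed).random(n)
    x = np.column_stack([
        10 ** (-5 + 2 * u[:, 0]),   # log-uniform accident frequency
        u[:, 1],                    # uniform release factor
        2 * u[:, 2],                # uniform consequence scale
    ])
    return model(x)

y1, y2 = lhs_run(seed=1), lhs_run(seed=2)
for name, y in (("sample 1", y1), ("sample 2", y2)):
    print(f"{name}: mean = {y.mean():.3e}, 95th pct = {np.percentile(y, 95):.3e}")
```

Close agreement between the two printed summaries is the empirical notion of repeatability the paper examines at full PRA scale.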

14.
We consider production and service systems that consist of parallel lines of two types: (i) M/M/1 lines and (ii) lines that have no buffers (loss systems). Each line is assumed to be controlled by a dedicated supervisor. Management measures the effectiveness of the supervisors by the long-run expected cost of their lines. Unbalanced lines cause congestion and bottlenecks, large variation in output, unnecessary waste and, ultimately, high operating costs. The supervisors are therefore expected to join forces and reduce the cost of the whole system by applying line-balancing techniques, possibly combined with strategic outsourcing or capacity-reduction practices. By solving appropriate mathematical programming formulations, the policy that minimizes the long-run expected cost of each parallel-line system is identified. The next question is how to allocate the new total cost of each system among the lines' supervisors so that the stability of the cooperation is preserved. To that end, we associate a cooperative game with each system and investigate its core. We show that the cooperative games are reducible to market games and are therefore totally balanced; that is, their core and the cores of their subgames are nonempty. For each game, a core cost allocation based on competitive equilibrium prices is identified.
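The core condition itself is easy to check numerically. The sketch below uses only M/M/1 lines (the paper also treats loss systems), computes coalition costs by rebalancing demand, and tests a naive proportional allocation; such an allocation can fail the core test, which is precisely what motivates an allocation built from competitive equilibrium prices. All rates and the allocation rule are assumptions.

```python
# Coalition costs for M/M/1 supervisors who rebalance demand, plus a core check.

from itertools import combinations
from scipy.optimize import minimize

mu = {"A": 5.0, "B": 4.0, "C": 3.0}    # service rates, assumed
lam = {"A": 3.0, "B": 3.0, "C": 2.0}   # arrival rates, assumed

def coalition_cost(S):
    """Minimum long-run expected number in system when S rebalances its demand."""
    total, mus = sum(lam[i] for i in S), [mu[i] for i in S]
    res = minimize(
        lambda x: sum(xi / (mi - xi) for xi, mi in zip(x, mus)),
        [total / len(S)] * len(S),
        bounds=[(1e-6, mi - 1e-3) for mi in mus],
        constraints=[{"type": "eq", "fun": lambda x: sum(x) - total}],
    )
    return res.fun

players = list(mu)
v = {S: coalition_cost(S) for r in (1, 2, 3) for S in combinations(players, r)}

# Naive allocation proportional to standalone costs; the core requires
# sum_{i in S} phi_i <= v(S) for every coalition S.
grand, stand = v[tuple(players)], {i: v[(i,)] for i in players}
phi = {i: grand * stand[i] / sum(stand.values()) for i in players}
in_core = all(sum(phi[i] for i in S) <= v[S] + 1e-6 for S in v)
print({i: round(phi[i], 3) for i in players}, "in core:", in_core)
```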

15.
Probabilistic risk assessment (PRA) is a useful tool for assessing complex interconnected systems. This article leverages the capabilities of PRA tools developed for industrial and nuclear risk analysis in community resilience evaluations, modeling the food security of a community in terms of its built environment as an integrated system. To this end, we model the performance of Gilroy, CA, a moderate-size town, with regard to disruptions in its food supply caused by a severe earthquake. The food retailers of Gilroy, along with the electrical power network, water network elements, and bridges, are treated as components of a single system. Fault and event trees are constructed to model the requirements for a continuous food supply to community residents and are analyzed efficiently using binary decision diagrams (BDDs). The study also identifies shortcomings of approximate classical system analysis methods in assessing community resilience. Importance factors are used to rank the contribution of various factors to the overall risk of food insecurity. Finally, the study considers the impact of various sources of uncertainty in the hazard modeling and in infrastructure performance on food security measures. The methodology is applicable to any existing critical infrastructure system and has potential extensions to other hazards.
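The top-event logic can be prototyped by exact enumeration before reaching for BDDs, which evaluate the same Boolean function far more efficiently on large systems. The component set, dependencies, and failure probabilities here are hypothetical, not Gilroy data.

```python
# Exact fault-tree quantification by enumerating component states: a resident
# is food-insecure if every reachable retailer is down, and a retailer is down
# if it is damaged OR loses a required utility / access link.

from itertools import product

p_fail = {"power": 0.3, "water": 0.2, "bridge": 0.25,
          "store1": 0.15, "store2": 0.10}     # failure probabilities, assumed

def food_insecure(state):
    """Top event: no functioning, reachable food retailer."""
    s1_down = state["store1"] or state["power"] or state["water"]
    s2_down = state["store2"] or state["power"] or state["bridge"]
    return s1_down and s2_down

comps = list(p_fail)
p_top = 0.0
for bits in product([False, True], repeat=len(comps)):   # 2^5 states
    state = dict(zip(comps, bits))
    pr = 1.0
    for c in comps:
        pr *= p_fail[c] if state[c] else 1.0 - p_fail[c]
    if food_insecure(state):
        p_top += pr
print(f"P(food insecurity) = {p_top:.4f}")
```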

16.
Risk analysts frequently view the regulation of risks as largely a matter of decision theory. According to this view, risk analysis methods provide information on the likelihood and severity of various possible outcomes; this information should then be assessed using a decision-theoretic approach (such as cost/benefit analysis) to determine whether the risks are acceptable and whether additional regulation is warranted. However, this view ignores the fact that in many industries (particularly those that are technologically sophisticated and employ specialized risk and safety experts), risk analyses may be done by regulated firms, not by the regulator. Moreover, those firms may have more knowledge about the levels of safety at their own facilities than the regulator does. This creates a situation in which the regulated firm has both the opportunity, and often also the motive, to provide inaccurate (in particular, favorably biased) risk information to the regulator, and hence the regulator has reason to doubt the accuracy of the risk information provided by regulated parties. Researchers have argued that decision theory can deal with many such strategic interactions as well as game theory can. This is especially true in two-player, two-stage games in which the follower has a unique best strategy in response to the leader's strategy, as appears to be the case in the situation analyzed in this article. However, even in such cases, we agree with Cox that game-theoretic methods and concepts can still be useful. In particular, the tools of mechanism design, and especially the revelation principle, can simplify the analysis of such games: the revelation principle provides rigorous assurance that it is sufficient to analyze only games in which licensees truthfully report their risk levels, making the problem more manageable. Without it, it would generally be necessary to consider much more complicated forms of strategic behavior (including deception) to identify optimal regulatory strategies. Therefore, we believe that the types of regulatory interactions analyzed in this article are better modeled using game theory than decision theory. In particular, the goals of this article are to review the relevant literature in game theory and regulatory economics (to stimulate interest in this area among risk analysts) and to present illustrative results showing how the application of game theory can provide useful insights into the theory and practice of risk-informed regulation.

17.
Utility functions in the form of tables or matrices have often been used to combine discretely rated decision-making criteria. Matrix elements are usually specified individually, so no single rule or principle can easily be stated for the utility function as a whole. A series of five matrices is presented that aggregates criteria two at a time using simple rules expressing a varying degree of constraint of the lower rating over the higher. A further nine possible matrices were obtained by using a different rule on either side of the main axis of the matrix, to describe situations where the criteria have a differential influence on the outcome. Uncertainties in the criteria are represented by three alternative frequency distributions, from which the assessors select the most appropriate. The output of the utility function is a distribution of rating frequencies that depends on the distributions of the input criteria. In pest risk analysis (PRA), seven of these utility functions were required to mimic the logic by which assessors for the European and Mediterranean Plant Protection Organization arrive at an overall rating of pest risk. The framework enables the development of PRAs that are consistent and easy to understand, criticize, compare, and change. When tested in workshops, PRA practitioners found that the approach accorded with both the logic and the level of resolution they used in their risk assessments.
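Both ingredients, a rule-based aggregation matrix that is asymmetric about its diagonal and the propagation of rating-frequency distributions through it, fit in a short sketch. The two rules and the example distributions are invented; the article's matrices encode the EPPO assessors' logic, not these.

```python
# Rule-based aggregation of two 1-5 ratings, with a different rule on either
# side of the matrix's main axis, plus propagation of rating distributions.

def combine(a, b):
    lo, hi = min(a, b), max(a, b)
    if a >= b:
        return (lo + hi + 1) // 2   # weak constraint: midpoint, rounded up
    return lo                        # strong constraint: minimum rule

print("    " + "  ".join(str(b) for b in range(1, 6)))
for a in range(1, 6):
    print(a, ":", "  ".join(str(combine(a, b)) for b in range(1, 6)))

# Propagate the assessors' rating-frequency distributions through the matrix:
dist_a = {2: 0.2, 3: 0.6, 4: 0.2}   # uncertainty on criterion 1, assumed
dist_b = {3: 0.5, 4: 0.5}           # uncertainty on criterion 2, assumed
out = {}
for a, pa in dist_a.items():
    for b, pb in dist_b.items():
        out[combine(a, b)] = out.get(combine(a, b), 0.0) + pa * pb
print("output rating distribution:", out)
```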

18.
The North American Free Trade Agreement (NAFTA) and the General Agreement on Tariffs and Trade (GATT) have focused attention on risk assessment of potential insect, weed, and animal pests and diseases of livestock. These risks have traditionally been addressed through quarantine protocols ranging from limits on the geographical areas from which a product may originate, postharvest disinfestation procedures such as fumigation, and inspections at points of export and import, to outright bans. To ensure that plant and animal protection measures are not used as nontariff trade barriers, GATT and NAFTA require pest risk analysis (PRA) to support quarantine decisions. The increased emphasis on PRA has spurred multiple efforts at the national and international levels to design frameworks for the conduct of these analyses. As approaches to pest risk analysis proliferate, and the importance of the analyses grows, concerns have arisen about the scientific and technical conduct of pest risk analysis. In January 1997, the Harvard Center for Risk Analysis (HCRA) held an invitation-only workshop in Washington, D.C., bringing together experts in risk analysis and pest characterization to develop general principles for pest risk analysis. Workshop participants examined current frameworks for PRA, discussed the strengths and weaknesses of these approaches, and formulated principles based on years of experience with risk analysis in other settings and on knowledge of the issues specific to the analysis of pests. The principles developed highlight both the similarities of pest risk analysis to other forms of risk analysis and its unique attributes.

19.
The data envelopment analysis (DEA) technique uses the most favorable weights for each decision-making unit (DMU) to calculate efficiency. The resulting efficiency scores are therefore incomparable and provide little discrimination. This problem is more prominent for network systems, which involve ranking the component divisions in addition to the system. This paper applies the idea of cross evaluation, which has been demonstrated to be an effective approach for ranking DMUs treated as whole units, to measure the efficiency of the two basic network structures, series and parallel. The proposed model decomposes the cross-efficiency measure of the system into the product of those of the divisions for the series structure and into a weighted average for the parallel structure. The results from two real-world cases, one with the basic series structure and another with the parallel one, show that the cross-efficiency measures proposed in this paper not only increase the discriminating power in ranking systems and divisions, but also identify the relationship between system and division efficiencies. This relationship reveals which division has the stronger effect on the performance of the system.
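For reference, plain (non-network) cross-efficiency can be computed with one small linear program per DMU; the paper's contribution is to decompose such scores across series and parallel divisions. The single-input, single-output data below are invented.

```python
# CCR cross-efficiency: solve each DMU's multiplier LP, then score every DMU
# with every other DMU's optimal weights and average.

import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0], [3.0], [4.0]])   # one input per DMU, invented data
Y = np.array([[4.0], [5.0], [7.0]])   # one output per DMU, invented data
n = len(X)

weights = []
for d in range(n):
    # Multipliers for DMU d: max u*Y_d  s.t.  v*X_d = 1,  u*Y_j <= v*X_j
    c = np.concatenate([-Y[d], np.zeros(X.shape[1])])          # vars: [u, v]
    res = linprog(c,
                  A_ub=np.hstack([Y, -X]), b_ub=np.zeros(n),
                  A_eq=np.concatenate([np.zeros(Y.shape[1]), X[d]])[None, :],
                  b_eq=[1.0],
                  bounds=[(0, None)] * (Y.shape[1] + X.shape[1]))
    weights.append(res.x)

cross = np.zeros((n, n))
for d, w in enumerate(weights):        # score every DMU with DMU d's weights
    u, v = w[:Y.shape[1]], w[Y.shape[1]:]
    cross[d] = (Y @ u) / (X @ v)
print("cross-efficiency scores:", cross.mean(axis=0).round(3))
```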

20.
Low-probability, high-impact events are difficult to manage. Firms may underinvest in risk assessments for such events because it is not easy to link the direct and indirect benefits of doing so. Scholarly research on the effectiveness of programs aimed at reducing such events faces the same challenge. In this article, we draw on comprehensive industry-wide data from the U.S. nuclear power industry to explore the impact of conducting probabilistic risk assessment (PRA) on preventing safety-related disruptions. We examine this using data from over 25,000 monthly event reports across 101 U.S. nuclear reactors from 1985 to 1998. Using Poisson fixed effects models with time trends, we find that the number of safety-related disruptions fell between 8% and 27% per month in the periods after operators submitted their PRA in response to the Nuclear Regulatory Commission's Generic Letter 88-20, which required all operators to conduct a PRA. One possible mechanism is that the adoption of PRA increased learning rates, lowering the rate of recurring events by 42%. We also find that operators that completed their PRA before Generic Letter 88-20 continued to experience safety improvements during 1990-1995, which suggests that revisiting a PRA, or conducting it again, can be beneficial. Our results suggest that even in an industry as safety-conscious as nuclear utilities, a more formal approach to quantifying risk has its benefits.
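The estimation idea can be reproduced on simulated data. The data-generating process below (reactor fixed effects, a mild time trend, a 15% post-PRA reduction) is invented; only the model form, a Poisson regression of event counts on a post-PRA indicator with reactor dummies and a time trend, mirrors the description.

```python
# Poisson fixed-effects sketch on simulated monthly event counts.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for reactor in range(20):
    alpha = rng.normal(0.5, 0.3)            # reactor fixed effect
    for month in range(120):
        post = int(month >= 60)             # PRA submitted at month 60
        lam = np.exp(alpha - 0.002 * month - 0.15 * post)   # ~14% true drop
        rows.append({"reactor": reactor, "month": month,
                     "post_pra": post, "events": rng.poisson(lam)})
df = pd.DataFrame(rows)

# Regression with reactor dummies and a linear time trend; the coefficient on
# post_pra recovers the proportional reduction in the disruption rate.
m = smf.poisson("events ~ post_pra + month + C(reactor)", data=df).fit(disp=0)
print(f"estimated post-PRA reduction: {1 - np.exp(m.params['post_pra']):.1%}")
```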
