Similar literature: 20 related records retrieved.
1.
Risk Analysis, 2018, 38(8): 1576–1584
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for evaluating uncertainty in the output of fault trees and similar multilinear models.
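As a companion to the abstract above, here is a minimal sketch of the two sampling-based comparators it mentions, Monte Carlo sampling and the Wilks order-statistic bound, on a made-up two-gate fault tree with lognormal basic-event uncertainties. The tree structure, medians, and error factors are illustrative assumptions, and the paper's closed-form approximation itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_event_prob(p):
    # Illustrative fault tree: TOP = A OR (B AND C), exact Boolean combination.
    pa, pb, pc = p
    return 1.0 - (1.0 - pa) * (1.0 - pb * pc)

# Lognormal uncertainty on each basic-event probability. Medians and error factors
# are made up; an error factor EF is the 95th/50th percentile ratio, so EF = exp(1.645*sigma).
medians = np.array([1e-3, 5e-3, 2e-2])
error_factors = np.array([3.0, 5.0, 3.0])
sigmas = np.log(error_factors) / 1.645

def sample_top(n):
    p = medians * np.exp(sigmas * rng.standard_normal((n, 3)))
    return np.array([top_event_prob(row) for row in p])

# Full Monte Carlo estimate of the 95th percentile of the top-event probability.
mc = sample_top(100_000)
print("MC 95th percentile:", np.quantile(mc, 0.95))

# Wilks method: with 59 runs, the sample maximum is a 95%-confidence upper bound
# on the 95th percentile (0.95**59 < 0.05), at a tiny fraction of the MC cost.
wilks = sample_top(59)
print("Wilks 95/95 upper bound:", wilks.max())
```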

2.
In risk analysis, the treatment of the epistemic uncertainty associated with the probability of occurrence of an event is fundamental. Traditionally, probabilistic distributions have been used to characterize the epistemic uncertainty due to imprecise knowledge of the parameters in risk models. On the other hand, it has been argued that in certain instances such uncertainty may be best accounted for by fuzzy or possibilistic distributions. This seems to be the case, in particular, for parameters for which the available information is scarce and qualitative in nature. In practice, it is to be expected that a risk model contains some parameters affected by uncertainties that may be best represented by probability distributions and other parameters that may be more properly described in terms of fuzzy or possibilistic distributions. In this article, a hybrid method that jointly propagates probabilistic and possibilistic uncertainties is considered and compared with pure probabilistic and pure fuzzy methods for uncertainty propagation. The analyses are carried out on a case study concerning the uncertainties in the probabilities of occurrence of accident sequences in an event tree analysis of a nuclear power plant.
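A minimal sketch of the hybrid propagation idea described above, assuming a toy model (an initiating-event frequency treated probabilistically, multiplied by a conditional failure probability treated possibilistically); the distributions and threshold are illustrative, not those of the nuclear power plant case study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: top-event frequency = initiating-event frequency (probabilistic input)
#            * conditional failure probability (possibilistic input).
# All numbers are illustrative assumptions, not values from the case study.
n_mc = 10_000
freq = rng.lognormal(mean=np.log(1e-2), sigma=0.5, size=n_mc)

# Triangular possibility distribution: core 1e-3, support [2e-4, 5e-3].
lo, core, hi = 2e-4, 1e-3, 5e-3
alphas = np.linspace(0.0, 1.0, 21)
cut_lo = lo + alphas * (core - lo)   # lower endpoint of each alpha-cut
cut_hi = hi - alphas * (hi - core)   # upper endpoint of each alpha-cut

# Hybrid propagation: each Monte Carlo draw of the probabilistic input is combined
# with every alpha-cut of the possibilistic input; the model (a product of positive
# quantities) is monotone, so cut endpoints map directly to output-interval endpoints.
out_lo = freq[:, None] * cut_lo[None, :]
out_hi = freq[:, None] * cut_hi[None, :]

# Belief and plausibility that the output exceeds a threshold z, obtained by averaging
# the necessity and possibility of the event over the alpha grid and the MC draws.
z = 2e-5
print(f"Bel(output > {z}) = {np.mean(out_lo > z):.3f}")
print(f"Pl (output > {z}) = {np.mean(out_hi > z):.3f}")
```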

3.
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty in the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational effort they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context.

4.
Event-tree analysis with imprecise probabilities
You X, Tonon F. Risk Analysis, 2012, 32(2): 330–344
Novel methods are proposed for dealing with event-tree analysis under imprecise probabilities, where one can measure chance or uncertainty without sharp numerical probabilities and express the available evidence as upper and lower previsions (or expectations) of gambles (or bounded real functions). Sets of upper and lower previsions generate a convex set of probability distributions (or measures). Any probability distribution in this convex set should be considered in the event-tree analysis. This article focuses on the calculation of upper and lower bounds of the prevision (or the probability) of some outcome at the bottom of the event tree. Three cases of given information/judgments on probabilities of outcomes are considered: (1) probabilities conditional on the occurrence of the event at the upper level; (2) total probabilities of occurrence, that is, not conditional on other events; (3) the combination of the previous two cases. The corresponding algorithms with imprecise probabilities under the three cases are explained and illustrated by simple examples.
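A minimal sketch of case (1) above, interval-valued probabilities conditional on the occurrence of the parent event, under the simplifying assumption that each branch interval can be attained independently of the others; the numbers are illustrative. In the general setting described in the article, bounds on a prevision are obtained by optimizing over the convex set of distributions, but for a single outcome with independently attainable conditional intervals the bounds reduce to products of the interval endpoints.

```python
from math import prod

# Each entry is the [lower, upper] bound on the probability of taking a branch,
# conditional on reaching it (illustrative values, not from the article).
path = [(0.9, 0.95), (0.05, 0.10), (0.3, 0.5)]

# With nonnegative factors that can vary independently within their intervals, the
# probability of the outcome at the end of the path is bounded by the endpoint products.
lower = prod(lo for lo, hi in path)
upper = prod(hi for lo, hi in path)
print(f"Outcome probability lies in [{lower:.4f}, {upper:.4f}]")
```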

5.
A wide range of uncertainties is inevitably introduced during the process of performing a safety assessment of engineering systems. The impact of all these uncertainties must be addressed if the analysis is to serve as a tool in the decision-making process. Uncertainties present in the components of the model (input parameters or basic events) are propagated to quantify their impact on the final results. Several methods are available in the literature, namely, the method of moments, discrete probability analysis, Monte Carlo simulation, fuzzy arithmetic, and Dempster-Shafer theory. These methods differ both in how uncertainty is characterized at the component level and in how it is propagated to the system level. Each has desirable and undesirable features, making it more or less useful in different situations. In the probabilistic framework, which is most widely used, probability distributions are used to characterize uncertainty. However, in situations in which one cannot specify (1) parameter values for input distributions, (2) precise probability distributions (shape), or (3) dependencies between input parameters, these methods have limitations and are found to be ineffective. In order to address some of these limitations, this article presents uncertainty analysis in the context of level-1 probabilistic safety assessment (PSA) based on a probability bounds (PB) approach. PB analysis combines probability theory and interval arithmetic to produce probability boxes (p-boxes), structures that allow uncertainty to be propagated through calculations comprehensively and rigorously. A practical case study is also carried out with a code developed on the basis of the PB approach, and the results are compared with those of a two-phase Monte Carlo simulation.
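A minimal sketch of the probability-bounds idea, using a deliberately crude discretization: each p-box is represented by equal-probability slices whose values are intervals, and two independent inputs are combined by interval arithmetic over all slice pairs. The inputs and the additive model are illustrative, not the level-1 PSA case study.

```python
import numpy as np

N = 50  # equal-probability slices per p-box
q = (np.arange(N) + 0.5) / N  # slice "midpoint" probabilities (a crude discretization;
                              # a rigorous implementation would use outward-directed bounds)

# Input A: exponential with rate known only to lie in [0.8, 1.2] (illustrative).
# A larger rate gives a stochastically smaller variable, so it supplies the lower quantiles.
expo_q = lambda p, lam: -np.log(1.0 - p) / lam
A_lo, A_hi = expo_q(q, 1.2), expo_q(q, 0.8)

# Input B: uniform of width 1 with left endpoint known only to lie in [1.0, 1.5] (illustrative).
B_lo, B_hi = 1.0 + q, 1.5 + q

# Combine Z = A + B under independence: Cartesian product of slices, interval sums,
# each pair carrying probability 1/N**2.
Z_lo = (A_lo[:, None] + B_lo[None, :]).ravel()
Z_hi = (A_hi[:, None] + B_hi[None, :]).ravel()

# Bounds on P(Z <= z): mass of slices certainly below z vs. possibly below z.
z = 2.5
print(f"P(Z <= {z}) lies in [{np.mean(Z_hi <= z):.3f}, {np.mean(Z_lo <= z):.3f}]")
```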

6.
In counterterrorism risk management decisions, the analyst can choose to represent terrorist decisions as defender uncertainties or as attacker decisions. We perform a comparative analysis of probabilistic risk analysis (PRA) methods, including event trees, influence diagrams, Bayesian networks, decision trees, game theory, and combined methods, on the same illustrative examples (container screening for radiological materials) to gain insight into the significant differences in assumptions and results. A key tenet of PRA and decision analysis is the use of subjective probability to assess the likelihood of possible outcomes. For each technique, we compare the assumptions, probability assessment requirements, risk levels, and potential insights for risk managers. We find that assessing the distribution of potential attacker decisions is a complex judgment task, particularly considering the adaptation of the attacker to defender decisions. Intelligent adversary risk analysis and adversarial risk analysis are extensions of decision analysis and sequential game theory that help to decompose such judgments. These techniques explicitly show the adaptation of the attacker and the resulting shift in risk based on defender decisions.

7.
Yacov Y. Haimes. Risk Analysis, 2009, 29(12): 1647–1654
The premise of this article is that risk to a system, as well as its vulnerability and resilience, can be understood, defined, and quantified most effectively through a systems-based philosophical and methodological approach, and by recognizing the central role of the system states in this process. A universally agreed-upon definition of risk has been difficult to develop; one reason is that the concept is multidimensional and nuanced. It requires an understanding that risk to a system is inherently and fundamentally a function of the initiating event, the states of the system and of its environment, and the time frame. In defining risk, this article posits that: (a) the performance capabilities of a system are a function of its state vector; (b) a system's vulnerability and resilience vectors are each a function of the input (e.g., initiating event), its time of occurrence, and the states of the system; (c) the consequences are a function of the specificity and time of the event, the vector of the states, the vulnerability, and the resilience of the system; (d) the states of a system are time-dependent and commonly fraught with variability uncertainties and knowledge uncertainties; and (e) risk is a measure of the probability and severity of consequences. The above implies that modeling must evaluate consequences for each risk scenario as functions of the threat (initiating event), the vulnerability and resilience of the system, and the time of the event. This fundamentally complex modeling and analysis process cannot be performed correctly and effectively without relying on the states of the system being studied.

8.
This article is based on a quantitative risk assessment (QRA) that was performed on a radioactive waste disposal area within the Western New York Nuclear Service Center in western New York State. The QRA results were instrumental in the decision by the New York State Energy Research and Development Authority to support a strategy of in-place management of the disposal area for another decade. The QRA methodology adopted for this first-of-a-kind application was a scenario-based approach in the framework of the triplet definition of risk (scenarios, likelihoods, consequences). The measure of risk is the frequency of occurrence of different levels of radiation dose to humans at prescribed locations. The risk from each scenario is determined by (1) the frequency of disruptive events or natural processes that cause a release of radioactive materials from the disposal area; (2) the physical form, quantity, and radionuclide content of the material that is released during each scenario; (3) the distribution, dilution, and deposition of the released materials throughout the environment surrounding the disposal area; and (4) public exposure to the distributed material and the accumulated radiation dose from that exposure. The risks of the individual scenarios are assembled into a representation of the risk from the disposal area. In addition to quantifying the total risk to the public, the analysis ranks the importance of each contributing scenario, which facilitates taking corrective actions and implementing effective risk management. Perhaps most importantly, quantification of the uncertainties is an intrinsic part of the risk results. This approach to safety analysis has demonstrated many advantages of applying QRA principles to assessing the risk of facilities involving hazardous materials.
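A minimal sketch of how scenario-level triplets (scenario, frequency, dose) can be assembled into a frequency-of-exceedance representation and a scenario importance ranking; the scenario names, frequencies, and doses are made up for illustration and are not the study's results.

```python
import numpy as np

# Illustrative scenario set: (name, annual frequency of the release scenario, dose in mSv).
scenarios = [
    ("erosion breach",     1e-3,  0.5),
    ("severe flood",       2e-4,  5.0),
    ("intrusion event",    5e-5, 20.0),
    ("seismic disruption", 1e-5, 80.0),
]

freqs = np.array([f for _, f, _ in scenarios])
doses = np.array([d for _, _, d in scenarios])

# Complementary cumulative frequency: annual frequency of receiving a dose >= d.
for d in [0.1, 1.0, 10.0, 50.0]:
    print(f"Frequency of dose >= {d:5.1f} mSv: {freqs[doses >= d].sum():.2e} per year")

# Scenario importance ranking by contribution to exceedance of a level of concern.
level = 1.0
contrib = {name: f for name, f, d in scenarios if d >= level}
for name, f in sorted(contrib.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} contributes {f:.1e}/yr to doses >= {level} mSv")
```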

9.
Ted W. Yellman. Risk Analysis, 2016, 36(6): 1072–1078
Some of the terms used in risk assessment and management are poorly and even contradictorily defined. One such term is "event," which arguably describes the most basic of all risk-related concepts. The author cites two contemporary textbook interpretations of "event" that he contends are incorrect and misleading. He then examines the concept of an event in A. N. Kolmogorov's probability axioms and in several more recent textbooks. Those concepts are found to be too narrow for risk assessments and inconsistent with the actual usage of "event" by risk analysts. The author goes on to define and advocate linguistic definitions of events (as opposed to mathematical definitions), that is, definitions constructed from natural language. He argues that they should be recognized for what they are: the de facto primary method of defining events.

10.
Risk Analysis, 2018, 38(8): 1534–1540
An extreme space weather event has the potential to disrupt or damage the infrastructure systems and technologies that many societies rely on for economic and social well-being. Space weather events occur regularly, but extreme events are less frequent, with a small number of historical examples over the last 160 years. During the past decade, published works have (1) examined the physical characteristics of the extreme historical events and (2) discussed the probability or return rate of select extreme geomagnetic disturbances, including the 1859 Carrington event. Here we present initial findings on a unified framework approach to visualize space weather event probability, using a Bayesian model average, in the context of historical extreme events. We present disturbance storm time (Dst) probability (a proxy for geomagnetic disturbance intensity) across multiple return periods and discuss parameters of interest to policymakers and planners in the context of past extreme space weather events. We discuss the current state of these analyses, their utility to policymakers and planners, the current limitations when compared to other hazards, and several gaps that need to be filled to enhance space weather risk assessments.
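A minimal sketch of the Bayesian-model-average step for exceedance probabilities; the component models, probabilities, and weights below are hypothetical placeholders, not the models or data behind the article.

```python
import numpy as np

# Hypothetical per-model annual probabilities that Dst falls below a set of thresholds
# (more negative Dst = more intense geomagnetic disturbance); the weights stand in for
# posterior model probabilities.
thresholds = np.array([-100, -300, -600, -850])   # nT
p_model_1  = np.array([5e-1, 5e-2, 5e-3, 8e-4])
p_model_2  = np.array([6e-1, 8e-2, 2e-3, 1e-4])
weights    = np.array([0.4, 0.6])

# Bayesian model average: weight each model's exceedance probability by its posterior weight.
p_bma = weights[0] * p_model_1 + weights[1] * p_model_2

for t, p in zip(thresholds, p_bma):
    print(f"P(Dst <= {t} nT in a year) ~ {p:.2e}  (return period ~ {1/p:,.0f} years)")
```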

11.
Matthew Revie. Risk Analysis, 2011, 31(7): 1120–1132
Traditional statistical procedures for estimating the probability of an event result in an estimate of zero when no events are realized. Alternative inferential procedures have been proposed for the situation where zero events have been realized, but these are often ad hoc, relying on methods selected on the basis of the data that have been realized. Such data-dependent inference decisions violate fundamental statistical principles, resulting in estimation procedures whose benefits are difficult to assess. In this article, we propose estimating the probability of an event occurring through minimax inference on the probability that future samples of equal size realize no more events than that in the data on which the inference is based. Although motivated by inference on rare events, the method is not restricted to zero-event data and closely approximates the maximum likelihood estimate (MLE) for nonzero data. The minimax procedure provides a risk-averse inference when no events have been realized. A comparison is made with the MLE, and regions of the underlying probability are identified where this approach is superior. Moreover, a comparison is made with three standard approaches to supporting inference where no event data are realized, which we argue are unduly pessimistic. We show that for situations of zero events the estimator can be closely approximated by a simple expression in n, the number of trials.
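A minimal numerical illustration of the quantity on which the minimax inference above is built (not the paper's estimator itself): for zero events in n trials, the probability that a future sample of the same size realizes no more events than observed is (1 - p)^n.

```python
import numpy as np

n = 50         # number of trials observed, with zero events realized
p_mle = 0.0    # maximum likelihood estimate: degenerate at zero

# Probability that a future sample of the same size realizes no more events than observed
# (here: zero events), as a function of the unknown event probability p.
candidate_p = np.array([0.001, 0.005, 0.01, 0.02, 0.05])
prob_no_more = (1.0 - candidate_p) ** n

for p, q in zip(candidate_p, prob_no_more):
    print(f"p = {p:.3f}:  P(zero events in a future sample of {n}) = {q:.3f}")
# The MLE (p = 0) asserts this probability is exactly 1; the minimax procedure in the
# article instead trades off such over-confidence against overly pessimistic estimates.
```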

12.
A Distributional Approach to Characterizing Low-Dose Cancer Risk
Since cancer risk at very low doses cannot be directly measured in humans or animals, mathematical extrapolation models and scientific judgment are required. This article demonstrates a probabilistic approach to carcinogen risk assessment that employs probability trees, subjective probabilities, and standard bootstrapping procedures. The probabilistic approach is applied to the carcinogenic risk of formaldehyde in environmental and occupational settings. Sensitivity analyses illustrate conditional estimates of risk for each path in the probability tree. Fundamental mechanistic uncertainties are characterized. A strength of the analysis is the explicit treatment of alternative beliefs about pharmacokinetics and pharmacodynamics. The resulting probability distributions on cancer risk are compared with the point estimates reported by federal agencies. Limitations of the approach are discussed, as well as future research directions.

13.
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT, and "state-of-knowledge" (epistemic) dependence between the estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects of objective and epistemic dependences on the TE probability. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on an FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present).
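A minimal sketch of the Fréchet bounds invoked above for unknown objective dependence, applied to single AND and OR gates with illustrative basic-event probabilities; the six-basic-event fault tree of the article is not reproduced, and the DEnv treatment of epistemic dependence is omitted.

```python
# Fréchet bounds on the probability of a gate when the dependence between
# basic events A and B is completely unknown (illustrative probabilities).
p_a, p_b = 0.02, 0.05

and_lower = max(0.0, p_a + p_b - 1.0)   # P(A and B) lower bound
and_upper = min(p_a, p_b)               # P(A and B) upper bound

or_lower = max(p_a, p_b)                # P(A or B) lower bound
or_upper = min(1.0, p_a + p_b)          # P(A or B) upper bound

print(f"AND gate: [{and_lower:.4f}, {and_upper:.4f}]  (independence gives {p_a * p_b:.4f})")
print(f"OR  gate: [{or_lower:.4f}, {or_upper:.4f}]  (independence gives {p_a + p_b - p_a * p_b:.4f})")
```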

14.
We show by example that empirical likelihood and other commonly used tests for moment restrictions are unable to control the (exponential) rate at which the probability of a Type I error tends to zero unless the possible distributions for the observed data are restricted appropriately. From this, it follows that for the optimality claim for empirical likelihood in Kitamura (2001) to hold, additional assumptions and qualifications are required. Under stronger assumptions than those in Kitamura (2001), we establish the following optimality result: (i) empirical likelihood controls the rate at which the probability of a Type I error tends to zero and (ii) among all procedures for which the probability of a Type I error tends to zero at least as fast, empirical likelihood maximizes the rate at which the probability of a Type II error tends to zero for most alternatives. This result further implies that empirical likelihood maximizes the rate at which the probability of a Type II error tends to zero for all alternatives among a class of tests that satisfy a weaker criterion for their Type I error probabilities.

15.
Following the 2013 Chelyabinsk event, the risks posed by asteroids attracted renewed interest from both the scientific and policy-making communities. It reminded the world that impacts from near-Earth objects (NEOs), while rare, have the potential to cause great damage to cities and populations. Point estimates of the risk (such as mean numbers of casualties) have been proposed, but because of the low-probability, high-consequence nature of asteroid impacts, these averages provide limited actionable information. While more work is needed to further refine its input distributions (e.g., NEO diameters), the probabilistic model presented in this article allows a more complete evaluation of the risk of NEO impacts because the results are distributions that cover the range of potential casualties. This model is based on a modularized simulation that uses probabilistic inputs to estimate probabilistic risk metrics, including those of rare asteroid impacts. Illustrative results of this analysis are presented for a period of 100 years. As part of this demonstration, we assess the effectiveness of civil defense measures in mitigating the risk of human casualties. We find that they are likely to be beneficial but not a panacea. We also compute the probability, but not the consequences, of an impact with global effects ("cataclysm"). We conclude that there is a continued need for NEO observation, and for analyses of the feasibility and risk-reduction effectiveness of space missions designed to deflect or destroy asteroids that threaten the Earth.
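A minimal sketch of a modularized Monte Carlo simulation of NEO impact casualties over a 100-year horizon; every rate, distribution, and severity relation below is a placeholder assumption for illustration, not an estimate from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy modularized simulation over a 100-year horizon. All numbers are assumptions.
years, n_runs = 100, 50_000
impact_rate = 1.0 / 500.0           # assumed annual rate of a damaging NEO impact

casualties = np.zeros(n_runs)
for i in range(n_runs):
    n_impacts = rng.poisson(impact_rate * years)
    for _ in range(n_impacts):
        diameter = rng.lognormal(mean=np.log(40.0), sigma=0.6)    # meters (assumed)
        hits_populated_area = rng.random() < 0.03                 # assumed probability
        if hits_populated_area:
            casualties[i] += 1e3 * (diameter / 40.0) ** 3         # assumed severity model

# Distributional risk metrics rather than a single mean.
print("P(any casualties in 100 yr):", np.mean(casualties > 0))
print("Mean casualties:", casualties.mean())
print("99.9th percentile:", np.quantile(casualties, 0.999))
```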

16.
Cox LA. Risk Analysis, 2012, 32(5): 816–829
Recent proposals to further reduce permitted levels of air pollution emissions are supported by high projected values of the resulting public health benefits. For example, the Environmental Protection Agency recently estimated that the 1990 Clean Air Act Amendment (CAAA) will produce human health benefits in 2020, from reduced mortality rates, valued at nearly $2 trillion per year, compared to compliance costs of $65 billion ($0.065 trillion). However, while compliance costs can be measured, health benefits are unproved: they depend on a series of uncertain assumptions. Among these are that additional life expectancy gained by a beneficiary (with a median age of about 80 years) should be valued at about $80,000 per month; that there is a 100% probability that a positive, linear, no-threshold, causal relation exists between PM2.5 concentration and mortality risk; and that progress in medicine and disease prevention will not greatly diminish this relationship. We present an alternative uncertainty analysis that assigns a positive probability of error to each assumption. This discrete uncertainty analysis suggests (with probability >90% under plausible alternative assumptions) that the costs of the CAAA exceed its benefits. Thus, instead of suggesting to policymakers that the CAAA's benefits are almost certainly far larger than its costs, we believe that accuracy requires acknowledging that the costs purchase a relatively uncertain, possibly much smaller, benefit. The difference between these contrasting conclusions is driven by different approaches to uncertainty analysis, that is, excluding or including discrete uncertainties about the main assumptions required for nonzero health benefits to exist at all.
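A minimal sketch of a discrete uncertainty analysis in the spirit described above: the projected benefit is realized only if every key assumption holds. The assumption probabilities and the residual-benefit value below are hypothetical, so the printed probability is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy discrete uncertainty analysis. The probabilities and values below are placeholders,
# not the article's elicited numbers.
n = 200_000
cost = 0.065  # $ trillion per year

causal_relation_holds = rng.random(n) < 0.7   # linear no-threshold causation is real
valuation_appropriate = rng.random(n) < 0.6   # value placed on added life expectancy is apt
relation_persists     = rng.random(n) < 0.8   # medical progress does not erode the relation

benefit_if_all_hold = 2.0   # $ trillion per year (the point estimate quoted above)
residual_benefit    = 0.02  # assumed small benefit if any key assumption fails

all_hold = causal_relation_holds & valuation_appropriate & relation_persists
benefit = np.where(all_hold, benefit_if_all_hold, residual_benefit)

# The result depends entirely on the assumed assumption probabilities.
print("P(costs exceed benefits):", np.mean(cost > benefit))
```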

17.
Yoke Heng Wong. Risk Analysis, 2011, 31(12): 1872–1882
Road tunnels are vital infrastructures providing underground vehicular passageways for commuters and motorists. Various quantitative risk assessment (QRA) models have recently been developed and employed to evaluate the safety levels of road tunnels in terms of societal risk (as measured by the F/N curve). For a particular road tunnel, the traffic volume and the proportion of heavy goods vehicles (HGVs) are two adjustable parameters that may significantly affect the societal risk, and are thus very useful in implementing risk reduction solutions. To evaluate the impact of these two contributing factors on the risk, this article first presents an approach that employs a QRA model to generate the societal risk for a series of possible combinations of the two factors. Some combinations may result in F/N curves that do not fulfill a predetermined safety target. This article thus proposes an "excess risk index" to quantify the degree to which road tunnel risks fail to meet the safety target. The two-factor impact analysis can be illustrated by a contour chart based on the excess risk. Finally, the methodology has been applied to Singapore's KPE road tunnel, and the results show that, in terms of meeting the test safety target for societal risk, the traffic capacity of the tunnel should be no more than 1,200 vehs/h/lane, with a maximum proportion of 18% HGVs.

18.
In the partitioned multiobjective risk method (PMRM), the probability axis is typically partitioned into three regimes: high-exceedance/low-consequence, intermediate-exceedance/intermediate-consequence, and low-exceedance/high-consequence (LE/HC). For each regime, the PMRM generates a conditional expected risk function given that the damage lies within the regime. The theme of this paper is the conditional expected risk function for the LE/HC regime. This function, denoted by f4(.), captures the behavior of the "extreme events" of an underlying decision-making problem. The PMRM offers two advantages: (a) it isolates LE/HC events, allowing the decision-maker(s) to focus on the impacts of catastrophes; and (b) it generates more valuable information than that obtained from the common unconditional expected-risk function. Theoretical problems may arise from uncertainty about the behavior of the tail of the risk curve describing the underlying frequency of damages. When the number of physical observations is small (e.g., in flood frequency analysis), the analyst is forced to make assumptions about the density of damages. Each succeeding distributional assumption will generate a different value of f4(.). An added dimension of difficulty is created by the sensitivity of f4(.) to the choice of the boundary of the LE/HC regime. This paper has two overall objectives: (a) to present distribution-free results concerning the magnitude of f4(.); and (b) to use those results to obtain a distribution-free estimate of the sensitivity of f4(.) to the choice of the boundary of the LE/HC regime. The above objectives are realized by extending, and further developing, existing inequalities for continuously distributed random variables.
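A minimal sketch of the conditional expected risk for the LE/HC regime, computed as the expected damage given that damage exceeds the partitioning level; the damage distribution and the partitioning probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Conditional expected risk for the low-exceedance/high-consequence regime:
# f4 = E[damage | damage exceeds the partitioning level].
damages = rng.lognormal(mean=np.log(1.0), sigma=1.0, size=200_000)  # assumed damage model

alpha = 0.99                              # partition at exceedance probability 1 - alpha
threshold = np.quantile(damages, alpha)   # damage level defining the LE/HC regime
f4 = damages[damages > threshold].mean()  # conditional expected risk in the LE/HC regime
unconditional = damages.mean()            # common unconditional expected risk, for comparison

print(f"Partition level: {threshold:.2f}")
print(f"Conditional expected risk f4: {f4:.2f}  (unconditional expectation: {unconditional:.2f})")
# As the abstract notes, f4 is sensitive both to the assumed tail shape (sigma) and to
# the choice of the partitioning probability (alpha); varying them shows the effect.
```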

19.
Efficient Algorithms for Similarity Search
The problem of interest takes as input a database of m sequences over an alphabet and an integer k. The goal is to report all pairs of sequences that have a matching subsequence of length at least k. We employ two algorithms to solve this problem. The first algorithm is based on sorting and the second is based on generalized suffix trees. We provide experimental data comparing the performance of these algorithms. The generalized suffix tree-based algorithm performs better than the sorting-based algorithm.
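A minimal sketch of the sorting-based algorithm, under the assumption that a "matching subsequence of length at least k" is a shared contiguous run, so that it suffices to report pairs sharing some length-k substring (k-mer); the sequences are illustrative. The suffix-tree variant obtains the same pairs by grouping leaves under branches of string depth at least k in a generalized suffix tree.

```python
from itertools import combinations

def similar_pairs(sequences, k):
    """Report all pairs of sequence indices that share a substring of length k
    (and hence a matching run of length at least k), via sort-and-group on k-mers."""
    kmers = []
    for idx, seq in enumerate(sequences):
        for i in range(len(seq) - k + 1):
            kmers.append((seq[i:i + k], idx))
    kmers.sort()  # identical k-mers become adjacent after sorting

    pairs = set()
    start = 0
    while start < len(kmers):
        end = start
        while end < len(kmers) and kmers[end][0] == kmers[start][0]:
            end += 1
        owners = {idx for _, idx in kmers[start:end]}   # sequences containing this k-mer
        pairs.update(combinations(sorted(owners), 2))
        start = end
    return sorted(pairs)

db = ["ACGTGCA", "TTGCAGG", "CCCCAAA", "GTGCATT"]
print(similar_pairs(db, k=4))  # pairs sharing a 4-mer such as "TGCA" or "GTGC"
```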

20.
Risk Analysis, 2018, 38(4): 804–825
Economic consequence analysis is one of many inputs to terrorism contingency planning. Computable general equilibrium (CGE) models are being used more frequently in these analyses, in part because of their capacity to accommodate high levels of event-specific detail. In modeling the potential economic effects of a hypothetical terrorist event, two broad sets of shocks are required: (1) physical impacts on observable variables (e.g., asset damage); (2) behavioral impacts on unobservable variables (e.g., investor uncertainty). Assembling shocks describing the physical impacts of a terrorist incident is relatively straightforward, since estimates are either readily available or plausibly inferred. However, assembling shocks describing behavioral impacts is more difficult. Values for behavioral variables (e.g., required rates of return) are typically inferred or estimated by indirect means. Generally, this has been achieved via reference to extraneous literature or ex ante surveys. This article explores a new method. We elucidate the magnitude of CGE-relevant structural shifts implicit in econometric evidence on terrorist incidents, with a view to informing future ex ante event assessments. Ex post econometric studies of terrorism by Blomberg et al. yield macroeconometric equations that describe the response of observable economic variables (e.g., GDP growth) to terrorist incidents. We use these equations to determine estimates for the relevant (unobservable) structural and policy variables impacted by terrorist incidents, using a CGE model of the United States. This allows us to: (i) compare values for these shifts with input assumptions in earlier ex ante CGE studies; and (ii) discuss how future ex ante studies can be informed by our analysis.
