Similar Literature
20 similar articles found (search time: 434 ms).
1.
This paper quantitatively analyzes the design of the Ocean Ranger offshore oil drilling rig that capsized and sank on February 15, 1982 off the coast of Canada. A review of the actual disaster is also included, based on evidence gathered by the Canadian Royal Commission. The risk analysis includes the construction of a failure modes and effects analysis (FMEA) table, a fault tree, and a quantitative evaluation that includes common cause failure of the rig components. For the Ocean Ranger ballast control system, the analysis was shown both to successfully model the catastrophic system failure through the portholes (the actual system failure mode) and to identify a common cause failure mode of the pump system. This study represents an application of reliability and risk techniques to the oil services industry.
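As a rough illustration of the kind of quantification this abstract describes, the sketch below combines AND/OR gate algebra with a beta-factor common cause model. The gate structure, component names, and probabilities are hypothetical, not the paper's data.

```python
# Minimal fault-tree quantification sketch with a beta-factor common-cause model.
# All component names and probabilities are illustrative, not the paper's data.

def or_gate(*probs):
    """P(at least one event occurs), assuming independence: 1 - prod(1 - p_i)."""
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

def and_gate(*probs):
    """P(all events occur), assuming independence."""
    out = 1.0
    for p in probs:
        out *= p
    return out

# Redundant ballast pumps with beta-factor common cause failure: each pump's
# total failure probability is split into an independent part (1 - beta) * p
# and a shared part beta * p that fails both pumps at once.
p_pump, beta = 1e-3, 0.1
p_indep = (1 - beta) * p_pump
p_ccf = beta * p_pump

# Both pumps fail if both fail independently OR the common cause occurs.
p_both_pumps = or_gate(and_gate(p_indep, p_indep), p_ccf)

# Top event: flooding through an open porthole AND loss of ballast pumping.
p_porthole_flooding = 5e-3
p_top = and_gate(p_porthole_flooding, p_both_pumps)
print(f"P(both pumps fail) = {p_both_pumps:.3e}")
print(f"P(top event)       = {p_top:.3e}")
```

Note how the shared term dominates the redundant pair: the common cause contribution (1e-4) dwarfs the independent double failure (~8e-7), which is why common cause analysis matters for redundant systems.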

2.
Life-table analysis can help to gauge the lifetime impacts that accrue from modifications to (age-specific) baseline mortality. Modifications of interest include those stemming from risk-factor-related exposures or from interventions. The specific algorithm used in these analyses can be called a cause-modified life table (a generalization of the cause-deleted life table). The author presents an approach for approximating that algorithm and uses it to obtain remarkably simplified expressions for three indices of common interest: life-years lost (LYL), excess lifetime risk ratio (ELRR), and risk of exposure-induced death (REID). These efforts are restricted to the special case of multiplicative increases to baseline mortality (modeled as an excess rate ratio, ERR). The simplified expressions effectively "break open" what is often treated as a "black-box" calculation. Several insights result. For a practical range of risk factor impacts (ERRs), each index can be related to the ERR as a function of a baseline summary statistic and a "characteristic number" specific to the population and cause of interest. Conveniently, those numbers help form "rules of thumb" for translating among the three indices and suggest heuristics for extrapolating indices across populations and causes of death.
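The following sketch shows the cause-modified life-table mechanics behind two of these indices, assuming a multiplicative ERR applied to a cause-specific hazard. The hazard curves are synthetic and the LYL/REID computations follow the usual textbook definitions; none of the numbers or formulas are the paper's simplified expressions.

```python
# Sketch of a cause-modified life table under a multiplicative excess rate
# ratio (ERR). The baseline hazards below are synthetic, not real vital data.
import numpy as np

ages = np.arange(0, 101)                   # ages 0..100
h_all = 0.0001 * np.exp(0.085 * ages)      # synthetic all-cause hazard
h_cause = 0.2 * h_all                      # hazard for the cause of interest
err = 0.5                                  # multiplicative excess rate ratio

def life_expectancy(hazard):
    # Discrete survival curve from the cumulative hazard; life expectancy
    # approximated as the sum of the survival probabilities.
    surv = np.exp(-np.cumsum(hazard))
    return surv.sum()

# Modified table: only the cause-specific hazard is inflated by (1 + ERR).
h_mod = h_all + err * h_cause
lyl = life_expectancy(h_all) - life_expectancy(h_mod)   # life-years lost

# REID: deaths attributable to the excess rate, accumulated over the
# modified survival curve.
surv_mod = np.exp(-np.cumsum(h_mod))
reid = np.sum(err * h_cause * surv_mod)
print(f"LYL  = {lyl:.2f} years, REID = {reid:.3f}")
```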

3.
Fault Trees vs. Event Trees in Reliability Analysis
Reliability analysis is the study of both the probability and the process of failure of a system. Several tools are available for this purpose, for example, fault trees, event trees, or the GO technique. These tools are often complementary and address different aspects of the problem. Experience shows that there is sometimes confusion between two of these methods: fault trees and event trees. Sometimes treated as equivalent, they in fact serve different purposes. Fault trees lay out relationships among events. Event trees lay out sequences of events linked by conditional probabilities. At least in theory, event trees can better handle notions of continuity (logical, temporal, and physical), whereas fault trees are most powerful in identifying and simplifying failure scenarios. Different characteristics of the system in question (e.g., a dam or a nuclear reactor) may guide the choice between fault trees, event trees, or a combination of the two. Some elements of this choice are examined, and observations are made about the relative capabilities of the two methods.
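A toy calculation makes the contrast concrete: a fault tree aggregates component probabilities through logic gates into a single top event probability, while an event tree multiplies conditional probabilities along each sequence from an initiating event. All numbers below are illustrative.

```python
# Toy contrast between the two methods; all numbers are illustrative.

# Fault tree: top event = (A OR B) AND C, components assumed independent.
pA, pB, pC = 0.01, 0.02, 0.05
p_top = (1 - (1 - pA) * (1 - pB)) * pC
print(f"Fault tree top event probability: {p_top:.2e}")

# Event tree: an initiating event followed by two safety barriers.
# Each branch multiplies conditional probabilities along the sequence.
f_initiator = 0.1          # initiating event frequency (per year)
p_barrier1_fails = 0.05    # P(barrier 1 fails | initiator)
p_barrier2_fails = 0.10    # P(barrier 2 fails | barrier 1 failed)

sequences = {
    "initiator, barrier 1 succeeds (safe)": f_initiator * (1 - p_barrier1_fails),
    "initiator, barrier 1 fails, barrier 2 succeeds (degraded)":
        f_initiator * p_barrier1_fails * (1 - p_barrier2_fails),
    "initiator, both barriers fail (accident)":
        f_initiator * p_barrier1_fails * p_barrier2_fails,
}
for name, freq in sequences.items():
    print(f"{name}: {freq:.2e} /yr")
```

In practice the two are combined: each event tree branch probability is itself quantified by a fault tree for the corresponding barrier.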

4.
Risk-Based Ranking of Dominant Contributors to Maritime Pollution Events
This report describes a conceptual approach for identifying dominant contributors to risk from maritime shipping of hazardous materials. Maritime transportation accidents are relatively common occurrences compared to more frequently analyzed contributors to public risk. Yet research on maritime safety and pollution incidents has not been guided by a systematic, risk-based approach. Maritime shipping accidents can be analyzed using event trees to group the accidents into "bins," or groups, of similar characteristics such as type of cargo, location of accident (e.g., harbor, inland waterway), type of accident (e.g., fire, collision, grounding), and size of release. The importance of specific types of events to each accident bin can be quantified. Then the overall importance of accident events to risk can be estimated by weighting the events' individual bin importance measures by the risk associated with each accident bin.
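A minimal sketch of the binning-and-weighting idea follows; the bins, bin risks, and per-bin importance values are invented for illustration, not the report's data.

```python
# Sketch of risk-weighted importance ranking across accident bins.
# Bins, risks, and per-bin importance values are invented for illustration.

bins = {
    # bin name: (annual risk of bin, {event type: importance within bin})
    "harbor collision":   (2e-3, {"navigation error": 0.6, "equipment failure": 0.4}),
    "open-sea grounding": (5e-4, {"navigation error": 0.8, "equipment failure": 0.2}),
    "fire in port":       (1e-3, {"hot work": 0.7, "equipment failure": 0.3}),
}

# Overall importance of an event = sum over bins of (bin risk x bin importance).
overall = {}
for risk, importances in bins.values():
    for event, imp in importances.items():
        overall[event] = overall.get(event, 0.0) + risk * imp

for event, score in sorted(overall.items(), key=lambda kv: -kv[1]):
    print(f"{event}: {score:.2e}")
```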

5.
This article presents an iterative six-step risk analysis methodology based on hybrid Bayesian networks (BNs). In typical risk analysis, systems are modeled via fault trees as discrete and Boolean variables with constant failure rates. Nevertheless, in many cases it is not possible to perform an efficient analysis using only discrete and Boolean variables. The proposed methodology makes use of BNs and incorporates recent developments that facilitate the use of continuous variables whose values may follow any probability distribution. This makes the methodology particularly useful in cases where the data available for quantifying the probabilities of hazardous events are scarce or nonexistent, there is dependence among events, or nonbinary events are involved. The methodology is applied to the risk analysis of a regasification system of liquefied natural gas (LNG) on board an FSRU (floating, storage, and regasification unit). LNG is becoming an important energy source option and the world's capacity to produce LNG is surging. Large reserves of natural gas exist worldwide, particularly in areas where the resources exceed the demand. This natural gas is therefore liquefied for shipping, and storage and regasification usually occur at onshore plants. However, a new option for LNG storage and regasification has been proposed: the FSRU. As very few FSRUs have been put into operation, relevant failure data on FSRU systems are scarce. The results show the usefulness of the proposed methodology for cases where the risk analysis must be performed under considerable uncertainty.
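A minimal Monte Carlo sketch of a hybrid BN, with one continuous parent and Boolean children, is shown below. The logistic link between pressure and leak probability, and every parameter, are assumptions for illustration only, not the article's FSRU model.

```python
# Minimal Monte Carlo sketch of a hybrid Bayesian network: a continuous
# parent (operating pressure) feeds a Boolean leak node, which feeds a
# Boolean ignition node. Distributions and parameters are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Continuous parent: operating pressure (bar), lognormally distributed.
pressure = rng.lognormal(mean=np.log(80), sigma=0.15, size=n)

# Boolean child: leak probability rises smoothly with pressure
# (a logistic link, standing in for scarce failure data).
p_leak = 1.0 / (1.0 + np.exp(-(pressure - 110) / 5.0))
leak = rng.random(n) < p_leak

# Boolean child of leak: ignition given a leak.
ignition = leak & (rng.random(n) < 0.05)

print(f"P(leak)     = {leak.mean():.4f}")
print(f"P(ignition) = {ignition.mean():.5f}")
# Diagnostic-style query by conditioning on evidence:
print(f"E[pressure | leak] = {pressure[leak].mean():.1f} bar")
```

Sampling sidesteps the restriction of classical fault trees to Boolean variables: any continuous distribution can serve as a parent node, and evidence is handled by simple conditioning on the sampled population.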

6.
Risk Analysis, 2018, 38(8): 1576-1584
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models.
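The Wilks idea is easy to demonstrate: with n = 59 samples, the sample maximum bounds the 95th percentile with 95% confidence, because the probability that all 59 samples fall below the 95th percentile is 0.95^59 < 0.05. The sketch below compares that bound with a full Monte Carlo percentile on an illustrative three-event OR-gate model (not the article's model).

```python
# Sketch: uncertainty in a fault-tree top event with lognormal basic events,
# comparing full Monte Carlo percentiles with a Wilks upper bound.
# The three-event OR-gate model and its parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
medians = np.array([1e-4, 5e-5, 2e-4])   # basic event medians
gsd = 3.0                                # geometric standard deviation

def sample_top(n):
    # Lognormal basic events; rare-event OR gate approximated by the sum.
    q = rng.lognormal(np.log(medians), np.log(gsd), size=(n, 3))
    return q.sum(axis=1)

# Full Monte Carlo estimate of the 95th percentile.
mc = sample_top(100_000)
p95_mc = np.percentile(mc, 95)

# Wilks (first-order, one-sided): with n = 59 samples the maximum bounds the
# 95th percentile with 95% confidence, since 0.95**59 < 0.05.
wilks_bound = sample_top(59).max()

print(f"MC 95th percentile : {p95_mc:.3e}")
print(f"Wilks 95/95 bound  : {wilks_bound:.3e}")
```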

7.
Root cause analysis can be used in foodborne illness outbreak investigations to determine the underlying causes of an outbreak and to help identify actions that could be taken to prevent future outbreaks. We developed a new tool, the Quantitative Risk Assessment-Epidemic Curve Prediction Model (QRA-EC), to assist with these goals, and applied it to a case study to illustrate how quantitative risk assessment can provide unique insights for foodborne illness outbreak root cause analysis. We used a 2019 Salmonella outbreak linked to melons as the case study (Centers for Disease Control and Prevention [CDC], 2019). The model was used to evaluate the impact of various root cause hypotheses (representing different contamination sources and food safety system failures in the melon supply chain) on the predicted number and timeline of illnesses. The predicted number of illnesses varied by contamination source and was strongly impacted by the prevalence and level of Salmonella contamination on the surface/inside of whole melons and inside contamination niches on equipment surfaces. The timeline of illnesses was most strongly impacted by equipment sanitation efficacy for contamination niches. Evaluating a wide range of scenarios representing various potential root causes enabled us to identify which hypotheses were likely to result in an outbreak of similar size and illness timeline to the 2019 Salmonella melon outbreak. The QRA-EC framework can be adapted to accommodate any food-pathogen pair to provide insights for foodborne outbreak investigations.

8.
Context in the Risk Assessment of Digital Systems
As the use of digital computers for instrumentation and control of safety-critical systems has increased, there has been a growing debate over whether probabilistic risk assessment techniques can be applied to these systems. This debate has centered on whether software failures can be modeled probabilistically. This paper describes a context-based approach to software risk assessment that explicitly recognizes that the behavior of software is not probabilistic. The perceived uncertainty in its behavior results from both the input to the software and the application and environment in which the software is operating. Failures occur as the result of encountering some context for which the software was not properly designed, as opposed to the software simply failing randomly. The paper elaborates on the concept of error-forcing context as it applies to software. It also illustrates a methodology that uses event trees, fault trees, and the Dynamic Flowgraph Methodology (DFM) to identify error-forcing contexts for software in the form of fault tree prime implicants.

9.
Tunnel excavation inevitably produces significant disturbances to the surrounding environment, and tunnel-induced damage to adjacent buried pipelines is of considerable importance in geotechnical practice. A fuzzy Bayesian network (FBN)-based approach for safety risk analysis is developed in this article with detailed step-by-step procedures, consisting of risk mechanism analysis, FBN model establishment, fuzzification, FBN-based inference, defuzzification, and decision making. In accordance with the failure mechanism analysis, a tunnel-induced pipeline damage model is proposed to reveal the cause-effect relationships between pipeline damage and its influential variables. For the fuzzification process, an expert confidence indicator is proposed to reveal the reliability of the data when determining the fuzzy probability of occurrence of basic events, taking into account both the expert's judgment ability and subjective reliability. By means of fuzzy Bayesian inference, the proposed approach can calculate the probability distribution of potential safety risks and identify the most likely potential causes of accidents under both prior knowledge and given evidence. A case concerning the safety analysis of buried pipelines adjacent to the construction of the Wuhan Yangtze River Tunnel is presented. The results demonstrate the feasibility of the proposed FBN approach and its application potential. The approach can be used as a decision tool to support safety assurance and management in tunnel construction, and thus increase the likelihood of a successful project in a complex environment.
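A sketch of the fuzzification and defuzzification steps follows, using triangular fuzzy numbers, confidence-weighted aggregation, and centroid defuzzification. The linguistic scale, the expert judgments, and the weights are all invented for illustration, not the article's elicitation.

```python
# Sketch of fuzzification/defuzzification: expert linguistic ratings mapped to
# triangular fuzzy probabilities, aggregated with confidence weights, then
# defuzzified by centroid. Scales and weights are illustrative.
import numpy as np

# Linguistic scale -> triangular fuzzy number (low, mode, high).
SCALE = {
    "very low": (0.0, 0.1, 0.2),
    "low":      (0.1, 0.25, 0.4),
    "medium":   (0.3, 0.5, 0.7),
    "high":     (0.6, 0.75, 0.9),
}

# (rating, expert confidence weight) for one basic event.
judgments = [("low", 0.9), ("medium", 0.6), ("low", 0.8)]

# Weighted average of the triangular parameters (a common aggregation choice).
w = np.array([c for _, c in judgments])
tris = np.array([SCALE[r] for r, _ in judgments])
a, m, b = (w @ tris) / w.sum()

# Centroid defuzzification of a triangular number is (a + m + b) / 3.
crisp_p = (a + m + b) / 3.0
print(f"aggregated fuzzy probability ~ ({a:.3f}, {m:.3f}, {b:.3f})")
print(f"defuzzified (crisp) probability = {crisp_p:.3f}")
```

The crisp probability would then feed the Bayesian network inference as the occurrence probability of the corresponding basic event.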

10.
The methodology and results reported in this paper are based on an analysis of a hypothetical accident occurring in a two unit power plant with shared systems (i.e., the diesel generator, the emergency service water, and the residual heat removal service water systems). The accident postulated is a loss of coolant accident (LOCA) in one out of two nuclear units in conjunction with a loss of offsite power (LOOP) and a failure of one out of four diesel generators to start. To analyze the intersystem effects, we needed to develop and apply a new methodology, intersystem common cause analysis (ICCA). The ICCA methodology revealed problems which were not identified by the traditional intrasystem failure modes and effects analysis (FMEA) performed earlier by the design teams. The first potential problem arises if one unit experiences a LOCA and diesel generator failure while one loop of its residual heat removal system is in the suppression pool cooling mode (SPCM); in this event, it is likely that minimum emergency core cooling system (ECCS) requirements will not be met. The second potential problem arises if a diesel generator fails while both units are simultaneously subjected to a controlled forced shutdown (a LOCA need not be postulated for either unit); in this event, it is likely that one unit will be required to use a heat removal path identified as off-normal in the final safety analysis report (FSAR) for the two unit plant. These and other potential concerns identified through application of the ICCA presented here were resolved early in the design phase.

11.
12.
Intentional or accidental releases of contaminants into a water distribution system (WDS) have the potential to cause significant adverse health effects among individuals consuming water from the system. A flexible analysis framework is presented here for estimating the magnitude of such potential effects and is applied using network models for 12 actual WDSs of varying sizes. Upper bounds are developed for the magnitude of adverse effects of contamination events in WDSs and evaluated using results from the 12 systems. These bounds can be applied in cases in which little system-specific information is available. The combination of a detailed, network-specific approach and a bounding approach allows consequence assessments to be performed for systems for which varying amounts of information are available and addresses important needs of individual utilities as well as regional or national assessments. The approach used in the analysis framework allows contaminant injections at any or all network nodes and uses models that (1) account for contaminant transport in the systems, including contaminant decay, and (2) provide estimates of ingested contaminant doses for the exposed population. The approach can be easily modified as better transport or exposure models become available. The methods presented here provide the ability to quantify or bound potential adverse effects of contamination events for a wide variety of possible contaminants and WDSs, including systems without a network model.

13.
In human reliability analysis (HRA), dependence analysis refers to assessing the influence of the operators' failure to perform one task on the failure probabilities of subsequent tasks. A commonly used approach is the technique for human error rate prediction (THERP). The assessment of the dependence level in THERP is a highly subjective judgment based on general rules for the influence of five main factors. A frequently used alternative method extends the THERP model with decision trees. Such trees should increase the repeatability of the assessments, but they simplify the relationships among the factors and the dependence level. Moreover, the basis for these simplifications and the resulting tree is difficult to trace. The aim of this work is to develop a method for dependence assessment in HRA that captures the rules used by experts to assess dependence levels and incorporates this knowledge into an algorithm and software tool to be used by HRA analysts. A fuzzy expert system (FES) underlies the method. The method and the associated expert elicitation process are demonstrated with a working model. The expert rules are elicited systematically and converted into a traceable, explicit, and computable model. Anchor situations are provided as guidance for the HRA analyst's judgment of the input factors. The expert model and the FES-based dependence assessment method make the expert rules accessible to the analyst in a usable and repeatable way, with an explicit and traceable basis.
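For reference, THERP's standard conditional-probability formulas for the five dependence levels can be written down directly, as in the sketch below; the basic human error probability used is an arbitrary example value, not from any real analysis.

```python
# The standard THERP conditional human error probabilities, given failure of
# the preceding task, for each of the five dependence levels.
# The basic HEP below is an illustrative value, not from a real analysis.

def therp_conditional_hep(p, level):
    """Conditional HEP of task N given failure of task N-1."""
    formulas = {
        "zero":     p,                    # ZD: unchanged
        "low":      (1 + 19 * p) / 20,    # LD
        "moderate": (1 + 6 * p) / 7,      # MD
        "high":     (1 + p) / 2,          # HD
        "complete": 1.0,                  # CD: certain failure
    }
    return formulas[level]

basic_hep = 0.003
for level in ("zero", "low", "moderate", "high", "complete"):
    print(f"{level:9s}: {therp_conditional_hep(basic_hep, level):.4f}")
```

The FES approach described in the abstract replaces the analyst's discrete choice among these five levels with graded, traceable expert rules over the influencing factors.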

14.
A quantitative risk analysis was conducted to evaluate the design of the VX neutralization subsystem and related support facilities of the U.S. Army Newport Chemical Agent Disposal Facility. Three major incident types, including agent release, personnel injury, and system loss, were studied using fault tree analysis. Each incident was assigned a risk assessment code based on its severity level and probability of occurrence. Safety mitigations or design changes were recommended to bring incidents at the "undesired" risk level (typically agent release events) down to "acceptable with controls" or "acceptable."

15.
Probabilistic risk analyses often construct multistage chance trees to estimate the joint probability of compound events. If random measurement error is associated with some or all of the estimates, we show that the resulting estimates of joint probability may be highly skewed. Joint probability estimates based on the analysis of multistage chance trees are more likely than not to be below the true probability of adverse events, but will sometimes substantially overestimate them. In contexts such as insurance markets for environmental risks, skewed distributions of risk estimates amplify the "winner's curse," so that the estimated risk premium for low-probability events is likely to be lower than the normative value. Skewness may result even in unbiased estimators of expected value from simple lotteries, if measurement error is associated with both the probability and payoff terms. Further, skewness may occur even if the error associated with these two estimates is symmetrically distributed. Under certain circumstances, skewed estimates of expected value may result in risk-neutral decisionmakers exhibiting a tendency to choose a certainty equivalent over a lottery of equal expected value, or vice versa. We show that when distributions of estimates of expected value are positively skewed, under certain circumstances it will be optimal to choose lotteries with nominal values lower than the value of apparently superior certainty equivalents. Extending the previous work of Goodman (1960), we provide an exact formula for the skewness of products.
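The skewness claim is easy to check numerically: the sketch below multiplies an unbiased probability estimate by an unbiased payoff estimate, both with symmetric errors, and shows that the product is positively skewed, with the median falling below the true expected value. All distributions and parameters are illustrative, not the paper's.

```python
# Monte Carlo illustration: the product of two unbiased estimates with
# symmetric errors is itself positively skewed.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Unbiased, symmetric errors around the true probability and payoff.
p_hat = 0.01 + rng.normal(0, 0.003, n)     # estimated probability
x_hat = 100.0 + rng.normal(0, 30.0, n)     # estimated payoff

ev_hat = p_hat * x_hat                     # estimated expected value

def skewness(z):
    d = z - z.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

print(f"mean of estimates  : {ev_hat.mean():.3f} (true EV = 1.0)")
print(f"median of estimates: {np.median(ev_hat):.3f}")
print(f"skewness           : {skewness(ev_hat):.3f}")
# Median < mean: more than half of the estimates fall below the true value,
# matching the abstract's "more likely than not to be below" observation.
```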

16.
17.
Earlier work with decision trees identified nonseparability as an obstacle to minimizing the conditional expected value, a measure of the risk of extreme events, by the well-known method of averaging out and folding back. This first of two companion papers addresses the conditional expected value defined as the expected outcome given exceedance of a threshold β, where β is preselected by the decision maker. An approach is proposed to overcome the need to evaluate all policies in order to identify the optimal policy. The approach is based on the insight that the conditional expected value is separable into two constituent elements of risk and can thus be optimized along with other objectives, including the unconditional expected value of the outcome, by using a multiobjective decision tree. An example of sequential decision making for improving highway capacity is given.
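A minimal computation of the measure in question, E[X | X > β], for an invented discrete outcome distribution:

```python
# Sketch of the conditional expected value E[X | X > beta] for a discrete
# outcome distribution, the extreme-event risk measure discussed above.
# Outcomes and probabilities are illustrative.

outcomes = [(0.90, 1.0), (0.07, 10.0), (0.025, 50.0), (0.005, 200.0)]
beta = 5.0   # threshold preselected by the decision maker

# Unconditional expected value.
ev = sum(p * x for p, x in outcomes)

# Conditional expected value over the exceedance set {x > beta}.
tail = [(p, x) for p, x in outcomes if x > beta]
p_exceed = sum(p for p, _ in tail)
cev = sum(p * x for p, x in tail) / p_exceed

print(f"E[X]            = {ev:.2f}")
print(f"P(X > beta)     = {p_exceed:.3f}")
print(f"E[X | X > beta] = {cev:.2f}")
# The CEV decomposes into the two constituent elements noted in the abstract:
# the exceedance probability and the partial expectation over the tail.
```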

18.
Dynamic reliability methods aim at complementing the capability of traditional static approaches (e.g., event trees [ETs] and fault trees [FTs]) by accounting for the system's dynamic behavior and its interactions with the state transition process. To this end, the system dynamics are described by a time-dependent model that includes the dependencies on the stochastic transition events. In this article, we present a novel computational framework for dynamic reliability analysis whose objectives are (i) accounting for discrete stochastic transition events and (ii) identifying the prime implicants (PIs) of the dynamic system. The framework adopts a multiple-valued logic (MVL) to consider stochastic transitions at discretized times. PIs are then identified by a differential evolution (DE) algorithm that searches for the optimal MVL solution of a covering problem formulated over MVL accident scenarios. To test the feasibility of the framework, a dynamic noncoherent system composed of five components that can fail at discretized times has been analyzed, showing the applicability of the framework to practical cases.

19.
Pet-Armacost, Julia J.; Sepulveda, Jose; Sakude, Milton. Risk Analysis, 1999, 19(6): 1173-1184
The US Department of Transportation was interested in the risks associated with transporting hydrazine in tanks with and without relief devices. Hydrazine is highly toxic, flammable, and corrosive. Consequently, there was a conflict as to whether a relief device should be used. Data were not available on the impact of relief devices on release probabilities or on the impact of hydrazine on the likelihood of fires and explosions. In this paper, a Monte Carlo sensitivity analysis of the unknown parameters was used to assess the risks associated with highway transport of hydrazine. To help determine whether relief devices should be used, fault trees and event trees were used to model the sequences of events that could lead to adverse consequences during transport. The event probabilities in the event trees were derived as functions of the parameters whose effects were not known. The impacts of these parameters on the risk of toxic exposures, fires, and explosions were assessed through a Monte Carlo sensitivity analysis and evaluated statistically through an analysis of variance. The analysis determined which of the unknown parameters had a significant impact on the risks. It also provided the necessary support for a critical transportation decision even though the values of several key parameters were not known.
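A sketch of this style of analysis follows: the unknown parameters are sampled over assumed plausible ranges, propagated through a toy event-tree risk model, and screened for influence by rank correlation. The parameter names, ranges, and risk model are hypothetical, not the paper's.

```python
# Monte Carlo sensitivity sketch: unknown event-tree parameters sampled from
# wide ranges; their influence on risk screened by rank (Spearman) correlation.
# Parameter names, ranges, and the toy risk model are all illustrative.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Unknown parameters, sampled uniformly over assumed plausible ranges.
p_release_relief = rng.uniform(0.01, 0.20, n)   # release P(with relief device)
p_release_none = rng.uniform(0.05, 0.30, n)     # release P(without device)
p_fire_given_rel = rng.uniform(0.001, 0.10, n)  # P(fire | release)

# Toy event-tree consequences: expected severity per shipment.
risk_with = p_release_relief * (1.0 + 50.0 * p_fire_given_rel)
risk_without = p_release_none * (1.0 + 50.0 * p_fire_given_rel)

# Screen parameter influence via rank correlation with the risk difference.
delta = risk_without - risk_with
for name, x in [("p_release_relief", p_release_relief),
                ("p_release_none", p_release_none),
                ("p_fire_given_rel", p_fire_given_rel)]:
    # argsort().argsort() converts values to ranks (Spearman correlation).
    r = np.corrcoef(x.argsort().argsort(), delta.argsort().argsort())[0, 1]
    print(f"rank corr({name}, risk difference) = {r:+.2f}")
print(f"P(relief device lowers risk) = {(delta > 0).mean():.2f}")
```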

20.
Although occupational exposure limits are intended to establish health-based standards, they do not always give a sufficient basis for planning an indoor air climate that is good and comfortable for the occupants of industrial work rooms. This paper considers methodologies by which the desired level, i.e., the target level, of air quality in industrial settings can be defined, taking feasibility issues into account. Risk assessment based on health criteria is compared with risk assessment based on "Best Available Technology" (BAT). Because health-based risk estimates in low concentration regions are rather inaccurate, the technology-based approach is emphasized. The technological approach is based on information on the prevailing concentrations in industrial work environments and on the benchmark air quality attained with the best achievable technology. The prevailing contaminant concentrations are obtained from a contaminant exposure databank, and the benchmark air quality from field measurements in industrial work rooms equipped with advanced ventilation and production technology. As an example, the target level assessment has been applied to formaldehyde, total inorganic dust, and hexavalent chromium, which are common contaminants in work room air.
