Similar articles
20 similar articles found (search time: 46 ms)
1.
Human error is one of the significant factors contributing to accidents. Traditional human error probability (HEP) studies based on fuzzy number concepts are one contribution to addressing this problem, and are particularly useful where data are lacking. However, the discriminability of such studies may be questioned when experts have adequate information and specific values can be determined on the abscissa of the membership function of the linguistic terms, that is, when the fuzzy data of the scenarios considered are close to each other. In this article, a novel HEP assessment aimed at solving this difficulty is proposed. Under the framework, the fuzzy data are equipped with linguistic terms and membership values. By establishing a rule base for data combination, followed by defuzzification and HEP transformation, the HEP results can be acquired. The methodology is first examined using a test case consisting of three scenarios whose fuzzy data are close to each other. The results are compared with the outcomes produced by traditional fuzzy HEP studies on the same test case. It is concluded that the proposed methodology has a higher degree of discriminability and is capable of providing more reasonable results. Furthermore, where data are lacking, the proposed approach can also provide the range of HEP results under different, arbitrarily established risk viewpoints, as illustrated with a real-world example.
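The defuzzification and HEP-transformation steps described above can be sketched as follows. This is a minimal illustration, not the article's actual rule base: the triangular membership function, the centroid defuzzifier, and the Onisawa-style logarithmic transformation are common choices in fuzzy HRA, and the specific numbers here are hypothetical.

```python
def tri(x, a, b, c):
    """Membership value of x in the triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def centroid(a, b, c, n=10001):
    """Centroid defuzzification of a triangular fuzzy number on [0, 1]."""
    xs = [i / (n - 1) for i in range(n)]
    num = sum(x * tri(x, a, b, c) for x in xs)
    den = sum(tri(x, a, b, c) for x in xs)
    return num / den

def hep_from_score(x):
    """Onisawa-style transformation from a fuzzy possibility score in (0, 1]
    to a human error probability."""
    if x <= 0:
        return 0.0
    k = 2.301 * ((1 - x) / x) ** (1 / 3)
    return 10 ** -k

# a "Moderate" error possibility, defuzzified and converted to an HEP
score = centroid(0.2, 0.5, 0.8)
hep = hep_from_score(score)
```

Higher possibility scores map to higher HEPs on a logarithmic scale, which is what gives the transformation its discriminating power near the extremes.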

2.
This paper presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle for nuclear power plant operators. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNNL operator license examiners, who supplemented the limited information in the examination reports with expert judgment based on their extensive simulator examination experience. Comparison of the average of the ASEP HEP values with the fraction of the population that actually failed demonstrated that the ASEP HEP values are larger (conservative) by a statistically significant average factor of two. Partitioning the tasks into subgroups based on the ASEP HEP values and comparing the subgroup average ASEP HEP values with the observed subgroup failure rates showed little or no conservatism for small ASEP HEP values, but considerable conservatism for larger ones.

3.
Human factors are widely regarded as major contributors to failures of maritime accident prevention systems. The conventional methods for human factor assessment, especially quantitative techniques such as fault trees and bow-ties, are static and cannot deal with models with uncertainty, which limits their application to human factors risk analysis. To alleviate these drawbacks, the present study introduces a new human factor analysis framework called the multidimensional analysis model of accident causes (MAMAC). MAMAC combines the human factors analysis and classification system with business process management. In addition, intuitionistic fuzzy set theory and Bayesian networks are integrated into MAMAC to form a comprehensive dynamic human factors analysis model characterized by flexibility and uncertainty handling. The proposed model is tested on maritime accident scenarios from a sand carrier accident database in China to investigate the human factors involved, and the top 10 most highly contributing primary events associated with the human factors leading to sand carrier accidents are identified. According to the results of this study, direct human factors, classified as unsafe acts, are not the main focus for maritime investigators and scholars. Meanwhile, unsafe preconditions and unsafe supervision are listed as the top two considerations for human factors analysis, especially supervision failures of shipping companies and ship owners. Moreover, potential safety countermeasures for the most highly contributing human factors are proposed in this article. Finally, an application of the proposed model verifies its advantages in calculating the failure probability of accidents induced by human factors.

4.
Peng Liu, Zhizhong Li. Risk Analysis, 2014, 34(9): 1706-1719
There is a scarcity of empirical data on human error for human reliability analysis (HRA). This situation can increase the variability and impair the validity of HRA outcomes in risk analysis. In this work, a microworld study was used to investigate the effects of performance shaping factors (PSFs) and their interrelationships and combined effects on the human error probability (HEP). The PSFs involved were task complexity, time availability, experience, and time pressure. The empirical data obtained were compared with predictions by the Standardized Plant Analysis Risk-Human Reliability Method (SPAR-H) and data from other sources. The comparison included three aspects: (1) HEP, (2) relative effects of the PSFs, and (3) error types. Results showed that the HEP decreased with experience and time availability levels. The significant relationship between task complexity and the HEP depended on time availability and experience, and time availability affected the HEP through time pressure. The empirical HEPs were higher than the HEPs predicted by SPAR-H under different PSF combinations, showing the tendency of SPAR-H to produce relatively optimistic results in our study. The relative effects of two PSFs (i.e., experience/training and stress/stressors) in SPAR-H agreed to some extent with those in our study. Several error types agreed well with those from operational experience and a database for nuclear power plants (NPPs).
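For reference, SPAR-H quantifies an HEP by multiplying a nominal HEP by the PSF multipliers, applying an adjustment factor when three or more PSFs are rated negatively so the result stays below 1. The sketch below follows the published SPAR-H worksheets; the multiplier values in the example are hypothetical.

```python
NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}

def spar_h_hep(task_type, psf_multipliers):
    """SPAR-H quantification: HEP = NHEP * composite PSF multiplier, with
    the adjustment factor applied when >= 3 PSFs are rated negatively."""
    nhep = NOMINAL_HEP[task_type]
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    negatives = sum(1 for m in psf_multipliers if m > 1)
    if negatives >= 3:
        # adjustment factor keeps the HEP bounded below 1
        return nhep * composite / (nhep * (composite - 1) + 1)
    return min(1.0, nhep * composite)

# hypothetical action task: high complexity (x10), barely adequate
# time (x10), high stress (x2)
hep = spar_h_hep("action", [10, 10, 2])
```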

5.
In human reliability analysis (HRA), dependence analysis refers to assessing the influence of the failure of the operators to perform one task on the failure probabilities of subsequent tasks. A commonly used approach is the technique for human error rate prediction (THERP). The assessment of the dependence level in THERP is a highly subjective judgment based on general rules for the influence of five main factors. A frequently used alternative method extends the THERP model with decision trees. Such trees should increase the repeatability of the assessments but they simplify the relationships among the factors and the dependence level. Moreover, the basis for these simplifications and the resulting tree is difficult to trace. The aim of this work is a method for dependence assessment in HRA that captures the rules used by experts to assess dependence levels and incorporates this knowledge into an algorithm and software tool to be used by HRA analysts. A fuzzy expert system (FES) underlies the method. The method and the associated expert elicitation process are demonstrated with a working model. The expert rules are elicited systematically and converted into a traceable, explicit, and computable model. Anchor situations are provided as guidance for the HRA analyst's judgment of the input factors. The expert model and the FES-based dependence assessment method make the expert rules accessible to the analyst in a usable and repeatable way, with an explicit and traceable basis.
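The five THERP dependence levels referred to above map a nominal HEP to a conditional HEP through fixed formulas (as commonly cited from the THERP handbook, Table 20-17). A direct transcription:

```python
def therp_chep(level, nominal_hep):
    """Conditional HEP for a subsequent task, given its nominal HEP and the
    assessed dependence level: zero (ZD), low (LD), moderate (MD),
    high (HD), or complete (CD) dependence."""
    n = nominal_hep
    return {
        "ZD": n,
        "LD": (1 + 19 * n) / 20,
        "MD": (1 + 6 * n) / 7,
        "HD": (1 + n) / 2,
        "CD": 1.0,
    }[level]
```

Note that even low dependence lifts a small nominal HEP to roughly 0.05, which is why the choice of level dominates the result and motivates the repeatable assessment scheme the abstract describes.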

6.
This article introduces a human error analysis or human reliability analysis methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), for analyzing emergency tasks in nuclear power plants. The AGAPE-ET method is based on a simplified cognitive model and a set of performance-influencing factors (PIFs). For each cognitive function, error-causing factors (ECFs) or error-likely situations have been identified considering the characteristics of the performance of that cognitive function and the influencing mechanism of the PIFs on it. A human error analysis procedure based on these error analysis factors is then organized to cue and guide the analyst in conducting the human error analysis. The method is characterized by the structured identification of weak points of the task to be performed and by an efficient analysis process in which the analyst need only examine the relevant cognitive functions. In application, AGAPE-ET proved useful, effectively identifying vulnerabilities with respect to cognitive performance as well as task execution, and helping the analyst draw specific error reduction measures directly from the analysis.

7.
This article proposes a methodology for the application of Bayesian networks in conducting quantitative risk assessment of operations in offshore oil and gas industry. The method involves translating a flow chart of operations into the Bayesian network directly. The proposed methodology consists of five steps. First, the flow chart is translated into a Bayesian network. Second, the influencing factors of the network nodes are classified. Third, the Bayesian network for each factor is established. Fourth, the entire Bayesian network model is established. Lastly, the Bayesian network model is analyzed. Subsequently, five categories of influencing factors, namely, human, hardware, software, mechanical, and hydraulic, are modeled and then added to the main Bayesian network. The methodology is demonstrated through the evaluation of a case study that shows the probability of failure on demand in closing subsea ram blowout preventer operations. The results show that mechanical and hydraulic factors have the most important effects on operation safety. Software and hardware factors have almost no influence, whereas human factors are in between. The results of the sensitivity analysis agree with the findings of the quantitative analysis. The three-axiom-based analysis partially validates the correctness and rationality of the proposed Bayesian network model.
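As a toy illustration of the kind of inference the fifth step performs (not the article's actual model), consider five independent factor nodes feeding an OR-gate "operation fails" node. The prior probabilities below are hypothetical and merely echo the qualitative ranking reported in the abstract.

```python
# hypothetical prior failure probabilities for the five factor categories
priors = {"mechanical": 0.02, "hydraulic": 0.015, "human": 0.005,
          "hardware": 0.0002, "software": 0.0001}

def p_failure_on_demand(priors):
    """P(operation fails), assuming independent factors and an OR gate:
    the operation fails if any factor fails."""
    p_all_ok = 1.0
    for p in priors.values():
        p_all_ok *= 1 - p
    return 1 - p_all_ok

def posterior_given_failure(priors, factor):
    """Diagnostic (backward) inference: P(factor failed | operation failed).
    Under an OR gate, P(operation failed | factor failed) = 1."""
    return priors[factor] / p_failure_on_demand(priors)
```

The diagnostic posteriors reproduce the reported ordering: mechanical and hydraulic dominate, human is intermediate, and hardware and software are negligible.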

8.
Methods of engineering risk analysis are based on a functional analysis of systems and on the probabilities (generally Bayesian) of the events and random variables that affect their performances. These methods allow identification of a system's failure modes, computation of its probability of failure or performance deterioration per time unit or operation, and of the contribution of each component to the probabilities and consequences of failures. The model has been extended to include the human decisions and actions that affect components' performances, and the management factors that affect behaviors and can thus be root causes of system failures. By computing the risk with and without proposed measures, one can then set priorities among different risk management options under resource constraints. In this article, I present briefly the engineering risk analysis method, then several illustrations of risk computations that can be used to identify a system's weaknesses and the most cost-effective way to fix them. The first example concerns the heat shield of the space shuttle orbiter and shows the relative risk contribution of the tiles in different areas of the orbiter's surface. The second application is to patient risk in anesthesia and demonstrates how the engineering risk analysis method can be used in the medical domain to rank the benefits of risk mitigation measures, in that case, mostly organizational. The third application is a model of seismic risk analysis and mitigation, with application to the San Francisco Bay area for the assessment of the costs and benefits of different seismic provisions of building codes. In all three cases, some aspects of the results were not intuitively obvious. The probabilistic risk analysis (PRA) method allowed identifying system weaknesses and the most cost-effective way to fix them.

9.
Yu Fan-Jang, Hwang Sheue-Ling, Huang Yu-Hao. Risk Analysis, 1999, 19(3): 401-415
In the design, development, and manufacturing stages of industrial products, engineers usually focus on the problems caused by hardware or software, but pay less attention to problems caused by human error, which may significantly affect system reliability and safety. Even when operating procedures are strictly followed, human error still may occur occasionally. Among the influencing factors, the inappropriate design of a standard operation procedure (SOP) or standard assembly procedure (SAP) is an important and latent cause of unexpected results during human operation. To reduce the error probability and error effects of these unexpected behaviors in the industrial work process, overall evaluation of SOP or SAP quality has become an essential task. The human error criticality analysis (HECA) method was developed to identify the potentially critical problems caused by human error in the human operation system. The method performs task analysis on the basis of the operating procedure (e.g., the SOP), analyzes the human error probability (HEP) for each human operation step, and assesses its error effects on the whole system. The results of the analysis show the interrelationships among critical human tasks, critical human error modes, and human reliability information of the human operation system. To test the robustness of the model, a case study of initiator assembly tasks was conducted. Results show that the HECA method is practicable for evaluating operation procedures, and the information is valuable in identifying means to upgrade human reliability and system safety for human tasks.

10.
The purpose of this article is to present a quantitative analysis of the human failure contribution in the collision and/or grounding of oil tankers, considering the recommendation of the "Guidelines for Formal Safety Assessment" of the International Maritime Organization. Initially, the employed methodology is presented, emphasizing the use of the technique for human error prediction to reach the desired objective. Later, this methodology is applied to a ship operating on the Brazilian coast and, thereafter, the procedure to isolate the human actions with the greatest potential to reduce the risk of an accident is described. Finally, the management and organizational factors presented in the "International Safety Management Code" are associated with these selected actions. An operator will therefore be able to decide where to intervene in order to obtain an effective reduction in the probability of accidents. Even though this study does not present a new methodology, it can be considered a reference for human reliability analysis in the maritime industry, which, despite having some guides for risk analysis, has few studies of human reliability effectively applied to the sector.

11.
Probabilistic risk analysis, based on the identification of failure modes, points to technical malfunctions and operator errors that can be direct causes of system failure. Yet component failures and operator errors are often rooted in management decisions and organizational factors. Extending the analysis to identify these factors allows more effective risk management strategies. It also permits a more realistic assessment of the overall failure probability. An implicit assumption that is often made in PRA is that, on the whole, the system has been designed according to specified norms and constructed as designed. Such an analysis tends to overemphasize scenarios in which the system fails because it is subjected to a much higher load than those for which it was designed. In this article, we find that, for the case of jacket-type offshore platforms, this class of scenarios contributes only about 5% of the failure probability. We link the PRA inputs to decisions and errors during the three phases of design, construction, and operation of platforms, and we assess the contribution of different types of error scenarios to the overall probability of platform failure. We compute the benefits of improving the design review, and we find that, given the costs involved, improving the review process is a more efficient way to increase system safety than reinforcing the structure.

12.
In this study, a methodology has been proposed for risk analysis of dust explosion scenarios based on Bayesian network. Our methodology also benefits from a bow-tie diagram to better represent the logical relationships existing among contributing factors and consequences of dust explosions. In this study, the risks of dust explosion scenarios are evaluated, taking into account common cause failures and dependencies among root events and possible consequences. Using a diagnostic analysis, dust particle properties, oxygen concentration, and safety training of staff are identified as the most critical root events leading to dust explosions. The probability adaptation concept is also used for sequential updating and thus learning from past dust explosion accidents, which is of great importance in dynamic risk assessment and management. We also apply the proposed methodology to a case study to model dust explosion scenarios, to estimate the envisaged risks, and to identify the vulnerable parts of the system that need additional safety measures.
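The "probability adaptation" idea, learning from past accidents by sequential updating, is often implemented with conjugate Beta-binomial updating; the sketch below takes that interpretation, and the prior and accident counts are hypothetical.

```python
def beta_update(alpha, beta, failures, trials):
    """Sequential (conjugate) updating of a failure probability from new
    accident records: a Beta(alpha, beta) prior becomes a Beta posterior."""
    return alpha + failures, beta + trials - failures

def beta_mean(alpha, beta):
    """Point estimate of the failure probability under the Beta model."""
    return alpha / (alpha + beta)

# hypothetical: prior mean 0.01 (Beta(1, 99)); then 3 dust explosions
# observed over 50 plant-years of operation
a, b = beta_update(1, 99, 3, 50)
updated = beta_mean(a, b)
```

Each new batch of operating experience can be folded in the same way, which is what makes the assessment dynamic rather than one-shot.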

13.
Dependence assessment among human errors in human reliability analysis (HRA) is an important issue. Many dependence assessment methods in HRA rely heavily on expert opinion, and are thus subjective and sometimes inconsistent. In this article, we propose a computational model based on the Dempster-Shafer evidence theory (DSET) and the analytic hierarchy process (AHP) to handle dependence in HRA. First, the factors influencing dependence between human tasks are identified, and the weights of the factors are determined by experts using the AHP method. Second, a judgment on each factor is given by the analyst with reference to anchors and linguistic labels. Third, the judgments are represented as basic belief assignments (BBAs) and integrated into a fused BBA by weighted average combination in DSET. Finally, the conditional human error probability (CHEP) is calculated from the fused BBA. The proposed model can deal with ambiguity and the degree of confidence in the judgments, and is able to reduce the subjectivity and improve the consistency of the evaluation process.
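The fusion and CHEP steps can be sketched as below. This is a simplification, not the paper's exact procedure (which may, for instance, follow the weighted average with repeated Dempster combination): a single-pass weighted average of the BBAs over the five THERP dependence levels, then a belief-weighted mix of the standard THERP conditional-HEP formulas. The example BBAs and weights are hypothetical.

```python
LEVELS = ["ZD", "LD", "MD", "HD", "CD"]

def fuse_bbas(bbas, weights):
    """Weighted-average combination of basic belief assignments:
    m(A) = sum_i (w_i / sum_j w_j) * m_i(A)."""
    total = sum(weights)
    fused = {lvl: 0.0 for lvl in LEVELS}
    for bba, w in zip(bbas, weights):
        for lvl, mass in bba.items():
            fused[lvl] += (w / total) * mass
    return fused

def chep(fused, nominal_hep):
    """CHEP as the belief-weighted mix of the THERP dependence formulas."""
    n = nominal_hep
    therp = {"ZD": n, "LD": (1 + 19 * n) / 20, "MD": (1 + 6 * n) / 7,
             "HD": (1 + n) / 2, "CD": 1.0}
    return sum(fused[lvl] * therp[lvl] for lvl in LEVELS)

# two factor judgments (hypothetical), AHP weights 2:1
fused = fuse_bbas([{"LD": 0.7, "MD": 0.3}, {"MD": 0.6, "HD": 0.4}], [2, 1])
dependence_hep = chep(fused, 0.01)
```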

14.
Bin Li, Ming Li, Carol Smidts. Risk Analysis, 2005, 25(4): 1061-1077
Probabilistic risk assessment (PRA) is a methodology to assess the probability of failure or success of a system's operation. PRA has proved to be a systematic, logical, and comprehensive technique for risk assessment. Software plays an increasing role in modern safety-critical systems, and a significant number of failures can be attributed to software failures. Unfortunately, current probabilistic risk assessment concentrates on representing the behavior of hardware systems, humans, and their contributions (to a limited extent) to risk, but neglects the contributions of software due to a lack of understanding of software failure phenomena. It is thus imperative to consider and model the impact of software to reflect the risk in current and future systems. The objective of our research is to develop a methodology to account for the impact of software on system failure that can be used in the classical PRA analysis process. A test-based approach for integrating software into PRA is discussed in this article. This approach includes identification of the software functions to be modeled in the PRA and modeling of the software contributions in the event sequence diagram (ESD) and the fault tree. The approach also introduces the concepts of input tree and output tree and proposes a quantification strategy that uses a software safety testing technique. The method is applied to an example system, PACS.

15.
Application of Human Reliability Analysis to Nursing Errors in Hospitals
Adverse events in hospitals, such as in surgery, anesthesia, radiology, intensive care, internal medicine, and pharmacy, are of worldwide concern and it is important, therefore, to learn from such incidents. There are currently no appropriate tools based on state-of-the-art models available for the analysis of large bodies of medical incident reports. In this study, a new model was developed to facilitate medical error analysis in combination with quantitative risk assessment. This model enables detection of the organizational factors that underlie medical errors, and the expedition of decision making in terms of necessary action. Furthermore, it defines medical tasks as module practices and uses a unique coding system to describe incidents. This coding system has seven vectors for error classification: patient category, working shift, module practice, linkage chain (error type, direct threat, and indirect threat), medication, severity, and potential hazard. This mathematical formulation permits the derivation of two parameters: error rates for module practices and weights for the aforementioned seven elements. The error rate of each module practice was calculated by dividing the annual number of incident reports citing that module practice by the annual number of times the module practice was performed. The weight of a given element was calculated by summing the incident-report error rates for the element of interest. The model was applied specifically to nursing practices in six hospitals over a year; 5,339 incident reports, covering a total of 63,294,144 module practices, were analyzed. Quality assurance (QA) of the model was performed by checking the records of the numbers of practices and the reproducibility of the analysis of medical incident reports; for both items, QA confirmed the legitimacy of the model. Error rates for all module practices were approximately of the order of 10^-4 in all hospitals. Three major organizational factors were found to underlie medical errors: "violation of rules" with a weight of 826 × 10^-4, "failure of labor management" with a weight of 661 × 10^-4, and "defects in the standardization of nursing practices" with a weight of 495 × 10^-4.
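The two parameters defined above (module-practice error rates and element weights) reduce to simple ratios and sums. The counts below are hypothetical, chosen only to reproduce the reported 10^-4 order of magnitude.

```python
def error_rate(incidents, practices):
    """Annual incident reports divided by annual number of practices."""
    return incidents / practices

# hypothetical annual counts for three module practices:
# (incident reports, times the practice was performed)
modules = {"injection": (120, 900_000),
           "dispensing": (60, 400_000),
           "transfusion": (15, 50_000)}

rates = {m: error_rate(i, p) for m, (i, p) in modules.items()}

# weight of an element = sum of the error rates of the incident reports
# tagged with that element (here: reports citing "violation of rules")
weight_violation = rates["injection"] + rates["transfusion"]
```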

16.
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
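The linear-combination property and the series-branch reduction stated above are easy to express directly; a minimal sketch, with hypothetical loss figures:

```python
def expected_loss_given_failure(modes):
    """modes: (conditional probability, expected loss) pairs for mutually
    exclusive failure modes; the probabilities must sum to one."""
    assert abs(sum(p for p, _ in modes) - 1.0) < 1e-9
    return sum(p * loss for p, loss in modes)

def series_equivalent_hazard_rate(rates):
    """A series branch of components with mutually exclusive failures can
    be reduced to one component whose hazard rate is the sum."""
    return sum(rates)

# hypothetical: mode A (70% of failures, $1,000) vs mode B (30%, $10,000)
loss = expected_loss_given_failure([(0.7, 1_000), (0.3, 10_000)])
```

The example makes the paper's central point concrete: a design that trades a few cheap mode-A failures for even one extra mode-B failure can lose money despite higher overall reliability.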

17.
This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach together with the proposed accident development framework are applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice.
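For orientation, the basic CREAM that the modified version builds on assigns a control mode from the balance of improving and reducing common performance conditions (CPCs), and each mode carries a nominal HEP interval. The intervals below are the commonly cited ones; the mode-selection rule here is a deliberately simplified stand-in for Hollnagel's two-dimensional CPC diagram, not the real boundary.

```python
CONTROL_MODES = {  # commonly cited CREAM failure-probability intervals
    "strategic":     (0.5e-5, 1e-2),
    "tactical":      (1e-3, 1e-1),
    "opportunistic": (1e-2, 0.5),
    "scrambled":     (1e-1, 1.0),
}

def control_mode(n_improved, n_reduced):
    """Simplified stand-in for the CPC diagram: the counts of CPCs rated
    as improving vs. reducing performance select the control mode."""
    if n_reduced > 5:
        return "scrambled"
    if n_reduced > 2:
        return "opportunistic"
    if n_improved > 3 and n_reduced == 0:
        return "strategic"
    return "tactical"

mode = control_mode(n_improved=1, n_reduced=4)
hep_low, hep_high = CONTROL_MODES[mode]
```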

18.
A Preventive Maintenance Model Based on Delay-Time Theory, with a Case Study
This paper addresses how to determine a reasonable maintenance inspection interval in equipment maintenance decision making when preventive maintenance inspection data are lacking. First, through a techno-economic analysis of preventive maintenance, a preventive maintenance model relating the inspection interval to total downtime is proposed. Second, based on delay-time maintenance theory, a statistical model is built from failure records and estimates of the preventive maintenance inspection data, and is used to compute the expected number of failures within an inspection interval. After the statistical model is verified by computer simulation, maximum likelihood estimation is used to estimate the relevant parameters, including the defect arrival rate, the delay-time distribution, and the probability that an inspection detects a defect. Finally, a case study applies the estimated parameters and the preventive maintenance model to compute the optimal maintenance interval.
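The delay-time calculation described above (expected failures within an inspection interval, then the interval minimizing downtime per unit time) can be sketched numerically. The exponential delay distribution and all parameter values below are hypothetical, not the case study's estimates.

```python
import math

def expected_failures(T, k, mean_delay, n=2000):
    """Delay-time result: E[N_f(T)] = k * integral_0^T F(T - u) du, where k
    is the defect arrival rate and F the delay-time cdf (exponential here).
    A defect arising at time u fails before the next inspection iff its
    delay is shorter than T - u."""
    F = lambda h: 1 - math.exp(-h / mean_delay)
    du = T / n
    return k * sum(F(T - (i + 0.5) * du) for i in range(n)) * du

def downtime_per_unit_time(T, k, mean_delay, d_fail, d_insp):
    """Expected downtime rate over an inspection cycle of length T days."""
    return (expected_failures(T, k, mean_delay) * d_fail + d_insp) / T

# hypothetical: 0.05 defects/day, 30-day mean delay, 1.0 day of downtime
# per failure, 0.1 day per inspection
best_T = min(range(5, 201, 5),
             key=lambda T: downtime_per_unit_time(T, 0.05, 30, 1.0, 0.1))
```

Shorter intervals catch more defects before they become failures but pay the inspection downtime more often; the optimum balances the two.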

19.
For safe innovation, knowledge on potential human health impacts is essential. Ideally, these impacts are considered within a larger life-cycle-based context to support sustainable development of new applications and products. A methodological framework that accounts for human health impacts caused by inhalation of engineered nanomaterials (ENMs) in an indoor air environment has been previously developed. The objectives of this study are as follows: (i) evaluate the feasibility of applying the characterization factor (CF) framework to nanoparticle (NP) exposure in the workplace based on currently available data; and (ii) supplement any resulting knowledge gaps with methods and data from the life cycle approach and human risk assessment (LICARA) project to develop a modified case-specific version of the framework that will enable near-term inclusion of NP human health impacts in life cycle assessment (LCA), using a case study involving nanoscale titanium dioxide (nanoTiO2). The intent is to enhance typical LCA with elements of regulatory risk assessment, including its more detailed treatment of uncertainty. The proof-of-principle demonstration of the framework highlighted the lack of available data on both the workplace emissions and the human health effects of ENMs that is needed to calculate generalizable characterization factors using common human health impact assessment practices in LCA. The alternative approach of using intake fractions derived from workplace air concentration measurements and effect factors based on best-available toxicity data supported the current case-by-case approach for assessing the human health life cycle impacts of ENMs. Ultimately, the proposed framework and calculations demonstrate the potential utility of integrating elements of risk assessment with LCA for ENMs once the data are available.

20.
The use of autonomous underwater vehicles (AUVs) for various applications has grown with maturing technology and improved accessibility. The deployment of AUVs for under-ice marine science research in the Antarctic is one such example. However, a higher risk of AUV loss is present during such endeavors due to the extreme conditions of the Antarctic. A thorough analysis of risks is therefore crucial for formulating effective risk control policies and achieving a lower risk of loss. Existing risk analysis approaches have focused predominantly on the technical aspects, as well as on identifying static cause-and-effect relationships in the chain of events leading to AUV loss. Comparatively, the complex interrelationships between risk variables and other aspects of risk, such as human error, have received much less attention. In this article, a systems-based risk analysis framework facilitated by the system dynamics methodology is proposed to overcome these shortfalls. To demonstrate the usefulness of the framework, it is applied to an actual AUV program to examine the occurrence of human error during Antarctic deployment. Simulation of the resulting risk model showed an overall decline in the human error incident rate with increasing experience of the AUV team. Scenario analysis based on the example provided policy recommendations in the areas of training, practice runs, recruitment policy, and the setting of risk tolerance levels. The proposed risk analysis framework is pragmatically useful for risk analysis of future AUV programs to ensure the sustainability of operations, facilitating both better control and monitoring of risk.
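The reported decline in incident rate with team experience is the kind of feedback a stock-and-flow model captures. Below is a minimal Euler-integrated sketch with hypothetical parameters, not the article's calibrated model: experience is the stock, deployments are the inflow, and the error rate falls as experience accumulates.

```python
def simulate(years=5.0, dt=0.1):
    """Stock: team experience (deployments completed). The human-error
    incident rate per deployment falls as experience accumulates."""
    experience = 0.0
    deployments_per_year = 4
    base_rate = 0.3        # incidents per deployment for a novice team
    learning = 0.5         # strength of the learning feedback
    rates = []
    steps = int(round(years / dt))
    for _ in range(steps):
        rates.append(base_rate / (1 + learning * experience))
        experience += deployments_per_year * dt   # inflow to the stock
    return rates

rates = simulate()
```

Scenario analysis then amounts to rerunning the simulation with altered parameters, e.g., more practice runs raising the inflow, or staff turnover draining the stock.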


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)