Similar Documents
20 similar documents found (search time: 46 ms)
1.
Ola Svenson, Risk Analysis, 1991, 11(3): 499-507
This study develops a theoretical model for accident evolutions and how they can be arrested. The model describes the interaction between technical and human-organizational systems which may lead to an accident. The analytic tool provided by the model gives equal weight to both these types of systems and necessitates simultaneous and interactive accident analysis by engineers and human factors specialists. It can be used in predictive safety analyses as well as in post hoc incident analyses. To illustrate this, the AEB model is applied to an incident reported by the nuclear industry in Sweden. In general, application of the model will indicate where and how safety can be improved, and it also raises questions about issues such as the cost, feasibility, and effectiveness of different ways of increasing safety.

2.
Probabilistic risk analysis, based on the identification of failure modes, points to technical malfunctions and operator errors that can be direct causes of system failure. Yet component failures and operator errors are often rooted in management decisions and organizational factors. Extending the analysis to identify these factors allows more effective risk management strategies. It also permits a more realistic assessment of the overall failure probability. An implicit assumption that is often made in PRA is that, on the whole, the system has been designed according to specified norms and constructed as designed. Such an analysis tends to overemphasize scenarios in which the system fails because it is subjected to a much higher load than those for which it was designed. In this article, we find that, for the case of jacket-type offshore platforms, this class of scenarios contributes only about 5% of the failure probability. We link the PRA inputs to decisions and errors during the three phases of design, construction, and operation of platforms, and we assess the contribution of different types of error scenarios to the overall probability of platform failure. We compute the benefits of improving the design review, and we find that, given the costs involved, improving the review process is a more efficient way to increase system safety than reinforcing the structure.

3.
Recent trends indicate that vehicle miles traveled by large trucks are increasing at a higher rate than for other vehicles. The resulting competition between large trucks and other vehicles for highway space can be expected to result in more multivehicle collisions involving large trucks. This paper presents the results of an investigation of the causes and mechanisms of large-vehicle accidents. A fault-tree analysis of large-vehicle accidents identifies the individual roles played by driver, vehicle, and environmental factors, as well as their interactions in the accident mechanism. Using accident data for 1984-1986, the probabilities of the different basic events in the fault tree were assessed, and the most likely events leading to a large-vehicle accident, as well as the most effective countermeasures, were then identified. The results indicate that the most prevalent form of driver-related failure occurs when a normal driver makes an error in judgment and is unsuccessful in his or her evasive action; the predominant vehicle-related failure is equipment failure; and the predominant environment-related failure is excessive demand on driver and vehicle performance created by environmental or roadway factors.
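The paper's actual fault tree and its 1984-1986 event probabilities are not reproduced here, but a minimal sketch of how basic-event probabilities combine through AND/OR gates (assuming independent events, with purely illustrative numbers) looks like this:

```python
# Minimal fault-tree sketch: basic-event probabilities combined through
# AND/OR gates under an independence assumption. All probabilities are
# illustrative, not the paper's 1984-1986 accident data.

def p_or(*ps):
    """P(at least one event occurs) for independent events."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def p_and(*ps):
    """P(all events occur) for independent events."""
    out = 1.0
    for p in ps:
        out *= p
    return out

# Hypothetical basic events:
p_judgment_error = 0.02   # driver misjudges a traffic situation
p_evasion_fails  = 0.30   # evasive action is unsuccessful
p_equipment      = 0.005  # vehicle equipment failure
p_env_demand     = 0.01   # excessive environmental/roadway demand

p_driver_branch = p_and(p_judgment_error, p_evasion_fails)  # AND gate
p_accident = p_or(p_driver_branch, p_equipment, p_env_demand)  # top OR gate
print(f"P(accident) = {p_accident:.4f}")
```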

4.
The author contends that a previous Risk Analysis article overemphasized the pitfalls of incorporating redundancy into designs. Relevant aspects of that article are reviewed and commented upon, then the potentials and pitfalls of redundancy in systems and procedures are more broadly discussed. To provide a solid foundation for that discussion, some definitions for systems risk analysis terminology are presented. It is shown that pairs and larger sets of related failures (the physical causes of shortfalls in redundancy effectiveness) can be divided into two types: (1) cascading/induced failures and (2) common-external-cause failures. Each type has its own physical characteristics and implications for mathematical modeling. Service experience with large-commercial-airplane jet-engine propulsion systems is used to illustrate the two types of related failures. Finally, an overview is provided of event-sequence analysis, an alternative approach to systems risk analysis. When the possibility of related failures of mutually-redundant system elements must be accounted for, event-sequence analysis can usually do that better than fault-tree analysis.

5.
Human factors are widely regarded as major contributors to failures of maritime accident prevention systems. Conventional methods for human factors assessment, especially quantitative techniques such as fault trees and bow-ties, are static and cannot handle models involving uncertainty, which limits their application to human factors risk analysis. To alleviate these drawbacks, the present study introduces a new human factors analysis framework called the multidimensional analysis model of accident causes (MAMAC), which combines the human factors analysis and classification system with business process management. In addition, intuitionistic fuzzy set theory and Bayesian networks are integrated into MAMAC to form a comprehensive dynamic human factors analysis model characterized by flexibility and uncertainty handling. The proposed model is tested on maritime accident scenarios from a sand carrier accident database in China to investigate the human factors involved, and the 10 most highly contributing primary events associated with the human factors leading to sand carrier accidents are identified. According to the results of this study, direct human factors, classified as unsafe acts, are not a focus for maritime investigators and scholars; instead, unsafe preconditions and unsafe supervision rank as the top two considerations for human factors analysis, especially supervision failures of shipping companies and ship owners. Moreover, potential safety countermeasures for the most highly contributing human factors are proposed in this article. Finally, an application of the proposed model verifies its advantages in calculating the failure probability of accidents induced by human factors.
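To make the Bayesian-network side of such a model concrete, the toy sketch below marginalizes over an HFACS-style causal chain (supervision failure, unsafe precondition, unsafe act, accident) by brute-force enumeration. All conditional probabilities are assumptions for illustration, not values from the sand carrier database, and the fuzzy-set layer of MAMAC is omitted.

```python
# Toy Bayesian-network sketch of an HFACS-style chain:
# supervision failure -> unsafe precondition -> unsafe act -> accident.
# All probabilities below are illustrative assumptions.

P_sup = 0.10                        # P(supervision failure)
P_pre = {True: 0.50, False: 0.05}   # P(precondition | supervision state)
P_act = {True: 0.40, False: 0.02}   # P(unsafe act | precondition state)
P_acc = {True: 0.30, False: 0.001}  # P(accident | unsafe act state)

def marginal_accident():
    """P(accident) by summing over all states of the chain."""
    total = 0.0
    for sup in (True, False):
        p_s = P_sup if sup else 1.0 - P_sup
        for pre in (True, False):
            p_p = P_pre[sup] if pre else 1.0 - P_pre[sup]
            for act in (True, False):
                p_a = P_act[pre] if act else 1.0 - P_act[pre]
                total += p_s * p_p * p_a * P_acc[act]
    return total

print(f"P(accident) = {marginal_accident():.5f}")
```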

6.
Typical forecast-error measures such as mean squared error, mean absolute deviation, and bias are generally accepted indicators of forecasting performance. However, the eventual cost impact of forecast errors on system performance, and the degree to which cost consequences are explained by typical error measures, have not been studied thoroughly. The present paper demonstrates that these typical error measures often are not good predictors of cost consequences in material requirements planning (MRP) settings. MRP systems rely directly on the master production schedule (MPS) to specify gross requirements; these MRP environments receive forecast errors indirectly, when the errors create inaccuracies in the MPS. Our study results suggest that within MRP environments the predictive capabilities of forecast-error measures are contingent on the lot-sizing rule and the product component structure. When forecast errors and MRP system costs are coanalyzed, bias emerges as having reasonable predictive ability. In further investigations of bias, loss functions are evaluated to explain the MRP cost consequences of forecast errors. Estimating the loss functions of forecast errors through regression analysis demonstrates the superiority of loss functions over typical forecast-error measures in the MPS.
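For reference, the sketch below computes the three error measures named above side by side, and appends a hypothetical quadratic loss function of the kind the paper estimates by regression; the loss coefficients are assumptions, not fitted values from the study.

```python
# The typical forecast-error measures, computed on a small synthetic series,
# plus a hypothetical quadratic loss linking error to MRP cost.
import numpy as np

actual   = np.array([100, 120,  90, 110, 105], dtype=float)
forecast = np.array([ 95, 130,  85, 115, 100], dtype=float)
errors = forecast - actual

mse  = np.mean(errors ** 2)     # mean squared error
mad  = np.mean(np.abs(errors))  # mean absolute deviation
bias = np.mean(errors)          # signed bias

# Hypothetical loss function: cost = a*error^2 + b*error
# (b != 0 makes over- and under-forecasting cost differently).
a, b = 2.0, 15.0
mrp_cost = np.sum(a * errors ** 2 + b * errors)
print(f"MSE={mse:.1f}  MAD={mad:.1f}  bias={bias:.1f}  cost={mrp_cost:.0f}")
```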

7.
The use of autonomous underwater vehicles (AUVs) for various applications has grown with maturing technology and improved accessibility. The deployment of AUVs for under-ice marine science research in the Antarctic is one such example. However, a higher risk of AUV loss is present during such endeavors due to the extreme conditions of the Antarctic. A thorough analysis of risks is therefore crucial for formulating effective risk control policies and achieving a lower risk of loss. Existing risk analysis approaches have focused predominantly on the technical aspects and on identifying static cause-and-effect relationships in the chain of events leading to AUV loss; the complex interrelationships between risk variables, and other aspects of risk such as human error, have received much less attention. In this article, a systems-based risk analysis framework facilitated by system dynamics methodology is proposed to overcome these shortfalls. To demonstrate the usefulness of the framework, it is applied to an actual AUV program to examine the occurrence of human error during Antarctic deployment. Simulation of the resulting risk model showed an overall decline in the human error incident rate as the experience of the AUV team increased. Scenario analysis based on the example provided policy recommendations in the areas of training, practice runs, recruitment policy, and setting of the risk tolerance level. The proposed risk analysis framework is pragmatically useful for risk analysis of future AUV programs to ensure the sustainability of operations, facilitating both better control and better monitoring of risk.
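The article's full system dynamics model is not reproduced here; the fragment below is only a minimal stock-and-flow sketch of its qualitative result, with team experience as a stock whose accumulation drives the error rate down. Every parameter value and the functional form of the error-rate link are assumptions.

```python
# Minimal system-dynamics sketch: an "experience" stock integrated by Euler
# stepping, with a human-error incident rate that falls as experience grows.
# Parameters and functional forms are illustrative assumptions.

dt, horizon = 0.25, 40.0    # time step and horizon (months)
experience = 1.0            # initial experience stock
deploy_rate = 0.8           # experience gained per month of deployments
forgetting = 0.02           # fractional experience decay per month
base_error_rate = 0.5       # incidents/month for a novice team

t = 0.0
while t < horizon:
    inflow = deploy_rate
    outflow = forgetting * experience
    experience += (inflow - outflow) * dt    # stock integration
    error_rate = base_error_rate / (1.0 + 0.3 * experience)
    t += dt

print(f"experience={experience:.1f}, error rate={error_rate:.3f} incidents/month")
```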

8.
Application of Human Reliability Analysis to Nursing Errors in Hospitals
Adverse events in hospitals, such as in surgery, anesthesia, radiology, intensive care, internal medicine, and pharmacy, are of worldwide concern, and it is therefore important to learn from such incidents. There are currently no appropriate tools based on state-of-the-art models available for the analysis of large bodies of medical incident reports. In this study, a new model was developed to facilitate medical error analysis in combination with quantitative risk assessment. This model enables detection of the organizational factors that underlie medical errors, and expedites decision making about necessary action. Furthermore, it defines medical tasks as module practices and uses a unique coding system to describe incidents. This coding system has seven vectors for error classification: patient category, working shift, module practice, linkage chain (error type, direct threat, and indirect threat), medication, severity, and potential hazard. This mathematical formulation permitted us to derive two parameters: error rates for module practices and weights for the aforementioned seven elements. The error rate of each module practice was calculated by dividing the annual number of incident reports for that module practice by the annual number of times the module practice was performed. The weight of a given element was calculated by summing the incident report error rates for the element of interest. The model was applied specifically to nursing practices in six hospitals over a year; 5,339 incident reports, covering a total of 63,294,144 module practices, were analyzed. Quality assurance (QA) of the model was performed by checking the records of the quantities of practices and the reproducibility of the analysis of medical incident reports; for both items, QA guaranteed the legitimacy of the model. Error rates for all module practices were approximately of the order of 10^-4 in all hospitals. Three major organizational factors were found to underlie medical errors: "violation of rules" with a weight of 826 x 10^-4, "failure of labor management" with a weight of 661 x 10^-4, and "defects in the standardization of nursing practices" with a weight of 495 x 10^-4.
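The two parameters defined above reduce to simple ratios and sums; the sketch below computes them on made-up counts (the hospital data are not public here), reproducing the ~10^-4 order of magnitude the study reports.

```python
# Error rate per module practice = annual incident reports / annual practices.
# Element weight = sum of error rates over the module practices appearing in
# reports tagged with that element. All counts below are hypothetical.

incident_reports = {"injection": 120, "transfusion": 15, "vital-signs": 40}
practices_done   = {"injection": 900_000, "transfusion": 60_000,
                    "vital-signs": 2_400_000}

error_rate = {m: incident_reports[m] / practices_done[m]
              for m in incident_reports}
# e.g. injection: 120 / 900_000 = 1.33e-4 -- the ~1e-4 order reported

# Hypothetical: reports tagged "violation of rules" involve these practices.
tagged_with_violation = ["injection", "transfusion"]
weight_violation = sum(error_rate[m] for m in tagged_with_violation)

print(error_rate)
print(f"weight('violation of rules') = {weight_violation:.2e}")
```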

9.
This paper demonstrates how qualitative analysis can be a novel means of investigating theories of error and causation in natural gas pipeline incidents. Qualitative analysis offers unique opportunities to understand process, interactions, and the role of context in identifying active errors and latent conditions in incident causation. Through the coding of text from 24 onshore natural gas pipeline incident reports on leaks and explosions in the United States and Canada, our findings reveal a proportion of active and latent errors consistent with other hazardous infrastructure contexts (roughly a 3:1 latent-to-active ratio across 817 coded errors). These findings underscore the robustness of extant error theory and support the argument for documenting multiple, connected causes of disaster in aggregate. The conclusions highlight the utility of in-depth case analyses and critique current pipeline incident database aggregation. Our interpretation provides a means to convey complex causation in aggregate form, thus enabling more nuanced future qualitative and quantitative analyses.

10.
The accident that occurred on board the offshore platform Piper Alpha in July 1988 killed 167 people and cost billions of dollars in property damage. It was caused by a massive fire, which was not the result of an unpredictable "act of God" but of an accumulation of errors and questionable decisions. Most of them were rooted in the organization, its structure, procedures, and culture. This paper analyzes the accident scenario using the risk analysis framework, determines which human decisions and actions influenced the occurrence of the basic events, and then identifies the organizational roots of these decisions and actions. These organizational factors are generalizable to other industries and engineering systems. They include flaws in the design guidelines and design practices (e.g., tight physical couplings or insufficient redundancies), misguided priorities in the management of the tradeoff between productivity and safety, mistakes in the management of the personnel on board, and errors of judgment in the process by which financial pressures are applied on the production sector (i.e., the oil companies' definition of profit centers) resulting in deficiencies in inspection and maintenance operations. This analytical approach allows identification of risk management measures that go beyond the purely technical (e.g., add redundancies to a safety system) and also include improvements of management practices.

11.
Complex engineered systems, such as nuclear reactors and chemical plants, have the potential for catastrophic failure with disastrous consequences. In recent years, human and management factors have been recognized as frequent root causes of major failures in such systems. However, classical probabilistic risk analysis (PRA) techniques do not account for the underlying causes of these errors because they focus on the physical system and do not explicitly address the link between components' performance and organizational factors. This paper describes a general approach for addressing the human and management causes of system failure, called the SAM (System-Action-Management) framework. Beginning with a quantitative risk model of the physical system, SAM expands the scope of analysis to incorporate first the decisions and actions of individuals that affect the physical system. SAM then links management factors (incentives, training, policies and procedures, selection criteria, etc.) to those decisions and actions. The focus of this paper is on four quantitative models of action that describe this last relationship. These models address the formation of intentions for action and their execution as a function of the organizational environment. Intention formation is described by three alternative models: a rational model, a bounded rationality model, and a rule-based model. The execution of intentions is then modeled separately. These four models are designed to assess the probabilities of individual actions from the perspective of management, thus reflecting the uncertainties inherent to human behavior. The SAM framework is illustrated for a hypothetical case of hazardous materials transportation. This framework can be used as a tool to increase the safety and reliability of complex technical systems by modifying the organization, rather than, or in addition to, re-designing the physical system.

12.
This paper opens a new avenue for investigation of quality issues in services. We take the viewpoint that a substantial portion of service failures is the result of human error in the delivery process. Drawing upon the Generic Error Modeling System (GEMS) from the cognitive science literature, we develop a framework for understanding the role of human error in service failures. An empirical investigation assesses the applicability of this framework to services, identifies which error mechanisms are important sources of service failure, and clarifies how the different roles of customers and providers affect the errors made by each.

13.
This article proposes a methodology for incorporating electrical component failure data into the human error assessment and reduction technique (HEART) for estimating human error probabilities (HEPs). The existing HEART method contains factors known as error-producing conditions (EPCs) that adjust a generic HEP to a more specific situation being assessed. The selection and proportioning of these EPCs are at the discretion of an assessor, and are therefore subject to the assessor's experience and potential bias. This dependence on expert opinion is prevalent in similar HEP assessment techniques used in numerous industrial areas. The proposed method incorporates factors based on observed trends in electrical component failures to produce a revised HEP that can trigger risk mitigation actions more effectively based on the presence of component categories or other hazardous conditions that have a history of failure due to human error. The data used for the additional factors are a result of an analysis of failures of electronic components experienced during system integration and testing at NASA Goddard Space Flight Center. The analysis includes the determination of root failure mechanisms and trend analysis. The major causes of these defects were attributed to electrostatic damage, electrical overstress, mechanical overstress, or thermal overstress. These factors representing user-induced defects are quantified and incorporated into specific hardware factors based on the system's electrical parts list. This proposed methodology is demonstrated with an example comparing the original HEART method and the proposed modified technique.
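For orientation, the sketch below implements the standard HEART calculation: a generic task's nominal HEP is scaled by each EPC's maximum effect weighted by the assessor's assessed proportion of affect (APOA). The trailing hardware factor only gestures at the article's modification; its value is an assumption, not NASA's.

```python
# Standard HEART: HEP = nominal_HEP * product over EPCs of
# ((max_effect - 1) * APOA + 1). Values are illustrative.

nominal_hep = 0.003       # generic task type, illustrative
epcs = [                  # (maximum EPC multiplier, APOA in [0, 1])
    (11.0, 0.4),          # e.g., unfamiliarity with the situation
    (4.0,  0.2),          # e.g., time shortage
]

hep = nominal_hep
for max_effect, apoa in epcs:
    hep *= (max_effect - 1.0) * apoa + 1.0

# Hypothetical hardware factor standing in for the article's component-
# failure-trend adjustment (ESD/overstress history on the parts list):
hardware_factor = 1.5
modified_hep = hep * hardware_factor
print(f"HEART HEP={hep:.4f}, modified HEP={modified_hep:.4f}")
```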

14.
The paper develops a statistical procedure for predicting the safety performance of motor carriers based on characteristics of firms and results of two government safety enforcement programs. One program is an audit of management safety practices, and the other is a program to inspect drivers and vehicles at the roadside for compliance with safety regulations. The technique can be used to provide safety regulators with an empirical approach to identify the most dangerous firms and provide a priority list of firms against which educational and enforcement actions should be initiated. The government needs to use such an approach rather than directly observing accident rates because the most dangerous firms are generally small and, despite relatively high accident rates, accidents remain rare events. The technique uses negative-binomial regression procedures on a dataset of 20,000 firms. The definition of poor performance in roadside inspection is based on both the rate of inspections per fleet mile and the average number of violations found during an inspection. This choice was made because selection for inspection has both a random and nonrandom component. The results of the study suggest that both of the government's safety programs help identify the most dangerous firms. The 2.5% of firms that do poorly in both programs have an average accident rate twice that of the mean for all other firms.
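A hedged sketch of such a negative-binomial accident model follows: accident counts regressed on audit and inspection indicators, with fleet mileage as an exposure offset. The data are synthetic, the statsmodels library is assumed, and the paper's actual covariates and specification are not reproduced.

```python
# Negative-binomial count regression with an exposure offset, on synthetic
# carrier data. Positive coefficients would flag higher-risk firms.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
poor_audit   = rng.integers(0, 2, n)     # failed the safety-management audit
poor_inspect = rng.integers(0, 2, n)     # high roadside violation rate
fleet_miles  = rng.uniform(1e5, 1e7, n)  # annual fleet mileage (exposure)

# Synthetic accident counts with higher rates for poor performers:
true_rate = 2e-6 * np.exp(0.5 * poor_audit + 0.7 * poor_inspect)
accidents = rng.poisson(true_rate * fleet_miles)

X = sm.add_constant(np.column_stack([poor_audit, poor_inspect]))
model = sm.GLM(accidents, X,
               family=sm.families.NegativeBinomial(),
               offset=np.log(fleet_miles))
result = model.fit()
print(result.params)
```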

15.
This article studies a general type of initiating event in critical infrastructures, called spatially localized failures (SLFs), defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as a bomb or explosive assault, or as a generalized model of the impact of localized natural hazards on large-scale systems. This article introduces three SLF models: node-centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes an SLF-induced vulnerability analysis method with three aspects: identification of critical locations; comparison of infrastructure vulnerability to random failures, topologically localized failures, and SLFs; and quantification of infrastructure information value. The proposed SLF-induced vulnerability analysis method is finally applied to the Chinese railway system and can easily be adapted to analyze other critical infrastructures for valuable protection suggestions.
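As a toy version of the circle-shaped SLF model, the sketch below fails every node within a radius of a hazard center on a random geometric graph (standing in for a real infrastructure topology) and measures the drop in the largest connected component; networkx is assumed, and the vulnerability metric is a common proxy rather than the article's own.

```python
# Circle-shaped SLF sketch: remove all nodes within radius r of a center
# and compare the largest connected component before and after.
import math
import networkx as nx

G = nx.random_geometric_graph(200, 0.12, seed=1)  # nodes carry 'pos' attrs
center, radius = (0.5, 0.5), 0.15                 # hypothetical hazard circle

failed = [n for n, d in G.nodes(data=True)
          if math.dist(d["pos"], center) <= radius]
H = G.copy()
H.remove_nodes_from(failed)

before = max(len(c) for c in nx.connected_components(G))
after = max((len(c) for c in nx.connected_components(H)), default=0)
print(f"{len(failed)} nodes failed; largest component {before} -> {after}")
```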

16.
A Monte Carlo method is presented to study the effect of systematic and random errors on computer models that deal mainly with experimental data. A common assumption in this type of model (linear and nonlinear regression, and nonregression computer models) involving experimental measurements is that the error sources are mainly random and independent, with no constant background (systematic) errors. However, comparisons of different experimental data sources often reveal evidence of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for the output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions of the random and systematic errors. The main objectives are to detect which error source has stochastic dominance over the uncertainty propagation and to assess their combined effect on the output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has the more significant effect on the performance of the model. It was also found that systematic or calibration errors, if present, cannot be neglected in uncertainty analysis of models dependent on experimental measurements, such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factor selection, modeling, experimental data measurement, and experimental design.
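The core mechanism is easy to show in a few lines: a systematic (calibration) error is drawn once per simulated experiment and shared by all of its measurements, while random errors are drawn independently per measurement, so random errors average out and systematic ones do not. The sketch below uses a trivial output (the sample mean) under assumed normal error models.

```python
# Monte Carlo with separate systematic and random error definitions.
# The output distribution's spread shows which error source dominates.
import numpy as np

rng = np.random.default_rng(42)
n_sims, n_meas = 10_000, 20
true_value = 50.0

def simulate(sys_sd, rand_sd):
    systematic = rng.normal(0.0, sys_sd, size=(n_sims, 1))  # one bias/experiment
    random_err = rng.normal(0.0, rand_sd, size=(n_sims, n_meas))
    measurements = true_value + systematic + random_err
    return measurements.mean(axis=1)  # model output: the mean

out_random_only = simulate(0.0, 2.0)
out_both        = simulate(2.0, 2.0)
print(f"sd, random only:     {out_random_only.std():.2f}")  # ~2/sqrt(20)
print(f"sd, with systematic: {out_both.std():.2f}")  # bias does not average out
```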

17.
Employee-based errors result in quality defects that can often impact customer satisfaction. This study examined the effects of a process change and feedback system intervention on error rates of 3 teams of retail furniture distribution warehouse workers. Archival records of error codes were analyzed and aggregated as the measure of quality. The intervention consisted of a process change where teams of 5 employees who had previously been assigned a specific role within the process were cross-trained to know and help with other team members' functions. Additionally, these teams were given performance feedback on an immediate, daily, and weekly basis. Team A reduced mean errors from 7.47 errors per week during baseline to 3.53 errors per week during the intervention phase. Team B experienced a reduction in mean number of weekly errors from a baseline of 11.39 errors per week to 3.82 errors per week during the intervention phase. Team C did not experience significant error rate reduction.

18.
The recent occurrence of severe major accidents has brought to light flaws and limitations of the hazard identification (HAZID) processes performed for safety reports, as in the accidents at Toulouse (France) and Buncefield (UK), where the accident scenarios that occurred were not captured by HAZID techniques. This study focuses on this type of atypical accident scenario deviating from normal expectations. The main purpose is to analyze the examples of atypical accidents mentioned and to attempt to identify them through the application of a well-known methodology such as bow-tie analysis. To this end, the concept of an atypical event is precisely defined. Early warnings, causes, consequences, and occurrence mechanisms of the specific events are studied in depth, and general failures of risk assessment, management, and governance are isolated. These activities contribute to a set of targeted recommendations addressing transversal common deficiencies, and also demonstrate how better management of knowledge from the study of past events can support future risk assessment processes in the identification of atypical accident scenarios. Thus, a new methodology is not suggested; rather, a specific approach coordinating a more effective use of experience and available information is described, to show that lessons learned from past accidents can be effectively translated into actions of prevention.

19.
The performance of a probabilistic risk assessment (PRA) for a nuclear power plant is a complex undertaking, involving the assembly of an accident frequency analysis, an accident progression analysis, a source term analysis, and a consequence analysis. Each of these analyses is, in itself, quite complex. Uncertainties enter into a PRA from each of these analyses. An important focus in recent PRAs has been to incorporate these uncertainties at each stage of the analysis, propagate the subsequent uncertainties through the entire analysis, and include uncertainty in the final results. Monte Carlo procedures based on Latin hypercube sampling provide one way to perform propagations of this type. In this paper, the results of two complete and independent Monte Carlo calculations for a recently completed PRA for a nuclear power plant are compared as a means of providing empirical evidence on the repeatability of uncertainty and sensitivity analyses for large-scale PRA calculations. These calculations use the same variables and analysis structure with two independently generated Latin hypercube samples. The results of the two calculations show a high degree of repeatability for the analysis of a very complex system.
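The repeatability experiment is easy to miniaturize: propagate uncertainty through a toy model with two independently seeded Latin hypercube samples and compare the output statistics. The sketch below assumes SciPy 1.7+ for its `scipy.stats.qmc` module; the model is a stand-in, not the plant PRA.

```python
# Two independent Latin hypercube replicates through a toy model,
# compared on mean and 95th percentile of the output.
import numpy as np
from scipy.stats import qmc

def model(x):
    # Toy stand-in for the frequency/progression/consequence chain.
    return x[:, 0] ** 2 + 10.0 * np.sin(x[:, 1])

l_bounds, u_bounds = [0.0, 0.0], [2.0, np.pi]
for seed in (1, 2):  # two independently generated LHS samples
    sampler = qmc.LatinHypercube(d=2, seed=seed)
    x = qmc.scale(sampler.random(n=1000), l_bounds, u_bounds)
    y = model(x)
    print(f"replicate {seed}: mean={y.mean():.3f}, "
          f"95th pct={np.quantile(y, 0.95):.3f}")
```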

20.
This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. The modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in existing CREAM models. Moreover, this article views maritime accident development from a sequential perspective, proposing a scenario- and barrier-based framework to describe the maritime accident process. The evidential reasoning-based CREAM approach, together with the proposed accident development framework, is applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice.
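The article's evidential-reasoning extension is not reproduced here; as background only, the sketch below implements basic CREAM screening: rate the nine common performance conditions (CPCs), count how many reduce versus improve performance reliability, and map that pair to a contextual control mode with its nominal HEP interval. The region boundaries in `control_mode` are a rough simplification of the published control-mode diagram.

```python
# Basic CREAM screening sketch (not the article's modified method):
# CPC counts -> contextual control mode -> nominal HEP interval.

CONTROL_MODES = [
    ("strategic",     (0.5e-5, 1e-2)),
    ("tactical",      (1e-3,   1e-1)),
    ("opportunistic", (1e-2,   0.5)),
    ("scrambled",     (1e-1,   1.0)),
]

def control_mode(n_reduced, n_improved):
    # Simplified region boundaries, approximating the published diagram.
    if n_reduced <= 1 and n_improved >= 4:
        return CONTROL_MODES[0]
    if n_reduced <= 4:
        return CONTROL_MODES[1] if n_reduced <= 2 else CONTROL_MODES[2]
    return CONTROL_MODES[3]

# Nine CPC ratings for a hypothetical capsizing scenario:
ratings = ["reduced", "reduced", "not significant", "reduced",
           "not significant", "improved", "not significant",
           "reduced", "not significant"]
name, (lo, hi) = control_mode(ratings.count("reduced"),
                              ratings.count("improved"))
print(f"control mode: {name}, nominal HEP in [{lo:g}, {hi:g}]")
```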

