Similar Documents
20 similar documents found
1.
In human reliability analysis (HRA), dependence analysis refers to assessing the influence of the failure of the operators to perform one task on the failure probabilities of subsequent tasks. A commonly used approach is the technique for human error rate prediction (THERP). The assessment of the dependence level in THERP is a highly subjective judgment based on general rules about the influence of five main factors. A frequently used alternative method extends the THERP model with decision trees. Such trees should increase the repeatability of the assessments, but they simplify the relationships between the factors and the dependence level; moreover, the basis for these simplifications and for the resulting tree is difficult to trace. The aim of this work is to develop a method for dependence assessment in HRA that captures the rules used by experts to assess dependence levels and incorporates this knowledge into an algorithm and software tool to be used by HRA analysts. A fuzzy expert system (FES) underlies the method. The method and the associated expert elicitation process are demonstrated with a working model. The expert rules are elicited systematically and converted into a traceable, explicit, and computable model. Anchor situations are provided as guidance for the HRA analyst's judgment of the input factors. The expert model and the FES-based dependence assessment method make the expert rules accessible to the analyst in a usable and repeatable way, with an explicit and traceable basis.
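To make the FES idea concrete, here is a minimal Python sketch of a toy fuzzy rule base feeding the standard THERP conditional-HEP formulas. The factor names, membership functions, and rules are hypothetical placeholders, not the elicited expert model described above.

    # Illustrative sketch only: a toy fuzzy rule base for THERP-style dependence
    # assessment. Factor names, membership functions, and rules are hypothetical.

    def tri(x, a, b, c):
        """Triangular membership function."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def dependence_level(time_gap_min, similarity):
        """Map two (hypothetical) input factors to a THERP dependence level."""
        closeness = tri(time_gap_min, -1, 0, 15)      # "tasks close in time"
        high_sim = tri(similarity, 0.5, 1.0, 1.5)     # "tasks very similar"
        # Rule strengths (min for AND), one rule per level for brevity.
        strengths = {
            "high":     min(closeness, high_sim),
            "moderate": min(closeness, 1 - high_sim),
            "low":      min(1 - closeness, high_sim),
            "zero":     min(1 - closeness, 1 - high_sim),
        }
        return max(strengths, key=strengths.get)

    def conditional_hep(hep, level):
        """Standard THERP conditional-HEP formulas for each dependence level."""
        return {
            "zero": hep,
            "low": (1 + 19 * hep) / 20,
            "moderate": (1 + 6 * hep) / 7,
            "high": (1 + hep) / 2,
            "complete": 1.0,
        }[level]

    level = dependence_level(time_gap_min=2, similarity=0.9)
    print(level, conditional_hep(0.01, level))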

2.
Yu Fan-Jang, Hwang Sheue-Ling, Huang Yu-Hao. Risk Analysis, 1999, 19(3): 401-415.
In the design, development, and manufacturing stages of industrial products, engineers usually focus on the problems caused by hardware or software but pay less attention to problems caused by human error, which may significantly affect system reliability and safety. Even when operating procedures are strictly followed, human error may still occur occasionally. Among the influencing factors, the inappropriate design of a standard operation procedure (SOP) or standard assembly procedure (SAP) is an important and latent cause of unexpected results during human operation. To reduce the probability and effects of these unexpected behaviors in the industrial work process, overall evaluation of SOP or SAP quality has become an essential task. The human error criticality analysis (HECA) method was developed to identify the potentially critical problems caused by human error in a human operation system. The method performs task analysis on the basis of the operation procedure (e.g., the SOP), analyzes the human error probability (HEP) for each operation step, and assesses its error effects on the whole system. The results of the analysis show the interrelationships among critical human tasks, critical human error modes, and the human reliability information of the operation system. To examine the robustness of the method, a case study of initiator assembly tasks was conducted. The results show that the HECA method is practicable for evaluating operation procedures, and that the resulting information is valuable for identifying ways to upgrade human reliability and system safety for human tasks.
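As a rough illustration of the kind of criticality ranking such an analysis produces, the sketch below multiplies an assumed HEP per step by an assumed severity score and ranks the steps. The steps and numbers are invented; the actual HECA procedure in the paper is more elaborate.

    # Illustrative only: rank operation steps by a simple criticality index
    # (HEP x severity). The steps, HEPs, and severity scores are made up.
    steps = [
        # (step name, estimated HEP, error-effect severity on a 1-10 scale)
        ("align initiator housing", 0.003, 9),
        ("torque fasteners",        0.010, 6),
        ("record lot number",       0.020, 2),
    ]

    ranked = sorted(steps, key=lambda s: s[1] * s[2], reverse=True)
    for name, hep, sev in ranked:
        print(f"{name:<25} criticality = {hep * sev:.3f}")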

3.
In the design of engineering systems, mental workload is one of the most important factors in the allocation of cognitive tasks. Current methods of task allocation have criteria that are defined in only general terms and are thus not very useful in aiding detailed decision-making in system design. Whilst there are many quantitative criteria available to determine the physical space in human-machine interaction, system designers really require an explicit model and specific criteria for the following: identification of the mental workload imposed by the system; prediction of both human and system performance; evaluation of alternative system designs; and the design of system components. It is argued that the available measures of workload or performance are either too domain-dependent to apply to the design of other systems, or subject-dependent and thus do not reflect the objective workload imposed by the system. The present research adopts a new approach to cognitive task analysis in dynamic decision-making systems. Based on the characteristics derived from the task analysis, a general conceptual model for predicting mental workload in system design is proposed. In the new model, workload is represented by a set of system parameters (task arrival rate, task complexity, task uncertainty, and performance requirements) that are considered to be the main sources of workload. In this context, workload becomes an objective demand of the engineering system, independent of any subjective factors. Whether an individual or population is overloaded depends upon their workload threshold with respect to the specified task and environment. It is hoped that this new model, after both laboratory and industrial validation, could be used by system designers to predict the workload imposed on people by systems.

4.
This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to quantify precisely the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in existing CREAM models. Moreover, the article views maritime accident development from a sequential perspective, and a scenario- and barrier-based framework is proposed to describe the maritime accident process. The evidential reasoning-based CREAM approach, together with the proposed accident development framework, is applied to the human reliability analysis of a ship capsizing accident. The approach will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice.

5.
Dependence assessment among human errors in human reliability analysis (HRA) is an important issue. Many dependence assessment methods in HRA rely heavily on expert opinion and are therefore subjective and sometimes inconsistent. In this article, we propose a computational model based on the Dempster-Shafer evidence theory (DSET) and the analytic hierarchy process (AHP) to handle dependence in HRA. First, the factors influencing dependence among human tasks are identified and their weights are determined by experts using the AHP method. Second, a judgment on each factor is given by the analyst with reference to anchors and linguistic labels. Third, the judgments are represented as basic belief assignments (BBAs) and are integrated into a fused BBA by weighted average combination in DSET. Finally, the conditional human error probability (CHEP) is calculated from the fused BBA. The proposed model can deal with ambiguity and the degree of confidence in the judgments, and is able to reduce the subjectivity and improve the consistency of the evaluation process.
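A minimal sketch of the fusion step described above, assuming illustrative AHP-style factor weights and basic belief assignments over THERP dependence levels. The weighted-average combination and the mapping to a conditional HEP are simplified stand-ins for the full DSET model, which also handles ignorance mass explicitly.

    # Illustrative sketch of weighted-average BBA fusion over THERP dependence
    # levels. Factor weights (as if from AHP) and the individual BBAs are
    # hypothetical placeholders.
    LEVELS = ["zero", "low", "moderate", "high", "complete"]

    # AHP-style weights for (hypothetical) factors: time gap, task similarity, stress
    weights = [0.5, 0.3, 0.2]

    # One BBA per factor: mass assigned to each dependence level (sums to 1).
    bbas = [
        {"low": 0.7, "moderate": 0.3},
        {"moderate": 0.6, "high": 0.4},
        {"low": 0.5, "moderate": 0.5},
    ]

    # Weighted-average combination of the BBAs.
    fused = {lvl: sum(w * bba.get(lvl, 0.0) for w, bba in zip(weights, bbas))
             for lvl in LEVELS}

    # Conditional HEP as a mass-weighted mix of the THERP level formulas.
    hep = 0.01
    therp = {"zero": hep, "low": (1 + 19 * hep) / 20, "moderate": (1 + 6 * hep) / 7,
             "high": (1 + hep) / 2, "complete": 1.0}
    chep = sum(fused[lvl] * therp[lvl] for lvl in LEVELS)
    print(fused, round(chep, 4))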

6.
Organizational performance is a function of many variables, two of which are work process factors and human performance factors. Our study compared the effects of changing a work process with those of a human performance improvement technique, as well as the combined effect of applying both. A 2 (manual vs. electronic process) × 2 (with vs. without behavioral intervention) between-subjects design with stratified random assignment was employed. Forty-eight participants performed a word processing task in which their minutes-in-possession and error rate were recorded. Results revealed a main effect for process type and a main effect for behavioral intervention. The largest effects were observed in the context of the combined intervention. The implications of using a combined approach and topics for future research are discussed.

7.
Peng Liu, Zhizhong Li. Risk Analysis, 2014, 34(9): 1706-1719.
There is a scarcity of empirical data on human error for human reliability analysis (HRA). This scarcity can increase the variability and impair the validity of HRA outcomes in risk analysis. In this work, a microworld study was used to investigate the effects of performance shaping factors (PSFs), their interrelationships, and their combined effects on the human error probability (HEP). The PSFs involved were task complexity, time availability, experience, and time pressure. The empirical data obtained were compared with predictions by the Standardized Plant Analysis Risk-Human Reliability Method (SPAR-H) and with data from other sources. The comparison covered three aspects: (1) the HEP, (2) the relative effects of the PSFs, and (3) error types. Results showed that the HEP decreased with increasing experience and time availability. The significance of the relationship between task complexity and the HEP depended on time availability and experience, and time availability affected the HEP through time pressure. The empirical HEPs were higher than the HEPs predicted by SPAR-H under different PSF combinations, showing a tendency of SPAR-H to produce relatively optimistic results in our study. The relative effects of two PSFs (i.e., experience/training and stress/stressors) in SPAR-H agreed to some extent with those in our study. Several error types agreed well with those from operational experience and a database for nuclear power plants (NPPs).
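For reference, SPAR-H quantifies an HEP by multiplying a nominal value by PSF multipliers, with an adjustment factor when three or more PSFs are negative. The sketch below uses that published structure with illustrative multiplier values rather than the official SPAR-H tables.

    # Minimal sketch of SPAR-H-style quantification. The nominal HEP and the
    # PSF multipliers below are illustrative placeholders, not the official
    # SPAR-H table values.
    def spar_h_hep(nominal_hep, psf_multipliers):
        composite = 1.0
        for m in psf_multipliers:
            composite *= m
        negative = sum(1 for m in psf_multipliers if m > 1)
        if negative >= 3:
            # Adjustment used when three or more PSFs degrade performance,
            # which keeps the result below 1.0.
            return (nominal_hep * composite) / (nominal_hep * (composite - 1) + 1)
        return min(nominal_hep * composite, 1.0)

    # Example: task with high complexity, low experience, high stress.
    print(spar_h_hep(0.01, [5, 3, 2]))    # three negative PSFs -> adjustment applies
    print(spar_h_hep(0.01, [5, 1, 0.5]))  # mixed PSFs -> simple product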

8.
Warren E. Walker. Omega, 2009, 37(6): 1051.
This paper deals with ethics in the context of the real-world practice of operations research (OR), once an analyst has taken on the responsibility of carrying out a rational-style, model-based policy study for a client. OR models are often used by policy analysts to assist decisionmakers in choosing a good course of action, based on multiple (and competing) criteria, from among a variety of alternatives under uncertain conditions as part of the policy analysis process. The paper suggests that if applied operations researchers (acting as rational-style, model-based policy analysts, and not as policy analysts playing a different role or as policy advocates) use the scientific method and apply the generally accepted best practices of their profession, they will be acting in an ethical manner. It therefore describes the steps of a typical rational-style, model-based policy analysis study and specifies the tenets of good practice in each step. It also provides a list of questions and statements that the analyst, and those evaluating an analyst's work (both internally and externally), can use to help make sure that the study adheres to the tenets of good practice for rational-style, model-based policy analysis and remains within ethical bounds.

9.
This article proposes a methodology for incorporating electrical component failure data into the human error assessment and reduction technique (HEART) for estimating human error probabilities (HEPs). The existing HEART method contains factors known as error-producing conditions (EPCs) that adjust a generic HEP to the specific situation being assessed. The selection and proportioning of these EPCs are at the discretion of the assessor and are therefore subject to the assessor's experience and potential bias. This dependence on expert opinion is prevalent in similar HEP assessment techniques used in numerous industrial areas. The proposed method incorporates factors based on observed trends in electrical component failures to produce a revised HEP that can trigger risk mitigation actions more effectively, based on the presence of component categories or other hazardous conditions with a history of failure due to human error. The data used for the additional factors come from an analysis of failures of electronic components experienced during system integration and testing at NASA Goddard Space Flight Center; the analysis includes determination of root failure mechanisms and trend analysis. The major causes of these defects were attributed to electrostatic damage, electrical overstress, mechanical overstress, or thermal overstress. These factors, representing user-induced defects, are quantified and incorporated into specific hardware factors based on the system's electrical parts list. The proposed methodology is demonstrated with an example comparing the original HEART method and the proposed modified technique.
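The sketch below shows the standard HEART adjustment of a generic HEP by error-producing conditions, plus a single hypothetical hardware factor of the kind the article proposes. The generic HEP, EPC multipliers, proportions, and hardware factor are illustrative only.

    # Minimal sketch of the standard HEART adjustment, extended with an extra
    # hypothetical hardware factor. All numbers are illustrative.
    def heart_hep(generic_hep, epcs, hardware_factor=1.0):
        """epcs: list of (max EPC multiplier, assessed proportion of affect 0..1)."""
        hep = generic_hep
        for multiplier, proportion in epcs:
            hep *= (multiplier - 1.0) * proportion + 1.0
        return min(hep * hardware_factor, 1.0)

    # Generic task HEP 0.003, two EPCs, plus a modest component-history factor
    # standing in for the article's electrostatic/overstress trend data.
    print(heart_hep(0.003, [(17, 0.4), (10, 0.2)], hardware_factor=1.5))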

10.
This article proposes a systematic procedure for computing probabilities of operator action failure in the cognitive reliability and error analysis method (CREAM). The starting point for the quantification is a previously introduced fuzzy version of the CREAM paradigm, which is here further extended to account for (1) the ambiguity in the qualification of the conditions under which the action is performed (common performance conditions, CPCs) and (2) the fact that the effects of such conditions on human performance reliability may not all be equal.

11.
Iris Vessey. Decision Sciences, 1991, 22(2): 219-240.
A considerable amount of research has been conducted over a long period of time into the effects of graphical and tabular representations on decision-making performance. To date, however, the literature appears to have arrived at few conclusions with regard to the performance of the two representations. This paper addresses these issues by presenting a theory, based on information processing theory, to explain under what circumstances one representation outperforms the other. The fundamental aspects of the theory are: (1) although graphical and tabular representations may contain the same information, they present that information in fundamentally different ways; graphical representations emphasize spatial information, while tables emphasize symbolic information; (2) tasks can be divided into two types, spatial and symbolic, based on the type of information that facilitates their solution; (3) performance on a task will be enhanced when there is a cognitive fit (match) between the information emphasized in the representation type and that required by the task type; that is, when graphs support spatial tasks and when tables support symbolic tasks; (4) the processes or strategies problem solvers use are the crucial elements of cognitive fit since they provide the link between representation and task; the processes identified here are perceptual and analytical; (5) so long as there is a complete fit of representation, processes, and task type, each representation will lead to both quicker and more accurate problem solving. The theory is validated by its success in explaining the results of published studies that examine the performance of graphical and tabular representations in decision making.

12.
Using the 2003-2009 New Fortune (《新财富》) magazine rankings as proxy variables for analyst reputation and brokerage reputation, and a sample of more than 54,000 analyst recommendation ratings, this study examines from the investor's perspective the short-term and long-term relationships between brokerage reputation, analyst reputation, and the value of analysts' stock recommendations. The results show that on the rating announcement day, the abnormal returns of the "buy/overweight," "neutral," and "reduce/sell" rating groups are influenced by analyst reputation and brokerage reputation in different ways. The long-term tests further show that in bull markets the "buy/overweight" ratings of star analysts at non-top brokerages have higher investment value than those of star analysts at top brokerages, whereas in bear markets the "buy/overweight" ratings of non-star analysts at non-top brokerages have the highest investment value and those of star analysts at top brokerages the lowest. "Neutral" ratings from star analysts at both top and non-top brokerages have low investment value, so investors can easily miss investment opportunities. "Reduce/sell" ratings from star analysts at top brokerages have the highest investment value, while those from star analysts at non-top brokerages have the lowest. Finally, four policy recommendations are proposed for safeguarding the independence of analysts' securities research reports.

13.
After a brief review of the role of dummy variables in regression analysis and the current state of the art in rounding/truncation error detection in computerized least squares programs, this paper presents a theorem that can be used to detect this type of error whenever an analyst is running a regression program that has one (or more) dummy variables as independent variables.

14.
In counterterrorism risk management decisions, the analyst can choose to represent terrorist decisions as defender uncertainties or as attacker decisions. We perform a comparative analysis of probabilistic risk analysis (PRA) methods, including event trees, influence diagrams, Bayesian networks, decision trees, game theory, and combined methods, on the same illustrative examples (container screening for radiological materials) to gain insight into the significant differences in assumptions and results. A key tenet of PRA and decision analysis is the use of subjective probability to assess the likelihood of possible outcomes. For each technique, we compare the assumptions, probability assessment requirements, risk levels, and potential insights for risk managers. We find that assessing the distribution of potential attacker decisions is a complex judgment task, particularly considering the adaptation of the attacker to defender decisions. Intelligent adversary risk analysis and adversarial risk analysis are extensions of decision analysis and sequential game theory that help to decompose such judgments. These techniques explicitly show the adaptation of the attacker and the resulting shift in risk based on defender decisions.
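A toy comparison of the two modeling stances discussed above: treating the attacker as a fixed uncertainty versus as an adaptive decisionmaker. The defender options, targets, probabilities, and losses are invented for illustration.

    # Toy comparison (not the article's worked example). Payoffs are
    # hypothetical expected losses to the defender.
    # Keys: defender options; inner keys: attacker targets.
    expected_loss = {
        "screen_port_A": {"attack_A": 10, "attack_B": 80},
        "screen_port_B": {"attack_A": 60, "attack_B": 20},
    }

    # (1) Attacker as uncertainty: fixed assessed probabilities of each target.
    p_attack = {"attack_A": 0.9, "attack_B": 0.1}
    pra_choice = min(expected_loss,
                     key=lambda d: sum(p_attack[a] * expected_loss[d][a]
                                       for a in p_attack))

    # (2) Attacker adapts: assume the attacker hits whichever target the chosen
    # defence leaves worst off, and the defender minimises that worst case.
    game_choice = min(expected_loss, key=lambda d: max(expected_loss[d].values()))

    # The two stances can recommend different defences, illustrating the shift
    # in risk that attacker adaptation produces.
    print(pra_choice, game_choice)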

15.
Delays are among the most serious threats to the success and performance of construction projects, making delay analysis and management a critical task for project managers. The task is especially complicated in large-scale construction projects, which usually consist of a complex network of heterogeneous entities in continuous interaction. Traditional approaches to analyzing delays and their causes have been criticised for their limited ability to handle complex projects and for not adequately considering the interrelationships between delay causes. Addressing this gap, this research introduces an alternative approach to delay cause analysis by adopting the Semantic Network Analysis (SNA) method. The paper reports the results of an investigation of delays in construction projects in the oil, gas, and petrochemical sector using SNA. The method's capacity to identify and rank delay causes, which can assist managers in selecting appropriate measures for eliminating them, is empirically examined and discussed. The paper argues that SNA leads to a more comprehensive understanding of the main causes of delay in large and complex projects, allowing better identification and mapping of the interrelationships between these discrete factors.
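As a crude stand-in for the paper's semantic network analysis, the sketch below ranks hypothetical delay causes by betweenness centrality in a small network of assumed influence relationships, using the networkx library; the causes and links are invented.

    # Crude stand-in (not the paper's SNA procedure): rank hypothetical delay
    # causes by how central they are in a network of "influences" relationships.
    import networkx as nx

    edges = [
        ("late owner decisions", "design changes"),
        ("design changes", "rework"),
        ("poor procurement planning", "material shortages"),
        ("material shortages", "rework"),
        ("rework", "schedule overrun"),
        ("contractor cash-flow problems", "material shortages"),
    ]
    g = nx.DiGraph(edges)

    # Betweenness highlights causes that sit on many influence paths.
    for cause, score in sorted(nx.betweenness_centrality(g).items(),
                               key=lambda kv: kv[1], reverse=True):
        print(f"{cause:<30} {score:.3f}")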

16.
The transition to semiautonomous driving is set to considerably reduce road accident rates as human error is progressively removed from the driving task. Concurrently, autonomous capabilities will transform the transportation risk landscape and significantly disrupt the insurance industry. Semiautonomous vehicle (SAV) risks will begin to alternate between human error and technological susceptibilities. The evolving risk landscape will force a departure from traditional risk assessment approaches that rely on historical data to quantify insurable risks. This article investigates the risk structure of SAVs and employs a telematics-based anomaly detection model to assess split risk profiles. An unsupervised multivariate Gaussian (MVG) anomaly detection method is used to identify abnormal driving patterns from the accelerometer and GPS sensors of manually driven vehicles. Parameters are then inferred for vehicles equipped with semiautonomous capabilities and the resulting split risk profile is determined. The MVG approach allows vehicle risks to be quantified by the relative frequency and severity of observed anomalies, and a location-based risk analysis is performed for a more comprehensive assessment. This approach contributes to the challenge of quantifying SAV risks, and the methods employed here can be applied to evolving data sources pertinent to SAVs. Utilizing the vast amounts of sensor-generated data will enable insurers to proactively reassess the collective performance of both the artificial driving agent and the human driver.
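A minimal sketch of the MVG approach described above: fit a multivariate Gaussian to features from manually driven trips and flag new observations whose density falls below a threshold. The features, data, and threshold are illustrative, not the study's telematics data.

    # Illustrative sketch of unsupervised multivariate-Gaussian anomaly detection
    # on driving features (e.g., longitudinal/lateral acceleration, speed).
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    normal_trips = rng.normal(loc=[0.0, 0.0, 50.0], scale=[0.3, 0.2, 10.0],
                              size=(1000, 3))        # training data: manual driving
    new_trips = np.array([[0.1, -0.1, 55.0],         # typical trip
                          [2.5,  1.8, 95.0]])        # harsh manoeuvre at high speed

    mu = normal_trips.mean(axis=0)
    cov = np.cov(normal_trips, rowvar=False)
    model = multivariate_normal(mean=mu, cov=cov)

    # Flag observations whose density falls below a small quantile of the
    # training densities (the threshold choice is itself a modelling decision).
    eps = np.quantile(model.pdf(normal_trips), 0.01)
    anomalous = model.pdf(new_trips) < eps
    print(anomalous)   # expected: [False  True]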

17.
To overcome the inherent shortcomings of multi-factor variable-weight decision-making methods, this paper proposes an implicit modeling idea for multi-attribute variable-weight decision making based on the B/G-AHP analytic hierarchy principle introduced by Belton and Gear, and uses it to derive a new multi-attribute variable-weight decision-making method. Compared with multi-factor variable-weight methods, the new method has three advantages. First, the variable-weight preference information it relies on is provided directly by the decisionmaker, which removes the arbitrary influence of the decision analyst on the decision result and better reflects the decisionmaker's true preferences. Second, it is not disturbed by the additional subjective measurement bias introduced when attribute values are converted into preference values. Third, it applies optimization-based control designed to weaken the effect of possible errors in the decisionmaker's subjective judgments. Numerical analysis shows that the new method has good variable-weighting capability and, compared with existing methods, yields evaluation conclusions that are more acceptable to decisionmakers, demonstrating good practical applicability.

18.
Methods of engineering risk analysis are based on a functional analysis of systems and on the probabilities (generally Bayesian) of the events and random variables that affect their performance. These methods allow identification of a system's failure modes, computation of its probability of failure or performance deterioration per time unit or operation, and quantification of the contribution of each component to the probabilities and consequences of failures. The model has been extended to include the human decisions and actions that affect components' performance, and the management factors that affect behaviors and can thus be root causes of system failures. By computing the risk with and without proposed measures, one can then set priorities among different risk management options under resource constraints. In this article, I briefly present the engineering risk analysis method and then several illustrations of risk computations that can be used to identify a system's weaknesses and the most cost-effective way to fix them. The first example concerns the heat shield of the space shuttle orbiter and shows the relative risk contribution of the tiles in different areas of the orbiter's surface. The second application is to patient risk in anesthesia and demonstrates how the engineering risk analysis method can be used in the medical domain to rank the benefits of risk mitigation measures, in that case mostly organizational ones. The third application is a model of seismic risk analysis and mitigation, applied to the San Francisco Bay area to assess the costs and benefits of different seismic provisions of building codes. In all three cases, some aspects of the results were not intuitively obvious. The probabilistic risk analysis (PRA) method allowed identification of system weaknesses and of the most cost-effective ways to fix them.
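A toy computation in the spirit of the PRA calculations described above (not any of the article's three case studies): compare candidate risk-reduction measures by the change in system failure probability each produces. The fault logic and probabilities are invented.

    # Toy sketch: system fails if the pump fails AND either the backup fails
    # or the operator fails to start it. All probabilities are invented.
    def system_failure_p(p_pump, p_backup, p_operator):
        return p_pump * (1 - (1 - p_backup) * (1 - p_operator))

    baseline = system_failure_p(p_pump=1e-2, p_backup=5e-2, p_operator=1e-1)

    # Candidate risk-reduction measures, each improving one input.
    options = {
        "better pump maintenance": system_failure_p(5e-3, 5e-2, 1e-1),
        "redundant backup":        system_failure_p(1e-2, 1e-2, 1e-1),
        "operator training":       system_failure_p(1e-2, 5e-2, 2e-2),
    }
    for name, p in options.items():
        print(f"{name:<25} risk reduction = {baseline - p:.2e}")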

19.
Chiang Kao, Hwei-Lan Pao. Omega, 2012, 40(1): 89-95.
Project selection is an important task for organizations seeking to achieve their missions with limited budgets and resources. Whether or not a project will be approved is also of primary concern to applicants. This paper predicts whether a project will be approved in cases where the criteria for evaluating it are known but the scoring system is not. The idea is to construct a frontier function for the approved projects from past performance on the criteria. The relative distance between a proposed project and the frontier serves as an indicator of the possibility that the project will be approved. Data on the Topic Research Project from the Management II Division of the National Science Council of Taiwan are collected to illustrate the approach. From the percentile of the distance measure, an applicant is able to predict the possibility that their project will be approved. Since professors with different levels of experience and in different research areas have different research performance, these factors are taken into account in the prediction. A Malmquist productivity index analysis is also conducted to investigate the improvement in applicants' research performance between two periods.

20.
Bayesian Monte Carlo (BMC) decision analysis adopts a sampling procedure to estimate likelihoods and distributions of outcomes, and then uses that information to calculate the expected performance of alternative strategies, the value of information, and the value of including uncertainty. These decision analysis outputs are therefore subject to sample error. The standard error of each estimate, and its bias if any, can be estimated by the bootstrap procedure. The bootstrap operates by resampling (with replacement) from the original BMC sample and redoing the decision analysis. Repeating this procedure yields a distribution of decision analysis outputs. The bootstrap approach to estimating the effect of sample error on BMC analysis is illustrated with a simple value-of-information calculation, along with an analysis of a proposed control structure for Lake Erie. The examples show that the outputs of BMC decision analysis can have high levels of sample error and bias.
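A minimal sketch of the bootstrap step described above, assuming a toy Monte Carlo sample standing in for the decision analysis outputs: resample with replacement, recompute the estimate, and summarize its standard error and bias.

    # Illustrative sketch of bootstrapping a BMC output: here the estimated
    # expected net benefit of one strategy. The outcome model is a toy stand-in
    # for an actual decision analysis.
    import numpy as np

    rng = np.random.default_rng(1)
    sample = rng.lognormal(mean=1.0, sigma=0.8, size=500)   # simulated outcomes
    theta_hat = sample.mean()                               # BMC point estimate

    B = 2000
    boot = np.empty(B)
    for b in range(B):
        resample = rng.choice(sample, size=sample.size, replace=True)
        boot[b] = resample.mean()        # redo the "analysis" on each resample

    std_error = boot.std(ddof=1)
    bias = boot.mean() - theta_hat
    print(f"estimate={theta_hat:.3f}  SE={std_error:.3f}  bias={bias:+.4f}")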
