Similar Literature
20 similar documents found (search time: 31 ms)
1.
This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from a sequential perspective, proposing a scenario- and barrier-based framework to describe the maritime accident process. This evidential reasoning-based CREAM approach, together with the proposed accident development framework, is applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice.
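The core quantification step can be sketched as follows. This is an illustrative sketch only, not the paper's calibration: the effect multipliers and the neutral treatment of unassigned belief are assumptions made here.

```python
# Illustrative sketch (not the paper's calibrated values): adjusting a
# nominal human error probability (HEP) with belief-based assessments of
# common performance conditions (CPCs), where belief degrees may sum to
# less than 1 because information is incomplete.

def adjusted_hep(nominal_hep, cpc_assessments):
    """cpc_assessments: list of dicts mapping an effect multiplier to the
    belief degree assigned to it. Residual (unassigned) belief is treated
    neutrally with multiplier 1.0 -- an assumption of this sketch."""
    hep = nominal_hep
    for beliefs in cpc_assessments:
        assigned = sum(beliefs.values())
        expected_mult = sum(b * m for m, b in beliefs.items())
        expected_mult += (1.0 - assigned) * 1.0  # unassigned belief -> neutral
        hep *= expected_mult
    return hep
```

For example, a CPC judged "degrading" (multiplier 2.0) with belief 0.6 and left unassessed with belief 0.4 raises a nominal HEP of 0.01 to 0.016.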

2.
Boholm M. Risk Analysis, 2012, 32(2): 281-293
The analysis combines frame semantic and corpus linguistic approaches in analyzing the role of agency and decision making in the semantics of the words "risk" and "danger" (both nominal and verbal uses). In frame semantics, the meanings of "risk" and of related words, such as "danger," are analyzed against the background of a specific cognitive-semantic structure (a frame) comprising frame elements such as Protagonist, Bad Outcome, Decision, Possession, and Source. Empirical data derive from the British National Corpus (100 million words). Results indicate both similarities and differences in use. First, both "risk" and "danger" are commonly used to represent situations having potential negative consequences as the result of agency. Second, "risk" and "danger," especially their verbal uses (to risk, to endanger), differ in agent-victim structure, i.e., "risk" is used to express that a person affected by an action is also the agent of the action, while "endanger" is used to express that the one affected is not the agent. Third, "risk," but not "danger," tends to be used to represent rational and goal-directed action. The results therefore to some extent confirm the analysis of "risk" and "danger" suggested by German sociologist Niklas Luhmann. As a point of discussion, the present findings arguably have implications for risk communication.

3.
A model is developed from which welfare-optimal prices, capacities, and reliabilities for a service provider are simultaneously determined. Solutions are determined under conditions of stochastic demand subject to a reliability constraint on service quality. Both the quality of service provided and the price affect demand for services rendered. Results indicate that (i) optimal prices are equated to the reliability-constrained marginal costs, (ii) optimal reliabilities require that the marginal benefits of increasing reliability are equated to the marginal costs of doing so, and (iii) optimal capacity allocation involves minimizing the system's expected costs subject to meeting the prespecified reliability constraint for service quality. The model is applied to postal delivery services in light of the growing competition that has emerged in this industry.
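Condition (iii), choosing the least-cost capacity that still meets the service-reliability target, reduces to a quantile computation once a demand distribution is specified; the normal demand model here is an illustrative assumption, not the paper's:

```python
from statistics import NormalDist

def min_capacity(mean_demand, sd_demand, reliability):
    """Smallest capacity K with P(demand <= K) >= reliability,
    assuming (for illustration only) normally distributed demand."""
    return NormalDist(mean_demand, sd_demand).inv_cdf(reliability)
```

With mean demand 100, standard deviation 10, and a 95% reliability target, the required capacity is about 116.4 units.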

4.
Segregation of tasks is an important factor in the determination of internal control system reliability. In assigning tasks within an organization, the impact on internal control system reliability must be considered. This paper uses reliability modeling as the basis of a method for formulating the design problem. Formulation of the problem in this manner facilitates development of a knowledge base for auditors' judgments and the use of such knowledge in the design problem. It is demonstrated that under certain conditions the problem can be solved easily. This is useful not only in task assignment but also in manpower decisions. New concepts such as reliability degradation and a task combination matrix are introduced in developing the formulation.
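Reliability modeling of this kind typically composes control reliabilities in series (every control must function) and in parallel (redundant controls); a minimal sketch of these standard building blocks, not the paper's full formulation:

```python
from math import prod

def series_reliability(rs):
    """System of sequential controls: all must work."""
    return prod(rs)

def parallel_reliability(rs):
    """Redundant controls: the system works if any one works."""
    return 1.0 - prod(1.0 - r for r in rs)
```

Two 0.9-reliable controls yield 0.81 in series but 0.99 in parallel, which is why redundancy and task segregation interact in control-system design.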

5.
Abstract

Numerous tools have been developed that attempt to measure work-related stress and working conditions, but few practical instruments in the literature have been found to have a reliable psychometric factor structure. In the UK, the Health and Safety Executive (HSE) Management Standards (MS) Indicator Tool is increasingly used by organizations to monitor working conditions that can lead to stress. In Health and Safety Executive (2004), a factor analysis was conducted demonstrating the reliability of the scales. However, the authors acknowledged that direct reassessment of the same factor structure was impossible as the questionnaire was split into two separate modules for data collection. Furthermore, the tool is designed to enable comparisons between as well as within organizations to take place, yet reliability has only previously been tested at the individual level. The current study is the first to examine the factor structure of the HSE MS Indicator Tool using organizational-level data. Data collected from 39 UK organizations (N=26,382) were used to perform a first-order Confirmatory Factor Analysis (CFA) on the original 35-item seven-factor measurement scale. The results showed an acceptable fit to the data for the instrument. A second-order CFA was also performed to test whether the Indicator Tool contains a higher-order uni-dimensional measure of work-related stress. These findings also revealed an acceptable fit to the data, suggesting that it may be possible to derive a single measure of work-related stress. Normative data comprising tables of percentiles from the organizational data are provided to enable employers to compare their organizational averages against national benchmarks.
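The normative benchmarking described above amounts to locating an organization's average score within the distribution of organizational averages; a minimal sketch, with made-up scores rather than the published norms:

```python
def percentile_rank(org_averages, my_average):
    """Fraction of benchmark organizations scoring at or below my_average.
    org_averages: one mean scale score per organization (illustrative data)."""
    return sum(s <= my_average for s in org_averages) / len(org_averages)
```

An employer whose average equals or exceeds three of four benchmark organizations sits at the 75th percentile of that (hypothetical) table.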

6.
This study examines the effects of using different priority rules at different stages of a multistage, flow-dominant shop. A simulation model is constructed of a manufacturing system composed of three stages: gateway, intermediate, and finishing. As is typical of a flow-dominant shop, the overall flow of the simulated system (gateway to intermediate to finishing) is consistent with a flow shop, but processing in the intermediate stage involves multiple work centers and resembles a job shop. Shop performance is observed when four well-known priority heuristics are applied in different combinations in the gateway, intermediate, and finishing stages of the process. Multiple performance measures addressing the strategic objectives of delivery speed and delivery reliability are recorded under two different shop load conditions. Results show that the measures of both delivery speed and delivery reliability are affected by the priority rule combinations, and that a tradeoff exists between average performance and consistency of performance. Certain priority rule combinations affect performance in predictable ways, allowing the user to assess tradeoffs between delivery speed and delivery reliability.
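Stage-wise rule combinations can be sketched as dispatch functions, one per stage. The rules named here (FCFS, SPT, EDD) are standard dispatching heuristics, and the job fields are assumptions of this sketch, not the paper's exact four heuristics:

```python
# Dispatching sketch: each stage of the shop can use its own priority rule.
RULES = {
    "FCFS": lambda job: job["arrival"],    # first come, first served
    "SPT":  lambda job: job["proc_time"],  # shortest processing time
    "EDD":  lambda job: job["due"],        # earliest due date
}

def next_job(queue, rule):
    """Pick the highest-priority job in the queue under the given rule."""
    return min(queue, key=RULES[rule])
```

A combination such as FCFS at the gateway, SPT in the intermediate job-shop stage, and EDD at finishing is then just a triple of rule names applied to each stage's queue.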

7.
An isotone pure strategy equilibrium exists in any game of incomplete information in which each player's action set is a finite sublattice of multidimensional Euclidean space, types are multidimensional and atomless, and each player's interim expected payoff function satisfies two "nonprimitive conditions" whenever others adopt isotone pure strategies: (i) single-crossing in own action and type and (ii) quasi-supermodularity in own action. Conditions (i), (ii) are satisfied in supermodular and log-supermodular games given affiliated types, and in games with independent types in which each player's ex post payoff satisfies supermodularity in own action and nondecreasing differences in own action and type. This result is applied to provide the first proof of pure strategy equilibrium existence in the uniform price auction when bidders have multi-unit demand, nonprivate values, and independent types.

8.
Waters, Robert D.; Parker, Frank L. Risk Analysis, 1999, 19(2): 249-259
The reliability of a treatment process is addressed in terms of achieving a regulatory effluent concentration standard and the design safety factors associated with the treatment process. This methodology was then applied to two aqueous hazardous waste treatment processes: packed tower aeration and activated sludge (aerobic) biological treatment. The designs achieving 95 percent reliability were compared with those designs based on conventional practice to determine their patterns of conservatism. Scoping-level treatment costs were also related to reliability levels for these treatment processes. The results indicate that the reliability levels for the physical/chemical treatment process (packed tower aeration) based on the deterministic safety factors range from 80 percent to over 99 percent, whereas those for the biological treatment process range from near 0 percent to over 99 percent, depending on the compound evaluated. Increases in reliability per unit increase in treatment costs are most pronounced at lower reliability levels (less than about 80 percent) than at the higher reliability levels (greater than 90 percent), indicating a point of diminishing returns. Additional research focused on process parameters that presently contain large uncertainties may reduce those uncertainties, with attendant increases in the reliability levels of the treatment processes.

9.
Autonomous vehicles (AVs) promise to make traffic safer, but their societal integration poses ethical challenges. What behavior of AVs is morally acceptable in critical traffic situations when consequences are only probabilistically known (a situation of risk) or even unknown (a situation of uncertainty)? How do people retrospectively evaluate the behavior of an AV in situations in which a road user has been harmed? We addressed these questions in two empirical studies (N = 1,638) that approximated the real-world conditions under which AVs operate by varying the degree of risk and uncertainty of the situation. In Experiment 1, subjects learned that an AV had to decide between staying in the lane or swerving. Each action could lead to a collision with another road user, with some known or unknown likelihood. Subjects' decision preferences and moral judgments varied considerably with specified probabilities under risk, yet less so under uncertainty. The results suggest that staying in the lane and performing an emergency stop is considered a reasonable default, even when this action does not minimize expected loss. Experiment 2 demonstrated that if an AV collided with another road user, subjects' retrospective evaluations of the default action were also more robust against unwanted outcome and hindsight effects than the alternative swerve maneuver. The findings highlight the importance of investigating moral judgments under risk and uncertainty in order to develop policies that are societally acceptable even under critical conditions.
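Whether the stay-in-lane default minimizes expected loss under risk is simple arithmetic once probabilities and harms are specified; the values used below are hypothetical, not the studies' stimuli:

```python
def expected_loss(p_collision, harm):
    """Expected loss of an action with a known collision probability."""
    return p_collision * harm

def stay_minimizes_loss(p_stay, harm_stay, p_swerve, harm_swerve):
    """True if staying in the lane is (weakly) the expected-loss-minimizing act."""
    return expected_loss(p_stay, harm_stay) <= expected_loss(p_swerve, harm_swerve)
```

The studies' point is precisely that subjects favored the stay-and-brake default even in scenarios where this comparison comes out False.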

10.
This article offers a method for analyzing the reliability of a man–machine system (MMS) and ranking influencing factors based on a fuzzy cognitive map (FCM). The ranking of influencing factors is analogous to the ranking of system elements in the probabilistic theory of reliability. To approximate the "influencing factors–reliability" dependence, the relationship of variable increments is used, which ensures the sensitivity of the reliability level to variations in the levels of influencing factors. The novelty of the method lies in the fact that the expert values of the weights of the FCM graph edges (arcs) are adjusted against observational data using a genetic algorithm. The algorithm's chromosomes are generated from the intervals of acceptable values of edge weights, and the selection criterion is the sum of squared deviations of the reliability simulation results from observations. The method is illustrated by the example of a multifactor analysis of the reliability of the "driver–car–road" system. It is shown that the FCM adjustment reduces the discrepancy between the reliability forecast and observations by almost half. Possible applications of the method include complex systems with vaguely defined structures whose reliability depends strongly on interrelated, expertly assessed factors.
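FCM propagation can be sketched as follows. The sigmoid squashing function and the two-concept example are assumptions of this sketch; in the article, the edge weights would additionally be tuned by a genetic algorithm that minimizes the squared deviation from observations.

```python
import math

def fcm_step(state, weights):
    """One synchronous FCM update: each concept's next activation is a
    sigmoid of the weighted sum of all concept activations.
    weights[j][i] is the influence of concept j on concept i."""
    n = len(state)
    return [
        1.0 / (1.0 + math.exp(-sum(weights[j][i] * state[j] for j in range(n))))
        for i in range(n)
    ]
```

Iterating `fcm_step` until the activations stabilize gives the map's inferred reliability level for a given pattern of influencing factors.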

11.
Understanding the reliability of hazardous organizations and their protective systems is central to understanding the risk they produce. Work on "high reliability organization" has done much to illuminate the conditions in which social organization becomes reliable in highly demanding conditions. But risk depends just as much on how relying entities do their relying as it does on the reliability of the entities they rely on. Patterns of relying are often opaque in sociotechnical systems, and processes of relying and being relied on are mutually influencing in complex ways, so the relationship between relying and risk may not be at all obvious. This study examined relying as a social practice, in particular analyzing how it had ecological validity in a social organization—how practice was responsive to the conditions in which it took place. This involved observational fieldwork and inductive, qualitative analysis on an offshore oil and gas production platform that was nearing the end of its design life and undergoing refurbishment. The analysis produced four main categories of ecological validity: responsiveness to formal organization, responsiveness to situational contingency, responsiveness to information asymmetry, and responsiveness to sociomateriality. This ecological validity of relying practice should be a primary focus of risk identification, assessing how relying can become mismatched to reliability, both when relying practice is responsive to circumstances and when it is not.

12.
Operational risk management of autonomous vehicles in extreme environments is heavily dependent on expert judgments and, in particular, judgments of the likelihood that a failure mitigation action, via correction and prevention, will annul the consequences of a specific fault. However, extant research has not examined the reliability of experts in estimating the probability of failure mitigation. For systems operations in extreme environments, the probability of failure mitigation is taken as a proxy of the probability of a fault not reoccurring. Using a priori expert judgments for an autonomous underwater vehicle mission in the Arctic and a posteriori mission field data, we subsequently developed a generalized linear model that enabled us to investigate this relationship. We found that the probability of failure mitigation alone cannot be used as a proxy for the probability of a fault not reoccurring. We conclude that it is also essential to include the effort to implement the failure mitigation when estimating the probability of a fault not reoccurring. The effort is the time taken by a person (measured in person-months) to execute the task required to implement the fault correction action. We show that once a modicum of operational data is obtained, it is possible to define a generalized linear logistic model to estimate the probability of a fault not reoccurring. We discuss how our findings are important to all autonomous vehicle operations and how similar operations can benefit from revising expert judgments of risk mitigation to take account of the effort required to reduce key risks.
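The conclusion that mitigation probability and implementation effort jointly predict non-reoccurrence fits naturally into a logistic GLM; the coefficients below are illustrative placeholders standing in for values that would be fitted from mission field data:

```python
import math

def p_not_reoccur(p_mitigation, effort_person_months, b0=-1.0, b1=2.0, b2=0.5):
    """Logistic GLM sketch: P(fault does not reoccur) as a function of the
    judged mitigation probability and the effort (person-months) needed to
    implement the fix. Coefficients b0, b1, b2 are illustrative only."""
    z = b0 + b1 * p_mitigation + b2 * effort_person_months
    return 1.0 / (1.0 + math.exp(-z))
```

With positive b2, two faults with identical judged mitigation probabilities receive different non-reoccurrence estimates when one fix took more effort to implement, which is the paper's central point.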

13.
Galletta and Lederer [10] concluded that the short-form measure of user information satisfaction (UIS) lacks sufficient test-retest reliability. We question the validity of this conclusion since the method used by the authors for assessing test-retest reliability is not grounded in classical reliability theory. Analysis of their data suggests that the UIS instrument has adequate test-retest reliability.

14.
Complex engineered systems, such as nuclear reactors and chemical plants, have the potential for catastrophic failure with disastrous consequences. In recent years, human and management factors have been recognized as frequent root causes of major failures in such systems. However, classical probabilistic risk analysis (PRA) techniques do not account for the underlying causes of these errors because they focus on the physical system and do not explicitly address the link between components' performance and organizational factors. This paper describes a general approach for addressing the human and management causes of system failure, called the SAM (System-Action-Management) framework. Beginning with a quantitative risk model of the physical system, SAM expands the scope of analysis to incorporate first the decisions and actions of individuals that affect the physical system. SAM then links management factors (incentives, training, policies and procedures, selection criteria, etc.) to those decisions and actions. The focus of this paper is on four quantitative models of action that describe this last relationship. These models address the formation of intentions for action and their execution as a function of the organizational environment. Intention formation is described by three alternative models: a rational model, a bounded rationality model, and a rule-based model. The execution of intentions is then modeled separately. These four models are designed to assess the probabilities of individual actions from the perspective of management, thus reflecting the uncertainties inherent to human behavior. The SAM framework is illustrated for a hypothetical case of hazardous materials transportation. This framework can be used as a tool to increase the safety and reliability of complex technical systems by modifying the organization, rather than, or in addition to, re-designing the physical system.

15.
This paper presents simple new multisignal generalizations of the two classic methods used to justify the first-order approach to moral hazard principal-agent problems, and compares these two approaches with each other. The paper first discusses limitations of previous generalizations. Then a state-space formulation is used to obtain a new multisignal generalization of the Jewitt (1988) conditions. Next, using the Mirrlees formulation, new multisignal generalizations of the convexity of the distribution function condition (CDFC) approach of Rogerson (1985) and Sinclair-Desgagné (1994) are obtained. Vector calculus methods are used to derive easy-to-check local conditions for our generalization of the CDFC. Finally, we argue that the Jewitt conditions may generalize more flexibly than the CDFC to the multisignal case. This is because, with many signals, the principal can become very well informed about the agent's action and, even in the one-signal case, the CDFC must fail when the signal becomes very accurate.
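For reference, the single-signal CDFC of Rogerson (1985), which the paper generalizes, requires the distribution of the signal $x$ conditional on the agent's action $a$ to be convex in the action:

```latex
\text{CDFC:}\quad
F\bigl(x \mid \lambda a + (1-\lambda)a'\bigr)
\;\le\; \lambda\, F(x \mid a) + (1-\lambda)\, F(x \mid a')
\qquad \text{for all } x,\; a,\, a',\; \lambda \in [0,1].
```

Together with the monotone likelihood ratio property, this condition validates replacing the agent's incentive constraint with its first-order condition in the one-signal case.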

16.
Most research on firms' sourcing strategies assumes that the wholesale prices and reliability of suppliers are exogenous. We study suppliers' competition on both wholesale price and reliability, and firms' corresponding optimal sourcing strategy, under complete information. In particular, we study a problem in which a firm procures a single product from two suppliers, taking into account the suppliers' price and reliability differences. This motivates the suppliers to compete on these two factors. We investigate the equilibria of this supplier game and the firm's corresponding sourcing decisions. Our study shows that suppliers' reliability often plays a more important role than wholesale price in supplier competition and that maintaining high reliability and a high wholesale price is the ideal strategy for suppliers if multiple options exist. Conventional wisdom implies that low supply reliability and high demand uncertainty motivate dual-sourcing. We find that when the suppliers' shared market/transportation network is often disrupted and demand uncertainty is high, suppliers' competition on both price and reliability may render the sole-sourcing strategy optimal in some cases, depending on the form of the suppliers' cost functions. Moreover, a numerical study shows that when the cost or vulnerability (to market disruptions) of one supplier increases, its profit and that of the firm may not necessarily decrease under supplier competition.

17.
This paper proposes a novel statistical approach for optimally sizing a stand-alone photovoltaic (PV) system under climate change. Traditionally, the irradiation profile of a typical day or year is used to size PV systems. However, given the global warming crisis and the fact that no two years have the same weather conditions at a single site, the traditional approach often fails under extreme weather conditions. This paper presents a method to statistically model the year-by-year trend of climate change and incorporate it into the sizing formula, so that the results are optimal for current weather conditions and remain dependable in the future. The suitable sizes for the PV array and the number of batteries are thus obtained by pure computation, in contrast to the traditional simulation-based sizing curve method. An economic optimization procedure is also presented. In addition to the capital and maintenance costs, a penalty cost is introduced when service fails. A new statistics-based reliability index, the loss of power probability, defined in terms of threshold-based Extreme Value Theory, is presented. This index indicates an upper bound on reliability for applications and provides rich information about extreme events. A technological and economic comparison among the traditional daily energy balance method, the sizing curve method, and the proposed approach is conducted to demonstrate the usefulness of the new method.
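A threshold-based EVT index of this kind can be sketched with the standard generalized Pareto (peaks-over-threshold) tail estimator; the parameter values would come from fitting exceedances of the power-deficit series, and the numbers in the usage note are illustrative, not the paper's:

```python
def loss_of_power_probability(n_exceed, n_total, threshold, xi, sigma, level):
    """Peaks-over-threshold tail estimate (generalized Pareto, xi != 0):
    P(X > level) ~= (n_exceed/n_total) * (1 + xi*(level-threshold)/sigma)^(-1/xi)
    for level >= threshold. n_exceed of n_total observations exceed the
    threshold; xi and sigma are the fitted GPD shape and scale."""
    zeta = n_exceed / n_total  # empirical exceedance rate at the threshold
    return zeta * (1.0 + xi * (level - threshold) / sigma) ** (-1.0 / xi)
```

For instance, with 50 of 1,000 observations above the threshold and a fitted shape of 0.1 and scale of 1.0, the estimated probability of a deficit two units beyond the threshold is below one percent.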

18.
This paper considers a joint preventive maintenance (PM) and production/inventory control policy for an unreliable single-machine, mono-product manufacturing cell with stochastic, non-negligible corrective and preventive delays. The production/inventory control policy, based on the hedging point policy (HPP), consists in building and maintaining a safety stock of finished products in order to respond to demand and avoid shortages during maintenance actions. Without considering the impact of preventive and corrective actions on the overall performance of the production system, most authors working in the reliability and maintainability domains confirm that the age-based preventive maintenance policy (ARP) outperforms the classical block-replacement policy (BRP). In order to reduce the wastage incurred by the classical BRP, we consider a modified block replacement policy (MBRP), which consists in canceling a preventive maintenance action if the time elapsed since the last maintenance action exceeds a specified time threshold. The main objective of this paper is to determine the joint optimal policy that minimizes the overall cost, which is composed of corrective and preventive maintenance costs as well as inventory holding and backlog costs. A simulation model mimicking the dynamic and stochastic behavior of the manufacturing cell, based on realistic considerations of the behavior of industrial manufacturing cells, is proposed. Based on simulation results, the joint optimal MBRP/HPP parameters are obtained through a numerical approach that combines design of experiments, analysis of variance, and response surface methodologies. The joint optimal MBRP/HPP policy is compared to the classical joint ARP/HPP and BRP/HPP optimal policies, and the results show that the proposed MBRP/HPP outperforms the latter. Sensitivity analyses are also carried out to confirm the superiority of the proposed MBRP/HPP, and it is observed that, for practitioners, the proposed joint MBRP/HPP offers not only cost savings but is also easier to manage than the ARP/HPP policy.

19.
"Conflict management" and "conflict resolution" are not synonymous terms
Robbins sees functional conflict as an absolute necessity within organizations and explicitly encourages it. He explains: "Survival can result only when an organization is able to adapt to constant changes in the environment. Adaption is possible only through change, and change is stimulated by conflict." Robbins cites evidence indicating that conflict can be related to increased productivity and that critical thinking encourages well-developed decisions. He admits, however, that not all conflicts are good for the organization. Their functional or dysfunctional nature is determined by the impact of the conflict on the objectives of the organization. The author identifies several factors underlying the need for conflict stimulation: (1) managers who are surrounded by "yes men"; (2) subordinates who are afraid to admit ignorance or uncertainty; (3) decision-makers' excessive concern about hurting the feelings of others; or (4) an environment where new ideas are slow in coming forth. He suggests techniques for stimulating conflict: manipulating the communication channels (i.e., repression of information), changing the organizational structure (i.e., changes in size or position), and altering personal behavior factors (i.e., role incongruence). Robbins stresses that the actual method used to either resolve or stimulate conflict must be appropriate to the situation.

20.
User information satisfaction (UIS) is important because of its potential effects on MIS department goals, quality of user work life, and extent of voluntary usage of systems. Reliable measurement of UIS is important for providing evaluative information for both researchers and practitioners. This study used 92 managers and executives as subjects to compare the test-retest reliability of a widely used, 13-scale UIS instrument together with four summary questions under experimental and control conditions. The summary questions behaved more reliably than the detailed questions for all groups, perhaps because of problems with scale units and origins and with item heterogeneity. This suggests that researchers need more reliable measures of UIS and that practitioners need to exercise caution when collecting and interpreting UIS scores.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号