Similar Documents
A total of 20 similar documents were found (search time: 69 ms).
1.
Dynamic structural models were introduced as early as 1958 as “Industrial Dynamics,” but there has been little managerial use and little response in the academic world. Yet, the basic modeling methods provide an important mode for examining the broad interacting effects of large systems. More recent work appears to make structural and dynamic models understandable and accessible to individuals not trained in the decision sciences. The nature of the modeling methods is such that managers and policy makers in public systems can be involved directly in the model building process. The authors hope that this survey paper may help rekindle interest.
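To give a concrete flavor of the dynamic structural ("system dynamics") models the survey discusses, the following minimal Python sketch integrates a simple stock-and-flow structure (inventory, supply line, ordering rule) with Euler time-stepping. The example, its parameter values, and the step change in demand are all hypothetical illustrations, not material from the paper.

```python
# Minimal system-dynamics-style stock-and-flow sketch (hypothetical example).
# Two stocks (inventory and goods on order) evolve under an ordering rule and a
# first-order acquisition delay; Euler integration advances the state.

def simulate(weeks=40.0, dt=0.25):
    inventory = 120.0            # stock of finished goods (units)
    supply_line = 60.0           # stock of goods on order (units)
    target_inventory = 120.0     # desired inventory level
    adjustment_time = 4.0        # weeks taken to close an inventory gap
    lead_time = 3.0              # weeks for orders to arrive (first-order delay)
    history = []
    t = 0.0
    while t <= weeks:
        demand = 20.0 if t < 10 else 30.0                  # demand steps up at week 10
        arrivals = supply_line / lead_time                 # outflow of the supply line
        orders = demand + (target_inventory - inventory) / adjustment_time
        # Euler integration of both stocks
        supply_line += dt * (orders - arrivals)
        inventory += dt * (arrivals - demand)
        history.append((t, inventory))
        t += dt
    return history

if __name__ == "__main__":
    for t, inv in simulate()[::16]:   # print every 4 weeks
        print(f"week {t:5.1f}: inventory = {inv:6.1f}")
```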

2.
System unavailabilities for large complex systems such as nuclear power plants are often evaluated through the use of fault tree analysis. The system unavailability is obtained from a Boolean representation of a system fault tree. Even after truncation of higher-order terms, these expressions can be quite large, involving thousands of terms. A general matrix notation is proposed for the representation of Boolean expressions, which facilitates uncertainty and sensitivity analysis calculations.
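As a minimal illustration of the kind of calculation involved (not the paper's matrix notation), the sketch below approximates system unavailability from minimal cut sets under the usual independence and rare-event assumptions and cross-checks the result by Monte Carlo. The cut sets and component unavailabilities are invented.

```python
import math
import random

# Hypothetical minimal cut sets (sets of basic events) and component
# unavailabilities -- illustrative numbers only, not the paper's notation.
cut_sets = [{"A", "B"}, {"A", "C"}, {"D"}]
q = {"A": 1e-2, "B": 5e-3, "C": 2e-2, "D": 1e-4}

# Rare-event approximation: system unavailability ~ sum over cut sets of the
# product of the component unavailabilities (an upper bound for coherent systems).
u_rare = sum(math.prod(q[e] for e in cs) for cs in cut_sets)

# Monte Carlo cross-check assuming independent basic events: the system is
# unavailable whenever every event in at least one cut set has occurred.
def mc_unavailability(n=200_000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        down = {e for e, p in q.items() if rng.random() < p}
        if any(cs <= down for cs in cut_sets):
            failures += 1
    return failures / n

print(f"rare-event approximation: {u_rare:.3e}")
print(f"Monte Carlo estimate:     {mc_unavailability():.3e}")
```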

3.
Managers and quality practitioners are familiar with the linkage of the words quality and systems to denote a systematic approach to quality, as in BS5750 Quality Systems, say. There is, however, a more specialized use of the word systems that indicates the application of systems thinking and which gives rise to the adjective systemic (of, or pertaining to, a system) rather than systematic (carried out in a planned and orderly fashion). This paper examines the potential for applying systems thinking to the management of quality, with particular reference to one branch of systems work: the study of failures. The paper draws comparisons between quality and systems analysis of failures and points out that some failures could equally well be described as quality problems and vice versa. The paper argues that problems at the system level are frequently overlooked or avoided by those undertaking quality improvement programmes, partly because individuals within an organization may experience only different, smaller aspects of a systemic problem, and partly because the problem solvers may lack the means or motivation to tackle complex, poorly defined problem messes. It then goes on to suggest that use of a meta-method for problem analysis would enable such problems to be addressed. One such method that has been widely applied in the study of failures, the failures method, is described in detail and its application to a failure/quality problem is outlined.

4.
Critical infrastructure systems must be both robust and resilient in order to ensure the functioning of society. To improve the performance of such systems, we often use risk and vulnerability analysis to find and address system weaknesses. A critical component of such analyses is the ability to accurately determine the negative consequences of various types of failures in the system. Numerous mathematical and simulation models exist that can be used to this end. However, there are relatively few studies comparing the implications of using different modeling approaches in the context of comprehensive risk analysis of critical infrastructures. In this article, we suggest a classification of these models, which span from simple topologically-oriented models to advanced physical-flow-based models. Here, we focus on electric power systems and present a study aimed at understanding the tradeoffs between simplicity and fidelity in models used in the context of risk analysis. Specifically, the purpose of this article is to compare performance estimates achieved with a spectrum of approaches typically used for risk and vulnerability analysis of electric power systems, and to evaluate whether more simplified topological measures can be combined using statistical methods to serve as a surrogate for physical flow models. The results of our work provide guidance as to appropriate models, or combinations of models, to use when analyzing large-scale critical infrastructure systems, where simulation times quickly become prohibitive when using more advanced models, severely limiting the extent of the analyses that can be performed.
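The statistical-surrogate idea can be illustrated with a minimal sketch: fit a linear model that predicts an "expensive" flow-based consequence estimate from a few cheap topological measures. The data below are synthetic, and the named measures are generic examples rather than the article's specific models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data standing in for n analyzed contingencies: three cheap
# topological measures per contingency and an "expensive" physical-flow-based
# consequence estimate (here generated artificially for illustration).
n = 200
topo = rng.random((n, 3))   # e.g. lost degree, path-length increase, islanded load fraction
flow_consequence = 2.0 * topo[:, 0] + 0.5 * topo[:, 1] ** 2 + rng.normal(0, 0.05, n)

# Least-squares surrogate: flow consequence ~ linear combination of topological measures.
X = np.column_stack([np.ones(n), topo])
beta, *_ = np.linalg.lstsq(X, flow_consequence, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((flow_consequence - pred) ** 2) / np.sum((flow_consequence - flow_consequence.mean()) ** 2)
print("surrogate coefficients:", np.round(beta, 3))
print("in-sample R^2:", round(float(r2), 3))
```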

5.
Qualitative systems for rating animal antimicrobial risks using ordered categorical labels such as "high," "medium," and "low" can potentially simplify risk assessment input requirements used to inform risk management decisions. But do they improve decisions? This article compares the results of qualitative and quantitative risk assessment systems and establishes some theoretical limitations on the extent to which they are compatible. In general, qualitative risk rating systems that satisfy conditions found in real-world rating systems and guidance documents, and that have been proposed as reasonable, make two types of errors: (1) reversed rankings, i.e., assigning higher qualitative risk ratings to situations that have lower quantitative risks; and (2) uninformative ratings, e.g., frequently assigning the most severe qualitative risk label (such as "high") to situations with arbitrarily small quantitative risks, and assigning the same ratings to risks that differ by many orders of magnitude. Therefore, despite their appealing consensus-building properties, flexibility, and appearance of thoughtful process in input requirements, qualitative rating systems as currently proposed often do not provide sufficient information to discriminate accurately between quantitatively small and quantitatively large risks. The value of information (VOI) that they provide for improving risk management decisions can be zero if most risks are small but a few are large, since qualitative ratings may then be unable to confidently distinguish the large risks from the small. These limitations suggest that it is important to continue to develop and apply practical quantitative risk assessment methods, since qualitative ones are often unreliable.
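A small hypothetical numeric example makes the "reversed rankings" error concrete: with a conventional 3x3 rating matrix, the hazard with the lower expected loss can receive the higher qualitative label. The matrix, bins, and hazard values below are invented for illustration and are not taken from the article.

```python
# Hypothetical illustration of a "reversed ranking": the 3x3 matrix below and
# the two hazards are invented numbers, not data from the article.

def prob_bin(p):
    return "low" if p < 0.1 else "medium" if p < 0.5 else "high"

def cons_bin(c):
    return "low" if c < 2 else "medium" if c < 5 else "high"

# A typical qualitative rating matrix: rating[probability_bin][consequence_bin].
rating = {
    "low":    {"low": "Low",    "medium": "Low",    "high": "Medium"},
    "medium": {"low": "Low",    "medium": "Medium", "high": "High"},
    "high":   {"low": "Medium", "medium": "High",   "high": "High"},
}

hazards = {
    "A": (0.09, 12.0),   # (probability, consequence): expected loss 1.08
    "B": (0.15, 6.0),    # expected loss 0.90
}

for name, (p, c) in hazards.items():
    label = rating[prob_bin(p)][cons_bin(c)]
    print(f"hazard {name}: quantitative risk = {p * c:.2f}, qualitative rating = {label}")
# Hazard B receives the higher rating ("High") even though its quantitative
# risk (0.90) is lower than hazard A's (1.08) -- a reversed ranking.
```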

6.
The concept that Information Technology can be used as part of an organization's strategy changes its role in the organization. Whilst investments associated with current or 'more of the same' computer systems are proposed by the DP manager, requests for investments associated with the use of IT as a competitive weapon come from a much wider audience. Since the size of the investment may be large and its potential impact on the organization profound, there is a need to objectively analyze and manage such investments at the strategic level. Sophisticated models exist in the literature (Butler, 1988; Porter, 1988; Marsden, 1988; Synott, 1987), and there is much research to support the view that IT and corporate strategic models need to be aligned (Feeney & Brownlee, 1986; Haffenden, 1988; Brewer, 1987). However, the results from in-depth interviews with highly placed managers suggest that the decisions are based on more informal processes and that there exist differing views on the relationship between IT and corporate strategies.

7.
This paper discusses the significance of integrating enterprise systems and simulation to improve the shop floor's short-term production planning capability. The ultimate objectives are to identify the integration protocols, optimisation parameters and critical design artefacts, thereby identifying the key 'ingredients' that help set out a future research agenda in pursuit of optimum decision-making at the shop floor level. While the integration of enterprise systems and simulation gains widespread agreement within the existing work, questions about the optimality, scalability and flexibility of the resulting schedules remain unanswered. Furthermore, there seems to be no commonality or pattern as to how many core modules are required to enable such a flexible and scalable integration. Nevertheless, the objective of such integration remains clear, i.e. to achieve optimum total production time, lead time, cycle time, production release rates and cost. The issues presently faced by existing enterprise systems (ES), if properly addressed, can contribute to the achievement of manufacturing excellence and can help identify the building blocks of the software architectural platform enabling the integration.

8.
We perform an analysis of various queueing systems with an emphasis on estimating a single performance metric. This metric is defined to be the percentage of customers whose actual waiting time was less than their individual waiting time threshold. We label this metric the Percentage of Satisfied Customers (PSC). This threshold is a reflection of the customers' expectation of a reasonable waiting time in the system given its current state. Cases in which no system state information is available to the customer are referred to as "hidden queues." For such systems, the waiting time threshold is independent of the length of the waiting line, and it is randomly drawn from a distribution of threshold values for the customer population. The literature generally assumes that such thresholds are exponentially distributed. For these cases, we derive closed-form expressions for our performance metric for a variety of possible service time distributions. We also relax this assumption for cases where service times are exponential and derive closed-form results for a large class of threshold distributions. We analyze such queues for both single- and multi-server systems. We refer to cases in which customers may observe the length of the line as "revealed" queues. We perform a parallel analysis for both single- and multi-server revealed queues. The chief distinction is that for these cases, customers may develop threshold values that are dependent upon the number of customers in the system upon their arrival. The new perspective this paper brings to the modeling of the performance of waiting line systems allows us to rethink and suggest ways to enhance the effectiveness of various managerial options for improving the service quality and customer satisfaction of waiting line systems. We conclude with many useful insights on ways to improve customer satisfaction in waiting line situations that follow directly from our analysis.
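A minimal simulation sketch of the PSC metric is shown below under narrow assumptions (a single-server M/M/1 FIFO "hidden queue" with independently drawn exponential patience thresholds); it is not the paper's closed-form analysis. The simulated value is compared against the textbook M/M/1 waiting-time result for this special case.

```python
import random

def psc_mm1_hidden(lam=0.8, mu=1.0, gamma=0.5, n=200_000, seed=42):
    """Monte Carlo estimate of the Percentage of Satisfied Customers for an
    M/M/1 FIFO queue in which each customer's patience threshold is drawn
    independently from an Exponential(gamma) distribution (a 'hidden queue')."""
    rng = random.Random(seed)
    w = 0.0            # waiting time in queue of the current customer (Lindley recursion)
    satisfied = 0
    for _ in range(n):
        threshold = rng.expovariate(gamma)
        if w < threshold:
            satisfied += 1
        service = rng.expovariate(mu)
        interarrival = rng.expovariate(lam)
        w = max(0.0, w + service - interarrival)   # waiting time of the next arrival
    return satisfied / n

lam, mu, gamma = 0.8, 1.0, 0.5
rho = lam / mu
# The textbook M/M/1 waiting-time distribution gives, for an independent
# Exponential(gamma) threshold T, P(W < T) = E[exp(-gamma*W)]
# = (1 - rho) + rho*(mu - lam)/(mu - lam + gamma); this is a standard result,
# used here only as a cross-check, not the paper's derivation.
analytic = (1 - rho) + rho * (mu - lam) / (mu - lam + gamma)
print("simulated PSC:", round(psc_mm1_hidden(lam, mu, gamma), 4))
print("analytic  PSC:", round(analytic, 4))
```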

9.
In this paper we consider the use of data envelopment analysis (DEA) for the assessment of the efficiency of units whose output profiles exhibit specialisation. An example of this is found in agriculture, where a large number of different crops may be produced in a particular region but only a few farms actually produce each particular crop. Because of the large number of outputs, the use of conventional DEA models in such applications results in poor efficiency discrimination. We overcome this problem by specifying production trade-offs between the different outputs, relying on the methodology of Podinovski (J Oper Res Soc 2004;55:1311–22). The main idea of our approach is to relate the various outputs to the production of the main output. We illustrate this methodology by an application of DEA involving agricultural farms in different regions of Turkey. An integral part of this application is the elicitation of expert judgements in order to formulate the required production trade-offs. Their use in DEA models results in a significant improvement in efficiency discrimination. The proposed methodology should also be of interest for other applications of DEA where units may exhibit specialisation, such as applications involving hospitals or bank branches.
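For readers unfamiliar with DEA, the sketch below solves a standard input-oriented CCR envelopment model with scipy.optimize.linprog. The farm data are invented, and the sketch deliberately omits the production trade-off constraints of Podinovski's approach; it only shows the baseline LP that such trade-offs would augment.

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: 5 units, 2 inputs (rows of X), 2 outputs (rows of Y).
X = np.array([[20, 30, 40, 25, 35],
              [10, 15, 12, 20, 18]], dtype=float)
Y = np.array([[100, 120, 150,  90, 130],
              [ 30,  20,  10,  40,  25]], dtype=float)
m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(j0):
    """Input-oriented CCR efficiency of unit j0: minimise theta subject to
    sum_j lambda_j * x_j <= theta * x_j0 and sum_j lambda_j * y_j >= y_j0."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Input constraints:  X @ lambda - theta * x_j0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    b_in = np.zeros(m)
    # Output constraints: -Y @ lambda <= -y_j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

for j in range(n):
    print(f"unit {j}: CCR efficiency = {ccr_efficiency(j):.3f}")
```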

10.
It is sometimes argued that the use of increasingly complex "biologically-based" risk assessment (BBRA) models to capture increasing mechanistic understanding of carcinogenic processes may run into a practical barrier that cannot be overcome in the near term: the need for unrealistically large amounts of data about pharmacokinetic and pharmacodynamic parameters. This paper shows that, for a class of dynamical models widely used in biologically-based risk assessments, it is unnecessary to estimate the values of the individual parameters. Instead, the input-output behavior of such a model, specifically the ratio of the area under the curve (AUC) for any selected output to the AUC of the input, is determined by a single aggregate "reduced" constant, which can be estimated from measured input and output quantities. Uncertainties about the many individual parameter values of the model, and even uncertainties about its internal structure, are irrelevant for purposes of quantifying and extrapolating its input-output (e.g., dose-response) behavior. We prove that this is the case for the class of linear, constant-coefficient, globally stable compartmental flow systems used in many classical pharmacokinetic and low-dose PBPK models. Examples are cited suggesting that the value of the reduced parameter representing such a system's aggregate behavior may be relatively insensitive to changes in (and hence to uncertainties about) the values of individual parameters. The theory is illustrated with a model of the pharmacokinetics and metabolism of cyclophosphamide (CP), a drug widely used in cancer chemotherapy and as an immunosuppressive agent.
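The central claim, that the output-to-input AUC ratio collapses to a single reduced constant, is consistent with a standard linear-systems argument, sketched below in LaTeX for a generic stable compartmental model. This is a consistency sketch, not a reproduction of the paper's proof; the symbols A, b, and c are generic placeholders.

```latex
% Sketch (standard linear-systems argument): a globally stable, linear,
% constant-coefficient compartmental model with input u(t) and output y(t):
\dot{x}(t) = A\,x(t) + b\,u(t), \qquad y(t) = c^{\top}x(t), \qquad x(0)=0 .
% Integrating from 0 to \infty, with x(\infty)=0 by global stability:
0 = A\int_0^{\infty} x(t)\,dt + b\int_0^{\infty} u(t)\,dt
\quad\Longrightarrow\quad
\int_0^{\infty} x(t)\,dt = -A^{-1}b\,\mathrm{AUC}_u .
% Hence, for any output y = c^{\top}x,
\frac{\mathrm{AUC}_y}{\mathrm{AUC}_u} = -\,c^{\top}A^{-1}b \equiv k ,
% a single aggregate ``reduced'' constant: the individual rate constants enter
% only through the combination -c^{\top}A^{-1}b.
```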

11.
Recent work in the assessment of risk in maritime transportation systems has used simulation-based probabilistic risk assessment techniques. In the Prince William Sound and Washington State Ferries risk assessments, the studies' recommendations were backed up by estimates of their impact made using such techniques, and all recommendations were implemented. However, the level of uncertainty about these estimates was not available, leaving the decision makers unsure whether the evidence was sufficient to assess specific risks and benefits. The first step toward assessing the impact of uncertainty in maritime risk assessments is to model the uncertainty in the simulation models used. In this article, a study of the impact of proposed ferry service expansions in San Francisco Bay is used as a case study to demonstrate the use of Bayesian simulation techniques to propagate uncertainty throughout the analysis. The conclusions drawn in the original study are shown, in this case, to be robust to the inherent uncertainties. The main intellectual merit of this work is the development of Bayesian simulation techniques to model uncertainty in the assessment of maritime risk. However, Bayesian simulations have previously been implemented only as theoretical demonstrations; their use in a large, complex system may be considered state of the art in the field of the computational sciences.
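The general idea of propagating parameter uncertainty through a simulation can be sketched very simply: place Beta posteriors on incident probabilities given observed counts, sample parameter sets, and run a consequence model for each draw to obtain a credible interval for the risk metric. The counts, exposures, and consequence model below are invented toy numbers, not data or models from the San Francisco Bay study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical observed data: (incidents, exposure opportunities) for two
# situation types -- invented counts, not data from the ferry study.
counts = {"crossing": (3, 5_000), "overtaking": (1, 2_000)}
consequence = {"crossing": 10.0, "overtaking": 25.0}       # relative consequence per incident
annual_exposures = {"crossing": 12_000, "overtaking": 4_000}

def sampled_annual_risk(n_draws=10_000):
    """Draw incident probabilities from Beta(1 + k, 1 + n - k) posteriors and
    propagate each draw through the (toy) consequence model."""
    risks = np.zeros(n_draws)
    for name, (k, n) in counts.items():
        p = rng.beta(1 + k, 1 + n - k, size=n_draws)        # posterior samples
        risks += p * annual_exposures[name] * consequence[name]
    return risks

risk = sampled_annual_risk()
lo, med, hi = np.percentile(risk, [5, 50, 95])
print(f"annual risk (toy units): median {med:.1f}, 90% credible interval [{lo:.1f}, {hi:.1f}]")
```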

12.
Traditional probabilistic risk assessment (PRA), of the type originally developed for engineered systems, is still proposed for terrorism risk analysis. We show that such PRA applications are unjustified in general. The capacity of terrorists to seek and use information and to actively research different attack options before deciding what to do raises unique features of terrorism risk assessment that are not adequately addressed by conventional PRA for natural and engineered systems, in part because decisions based on such PRA estimates do not adequately hedge against the different probabilities that attackers may eventually act upon. These probabilities may differ from the defender's (even if the defender's experts are thoroughly trained, well-calibrated, unbiased probability assessors) because they may be conditioned on different information. We illustrate the fundamental differences between PRA and terrorism risk analysis, and suggest the use of robust decision analysis for risk management when attackers may know more about some attack options than we do.

13.
Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man oriented), FMEA (system oriented), or HAZOP (process oriented), is not satisfactory. The use of a dynamic modeling approach that allows multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, implemented with an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an OH&S perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced, in an explicit way, through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and consequently the interconnection of the measure constraints. This is reflected in the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework. The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. But, most of all, it opens perspectives in the field of risk comparisons and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.

14.
Large-scale inventory distribution systems typically comprise a hierarchy of retail stores and warehouses. This paper presents a model for finding the optimal design of such systems. Given the maximum number of facilities under consideration and their locations, the problem is to determine which facilities to include in the system and which products to stock at each in order to minimize the cost of the system. Demand for the products may be deterministic or stochastic. To use the model it is necessary to know the optimal inventory policies for the multi-echelon systems under consideration; however, an important feature of this work is that any multi-echelon model may be used in tandem with this design model. An example is included to illustrate the model and the two points that are the basis for its formulation: first, there is generally no single design that is best for all products; second, the design that is optimal for a given product is not necessarily the best design to use for that product when trying to minimize the cost of the entire system.
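A tiny hypothetical illustration of the second point: when facility fixed costs are shared across products, the design that is cheapest for each product in isolation need not be cheapest for the system. The data and brute-force enumeration below are invented and are not the paper's model.

```python
from itertools import combinations

# Invented data: two candidate warehouses with fixed opening costs, and the
# per-product cost of serving each product from each warehouse.
fixed_cost = {"W1": 100, "W2": 100}
serve_cost = {
    "product_A": {"W1": 10, "W2": 50},
    "product_B": {"W1": 50, "W2": 10},
}

def system_cost(open_facilities):
    """Total cost when each product is stocked at its cheapest open facility."""
    if not open_facilities:
        return float("inf")
    total = sum(fixed_cost[f] for f in open_facilities)
    for costs in serve_cost.values():
        total += min(costs[f] for f in open_facilities)
    return total

facilities = list(fixed_cost)
designs = [set(c) for r in range(1, len(facilities) + 1)
           for c in combinations(facilities, r)]
best = min(designs, key=system_cost)

# Per-product optima ignore the shared fixed costs and would open both warehouses.
for product, costs in serve_cost.items():
    print(product, "considered alone prefers", min(costs, key=costs.get))
print("system-optimal design:", sorted(best), "with total cost", system_cost(best))
```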

15.
Mixed Levels of Uncertainty in Complex Policy Models   (total citations: 3; self-citations: 0; citations by others: 3)
The characterization and treatment of uncertainty pose special challenges when modeling indeterminate or complex coupled systems, such as those involved in the interactions between human activity, climate, and the ecosystem. Uncertainty about model structure may become as important as, or more important than, uncertainty about parameter values. When uncertainty grows so large that prediction or optimization no longer makes sense, it may still be possible to use the model as a behavioral test bed to examine the relative robustness of alternative observational and behavioral strategies. When models must be run into portions of their phase space that are not well understood, different submodels may become unreliable at different rates. A common example involves running a time-stepped model far into the future. Several strategies can be used to deal with such situations. The probability of model failure can be reported as a function of time. Possible alternative surprises can be assigned probabilities, modeled separately, and combined. Finally, through the use of subjective judgments, one may be able to combine, and over time shift between, models, moving from more detailed models to progressively simpler order-of-magnitude models, and perhaps, ultimately, on to simple bounding analysis.

16.
This paper describes and illustrates the architecture of computer-based Dynamic Risk Management Systems (DRMS) designed to assist real-time risk management decisions for complex physical systems, for example, engineered systems such as offshore platforms or medical systems such as patient treatment in Intensive Care Units. A key characteristic of the DRMSs that we describe is that they are hybrid, combining the powers of Probabilistic Risk Analysis methods and heuristic Artificial Intelligence techniques. A control module determines whether the situation corresponds to a specific rule or regulation, and is clear enough or urgent enough for an expert system to make an immediate recommendation without further analysis of the risks involved. Alternatively, if time permits and if the uncertainties justify it, a risk and decision analysis module formulates and evaluates options, including that of gathering further information. This feature is particularly critical since, most of the time, the physical system is only partially observable, i.e., the signals observed may not permit unambiguous characterization of its state. The DRMS structure is also dynamic in that, for a given time window (e.g., 1 day or 1 hour), it anticipates the physical system's state (and, when appropriate, performs a risk analysis), accounting for its evolution, its mode of operations, the predicted external loads and problems, and the possible changes in the set of available options. Therefore, we specifically address the issue of dynamic information gathering for decision-making purposes. The concepts are illustrated focusing on the risk and decision analysis modules for a particular case of real-time risk management on board offshore oil platforms, namely two types of gas compressor leaks, one progressive and one catastrophic. We briefly describe the DRMS proof of concept produced at Stanford, and the prototype (ARMS) that is being constructed by Bureau Veritas (Paris) based on these concepts.

17.
It is critical for complex systems to effectively recover, adapt, and reorganize after system disruptions. Common approaches for evaluating system resilience typically study a single measure of performance at a time, such as with a single resilience curve. However, multiple measures of performance are needed for complex systems that involve many components, functions, and noncommensurate valuations of performance. Hence, this article presents a framework for: (1) modeling resilience for complex systems with competing measures of performance, and (2) modeling decision making for investing in these systems using multiple stakeholder perspectives and multicriteria decision analysis. This resilience framework, which is described and demonstrated in this article via a real-world case study, will be of interest to managers of complex systems, such as supply chains and large-scale infrastructure networks.
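One simple way to combine the two ingredients the abstract names (multiple resilience curves and multiple stakeholder perspectives) is sketched below: each measure's resilience is summarized as the ratio of area under its recovery curve to the undisrupted baseline, and stakeholder weights are aggregated by a weighted sum. The curves, weights, and option names are invented and do not come from the article's case study or its specific multicriteria method.

```python
import numpy as np

t = np.linspace(0, 10, 101)   # time after the disruption

def recovery_curve(drop, recovery_rate):
    """Toy performance trajectory: immediate drop, then exponential recovery toward 1."""
    return 1.0 - drop * np.exp(-recovery_rate * t) * 0 - drop * np.exp(-recovery_rate * t) + drop * 0 if False else 1.0 - drop * np.exp(-recovery_rate * t)

# Two investment options, each evaluated on two noncommensurate measures
# (e.g., throughput and safety) -- all numbers invented for illustration.
options = {
    "harden":  {"throughput": recovery_curve(0.4, 0.3), "safety": recovery_curve(0.2, 0.2)},
    "respond": {"throughput": recovery_curve(0.6, 1.0), "safety": recovery_curve(0.5, 1.5)},
}

# Stakeholder weights over the two measures (each row sums to 1).
stakeholders = {"operator": {"throughput": 0.8, "safety": 0.2},
                "regulator": {"throughput": 0.3, "safety": 0.7}}

def resilience(curve):
    """Area under the recovery curve divided by the undisrupted baseline area."""
    return np.trapz(curve, t) / np.trapz(np.ones_like(t), t)

for name, curves in options.items():
    per_measure = {m: resilience(c) for m, c in curves.items()}
    scores = {s: sum(w[m] * per_measure[m] for m in w) for s, w in stakeholders.items()}
    print(name, {k: round(v, 3) for k, v in per_measure.items()}, "->",
          {s: round(v, 3) for s, v in scores.items()})
```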

18.
Slob, W., Pieters, M. N. Risk Analysis, 1998, 18(6): 787-798.
The use of uncertainty factors in the standard method for deriving acceptable intake or exposure limits for humans, such as the Reference Dose (RfD), may be viewed as a conservative way of taking various uncertainties into account. As an obvious alternative, the use of uncertainty distributions instead of uncertainty factors is gaining attention. This paper presents a comprehensive discussion of a general framework that quantifies both the uncertainties in the no-adverse-effect level in the animal (using a benchmark-like approach) and the uncertainties in the various extrapolation steps involved (using uncertainty distributions). This approach results in an uncertainty distribution for the no-adverse-effect level in the sensitive human subpopulation, reflecting the overall scientific uncertainty associated with that level. A lower percentile of this distribution may be regarded as an acceptable exposure limit (e.g., RfD) that takes account of the various uncertainties in a nonconservative fashion. The same methodology may also be used as a tool to derive a distribution for possible human health effects at a given exposure level. We argue that in a probabilistic approach the uncertainty in the estimated no-adverse-effect level in the animal should be explicitly taken into account. Not only is this source of uncertainty too large to be ignored, it also has repercussions for the quantification of the other uncertainty distributions.
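The general idea, combining an uncertainty distribution for the animal no-adverse-effect level with distributions for the extrapolation steps and reading off a lower percentile, can be sketched with a short Monte Carlo calculation. The lognormal distributions and parameter values below are illustrative assumptions, not the distributions fitted or recommended in the paper.

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 100_000

# Illustrative uncertainty distributions (all values are assumptions): animal
# no-adverse-effect level in mg/kg-day, and two lognormal extrapolation factors
# (interspecies and intraspecies).
animal_nael = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)
interspecies = rng.lognormal(mean=np.log(4.0), sigma=0.4, size=n)
intraspecies = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=n)

# Uncertainty distribution for the no-adverse-effect level in the sensitive
# human subpopulation.
human_nael = animal_nael / (interspecies * intraspecies)

# A lower percentile (here the 5th) can play the role of an exposure limit.
limit = np.percentile(human_nael, 5)
print(f"median human NAEL: {np.median(human_nael):.3f} mg/kg-day")
print(f"5th percentile (candidate exposure limit): {limit:.3f} mg/kg-day")
```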

19.
Exogenous agents may perturb development during the embryonic period and adversely affect the formation of organs. However, adverse effects on development are not limited to the embryonic period, nor are the manifestations restricted solely to outright gross structural malformation; they may instead be expressed as a decrement or aberration of postnatal function. Susceptibility to altered development may extend well into the postnatal period. Studies of functional parameters in several organ systems have demonstrated the broad-based susceptibility, subtlety of expression, and potentially long-lasting effects of altered development assessed by physiologic assays. Adverse effects on functional development, whether in the CNS, reproductive, gastrointestinal, genitourinary, respiratory, or immune systems, etc., merit continuing investigation. From the viewpoint of risk estimation and hazard detection, evaluations of postnatal functional parameters may be relevant for several reasons. First, such parameters may serve as low-dose triggers. Second, they may be useful as a focal point for epidemiological studies. Finally, a more thorough understanding of the degree and magnitude of such postnatal functional deficits is needed, since an adverse maternal effect may be transient, considered acceptable, or unperceived, while the effect on the conceptus may be permanent and severe. The immune and respiratory systems are discussed as two examples of how subtle and protean adverse effects on functional development may be.

20.
The main advantage of deep lane storage systems compared with conventional high-bay warehouses is better space utilization, because products are stored in channels one pallet behind the other. However, in deep lane storage systems the last-in-first-out principle holds, and direct access is lost to all pallets except the last one to enter a channel. Operating deep lane storage systems effectively, namely providing high throughput rates even at times of high storage rack utilization, requires a sophisticated operational planning system. We describe a completely new concept consisting of five modules for storage and retrieval assignments, as well as for a reorganization of storage location occupations.
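A minimal sketch of the LIFO channel behavior and of one simple storage rule (dedicating each channel to a single SKU so that retrievals never require reshuffling) is shown below. The channels, SKUs, and capacity are invented, and the rule is a common heuristic used only for illustration; it is not the five-module concept described in the paper.

```python
# Each channel is a LIFO stack of pallets; only the last pallet stored in a
# channel can be retrieved directly.  All data are invented for illustration.
CHANNEL_CAPACITY = 4
channels = {f"ch{i}": [] for i in range(3)}
dedicated_sku = {}   # simple rule: dedicate each channel to one SKU

def store(sku):
    """Put a pallet into a channel dedicated to its SKU (open a new one if needed)."""
    for name, stack in channels.items():
        if dedicated_sku.get(name, sku) == sku and len(stack) < CHANNEL_CAPACITY:
            dedicated_sku[name] = sku
            stack.append(sku)
            return name
    raise RuntimeError("no free channel")

def retrieve(sku):
    """Retrieve a pallet of the given SKU; LIFO access means it is taken from the
    back of a dedicated channel, so no reshuffling is ever required."""
    for name, stack in channels.items():
        if stack and stack[-1] == sku:
            stack.pop()
            if not stack:
                dedicated_sku.pop(name, None)   # an empty channel becomes free again
            return name
    raise RuntimeError(f"no directly accessible pallet of {sku}")

for sku in ["A", "A", "B", "A", "B"]:
    store(sku)
print("after storage:", channels)
print("retrieved B from", retrieve("B"))
print("retrieved A from", retrieve("A"))
print("after retrieval:", channels)
```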

