Similar Documents (20 results)
1.
We study several finite-horizon, discrete-time, dynamic, stochastic inventory control models with integer demands: the newsvendor model, its multi-period extension, and a single-product, multi-echelon assembly model. Equivalent linear programs are formulated for the corresponding stochastic dynamic programs, and integrality results are derived from the total unimodularity of the constraint matrices. Specifically, for all these models, starting with integer inventory levels, we show that there exist optimal policies that are integral. For the most general single-product, multi-echelon assembly system model, a similar argument yields integrality results for a practical alternative to stochastic dynamic programming, namely rolling-horizon optimization. We also present a different approach to proving integrality results for stochastic inventory models, based on a generalization we propose of the one-dimensional notion of piecewise linearity with integer breakpoints to higher dimensions. The usefulness of this new approach is illustrated by establishing the integrality of both the dynamic programming and rolling-horizon optimization models of a two-product capacitated stochastic inventory control system.
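A minimal sketch of the kind of integer-demand model the abstract describes, assuming hypothetical holding/backorder costs and a hypothetical demand pmf (none of these values come from the paper): enumerating integer stocking levels recovers an optimal integral policy for the one-period newsvendor.

```python
# Minimal single-period newsvendor sketch with integer demand. It illustrates
# that an optimal integral stocking level exists; h, b, and the demand pmf
# are illustrative assumptions, not values from the paper.
h, b = 1.0, 4.0   # unit holding (overage) and backorder (underage) costs (assumed)
demand_pmf = {0: 0.10, 1: 0.20, 2: 0.30, 3: 0.25, 4: 0.15}

def expected_cost(y):
    """Expected overage/underage cost of stocking the integer level y."""
    return sum(p * (h * max(y - d, 0) + b * max(d - y, 0))
               for d, p in demand_pmf.items())

best_y = min(range(max(demand_pmf) + 1), key=expected_cost)
print(best_y, expected_cost(best_y))   # critical-ratio optimum at an integer
```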

2.
In this article, we propose an integrated direct and indirect flood risk model for small- and large-scale flood events, allowing dynamic modeling of total economic losses from a flood event through to full economic recovery. A novel approach translates direct losses of both capital and labor into production losses using the Cobb-Douglas production function, aiming at improved consistency in loss accounting. The recovery of the economy is modeled using a hybrid input-output model and applied to the port region of Rotterdam, using six flood events with return periods from 1/10 up to 1/10,000. This procedure yields better insight into the consequences of both high- and low-probability floods. The results show that, in terms of expected annual damage, direct losses remain more substantial than indirect losses (approximately 50% larger), but for low-probability events the indirect losses outweigh the direct losses. Furthermore, we explore parameter uncertainty using a global sensitivity analysis, and vary critical assumptions in the modeling framework related to, among others, flood duration and labor recovery, using a scenario approach. Our findings have two important implications for disaster modelers and practitioners. First, high-probability events are qualitatively different from low-probability events in terms of the scale of damages and the full recovery period. Second, parameter influence differs substantially between high-probability and low-probability flood modeling. These findings suggest that a detailed approach is required when assessing the flood risk for a specific region.
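The Cobb-Douglas translation step can be sketched in a few lines; the productivity level, capital share, and loss fractions below are illustrative assumptions, not values from the Rotterdam study.

```python
# Hedged sketch of the Cobb-Douglas step described above: direct flood losses
# to capital (K) and labor (L) are translated into lost production. All
# parameter values are illustrative assumptions.
A, alpha = 1.0, 0.33          # total factor productivity and capital share (assumed)

def output(K, L):
    """Cobb-Douglas production: Y = A * K^alpha * L^(1 - alpha)."""
    return A * K**alpha * L**(1 - alpha)

K0, L0 = 100.0, 100.0          # pre-flood capital and labor stocks (assumed)
K1, L1 = 0.90 * K0, 0.95 * L0  # 10% of capital, 5% of labor lost (assumed)
production_loss = output(K0, L0) - output(K1, L1)
print(f"production loss: {production_loss:.2f}")
```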

3.
This article interweaves widely published empirical frameworks with a new paradigm: stochastic dynamic decision-making tools that capture the trade-offs among multiple, conflicting criteria in order to design a resilient shock absorber (RSA) for a disrupted supply chain network (SCN). Modern SCNs encounter ‘excursion events’ of different kinds, mainly due to uncertain and turbulent markets, catastrophes, accidents, industrial disputes/strikes in organisations, terrorism and asymmetric information. An ‘excursion event’ is an unpredictable event that effectively shuts down, or has a relatively large negative impact on, the performance of at least one member of a system for a relatively long time. The article conceptualises an analytical framework that keeps an SCN from propagating the ill effects of ‘excursion events’ and maintains the network at a desired equilibrium level. A broad analytical view of econophysics is adopted, using the definition of a ‘system’ from physics. An example derived from the 9/11 case illustrates the efficacy of the proposed design. The devised RSA facilitates the assessment of resiliency strategies for SCNs prone to excursion events characterised by low probability of occurrence and high impact. The shock-dampening fortification framework also enables practitioners to identify and quantitatively assess the islands of excursion events in an SCN.

4.
Low-earth orbit (LEO) satellite systems continue to provide mobile communication services. Cost containment in system maintenance is a critical factor for continued operation. Satellite lifetimes are finite and follow a stochastic process, and since satellite replenishment is the most significant ongoing cost of operation, finding optimal launch policies is of paramount importance. This paper formulates the satellite launch problem as a Markovian decision model that can be solved using dynamic programming. The policy space of the system is enormous, and traditional action-space dominance rules do not apply. To solve the dynamic program for realistic problem sizes, a novel procedure for limiting the state space considered in the dynamic program is developed. The viability of the proposed solution procedure is demonstrated on example problems using realistic system data. The policies derived by the proposed procedure are superior to those currently considered by LEO system operators, and result in substantial annual cost savings.
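A toy finite-horizon Markov decision model conveys the dynamic-programming formulation described above: the state is the number of working satellites, each survives a period with probability 1 - q, and the action is how many satellites to launch. All parameters and the cost structure are illustrative assumptions, not the paper's data.

```python
from functools import lru_cache
from math import comb

q = 0.10              # per-period satellite failure probability (assumed)
launch_cost = 5.0     # cost per satellite launched (assumed)
shortfall_pen = 20.0  # per-period penalty per satellite below target (assumed)
target, max_sat, T = 3, 5, 4  # coverage target, constellation cap, horizon

@lru_cache(maxsize=None)
def V(t, s):
    """Minimum expected cost-to-go with s working satellites at period t."""
    if t == T:
        return 0.0
    best = float("inf")
    for a in range(max_sat - s + 1):          # how many satellites to launch
        n = s + a
        cost = launch_cost * a + sum(
            comb(n, k) * (1 - q) ** k * q ** (n - k)
            * (shortfall_pen * max(target - k, 0) + V(t + 1, k))
            for k in range(n + 1)             # k satellites survive the period
        )
        best = min(best, cost)
    return best

print(V(0, 3))   # optimal expected cost starting from 3 working satellites
```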

5.
The end states reached by an engineered system during an accident scenario depend not only on the sequence of events composing the scenario, but also on their timing and magnitudes. Including these additional features within an overarching framework can render the analysis infeasible in practical cases, owing to the high dimension of the system state space and the computational effort needed to explore the possible system evolutions in search of the interesting (and very rare) failure evolutions. To tackle this hurdle, we introduce in this article a framework for efficiently probing the space of event sequences of a dynamic system by means of a guided Monte Carlo simulation. The framework is semi-automatic and allows embedding the analyst's prior knowledge about the system and his/her objectives of analysis. Specifically, it adaptively and intelligently allocates the simulation effort to those sequences leading to outcomes of interest for the objectives of the analysis, e.g., typically those that are more safety-critical (and/or rare). The diversification in the filling of the state space produced by the preference-guided exploration also allows the retrieval of critical system features, which can be useful to analysts and designers in taking appropriate means of prevention and mitigation of dangerous and/or unexpected consequences. A dynamic gas transmission system is considered as a case study to demonstrate the application of the method.
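The paper's guided exploration is richer than plain importance sampling, but the sketch below (a stand-in, not the authors' algorithm) conveys the core mechanism: over-sample rare failure branches and re-weight each run by its likelihood ratio, so simulation effort concentrates on safety-critical sequences without biasing the estimate. The three-component system, its probabilities, and the failure criterion are assumptions.

```python
import random

p_fail = [0.01, 0.02, 0.005]  # true component failure probabilities (assumed)
p_bias = [0.30, 0.30, 0.30]   # tilted probabilities used to guide exploration

def estimate(n_samples=100_000, seed=0):
    """Unbiased rare-event estimate via likelihood-ratio re-weighting."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        weight, failures = 1.0, 0
        for pf, pb in zip(p_fail, p_bias):
            if rng.random() < pb:             # sample under the biased law
                weight *= pf / pb
                failures += 1
            else:
                weight *= (1.0 - pf) / (1.0 - pb)
        if failures >= 2:                     # assumed system failure criterion
            total += weight
    return total / n_samples

print(estimate())   # estimates a probability of order 1e-4 efficiently
```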

6.
This article introduces a new integrated scenario-based evacuation (ISE) framework to support hurricane evacuation decision making. It explicitly captures the dynamics, uncertainty, and human-natural system interactions that are fundamental to the challenge of hurricane evacuation, but have not been fully captured in previous formal evacuation models. The hazard is represented with an ensemble of probabilistic scenarios, population behavior with a dynamic decision model, and traffic with a dynamic user equilibrium model. The components are integrated in a multistage stochastic programming model that minimizes risk and travel times to provide a tree of evacuation order recommendations and an evaluation of the risk and travel time performance of that solution. The ISE framework recommendations offer an advance in the state of the art because they: (1) are based on an integrated hazard assessment (designed to ultimately include inland flooding), (2) explicitly balance the sometimes competing objectives of minimizing risk and minimizing travel time, (3) offer a well-hedged solution that is robust under the range of ways the hurricane might evolve, and (4) leverage the substantial value of increasing information (or decreasing degree of uncertainty) over the course of a hurricane event. A case study for Hurricane Isabel (2003) in eastern North Carolina is presented to demonstrate how the framework is applied, the type of results it can provide, and how it compares to two available methods: a single-scenario deterministic analysis and a two-stage stochastic program.
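A two-stage toy version (the actual framework is multistage) illustrates the hedging logic: an early evacuation order is weighed against scenario-specific recourse once the forecast resolves. Scenario probabilities and costs are invented for illustration.

```python
# Toy two-stage analogue of the stochastic evacuation program above. All
# scenario probabilities, risk costs, and travel costs are assumptions.
scenarios = {                 # name: (probability, risk cost if exposed, travel cost)
    "landfall": (0.3, 100.0, 20.0),
    "graze":    (0.5,  30.0, 20.0),
    "miss":     (0.2,   0.0, 20.0),
}

def expected_cost(evacuate_now):
    total = 0.0
    for p, risk, travel in scenarios.values():
        if evacuate_now:
            total += p * travel                 # population is already out
        else:
            # stage-2 recourse: evacuate late at double travel cost (assumed), or stay
            total += p * min(risk, 2.0 * travel)
    return total

best = min((True, False), key=expected_cost)
print(best, expected_cost(best))   # whether hedging early pays off here
```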

7.
We use a simulation model called ‘SISCO’ to examine, in a periodic order-up-to inventory system, the supply chain effects of stochastic lead times, of information sharing, and of the quality of that information. We test the accuracy of the simulation by verifying the results in Chen et al. (2000a) and Dejonckheere et al. (2004). We find that lead-time variability exacerbates variance amplification in a supply chain, and that information sharing and information quality are highly significant. For example, using the assumptions in Chen et al. (2000a) and Dejonckheere et al. (2004), we find in a numerical experiment on a customer-retailer-wholesaler-distributor-factory supply chain that information sharing attenuates variance amplification at the factory by nearly 50 percent. Other assumptions we make are based on interviews or conversations with managers at large supply chains.
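SISCO itself is not reproduced here, but a minimal single-echelon sketch of the order-up-to dynamics it simulates shows how forecasting over a lead time amplifies order variance (the bullwhip effect). The AR(1) demand, lead time, and moving-average window are assumptions, not the paper's settings.

```python
import random
import statistics

def bullwhip(T=20_000, L=2, m=5, rho=0.5, mu=20.0, sigma=2.0, seed=1):
    """Variance ratio of orders to demand under an order-up-to policy."""
    rng = random.Random(seed)
    d_prev, hist = mu, [mu] * m
    S_prev = (L + 1) * mu
    demands, orders = [], []
    for _ in range(T):
        d = mu + rho * (d_prev - mu) + rng.gauss(0.0, sigma)  # AR(1) demand
        d_prev = d
        hist = hist[1:] + [d]
        f = sum(hist) / m                  # moving-average forecast
        S = (L + 1) * f                    # order-up-to level over the lead time
        orders.append(S - S_prev + d)      # replenishment order placed this period
        S_prev = S
        demands.append(d)
    return statistics.variance(orders) / statistics.variance(demands)

print(f"bullwhip ratio: {bullwhip():.2f}")   # > 1 means amplification upstream
```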

8.
This article proposes a novel mathematical optimization framework for identifying the vulnerabilities of electric power infrastructure systems (a paramount example of critical infrastructure) to natural hazards. In this framework, the potential impacts of a specific natural hazard on an infrastructure are first evaluated in terms of failure and recovery probabilities of system components. These are then fed into a bi-level attacker-defender interdiction model to determine the critical components whose failures lead to the largest loss of system functionality. The proposed framework bridges the gap between the difficulty of accurately predicting hazard information in classical probability-based analyses and the overconservatism of pure attacker-defender interdiction models. Mathematically, the proposed model is a bi-level max-min mixed-integer linear program (MILP) that is challenging to solve. For its solution, the problem is cast as an equivalent one-level MILP that can be handled by efficient global solvers. The approach is applied to a case study concerning the vulnerability identification of the georeferenced RTS24 test system under simulated wind storms. The numerical results demonstrate the effectiveness of the proposed framework for identifying critical locations under multiple hazard events, thus providing a useful tool to help decision makers make more informed pre-hazard preparation decisions.
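The real model dualizes the inner problem into a one-level MILP; the brute-force toy below (an assumption-laden stand-in, not the paper's formulation) only illustrates the attacker-defender logic: the attacker disables k lines to maximize load shed, and the defender serves what the surviving capacity allows.

```python
from itertools import combinations

line_cap = {"L1": 40, "L2": 30, "L3": 20, "L4": 10}  # line capacities (assumed)
demand, k = 70, 2                                    # total load; attack budget

def served(disabled):
    """Defender's best response: serve whatever surviving capacity allows."""
    cap = sum(c for name, c in line_cap.items() if name not in disabled)
    return min(cap, demand)

worst = min(combinations(line_cap, k), key=served)   # attacker's optimum
print(worst, "load shed:", demand - served(worst))
```

A full-scale version would replace this enumeration with the dualized one-level MILP the abstract describes.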

9.
The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events.

10.
Information delays exist in an inventory system when it takes time to collect, process, validate, and transmit inventory/demand data. A general framework is developed in this paper to describe information flows in an inventory system with information delays. We characterize the sufficient statistics for making optimal decisions. When the ordering cost is linear, the optimality of a state-dependent base-stock policy is established even when information flows are allowed to cross over time. Additional insights into the problem are obtained via a comparison between our models and the models with stochastic order lead times. We also show that inventory can substitute for information and vice versa.

11.
Floods are a natural hazard evolving in space and time according to meteorological and river basin dynamics, so that a single flood event can affect different regions over the event duration. This physical mechanism introduces spatio-temporal relationships between flood records and losses at different locations over a given time window that should be taken into account for an effective assessment of collective flood risk. However, since extreme floods are rare events, the limited number of historical records usually prevents a reliable frequency analysis. To overcome this limit, we move from the analysis of extreme events to the modeling of continuous stream flow records, preserving the spatio-temporal correlation structure of the entire process and making more efficient use of the information provided by continuous flow records. The approach is based on the dynamic copula framework, which splits the modeling of spatio-temporal properties by coupling time series models that account for temporal dynamics with multivariate distributions that describe spatial dependence. The model is applied to 490 stream flow sequences recorded across 10 of the largest river basins in central and eastern Europe (Danube, Rhine, Elbe, Oder, Weser, Meuse, Rhone, Seine, Loire, and Garonne). Using available proxy data to quantify local flood exposure and vulnerability, we show that temporal dependence plays a key role in reproducing interannual persistence, and thus the magnitude and frequency of annual proxy flood losses aggregated at a basin-wide scale, while copulas allow the preservation of the spatial dependence of losses at weekly and annual time scales.
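The spatial half of the dynamic copula framework can be sketched with a Gaussian copula (used here as an assumed example; the paper's copula family may differ): correlated uniforms induce dependence between two sites while each keeps its own, here gamma-distributed, flow marginal. The temporal AR component is omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.7                                      # spatial correlation (assumed)
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(np.zeros(2), cov, size=10_000)
u = stats.norm.cdf(z)                          # Gaussian copula: correlated uniforms
flows = stats.gamma(a=2.0, scale=50.0).ppf(u)  # site-specific flow marginals (assumed)
print(np.corrcoef(flows.T)[0, 1])              # dependence survives the transform
```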

12.
We develop an analytical framework for studying the role capacity costs play in shaping the optimal differentiation strategy, in terms of prices, delivery times, and delivery reliabilities, of a profit-maximizing firm selling two variants (express and regular) of a product in a capacitated environment. We first investigate three special cases. The first is an existing model of price and delivery-time differentiation with exogenous reliabilities, which we only review. The second focuses on time-based (i.e., length and reliability) differentiation with exogenous prices. The third deals with deciding on all features of an express variant when a regular product already exists in the marketplace. We subsequently address the integrative framework of time-and-price-based differentiation for both products in a numerical study. Our results shed light on the role that customer preferences over delivery times, reliabilities, and prices, together with absolute and relative capacity costs, play in the firm's optimal product positioning policy.
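A back-of-envelope queueing calculation (an M/M/1 assumption made here for illustration, not the paper's model) shows how delivery-time and reliability promises interact with capacity: the shortest lead time quotable at reliability r is t = -ln(1 - r)/(mu - lambda).

```python
from math import log

def min_quote(lam, mu, r):
    """Shortest lead time quotable at reliability r in an M/M/1 queue."""
    # Sojourn time W is exponential with rate mu - lam, so
    # P(W <= t) = 1 - exp(-(mu - lam) * t) >= r  <=>  t >= -ln(1 - r)/(mu - lam).
    return -log(1.0 - r) / (mu - lam)

for r in (0.90, 0.95, 0.99):                 # tighter reliability, longer quote
    print(r, round(min_quote(lam=8.0, mu=10.0, r=r), 3))
```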

13.
Fault Trees vs. Event Trees in Reliability Analysis
Reliability analysis is the study of both the probability and the process of failure of a system. Several tools are available for this purpose, for example, fault trees, event trees, or the GO technique. These tools are often complementary and address different aspects of the question. Experience shows that there is sometimes confusion between two of these methods: fault trees and event trees. Sometimes identified as equivalent, they in fact serve different purposes. Fault trees lay out relationships among events. Event trees lay out sequences of events linked by conditional probabilities. At least in theory, event trees can better handle notions of continuity (logical, temporal, and physical), whereas fault trees are most powerful for identifying and simplifying failure scenarios. Different characteristics of the system in question (e.g., a dam or a nuclear reactor) may guide the choice between fault trees, event trees, or a combination of the two. Some elements of this choice are examined, and observations are made about the relative capabilities of the two methods.
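A hedged numeric illustration of the distinction: a fault tree combines basic-event probabilities through logic gates, while an event tree chains conditional branch probabilities along a sequence. All probabilities below are hypothetical, and basic events are assumed independent.

```python
def gate_and(*ps):
    """AND gate over independent basic-event probabilities."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def gate_or(*ps):
    """OR gate over independent basic-event probabilities."""
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

# Fault tree: TOP = (pump fails AND backup pump fails) OR control valve fails
p_top = gate_or(gate_and(0.01, 0.10), 0.005)

# Event tree: initiator, then conditional failures of two safety layers in sequence
p_seq = 0.02 * 0.10 * 0.50   # P(init) * P(layer1 fails | init) * P(layer2 fails | ...)
print(p_top, p_seq)
```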

14.
Some program managers share a common belief that adding a redundant component to a system cuts the probability of failure in half. This is true only if the failures of the redundant components are independent events, which is rarely the case: for example, the redundant components may be subjected to the same external loads. In general, however, the failure probability of the system does decrease. Nonetheless, the redundant element comes at a cost, even if that cost is less than the cost of developing the first element when both are based on the same design. Identical parts save the most in design costs, but are subject to common failure modes from possible design errors, which limit the effectiveness of the redundancy. In the development of critical systems, managers thus need to decide whether the costs of a parallel system are justified by the increase in the system's reliability. NASA, for example, has used redundant spacecraft to increase the chances of mission success, which worked well in the cases of the Viking and Voyager missions. These two successes, however, do not guarantee future ones. We present here a risk analysis framework accounting for dependencies, to support the decision to launch a twin mission of identical spacecraft at the same time, given the incremental costs and risk-reduction benefits of the second spacecraft. We illustrate this analytical approach with the case of the Mars Exploration Rovers launched by NASA in 2003, for which we had performed this assessment in 2001.
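A quick calculation makes the point concrete. With two redundant units of failure probability p, independence gives p^2, but a common-cause fraction beta (the beta-factor model, an assumed dependency structure not taken from the article) keeps the system failure probability far above p^2.

```python
# Two redundant units with failure probability p; beta is the assumed fraction
# of failures driven by a shared (common) cause, as in the beta-factor model.
p, beta = 0.01, 0.10
independent = p ** 2                             # 1.0e-4
common_cause = beta * p + ((1 - beta) * p) ** 2  # ~1.1e-3: an order of magnitude worse
print(f"independent: {independent:.2e}, with common cause: {common_cause:.2e}")
```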

15.
Risk Analysis, 2018, 38(8): 1576-1584
Fault trees are used in reliability modeling to create logical models of the fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of the basic events that are input to the model. Typically, the basic event probabilities are not known exactly but are modeled as probability distributions; therefore, the top event probability is also represented by an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method thus appear to be attractive, practical alternatives for evaluating the uncertainty in the output of fault trees and similar multilinear models.
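The closed-form idea can be sketched for a tiny multilinear model, TOP = X1*X2 + X3, with lognormal basic events: each cut-set product is exactly lognormal, and their sum is approximated as lognormal by matching its first two moments (a Fenton-Wilkinson-style step assumed here; the article's exact derivation may differ). All parameters are invented.

```python
import numpy as np

mu = np.array([-5.0, -4.0, -7.0])            # log-means of basic events (assumed)
sg = np.array([0.5, 0.5, 0.8])               # log-standard deviations (assumed)

term_mu = np.array([mu[0] + mu[1], mu[2]])   # log-params of X1*X2 and X3
term_sg = np.array([np.hypot(sg[0], sg[1]), sg[2]])
m1 = np.exp(term_mu + term_sg ** 2 / 2)      # term means
m2 = np.exp(2 * term_mu + 2 * term_sg ** 2)  # term second moments
mean, var = m1.sum(), (m2 - m1 ** 2).sum()   # independent terms: variances add
sg_top = np.sqrt(np.log(1 + var / mean ** 2))
mu_top = np.log(mean) - sg_top ** 2 / 2      # moment-matched lognormal parameters

rng = np.random.default_rng(0)
x = rng.lognormal(mu[:, None], sg[:, None], size=(3, 200_000))
mc = x[0] * x[1] + x[2]                      # full Monte Carlo for comparison
print("Monte Carlo 95th pct:", np.quantile(mc, 0.95))
print("Closed-form 95th pct:", np.exp(mu_top + 1.645 * sg_top))
```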

16.
We develop a theory of optimal stopping under Knightian uncertainty. A suitable martingale theory for multiple priors is derived that extends the classical dynamic programming, or Snell envelope, approach to multiple priors. We relate the multiple-priors theory to the classical setup via a minimax theorem. In a multiple-priors version of the classical model of independent and identically distributed random variables, we discuss several examples from microeconomics, operations research, and finance. For monotone payoffs, the worst-case prior can be identified quite easily with the help of stochastic dominance arguments. For more complex payoff structures, such as barrier options, model ambiguity leads to stochastic changes in the worst-case beliefs.
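A minimal multiple-priors Snell envelope sketch, under assumed parameters: backward induction on a binomial tree where nature picks the worst-case up-move probability from an interval at each node. Because the continuation value is linear in p, the worst case sits at an endpoint of the interval.

```python
p_lo, p_hi, T = 0.4, 0.6, 3      # ambiguity interval for the up-move probability (assumed)
S0, u, d = 100.0, 1.2, 0.8       # binomial tree parameters (assumed)

def payoff(s):
    return max(105.0 - s, 0.0)   # put-style payoff (assumed)

def snell(t, s):
    """Worst-case value of optimally stopping from node (t, s)."""
    stop = payoff(s)
    if t == T:
        return stop
    # Continuation value is linear in p, so the minimum is at an endpoint.
    cont = min(p * snell(t + 1, s * u) + (1 - p) * snell(t + 1, s * d)
               for p in (p_lo, p_hi))
    return max(stop, cont)

print(snell(0, S0))
```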

17.
Operations management methods have been applied profitably to a wide range of technology portfolio management problems, but have been slow to be adopted by governments and policy makers. We develop a framework that allows us to apply such techniques to a large and important public policy problem: energy technology R&D portfolio management under climate change. We apply a multi-model approach, implementing probabilistic data derived from expert elicitations into a novel stochastic programming version of a dynamic integrated assessment model. We note that while the unifying framework we present can be applied to a range of models and data sets, the specific results depend on the data and assumptions used and therefore may not be generalizable. Nevertheless, the results are suggestive, and we find that the optimal technology portfolio for the set of projects considered is fairly robust to different specifications of climate uncertainty, to different policy environments, and to assumptions about the opportunity cost of investing. We also conclude that policy makers would do better to over-invest in R&D rather than under-invest. Finally, we show that R&D can play different roles in different types of policy environments, sometimes leading primarily to cost reduction, other times leading to better environmental outcomes.

18.
In the event of contamination of a water distribution system, decisions must be made to mitigate the impact of the contamination and to protect public health. Making threat management decisions while a contaminant spreads through the network is a dynamic and interactive process. Response actions taken by the utility managers and water consumption choices made by the consumers affect the hydraulics, and thus the spread of the contaminant plume, in the network. A modeling framework that allows the simulation of a contamination event under the effects of actions taken by utility managers and consumers is a useful tool for the analysis of alternative threat mitigation and management strategies. This article presents a multiagent modeling framework that combines agent-based, mechanistic, and dynamic methods. Agents select actions based on a set of rules that represent an individual's autonomy, goal-based desires, and reaction to the environment and the actions of other agents. Consumer behaviors are simulated, including ingestion, mobility, reduction of water demands, and word-of-mouth communication. Management strategies are evaluated, including opening hydrants to flush the contaminant and issuing broadcast warnings. As actions taken by consumer agents and utility operators affect demands and flows in the system, the mechanistic model is updated. Management strategies are evaluated based on the exposure of the population to the contaminant. The framework is designed to consider the typical issues involved in water distribution threat management and provides valuable analysis of threat containment strategies for water distribution system contamination events.

19.
This paper deals with a manufacturing system consisting of a single machine subject to random failures and repairs. The machine can produce two types of parts. When production is switched from one part type to the other, a random setup time is incurred at a constant cost rate. The objective is to track the demand while keeping the work-in-process as close as possible to zero for both products. The problem is formulated as an optimal stochastic control problem. The optimal policy is obtained numerically by discretizing the continuous-time, continuous-state optimality conditions using a Markov chain approximation technique. The discretized optimality conditions are shown to correspond to an infinite-horizon, discrete-time, discrete-state dynamic programming problem. The optimal setup policy is shown to have two different structures, depending on the parameters of the system. A heuristic policy is proposed to approximate the optimal setup policy. Simulation results show that the heuristic policy is a very good approximation for sufficiently reliable systems.
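Generic value iteration on a tiny two-state, two-action MDP (purely illustrative, not the machine-setup model itself) matches the statement that the discretized optimality conditions reduce to a discrete-time, discrete-state dynamic program.

```python
import numpy as np

P = {0: np.array([[0.9, 0.1], [0.6, 0.4]]),   # transition matrix per action (assumed)
     1: np.array([[0.5, 0.5], [0.1, 0.9]])}
c = {0: np.array([1.0, 4.0]),                 # stage cost per state, per action (assumed)
     1: np.array([2.0, 0.5])}
beta = 0.95                                   # discount factor (assumed)

V = np.zeros(2)
for _ in range(1000):                         # Bellman fixed-point iteration
    V = np.min([c[a] + beta * P[a] @ V for a in (0, 1)], axis=0)
print(V)                                      # approximate optimal value function
```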

20.
A probabilistic game-theoretic model is developed within both a static and a dynamic framework to capture adversary-defender conflict in the presence of backlash. I find that not accounting for backlash in counteradversary policies may be costly to the target government. But to minimize adversarial backlash requires understanding how backlash emerges and if, and how, adversaries strategize to goad target governments into policies that induce backlash. The dynamic version of the model shows that when backlash occurs with a time lag, an escalation of the conflict is likely to occur.
