Similar Documents
20 similar documents found (search time: 109 ms)
1.
We consider an inventory installation, controlled by the periodic review base stock (S, T) policy and facing fixed-rate deterministic demand which, if unsatisfied, is backordered. The supply process is unreliable, so supply deliveries may fail according to an independent Bernoulli process; we refer to such failures, which reflect the supply service quality and are internal to the supply chain, as endogenous disruptions. We seek to jointly determine the two policy variables so as to minimize long-run average cost. While an approximate model for this problem was recently analyzed, we present an exact analysis, valid for two common accounting schemes for inventory cost evaluation: continuous and end-of-cycle costing. After developing a unified (and exact) average cost model for both costing schemes, the cost for each scheme is analyzed. In both cases, the optimal policy variables and cost are obtained in closed form and have a structure identical to those of EOQ (with backorders). In fact, under continuous costing, the optimal solution reduces to EOQ for perfect supply. Analytical properties demonstrating the impact of deteriorating supply quality on the optimal policy are established. Moreover, computations reveal the cost impact of deploying heuristics that either ignore supply disruptions or rely on inaccurate costing information.
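Since the abstract states that the optimal policy shares the structure of EOQ with backorders, the classical closed forms are a useful reference point (standard textbook notation with setup cost K, demand rate D, holding cost h and backorder cost b; these are not the paper's own expressions):

```latex
Q^{*} \;=\; \sqrt{\frac{2KD}{h}\cdot\frac{h+b}{b}},
\qquad
B^{*} \;=\; \frac{h}{h+b}\,Q^{*},
\qquad
C^{*} \;=\; \sqrt{2KDh\cdot\frac{b}{h+b}},
```

where Q* is the order quantity, B* the maximum backorder level and C* the long-run average cost; as b grows without bound (no backorders allowed) these collapse to the plain EOQ formulas.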

2.
In this survey we review methods for analyzing open queueing network models of discrete manufacturing systems, focusing on design and planning models for job shops. The survey is divided into two parts: in the first we review exact and approximate decomposition methods for performance evaluation models of single- and multiple-product-class networks. The second part reviews optimization models for three categories of problems: the first minimizes capital investment subject to attaining a performance measure (WIP or lead time), the second optimizes the performance measure subject to resource constraints, and the third explores recent research developments in complexity reduction through shop redesign and product partitioning.

3.
Supply chain networks can be fragile in a global environment owing to unexpected events such as emergencies, ordinary disruptions and industrial accidents. Supply chain members may temporarily lose their production capacity, which can significantly affect the performance of the whole supply chain network. This article proposes a discrete-time model to characterise unreliable production capacity in serial supply chain networks. Based on the proposed model, exact probability distributions are available for the performance analysis of a single-stage system in the lost-sales scenario. Iterative methods are developed to derive approximate performance measures for single-stage systems in the backorder scenario and for multi-stage systems in both the lost-sales and backorder scenarios. The proposed methods are verified through a series of numerical experiments. The analysis suggests that the performance of the supply chain network suffers more from downstream-stage unreliability than from upstream-stage unreliability. Furthermore, application examples illustrate possible solutions to practical problems.
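To make the single-stage, lost-sales setting concrete, here is a toy discrete-time simulation in the spirit of the abstract (the base-stock replenishment rule and all parameter values are illustrative assumptions, not the paper's exact model):

```python
import random

def fill_rate(T=100_000, demand=4, capacity=6, p_up=0.9, base_stock=12, seed=1):
    """Single stage, lost sales, discrete time: each period production
    succeeds with probability p_up and replenishes inventory up to
    base_stock; demand is then served from on-hand stock and any
    unmet demand is lost. Returns the long-run fill rate."""
    rng = random.Random(seed)
    inv, served, total = base_stock, 0, 0
    for _ in range(T):
        if rng.random() < p_up:                  # stage is up this period
            inv = min(base_stock, inv + capacity)
        s = min(inv, demand)                     # serve what stock allows
        inv -= s
        served += s
        total += demand
    return served / total

print(fill_rate())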

4.
In this article, we investigate how the choice of the attenuation factor in an extended version of Katz centrality influences the centrality of nodes in evolving communication networks. For given snapshots of a network observed over a period of time, recently developed communicability indices aim to identify the best broadcasters and listeners (receivers) in the network. Here we explore the constraint on the attenuation factor in relation to the spectral radius (the largest eigenvalue) of the network at any point in time, and its computation for large networks. We compare three different communicability measures: standard, exponential, and relaxed (where the spectral radius bound on the attenuation factor is relaxed and the adjacency matrix is normalised in order to maintain convergence of the measure). Furthermore, using a vitality-based variant of both the standard and relaxed communicability indices, we look at ways of establishing the most important individuals for broadcasting and receiving messages, in relation to community-bridging roles. We compare these measures with the scores produced by an iterative version of the PageRank algorithm and illustrate our findings on three real-life evolving networks: the MIT reality mining data set, consisting of daily communications between 106 individuals over one year; a UK Twitter mentions network, constructed from direct tweets between 12.4k individuals during one week; and a subset of the Enron email data set.
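For reference, a minimal sketch of resolvent-based dynamic communicability over network snapshots, in the Katz style (our reading of the standard construction, not the authors' code; the function name and parameters are ours):

```python
import numpy as np

def dynamic_communicability(snapshots, alpha):
    """Q = prod over k of (I - alpha * A_k)^(-1), taken in time order;
    row sums of Q rank broadcasters, column sums rank receivers.
    Convergence requires alpha < 1 / rho(A_k) for every snapshot k."""
    n = snapshots[0].shape[0]
    Q = np.eye(n)
    for A in snapshots:
        rho = max(abs(np.linalg.eigvals(A)))
        if alpha * rho >= 1.0:
            raise ValueError("alpha must be below 1/spectral radius of each snapshot")
        Q = Q @ np.linalg.inv(np.eye(n) - alpha * A)
    return Q.sum(axis=1), Q.sum(axis=0)  # broadcast scores, receive scores
```

Normalising each snapshot's adjacency matrix, as in the "relaxed" measure the abstract mentions, lets one keep a fixed attenuation factor without tracking the spectral radius of every snapshot.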

5.
Path vector protocols in routing networks convey entire path information to each destination. When links fail, affected paths are replaced by new paths, and by observing the entire path information one might hope to infer the failed links that caused these changes. However, inferring the exact topological changes behind observed routing changes may not be possible because of limited information, and the same changes may be explained by more than one set of candidate failures. In this paper, using a simple path vector routing model, we present the problem of finding the candidate set with the minimum number of failures that explains the observed route changes. We call this the minimum e-set problem and present algorithms for solving it optimally in certain cases. We also show that the minimum e-set problem is NP-complete in the general case. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. N66001-04-1-8926 and by the National Science Foundation (NSF) under Contract No. ANI-0221453. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA or the NSF. Part of the work was done while Akash Nanavati was at DA-IICT, India.

6.
Scour (localized erosion by water) is an important risk to bridges, and hence to many infrastructure networks, around the world. In Britain, scour has caused the failure of railway bridges crossing rivers in more than 50 flood events. These events have been investigated in detail, providing a data set with which we develop and test a model to quantify scour risk. The risk analysis is formulated in terms of a generic, transferable infrastructure network risk model. For some bridge failures, the severity of the causative flood was recorded or can be reconstructed. These data are combined with the background failure rate, and with records of bridges that have not failed, to construct fragility curves that quantify the failure probability conditional on the severity of a flood event. The fragility curves generated are to some extent sensitive to the way in which these data are incorporated into the statistical analysis. The new fragility analysis is tested using flood events simulated from a spatial joint probability model for extreme river flows at all river gauging sites in Britain. The combined models appear robust when compared with historical observations of the expected number of bridge failures in a flood event. The analysis is used to estimate the probability of single or multiple bridge failures in Britain's rail network. Combined with a model for passenger journey disruption in the event of bridge failure, we calculate a system-wide estimate of the risk of scour failures in terms of passenger journey disruptions and associated economic costs.
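A fragility curve of the kind described is simply a conditional failure probability as a function of flood severity; a common parametric form (an illustrative choice on our part, not necessarily the one fitted in the paper) is the logistic link:

```latex
P(\text{failure} \mid s) \;=\; \frac{1}{1 + \exp\!\big(-(\beta_0 + \beta_1 s)\big)},
```

where s is a measure of flood severity (for instance, return period or normalised flow) and the coefficients are estimated from the failure and non-failure records.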

7.
Network coding is a generalization of conventional routing that allows a network node to code information flows before forwarding them. While it has been proved theoretically that network coding can achieve maximum network throughput, theoretical results usually do not consider the stochastic nature of information processing and transmission, especially when the capacity of each arc becomes stochastic due to failures, attacks, or maintenance. Hence, measuring the reliability of network coding becomes an important issue in evaluating the performance of the network under various system settings. In this paper, we present analytical expressions for the reliability of multicast communications in coded networks, where network coding is most promising. We define as the reliability metric the probability that a multicast rate can be transmitted through a coded packet network under a total transmission cost constraint. To do this, we first introduce an exact mathematical formulation for constructing multicast connections over coded packet networks under a limited transmission cost. We then propose an algorithm based on minimal paths to calculate the reliability of multicast connections and analyze the algorithm's complexity. Our results show that the reliability of multicast routing with network coding improves significantly compared with multicast routing without network coding.
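The throughput result being referenced is the classical one: with coding, the achievable multicast rate equals the minimum over receivers of the source-to-receiver max-flow (Ahlswede et al.). A quick check on the standard butterfly network (a textbook illustration, not the paper's reliability computation), using networkx:

```python
import networkx as nx

# Butterfly network: source "s", receivers "t1" and "t2", unit capacities.
G = nx.DiGraph()
edges = [("s", "a"), ("s", "b"), ("a", "t1"), ("b", "t2"),
         ("a", "c"), ("b", "c"), ("c", "d"), ("d", "t1"), ("d", "t2")]
G.add_edges_from(edges, capacity=1)

# With network coding, the multicast capacity is the minimum over
# receivers of the individual max-flow values.
rate = min(nx.maximum_flow_value(G, "s", t) for t in ("t1", "t2"))
print(rate)  # 2 -- an integral multicast tree without coding delivers only 1
```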

8.
This article examines the ability of a relatively new technique, hybrid artificial neural networks (ANNs), to predict Japanese banking and firm failures. These models are compared with traditional statistical techniques and conventional ANN models. The results suggest that hybrid neural networks outperform all other models in predicting failure one year prior to the event. This suggests that for researchers, policymakers, and others interested in early warning systems, the hybrid network may be a useful tool for predicting banking and firm failures.

9.
The Web proxy location problem in general networks is NP-hard. In this paper, we study the problem in networks with a general tree-of-rings topology. We improve the results for the tree case in the literature, obtaining an exact algorithm with time complexity O(nhk), where n is the number of nodes in the tree, h is the height of the tree (the server is at the root), and k is the number of web proxies to be placed in the network. For networks with a general tree-of-rings topology we present an exact algorithm with O(kn²) time complexity. This research has been supported by the NSF of China (No. 10371028) and by an Educational Department grant of Zhejiang Province (No. 20030622).

10.
Considering that, owing to protective measures in real networks, some overloaded edges are not removed from the network immediately, we propose a collapse-probability mechanism for overloaded edges and construct a cascading failure model in which overloaded edges carry a collapse probability. We compare the global cascading failures triggered by two edge-attack strategies on BA scale-free networks and WS small-world networks, examine the influence of the collapse probability and the network topology on the attack strategies, and analyse countermeasures by which a network can effectively prevent cascading failures. The agreement between numerical simulation and theoretical analysis confirms the validity of the conclusions.
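A minimal sketch of an edge-load cascade in which overloaded edges collapse only with a given probability, in the spirit of the abstract (Motter-Lai-style betweenness loads; the function, its parameters, and the tolerance and collapse values are our illustrative assumptions):

```python
import random
import networkx as nx

def cascade_with_collapse_prob(G, attacked_edge, tolerance=1.2,
                               p_collapse=0.5, seed=0):
    """Remove one attacked edge, then iterate: recompute edge loads
    (edge betweenness) and give each overloaded edge a chance to fail.
    Thanks to protective measures, an overloaded edge collapses only
    with probability p_collapse; the cascade settles once a whole
    round passes with no collapse."""
    rng = random.Random(seed)
    H = G.copy()
    key = lambda e: tuple(sorted(e))
    capacity = {key(e): tolerance * l
                for e, l in nx.edge_betweenness_centrality(H).items()}
    H.remove_edge(*attacked_edge)
    changed = True
    while changed:
        changed = False
        load = nx.edge_betweenness_centrality(H)
        for e, l in list(load.items()):
            if l > capacity[key(e)] and rng.random() < p_collapse:
                H.remove_edge(*e)
                changed = True
    return H  # surviving network after the cascade settles

# Example: attack one edge of a BA scale-free network.
G = nx.barabasi_albert_graph(200, 3, seed=1)
H = cascade_with_collapse_prob(G, next(iter(G.edges())))
print(G.number_of_edges() - H.number_of_edges(), "edges lost")
```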

11.
Rich Ties and Innovative Knowledge Transfer within a Firm
We show that contacts in formal, informal and especially multiplex networks explain the transfer of innovative knowledge in an organization. The contribution of informal contacts has been widely acknowledged, while that of formal contacts has received little attention in the literature in recent decades. No study thus far has included both kinds of contacts in a firm, let alone considered their combined effect. The overlap of formal and informal contacts between individuals, forming multiplex ties (or what we call rich ties, because of their contribution), especially drives the transfer of new, innovative knowledge in a firm. Studying two cases in very different settings suggests that these rich ties have a particularly strong effect on knowledge transfer in an organization, even when controlling for tie strength. Some of the effects on knowledge transfer previously ascribed to either the formal or the informal network may actually be due to their combined effect in a rich tie.

12.
Assigning aircraft to gates is an important decision problem that airport professionals face every day. Solving this problem has attracted significant research effort, and many variants have been studied. In this paper, we review past work with a focus on identifying types of formulations, classifying objectives, and categorising solution methods. The review indicates that there is no standard formulation, that passenger-oriented objectives are most common, and that more recent work is multi-objective. In terms of solution methods, heuristic and metaheuristic approaches dominate, which leaves an opportunity to develop exact and approximate approaches for both the single- and multi-objective problems.

13.
This paper studies the econometrics of computed dynamic models. Since these models generally lack a closed-form solution, their policy functions are approximated by numerical methods. Hence, the researcher can only evaluate an approximated likelihood associated with the approximated policy function, rather than the exact likelihood implied by the exact policy function. What are the consequences for inference of using approximated likelihoods? First, we find conditions under which, as the approximated policy function converges to the exact policy, the approximated likelihood also converges to the exact likelihood. Second, we show that second-order approximation errors in the policy function, which are almost always ignored by researchers, have first-order effects on the likelihood function. Third, we discuss convergence of Bayesian and classical estimates. Finally, we propose using a likelihood ratio test as a diagnostic device for problems arising from the use of approximated likelihoods.
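A one-line heuristic for why policy-function errors hit the likelihood at first order (our illustration, not the paper's derivation): write the log-likelihood through the policy g and perturb it.

```latex
\mathcal{L}_T(\theta) \;=\; \sum_{t=1}^{T} \log f\big(y_t \mid g(x_t;\theta)\big),
\qquad
\tilde{\mathcal{L}}_T - \mathcal{L}_T
\;\approx\; \sum_{t=1}^{T} \frac{\partial \log f}{\partial g}\,
\big(\tilde{g}(x_t;\theta) - g(x_t;\theta)\big).
```

However small the policy error is (for instance, of second order in the approximation scheme), it enters this expansion linearly, so it perturbs the likelihood, and anything estimated from it, at first order.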

14.
The maximum leaf spanning tree (MLST) is a good candidate for constructing a virtual backbone in self-organized multihop wireless networks, but computing it is practically intractable (NP-complete). Self-stabilization is a general technique that allows recovery from catastrophic transient failures in self-organized networks without human intervention. We propose a fully distributed self-stabilizing approximation algorithm for the MLST problem in arbitrary-topology networks. Our algorithm is the first self-stabilizing protocol specifically designed to approximate an MLST. It builds a solution whose number of leaves is at least 1/3 of the maximum possible in arbitrary graphs. The time complexity of our algorithm is O(n²) rounds.

15.
Many practical complex networks, such as the Internet, the WWW and social networks, have been found to follow a power-law distribution in their degree sequences, i.e., the number of nodes with degree i in these networks is proportional to i^(-β) for some exponential factor β > 0. However, these networks are also exposed to a great number of threats, such as adversarial attacks on the Internet, cyber-crimes on the WWW, and malware propagation on social networks. Although power-law networks have been found, through experimental observation, to be robust under random attacks and vulnerable to intentional attacks, how to better understand their vulnerabilities from a theoretical point of view remains open. In this paper, we study the vulnerability of power-law networks under random and adversarial attacks, using in-depth probabilistic analysis based on the theory of random power-law graph models. Our results indicate that power-law networks are able to tolerate random failures if their exponential factor β is below 2.9, and that they are more robust against intentional attacks the smaller β is. Furthermore, we identify [1.8, 2.5] as the best range for the exponential factor β by optimizing complex networks in terms of both their vulnerabilities and their costs. When β < 1.8 the network maintenance cost is very expensive, and when β > 2.5 the network's robustness is unpredictable, since it depends on the specific attacking strategy.
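An illustrative random-versus-targeted removal experiment on a scale-free graph (a Barabási-Albert graph, whose degree exponent is roughly 3; a convenient stand-in, not the paper's random power-law graph model):

```python
import random
import networkx as nx

def giant_component_fraction(G):
    """Fraction of surviving nodes in the largest connected component."""
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

G = nx.barabasi_albert_graph(2000, 2, seed=1)
k = int(0.05 * G.number_of_nodes())        # remove 5% of the nodes

random_attack = G.copy()
random_attack.remove_nodes_from(random.sample(list(G.nodes), k))

targeted_attack = G.copy()
hubs = sorted(G.degree, key=lambda nd: nd[1], reverse=True)[:k]
targeted_attack.remove_nodes_from(n for n, _ in hubs)

print("random:  ", giant_component_fraction(random_attack))
print("targeted:", giant_component_fraction(targeted_attack))
```

Typically the giant component barely shrinks under random removal but fragments markedly when hubs are removed, matching the robust-yet-fragile behaviour the abstract describes.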

16.
Large-scale outages of real-world critical infrastructures, although infrequent, are increasingly disastrous to our society. In this article, we are primarily concerned with power transmission networks, and we consider the problem of allocating generation to distributors by rewiring links, with the objectives of maximizing network resilience to cascading failure and minimizing investment costs. The combinatorial multiobjective optimization is carried out by a nondominated sorting binary differential evolution (NSBDE) algorithm. For each generators-distributors connection pattern considered in the NSBDE search, a computationally cheap topological model of failure cascading in a complex network (the Motter-Lai [ML] model) is used to simulate and quantify network resilience to cascading failures initiated by targeted attacks. Results on the 400 kV French power transmission network case study show that the proposed method identifies optimal patterns of generators-distributors connection that improve cascading resilience at an acceptable cost. To verify the realism of the results obtained by the NSBDE with the embedded ML topological model, a more realistic but also more computationally expensive model of cascading failures is adopted, based on optimal power flow (namely, the ORNL-Pserc-Alaska model). The consistent results between the two models provide impetus for the use of topological, complex network theory models for the analysis and optimization of large infrastructures against cascading failure, with the advantages of simplicity, scalability, and low computational cost.

17.
We examine whether firms learn from their major acquisition failures. Drawing from a threat‐rigidity theoretical framework, we suggest that firms do not learn from their major acquisition failures. Furthermore, we hypothesize that host‐country experience reinforces the negative effects of major acquisition failures. Our research hypotheses are tested using an event history analysis of 741 acquisitions undertaken by French listed and non‐listed firms in the USA between January 1988 and December 2008. We use failure divestment (divestment resulting from acquisition failure) as a proxy for acquisition performance. Consistent with our theoretical framework, we find that major acquisition failures have a negative impact on future acquisition performance. Furthermore, we find that such negative effects are reinforced by firms’ host‐country experience.

18.
Omega, 1987, 15(2): 129-134
Several authors have compared and proposed exact, asymptotic or simulation methods for estimating or deriving the duration of project networks. These authors have all concentrated on one aspect of uncertainty: time. The results of simulations obtained through the Venture Evaluation and Review Technique (VERT) are compared with those of other authors. A brief exposé of the extra facilities within VERT is also given, namely the ability to jointly manipulate time, cost and performance measures, as well as the ability to specify the distribution from which to sample these uncertain measures.

19.
We consider a formulation of the fixed charge network flow (FCNF) problem subject to multiple uncertain arc failures, which aims to provide a robust optimal flow assignment in the sense of restricting potential losses using Conditional Value-at-Risk (CVaR). We show that a heuristic algorithm referred to as the Adaptive Dynamic Cost Updating Procedure (ADCUP), previously developed for the deterministic FCNF problem, can be extended to the considered problem under uncertainty and produce high-quality heuristic solutions for large problem instances. The reported computational experiments demonstrate that the described procedure can successfully tackle both the uncertainty considerations and the large size of the networks. High-quality heuristic solutions for problem instances with up to approximately 200,000 arcs have been identified in reasonable time.
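For context, the loss-restriction device mentioned here has the standard Rockafellar-Uryasev form (a textbook definition, not the paper's specific formulation): at confidence level α, the CVaR of a random loss L is

```latex
\mathrm{CVaR}_{\alpha}(L) \;=\; \min_{t \in \mathbb{R}}
\left\{\, t + \frac{1}{1-\alpha}\,\mathbb{E}\big[(L - t)_{+}\big] \right\},
```

which is convex in the decision variables and therefore fits naturally as a constraint or objective term in a flow-assignment model.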

20.
The design and development of network infrastructure to support mission-critical applications has become a critical and complex activity. This study explores the use of genetic algorithms (GA) for network design in the context of the degree-constrained minimal spanning tree (DCMST) problem; it compares, for small networks, the performance of the GA with a mathematical model that provides optimal solutions, and, for larger networks, compares the GA's performance with two heuristic methods: edge exchange and the primal algorithm. Two performance measures, solution quality and computation time, are used for evaluation. The algorithms are evaluated on a wide variety of network sizes, with both static and dynamic degree constraints on the network nodes. The results indicate that the GA provides optimal solutions for small networks. For larger networks it provides better solution quality than edge exchange and the primal method, but is worse than both in computation time.
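A compact way to build such a GA is to encode spanning trees as Prüfer sequences, in which a node of degree d appears exactly d - 1 times, so the degree constraint becomes a simple occurrence cap. A minimal sketch (our illustrative encoding and operators; the paper's GA design may differ):

```python
import heapq
import random

def prufer_decode(seq, n):
    """Decode a Prüfer sequence of length n-2 into the edge list of a
    spanning tree on nodes 0..n-1 (node degree = occurrences in seq + 1)."""
    degree = [1] * n
    for x in seq:
        degree[x] += 1
    leaves = [i for i in range(n) if degree[i] == 1]
    heapq.heapify(leaves)
    edges = []
    for x in seq:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, x))
        degree[x] -= 1
        if degree[x] == 1:
            heapq.heappush(leaves, x)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def repair(seq, n, max_deg):
    """Cap each node's occurrences at max_deg - 1 to enforce the constraint."""
    count = {i: seq.count(i) for i in range(n)}
    for pos, x in enumerate(seq):
        if count[x] > max_deg - 1:
            y = random.choice([i for i in range(n) if count[i] < max_deg - 1])
            count[x] -= 1
            count[y] += 1
            seq[pos] = y
    return seq

def ga_dcmst(w, max_deg=3, pop=60, gens=200, pmut=0.2):
    """w: symmetric n x n edge-weight matrix. Returns (cost, tree edges)."""
    n = len(w)
    cost = lambda s: sum(w[u][v] for u, v in prufer_decode(s, n))
    P = [repair([random.randrange(n) for _ in range(n - 2)], n, max_deg)
         for _ in range(pop)]
    for _ in range(gens):
        Q = []
        for _ in range(pop):
            a = min(random.sample(P, 2), key=cost)   # binary tournament
            b = min(random.sample(P, 2), key=cost)
            cut = random.randrange(1, n - 2)         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pmut:               # point mutation
                child[random.randrange(n - 2)] = random.randrange(n)
            Q.append(repair(child, n, max_deg))
        P = sorted(P + Q, key=cost)[:pop]            # elitist survivor selection
    best = min(P, key=cost)
    return cost(best), prufer_decode(best, n)
```

The repair step always succeeds for max_deg >= 2, since a sequence of length n - 2 cannot have every node at its occurrence cap; dynamic degree constraints can be handled by re-running repair with the new caps.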
