Similar Documents
 20 similar documents found (search time: 812 ms)
1.
We consider the problem of scheduling operations in bufferless robotic cells that produce identical parts using either single‐gripper or dual‐gripper robots. The objective is to find a cyclic sequence of robot moves that minimizes the long‐run average time to produce a part or, equivalently, maximizes the throughput. Obtaining an efficient algorithm for an optimum k‐unit cyclic solution (k ≥ 1) has been a longstanding open problem. For both single‐gripper and dual‐gripper cells, the approximation algorithms in this paper provide the best‐known performance guarantees (obtainable in polynomial time) for an optimal cyclic solution. We provide two algorithms that have a running time linear in the number of machines: for single‐gripper cells (respectively, dual‐gripper cells), the performance guarantee is 9/7 (respectively, 3/2). The domain considered is free‐pickup cells with constant intermachine travel time. Our structural analysis is an important step toward resolving the complexity status of finding an optimal cyclic solution in either a single‐gripper or a dual‐gripper cell. We also identify optimal cyclic solutions for a variety of special cases. Our analysis provides production managers with valuable insights into the schedules that maximize productivity for both single‐gripper and dual‐gripper cells for any combination of processing requirements and physical parameters.

2.
Peter B. Scott, Omega, 1984, 12(3): 283-290
Assembly by robot is destined to have a major international impact over the next few years, yet the optimal design, analysis and economic evaluation of a potential robotic assembly application is currently extremely difficult. On examining the current state of the art at each stage of the hierarchy through which a prospective robotic assembly project must pass, it becomes clear that the whole procedure is closely analogous to attempting to paint an ‘ideal’ picture, both in terms of the tasks that must be performed and the level of complexity involved. Unfortunately, unlike painting, robotic assembly offers no wealth of experience on which roboticists can draw in their quest for ‘ideal’ assembly. Nevertheless, work at Imperial College aims at systematically uncovering some of the underlying principles of robotic assembly which, if left to discovery through trial and error, might take years that industry simply does not have.

3.
In just‐in‐time inventory management in any manufacturing setting, the general idea has been to release jobs as late as possible (to reduce inventory costs) while still having them arrive at bottleneck machines in time to maintain the desired throughput (by not starving a bottleneck machine). When a cyclic schedule is employed, the throughput is determined by a cyclic sequence of operations known as the cyclic critical path. These operations are not, in general, all performed on a single bottleneck machine. We present an algorithm for releasing jobs that treats this cyclic critical path as the bottleneck. Although this algorithm has the somewhat complex task of not delaying any of these operations on the cyclic critical path, it is greatly simplified by being able to take advantage of the fixed sequence of the cyclic schedule. The result is that the algorithm is relatively simple to implement. Although it uses a simulation‐based analysis, this analysis can all be done and the necessary results stored in advance of its use. We test the algorithm in a job shop environment with stochastic operation times. This algorithm is shown to be effective at reducing inventory while avoiding decreases in throughput.

4.
This paper addresses the performance of scheduling algorithms for a two-stage no-wait hybrid flowshop environment with inter-stage flexibility, where there exist several parallel machines at each stage. Each job, composed of two operations, must be processed from start to completion without any interruption either on or between the two stages. For each job, the total processing time of its two operations is fixed, and the stage-1 operation is divided into two sub-parts: an obligatory part and an optional part (which is to be determined by a solution), with a constraint that no optional part of a job can be processed in parallel with an idleness of any stage-2 machine. The objective is to minimize the makespan. We prove that even for the special case with only one machine at each stage, this problem is strongly NP-hard. For the case with one machine at stage 1 and m machines at stage 2, we propose two polynomial time approximation algorithms with worst case ratio of \(3-\frac{2}{m+1}\) and \(2-\frac{1}{m+1}\), respectively. For the case with m machines at stage 1 and one machine at stage 2, we propose a polynomial time approximation algorithm with worst case ratio of 2. We also prove that all the worst case ratios are tight.
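A quick numeric sanity check of the two guarantees quoted in this abstract, for the case with one machine at stage 1 and m machines at stage 2. The function names are illustrative only; the algorithms themselves are not reproduced here. Note that both bounds worsen as m grows: the first ratio rises from 2 toward 3, the second from 3/2 toward 2.

```python
from fractions import Fraction

def first_algorithm_ratio(m):
    # Worst-case ratio 3 - 2/(m+1) of the first approximation algorithm
    # for 1 machine at stage 1 and m machines at stage 2.
    return Fraction(3) - Fraction(2, m + 1)

def second_algorithm_ratio(m):
    # Worst-case ratio 2 - 1/(m+1) of the second approximation algorithm
    # for the same case.
    return Fraction(2) - Fraction(1, m + 1)
```

For m = 1 the ratios reduce to 2 and 3/2, matching the single-machine-per-stage case.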

5.
In this paper we study the time complexities of some two‐ and three‐stage no‐wait flowshop makespan scheduling problems where, in some stage, all the jobs require a constant processing time and the stage may consist of parallel identical machines. Polynomial time algorithms are presented for certain problems, while several others are proved to be strongly NP‐complete.

6.
Multi‐organizational collaborative decision making in high‐magnitude crisis situations requires real‐time information sharing and dynamic modeling for effective response. Information technology (IT) based decision support tools can play a key role in facilitating such effective response. We explore one promising class of decision support tools based on machine learning, known as support vector machines (SVM), which have the capability to dynamically model and analyze decision processes. To examine this capability, we use a case study with a design science approach to evaluate improved decision‐making effectiveness of an SVM algorithm in an agent‐based simulation experimental environment. Testing and evaluation of real‐time decision support tools in simulated environments provides an opportunity to assess their value under various dynamic conditions. Decision making in high‐magnitude crisis situations involves multiple patterns of behavior, requiring the development, application, and evaluation of different models. Therefore, we employ a multistage linear support vector machine (MLSVM) algorithm that partitions decision‐maker responses into behavioral subsets, each of which can then be modeled individually to examine its distinct pattern of response behavior. The results of our case study indicate that our MLSVM is clearly superior to both single stage SVMs and traditional approaches such as linear and quadratic discriminant analysis for understanding and predicting behavior. We conclude that machine learning algorithms show promise for quickly assessing response strategy behavior and for providing the capability to share information with decision makers in multi‐organizational collaborative environments, thus supporting more effective decision making in such contexts.
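The MLSVM described in this abstract is specific to the study; as a rough illustration of the underlying idea (partition the data into behavioral subsets and fit one linear SVM per subset), here is a minimal numpy sketch. The Pegasos-style sub-gradient trainer and all function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM with hinge loss via Pegasos-style sub-gradient
    descent. Labels y must be in {-1, +1}. Returns (weights, bias)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            if y[i] * (X[i] @ w + b) < 1:  # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w  # regularization shrinkage only
    return w, b

def multistage_fit(X, y, stage_of):
    """Fit one linear SVM per behavioral subset, mimicking the multistage
    idea: stage_of[i] labels the subset that observation i belongs to."""
    return {s: train_linear_svm(X[stage_of == s], y[stage_of == s])
            for s in np.unique(stage_of)}
```

Prediction for a new observation would look up its behavioral subset and apply the sign of that subset's decision function.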

7.

We propose an approximate method based on mean value analysis for estimating the average performance of a re-entrant flow shop with single-job machines and batch machines. The main focus is on the steady-state averages of the cycle time and the throughput of the system. Characteristics of the re-entrant flow and inclusion of the batch machines complicate the exact analysis of the system. Thus, we propose an approximate analytic method for obtaining the mean waiting time at each buffer of the workstation and a heuristic method to improve the result of the analytic method. We compare the results of the proposed approach with a simulation study using some numerical examples.

8.
We develop analytical models for performance evaluation of Fabrication/Assembly (F/A) systems. We consider an F/A system that consists of an assembly station with input from K fabrication lines. Each fabrication line consists of one or more fabrication stations. The system is closed with a fixed number of items circulating between each fabrication line and the assembly station. We present algorithms to estimate the throughput and mean queue lengths of such systems with exponential processing times. We then extend our approach to analyze F/A systems with general processing time distributions. Numerical comparisons with simulations demonstrate the accuracy of our approach.

9.
10.
We consider a product sold in multiple variants, each with uncertain demand, produced in a multi‐stage process from a standard (i.e., generic) sub‐assembly. The fan‐out point is defined as the last process stage at which outputs are generic (outputs at every subsequent stage are variant‐specific). Insights gained from an analytical study of the system are used to develop heuristics that determine the stage(s) at which safety inventory should be held. We offer a relatively simple heuristic that approaches globally optimal results even though it uses only two relatively local parameters. We call this the VAPT, or value‐added/processing time heuristic, because it determines whether a (local) stage should hold inventory based only on the value added at that local stage relative to its downstream stage, along with the processing time at that local stage relative to its downstream stage. Another key insight is that, contrary to possible intuition, safety inventory should not always be held at the fan‐out point, although a fan‐out point does hold inventory under a wider range of conditions. We also explore when postponement is most valuable and illustrate that postponement may often be less beneficial than suggested by Lee and Tang (1997).

11.
We examine how technological change affects wage inequality and unemployment in a calibrated model of matching frictions in the labor market. We distinguish between two polar cases studied in the literature: a “creative destruction” economy, where new machines enter chiefly through new matches, and an “upgrading” economy, where machines in existing matches are replaced by new machines. Our main results are: (i) these two economies produce very similar quantitative outcomes, and (ii) the total amount of wage inequality generated by frictions is very small. We explain these findings in light of the fact that, in the model calibrated to the US economy, both unemployment and vacancy durations are very short, i.e., the matching frictions are quantitatively minor. Hence, the equilibrium allocations of the model are remarkably close to those of a frictionless version of our economy where firms are indifferent between upgrading and creative destruction, and where every worker is paid the same market‐clearing wage. These results are robust to the inclusion of machine‐specific or match‐specific heterogeneity into the benchmark model. (JEL: J41, J64, O33)

12.
We study the asymptotic distribution of three‐step estimators of a finite‐dimensional parameter vector where the second step consists of one or more nonparametric regressions on a regressor that is estimated in the first step. The first‐step estimator is either parametric or nonparametric. Using Newey's (1994) path‐derivative method, we derive the contribution of the first‐step estimator to the influence function. In this derivation, it is important to account for the dual role that the first‐step estimator plays in the second‐step nonparametric regression, that is, that of conditioning variable and that of argument.

13.
In the no-idle flowshop, machines cannot be idle after finishing one job and before starting the next one. Therefore, start times of jobs must be delayed to guarantee this constraint. In practice, machines show this behavior when it is technically infeasible or uneconomical to stop a machine between jobs. This has important ramifications in modern industry, including fiberglass processing, foundries, production of integrated circuits and the steel making industry, among others. However, assuming that all machines in the shop have this no-idle constraint is not realistic. To the best of our knowledge, this is the first paper to study the mixed no-idle extension where only some machines have the no-idle constraint. We present a mixed integer programming model for this new problem and the equations to calculate the makespan. We also propose a set of formulas to accelerate the calculation of insertions, which is used both in heuristics and in local search procedures. An effective iterated greedy (IG) algorithm is proposed. We use an NEH-based heuristic to construct a high-quality initial solution. A local search using the proposed accelerations is employed to emphasize intensification and exploration in the IG. A new destruction and construction procedure is also presented. To evaluate the proposed algorithm, we present several adaptations of other well-known and recent metaheuristics for the problem and conduct a comprehensive set of computational and statistical experiments with a total of 1750 instances. The results show that the proposed IG algorithm outperforms existing methods in the no-idle and in the mixed no-idle scenarios by a significant margin.
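As a rough, generic illustration of the iterated greedy scheme this abstract builds on (NEH construction followed by repeated destruction and reconstruction), here is a minimal sketch for the standard permutation flowshop makespan. It deliberately omits the paper's no-idle constraints, acceleration formulas, and local search; all names and parameter choices are illustrative assumptions.

```python
import random

def makespan(seq, p):
    """Makespan of job sequence seq; p[j][k] is the processing time of
    job j on machine k (standard permutation flowshop recurrence)."""
    m = len(p[0])
    c = [0.0] * m  # completion time of the last scheduled job on each machine
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def best_insertion(seq, job, p):
    """Insert job at the position of seq that minimizes the makespan."""
    best_cm, best_seq = float('inf'), None
    for pos in range(len(seq) + 1):
        cand = seq[:pos] + [job] + seq[pos:]
        cm = makespan(cand, p)
        if cm < best_cm:
            best_cm, best_seq = cm, cand
    return best_seq, best_cm

def neh(p):
    """NEH heuristic: sort jobs by decreasing total processing time,
    then insert each at its best position."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = [jobs[0]]
    for job in jobs[1:]:
        seq, _ = best_insertion(seq, job, p)
    return seq

def iterated_greedy(p, d=2, iters=50, seed=0):
    """Basic IG: remove d random jobs from the incumbent, greedily
    reinsert them, and accept improvements (no acceptance temperature)."""
    rng = random.Random(seed)
    best = neh(p)
    best_cm = makespan(best, p)
    for _ in range(iters):
        partial = best[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        cand = partial
        for job in removed:
            cand, cm = best_insertion(cand, job, p)
        if cm <= best_cm:
            best, best_cm = cand, cm
    return best, best_cm
```

The paper's variant additionally evaluates insertions with closed-form acceleration formulas and handles machines with and without the no-idle constraint, which this sketch does not attempt.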

14.
We extend the Clark–Scarf serial multi‐echelon inventory model to include procuring production inputs under short‐term take‐or‐pay contracts at one or more stages. In each period, each such stage has the option to order/process at two different cost rates; the cheaper rate applies to units up to the contract quantity selected in the previous period. We prove that in each period and at each such stage, there are three base‐stock levels that characterize an optimal policy, two for the inventory policy and one for the contract quantity selection policy. The optimal cost function is additively separable in its state variables, leading to conquering the curse of dimensionality and the opportunity to manage the supply chain using independently acting managers. We develop conditions under which myopic policies are optimal and illustrate the results using numerical examples. We establish and use a generic one‐period result, which generalizes an important such result in the literature. Extensions to cover variants of take‐or‐pay contracts are included. Limitations are discussed.

15.
Assembly lines function best when every worker is present. When a worker is absent, management must scramble to quickly find a replacement. Usually, the replacement will not be as proficient as the absent worker. This can reduce quality and throughput. We present two assembly line work‐systems models (one for lines with Andon and one for lines without Andon) that show one mechanism whereby absenteeism could impact quality and throughput. We exercise these models to provide insights into absenteeism's impact on quality and throughput. While the paper is written in the concrete terms of automotive assembly, the concepts and results apply to manual assembly lines in general.

16.
We consider a manufacturer facing an unreliable supplier whose initial reliability is of either high or low type. The private reliability can be enhanced through process improvement initiated by either the manufacturer (manufacturer‐initiated improvement, MI) or the supplier (supplier‐initiated improvement, SI). We derive optimal procurement contracts for both mechanisms and find that moral hazard does not necessarily generate more profit for the high‐type supplier. Furthermore, information asymmetry causes a greater possibility of not ordering from the low type in SI than in MI. For the low type, when an upward effort distortion appears in both mechanisms, a decreased (increased) unit penalty should be imposed in MI (SI) compared with the symmetric‐information case. Although possibly efficient effort from the supplier could yield greater channel profit in SI, several scenarios violate this expectation. However, the manufacturer's expected profit in MI is no less than that in SI. When MI is extended to MSI, where both manufacturer and supplier can exert effort, the expected profits of the two parties are equal to those in SI. We further extend SI to SID, where both process improvement and dual‐sourcing are available. The manufacturer weighs the benefit from diversification against the loss from dual information rent in choosing between SID and MI. By comparing SID with pure dual‐sourcing, we find that the supplier's process improvement could either accelerate or retard the exercise of dual‐sourcing.

17.
Despite the spread of cost‐driven outsourcing practices, academic research cautions that suppliers' cost advantage may weaken manufacturers' bargaining positions in negotiating outsourcing agreements, thereby hurting their profitability. In this study, we attempt to further understand the strategic impact of low‐cost outsourcing on manufacturers' profitability by investigating the contractual form of outsourcing agreements and the industry structure of the upstream supply market. We consider a two‐tier supply chain system, consisting of two competing manufacturers, who have the option to produce in‐house or to outsource to an upstream supplier with lower cost. To reach an outsourcing agreement, each manufacturer engages in bilateral negotiation with her supplier, who may be an exclusive supplier or a common supplier serving both manufacturers. Our analysis shows that wholesale‐price contracts always mitigate the competition between manufacturers regardless of whether they compete with price or quantity. In contrast, two‐part tariffs intensify the competition when the manufacturers compete with quantity, but soften it when they compete with price. As a result, when outsourcing with two‐part tariffs, the manufacturers may earn lower profits than they would from in‐house production, although the suppliers are more cost efficient. This suggests that managers have to be wary about the downside of using coordinating contracts such as two‐part tariffs when pursuing low‐cost outsourcing strategies. Our analysis also sheds some light on the profitability of using an exclusive supplier for outsourcing. When outsourcing with wholesale‐price contracts, the competing manufacturers are better off outsourcing to an exclusive supplier. However, when outsourcing with two‐part tariffs, the manufacturers may earn higher profits by outsourcing to a common supplier than to an exclusive one when the manufacturers' bargaining power is sufficiently strong (weak) under quantity (price) competition.

18.
We provide an exact myopic analysis for an N‐stage serial inventory system with batch ordering, linear ordering costs, and nonstationary demands under a finite planning horizon. We characterize the optimality conditions of the myopic nested batching newsvendor (NBN) policy and the myopic independent batching newsvendor (IBN) policy, which is a single‐stage approximation. We show that echelon reorder levels under the NBN policy are upper bounds of the counterparts under both the optimal policy and the IBN policy. In particular, we find that the IBN policy has bounded deviations from the optimal policy. We further extend our results to systems with the martingale model of forecast evolution (MMFE) and advance demand information. Moreover, we provide a recursive computing procedure and optimality conditions for both heuristics, which dramatically reduces computational complexity. We also find that the NBN problem under the MMFE faced by one stage has one more dimension for the forecast demand than the one faced by its downstream stage and that the NBN policy is optimal for systems with advance demand information and stationary problem data. Numerical studies demonstrate that the IBN policy outperforms the NBN policy on average over all tested instances when their optimality conditions are violated.
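The policies in this abstract are multi-echelon recursions; as a minimal single-stage illustration of the batching-newsvendor building block they rest on (order up to a critical-fractile base-stock level, in integer multiples of a batch size), here is a sketch assuming normally distributed demand. The function names and parameters are hypothetical, not the paper's NBN/IBN procedures.

```python
import math
from statistics import NormalDist

def newsvendor_base_stock(mu, sigma, cu, co):
    """Critical-fractile order-up-to level for one period of normal
    demand: underage cost cu, overage cost co, fractile cu/(cu+co)."""
    q = cu / (cu + co)
    return NormalDist(mu, sigma).inv_cdf(q)

def batch_order(inventory, base_stock, batch):
    """Order the smallest integer number of batches that raises the
    inventory position to at least the base-stock level."""
    gap = base_stock - inventory
    if gap <= 0:
        return 0
    return math.ceil(gap / batch) * batch
```

With mean demand 100, standard deviation 20, and a 0.8 critical fractile, the base-stock level is roughly 116.8, so a position of 50 with batch size 10 triggers an order of 7 batches.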

19.
Faced with demand uncertainty across multiple product lines, many companies have recourse to flexible capacity that can process different products in order to better balance the trade‐off between capacity utilization and cost efficiency. Many studies have demonstrated the potential benefit of using flexible capacity at the aggregate level by treating a whole plant or a whole process as a single stage. This paper extends these analyses by studying the benefits of flexible capacity while considering the multi‐stage structure of processes and consequently determining which stages should be flexible, which should be dedicated, and how much capacity to assign to each stage. We consider a two‐product firm which operates in a process‐to‐order environment and faces uncertain demand. Each stage of the process can be designed as dedicated or flexible. Dedicated resources are highly cost efficient but limited to the single product they are exclusively designed for, whereas flexible resources are versatile enough to handle several products but are more expensive. Using a general mathematical formulation, our analysis shows that the optimal design may have some dedicated and some flexible stages along the process. Interestingly, this decision should be decoupled from the chronological order of the stages along the process.
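As a toy single-stage simplification of the dedicated-versus-flexible trade-off described above (not the paper's multi-stage model), the following sketch brute-forces the capacity mix for two products under demand scenarios: cheap dedicated capacity serves only its own product, while more expensive flexible capacity fills any remaining demand. All names and numbers are illustrative assumptions.

```python
import itertools

def served(k1, k2, kf, d1, d2):
    """Units served given dedicated capacities k1, k2, flexible capacity
    kf, and demands d1, d2; flexible capacity covers residual demand."""
    s1, s2 = min(k1, d1), min(k2, d2)
    rem = kf
    extra1 = min(rem, d1 - s1)
    rem -= extra1
    extra2 = min(rem, d2 - s2)
    return s1 + s2 + extra1 + extra2

def best_design(scenarios, price, c_ded, c_flex, grid):
    """Brute-force the (k1, k2, kf) mix maximizing expected profit over
    scenarios given as (probability, demand1, demand2) tuples."""
    best = None
    for k1, k2, kf in itertools.product(grid, repeat=3):
        cost = c_ded * (k1 + k2) + c_flex * kf
        revenue = sum(pr * price * served(k1, k2, kf, d1, d2)
                      for pr, d1, d2 in scenarios)
        if best is None or revenue - cost > best[0]:
            best = (revenue - cost, (k1, k2, kf))
    return best
```

With perfectly negatively correlated demand (each scenario loads only one product), pooling through flexible capacity dominates: a single flexible block serves either product, whereas dedicated capacity sits idle half the time.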

20.
Decision Sciences, 2017, 48(4): 657-690
Subcontracting has become a prominent business practice across many industries. Subcontracting of industrial production is generally based on a short‐term need for additional processing capacity, and is frequently employed by manufacturers to process customer orders more quickly than in‐house production alone would allow. In this article, we study a popular business model where multiple manufacturers, each capable of processing his entire workload in‐house, have the option to subcontract some of their operations to a single third party with a flexible resource. Each manufacturer can deliver customer orders only after his entire batch of jobs, processed in‐house and at the third party, is completed. The third‐party facility is available to several manufacturers who compete for its use. The current business practice of First‐Come‐First‐Served (FCFS) processing of the subcontracted workloads, as well as the competitive Nash equilibrium schedules developed in earlier studies, results in two types of inefficiencies: the third‐party capacity is not maximally utilized, and the manufacturers incur decentralization cost. In this article, we develop models to assess the value created by coordinating the manufacturers' subcontracting decisions by comparing two types of centralized control against FCFS and Nash equilibrium schedules. We present optimal and/or approximate algorithms to quantify the third‐party underutilization and the manufacturers' decentralization cost. We find that both inefficiencies are more severe under competition than when the third party allocates capacity in an FCFS manner. However, in a decentralized setting, a larger percentage of the players prefer Nash equilibrium schedules to FCFS schedules. We extend our analysis to incomplete‐information scenarios where manufacturers reveal limited demand information, and find that more information dramatically benefits the third party and the manufacturers; however, the marginal benefit of additional information is decreasing. Finally, we discuss an extension wherein each manufacturer's objective takes into account asymmetries in subcontracting, in‐house processing, and delay costs.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号