Similar Documents
20 similar documents retrieved.
1.
In this paper we study the optimality of the TLS algorithm for solving the online scheduling problem of minimizing the makespan on a set of m multipurpose machines, where there are two different job types and each job type can only be processed on a unique subset of machines. The literature shows that the TLS algorithm is optimal for the special cases where either m=2 or where all processing times are restricted to unity. We show that the TLS algorithm is optimal also for the special cases where the job processing times are either job type or machine set dependent. For both cases, the optimality of the TLS algorithm is proven by showing that its competitive ratio matches the lower bound for any processing set and processing time parameters.
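For readers unfamiliar with the processing-set model, the sketch below illustrates the setting: jobs of two types arrive online and each type may only run on its own subset of machines. The assignment rule used here (least-loaded eligible machine) is a generic greedy stand-in, not the TLS algorithm, and the machine subsets and job stream are invented for illustration.

```python
def online_greedy(jobs, eligible, m):
    """Assign each arriving job (job_type, processing_time) to the currently
    least-loaded machine among those allowed for its type; return the makespan.

    eligible: dict mapping job type -> list of machine indices (processing set)."""
    load = [0.0] * m
    for job_type, p in jobs:                      # jobs arrive one by one
        i = min(eligible[job_type], key=lambda k: load[k])
        load[i] += p
    return max(load)

# Two job types on m = 3 machines: type A runs on {0, 1}, type B on {1, 2}.
eligible = {"A": [0, 1], "B": [1, 2]}
jobs = [("A", 2.0), ("B", 3.0), ("A", 1.0), ("B", 2.0), ("A", 4.0)]
print(online_greedy(jobs, eligible, m=3))
```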

2.
Online scheduling with a buffer on related machines
Online scheduling with a buffer is a semi-online problem which is strongly related to the basic online scheduling problem. Jobs arrive one by one and are to be assigned to parallel machines. A buffer of a fixed capacity K is available for storing at most K input jobs. An arriving job must be either assigned to a machine immediately upon arrival, or it can be stored in the buffer for unlimited time. A stored job which is removed from the buffer (possibly, in order to allocate a space in the buffer for a new job) must be assigned immediately as well. We study the case of two uniformly related machines of speed ratio s≥1, with the goal of makespan minimization.
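As a deliberately naive illustration of this model, the sketch below stores the K largest jobs seen so far in the buffer, greedily assigns everything else to the machine that would finish it earliest on two related machines of speeds 1 and s, and flushes the buffer at the end of the input. This is only a sketch of the semi-online setting under those assumptions; it is not the algorithm analyzed in the paper, and the speeds, buffer size and job list in the usage line are invented.

```python
def semi_online_with_buffer(jobs, s=1.5, K=1):
    """Toy semi-online rule for two uniformly related machines (speeds 1 and s)
    with a buffer of capacity K. Returns the resulting makespan."""
    speeds = (1.0, float(s))
    load = [0.0, 0.0]            # current completion time of each machine
    buffer = []                  # at most K stored processing times

    def assign(p):
        i = min((0, 1), key=lambda m: load[m] + p / speeds[m])
        load[i] += p / speeds[i]

    for p in jobs:               # jobs arrive one by one
        buffer.append(p)
        buffer.sort(reverse=True)
        while len(buffer) > K:   # buffer full: release the smallest stored job
            assign(buffer.pop())
    for p in buffer:             # end of input: flush the stored jobs
        assign(p)
    return max(load)

print(semi_online_with_buffer([3, 1, 4, 1, 5, 9, 2, 6], s=2.0, K=1))
```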

3.

We propose an approximate method based on the mean value analysis for estimating the average performance of a re-entrant flow shop with single-job machines and batch machines. The main focus is on the steady-state averages of the cycle time and the throughput of the system. Characteristics of the re-entrant flow and inclusion of the batch machines complicate the exact analysis of the system. Thus, we propose an approximate analytic method for obtaining the mean waiting time at each buffer of the workstation and a heuristic method to improve the result of the analytic method. We compare the results of the proposed approach with a simulation study using some numerical examples.

4.
We study a variant of classical scheduling, which is called scheduling with “end of sequence” information. It is known in advance that the last job has the longest processing time. Moreover, the last job is marked, and thus it is known for every new job whether it is the final job of the sequence. We explore this model on two uniformly related machines, that is, two machines with possibly different speeds. Two objectives are considered, maximizing the minimum completion time and minimizing the maximum completion time (makespan). Let s be the speed ratio between the two machines; we consider the competitive ratios that can be achieved for the two problems as functions of s. We present algorithms for different values of s and lower bounds on the competitive ratio. The proposed algorithms are best possible for a wide range of values of s. For the overall competitive ratio, we show tight bounds of ϕ + 1 ≈ 2.618 for the first problem, and upper and lower bounds of 1.5 and 1.46557 for the second problem. The authors would like to dedicate this paper to the memory of our colleague and friend Yong He who passed away in August 2005 after struggling with illness. D. Ye: Research was supported in part by NSFC (10601048).
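For reference, the overall bound stated for the first objective evaluates as follows, assuming ϕ denotes the golden ratio (the abstract does not define it explicitly):

```latex
\phi = \frac{1+\sqrt{5}}{2} \approx 1.618,
\qquad
\phi + 1 = \frac{3+\sqrt{5}}{2} \approx 2.618 .
```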

5.

Job crafting describes a set of proactive behaviours in which employees may engage to shape their work in order to minimize hindering job demands and maximize resources and challenging demands. Such behaviours may be particularly important among blue-collar workers whose jobs are characterized by poor working conditions and low well-being. We present the development and adaptation of a job crafting measure that may be used among blue-collar workers, based on an existing scale by Tims, Bakker, and Derks (2012) that was not specifically developed for blue-collar workers. We test the validity and reliability of the measure in a longitudinal study based on multiple source information from mail delivery workers in Denmark (N=362 at Time 1; N=408 at Time 2). Results indicate the presence of five job crafting dimensions: increasing challenging demands, decreasing social job demands, increasing social job resources, increasing quantitative demands and decreasing hindering job demands. These can be reliably measured with 15 items. The measure shows acceptable discriminant and criterion validity, and test-retest reliability. The findings extend the application of the original questionnaire. They also add to knowledge of the job crafting behaviours in which blue-collar workers engage and link them to well-being outcomes.

6.

Electronic assembly operations are vital to industries such as telecommunications, computers and consumer electronics. This paper presents a constraint analysis methodology for planning and improving electronic assembly operations that draws on concepts from queueing theory, simulation and production planning. The proposed methodology identifies the operational bottleneck and predicts the utilization, throughput and lead time of the assembly line. It also quantifies the relationship between yields and utilization for the assembly operations. A case study is presented that applies the methodology at an Ericsson, Inc., telecommunications equipment assembly facility. The constraint analysis methodology provided valuable decision support as the managers of Ericsson evaluated the costs and benefits of additional production capacity. Although the focus of this paper is electronic assembly operations, the methodology can be applied to general flow line assembly systems with feedback loops for test and rework under dedicated high-volume production.

7.
We consider the problem of off-line throughput maximization for job scheduling on one or more machines, where each job has a release time, a deadline and a profit. Most of the versions of the problem discussed here were already treated by Bar-Noy et al. (Proc. 31st ACM STOC, 1999, pp. 622–631; http://www.eng.tau.ac.il/amotz/). Our main contribution is to provide algorithms that do not use linear programming, are simple and much faster than the corresponding ones proposed in Bar-Noy et al. (ibid., 1999), while either having the same quality of approximation or improving it. More precisely, compared to the results in Bar-Noy et al. (ibid., 1999), our pseudo-polynomial algorithm for multiple unrelated machines and all of our strongly-polynomial algorithms have better performance ratios, all of our algorithms run much faster, are combinatorial in nature and avoid linear programming. Finally, we show that algorithms with better performance ratios than 2 are possible if the stretch factors of the jobs are bounded; a straightforward consequence of this result is an improvement of the ratio of an optimal solution of the integer programming formulation of the JISP2 problem (see Spieksma, Journal of Scheduling, vol. 2, pp. 215–227, 1999) to its linear programming relaxation.

8.

Semiconductor wafer fabrication involves very complex process routing, and reducing flow times is very important. This study reports a search for better dispatch rules for achieving the goal of reducing flow times while maintaining high machine utilization. We explored a new simulation-based dispatch rule and a queue prediction dispatch rule. Using simulation experiments and an industrial data set, we also compared several other dispatch rules commonly used in semiconductor manufacturing with our proposed dispatch rules. Among these rules, in addition to the simulation-based dispatching rule, the shortest-remaining-processing-time, earliest-due-date and least-slack rules also performed well in terms of reducing flow times. The reasons why these rules perform well are discussed in this paper. Based on previous work and this study, accurately predicting and effectively utilizing future flow times can improve the quality of production management decisions.

9.

In this paper, we propose a productivity model for solving the machine-part grouping problem in cellular manufacturing (CM) systems. First, a non-linear 0-1 integer programming model is developed to identify machine groups and part families simultaneously. This model aims to maximize the system productivity, defined as the ratio of total output to the total material handling cost. Second, an efficient simulated annealing (SA) algorithm is developed to solve large-scale problems. This algorithm provides several advantages over the existing algorithms. It forms part families and machine cells simultaneously. It also considers production volume, sales price, the maximum number of machines in each cell and the total material handling cost. The proposed SA also has the ability to determine the optimum number of manufacturing cells. The performance of the developed models is tested on eight problems of different sizes and complexity selected from the literature. The results show the superiority of the SA algorithm over the mathematical programming model in both productivity and computational time.
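To make the search scheme concrete, here is a minimal simulated-annealing sketch over machine-to-cell assignments. The move operator, the geometric cooling schedule and the toy inter-cell-move cost used as the objective are illustrative assumptions that stand in for the authors' productivity objective, which is not reproduced here.

```python
import math
import random

def anneal(n_machines, n_cells, cost, iters=20000, t0=1.0, alpha=0.9995, seed=0):
    """Generic simulated annealing over machine-to-cell assignments.

    cost: maps an assignment (list: machine index -> cell index) to a value
    to be minimized, standing in for the paper's productivity objective."""
    rng = random.Random(seed)
    x = [rng.randrange(n_cells) for _ in range(n_machines)]    # random start
    cur = best = cost(x)
    best_x, t = x[:], t0
    for _ in range(iters):
        y = x[:]
        y[rng.randrange(n_machines)] = rng.randrange(n_cells)  # move one machine
        c = cost(y)
        if c < cur or rng.random() < math.exp((cur - c) / max(t, 1e-12)):
            x, cur = y, c
            if c < best:
                best_x, best = y[:], c
        t *= alpha                                             # geometric cooling
    return best_x, best

# Toy objective: number of inter-cell transfers for four parts on four machines.
parts = [(0, 1), (1, 2), (2, 3), (0, 3)]        # machines visited by each part
intercell = lambda a: sum(len({a[m] for m in p}) - 1 for p in parts)
print(anneal(n_machines=4, n_cells=2, cost=intercell))
```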

10.
We show how a simple normal approximation to Erlang's delay formula can be used to analyze capacity and staffing problems in service systems that can be modeled as M/M/s queues. The number of servers, s, needed in an M/M/s queueing system to assure a probability of delay of at most p can be well approximated by s ≈ ρ + z_{1-p}·√ρ, where z_{1-p} is the (1-p)th percentile of the standard normal distribution and ρ, the presented load on the system, is the ratio of λ, the customer arrival rate, to μ, the service rate. We examine the accuracy of this approximation over a set of parameters typical of service operations ranging from police patrol, through telemarketing to automatic teller machines, and we demonstrate that it tends to slightly underestimate the number of servers actually needed to hit the delay probability target; adding one server to the number suggested by the above formula typically gives the exact result. More importantly, the structure of the approximation promotes operational insight by explicitly linking the number of servers with server utilization and the customer service level. Using a scenario based on an actual teleservicing operation, we show how operations managers and designers can quickly obtain insights about the trade-offs between system size, system utilization and customer service. We argue that this little-used approach deserves a prominent role in the operations analyst's and operations manager's toolbags.
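For concreteness, the sketch below pairs the reconstructed square-root staffing rule s ≈ ρ + z_{1-p}·√ρ with the standard Erlang C delay probability, so the "add one server" behaviour mentioned above can be checked directly. The numerical scenario (ρ = 40, p = 0.10) is invented for illustration and is not taken from the paper.

```python
from math import factorial, sqrt
from statistics import NormalDist

def erlang_c(s, rho):
    """Probability of delay in an M/M/s queue with offered load rho = lambda/mu."""
    if rho >= s:
        return 1.0
    tail = rho**s / (factorial(s) * (1 - rho / s))
    return tail / (sum(rho**k / factorial(k) for k in range(s)) + tail)

def staff_approx(rho, p):
    """Square-root staffing: s ~ rho + z_{1-p} * sqrt(rho)."""
    return rho + NormalDist().inv_cdf(1 - p) * sqrt(rho)

rho, p = 40.0, 0.10               # e.g. 80 calls/hour, 2 calls/hour per server
s_hat = staff_approx(rho, p)      # about 48.1 servers
for s in (round(s_hat), round(s_hat) + 1):
    print(s, round(erlang_c(s, rho), 3))   # compare delay probability with target p
```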

11.

Planning and control systems for highly dynamic and uncertain manufacturing environments require adaptive flexibility and decision-making capabilities. Modern distributed manufacturing systems assess the utility of planning and executing solutions for both system goals (e.g. minimize manufacturing production time for all parts or minimize WIP) and local goals (e.g. expedite part A production schedule or maximize machine X utilization). Sensible Agents have the ability to alter their autonomy levels to choose among a set of decision models in order to handle the differences between local and system goals. In this paper, Sensible Agents are applied to a production planning and control problem in the context of job shop scheduling and decision model theory. Sensible Agents provide for trade-off reasoning mechanisms among system and local utilities that are flexible and responsive to an agent's abilities, situational context and position in the organizational structure of the system.

12.
Lot streaming is the process of splitting a job or lot into sublots to reduce its makespan on a sequence of machines. The goal in the lot streaming problem is to find the optimal size of each sublot that will minimize the makespan. The makespan is defined as the time the last sublot completes its processing on the last machine. If the sizes of these sublots are restricted to remain the same on all machines, the solution is called a consistent sublot solution. However, if the sizes of the sublots are allowed to vary, the solution is referred to as a nonconsistent or variable sublot solution. Also, if the machines must be in operation continuously from the first to the last sublot, the solution is a no-idling solution. When setups are explicitly considered in the problem, there are two cases. If the setup on each machine requires some portion of the first sublot to be present at the machine, the problem is referred to as the attached setup time problem. If setups can be performed ahead of time, before the first sublot reaches the particular machine, the corresponding problem is referred to as the detached setup problem. Finally, if the machines are allowed to be idle between the processing of sublots, the resultant solution is an intermittent idling solution. In this paper, the consistent sublot lot streaming problem with intermittent idling and no setups is discussed. The models developed also assume that the number of sublots is fixed and known. The m-machine, two-sublot lot streaming problem is reviewed. An algorithm for the three-sublot, m-machine problem is derived using a network representation of the problem. The complexity of the algorithm is O(m^2). Finally, using the insights from the three-sublot problem, a heuristic algorithm is provided for the m-machine, n-sublot problem. The results on the proposed heuristic are very encouraging; the average percent deviation from the optimal makespan is approximately 0.76% on 155 randomly generated problems with different m and n values.
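As a small worked example of the consistent-sublot case with intermittent idling and no setups, the sketch below computes the makespan of a given split with the usual flow-shop recursion and grid-searches the best integer two-sublot split. The lot size and per-unit machine times are invented for illustration; the paper's network-based algorithms are not reproduced here.

```python
def makespan(sublots, unit_times):
    """Completion time of the last sublot on the last machine.

    sublots: consistent sublot sizes x_1..x_n (the same on every machine)
    unit_times: per-unit processing time on each of the m machines
    Intermittent idling is allowed, so the standard flow-shop recursion applies."""
    m, n = len(unit_times), len(sublots)
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            ready = max(C[i - 1][j] if i else 0.0, C[i][j - 1] if j else 0.0)
            C[i][j] = ready + sublots[j] * unit_times[i]
    return C[-1][-1]

# Grid search over two consistent sublots of a lot of size 100 on 3 machines.
unit_times = [2.0, 3.0, 1.5]
best = min(((makespan([x, 100 - x], unit_times), x) for x in range(1, 100)),
           key=lambda t: t[0])
print(best)   # (best makespan found, size of the first sublot)
```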

13.

An effective maintenance system is essential for SAUDIA in order to meet its set objectives. These objectives include minimal flight cancellations, minimal delays, minimal repair turn time, and effective utilization of maintenance resources. In this paper, the elements of an integrated simulation model for SAUDIA have been described. The integrated model consists of several modules. These are planning and scheduling, organization, supply, quality control and performance measures. The required data and software for such a model have been described. Also, the utility of such a model for SAUDIA has been outlined.

14.
Given the pervasive nature of computer and communication networks, many paradigms have been used to study their properties and performances. In particular, reliability models based on topological properties can adequately represent the network capacity to survive failures of its components. Classical reliability models are based on the existence of end-to-end paths between network nodes, not taking into account the length of these paths; for many applications, this is inadequate, because the connection will only be established or attain the required quality if the distance between the connecting nodes does not exceed a given value. An alternative topological reliability model is the diameter-constrained reliability of a network; this measure considers not only the underlying topology, but also imposes a bound on the diameter, which is the maximum distance between the nodes of the network. In this work, we study in particular the case where we want to model the connection between a source-vertex s and a set of terminal vertices K (for example, a video multicast application), using a directed graph (digraph) for representing the topology of the network, with node set V. If the s,K-diameter is the maximum distance between s and any of the vertices of K, the diameter-constrained s,K-terminal reliability of a network G, R_{s,K}(G,D), is defined as the probability that surviving arcs span a subgraph whose s,K-diameter does not exceed D. One of the tools successfully employed in the study of classical reliability models is the domination of a graph, which was introduced by Satyanarayana and Prabhakar. In this paper we introduce a definition and a full characterization of the domination in the case of the diameter-constrained s,K-terminal reliability when K=V, including the classical source-to-all-terminal reliability domination result as a specific case. Moreover, we use these results to present an algorithm for the evaluation of the diameter-constrained s,V-terminal reliability R_{s,V}(G,D).
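The definition of R_{s,K}(G,D) can also be estimated directly by simulation. Below is a minimal Monte Carlo sketch (not the domination-based algorithm of the paper): sample arc survivals independently, run a BFS from s over the surviving arcs, and count the trials in which every terminal in K lies within D hops of s. The toy digraph and survival probabilities are invented for illustration.

```python
import random
from collections import deque

def bfs_dists(n, arcs, s):
    """Hop distances from s over the surviving arcs of a digraph on n nodes."""
    adj = [[] for _ in range(n)]
    for u, v in arcs:
        adj[u].append(v)
    dist = [None] * n
    dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def mc_reliability(n, arcs, probs, s, K, D, samples=50_000, seed=1):
    """Monte Carlo estimate of R_{s,K}(G, D): every terminal in K is reachable
    from s within D hops using only surviving arcs."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        surviving = [a for a, p in zip(arcs, probs) if rng.random() < p]
        d = bfs_dists(n, surviving, s)
        hits += all(d[k] is not None and d[k] <= D for k in K)
    return hits / samples

# Toy digraph: 0 -> {1, 2} -> 3, each arc surviving with probability 0.9.
arcs = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(mc_reliability(4, arcs, [0.9] * 4, s=0, K=[3], D=2))
```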

15.
Improved Bounds on Relaxations of a Parallel Machine Scheduling Problem
We consider the problem of scheduling n jobs with release dates on m identical parallel machines to minimize the average completion time of the jobs. We prove that the ratio of the average completion time of the optimal nonpreemptive schedule to that of the optimal preemptive schedule is at most 7/3, improving a bound of Shmoys and Wein.

16.
Preemptive Machine Covering on Parallel Machines
This paper investigates preemptive parallel machine scheduling to maximize the minimum machine completion time. We first show that the off-line version can be solved in O(mn) time for the general m-uniform-machine case. Then we study the on-line version. We show that any randomized on-line algorithm must have a competitive ratio of at least m for the m-uniform-machine case and at least ∑_{i=1}^{m} 1/i for the m-identical-machine case. Lastly, we focus on the two-uniform-machine case. We present an on-line deterministic algorithm whose competitive ratio matches the lower bound of the on-line problem for every machine speed ratio s ≥ 1. We further consider the case in which idle time is allowed to be introduced in the procedure of assigning jobs and the objective becomes to maximize the continuous period of time (starting from time zero) during which both machines are busy. We present an on-line deterministic algorithm whose competitive ratio matches the lower bound of the problem for every s ≥ 1. We show that randomization does not help.
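For the identical-machines bound reconstructed above, the snippet below simply evaluates the harmonic sum ∑_{i=1}^{m} 1/i for a few small values of m.

```python
from fractions import Fraction

def harmonic(m):
    """The m-th harmonic number, i.e. the lower bound sum_{i=1}^{m} 1/i quoted above."""
    return sum(Fraction(1, i) for i in range(1, m + 1))

print({m: float(harmonic(m)) for m in (2, 3, 4, 5)})
# approximately {2: 1.5, 3: 1.833, 4: 2.083, 5: 2.283}
```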

17.
On Approximating a Scheduling Problem
Given a set of communication tasks (best described in terms of a weighted bipartite graph where one set of nodes denotes the senders, the other set the receivers, edges are communication tasks, and the weight of an edge is the time required for transmission), we wish to minimize the total time required for the completion of all communication tasks, assuming that tasks can be preempted (that is, each edge can be subdivided into many edges with weights adding up to the edge's original weight) and that preemption comes with a cost. In this paper, we first prove that one cannot approximate this problem within a factor smaller than some constant unless P=NP. It is known that a simple approximation algorithm achieves a ratio of two (H. Choi and S.L. Hakimi, Algorithmica, vol. 3, pp. 223–245, 1988). However, our experimental results show that its performance is worse than that of the originally proposed heuristic algorithm (I.S. Gopal and C.K. Wong, IEEE Transactions on Communications, vol. 33, pp. 497–501, 1985). We devise a more sophisticated algorithm, called the potential function algorithm, which, on the one hand, achieves a provable approximation ratio of two, and on the other hand, shows very good experimental performance. Moreover, the way in which our more sophisticated algorithm derives from the simple one suggests a hierarchy of algorithms, all of which have a worst-case performance at most two, but which we suspect to have increasingly better performance, both in the worst case and on actual instances.

18.

This paper describes a process planning system for sheet metal parts. The system is able to deliver process planning data just in time: tools, processes and even machines used for manufacturing the parts can still be changed at the shop door after the design has been finished.

The sequence of the production planning steps carried out can be changed. In this way the system can be adapted to the requirements of different sheet metal manufacturing companies.

More detailed information is given about the methods used for nesting of the workpieces on the sheets to be produced.

19.
A scheduling theory model is applied to study surgery scheduling in hospitals. If a surgical patient is regarded as a job waiting to be processed, and the related surgeons, anesthesiologists, nurses and surgical equipment as machines that are simultaneously needed for the processing of the job, then surgery scheduling can be described as a parallel machine scheduling problem in which a job is processed by multiple machines simultaneously. We adopt a two-stage approach to solve this scheduling problem and develop a computerized surgery scheduling system to handle the task. The system was implemented in the Shanghai First People’s Hospital and increased the average monthly number of completed operations by 10.33%, the utilization rate of expensive equipment by 9.66% and the patient satisfaction degree by 1.12%, while decreasing the average time that patients wait for surgery by 0.46 days.

20.
Zheng Hongye, Gao Suogang, Liu Wen, Wu Weili, Du Ding-Zhu, Hou Bo. Journal of Combinatorial Optimization, 2022, 44(1): 343–353.

In this paper, we consider the parallel-machine scheduling problem with release dates and submodular rejection penalties. In this problem, we are given m identical parallel machines and n jobs. Each job has a processing time and a release date. A job is either rejected, in which case a rejection penalty has to be paid, or accepted and processed on one of the m identical parallel machines. The objective is to minimize the sum of the makespan of the accepted jobs and the rejection penalty of the rejected jobs which is determined by a submodular function. Our main work is to design a 2-approximation algorithm based on the primal-dual framework.

