Similar Documents
1.
We study the problems of non-preemptively scheduling and packing malleable and parallel tasks with precedence constraints to minimize the makespan. In the scheduling variant, we allow the free choice of processors; in packing, each task must be assigned to a contiguous subset. Malleable tasks can be processed on different numbers of processors with varying processing times, while parallel tasks require a fixed number of processors. For precedence constraints of bounded width, we resolve the complexity status of the problem with any number of processors and any width bound. We present an FPTAS based on Dilworth’s decomposition theorem for the NP-hard problem variants, and exact efficient algorithms for all remaining special cases. For our positive results, we do not require the otherwise common monotonous penalty assumption on the processing times of malleable tasks, whereas our hardness results hold even when assuming this restriction. We complement our results by showing that these problems are all strongly NP-hard under precedence constraints which form a tree.

2.
Iris Vessey. Decision Sciences, 1991, 22(2): 219-240
A considerable amount of research has been conducted over a long period of time into the effects of graphical and tabular representations on decision-making performance. To date, however, the literature appears to have arrived at few conclusions with regard to the performance of the two representations. This paper addresses these issues by presenting a theory, based on information processing theory, to explain under what circumstances one representation outperforms the other. The fundamental aspects of the theory are: (1) although graphical and tabular representations may contain the same information, they present that information in fundamentally different ways; graphical representations emphasize spatial information, while tables emphasize symbolic information; (2) tasks can be divided into two types, spatial and symbolic, based on the type of information that facilitates their solution; (3) performance on a task will be enhanced when there is a cognitive fit (match) between the information emphasized in the representation type and that required by the task type; that is, when graphs support spatial tasks and when tables support symbolic tasks; (4) the processes or strategies problem solvers use are the crucial elements of cognitive fit since they provide the link between representation and task; the processes identified here are perceptual and analytical; (5) so long as there is a complete fit of representation, processes, and task type, each representation will lead to both quicker and more accurate problem solving. The theory is validated by its success in explaining the results of published studies that examine the performance of graphical and tabular representations in decision making.

3.
Heuristic algorithms are among the classical methods for the resource-constrained project scheduling problem and are commonly used to generate initial solutions for metaheuristics. The serial (SSGS) and parallel (PSGS) schemes are the classical mechanisms for generating project schedules. Based on breadth-first search over the project network, this paper proposes a breadth schedule generation scheme (BSSGS) that accounts for the position of task nodes, and verifies its effectiveness. Borrowing the definitions of breadth-first search, the scheme defines the current task set C, the candidate task set D, and the stage variable g of the schedule generation mechanism, partitions the task nodes into levels, and defines a task scheduling order; a candidate task j* is then selected according to a priority rule and the resource availability R_k(t) is updated, yielding a complete schedule. A case study shows that, while satisfying the priority rule and the resource constraints, the new scheme also accounts for the position of task nodes in the network: it does not avoid locally complex subnetworks and it schedules critical nodes promptly. Tests on PSPLIB instances under the LPT, SPT, MTS, and MIS priority rules show that the new scheme outperforms the serial and parallel generation schemes in average makespan, average resource utilization, and the rate of optimal schedules, while its time complexity, O(J²·K), is no higher than that of the traditional mechanisms.
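For readers unfamiliar with schedule generation schemes, the serial SGS that the abstract compares against can be sketched together with the BFS level assignment a breadth-oriented scheme builds on. A minimal single-resource sketch, assuming a renewable capacity per period; the actual BSSGS selection logic is not specified by the abstract, so `key` stands in for an arbitrary priority rule:

```python
from collections import deque

def bfs_levels(succ, n):
    """Assign each task its BFS level (depth) in the precedence DAG."""
    pred_count = [0] * n
    for u in range(n):
        for v in succ[u]:
            pred_count[v] += 1
    level = [0] * n
    q = deque(u for u in range(n) if pred_count[u] == 0)
    while q:
        u = q.popleft()
        for v in succ[u]:
            level[v] = max(level[v], level[u] + 1)
            pred_count[v] -= 1
            if pred_count[v] == 0:
                q.append(v)
    return level

def serial_sgs(dur, demand, succ, capacity, horizon, key):
    """Classic serial SGS: repeatedly pick an eligible task by `key`
    and start it at the earliest precedence- and resource-feasible time."""
    n = len(dur)
    preds = [[] for _ in range(n)]
    for u in range(n):
        for v in succ[u]:
            preds[v].append(u)
    free = [capacity] * horizon          # remaining capacity per period
    start, done = {}, set()
    while len(done) < n:
        eligible = [j for j in range(n)
                    if j not in done and all(p in done for p in preds[j])]
        j = min(eligible, key=key)       # the priority rule decides
        t = max([start[p] + dur[p] for p in preds[j]], default=0)
        while any(free[t + k] < demand[j] for k in range(dur[j])):
            t += 1
        for k in range(dur[j]):
            free[t + k] -= demand[j]
        start[j] = t
        done.add(j)
    return start
```

A BSSGS-style rule could, for instance, feed `bfs_levels` into `key` so that tasks deeper in the network are not deferred indefinitely.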

4.
We study a class of scheduling problems with batch setups for the online-list and online-time paradigms. Jobs are to be scheduled in batches for processing. All jobs in a batch start and complete together, and a constant setup time precedes each batch. The objective is to minimize the total completion time of all jobs. We primarily consider the special cases of these problems with identical processing times, for which efficient on-line heuristics are proposed and their competitive performance is evaluated.
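With identical processing times, the objective of a given batching is easy to evaluate: each batch adds its setup plus its processing time to the clock, and every job in it completes at the batch's finish time. A minimal offline sketch, with a brute-force search over batchings that is mine for illustration, not the paper's on-line heuristics:

```python
from itertools import combinations

def total_completion_time(batch_sizes, p, s):
    """Sum of job completion times when jobs with identical processing
    time p run in consecutive batches, each preceded by a setup s.
    All jobs in a batch complete together at the batch's finish time."""
    total, clock = 0.0, 0.0
    for b in batch_sizes:
        clock += s + b * p      # setup, then the b jobs of this batch
        total += b * clock      # every job in the batch finishes now
    return total

def best_batching(n, p, s):
    """Brute-force the optimal split of n identical jobs into
    consecutive batches (all compositions of n)."""
    best = (float("inf"), None)
    for r in range(n):                       # r = number of batch cuts
        for cuts in combinations(range(1, n), r):
            bounds = (0,) + cuts + (n,)
            sizes = [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]
            best = min(best, (total_completion_time(sizes, p, s), sizes))
    return best
```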

5.
Geographic Information Systems (GIS) enable decision makers to view tabular data geographically, as maps. This simple yet powerful visual format appears to facilitate problem solving, yet how it does so is not clear, nor do we know the types of problems that benefit from this representation. To begin to understand the contributions of geographic representations over tabular representations, we conducted a three-factor experiment in problem solving. The experiment contained two different representations (map and table), three different geographic relationships (proximity, adjacency, and containment), and three levels of task difficulty (low, medium, and high). We found that maps generally produced faster problem solving than tables, and that problem-solving time increased with task difficulty. Most importantly, for the proximity and adjacency geographic relationships we found that maps kept problem-solving time low, while tables tended to increase time dramatically. However, we found that the number of knowledge states for each task explains performance times quite well and is a useful tool for understanding performance differences and interaction effects. As tasks become more difficult, representing them as maps generally keeps the number of knowledge states small, while for tables, the number of knowledge states increases dramatically. Correspondingly, problem-solving times increase dramatically with tables, but not with maps. In sum, as difficulty increases, maps are more effective for problem-solving tasks. With maps, the tasks are simplified using visual heuristics that keep problem-solving times and error rates from rising as quickly as they do with tables.

6.
Paced assembly lines are widely used in repetitive manufacturing applications. Most previous research on the design of paced lines has assumed that each task along the line can be performed by only one worker (or a fixed number of workers). In many cases, however, task duration times may be reduced by increasing the number of workers or changing the equipment assigned to work stations. Thus, the problem becomes one of assigning resource alternatives (e.g., workers and/or equipment) and tasks to work stations to minimize total cost for a desired production rate. For this problem, we present three procedures. The first formulates the problem as a shortest path problem and guarantees optimality. The second and third are heuristic adaptations of the shortest path procedure that are capable of solving large problems. The procedures are compared in terms of solution quality and computation time on a set of 128 randomly generated problems for which optimal solutions could be found. Our simulation results indicate that the choice of procedure depends on problem complexity and resource costs.

7.
Equipment failures can have significant implications in terms of cost and customer satisfaction. Reducing the time required to find the cause of a failure can provide large cost savings and help preserve customer goodwill. Single‐item discrete sequential search models can be used to sequence the tasks in diagnostic search to minimize the expected time required to find the cause of the failure. We increase the utility of the single‐item discrete sequential search model by developing a formulation that includes simple precedence relationships as well as sequence dependent relationships defined by group activities. This formulation can be applied to a number of other problems including determining the sequence for multiple quality control tests on an item, scheduling oil well workovers to maximize the expected increase in oil production, and sequencing tasks in a research project where there is a technological risk associated with each task.
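In the basic single-item discrete sequential search model, without the precedence and group extensions this paper adds, a classical exchange argument shows that inspecting locations in nonincreasing order of the probability-to-time ratio minimizes the expected search time. A sketch under that unconstrained assumption:

```python
def expected_search_time(order, p, t):
    """Expected time to locate the failure cause when locations are
    inspected in `order`; p[i] = probability the cause is at location i,
    t[i] = time to test location i."""
    elapsed, expected = 0.0, 0.0
    for i in order:
        elapsed += t[i]
        expected += p[i] * elapsed
    return expected

def optimal_order(p, t):
    """Without precedence constraints, testing in nonincreasing p[i]/t[i]
    order minimizes the expected search time (classical exchange argument)."""
    return sorted(range(len(p)), key=lambda i: p[i] / t[i], reverse=True)
```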

8.
In this paper, we consider single-machine scheduling problems with learning and deterioration effects. By learning and deterioration effects, we mean that job processing times are functions of both their starting times and their positions in the sequence. It is shown that, even with learning effects and deteriorating jobs in the processing times, the single-machine minimization problems for the makespan and for the (squared) sum of completion times remain polynomially solvable. For the objectives of weighted sum of completion times and maximum lateness, the paper proves that the WSPT rule and the EDD rule, respectively, construct optimal sequences in some special cases.
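The abstract's combined start-time/position model is not fully specified here, but the flavor of such polynomial-solvability results can be seen in the common positional-learning special case p_{j,r} = p_j · r^a with a < 0, where SPT order minimizes the makespan by the rearrangement inequality. An illustrative special case, not the paper's exact model:

```python
from itertools import permutations

def makespan_with_learning(seq, p, a=-0.3):
    """Positional learning model: the job in position r (1-indexed)
    takes p[j] * r**a time units, with a < 0 modeling learning."""
    return sum(p[j] * (r + 1) ** a for r, j in enumerate(seq))

def spt(p):
    """Shortest-processing-time-first order. The positional factors
    r**a decrease in r, so by the rearrangement inequality the smallest
    job should meet the largest factor: SPT minimizes the makespan."""
    return sorted(range(len(p)), key=lambda j: p[j])
```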

9.
We address the problem of assigning airline customer service agents (CSAs) to tasks related to departing flights, such as selling tickets and collecting boarding cards, at an international terminal of a large airport. The airline specifies minimum and target levels of staff and required (or desired) types and levels of skills for each location in each time period. The assignment problem is complicated by staff heterogeneity, time required for moves between locations, and lunch and rest‐break requirements. We present a mixed‐integer formulation that considers both staffing shortages and skills mismatches and show that the problem is NP‐hard. We derive valid inequalities that tighten the bounds within a branch‐and‐cut procedure, enabling us to obtain near‐optimal solutions for problems of realistic size very quickly. We also present a generalization to simultaneously optimize shift starting times and task assignments, which can aid in longer term workforce planning. Finally, we utilize our procedure to obtain managerial insights regarding the benefits of flexibility derived from more highly skilled staff, allowing more frequent moves, and choices of shift starting times. We also demonstrate the benefits of our procedure vs. a heuristic that mimics what an experienced scheduler might choose.

10.
Resource constraints are a challenge a project may face at any time. Owing to limited resources, activities that are parallel (i.e., have no precedence relation) in the original plan may have to be converted into sequential ones. Sequencing parallel activities can delay the project, so the question is how to minimize the resulting increase in project duration. This sequencing optimization problem is both a project scheduling problem and a combinatorial one, typically very hard and including some NP-hard cases. This paper studies a typical subproblem: pairwise sequencing of parallel activities, i.e., how to convert 2n parallel activities into n sequential activity pairs with minimal impact on project duration. The number of candidate solutions can reach (2n)!/n!. Using the activity network (e.g., a CPM network), we quantify the impact of sequencing parallel activities on project duration with simple time parameters, thereby reducing the difficulty of the problem, and formulate a pure 0-1 programming model. Experiments verify the model's efficiency: instances with 100 parallel activities take 0.2605 seconds on average, and instances with 500 parallel activities take 10.66 seconds on average.
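One simple way to quantify the duration impact of forcing activity i before a parallel activity j from CPM time parameters is the slack test: the project is delayed by roughly max(0, EF_i − LS_j), the amount by which i's early finish pushes j past its latest start. This is a hypothetical quantification in the spirit of the abstract, not necessarily the paper's exact formulation:

```python
from collections import deque

def cpm(dur, succ):
    """Forward/backward pass on an activity-on-node network; returns
    early starts, late starts, and the project duration."""
    n = len(dur)
    indeg = [0] * n
    for u in range(n):
        for v in succ[u]:
            indeg[v] += 1
    topo, q = [], deque(u for u in range(n) if indeg[u] == 0)
    while q:
        u = q.popleft()
        topo.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    es = [0] * n
    for u in topo:                       # forward pass: early starts
        for v in succ[u]:
            es[v] = max(es[v], es[u] + dur[u])
    T = max(es[u] + dur[u] for u in range(n))
    ls = [T - dur[u] for u in range(n)]
    for u in reversed(topo):             # backward pass: late starts
        for v in succ[u]:
            ls[u] = min(ls[u], ls[v] - dur[u])
    return es, ls, T

def pairing_delay(i, j, dur, es, ls):
    """First-order delay caused by forcing parallel activity i to
    precede j: i's early finish minus j's latest start, if positive."""
    return max(0, es[i] + dur[i] - ls[j])
```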

11.
A Job-Time Compression Algorithm for the Tardiness Problem in Single-Resource Scheduling
This paper addresses the tardiness problem in single-resource scheduling by compressing (crashing) job processing times. When a task becomes tardy while the processing order is being arranged, we propose a heuristic, based on backward search along the critical path, that finds the set of tasks to compress so that the tardy task's delay is reduced as much as possible while the number of compressed tasks is minimized. We prove the effectiveness of the algorithm and give a numerical example.

12.
We study the scheduling of multiple tasks under varying processing costs and derive a priority rule for optimal scheduling policies. Each task has a due date, and a non‐completion penalty cost is incurred if the task is not completely processed before its due date. We assume that the task arrival process is stochastic and the processing rate is capacitated. Our work is motivated by both traditional and emerging application domains, such as construction industry and freelance consulting industry. We establish the optimality of Shorter Slack time and Longer remaining Processing time (SSLP) principle that determines the priority among active tasks. Based on the derived structural properties, we also propose an effective cost‐balancing heuristic policy and demonstrate the efficacy of the proposed policy through extensive numerical experiments. We believe our results provide operators/managers valuable insights on how to devise effective service scheduling policies under varying costs.
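The SSLP priority itself is straightforward to express: among active tasks, prefer shorter slack and break ties by longer remaining processing time. A sketch with assumed task tuples; the stochastic arrival and capacity model of the paper is omitted:

```python
def sslp_order(tasks, now):
    """Order active tasks by the SSLP principle: Shorter Slack time
    first, ties broken by Longer remaining Processing time.
    Each task is (due_date, remaining_processing);
    slack = due_date - now - remaining_processing."""
    return sorted(tasks, key=lambda task: (task[0] - now - task[1], -task[1]))
```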

13.
This paper presents a scheduling algorithm for flowshop problems with a common job sequence on all machines, using makespan as the criterion. The algorithm first chooses a preferred sequence by scanning the processing-times matrix and making a few calculations. The makespan of the preferred job sequence is then further reduced by an improvement routine that interchanges adjacent jobs. Solutions to 1200 problems are compared with the best solutions previously reported for problems of corresponding size in the Campbell-Dudek-Smith (C-D-S) paper [1]. The algorithm improves the makespan by up to 1% on average on nearly 50% of the problem sets relative to existing algorithms, and its computational time requirements are about one-fifth of those of the C-D-S algorithm.
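The evaluation and improvement steps can be sketched for a permutation flowshop: compute the makespan of a job sequence, then repeatedly swap adjacent jobs while the makespan strictly decreases. The initial preferred-sequence construction is not detailed in the abstract, so a given starting sequence is assumed:

```python
def flowshop_makespan(seq, p):
    """Permutation-flowshop makespan; p[j][m] = processing time of
    job j on machine m, same job order on every machine."""
    m = len(p[0])
    c = [0.0] * m                         # completion time on each machine
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def adjacent_interchange(seq, p):
    """Improvement routine in the spirit of the abstract: keep swapping
    adjacent jobs while the makespan strictly decreases."""
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
            if flowshop_makespan(cand, p) < flowshop_makespan(seq, p):
                seq, improved = cand, True
    return seq
```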

14.
The effects of task clarification, self-monitoring, and performance feedback on cleaning behaviors of 9 lifeguards in 3 performance areas (vacuuming, lobby tidying, and pool deck maintenance) were investigated using an ABA reversal design at a county swim complex. A specific task in each performance area was used as a behavioral control. Following a task clarification meeting, the percentage of closing tasks completed each night was self-monitored through ratings by lifeguards and managers. Researchers conducted independent ratings of these completed tasks after the staff had left the building. Feedback data were posted daily using line graphs that displayed the percentage of tasks completed correctly from both self-report and researchers' data. Overall performance increased from an average of 45.1% correct behaviors during baseline to an average of 76.9% during intervention then reversed to baseline during follow-up to an average performance of 45.05%.

15.

Rising human resource costs place higher demands on the optimization of home appliance manufacturing processes. This paper studies an integrated human resource optimization problem that considers human resource selection, learning effects, skill degradation effects, and parallel production lines. There are multiple manufacturing tasks with different normal processing times, and human resources differ in ability and cost. The actual processing time of a task is determined by its normal processing time, its position, and the ability of the assigned human resource. The objective is to minimize production time and labor cost. To solve the problem, we first consider the case in which the human resources have already been selected and assigned to the production lines; some structural properties are then proposed and a heuristic is developed to arrange tasks on every single production line. We also derive a lower bound for the problem. Since the investigated problem is NP-hard, a Variable Neighborhood Search is designed to solve it in reasonable time. Finally, computational experiments are conducted and the results validate the performance of the proposed methods.


16.
Many workcells in batch manufacturing systems are populated with multiple, nonidentical machines that perform similar tasks. Because of the size of a batch when a job arrives, it may be uneconomical to set up two or more machines to process the same job simultaneously. An economic decision has to be made as regards which machine in the cell to assign the job. Likewise, many multi-operation jobs can be processed using one of several feasible operation sequences that may lead to different total manufacturing costs. The cost differences are the result of several factors, among which are processing time and cost dependencies between operations, fixturing requirements, and material handling requirements. When the workcell machine selection decision is considered along with the operation sequencing decision, determination of the best machine in a cell and the best operation sequence for the batch is a non-trivial task. In this paper, we address the problem of selecting the best machine within a cell and the best operation sequence for a batch when operation cost is machine and sequence dependent. The problem is modeled mathematically and solved using a heuristic algorithm. The performance of the algorithm is compared with that of an exact solution procedure.

17.
Yu Fan-Jang, Hwang Sheue-Ling, Huang Yu-Hao. Risk Analysis, 1999, 19(3): 401-415
In the design, development, and manufacturing stages of industrial products, engineers usually focus on problems caused by hardware or software but pay less attention to problems caused by human error, which may significantly affect system reliability and safety. Even when operating procedures are strictly followed, human error may still occur occasionally. Among the influencing factors, inappropriate design of the standard operation procedure (SOP) or standard assembly procedure (SAP) is an important and latent cause of unexpected results during human operation. To reduce the probability and the effects of such errors in industrial work processes, overall evaluation of SOP or SAP quality has become an essential task. The human error criticality analysis (HECA) method was developed to identify the potentially critical problems caused by human error in a human operation system. The method performs task analysis on the basis of the operation procedure (e.g., the SOP), analyzes the human error probability (HEP) of each human operation step, and assesses its error effects on the whole system. The results of the analysis show the interrelationships among critical human tasks, critical human error modes, and the human reliability information of the operation system. To test the robustness of the model, a case study of initiator assembly tasks was conducted. The results show that the HECA method is practicable for evaluating operation procedures, and that the information it produces is valuable for identifying means to upgrade human reliability and system safety for human tasks.

18.
The worst-case behavior of the critical path (CP) algorithm for multiprocessor scheduling with an out-tree task dependency structure is studied. The out-tree is not known in advance and the tasks are released on-line over time (each task is released at the completion time of its direct predecessor in the out-tree). For each task, the processing time and the remainder (the length of the longest chain of future tasks headed by this task) become known at its release time. The tight worst-case ratio and absolute error are derived for this strongly clairvoyant on-line model. For out-trees with a specific simple structure, essentially better worst-case ratios and absolute errors are derived. The bounds are given in terms of t_max, the length of the longest chain in the out-tree, and it is shown that the worst-case ratio asymptotically approaches 2 for large t_max once the number of processors exceeds a certain integer threshold. A non-clairvoyant on-line version (without knowledge of task processing times and remainders at release times) is also considered, and it is shown that the worst-case behavior of width-first search is better than or the same as that of depth-first search.
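The CP rule itself is simple to state: whenever a processor is free, run the available task heading the longest remaining chain. An event-driven sketch on an out-forest; children are released at their parent's completion as in the on-line model, but chain lengths are computed from the full tree here for simplicity rather than revealed at release time:

```python
import heapq

def chain_length(dur, children, v):
    """Length of the longest chain starting at v (v's duration included)."""
    return dur[v] + max((chain_length(dur, children, c) for c in children[v]),
                        default=0)

def cp_schedule(dur, children, roots, m):
    """List scheduling of an out-forest on m processors, always running
    the available task heading the longest remaining chain (CP rule)."""
    prio = {v: -chain_length(dur, children, v) for v in range(len(dur))}
    ready = [(prio[r], r) for r in roots]
    heapq.heapify(ready)
    running = []                          # (finish_time, task)
    t, makespan = 0.0, 0.0
    while ready or running:
        while ready and len(running) < m:         # fill free processors
            _, v = heapq.heappop(ready)
            heapq.heappush(running, (t + dur[v], v))
        t, v = heapq.heappop(running)             # advance to next finish
        makespan = max(makespan, t)
        for c in children[v]:                     # release v's children
            heapq.heappush(ready, (prio[c], c))
    return makespan
```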

19.
A hybrid approach to solve job sequencing problems using heuristic rules and artificial neural networks is proposed. The problem is to find a job sequence for a single machine that minimizes the total weighted tardiness of the jobs. Two different cases are considered: (1) when there are no setups, and (2) when there are sequence-dependent setup times. So far, successful heuristic rules for these cases are: apparent tardiness cost (ATC) rule proposed by Vepsalainen and Morton for the former case, and an extended version of the ATC rule (ATCS) proposed by Lee, Bhaskaran, and Pinedo for the latter. Both approaches utilize some look-ahead parameters for calculating the priority index of each job. As reported by Bhaskaran and Pinedo, the proper value of the look-ahead parameter depends upon certain problem characteristics, such as due-date tightness and due-date range. Thus, an obvious extension of the ATC or the ATCS rule is to adjust the parameter values depending upon the problem characteristics: this is known to be a difficult task. In this paper, we propose an application of a neural network as a tool to ‘predict’ proper values of the look-ahead parameters. Our computational tests show that the proposed hybrid approach outperforms both the ATC rule with a fixed parameter value and the ATCS using the heuristic curve-fitting method.
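The ATC index has a standard closed form, I_j(t) = (w_j/p_j) · exp(−max(d_j − p_j − t, 0)/(k·p̄)), where p̄ is the average processing time and k is the look-ahead parameter the neural network is trained to predict. A dispatching sketch for the no-setup (ATC rather than ATCS) case; the function names are mine:

```python
import math

def atc_index(t, p, w, d, k, p_bar):
    """Apparent Tardiness Cost priority index of a job with processing
    time p, weight w, and due date d at decision time t."""
    return (w / p) * math.exp(-max(d - p - t, 0.0) / (k * p_bar))

def atc_sequence(jobs, k):
    """jobs: list of (p, w, d). Greedy dispatching: whenever the machine
    frees up, schedule the remaining job with the largest ATC index."""
    p_bar = sum(p for p, _, _ in jobs) / len(jobs)
    remaining, t, seq = list(range(len(jobs))), 0.0, []
    while remaining:
        j = max(remaining, key=lambda j: atc_index(t, *jobs[j], k, p_bar))
        remaining.remove(j)
        seq.append(j)
        t += jobs[j][0]          # machine busy until this job completes
    return seq
```

With a large k the rule degenerates toward WSPT; with a small k it behaves more like an earliest-due-date rule, which is why tuning k to the instance matters.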

20.
Journal of Combinatorial Optimization - We consider a number of parallel-machine scheduling problems in which jobs have variable processing times. The actual processing time of each job is...
