Similar Documents (20 results)
1.
Zheng Hongye, Gao Suogang, Liu Wen, Wu Weili, Du Ding-Zhu, Hou Bo. Journal of Combinatorial Optimization, 2022, 44(1): 343-353

In this paper, we consider the parallel-machine scheduling problem with release dates and submodular rejection penalties. We are given m identical parallel machines and n jobs; each job has a processing time and a release date. A job is either rejected, in which case a rejection penalty has to be paid, or accepted and processed on one of the m machines. The objective is to minimize the sum of the makespan of the accepted jobs and the rejection penalty of the rejected jobs, where the penalty is determined by a submodular function. Our main contribution is a 2-approximation algorithm based on the primal-dual framework.
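As a concrete illustration of the objective (a sketch only, not the paper's primal-dual algorithm), the following Python snippet evaluates an accept/reject decision in the modular special case where the submodular penalty is a plain sum, placing accepted jobs by list scheduling in release-date order:

```python
import heapq

def makespan_with_release_dates(jobs, m):
    """List-schedule accepted jobs (processing_time, release_date) on m
    identical machines and return the makespan. Illustrative only."""
    machines = [0.0] * m          # earliest free time of each machine
    heapq.heapify(machines)
    for p, r in sorted(jobs, key=lambda j: j[1]):   # by release date
        free = heapq.heappop(machines)
        start = max(free, r)
        heapq.heappush(machines, start + p)
    return max(machines) if jobs else 0.0

def objective(accepted, rejected_penalties, m):
    # Modular special case: the submodular penalty degenerates to a sum.
    return makespan_with_release_dates(accepted, m) + sum(rejected_penalties)

# Example: accept two jobs, reject one job with penalty 3.
print(objective([(4, 0), (2, 1)], [3], m=2))   # -> 7.0
```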


2.
Given a complete binary tree of height h, the online tree node assignment problem is to serve a sequence of assignment/release requests, where an assignment request, with an integer parameter 0 ≤ i ≤ h, is served by assigning a (tree) node of level (or height) i, and a release request is served by releasing a specified assigned node. The node assignments have to guarantee that no node is assigned to two unreleased assignment requests, and that every leaf-to-root path of the tree contains at most one assigned node. With reassignment of assigned nodes allowed, the goal is to minimize the number of assignments/reassignments, i.e., the cost, needed to serve the whole sequence of requests. This online tree node assignment problem is fundamental to many applications, including OVSF code assignment in WCDMA networks, buddy memory allocation and hypercube subcube allocation.
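The feasibility invariant translates directly into code. Below is a minimal sketch, assuming the standard heap indexing of a complete binary tree (root = 1): requiring at most one assigned node on every leaf-to-root path is equivalent to requiring that no assigned node be an ancestor of another.

```python
def is_ancestor(u, v):
    """Heap-indexed complete binary tree (root = 1): u is an ancestor of
    v (or equal to v) iff shifting v up to u's depth yields u."""
    du, dv = u.bit_length(), v.bit_length()
    return dv >= du and (v >> (dv - du)) == u

def valid_assignment(assigned):
    """Every leaf-to-root path contains at most one assigned node, i.e.
    no assigned node is an ancestor of another assigned node."""
    nodes = sorted(assigned)
    return not any(is_ancestor(u, v)
                   for i, u in enumerate(nodes)
                   for v in nodes[i + 1:])

print(valid_assignment({4, 5, 3}))   # True: pairwise disjoint subtrees
print(valid_assignment({2, 4}))      # False: node 2 is an ancestor of 4
```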

3.
Power assignment for wireless ad hoc networks is the problem of assigning a power to each wireless node such that the induced communication graph has some required properties. Recent research efforts have focused on finding the minimum power assignment that guarantees the connectivity or fault tolerance of the network. In this paper, we study a new problem: finding a power assignment such that the induced communication graph is a spanner of the original communication graph, i.e., the graph obtained when all nodes transmit at maximum power. Here, a spanner means that the length of the shortest path in the induced communication graph is at most a constant times the length of the shortest path in the original communication graph. A polynomial-time algorithm is given that minimizes the maximum assigned power subject to the spanner property. The algorithm also works for any other property that is monotone and can be tested in polynomial time. We then give a polynomial-time approximation method to minimize the total transmission radius of all nodes. Finally, we propose two heuristics and conduct extensive simulations to study their performance when the aim is to minimize the total assigned power of all nodes. The author is partially supported by NSF CCR-0311174.
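Because the tested property is monotone in the power level, minimizing the maximum assigned power reduces to a binary search over the sorted candidate levels (typically the pairwise node distances). A minimal sketch; the property test itself, e.g. a shortest-path comparison against the maximum-power graph, is left abstract:

```python
def min_max_power(candidates, has_property):
    """Smallest power level in ascending `candidates` for which the
    monotone predicate `has_property` holds (e.g. 'induced graph is a
    t-spanner'). Monotonicity: once the property holds at some power,
    it keeps holding at every higher power."""
    lo, hi = 0, len(candidates) - 1
    assert has_property(candidates[hi]), "property must hold at max power"
    while lo < hi:
        mid = (lo + hi) // 2
        if has_property(candidates[mid]):
            hi = mid          # property holds: try a lower power
        else:
            lo = mid + 1      # property fails: need more power
    return candidates[lo]

# Toy example: property holds from power level 4 upward.
print(min_max_power([1, 2, 4, 8], lambda p: p >= 4))   # -> 4
```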

4.
n/m shop scheduling is an 'NP-hard' problem. Using conventional heuristic algorithms (priority rules) alone, it is almost impossible to achieve an optimal solution. Research has been carried out to improve the heuristic algorithms to give a near-optimal solution. This paper advocates a fuzzy logic based, dynamic scheduling algorithm aimed at achieving this goal. The concept of new membership functions is discussed in the algorithm as a link connecting several priority rules. The constraints determining the membership function of jobs for a particular priority rule are established, and three membership functions are developed. In order to decide the weight vector of the priority rules, an aggregate performance measure is suggested. The methodology for constructing the weight vector is discussed in detail. Experiments have been carried out using a simulation technique to validate the proposed scheduling algorithm.
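For intuition, here is a minimal sketch of the aggregation idea with hypothetical linear membership functions for two common priority rules (SPT and EDD) combined through a weight vector; the paper's actual membership functions and weighting methodology differ:

```python
def normalize(values, higher_is_better=True):
    """Map raw rule scores to [0, 1] memberships (a hypothetical linear
    form; the paper derives its own membership functions)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    scaled = [(v - lo) / (hi - lo) for v in values]
    return scaled if higher_is_better else [1 - s for s in scaled]

def pick_next_job(jobs, weights):
    """jobs: dicts with 'proc' and 'due'. Combine SPT and EDD memberships
    with a weight vector and dispatch the highest-scoring job."""
    spt = normalize([j["proc"] for j in jobs], higher_is_better=False)
    edd = normalize([j["due"] for j in jobs], higher_is_better=False)
    scores = [weights[0] * s + weights[1] * e for s, e in zip(spt, edd)]
    return max(range(len(jobs)), key=scores.__getitem__)

jobs = [{"proc": 5, "due": 20}, {"proc": 2, "due": 30}, {"proc": 9, "due": 12}]
print(pick_next_job(jobs, weights=(0.6, 0.4)))   # index of dispatched job
```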

5.
The multiple weighted hitting set problem is to find a subset of nodes in a hypergraph that hits every hyperedge in at least m nodes. We extend the problem to a notion of hypergraphs with so-called hypernodes and show that, for m=2, it remains fixed-parameter tractable (FPT) when parameterized by the number of hyperedges. This is accomplished by a nontrivial extension of the dynamic programming algorithm for hypergraphs. The algorithm might be interesting for certain assignment problems, but here we need it as a tool to solve another problem motivated by network analysis: a d-core of a graph is a subgraph in which every vertex has at least d neighbors. We give an FPT algorithm that computes a smallest 2-core including a given set of target vertices, where the number of targets is the parameter. This FPT result is best possible in the sense that no FPT algorithm for 3-cores can be expected.
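For small instances the problem can be checked by brute force, which makes the definition concrete (this exponential sketch is for illustration only and is not the paper's FPT dynamic program):

```python
from itertools import combinations

def min_weight_m_hitting_set(nodes, weights, hyperedges, m=2):
    """Exhaustive search (exponential; toy instances only): cheapest
    node subset hitting every hyperedge in at least m nodes."""
    best, best_w = None, float("inf")
    for size in range(len(nodes) + 1):
        for subset in combinations(nodes, size):
            s = set(subset)
            if all(len(s & e) >= m for e in hyperedges):
                w = sum(weights[v] for v in subset)
                if w < best_w:
                    best, best_w = s, w
    return best, best_w

edges = [{1, 2, 3}, {2, 3, 4}]
weights = {1: 1, 2: 1, 3: 1, 4: 1}
print(min_weight_m_hitting_set([1, 2, 3, 4], weights, edges, m=2))
# -> ({2, 3}, 2): nodes 2 and 3 hit both hyperedges twice
```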

6.
The optimization versions of the 3-Partitioning and Kernel 3-Partitioning problems are considered in this paper. For the objective of maximizing the minimum load of the m subsets, it is shown that the MODIFIED LPT algorithm has worst-case performance ratios of (3m − 1)/(4m − 2) and (2m − 1)/(3m − 2), respectively.
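For reference, plain LPT for the max-min objective is shown below; the MODIFIED LPT variant analysed in the paper refines this baseline:

```python
import heapq

def lpt_max_min(jobs, m):
    """Greedy LPT: sort jobs in nonincreasing order and always add the
    next job to the currently least-loaded subset; return the minimum
    load. Baseline sketch -- the paper analyses a MODIFIED LPT variant."""
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return min(loads)

print(lpt_max_min([9, 7, 6, 5, 4, 3], m=3))   # -> 11.0
```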

7.
This paper presents an improved algorithm for solving the sum of linear fractional functions (SOLF) problem in 1-D and 2-D. A key subproblem in our solution is the off-line ratio query (OLRQ) problem, which asks for the optimal values of a sequence of m linear fractional functions (called ratios), each ratio subject to a feasible domain defined by O(n) linear constraints. Based on some geometric properties and the parametric linear programming technique, we develop an algorithm that solves the OLRQ problem in O((m+n)log(m+n)) time. The OLRQ algorithm can be used to speed up every iteration of a known iterative SOLF algorithm, from O(m(m+n)) time to O((m+n)log(m+n)), in 1-D and 2-D. Implementation results of our improved 1-D and 2-D SOLF algorithms have shown that in most cases they outperform the commonly used approaches to the SOLF problem. We also apply our techniques to some problems in computational geometry and other areas, improving on previous results. This research was supported in part by the National Science Foundation under Grant CCR-9623585. The research of this author was supported in part by the National Science Foundation under Grant CCF-0430366. Grant-in-Aid of the Ministry of Science, Culture and Education of Japan, No. 10780274. The research of this author was supported in part by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research on Priority Areas.

8.
Preemptive Machine Covering on Parallel Machines
This paper investigates preemptive parallel machine scheduling to maximize the minimum machine completion time. We first show that the off-line version can be solved in O(mn) time for the general m-uniform-machine case. Then we study the on-line version. We show that any randomized on-line algorithm must have a competitive ratio of at least m for the m-uniform-machine case and at least ∑_{i=1}^{m} 1/i for the m-identical-machine case. Lastly, we focus on the two-uniform-machine case. We present an on-line deterministic algorithm whose competitive ratio matches the lower bound of the on-line problem for every machine speed ratio s ≥ 1. We further consider the case in which idle time may be introduced while assigning jobs, where the objective becomes maximizing the continuous period of time (starting from time zero) during which both machines are busy. We present an on-line deterministic algorithm whose competitive ratio matches the lower bound of the problem for every s ≥ 1. We show that randomization does not help.

9.
This paper considers the on-line problem of scheduling n independent jobs nonpreemptively on m > 1 identical parallel machines with the objective of maximizing the minimum machine completion time. It is assumed that the values of the processing times are unknown but that the order of the jobs by their processing times is known in advance. We are asked to decide the assignment of all jobs to machines at time zero by utilizing only this ordinal data rather than the actual magnitudes of the jobs. Algorithms for this problem are called ordinal algorithms. In this paper, we give lower bounds and ordinal algorithms. We first propose an algorithm, MIN, whose competitive ratio is within a constant factor of the lower bound ∑_{i=1}^{m} 1/i for any number of machines m; both bounds are on the order of ln m. Furthermore, for m = 3, we present an optimal algorithm.
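For illustration, a naive ordinal policy (not the paper's algorithm MIN) fixes a cyclic assignment of ranks to machines at time zero; the minimum load can only be evaluated once the true processing times are revealed:

```python
def ordinal_round_robin(ranks, m):
    """Assign jobs to machines using only their rank order (longest
    first, cyclically). A naive ordinal policy for illustration."""
    assignment = [[] for _ in range(m)]
    for pos, job in enumerate(ranks):        # ranks sorted longest-first
        assignment[pos % m].append(job)
    return assignment

def min_load(assignment, actual_times):
    """Minimum machine completion time once true times are revealed."""
    return min(sum(actual_times[j] for j in machine)
               for machine in assignment)

plan = ordinal_round_robin(ranks=[0, 1, 2, 3, 4], m=2)
print(plan, min_load(plan, actual_times=[8, 5, 4, 4, 2]))   # -> min load 9
```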

10.
This paper develops a regression limit theory for nonstationary panel data with large numbers of cross-section (n) and time-series (T) observations. The limit theory allows for both sequential limits, wherein T → ∞ followed by n → ∞, and joint limits, where T, n → ∞ simultaneously; the relationship between these multidimensional limits is explored. The panel structures considered allow for no time-series cointegration, heterogeneous cointegration, homogeneous cointegration, and near-homogeneous cointegration. The paper explores the existence of long-run average relations between integrated panel vectors when there is no individual time-series cointegration and when there is heterogeneous cointegration. These relations are parameterized in terms of the matrix regression coefficient of the long-run average covariance matrix. In the case of homogeneous and near-homogeneous cointegrating panels, a panel fully modified regression estimator is developed and studied. The limit theory enables us to test hypotheses about the long-run average parameters both within and between subgroups of the full population.

11.
The hierarchical model for load balancing on two machines
Following previous work, we consider the hierarchical load balancing model on two machines of possibly different speeds. We first focus on maximizing the minimum machine load and show that no competitive algorithm exists for this problem. We overcome this barrier in two ways, both related to previously known models. The first is fractional assignment, where each job can be split arbitrarily between the machines. The second is a semi-online model where the total size of the jobs is known in advance. We design algorithms with the best possible competitive ratios for both cases. Furthermore, we show that combining the two models leads to the existence of an optimal algorithm (i.e., an algorithm with competitive ratio 1). This algorithm is clearly optimal for the makespan minimization problem as well. For the latter problem, we consider the fractional assignment model and design an algorithm with the best possible competitive ratio for it. This work was submitted as the M.Sc. thesis of the first author.
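A minimal sketch of the fractional-assignment model under a simple illustrative water-filling rule: grade-1 jobs are forced onto the capable machine, while grade-2 jobs are split so as to keep the loads as equal as possible. This is not the paper's optimal algorithm, only a way to see the model in code:

```python
def assign_fractional(jobs):
    """Hierarchical two-machine model (equal speeds assumed for
    simplicity): grade-1 jobs run only on machine 0, grade-2 jobs may
    be split arbitrarily. Illustrative water-filling rule."""
    loads = [0.0, 0.0]
    for size, grade in jobs:
        if grade == 1:
            loads[0] += size          # no choice for grade-1 jobs
            continue
        # Pour into the lower machine until loads equalize, then split.
        gap = abs(loads[0] - loads[1])
        low = 0 if loads[0] <= loads[1] else 1
        fill = min(size, gap)
        loads[low] += fill
        loads[0] += (size - fill) / 2
        loads[1] += (size - fill) / 2
    return loads

print(assign_fractional([(4, 1), (6, 2), (2, 2)]))   # -> [6.0, 6.0]
```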

12.
In a recent appellate court decision, a practitioner was granted the right to challenge a claims denial under provisions of the Employee Retirement Income Security Act of 1974. This article reviews the reasoning of the court in allowing the practitioner standing in the court under ERISA and reviews the language of the insurance contract that led the court to affirm denial of payment to the practitioner. "Health Law" is a regular feature of Physician Executive contributed by Epstein Becker & Green. Mark E. Lutes of the law firm's Washington, D.C., offices serves as editor for the column.

13.
This paper describes an experimentation methodology to measure how demand varies with price, and the results of its application at a toy retailer. The same product is assigned different price points in different store panels, and the resulting sales are used to estimate a demand curve. We use a variant of the k-median problem to form store panels that control for differences between stores and produce results that are representative of the entire chain. We use the estimated demand curve to find a price that maximizes profit. Our experiment yielded the unexpected result that demand increases with price in some cases. We present likely reasons for this finding from our discussions with retail managers. Our methodology can be used to analyze the effect of several marketing and promotional levers employed in a retail store besides pricing.
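A minimal sketch of the pricing step, assuming a fitted linear demand curve d(p) = a − b·p and a constant unit cost c, in which case the profit (p − c)·d(p) is maximized at p* = (a + b·c)/(2b); the k-median panel construction is omitted:

```python
import numpy as np

def optimal_price(prices, sales, unit_cost):
    """Fit a linear demand curve d(p) = a - b*p to experimental price
    points, then maximize profit (p - c) * d(p). With b > 0 the
    profit-maximizing price is p* = (a + b*c) / (2*b)."""
    slope, a = np.polyfit(prices, sales, 1)   # slope = -b, intercept = a
    b = -slope
    if b <= 0:
        # The paper's surprising case: demand rising with price;
        # the closed form below no longer applies.
        return None
    return (a + b * unit_cost) / (2 * b)

print(optimal_price(prices=[10, 12, 14], sales=[100, 90, 80], unit_cost=6))
# a = 150, b = 5, so p* = (150 + 30) / 10 = 18.0
```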

14.
Purpose: This paper investigates the selection, design and implementation of a Drum-Buffer-Rope (DBR) type of production pull-system in a panel fabrication plant characterised by extensive shared, batch resources within a low-volume UK manufacturer of large vehicles. This was the second of a series of two related research projects conducted under the aegis of a Lean initiative at this case firm.

Design/methodology/approach: A purposively selected longitudinal case study conducted over 24 months and organised around a two-phase research design. The initial body of evidence included a detailed map constructed by a project team of eight managers and accountants during a two-day structured workshop; numerous unstructured interviews and observation of shop floor practices; document and archival analysis; and 140 photographs of the focal operation. This was supplemented by extensive financial and operational data extracted from the firm's accounting and MRP systems, including all data necessary to construct and implement bespoke capacity planning, work-in-progress (WIP) monitoring and simulation modelling tools. The case firm is anonymised.

Findings: The Lean manufacturing literature ignores the real-world issue of shared resources, a gap attributable to the concept of 'rightsizing' tools and equipment that is widely promoted within the Lean community. The case panel plant is characterised by extensive shared resources, many of which are also batch processes. The most appropriate pull-system method for this production environment is DBR. The detailed design of the DBR mechanism required a controlled transfer buffer of overhead conveyance capacity after the Drum, because the extent of downstream process variability risked leaving the Drum unable to offload panels and hence compromising throughput.

Research limitations/implications: The study is based upon a single case, which consequently limits the ability to generalise from the results.

Practical Implications: When the DBR pull-system design was implemented, it reduced the number of panels in WIP by 60%. This equated to a 56% (18 days' worth) reduction in manufacturing lead time and more than doubled the plant's inventory turns (from 9.1 to 21.2). It also significantly improved delivery schedule adherence, with downstream jig stoppages in Final Assembly falling from an average of six per week to less than one. The financial benefit was independently audited at an annualised value of $850K. Consequently, this project was awarded first prize at its parent enterprise's annual worldwide process improvement competition.

Originality/value: This paper details a novel technique that permits the routings of multiple value streams to be mapped, which is useful for highlighting the identity and location of shared resources. It also contributes significantly to the literature on the relationship between the Lean paradigm and the management of shared production resources, and adds to the literature on the detailed design and implementation of a DBR pull-system in a jobbing-type environment.

15.
We study the scheduling of multiple tasks under varying processing costs and derive a priority rule for optimal scheduling policies. Each task has a due date, and a non-completion penalty cost is incurred if the task is not completely processed before its due date. We assume that the task arrival process is stochastic and the processing rate is capacitated. Our work is motivated by both traditional and emerging application domains, such as the construction industry and the freelance consulting industry. We establish the optimality of the Shorter Slack time and Longer remaining Processing time (SSLP) principle, which determines the priority among active tasks. Based on the derived structural properties, we also propose an effective cost-balancing heuristic policy and demonstrate its efficacy through extensive numerical experiments. We believe our results provide operators and managers with valuable insights into how to devise effective service scheduling policies under varying costs.
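The SSLP rule translates directly into a sort key: ascending slack first, longer remaining processing time as the tie-breaker. A minimal sketch with illustrative task fields:

```python
def sslp_order(tasks, now):
    """Order active tasks by the SSLP rule from the abstract: Shorter
    Slack time first and, among equal slacks, Longer remaining
    Processing time first. tasks: (task_id, due_date, remaining_time)."""
    def key(task):
        _, due, remaining = task
        slack = due - now - remaining    # time to spare before the due date
        return (slack, -remaining)       # negate: longer remaining first
    return sorted(tasks, key=key)

tasks = [("a", 10, 3), ("b", 9, 4), ("c", 12, 7)]
print([t[0] for t in sslp_order(tasks, now=0)])   # -> ['c', 'b', 'a']
```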

16.
In this paper, we study the circular packing problem. Its objective is to pack a set of n circular pieces into a rectangular plate R of fixed dimensions L×W. Each piece type i, i=1,…,m, is characterized by its radius r_i and its demand b_i. The objective is to determine the packing pattern that minimizes the unused area of R for the circular pieces placed. This problem is solved using a hybrid algorithm that combines beam search with a looking-ahead strategy. A node at a given level of the beam-search tree contains a partial solution corresponding to the circles already placed inside R. Each node is then evaluated using a looking-ahead strategy, based on the minimum local-distance heuristic, by computing the corresponding complete solution. The nodes leading to the best solutions at each level are then chosen for branching. A multi-start strategy is also employed in order to diversify the search. The computational results on a set of benchmark instances show the effectiveness of the proposed algorithm.
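A generic beam-search skeleton matching this scheme is sketched below; the packing-specific expansion (placing the next circle by the minimum local-distance heuristic) and the looking-ahead evaluation are left as abstract callables and are assumptions here:

```python
def beam_search(root, expand, evaluate, width):
    """Generic beam search: expand every kept node, score each child
    with a looking-ahead evaluation (score of the completed solution
    reachable from the child), and keep only the `width` best per
    level. `expand` must return [] for complete packings so the loop
    terminates."""
    beam, best, best_score = [root], None, float("-inf")
    while beam:
        children = [c for node in beam for c in expand(node)]
        for c in children:
            score = evaluate(c)
            if score > best_score:
                best, best_score = c, score
        beam = sorted(children, key=evaluate, reverse=True)[:width]
    return best
```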

17.
We study the extremal parameter N(n,m,H), which is the largest number of copies of a hypergraph H that can be formed from at most n vertices and m edges. Generalizing previous work of Alon (Isr. J. Math. 38:116–130, 1981), Friedgut and Kahn (Isr. J. Math. 105:251–256, 1998) and Janson, Oleszkiewicz and the third author (Isr. J. Math. 142:61–92, 2004), we obtain an asymptotic formula for N(n,m,H) which is strongly related to the solution α_q(H) of a linear programming problem, called here the fractional q-independence number of H. We observe that α_q(H) is a piecewise linear function of q and determine it explicitly for some ranges of q and some classes of H. As an application, we derive exponential bounds on the upper tail of the distribution of the number of copies of H in a random hypergraph.

18.
This paper solves the problem of increasing the edge-connectivity of a bipartite digraph by adding the smallest number of new edges that preserve bipartiteness. A natural application arises when we wish to reinforce a 2-dimensional square grid framework with cables. We actually solve the more general problem of covering a crossing family of sets with the smallest number of directed edges, where each new edge must join the blocks of a given bipartition of the elements. The smallest number of new edges is given by a min-max formula that has six infinite families of exceptional cases. We discuss a problem on network flows whose solution has a similar formula with three infinite families of exceptional cases. We also discuss a problem on arborescences whose solution has five infinite families of exceptions. We give an algorithm that increases the edge-connectivity of a bipartite digraph in the same time as the best-known algorithm for the problem without the bipartite constraint: O(km log n) for unweighted digraphs and O(nm log(n²/m)) for weighted digraphs, where n, m and k are the number of vertices, the number of edges, and the target connectivity, respectively.

19.
Connected dominating sets (CDS) that serve as a virtual backbone are now widely used to facilitate routing in wireless networks. A k-connected m-dominating set (kmCDS) is necessary for fault tolerance and routing flexibility. In order to construct a kmCDS of minimum size, some approximation algorithms have been proposed in the literature. However, the proposed algorithms either consider only special cases (such as k = 1, 2, or k ≤ m), are not easy to implement, or cannot provide a performance ratio. In this paper, we propose a centralized heuristic algorithm, CSAA, which is easy to implement, and two distributed algorithms, DDA and DPA, which are deterministic and probabilistic methods, respectively, to construct a kmCDS for general k and m. Theoretical analysis and simulation results indicate that our algorithms are efficient and effective.
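Verifying that a candidate node set is indeed a kmCDS is straightforward; a minimal sketch using networkx (the construction algorithms CSAA, DDA and DPA are not reproduced here):

```python
import networkx as nx

def is_kmcds(G, S, k, m):
    """Check whether node set S is a k-connected m-dominating set of G:
    (1) every node outside S has at least m neighbors in S, and
    (2) the subgraph induced by S is k-vertex-connected."""
    S = set(S)
    m_dominating = all(len(S & set(G[v])) >= m for v in G if v not in S)
    k_connected = nx.node_connectivity(G.subgraph(S)) >= k
    return m_dominating and k_connected

G = nx.cycle_graph(6)                      # ring 0-1-2-3-4-5-0
print(is_kmcds(G, {0, 1, 2, 3, 4}, k=1, m=1))   # -> True
```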

20.
In this paper we develop some econometric theory for factor models of large dimensions. The focus is the determination of the number of factors (r), which is an unresolved issue in the rapidly growing literature on multifactor models. We first establish the convergence rate for the factor estimates that allows for consistent estimation of r. We then propose some panel criteria and show that the number of factors can be consistently estimated using these criteria. The theory is developed under a framework of large cross-sections (N) and large time dimensions (T), with no restriction imposed on the relation between N and T. Simulations show that the proposed criteria have good finite-sample properties in many configurations of panel data encountered in practice.
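A sketch of one panel criterion of the penalized least-squares form used in this literature, with factors estimated by principal components; the exact penalty term is an assumption corresponding to one common variant:

```python
import numpy as np

def estimate_num_factors(X, r_max):
    """Estimate the number of factors by minimizing a penalized
    criterion of the form IC(r) = log(V(r)) + r*((N+T)/(N*T))*log(min(N,T)),
    where V(r) is the mean squared residual after removing r
    principal-component factors. One common variant, sketched here."""
    T, N = X.shape
    Xc = X - X.mean(axis=0)                       # demean the panel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    penalty = (N + T) / (N * T) * np.log(min(N, T))
    best_r, best_ic = 0, np.inf
    for r in range(1, r_max + 1):
        resid = Xc - U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
        ic = np.log(np.mean(resid ** 2)) + r * penalty
        if ic < best_ic:
            best_r, best_ic = r, ic
    return best_r

# Toy panel: T=200 periods, N=50 series, 3 true factors plus noise.
rng = np.random.default_rng(0)
F = rng.normal(size=(200, 3))
L = rng.normal(size=(3, 50))
X = F @ L + 0.5 * rng.normal(size=(200, 50))
print(estimate_num_factors(X, r_max=8))   # expected: 3
```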
