Similar Documents
 20 similar documents found (search time: 31 ms)
1.
The relative worst order ratio is a measure for the quality of online algorithms. Unlike the competitive ratio, it compares algorithms directly without involving an optimal offline algorithm. The measure has been successfully applied to problems like paging and bin packing. In this paper, we apply it to machine scheduling. We show that for preemptive scheduling, the measure separates multiple pairs of algorithms which have the same competitive ratios; with the relative worst order ratio, the algorithm which is “intuitively better” is also provably better. Moreover, we show one such example for non-preemptive scheduling.  相似文献   
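As a rough illustration of the idea (not the paper's analysis), the following Python sketch estimates a relative worst order ratio for a makespan-minimization setting by brute force: each algorithm is charged its worst cost over all orderings of an instance, and the ratio of these worst-order costs is maximized over sample instances. The list-scheduling routine is a hypothetical stand-in for the algorithms compared in the paper.

```python
from itertools import permutations

def worst_order_cost(algorithm, jobs):
    """Worst cost of an online algorithm over all orderings of the same job set
    (brute force; only feasible for small instances)."""
    return max(algorithm(list(order)) for order in permutations(jobs))

def list_scheduling_makespan(jobs, m=2):
    """Stand-in online algorithm: Graham's list scheduling on m machines,
    assigning each job to the least loaded machine; returns the makespan."""
    loads = [0.0] * m
    for p in jobs:
        i = min(range(m), key=lambda k: loads[k])
        loads[i] += p
    return max(loads)

def relative_worst_order_ratio(alg_a, alg_b, instances):
    """Empirical estimate: largest ratio of worst-order costs of A to B
    over a set of sample instances (for minimization problems)."""
    return max(worst_order_cost(alg_a, jobs) / worst_order_cost(alg_b, jobs)
               for jobs in instances)
```

Only tiny instances are feasible here, since all permutations of each input are enumerated.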

2.
“Sequence set” is a mathematical model used in many applications such as biological sequence analysis and text processing. However, a “single” sequence set model is not appropriate for rapidly increasing problem sizes; for example, very large genome sequences must be separated and processed chunk by chunk. For these applications, the underlying mathematical model is “Multiple Sequence Sets” (MSS). To process multiple sequence sets, sequences are distributed to different sets and then the sequences in each set are processed in parallel. Deriving effective algorithms for MSS processing is challenging.  相似文献   

3.
Hybrid Evolutionary Algorithms for Graph Coloring
A recent and very promising approach for combinatorial optimization is to embed local search into the framework of evolutionary algorithms. In this paper, we present such hybrid algorithms for the graph coloring problem. These algorithms combine a new class of highly specialized crossover operators and a well-known tabu search algorithm. Experiments with such a hybrid algorithm are carried out on large DIMACS Challenge benchmark graphs. The results are very competitive with, and even better than, those of state-of-the-art algorithms. Analysis of the algorithm's behavior sheds light on directions for further improvement.  相似文献   
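The following is a minimal Python sketch of the hybrid scheme described above, assuming a graph given as an edge list on vertices 0..n-1 and a fixed number of colors k; the partition-style crossover and the cheap min-conflicts routine are simplified stand-ins for the paper's specialized crossover operators and tabu search, not reproductions of them.

```python
import random

def conflicts(coloring, edges):
    """Number of edges whose endpoints share a color."""
    return sum(coloring[u] == coloring[v] for u, v in edges)

def partition_crossover(p1, p2, k, n):
    """Simplified greedy partition-style crossover: alternately copy the
    largest remaining color class from each parent, then fill the rest randomly."""
    child, unassigned = {}, set(range(n))
    parents = [p1, p2]
    for c in range(k):
        parent = parents[c % 2]
        classes = {}
        for v in unassigned:
            classes.setdefault(parent[v], []).append(v)
        if classes:
            biggest = max(classes.values(), key=len)
            for v in biggest:
                child[v] = c
            unassigned -= set(biggest)
    for v in unassigned:
        child[v] = random.randrange(k)
    return child

def local_search(coloring, edges, k, steps=200):
    """Cheap min-conflicts improvement used here as a stand-in for tabu search."""
    best = dict(coloring)
    for _ in range(steps):
        conflicted = [v for u, v in edges if best[u] == best[v]]
        if not conflicted:
            break
        v = random.choice(conflicted)
        best[v] = min(range(k), key=lambda c: conflicts({**best, v: c}, edges))
    return best

def hybrid_ea(edges, n, k, pop_size=10, generations=100):
    """Hybrid evolutionary loop: crossover + local search, steady-state replacement."""
    pop = [{v: random.randrange(k) for v in range(n)} for _ in range(pop_size)]
    for _ in range(generations):
        p1, p2 = random.sample(pop, 2)
        child = local_search(partition_crossover(p1, p2, k, n), edges, k)
        worst = max(range(pop_size), key=lambda i: conflicts(pop[i], edges))
        if conflicts(child, edges) <= conflicts(pop[worst], edges):
            pop[worst] = child
    return min(pop, key=lambda s: conflicts(s, edges))
```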

4.
We study the so-called Generalized Median graph problem where the task is to construct a prototype (i.e., a ‘model’) from an input set of graphs. While our primary motivation comes from an important biological imaging application, the problem effectively captures many vision (e.g., object recognition) and learning problems, where graphs are increasingly being adopted as a powerful representation tool. Existing techniques for this problem are evolutionary search based; in this paper, we propose a polynomial time algorithm based on a linear programming formulation. We propose an additional algorithm based on a bi-level method to obtain solutions arbitrarily close to the optimal in (worst case) non-polynomial time. Within this new framework, one can optimize edit distance functions that capture similarity by considering vertex labels as well as the graph structure simultaneously. We first discuss experimental evaluations in the context of molecular image analysis problems; the methods will provide the basis for building a topological map of all 23 pairs of human chromosomes. Later, we include (a) applications to other biomedical problems and (b) evaluations on a public pattern recognition graph database. This work was supported by NSF grants CCF-0546509, IIS-0713489, and NIH grant GM 072131-23. The second author was also supported in part by the Department of Biostatistics and Medical Informatics, UW-Madison and UW ICTR, funded through an NIH Clinical and Translational Science Award (CTSA), grant number 1 UL1 RR025011.  相似文献   

5.
The linear sum assignment problem has been well studied in combinatorial optimization. Because of the integrality property, it is a linear programming problem with a variety of efficient algorithms to solve it. In this paper, we present a reformulation of the linear sum assignment problem and a Lagrangian relaxation algorithm for this reformulation. An important characteristic of the new Lagrangian relaxation method is that the optimal Lagrangian multiplier yields a critical bottleneck value. The relaxation has only one Lagrangian multiplier, which can take on only a limited number of values, making the search for the optimal multiplier easy. The optimal Lagrangian multiplier can be interpreted as the price that must be paid so that all objects in the problem are assigned.  相似文献   

6.
Greedy algorithms are simple, but their relative power is not well understood. The priority framework (Borodin et al. in Algorithmica 37:295–326, 2003) captures a key notion of “greediness” in the sense that it processes (in some locally optimal manner) one data item at a time, depending on, and only on, the current knowledge of the input. This algorithmic model provides a tool to assess the computational power and limitations of greedy algorithms, especially in terms of their approximability. In this paper, we study priority algorithm approximation ratios for the Subset-Sum Problem, focusing on the power of revocable decisions, in which accepted data items can later be rejected to maintain the feasibility of the solution. We first provide a tight bound of α≈0.657 for irrevocable priority algorithms. We then show that the approximation ratio of fixed order revocable priority algorithms is between β≈0.780 and γ≈0.852, and the ratio of adaptive order revocable priority algorithms is between 0.8 and δ≈0.893. A preliminary version of this paper appeared in the Proceedings of COCOON 2007, LNCS 4598, pp. 504–514.  相似文献   
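To illustrate what a revocable decision means here, the sketch below is a hypothetical fixed-order revocable greedy for Subset-Sum (maximize the packed total without exceeding capacity): an incoming item may evict previously accepted items when the swap yields a net gain. It illustrates the mechanism only and is not claimed to achieve the ratios quoted above.

```python
def revocable_greedy_subset_sum(items, capacity):
    """Illustrative revocable greedy: accept items in the given order, but allow
    previously accepted items to be dropped if that lets a new item in with a
    net gain in packed total."""
    accepted = []
    total = 0
    for x in items:
        if x > capacity:
            continue
        if total + x <= capacity:          # item simply fits: accept it
            accepted.append(x)
            total += x
            continue
        # revocation step: try dropping the smallest accepted items to make room
        trial = sorted(accepted)
        dropped = 0
        while trial and total - dropped + x > capacity:
            dropped += trial.pop(0)
        if total - dropped + x > total:    # commit only if the swap is a net gain
            accepted = trial + [x]
            total = total - dropped + x
    return accepted, total
```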

7.
A combinatorial optimization problem, called the Bandpass Problem, is introduced. Given a rectangular matrix A of binary elements {0,1} and a positive integer B called the Bandpass Number, a set of B consecutive non-zero elements in any column is called a Bandpass. No two bandpasses in the same column can have common rows. The Bandpass problem consists of finding an optimal permutation of rows of the matrix which produces the maximum total number of bandpasses having the same given bandpass number in all columns. This combinatorial problem arises when packing information flows on different wavelengths into groups so as to obtain the highest available cost reduction in designing and operating optical communication networks that use wavelength division multiplexing technology. Integer programming models of two versions of the Bandpass problem are developed. For a matrix A with three or more columns, the Bandpass problem is proved to be NP-hard. For matrices with one or two columns, a polynomial algorithm solving the problem to optimality is presented. For the general case, fast heuristic polynomial algorithms are presented, which provide near-optimal solutions acceptable for applications. The high quality of the generated heuristic solutions has been confirmed in extensive computational experiments. As an NP-hard combinatorial optimization problem with important applications, the Bandpass problem offers a challenge for researchers to develop efficient computational solution methods. To encourage further research, a Library of Bandpass Problems has been developed. The Library is open to the public and consists of 90 problems of different sizes (numbers of rows and columns, density of non-zero elements of matrix A, and bandpass number B), half of them with known optimal solutions and the other half without.  相似文献   
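The objective itself is easy to state in code. The sketch below (assuming a 0/1 matrix stored row-major and a permutation given as a list of row indices) counts the bandpasses obtained in every column for a given row order, which is the evaluation any heuristic for the problem would need.

```python
def count_bandpasses(matrix, row_order, bandpass_number):
    """Objective value of the Bandpass Problem for a given row permutation:
    in every column, count disjoint runs of `bandpass_number` consecutive ones."""
    total = 0
    n_cols = len(matrix[0])
    for col in range(n_cols):
        run = 0
        for r in row_order:
            if matrix[r][col] == 1:
                run += 1
                if run == bandpass_number:   # one bandpass completed; its rows cannot be reused
                    total += 1
                    run = 0
            else:
                run = 0
    return total
```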

8.
We study a novel “coverage by directional sensors” problem with tunable orientations on a set of discrete targets. We propose a Maximum Coverage with Minimum Sensors (MCMS) problem in which coverage in terms of the number of targets to be covered is maximized whereas the number of sensors to be activated is minimized. We present its exact Integer Linear Programming (ILP) formulation and an approximate (but computationally efficient) centralized greedy algorithm (CGA) solution. These centralized solutions are used as baselines for comparison. Then we provide a distributed greedy algorithm (DGA) solution. By incorporating a measure of the sensors residual energy into DGA, we further develop a Sensing Neighborhood Cooperative Sleeping (SNCS) protocol which performs adaptive scheduling on a larger time scale. Finally, we evaluate the properties of the proposed solutions and protocols in terms of providing coverage and maximizing network lifetime through extensive simulations. Moreover, for the case of circular coverage, we compare against the best known existing coverage algorithm.  相似文献   
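A minimal sketch of a centralized greedy along the lines of CGA is given below, assuming each sensor offers a discrete set of orientations and that `coverage[sensor][orientation]` holds the set of targets covered by that choice; the ILP formulation and the distributed DGA/SNCS variants are not reproduced here, and the data layout is an assumption for illustration.

```python
def centralized_greedy_mcms(coverage, targets):
    """Greedy sketch for Maximum Coverage with Minimum Sensors: repeatedly
    activate the (sensor, orientation) pair that covers the most still-uncovered
    targets, until no pair adds coverage."""
    uncovered = set(targets)
    active = {}                      # sensor -> chosen orientation
    while uncovered:
        best, best_gain = None, 0
        for sensor, orientations in coverage.items():
            if sensor in active:
                continue             # each sensor gets at most one orientation
            for orientation, covered in orientations.items():
                gain = len(covered & uncovered)
                if gain > best_gain:
                    best, best_gain = (sensor, orientation), gain
        if best is None:
            break                    # remaining targets cannot be covered
        sensor, orientation = best
        active[sensor] = orientation
        uncovered -= coverage[sensor][orientation]
    return active, uncovered
```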

9.
To improve the efficiency of matching freight supply and demand, a mathematical model of vehicle-cargo supply-demand matching is established, describing the objective and the relevant constraints of the vehicle-cargo matching problem. A quantum evolutionary algorithm is designed and improved to solve this problem: a constraint-penalized fitness decay method is proposed to address the selection of the best quantum individual when the quantum population contains no strongly feasible solution in its early stage, and a quantum population maturity measure is introduced to improve the algorithm's termination mechanism. In the experiments, the improved quantum evolutionary algorithm is compared with a standard genetic evolutionary algorithm, and the algorithm parameters are tuned. The quantum evolutionary algorithm shows better convergence speed, accuracy, and stability, but the quantum population size exhibits a "bottleneck": larger populations do not noticeably improve optimization quality while requiring more computing time, and the quantum rotation angle increment is positively correlated with convergence speed and negatively correlated with global search ability. The results show that the improved quantum evolutionary algorithm can efficiently find good vehicle-cargo matching schemes and recommend reasonable supply-demand information to vehicle owners and cargo owners.  相似文献   
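For readers unfamiliar with quantum-inspired evolutionary algorithms, the sketch below shows the standard observation and rotation-gate steps on which such algorithms are built; the constraint-penalty fitness decay and the population-maturity exit criterion described above are specific to the paper and are not reproduced here. The fixed rotation increment and the function names are illustrative assumptions.

```python
import math
import random

def observe(qubits):
    """Collapse a quantum individual (list of (alpha, beta) amplitude pairs)
    into a binary solution; |beta|^2 is the probability of observing 1."""
    return [1 if random.random() < beta * beta else 0
            for _, beta in qubits]

def rotate(qubits, solution, best_solution, delta_theta=0.05):
    """Rotation-gate update: nudge each qubit's amplitudes toward the value
    taken by the best observed solution so far."""
    updated = []
    for (alpha, beta), bit, best_bit in zip(qubits, solution, best_solution):
        if bit == best_bit:
            theta = 0.0                      # already agrees with the best solution
        else:
            theta = delta_theta if best_bit == 1 else -delta_theta
        a = alpha * math.cos(theta) - beta * math.sin(theta)
        b = alpha * math.sin(theta) + beta * math.cos(theta)
        updated.append((a, b))
    return updated
```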

10.
In this paper, we consider a new visual cryptography scheme that allows for sharing of multiple secret images on graphs: we are given an arbitrary graph (V,E) where every node and every edge are assigned an arbitrary image. Images on the vertices are “public” and images on the edges are “secret”. The problem that we are considering is how to make a construction such that when the encoded images of two adjacent vertices are printed on transparencies and overlapped, the secret image corresponding to the edge is revealed. We define the most stringent security guarantees for this problem (perfect secrecy) and show a general construction for all graphs where the cost (in terms of pixel expansion and contrast of the images) is proportional to the chromatic number of the cube of the underlying graph. For the case of bounded degree graphs, this gives us constant-factor pixel expansion and contrast. This compares favorably to previous works, where pixel expansion and contrast are proportional to the number of images.  相似文献   

11.
We consider the problem of clustering binarized oligonucleotide fingerprints. Oligonucleotide fingerprinting is a powerful DNA array based method to characterize cDNA and rRNA libraries and has many applications including gene expression profiling and DNA clone classification. DNA clone classification is the main application for the problem considered in this paper. Most existing approaches to clustering use normalized real intensity values and thus do not treat positive and negative hybridization signals equally. This has been demonstrated in a series of recent publications, where a discrete approach, typically useful in the classification of microbial rRNA clones, has been proposed. In the discrete approach, hybridization intensities are normalized and thresholds are set such that a value of 1 represents hybridization, a value of 0 represents no hybridization, and an N represents unknown, also called a missing value. A combinatorial optimization problem is then formulated that attempts to cluster the fingerprints and resolve the missing values simultaneously. It has been observed that missing values cause much difficulty in clustering analysis, and most clustering methods are very sensitive to them. In this paper, we turn back to the traditional clustering problem, which has no missing values, with the revised goal of stabilizing the number of clusters and maintaining clustering quality. We adopt the binarizing scheme used in the discrete approach, as it has been shown to be useful for clone classification. We formulate the problem as another combinatorial optimization problem. The computational complexity of this new clustering problem and its relationships to the discrete approach and the traditional clustering problem are studied. We have designed an exact algorithm for the new clustering problem, which is an A* search algorithm for finding a minimum number of clusters. The experimental results on two commonly tested real datasets demonstrate that the A* search algorithm runs fast and performs better than some popular hierarchical clustering methods in terms of separating clones that have different characteristics with respect to the given oligonucleotide probes. Supported by NSERC and CFI. Supported by NSERC. Supported partially by NSERC, CFI, and NNSF Grant 60373012.  相似文献   

12.
The two-dimensional strip packing problem is a generalization of the classic one-dimensional bin packing problem. It has many important applications such as costume clipping, material cutting, real-world planning, packing, and scheduling. Average-case performance analysis of approximation algorithms attracts a lot of attention because it plays a crucial role in selecting an appropriate algorithm for a given application. While approximation algorithms for two-dimensional packing are frequently presented, the results of their average-case performance analyses have seldom been reported due to intractability. In this paper, we analyze the average-case performance of the Next Fit Decreasing Height (NFDH) algorithm, one of the first strip packing algorithms, proposed by Coffman, Jr. in 1980. We prove that the expected height of a packing produced by the NFDH algorithm, when the heights and widths of the rectangular items are independent and both uniformly distributed on (0, 1], is about n/3, where n is the number of items. We also validate the theoretical result with experiments. This work is supported by National 973 Fundamental Research Project of China on NP Complete Problems and High Performance Software (No. G1998030403).  相似文献   
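A compact sketch of NFDH itself, assuming a strip of width 1 and items given as (width, height) pairs, is shown below; the expected-height result quoted above could then be checked empirically by feeding it items with independent (0, 1]-uniform widths and heights.

```python
def nfdh_height(rects, strip_width=1.0):
    """Next Fit Decreasing Height: sort items by non-increasing height and pack
    them onto shelves; a new shelf opens whenever the next item does not fit in
    the remaining width of the current shelf. Returns the total packing height.
    Each rect is a (width, height) pair with width <= strip_width."""
    items = sorted(rects, key=lambda r: r[1], reverse=True)
    total_height = 0.0
    shelf_height = 0.0        # height of the first (tallest) item on the current shelf
    used_width = strip_width  # forces a new shelf for the first item
    for w, h in items:
        if used_width + w > strip_width:   # item does not fit: close the shelf
            total_height += shelf_height
            shelf_height = h
            used_width = w
        else:
            used_width += w
    return total_height + shelf_height
```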

13.
Circular blanks are often cut from silicon steel plates to make stators and rotors of electric motors. Shearing and punching processes are often used to cut the blanks: first a guillotine shear cuts the plate into strips, and then a stamping press punches the blanks from the strips. The unconstrained circle cutting problem discussed here is the problem of cutting from a rectangular plate a number of circular blanks of given diameters and values, so as to maximize the value of the blanks cut, where the shearing and punching processes are applied. Two algorithms based on dynamic programming are presented for generating cutting patterns: one is an exact algorithm and the other is a heuristic. The computational results indicate that the exact algorithm is adequate for small- and medium-scale problems, while the heuristic algorithm is much more time efficient and can generate cutting patterns close to optimal.  相似文献   
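A minimal sketch of the two-stage dynamic-programming idea (not the paper's exact algorithms) is shown below, assuming integer dimensions and unlimited copies of each blank: the inner DP values a single strip by a knapsack over its length, and the outer DP cuts the plate width into strips.

```python
def strip_value(length, blanks):
    """Best value of one strip of given integer length: an unbounded knapsack
    over the strip length, where a blank of diameter d occupies length d."""
    best = [0] * (length + 1)
    for x in range(1, length + 1):
        for d, value in blanks:
            if d <= x:
                best[x] = max(best[x], best[x - d] + value)
    return best[length]

def plate_value(plate_length, plate_width, blanks):
    """Two-stage DP for the shear-and-punch pattern: first the value of the best
    strip of every candidate width (only blanks with diameter <= width may be
    punched from it), then a 1-D DP cutting the plate width into strips."""
    strip_best = {}
    for w in range(1, plate_width + 1):
        usable = [(d, v) for d, v in blanks if d <= w]
        strip_best[w] = strip_value(plate_length, usable) if usable else 0
    best = [0] * (plate_width + 1)
    for x in range(1, plate_width + 1):
        for w in range(1, x + 1):
            best[x] = max(best[x], best[x - w] + strip_best[w])
    return best[plate_width]
```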

14.
There are theories on brain functionality that can only be tested in very large models. In this work, a simulation model appropriate for working with a large number of neurons was developed, and Information Theory measuring tools were designed to monitor the flow of information in such large networks. The model's simulator can handle up to one million neurons in its current implementation by using a discretized version of the Lapicque integrate-and-fire neuron instead of interacting differential equations. A modular structure facilitates the setting of parameters of the neurons, networks, and time, and, most importantly, architectural changes. Applications of this research are demonstrated by testing architectures in terms of mutual information. We present some preliminary architectural results showing that adding a virtual analogue to white matter, called “jumps”, to a simple representation of cortex results in: (1) an increase in the rate of mutual information flow, corresponding to the “bias” or “priming” hypothesis, thereby giving a possible explanation of the high-speed response to stimuli in complex networks; and (2) an increase in the stability of the response of the network, i.e. a system with “jumps” is a more reliable machine. This also has an effect on the potential speed of response.  相似文献   
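As an illustration of the discretized neuron model mentioned above, the sketch below performs one update step of a Lapicque-style leaky integrate-and-fire neuron; the time constant, threshold, and reset values are placeholder assumptions, not parameters taken from the paper.

```python
def lif_step(v, input_current, dt=1.0, tau=20.0, v_rest=0.0,
             v_threshold=1.0, v_reset=0.0):
    """One discretized update of a Lapicque (leaky integrate-and-fire) neuron:
    the membrane potential decays toward rest, integrates input, and emits a
    spike (returning the reset potential) when it crosses the threshold."""
    v = v + dt / tau * (v_rest - v) + input_current
    if v >= v_threshold:
        return v_reset, True     # spike emitted
    return v, False
```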

15.
A Modeling Study of the Problem of Determining the Optimal Combined Supply Pattern for Multiple Items
The B2B reverse-auction procurement mode in e-commerce can offer the buyer a wider range of products and services under price competition, making this procurement mode more flexible. This paper first proposes a reverse combinatorial auction mechanism for B2B e-commerce procurement contracts, then builds a mathematical model of the problem of determining the optimal combined supply pattern for multiple items in a reverse combinatorial auction, and designs a two-level optimization algorithm to solve the model. Finally, an application example illustrates the generality of the model and the effectiveness of the algorithm.  相似文献   

16.
In recent years accounting researchers have identified “political” lobbying as a problem for accounting standard setting. This paper presents a simple game-theoretic analysis of the political process to identify situations where companies have incentives to lobby the political principal instead of participating in the usual due process of accounting standard setting. Analysis of the model suggests that “political” lobbying is more likely to happen in the EU than in the US. Furthermore it is suggested that if the relevant standard setters wish to achieve harmonization of accounting standards between the EU and the US, European companies have more lobbying leverage than their American counterparts because there are more European veto players than American ones.  相似文献   

17.
We present a new Immune Algorithm, IMMALG, that incorporates a Stochastic Aging operator and a simple local search procedure to improve overall performance in tackling chromatic number problem (CNP) instances. We characterize the algorithm and set its parameters in terms of Kullback entropy. Experiments show that the proposed immune algorithm is very competitive with state-of-the-art evolutionary algorithms.  相似文献   

18.
The problem of optimal surface flattening in 3-D finds many applications in engineering and manufacturing. However, previous algorithms for this problem are all heuristics without any quality guarantee, and the computational complexity of the problem was not well understood. In this paper, we prove that the optimal surface flattening problem is NP-hard. Further, we show that the problem of flattening a topologically spherical surface admits a PTAS and can be solved by a (1+ε)-approximation algorithm in O(n log n) time for any constant ε>0, where n is the input size of the problem.  相似文献   

19.

In order to achieve efficient facility design for service-type activities operating under dynamic conditions and a large number of constraints, the use of a traditional approach has proved to be tedious and time consuming. Development of an efficient decision support system for such a situation calls for consideration of the complex interaction between the system parameters and the relationship between the working environment and the resources within the system. Mathematical programming techniques, e.g. linear and integer programming as well as queuing models, though useful in handling combinatorial optimization problems, are incapable of dealing with the stochastic utilization problems normally encountered in the design of facilities in a fast-changing environment. This paper makes use of a pattern search algorithm for the optimal allocation of service facility resources. The layout of the facilities is then optimized by the use of the CLASS algorithm. The two separate algorithms have been integrated into a single simulation-based system. The effectiveness of the proposed methodology has been demonstrated by means of a real case study pertaining to the design and layout optimization of a multi-functional gasoline service station in Bangkok.  相似文献   

20.
More than previous mild recessions, the current global crisis calls into question the merits of a model that held sway for almost three decades. Celebrated in the nineties in a “National Bestseller”, which even President Clinton considered a “blueprint” “for all officials to read”, the book Reinventing Government preached the reform of government in private-sector ways, in stark contrast to bureaucracy, which it considered “Bankrupt”. Its authors, Osborne and Gaebler (1993), promoted Debureaucratization, which they summed up as decentralization, deregulation, downsizing and outsourcing. It is time to revisit the assumptions of the Osborne-Gaebler model, which has demonstrably failed. More than the model itself, however, the ways of its promotion in the market of ideas invite a word of caution. It prompted simplistic distortions, which typically followed from superficial analyses of Weber's seminal opus (Timsit 2008:864–875). This paper reconsiders these two contrasting models, though not in “black” and “white”. Rather, the two are explored in dialectical terms, as outcomes of shifting ideologies, often taken to extremes.  相似文献   
