Similar Documents
20 similar documents found (search time: 406 ms)
1.
In 1984, Gabow and Tarjan provided a very elegant and fast algorithm for the following problem: given a matroid defined on a red and blue colored ground set, determine a basis of minimum cost among those with k red elements, or decide that no such basis exists. In this paper, we investigate extensions of this problem from ordinary matroids to the more general notion of poset matroids, which take precedence constraints on the ground set into account. We show that the problem on general poset matroids becomes NP-hard, already if the underlying partially ordered set (poset) consists of binary trees of height two. On the positive side, we present two algorithms: a pseudopolynomial one for integer polymatroids, i.e., the case where the poset consists of disjoint chains, and a polynomial-time algorithm for the problem of determining a minimum-cost ideal of size l with k red elements, i.e., the uniform rank-l poset matroid, on series-parallel posets.

2.
We consider the frugal coverage problem, an interesting variation of set cover defined as follows. Instances of the problem consist of a universe of elements and a collection of sets over these elements; the objective is to compute a subcollection of sets so that the number of elements it covers plus the number of sets not chosen is maximized. The problem was introduced and studied by Huang and Svitkina (Proceedings of the 29th IARCS annual conference on foundations of software technology and theoretical computer science (FSTTCS), pp. 227–238, 2009) due to its connections to the donation center location problem. We prove that the greedy algorithm has approximation ratio at least 0.782, improving the previous bound of 0.731 from the same work. We also present a further improvement that is obtained by adding a simple corrective phase at the end of the execution of the greedy algorithm; the approximation ratio achieved in this way is at least 0.806. Finally, we consider a packing-based algorithm that uses semi-local optimization, and show that its approximation ratio is not less than 0.872. Our analysis is based on linear programs which capture the behavior of the algorithms on worst-case examples. The obtained bounds are proved to be tight.
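To make the objective concrete, here is a minimal greedy sketch (an illustration under our own assumptions, not necessarily the exact greedy variant analyzed in the paper): a set is worth picking only while it newly covers at least two elements, since choosing it forfeits one point for being unchosen.

```python
def frugal_greedy(universe, collection):
    """Greedy heuristic for frugal coverage.

    Objective: #covered elements + #sets not chosen.
    """
    uncovered = set(universe)
    remaining = [frozenset(s) for s in collection]
    chosen = []
    while remaining:
        best = max(remaining, key=lambda s: len(s & uncovered))
        # A pick forfeits one "unchosen set" point, so (in this sketch)
        # it only pays off while it newly covers at least two elements.
        if len(best & uncovered) < 2:
            break
        chosen.append(best)
        remaining.remove(best)
        uncovered -= best
    value = (len(universe) - len(uncovered)) + (len(collection) - len(chosen))
    return chosen, value

universe = [1, 2, 3, 4]
collection = [[1, 2, 3], [3, 4], [4]]
print(frugal_greedy(universe, collection))  # picks {1,2,3}; value 3 + 2 = 5
```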

3.
We consider a framework for bi-objective network construction problems where one objective is to be maximized while the other is to be minimized. Given a host graph G=(V,E) with edge weights \(w_e\) and edge lengths \(\ell_e\) for each \(e\in E\), we define the density of a pattern subgraph \(H=(V',E')\subseteq G\) as the ratio \(\varrho(H)=\sum_{e\in E'} w_e / \sum_{e\in E'} \ell_e\). We consider the problem of computing a maximum density pattern H under various additional constraints. In doing so, we compute a single Pareto-optimal solution with the best weight per cost ratio subject to additional constraints further narrowing down feasible solutions for the underlying bi-objective network construction problem. First, we consider the problem of computing a maximum density pattern with weight at least W and length at most L in a host G. We call this problem the biconstrained density maximization problem. It can be interpreted in terms of maximizing the return on investment for network construction problems in the presence of a limited budget and a target profit. We consider this problem for different classes of hosts and patterns. We show that it is NP-hard, even if the host has treewidth 2 and the pattern is a path. However, it can be solved in pseudo-polynomial linear time if the host has bounded treewidth and the pattern is a graph from a given minor-closed family of graphs. We also present an FPTAS for a relaxation of the density maximization problem, in which we are allowed to violate the upper bound on the length at the cost of some penalty. Second, we consider the maximum density subgraph problem under structural constraints on the vertex set that is used by the patterns. While a maximum density perfect matching can be computed efficiently in general graphs, the maximum density Steiner-subgraph problem, which requires a subset of the vertices in any feasible solution, is NP-hard and unlikely to admit a constant-factor approximation. When parameterized by the number of vertices of the pattern, this problem is W[1]-hard in general graphs. On the other hand, it is FPT on planar graphs if there is no constraint on the pattern, and on general graphs if the pattern is a path.
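The following toy snippet just evaluates the density objective \(\varrho(H)\) for a given pattern; the edge keys and numbers are made up for illustration, and none of the paper's algorithms are implemented here.

```python
def density(pattern_edges, weight, length):
    """Density of a pattern given its edge set E':
    rho(H) = sum of edge weights / sum of edge lengths."""
    w = sum(weight[e] for e in pattern_edges)
    l = sum(length[e] for e in pattern_edges)
    return w / l

# Toy host: weights = profit per edge, lengths = construction cost per edge.
weight = {("a", "b"): 5, ("b", "c"): 2, ("a", "c"): 4}
length = {("a", "b"): 2, ("b", "c"): 1, ("a", "c"): 4}
print(density([("a", "b"), ("b", "c")], weight, length))  # 7/3
```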

4.
The problem of partitioning a partially ordered set into a minimum number of chains is a well-known problem. In this paper we study a generalization of this problem, where we not only assume that the chains have bounded size, but also that a weight \(w_i\) is given for each element i in the partial order, with the weights monotone with respect to the partial order. The problem is then to partition the partial order into a minimum-weight set of chains of bounded size, where the weight of a chain equals the weight of the heaviest element in the chain. We prove that this problem is NP-hard, and we propose and analyze lower bounds for it. Based on these lower bounds, we exhibit a 2-approximation algorithm, and show that it is tight. We report computational results for a number of real-world and randomly generated problem instances.
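As a small illustration of the objective (not of the 2-approximation itself), this sketch evaluates the weight of a given chain partition under the bounded-size constraint; the instance is hypothetical.

```python
def partition_weight(chains, weight, bound):
    """Objective value of a chain partition: each chain of size <= bound
    contributes the weight of its heaviest element."""
    assert all(len(c) <= bound for c in chains)
    return sum(max(weight[i] for i in c) for c in chains)

# Toy instance: elements 1..4, chains {1,2} and {3,4}, size bound 2.
print(partition_weight([[1, 2], [3, 4]], {1: 1, 2: 3, 3: 2, 4: 5}, 2))  # 3 + 5 = 8
```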

5.
Motivated by the dynamic resource allocation problem for device-to-device (D2D) communications, we study the online set multicover problem (OSMC). In the online set multicover, the set X of elements to be covered is unknown in advance; furthermore, the coverage requirement of each element \(x \in X\) is initially unknown. Elements of X together with coverage requirements are presented one at a time in an online fashion, and a feasible solution must be maintained at all times. We consider two versions of OSMC: in the first, each set may be picked only once, while the second version allows each set to be picked multiple times. For both versions, we present the first deterministic online algorithms, with competitive ratios \(O( \log n \log m )\) and \(O( \log n (\log m + \log k) )\), respectively, where n is the number of elements, m is the number of sets, and k is the maximum coverage requirement. By simulation, we show the efficacy of these algorithms for resource allocation in the D2D setting by analyzing network throughput and other metrics, obtaining a large improvement in running time over offline methods.
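For intuition about the online model, here is a deliberately naive baseline (explicitly not the paper's competitive algorithm): on each arrival, greedily pick previously unpicked sets containing the element until its requirement is met, in the version where each set may be picked at most once.

```python
def online_multicover(stream, sets):
    """Naive online baseline: maintain a feasible multicover as
    (element, requirement) pairs arrive. Each set picked at most once."""
    picked = set()
    for x, r in stream:
        have = sum(1 for i in picked if x in sets[i])
        for i, s in enumerate(sets):
            if have >= r:
                break
            if i not in picked and x in s:
                picked.add(i)
                have += 1
        if have < r:
            raise ValueError(f"element {x} cannot be covered {r} times")
    return picked

sets = [{1, 2}, {2, 3}, {1, 3}]
print(online_multicover([(2, 2), (1, 1)], sets))  # {0, 1}
```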

6.
Classical group testing is a search paradigm where the goal is the identification of individual positive elements in a large collection of elements by asking queries of the form "Does a set of elements contain a positive one?". A graph reconstruction problem that generalizes the classical group testing problem is to reconstruct a hidden graph from a given family of graphs by asking queries of the form "Does a set of vertices induce an edge?". Reconstruction problems on families of Hamiltonian cycles, matchings, stars and cliques on n vertices have been studied, where algorithms using at most \(2n\lg n\), \((1+o(1))(n\lg n)\), \(2n\) and \(2n\) queries were proposed, respectively. In this paper we improve these bounds to \((1+o(1))(n\lg n)\), \((1+o(1))(\frac{n\lg n}{2})\), \(n+2\lg n\) and \(n+\lg n\), respectively. Threshold group testing is another generalization of group testing, in which the individual positive elements are to be identified under a more general setting: there are two fixed thresholds \(\ell\) and u, with \(\ell<u\), and the response to a query is positive if the tested subset of elements contains at least u positive elements, negative if it contains at most \(\ell\) positive elements, and arbitrary otherwise. For the threshold group testing problem with \(\ell=u-1\), we show that p positive elements among n given elements can be determined by using \(O(p\lg n)\) queries, with a matching lower bound.
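The classical query model is easy to demonstrate: the following textbook binary-splitting routine finds one positive element with about \(\lg n\) subset queries, assuming at least one positive exists. It illustrates the paradigm only, not the reconstruction or threshold algorithms of the paper.

```python
def find_positive(items, is_positive):
    """Locate one positive element with ~lg n queries of the form
    'does this subset contain a positive?'. Assumes >= 1 positive."""
    lo, hi = 0, len(items)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_positive(items[lo:mid]):  # one group query on the left half
            hi = mid
        else:                           # otherwise the positive is on the right
            lo = mid
    return items[lo]

positives = {7}
print(find_positive(list(range(16)), lambda g: any(x in positives for x in g)))  # 7
```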

7.
In this paper we study several geometric problems on color-spanning sets: given n points with m colors in the plane, select m points with m distinct colors such that some geometric property of the selected points is minimized or maximized. The properties studied in this paper are the maximum diameter, the largest closest pair, the planar smallest minimum spanning tree, the planar largest minimum spanning tree and the planar smallest perimeter convex hull. We propose an \(O(n^{1+\varepsilon})\) time algorithm for the maximum diameter color-spanning set problem, where \(\varepsilon\) can be an arbitrarily small positive constant. Then, we present hardness proofs for the other problems and propose two efficient constant-factor approximation algorithms for the planar smallest perimeter color-spanning convex hull problem.
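A brute-force sketch clarifies what a color-spanning set is: enumerate one point per color and record the achievable diameters. This is exponential in the number of colors and only illustrates the problem, not the \(O(n^{1+\varepsilon})\) algorithm.

```python
from itertools import product
from math import dist

def color_spanning_diameters(points):
    """Brute force over all color-spanning sets (one point per color):
    returns the min and max achievable diameter."""
    by_color = {}
    for p, c in points:
        by_color.setdefault(c, []).append(p)
    diams = []
    for choice in product(*by_color.values()):  # one point per color
        diams.append(max(dist(p, q) for p in choice for q in choice))
    return min(diams), max(diams)

pts = [((0, 0), "r"), ((3, 0), "r"), ((0, 4), "g"), ((1, 0), "b")]
print(color_spanning_diameters(pts))
```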

8.
We develop for set cover games several general cost-sharing methods that are approximately budget-balanced, in the core, and/or group-strategyproof. We first study cost sharing for a single set cover game, which does not have a budget-balanced mechanism in the core. We show that there is no cost allocation method that can always recover more than $\frac{1}{\ln n}$ of the total cost while remaining in the core, where n is the number of players to be served. We give a cost allocation method that always recovers $\frac{1}{\ln d_{\mathit{max}}}$ of the total cost, where $d_{\mathit{max}}$ is the maximum size of all sets. We then study cost allocation schemes for all induced subgames. It is known that no cost sharing scheme can always recover more than $\frac{1}{n}$ of the total cost for every subset of players. We give an efficient cost sharing scheme that always recovers at least $\frac{1}{2n}$ of the total cost for every subset of players; furthermore, our scheme is cross-monotone. When the elements to be covered are selfish agents with privately known valuations, we present a strategyproof charging mechanism, under the assumption that all sets are simple sets; further, the total cost of the set cover is no more than $\ln d_{\mathit{max}}$ times that of an optimal solution. When the sets are selfish agents with privately known costs, we present a strategyproof payment mechanism to them. We also show how to fairly share the payments to all sets among the elements.

9.
We explore a reconfiguration version of the dominating set problem, where a dominating set in a graph G is a set S of vertices such that each vertex is either in S or has a neighbour in S. In a reconfiguration problem, the goal is to determine whether there exists a sequence of feasible solutions connecting given feasible solutions s and t such that each pair of consecutive solutions is adjacent according to a specified adjacency relation. Two dominating sets are adjacent if one can be formed from the other by the addition or deletion of a single vertex. For various values of k, we consider properties of \(D_k(G)\), the graph consisting of a node for each dominating set of size at most k and edges specified by the adjacency relation. Addressing an open question posed by Haas and Seyffarth, we demonstrate that \(D_{\varGamma (G)+1}(G)\) is not necessarily connected, for \(\varGamma (G)\) the maximum cardinality of a minimal dominating set in G. The result holds even when graphs are constrained to be planar, of bounded tree-width, or b-partite for \(b \ge 3\). Moreover, we construct an infinite family of graphs such that \(D_{\gamma (G)+1}(G)\) has exponential diameter, for \(\gamma (G)\) the minimum size of a dominating set. On the positive side, we show that \(D_{n-\mu }(G)\) is connected and of linear diameter for any graph G on n vertices with a matching of size at least \(\mu +1\).
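A small brute-force construction of the reconfiguration graph \(D_k(G)\) makes the adjacency relation concrete (suitable only for tiny graphs; the function names are our own):

```python
from itertools import combinations

def dominating_sets(adj, k):
    """All dominating sets of size <= k; adj maps vertex -> neighbours."""
    V = list(adj)
    out = []
    for r in range(1, k + 1):
        for S in combinations(V, r):
            closed = set(S) | {u for v in S for u in adj[v]}
            if closed == set(V):  # S dominates every vertex
                out.append(frozenset(S))
    return out

def reconfiguration_graph(adj, k):
    """Nodes: dominating sets of size <= k; edges: pairs whose symmetric
    difference is a single vertex (one addition or one deletion)."""
    nodes = dominating_sets(adj, k)
    edges = [(S, T) for i, S in enumerate(nodes) for T in nodes[i + 1:]
             if len(S ^ T) == 1]
    return nodes, edges

# Path on 3 vertices, k = 2: four dominating sets, two reconfiguration edges.
adj = {0: [1], 1: [0, 2], 2: [1]}
nodes, edges = reconfiguration_graph(adj, 2)
print(len(nodes), len(edges))  # 4 2
```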

10.
Search algorithms are often compared by the optimization speed achieved on some sets of cost functions. Here some properties of algorithms' optimization speed are introduced and discussed. In particular, we show that determining whether a set of cost functions F admits a search algorithm having a given optimization speed is an NP-complete problem. Further, we derive an explicit formula to calculate the best achievable optimization speed when F is closed under permutation. Finally, we show that the optimization speed achieved by some well-known optimization techniques can be much worse than the best theoretical value, at least on some sets of optimization benchmarks.

11.
This note confirms a conjecture of Bramoullé (Games Econ Behav 58:30–49, 2007). The problem, which we name the maximum independent cut problem, is a restricted version of the MAX-CUT problem, requiring one side of the cut to be an independent set. We show that the maximum independent cut problem does not admit any polynomial-time algorithm with approximation ratio better than \(n^{1-\epsilon}\), where n is the number of nodes and \(\epsilon\) is arbitrarily small, unless $\mathrm{P} = \mathrm{NP}$. For the rather special case where each node has a degree of at most four, the problem is still APX-hard.

12.
In the Minimum Weight Partial Connected Set Cover problem, we are given a finite ground set \(U\), an integer \(q\le |U|\), a collection \(\mathcal {E}\) of subsets of \(U\), and a connected graph \(G_{\mathcal {E}}\) on vertex set \(\mathcal {E}\); the goal is to find a minimum weight subcollection of \(\mathcal {E}\) which covers at least \(q\) elements of \(U\) and induces a connected subgraph in \(G_{\mathcal {E}}\). In this paper, we derive a "partial cover property" for the greedy solution of the Minimum Weight Set Cover problem, based on which we present (a) for the weighted version, under the assumption that any pair of sets in \(\mathcal {E}\) with nonempty intersection are adjacent in \(G_{\mathcal {E}}\) (the Minimum Weight Partial Connected Vertex Cover problem falls into this range), an approximation algorithm with performance ratio \(\rho (1+H(\gamma ))+o(1)\), and (b) for the cardinality version, under the assumption that any pair of sets in \(\mathcal {E}\) with nonempty intersection are at most \(d\)-hops away from each other (the Minimum Partial Connected \(k\)-Hop Dominating Set problem falls into this range), an approximation algorithm with performance ratio \(2(1+dH(\gamma ))+o(1)\), where \(\gamma =\max \{|X|:X\in \mathcal {E}\}\), \(H(\cdot )\) is the Harmonic number, and \(\rho \) is the performance ratio for the Minimum Quota Node-Weighted Steiner Tree problem.

13.
The maximum independent set problem is one of the most important problems in the theoretical analysis of time and space complexities of exact algorithms. Improved upper bounds on the time complexity of this problem in low-degree graphs can lead to improved bounds for general graphs. In this paper, we derive an upper bound \(O^*(1.1376^n)\) on the time complexity of a polynomial-space algorithm that solves the maximum independent set problem in an n-vertex graph with degree bounded by 4, improving all previous upper bounds on the time complexity of exact algorithms for this problem. Our algorithm is a branch-and-reduce algorithm analyzed using the measure-and-conquer method. To obtain an amortized bound on the running time, we use an idea of "shift" to save some decrease of the measure from good branches for bad branches. Our algorithm first deals with small vertex cuts and vertices of degree \({\ge }5\), which may be created in our algorithm even if the input graph has maximum degree 4, then eliminates cycles of length 3 and 4 containing degree-4 vertices, and finally branches on degree-4 vertices. We invoke an exact algorithm for this problem in graphs with maximum degree 3 directly when the graph has no vertices of degree \({\ge }4\). Branching on degree-4 vertices in special local structures is the bottleneck case, and we carefully design rules for choosing which degree-4 vertices to branch on so that the resulting instances decrease the measure effectively in the next step.
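For readers unfamiliar with the paradigm, here is a textbook branch-and-reduce sketch for maximum independent set: reduce vertices of degree at most one greedily, otherwise branch on a maximum-degree vertex. It illustrates branch-and-reduce only, and makes no attempt at the paper's measure-and-conquer analysis or its running time bound.

```python
def mis(adj):
    """Textbook branch-and-reduce for maximum independent set;
    adj maps each vertex to its set of neighbours."""
    if not adj:
        return set()
    # Reduction: a vertex of degree <= 1 can always be taken greedily.
    v = min(adj, key=lambda u: len(adj[u]))
    if len(adj[v]) <= 1:
        return {v} | mis(remove(adj, {v} | adj[v]))
    # Branch on a maximum-degree vertex: either exclude it, or
    # include it and discard its whole closed neighbourhood.
    v = max(adj, key=lambda u: len(adj[u]))
    without_v = mis(remove(adj, {v}))
    with_v = {v} | mis(remove(adj, {v} | adj[v]))
    return max(with_v, without_v, key=len)

def remove(adj, drop):
    """Induced subgraph after deleting the vertices in `drop`."""
    return {u: adj[u] - drop for u in adj if u not in drop}

# 4-cycle: optimum independent set has size 2.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(len(mis(adj)))  # 2
```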

14.
Given an undirected graph G=(V,E) with vertex set V={1,…,n} and edge set \(E\subseteq V\times V\), the maximum clique problem is to determine in G a clique (i.e., a complete subgraph) of maximum cardinality. This paper presents an effective algorithm for the maximum clique problem. The proposed multistart tabu search algorithm integrates a constrained neighborhood, a dynamic tabu tenure mechanism and a long-term-memory-based restart strategy. Our algorithm is evaluated on the whole set of 80 DIMACS challenge benchmarks and compared with five state-of-the-art algorithms. Computational results show that it attains the largest known clique for 79 benchmarks.
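A minimal single-start tabu search conveys the flavor of the approach; this is our own simplified sketch with a fixed tenure, not the paper's multistart algorithm with dynamic tenure and long-term restarts.

```python
import random

def tabu_clique(adj, iters=10000, tenure=7, seed=0):
    """Grow a clique by add moves (vertex adjacent to every clique member);
    when stuck, drop a vertex and make it tabu for `tenure` iterations."""
    rng = random.Random(seed)
    clique, best = set(), set()
    tabu = {}  # vertex -> iteration until which re-adding it is tabu
    for it in range(iters):
        cand = [v for v in adj if v not in clique
                and clique <= adj[v] and tabu.get(v, -1) < it]
        if cand:                    # add move
            clique.add(rng.choice(cand))
        elif clique:                # drop move; dropped vertex becomes tabu
            v = rng.choice(sorted(clique))
            clique.remove(v)
            tabu[v] = it + tenure
        if len(clique) > len(best):
            best = set(clique)
    return best

# Toy graph: triangle {0,1,2} plus a pendant vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(tabu_clique(adj))  # expected: {0, 1, 2}
```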

15.
We study a new coloring concept which generalizes the classical vertex coloring problem in a graph by extending the notion of stable sets to split graphs. First, we consider the packing problem of finding a split graph of maximum size, where a split graph is a graph G = (V,E) in which the vertex set V can be partitioned into a clique K and a stable set S. No condition is imposed on the edges linking vertices in S to vertices in K. This maximum split graph problem gives rise to an associated partitioning problem that we call the split-coloring problem: given a graph, the objective is to cover all its vertices with the least number of split graphs. Definitions related to this new problem are introduced. We mention some polynomially solvable cases and describe open questions in this area.

16.
In this paper we present approximation algorithms for the following NP-hard map labeling problem: given a set S of n distinct sites in the plane, one needs to place at each site a uniform square of maximum possible size such that all the squares have the same orientation. This generalizes the classical problem of labeling points with axis-parallel squares and restricts the most general version where the squares can have different orientations. We obtain a factor-4 approximation algorithm as well as a second constant-factor approximation algorithm for this problem; these algorithms also work for two generalized versions of the problem. We also revisit the problem of labeling each point with a pair of maximum uniform axis-parallel squares and improve the previous approximation factor of 4 to 3.

17.
The best known expected time for the all-pairs shortest path problem on a directed graph with non-negative edge costs is \(O(n^2\log n)\), by Moffat and Takaoka. Let the solution set be the set of vertices to which the given algorithm has so far established shortest paths. The Moffat-Takaoka algorithm keeps the complexities before and after the critical point in balance, where the critical point is the moment at which the size of the solution set reaches \(n-n/\log n\). In this paper, we remove the concept of the critical point, which makes the algorithm simpler and seamless and results in a simpler analysis.
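As a point of reference for the problem itself (not the Moffat-Takaoka technique), running Dijkstra's algorithm from every source already gives a simple all-pairs baseline:

```python
import heapq

def apsp(adj):
    """All-pairs shortest paths by Dijkstra from every source;
    adj maps vertex -> list of (neighbour, non-negative cost)."""
    def dijkstra(s):
        dist = {s: 0}
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue  # stale queue entry
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist
    return {s: dijkstra(s) for s in adj}

adj = {0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: [(0, 3)]}
print(apsp(adj)[0])  # {0: 0, 1: 2, 2: 3}
```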

18.
Since the seminal work of Ford and Fulkerson in the 1950s, network flow theory is one of the most important and most active areas of research in combinatorial optimization. Coming from the classical maximum flow problem, we introduce and study an apparently basic but new flow problem that features a couple of interesting peculiarities. We derive several results on the complexity and approximability of the new problem. On the way we also discover two closely related basic covering and packing problems that are of independent interest. Starting from an LP formulation of the maximum s-t-flow problem in path variables, we introduce unit upper bounds on the amount of flow being sent along each path. The resulting (fractional) flow problem is NP-hard; its integral version is strongly NP-hard already on very simple classes of graphs. For the fractional problem we present an FPTAS that is based on solving the k shortest paths problem iteratively. We show that the integral problem is hard to approximate and give an interesting \(O(\log m)\)-approximation algorithm, where m is the number of arcs in the considered graph. For the multicommodity version of the problem there is an \(O(\sqrt{m})\)-approximation algorithm. We argue that this performance guarantee is best possible, unless P=NP.
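A greedy heuristic makes the "unit flow per path" constraint concrete (a baseline sketch of our own, not the FPTAS or the approximation algorithms from the abstract): repeatedly route one unit along a shortest residual s-t path.

```python
from collections import deque

def greedy_unit_path_flow(cap, s, t, k):
    """Route up to k paths, one unit each; consumes the capacity dict.
    cap[u][v] is the remaining capacity of arc (u, v)."""
    paths = []
    for _ in range(k):
        # BFS for a fewest-arcs path using only arcs with residual capacity.
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in cap.get(u, {}):
                if cap[u][v] > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while v is not None:
            path.append(v)
            v = parent[v]
        path.reverse()
        for u, v in zip(path, path[1:]):
            cap[u][v] -= 1  # one unit of flow per chosen path
        paths.append(path)
    return paths

cap = {"s": {"a": 2}, "a": {"t": 1, "b": 1}, "b": {"t": 1}}
print(greedy_unit_path_flow(cap, "s", "t", 5))
# [['s', 'a', 't'], ['s', 'a', 'b', 't']]
```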

19.
Consider a graph G. A subset of vertices F is called a vertex cover \(P_t\) (\(VCP_t\)) set if every path of order t contains at least one vertex in F. Finding a minimum \(VCP_t\) set in a graph is NP-hard for any integer \(t\ge 2\); for \(t=3\) this is called the \(MVCP_3\) problem. In this paper, we study parameterized algorithms for the \(MVCP_3\) problem when the underlying graph G is parameterized by treewidth. Given an n-vertex graph together with a tree decomposition of width at most p, we present an algorithm running in time \(4^{p}\cdot n^{O(1)}\) for the \(MVCP_3\) problem. Moreover, we show that for the \(MVCP_3\) problem on planar graphs, there is a subexponential parameterized algorithm running in time \(2^{O(\sqrt{k})}\cdot n^{O(1)}\), where k is the size of the optimal solution.
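Checking feasibility of a candidate \(VCP_3\) set is simple and clarifies the problem: F hits every path on three vertices exactly when every vertex outside F keeps degree at most one after F is deleted. A sketch:

```python
def is_vcp3(adj, F):
    """True iff F hits every path of order 3: after deleting F, the
    leftover graph must be a matching plus isolated vertices."""
    F = set(F)
    return all(len(adj[v] - F) <= 1 for v in adj if v not in F)

# Path 0-1-2-3: {1} is a VCP_3 set, the empty set is not.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_vcp3(adj, {1}), is_vcp3(adj, set()))  # True False
```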

20.
Anonymizing binary and small tables is hard to approximate
The problem of publishing personal data without giving up privacy is becoming increasingly important. An interesting formalization recently proposed is k-anonymity. This approach requires that the rows of a table be clustered in sets of size at least k and that all the rows in a cluster become identical tuples, after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is known to be NP-hard when the values are over a ternary alphabet, k=3, and the row length is unbounded. In this paper we give a lower bound on the approximation factor that any polynomial-time algorithm can achieve on two restrictions of the problem, namely (i) when the record values are over a binary alphabet and k=3, and (ii) when the records have length at most 8 and k=4, showing that these restrictions of the problem are APX-hard.
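The suppression cost is easy to state in code: within each cluster, every column on which the rows disagree must be suppressed in all of them. This toy evaluator (our own illustration) computes the cost of a given clustering; the hard part, of course, is finding a minimum-cost clustering with clusters of size at least k.

```python
def suppression_cost(clusters):
    """Cost of an anonymizing clustering: each cluster contributes
    (#columns on which its rows disagree) * (#rows in the cluster)."""
    cost = 0
    for rows in clusters:
        disagree = sum(1 for col in zip(*rows) if len(set(col)) > 1)
        cost += disagree * len(rows)
    return cost

# Two binary rows disagreeing in one column: 1 column * 2 rows = 2.
print(suppression_cost([[(0, 1, 0), (0, 0, 0)]]))  # 2
```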

