Similar Articles
20 similar articles found.
1.
In camera sensor networks (CSNs), the target coverage problem is of special importance since a sensor with different viewing directions captures distinct views of the same target. Furthermore, mission-driven monitoring applications in CSNs usually have special network lifetime requirements, and the limited battery lifetime of the sensors may not sustain full coverage. In this paper, based on an effective-sensing model, we address three new coverage problems in mission-driven camera sensor networks: the target-temporal effective-sensing coverage with non-adjustable cameras (TEC-NC) problem, the target-temporal effective-sensing coverage with adjustable cameras (TEC-AC) problem, and the target-temporal effective-sensing coverage with fully-adjustable cameras (TEC-FAC) problem. Given a mission period, the common objective is to find a sleep-wakeup schedule that maximizes the overall target-temporal coverage. For TEC-NC, we propose a 2-approximation algorithm and two new heuristics. We also design two greedy strategies, each of which can be combined with our solutions for TEC-NC to deal with TEC-AC and TEC-FAC, respectively. Finally, we conduct extensive experiments to evaluate the proposed algorithms; the results indicate that they outperform existing alternatives and are, on average, close to the theoretical optimum under certain conditions.
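The abstract does not reproduce the 2-approximation or the heuristics. As a rough illustration of the kind of greedy sleep-wakeup scheduling involved in TEC-NC-style problems, the sketch below (the names, the per-slot energy model, and the wake-up budget are our assumptions, not the paper's algorithm) wakes, in each time slot, the cameras with the largest marginal target coverage until the per-slot budget or the batteries run out.

```python
# Hypothetical greedy sleep-wakeup sketch for a TEC-NC-style setting.
# NOT the paper's 2-approximation; cameras, targets, coverage sets and the
# per-slot energy cost are illustrative assumptions.

def greedy_sleep_wakeup(covers, energy, mission_slots, max_awake_per_slot, slot_cost=1):
    """covers[s] = set of targets camera s sees when awake (fixed direction).
    energy[s]  = remaining battery of camera s, in slots it can stay awake.
    Returns schedule[t] = cameras awake in slot t, chosen greedily to
    maximize the number of covered (target, slot) pairs."""
    remaining = dict(energy)
    schedule, total_covered_pairs = [], 0
    for _ in range(mission_slots):
        awake, covered = set(), set()
        while len(awake) < max_awake_per_slot:
            best, best_gain = None, 0
            for s, targets in covers.items():
                if s in awake or remaining[s] < slot_cost:
                    continue
                gain = len(targets - covered)
                if gain > best_gain:
                    best, best_gain = s, gain
            if best is None:          # no remaining camera adds coverage in this slot
                break
            awake.add(best)
            covered |= covers[best]
            remaining[best] -= slot_cost
        schedule.append(awake)
        total_covered_pairs += len(covered)
    return schedule, total_covered_pairs

if __name__ == "__main__":
    covers = {"c1": {1, 2}, "c2": {2, 3}, "c3": {4}}
    energy = {"c1": 2, "c2": 1, "c3": 3}
    print(greedy_sleep_wakeup(covers, energy, mission_slots=3, max_awake_per_slot=2))
```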

2.
As an imperative channel for fast information propagation, online social networks (OSNs) also have their defects. One of them is information leakage, i.e., information can spread via OSNs to users with whom we are not willing to share. The problem of constructing a circle of trust, so as to share information with as many friends as possible without further spreading it to unwanted targets, has thus become a challenging research topic that remains open. Our work is the first attempt to study the Maximum Circle of Trust problem, which seeks to share information with the maximum expected number of the poster's friends while suppressing the spread to unwanted targets. First, we consider a special and more practical case with two-hop information propagation and a single unwanted target. In this case, we show that the problem is NP-hard, which rules out an exact polynomial-time algorithm unless P = NP. We thus propose a Fully Polynomial-Time Approximation Scheme (FPTAS), which can accommodate any allowable performance error bound and runs in time polynomial in both the input size and the inverse of the allowed error; an FPTAS is the best approximation guarantee one can hope for on an NP-hard problem. We next consider the case in which the number of unwanted targets is bounded and prove that no FPTAS exists in this case. Instead, we design a Polynomial-Time Approximation Scheme (PTAS) in which the allowable error can also be controlled. When the number of unwanted targets is not bounded, we provide a randomized algorithm, along with an analytical theoretical bound and an inapproximability result. Finally, we consider the general case with multi-hop information propagation, show its #P-hardness, and propose an effective Iterative Circle of Trust Detection (ICTD) algorithm based on a novel greedy function. Extensive experiments on various real-world OSNs validate the effectiveness of the proposed approximation and ICTD algorithms, and also highlight several important observations on information leakage that can help sharpen the security of OSNs in the future.

3.
We consider the frugal coverage problem, an interesting variation of set cover defined as follows. Instances of the problem consist of a universe of elements and a collection of sets over these elements; the objective is to compute a subcollection of sets so that the number of elements it covers plus the number of sets not chosen is maximized. The problem was introduced and studied by Huang and Svitkina (Proceedings of the 29th IARCS annual conference on foundations of software technology and theoretical computer science (FSTTCS), pp. 227–238, 2009) due to its connections to the donation center location problem. We prove that the greedy algorithm has approximation ratio at least 0.782, improving a previous bound of 0.731 in Huang and Svitkina (Proceedings of the 29th IARCS annual conference on foundations of software technology and theoretical computer science (FSTTCS), pp. 227–238, 2009). We also present a further improvement that is obtained by adding a simple corrective phase at the end of the execution of the greedy algorithm. The approximation ratio achieved in this way is at least 0.806. Finally, we consider a packing based algorithm that uses semi-local optimization, and show that its approximation ratio is not less than 0.872. Our analysis is based on the use of linear programs which capture the behavior of the algorithms in worst-case examples. The obtained bounds are proved to be tight.
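One natural reading of the greedy rule for frugal coverage is sketched below (an illustration, not necessarily the exact variant analysed above): since choosing a set covers its new elements but forfeits one "unchosen set" credit, the greedy keeps picking the set with the largest marginal coverage as long as that gain is at least 2.

```python
# A natural greedy for frugal coverage (illustrative; not necessarily the
# exact variant analysed in the paper): picking a set covers its new elements
# (+gain) but forfeits one "unchosen set" credit (-1), so we only pick sets
# whose marginal gain is at least 2.

def frugal_greedy(universe, sets):
    uncovered = set(universe)
    chosen = []
    while True:
        best_i, best_gain = None, 1           # require gain >= 2 to improve
        for i, s in enumerate(sets):
            if i in chosen:
                continue
            gain = len(s & uncovered)
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:
            break
        chosen.append(best_i)
        uncovered -= sets[best_i]
    covered = len(universe) - len(uncovered)
    objective = covered + (len(sets) - len(chosen))
    return chosen, objective

if __name__ == "__main__":
    universe = range(1, 7)
    sets = [{1, 2, 3}, {3, 4}, {5}, {5, 6}]
    print(frugal_greedy(universe, sets))   # picks {1,2,3} and {5,6}
```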

4.
This paper proposes an iterated greedy algorithm for solving the blocking flowshop scheduling problem for makespan minimization. Moreover, it presents an improved NEH-based heuristic, which is used as the initial solution procedure for the iterated greedy algorithm. The effectiveness of both procedures was tested on some of Taillard’s benchmark instances that are considered to be blocking flowshop instances. The experimental evaluation showed the efficiency of the proposed algorithm, in spite of its simple structure, in comparison with a state-of-the-art algorithm. In addition, new best solutions for Taillard’s instances are reported for this problem, which can be used as a basis of comparison in future studies.
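The iterated greedy template the abstract refers to is the standard destruction-construction loop; the sketch below shows that generic skeleton together with the usual blocking-flowshop makespan recursion. It is not the paper's exact procedure and omits its NEH-based initialization; the parameter names and the simple acceptance rule are our assumptions.

```python
import random

def blocking_makespan(perm, p):
    """Makespan of a blocking flowshop schedule.
    p[i][j] = processing time of job j on machine i; perm = job order.
    d_prev[i] holds the departure time of the previous job from machine i."""
    m = len(p)
    d_prev = [0.0] * (m + 1)
    for j in perm:
        d = [0.0] * (m + 1)
        d[0] = d_prev[1]                       # start on machine 1 once it is freed
        for i in range(1, m):
            d[i] = max(d[i - 1] + p[i - 1][j], d_prev[i + 1])  # blocked until next machine frees
        d[m] = d[m - 1] + p[m - 1][j]          # no blocking after the last machine
        d_prev = d
    return d_prev[m]

def iterated_greedy(p, iters=200, d_jobs=4, seed=0):
    """Generic iterated greedy skeleton (destruction + greedy reinsertion).
    This is the standard IG template, not the paper's exact algorithm."""
    rng = random.Random(seed)
    n = len(p[0])
    best = list(range(n))                      # trivial initial order (the paper uses an NEH-based one)
    best_cost = blocking_makespan(best, p)
    current, current_cost = best[:], best_cost
    for _ in range(iters):
        # destruction: remove d_jobs random jobs
        partial = current[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d_jobs)]
        # construction: reinsert each removed job at its best position
        for job in removed:
            cands = [(blocking_makespan(partial[:k] + [job] + partial[k:], p), k)
                     for k in range(len(partial) + 1)]
            _, k = min(cands)
            partial.insert(k, job)
        cost = blocking_makespan(partial, p)
        if cost <= current_cost:               # simple acceptance rule (no simulated-annealing test here)
            current, current_cost = partial, cost
            if cost < best_cost:
                best, best_cost = partial[:], cost
    return best, best_cost

if __name__ == "__main__":
    rng = random.Random(1)
    p = [[rng.randint(1, 9) for _ in range(8)] for _ in range(3)]  # 3 machines, 8 jobs
    print(iterated_greedy(p, iters=100, d_jobs=2))
```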

5.
We study minimum-cost sensor placement on a bounded 3D sensing field, R, which comprises a number of discrete points that may or may not be grid points. Suppose we have ℓ types of sensors available with different sensing ranges and different costs. We want to find, given an integer σ ≥ 1, a selection of sensors and a subset of points at which to place them such that every point in R is covered by at least σ sensors and the total cost of the sensors is minimized. This problem is known to be NP-hard. Let k_i denote the maximum number of points that can be covered by a sensor of the i-th type. We present in this paper a polynomial-time approximation algorithm for this problem with a proven approximation ratio γ, a quantity that depends on the values k_i. In applications where the distance between any two points has a fixed positive lower bound, each k_i is a constant, and we therefore obtain a polynomial-time approximation algorithm with a constant guarantee. While γ may be large, we note that it is only a worst-case upper bound; in practice the actual approximation ratio is small, even on randomly generated points that do not have a fixed positive minimum distance between them. We provide a number of numerical results comparing approximate and optimal solutions, and show that the actual approximation ratios in these examples are all less than 3, even though γ is substantially larger. This research was supported in part by NSF under grant CCF-04080261 and by NSF of China under grant 60273062.
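For flavour only, here is the textbook cost-effectiveness greedy for minimum-cost set multicover (every point must be covered σ times); it is not the approximation algorithm of the paper, and its guarantee is different. The candidate placements, costs, and coverage sets are illustrative.

```python
# Textbook cost-effectiveness greedy for minimum-cost set multicover, shown
# only to illustrate the flavour of the σ-coverage problem; it is NOT the
# paper's approximation algorithm, and its ratio differs.

def greedy_multicover(points, candidates, sigma):
    """candidates: list of (cost, covered_points) pairs, one per possible
    sensor placement; every point must end up covered by >= sigma sensors."""
    demand = {q: sigma for q in points}
    chosen, total_cost = [], 0.0
    while any(d > 0 for d in demand.values()):
        best_i, best_ratio = None, float("inf")
        for i, (cost, covered) in enumerate(candidates):
            if i in chosen:
                continue
            gain = sum(1 for q in covered if demand.get(q, 0) > 0)
            if gain and cost / gain < best_ratio:
                best_i, best_ratio = i, cost / gain
        if best_i is None:
            raise ValueError("remaining demand cannot be met")
        cost, covered = candidates[best_i]
        chosen.append(best_i)
        total_cost += cost
        for q in covered:
            if demand.get(q, 0) > 0:
                demand[q] -= 1
    return chosen, total_cost

if __name__ == "__main__":
    points = ["a", "b", "c"]
    candidates = [(2.0, {"a", "b"}), (1.5, {"b", "c"}), (3.0, {"a", "b", "c"}), (1.0, {"a"})]
    print(greedy_multicover(points, candidates, sigma=2))
```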

6.
Almost optimal solutions for bin coloring problems
In this paper we study two interesting bin coloring problems: the Minimum Bin Coloring Problem (MinBC) and the Online Maximum Bin Coloring Problem (OMaxBC), motivated by several applications in networking. For the MinBC problem, we present two near-linear-time approximation algorithms that achieve almost optimal solutions, i.e., no more than OPT+2 and OPT+1 respectively, where OPT is the optimal solution. For the OMaxBC problem, we first introduce a deterministic 2-competitive greedy algorithm, and then give lower bounds for any deterministic and randomized (against an adaptive offline adversary) online algorithms. The lower bounds show that our deterministic algorithm achieves the best possible competitive ratio. The research of this paper was partially supported by an NSF CAREER award CCF-0546509.

7.
We consider the two-level network design problem with intermediate facilities. This problem consists of designing a minimum-cost network that respects certain requirements, usually described in terms of the network topology or of a desired flow of commodities between source and destination vertices. Each selected link must receive one of two types of edge facilities, and connecting different edge facilities requires a costly and capacitated vertex facility. We propose a hybrid decomposition approach that heuristically obtains tentative solutions for the number and location of the vertex facilities and uses these solutions to limit the computational burden of a branch-and-cut algorithm. We test our method on instances of the power system secondary distribution network design problem. The results show that the method is efficient both in terms of solution quality and computational time.

8.
Many problems that appear in different contexts are conceptually similar and thus amenable to solution by a common technique. Three such technical Information Systems (IS) problems are: (1) segmentation of computer programs; (2) attribute discretization for decision tree induction; and (3) design of hash tables in database systems. In this paper we show how each of these seemingly different problems can be formulated as a sequential (set) partitioning problem and solved using a parametric linear programming (LP) procedure. This approach provides optimal solutions, unlike previous approaches that were either greedy heuristics or limited to a specific problem situation. Given the likelihood that other applications of the sequential partitioning problem exist in IS, the material presented here could help other researchers formulate the problem at an appropriate level of abstraction so that available optimal solution approaches can be identified. In addition to providing a common solution method, parametric LP frees the user from having to make premature decisions regarding the number of groups in the partition; this decision can be deferred until after the solution is obtained.

9.
10.
The paper addresses the relay node placement problem in two-tiered wireless sensor networks. Given a set of sensor nodes in the Euclidean plane, our objective is to place the minimum number of relay nodes to forward data packets from sensor nodes to the sink such that either (1) the network is connected, or (2) the network is 2-connected. For case one, we propose a (6+ε)-approximation algorithm for any ε > 0 with polynomial running time when ε is fixed. For case two, we propose two approximation algorithms with ratios (24+ε) and (6/T+12+ε), respectively, where T is the ratio of the number of relay nodes placed in case one to the number of sensors. We further extend the results to the cases where the communication radii of sensor nodes and relay nodes differ from each other.

11.
Let G be an undirected connected graph. Given g groups, each being a subset of V(G), and a number of colors, we consider how to find a subgroup of the subsets such that there exists a tree interconnecting all vertices in each chosen subset and all trees can be colored properly with the given colors (no two trees sharing a common edge receive the same color); the objective is to maximize the number of subsets in the subgroup. This problem arises from multicast communication in all-optical networks. In this paper, we first obtain an explicit lower bound on the approximability of this problem and prove Ω(g^{1−ε})-inapproximability even when G is a mesh. We then propose a simple greedy algorithm that achieves performance ratio O(√|E(G)|), which matches the theoretical bound. Supported in part by the NSF of China under Grant No. 70221001 and 60373012.

12.
Greedy algorithms are simple, but their relative power is not well understood. The priority framework (Borodin et al. in Algorithmica 37:295–326, 2003) captures a key notion of “greediness” in the sense that it processes (in some locally optimal manner) one data item at a time, depending only on the current knowledge of the input. This algorithmic model provides a tool to assess the computational power and limitations of greedy algorithms, especially in terms of their approximability. In this paper, we study priority algorithm approximation ratios for the Subset-Sum Problem, focusing on the power of revocable decisions, under which accepted data items can later be rejected to maintain the feasibility of the solution. We first provide a tight bound of α≈0.657 for irrevocable priority algorithms. We then show that the approximation ratio of fixed-order revocable priority algorithms is between β≈0.780 and γ≈0.852, and the ratio of adaptive-order revocable priority algorithms is between 0.8 and δ≈0.893. A preliminary version of this paper appeared in the Proceedings of COCOON 2007, LNCS 4598, pp. 504–514.
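The concrete priority algorithms whose ratios are bounded above are not given in the abstract. As a toy illustration of what a revocable decision means for Subset-Sum, the sketch below scans items once in non-increasing order and lets a new item evict previously accepted ones when the swap raises the packed total; it is not claimed to attain the stated bounds.

```python
# A toy revocable greedy for Subset-Sum (maximize total weight packed into
# capacity C). Items are scanned once in non-increasing order; a new item may
# evict previously accepted ones if the swap raises the packed total. This is
# only an illustration of "revocable decisions", not the priority algorithms
# whose ratios are bounded in the paper.

def revocable_greedy_subset_sum(items, capacity):
    accepted, load = [], 0
    for x in sorted(items, reverse=True):
        if load + x <= capacity:
            accepted.append(x)
            load += x
            continue
        # try to make room by evicting the smallest accepted items
        evict, freed = [], 0
        for y in sorted(accepted):
            if load - freed + x <= capacity:
                break
            evict.append(y)
            freed += y
        if load - freed + x <= capacity and x > freed:   # swap only if it helps
            for y in evict:
                accepted.remove(y)
            accepted.append(x)
            load = load - freed + x
    return accepted, load

if __name__ == "__main__":
    print(revocable_greedy_subset_sum([6, 5, 4, 3, 2], capacity=9))  # -> ([6, 3], 9)
```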

13.
Young-Ho Cha, Yeong-Dae Kim. Omega, 2010, 38(5): 383–392.
In this paper, we consider the fire scheduling problem (FSP) for field artillery, which is the problem of scheduling operations of firing at given targets with a given set of weapons. We consider a situation in which the number of available weapons is smaller than the number of targets, the targets are already assigned to the weapons, and the targets may move, so the probability that a target is destroyed by a firing attack decreases as time passes. We present a branch-and-bound algorithm for the FSP with the objective of minimizing the total threat of the targets, which is expressed as a function of the destruction probabilities of the targets. Results of computational tests show that the suggested algorithm solves problems of medium size in a reasonable amount of computation time.

14.

Greedy algorithms are among the most elementary ones in theoretical computer science and understanding the conditions under which they yield an optimum solution is a widely studied problem. Greedoids were introduced by Korte and Lovász at the beginning of the 1980s as a generalization of matroids. One of the basic motivations of the notion was to extend the theoretical background behind greedy algorithms beyond the well-known results on matroids. Indeed, many well-known algorithms of a greedy nature that cannot be interpreted in a matroid-theoretical context are special cases of the greedy algorithm on greedoids. Although this algorithm turns out to be optimal in surprisingly many cases, no general theorem is known that explains this phenomenon in all these cases. Furthermore, certain claims regarding this question that were made in the original works of Korte and Lovász turned out to be false only recently. The aim of this paper is to revisit and straighten out this question: we summarize recent progress and we also prove new results in this field. In particular, we generalize a result of Korte and Lovász and thus we obtain a sufficient condition for the optimality of the greedy algorithm that covers a much wider range of known applications than the original one.
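To make the object of study concrete, the sketch below shows the generic greedy algorithm on a set system given by a feasibility oracle, which is the algorithm the greedoid framework generalizes; the union-find acyclicity test in the example (a graphic matroid, where this greedy is optimal for every weighting) is our own illustration.

```python
# Generic greedy on a set system: repeatedly add the heaviest element whose
# addition keeps the current set feasible. With a matroid oracle this greedy
# is optimal for every weighting, the phenomenon discussed above.

def greedoid_greedy(elements, weight, feasible):
    solution = set()
    while True:
        candidates = [e for e in elements
                      if e not in solution and feasible(solution | {e})]
        if not candidates:
            return solution
        solution.add(max(candidates, key=weight))

if __name__ == "__main__":
    # Example: the graphic matroid of a triangle plus a pendant edge;
    # feasible sets are acyclic, so the greedy returns a maximum-weight spanning forest.
    edges = {"ab": 3.0, "bc": 2.0, "ca": 1.0, "cd": 5.0}
    ends = {"ab": ("a", "b"), "bc": ("b", "c"), "ca": ("c", "a"), "cd": ("c", "d")}

    def acyclic(edge_set):
        parent = {}
        def find(v):
            parent.setdefault(v, v)
            while parent[v] != v:
                v = parent[v]
            return v
        for e in edge_set:
            u, w = (find(x) for x in ends[e])
            if u == w:
                return False      # adding e would close a cycle
            parent[u] = w
        return True

    print(greedoid_greedy(edges, edges.get, acyclic))   # {'cd', 'ab', 'bc'}
```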


15.
Motivated by a real-world application, we study the multiple knapsack problem with assignment restrictions (MKAR). We are given a set of items, each with a positive real weight, and a set of knapsacks, each with a positive real capacity. In addition, for each item a set of knapsacks that can hold that item is specified. In a feasible assignment of items to knapsacks, each item is assigned to at most one knapsack, assignment restrictions are satisfied, and knapsack capacities are not exceeded. We consider the objectives of maximizing assigned weight and minimizing utilized capacity. We focus on obtaining approximate solutions in polynomial computational time. We show that simple greedy approaches yield 1/3-approximation algorithms for the objective of maximizing assigned weight. We give two different 1/2-approximation algorithms: the first solves single knapsack problems successively and the second is based on rounding the LP relaxation solution. For the bicriteria problem of minimizing utilized capacity subject to a minimum requirement on assigned weight, we give a (1/3, 2)-approximation algorithm.
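A plain greedy of the kind the abstract alludes to is sketched below for illustration: items are scanned in non-increasing weight and placed into any admissible knapsack with room. Whether this exact variant is the one for which the 1/3 guarantee is proved is not stated in the abstract.

```python
# A simple greedy for MKAR, for illustration only: scan items in non-increasing
# weight and put each into an admissible knapsack with enough residual capacity.

def mkar_greedy(weights, capacities, allowed):
    """weights[i]: item weight; capacities[j]: knapsack capacity;
    allowed[i]: set of knapsack indices that may hold item i."""
    residual = list(capacities)
    assignment = {}                              # item -> knapsack
    for i in sorted(range(len(weights)), key=lambda i: -weights[i]):
        # prefer the admissible knapsack with the most room left
        for j in sorted(allowed[i], key=lambda j: residual[j], reverse=True):
            if weights[i] <= residual[j]:
                assignment[i] = j
                residual[j] -= weights[i]
                break
    assigned_weight = sum(weights[i] for i in assignment)
    return assignment, assigned_weight

if __name__ == "__main__":
    weights = [5.0, 4.0, 3.0, 2.0]
    capacities = [6.0, 5.0]
    allowed = [{0}, {0, 1}, {1}, {0, 1}]
    print(mkar_greedy(weights, capacities, allowed))
```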

16.
Barrier coverage, one of the most important applications of wireless sensor networks (WSNs), provides coverage for the boundary of a target region. We study the barrier coverage problem using a set of n sensors with adjustable coverage radii deployed along a line interval or a circle. Our goal is to determine a range assignment \(\mathbf{R}=(r_1, r_2, \ldots, r_n)\) of the sensors such that the line interval or circle is fully covered and the total cost \(C(\mathbf{R})=\sum_{i=1}^n r_i^\alpha\) is minimized. For the line interval case, we formulate the barrier coverage problem of line-based offsets deployment and present two approximation algorithms to solve it: one is a 4/3-approximation algorithm that runs in \(O(n^2)\) time, and the other is a fully polynomial time approximation scheme (FPTAS) with computational complexity \(O(\frac{n^2}{\epsilon})\). For the circle case, we solve the problem optimally when \(\alpha = 1\) and present a \(2(\frac{\pi}{2})^\alpha\)-approximation algorithm when \(\alpha > 1\). Besides, we propose an integer linear program (ILP) that minimizes the total cost of the barrier coverage problem such that each point of the line interval is covered by at least k sensors.
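The abstract states the ILP only in words. One standard way to write such a model (our assumption, not necessarily the paper's exact formulation: each sensor's radius is restricted to a finite candidate set \(R_i\), and k-coverage is enforced at a finite set \(P\) of sample points of the interval) is:

\[
\begin{aligned}
\min\ & \sum_{i=1}^{n}\sum_{\rho\in R_i}\rho^{\alpha}\,x_{i\rho}\\
\text{s.t.}\ & \sum_{i=1}^{n}\ \sum_{\rho\in R_i:\ |p-s_i|\le\rho} x_{i\rho}\ \ge\ k && \forall p\in P,\\
& \sum_{\rho\in R_i} x_{i\rho}\ \le\ 1 && i=1,\dots,n,\\
& x_{i\rho}\in\{0,1\},
\end{aligned}
\]

where \(s_i\) is the position of sensor i and \(x_{i\rho}=1\) means sensor i is assigned radius \(\rho\).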

17.
The Canadian traveller problem and its competitive analysis
From the online point of view, we study the Canadian Traveller Problem (CTP), in which the traveller knows in advance the structure of the graph and the costs of all edges. However, some edges may fail, and the traveller observes this only upon reaching an adjacent vertex of the blocked edge. The goal is to find the least-cost route from the source O to the destination D; more precisely, to find an adaptive strategy minimizing the competitive ratio, which compares the performance of this strategy with that of a hypothetical offline algorithm that knows the entire topology in advance. In this paper, we present two adaptive strategies: a greedy (myopic) strategy, and a comparison strategy that combines the greedy strategy with the reposition strategy, in which the traveller backtracks to the source whenever a failed edge is seen. We prove tight competitive ratios of 2^{k+1}−1 and 2k+1 respectively for the two strategies, where k is the number of failed edges in the graph. Finally, we propose an explanation of why the greedy strategy and the comparison strategy are usually preferred by drivers in an urban traffic environment, based on an argument related to the length of the second-shortest path in a grid graph. We would like to acknowledge the support from NSF of China (No. 70525004, No. 70121001 and No. 60736027), and the support from K.C. Wong Education Foundation, Hong Kong.
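As a concrete rendering of the greedy (myopic) strategy described above, the sketch below replans a shortest path over the edges not yet observed to be blocked each time a blockage is discovered at the current vertex; the comparison strategy is not shown, and the graph encoding and Dijkstra helper are our own assumptions.

```python
import heapq

# Sketch of the greedy (myopic) CTP strategy: whenever a blocked edge is
# revealed at the current vertex, replan a shortest path in the graph
# restricted to edges not yet known to be blocked. Illustrative encoding only.

def dijkstra_path(adj, src, dst, blocked):
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if frozenset((u, v)) in blocked:
                continue
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    if dst not in dist:
        return None
    path, v = [dst], dst
    while v != src:
        v = prev[v]
        path.append(v)
    return path[::-1]

def myopic_ctp(adj, src, dst, actually_blocked):
    """Walk from src to dst; a blocked edge is revealed only when the traveller
    stands at one of its endpoints. Returns the travelled route and its cost."""
    known_blocked, route, cost, cur = set(), [src], 0.0, src
    while cur != dst:
        for v, _ in adj[cur]:                     # reveal blocked edges at the current vertex
            e = frozenset((cur, v))
            if e in actually_blocked:
                known_blocked.add(e)
        path = dijkstra_path(adj, cur, dst, known_blocked)
        if path is None:
            raise RuntimeError("destination unreachable")
        nxt = path[1]
        cost += next(w for v, w in adj[cur] if v == nxt)
        route.append(nxt)
        cur = nxt
    return route, cost

if __name__ == "__main__":
    adj = {"O": [("A", 1), ("B", 4)], "A": [("O", 1), ("D", 1)],
           "B": [("O", 4), ("D", 1)], "D": [("A", 1), ("B", 1)]}
    print(myopic_ctp(adj, "O", "D", {frozenset(("A", "D"))}))
```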

18.
In mobile networks using wideband code division multiple access (WCDMA), common pilot channel (CPICH) signals are used by mobile terminals for channel quality estimation, cell selection, and handover. The strength of the CPICH signal determines the coverage area of the cell and impacts the network capacity, and thereby the quality of service; it is therefore a crucial parameter in network planning and optimization. Pilot power is the most important parameter for controlling the strength of the CPICH signal. The more power is spent on pilot signals, the better the coverage obtained. On the other hand, a higher pilot power level in a cell means more pilot pollution in the network and less power available to serve user traffic in the cell. In this paper, we consider the problem of minimizing the total amount of pilot power subject to a coverage constraint. Our modeling and solution approaches are based on mathematical programming techniques. We present a basic model for pilot power optimization subject to a full coverage constraint, as well as an extended version that allows us to study various coverage levels and to consider the user traffic distribution over the network. We also propose an efficient algorithm that gives near-optimal solutions to the problem within a reasonable amount of time. We report numerical experiments for three WCDMA networks of various sizes based on realistic planning scenarios and examine the effect of different required coverage degrees on the total amount of pilot power.
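The basic full-coverage model is described only verbally above. A common way to cast it (an illustrative assumption on our part: pilot powers restricted to discrete levels \(P_1,\dots,P_m\), coverage checked at a set \(B\) of test points with attenuation \(g_{cb}\) from cell c to point b and CPICH detection threshold \(\gamma\)) is the following set-covering-type program, which may differ from the paper's exact formulation and its traffic-weighted extension:

\[
\begin{aligned}
\min\ & \sum_{c}\sum_{\ell=1}^{m} P_\ell\, x_{c\ell}\\
\text{s.t.}\ & \sum_{\ell=1}^{m} x_{c\ell} = 1 && \forall c,\\
& \sum_{c}\ \sum_{\ell:\ g_{cb}P_\ell \ge \gamma} x_{c\ell}\ \ge\ 1 && \forall b\in B,\\
& x_{c\ell}\in\{0,1\},
\end{aligned}
\]

where \(x_{c\ell}=1\) selects pilot power level \(P_\ell\) for cell c.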

19.
The complexity of the Bandpass problem is re-investigated. Specifically, we show that the problem with any fixed bandpass number B≥2 is NP-hard. Next, a row stacking algorithm is proposed for the problem with three columns; it produces a solution that is at most 1 less than the optimum. For the special case B=2, the row stacking algorithm guarantees an optimal solution. On approximation, for the general problem, we present an O(B^2)-approximation algorithm, which reduces to a 2-approximation algorithm for the special case B=2.

20.
The linear ordering problem (LOP) is an NP-hard combinatorial optimization problem with a wide range of applications in economics, archaeology, the social sciences, scheduling, and biology. It has, however, drawn little attention compared to other closely related problems such as the quadratic assignment problem and the traveling salesman problem. Due to its computational complexity, it is essential in practice to develop solution approaches that rapidly find high-quality solutions. In this paper we propose a new algorithm based on a greedy randomized adaptive search procedure (GRASP) to efficiently solve the LOP. The algorithm is integrated with a Path-Relinking (PR) procedure and a new local search scheme. We tested our implementation on the set of 49 real-world instances of input-output tables (LOLIB instances) proposed in Reinelt (Linear Ordering Library (LOLIB), 2002). In addition, we tested a set of 30 large randomly generated instances proposed in Mitchell (Computational experience with an interior point cutting plane algorithm, Tech. rep., Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180-3590, USA, 1997). Most of the LOLIB instances were solved to optimality within 0.87 seconds on average. The average gap for the randomly generated instances was 0.0173% with an average running time of 21.98 seconds. The results indicate the efficiency and high solution quality of the proposed heuristic procedure.
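The paper's construction, local search, and Path-Relinking components are not described in the abstract; the skeleton below only illustrates the overall GRASP control flow on the LOP (randomized greedy construction from a restricted candidate list, followed by a simple adjacent-swap local search). All parameter choices are illustrative.

```python
import random

# Generic GRASP skeleton for the linear ordering problem (LOP): maximize the
# sum of matrix entries above the diagonal under a permutation of rows/columns.
# Control-flow illustration only; not the paper's GRASP+PR algorithm.

def lop_value(order, C):
    return sum(C[order[i]][order[j]]
               for i in range(len(order)) for j in range(i + 1, len(order)))

def randomized_construction(C, rng, rcl_size=3):
    n = len(C)
    remaining, order = set(range(n)), []
    while remaining:
        # score = how strongly a candidate "dominates" the still-unplaced items
        scored = sorted(remaining,
                        key=lambda r: sum(C[r][c] - C[c][r] for c in remaining if c != r),
                        reverse=True)
        order.append(rng.choice(scored[:rcl_size]))   # pick from the restricted candidate list
        remaining.remove(order[-1])
    return order

def local_search(order, C):
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            a, b = order[i], order[i + 1]
            if C[b][a] > C[a][b]:                     # adjacent swap raises the objective
                order[i], order[i + 1] = b, a
                improved = True
    return order

def grasp_lop(C, iterations=50, seed=0):
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(iterations):
        order = local_search(randomized_construction(C, rng), C)
        val = lop_value(order, C)
        if val > best_val:
            best, best_val = order, val
    return best, best_val

if __name__ == "__main__":
    C = [[0, 3, 1], [1, 0, 4], [2, 2, 0]]
    print(grasp_lop(C))
```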

