Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Group testing with inhibitors (GTI) is a variant of classical group testing where, in addition to positive items and negative items, there is a third class of items called inhibitors. In this model the response to a test is YES if and only if the tested group of items contains at least one positive item and no inhibitor. This model of group testing was introduced by Farach et al. (Proceedings of compression and complexity of sequences, pp. 357–367, 1997) for applications in the field of molecular biology. In this paper we investigate the GTI problem both in the case when the exact number of positive items is given and in the case when the number of positives is not given but we are provided with an upper bound on it. For the latter case, we present a lower bound on the number of tests required to determine the positive items in a completely nonadaptive fashion. Also under the same hypothesis, we derive an improved lower bound on the number of tests required by any algorithm (using any number of stages) for the GTI problem. As for the case when the exact number of positives is known, we give an efficient trivial two-stage algorithm. Instrumental to our results are new combinatorial structures introduced in this paper. In particular we introduce generalized versions of the well known superimposed codes (Du, D.Z., Hwang, F.K. in Pooling designs and nonadaptive group testing, 2006; Dyachkov, A.G., Rykov, V.V. in Probl. Control Inf. Theory 12:7–13, 1983; Dyachkov, A.G., et al. in J. Comb. Theory Ser. A 99:195–218, 2002; Kautz, W.H., Singleton, R.R. in IEEE Trans. Inf. Theory 10:363–377, 1964) and selectors (Clementi, A.E.F., et al. in Proceedings of symposium on discrete algorithms, pp. 709–718, 2001; De Bonis, A., et al. in SIAM J. Comput. 34(5):1253–1270, 2005; Indyk, P. in Proceedings of symposium on discrete algorithms, pp. 697–704, 2002) that we believe to be of independent interest.
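The test-outcome rule of the GTI model stated above is simple enough to write down directly; a minimal sketch (function name and data representation are illustrative, not taken from the paper):

```python
def gti_test(group, positives, inhibitors):
    """Outcome of a single test in the group testing with inhibitors (GTI)
    model: YES iff the tested group contains at least one positive item
    and no inhibitor."""
    group = set(group)
    return bool(group & set(positives)) and not (group & set(inhibitors))
```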

2.
In this paper, we consider an interesting variant of the classical facility location problem called the uncapacitated facility location problem with penalties (UFLWP for short), in which each client is either assigned to an opened facility or rejected by paying a penalty. The UFLWP problem has been effectively used to model the facility location problem with outliers. Three constant-factor approximation algorithms have been obtained (Charikar et al. in Proceedings of the Symposium on Discrete Algorithms, pp. 642–651, 2001; Jain et al. in J. ACM 50(6):795–824, 2003; Xu and Xu in Inf. Process. Lett. 94(3):119–123, 2005), and the best known performance ratio is 2. The only known hardness result is a 1.463-inapproximability result inherited from the uncapacitated facility location problem (Guha and Khuller in J. Algorithms 31(1):228–248, 1999). In this paper, we present a 1.8526-approximation algorithm for the UFLWP problem. Our algorithm significantly reduces the gap between the known performance ratio and the inapproximability result. Our algorithm first enhances the primal-dual method for the UFLWP problem (Charikar et al. in Proceedings of the Symposium on Discrete Algorithms, pp. 642–651, 2001) so that outliers can be recognized more efficiently, and then applies a local search heuristic (Charikar and Guha in Proceedings of the 39th IEEE Symposium on Foundations of Computer Science, pp. 378–388, 1999) to further reduce the cost for serving the non-rejected clients. Our algorithm is simple and can be easily implemented. The research of this work was supported in part by NSF through CAREER award CCF-0546509 and grant IIS-0713489. A preliminary version of this paper appeared in the Proceedings of the 11th Annual International Computing and Combinatorics Conference (COCOON’05).
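The UFLWP objective described above (opening costs, plus connection costs for served clients, plus penalties for rejected clients) can be evaluated in a few lines; a sketch assuming dictionaries f, c and p for the opening, connection and penalty costs (these names are illustrative):

```python
def uflwp_cost(open_facilities, assignment, rejected, f, c, p):
    """Total cost of a UFLWP solution.
    f[i]      = opening cost of facility i
    c[(i, j)] = cost of connecting client j to facility i
    p[j]      = penalty paid if client j is rejected
    assignment maps each served client j to its open facility i."""
    opening = sum(f[i] for i in open_facilities)
    connection = sum(c[(i, j)] for j, i in assignment.items())
    penalties = sum(p[j] for j in rejected)
    return opening + connection + penalties
```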

3.
Understanding recombination is a central problem in population genetics. In this paper, we address an established computational problem in this area: computing lower bounds on the minimum number of historical recombinations needed to generate a set of sequences (Hudson and Kaplan in Genetics 111:147–164, 1985; Myers and Griffiths in Genetics 163:375–394, 2003; Gusfield et al. in Discrete Appl. Math. 155:806–830, 2007; Bafna and Bansal in IEEE/ACM Trans. Comput. Biol. Bioinf. 1:78–90, 2004 and in J. Comput. Biol. 13:501–521, 2006; Song et al. in Bioinformatics 21:i413–i422, 2005). In particular, we propose a new recombination lower bound: the forest bound. We show that the forest bound can be formulated as the minimum perfect phylogenetic forest problem, a natural extension of the classic binary perfect phylogeny problem, which may be of interest in its own right. We then show that the forest bound is provably higher than the optimal haplotype bound (Myers and Griffiths in Genetics 163:375–394, 2003), a very good lower bound in practice (Song et al. in Bioinformatics 21:i413–i422, 2005). We prove that, like several other lower bounds (Bafna and Bansal in J. Comput. Biol. 13:501–521, 2006), computing the forest bound is NP-hard. Finally, we describe an integer linear programming (ILP) formulation that computes the forest bound precisely for a certain range of data. Simulation results show that the forest bound may be useful in computing lower bounds for low-quality data. A preliminary version of this paper appeared in the Proceedings of COCOON 2007, LNCS, vol. 4598, pp. 16–26. The work was performed while Y. Wu was with UC Davis and supported by grants CCF-0515278 and IIS-0513910 from the National Science Foundation. D. Gusfield was supported by grants CCF-0515278 and IIS-0513910 from the National Science Foundation.
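For orientation, the classical Hudson–Kaplan bound R_m cited above (not the forest bound proposed in this paper) can be sketched as follows: detect incompatible site pairs via the four-gamete test and greedily count pairwise non-overlapping incompatible intervals. A rough sketch for binary sequences given as strings of '0'/'1':

```python
from itertools import combinations

def hudson_kaplan_rm(sequences):
    """Classical Hudson-Kaplan lower bound R_m on the number of
    recombinations: a pair of sites (i, j) is incompatible if all four
    gametes 00, 01, 10, 11 occur; R_m is the maximum number of pairwise
    non-overlapping incompatible site intervals (greedy by right endpoint)."""
    m = len(sequences[0])
    incompatible = []
    for i, j in combinations(range(m), 2):
        gametes = {(s[i], s[j]) for s in sequences}
        if len(gametes) == 4:
            incompatible.append((i, j))
    incompatible.sort(key=lambda ij: ij[1])   # greedy interval scheduling
    rm, last_end = 0, -1
    for i, j in incompatible:
        if i >= last_end:                     # disjoint from chosen intervals
            rm += 1
            last_end = j
    return rm
```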

4.
When a switching network topology is used for constructing optical cross-connects, as in the circuit switching case, no two routes are allowed to share a link. However, if two routes share too many switching elements, then crosstalk introduced at those switching elements degrades signal quality. Vaez and Lea (IEEE Trans. Commun. 48(2):316–323, 2000) introduced a parameter c, which is the maximum number of distinct switching elements a route can share with other routes in the network. This is called the general crosstalk constraint. This paper presents a new method of analyzing strictly nonblocking multi-log networks under this general crosstalk constraint using linear programming duality.

5.
The basic models of online time series search and one-way trading were introduced by El-Yaniv et al. (Algorithmica 30(1):101–139, 2001), where it is assumed that the prices are bounded within an interval [m,M] (0<m<M). In this paper, we consider another case where every two consecutive prices are interrelated, that is, the variation range of each price depends on its preceding price. We present optimal deterministic online algorithms for the two problems, respectively. Based on a conclusion in Algorithmica 30(1):101–139 (2001), we further point out that, for the case we consider, an optimal deterministic algorithm for the one-way trading problem can be regarded as an optimal randomized one for the time series search problem, and that randomization is useless for the one-way trading problem.
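For contrast with the interrelated-price variant studied here, the optimal deterministic policy for the classical bounded model of El-Yaniv et al. is the reservation-price rule: accept the first price of at least √(mM), which yields competitive ratio √(M/m). A minimal sketch of that classical rule (the paper's own algorithms for the interrelated case are not reproduced here):

```python
from math import sqrt

def reservation_price_search(prices, m, M):
    """Optimal deterministic policy for classical online time series search
    with prices known to lie in [m, M]: accept the first price >= sqrt(m*M);
    if none appears, take the last price. Competitive ratio: sqrt(M/m)."""
    p_star = sqrt(m * M)
    for p in prices:
        if p >= p_star:
            return p
    return prices[-1]
```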

6.
In the paper “Fault-free Mutually Independent Hamiltonian Cycles in Hypercubes with Faulty Edges” (J. Comb. Optim. 13:153–162, 2007), the authors claimed that an n-dimensional hypercube can be embedded with (n−1−f) mutually independent Hamiltonian cycles when f ≤ n−2 faulty edges may occur accidentally. However, there are two mistakes in their proof. In this paper, we give examples to explain why the proof is deficient. Then we present a correct proof. This work was supported in part by the National Science Council of the Republic of China under Contract NSC 95-2221-E-233-002.

7.
On lazy bureaucrat scheduling with common deadlines   (total citations: 1; self-citations: 1; citations by others: 0)
Lazy bureaucrat scheduling is a new class of scheduling problems introduced by Arkin et al. (Inf. Comput. 184:129–146, 2003). In this paper we focus on the case where all the jobs share a common deadline. Such a problem is denoted CD-LBSP, and has been considered by Esfahbod et al. (Algorithms and data structures. Lecture notes in computer science, vol. 2748, pp. 59–66, 2003). We first show that the worst-case ratio of the algorithm SJF (Shortest Job First) is two under the objective function [min-time-spent], and thus answer an open question posed by Esfahbod et al. (Algorithms and data structures. Lecture notes in computer science, vol. 2748, pp. 59–66, 2003). We further present two approximation schemes A_k and B_k, both having worst-case ratio of , for any given integer k>0, under the objective functions [min-makespan] and [min-time-spent], respectively. Finally, we prove that the problem CD-LBSP remains NP-hard under several objective functions, even if all jobs share the same release time. A preliminary version of the paper appeared in the Proceedings of the 7th Latin American Symposium on Theoretical Informatics, pp. 515–523, 2006. Research of G. Zhang was supported in part by NSFC (60573020).

8.
It is well known that if G is a multigraph then χ′(G)≥χ*(G):=max{Δ(G),Γ(G)}, where χ′(G) is the chromatic index of G, χ*(G) is the fractional chromatic index of G, Δ(G) is the maximum degree of G, and Γ(G)=max{2|E(G[U])|/(|U|−1) : U⊆V(G), |U|≥3, |U| is odd}. The conjecture that χ′(G)≤max{Δ(G)+1,⌈Γ(G)⌉} was made independently by Goldberg (Discret. Anal. 23:3–7, 1973), Anderson (Math. Scand. 40:161–175, 1977), and Seymour (Proc. Lond. Math. Soc. 38:423–460, 1979). Using a probabilistic argument, Kahn showed that for any c>0 there exists D>0 such that χ′(G)≤χ*(G)+cχ*(G) when χ*(G)>D. Nishizeki and Kashiwagi proved this conjecture for multigraphs G with χ′(G)>(11Δ(G)+8)/10, and Scheide recently improved this bound to χ′(G)>(15Δ(G)+12)/14. We prove this conjecture for multigraphs G with χ′(G)>⌊Δ(G)+√(Δ(G)/2)⌋, improving the above mentioned results. As a consequence, for multigraphs G with χ′(G)>Δ(G)+√(Δ(G)/2) the answer to a 1964 problem of Vizing is in the affirmative.
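The density parameter Γ(G) defined above can be computed by brute force on small multigraphs, which makes the definition concrete; a sketch assuming the multigraph is given as a vertex list and an edge list with parallel edges repeated (names are illustrative):

```python
from itertools import combinations

def delta_and_gamma(vertices, edges):
    """Compute Delta(G) and Gamma(G) for a small multigraph.
    Gamma(G) = max over odd vertex sets U, |U| >= 3, of 2|E(G[U])|/(|U|-1);
    brute force over all such subsets, so suitable only for small graphs."""
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    delta = max(deg.values())

    gamma = 0.0
    for k in range(3, len(vertices) + 1, 2):          # odd |U| >= 3
        for U in combinations(vertices, k):
            Uset = set(U)
            inside = sum(1 for u, v in edges if u in Uset and v in Uset)
            gamma = max(gamma, 2.0 * inside / (k - 1))
    return delta, gamma
```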

9.
Rocchio’s similarity-based relevance feedback algorithm, one of the most important query reformation methods in information retrieval, is essentially an adaptive supervised learning algorithm from examples. In practice, Rocchio’s algorithm often uses a fixed query updating factor. When this is the case, we strengthen the linear Ω(n) lower bound obtained by Chen and Zhu (Inf. Retr. 5:61–86, 2002) and prove that Rocchio’s algorithm makes Ω(k(n−k)) mistakes in searching for a collection of documents represented by a monotone disjunction of k relevant features over the n-dimensional binary vector space {0,1}^n, when the inner product similarity measure is used. A quadratic lower bound is obtained when k is linearly proportional to n. We also prove an O(k(n−k)^3) upper bound for Rocchio’s algorithm with the inner product similarity measure in searching for such a collection of documents with a constant query updating factor and a zero classification threshold.
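As background, a generic Rocchio-style relevance feedback update with inner-product ranking looks as follows; the α/β/γ constants are the usual textbook defaults, not values analysed in the paper, and the vector representation is illustrative:

```python
def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """One round of Rocchio-style relevance feedback on sparse vectors
    (dicts term -> weight): move the query toward the centroid of the
    relevant documents and away from the centroid of the nonrelevant ones."""
    terms = set(query) | {t for d in relevant for t in d} | {t for d in nonrelevant for t in d}
    new_q = {}
    for t in terms:
        pos = sum(d.get(t, 0.0) for d in relevant) / max(len(relevant), 1)
        neg = sum(d.get(t, 0.0) for d in nonrelevant) / max(len(nonrelevant), 1)
        new_q[t] = alpha * query.get(t, 0.0) + beta * pos - gamma * neg
    return new_q

def inner_product(query, doc):
    """Similarity measure referred to in the abstract: plain inner product."""
    return sum(w * doc.get(t, 0.0) for t, w in query.items())
```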

10.
We study an information-theoretic variant of the graph coloring problem in which the objective function to minimize is the entropy of the coloring. The minimum entropy of a coloring is called the chromatic entropy and was shown by Alon and Orlitsky (IEEE Trans. Inform. Theory 42(5):1329–1339, 1996) to play a fundamental role in the problem of coding with side information. In this paper, we consider the minimum entropy coloring problem from a computational point of view. We first prove that this problem is NP-hard on interval graphs. We then show that, for every constant ε>0, it is NP-hard to find a coloring whose entropy is within (1−ε)log n of the chromatic entropy, where n is the number of vertices of the graph. A simple polynomial case is also identified. It is known that graph entropy is a lower bound for the chromatic entropy. We prove that this bound can be arbitrarily bad, even for chordal graphs. Finally, we consider the minimum number of colors required to achieve minimum entropy and prove a Brooks-type theorem. S. Fiorini acknowledges support from the Fonds National de la Recherche Scientifique and GERAD-HEC Montréal. G. Joret is a F.R.S.-FNRS Research Fellow.
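The quantity being minimized, the entropy of a coloring, is the entropy of the color of a uniformly random vertex; a small sketch (function name is illustrative):

```python
from collections import Counter
from math import log2

def coloring_entropy(coloring):
    """Entropy of a coloring: `coloring` maps each vertex to its color.
    With color classes of sizes n_1, ..., n_k over n vertices, the value is
    -sum (n_i/n) * log2(n_i/n); the chromatic entropy is the minimum of
    this over all proper colorings."""
    n = len(coloring)
    sizes = Counter(coloring.values()).values()
    return -sum((s / n) * log2(s / n) for s in sizes)
```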

11.
We study the extremal parameter N(n,m,H), which is the largest number of copies of a hypergraph H that can be formed of at most n vertices and m edges. Generalizing previous work of Alon (Isr. J. Math. 38:116–130, 1981), Friedgut and Kahn (Isr. J. Math. 105:251–256, 1998) and Janson, Oleszkiewicz and the third author (Isr. J. Math. 142:61–92, 2004), we obtain an asymptotic formula for N(n,m,H) which is strongly related to the solution α_q(H) of a linear programming problem, called here the fractional q-independence number of H. We observe that α_q(H) is a piecewise linear function of q and determine it explicitly for some ranges of q and some classes of H. As an application, we derive exponential bounds on the upper tail of the distribution of the number of copies of H in a random hypergraph.

12.
Given d>2 and a set of n grid points Q in d-dimensional space, we design a randomized algorithm that finds a w-wide separator, which is determined by a hyper-plane, in sublinear time, such that Q has at most points on either side of the hyper-plane, and at most points within distance to the hyper-plane, where c_d is a constant for fixed d. In particular, c_3=1.209. To the best of our knowledge, this is the first sublinear time algorithm for finding geometric separators. Our 3D separator is applied to derive an algorithm for the protein side-chain packing problem, which improves and simplifies the previous algorithm of Xu (Research in computational molecular biology, 9th annual international conference, pp. 408–422, 2005). This research was supported by the Louisiana Board of Regents fund under contract number LEQSF(2004-07)-RD-A-35. Part of this research was done while Bin Fu was with the Department of Computer Science, University of New Orleans, LA 70148, USA and the Research Institute for Children, 200 Henry Clay Avenue, New Orleans, LA 70118, USA.

13.
Greedy algorithms are simple, but their relative power is not well understood. The priority framework (Borodin et al. in Algorithmica 37:295–326, 2003) captures a key notion of “greediness” in the sense that it processes (in some locally optimal manner) one data item at a time, depending only on the current knowledge of the input. This algorithmic model provides a tool to assess the computational power and limitations of greedy algorithms, especially in terms of their approximability. In this paper, we study priority algorithm approximation ratios for the Subset-Sum Problem, focusing on the power of revocable decisions, whereby accepted data items can later be rejected to maintain the feasibility of the solution. We first provide a tight bound of α≈0.657 for irrevocable priority algorithms. We then show that the approximation ratio of fixed order revocable priority algorithms is between β≈0.780 and γ≈0.852, and the ratio of adaptive order revocable priority algorithms is between 0.8 and δ≈0.893. A preliminary version of this paper appeared in the Proceedings of COCOON 2007, LNCS 4598, pp. 504–514.
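To illustrate what a revocable decision means here (this is only an illustrative rule, not one of the algorithms analysed in the paper): when a new item does not fit under the bound, previously accepted items may be dropped to make room, and the swap is committed only if it improves the packed total.

```python
def revocable_greedy_subset_sum(items, B):
    """Illustrative revocable priority rule for Subset-Sum (maximize the
    packed total without exceeding B): an accepted item may later be
    revoked to make room for a new item, but only if the swap helps."""
    accepted = []
    for x in items:
        if sum(accepted) + x <= B:
            accepted.append(x)
            continue
        keep = sorted(accepted, reverse=True)
        while keep and sum(keep) + x > B:
            keep.pop()               # revoke the smallest kept item
        if x <= B and sum(keep) + x > sum(accepted):
            accepted = keep + [x]    # revocation pays off: commit to it
    return accepted
```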

14.
Let G=(V,E) be a graph without isolated vertices. A set S⊆V is a paired-dominating set if every vertex in V∖S is adjacent to a vertex in S and the subgraph induced by S contains a perfect matching. The paired-domination problem is to determine the paired-domination number, which is the minimum cardinality of a paired-dominating set. Motivated by a mistaken algorithm given by Chen, Kang and Ng (Discrete Appl. Math. 155:2077–2086, 2007), we present two linear time algorithms to find a minimum cardinality paired-dominating set in block and interval graphs. In addition, we prove that the paired-domination problem is NP-complete for bipartite graphs and chordal graphs, even for split graphs.
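The definition of a paired-dominating set translates directly into a checker; a brute-force sketch suitable only for tiny graphs, with the adjacency given as a dict of neighbour sets (representation and names are illustrative):

```python
def is_paired_dominating(adj, S):
    """Check whether S is a paired-dominating set of the graph given by
    adj (vertex -> set of neighbours): every vertex outside S must have a
    neighbour in S, and the subgraph induced by S must contain a perfect
    matching (found here by brute-force recursion)."""
    S = set(S)
    V = set(adj)
    if any(not (adj[v] & S) for v in V - S):   # domination check
        return False

    def has_perfect_matching(verts):
        if not verts:
            return True
        v = min(verts)
        return any(has_perfect_matching(verts - {v, u})
                   for u in adj[v] & verts if u != v)

    return len(S) % 2 == 0 and has_perfect_matching(frozenset(S))
```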

15.
Popular matchings: structure and algorithms   (total citations: 2; self-citations: 2; citations by others: 0)
An instance of the popular matching problem (POP-M) consists of a set of applicants and a set of posts. Each applicant has a preference list that strictly ranks a subset of the posts. A matching M of applicants to posts is popular if there is no other matching M′ such that more applicants prefer M′ to M than prefer M to M′. Abraham et al. (SIAM J. Comput. 37:1030–1045, 2007) described a linear time algorithm to determine whether a popular matching exists for a given instance of POP-M, and if so to find a largest such matching. A number of variants and extensions of POP-M have recently been studied. This paper provides a characterization of the set of popular matchings for an arbitrary POP-M instance in terms of a structure called the switching graph, a directed graph computable in linear time from the preference lists. We show that the switching graph can be exploited to yield efficient algorithms for a range of associated problems, including the counting and enumeration of the set of popular matchings, generation of a popular matching uniformly at random, finding all applicant-post pairs that can occur in a popular matching, and computing popular matchings that satisfy various additional optimality criteria. Our algorithms for computing such optimal popular matchings improve those described in a recent paper by Kavitha and Nasre (Proceedings of MATCH-UP: Matching Under Preferences—Algorithms and Complexity, 2008).
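The popularity criterion is a pairwise vote between matchings; a sketch of that comparison (M is popular iff no other matching wins the vote against it), with matchings given as applicant-to-post dicts and None for unmatched applicants (representation is illustrative):

```python
def prefers(pref, new_post, old_post):
    """True if an applicant with strict ranking `pref` (list of posts, best
    first) prefers `new_post` to `old_post`; any ranked post beats being
    unmatched (None)."""
    rank = {p: i for i, p in enumerate(pref)}
    return rank.get(new_post, float("inf")) < rank.get(old_post, float("inf"))

def popularity_vote(prefs, M1, M2):
    """Vote between two matchings: positive if more applicants prefer M2
    to M1 than prefer M1 to M2, negative in the opposite case."""
    votes = 0
    for a, pref in prefs.items():
        if prefers(pref, M2.get(a), M1.get(a)):
            votes += 1
        elif prefers(pref, M1.get(a), M2.get(a)):
            votes -= 1
    return votes
```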

16.
Approximation algorithms for connected facility location problems   (total citations: 1; self-citations: 1; citations by others: 0)
We study Connected Facility Location problems. We are given a connected graph G=(V,E) with a nonnegative edge cost c_e for each edge e∈E, a set of clients D⊆V such that each client j∈D has positive demand d_j, and a set of facilities F⊆V, each with a nonnegative opening cost f_i and the capacity to serve all client demands. The objective is to open a subset of the facilities, to assign each client j∈D to exactly one open facility i(j), and to connect all open facilities by a Steiner tree T such that the cost is minimized for a given input parameter M≥1. We propose an LP-rounding based 8.29-approximation algorithm, which improves the previous bound of 8.55 (Swamy and Kumar in Algorithmica 40:245–269, 2004). We also consider the problem when the opening costs of all facilities are equal. In this case we give a 7.0-approximation algorithm.
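Since the cost expression is elided above, the sketch below assumes the standard connected facility location objective used by Swamy and Kumar, namely opening costs plus demand-weighted assignment costs plus M times the Steiner tree cost; treat this as an assumption about the intended formula, not a statement of the paper's:

```python
def confl_cost(open_facilities, assignment, tree_edges, f, d, dist, c, M):
    """Assumed connected facility location objective (Swamy-Kumar style):
    sum of opening costs f[i], plus sum over clients j of d[j] times the
    distance dist[(j, i(j))] to the assigned open facility, plus M times
    the total edge cost c[e] of the Steiner tree T on the open facilities."""
    opening = sum(f[i] for i in open_facilities)
    assign = sum(d[j] * dist[(j, i)] for j, i in assignment.items())
    steiner = M * sum(c[e] for e in tree_edges)
    return opening + assign + steiner
```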

17.
A note on online strip packing   (total citations: 1; self-citations: 1; citations by others: 0)
In online strip packing we are asked to pack a list of rectangles one by one into a vertical strip of unit width, without any information about future rectangles. The goal is to minimize the total height of the strip used. The best known algorithm is the First Fit Shelf algorithm (Baker and Schwarz in SIAM J. Comput. 12(3):508–525, 1983), which has an absolute competitive ratio of 6.99 under the assumption that the height of each rectangle is bounded from above by one. We improve the shelf algorithm and show an absolute competitive ratio of without the restriction on rectangle heights. Our algorithm also beats the best known online algorithm for parallel job scheduling. Ye’s research was supported by NSFC (10601048). Zhang’s research was supported by NSFC (60573020).
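For background, the shelf idea behind First Fit Shelf can be sketched as follows (the parameter choice and details are illustrative, not the improved algorithm of this paper): rectangles are grouped into shelves whose heights are powers of a parameter r, and each rectangle goes into the first open shelf of its height class with enough spare width.

```python
import math

def shelf_pack(rects, r=2.0):
    """Shelf-style online packing of rectangles (w, h) with 0 < w <= 1 into
    a strip of width 1.  A rectangle of height h goes on a shelf of height
    r**k, where r**(k-1) < h <= r**k, into the first such shelf with enough
    spare width (first fit); otherwise a new shelf of that height is opened
    on top.  Returns the total strip height used."""
    shelves = []          # each shelf: [height class k, used width]
    total_height = 0.0
    for w, h in rects:
        k = math.ceil(math.log(h, r))        # smallest k with r**k >= h
        for shelf in shelves:
            if shelf[0] == k and shelf[1] + w <= 1.0:
                shelf[1] += w
                break
        else:
            shelves.append([k, w])
            total_height += r ** k
    return total_height
```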

18.
19.
More than previous mild recessions, the current global crisis calls into question the merits of a model that held sway for almost three decades. Celebrated in the nineties in a “National Bestseller”, which even President Clinton considered to be a “blueprint” “for all officials to read”, the book, Reinventing Government, preached the reform of government in private-sector ways, in stark contrast to Bureaucracy, which its authors considered “bankrupt”. Osborne and Gaebler (1993) promoted debureaucratization, which they summed up as decentralization, deregulation, downsizing and outsourcing. It is time to revisit the assumptions of the Osborne-Gaebler model, which has demonstrably failed. More than the model itself, however, the ways of its promotion in the market of ideas invite a word of caution: it prompted simplistic distortions, which typically followed from superficial analyses of Weber’s seminal opus (Timsit 2008:864–875). This paper reconsiders these two contrasting models, though not in “black” and “white”. Rather, the two are explored in dialectical terms, as outcomes of shifting ideologies, often taken to extremes.

20.
A Branch and Cut solver for the maximum stable set problem   (total citations: 1; self-citations: 1; citations by others: 0)
This paper deals with the cutting-plane approach to the maximum stable set problem. We provide theoretical results regarding the facet-defining property of inequalities obtained by a known project-and-lift-style separation method called edge-projection, and its variants. An implementation of a Branch and Cut algorithm is described, which uses edge-projection and two other separation tools that have been discussed for other problems: local cuts (pioneered by Applegate, Bixby, Chvátal and Cook) and mod-k cuts. We compare the performance of this approach to another one by Rossi and Smiriglio (Oper. Res. Lett. 28:63–74, 2001) and discuss the value of the tools we have tested.
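For context, a cutting-plane code for the maximum stable set problem typically starts from the standard integer programming formulation below and strengthens it with further cuts such as the paper's edge-projection, local and mod-k cuts (which are not reproduced here); the families shown are the classical valid inequalities for the stable set polytope.

```latex
% Standard formulation and classical valid inequalities (not the paper's cuts)
\begin{aligned}
\max \quad & \sum_{v \in V} x_v \\
\text{s.t.} \quad & x_u + x_v \le 1 && \forall\, uv \in E, \\
& \sum_{v \in C} x_v \le 1 && \text{for every clique } C \subseteq V, \\
& \sum_{v \in H} x_v \le \tfrac{|H|-1}{2} && \text{for every odd hole } H, \\
& x_v \in \{0,1\} && \forall\, v \in V.
\end{aligned}
```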
