Similar Articles
20 similar articles found.
1.
We consider the problem of off-line throughput maximization for job scheduling on one or more machines, where each job has a release time, a deadline, and a profit. Most of the versions of the problem discussed here were already treated by Bar-Noy et al. (Proc. 31st ACM STOC, 1999, pp. 622–631; http://www.eng.tau.ac.il/amotz/). Our main contribution is to provide algorithms that do not use linear programming, are simple, and are much faster than the corresponding ones proposed in Bar-Noy et al. (ibid., 1999), while either matching or improving their quality of approximation. More precisely, compared to the results in Bar-Noy et al. (ibid., 1999), our pseudo-polynomial algorithm for multiple unrelated machines and all of our strongly polynomial algorithms have better performance ratios; all of our algorithms run much faster, are combinatorial in nature, and avoid linear programming. Finally, we show that algorithms with performance ratios better than 2 are possible if the stretch factors of the jobs are bounded; a straightforward consequence of this result is an improvement of the ratio of an optimal solution of the integer programming formulation of the JISP2 problem (see Spieksma, Journal of Scheduling, vol. 2, pp. 215–227, 1999) to its linear programming relaxation.
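In the special case where each job's feasible window exactly fits its processing time, every job is a fixed interval with a profit and throughput maximization reduces to classic weighted interval scheduling, solvable exactly by dynamic programming. A minimal sketch of that special case (not the combinatorial approximation algorithms of the paper):

```python
# Weighted interval scheduling: each job is a fixed interval (its window
# [release, deadline] exactly fits its processing time) with a profit.
# Illustrative special case only -- not the paper's algorithms.
from bisect import bisect_right

def max_profit(jobs):
    """jobs: list of (start, finish, profit); returns the maximum total
    profit of a set of pairwise non-overlapping intervals."""
    jobs = sorted(jobs, key=lambda j: j[1])          # order by finish time
    finishes = [f for _, f, _ in jobs]
    best = [0] * (len(jobs) + 1)                     # best[i]: optimum over first i jobs
    for i, (s, f, p) in enumerate(jobs, 1):
        j = bisect_right(finishes, s, 0, i - 1)      # jobs 0..j-1 end by time s
        best[i] = max(best[i - 1], best[j] + p)      # skip job i, or take it
    return best[-1]
```

The general problem with distinct release times, deadlines, and processing times is NP-hard, which is why the paper settles for approximation.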

2.
In this paper, we study the complexity and the approximation of the k most vital edges (nodes) and min edge (node) blocker versions of the minimum spanning tree problem (MST). We show that the k most vital edges MST problem is NP-hard even for complete graphs with weights 0 or 1 and 3-approximable for graphs with weights 0 or 1. We also prove that the k most vital nodes MST problem is not approximable within a factor $n^{1-\varepsilon}$, for any $\varepsilon>0$, unless NP=ZPP, even for complete graphs of order n with weights 0 or 1. Furthermore, we show that the min edge blocker MST problem is NP-hard even for complete graphs with weights 0 or 1 and that the min node blocker MST problem is NP-hard to approximate within a factor 1.36 even for graphs with weights 0 or 1.
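To make the objective concrete: the k most vital edges are those whose joint removal increases the MST weight the most. A brute-force reference solver for tiny instances (exponential in the number of edges; the paper's point is precisely that this problem is hard in general):

```python
# Brute-force illustration of the "k most vital edges" MST problem:
# delete the k edges whose removal increases the MST weight the most.
# Only for tiny graphs; the paper studies hardness and approximation.
from itertools import combinations

def mst_weight(n, edges):
    """Kruskal's algorithm; edges are (weight, u, v).  Returns the MST
    weight, or None if the remaining graph is disconnected."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]            # path halving
            x = parent[x]
        return x
    total, used = 0, 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
            used += 1
    return total if used == n - 1 else None

def k_most_vital(n, edges, k):
    """Max MST weight achievable by deleting k edges (ignoring deletions
    that disconnect the graph)."""
    best = None
    for removed in combinations(range(len(edges)), k):
        rest = [e for i, e in enumerate(edges) if i not in removed]
        w = mst_weight(n, rest)
        if w is not None and (best is None or w > best):
            best = w
    return best
```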

3.
In 2014, Desormeaux et al. (Discrete Math 319:15–23, 2014) proved a relationship between the annihilation number and the 2-domination number of a tree. In this note, we provide a family of bounds for the 2-domination number of a tree based on the number of vertices of small degree. This family of bounds extends current bounds on the 2-domination number of a tree and provides an alternative proof of the relationship between the annihilation number and the 2-domination number of a tree shown by Desormeaux et al.
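For readers unfamiliar with the parameter: a set S is 2-dominating if every vertex outside S has at least two neighbours in S, and the 2-domination number is the size of a smallest such set. A brute-force computation for small graphs (illustrative only; the note's contribution is bounds, not an algorithm):

```python
# Brute-force 2-domination number: the smallest set S such that every
# vertex outside S has at least two neighbours inside S.  Exponential;
# for small illustrative graphs only.
from itertools import combinations

def two_domination_number(n, edges):
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for size in range(n + 1):                        # smallest size first
        for S in combinations(range(n), size):
            s = set(S)
            if all(v in s or len(adj[v] & s) >= 2 for v in range(n)):
                return size
    return n
```

Note that every leaf of a tree must belong to any 2-dominating set, which is why bounds in terms of low-degree vertices are natural.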

4.
The Steiner tree problem asks for the shortest tree connecting a given set of terminal points in a metric space. We design new approximation algorithms for the Steiner tree problem using a novel technique of choosing Steiner points in dependence on the possible deviation from the optimal solutions. We achieve the best approximation ratios known to date: 1.644 in an arbitrary metric space and 1.267 in the rectilinear plane.

5.
The problem of colouring a k-colourable graph is well known to be NP-complete for k ≥ 3. The MAX-k-CUT approach to approximate k-colouring is to assign k colours to all of the vertices in polynomial time such that the fraction of 'defect edges' (with endpoints of the same colour) is provably small. The best known approximation was obtained by Frieze and Jerrum (1997), using a semidefinite programming (SDP) relaxation which is related to the Lovász ϑ-function. In a related work, Karger et al. (1998) devised approximation algorithms for colouring k-colourable graphs exactly in polynomial time with as few colours as possible. They also used an SDP relaxation related to the ϑ-function. In this paper we further explore semidefinite programming relaxations where graph colouring is viewed as a satisfiability problem, as considered in De Klerk et al. (2000). We first show that the approximation to the chromatic number suggested in De Klerk et al. (2000) is bounded from above by the Lovász ϑ-function. The underlying semidefinite programming relaxation in De Klerk et al. (2000) involves a lifting of the approximation space, which in turn suggests a provably good MAX-k-CUT algorithm. We show that our algorithm is closely related to that of Frieze and Jerrum; thus we can sharpen their approximation guarantees for MAX-k-CUT for small fixed values of k. For example, for k = 3 we can improve their bound from 0.832718 to 0.836008, and for k = 4 from 0.850301 to 0.857487. We also give a new asymptotic analysis of the Frieze-Jerrum rounding scheme that provides a unifying proof of the main results of both Frieze and Jerrum (1997) and Karger et al. (1998) as k grows.
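A useful baseline against which these SDP guarantees are measured: a simple local search already cuts a (1 − 1/k) fraction of edges, since at a local optimum each vertex has at most deg(v)/k same-colour neighbours. A sketch of that baseline (far weaker than the SDP-based ratios discussed above):

```python
# Local search for MAX-k-CUT: move each vertex to the colour class that
# minimises its same-colour neighbours until no move helps.  At a local
# optimum at least a (1 - 1/k) fraction of edges is cut -- a weaker
# guarantee than the SDP rounding in the abstract, but trivially simple.
def max_k_cut_local(n, edges, k):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colour = [v % k for v in range(n)]               # arbitrary start
    improved = True
    while improved:                                   # each move strictly
        improved = False                              # reduces defect edges
        for v in range(n):
            cnt = [0] * k                             # neighbours per colour
            for u in adj[v]:
                cnt[colour[u]] += 1
            best = min(range(k), key=lambda c: cnt[c])
            if cnt[best] < cnt[colour[v]]:
                colour[v] = best
                improved = True
    cut = sum(colour[u] != colour[v] for u, v in edges)
    return colour, cut
```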

6.
This paper focuses on the distributed data-aggregation collision-free scheduling problem, an important issue in wireless sensor networks. Bo et al. (Proc. IEEE INFOCOM, 2009) proposed an approximate distributed algorithm for the problem, and Xu et al. (Proc. ACM FOWANC, 2009) proposed a centralized algorithm together with a distributed implementation that generates a collision-free schedule; these are the only two existing distributed algorithms. Unfortunately, there are a few mistakes in the performance analysis in Bo et al. (Proc. IEEE INFOCOM, 2009), and the distributed algorithm cannot achieve the same latency as the centralized algorithm because the distributed implementation is not an accurate implementation of the centralized algorithm (Xu et al. in Proc. ACM FOWANC, 2009). Accordingly, we propose an improved distributed algorithm to generate a collision-free schedule for data aggregation in wireless sensor networks. Instead of the arbitrary tree of Bo et al. (Proc. IEEE INFOCOM, 2009), a breadth-first search (BFS) tree rooted at the sink node is adopted, and a bounded latency of 61R+5Δ−67 is obtained for the schedule, where R is the radius of the network with respect to the sink node and Δ is the maximum node degree. We also correct the latency bound of the schedule in Bo et al. (Proc. IEEE INFOCOM, 2009) to 61D+5Δ−67, where D is the diameter of the network, and prove that our algorithm is more efficient than that algorithm (Bo et al. in Proc. IEEE INFOCOM, 2009). We also give a latency bound for the distributed implementation in Xu et al. (Proc. ACM FOWANC, 2009).

7.
In this paper, we consider an interesting variant of the classical facility location problem called the uncapacitated facility location problem with penalties (UFLWP for short), in which each client is either assigned to an opened facility or rejected by paying a penalty. The UFLWP problem has been effectively used to model the facility location problem with outliers. Three constant-factor approximation algorithms have been obtained (Charikar et al. in Proceedings of the Symposium on Discrete Algorithms, pp. 642–651, 2001; Jain et al. in J. ACM 50(6):795–824, 2003; Xu and Xu in Inf. Process. Lett. 94(3):119–123, 2005), and the best known performance ratio is 2. The only known hardness result is a 1.463-inapproximability result inherited from the uncapacitated facility location problem (Guha and Khuller in J. Algorithms 31(1):228–248, 1999). In this paper, we present a 1.8526-approximation algorithm for the UFLWP problem. Our algorithm significantly reduces the gap between the known performance ratio and the inapproximability result. Our algorithm first enhances the primal-dual method for the UFLWP problem (Charikar et al. in Proceedings of the Symposium on Discrete Algorithms, pp. 642–651, 2001) so that outliers can be recognized more efficiently, and then applies a local search heuristic (Charikar and Guha in Proceedings of the 39th IEEE Symposium on Foundations of Computer Science, pp. 378–388, 1999) to further reduce the cost of serving the non-rejected clients. Our algorithm is simple and can be easily implemented. The research of this work was supported in part by NSF through CAREER award CCF-0546509 and grant IIS-0713489. A preliminary version of this paper appeared in the Proceedings of the 11th Annual International Computing and Combinatorics Conference (COCOON'05).
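The UFLWP objective can be stated in a few lines: pick a set of facilities to open, and each client either connects to the cheapest open facility or pays its penalty, whichever is less. A brute-force reference solver for tiny instances (exponential in the number of facilities; useful only to sanity-check approximation algorithms on small inputs):

```python
# Exact UFLWP by exhaustive search over facility subsets: each client
# pays min(cheapest connection to an open facility, its penalty).
# Reference solver for tiny instances only.
from itertools import combinations

def uflwp_opt(open_costs, conn, penalties):
    """open_costs[i]: cost of opening facility i; conn[i][j]: cost of
    serving client j from facility i; penalties[j]: rejection penalty."""
    m, n = len(open_costs), len(penalties)
    best = sum(penalties)                            # open nothing, reject all
    for r in range(1, m + 1):
        for F in combinations(range(m), r):
            cost = sum(open_costs[i] for i in F)
            for j in range(n):
                cost += min(min(conn[i][j] for i in F), penalties[j])
            best = min(best, cost)
    return best
```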

8.
In this paper, we study the following disc covering problem: given a set of discs of various radii on the plane with centers on the grid points, find a subset of discs to maximize the area covered by exactly one disc. This problem originates from an application in digital halftoning, with the best known approximation factor being 5.83 (Asano et al., 2004). We show that if the radii are bounded between two positive constants, then there exists a polynomial time approximation scheme. Our techniques are based on the width-bounded geometric separator recently developed in Fu and Wang (2004) and Fu (2006). This research is supported by the Louisiana Board of Regents fund under contract number LEQSF(2004-07)-RD-A-35.

9.
This paper extends the study of the discrete problem of online scheduling and reliable lead time quotation (discrete Q-SLTQ) introduced by Keskinocak et al. (Manag. Sci. 47(2):264–279, 2001). We first relax the assumption on the revenue function from a linearly decreasing function to any decreasing function. We present an online deterministic strategy which is optimal in competitiveness for concave revenue functions. These results are further extended to the continuous Q-SLTQ model, where orders are released at arbitrary time points. For the discrete Q-SLTQ problem, if orders have nonuniform lengths, we prove the nonexistence of online strategies with bounded competitive ratios; if orders have unit length but various weights, we present an optimal online strategy.

10.
We estimate the dynamic conditional correlation multivariate GARCH model proposed by Engle (2002) to compute time-varying beta coefficients for the sector indices of the Shenzhen stock market over the period 2001/07/02–2005/07/15, and then subject the relationship between the systematic-risk beta and returns to both the traditional test and the conditional test proposed by Pettengill et al. (1995); we also examine the role of unsystematic risk and total risk in asset pricing. The results show that no traditional unconditional relationship exists between beta and returns. For some sector indices, beta and returns exhibit the conditional relationship: when the excess market return is positive (up market), beta and returns are positively related; when the excess market return is negative (down market), they are negatively related. For most indices, however, the relationship is significantly negative only in down markets. Both unsystematic risk and total risk are compensated, indicating that investors in the Shenzhen market have not fully diversified their portfolios and that the government should vigorously develop institutional investors.

11.
Interspecies Extrapolation: A Reexamination of Acute Toxicity Data
We reanalyze the acute toxicity data on cancer chemotherapeutic agents compiled by Freireich et al. (1) and Schein et al. (2) to derive coefficients of the allometric equation for scaling toxic doses across species (toxic dose = a·(body weight)^b). In doing so, we extend the analysis of Travis and White (Risk Analysis, 1988, 8, 119–125) by addressing uncertainties inherent in the analysis and by including the hamster data, previously not used. Through Monte Carlo sampling, we specifically account for measurement errors when deriving confidence intervals and testing hypotheses. Two hypotheses are considered: first, that the allometric scaling power (b) varies for chemicals of the type studied; second, that the same scaling power, or "scaling law," holds for all chemicals in the data set. Under the first hypothesis, in 95% of the cases the allometric power of body weight falls in the range 0.42 to 0.97, with a population mean of 0.74. Assuming the second hypothesis to be true (that the same scaling law is followed for all chemicals), the maximum likelihood estimate of the scaling power is 0.74; confidence bounds on the mean depend on the size of measurement error assumed. Under a "best case" analysis, 95% confidence bounds on the mean are 0.71 and 0.77, similar to the results reported by Travis and White. For alternative assumptions regarding measurement error, the confidence intervals are larger and include 0.67, but not 1.00. Although a scaling power of about 0.75 provides the best fit to the data as a whole, a scaling power of 0.67, corresponding to scaling per unit surface area, is not rejected when the nonhomogeneity of variances is taken into account. Hence, both surface-area and 0.75-power scaling are consistent with the Freireich et al. and Schein et al. data sets. To illustrate the potential impact of overestimating the scaling power, we compare reported human MTDs to values extrapolated from mouse LD10s.
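The allometric equation makes extrapolation mechanical: since dose = a·W^b, the constant a cancels when scaling between species, and only the weight ratio raised to b matters. A small sketch of that arithmetic; the exponents 0.67, 0.75, and 1.00 come from the abstract, while the body weights and mouse dose below are illustrative assumptions, not data from the study:

```python
# Cross-species dose extrapolation under toxic_dose = a * body_weight**b.
# The constant a cancels in the ratio between species, so only the
# weight ratio and the exponent b matter.
def scale_dose(dose_src, weight_src, weight_dst, b):
    """Scale a total tolerated dose from one species to another."""
    return dose_src * (weight_dst / weight_src) ** b

# Assumed, illustrative values (not from Freireich/Schein):
mouse_dose_mg = 5.0                  # hypothetical total mouse dose, mg
mouse_kg, human_kg = 0.025, 70.0     # typical body weights
for b in (0.67, 0.75, 1.00):
    print(b, round(scale_dose(mouse_dose_mg, mouse_kg, human_kg, b), 1))
```

The printout makes the abstract's concern tangible: the extrapolated human dose grows steeply with b, so assuming b = 1 (per-kilogram scaling) rather than about 0.75 substantially overestimates the tolerated human dose.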

12.
In this short note, we first improve the proof in Zhang et al. [1] to show the strict concavity of the unit-time total profit of the whole supply chain with respect to preservation technology investment, without approximation. We then generalize the model of Zhang et al. [1] to a broader class of market demand functions. Additionally, theoretical results are provided to illustrate the features of the proposed model.

13.
We study the optimal tree structure for the key management problem. In a key tree, when two or more leaves are deleted or replaced, the updating cost is defined to be the number of encryptions needed to securely update the remaining keys. The objective is to find the tree structure whose worst-case updating cost is minimum. We extend the result on the 1-replacement problem in Snoeyink et al. (Proceedings of the twentieth annual IEEE conference on computer communications, pp. 422–431, 2001) to the k-user case. We first show that the k-deletion problem can be solved by handling the k-replacement problem. Focusing on the k-replacement problem, we show that the optimal tree can be computed in $O(n^{(k+1)^{2}})$ time for constant k. We then derive a tight degree bound for the optimal tree when replacing two users. By investigating the maximum number of leaves that can be placed on the tree given a fixed worst-case replacement cost, we prove that a balanced tree with root degree d, 5 ≤ d ≤ 7, in which the numbers of leaves in the subtrees differ by at most one and each subtree is a 2-3 tree, always achieves the minimum worst-case 2-replacement cost. Thus the optimal tree for the two-user replacement problem can be constructed in O(n) time.

14.
We present a primal-dual ⌈log(n)⌉-approximation algorithm for the version of the asymmetric prize-collecting traveling salesman problem in which the objective is to find a directed tour that visits a subset of vertices such that the length of the tour plus the sum of penalties associated with vertices not in the tour is as small as possible. The previous algorithm for the problem (V.H. Nguyen and T.T. Nguyen in Int. J. Math. Oper. Res. 4(3):294–301, 2012), which is not combinatorial, is based on the Held-Karp relaxation and heuristic methods such as the heuristic of Frieze et al. (Networks 12:23–39, 1982) or the recent heuristic of Asadpour et al. for the ATSP (21st ACM-SIAM symposium on discrete algorithms, 2010). Depending on which of the two heuristics is used, it gives respectively 1+⌈log(n)⌉ and $3+ 8\frac{\log(n)}{\log(\log(n))}$ as the approximation ratio. Our algorithm achieves an approximation ratio of ⌈log(n)⌉, which is weaker than $3+ 8\frac{\log(n)}{\log(\log(n))}$ but represents the first combinatorial approximation algorithm for the asymmetric prize-collecting TSP.

15.
Using the widely adopted ultimate-ownership approach, this paper quantifies the private benefits of control extracted by large shareholders of Chinese listed companies and, based on a sample of 1,099 listed companies in 2004 and 2005, tests how effectively corporate governance constrains those private benefits. We find an inverted-U relationship between overall corporate governance and large shareholders' private benefits of control; this overall effect is the superposition of two governance mechanisms, supervisory-board governance and managerial governance, acting on private benefits of control, while the other four governance mechanisms show no significant constraining effect. We also find that firm characteristics affect large shareholders' private benefits of control to varying degrees.

16.

In a JIT manufacturing environment it may be desirable to learn from an archived history of data that reflects less than optimal factory performance. The purpose of this paper is to use rule induction to predict JIT factory performance from past data that reflects both poor (saturated or starved) and good (efficient) factory performance. Inductive learning techniques have previously been applied to JIT production systems (Markham et al., Computers and Industrial Engineering, 34, 717–726, 1998; Markham et al., International Journal of Manufacturing Technology Management, 11(4), 239–246, 2000), but only to data sets that reflected a well-performing factory. This paper presents an approach based on inductive learning in a JIT manufacturing environment that (1) accurately classifies and predicts factory performance based on shop factors, and (2) identifies the important relationships between the shop factors that determine factory performance. An example application is presented in which the classification and regression tree (CART) technique is used to predict saturated, starved, or efficient factory performance based on dynamic shop-floor data. This means that the relationships between the variables that cause poor factory performance can be discovered, and measures to assure efficient performance can then be taken.
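The flavor of rule induction here can be shown with a deliberately tiny stand-in for CART: learn a single threshold rule on one shop factor that best separates factory states. All feature values and labels below are fabricated for illustration; a real CART model would recursively split on many factors:

```python
# One-rule induction (a decision stump) on fabricated shop-floor data --
# a toy stand-in for CART that shows the shape of the task: find the
# threshold on a shop factor that best separates factory states.
def best_stump(xs, ys):
    """For a 1-D feature and binary labels, return (threshold,
    label_below, label_above) minimising misclassifications."""
    best = None
    for t in sorted(set(xs)):
        for lo, hi in ((0, 1), (1, 0)):              # try both orientations
            errs = sum((lo if x <= t else hi) != y for x, y in zip(xs, ys))
            if best is None or errs < best[0]:
                best = (errs, t, lo, hi)
    return best[1:]

# hypothetical shop factor: queue length; 1 = saturated, 0 = efficient
queue = [1, 2, 3, 8, 9, 12]
state = [0, 0, 0, 1, 1, 1]
t, lo, hi = best_stump(queue, state)
# learned rule: queue <= t  ->  efficient, else saturated
```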

17.
Fama regards earnings momentum and price momentum as the only two persistent and robust market anomalies that pose a genuine threat to the efficient market hypothesis. Following the methodologies of Chan et al. and of Chordia and Shivakumar, this paper examines the relationship between earnings momentum and price momentum in the Chinese stock market. The results partially support the noisy-proxy hypothesis proposed by Chordia and Shivakumar but do not support the multiple-information-sources hypothesis proposed by Chan et al. The distinctive information disclosure environment and investor behavior of the Chinese stock market can explain these findings.

18.
Assuming the decision maker is ambiguity averse, we use a real-options approach to derive closed-form expressions for the bank's value, the firm's value, and the optimal loan-stopping threshold. We analyze how ambiguity aversion and the benchmark volatility affect the optimal loan rate, the optimal loan-stopping threshold, firm value, and bank value. Numerical analysis shows that ambiguity aversion raises the loan rate and lowers both firm value and bank value. When the benchmark volatility is low, ambiguity aversion delays loan stopping; when the benchmark volatility is high, ambiguity aversion accelerates it. Moreover, under ambiguity neutrality, firm value is a convex function and bank value a concave function of the benchmark volatility, whereas under ambiguity aversion both decrease as the benchmark volatility increases. From a behavioral-finance perspective, this paper offers a new explanation for the high financing costs faced by small and medium-sized enterprises.

19.
We consider the preemptive bipartite scheduling problem (PBS) (Crescenzi et al., On approximating a scheduling problem, Journal of Combinatorial Optimization, vol. 5, pp. 287–297, 2001) arising in switching communication systems, where each input and output port can be involved in at most one communication at the same time. Given a set of communication tasks to be communicated from the transmitters to the receivers of such a system, we aim to find a schedule minimizing the overall transmission time. To achieve this, we allow the preemption of communication tasks. However, in practice preemption comes with a cost, d, and this renders the problem NP-hard (Gopal et al., An optimal switching algorithm for multibeam satellite systems with variable bandwidth beams, IEEE Trans. Commun., vol. 30, pp. 2475–2481, 1982). In this paper, we present an approximation algorithm, which is the first one for the PBS problem with an approximation ratio strictly less than two. Furthermore, we propose a simple optimal polynomial-time algorithm for a subclass of instances of the PBS problem. This work has been partially supported by the European Union (FET Working Group APPOL II) and the Greek General Secretariat of Research and Technology.

20.
In this paper, we consider the problem of enumerating all maximal motifs in an input string for the class of repeated motifs with wild cards. A maximal motif is a representative motif that is not properly contained in any larger motif with the same location list. Although the enumeration problem for maximal motifs with wild cards has been studied in Parida et al. (2001), Pisanti et al. (2003), and Pelfrêne et al. (2003), its output-polynomial time computability has remained open. The main result of this paper is a polynomial-space, polynomial-delay algorithm for the maximal motif enumeration problem for repeated motifs with wild cards. This algorithm enumerates all maximal motifs in an input string of length n in O(n^3) time per motif with O(n) space, in particular O(n^3) delay. The key of the algorithm is a depth-first search on a tree-shaped search route over all maximal motifs, based on a technique called prefix-preserving closure extension. We also show an exponential lower bound and a succinctness result on the number of maximal motifs, which indicate the limits of a straightforward approach. Computational experiments show that our algorithm is applicable in practice to huge string data such as genome data, and does not incur a large additional computational cost compared to usual frequent motif mining algorithms. This work was done during Hiroki Arimura's visit to LIRIS, University Claude Bernard Lyon 1, France.
