Similar Documents
20 similar documents found (search time: 0 ms)
1.
Suppose G is a graph. Two edges e and e′ in G are said to be adjacent if they share a common end vertex, and distance two apart if they are nonadjacent but both are adjacent to a common edge. Let j and k be two positive integers. An L(j,k)-edge-labeling of a graph G is an assignment of nonnegative integers, called labels, to the edges of G such that the difference between labels of any two adjacent edges is at least j, and the difference between labels of any two edges that are distance two apart is at least k. The minimum range of labels over all L(j,k)-edge-labelings of a graph G is called the L(j,k)-edge-labeling number of G, denoted by $\lambda_{j,k}'(G)$. Let m, j and k be positive integers. An m-circular-L(j,k)-edge-labeling of a graph G is an assignment f from {0,1,…,m−1} to the edges of G such that, for any two edges e and e′, $|f(e)-f(e')|_m \ge j$ if e and e′ are adjacent, and $|f(e)-f(e')|_m \ge k$ if e and e′ are distance two apart, where $|a|_m=\min\{a,m-a\}$. The minimum m such that G has an m-circular-L(j,k)-edge-labeling is called the circular-L(j,k)-edge-labeling number of G, denoted by $\sigma_{j,k}'(G)$. This paper investigates the L(1,1)-edge-labeling numbers, the L(2,1)-edge-labeling numbers and the circular-L(2,1)-edge-labeling numbers of the hexagonal lattice, the square lattice, the triangular lattice and the strong product of two infinite paths.
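The definitions translate directly into a small verifier. The sketch below (ours, not from the paper) checks both the L(j,k) and the m-circular conditions on a toy path graph with hypothetical labels.

```python
# Minimal sketch: verify an L(j,k)-edge-labeling and its m-circular variant,
# following the definitions above. Graph and labels are hypothetical examples.
from itertools import combinations

def edge_distance_classes(edges):
    """Return pairs of adjacent edges and pairs of edges at distance two."""
    adjacent, dist_two = [], []
    for e, f in combinations(edges, 2):
        if set(e) & set(f):                       # share a common end vertex
            adjacent.append((e, f))
        elif any(set(e) & set(g) and set(f) & set(g)
                 for g in edges if g not in (e, f)):
            dist_two.append((e, f))               # nonadjacent, common adjacent edge
    return adjacent, dist_two

def is_L_jk_edge_labeling(edges, label, j, k):
    adj, d2 = edge_distance_classes(edges)
    return (all(abs(label[e] - label[f]) >= j for e, f in adj) and
            all(abs(label[e] - label[f]) >= k for e, f in d2))

def is_circular_L_jk_edge_labeling(edges, label, j, k, m):
    circ = lambda a: min(a % m, (-a) % m)         # |a|_m = min{a, m - a}
    adj, d2 = edge_distance_classes(edges)
    return (all(circ(label[e] - label[f]) >= j for e, f in adj) and
            all(circ(label[e] - label[f]) >= k for e, f in d2))

# Path 0-1-2-3: consecutive edges are adjacent; the two end edges are distance two apart.
edges = [(0, 1), (1, 2), (2, 3)]
labels = {(0, 1): 0, (1, 2): 2, (2, 3): 4}
print(is_L_jk_edge_labeling(edges, labels, j=2, k=1))                 # True
print(is_circular_L_jk_edge_labeling(edges, labels, j=2, k=1, m=5))   # True
```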

2.
Suppose \(d\) is a positive integer. An \(L(d,1)\)-labeling of a simple graph \(G=(V,E)\) is a function \(f:V\rightarrow \mathbb{N}=\{0,1,2,\ldots\}\) such that \(|f(u)-f(v)|\ge d\) if \(d_G(u,v)=1\), and \(|f(u)-f(v)|\ge 1\) if \(d_G(u,v)=2\). The span of an \(L(d,1)\)-labeling \(f\) is the absolute difference between the maximum and minimum labels. The \(L(d,1)\)-labeling number, \(\lambda _d(G)\), is the minimum span over all \(L(d,1)\)-labelings of \(G\). Whittlesey et al. proved that \(\lambda _2(Q_n)\le 2^k+2^{k-q+1}-2\), where \(n\le 2^k-q\) and \(1\le q\le k+1\). As a consequence, \(\lambda _2(Q_n)\le 2n\) for \(n\ge 3\). In particular, \(\lambda _2(Q_{2^k-k-1})\le 2^k-1\). In this paper, we provide an elementary proof of this bound. We also study the \(L(1,1)\)-labeling number of \(Q_n\): a lower bound on \(\lambda _1(Q_n)\) is provided and \(\lambda _1(Q_{2^k-1})\) is determined.
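As a quick sanity check of the quoted bound, here is one worked instance (the parameter choice is ours, not the paper's):

```latex
% Take k = 3 and q = k + 1 = 4, so n \le 2^k - q = 4.
\[
  \lambda_2(Q_4) \;\le\; 2^{k} + 2^{k-q+1} - 2
               \;=\; 2^{3} + 2^{0} - 2
               \;=\; 8 + 1 - 2 \;=\; 7 \;=\; 2^{3}-1,
\]
% which is exactly the special case \lambda_2(Q_{2^k-k-1}) \le 2^k - 1,
% since 2^3 - 3 - 1 = 4.
```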

3.
Since Sedláček introduced the notion of magic labeling of a graph in 1963, a variety of magic labelings of graphs have been defined and studied. In this paper, we study consecutive edge magic labelings of a connected bipartite graph. We make the useful observation that there are only four possible values of b for which a connected bipartite graph has a b-edge consecutive magic labeling. On the basis of this fundamental result, we deduce various interesting results on consecutive edge magic labelings of bipartite graphs. In fact, we do not focus only on specific classes of graphs, but also discuss the more general classes of bipartite and non-bipartite graphs.

4.
5.
Environmental Protection Agency (EPA) ambient air quality guidelines are meant to limit long-term exposures to toxins to safe levels. Unfortunately, there is little guidance on what constitutes a safe level for a one-time (or very infrequent) short exposure. In the case of mercury, a review of the derivation of the EPA ambient air quality standard shows that it implicitly assumes a tissue burden model. The time dependence of the tissue burden is commonly described in terms of a half-life, a modeling assumption that presumes that the decline in the tissue burden after a single exposure can be approximately described as an exponential decay. In this article, we use a simple exponential tissue burden model to derive a time-dependent no observable adverse effect level (NOAEL) for mercury concentrations in air. The model predicts that the tissue body burden will asymptotically approach the EPA air quality level for long exposure times, and reach workplace standard levels for exposures of a few hours. The model was used along with data on mercury levels from experimental work done by the Maine Department of Environmental Protection to evaluate the risks from a broken compact fluorescent lamp in a residential setting. Mercury levels approached the NOAEL only when the debris was left in an almost sealed room. Normal, common-sense cleaning measures (removal of debris to an outside area and ventilation of the room for several minutes) reduced exposures to less than 1% of the NOAEL.
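To make the model concrete, the sketch below implements a first-order (exponential) body-burden balance and the resulting time-dependent NOAEL. The function name and all numeric values (reference concentration, half-life) are illustrative placeholders, not the paper's data.

```python
# Minimal sketch of an exponential tissue-burden model: the air concentration
# allowed for a short exposure so that the peak body burden matches continuous
# exposure at a chronic reference level. All numbers are made-up placeholders.
import math

def time_dependent_noael(c_ref, t_exposure_h, half_life_days):
    """With elimination rate lam = ln2 / half_life, continuous intake at level C
    gives burden B(t) = (C*u/lam) * (1 - exp(-lam*t)), where u lumps breathing
    rate and absorption. Equating the short-exposure peak to the long-term
    asymptote at c_ref cancels u, so only c_ref, duration and half-life matter."""
    lam = math.log(2) / (half_life_days * 24.0)      # elimination rate per hour
    short_fraction = 1.0 - math.exp(-lam * t_exposure_h)
    return c_ref / short_fraction                     # asymptote factor is 1.0

# Example: hypothetical reference level 0.3 ug/m^3, hypothetical 60-day half-life.
for hours in (1, 4, 24):
    print(hours, "h exposure NOAEL ~",
          round(time_dependent_noael(0.3, hours, 60.0), 1), "ug/m^3")
```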

6.
An efficient approach for large scale graph partitioning
In this paper, we consider the problem of partitioning the set of vertices of a graph into k subsets of approximately the same cardinality in such a way that the number of edges whose endpoints are in different subsets is minimized. A greedy heuristic, called Procedure 1, based on the independent growth of each subset by fronts, is proposed for constructing a good-quality graph partition. In addition, we present a more sophisticated version of the greedy heuristic, called Procedure 2, which periodically calls a routine to refine the partition being built. We show that the partitions generated by Procedure 1 are competitive with those obtained by several constructive heuristics in the literature, e.g. spectral, geometric, as well as other greedy heuristics. Moreover, the partitions produced by Procedure 2 are compared with those produced by a leading graph partitioning method in the literature.
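The following sketch illustrates the general "grow each subset by fronts" idea with a plain BFS-layer greedy; it is our own simplification, not the authors' exact Procedure 1 or Procedure 2.

```python
# Minimal sketch: grow each of the k parts as a breadth-first front from a seed
# until it reaches roughly n/k vertices, then report the resulting cut size.
from collections import deque

def greedy_front_partition(adj, k):
    """adj: dict vertex -> set of neighbours. Returns k vertex sets of roughly
    equal cardinality, each grown front by front from an unassigned seed."""
    n = len(adj)
    target = -(-n // k)                      # ceil(n / k): target part size
    unassigned = set(adj)
    parts = []
    for _ in range(k):
        part = set()
        if unassigned:
            frontier = deque([next(iter(unassigned))])
            while frontier and len(part) < target:
                v = frontier.popleft()
                if v not in unassigned:
                    continue
                part.add(v)
                unassigned.discard(v)
                frontier.extend(u for u in adj[v] if u in unassigned)
        parts.append(part)
    for v in list(unassigned):               # leftovers (e.g. disconnected pieces)
        min(parts, key=len).add(v)
    return parts

def cut_size(adj, parts):
    where = {v: i for i, p in enumerate(parts) for v in p}
    return sum(1 for v in adj for u in adj[v] if v < u and where[v] != where[u])

# 6-cycle split into 2 parts; the expected cut has 2 edges.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
parts = greedy_front_partition(adj, 2)
print(parts, "cut =", cut_size(adj, parts))
```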

7.
Journal of Combinatorial Optimization - Keyword-aware optimal route queries are a combinatorial optimization problem with three factors, namely keyword coverage, route budget constraint and route...

8.
Fang and Qi (Optim. Methods Softw. 18:143–165, 2003) introduced a new generalized network flow model called manufacturing network flow model for manufacturing process modeling. A key distinguishing feature of such models is the assembling of component raw-materials, in a given proportion, into an end-product. This assembling operation cannot be modeled using usual generalized networks (which allow gains and losses in flows), or using multi-commodity networks (which allow flows of multiple commodity types on a single arc). The authors developed a network-simplex-based algorithm to solve a minimum cost flow problem formulated on such a generalized network and indicated systems of linear equations that need to be solved during the course of the network-simplex-based solution procedure. In this paper, it is first shown how various steps of the network-simplex-based solution procedure can be performed efficiently using appropriate data structures. Further, it is also shown how the resulting system of linear equations can be solved directly on the generalized network.
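For illustration (our notation, not the authors'), the assembling operation couples component flows through fixed proportions, which is what the independent gain factors of ordinary generalized-network arcs cannot express:

```latex
% An assembling node that turns component flows x_1,\dots,x_p into y units of
% end-product in fixed proportions a_1 : a_2 : \dots : a_p imposes the coupled
% constraints
\[
  x_i \;=\; a_i\, y, \qquad i = 1,\dots,p,
\]
% i.e. all component flows are tied to a single end-product flow variable.
```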

9.
We study temporary storage of fresh produce in a cross-dock center. In order to minimize cooling cost, compact storage systems are used. A major disadvantage of these systems is the additional retrieval time caused by the reshuffles that become necessary when unit loads are stored in an improper sequence. In practice, therefore, a dedicated storage policy is used in which every storage lane in the system accommodates only one product. However, this policy does not use the planned arrival time information of the outbound trucks. To exploit this information, this study proposes a mathematical model for a shared storage policy that minimizes total retrieval time. The policy allows different products to share the same lane. In order to solve real-sized problems, an effective and efficient heuristic is proposed, based on a greedy construction part and an improvement part, which provides near-optimal solutions. The gaps between the results of the heuristic and the lower bound are mostly less than 1%. The resulting shared storage policy is generally robust against disturbances in arrival or departure times. We compare our shared storage heuristic with dedicated storage to determine which policy performs best under which circumstances. For most practical cases, shared storage appears to outperform dedicated storage, with a shorter response time and better storage lane utilization.

10.
The \(k\)-distance total domination problem is to find a minimum vertex set \(D\) of a graph such that every vertex of the graph is within distance \(k\) from some vertex of \(D\) other than itself, where \(k\) is a fixed positive integer. In the present paper, by using a labeling method, we design an efficient algorithm for solving the \(k\)-distance total domination problem on block graphs, a superclass of trees.
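The defining property is easy to check directly; the sketch below verifies whether a given set is a k-distance total dominating set of an arbitrary adjacency-list graph. It does not reproduce the paper's labeling algorithm for block graphs.

```python
# Minimal sketch: check the k-distance total domination property from the
# definition above (every vertex is within distance k of some vertex of D
# other than itself).
from collections import deque

def within_k(adj, source, k):
    """Vertices at distance 1..k from source (source itself excluded)."""
    dist = {source: 0}
    reached = set()
    q = deque([source])
    while q:
        v = q.popleft()
        if dist[v] == k:
            continue
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                reached.add(u)
                q.append(u)
    return reached

def is_k_distance_total_dominating(adj, D, k):
    return all(any(v in within_k(adj, d, k) for d in D) for v in adj)

# Path 0-1-2-3-4: {1, 2, 3} is a total dominating set (the k = 1 case).
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_k_distance_total_dominating(adj, {1, 2, 3}, k=1))   # True
```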

11.
G.G. Hitchings, M. Cottam. Omega, 1976, 4(2): 205-214
The limited ability of schematic procedures, the constraints of linear programming techniques, the inflexibility of construction methods and the inadequacy of dynamic programming approaches to provide acceptable solutions to realistic-size layout design problems have led to an ever increasing interest in heuristic procedures. Comparative studies have shown that heuristic procedures can satisfy more of the desirable qualities of an ‘ideal algorithm’ than competing techniques. Excessive computational effort, which has been one of the main criticisms levelled against heuristic approaches in the past, proves to be less of a problem in relation to layout design. By utilizing unique attributes associated with real-life problems and applying small, incisive modifications, it is demonstrated how a heuristic procedure can be significantly improved.

12.

Dynamic multi-objective optimization algorithms are powerful methods for solving many real-world problems. Diversity, convergence, and adaptation to environment changes are three of the most important factors that dynamic multi-objective optimization algorithms try to improve. These factors are functions of the exploration, exploitation, selection and adaptation operators, so effective operators should be employed to achieve a robust dynamic optimization algorithm. The algorithm presented in this study, known as the spread-based dynamic multi-objective algorithm (SBDMOA), uses bi-directional mutation and convex crossover operators to exploit and explore the search space. The selection operator of the proposed algorithm is inspired by the spread metric to maximize diversity. When the environment changes, the proposed algorithm removes the dominated solutions and mutates all the non-dominated solutions to adapt to the new environment. The selection operator is then used to select desirable solutions from the population of non-dominated and mutated solutions. Generational distance, spread, and hypervolume metrics are employed to evaluate the convergence and diversity of solutions. The overall performance of the proposed algorithm is evaluated and investigated on FDA, DMOP, JY, and the heating optimization problem, by comparing it with the DNSGAII, MOEA/D-SV, DBOEA, KPEA, D-MOPSO, KT-DMOEA, Tr-DMOEA and PBDMO algorithms. Empirical results demonstrate the superiority of the proposed algorithm in comparison to other state-of-the-art algorithms.
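As a rough illustration of the two variation operators named above, the sketch below gives textbook-style convex crossover and bi-directional mutation on real-valued decision vectors; the exact operators of SBDMOA may differ.

```python
# Minimal sketches of a convex crossover and a bi-directional mutation on
# real-valued decision vectors (generic forms, not necessarily SBDMOA's own).
import random

def convex_crossover(p1, p2):
    """Child is a componentwise convex combination of the two parents."""
    lam = random.random()
    return [lam * a + (1.0 - lam) * b for a, b in zip(p1, p2)]

def bidirectional_mutation(x, lower, upper, step=0.1):
    """Perturb each component either towards its lower or its upper bound."""
    child = []
    for xi, lo, hi in zip(x, lower, upper):
        if random.random() < 0.5:
            xi = xi - step * (xi - lo)      # move towards the lower bound
        else:
            xi = xi + step * (hi - xi)      # move towards the upper bound
        child.append(min(max(xi, lo), hi))
    return child

p1, p2 = [0.2, 0.8, 0.5], [0.9, 0.1, 0.4]
print(convex_crossover(p1, p2))
print(bidirectional_mutation(p1, lower=[0.0] * 3, upper=[1.0] * 3))
```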


13.
Omega, 2004, 32(3): 213-219
The super-efficiency data envelopment analysis (DEA) model can be used to rank the performance of efficient decision making units (DMUs). Because of the infeasibility problem associated with the super-efficiency DEA model, ranking has been restricted to the model in which constant returns to scale and proportional changes in all inputs or all outputs are assumed. In fact, when super-efficiency is used as an efficiency stability measure, infeasibility means the highest super-efficiency. However, if super-efficiency is interpreted as the input saving or output surplus achieved by a specific efficient DMU, infeasibility does not necessarily mean the highest super-efficiency. In order to obtain a complete ranking of efficient DMUs when the two assumptions are relaxed, a modified super-efficiency DEA model is proposed to overcome the infeasibility problem and to correctly capture the super-efficiency represented by the input saving or the output surplus. The current paper suggests using both input- and output-oriented super-efficiency models to characterize the super-efficiency when infeasibility occurs. As a result, we can rank the efficient DMUs even if infeasibility occurs. The approach is applied to the 20 largest Japanese companies and to 15 US cities, respectively.
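For reference, the sketch below sets up the classical input-oriented CCR super-efficiency LP (the evaluated DMU is excluded from the reference set) with scipy. It is the standard model whose possible infeasibility motivates the paper, not the modified model proposed there; the data set is made up.

```python
# Minimal sketch of the standard input-oriented CCR super-efficiency LP.
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, o):
    """X: (m inputs x n DMUs), Y: (s outputs x n DMUs); evaluate DMU index o.
    Returns theta, or None if the LP is infeasible (the case discussed above)."""
    m, n = X.shape
    s = Y.shape[0]
    keep = [j for j in range(n) if j != o]          # reference set excludes DMU o
    c = np.zeros(1 + len(keep)); c[0] = 1.0         # variables [theta, lambda_j]; min theta
    # inputs:  sum_j lam_j x_ij - theta x_io <= 0
    A_in = np.hstack([-X[:, [o]], X[:, keep]])
    # outputs: -sum_j lam_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y[:, keep]])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(None, None)] + [(0, None)] * len(keep),
                  method="highs")
    return res.x[0] if res.success else None

# Tiny illustrative data: 4 DMUs, 1 input, 1 output (numbers are invented).
X = np.array([[2.0, 3.0, 4.0, 5.0]])
Y = np.array([[2.0, 4.0, 4.0, 3.0]])
for o in range(4):
    print("DMU", o, "super-efficiency:", super_efficiency(X, Y, o))
```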

14.
In this paper, an efficient tabu search algorithm is presented for solving the single-machine mean tardiness problem. The proposed implementation of the tabu search approach uses simple techniques for generating neighbourhoods of a given sequence and a combined scheme for intensification and diversification. The tabu search method is shown to produce results very close to the optimal solution on randomly generated problems with varying degrees of difficulty.
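A minimal sketch of such a tabu search with a swap neighbourhood and a simple aspiration rule is shown below; the neighbourhood, tenure and aspiration choices are generic, not necessarily those of the paper.

```python
# Minimal sketch: tabu search over job swaps for single-machine mean tardiness.
from itertools import combinations

def mean_tardiness(seq, p, d):
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += max(0, t - d[j])
    return total / len(seq)

def tabu_search(p, d, iters=200, tenure=7):
    seq = sorted(range(len(p)), key=lambda j: d[j])      # EDD starting sequence
    best, best_val = seq[:], mean_tardiness(seq, p, d)
    tabu = {}                                            # swapped job pair -> expiry iteration
    for it in range(iters):
        candidates = []
        for i, j in combinations(range(len(seq)), 2):
            nb = seq[:]; nb[i], nb[j] = nb[j], nb[i]
            val = mean_tardiness(nb, p, d)
            key = (min(seq[i], seq[j]), max(seq[i], seq[j]))
            # aspiration: a tabu move is allowed if it improves the best solution
            if tabu.get(key, -1) < it or val < best_val:
                candidates.append((val, nb, key))
        if not candidates:
            break
        val, seq, key = min(candidates, key=lambda c: c[0])
        tabu[key] = it + tenure
        if val < best_val:
            best, best_val = seq[:], val
    return best, best_val

p = [4, 2, 6, 3, 5]
d = [6, 4, 12, 7, 10]
print(tabu_search(p, d))
```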

15.
A major step in effectively managing radio resources in a cellular network is to design an appropriate scheme for assigning cells to a location area (LA) serviced by a switch, and for allocating resources to individual switches. However, this assignment has been proven in the literature to be an NP-hard problem [Merchant A, Sengupta B. Assignment of cells to switches in PCS networks. IEEE/ACM Transactions on Networking 3(5) (1995) 521–6] that requires efficient heuristic search techniques for obtaining real-time solutions. This work presents a state-space search technique that is a variant of the best-first search heuristic. The algorithm, called block depth first search (BDFS), allocates cells to switches during switch-level resource planning. Under various simulated performance criteria, we compare the performance of the proposed technique with other similar procedures in the literature. Our results indicate that BDFS outperforms the meta-heuristic procedures in terms of both efficiency and quality of solutions. Hence, we conclude that our proposed technique can be effectively used for switch-level planning, leading to efficient management of scarce radio resources in cellular networks.

16.
In this paper, we study the general problem of one-dimensional periodic task scheduling under a storage requirement, irrespective of machine constraints. We previously presented in (Touati and Eisenbeis, Parallel Process. Lett. 14(2):287–313, 2004) a theoretical framework that allows the periodic storage requirement of a cyclic schedule to be optimised exactly. Since this optimisation problem is NP-hard (Touati, PhD thesis, 2002), solving an exact integer linear programming formulation is too expensive in practice. In this article, we propose an efficient two-step heuristic that exploits the model's properties to achieve fast computation times while providing highly satisfactory results. The method first solves an integer linear program with a totally unimodular constraint matrix, and then solves a linear assignment problem. Our heuristic is implemented in an industrial compiler for embedded VLIW processors.
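The second step can be illustrated directly: once the first step has produced a cost matrix coupling tasks to storage slots (the matrix below is made up), the remaining coupling is a linear assignment problem, solvable exactly in polynomial time.

```python
# Minimal sketch of the assignment step, using the Hungarian-method solver
# in scipy. The cost matrix is a hypothetical placeholder.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost[i][j]: extra storage incurred if task i is assigned slot j.
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), "total cost =", cost[rows, cols].sum())
```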

17.

This paper investigates and suggests an efficient solution to the problem of scheduling the steel-making line in a mini steel mill, which consists of three major processes: molten steel making, continuous slab casting, and hot charged rolling. Careful synchronization of these processes is a key productivity factor, since only a very limited amount of work-in-process inventory is allowed. Since each process must run in batches, the schedule for the mini-mill consists of grouping and sequencing lots for each process. However, each process has its own criteria for judging the quality of its lot grouping, and these often conflict with those of the other processes. An efficient scheduling algorithm for the mini-mill is proposed. Numerical experiments with real-world data suggest that the proposed algorithm yields satisfactory schedules very efficiently. The algorithm is currently used for the actual scheduling of a mini-mill in Korea.

18.
19.
20.
In a previous work we proposed a variable-fixing heuristic for the 0-1 multidimensional knapsack problem (01MDK). This approach uses fractional optima calculated in hyperplanes which contain the binary optimum. This algorithm obtained the best lower bounds on the OR-Library benchmarks. Although it is very attractive in terms of results, this method does not prove the optimality of the solutions found and may fix variables to non-optimal values. In this paper, we propose an implicit enumeration based on a reduced-cost analysis which tends to fix non-basic variables to their exact values. The combination of two specific constraint propagations based on reduced costs and an efficient enumeration framework enables us both to fix variables and to significantly prune the search tree. Experimentally, our work provides two main contributions: (1) we obtain several new optimal solutions on hard instances of the OR-Library and (2) we reduce the bounds on the number of items at the optimum for several harder instances.
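For context, the usual reduced-cost fixing argument behind such propagations can be stated as follows (our formulation of the standard rule, not the paper's exact test):

```latex
% Let z_{LP} be the LP-relaxation optimum of the maximization problem, z^* the
% value of the best known feasible solution, and \bar{c}_j \le 0 the reduced cost
% of a non-basic variable x_j at value 0. Forcing x_j = 1 cannot yield more than
% z_{LP} + \bar{c}_j, so with an integer objective
\[
  z_{LP} + \bar{c}_j \;<\; z^* + 1 \quad\Longrightarrow\quad
  x_j = 0 \text{ in every solution better than } z^*,
\]
% and, symmetrically, a non-basic variable at its upper bound 1 can be fixed to 1
% when relaxing it cannot beat z^*.
```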
