Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
This paper considers the on-line problem of nonpreemptively scheduling n independent jobs on m > 1 identical parallel machines with the objective of maximizing the minimum machine completion time. It is assumed that the values of the processing times are unknown, but the order of the jobs by their processing times is known in advance. We must decide the assignment of all jobs to machines at time zero, using only this ordinal data rather than the actual magnitudes of the jobs. Algorithms that solve the problem in this setting are called ordinal algorithms. In this paper, we give lower bounds and ordinal algorithms. We first propose an algorithm MIN for the general m-machine case, while the lower bound is the harmonic sum ∑_{i=1}^{m} 1/i; both the algorithm's competitive ratio and this lower bound are of order Θ(ln m). Furthermore, for m = 3, we present an optimal algorithm.
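To make the ordinal setting concrete, here is a minimal Python sketch. The round-robin rule below is only an illustrative ordinal assignment, not the paper's MIN algorithm, and the job times are made-up example data.

```python
def ordinal_assignment(n, m):
    """Assign job ranks 1..n (rank 1 = longest job) to machines 0..m-1
    in a simple round-robin fashion.  This is only an illustrative
    ordinal rule, not the MIN algorithm analysed in the paper."""
    return {rank: (rank - 1) % m for rank in range(1, n + 1)}

def min_completion_time(assignment, processing_times, m):
    """Evaluate the objective (minimum machine load) once the actual
    processing times are revealed; processing_times[rank] is the time
    of the job with that rank."""
    loads = [0.0] * m
    for rank, machine in assignment.items():
        loads[machine] += processing_times[rank]
    return min(loads)

# 7 jobs on 3 machines; the times are consistent with the known rank order.
assignment = ordinal_assignment(7, 3)
times = {1: 9.0, 2: 7.5, 3: 6.0, 4: 4.0, 5: 3.0, 6: 2.0, 7: 1.0}
print(min_completion_time(assignment, times, 3))  # minimum machine load
```

The point of the sketch is that the assignment is fixed before any processing time is known; only the ranks are used.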

2.
Context in the Risk Assessment of Digital Systems
As the use of digital computers for instrumentation and control of safety-critical systems has increased, there has been a growing debate over whether probabilistic risk assessment techniques can be applied to these systems. This debate has centered on the issue of whether software failures can be modeled probabilistically. This paper describes a context-based approach to software risk assessment that explicitly recognizes that the behavior of software is not probabilistic. The perceived uncertainty in its behavior arises from both the input to the software and the application and environment in which the software operates. Failures occur as the result of encountering some context for which the software was not properly designed, as opposed to the software simply failing randomly. The paper elaborates on the concept of error-forcing context as it applies to software. It also illustrates a methodology that utilizes event trees, fault trees, and the Dynamic Flowgraph Methodology (DFM) to identify error-forcing contexts for software in the form of fault-tree prime implicants.

3.
This study uses a sample composed of U.S. students and Iraqi students to determine whether ethical perceptions differ along cultural/demographic lines. Irrespective of demographics, the results indicate significant cultural differences between Iraqi students and American students with regard to selected ethical issues concerning graduate education. Specifically, the differences occurred in the students' perceptions of "winning is everything," "selling one's soul," "logic before emotion," and "pandering to professors." Iraqi students consistently viewed these beliefs as more necessary for success in their graduate education than did their American counterparts.

4.
We study one of the most basic online scheduling models: online one-machine scheduling with delivery times, where jobs arrive over time. We provide the first randomized algorithm for this model, show that it is 1.55370-competitive, and show that this analysis is tight. The best possible deterministic algorithm is 1.61803-competitive. Our algorithm is a distribution between two deterministic algorithms. We show that any such algorithm can be no better than 1.5-competitive. To our knowledge, this is the first lower bound proof for a distribution between two deterministic algorithms.

5.
As far as we know, for most polynomially solvable network optimization problems, the inverse problems under the l_1 or l_∞ norm have been studied, except for the inverse maximum-weight matching problem in non-bipartite networks. In this paper we discuss the inverse problem of maximum-weight perfect matching in a non-bipartite network under the l_1 and l_∞ norms. It has been proved that the inverse maximum-weight perfect matching under the l_∞ norm can be formulated as a maximum-mean alternating cycle problem of an undirected network, and can be solved in polynomial time by a binary search algorithm and in strongly polynomial time by an ascending algorithm; under the l_1 norm it can be solved by the ellipsoid method. Therefore, the inverse problems of maximum-weight perfect matching under the l_1 and l_∞ norms are solvable in polynomial time.

6.
We consider the problem of approximating the global minimum of a general quadratic program (QP) with n variables subject to m ellipsoidal constraints. For m = 1, we rigorously show that an ε-minimizer, where the error ε ∈ (0, 1), can be obtained in polynomial time, meaning that the number of arithmetic operations is polynomial in n, m, and log(1/ε). For m ≥ 2, we present a polynomial-time (1 − ε)-approximation algorithm as well as a semidefinite programming relaxation for this problem. In addition, we present approximation algorithms for solving QP under box constraints and assignment polytope constraints.

7.
The problems dealt with in this paper are generalizations of the set cover problem, min{cx | Ax ≥ b, x ∈ {0,1}^n}, where c ∈ Q_+^n, A ∈ {0,1}^{m×n}, and b = 1. The covering 0-1 integer program is the version of this formulation with arbitrary nonnegative entries in A and b, while the partial set cover problem requires only K of the m constraints (or more) in Ax ≥ b to be satisfied, where the integer K is additionally specified. While many approximation algorithms have recently been developed for these problems and their special cases using computationally rather expensive (albeit polynomial) LP-rounding (or SDP-rounding), we present a more efficient, purely combinatorial algorithm and investigate its approximation capability for them. It will be shown that, compared with the best performance known today and obtained by rounding methods, although its performance comes short in some special cases, it is at least equally good in general, extends to partial vertex cover, and improves for weighted multicover, partial set cover, and further generalizations.
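As a point of reference for the kind of purely combinatorial method discussed above, here is a hedged Python sketch of the textbook greedy heuristic for weighted set cover. It is a standard baseline, not the algorithm analysed in the paper, and the instance data are made up.

```python
def greedy_set_cover(universe, sets, costs):
    """Classic greedy heuristic for weighted set cover: repeatedly pick
    the set with the lowest cost per newly covered element."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (s for s in sets if sets[s] & uncovered),
            key=lambda s: costs[s] / len(sets[s] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = range(1, 8)
sets = {"A": {1, 2, 3}, "B": {3, 4, 5, 6}, "C": {6, 7}, "D": {1, 4, 7}}
costs = {"A": 2.0, "B": 3.0, "C": 1.0, "D": 2.5}
print(greedy_set_cover(universe, sets, costs))  # e.g. ['C', 'A', 'B']
```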

8.
Hattis, Dale; Banati, Prerna; Goble, Rob; Burmaster, David E. Risk Analysis, 1999, 19(4): 711-726
This paper reviews existing data on the variability in parameters relevant for health risk analyses. We cover both exposure-related parameters and parameters related to individual susceptibility to toxicity. The toxicity/susceptibility database under construction is part of a longer-term research effort to lay the groundwork for quantitative distributional analyses of non-cancer toxic risks. These data are broken down into a variety of parameter types that encompass different portions of the pathway from external exposure to the production of biological responses. The discrete steps in this pathway, as we now conceive them, are:
- Contact rate (breathing rates per body weight; fish consumption per body weight)
- Uptake or absorption as a fraction of intake or contact rate
- General systemic availability net of first-pass elimination and dilution via distribution volume (e.g., initial blood concentration per mg/kg of uptake)
- Systemic elimination (half-life or clearance)
- Active site concentration per systemic blood or plasma concentration
- Physiological parameter change per active site concentration (expressed as the dose required to make a given percentage change in different people, or the dose required to achieve some proportion of an individual's maximum response to the drug or toxicant)
- Functional reserve capacity: the change in a baseline physiological parameter needed to produce a biological response or pass a criterion of abnormal function
Comparison of the amounts of variability observed for the different parameter types suggests that appreciable variability is associated with the final step in the process, namely differences among people in functional reserve capacity. This has the implication that relevant information for estimating effective toxic susceptibility distributions may be gleaned by direct studies of the population distributions of key physiological parameters in people who are not exposed to the environmental and occupational toxicants thought to perturb those parameters. This is illustrated with some recent observations of the population distributions of low-density lipoprotein cholesterol from the second and third National Health and Nutrition Examination Surveys.
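Because the abstract treats the exposure-to-response pathway as a chain of multiplicative steps, one natural way to see how per-step variabilities combine is a small Monte Carlo sketch. The geometric standard deviations (GSDs) below are placeholders chosen for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical GSDs for a few of the pathway steps listed above;
# these numbers are illustrative only.
step_gsd = {
    "contact_rate": 1.4,
    "uptake_fraction": 1.2,
    "systemic_availability": 1.6,
    "elimination_half_life": 1.5,
}

n = 100_000
overall = np.ones(n)
for gsd in step_gsd.values():
    # Treat each step as an independent multiplicative lognormal factor.
    overall *= rng.lognormal(mean=0.0, sigma=np.log(gsd), size=n)

# GSD of the combined interindividual variability across all steps.
combined_gsd = np.exp(np.log(overall).std())
print(round(float(combined_gsd), 2))
```

Since log-variances add across independent multiplicative steps, the combined GSD is exp(√Σ(ln GSD_i)²) ≈ 2.1 for these placeholder values, which the simulation reproduces.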

9.
The problem of colouring a k-colourable graph is well known to be NP-complete for k ≥ 3. The MAX-k-CUT approach to approximate k-colouring is to assign k colours to all of the vertices in polynomial time such that the fraction of 'defect edges' (with endpoints of the same colour) is provably small. The best known approximation was obtained by Frieze and Jerrum (1997), using a semidefinite programming (SDP) relaxation which is related to the Lovász ϑ-function. In related work, Karger et al. (1998) devised approximation algorithms for colouring k-colourable graphs exactly in polynomial time with as few colours as possible. They also used an SDP relaxation related to the ϑ-function. In this paper we further explore semidefinite programming relaxations where graph colouring is viewed as a satisfiability problem, as considered in De Klerk et al. (2000). We first show that the approximation of the chromatic number suggested in De Klerk et al. (2000) is bounded from above by the Lovász ϑ-function. The underlying semidefinite programming relaxation in De Klerk et al. (2000) involves a lifting of the approximation space, which in turn suggests a provably good MAX-k-CUT algorithm. We show that the analysis of our algorithm is closely related to that of Frieze and Jerrum; thus we can sharpen their approximation guarantees for MAX-k-CUT for small fixed values of k. For example, for k = 3 we can improve their bound from 0.832718 to 0.836008, and for k = 4 from 0.850301 to 0.857487. We also give a new asymptotic analysis of the Frieze-Jerrum rounding scheme that provides a unifying proof of the main results of both Frieze and Jerrum (1997) and Karger et al. (1998) for large k.
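For orientation, the trivial baseline that these SDP rounding schemes improve on is uniformly random colouring, which cuts each edge with probability (k − 1)/k. The sketch below is illustrative only; it is not the Frieze-Jerrum algorithm, and the example graph is made up.

```python
import random

def random_k_cut(edges, n, k, seed=0):
    """Colour every vertex uniformly at random with one of k colours.
    Each edge is cut (endpoints differently coloured) with probability
    (k - 1)/k, so the expected fraction of defect edges is 1/k."""
    rng = random.Random(seed)
    colour = [rng.randrange(k) for _ in range(n)]
    cut = sum(1 for u, v in edges if colour[u] != colour[v])
    return cut, len(edges)

# A small 5-vertex example graph; vertices are 0..4.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]
print(random_k_cut(edges, n=5, k=3))  # (number of cut edges, total edges)
```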

10.
This paper first reviews the literature on the role of codes of conduct for organisations in Hong Kong in their attempts to manage increasingly complex ethical problems and issues. It shows that, although valuable foundations exist upon which to build, research and understanding of the subject is at its embryonic stage. The social psychology literature is examined to investigate what lessons those concerned with the study of ethics may learn, and the work of Hofstede, as seminal in the area of work-related values, is emphasised in this context. Following Hofstede's proposals for strategies for operationalizing constructs about human values, a content analysis was conducted on a pilot sample of codes provided by Hong Kong organisations. The results show three clearly identified clusters of organisations with common formats. The first group, described as 'Foreign Legal', emphasises legal compliance, has criteria for invoking penalties, and consists of foreign-owned, large multinational organisations. Companies in the second cluster have codes which, except in the case of a couple of larger organisations, mainly follow the Independent Commission Against Corruption's (ICAC) standard format. The third cluster, described as the 'Bank Network', also appears to largely conform to a format: the Hong Kong Banking Association's guidelines. Further analysis of the Hong Kong codes indicates the important role of emic teleological values, such as trust and reputation, amongst indigenous organisations, rather than the amorality suggested by an earlier study (Dolecheck and Bethke, 1990). These results support the proposition that Hong Kong ethical perspectives are culture bound, as there appear to be different emphases than revealed in an American study (Stevens, 1994), which identified an emphasis in the US codes upon introverted organisational issues and a failure to espouse deontological values. The conclusion is that designing a research programme on business values in Hong Kong requires reference to studies of values in cross-cultural psychology generally and to Hofstede's work in particular. It also supports the need for indigenous research and models in this field which avoid the ethnocentrism inherent in much Western theory and research.

11.
Let k ≥ 5 be a fixed integer and let m = ⌊(k − 1)/2⌋. It is shown that the independence number of a C_k-free graph is at least c_1 [∑_v d(v)^{1/(m−1)}]^{(m−1)/m} and that, for odd k, the Ramsey number r(C_k, K_n) is at most c_2 (n^{m+1}/log n)^{1/m}, where c_1 = c_1(m) > 0 and c_2 = c_2(m) > 0.

12.
Let T = (V, E, w) be an undirected, weighted tree with node set V and edge set E, where w(e) is an edge weight function for e ∈ E. The density of a path, say e_1, e_2, ..., e_k, is defined as (∑_{i=1}^{k} w(e_i))/k. The length of a path is the number of its edges. Given a tree with n edges and a lower bound L, where 1 ≤ L ≤ n, this paper presents two efficient algorithms for finding a maximum-density path of length at least L in O(nL) time. One of them is further modified to solve some special cases, such as full m-ary trees, in O(n) time.
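To make the objective concrete, here is a brute-force Python reference (roughly O(n²), a BFS from every node). It is not the O(nL) algorithm of the paper, and the example tree is made up.

```python
from collections import defaultdict, deque

def max_density_path(edges, L):
    """Among all (unique) tree paths with at least L edges, return the
    best total-weight / length ratio.  Brute-force reference only."""
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    best = None
    for src in adj:
        # BFS from src, tracking total weight and number of edges.
        queue = deque([(src, 0.0, 0)])
        seen = {src}
        while queue:
            node, weight, length = queue.popleft()
            if length >= L:
                density = weight / length
                if best is None or density > best:
                    best = density
            for nxt, w in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, weight + w, length + 1))
    return best

# Example tree: edges (u, v, weight); the path 1-2-4 has density 5.0.
edges = [(1, 2, 4.0), (2, 3, 1.0), (2, 4, 6.0), (4, 5, 2.0)]
print(max_density_path(edges, L=2))
```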

13.
Challenges to the Acceptance of Probabilistic Risk Analysis
Bier, Vicki M. Risk Analysis, 1999, 19(4): 703-710
This paper discusses a number of the key challenges to the acceptance and application of probabilistic risk analysis (PRA). Those challenges include: (a) the extensive reliance on subjective judgment in PRA, requiring the development of guidance for the use of PRA in risk-informed regulation, and possibly the development of robust or reference prior distributions to minimize the reliance on judgment; and (b) the treatment of human performance in PRA, including not only human error per se but also management and organizational factors more broadly. All of these areas are seen as presenting interesting research challenges at the interface between engineering and other disciplines.

14.
Two primal-dual affine scaling algorithms for linear programming are extended to semidefinite programming. The algorithms do not require (nearly) centered starting solutions and can be initiated with any primal-dual feasible solution. The first algorithm is the Dikin-type affine scaling method of Jansen et al. (1993b) and the second the classical affine scaling method of Monteiro et al. (1990). The extension of the former has a worst-case complexity bound of O(ρ₀ nL) iterations, where ρ₀ is a measure of the centrality of the starting solution, and the latter a bound of O(ρ₀ nL²) iterations.

15.
This article employs an institutional perspective in formulating predictions about the ethical futures of privatization partnerships. Although this paper focuses on ethical concerns in the U.S. public sector, it incorporates a multinational dimension in (a) comparing the meaning of privatization among societies and (b) probing privatization financing in the global economy. Five assumptions that flow from institutional reasoning are made explicit as supports for subsequent predictions. The institutional logic shifts privatization conversation away from conventional debate about competition and efficiency toward centralizing forces in both sectors in response to globalization. In that regard, this study identifies the systemic erosion of (local) community integrity as the key privatization problem of the future.

16.
For a Boolean function f given by a Boolean formula (or a binary circuit) S, we discuss the problem of building a Boolean formula (binary circuit) of minimal size which computes a function g equivalent to f, or ε-equivalent to f, i.e., a function that disagrees with f on at most an ε fraction of the inputs. In this paper we prove that if P ≠ NP, then this problem cannot be approximated with a good approximation ratio by a polynomial-time algorithm.

17.
Reforms known collectively as the new public management (NPM) are sweeping governments worldwide. For a movement espousing customer-based service orientations, there is a curious paucity of research on citizens' attitudes toward these reforms. We know little about how citizens feel about them, how they arrive at their conclusions, and how durable their attitudes are likely to be. Using citizen attitudes drawn from the 1987-1992 British General Election Panel Survey, we apply multiple regression analysis to begin exploring these issues in one critical area of NPM reform: privatization of state-owned enterprises. We find that the overall predictive power of the five theoretical perspectives culled from public opinion research and operationalized in our model is quite respectable, but that evaluations are too complex for any single explanation of public opinion formation to capture. We also find that British attitudes toward privatization were most associated with cue-taking from leaders and parties, ideological moorings associated with individualism, and income. From these findings we offer a set of hypotheses suitable for testing in future research (most especially, a disparate impact hypothesis) that have important implications for practice and theory-building regarding public opinion and market-based administrative reforms worldwide.

18.
Given a digraph D, the minimum integral dicycle cover problem (known also as the minimum feedback arc set problem) is to find a minimum set of arcs that intersects every dicycle; the maximum integral dicycle packing problem is to find a maximum set of pairwise arc-disjoint dicycles. These two problems are NP-complete. Assume D has a 2-vertex cut. We show how to derive a minimum dicycle cover (a maximum dicycle packing) for D by composing certain covers (packings) of the corresponding pieces. The composition of the covers is simple and was partially considered in the literature before. The main contribution of this paper is to the packing problem. Let τ be the value of a minimum integral dicycle cover, and ν* (ν) the value of a maximum (integral) dicycle packing. We show that if τ = ν, then a simple composition, similar to that of the covers, is valid for the packing problem. We use these compositions to extend an O(n³) (resp., O(n⁴)) algorithm for finding a minimum integral dicycle cover (resp., packing) from planar digraphs to K_{3,3}-free digraphs (i.e., digraphs not containing any subdivision of K_{3,3}). However, if τ ≠ ν, then such a simple composition for the packing problem is not valid. We show that if the pieces satisfy what we call the stability property, then a simple composition does work. We prove that if τ = ν* holds for each piece, then the stability property holds as well. Further, we use the stability property to show that if τ = ν* holds for each piece, then τ = ν* holds for D as well.

19.
We present a few comments on the paper "Attacking the market split problem with lattice point enumeration" by A. Wasserman, published in Journal of Combinatorial Optimization, vol. 6, pp. 5–16, 2002.

20.
The independence number of a graph and its chromatic number are known to be hard to approximate. Due to recent complexity results, unless coRP = NP, there is no polynomial-time algorithm which approximates either of these quantities within a factor of n^{1−ε} for graphs on n vertices. We show that the situation is significantly better for the average case. For every edge probability p = p(n) in the range n^{−1/2+ε} ≤ p ≤ 3/4, we present an approximation algorithm for the independence number of graphs on n vertices whose approximation ratio is O((np)^{1/2}/log n) and whose expected running time over the probability space G(n, p) is polynomial. An algorithm with similar features is described also for the chromatic number. A key ingredient in the analysis of both algorithms is a new large deviation inequality for eigenvalues of random matrices, obtained through an application of Talagrand's inequality.
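As a quick illustration of the objects involved (not the paper's algorithm), the sketch below samples G(n, p) and grows an independent set greedily. The paper's method achieves the stated O((np)^{1/2}/log n) ratio with polynomial expected running time; plain greedy is not claimed to match that.

```python
import random

def greedy_independent_set(n, p, seed=0):
    """Sample G(n, p) and grow an independent set greedily, lowest
    degree first.  Illustrative baseline only."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    independent = []
    for v in sorted(range(n), key=lambda v: len(adj[v])):
        if all(u not in adj[v] for u in independent):
            independent.append(v)
    return independent

print(len(greedy_independent_set(n=200, p=0.1)))
```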
