Similar articles
20 similar articles found (search time: 172 ms)
1.
2.
Any management method that is to work in practice must start by recognizing man as he really is, not as he ought to be. The typical textbook company of the 1960s was peopled by obedient, logical, profit-oriented, ox-like employees: sterilized, faceless characters with £ signs in their thought bubbles. The study of behavioural science in recent years, however, has taught us that man is a rather more complex animal than we used to think. Simple observation tells us that in practice no business is as tidy, flawless and rational as a 1960s textbook. The reason is that we are people: difficult as individuals for a start, and infinitely more complex still when we combine in groups. “Businessman” must be accepted in his full glory, with his ungovernable motives, his intelligence and creativity, his desire for a quiet life, his intuition, his private needs, fears and ambitions, and his paradoxical self-seeking ability to cooperate. Everybody who has worked in a company knows the gloriously complicated muddles that “businessman” can get himself into: the misused routines, the convoluted organization structures, the dreadful panics during which all the rules are broken. The peculiar thing is that, in spite of all these untidinesses, things seem to get done just the same. My point is that management and planning is an extremely complicated business; it is complicated because it is a matter of handling people, and it is an art, not a science.

3.
The problem of minimizing the mean squared deviation (MSD) of completion times from a common due date in both unconstrained and constrained cases is addressed. It is shown that the unconstrained MSD function is unimodal for n ≤ 6, where n is the number of jobs. The constrained case is shown to be unimodal for n ≤ 3. The unconstrained case is shown, by counterexample, not to be unimodal for n = 8, and the constrained case not to be unimodal for n = 5. For the unimodal cases, a proposed search routine can find the optimum solution in less than three CPU seconds for n = 100; otherwise it provides an excellent heuristic solution. Computational results are reported for both cases.
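The MSD objective itself is easy to state in code. The sketch below is not the paper's search routine (whose details are not given here) but a toy illustration: it evaluates the MSD of a sequence and brute-forces the best sequence for tiny n; the job times, due date, and helper names are illustrative.

```python
from itertools import permutations

def msd(sequence, due_date):
    """Mean squared deviation of completion times from a common due date."""
    t, total = 0.0, 0.0
    for p in sequence:
        t += p                      # completion time of this job
        total += (t - due_date) ** 2
    return total / len(sequence)

def best_sequence(jobs, due_date):
    """Brute force over all sequences -- only viable for tiny n."""
    return min(permutations(jobs), key=lambda s: msd(s, due_date))

jobs = [2, 5, 1, 3]
seq = best_sequence(jobs, due_date=6)
print(seq, round(msd(seq, 6), 3))
```

For n = 100 this enumeration is hopeless, which is why a unimodality property (and a search routine exploiting it) matters.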

4.
Given a graph G=(V,E), two players, Alice and Bob, alternate their turns in choosing uncoloured vertices to be coloured. Whenever an uncoloured vertex is chosen, it is coloured by the least positive integer not used by any of its coloured neighbours. Alice’s goal is to minimise the total number of colours used in the game, and Bob’s goal is to maximise it. The game Grundy number of G is the number of colours used in the game when both players use optimal strategies. It is proved in this paper that the maximum game Grundy number of forests is 3, and the game Grundy number of any partial 2-tree is at most 7.
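The colouring rule is simply first fit. A minimal sketch (helper names and the move order are illustrative, not an optimal strategy for either player) shows how an adversarial move order can force three colours on a 4-vertex path, matching the bound for forests:

```python
def first_fit_colour(vertex, adjacency, colouring):
    """Least positive integer not used by any coloured neighbour."""
    used = {colouring[u] for u in adjacency[vertex] if u in colouring}
    c = 1
    while c in used:
        c += 1
    return c

# A path on four vertices: 0 - 1 - 2 - 3
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
colouring = {}
# Colouring both endpoints first (colour 1 each) ...
for v in (0, 3):
    colouring[v] = first_fit_colour(v, adjacency, colouring)
# ... forces the interior vertices towards higher colours.
for v in (1, 2):
    colouring[v] = first_fit_colour(v, adjacency, colouring)
print(colouring)
```

Vertex 1 sees colour 1 and gets 2; vertex 2 then sees colours 1 and 2 and is forced to 3.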

5.
In this paper, our major theme is a unifying framework for duality in robust linear programming. We show that there are two pairs of dual programs allied with a robust linear program: one in which the primal is constructed to be “ultra-conservative” and one in which the primal is constructed to be “ultra-optimistic.” Furthermore, as one would expect, if the uncertainty in the primal is row-based, the corresponding uncertainty in the dual is column-based, and vice versa. Several examples are provided that illustrate the properties of these primal and dual models.

6.
Consider the problem of partitioning n nonnegative numbers into p parts, where part i can be assigned ni numbers, with ni lying in a given range. The goal is to maximize a Schur convex function F whose ith argument is the sum of the numbers assigned to part i. The shape of a partition is the vector consisting of the sizes of its parts; further, a shape (without reference to a particular partition) is a vector of nonnegative integers (n1,..., np) which sum to n. A partition is called size-consecutive if there is a ranking of the parts which is consistent with their sizes, and all elements in a higher-ranked part exceed all elements in a lower-ranked part. We demonstrate that one can restrict attention to size-consecutive partitions with shapes that are nonmajorized; we study these shapes, bound their number and develop algorithms to enumerate them. Our study extends the analysis of a previous paper by Hwang and Rothblum which discussed the above problem assuming the existence of a majorizing shape. This research is partially supported by ROC National Science grant NSC 92-2115-M-009-014.
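Once a shape is fixed, a size-consecutive partition is straightforward to construct: sort the numbers and cut them into consecutive blocks. The sketch below uses illustrative helper names, and the sum of squared part sums stands in for a generic Schur convex F; it is not the paper's enumeration algorithm.

```python
def size_consecutive_partition(numbers, shape):
    """Sort descending and cut into consecutive blocks of the given sizes."""
    ranked = sorted(numbers, reverse=True)
    parts, i = [], 0
    for size in shape:
        parts.append(ranked[i:i + size])
        i += size
    return parts

def schur_objective(parts):
    """Sum of squared part sums -- one simple example of a Schur convex F."""
    return sum(sum(p) ** 2 for p in parts)

nums = [9, 7, 5, 3, 1, 1]
parts = size_consecutive_partition(nums, shape=(2, 2, 2))
print(parts, schur_objective(parts))
```

The paper's contribution is in showing that an optimum is attained at such a partition for some nonmajorized shape, and in enumerating those shapes.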

7.
The Web proxy location problem in general networks is NP-hard. In this paper, we study the problem in networks with a general tree-of-rings topology. We improve on previous results for the tree case and obtain an exact algorithm with time complexity O(nhk), where n is the number of nodes in the tree, h is the height of the tree (the server is at the root), and k is the number of Web proxies to be placed in the network. For networks with a general tree-of-rings topology we present an exact algorithm with O(kn²) time complexity. This research has been supported by NSF of China (No. 10371028) and the Educational Department grant of Zhejiang Province (No. 20030622).

8.
This paper argues that the ability of logistics to achieve its aim of efficient and effective interfunctional co-ordination is hindered by the particular paradigm to which it unwittingly adheres. We begin by defining and setting out the objectives of both ‘traditional’ and ‘non-traditional’ forms of logistics, and demonstrate that there is doubt about whether even the newer form can achieve proper interfunctional co-ordination or gain widespread acceptance in the practitioner community. In order to understand why, it is suggested that an analysis is required of the theoretical assumptions upon which contemporary logistics is based. Despite recent developments in logistics it may be that it is some unquestioned paradigm, upon which all logisticians hitherto have relied, which is preventing genuine progress being made. A paradigm analysis is conducted which reveals that both traditional and non-traditional logistics are ‘functionalist’ in nature. It is argued that the main problems faced by logistics derive from this. Logistics suffers from the failings of functionalist thinking and its ambitions will continue to be frustrated unless it is able to achieve an ‘epistemological break’ from functionalism. Logisticians must look towards other paradigms in order to progress. Logistics is in need of its own revolutionaries.

9.
The paper develops integrated production, inventory and maintenance models for a deteriorating production system in which the production facility may not only shift from an ‘in-control’ state to an ‘out-of-control’ state but also may break down at any random point in time during a production run. In case of machine breakdown, production of the interrupted lot is aborted and a new production lot is started when the on-hand inventory is depleted after corrective repair. The process is inspected during each production run to examine the state of the production process. If it is found in the ‘in-control’ state then either (a) no action is taken except at the time of last inspection where preventive maintenance is done (inspection policy-I) or (b) preventive maintenance is performed (inspection policy-II). If, however, the process is found to be in the ‘out-of-control’ state at any inspection then restoration is done. The proposed models are formulated under general shift, breakdown and repair time distributions. As it is, in general, difficult to find the optimal production policy under inspection policy-I, a suboptimal production policy is derived. Numerical examples are taken to determine numerically the optimal/suboptimal production policies of the proposed models, to examine the sensitivity of important model parameters and to compare the performance of inspection and no inspection policies.

10.
Max Boholm, Risk Analysis, 2019, 39(6): 1243–1261
In risk analysis and research, the concept of risk is often understood quantitatively. For example, risk is commonly defined as the probability of an unwanted event or as its probability multiplied by its consequences. This article addresses (1) to what extent and (2) how the noun risk is actually used quantitatively. Uses of the noun risk are analyzed in four linguistic corpora, both Swedish and English (mostly American English). In total, over 16,000 uses of the noun risk are studied in 14 random (n = 500) or complete samples (where n ranges from 173 to 5,144) of, for example, news and magazine articles, fiction, and websites of government agencies. In contrast to the widespread definition of risk as a quantity, a main finding is that the noun risk is mostly used nonquantitatively. Furthermore, when used quantitatively, the quantification is seldom numerical, instead relying on less precise expressions of quantification, such as high risk and increased risk. The relatively low frequency of quantification in a wide range of language material suggests a quantification bias in many areas of risk theory, that is, an overestimation of the importance of quantification in defining the concept of risk. The findings are also discussed in relation to fuzzy‐trace theory. Findings of this study confirm, as suggested by fuzzy‐trace theory, that vague representations are prominent in quantification of risk. The application of fuzzy‐trace theory terminology to explaining these patterns of language use is discussed.

11.
This paper studies an economic production quantity (EPQ) model in which the production rate is variable. An analysis is presented of the impact of a variable production rate on the optimal production quantity and the total relevant cost. It is observed that this EPQ production and inventory system, when the production rate is close to the demand rate, possesses many characteristics similar to a just-in-time (JIT) production system. It is shown that the normal prerequisites and benefits of JIT production can be identified from an analysis of such an EPQ system.
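For orientation, the classical constant-rate EPQ formulas (a standard textbook model, not the paper's variable-rate one) already exhibit the JIT-like effect: as the production rate approaches the demand rate, the optimal total relevant cost shrinks towards zero.

```python
from math import sqrt

def epq(demand, setup_cost, holding_cost, production_rate):
    """Classical finite-production-rate EPQ (textbook form, illustrative only)."""
    rho = 1.0 - demand / production_rate   # fraction of the lot held as inventory
    q_star = sqrt(2 * setup_cost * demand / (holding_cost * rho))
    cost_star = sqrt(2 * setup_cost * demand * holding_cost * rho)
    return q_star, cost_star

# As the production rate approaches the demand rate (1000), the optimal
# total relevant cost falls -- a JIT-like regime.
for p in (4000, 2000, 1100, 1010):
    q, c = epq(demand=1000, setup_cost=100, holding_cost=2, production_rate=p)
    print(p, round(q, 1), round(c, 2))
```

The demand, setup cost, and holding cost values are made up for illustration; the qualitative effect does not depend on them.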

12.
Samuel Eilon, Omega, 1983, 11(5): 479–490
Elasticity of demand is a measure of market response to a change in price. Three definitions of elasticity of demand are commonly found in the literature: (1) εp, the point elasticity, defined for a given point on the demand function and relying on the derivative of the function at that point; (2) εa, the arc elasticity, defined for the midpoint of an arc connecting two points, irrespective of the shape of the demand function; (3) ε, the relative change elasticity, defined for two given points as minus the ratio of the relative volume increment to the relative price increment. Of the three, εp is the most widely cited and has the merit that marginal revenue is zero at εp = 1. It is also convenient when curves with εp = constant can be fitted to price-demand data. In practice, apart from the fact that εp is often difficult to determine, management is mainly concerned with discrete increments rather than marginal changes, and in this respect εa is regarded as more useful. However, the relative change elasticity ε is preferable in all practical applications, both because relative increments refer to a given base period (instead of a midpoint, as in the case of εa), and because of the simplicity of using it in the analysis of company performance.
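The three definitions can be compared numerically. In the sketch below (the price and volume figures are illustrative), note that for a constant-elasticity demand curve q = a·p^(−b) the point elasticity is b at every point, while εa and ε generally differ for the same discrete price change:

```python
def arc_elasticity(p1, q1, p2, q2):
    """Midpoint formula: relative increments measured against arc midpoints."""
    return -((q2 - q1) / ((q1 + q2) / 2)) / ((p2 - p1) / ((p1 + p2) / 2))

def relative_change_elasticity(p1, q1, p2, q2):
    """Relative increments measured against the base period (p1, q1)."""
    return -((q2 - q1) / q1) / ((p2 - p1) / p1)

# Price rises from 10 to 12; volume falls from 100 to 85.
print(arc_elasticity(10, 100, 12, 85))
print(relative_change_elasticity(10, 100, 12, 85))
```

Because ε divides by the base-period values rather than midpoints, the two measures diverge for the same data, which is exactly the practical distinction the abstract draws.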

13.
Consider the following scheduling game. A set of jobs, each controlled by a selfish agent, is to be assigned to m uniformly related machines. The cost of a job is defined as the total load of the machine it is assigned to. A job is interested in minimizing its cost, while the social objective is maximizing the minimum load (the value of the cover) over the machines. This goal differs from the usual makespan minimization objective, which has been extensively studied in a game-theoretic context. We study the price of anarchy (poa) and the price of stability (pos) for uniformly related machines. The results are expressed in terms of s, the maximum speed ratio between any two machines. For uniformly related machines, we prove that the pos is unbounded for s>2, and the poa is unbounded for s≥2. For the remaining cases we show that while the poa grows to infinity as s tends to 2, the pos is at most 2 for any s≤2.
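The social objective is simply the minimum load over the machines. A minimal sketch (data and helper name are illustrative, and no equilibrium claim is made) shows how an unbalanced assignment can drive the cover to zero on uniformly related machines:

```python
def cover_value(assignment, speeds):
    """Minimum load over uniformly related machines.

    assignment[i] is the list of job sizes placed on machine i;
    the load of machine i is its total assigned size divided by its speed.
    """
    return min(sum(jobs) / s for jobs, s in zip(assignment, speeds))

# Two machines with speed ratio s = 2 and jobs of sizes 3, 2, 4.
speeds = (1.0, 2.0)
balanced   = cover_value(([3.0], [2.0, 4.0]), speeds)      # loads 3.0 and 3.0
unbalanced = cover_value(([], [3.0, 2.0, 4.0]), speeds)    # one machine left empty
print(balanced, unbalanced)
```

Leaving any machine empty collapses the cover to zero, which is why selfish behaviour can be arbitrarily bad for this objective.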

14.
Since the seminal work of Ford and Fulkerson in the 1950s, network flow theory has been one of the most important and most active areas of research in combinatorial optimization. Coming from the classical maximum flow problem, we introduce and study an apparently basic but new flow problem that features a couple of interesting peculiarities. We derive several results on the complexity and approximability of the new problem. On the way we also discover two closely related basic covering and packing problems that are of independent interest. Starting from an LP formulation of the maximum s-t-flow problem in path variables, we introduce unit upper bounds on the amount of flow being sent along each path. The resulting (fractional) flow problem is NP-hard; its integral version is strongly NP-hard already on very simple classes of graphs. For the fractional problem we present an FPTAS that is based on solving the k shortest paths problem iteratively. We show that the integral problem is hard to approximate and give an interesting O(log m)-approximation algorithm, where m is the number of arcs in the considered graph. For the multicommodity version of the problem there is an O(√m)-approximation algorithm. We argue that this performance guarantee is best possible, unless P=NP.

15.
We consider a framework for bi-objective network construction problems where one objective is to be maximized while the other is to be minimized. Given a host graph G=(V,E) with edge weights w_e and edge lengths ℓ_e for e∈E, we define the density of a pattern subgraph H=(V′,E′)⊆G as the ratio ρ(H) = ∑_{e∈E′} w_e / ∑_{e∈E′} ℓ_e. We consider the problem of computing a maximum density pattern H under various additional constraints. In doing so, we compute a single Pareto-optimal solution with the best weight per cost ratio subject to additional constraints further narrowing down feasible solutions for the underlying bi-objective network construction problem. First, we consider the problem of computing a maximum density pattern with weight at least W and length at most L in a host G. We call this problem the biconstrained density maximization problem. It can be interpreted in terms of maximizing the return on investment for network construction problems in the presence of a limited budget and a target profit. We consider this problem for different classes of hosts and patterns. We show that it is NP-hard, even if the host has treewidth 2 and the pattern is a path. However, it can be solved in pseudo-polynomial linear time if the host has bounded treewidth and the pattern is a graph from a given minor-closed family of graphs. We also present an FPTAS for a relaxation of the density maximization problem, in which we are allowed to violate the upper bound on the length at the cost of some penalty. Second, we consider the maximum density subgraph problem under structural constraints on the vertex set that is used by the patterns. While a maximum density perfect matching can be computed efficiently in general graphs, the maximum density Steiner-subgraph problem, which requires a subset of the vertices in any feasible solution, is NP-hard and unlikely to admit a constant-factor approximation. When parameterized by the number of vertices of the pattern, this problem is W[1]-hard in general graphs. On the other hand, it is FPT on planar graphs if there is no constraint on the pattern, and on general graphs if the pattern is a path.
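The density objective is easy to state in code. The brute-force sketch below (exponential in the host size, illustrative only; real instances need the paper's algorithms) finds a maximum-density path pattern in a tiny host graph:

```python
def density(edges, weights, lengths):
    """rho(H) = sum of edge weights / sum of edge lengths."""
    return sum(weights[e] for e in edges) / sum(lengths[e] for e in edges)

def max_density_path(adjacency, weights, lengths):
    """DFS over all simple paths of the host -- only viable for tiny graphs."""
    best = None

    def extend(path_vertices, path_edges):
        nonlocal best
        if path_edges:
            d = density(path_edges, weights, lengths)
            if best is None or d > best[0]:
                best = (d, list(path_edges))
        v = path_vertices[-1]
        for u in adjacency[v]:
            if u not in path_vertices:
                e = frozenset((v, u))
                extend(path_vertices + [u], path_edges + [e])

    for v in adjacency:
        extend([v], [])
    return best

# A triangle host with illustrative weights and lengths per edge.
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
weights = {frozenset((0, 1)): 6, frozenset((1, 2)): 2, frozenset((0, 2)): 4}
lengths = {frozenset((0, 1)): 3, frozenset((1, 2)): 4, frozenset((0, 2)): 1}
print(max_density_path(adjacency, weights, lengths))
```

Here the single edge {0,2} (weight 4, length 1) beats every longer path, illustrating that density maximization does not reward size as such.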

16.
This article goes beyond the traditional celebration of Entrepreneurship to focus on why Entrepreneurship is not enough. First, entrepreneurial companies that have overcome the start-up difficulties, must become “professionalized” in order to consolidate their gains and face a period of stable growth. This, as is well-known, is not easy. But there is another, even harder problem: in becoming “professional”, most companies lose the entrepreneurial spirit that made them successful in the first place. In an ever-changing world, with more and more international competition in the face of Europe's unification in 1992, a company that loses its entrepreneurial sparkle is just waiting for dismissal. This article analyzes the main causes of that “hardening of the arteries” and their proven remedies.

17.
Yrjö Seppälä, Omega, 1980, 8(1): 39–45
A relative value of a management information system (MIS) is defined in this paper by the ratio u1/u0, where u0 is the value of a utility function of an enterprise whose management information system is perfect, and u1 is its value when the system is not perfect and may produce inaccurate or out-of-date data among correct information. Our simulation model contains heuristics which describe the operational and strategic information systems of an enterprise. The environment of the enterprise may be stable or dynamic. A mathematical formula, based on simulations, is developed. This formula describes how the relative value of an MIS depends on such factors as the accuracy of an operational information system, delays in information flow, the quality of a strategic information system, a reinvestment ratio used in the enterprise, and a number of investment periods. This formula has been found suitable in an enterprise with a strategically stable environment, but not with a turbulent environment.

18.
The combination of group technology and heuristic is used to schedule jobs on a machine equipped with an automatic tool changer (ATC). The problem is prominent in flexible manufacturing systems where the efficiency in operation is, in part, obtained by use of ATC. The procedure presented here is efficient and does not require complex mathematics.

19.
A decision maker (DM) is characterized by two binary relations. The first reflects choices that are rational in an “objective” sense: the DM can convince others that she is right in making them. The second relation models choices that are rational in a “subjective” sense: the DM cannot be convinced that she is wrong in making them. In the context of decision under uncertainty, we propose axioms that the two notions of rationality might satisfy. These axioms allow a joint representation by a single set of prior probabilities and a single utility index. It is “objectively rational” to choose f in the presence of g if and only if the expected utility of f is at least as high as that of g given each and every prior in the set. It is “subjectively rational” to choose f rather than g if and only if the minimal expected utility of f (with respect to all priors in the set) is at least as high as that of g. In other words, the objective and subjective rationality relations admit, respectively, a representation à la Bewley (2002) and à la Gilboa and Schmeidler (1989). Our results thus provide a bridge between these two classic models, as well as a novel foundation for the latter.
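The two representations can be contrasted on a toy example. The sketch below (acts, priors, and function names are illustrative) checks Bewley-style unanimity over a set of priors against the Gilboa–Schmeidler maxmin comparison:

```python
def expected_utilities(act, priors):
    """Expected utility of an act (a payoff vector over states) under each prior."""
    return [sum(p * u for p, u in zip(prior, act)) for prior in priors]

def objectively_preferred(f, g, priors):
    """Bewley-style: f weakly dominates g under every prior in the set."""
    return all(ef >= eg for ef, eg in
               zip(expected_utilities(f, priors), expected_utilities(g, priors)))

def subjectively_preferred(f, g, priors):
    """Gilboa-Schmeidler maxmin: compare worst-case expected utilities."""
    return min(expected_utilities(f, priors)) >= min(expected_utilities(g, priors))

priors = [(0.5, 0.5), (0.2, 0.8)]
f = (10.0, 0.0)   # risky act
g = (4.0, 4.0)    # safe act
print(objectively_preferred(f, g, priors))   # unanimity fails for the risky act
print(subjectively_preferred(g, f, priors))  # the safe act wins on worst case
```

Here neither act objectively dominates the other, yet the maxmin criterion still ranks the safe act above the risky one, illustrating how the subjective relation completes the objective one.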

20.
It is widely known that when there are errors with a moving‐average root close to −1, a high order augmented autoregression is necessary for unit root tests to have good size, but that information criteria such as the AIC and the BIC tend to select a truncation lag (k) that is very small. We consider a class of Modified Information Criteria (MIC) with a penalty factor that is sample dependent. It takes into account the fact that the bias in the sum of the autoregressive coefficients is highly dependent on k and adapts to the type of deterministic components present. We use a local asymptotic framework in which the moving‐average root is local to −1 to document how the MIC performs better in selecting appropriate values of k. In Monte‐Carlo experiments, the MIC is found to yield huge size improvements to the DFGLS and the feasible point optimal PT test developed in Elliott, Rothenberg, and Stock (1996). We also extend the M tests developed in Perron and Ng (1996) to allow for GLS detrending of the data. The MIC along with GLS detrended data yield a set of tests with desirable size and power properties.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号