Similar Documents
A total of 20 similar documents were found (search time: 15 ms).
1.
Samuel Eilon, Omega, 1987, 15(6)
The budget problem of selecting projects (or activities) with known values (or payoffs) and associated costs, subject to a prescribed maximum budget, is akin to the knapsack problem, which is well documented in the literature. The optimal solution to maximise the total value of selected projects for a given budget constraint can readily be obtained. In practice, budgets are often somewhat flexible, or subject to possible changes, so that an optimal solution for a given budget value may not remain optimal when the budget is modified. It is, therefore, sensible in many situations to consider a budget range, instead of a single budget value. In addition to their original objective of maximising the total value of selected projects, decision makers are often concerned to get ‘value for money’, indicated by the ratio of payoff to cost. This paper examines how these questions can be tackled through the introduction of a stability index, to guide project selection within a defined budget range, and the use of a portfolio diagram, to help in the ranking of projects with respect to the stated twin objectives.
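The selection problem this abstract describes is a 0/1 knapsack. As a minimal sketch (not the paper's stability-index or portfolio-diagram method), the following dynamic program finds the value-maximising project subset for a given budget; the project values and costs are hypothetical:

```python
def select_projects(values, costs, budget):
    """0/1 knapsack DP: maximise total value of selected projects within budget."""
    n = len(values)
    best = [0] * (budget + 1)          # best[b] = max value achievable with budget b
    take = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for b in range(budget, costs[i] - 1, -1):
            if best[b - costs[i]] + values[i] > best[b]:
                best[b] = best[b - costs[i]] + values[i]
                take[i][b] = True
    selected, b = [], budget           # walk back through the table to recover the set
    for i in range(n - 1, -1, -1):
        if take[i][b]:
            selected.append(i)
            b -= costs[i]
    return best[budget], sorted(selected)
```

Re-solving over a whole budget range (e.g. every budget from 40 to 60) and counting how often each project appears in the optimum gives a crude analogue of the stability idea the paper formalises.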

2.
Collusion in auctions, under different assumptions on the distributions of bidders' private valuations, has been studied extensively over the years. With the recent development of on-line markets, auctions are becoming an increasingly popular procurement method. The emergence of Internet marketplaces makes auction participation much easier and more convenient, since no physical presence of bidders is required. In addition, bidders in on-line auctions can easily switch their identities. Thus, it may very well happen that the bidders in an auction have very little, if any, prior knowledge about the distributions of other bidders' valuations. We propose an efficient distribution of collusive profit for second-price sealed-bid auctions in such an environment. Unlike some known mechanisms, which balance the budget only in expectation, our approach (which we call Random k) balances the budget ex post. While truth-telling is not a dominant strategy for Random k, it is a minimax regret equilibrium.

3.
This article presents a qualitative risk assessment of the acquisition of meticillin-resistant Staphylococcus aureus (MRSA) in pet dogs, representing an important first step in exploring the risk of bidirectional MRSA transfer between dogs and humans. A conceptual model of the seven potential pathways for MRSA acquisition in a dog in any given 24-hour period was developed, and the data available to populate that model were considered qualitatively. Humans were found to represent the most important source of MRSA for dogs in both community and veterinary hospital settings. The environment was found to be secondary to humans in importance, and other dogs less important still. This study highlights some important methodological limitations of a technique that is heavily relied upon for qualitative risk assessments and applies a novel process, the use of relative risk ranking, to enable the generation of a defensible output using a matrix combination approach. Given the limitations of the prescribed methods as applied to the problem under consideration, further validation, or repudiation, of the findings contained herein is called for using a subsequent quantitative assessment.

4.
Cox LA, Risk Analysis, 2012, 32(7):1244-1252
Simple risk formulas, such as risk = probability × impact, or risk = exposure × probability × consequence, or risk = threat × vulnerability × consequence, are built into many commercial risk management software products deployed in public and private organizations. These formulas, which we call risk indices, together with risk matrices, “heat maps,” and other displays based on them, are widely used in applications such as enterprise risk management (ERM), terrorism risk analysis, and occupational safety. But, how well do they serve to guide allocation of limited risk management resources? This article evaluates and compares different risk indices under simplifying conditions favorable to their use (statistically independent, uniformly distributed values of their components; and noninteracting risk-reduction opportunities). Compared to an optimal (nonindex) approach, simple indices produce inferior resource allocations that for a given cost may reduce risk by as little as 60% of what the optimal decisions would provide, at least in our simple simulations. This article suggests a better risk-reduction-per-unit-cost index that achieves 98–100% of the maximum possible risk reduction on these problems for all budget levels except the smallest, which allow very few risks to be addressed. Substantial gains in risk reduction achieved for resources spent can be obtained on our test problems by using this improved index instead of simpler ones that focus only on relative sizes of risk (or of components of risk) in informing risk management priorities and allocating limited risk management resources. This work suggests the need for risk management tools to explicitly consider costs in prioritization activities, particularly in situations where budget restrictions make careful allocation of resources essential for achieving close-to-maximum risk-reduction benefits.
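The abstract's central point can be illustrated with a small sketch: a greedy allocation that ranks mitigation opportunities by the size of the risk can waste the budget on an expensive, low-yield measure, while ranking by risk reduction per unit cost does much better. All numbers are hypothetical and the index definitions are deliberately simplified, not those of the article:

```python
def allocate(risks, costs, reductions, budget, key):
    """Fund mitigation opportunities greedily in decreasing `key` order,
    skipping any that no longer fit the remaining budget."""
    order = sorted(range(len(risks)), key=key, reverse=True)
    achieved, spent = 0.0, 0.0
    for i in order:
        if spent + costs[i] <= budget:
            spent += costs[i]
            achieved += reductions[i]
    return achieved

# hypothetical data: current risk, mitigation cost, achievable risk reduction
risks      = [9.0, 6.0, 4.0, 3.0]
costs      = [4.0, 2.0, 1.0, 1.0]
reductions = [2.0, 5.0, 3.5, 2.5]
budget = 4.0

by_risk  = allocate(risks, costs, reductions, budget, key=lambda i: risks[i])
by_ratio = allocate(risks, costs, reductions, budget,
                    key=lambda i: reductions[i] / costs[i])
```

Here ranking by raw risk spends the entire budget on the biggest risk for a reduction of 2.0, while ranking by reduction per unit cost funds the three cheap measures for a total reduction of 11.0.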

5.
It is well known that the multiple knapsack problem is NP-hard and does not admit an FPTAS even in the case of two identical knapsacks. In contrast, the 0-1 knapsack problem with a single knapsack has been studied intensively, and effective exact and approximation algorithms exist. A natural approach to the multiple knapsack problem is therefore to pack the knapsacks successively using an effective algorithm for the 0-1 knapsack problem. This paper considers such an approximation algorithm, which packs the knapsacks in nondecreasing order of their capacities. We analyze this algorithm for the 2- and 3-knapsack problems using worst-case analysis and give all the corresponding error bounds.
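The successive-packing heuristic described above can be sketched as follows, assuming an exact 0/1 knapsack DP as the single-knapsack subroutine and hypothetical item data (the paper's worst-case error-bound analysis is not reproduced here):

```python
def knapsack(values, weights, cap):
    """Exact 0/1 knapsack DP; returns (best value, set of chosen local indices)."""
    best = [0] * (cap + 1)
    keep = [[False] * (cap + 1) for _ in weights]
    for i, (v, w) in enumerate(zip(values, weights)):
        for c in range(cap, w - 1, -1):
            if best[c - w] + v > best[c]:
                best[c] = best[c - w] + v
                keep[i][c] = True
    chosen, c = set(), cap             # recover the chosen items
    for i in range(len(weights) - 1, -1, -1):
        if keep[i][c]:
            chosen.add(i)
            c -= weights[i]
    return best[cap], chosen

def successive_pack(values, weights, capacities):
    """Pack the knapsacks one by one, in nondecreasing capacity order,
    each time solving a 0/1 knapsack over the items still unpacked."""
    remaining = list(range(len(values)))
    total = 0
    for cap in sorted(capacities):
        vals = [values[i] for i in remaining]
        wts  = [weights[i] for i in remaining]
        v, chosen = knapsack(vals, wts, cap)
        total += v
        remaining = [remaining[j] for j in range(len(remaining)) if j not in chosen]
    return total
```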

6.
The most interesting developments in the search for new planning procedures are in the areas of ecology, viewed as an ecosystem with the matrix structure of environmental impact analysis, and ekistics, the science of human settlements. The grid system developed in this approach, combined with the environmental impact analysis model, provides a complete systems approach. The problem is to relate this to system dynamics and to integrate the approach with the concept of society as a dynamic social system. The literature review highlights key works that lead to or use the systems approach. In attempting to identify a systems approach, many planning techniques and concepts are necessarily excluded. However, if one adopts this approach it is possible to test the use of any technique within a recognized framework rather than on the piecemeal basis used at present.

7.
Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose‐response modeling. It is a well‐known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low‐dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal‐response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap‐based confidence limits for the BMD. We explore the confidence limits’ small‐sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty.
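The isotonic-regression step can be sketched with the classical pool-adjacent-violators algorithm. The BMD inversion below uses simple linear interpolation at a 10% extra risk; this is a simplified stand-in for the article's bootstrap-based procedure, and the quantal-response data are hypothetical:

```python
def pava(y, w):
    """Pool-adjacent-violators: weighted isotonic (nondecreasing) fit to y."""
    blocks = []  # each block: [pooled mean, total weight, number of points pooled]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge backwards while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2, n1 + n2])
    fit = []
    for m, _, n in blocks:
        fit.extend([m] * n)
    return fit

def bmd_isotonic(doses, affected, n, bmr=0.10):
    """Nonparametric BMD sketch: isotonic fit to observed response rates,
    then inverse linear interpolation at extra risk = bmr."""
    rates = [a / m for a, m in zip(affected, n)]
    fit = pava(rates, n)
    p0 = fit[0]
    target = p0 + bmr * (1 - p0)  # extra-risk definition of the benchmark response
    for (d0, p_lo), (d1, p_hi) in zip(zip(doses, fit), zip(doses[1:], fit[1:])):
        if p_lo <= target <= p_hi and p_hi > p_lo:
            return d0 + (target - p_lo) / (p_hi - p_lo) * (d1 - d0)
    return None  # BMR not reached within the tested dose range
```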

8.
Maritime transportation is the major conduit of international trade and the primary link for global crude oil movement. Given the volume of oil transported on international maritime links, it is not surprising that oil spills of both minor and major types result, although most of the risk-related work has been confined to local settings. We propose an expected-consequence approach for assessing oil-spill risk from intercontinental transportation of crude oil that not only adheres to the safety guidelines specified by the International Maritime Organization but also outlines a novel technique that makes use of coarse global data to estimate accident probabilities. The proposed estimation technique, together with four of the most popular cost-of-spill models from the literature, was applied to study and analyze a realistic-size problem instance. Numerical analyses showed that a shorter route may not necessarily be less risky, and that an understanding of the inherent oil-spill risk of different routes could potentially facilitate both tanker routing decisions and the associated negotiations over insurance premiums between the transport company and the not-for-profit protection and indemnity clubs. Finally, we note that only the linear model should be used, along with one of the three nonlinear cost-of-spill models, for evaluating tanker routes.

9.
The U.S. Environmental Protection Agency undertook a case study in the Detroit metropolitan area to test the viability of a new multipollutant risk‐based (MP/RB) approach to air quality management, informed by spatially resolved air quality, population, and baseline health data. The case study demonstrated that the MP/RB approach approximately doubled the human health benefits achieved by the traditional approach while increasing cost less than 20%—moving closer to the objective of Executive Order 12866 to maximize net benefits. Less well understood is how the distribution of health benefits from the MP/RB and traditional strategies affects the existing inequalities in air‐pollution‐related risks in Detroit. In this article, we identify Detroit populations that may be both most susceptible to air pollution health impacts (based on local‐scale baseline health data) and most vulnerable to air pollution (based on fine‐scale PM2.5 air quality modeling and socioeconomic characteristics). Using these susceptible/vulnerable subpopulation profiles, we assess the relative impacts of each control strategy on risk inequality, applying the Atkinson Index (AI) to quantify health risk inequality at baseline and with either risk management approach. We find that the MP/RB approach delivers greater air quality improvements among these subpopulations while also generating substantial benefits among lower‐risk populations. Applying the AI, we confirm that the MP/RB strategy yields less PM2.5 mortality and asthma hospitalization risk inequality than the traditional approach. We demonstrate the value of this approach to policymakers as they develop cost‐effective air quality management plans that maximize risk reduction while minimizing health inequality.

10.
Sharing common production, resources, and services to reduce cost is important for not-for-profit operations, which work with limited, mission-oriented budgets, and effective cost allocation mechanisms are essential for encouraging such collaborations. In this study, we illustrate how rigorous methodologies can be developed to derive effective cost allocations that facilitate sustainable collaborations in not-for-profit operations, by modeling the cost allocation problem arising in an economic lot-sizing (ELS) setting as a cooperative game. Specifically, we consider the ELS game with general concave ordering cost. In this cooperative game, multiple retailers form a coalition by placing joint orders with a single supplier in order to reduce ordering cost. When both the inventory holding cost and the backlogging cost are linear functions, it can be shown that the core of this game is non-empty. The main contribution of this study is to show that a core allocation can be computed in polynomial time under the assumption that all retailers have the same cost parameters. Our approach is based on linear programming (LP) duality. More specifically, we study an integer programming formulation of the ELS problem and show that its LP relaxation admits zero integrality gap, which makes it possible to analyze the ELS game using LP duality. We show that there exists an optimal dual solution that defines an allocation in the core. An interesting feature of our approach is that not every optimal dual solution defines a core allocation; this is in contrast to the duality approach for other known cooperative games in the literature.

11.
Many service industries use revenue management to balance demand and capacity. The assumption of risk-neutrality lies at the heart of the classical approaches, which aim at maximizing expected revenue. In this paper, we give a comprehensive overview of the existing approaches, most of which were only recently developed, and discuss the need to take risk-averse decision makers into account. We then present a heuristic that maximizes conditional value-at-risk (CVaR). Although CVaR has become increasingly popular in finance and actuarial science due to its beneficial properties, this risk measure has not yet been considered in the context of revenue management. We are able to efficiently solve the optimization problem inherent in CVaR by taking advantage of specific structural properties that allow us to reformulate this optimization problem as a continuous knapsack problem. In order to demonstrate the applicability and robustness of our approach, we conduct a simulation study that shows that the new approach can significantly improve the risk profile in various scenarios.
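The continuous knapsack problem mentioned above has a well-known greedy solution: sort by value density and fill until capacity is exhausted, taking the last item fractionally. The sketch below shows this generic relaxation with hypothetical data; it is not the paper's CVaR reformulation itself:

```python
def continuous_knapsack(values, weights, capacity):
    """Fractional knapsack: greedy by value density is optimal when items
    may be accepted partially (the continuous relaxation of 0/1 knapsack)."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, fraction = 0.0, {}
    for i in order:
        if capacity <= 0:
            break
        take = min(weights[i], capacity)   # all of the item, or whatever still fits
        fraction[i] = take / weights[i]
        total += values[i] * fraction[i]
        capacity -= take
    return total, fraction
```

Because the greedy order is optimal for the continuous problem, the whole optimization reduces to a single sort, which is what makes such reformulations attractive computationally.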

12.
Duan Li, Risk Analysis, 2012, 32(11):1856-1872
Roy pioneered the concept and practice of risk management of disastrous events via his safety-first principle for portfolio selection. More specifically, his safety-first principle advocates an optimal portfolio strategy generated by minimizing the disaster probability, subject to the budget constraint and the mean constraint that the expected final wealth is not less than a preselected disaster level. This article studies the dynamic safety-first principle in continuous time and its application in asset and liability management. We reveal that the distortion resulting from dropping the mean constraint, a common practice to approximate the original Roy setting, either leads to a trivial case or changes the problem nature completely to a target-reaching problem, which produces a highly leveraged trading strategy. Recognizing the ill-posed nature of the corresponding Lagrangian method when retaining the mean constraint, we draw on an insight from the funding-level regulation of pension funds and modify the original safety-first formulation accordingly by imposing an upper bound on the funding level. This model revision enables us to solve the safety-first asset-liability problem completely by a martingale approach and to derive an optimal policy that follows faithfully the spirit of the safety-first principle: fighting for the best while preventing disaster from happening.

13.
W Thomas Lin, Omega, 1980, 8(3):375-382
An important problem confronting decision makers in modern organizations is how to plan and control in a multiple-goal decision setting. The usual approach to this problem is to assume one dominant goal and treat the others as constraints for budget planning purposes. The traditional accounting control system is a variance analysis that compares an ex ante planning budget, a budget adjusted to the actual activity level, and actual results. The present paper describes how to set up multiple-goal planning models using goal programming and multiple objective linear programming techniques. An opportunity-cost concept of ex post accounting variance analysis (in which a comparison is made among an ex ante budget, an ex post optimum budget, and actual results) is used as a control device. This ex post analysis will signal a deviation in any data input parameter in the planning models.

14.
Cost‐benefit analysis (CBA) is commonly applied as a tool for deciding on risk protection. With CBA, one can identify risk mitigation strategies that lead to an optimal tradeoff between the costs of the mitigation measures and the achieved risk reduction. In practical applications of CBA, the strategies are typically evaluated through efficiency indicators such as the benefit‐cost ratio (BCR) and the marginal cost (MC) criterion. In many of these applications, the BCR is not consistently defined, which, as we demonstrate in this article, can lead to the identification of suboptimal solutions. This is of particular relevance when the overall budget for risk reduction measures is limited and an optimal allocation of resources among different subsystems is necessary. We show that this problem can be formulated as a hierarchical decision problem, where the general rules and decisions on the available budget are made at a central level (e.g., central government agency, top management), whereas the decisions on the specific measures are made at the subsystem level (e.g., local communities, company division). It is shown that the MC criterion provides optimal solutions in such hierarchical optimization. Since most practical applications only include a discrete set of possible risk protection measures, the MC criterion is extended to this situation. The findings are illustrated through a hypothetical numerical example. This study was prepared as part of our work on the optimal management of natural hazard risks, but its conclusions also apply to other fields of risk management.
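The suboptimality risk the abstract points to can be seen in a small sketch: under a hard budget limit, greedy funding in benefit-cost ratio order can miss the portfolio with the largest total risk reduction. The measures and numbers below are hypothetical, and this is a simplified illustration rather than the article's hierarchical model:

```python
from itertools import combinations

def best_portfolio(costs, benefits, budget):
    """Exhaustive search: the measure portfolio with maximal total risk
    reduction that fits the budget (feasible for small discrete sets)."""
    n = len(costs)
    best = (0, ())
    for r in range(n + 1):
        for combo in combinations(range(n), r):
            if sum(costs[i] for i in combo) <= budget:
                best = max(best, (sum(benefits[i] for i in combo), combo))
    return best

def bcr_ranking(costs, benefits, budget):
    """Greedy funding in decreasing benefit-cost ratio (BCR) order."""
    order = sorted(range(len(costs)),
                   key=lambda i: benefits[i] / costs[i], reverse=True)
    spent, total, chosen = 0, 0, []
    for i in order:
        if spent + costs[i] <= budget:
            spent += costs[i]
            total += benefits[i]
            chosen.append(i)
    return total, tuple(sorted(chosen))
```

With costs [3, 2, 2], benefits [9, 5, 5], and budget 4, the greedy BCR rule funds the first measure alone (benefit 9), while the optimal portfolio funds the two cheaper measures (benefit 10).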

15.
The knapsack problem is NP-complete, and the unbounded knapsack problem is more complex and harder than the standard knapsack problem. In this paper, we apply quantum genetic algorithms (QGAs) to the unbounded knapsack problem. We first encode the problem in the QGA framework, defining the gene representation and the corresponding fitness function; the algorithm then searches for the combination of items with the largest total benefit that satisfies the capacity constraint, yielding the best solution found. Preliminary experiments indicate that the method produces good results.

16.
We consider stochastic variants of the NP-hard 0/1 knapsack problem in which item values are deterministic and item sizes are independent random variables with known, arbitrary distributions. Items are placed in the knapsack sequentially, and the act of placing an item in the knapsack instantiates its size. The goal is to compute a policy for inserting the items that maximizes the expected value of the set of items placed in the knapsack. The variants we study differ only in the formula for computing the value of the final solution obtained by the policy. We consider both nonadaptive policies (which designate a priori a fixed subset or permutation of items to insert) and adaptive policies (which can make dynamic decisions based on the instantiated sizes of the items placed in the knapsack so far). Our work characterizes the benefit of adaptivity. For this purpose we use a measure called the adaptivity gap: the supremum over instances of the ratio between the expected value obtained by an optimal adaptive policy and the expected value obtained by an optimal nonadaptive policy. We show that while this quantity is bounded by a constant for the variants considered in the literature, there are other variants where it is unbounded.

17.
A new approach for transforming MRP orders, planned periodically, e.g. on a weekly basis, into a detailed sequence of jobs is presented. In this single-machine model, the jobs are partitioned into families, and a family-specific set-up time is required at the start of each period and of each batch, where a batch is a maximal set of jobs in the same family that are processed consecutively. An integer program is formulated both for the problem of minimizing the number of overloaded periods and for the problem of minimizing the total overtime; these programs generate benchmark results for the heuristic approach. A heuristic model is developed that constructs a schedule in which overloaded periods are relieved and set-up time is saved. In this approach, the job sequence is constructed by repeatedly solving a knapsack problem. The weights used in this knapsack problem reflect the preferred priorities of the jobs not yet scheduled and determine the quality of the final sequence. The different features of the heuristic model are compared on a large set of test problems. The results show that the quality of the final sequence depends on an appropriate choice of the weights.

18.
We consider the problem of defining a strategy consisting of a set of facilities, taking into account also the locations to which they are assigned and the times at which they are activated. The facilities are evaluated with respect to a set of criteria. The plan has to be devised while respecting constraints related to different aspects of the problem, such as precedence restrictions due to the nature of the facilities; among them are constraints on the available budget. We also consider the uncertainty in the performances of the facilities with respect to the considered criteria and the plurality of stakeholders participating in the decision. The problem can be seen as a combination of prototypical operations research problems: the knapsack problem, the location problem, and project scheduling. Indeed, the basic building block of our model is a variable x_{ilt} that takes value 1 if facility i is activated in location l at time t, and 0 otherwise. Owing to the joint consideration of a location and a time in the decision variables, what we propose can be seen as a general space-time model for operations research problems. We discuss how such a model makes it possible to handle complex problems using several methodologies, including multiple attribute value theory and multiobjective optimization. With respect to the latter point, without any loss of generality, we consider compromise programming and an interactive methodology based on the Dominance-based Rough Set Approach. We illustrate the application of our model with a simple didactic example.

19.
Over the past decade, terrorism risk has become a prominent consideration in protecting the well‐being of individuals and organizations. More recently, there has been interest in not only quantifying terrorism risk, but also placing it in the context of an all‐hazards environment in which consideration is given to accidents and natural hazards, as well as intentional acts. This article discusses the development of a regional terrorism risk assessment model designed for this purpose. The approach taken is to model terrorism risk as a dependent variable, expressed in expected annual monetary terms, as a function of attributes of population concentration and critical infrastructure. This allows for an assessment of regional terrorism risk in and of itself, as well as in relation to man‐made accident and natural hazard risks, so that mitigation resources can be allocated in an effective manner. The adopted methodology incorporates elements of two terrorism risk modeling approaches (event‐based models and risk indicators), producing results that can be utilized at various jurisdictional levels. The validity, strengths, and limitations of the model are discussed in the context of a case study application within the United States.

20.
This paper shows that the problem of testing hypotheses in moment condition models without any assumptions about identification may be considered as a problem of testing with an infinite‐dimensional nuisance parameter. We introduce a sufficient statistic for this nuisance parameter in a Gaussian problem and propose conditional tests. These conditional tests have uniformly correct asymptotic size for a large class of models and test statistics. We apply our approach to construct tests based on quasi‐likelihood ratio statistics, which we show are efficient in strongly identified models and perform well relative to existing alternatives in two examples.


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd.  京ICP备09084417号