Similar Documents
1.
Various consensus methods proposed for ranking problems yield controversial rankings and/or tied rankings which are vulnerable to considerable dispute. These include Borda-Kendall (BK) and minimum-variance (MV) methods. This paper compares three continuous (ratio-scale) consensus scoring methods with BK and MV ranking methods. One method, termed GM, is an eigenvector scaling of the geometric-mean consensus matrix. GM allows for (1) paired-comparison voting inputs (as opposed to all-at-once ranking), (2) pick-the-winner preference voting, and (3) ratio-scale preference voting. GM is relatively simple to calculate on small computers or calculators, and merging of “close” candidates into tied rankings can be achieved by using an ε-threshold tie rule discussed in this paper. The GM method thus can be used for paired-comparison voting to calculate both a ratio-scaled consensus index (based on a consensus eigenvector) and a ranking of candidates that allows for ties between “close” candidates. Eigenvalue analysis is used as a means of evaluating voter inconsistencies.
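The GM construction described above can be sketched in a few lines. This is a hypothetical reconstruction from the abstract, not the paper's implementation: the voter matrices, candidate count, and the power-iteration scheme are all illustrative assumptions.

```python
import math

def gm_matrix(matrices):
    # Element-wise geometric mean across voters' positive reciprocal
    # pairwise-comparison matrices (entry a_ij = strength of i over j).
    n = len(matrices[0])
    m = len(matrices)
    return [[math.exp(sum(math.log(M[i][j]) for M in matrices) / m)
             for j in range(n)] for i in range(n)]

def principal_eigenvector(A, iters=200):
    # Power iteration; A is positive, so this converges to the Perron
    # (principal) eigenvector, normalized to sum to 1.
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Two hypothetical voters comparing three candidates:
v1 = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]
v2 = [[1, 3, 9], [1/3, 1, 3], [1/9, 1/3, 1]]
scores = principal_eigenvector(gm_matrix([v1, v2]))
```

Because both example matrices are internally consistent, the consensus scores come out in the ratio 6 : √6 : 1, and the eigenvector ranking agrees with both voters.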

2.
This paper deals with decisions on the temporal distribution of workload in scheduling discrete and diversified production. A new way of formulating the scheduling problem is proposed, from which some concepts and tools are presented. The notion of time resource interval objects, TRIs, allows the management of technical (time and resource) aspects at the different levels of a hierarchical structuring of the set of decisions taken in the workshop, from ‘load distribution’ types to ‘effective realization of the operations’ types. Constraint-based reasoning handles the different TRIs corresponding to given kinds of decisions; it highlights the bounds to be respected when deciding, so as to remain consistent with an initial set of constraints issued, for example, from an upper level of decisions. Decisions on load temporal distribution consist of readjusting some time constraints on a set of planned operations, taking into account the (or some more detailed) constraints on the resource(s) on which they have been planned, such as finite capacity and/or minimal profitability. The analysis of temporal proximities of the planned operations involves a particular structuring of the time axis into successive time intervals: these structures are associated with sets of temporal bounds and are called adjacent decompositions of the time axis. Such a decomposition introduces some specific TRIs, associated with load constraints (coming from the planned operations) and resource constraints (coming from limited quantities of resource, or profitability concerns). While respecting the given time and resource constraints, these TRIs can ‘exchange’ quantities of load in a communicating-vessels fashion. These phenomena have been modelled as bounded flows in a temporal network, and offer new flexible load curves under finite capacities to support decision making.

3.
Three laboratory experiments were conducted to assess the relative strengths and weaknesses of bar, symbol, and line graphs for performing a variety of elementary information extraction tasks using two dependent variables, time and accuracy. The findings indicate that the degree of support provided by a particular graph format for a particular data extraction task depends on the matching of format and task in terms of their anchoring characteristics. Anchoring, in this context, refers to the phenomenon that specific and diverse parts of a graph are segmented by the reader to act as salient and relevant cues, or anchors, when different classes of information are to be extracted from the graph. A data extraction task has high x-value (y-value) anchoring if the x-axis (y-axis) component is represented in the question as either a given value or an unknown value. Conversely, a task has low x-value (y-value) anchoring if the x-axis (y-axis) component is not represented in the question as either a given value or as an unknown value. Data extraction accuracy was not significantly affected by presentation format. Bars provided the best time performance for data extraction tasks having high anchoring on both axes but were not appropriate for tasks having low anchoring on either the y axis or both the x and y axes. Line graphs tended to be worse in terms of time performance for tasks having high anchoring on both axes although they were as fast or better than other representations for tasks having low anchoring on both axes. Symbol plots appeared to possess anchoring characteristics associated with both bars and line graphs. Symbols (as with bars) tended to produce a time performance superior to that of line graphs for tasks having high anchoring on both axes; and (as with line graphs) symbols allowed faster results than bar graphs for tasks having low anchoring on either the y axis or both the x and y axes.

4.
This paper attempts to isolate and analyze the principal ideas of multiobjective optimization. We do this without casting aspersions on single-objective optimization or championing any one multiobjective technique. We examine each fundamental idea for strengths and weaknesses and subject two—efficiency and utility—to extended consideration. Some general recommendations are made in light of this analysis. Besides the simple advice to retain single-objective optimization as a possible approach, we suggest that three broad classes of multiobjective techniques are very promising in terms of reliably, and believably, achieving a most preferred solution. These are: (1) partial generation of the efficient set, a rubric we use to unify a wide spectrum of both interactive and analytic methods; (2) explicit utility maximization, a much-overlooked approach combining multiattribute decision theory and mathematical programming; and (3) interactive implicit utility maximization, the popular class of methods introduced by Geoffrion, Dyer, and Feinberg [24] and extended significantly by others.

5.
A recent Decision Sciences paper considered maximizing the probability of achieving a profit target in a two-product newsboy problem. Numerical solutions to this problem revealed some intriguing properties, but the authors were unable to analytically explain many of their results. This paper presents an analytical solution procedure to this problem for the case of uniformly distributed demands. The analytical structure reveals more intriguing properties and these properties are proven and explained.

6.
The distribution of lead time demand is essential for determining reorder points in inventory systems. Usually, the distribution of lead time demand is approximated directly. However, in some cases it may be worthwhile to take the demand per unit time and lead time into account, particularly when specific information is available. This paper deals with the situation where a supplier, who produces on order in fixed production cycles, provides information on the status of the coming production run. The retailer can use this information to gain insight into the lead-time process. A fixed order (s_v, Q) strategy is presented, with a set of reorder points s_v depending on the time t until the first possible delivery, which is determined by the information of the supplier. A Markov model that analyzes a given (s_v, Q) strategy is used to quantify the value of the information provided by the supplier. Some numerical examples show that the approach may lead to considerable cost savings compared to the traditional approach that uses only one single reorder point, based on a two-moments approximation. Using this numerical insight, the pros and cons of a more frequent exchange of information between retailers and suppliers can be balanced.

7.
Robert Doktor, Decision Sciences, 1983, 14(4): 607-612
Ample literature attests to the existence of differential views of causation held by Japanese as compared to Americans. Some new evidence links, in a rather complex manner, these differing causation maps to physiological brain structure. Review of this new evidence somewhat clarifies the nature of the differences in views of causation and preliminarily points toward the developmental phenomena underlying these differences. This, in turn, may help researchers interpret the differences in management practices existing within the two cultures under study.

8.
In a recent issue of this journal, Watkins [13] presented an approach for discovery of decision-maker perceptions of the complexity (dimensionality) of information items that might be supplied by a decision support system. Through use of multidimensional scaling and cluster analysis, relatively homogeneous groups of decision makers, sharing common perceptions of various information items, were formed. This prior research was referred to as a first step in suggesting that information reports could be tailored to groups of decision makers classified on the basis of common perceptions of information. The current research extends the prior study by evaluating decision-maker preferences for information in a variety of decision-making scenarios in relation to the previously identified perceptions of the information. The results suggest that the tailoring of information to groups of decision makers should be based on both perceptions of and preferences for information. Even so, the decision tasks are shown to affect preferences for information, which may complicate attempts to tailor information to groups of decision makers.

9.
Jang W. Ra, Decision Sciences, 1999, 30(2): 581-599
The pairwise comparison technique is a building block of the Analytic Hierarchy Process (AHP), which has been popularly used for multicriteria decision analysis. This paper develops a shortcut technique in which only n paired comparisons forming a closed chain are needed for n decision elements. Together with a simple and intuitive measure of inconsistency, this technique derives the relative weights of decision elements via easy step-by-step calculations on a spreadsheet format. Its performance has been tested on Saaty's wealth of nations example. Notably, the ranking and weights yielded by this alternative technique are identical to Harker's incomplete pairwise comparison solution for the same chain orientation for the example tested.
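The closed-chain idea can be illustrated with a short sketch. The `chain_weights` helper and the log-product inconsistency measure below are illustrative assumptions, not Ra's published formulas: given n chained judgments r[i] ≈ w[i]/w[i+1] (indices mod n), the first n−1 ratios fix the weights, and the closing ratio supplies a consistency check, since the product of all n ratios equals 1 for a perfectly consistent chain.

```python
import math

def chain_weights(r):
    # r[i] approximates w[i] / w[i+1], with r[-1] closing the chain
    # back to element 0. Derive weights from the first n-1 ratios.
    n = len(r)
    w = [1.0]
    for i in range(n - 1):
        w.append(w[-1] / r[i])
    s = sum(w)
    w = [x / s for x in w]
    # |log of the product of all ratios| is 0 for a consistent chain.
    inconsistency = abs(math.log(math.prod(r)))
    return w, inconsistency

# Consistent chain for weights 4:2:1  ->  r = (4/2, 2/1, 1/4)
w, inc = chain_weights([2.0, 2.0, 0.25])
```

For the consistent example the inconsistency measure is zero and the normalized weights are 4/7, 2/7, 1/7.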

10.
This paper analyzes an expert resolution problem under an uncertain dichotomous choice situation. The experts share a common system of norms and therefore they all prefer the alternative that best suits their purpose. The selection of such an alternative is referred to as a correct choice. Our analysis of optimal decision rules for panels of independent experts is pursued for n-member decision-making bodies, n ≤ 5. The suggested optimality criterion is the maximization of the probability of the panel's making the correct choice. Within our framework, this criterion is equivalent to the more common criterion of expected-utility maximization. For three-member panels of experts, the expert resolution problem is solved and illustrated by means of a medical application. For four-member panels, we list the three relevant decision rules, specify the conditions for all possible rankings of these rules, and, finally, present an extended consulting application. We conclude by listing seven relevant decision rules in the case of five-member decision-making bodies.
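The optimality criterion can be made concrete with a small enumeration. The competence values and the two rules compared below are illustrative assumptions, not taken from the paper: for three independent experts with known competences, we compute each rule's probability of a correct panel choice by summing over all vote patterns.

```python
from itertools import product

def rule_success(p, rule):
    # Probability that `rule` picks the correct alternative, where p[i]
    # is expert i's probability of voting correctly (independently).
    total = 0.0
    for votes in product([0, 1], repeat=len(p)):   # 1 = correct vote
        prob = 1.0
        for pi, v in zip(p, votes):
            prob *= pi if v else (1 - pi)
        if rule(votes):
            total += prob
    return total

def majority(v):          # simple majority of three
    return sum(v) >= 2

def expert_rule(v):       # always follow the most competent expert
    return v[0] == 1

p = (0.9, 0.6, 0.6)       # hypothetical competences, sorted descending
```

With one expert far more competent than the others, deferring to that expert (0.9) beats simple majority (0.792) under this criterion, which is the kind of ranking of decision rules the paper characterizes.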

11.
This paper explores the two-product newsboy problem. Solution procedures are developed to find the optimal production quantities of each product that will maximize the probability of achieving a profit target. The problem is shown to be surprisingly challenging, and numerical results obtained for only the more restrictive cases exhibit interesting behavior with important decision implications. For example, the results suggest that, if a firm has two single-product divisions and each will receive a bonus for achieving a profit target, it is beneficial for the two divisions to cooperate if the targets are lax and the profit margins are high, but not if the targets are tight and the profit margins low. Also, between two products having different profit margins, one should produce more of the product with the lower profit margin if the target is sufficiently lax. This exploration motivates further efforts in solving the two-product problem for the more general cases and also in extending the problem to three or more products.
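The profit-target objective can be estimated by simulation. This Monte-Carlo sketch is an assumption-laden simplification rather than the authors' solution procedure: the prices, costs, and uniform demand bounds are made up, and salvage values and shortage penalties are omitted.

```python
import random

def hit_probability(q, price, cost, lo, hi, target, trials=50_000, seed=1):
    # Estimate P(total profit >= target) for independent Uniform(lo, hi)
    # demands, order quantities q, unit prices and unit costs per product.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        profit = 0.0
        for qi, pi, ci in zip(q, price, cost):
            d = rng.uniform(lo, hi)
            profit += pi * min(qi, d) - ci * qi   # sell min(order, demand)
        if profit >= target:
            hits += 1
    return hits / trials

p = hit_probability(q=(60, 60), price=(10, 10), cost=(6, 6),
                    lo=0, hi=100, target=200)
```

Sweeping `q` over a grid of candidate quantities and keeping the maximizer gives a brute-force counterpart to the analytical procedures the papers develop.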

12.
Disasters garner attention when they occur, and organizations commonly extract valuable lessons from visible failures, adopting new behaviors in response. For example, the United States saw numerous security policy changes following the September 11 terrorist attacks and emergency management and shelter policy changes following Hurricane Katrina. But what about those events that occur that fall short of disaster? Research that examines prior hazard experience shows that this experience can be a mixed blessing. Prior experience can stimulate protective measures, but sometimes prior experience can deceive people into feeling an unwarranted sense of safety. This research focuses on how people interpret near‐miss experiences. We demonstrate that when near‐misses are interpreted as disasters that did not occur and thus provide the perception that the system is resilient to the hazard, people illegitimately underestimate the danger of subsequent hazardous situations and make riskier decisions. On the other hand, if near‐misses can be recognized and interpreted as disasters that almost happened and thus provide the perception that the system is vulnerable to the hazard, this will counter the basic “near‐miss” effect and encourage mitigation. In this article, we use these distinctions between resilient and vulnerable near‐misses to examine how people come to define an event as either a resilient or vulnerable near‐miss, as well as how this interpretation influences their perceptions of risk and their future preparedness behavior. Our contribution is in highlighting the critical role that people's interpretation of the prior experience has on their subsequent behavior and in measuring what shapes this interpretation.

13.
In an earlier issue of Decision Sciences, Jesse, Mitra, and Cox [1] examined the impact of inflationary conditions on the economic order quantity (EOQ) formula. Specifically, the authors analyzed the effect of inflation on order quantity decisions by means of a model that takes into account both inflationary trends and time discounting (over an infinite time horizon). Their analysis employed two models: a current-dollars model and a constant-dollars model. These models were derived by setting up a total cost equation in the usual manner and then finding the optimum order quantity that minimizes the total cost. Jesse, Mitra, and Cox [1] found that the EOQ is approximately the same under both conditions, with or without inflation. However, we disagree with this conclusion and show that the EOQ will be different under inflationary conditions, provided that the inflationary conditions are properly accounted for in the formulation of the total cost model.
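For reference, the classic inflation-free EOQ that both cited models adjust is the order quantity minimizing the sum of ordering and holding costs; the parameter values below are illustrative, not from either paper.

```python
import math

def eoq(D, K, h):
    # Classic economic order quantity: D = annual demand (units/year),
    # K = fixed cost per order, h = holding cost per unit per year.
    # Minimizes total cost  K*D/Q + h*Q/2  at  Q* = sqrt(2*D*K/h).
    return math.sqrt(2 * D * K / h)

q = eoq(D=1200, K=50, h=6)   # sqrt(2*1200*50/6) = sqrt(20000)
```

The debated models modify the holding-cost term with inflation and discount rates; under the critique above, the optimal quantity then departs from this baseline value.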

14.
This paper describes a case assignment (calendaring) algorithm for a multi-judge appellate court system. In the algorithm, cases of unequal work content are selected for assignment to one of m panels (or clusters) from a set of N available cases. Each panel of cases is heard by a team of three judges. Each appellate case has an estimated work load and a priority ranking based on the type of appeal and filing date with the court. The algorithm balances both the total work load and the number of cases assigned to each panel while ensuring that the highest-priority cases among those available are assigned. The assignment problem is normally capacity constrained in that not all of the N cases can be assigned to one of the m panels on the monthly calendar. The algorithm is based on a neighborhood search and bounding principle that continually improves upon an initial feasible solution. Empirical results are presented to demonstrate the effectiveness and efficiency of the algorithm.
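The balancing component might be sketched as greedy seeding plus a first-improvement swap search. This is an illustrative reconstruction that ignores the priority and capacity constraints; the `assign` and `improve_once` helpers and the case workloads are made up, not the paper's algorithm.

```python
def improve_once(panels, loads, workloads):
    # First-improvement neighborhood move: swap one case between two
    # panels whenever the swap reduces the spread of panel workloads.
    m = len(panels)
    spread = max(loads) - min(loads)
    for a in range(m):
        for b in range(a + 1, m):
            for i in panels[a]:
                for j in panels[b]:
                    delta = workloads[j] - workloads[i]
                    trial = loads[:]
                    trial[a] += delta
                    trial[b] -= delta
                    if max(trial) - min(trial) < spread - 1e-9:
                        panels[a].remove(i); panels[b].remove(j)
                        panels[a].append(j); panels[b].append(i)
                        loads[a] += delta; loads[b] -= delta
                        return True
    return False

def assign(workloads, m):
    # Greedy seeding: next-largest case goes to the lightest panel,
    # then swap until no single exchange improves the balance.
    panels = [[] for _ in range(m)]
    loads = [0.0] * m
    for i in sorted(range(len(workloads)), key=lambda k: -workloads[k]):
        a = loads.index(min(loads))
        panels[a].append(i)
        loads[a] += workloads[i]
    while improve_once(panels, loads, workloads):
        pass
    return panels, loads

work = [8, 7, 6, 5, 5, 4, 3, 2, 1]   # made-up case work contents
panels, loads = assign(work, m=3)
```

The court algorithm additionally bounds the search and enforces priority and monthly-capacity constraints, but the improve-from-a-feasible-solution structure is the same.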

15.
A critical aspect in establishing environmental policies lies in the proper assessment of the value of the resource being affected. Standard risk assessment analyses calculate the cost of pollution as consisting solely of the cost to remediate a site. This traditional definition is extended here to include the lost value of groundwater. These concepts and their impact on decision-making analyses are illustrated through the case of municipal waste landfills. Based on data from existing polluting sites, a simple cost-benefit probabilistic analysis is conducted first, which equates, as is the practice, the cost of pollution to that of remediation. This leads rationally to selection of the lowest-protection technology. It is then argued, on plausible grounds, that groundwater is reduced in value from potable high-quality water to irrigation water, which is what remediation returns. The arguments consist of: (a) the ratio of the subsidized prices of drinking to irrigation water reflects the relative value of the use of water; (b) the amount paid for remediation, in each case, represents, at a minimum, the value of the water recovered; and (c) the lost value of groundwater equals the value of drinking water minus the value of irrigation water. Incorporation of this lost value of groundwater is sufficient to drastically alter the conclusions of the decision-making analysis and make the highest-level technology the most rational and profitable alternative. The broader point of this article is that proper accounting of environmental costs is necessary in order to alter environmental policies and practices.

16.
Recently, the Financial Accounting Standards Board (FASB) reevaluated accounting for pension plans. The issue is emotional and highly political in nature. The FASB attempted to justify its approach on the basis of measuring economic activity, but it failed to provide much in the way of analytical support. This paper provides a managerial decision model and an economic basis for the existence of pension plans. A pension plan is described as a cost-saving, risk-sharing, incentive contract. The analysis is developed using agency theory. The model presented here meets three suggested objectives of an employer: (1) maximization of utility through the maximization of profit; (2) the ability to conform the risk characteristics of an employment contract to the risk characteristics of the employer; and (3) diversification of the risk inherent in the employment contract. Profit is maximized by producing cost savings associated with employee tenure and loyalty. Sharing cost savings with employees (i.e., offering a pension plan) meets the above objectives. The employer determines the optimal sharing rate for the expected cost savings. An examination of the employer's underlying decision process reveals implications for pension plan accounting which generally are consistent with and support the FASB's Statement of Financial Accounting Standards No. 87 [5].

17.
This study describes a novel method of assessing risk communication effectiveness by reporting an evaluation of a tsunami information brochure by 90 residents of three Pacific coast communities that are vulnerable to a Cascadia Subduction Zone earthquake and tsunami—Commencement Bay, Washington; Lincoln City, Oregon; and Eureka, California. Study participants viewed information that was presented in DynaSearch, an internet-based computer system that allowed them to view text boxes and tsunami inundation zone maps. DynaSearch recorded the number of times each text box or map was clicked and the length of time that it was viewed. This information viewing phase was followed by questionnaire pages assessing important aspects of tsunami hazard and sources of tsunami warnings. Participants gave the longest click durations to what to do in the emergency period during earthquake shaking and in its immediate aftermath before a tsunami arrives—topics that should be displayed prominently in tsunami brochures and emphasized in talks to community groups. The smallest adjusted click durations were associated with advance preparations for a tsunami—topics that can be posted on websites whose URLs are printed in the brochures.

18.
Lixin Tang, Gongshu Wang, Omega, 2008, 36(6): 976
This paper investigates two batching problems for steelmaking and continuous-casting (SCC) production in an integrated iron and steel enterprise. The task is to decide how to consolidate ordered slabs into charges and then how to group charges into casts. Effective decisions on these batching problems can help to balance the requirements of materials in downstream production lines, improve customer satisfaction levels, and reduce production costs (including fewer open ordered slabs, less slab quality upgrading, fewer steel-grade changeovers, and less inefficient utilization of tundish lives). We first formulate the problems as integer-programming models by considering practical constraints and requirements, and then develop two heuristic algorithms for the corresponding batching problems. By embedding the above models and algorithms, we develop decision support system (DSS) software with an interactive planning editor. The DSS has been tested using a practical data set collected from the steelmaking plant at Baosteel, one of the most advanced iron and steel enterprises in China. Computational experiments demonstrate that the models and algorithms can generate satisfactory solutions when they work together with the planning editor in the DSS.

19.
An Improved Similarity Measurement Model for CBR Case Retrieval
Case similarity measurement is a key problem in case-based reasoning (CBR) for emergency decision management of unexpected incidents. Based on the nearest-neighbor algorithm, this paper proposes an improved case similarity measurement model. A structural similarity measure is introduced to capture the effect of missing data on the similarity of case structural features, and the coefficient of variation is introduced to measure the substitutability of case attributes, precisely describing the contribution to case similarity of attributes with different degrees of substitutability. Finally, the effectiveness of the model is verified through a concrete case retrieval example.
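The nearest-neighbor similarity with coefficient-of-variation attribute weights might be sketched as follows. The `cv_weights` and `similarity` helpers, the tiny case base, and the range normalization are illustrative assumptions, not the paper's exact model (in particular, the structural similarity term for missing data is omitted).

```python
import statistics

def cv_weights(case_base):
    # Weight each attribute by its coefficient of variation across the
    # case base: attributes that vary more discriminate more.
    n_attrs = len(case_base[0])
    cvs = []
    for j in range(n_attrs):
        col = [case[j] for case in case_base]
        mean = statistics.fmean(col)
        cvs.append(statistics.pstdev(col) / mean if mean else 0.0)
    total = sum(cvs)
    return [c / total for c in cvs]

def similarity(a, b, weights, ranges):
    # 1 minus the weighted, range-normalized city-block distance.
    return 1.0 - sum(w * abs(x - y) / r
                     for w, x, y, r in zip(weights, a, b, ranges))

cases = [[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]]   # toy case base
w = cv_weights(cases)
ranges = [2.0, 20.0]                              # observed attribute ranges
```

Retrieval then returns the stored case with the highest similarity to the query; an identical case scores 1.0 and a maximally distant one scores 0.0.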

20.