Similar Literature
20 similar documents found.
1.
Bryan H Massam, Ian D Askew. Omega, 1982, 10(2): 195-204
This paper looks at a variety of methods that can be used to evaluate a set of alternative policies against multiple criteria. The methods examined are the structural mapping of indifferences, utility values, lexicographic ordering, factor analysis, concordance analysis and multidimensional scaling. Each method is tested using hypothetical data for a problem in which alternative policies are proposed for allocating monies to housing and health projects in a town. The aim is to reveal, as objectively as possible, a set of preferred alternatives from which one can be chosen in the political decision-making process. After describing and testing the methods individually, they are compared both on the basis of their results and on the principles underlying their approach. Conclusions about the validity of each method are given, and it is emphasized that all methods should be used only as aids in the choice of an optimal policy.

2.
Once a process is stabilized using control charts, it is necessary to determine whether this process is capable of producing the desired quality, as determined by the specifications, without the use of some additional inspection procedure such as 100 percent inspection or acceptance sampling. One common method of making this determination is the use of process capability ratios. However, this approach may lead to erroneous decisions due to the omission of economic information. This paper attempts to remedy this situation by developing economic models to examine the profitability of different inspection policies. These models employ the quadratic loss function to represent the economic cost of quality from external failures, which is commonly omitted or overlooked. Moreover, assuming a normal distribution for the quality characteristic allows the use of simplified formulas that are provided. Thus the calculations can be made using standard normal tables and a calculator. Additionally, these economic models may be used to determine whether additional inspection procedures should be reinstated if the quality of the process were to decline, to make capital budgeting decisions involving new equipment that produces parts of a higher quality, and to determine the preferred 100 percent inspection plan or acceptance sampling plan.
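The closed forms this abstract alludes to follow from the normality assumption. A minimal sketch (the loss coefficient, target, and spec limits below are invented for illustration, not from the paper): the expected quadratic loss for X ~ N(μ, σ²) is k·(σ² + (μ − T)²), and the out-of-spec fraction comes from the standard normal CDF, computed here with `math.erf` in place of printed tables.

```python
import math

def expected_quadratic_loss(k, target, mu, sigma):
    """Expected Taguchi loss E[k*(X - target)^2] for X ~ N(mu, sigma^2):
    equals k * (sigma^2 + (mu - target)^2)."""
    return k * (sigma ** 2 + (mu - target) ** 2)

def phi(z):
    """Standard normal CDF via the error function (no tables needed)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fraction_out_of_spec(lsl, usl, mu, sigma):
    """Fraction of product falling outside [LSL, USL]."""
    return phi((lsl - mu) / sigma) + 1.0 - phi((usl - mu) / sigma)

# Illustrative numbers: target 10.0, process mean slightly off-centre.
loss = expected_quadratic_loss(k=2.0, target=10.0, mu=10.1, sigma=0.2)
p_bad = fraction_out_of_spec(lsl=9.4, usl=10.6, mu=10.1, sigma=0.2)
```

An inspection policy's payoff could then be compared against `loss` plus inspection cost, which is the kind of trade-off the abstract's models formalize.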

3.
Carbon emissions are one of the major causes of climate warming, and studying and forecasting the carbon emission growth rate can provide theoretical guidance for formulating low-carbon policy. Using the empirical mode decomposition (EMD) method, this paper decomposes China's carbon emission growth rate series into two components, a short-term fluctuation term and a trend term, and analyzes the effects of national policy, domestic macroeconomic changes, and the financial crisis on each component. On this basis, dynamic neural networks are used to forecast the trend term and the short-term fluctuation term separately, and their sum is taken as the final forecast of the carbon emission growth rate. Finally, the proposed method is compared with neural network models that take only carbon emissions or only the emission growth rate as input, using four error measures: the maximum, minimum, mean, and standard deviation of the absolute errors. The comparison shows that the proposed model forecasts effectively.
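EMD itself requires an iterative sifting procedure, but the additive decomposition this abstract builds on — series = trend + short-term fluctuation, forecast each part, then recombine — can be sketched with a simple moving-average stand-in (the window size and the growth-rate numbers below are illustrative, not the paper's data or method):

```python
def decompose(series, window=3):
    """Split a series into a trend (centred moving average, shrinking the
    window at the edges) and a short-term fluctuation term, so that
    trend[i] + fluctuation[i] == series[i] for every i."""
    n = len(series)
    half = window // 2
    trend = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        seg = series[lo:hi]
        trend.append(sum(seg) / len(seg))
    fluctuation = [s - t for s, t in zip(series, trend)]
    return trend, fluctuation

# Illustrative growth-rate series (not real emission data).
rates = [2.1, 3.4, 2.8, 3.9, 3.1, 4.2]
trend, fluct = decompose(rates)
recombined = [t + f for t, f in zip(trend, fluct)]  # equals rates exactly
```

In the paper's scheme each component would be fed to its own dynamic neural network; here the point is only that the two parts reconstruct the original series exactly.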

4.
This paper compares several different production control policies in terms of their robustness to random disturbances such as machine failures, demand fluctuations, and system parameter changes. Simulation models based on VLSI wafer fabrication facilities are utilized to test the performance of the policies. Three different criteria, namely, the average total WIP, the average backlog, and a cost function combining these measures, are used to evaluate performance. Among the policies tested, the Two‐Boundary Control policy outperforms the others.

5.
Non-stationary stochastic demands are very common in industrial settings with seasonal patterns, trends, business cycles, and limited-life items. In such cases, the optimal inventory control policies are also non-stationary. However, due to high computational complexity, non-stationary inventory policies are not usually preferred in real-life applications. In this paper, we investigate the cost of using a stationary policy as an approximation to the optimal non-stationary one. Our numerical study points to two important results: (i) Using stationary policies can be very expensive depending on the magnitude of demand variability. (ii) Stationary policies may be efficient approximations to optimal non-stationary policies when demand information contains high uncertainty, setup costs are high and penalty costs are low.
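The cost gap the abstract quantifies can be felt in a toy simulation. A rough sketch, not the paper's model: an order-up-to (base-stock) policy facing two alternating demand seasons, comparing one fixed level against season-specific levels (all costs, levels, and demand parameters below are invented):

```python
import random

def simulate_cost(base_stock_levels, demands, h=1.0, p=9.0):
    """Average per-period cost of an order-up-to policy: holding cost h
    per unit left over, penalty p per unit of unmet (lost) demand."""
    total = 0.0
    for s, d in zip(base_stock_levels, demands):
        total += h * max(s - d, 0.0) + p * max(d - s, 0.0)
    return total / len(demands)

random.seed(1)
# Seasonal demand: the mean alternates between a low and a high season.
means = [10, 30] * 25
demands = [max(0.0, random.gauss(m, 3)) for m in means]

stationary = simulate_cost([24] * len(demands), demands)        # one level all year
nonstationary = simulate_cost([m + 4 for m in means], demands)  # tracks the season
```

With demand this seasonal, the season-tracking policy is far cheaper; the abstract's point is that under high uncertainty or high setup costs the gap can shrink enough to justify the simpler stationary policy.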

6.
In this paper, we address several issues related to the use of data envelopment analysis (DEA). These issues include model orientation, input and output selection/definition, the use of mixed and raw data, and the number of inputs and outputs to use versus the number of decision making units (DMUs). We believe that within the DEA community, researchers, practitioners, and reviewers may have concerns and, in many cases, incorrect views about these issues. Some of the concerns stem from what is perceived as being the purpose of the DEA exercise. While the DEA frontier can rightly be viewed as a production frontier, it must be remembered that ultimately DEA is a method for performance evaluation and benchmarking against best practice. DEA can be viewed as a tool for multiple-criteria evaluation problems where DMUs are alternatives and each DMU is represented by its performance on multiple criteria, which are classified as DEA inputs and outputs. The purpose of this paper is to offer some clarification and direction on these matters.
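To make the input/output framing concrete, here is the simplest special case, not the general method: with a single input and a single output, the CCR efficiency score reduces to each DMU's output/input ratio normalized by the best observed ratio (the staff/cases data below is hypothetical):

```python
def ccr_efficiency_single(inputs, outputs):
    """CCR efficiency for single-input, single-output DMUs: with one
    input and one output the DEA linear program collapses to each DMU's
    output/input ratio divided by the best ratio observed, so a score
    of 1.0 means the DMU lies on the best-practice frontier."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical DMUs: input = staff employed, output = cases handled.
staff = [5, 8, 10, 6]
cases = [20, 40, 35, 18]
eff = ccr_efficiency_single(staff, cases)  # DMU 2 is the benchmark
```

With several inputs and outputs, each DMU instead gets its own weight-choosing linear program; the ratio shortcut above only holds for this degenerate case, but it shows why DEA scores are relative to observed best practice rather than to an external standard.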

7.
Bhp Rivett. Omega, 1980, 8(1): 81-93
Indifference mapping uses multidimensional scaling techniques to allocate multicriteria policies to points in a space, based on an input consisting of those pairs of policies which are equally attractive. The paper takes sets of policies, for each of which a single value has been pre-assigned, and maps them in two dimensions by applying a probability law to the assessment of indifferent pairs. The paper shows that the maps can be used both to recover the original value of each policy and to deduce the probability law which was applied.

8.
A large number of techniques for solving the cell formation problem have emerged in recent years. However, little effort has been spent on determining the procedures' relative performance. This paper identifies four problem areas for which important decisions must be made in connection with a comparative study: asymmetry among procedures with respect to input data, sensitivity to input data, ability of cell formation techniques to generate different solutions, and criteria for acceptable cell performance. Relying on a new taxonomy that categorizes cell formation techniques based on required input data, and a new approach to describing and manipulating shop data, this paper illustrates how choices within the four areas above can be resolved within the context of a comparative study. The experiments uncover fundamental relations between cell formation techniques, the types of input data they use, the characteristics of the data that drive the models, and the resulting performance.

9.
This paper discusses the attempts made to improve the profitability of a paper mill. The preliminary analysis of the company's management practices revealed that current sales forecasting, production and sales planning methods and inventory policies are potential areas for profitability improvement. Appropriate Box-Jenkins models were selected for sales forecasting. A linear programming model is developed to obtain an optimal production and sales plan. Inventory policies of class “A” items are revised to cut down ordering and holding costs. An analysis is made to decide on the optimal operating strategy when demand is less than production capacity. The total anticipated annual savings as a result of the study are very significant.

10.
Patrick Rivett. Omega, 1978, 6(5): 407-417
The paper is a development of the use of multidimensional scaling techniques for multiple criteria decision making. It takes the special case of policies to each of which a single value can be applied, and assigns indifferences between all pairs of these policies based on a series of probability laws. It is shown that the mapping performs well, not only in placing the high- and low-value policies at opposite ends of the map, but also in relating the position of policy points along the principal axis to the single values.

11.
12.
A robust process minimises the effect of the noise factors on the performance of a product or process. The variation in the performance of a robust process can be measured through modelling and analysis of process robustness. In this paper, a comprehensive methodology for modelling and analysis of process robustness is developed using a number of relevant tools and techniques, such as multivariate regression, control charting and simulation, within the broad framework of the Taguchi method. Specifically, the methodology covers process modelling using historical data on responses, input variables and parameters together with simulated noise-variable data; identification of the model responses at each experimental setting of the controllable variables; estimation of multivariate process capability indices; and control of their variability using control charting to determine optimal settings of the process variables via the design-of-experiments-based Taguchi method. The methodology is applied to a centrifugal casting process that produces worm-wheels for steam power plants, in view of the critical importance of maintaining consistent performance under various controllable input conditions. The results show that the determined process settings ensure minimum in-control variability with maximum performance of the centrifugal casting process, indicating an improved level of robustness.

13.
Longitudinal data are important in exposure and risk assessments, especially for pollutants with long half‐lives in the human body and where chronic exposures to current levels in the environment raise concerns for human health effects. It is usually difficult and expensive to obtain large longitudinal data sets for human exposure studies. This article reports a new simulation method to generate longitudinal data with flexible numbers of subjects and days. Mixed models are used to describe the variance‐covariance structures of input longitudinal data. Based on estimated model parameters, simulation data are generated with statistical characteristics similar to the input data. Three criteria are used to determine similarity: the overall mean and standard deviation, the variance components percentages, and the average autocorrelation coefficients. Following the discussion of mixed models, a simulation procedure is produced and numerical results are shown for one human exposure study. Simulations of three sets of exposure data successfully meet the above criteria. In particular, the simulations always retain the correct weights of inter‐ and intrasubject variances found in the input data. Autocorrelations are also well reproduced. Compared with other simulation algorithms, this new method stores more information about the overall input distribution so as to satisfy the above multiple criteria for statistical targets. In addition, it generates values from numerous data sources and simulates continuous observed variables better than existing methods. This new method also provides flexible options in both the modeling and simulation procedures according to various user requirements.
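The fit-then-simulate idea can be sketched with the simplest mixed model, a one-way random-effects model y_ij = μ + b_i + e_ij (the full paper uses richer variance-covariance structures; the parameters and sample sizes below are invented): estimate the inter- and intrasubject variance components from balanced data by the method of moments, then generate new subjects and days from the fitted components.

```python
import random

def fit_variance_components(data):
    """Moment estimates for y_ij = mu + b_i + e_ij with balanced data:
    intrasubject variance from deviations around each subject's mean,
    intersubject variance from the spread of subject means (corrected
    for the within-subject sampling noise they carry)."""
    n_subj, n_days = len(data), len(data[0])
    subj_means = [sum(row) / n_days for row in data]
    grand = sum(subj_means) / n_subj
    intra = sum((y - m) ** 2
                for row, m in zip(data, subj_means)
                for y in row) / (n_subj * (n_days - 1))
    between = sum((m - grand) ** 2 for m in subj_means) / (n_subj - 1)
    inter = max(between - intra / n_days, 0.0)
    return grand, inter, intra

def simulate(mu, inter, intra, n_subj, n_days, rng):
    """Generate new longitudinal data with the given components."""
    out = []
    for _ in range(n_subj):
        b = rng.gauss(0.0, inter ** 0.5)
        out.append([mu + b + rng.gauss(0.0, intra ** 0.5)
                    for _ in range(n_days)])
    return out

rng = random.Random(7)
observed = simulate(3.0, 0.5, 0.2, n_subj=40, n_days=10, rng=rng)  # stand-in "input data"
mu_hat, inter_hat, intra_hat = fit_variance_components(observed)
new_data = simulate(mu_hat, inter_hat, intra_hat, n_subj=100, n_days=10, rng=rng)
```

Because the simulated data inherit the fitted components, the inter/intra variance split is preserved by construction, which is the "correct weights" property the abstract emphasizes.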

14.
Beyond Markowitz with multiple criteria decision aiding
The paper is about portfolio selection in a non-Markowitz way, involving uncertainty modeling in terms of a series of meaningful quantiles of probability distributions. Considering the quantiles as evaluation criteria of the portfolios leads to a multiobjective optimization problem which needs to be solved using a Multiple Criteria Decision Aiding (MCDA) method. The primary method we propose for solving this problem is an Interactive Multiobjective Optimization (IMO) method based on the so-called Dominance-based Rough Set Approach (DRSA). IMO-DRSA is composed of two phases: a computation phase and a dialogue phase. In the computation phase, a sample of feasible portfolio solutions is calculated and presented to the Decision Maker (DM). In the dialogue phase, the DM indicates portfolio solutions which are relatively attractive in the given sample. This binary classification of sample portfolios into ‘good’ and ‘others’ is input preference information to be analyzed using DRSA, which produces decision rules relating conditions on particular quantiles to the qualification of portfolios as ‘good’. A rule that best fits the current DM’s preferences is then chosen to constrain the previous multiobjective optimization so that a new sample can be computed in the next computation phase. In this way, each computation phase yields a new sample including better portfolios, and the procedure loops as many times as necessary to end with the most preferred portfolio. We compare IMO-DRSA with two representative MCDA methods based on traditional preference models: value function (UTA method) and outranking relation (ELECTRE IS method). The comparison, which is of a methodological nature, is illustrated by a didactic example.

15.
DA Caplin, JSH Kornbluth. Omega, 1975, 3(4): 423-441
In this paper we consider the relevance of various planning methods and decision criteria to multiobjective investment planning under uncertainty. Assuming that a natural reaction to uncertainty is to operate so as to leave open as many good options as possible (as opposed to maximizing subjective expected utility) we argue that the planning process should concentrate on analyzing the effects of the initial decision, and that for this exercise the classical methods of mixed integer programming are inappropriate. We demonstrate how the technique of dynamic programming can be extended to take account of multiple objectives and use dynamic programming as a framework in which we analyze the robustness of an initial decision in the face of various types of uncertainty. In so doing we also analyze the risks involved in both the planning and decision making functions.
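One standard way to extend dynamic programming to multiple objectives, offered here only as a generic sketch (the stages, criteria names, and numbers are invented, not the paper's formulation), is to carry forward vectors of criterion values and prune dominated ones at each stage, so that only efficient plans survive to the next decision:

```python
def pareto_filter(points):
    """Keep only non-dominated outcome vectors (both criteria maximised)."""
    points = sorted(set(points))
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                       for q in points)]

def multiobjective_dp(stages):
    """DP over stages of (profit, flexibility) options: combine every
    surviving partial outcome with every option at the next stage, then
    prune dominated vectors before moving on."""
    outcomes = [(0.0, 0.0)]
    for options in stages:
        outcomes = pareto_filter([(a + p, b + f)
                                  for (a, b) in outcomes
                                  for (p, f) in options])
    return outcomes

# Hypothetical 2-stage investment plan: each option = (profit, flexibility).
stages = [[(3, 1), (1, 4)], [(2, 2), (0, 5)]]
frontier = multiobjective_dp(stages)
```

The surviving frontier shows the trade-off the initial decision commits to, which is the kind of robustness-of-the-first-move analysis the abstract advocates.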

16.
Demand changes when manufacturing different products call for an optimization model that includes robustness in its definition, together with methods to deal with it. In this work we propose the r-TSALBP, a multiobjective model for assembly line balancing that searches for the most robust line configurations when demand changes. The robust model definition considers a set of demand scenarios and captures temporal and spatial overloads of the stations in the assembly line for the products to be assembled. We present two multiobjective evolutionary algorithms for one of the r-TSALBP variants. The first algorithm uses an additional objective to evaluate the robustness of the solutions. The second algorithm employs a novel adaptive method to evolve separate populations of robust and non-robust solutions during the search. Results show the improvement obtained by using robustness information during the search and the outstanding behavior of the adaptive evolutionary algorithm on this problem. Finally, we analyze the managerial impact of the r-TSALBP model for the different organization departments by exploiting the values of the robustness metrics.

17.
Several approaches to the widely recognized challenge of managing product variety rely on the pooling effect. Pooling can be accomplished through the reduction of the number of products or stock‐keeping units (SKUs), through postponement of differentiation, or in other ways. These approaches are well known and becoming widely applied in practice. However, theoretical analyses of the pooling effect assume an optimal inventory policy before pooling and after pooling, and, in most cases, that demand is normally distributed. In this article, we address the effect of nonoptimal inventory policies and the effect of nonnormally distributed demand on the value of pooling. First, we show that there is always a range of current inventory levels within which pooling is better and beyond which optimizing inventory policy is better. We also find that the value of pooling may be negative when the inventory policy in use is suboptimal. Second, we use extensive Monte Carlo simulation to examine the value of pooling for nonnormal demand distributions. We find that the value of pooling varies relatively little across the distributions we used, but that it varies considerably with the concentration of uncertainty. We also find that the ranges within which pooling is preferred over optimizing inventory policy generally are quite wide but vary considerably across distributions. Together, this indicates that the value of pooling under an optimal inventory policy is robust across distributions, but that its sensitivity to suboptimal policies is not. Third, we use a set of real (and highly erratic) demand data to analyze the benefits of pooling under optimal and suboptimal policies and nonnormal demand with a high number of SKUs. With our specific but highly nonnormal demand data, we find that pooling is beneficial and robust to suboptimal policies. Altogether, this study provides deeper theoretical, numerical, and empirical understanding of the value of pooling.
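The textbook form of the pooling effect under the normality assumption the abstract mentions is the square-root law for safety stock: pooling n independent, identical demand streams cuts safety stock by a factor of 1/√n. A minimal sketch (the service-level z-value and demand standard deviations are illustrative, not from the article):

```python
import math

def safety_stock(sigma, z=1.645):
    """Safety stock for roughly 95% cycle service with normal demand."""
    return z * sigma

def pooling_benefit(sigmas, rho=0.0):
    """Total safety stock before pooling (one stock per SKU) versus
    after pooling (one stock for the combined demand). With independent
    demands the pooled sigma is the root-sum-of-squares; a common
    pairwise correlation rho shrinks or erodes the benefit."""
    separate = sum(safety_stock(s) for s in sigmas)
    var = sum(s ** 2 for s in sigmas)
    var += rho * sum(2 * sigmas[i] * sigmas[j]
                     for i in range(len(sigmas))
                     for j in range(i + 1, len(sigmas)))
    pooled = safety_stock(math.sqrt(var))
    return separate, pooled

# Four identical independent SKUs: pooling halves the safety stock (1/sqrt(4)).
sep, pooled = pooling_benefit([10.0, 10.0, 10.0, 10.0])
```

The article's contribution is precisely about when this clean picture breaks down: under suboptimal inventory policies or nonnormal demand the realized value of pooling can differ and can even turn negative.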

18.
In this paper we study a hybrid system with both manufacturing and remanufacturing. The inventory control strategy we use in the manufacturing loop is an automatic pipeline, inventory and order based production control system (APIOBPCS). In the remanufacturing loop we employ a Kanban policy to represent a typical pull system. The methodology adopted uses control theory and simulation. The aim of the research is to analyse the dynamic (as distinct from the static) performance of the specified hybrid system. Dynamics have implications for total costs in terms of inventory holding, capacity utilisation and customer service failures. We analyse the parameter settings to find preferred “nominal”, “fast” and “slow” values in terms of system dynamics performance criteria such as rise time, settling time and overshoot. Based on these parameter settings, we investigate the robustness of the system to changes in return yield and the manufacturing/remanufacturing lead time. Our results clearly show that the system is robust with respect to system dynamics performance and that the remanufacturing process can help to improve it. Thus, the perceived benefits of remanufacturing of products, both environmental and economic, as quoted in the literature are found not to be detrimental to system dynamics performance when a Kanban policy is used to control the remanufacturing process.

19.
This paper studies supply chain operations in an electronic-market environment and proposes a robust optimization model under uncertainty. The essence of the study is how to obtain the optimal supply-quantity strategy for the supply chain under worst-case external demand. An interval method is adopted to design robust optimal strategies for supply chain operation. Further, robust-strategy simulations are carried out under electronic-market uncertainty; the results show that the robust strategy provides decision makers with a worst-case robust solution for the quantity of product the supplier should provide.
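A min-max supply decision over an interval of demand can be sketched as follows. This is only an illustration of the worst-case idea, not the paper's actual model; the cost coefficients and the demand interval are invented, and for piecewise-linear costs the worst case is always attained at an interval endpoint:

```python
def cost(q, d, over=1.0, under=4.0):
    """Cost of supplying q when demand turns out to be d: an overage
    cost per unsold unit plus an underage cost per unit short."""
    return over * max(q - d, 0.0) + under * max(d - q, 0.0)

def robust_supply(d_low, d_high, over=1.0, under=4.0, step=0.1):
    """Min-max supply over interval demand [d_low, d_high]: grid-search
    for the q whose worst-case cost (checked at both endpoints, where
    the maximum of the piecewise-linear cost is attained) is smallest."""
    best_q, best_w = None, float("inf")
    q = d_low
    while q <= d_high + 1e-9:
        w = max(cost(q, d_low, over, under), cost(q, d_high, over, under))
        if w < best_w:
            best_q, best_w = q, w
        q += step
    return best_q, best_w

# Demand known only to lie in [80, 120]; shortage four times as costly as excess.
q_star, worst = robust_supply(80.0, 120.0)
```

The balanced solution sits where the two endpoint costs are equal, so the heavier underage penalty pushes the robust quantity toward the upper end of the demand interval.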

20.
Donald V Mathusz. Omega, 1977, 5(5): 593-604
Cost-benefit analysis has a considerable literature in which information systems have been patently ignored. This reflects the considerable difficulties of applying the theory to information systems, and the state of the art remains much as Koopmans described it some 19 years ago (1957). A bar to further development would appear to be the lack of an applicable value-of-information concept. This paper seeks to clarify the issues and provide a robust theoretical and data analysis framework that will cover most situations. The approach here is to separate explicitly the dimensions of cost from those of information benefit, and to examine the implications. The Null Information Benefit condition emerges as a special theoretical case, but potentially a most important one in applications. This case, together with the Pareto optimum, defines a large class of such problems that can be handled by the decision criteria and data analysis techniques tabulated and discussed here. The selection of input data techniques defines the limits of later project justification and may be crucial to the political viability of the project throughout its life. Finally, the general management vs information systems management relationships are discussed in terms of this situation.

