Similar Documents
20 similar documents found.
1.
The paper deals with optimal allocation of a given amount of advertising expenditure among various sales districts. It applies dynamic programming to arrive at the optimal policy regarding such allocation. The basic model is that of Zentler and Ryde, which heretofore has been solved using a graphical method. The optimal solution is illustrated with an example.
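The Zentler–Ryde formulation itself is not reproduced in the abstract; the sketch below shows the generic resource-allocation dynamic program such a policy computation typically rests on, assuming the budget is discretised into indivisible units and that each district's response to a given spend is known (the `responses` table here is purely illustrative):

```python
def allocate_budget(responses, budget):
    """Allocate `budget` indivisible spending units among sales districts
    to maximize total response.  responses[d][b] = (non-negative) response
    in district d when it receives b units (b = 0..budget)."""
    n = len(responses)
    best = [0.0] * (budget + 1)   # best[b]: max response using b units so far
    choice = []                   # choice[d][b]: units given to district d
    for d in range(n):
        new = [0.0] * (budget + 1)
        pick = [0] * (budget + 1)
        for b in range(budget + 1):
            for k in range(b + 1):
                val = best[b - k] + responses[d][k]
                if val > new[b]:
                    new[b], pick[b] = val, k
        best = new
        choice.append(pick)
    # Backtrack to recover the optimal allocation per district.
    alloc, b = [0] * n, budget
    for d in range(n - 1, -1, -1):
        alloc[d] = choice[d][b]
        b -= alloc[d]
    return best[budget], alloc
```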

2.
The primary objective of energy policy in many countries is to change the structure of their energy systems so as to reduce the dependence on imported oil. A large amount of funds is spent on energy research and development. The technologies competing for such funds have widely varying characteristics. These relate to costs and benefits, technical performance, environmental effects, the requirements for land, water and materials and the impact on employment. It is necessary to analyse these effects in some detail before decisions on technology programmes can be made. A complete assessment of the possible value of a particular technology cannot be made on an individual basis. It is necessary to consider many technologies simultaneously, competing against each other for various shares of the energy market. However, analysing the behaviour of the entire energy system requires the handling of an extensive amount of data and can only be done effectively with the help of a computerised system. The model, MARKAL, described in this paper is a multi-period linear programming model which has been developed and applied by 15 OECD countries for the purpose of energy technology research and development planning. Examples of the use of the model for this purpose are given both for the group as a whole and for individual countries. The model is structured so that an exogenously specified set of end-use demands must be satisfied given available technologies and energy supplies. The model allows for substitution possibilities in both the energy supply and demand sectors. A feature of the model is the use of varying objective functions such as minimum discounted costs, oil imports, or environmental effluents. These can be used individually or in combination in trade-off situations. 
The broader question of the use of energy modelling for technology assessment, including its limitations, is also discussed with particular reference to the insights that can be gained from the MARKAL model. Information from the MARKAL model has been used by the International Energy Agency to assist it in formulating a strategy for energy research, development and demonstration.

3.
JE Samouilidis. Omega, 1980, 8(6): 609-621
The Arab oil embargo in 1973 and the subsequent price rises and production restrictions have given birth to a distinct branch within Management Science: energy modelling. This paper gives a critical and selective review of energy modelling, an industry which, though thriving in an era of general economic anxiety, is showing signs of arrogant immaturity. After giving a historical background, the paper classifies energy models into three groups: open-loop demand or supply models; energy closed-loop models; energy-economy closed-loop models. For each group the problem area is analysed and some illustrative examples are described. In the last sections, an attempt is made to sum up the experience that has been gained with energy modelling: the basic deficiencies, the impact of this activity on policy formulation and its position within Management Science. It is concluded that energy models, though very poor forecasting devices, can be very useful to policy makers as tools for analysis; energy model developers must convince potential model users, and for that purpose they can benefit immensely from the 35-year-long experience accumulated by their colleagues in Management Science.

4.
DEA is used in this paper to investigate target achievements of the operational units of the Norwegian Public Roads Administration (NPRA) charged with traffic safety services. The DEA framework applied corresponds to a BCC model with a unique constant input, or equivalently, with no inputs. This framework is further extended to a DEA-based Malmquist index to measure productivity growth in target achievements. Finally, we use a bootstrapping method to ascertain confidence intervals for the efficiency scores derived and to test hypotheses on the extent of productivity growth or regress. The mean efficiency scores by which targets are achieved across the sample years are in the range 0.81–0.93 and significant at the 5% level. Total productivity in target achievements shows statistically significant progress, averaging 7%. Much of the progress is attributed to technological progress. The results illustrate the usefulness of a decomposable index for productivity measurement, and of bootstrapping for sensitivity tests.
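For intuition only: in the single-output case, the output-oriented score against a BCC frontier with a unique constant input reduces to comparison with the best observed output, and a Malmquist-type index between two periods can then be decomposed into efficiency change and frontier (technical) change. The sketch below illustrates that decomposition under these simplifying assumptions; it is not the NPRA model or the bootstrap procedure from the paper:

```python
import math

def score(y, frontier):
    """Output distance score against a single-output, no-input BCC
    frontier: the best observed output acts as the benchmark."""
    return y / max(frontier)

def malmquist(y0, y1, frontier0, frontier1):
    """Malmquist productivity index for one unit between periods 0 and 1,
    decomposed as (efficiency change) x (technical change)."""
    e0, e1 = score(y0, frontier0), score(y1, frontier1)
    eff_change = e1 / e0
    tech_change = math.sqrt((score(y1, frontier0) / e1) *
                            (e0 / score(y0, frontier1)))
    return eff_change * tech_change, eff_change, tech_change
```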

5.
Dynamic Programming (or DP as it is commonly known) is a mathematical programming method which would appear to be a very powerful technique for use in management decision problems. A great number of theoretical texts have been written on the mathematics of DP and a few articles have been published on the more practical aspects, but DP has remained very much on the theoretical shelf as far as practising management has been concerned. This paper gives the results of a survey carried out at the beginning of 1972 and is intended to provide some insight into the use of DP in real management problems in U.K. companies and to show what sort of problems are apparently restricting its use. The findings demonstrate that several firms have used DP in various applications with considerable success. There are also some enlightening comments on the difficulties involved, and on the future potential of DP in industry.

6.
In this note, we consider a discrete fractional programming problem arising from a decision setting in which a limited number of indivisible resources is allocated to several heterogeneous projects to maximize the ratio of total profit to total cost. For each project, both profit and cost are solely determined by the amount of resources allocated to it. Although the problem can be reformulated as a linear program with O(m²n) variables and O(m²n²) constraints, we further show that it can be efficiently solved by induction in O(m³n² log mn) time. In application, this method leads to an O(m³n² log mn) algorithm for the assortment optimization problem under the nested logit model with cardinality constraints (Feldman and Topaloglu, Oper Res 63: 812–822, 2015).
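The paper's O(m³n² log mn) induction is not spelled out in the abstract; a generic alternative for the same ratio objective is Dinkelbach's parametric method, which repeatedly solves max(profit − λ·cost) — here an ordinary resource-allocation DP — and updates λ to the achieved ratio. A sketch, assuming profits are non-negative and every cost entry is strictly positive so the ratio is always defined:

```python
def dinkelbach_ratio(profit, cost, units, iters=50):
    """Maximize total profit / total cost when `units` indivisible
    resources are split among projects, via Dinkelbach's parametric
    method.  profit[p][k], cost[p][k]: values when project p gets k
    units; every cost[p][k] is assumed > 0."""
    n = len(profit)

    def solve(lam):
        # dp[b]: (best value of profit - lam*cost, allocation) with b units
        dp = [(0.0, []) for _ in range(units + 1)]
        for p in range(n):
            dp = [max((dp[b - k][0] + profit[p][k] - lam * cost[p][k],
                       dp[b - k][1] + [k])
                      for k in range(b + 1))
                  for b in range(units + 1)]
        return dp[units][1]

    lam = 0.0
    for _ in range(iters):
        alloc = solve(lam)
        tp = sum(profit[p][k] for p, k in enumerate(alloc))
        tc = sum(cost[p][k] for p, k in enumerate(alloc))
        if abs(tp / tc - lam) < 1e-12:
            break                      # lam has converged to the optimal ratio
        lam = tp / tc
    return lam, alloc
```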

7.
In this paper, a mathematical programming methodology is applied to a production planning problem involving a soybean processing plant which can purchase its raw materials from multiple origins and must ship its finished products to multiple destinations. A time horizon production planning model is developed, with the objective of maximizing the net income produced by this plant. This model is tested for a five-origin, three-destination processing network over a thirteen-month time horizon. Test results, in terms of a production plan and associated purchasing-allocation decisions, are presented and discussed.

8.
Existing research on process quality improvement focuses largely on the linkages between quality improvement cost and production economics such as set-up cost and defect rate reduction. This paper deals with the optimal design problem for process improvement, balancing the sunk investment cost against the revenue increments due to the process improvement. We develop an optimal model based on Taguchi cost functions. The model is validated through a real case study in the automotive industry where the 6-sigma DMAIC methodology has been applied. According to this research, management can adjust investment in prevention and appraisal costs for quality improvement, which enhances process capability, reduces the product defect rate and, as a result, generates a remarkable financial return.
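The paper's specific cost model is not reproduced in the abstract; the standard quadratic Taguchi loss it builds on penalises any deviation from the target value, so the expected quality cost falls as process capability improves. A minimal sketch (the loss coefficient k would in practice be calibrated from repair or warranty costs):

```python
def taguchi_loss(y, target, k):
    """Quadratic Taguchi loss for one item: L(y) = k * (y - target)**2."""
    return k * (y - target) ** 2

def expected_taguchi_loss(mu, sigma, target, k):
    """Expected per-unit loss for a process with mean mu and standard
    deviation sigma: E[L] = k * (sigma**2 + (mu - target)**2).
    Centring the mean and reducing variance both lower this cost."""
    return k * (sigma ** 2 + (mu - target) ** 2)
```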

9.
A multiple objective linear programming (MOLP) method utilizing interval criterion weights is applied to the problem of media selection. Using this technique, it is possible to analyze a problem more explicitly in terms of the several objectives inherent in many media selection situations. In order to illustrate the interval criterion weights approach, a multiple objective hierarchical media selection model is presented and its computer results discussed. In addition to being able to deal more directly with different decision criteria, a distinguishing feature of the mathematical analysis applied here is that its output presents the media-planner with a small cluster of candidate media schedules (rather than just one). Then, from this list, the media-planner should be in a position to make a final qualitative choice as a close approximation to his optimal solution.

10.
Leadership theory has not lived up to its promise of helping practitioners resolve the challenges and problematics that occur in organizational leadership. Many current theories and models are not contextualized, nor do the dynamic and critical issues facing leaders drive their construction. Alternatively, practitioners too often approach leadership problems using trial and error tactics derived more from anecdotes and popular fads than validated scientific data and models. Yet, while this gap between theory and practice has bedeviled the leadership community for much of its history, there have been few if any systematic examinations of its causes. In this review, we have sought to highlight the particular barriers facing the leadership-practice and theory-building/testing constituencies, respectively, that constrain efforts to integrate them. We also offer a number of propositions and guidelines that we hope can break through these barriers and help stakeholders create a more effective leadership theory and practice symbiosis (LTPS). Finally, we have offered two cases of effective LTPS as examples and models for such integrative research efforts.

11.
Epileptic seizures are manifestations of intermittent spatiotemporal transitions of the human brain from chaos to order. Measures of chaos, namely maximum Lyapunov exponents (STLmax), from dynamical analysis of the electroencephalograms (EEGs) at critical sites of the epileptic brain, progressively converge (diverge) before (after) epileptic seizures, a phenomenon that has been called dynamical synchronization (desynchronization). This dynamical synchronization/desynchronization has already constituted the basis for the design and development of systems for long-term (tens of minutes), on-line, prospective prediction of epileptic seizures. Also, the criterion for the changes in the time constants of the observed synchronization/desynchronization at seizure points has been used to show resetting of the epileptic brain in patients with temporal lobe epilepsy (TLE), a phenomenon that implicates a possible homeostatic role for the seizures themselves to restore normal brain activity. In this paper, we introduce a new criterion to measure this resetting that utilizes changes in the level of observed synchronization/desynchronization. We compare this criterion's sensitivity of resetting with the old one based on the time constants of the observed synchronization/desynchronization. Next, we test the robustness of the resetting phenomena in terms of the utilized measures of EEG dynamics by a comparative study involving STLmax, a measure of phase (φmax) and a measure of energy (E) using both criteria (i.e. the level and time constants of the observed synchronization/desynchronization). The measures are estimated from intracranial electroencephalographic (iEEG) recordings with subdural and depth electrodes from two patients with focal temporal lobe epilepsy and a total of 43 seizures. Techniques from optimization theory, in particular quadratic bivalent programming, are applied to optimize the performance of the three measures in detecting preictal entrainment.
It is shown that using either of the two resetting criteria, and for all three dynamical measures, dynamical resetting at seizures occurs with a significantly higher probability (α=0.05) than resetting at randomly selected non-seizure points in days of EEG recordings per patient. It is also shown that dynamical resetting at seizures using time constants of STLmax synchronization/desynchronization occurs with a higher probability than using the other synchronization measures, whereas dynamical resetting at seizures using the level of synchronization/desynchronization criterion is detected with similar probability using any of the three measures of synchronization. These findings show the robustness of seizure resetting with respect to measures of EEG dynamics and criteria of resetting utilized, and the critical role it might play in further elucidation of ictogenesis, as well as in the development of novel treatments for epilepsy.

12.
We study the problem of (off-line) broadcast scheduling in minimizing total flow time and propose a dynamic programming approach to compute an optimal broadcast schedule. Suppose the broadcast server has k pages and the last page request arrives at time n. The optimal schedule can be computed in O(k³(n+k)^(k−1)) time for the case that the server has a single broadcast channel. For the m-channel case, i.e., the server can broadcast m different pages at a time where m < k, the optimal schedule can be computed in O(n^(km)) time when k and m are constants. Note that this broadcast scheduling problem is NP-hard when k is a variable and will take O(n^(km+1)) time when k is fixed and m ≥ 1 with the straightforward implementation of the dynamic programming approach. The preliminary version of this paper appeared in Proceedings of the 11th Annual International Computing and Combinatorics Conference as “Off-line Algorithms for Minimizing the Total Flow Time in Broadcast Scheduling”.
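The objective being minimised can be stated concretely: a request's flow time is the gap between its arrival and the completion of the first subsequent broadcast of its page. A sketch of evaluating that objective (unit-length broadcasts, single channel; the schedule representation is an assumption, not the paper's):

```python
def total_flow_time(requests, schedule):
    """Total flow time of a single-channel broadcast schedule.
    requests: (arrival_time, page) pairs; schedule[t] is the page whose
    broadcast occupies the slot [t, t+1).  A request arriving at time r
    is served by the first broadcast of its page starting at or after r."""
    total = 0
    for arrival, page in requests:
        for t, p in enumerate(schedule):
            if p == page and t >= arrival:
                total += (t + 1) - arrival   # completion time minus arrival
                break
        else:
            raise ValueError("request for %r never served" % page)
    return total
```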

13.
Despite their diverse applications in many domains, the variable precision rough sets (VPRS) model lacks a feasible method to determine the precision parameter (β) value that controls the choice of β-reducts. In this study we propose an effective method to find the β-reducts. First, we calculate a precision parameter value to find the subsets of the information system that are based on the least upper bound of the data misclassification error. Next, we measure the quality of classification and remove redundant attributes from each subset. We use a simple example to explain the method, and a real-world example is also analyzed. Compared with a neural network approach, the proposed method demonstrates better performance.
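The abstract leaves the VPRS machinery implicit; under one common convention, an equivalence class belongs to the β-lower approximation of a concept when the fraction of its objects inside the concept is at least β, and the quality of classification is the covered share of the universe. A sketch under that convention (the paper's parameterisation of β may differ):

```python
def vprs_quality(partition, concept, beta):
    """Quality of classification in the variable precision rough set
    model: share of the universe lying in equivalence classes whose
    inclusion degree in `concept` reaches the threshold beta.
    partition: list of disjoint sets of objects; concept: set of objects."""
    universe = sum(len(block) for block in partition)
    lower = sum(len(block) for block in partition
                if len(block & concept) / len(block) >= beta)
    return lower / universe
```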

14.
This paper explores the possibilities and potential surrounding inclusive talent management in contrast to conventional normative treatments. By closely examining the meaning of ‘inclusive’ in relation to talent, the paper moves towards a definition of inclusive talent management which is contextualised in a four-part typology of talent management strategies which offers greater conceptual clarity to researchers working in this field. Our conceptualisation of inclusive talent management is further located in the traditions of positive psychology and the Capability Approach. The practical implications of introducing inclusive talent management strategies are considered.

15.
Theory and research in National Human Resource Development (NHRD) to date have relied extensively on literature reviews or on the analysis of secondary sources of data. To address this perceived gap, I elected to interview an NHRD practitioner to advance the scholarly conversation using the case example of India. This article is an interview with the chairperson of the National Skill Development Corporation (NSDC), India. The NSDC is the first private–public–government partnership or coalition, organized to support workforce education and development issues. Given the urgent demand for a skilled workforce, the Government of India has targeted skill development of 500 million people in the next decade (NSDC website). In terms of the state of the art in NHRD practice, the goals of the organization (NSDC) and the perspectives shared by the chairperson offer valuable insights into the implementation of one of the many evolving NHRD strategies in India. Interestingly, the chairperson of the NSDC was previously the head of the Murugappa Group, one of the biggest industrial groups in India. Thus, this interview offers the unique perspectives of a successful chief executive officer turned NHRD practitioner on the challenges and opportunities facing India. Further, the interview highlights the contribution of the NSDC and similar industry–government partnerships and suggests potential implications for other developing countries.

16.
This paper has a provocative purpose. From both HRD and academic practice perspectives, it considers the digital pedagogy pivot made necessary by the Covid-19 pandemic. Universities have traditionally resisted substantial change in learning and teaching processes. This paper addresses the challenge they face of achieving the equivalent of a ten-year digital learning strategy in mere months. From a position that HE pedagogy constitutes a site of HRD practice, the paper considers the characteristics of a meaningful, digitally enabled pedagogy in Higher Education (HE) and their alignment with established HRD theories and concepts. It considers the pedagogic opportunities arising from the ‘digital pivot’ and the HRD processes appropriate to facilitate game-changing approaches to academic practice in Higher Education. The paper advances debate about the relationship between HRD and HE academic practice and contributes proposals for HRD processes to support rapid pedagogic change. It further contributes an original categorization of the way in which HRD concepts and theories are aligned with principles of HE pedagogy and a digital pedagogy pivot model.

17.
Planners are often billed as leaders and change agents of the (un)built environment. It is, however, important to recognize that they are in reality only one of many players in a sea of actors involved in shaping future developments and projects. Plans and interventions today are co-created and in fact co-evolve, relying as much on the input, cooperation and actions of inhabitants, users, developers and politicians as on expert planners and a wide variety of other professions. In this introductory section, we, as editors of this special issue, posit that planners therefore require skills for co-creation, drawing on science and working with other disciplines. In turn, planning programmes and curricula need to incorporate learning and teaching approaches that prepare students in higher education for working in co-creation settings by purposefully exposing them to learning environments that involve community, science and practice. The collection of papers, which were presented initially at the 2014 Association of European Schools of Planning congress in Utrecht, showcases curriculum developments and pedagogical research by planning educators from different world regions which, taken together, shed light on a variety of issues and challenges of embedding learning and teaching for co-creation and co-evolution. In particular, we elaborate on the tensions of employing transformational yet high-risk pedagogies in higher education settings that are becoming increasingly risk-averse and streamlined, and we suggest an agenda for planning curriculum development.

18.
This paper considers a multi-year capacity expansion plan for an electric utility with the option of investing in solar generation units. It is demonstrated how issues such as randomness on both the demand and supply side of the problem, as well as the varying availability of solar energy, may be conveniently modelled under some plausible assumptions, to yield a large-scale linear programming problem. A brief discussion is provided relating to the evaluation of the capacity credit attributable to the solar generation units. Computational considerations, some possible simplifications, and an illustrative example are also presented.

19.
A new integrated approach to capacity management in complex manufacturing systems is developed and the resulting framework is applied to a case study in tyre production. A hierarchical multilayered decomposition of the planning process is proposed, in which lower layers provide an increased level of detail and accuracy in capacity representation and analysis. Thus, a large and comprehensive model describing the manufacturing system is subdivided into smaller sub-models via relaxation and decomposition techniques. The Lagrangean multipliers provide a bi-directional link among different layers, reducing the risk of sub-optimization and infeasibility of the aggregate plans. Each of the sub-models is easier to solve than the original one and involves a different set of decision variables. Moreover, they relate to different levels of the management hierarchy, so that there emerges a strict correspondence between the decomposition scheme and the decision process underlying capacity management.

20.
The present study fills a gap between the benchmarking literature and multi-output based efficiency and productivity studies by proposing a benchmarking framework to analyze total factor productivity (TFP). Different specifications of the Hicks–Moorsteen TFP index are tailored for specific benchmarking perspectives: (1) static, (2) fixed base and unit, and (3) dynamic TFP change. These approaches assume fixed units and/or base technologies as benchmarks. In contrast to most technology-based productivity indices, the standard Hicks–Moorsteen index always leads to feasible results. Through these specifications, managers can assess different facets of the firm's strategic choices in comparison with firm-specific relevant benchmarks and thus have a broad background for decision making. An empirical application for the Spanish banking industry between 1998 and 2006 illustrates the managerial implications of the proposed framework.
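For reference, the base-period-t Hicks–Moorsteen index referred to above is conventionally defined as a Malmquist output quantity index divided by a Malmquist input quantity index, built from output and input distance functions D_o and D_i (the notation here follows the common textbook form, not necessarily the paper's own):

```latex
\mathrm{HM}^{t} =
\frac{D_{o}^{t}\left(x^{t},\, y^{t+1}\right) \big/ D_{o}^{t}\left(x^{t},\, y^{t}\right)}
     {D_{i}^{t}\left(x^{t+1},\, y^{t}\right) \big/ D_{i}^{t}\left(x^{t},\, y^{t}\right)}
```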
