Similar Documents (20 results found)
1.
Discrete‐choice models are widely used to model consumer purchase behavior in assortment optimization and revenue management. In many applications, each customer segment is associated with a consideration set that represents the set of products that customers in this segment consider for purchase. The firm must decide what assortment to offer at each point in time without being able to identify the customer's segment. A linear program called the Choice‐based Deterministic Linear Program (CDLP) has been proposed to determine these offer sets. Unfortunately, its size grows exponentially in the number of products, and it is NP‐hard to solve when the consideration sets of the segments overlap. The Segment‐based Deterministic Concave Program with additional consistency equalities (SDCP+) is an approximation of CDLP that provides an upper bound on CDLP's optimal objective value. SDCP+ can be solved in a fraction of the time required to solve CDLP and often achieves the same optimal objective value. This raises the question of under what conditions one can guarantee equivalence of CDLP and SDCP+. In this study, we obtain a structural result to this end: if the segment consideration sets overlap with a certain tree structure, or if they are fully nested, CDLP can be equivalently replaced with SDCP+. We give a number of examples from the literature where this tree structure arises naturally in modeling customer behavior.
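A toy instance makes the CDLP structure concrete. The sketch below is a minimal, hypothetical three-offer-set example (all numbers invented for illustration, not the paper's formulation): the decision variables are the durations each offer set is shown, subject to a time budget and product capacities.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical CDLP-style instance: offer sets {A}, {B}, {A,B} are shown for
# t[0], t[1], t[2] periods over a horizon of T = 20 periods.
revenue_rate = np.array([50.0, 48.0, 62.0])  # expected revenue per period
T = 20.0

# Rows: time budget, expected units of A sold, expected units of B sold.
A_ub = np.array([[1.0, 1.0, 1.0],
                 [0.5, 0.0, 0.3],
                 [0.0, 0.6, 0.4]])
b_ub = np.array([T, 10.0, 10.0])             # horizon and product capacities

# scipy minimizes, so negate the revenue rates.
res = linprog(-revenue_rate, A_ub=A_ub, b_ub=b_ub, method="highs")
t, revenue = res.x, -res.fun
print("offer-set durations:", t)
print("expected revenue:", revenue)
```

Here the combined set {A,B} has the highest revenue rate and capacity stays slack, so all time goes to it; with tighter capacities the LP starts mixing offer sets, which is exactly the regime where the CDLP/SDCP+ comparison matters.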

2.
State-of-the-art market segmentation often involves simultaneous consideration of multiple and overlapping variables. These variables are studied to assess their relationships, select a subset of variables which best represent the subgroups (segments) within a market, and determine the likelihood of membership of a given individual in a particular segment. Such information, obtained in the exploratory phase of a multivariate market segmentation study, leads to the construction of more parsimonious models. These models have less stringent data requirements while facilitating substantive evaluation to aid marketing managers in formulating more effective targeting and positioning strategies within different market segments. This paper utilizes the information-theoretic (IT) approach to address several issues in multivariate market segmentation studies. A marketing data set analyzed previously is employed to examine the suitability and usefulness of the proposed approach [12]. Some useful extensions of the IT framework and its applications are also discussed.

3.
This paper analyses the practices of ‘integration’ of HRM into the corporate strategy and ‘devolvement’ of responsibility for HRM to line managers in six British manufacturing industries. The findings are based on a questionnaire survey, in‐depth interviews and cognitive mapping methodologies. The results show that over 50% of the firms under study practise a high level of strategic integration. On the other hand, over 61% of the sample firms practise a low level of devolvement practices. Interestingly, both the practices of integration of HRM into the corporate strategy and devolvement of HRM to line managers are more determined by a number of organizational policies than traditional contingent variables. The adoption of the mixed methodology has been useful. The findings contribute to strategic HRM literature, and also have some key messages for policy‐makers in the field. The cognitive maps developed in the paper could be used to give feedback and training to managers.

4.
This article revisits an old problem; “systematically explore the information contained in a set of operating data records and find from it how to improve operational performance by taking the appropriate decisions in the space of operating conditions,” thus leading to continuous process improvement. A series of industrial case studies within the framework of the internships in the Leaders for Manufacturing (LFM) program at Massachusetts Institute of Technology led us to a reexamination of the traditional formulations for the above problem. The resulting methodology is characterized by the following features: (1) problem statement and solutions are expressed in terms of hyperrectangles in the decision space, replacing conventional pointwise results; (2) data-driven, nonparametric learning methodologies were advanced to produce the requisite mapping between performance and decisions; (3) operating performance is in essence multifaceted, leading to a multiobjective problem, which is treated as such. The proposed methodology has been applied to a number of industrial examples and in this paper we provide a brief overview only of those that can be discussed in the open literature.

5.
We propose a new multiple criteria decision aiding approach for market segmentation that integrates preference analysis and the segmentation decision within a unified framework. The approach employs an additive value function as the preference model and requires consumers to provide pairwise comparisons of some products as preference information. To analyze each consumer’s preferences, the approach applies the disaggregation paradigm and stochastic multicriteria acceptability analysis to derive a set of value functions from the preference information provided by each consumer. Each consumer’s preferences can then be represented by the distribution of possible rankings of products and the associated support degrees obtained by applying the derived value functions. On the basis of this preference analysis, a new metric is proposed to measure the similarity between the preferences of different consumers, and a hierarchical clustering algorithm is developed to perform market segmentation. To help firms serve consumers from different segments with targeted marketing policies and appropriate products, the approach derives a representative value function and a univocal ranking of products for each consumer, so that the products ranked near the top of the list can be presented to him or her. Finally, an illustrative example of a market segmentation problem details the application of the proposed approach.
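The clustering step can be sketched as follows. Assume, hypothetically, that the acceptability analysis has already produced a rank-acceptability matrix per consumer (entry [r, k] = support for product r attaining rank k; all numbers invented); consumers are then compared by a distance on these matrices and merged hierarchically into segments.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical rank-acceptability matrices for four consumers, three products.
consumers = np.array([
    [[.8, .2, 0], [.2, .6, .2], [0, .2, .8]],    # prefers p1 > p2 > p3
    [[.7, .3, 0], [.3, .5, .2], [0, .2, .8]],    # similar to the first
    [[0, .2, .8], [.2, .6, .2], [.8, .2, 0]],    # prefers p3 > p2 > p1
    [[0, .3, .7], [.3, .5, .2], [.7, .2, .1]],   # similar to the third
])

# Distance between consumers = Euclidean distance of flattened matrices;
# average-linkage hierarchical clustering yields the market segments.
flat = consumers.reshape(len(consumers), -1)
Z = linkage(pdist(flat), method="average")
segments = fcluster(Z, t=2, criterion="maxclust")
print("segment labels:", segments)
```

Consumers with similar rank distributions land in the same segment; the similarity metric in the paper is richer, but the pipeline has this shape.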

6.
In a previous work we proposed a variable-fixing heuristic for the 0-1 Multidimensional Knapsack Problem (01MDK). This approach uses fractional optima calculated in hyperplanes that contain the binary optimum, and it obtained the best lower bounds on the OR-Library benchmarks. Although it is very attractive in terms of results, the method does not prove the optimality of the solutions found and may fix variables to non-optimal values. In this paper, we propose an implicit enumeration based on a reduced-cost analysis that tends to fix non-basic variables to their exact values. The combination of two specific constraint propagations based on reduced costs and an efficient enumeration framework enables us to fix variables on the one hand and to significantly prune the search tree on the other. Experimentally, our work provides two main contributions: (1) we obtain several new optimal solutions on hard instances of the OR-Library, and (2) we tighten the bounds on the number of items at the optimum for several harder instances.
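The reduced-cost argument can be sketched on a tiny instance. The code below (data invented; a simplified stand-in, not the paper's algorithm) solves the LP relaxation of a 0-1 multidimensional knapsack and fixes a variable to 0 whenever forcing it to 1 would drive the LP bound below a known integer incumbent.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Hypothetical 0-1 multidimensional knapsack: max c.x, W x <= b, x binary.
c = np.array([10.0, 9.0, 6.0, 1.0])
W = np.array([[6.0, 5.0, 4.0, 3.0],
              [5.0, 6.0, 3.0, 2.0]])
b = np.array([10.0, 10.0])
n = len(c)

# Incumbent lower bound; brute force stands in for the heuristic phase.
incumbent = max(c @ np.array(x) for x in product([0, 1], repeat=n)
                if np.all(W @ np.array(x) <= b))

# LP relaxation (scipy minimizes, so negate c).
res = linprog(-c, A_ub=W, b_ub=b, bounds=[(0, 1)] * n, method="highs")
z_lp = -res.fun                       # LP upper bound for the max problem
y = np.abs(res.ineqlin.marginals)     # knapsack duals (sign-convention safe)
rc = c - y @ W                        # reduced costs of the max problem

fixed = {}
for j in range(n):
    if res.x[j] < 1e-6 and z_lp + rc[j] < incumbent - 1e-9:
        fixed[j] = 0   # forcing x_j = 1 cannot beat the incumbent
    elif res.x[j] > 1 - 1e-6 and z_lp - rc[j] < incumbent - 1e-9:
        fixed[j] = 1   # forcing x_j = 0 cannot beat the incumbent
print("LP bound:", z_lp, "incumbent:", incumbent, "fixed:", fixed)
```

Variable 3 has so large a reduced cost that setting it to 1 pushes the bound below the incumbent, so it is fixed to 0 before any enumeration.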

7.
In a make-to-order product recovery environment, we consider the allocation decision for returned products under stochastic demand for a firm with three options: refurbishing to resell, parts harvesting, and recycling. We formulate the problem as a multiperiod Markov decision process (MDP) and present a linear programming (LP) approximation that provides an upper bound on the optimal objective function value of the MDP model. We then present two solution approaches to the MDP using the LP solution: a static approach that uses the LP solution directly, and a dynamic approach that adopts a revenue management perspective and employs a bid-price control technique in which the LP is re-solved after each demand arrival. We calculate the bid prices from the shadow-price interpretation of the dual variables for the inventory constraints and accept a demand if its marginal value is higher than the bid price. Since solving the LP at each demand arrival requires a very efficient solution procedure, we present a transportation-problem formulation of the LP via variable redefinitions and develop a one-pass optimal solution procedure for it. We carry out an extensive numerical analysis to compare the two approaches and find that the dynamic approach performs better in all of the tested scenarios. Furthermore, the solutions obtained are within 2% of the upper bound on the optimal objective function value of the MDP model.
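A bid-price control of this kind can be sketched in a few lines. This is a hypothetical single-period toy with invented numbers, not the paper's transportation formulation: the dual of the returns-inventory constraint serves as the bid price, and an arriving demand is accepted only if its marginal revenue clears it.

```python
from scipy.optimize import linprog

# Hypothetical allocation LP: 10 returned units split among refurbish
# (revenue 40, demand cap 6), parts harvest (25, cap 8), recycle (5, uncapped).
revenue = [40.0, 25.0, 5.0]
inventory = 10.0
res = linprog([-r for r in revenue],
              A_ub=[[1.0, 1.0, 1.0]], b_ub=[inventory],
              bounds=[(0, 6), (0, 8), (0, None)], method="highs")

# Shadow price of the inventory constraint = bid price for one returned unit.
bid_price = -res.ineqlin.marginals[0]
print("allocation:", res.x, "bid price:", bid_price)

# Accept an arriving demand only if its marginal revenue covers the bid price.
accept_refurbish = revenue[0] >= bid_price
accept_recycle = revenue[2] >= bid_price
print("accept refurbish demand:", accept_refurbish,
      "| accept recycle-only demand:", accept_recycle)
```

The marginal returned unit would go to parts harvesting, so the bid price is 25: refurbish demands clear it, recycle-only demands do not.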

8.
This paper applies some general concepts in decision theory to a simple instrumental variables model. There are two endogenous variables linked by a single structural equation; k of the exogenous variables are excluded from this structural equation and provide the instrumental variables (IV). The reduced-form distribution of the endogenous variables conditional on the exogenous variables corresponds to independent draws from a bivariate normal distribution with linear regression functions and a known covariance matrix. A canonical form of the model has parameter vector (ρ, φ, ω), where φ is the parameter of interest and is normalized to be a point on the unit circle. The reduced-form coefficients on the instrumental variables are split into a scalar parameter ρ and a parameter vector ω, which is normalized to be a point on the (k−1)-dimensional unit sphere; ρ measures the strength of the association between the endogenous variables and the instrumental variables, and ω is a measure of direction. A prior distribution is introduced for the IV model. The parameters φ, ρ, and ω are treated as independent random variables. The distribution for φ is uniform on the unit circle; the distribution for ω is uniform on the unit sphere of dimension k−1. These choices arise from the solution of a minimax problem. The prior for ρ is left general. It turns out that given any positive value for ρ, the Bayes estimator of φ does not depend on ρ; it equals the maximum-likelihood estimator. This Bayes estimator has constant risk; because it minimizes average risk with respect to a proper prior, it is minimax. The same general concepts are applied to obtain confidence intervals. The prior distribution is used in two ways. The first way is to integrate out the nuisance parameter ω in the IV model. This gives an integrated likelihood function with two scalar parameters, φ and ρ. Inverting a likelihood ratio test based on the integrated likelihood function provides a confidence interval for φ. This lacks finite sample optimality, but invariance arguments show that the risk function depends only on ρ and not on φ or ω. The second approach to confidence sets aims for finite sample optimality by setting up a loss function that trades off coverage against the length of the interval. The automatic uniform priors are used for φ and ω, but a prior is also needed for the scalar ρ, and no guidance is offered on this choice. The Bayes rule is a highest posterior density set. Invariance arguments show that the risk function depends only on ρ and not on φ or ω. The optimality result combines average risk and maximum risk. The confidence set minimizes the average, with respect to the prior distribution for ρ, of the maximum risk, where the maximization is with respect to φ and ω.

9.
Using predictive global sensitivity analysis, we develop a structural equations model to abstract from the details of a large‐scale mixed integer program (MIP) to capture essential design trade‐offs of global manufacturing and distribution networks. We provide a conceptual framework that describes a firm's network structure along three dimensions: market focus, plant focus, and network dispersion. Normalized dependent variables are specified that act as proxies for a company's placement into our conceptual network classification via the calculation of just a few key independent variables. We provide robust equation sets for eight cost structure clusters. Many different product types could be classified into one of these groups, which would allow managers to use the equations directly without needing to run the MIP for themselves. Our numerical tests suggest that the formulas representing the network structure drivers—economies of scale, complexity costs, transportation costs, and tariffs—may be sufficient for managers to design their strategic network structures, and perhaps more importantly, to monitor them over time to detect potential need for adjustment.

10.
This research studies the p-robust supply chain network design problem with uncertain demand and cost scenarios. The optimal design integrates supplier selection with the facility location and capacity problem. We provide a new framework to obtain the relative regret limit, which is critical in robust supply chain design but is assumed to be a known value in the existing literature. We obtain lower and upper bounds for the relative regret limit and a sequence of optimal solutions for a series of relative regret limits between those bounds. An algorithm for p-robust supply chain network design is provided. A series of numerical examples is designed to characterize the properties of the bottleneck scenarios: a scenario with a low probability and a low scenario-optimal objective value has a greater chance of being a bottleneck. To focus only on the influence of the relative regret, we also introduce three separate new objective functions in the p-robust design. The proposed theories and approaches provide a sequence of options for decision makers to effectively reduce market risks in supply chain network design.
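The p-robustness check itself is simple to state. The sketch below uses invented numbers: for each candidate design, the relative regret in scenario s is (cost_s − z*_s)/z*_s, a design is p-robust iff its maximum relative regret is at most p, and the smallest feasible p for a design therefore equals that maximum.

```python
# Hypothetical data: scenario-optimal costs z*_s and the cost each
# candidate network design incurs in each scenario.
z_star = [100.0, 200.0, 300.0]
designs = {
    "X": [110.0, 210.0, 330.0],
    "Y": [105.0, 230.0, 315.0],
}

# Max relative regret per design = smallest p for which it is p-robust.
p_min = {name: max((c - z) / z for c, z in zip(costs, z_star))
         for name, costs in designs.items()}
best = min(p_min, key=p_min.get)
print("smallest feasible p per design:", p_min, "| min-max design:", best)
```

Design X is p-robust for any p ≥ 0.10 while Y needs p ≥ 0.15, so X is the min-max-regret choice; the paper's bounds on the regret limit play out over exactly this kind of comparison.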

11.
In nonlinear panel data models, the incidental parameter problem remains a challenge to econometricians. Available solutions are often based on ingenious, model‐specific methods. In this paper, we propose a systematic approach to construct moment restrictions on common parameters that are free from the individual fixed effects. This is done by an orthogonal projection that differences out the unknown distribution function of individual effects. Our method applies generally in likelihood models with continuous dependent variables where a condition of non‐surjectivity holds. The resulting method‐of‐moments estimators are root‐N consistent (for fixed T) and asymptotically normal, under regularity conditions that we spell out. Several examples and a small‐scale simulation exercise complete the paper.

12.
This paper describes an experimentation methodology to measure how demand varies with price and the results of its application at a toy retailer. The same product is assigned different price‐points in different store panels and the resulting sales are used to estimate a demand curve. We use a variant of the k‐median problem to form store panels that control for differences between stores and produce results that are representative of the entire chain. We use the estimated demand curve to find a price that maximizes profit. Our experiment yielded the unexpected result that demand increases with price in some cases. We present likely reasons for this finding from our discussions with retail managers. Our methodology can be used to analyze the effect of several marketing and promotional levers employed in a retail store besides pricing.
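The estimation step can be sketched with a constant-elasticity demand model. The data below are invented, and the store panels are assumed to have already been formed by the k-median matching; the code fits log q = a + b log p and, for elasticity b < −1 and unit cost c, prices at p* = b·c/(b + 1).

```python
import numpy as np

# Hypothetical panel results: each panel of matched stores tested one
# price point; sales here follow q = 1000 * p**-2 exactly, for illustration.
prices = np.array([10.0, 12.0, 14.0, 16.0])
sales = 1000.0 * prices ** -2

# Fit the log-linear demand curve log q = a + b log p.
b, a = np.polyfit(np.log(prices), np.log(sales), 1)
print("estimated elasticity:", b)

# Profit-maximizing price for constant elasticity b < -1 and unit cost c:
# maximize (p - c) * K * p**b  =>  first-order condition gives p* = b*c/(b+1).
c = 5.0
p_star = b * c / (b + 1)
print("profit-maximizing price:", p_star)
```

With elasticity −2 and unit cost 5, the optimal markup rule gives p* = 10; an upward-sloping fit (b > 0), as observed for some products in the paper, would make the rule inapplicable and signal the anomaly discussed with the retail managers.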

13.
Improvements in information technologies provide new opportunities to control and improve business processes based on real‐time performance data. A class of data we call individualized trace data (ITD) identifies the real‐time status of individual entities as they move through execution processes, such as an individual product passing through a supply chain or a uniquely identified mortgage application going through an approval process. We develop a mathematical framework, which we call the State‐Identity‐Time (SIT) framework, to represent and manipulate ITD at multiple levels of aggregation for different managerial purposes. Using this framework, we design a pair of generic quality measures—timeliness and correctness—for the progress of entities through a supply chain. The timeliness and correctness metrics provide behavioral visibility that can help managers grasp the dynamics of supply chain behavior, a visibility distinct from asset visibility such as inventory. We develop special quality control methods within this framework to address the overreaction that is common among managers faced with a large volume of fast‐changing data. The SIT structure and its associated methods inform managers on whether, when, and where to react. We illustrate our approach using simulations based on real RFID data from a Walmart RFID pilot project.
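The two metrics can be sketched directly on raw ITD records. Everything below is hypothetical (entity names, planned sequence, and time limits are invented, not the paper's definitions): correctness checks that an entity's observed states follow the planned order, and timeliness checks that each transition met its planned duration.

```python
# Hypothetical ITD: (entity, state, timestamp) records from a supply chain.
itd = [
    ("e1", "packed", 0), ("e1", "shipped", 2), ("e1", "received", 5),
    ("e2", "packed", 0), ("e2", "shipped", 4), ("e2", "received", 6),
    ("e3", "packed", 0), ("e3", "received", 3), ("e3", "shipped", 5),
]
plan = ["packed", "shipped", "received"]     # expected state order
limits = {("packed", "shipped"): 3, ("shipped", "received"): 4}

# Group records per entity, preserving observation order.
traces = {}
for entity, state, t in itd:
    traces.setdefault(entity, []).append((state, t))

# Correctness: share of entities whose state sequence matches the plan.
correct = [e for e, tr in traces.items() if [s for s, _ in tr] == plan]
correctness = len(correct) / len(traces)

# Timeliness over correctly sequenced entities: share of on-time transitions.
on_time = total = 0
for e in correct:
    tr = traces[e]
    for (s1, t1), (s2, t2) in zip(tr, tr[1:]):
        total += 1
        on_time += t2 - t1 <= limits[(s1, s2)]
timeliness = on_time / total
print("correctness:", correctness, "timeliness:", timeliness)
```

Aggregating these per-entity signals over work areas or time windows gives the multi-level views the SIT framework is built for, without reacting to any single late record.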

14.
The network choice revenue management problem models customers as choosing from an offer set, and the firm decides the best subset to offer at any given moment to maximize expected revenue. The resulting dynamic program for the firm is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. Under the choice-set paradigm, when the segment consideration sets overlap the CDLP is difficult to solve; column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this study, starting with a concave program formulation based on segment-level consideration sets called SDCP, we add a class of constraints called product constraints (σPC) that project onto subsets of intersections. In addition, we propose a natural direct tightening of the SDCP, and compare the performance of both methods on the benchmark data sets in the literature. In our computational testing on the data sets, 2PC achieves the CDLP value at a fraction of the CPU time taken by column generation. For a large network our 2PC procedure runs in under 70 seconds to come within 0.02% of the CDLP value, while column generation takes around 1 hour; for an even larger network with 68 legs, column generation does not converge even in 10 hours for most of the scenarios, while 2PC runs in under 9 minutes. We therefore believe our approach is very promising for quickly approximating the CDLP when segment consideration sets overlap and the consideration sets themselves are relatively small.

15.
In recent years, the issue of water allocation among competing users has been of great concern for many countries due to increasing water demand from population growth and economic development. In water management systems, the inherent uncertainties and their potential interactions pose a significant challenge for water managers seeking optimal water-allocation schemes in a complex and uncertain environment. This paper thus proposes a methodology that incorporates optimization techniques and statistical experimental designs within a general framework to address uncertainty and risk, as well as their correlations, in a systematic manner. A water resources management problem is used to demonstrate the applicability of the proposed methodology. The results indicate that interval solutions can be generated for the objective function and decision variables, and that a number of decision alternatives can be obtained under different policy scenarios. The solutions with different risk levels of constraint violation help quantify the relationship between the economic objective and the system risk, which is meaningful for supporting risk management. The experimental data obtained from the Taguchi orthogonal array design are useful for identifying the significant factors affecting the mean total net benefit, and the findings from the mixed-level factorial experiment help reveal the latent interactions between those significant factors at different levels and their effects on the modeling response.
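The factor-screening part can be sketched with a Taguchi L4 orthogonal array. The design matrix is the standard L4(2³) for three two-level factors; the responses are invented; a factor's main effect is the mean response at its high level minus the mean at its low level.

```python
import numpy as np

# Standard Taguchi L4 orthogonal array: 3 two-level factors in 4 runs.
L4 = np.array([[1, 1, 1],
               [1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]])
# Hypothetical responses (e.g., total net benefit) for the 4 runs.
y = np.array([10.0, 12.0, 14.0, 18.0])

# Main effect of each factor: mean(level-2 runs) - mean(level-1 runs).
effects = np.array([y[L4[:, j] == 2].mean() - y[L4[:, j] == 1].mean()
                    for j in range(L4.shape[1])])
print("main effects of factors A, B, C:", effects)
```

Factor A dominates here; in the paper's workflow such a screen selects the significant factors before the mixed-level factorial experiment probes their interactions.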

16.
The scenario of established business sellers utilizing online auction markets to reach consumers and sell new products is becoming increasingly common. We propose a class of risk management tools for such sellers, loosely based on the concept of financial options. While conceptually similar to options in financial markets, we empirically demonstrate that option instruments within auction markets cannot be developed with similar methodologies, because the fundamental tenets of extant option pricing models do not hold within online auction markets. We provide a framework to analyze the value proposition of options to potential sellers, the implications of option-holder behavior for auction processes, and seller strategies to write and price options that maximize potential revenues. We then develop an approach that enables a seller to assess the demand for options under different option price and volume scenarios. We compare option prices derived from our approach with those derived from the Black-Scholes model (Black & Scholes, 1973) and discuss the implications of the price differences. Experiments based on actual auction data suggest that options can provide significant benefits under a variety of option-holder behavioral patterns.
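The comparison in the abstract can be sketched numerically. Below, a standard Black-Scholes call price is computed next to a distribution-free estimate that discounts the option payoff over a sample of auction closing prices; the closing-price sample and all parameters are invented for illustration, and the empirical estimator is a stand-in for the paper's approach, not a reproduction of it.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes (1973) price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
bs_price = black_scholes_call(S, K, r, sigma, T)

# Distribution-free alternative: discount the mean option payoff over an
# (invented) empirical sample of auction closing prices.
closing_prices = [85.0, 95.0, 105.0, 120.0, 140.0]
payoffs = [max(p - K, 0.0) for p in closing_prices]
empirical_price = math.exp(-r * T) * sum(payoffs) / len(payoffs)

print("Black-Scholes price:", bs_price)
print("empirical auction-option price:", empirical_price)
```

The gap between the two numbers is the point: auction closing prices need not follow the lognormal dynamics Black-Scholes assumes, so prices derived from the empirical distribution can differ materially.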

17.
Management-by-walking-around (MBWA) is a widely adopted technique in hospitals that involves senior managers directly observing frontline work. However, few studies have rigorously examined its impact on organizational outcomes. This study examines an improvement program based on MBWA in which senior managers observe frontline employees, solicit ideas about improvement opportunities, and work with staff to resolve the issues. We randomly selected hospitals to implement the 18-month-long MBWA-based improvement program; 56 work areas participated. We find that the program, on average, had a negative impact on performance. To explain this surprising finding, we use mixed methods to examine the impact of each work area's problem-solving approach. The results suggest that prioritizing easy-to-solve problems was associated with improved performance; we believe this was because it resulted in greater action-taking. A different approach, characterized by prioritizing high-value problems, was not successful in our study. We also find that assigning senior managers responsibility for ensuring that identified problems get resolved led to better performance. Overall, our study suggests that senior managers' physical presence on their organizations' front lines was not helpful unless it enabled active problem solving.

18.
Reliability determines, in large part, operational productivity. Nevertheless, a frequent problem is the absence of effective mechanisms to support maintenance management; in particular, there is a need for methodologies that improve the detection and analysis of the risks that affect reliability. This article presents a methodological proposal for resolving these problems using a high-impact failure mode analysis. The methodology comprises four phases: identification of failure modes, ranking and criticality analysis of those modes, identification of the root cause(s), and search for highly effective solutions. Among the variety of applicable tools, the use of three specific ones is proposed: Criticality Analysis, which allows discrimination and ranking of phenomena and assets; Root Cause Analysis, which focuses on identifying the real causes of the problems; and a tool for generating effective and efficient solutions (TRIZ), which is not usually applied to reliability problems. The proposal is applied in a mining filtration plant, identifying and classifying current problems and generating solutions.
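The ranking phase of such a methodology can be sketched with a simple criticality score. The failure modes and scores below are invented: criticality is taken as frequency × consequence, and the top-ranked modes become the candidates for root cause analysis and TRIZ-style solution generation.

```python
# Hypothetical failure modes for a filtration plant, scored 1-10.
failure_modes = [
    ("pump seal leak",             {"frequency": 8, "consequence": 6}),
    ("filter cloth tear",          {"frequency": 5, "consequence": 9}),
    ("valve actuator stuck",       {"frequency": 3, "consequence": 4}),
    ("conveyor belt misalignment", {"frequency": 6, "consequence": 3}),
]

# Criticality = frequency x consequence; rank in descending order.
ranked = sorted(failure_modes,
                key=lambda fm: fm[1]["frequency"] * fm[1]["consequence"],
                reverse=True)
for name, s in ranked:
    print(f"{name}: criticality {s['frequency'] * s['consequence']}")

# Modes above a chosen threshold feed the root-cause and solution phases.
critical = [name for name, s in ranked
            if s["frequency"] * s["consequence"] >= 40]
print("high-impact modes:", critical)
```

Real criticality analyses weight more dimensions (detectability, safety, cost), but the discriminate-and-rank structure is the same.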

19.
The variable-route vehicle-refueling problem (VRVRP) is a variant of the network-flow problem that seeks, for a vehicle traveling from origin s to destination d, both the route and the refueling policy (the sequence of fuel stations to use between s and d) that jointly minimize the fuel cost of operating the vehicle. Commercial-grade decision support systems that solve the VRVRP are widely used by motor carriers, but they provide only heuristic solutions. Exact methods are available from the academic side, but because they focus on minimizing costs, they tend to cut fuel costs in exchange for increased vehicle miles (which can increase fuel consumption and pollutant emissions). We propose a new approach to the VRVRP that allows carriers to pursue the two possibly conflicting goals jointly: minimizing fuel cost and minimizing vehicle miles. Computational testing shows that our approach (i) outperforms the commercial software products on both goals, and (ii) finds solutions that require significantly fewer vehicle miles than those given by the exact method proposed in the academic literature, without incurring unacceptable increases in fuel cost.
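The cost-versus-miles tension can be sketched by scalarizing the two objectives. The toy network below is invented (not the paper's method): each edge carries a (fuel cost, miles) pair, and a weighted-sum Dijkstra is run with weight λ on cost, so λ = 1 recovers the cheapest route and λ = 0 the shortest one.

```python
import heapq

# Hypothetical network: edge -> (fuel cost in $, distance in miles).
graph = {
    "s": {"a": (10.0, 50.0), "b": (14.0, 40.0)},
    "a": {"d": (12.0, 60.0)},
    "b": {"d": (10.0, 45.0)},
    "d": {},
}

def best_route(lam):
    """Dijkstra on the scalarized weight lam*cost + (1-lam)*miles."""
    pq = [(0.0, 0.0, 0.0, "s", ["s"])]   # (weight, cost, miles, node, path)
    seen = set()
    while pq:
        w, cost, miles, node, path = heapq.heappop(pq)
        if node == "d":
            return path, cost, miles
        if node in seen:
            continue
        seen.add(node)
        for nxt, (c, m) in graph[node].items():
            heapq.heappush(pq, (w + lam * c + (1 - lam) * m,
                                cost + c, miles + m, nxt, path + [nxt]))
    return None

cheapest = best_route(1.0)   # minimize fuel cost only
shortest = best_route(0.0)   # minimize vehicle miles only
print("min-cost route:", cheapest)
print("min-miles route:", shortest)
```

On this network the cheapest route ($22, 110 miles) and the shortest route ($24, 85 miles) differ, which is exactly the trade-off the bi-objective approach exposes; sweeping λ between 0 and 1 traces the compromise routes.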

20.
We present the Integrated Preference Functional (IPF) for comparing the quality of proposed sets of near-Pareto-optimal solutions to bi-criteria optimization problems. Evaluating the quality of such solution sets is one of the key issues in developing and comparing heuristics for multiple objective combinatorial optimization problems. The IPF is a set functional that, given a weight density function provided by a decision maker and a discrete set of solutions for a particular problem, assigns a numerical value to that solution set. This value can be used to compare the quality of different sets of solutions, and it therefore provides a robust, quantitative approach for comparing different heuristic, a posteriori solution procedures for difficult multiple objective optimization problems. We provide specific examples of decision-maker preference functions and illustrate the calculation of the resulting IPF for specific solution sets and a simple family of combined objectives.
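The IPF calculation can be sketched numerically. Assume, hypothetically, a bi-criteria minimization, a uniform weight density on λ ∈ [0, 1], and the combined objective λf1 + (1 − λ)f2; the IPF of a solution set is then the expected best combined value over λ (the solution sets below are invented).

```python
import numpy as np

def ipf(solutions, n=200_001):
    """IPF under a uniform weight density: average over lambda of the
    best (minimum) weighted value achieved by the solution set."""
    lam = np.linspace(0.0, 1.0, n)[:, None]          # weight grid
    f = np.array(solutions)                          # rows: (f1, f2)
    weighted = lam * f[:, 0] + (1 - lam) * f[:, 1]   # grid x solutions
    return weighted.min(axis=1).mean()

# Two hypothetical near-Pareto sets for a bi-criteria minimization.
set_a = [(1.0, 3.0), (1.6, 1.6), (3.0, 1.0)]
set_b = [(1.0, 3.0), (3.0, 1.0)]
print("IPF(A):", ipf(set_a))   # lower is better for minimization
print("IPF(B):", ipf(set_b))
```

The middle solution (1.6, 1.6) is the best choice for moderate weights, so set A scores strictly better (about 1.42 versus 1.50): the IPF rewards sets that cover the weight space well, which is exactly how it discriminates between heuristics.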
