Similar Articles
Found 20 similar articles (search time: 312 ms)
1.
We propose a tractable, data-driven demand estimation procedure based on maximum entropy (ME) distributions and apply it to a stochastic capacity control problem motivated by airline revenue management. Specifically, we study the two-fare-class "Littlewood" problem in a setting where the firm has access only to potentially censored sales observations; this is also known as the repeated newsvendor problem. We propose a heuristic that iteratively fits an ME distribution to all observed sales data and in each iteration selects a protection level based on the estimated distribution. When the underlying demand distribution is discrete, we show that the sequence of protection levels converges to the optimal one almost surely, and that the ME demand forecast converges to the true demand distribution for all values below the optimal protection level. That is, the proposed heuristic avoids the "spiral-down" effect, making it attractive for joint forecasting and revenue optimization problems in the presence of censored observations.
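As a concrete illustration of the rule underlying the two-fare-class problem above, the sketch below computes a protection level from a discrete demand distribution via Littlewood's rule. The pmf is a hypothetical stand-in for the paper's ME estimate; the heuristic's censoring-aware fitting step is not shown.

```python
def protection_level(pmf, p_high, p_low):
    """Littlewood's rule for two fare classes with a discrete
    high-fare demand distribution: protect the smallest y such
    that P(D > y) <= p_low / p_high.

    pmf[d] is the probability of high-fare demand equal to d."""
    ratio = p_low / p_high
    tail = 1.0                    # running tail probability P(D > d)
    for d, prob in enumerate(pmf):
        tail -= prob              # now tail = P(D > d)
        if tail <= ratio:
            return d
    return len(pmf) - 1


# Uniform demand on {0,...,4}, fares 200 and 100: protect 2 seats.
y_star = protection_level([0.2, 0.2, 0.2, 0.2, 0.2], 200, 100)
```

In the paper's heuristic, the pmf argument would be refit in each iteration from the (possibly censored) sales history before recomputing the protection level.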

2.
Computation of typical statistical sample estimates such as the median or a least squares fit usually requires the solution of an unconstrained optimization problem with a convex objective function, which can be solved efficiently by various methods. The presence of outliers in the data dictates the computation of a robust estimate, which can be defined as the optimum statistical estimate for a subset that contains at least half of the observations. The resulting problem is a combinatorial optimization problem that is often computationally intractable. Classical methods for estimating the multivariate location μ and scatter matrix Σ are based on the sample mean vector and covariance matrix, which are very sensitive to outlying observations. We propose a new method for robust location and scatter estimation composed of two stages. In the first stage, an unbiased multivariate L1-median center for all the observations is obtained by a novel procedure called the least trimmed Euclidean deviations estimator. This robust median defines a coverage set of observations, which is used in the second stage to iteratively compute the set of outliers that violate the correlational structure of the data set. Extensive computational experiments indicate that the proposed method outperforms existing methods in accuracy, robustness and computational time.
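The plain (untrimmed) multivariate L1-median mentioned above can be computed with Weiszfeld's iteration. This sketch shows only that standard building block, not the paper's least trimmed Euclidean deviations estimator.

```python
def l1_median(points, iters=200, eps=1e-9):
    """Weiszfeld iteration for the geometric (L1) median of a list
    of d-dimensional points; the plain version, not the trimmed
    estimator of the paper."""
    dim = len(points[0])
    # start from the coordinate-wise mean
    y = [sum(p[i] for p in points) / len(points) for i in range(dim)]
    for _ in range(iters):
        num = [0.0] * dim
        den = 0.0
        for p in points:
            d = sum((p[i] - y[i]) ** 2 for i in range(dim)) ** 0.5
            if d < eps:           # iterate sits on a data point
                continue
            w = 1.0 / d           # inverse-distance weight
            for i in range(dim):
                num[i] += w * p[i]
            den += w
        if den == 0.0:
            break
        y = [num[i] / den for i in range(dim)]
    return y
```

The second-stage outlier screen in the paper would then measure deviations from this center under the estimated correlational structure.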

3.
CJ Sanctuary, KR Nurse. Omega, 1981, 9(5): 469-480
The work described in this paper was commissioned by the DHSS policy branch responsible for family benefits. The problem is rather more specific than those addressed by the studies described in the paper by Holdaway and Partridge. It does however parallel at least one of the observations made there. The study started out with a relatively broad brief but was constrained by events to concentrate on one particular problem, namely forecasting the cost and caseload of Family Income Supplement.

4.
This article deals with a stochastic optimal control problem for a class of buffered multi-part flow-shop manufacturing systems. The machines involved are subject to random breakdowns and repairs. The flow-shop under consideration is not completely flexible and hence requires setup time and cost in order to switch production from one part type to another; this changeover is carried out on the whole line. Our objective is to find the production plan and the sequence of setups that minimise the cost function, which penalises inventories/backlogs and setups. A continuous dynamic programming formulation of the problem is presented. A numerical scheme is then adopted to solve the resulting optimality-condition equations for a case with two buffered serial machines and two part types. A complete heuristic policy, based on numerical observations of the optimal policies across system states, is developed. The obtained policy is shown to be a combination of a KANBAN/CONWIP policy and a modified hedging corridor policy. Moreover, based on our observations and existing research, extension to more complex flow-shops is possible. The robustness of the policy is illustrated through sensitivity analysis.

5.
The multiprocessor job scheduling problem has become increasingly interesting for both theoretical study and practical applications. Theoretical study of the problem has made significant progress recently, which, however, does not yet seem to translate into practical algorithms. Practical algorithms have been developed only for systems with three processors, and the techniques seem difficult to extend to systems with more than three processors. This paper offers new observations and introduces new techniques for the multiprocessor job scheduling problem on systems with four processors. A very simple and practical linear-time approximation algorithm with ratio bounded by 1.5 is developed for the multiprocessor job scheduling problem P4|fix|Cmax, which significantly improves previous results. Our techniques are also useful for multiprocessor job scheduling problems on systems with more than four processors.

6.
Biplane projection imaging is one of the primary methods for imaging and visualizing the cardiovascular system in medicine. A key problem in this technique is to determine the imaging geometry (i.e., the relative rotation and translation) of two projections so that the 3-D structure can be accurately reconstructed. Based on interesting observations and efficient geometric techniques, we present in this paper new algorithmic solutions for this problem. Compared with existing optimization-based approaches, our techniques yield better accuracy and have bounded execution time, and are thus more suitable for on-line applications. Our techniques can also easily detect outliers to further improve accuracy. This research was supported in part by NIH under USPHS grant number HL52567.

7.
The curse-of-dimensionality problem arises when a limited number of observations are used to estimate a high-dimensional frontier, in particular by data envelopment analysis (DEA). The study conducts a data generating process (DGP) to show that the typical "rule of thumb" used in DEA, e.g. that the number of observations should be at least twice the number of inputs and outputs, is ambiguous and produces large deviations in estimating technical efficiency. To address this issue, we propose a Least Absolute Shrinkage and Selection Operator (LASSO) variable selection technique, commonly used in data science for extracting significant factors, and combine it with sign-constrained convex nonparametric least squares (SCNLS), which can be regarded as a DEA estimator. Simulation results demonstrate that the proposed LASSO-SCNLS method and its variants provide useful guidelines for DEA with small datasets.
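A minimal coordinate-descent LASSO of the kind used above for variable selection can be sketched as follows. This is a generic implementation, not the SCNLS estimator itself, and the penalty level `lam` is a hypothetical choice.

```python
import numpy as np

def lasso_cd(X, y, lam, iters=500):
    """Coordinate-descent LASSO minimising
    0.5/n * ||y - X b||^2 + lam * ||b||_1.
    Generic variable-selection step; not the SCNLS/DEA estimator."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n      # per-column scale
    for _ in range(iters):
        for j in range(p):
            # partial residual with coordinate j removed
            r_j = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r_j / n
            # soft-thresholding update for coordinate j
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b
```

Coefficients driven exactly to zero identify the inputs and outputs that could be dropped before fitting the frontier on a small sample.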

8.
Through observations from real life hub networks, we introduce the multimodal hub location and hub network design problem. We approach the hub location problem from a network design perspective. In addition to the location and allocation decisions, we also study the decision on how the hub networks with different possible transportation modes must be designed. In this multimodal hub location and hub network design problem, we jointly consider transportation costs and travel times, which are studied separately in most hub location problems presented in the literature. We allow different transportation modes between hubs and different types of service time promises between origin–destination pairs while designing the hub network in the multimodal problem. We first propose a linear mixed integer programming model for this problem and then derive variants of the problem that might arise in certain applications. The models are enhanced via a set of effective valid inequalities and an efficient heuristic is developed. Computational analyses are presented on the various instances from the Turkish network and CAB data set.  相似文献   

9.
This paper considers tests for structural instability of short duration, such as at the end of the sample. The key feature of the testing problem is that the number, m, of observations in the period of potential change is relatively small—possibly as small as one. The well‐known F test of Chow (1960) for this problem only applies in a linear regression model with normally distributed iid errors and strictly exogenous regressors, even when the total number of observations, n+m, is large. We generalize the F test to cover regression models with much more general error processes, regressors that are not strictly exogenous, and estimation by instrumental variables as well as least squares. In addition, we extend the F test to nonlinear models estimated by generalized method of moments and maximum likelihood. Asymptotic critical values that are valid as n→∞ with m fixed are provided using a subsampling‐like method. The results apply quite generally to processes that are strictly stationary and ergodic under the null hypothesis of no structural instability.
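The classical Chow predictive statistic that this work generalizes can be sketched as below. The sketch assumes a linear model fit by least squares; the paper's subsampling-based critical values are not implemented.

```python
import numpy as np

def chow_predictive_F(X, y, m):
    """Chow (1960) predictive test for instability in the last m
    observations: compares the residual sum of squares from the
    full n+m sample with that from the first n observations.
    F = ((RSS_full - RSS_n) / m) / (RSS_n / (n - k))."""
    n = len(y) - m
    k = X.shape[1]

    def rss(A, b):
        beta, *_ = np.linalg.lstsq(A, b, rcond=None)
        e = b - A @ beta
        return float(e @ e)

    rss_n = rss(X[:n], y[:n])
    rss_full = rss(X, y)
    return ((rss_full - rss_n) / m) / (rss_n / (n - k))
```

Under the classical normal-iid assumptions the statistic is F(m, n-k); the paper's contribution is to make inference valid well beyond that setting.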

10.
11.
We study the deterministic time-varying demand lot-sizing problem in which learning and forgetting in setups and production are considered simultaneously. It is an extension of Chiu's work. We propose a near-optimal forward dynamic programming algorithm and suggest the use of a good heuristic method when the computational effort would be prohibitive. Several important observations obtained from a two-phase experiment verify the effectiveness of the proposed algorithm and the chosen heuristic method.
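The forward dynamic programming recursion for the basic uncapacitated lot-sizing problem can be sketched as follows; the learning and forgetting effects studied in the paper are omitted in this sketch.

```python
def wagner_whitin(demand, setup, hold):
    """Forward DP for basic uncapacitated lot-sizing (Wagner-Whitin).
    demand[t]: demand in period t+1; setup: fixed cost per order;
    hold: per-unit per-period holding cost. Returns the minimum
    total cost over the horizon."""
    T = len(demand)
    INF = float("inf")
    cost = [0.0] + [INF] * T          # cost[t] = min cost through period t
    for t in range(1, T + 1):
        for s in range(1, t + 1):     # last setup at period s covers s..t
            c = cost[s - 1] + setup
            # holding cost: demand of period j is carried for j - s periods
            c += sum(hold * (j - s) * demand[j - 1] for j in range(s, t + 1))
            cost[t] = min(cost[t], c)
    return cost[T]
```

With learning and forgetting, the setup cost would depend on the setup history rather than being a constant, which is what makes the paper's extension nontrivial.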

12.
Industrial robots are increasingly used by many manufacturing firms. The number of robot manufacturers has also increased, with many of these firms now offering a wide range of models. A potential user is thus faced with many options in both performance and cost. This paper proposes a decision model for the robot selection problem. The proposed model uses robust regression to identify, based on manufacturers' specifications, the robots that are the better performers for a given cost. Robust regression is used because it identifies, and is resistant to the effects of, outlying observations, which are key components of the proposed model. The robots selected by the model become candidates for testing to verify manufacturers' specifications. The model is tested on a real data set and an example is presented.
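As one illustration of an outlier-resistant fit of the kind described, the Theil-Sen estimator below is a generic robust regression of performance on cost; the paper's specific robust-regression method may differ.

```python
from statistics import median

def theil_sen(xs, ys):
    """Theil-Sen estimator for a simple regression of ys on xs:
    slope = median of all pairwise slopes, intercept = median
    residual. Resistant to a substantial fraction of outliers."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs))
              for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    b = median(slopes)
    a = median(y - b * x for x, y in zip(xs, ys))
    return a, b
```

Robots lying far above the fitted cost-performance line would be flagged as the better performers for their cost; far-below points are the outliers the robust fit refuses to chase.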

13.
Leadership has traditionally been seen as a distinctly interpersonal phenomenon demonstrated in the interactions between leaders and subordinates. The theory of leadership presented in this article proposes that effective leadership behavior fundamentally depends upon the leader's ability to solve the kinds of complex social problems that arise in organizations. The skills that make this type of complex social problem solving possible are discussed. The differential characteristics and career experiences likely to influence the development of these skills also are considered along with the implications of these observations for leadership theory and for the career development of organizational leaders.

14.
Sourcing from multiple suppliers with different characteristics is common in practice for various reasons. This paper studies a dynamic procurement planning problem in which the firm can replenish inventory from a fast and a slow supplier, both with uncertain capacities. The optimal policy is characterized by two reorder points, one for each supplier. Whenever the pre‐order inventory level is below the reorder point, a replenishment order is issued to the corresponding supplier. Interestingly, the reorder point for the slow supplier can be higher than that of the fast even if the former has a higher cost, lower reliability, and smaller capacity than the latter, suggesting the possibility of ordering exclusively from an inferior slow supplier in the short term. Moreover, the firm may allocate a larger portion of the long‐term total order quantity to the slow supplier than to the fast, even if the former does not possess any cost or reliability advantage over the latter. Such phenomena, different from the observations made in previous studies, happen when the demand is uncertain and the supply is limited or unreliable. Our observations highlight the importance of incorporating both demand uncertainty and supplier characteristics (i.e., cost, lead time, capacity and uncertainty) in a unified framework when formulating supplier selection and order allocation strategies.
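A literal reading of the two-reorder-point policy can be sketched as follows. The order-up-to quantities and the capacity caps are assumptions for illustration, since the abstract does not specify how large each order is.

```python
def place_orders(inv, r_fast, r_slow, cap_fast, cap_slow):
    """Two-reorder-point dual-sourcing decision: each supplier is
    triggered when pre-order inventory is below its own reorder
    point. Order-up-to quantities capped by realized supplier
    capacity are an illustrative assumption, not the paper's exact
    order-size rule."""
    q_fast = min(cap_fast, r_fast - inv) if inv < r_fast else 0
    q_slow = min(cap_slow, r_slow - inv) if inv < r_slow else 0
    return q_fast, q_slow
```

With r_slow above r_fast, as the abstract allows, intermediate inventory levels trigger the slow supplier alone, which is exactly the "exclusive ordering from an inferior slow supplier" phenomenon described.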

15.
J.M. Wilson. Omega, 1996, 24(6): 681-688
A series of approaches is presented for formulating statistical classification problems using integer programming. The formulations attempt to maximize the number of observations that can be properly classified and utilize single-function, multiple-function and hierarchical multiple-function approaches. The formulations are tested using standard software on a sample problem, and the new approaches are compared to those of other authors. As the solution of such problems gives rise to various awkward features in an integer programming framework, it is shown that the new formulation approaches do not completely avoid the difficulties of existing methods, but they do offer certain gains.

16.
Celik Parkan. Decision Sciences, 1979, 10(3): 487-492
This note is an extension of the approach to the problem of reneging introduced in Parkan and Warren [1]. Customers considering joining an M/M/1 queuing system are assumed to hold a gamma prior distribution over the value of the mean service time; thus, each customer has an initial estimate of his total waiting time in the system. The customers associate the same sunk value with the waiting time and obtain the same reward at service completion. Having joined the system, each customer may consider reneging after revising his initial service-time estimate based on observations of service. Bounds on the stationary state probabilities for such a system are obtained, and examples are provided to compare the cases with and without reneging.
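The customer's revision step admits a standard conjugate form: with exponentially distributed service times and a gamma prior on the service rate, the posterior is again gamma. A sketch assuming this rate parameterisation (the note's exact setup may differ):

```python
def update_gamma_prior(alpha, beta, service_times):
    """Conjugate Bayesian update: service times ~ Exp(mu) with
    prior mu ~ Gamma(alpha, beta) (rate parameterisation) gives
    posterior mu ~ Gamma(alpha + n, beta + sum of observations).
    The posterior mean service time E[1/mu] is beta'/(alpha'-1)
    when alpha' > 1."""
    n = len(service_times)
    a_post = alpha + n
    b_post = beta + sum(service_times)
    mean_service = b_post / (a_post - 1) if a_post > 1 else float("inf")
    return a_post, b_post, mean_service
```

A customer would renege when the revised mean service time pushes the expected remaining waiting cost above the reward from service completion.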

17.
The effectiveness of the joint estimation (JE) outlier detection method as a process control technique for short autocorrelated time series is investigated and compared with the exponentially weighted moving average (EWMA). The research goal is to determine the effectiveness of the method at detecting an out-of-control observation when it is the last observation in a short autocorrelated time series. This is an important problem because detecting an outlier in the period when it occurs, rather than several periods later, will preclude the production of more defective units. Two cases are investigated: short simulated time series when normality is assumed, and short real time series when the assumption is violated. The results show that JE is effective for short time series, particularly for autoregressive series when normality is violated. Joint estimation is also effective for moving-average series under the normality assumption and less effective when the assumption is violated. In all cases, JE is found to be more effective than EWMA.
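The EWMA benchmark used in the comparison can be sketched as a standard control chart; an in-control mean of 0 and unit variance for the observations are assumed for illustration.

```python
def ewma_chart(series, lam=0.2, L=3.0):
    """EWMA control chart: z_t = lam*x_t + (1-lam)*z_{t-1} with
    exact time-varying +/- L-sigma limits, assuming iid N(0, 1)
    in-control observations. Returns an out-of-control flag per
    period."""
    z, flags = 0.0, []
    for t, x in enumerate(series, start=1):
        z = lam * x + (1 - lam) * z
        # exact standard deviation of the EWMA statistic at time t
        sd = (lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))) ** 0.5
        flags.append(abs(z) > L * sd)
    return flags
```

For autocorrelated series the chart would typically be applied to one-step-ahead forecast residuals rather than raw observations, which is where the comparison with JE becomes interesting.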

18.
Optimal knowledge outsourcing model
Every organization controls its investments in the development and maintenance of internal knowledge (IK) as opposed to outsourcing this effort, namely, consuming external knowledge (EK). A number of factors are involved in this decision, such as the IK learning curve, its associated holding cost, the value deterioration rate, the value of future IK, and the cost of purchasing EK. This study proposes a dynamic optimal control model for examining the properties of this problem. Optimal control strategies and steady-state conditions are identified for a number of special cases. Some insightful observations are obtained by studying the sensitivity of the solution to the underlying assumptions.

19.
Choice models with nonlinear budget sets provide a precise way of accounting for the nonlinear tax structures present in many applications. In this paper we propose a nonparametric approach to estimation of these models. The basic idea is to think of the choice, in our case hours of labor supply, as being a function of the entire budget set. Then we can do nonparametric regression where the variable in the regression is the budget set. We reduce the dimensionality of this problem by exploiting structure implied by utility maximization with piecewise linear convex budget sets. This structure leads to estimators where the number of segments can differ across observations and does not affect accuracy. We give consistency and asymptotic normality results for these estimators. The usefulness of the estimator is demonstrated in an empirical example, where we find it has a large impact on estimated effects of the Swedish tax reform.

20.
A method for combining multiple expert opinions, each encoded in a Bayesian belief network (BBN) model, is presented and applied to a problem involving the cleanup of hazardous chemicals at a site with contaminated groundwater. The method uses Bayes' rule to update each expert model with the observed evidence, then uses it again to compute posterior probability weights for each model. The weights reflect the consistency of each model with the observed evidence, allowing the aggregate model to be tailored to the particular conditions observed in the site-specific application of the risk model. The Bayesian update is easy to implement, since the likelihood of the set of evidence (observations for selected nodes of the BBN model) is readily computed by sequential execution of the BBN model. The method is demonstrated using a simple pedagogical example and subsequently applied to a groundwater contamination problem using an expert-knowledge BBN model. The BBN model in this application predicts the probability that reductive dechlorination of the contaminant trichloroethylene (TCE) is occurring at a site (a critical step in demonstrating the feasibility of monitored natural attenuation for site cleanup), given information on 14 measurable antecedent and descendant conditions. The predictions of the BBN models for 21 experts are weighted and aggregated using examples of hypothetical and actual site data. The method gives more weight to those expert models that are more reflective of the site conditions, and is shown to yield an aggregate prediction that differs from simple model averaging in a potentially significant manner.
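The posterior weighting step can be sketched directly from Bayes' rule. The likelihood values in the usage example are hypothetical placeholders for what sequential execution of each expert's BBN on the observed nodes would return.

```python
def posterior_weights(likelihoods, priors=None):
    """Posterior probability weight for each expert model given the
    observed evidence: w_i proportional to prior_i * P(evidence | model_i).
    With no prior information, each expert gets equal prior weight."""
    n = len(likelihoods)
    if priors is None:
        priors = [1.0 / n] * n
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]


# Hypothetical evidence likelihoods for three expert models:
# the first model explains the observations twice as well.
w = posterior_weights([0.2, 0.1, 0.1])
```

The aggregate site prediction is then the weight-averaged output of the individual (evidence-updated) expert models, rather than a simple unweighted average.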
