Similar articles
20 similar articles found.
1.
We propose a distribution‐free entropy‐based methodology to calculate the expected value of an uncertainty reduction effort and present our results within the context of reducing demand uncertainty. In contrast to existing techniques, the methodology does not require a priori assumptions regarding the underlying demand distribution, does not require sampled observations to be the mechanism by which uncertainty is reduced, and provides an expectation of information value as opposed to an upper bound. In our methodology, a decision maker uses his existing knowledge combined with the maximum entropy principle to model both his present and potential future states of uncertainty as probability densities over all possible demand distributions. Modeling uncertainty in this way provides for a theoretically justified and intuitively satisfying method of valuing an uncertainty reduction effort without knowing the information to be revealed. We demonstrate the methodology's use in three different settings: (i) a newsvendor valuing knowledge of expected demand, (ii) a short life cycle product supply manager considering the adoption of a quick response strategy, and (iii) a revenue manager making a pricing decision with limited knowledge of the market potential for his product.
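As a rough illustration of the maximum entropy principle this abstract relies on (a minimal sketch, not the authors' density-over-distributions construction): given only a finite demand support and a known mean, the max-entropy pmf has Gibbs form and can be found by bisection. The function name `maxent_pmf` and the example numbers are illustrative.

```python
import math

def maxent_pmf(support, target_mean):
    """Maximum-entropy pmf on a finite support with a fixed mean.

    The solution has Gibbs form p_i ∝ exp(lam * x_i); since the implied
    mean is strictly increasing in lam, lam is found by bisection.
    Assumes target_mean is attainable for lam in [-5, 5].
    """
    support = list(support)

    def mean_for(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    lo, hi = -5.0, 5.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

# Demand can be 0..10 units; the only knowledge is a mean of 3.5.
pmf = maxent_pmf(range(11), 3.5)
```

Any less information (e.g., only the support) gives the uniform distribution; any extra constraint (e.g., a variance) lowers the achievable entropy.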

2.
In this paper a new algorithm is proposed to produce an automatic dynamic compound estimator of the labour force based on an iterative scheme. The proposed algorithm, JARES, builds on Jaynes's probability estimator, which rests on the notion of maximum entropy of a given probability distribution subject to a constraint on the average of external information. The iterative scheme is based on the solution of a set of linear equations which represent the algebraic relationships between the weights and the estimates.

3.
This paper proposes a method that, from a relational perspective, incorporates both subjective prior information and objective information into the constraint set in order to solve for composite weights. Building on the use of grey relational depth coefficients to derive objective weight judgments for practical decision problems, it constructs subjective constraints in the form of weight ratio bounds derived from expert judgment, so that subjective and objective factors are reflected simultaneously in the constraints of the optimization model. The maximum entropy of the weights is taken as the objective function to ensure the credibility of the weight judgments, yielding a maximum-entropy optimization model for determining the composite weights of evaluation criteria. The method avoids the uncertainty in the resulting weights caused by the choice of subjective/objective trade-off parameters when the subjective and objective terms are combined linearly in the objective function. A case comparison with other weighting methods demonstrates the effectiveness of the approach.

4.
In decision-making under uncertainty, a decision-maker is required to specify, possibly with the help of decision analysts, point estimates of the probabilities of uncertain events. In this setting, it is often difficult to obtain very precise measurements of the decision-maker's probabilities on the states of nature, particularly when little information is available to evaluate probabilities, when available information is not specific enough, or when we have to model the conflict case where several information sources are available. In this paper, imprecise probabilities are considered for representing the decision-maker's perception or past experience about the states of nature; to be specific, interval probabilities, which can be further categorized as (a) intervals of individual probabilities, (b) intervals of probability differences, and (c) intervals of ratio probabilities. We present a heuristic approach to modeling a wider range of types of probabilities in addition to these three types of interval probabilities. In particular, the intervals of ratio probabilities, which are widely used in the uncertain AHP context, are analyzed to find extreme points by a change of variables, alongside the first two types of interval probabilities. Finally, we examine how these extreme points can be used to determine an ordering or partial ordering of the expected values of strategies.
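For the simplest case, intervals of individual probabilities, the extreme-point idea can be sketched directly: the maximum (or minimum) expected value is attained by starting all probabilities at their lower bounds and pushing the remaining mass greedily onto the best- (or worst-) payoff states. This is a minimal sketch, not the paper's full treatment of all three interval types; `expected_value_bounds` and the numbers are illustrative.

```python
def expected_value_bounds(payoffs, lower, upper):
    """Min and max expected payoff over probability vectors p with
    lower[i] <= p[i] <= upper[i] and sum(p) = 1 (assumed feasible).

    Greedy: start from the lower bounds, then allocate the leftover
    mass to states in order of (decreasing or increasing) payoff.
    """
    def extreme(maximize):
        order = sorted(range(len(payoffs)),
                       key=lambda i: payoffs[i], reverse=maximize)
        p = list(lower)
        slack = 1.0 - sum(lower)
        for i in order:
            add = min(upper[i] - lower[i], slack)
            p[i] += add
            slack -= add
        return sum(pi * v for pi, v in zip(p, payoffs))

    return extreme(False), extreme(True)

lo_ev, hi_ev = expected_value_bounds([10.0, 5.0, 1.0],
                                     [0.2, 0.2, 0.2],
                                     [0.6, 0.6, 0.6])
```

A strategy then (weakly) dominates another whenever its lower bound exceeds the other's upper bound.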

5.
Z.W. Kmietowicz & A.D. Pearman, Omega, 1984, 12(4): 391–399
The paper deals with decision making under conditions of linear partial information, i.e. when probabilities of states of nature are not known precisely, but are subject to linear constraints. Conditions ensuring strict and weak statistical dominance of one strategy over another are derived. It is also shown that weak dominance in terms of payoffs is equivalent to weak dominance in terms of regrets. The new results are more general than those obtained by Fishburn and by Kmietowicz and Pearman for weak and strict ranking of probabilities, and include them as special cases. The new results can be employed in practical decision making in several ways.
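For the special case the abstract mentions, weakly ranked probabilities (p1 ≥ p2 ≥ … ≥ pn), the extreme points of the feasible region are uniform distributions over the first k states, so expected-value bounds reduce to running averages. A minimal sketch of that Fishburn-style result, with an illustrative function name:

```python
def ev_bounds_ranked(payoffs):
    """Bounds on expected payoff when the only probability knowledge is
    the ranking p1 >= p2 >= ... >= pn (>= 0, summing to 1).

    The extreme points of that region put probability 1/k on the first
    k states, so the bounds are the min/max running averages.
    """
    running, total = [], 0.0
    for k, v in enumerate(payoffs, start=1):
        total += v
        running.append(total / k)
    return min(running), max(running)

lo, hi = ev_bounds_ranked([9.0, 3.0, 0.0])
```

Strict dominance of strategy A over B then amounts to the lower bound of `ev_bounds_ranked` applied to the payoff differences A − B being positive.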

6.
We consider multi-criteria group decision-making problems, where the decision makers (DMs) want to identify their most preferred alternative(s) based on uncertain or inaccurate criteria measurements. In many real-life problems the uncertainties may be dependent. In this paper, we focus on multi-criteria decision-making (MCDM) problems where the criteria and their uncertainties are computed using a stochastic simulation model. The model is based on decision variables and stochastic parameters with given distributions. The simulation model determines for the criteria a joint probability distribution, which quantifies the uncertainties and their dependencies. We present and compare two methods for treating the uncertainty and dependency information within the SMAA-2 multi-criteria decision aid method. The first method directly applies the discrete sample generated by the simulation model. The second method is based on using a multivariate Gaussian distribution. We demonstrate the methods using a decision support model for a retailer operating in the deregulated European electricity market.
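The first method, feeding the discrete simulation sample directly into SMAA-style analysis, can be sketched as a Monte Carlo estimate of first-rank acceptability indices (the fraction of weight/criteria draws under which each alternative ranks first). This is a simplified illustration, not the SMAA-2 implementation; `rank1_acceptability` and the toy sample are assumptions.

```python
import random

def rank1_acceptability(samples, n_weight_draws=2000, seed=7):
    """Monte Carlo first-rank acceptability indices, SMAA style.

    samples[s][a][c] is the value of criterion c for alternative a in
    simulation draw s (larger is better).  Weights are drawn uniformly
    from the simplex via sorted uniforms.
    """
    rng = random.Random(seed)
    n_alt = len(samples[0])
    n_crit = len(samples[0][0])
    wins = [0] * n_alt
    for _ in range(n_weight_draws):
        cuts = sorted(rng.random() for _ in range(n_crit - 1))
        w = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
        mat = rng.choice(samples)      # one draw of the stochastic criteria
        scores = [sum(wi * x for wi, x in zip(w, row)) for row in mat]
        wins[scores.index(max(scores))] += 1
    return [wc / n_weight_draws for wc in wins]

# One simulation draw, two alternatives, two criteria; alt 0 dominates.
acc = rank1_acceptability([[[1.0, 1.0], [0.0, 0.0]]])
```

Because each Monte Carlo iteration pairs a weight draw with a criteria draw from the joint sample, dependencies between criteria are preserved automatically.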

7.
Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio‐scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low‐probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real‐world risk analysis recently conducted for DEFRA (the U.K. Department for Environment, Food and Rural Affairs): the prioritization of animal health threats.
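For comparison with the article's maximum-entropy algorithm (which is not reproduced here), a widely used surrogate for turning a pure ranking into numeric probabilities is the rank-order-centroid (ROC) rule, the centroid of the ranked simplex. This sketch shows the ROC rule only as a point of reference; it is explicitly *not* the authors' method.

```python
def rank_order_centroid(n):
    """Rank-order-centroid (ROC) probabilities for n events ranked from
    most to least likely: p_k = (1/n) * sum_{i=k..n} 1/i.

    This is the centroid of the region p1 >= p2 >= ... >= pn, sum = 1 --
    a common ordinal-to-numeric surrogate, not the maximum-entropy
    approximation used in the article.
    """
    return [sum(1.0 / i for i in range(k, n + 1)) / n
            for k in range(1, n + 1)]

p = rank_order_centroid(3)   # e.g. 3 ranked threats
```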

8.
The rise in economic disparity presents significant risks to global social order and the resilience of local communities. However, existing measurement science for economic disparity (e.g., the Gini coefficient) does not explicitly consider a probability distribution with information, deficiencies, and uncertainties associated with the underlying income distribution. This article introduces the quantification of Shannon entropy for income inequality across scales, including national‐, subnational‐, and city‐level data. The probabilistic principles of Shannon entropy provide a new interpretation for uncertainty and risk related to economic disparity. Entropy and information‐based conflict rise as world incomes converge. High‐entropy instances can resemble both happy and prosperous societies and a socialist–communist social structure. Low entropy signals high‐risk tipping points for anomaly and conflict detection with higher confidence. Finally, spatial–temporal entropy maps for U.S. cities offer a city risk profiling framework. The results show polarization of household incomes within and across Baltimore, Washington, DC, and San Francisco. Entropy produces reliable results at significantly lower computational cost than the Gini coefficient.
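The two measures contrasted in the abstract can be sketched side by side: Shannon entropy of binned income shares versus the standard sorted-index formula for the Gini coefficient. This is a minimal sketch; the binning scheme (equal-count bins, assuming the sample size divides evenly) and function names are assumptions, not the article's exact construction.

```python
import math

def shannon_entropy(incomes, bins=5):
    """Shannon entropy (nats) of the share of total income held by each
    equal-count bin of households, sorted from poorest to richest.
    Assumes len(incomes) is divisible by bins."""
    xs = sorted(incomes)
    total = sum(xs)
    size = len(xs) // bins
    shares = [sum(xs[i * size:(i + 1) * size]) / total for i in range(bins)]
    return -sum(s * math.log(s) for s in shares if s > 0)

def gini(incomes):
    """Gini coefficient via the sorted-index formula
    G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n."""
    xs = sorted(incomes)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * cum / (n * sum(xs)) - (n + 1.0) / n

equal = [100.0] * 10            # perfect equality
skewed = [1.0] * 9 + [91.0]     # one household holds most income
```

Perfect equality gives Gini 0 and maximal entropy (log of the number of bins); concentration drives the Gini up and the entropy down.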

9.
One of the main steps in an uncertainty analysis is the selection of appropriate probability distribution functions for all stochastic variables. In this paper, criteria for such selections are reviewed, the most important among them being any a priori knowledge about the nature of a stochastic variable, and the Central Limit Theorem of probability theory applied to sums and products of stochastic variables. In applications of these criteria, it is shown that many of the popular selections, such as the uniform distribution for a poorly known variable, require far more knowledge than is actually available. However, the knowledge available is usually sufficient to make use of other, more appropriate distributions. Next, functions of stochastic variables and the selection of probability distributions for their arguments as well as the use of different methods of error propagation through these functions are discussed. From these evaluations, priorities can be assigned to determine which of the stochastic variables in a function need the most care in selecting the type of distribution and its parameters. Finally, a method is proposed to assist in the assignment of an appropriate distribution which is commensurate with the total information on a particular stochastic variable, and is based on the scientific method. Two examples are given to elucidate the method for cases of little or almost no information.

10.
Belief disagreements have been suggested as a major contributing factor to the recent subprime mortgage crisis. This paper theoretically evaluates this hypothesis. I assume that optimists have limited wealth and take on leverage so as to take positions in line with their beliefs. To have a significant effect on asset prices, they need to borrow from traders with pessimistic beliefs using loans collateralized by the asset itself. Since pessimists do not value the collateral as much as optimists do, they are reluctant to lend, which provides an endogenous constraint on optimists' ability to borrow and to influence asset prices. I demonstrate that the tightness of this constraint depends on the nature of belief disagreements. Optimism concerning the probability of downside states has no or little effect on asset prices because these types of optimism are disciplined by this constraint. Instead, optimism concerning the relative probability of upside states could have significant effects on asset prices. This asymmetric disciplining effect is robust to allowing for short selling because pessimists that borrow the asset face a similar endogenous constraint. These results emphasize that what investors disagree about matters for asset prices, to a greater extent than the level of disagreements. When richer contracts are available, relatively complex contracts that resemble some of the recent financial innovations in the mortgage market endogenously emerge to facilitate betting.

11.
This paper considers the question of how much time and effort should be spent in preparing a bid for a single item of known value sold at a first-price sealed-bid auction. A decision-theoretic approach to this bid decision summarizes the decision maker's knowledge of the competitive environment through his or her subjective probability distribution of the highest competing bid. Research activities such as collecting and analyzing bid histories are efforts to obtain additional information that reduces the uncertainty in the highest competing bid. The decision-theoretic concepts of expected value of perfect and imperfect information are used to place an economic value on such research activities. The results presented allow the decision maker to quantify the expected value of imperfect information when the uncertainty is normally distributed. The results show that additional research is most valuable prior to auctions the bidder expects to win.
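The expected value of *perfect* information in this setting has a clean numeric sketch: with known item value v and highest competing bid H ~ N(mu, sigma), the informed bidder wins exactly when H < v and pays (just over) H, while the uninformed bidder picks one bid b maximizing (v − b)·P(H < b). The function name and grid-search approach below are illustrative, not the paper's derivation.

```python
import math

SQRT2 = math.sqrt(2.0)

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / SQRT2))

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def evpi_first_price(v, mu, sigma, grid=2000):
    """EVPI for a first-price sealed-bid auction with known value v and
    highest competing bid H ~ N(mu, sigma).

    Informed payoff: E[(v - H)^+] = (v - mu) * Phi(z) + sigma * phi(z),
    with z = (v - mu) / sigma.  Uninformed payoff: max over b of
    (v - b) * P(H < b), approximated here by grid search on [mu - 4s, mu + 4s].
    """
    z = (v - mu) / sigma
    informed = (v - mu) * Phi(z) + sigma * phi(z)
    best = 0.0
    for i in range(grid + 1):
        b = mu - 4 * sigma + (8 * sigma) * i / grid
        best = max(best, (v - b) * Phi((b - mu) / sigma))
    return informed - best

value = evpi_first_price(100.0, 80.0, 10.0)
```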

12.
We consider a dynamic pricing problem that involves selling a given inventory of a single product over a short, two‐period selling season. There is insufficient time to replenish inventory during this season, hence sales are made entirely from inventory. The demand for the product is a stochastic, nonincreasing function of price. We assume interval uncertainty for demand, that is, knowledge of upper and lower bounds but not a probability distribution, with no correlation between the two periods. We minimize the maximum total regret over the two periods that results from the pricing decisions. We consider a dynamic model where the decision maker chooses the price for each period contingent on the remaining inventory at the beginning of the period, and a static model where the decision maker chooses the prices for both periods at the beginning of the first period. Both models can be solved by a polynomial time algorithm that solves systems of linear inequalities. Our computational study demonstrates that the prices generated by both our models are insensitive to errors in estimating the demand intervals. Our dynamic model outperforms our static model and two classical approaches that do not use demand probability distributions, when evaluated by maximum regret, average relative regret, variability, and risk measures. Further, our dynamic model generates a total expected revenue which closely approximates that of a maximum expected revenue approach which requires demand probability distributions.

13.
This paper studies multi-attribute group decision-making problems in which the attribute values are interval numbers and preference information over the alternatives is known. A relative-entropy measurement matrix of the deviations between each alternative's objective preference value and its subjective preference value is constructed. An attribute-weight model is then built from the relative entropy between the objective information and the preference information over the alternatives. A new possibility-degree formula for comparing interval numbers is established, and a ranking method for the alternatives based on this formula is given. A numerical example demonstrates the feasibility of the method.
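One commonly used possibility-degree formula for comparing two interval numbers (the article derives its own variant, not reproduced here) can be sketched as follows; `possibility_degree` is an illustrative name.

```python
def possibility_degree(a, b):
    """Possibility degree P(a >= b) for intervals a = [a1, a2] and
    b = [b1, b2], using the common width-normalized formula
    P(a >= b) = clip((a2 - b1) / (width(a) + width(b)), 0, 1)."""
    a1, a2 = a
    b1, b2 = b
    width = (a2 - a1) + (b2 - b1)
    if width == 0:                       # both intervals are single points
        return 0.5 if a1 == b1 else float(a1 > b1)
    return min(max((a2 - b1) / width, 0.0), 1.0)
```

Identical intervals give degree 0.5, disjoint intervals give 0 or 1, and the formula satisfies the complement property P(a ≥ b) + P(b ≥ a) = 1, which is what makes it usable for ranking alternatives pairwise.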

14.
If voter preferences depend on a noisy state variable, under what conditions do large elections deliver outcomes “as if” the state were common knowledge? While the existing literature models elections using the jury metaphor where a change in information regarding the state induces all voters to switch in favor of only one alternative, we allow for more general preferences where a change in information can induce a switch in favor of either alternative. We show that information is aggregated for any voting rule if, for a randomly chosen voter, the probability of switching in favor of one alternative is strictly greater than the probability of switching away from that alternative for any given change in belief over states. If the preference distribution violates this condition, there exist equilibria that produce outcomes different from the full information outcome with high probability for large classes of voting rules. In other words, unless preferences closely conform to the jury metaphor, information aggregation is not guaranteed to obtain.

15.
Moment‐matching discrete distributions were developed by Miller and Rice (1983) as a method to translate continuous probability distributions into discrete distributions for use in decision and risk analysis. Using Gaussian quadrature, they showed that an n‐point discrete distribution can be constructed that exactly matches the first 2n − 1 moments of the underlying distribution. These moment‐matching discrete distributions offer several theoretical advantages over the typical discrete approximations as shown in Smith (1993), but they also pose practical problems. In particular, how does the analyst estimate the moments given only the subjective assessments of the continuous probability distribution? Smith suggests that the moments can be estimated by fitting a distribution to the assessments. This research note shows that the quality of the moment estimates cannot be judged solely by how close the fitted distribution is to the true distribution. Examples are used to show that the relative errors in higher order moment estimates can be greater than 100%, even though the cumulative distribution function is estimated within a Kolmogorov‐Smirnov distance less than 1%.
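The 2n − 1 moment-matching property is easy to verify numerically for the standard normal: the 3-point Gaussian-quadrature discretization places mass 2/3 at 0 and 1/6 at ±√3, matching the first 5 moments exactly (and failing at the 6th, as the theory predicts). The helper name `discrete_moments` is illustrative.

```python
import math

def discrete_moments(points, weights, order):
    """Raw moments E[X^k], k = 0..order, of a discrete distribution."""
    return [sum(w * x**k for x, w in zip(points, weights))
            for k in range(order + 1)]

# 3-point Gaussian-quadrature discretization of the standard normal.
pts = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
wts = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

normal_moments = [1, 0, 1, 0, 3, 0]   # E[Z^k] for k = 0..5
approx = discrete_moments(pts, wts, 5)
```

With n = 3 points, moments 0 through 2n − 1 = 5 match; the 6th raw moment of the discretization is 9 versus 15 for the normal, which is exactly where the guarantee runs out.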

16.
This paper focuses on qualitative multi-attribute group decision making (MAGDM) with linguistic information in terms of single linguistic terms and/or flexible linguistic expressions. To do so, we propose a new linguistic decision rule based on the concepts of random preference and stochastic dominance, by a probability based interpretation of weight information. The importance weights and the concept of fuzzy majority are incorporated into both the multi-attribute and collective decision rule by the so-called weighted ordered weighted averaging operator with the input parameters expressed as probability distributions over a linguistic term set. Moreover, a probability based method is proposed to measure the consensus degree between individual and collective overall random preferences based on the concept of stochastic dominance, which also takes both the importance weights and the fuzzy majority into account. As such, our proposed approaches are based on the ordinal semantics of linguistic terms and voting statistics. By this, on one hand, the strict constraint of the uniform linguistic term set in linguistic decision making can be released; on the other hand, the difference and variation of individual opinions can be captured. The proposed approaches can deal with qualitative MAGDM with single linguistic terms and flexible linguistic expressions. Two application examples taken from the literature are used to illustrate the proposed techniques by comparisons with existing studies. The results show that our proposed approaches are comparable with existing studies.

17.
We propose a tractable, data‐driven demand estimation procedure based on the use of maximum entropy (ME) distributions, and apply it to a stochastic capacity control problem motivated from airline revenue management. Specifically, we study the two fare class “Littlewood” problem in a setting where the firm has access to only potentially censored sales observations; this is also known as the repeated newsvendor problem. We propose a heuristic that iteratively fits an ME distribution to all observed sales data, and in each iteration selects a protection level based on the estimated distribution. When the underlying demand distribution is discrete, we show that the sequence of protection levels converges to the optimal one almost surely, and that the ME demand forecast converges to the true demand distribution for all values below the optimal protection level. That is, the proposed heuristic avoids the “spiral down” effect, making it attractive for joint forecasting and revenue optimization problems in the presence of censored observations.

18.
In this paper, composite forecasting is considered from a Bayesian perspective. A forecast user combines two or more forecasts of an operationally relevant random variable. We consider the case where outperformance is modeled as a realization from a multinomial process. The user has prior beliefs about the probability that a particular method outperforms all others, information which is summarized by the Dirichlet distribution. An empirical example with hog prices in the United States illustrates the method.
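The Dirichlet–multinomial machinery described here has a very compact sketch: with a Dirichlet prior over "which method outperforms" and observed outperformance counts, the posterior is Dirichlet with the counts added, and its mean gives updated combination weights. Function and variable names are illustrative.

```python
def posterior_combination_weights(prior_alphas, outperformance_counts):
    """Posterior-mean probabilities that each forecasting method
    outperforms the others, under a Dirichlet(prior_alphas) prior and
    multinomial outperformance counts (conjugate update: alphas + counts).
    The posterior means can serve as forecast-combination weights."""
    post = [a + c for a, c in zip(prior_alphas, outperformance_counts)]
    total = sum(post)
    return [p / total for p in post]

# Uniform prior over 3 methods; method 1 outperformed 6 times, method 2
# three times, method 3 never, over 9 past periods.
w = posterior_combination_weights([1.0, 1.0, 1.0], [6, 3, 0])
```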

19.
In this article, we analyze a location model where facilities may be subject to disruptions. Customers do not have advance information about whether a given facility is operational or not, and thus may have to visit several facilities before finding an operational one. The objective is to locate a set of facilities to minimize the total expected cost of customer travel. We decompose the total cost into travel, reliability, and information components. This decomposition allows us to put a value on the advance information about the states of facilities and compare it to the reliability and travel cost components, which allows a decision maker to evaluate which part of the system would benefit the most from improvements. The structure of optimal solutions is analyzed, with two interesting effects identified: facility centralization and co‐location; both effects appear to be stronger than in the complete information case, where the status of each facility is known in advance.
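The core expectation behind such models can be sketched for a single customer: facilities are tried in increasing order of distance, each is independently failed with probability q, and a penalty applies if all fail. This simplified sketch charges the distance of the facility that finally serves the customer (not cumulative inter-facility travel); the function name and numbers are illustrative.

```python
def expected_travel_cost(distances, q, penalty):
    """Expected cost for a customer who tries facilities in increasing
    order of distance, each independently failed with probability q;
    a penalty is incurred if every facility has failed.  The customer
    has no advance information and discovers each state on arrival."""
    cost = 0.0
    all_failed_so_far = 1.0
    for d in sorted(distances):
        cost += all_failed_so_far * (1.0 - q) * d
        all_failed_so_far *= q
    return cost + all_failed_so_far * penalty
```

With advance information the customer would go straight to the nearest operational facility; the difference between the two expectations is exactly the "information component" of the cost decomposition.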

20.
Consider a group of individuals with unobservable perspectives (subjective prior beliefs) about a sequence of states. In each period, each individual receives private information about the current state and forms an opinion (a posterior belief). She also chooses a target individual and observes the target's opinion. This choice involves a trade‐off between well‐informed targets, whose signals are precise, and well‐understood targets, whose perspectives are well known. Opinions are informative about the target's perspective, so observed individuals become better understood over time. We identify a simple condition under which long‐run behavior is history independent. When this fails, each individual restricts attention to a small set of experts and observes the most informed among these. A broad range of observational patterns can arise with positive probability, including opinion leadership and information segregation. In an application to areas of expertise, we show how these mechanisms generate own field bias and large field dominance.

