Similar Articles
1.
In expected utility theory, risk attitudes are modeled entirely in terms of utility. In the rank‐dependent theories, a new dimension is added: chance attitude, modeled in terms of nonadditive measures or nonlinear probability transformations that are independent of utility. Most empirical studies of chance attitude assume probabilities given and adopt parametric fitting for estimating the probability transformation. Only a few qualitative conditions have been proposed or tested as yet, usually quasi‐concavity or quasi‐convexity in the case of given probabilities. This paper presents a general method of studying qualitative properties of chance attitude such as optimism, pessimism, and the “inverse‐S shape” pattern, both for risk and for uncertainty. These qualitative properties can be characterized by permitting appropriate, relatively simple, violations of the sure‐thing principle. In particular, this paper solves a hitherto open problem: the preference axiomatization of convex (“pessimistic” or “uncertainty averse”) nonadditive measures under uncertainty. The axioms of this paper preserve the central feature of rank‐dependent theories, i.e. the separation of chance attitude and utility.

2.
Anscombe and Aumann (1963) wrote a classic characterization of subjective expected utility theory. This paper employs the same domain for preference and a closely related (but weaker) set of axioms to characterize preferences that use second‐order beliefs (beliefs over probability measures). Such preferences are of interest because they accommodate Ellsberg‐type behavior.

3.
Using option and underlying asset price data, we estimate the objective and risk‐neutral densities based on a discrete‐time EGARCH model and a continuous‐time GARCH diffusion model, respectively, and then derive the empirical pricing kernel. On this basis, within the rank‐dependent expected utility framework, we construct the corresponding probability weighting function under a standard utility function. An empirical study using price data on the Hong Kong Hang Seng Index and its index warrants shows that: (1) the empirical pricing kernel is not monotonically decreasing but exhibits a hump (non‐monotonicity), i.e., the "pricing kernel puzzle"; (2) the empirical probability weighting function is S‐shaped, indicating that market investors underweight tail‐probability events and overweight medium‐ and high‐probability events; (3) the pricing kernel puzzle can be explained by a rank‐dependent expected utility model with a standard utility function and an S‐shaped probability weighting function.
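The kernel construction described above can be illustrated with a toy example: the empirical pricing kernel is the ratio of the risk‐neutral density to the objective (physical) density. The lognormal stand‐in densities below are purely illustrative; the paper estimates them from option and index data via EGARCH and GARCH diffusion models.

```python
import numpy as np

x = np.linspace(0.5, 1.6, 400)          # gross return grid

def lognorm_pdf(x, mu, sigma):
    """Lognormal density, used here as a stand-in for the estimated densities."""
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))

p = lognorm_pdf(x, 0.05, 0.15)          # objective (physical) density (illustrative)
q = lognorm_pdf(x, 0.00, 0.20)          # risk-neutral density (illustrative)

kernel = q / p                          # empirical pricing kernel m(x) proportional to q(x)/p(x)

# With the risk-neutral density more dispersed than the physical one, the
# kernel is not monotonically decreasing on this grid -- a stylized version
# of the non-monotonicity ("pricing kernel puzzle") found in the study.
is_monotone_decreasing = bool(np.all(np.diff(kernel) < 0))
print("monotone decreasing:", is_monotone_decreasing)
```

Under expected utility with a concave utility function the kernel would be decreasing in the return; the hump/U shape is what the rank‐dependent model with an S‐shaped weighting function rationalizes.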

4.
The extant supply chain management literature has not addressed the issue of coordination in supply chains involving risk‐averse agents. We take up this issue and begin with defining a coordinating contract as one that results in a Pareto‐optimal solution acceptable to each agent. Our definition generalizes the standard one in the risk‐neutral case. We then develop coordinating contracts in three specific cases: (i) the supplier is risk neutral and the retailer maximizes his expected profit subject to a downside risk constraint; (ii) the supplier and the retailer each maximizes his own mean‐variance trade‐off; and (iii) the supplier and the retailer each maximizes his own expected utility. Moreover, in case (iii), we show that our contract yields the Nash Bargaining solution. In each case, we show how we can find the set of Pareto‐optimal solutions, and then design a contract to achieve the solutions. We also exhibit a case in which we obtain Pareto‐optimal sharing rules explicitly, and outline a procedure to obtain Pareto‐optimal solutions.

5.
6.
User‐generated content (UGC) in social media, such as online reviews, is inherently incomplete, since we do not capture the opinions of users who do not write a review. These silent users may be systematically different from those who speak up. Such differences can be driven by users' differing sentiments toward their shopping experiences as well as by their disposition to generate UGC. Overlooking silent users' opinions can result in a reporting bias. We develop a method to model users' UGC generating process and then rectify this bias through an inverse probability weighting (IPW) approach. In the context of users' movie review activities at Blockbuster.com, our results show that the average probability that a customer posts a review is 0.06 when the customer is unsatisfied with a movie, 0.23 when indifferent, and 0.32 when satisfied. The distribution of a user's reporting probability with a positive experience first‐order stochastically dominates that with a negative experience. Our approach provides a realistic solution for business managers to properly utilize incomplete UGC.
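The IPW correction can be sketched in a few lines. The reporting probabilities (0.06 / 0.23 / 0.32) are the ones stated in the abstract; the observed review counts are hypothetical numbers chosen only to make the arithmetic visible, not data from the study.

```python
# Reporting probabilities by sentiment, as estimated in the study.
report_prob = {"unsatisfied": 0.06, "indifferent": 0.23, "satisfied": 0.32}
star_rating = {"unsatisfied": 1.0, "indifferent": 3.0, "satisfied": 5.0}

# Hypothetical observed review counts (illustrative numbers only).
observed = {"unsatisfied": 60, "indifferent": 230, "satisfied": 640}

# Naive average rating uses only the observed (self-selected) reviews.
naive = sum(observed[s] * star_rating[s] for s in observed) / sum(observed.values())

# IPW: weight each observed review by 1 / p(report), so each reviewer also
# stands in for the silent users with the same sentiment.
weights = {s: observed[s] / report_prob[s] for s in observed}
ipw = sum(weights[s] * star_rating[s] for s in weights) / sum(weights.values())

print(round(naive, 2), round(ipw, 2))  # prints 4.25 3.5
```

Because unsatisfied customers are far less likely to post, the naive average overstates sentiment; reweighting pulls the estimate down toward the full population's opinion.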

7.
This paper develops a generalization of the widely used difference‐in‐differences method for evaluating the effects of policy changes. We propose a model that allows the control and treatment groups to have different average benefits from the treatment. The assumptions of the proposed model are invariant to the scaling of the outcome. We provide conditions under which the model is nonparametrically identified and propose an estimator that can be applied using either repeated cross section or panel data. Our approach provides an estimate of the entire counterfactual distribution of outcomes that would have been experienced by the treatment group in the absence of the treatment and likewise for the untreated group in the presence of the treatment. Thus, it enables the evaluation of policy interventions according to criteria such as a mean–variance trade‐off. We also propose methods for inference, showing that our estimator for the average treatment effect is root‐N consistent and asymptotically normal. We consider extensions to allow for covariates, discrete dependent variables, and multiple groups and time periods.
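The counterfactual‐distribution idea can be sketched on simulated data, in the spirit of a changes‐in‐changes construction: map each treated pre‐period outcome through the control group's period‐0 rank and period‐1 quantile function. This is a minimal simulation under strong parametric assumptions, not the paper's estimator or inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated repeated cross sections (illustrative data): common Gaussian
# shocks, a time trend of +0.5, and a true treatment effect of 2.0.
y_c0 = rng.normal(0.0, 1.0, 4000)        # control, before
y_c1 = rng.normal(0.5, 1.0, 4000)        # control, after
y_t0 = rng.normal(1.0, 1.0, 4000)        # treated, before
y_t1 = rng.normal(1.5, 1.0, 4000) + 2.0  # treated, after: trend + effect

def counterfactual(y_t0, y_c0, y_c1):
    """Map treated pre-period outcomes through the control group's
    period-0 ranks and period-1 quantiles."""
    ranks = np.searchsorted(np.sort(y_c0), y_t0) / len(y_c0)
    ranks = np.clip(ranks, 1e-6, 1 - 1e-6)
    return np.quantile(y_c1, ranks)

y_cf = counterfactual(y_t0, y_c0, y_c1)   # counterfactual treated outcomes, after
att = y_t1.mean() - y_cf.mean()           # average treatment effect on the treated
print(round(att, 2))                      # close to the true effect of 2.0
```

The output of `counterfactual` is the entire estimated counterfactual distribution, so criteria beyond the mean (e.g., a mean–variance trade‐off) can be evaluated on it directly.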

8.
Given growing concern over environmental issues, whether companies voluntarily incorporate green policies in practice or are forced to do so by new legislation, change is foreseen in the future of transportation management. Assigning and scheduling vehicles to service a pre‐determined set of clients is a common distribution problem. Accounting for time‐dependent travel times between customers, we present a model that considers travel time, fuel, and CO2 emissions costs. Specifically, we propose a framework for modeling CO2 emissions in a time‐dependent vehicle routing context. The model is solved via a tabu search procedure. As the amount of CO2 emissions is correlated with vehicle speed, our model considers limiting vehicle speed as part of the optimization. Emissions per kilometer as a function of speed are minimized at a unique speed. However, we show that in a time‐dependent environment this speed is sub‐optimal in terms of total emissions. This occurs if vehicles are able to avoid running into congestion periods where they incur high emissions. Clearly, considering this trade‐off in the vehicle routing problem has great practical potential. Along the same lines, we construct bounds on the total amount of emissions that can be saved by making use of standard VRP solutions. As fuel consumption is correlated with CO2 emissions, we show that reducing emissions leads to reducing costs. For a number of experimental settings, we show that limiting vehicle speeds is desirable from a total cost perspective. This stems mainly from the trade‐off between fuel and travel time costs.
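The speed trade‐off in the abstract can be illustrated numerically. The emissions curve below is a stylized convex function (an idle‐type term falling in speed plus an aerodynamic term rising in speed) with made‐up coefficients, not the paper's emissions model; it shows why the emissions‐minimizing speed differs from the total‐cost‐minimizing speed once travel time is priced.

```python
import numpy as np

speeds = np.linspace(20, 110, 400)            # km/h

def emissions_per_km(v):
    """Stylized convex emissions curve (illustrative coefficients only)."""
    return 900.0 / v + 0.012 * v ** 2         # grams CO2 per km

e = emissions_per_km(speeds)
v_min_emis = speeds[np.argmin(e)]             # unique emissions-minimizing speed

# Total cost per km adds a travel-time cost, which pushes the optimal speed
# above the emissions-minimizing one: the fuel/time trade-off in the abstract.
time_cost_per_hour = 25.0                     # driver cost (illustrative)
fuel_cost_per_gram = 0.002                    # fuel ~ emissions (illustrative)
total = fuel_cost_per_gram * e + time_cost_per_hour / speeds
v_min_cost = speeds[np.argmin(total)]

print(round(v_min_emis), round(v_min_cost))
```

In the time‐dependent setting of the paper the picture is richer still: driving faster than the per‐kilometer optimum can lower *total* emissions if it lets a vehicle clear a link before congestion begins.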

9.
We create an analytical structure that reveals the long‐run risk‐return relationship for nonlinear continuous‐time Markov environments. We do so by studying an eigenvalue problem associated with a positive eigenfunction for a conveniently chosen family of valuation operators. The members of this family are indexed by the elapsed time between payoff and valuation dates, and they are necessarily related via a mathematical structure called a semigroup. We represent the semigroup using a positive process with three components: an exponential term constructed from the eigenvalue, a martingale, and a transient eigenfunction term. The eigenvalue encodes the risk adjustment, the martingale alters the probability measure to capture long‐run approximation, and the eigenfunction gives the long‐run dependence on the Markov state. We discuss sufficient conditions for the existence and uniqueness of the relevant eigenvalue and eigenfunction. By showing how changes in the stochastic growth components of cash flows induce changes in the corresponding eigenvalues and eigenfunctions, we reveal a long‐run risk‐return trade‐off.
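A discrete, finite‐state sketch of this eigenvalue problem can make the decomposition concrete. The abstract's setting is continuous‐time and nonlinear; the two‐state matrices below are illustrative stand‐ins (a Markov transition matrix and a positive one‐period discount factor), chosen only to show that the Perron eigenvalue governs long‐horizon values and the positive eigenfunction carries the state dependence.

```python
import numpy as np

# Valuation operator on a 2-state chain: (V f)(x) = sum_x' P[x,x'] S[x,x'] f(x'),
# with P a transition matrix and S a positive stochastic discount factor.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
S = np.array([[0.97, 0.95],
              [0.96, 0.94]])
V = P * S                                   # elementwise product: matrix of the operator

eigvals, eigvecs = np.linalg.eig(V)
i = np.argmax(eigvals.real)
rho = eigvals.real[i]                       # Perron root: long-run discount factor
phi = np.abs(eigvecs[:, i].real)            # positive eigenfunction (state dependence)

# Long-run approximation: V^n f ~ rho**n * phi * const for large n, so the
# ratio below is (nearly) the same constant in every state.
f = np.ones(2)
n = 200
long_run = np.linalg.matrix_power(V, n) @ f
ratio = long_run / (rho ** n * phi)
print(rho, ratio)
```

The eigenvalue `rho` plays the role of the exponential term in the abstract's three‐way decomposition; the transient contribution of the second eigenvalue dies off geometrically, which is why the ratio is state‐independent at long horizons.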

10.
Risk Analysis, 2018, 38(4): 694-709
Subsurface energy activities entail the risk of induced seismicity including low‐probability high‐consequence (LPHC) events. For designing respective risk communication, the scientific literature lacks empirical evidence of how the public reacts to different written risk communication formats about such LPHC events and to related uncertainty or expert confidence. This study presents findings from an online experiment (N = 590) that empirically tested the public's responses to risk communication about induced seismicity and to different technology frames, namely deep geothermal energy (DGE) and shale gas (between‐subject design). Three incrementally different formats of written risk communication were tested: (i) qualitative, (ii) qualitative and quantitative, and (iii) qualitative and quantitative with risk comparison. Respondents found the latter two the easiest to understand, the most exact, and liked them the most. Adding uncertainty and expert confidence statements made the risk communication less clear and harder to understand, and increased concern. Above all, the technology for which risks are communicated and its acceptance mattered strongly: respondents in the shale gas condition found the identical risk communication less trustworthy and more concerning than in the DGE conditions. They also liked the risk communication less overall. For practitioners in DGE or shale gas projects, the study shows that the public would appreciate efforts in describing LPHC risks with numbers and optionally risk comparisons. However, there seems to be a trade‐off between aiming for transparency by disclosing uncertainty and limited expert confidence, and thereby decreasing clarity and increasing concern in the view of the public.

11.
Two commonly used elicitation modes on strength of preference, equivalence and ratio judgments, were compared in an experiment. The results showed that ratio judgments were less effective than equivalence judgments. Based on an iterative design for eliciting multiattribute preference structures, equivalence judgments outperformed ratio judgments in estimating single‐attribute measurable value functions, and were marginally more effective than ratio judgments in assessing multiattribute preference structures. The implications of the experimental results are that multiattribute decision‐making techniques should take advantage of the decision maker's inclination to make effective equivalence trade‐off judgments, and that useful techniques should be devised to incorporate different commonly used techniques, such as multiattribute utility theory and the Analytic Hierarchy Process, to elicit and consolidate equivalence trade‐off judgments.

12.
To entice customers to purchase both current and new generation products over time, many firms offer different trade‐in programs, including programs that require customers to pay an up‐front fee. To examine the effectiveness of trade‐in programs, we develop a two‐period model in which a firm sells the first generation product in the first period and the second generation product in the second period; in addition, the firm offers a trade‐in program that customers can participate in when purchasing the first generation product in the first period. To participate, each customer has to pay a nonrefundable fee in the first period so that she has the option to trade in her first generation product and receive a prespecified trade‐in value to be used for the purchase of the second generation product in the second period. To capture market heterogeneity and market uncertainty, we examine the case in which the valuation of the first generation product varies among customers and the valuation of the second generation product is uncertain a priori. By analyzing a two‐period game, we determine the optimal purchasing behavior of each rational customer, and we show that the firm is always better off by offering its own trade‐in programs. Also, our numerical analysis reveals that trade‐in programs can benefit the firm significantly, especially when (i) the residual value of the first generation product is high; (ii) the expected incremental value of the second generation product is high; or (iii) the valuation of the second generation product is highly uncertain.

13.
In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time‐dependent critical risk level within a finite‐time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented.
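The quantities in this abstract — the failure probability, the time of failure, and the excess over the risk level — can be estimated by simulation for any concrete risk process. The sketch below uses a Gaussian random walk with a constant critical level (both choices illustrative; the paper derives explicit expressions for its own model classes rather than simulating).

```python
import numpy as np

rng = np.random.default_rng(42)

n_paths, horizon = 100_000, 50
level = 8.0                                   # constant critical risk level
# Risk process: cumulative sums of drifting Gaussian shocks (illustrative).
paths = rng.normal(0.1, 1.0, (n_paths, horizon)).cumsum(axis=1)

# Failure = the process reaches the level within the finite horizon.
crossed = (paths >= level).any(axis=1)
failure_prob = crossed.mean()

# Joint quantities: time of first crossing and excess over the level there.
first_hit = np.argmax(paths >= level, axis=1)
excess = paths[np.arange(n_paths), first_hit] - level
mean_excess = excess[crossed].mean()          # average overshoot, given failure
mean_time = first_hit[crossed].mean()         # average failure time, given failure

print(round(failure_prob, 3), round(mean_time, 1), round(mean_excess, 3))
```

The same skeleton applies to the application areas listed — e.g., the "level" is a dam's flood threshold or an insurer's capital, and the "process" is inflow or cumulative claims.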

14.
This study analyzes the trade‐off between funding strategies and operational performance in humanitarian operations. If a Humanitarian Organization (HO) offers donors the option of earmarking their donations, it should expect an increase in total donations. However, earmarking creates constraints in resource allocation that negatively affect the HO's operational performance. We study this trade‐off from the perspective of a single HO that maximizes its expected utility as a function of total donations and operational performance. The HO implements disaster response and development programs, and it operates in a multi‐donor market with donation uncertainty. Using a model inspired by Scarf's minimax approach and the newsvendor framework, we analyze the strategic interaction between the HO and its donors. The numerical section is based on real data from 15 disasters during the period 2012–2013. We find that poor operational performance has a larger effect on the HO's utility function when donors are more uncertain about the HO's expected needs for disaster response. Interestingly, increasing public awareness of development programs helps the HO obtain more non‐earmarked donations for disaster response. Increasing non‐earmarked donations improves the HO's operational efficiency, which mitigates the impact of donation uncertainty on its utility function.
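The Scarf minimax idea the abstract invokes has a well-known closed form in the classic single‐item newsvendor setting: with only the mean and standard deviation of demand known, the distribution‐free order quantity is mu + (sigma/2)(sqrt(r) - 1/sqrt(r)) with r = (p - c)/c. The sketch below illustrates that classic rule with made‐up numbers; it is not the paper's humanitarian model, but it shows how greater demand uncertainty (sigma) moves the robust stocking decision.

```python
import math

def scarf_order_quantity(mu, sigma, cost, price):
    """Scarf's distribution-free (minimax) newsvendor quantity: only the
    mean and standard deviation of demand are assumed known."""
    r = (price - cost) / cost                 # profit-to-cost ratio
    return mu + (sigma / 2.0) * (math.sqrt(r) - 1.0 / math.sqrt(r))

# Illustrative numbers: expected needs mu, uncertainty sigma, unit economics.
q_low_uncertainty = scarf_order_quantity(mu=100, sigma=10, cost=1.0, price=3.0)
q_high_uncertainty = scarf_order_quantity(mu=100, sigma=40, cost=1.0, price=3.0)
print(round(q_low_uncertainty, 1), round(q_high_uncertainty, 1))
```

When the margin exceeds the cost (r > 1) the robust quantity sits above the mean and grows with sigma, mirroring the abstract's finding that donor uncertainty about expected needs amplifies the cost of poor operational performance.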

15.
This paper suggests a behavioral definition of (subjective) ambiguity in an abstract setting where objects of choice are Savage‐style acts. Then axioms are described that deliver probabilistic sophistication of preference on the set of unambiguous acts. In particular, both the domain and the values of the decision‐maker's probability measure are derived from preference. It is argued that the noted result also provides a decision‐theoretic foundation for the Knightian distinction between risk and ambiguity.

16.
We consider a make‐to‐order manufacturer that serves two customer classes: core customers who pay a fixed negotiated price, and “fill‐in” customers who make submittal decisions based on the current price set by the firm. Using a Markovian queueing model, we determine how much the firm can gain by explicitly accounting for the status of its production facility in making pricing decisions. Specifically, we examine three pricing policies: (1) static, state‐independent pricing, (2) constant pricing up to a cutoff state, and (3) general state‐dependent pricing. We determine properties of each policy, and illustrate numerically the financial gains that the firm can achieve by following each policy as compared with simpler policies. Our main result is that constant pricing up to a cutoff state can dramatically outperform a state‐independent policy, while at the same time achieving most of the increase in revenue achievable from general state‐dependent pricing. Thus, we find that constant pricing up to a cutoff state presents an attractive trade‐off between ease of implementation and revenue gain. When the costs of policy design and implementation are taken into account, this simple heuristic may actually outperform general state‐dependent pricing in some settings.
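The "constant price up to a cutoff" policy can be evaluated on a simple birth–death chain. The model below is a simplified stand‐in for the paper's queueing analysis: fill‐in orders arrive at a linear price‐dependent rate while the backlog is below the cutoff and are turned away at or above it, core orders arrive regardless, and revenue is computed from the chain's stationary distribution. All parameter values are illustrative.

```python
import numpy as np

a, b, lam0, mu, N = 10.0, 2.0, 3.0, 9.0, 60   # demand curve, core rate, service rate, truncation

def revenue_rate(price, cutoff):
    """Fill-in revenue per unit time under a (price, cutoff) policy."""
    lam_fill = max(a - b * price, 0.0)        # price-dependent fill-in arrival rate
    birth = np.array([lam0 + (lam_fill if n < cutoff else 0.0) for n in range(N)])
    # Stationary probabilities of the birth-death chain (detailed balance).
    pi = np.ones(N + 1)
    for n in range(N):
        pi[n + 1] = pi[n] * birth[n] / mu
    pi /= pi.sum()
    # Fill-in orders are accepted only in states below the cutoff.
    return price * lam_fill * pi[:cutoff].sum()

# Grid search over the two policy parameters.
best = max(((p, k) for p in np.linspace(0.5, 4.5, 41) for k in range(1, 20)),
           key=lambda pk: revenue_rate(*pk))
print(best, round(revenue_rate(*best), 3))
```

A state‐independent policy corresponds to a very large cutoff; comparing `revenue_rate` across cutoffs shows the gain from refusing fill‐in work when the backlog is long, which is the paper's central trade‐off.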

17.
Willingness To Pay (WTP) of customers plays an anchoring role in pricing. This study proposes a new choice model based on WTP, incorporating sequential decision making, in which the products with positive utility of purchase are considered in the order of customer preference. We compare the WTP‐choice model with the commonly used (multinomial) Logit model with respect to the underlying choice process, information requirements, and independence of irrelevant alternatives. Using the WTP‐choice model, we find and compare equilibrium and centrally optimal prices and profits without considering inventory availability. In addition, we compare equilibrium prices and profits in two contexts: without considering inventory availability and under lost sales. One of the interesting results with the WTP‐choice model is the “loose coupling” of retailers in competition; prices are not coupled but profits are. That is, each retailer should charge the monopoly price, as the collection of these prices constitutes an equilibrium, but each retailer's profit depends on other retailers' prices. Loose coupling fails when WTPs are dependent or when preferences depend on prices. Also, we show that competition among retailers facing dependent WTPs can cause price cycles under some conditions. We consider real‐life data on sales of yogurt, ketchup, candy melt, and tuna, and test whether a version of the WTP‐choice model (with a uniform, triangular, or shifted exponential WTP distribution) or a standard or mixed Logit model fits and predicts the sales better. These empirical tests establish that the WTP‐choice model compares well and should be considered a legitimate alternative to Logit models for studying pricing of products with low prices and high purchase frequency.
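The choice process described above — buy the most‐preferred product whose price does not exceed your WTP — is easy to simulate. The two‐retailer setup below uses independent uniform WTPs and a price‐independent preference order (both assumptions illustrative, matching the conditions under which the abstract says loose coupling holds).

```python
import numpy as np

rng = np.random.default_rng(7)

n = 100_000
wtp = rng.uniform(0.0, 10.0, (n, 2))        # independent WTPs for products A and B
prefers_a = rng.random(n) < 0.5             # preference order, independent of price

def demand(p_a, p_b):
    """Share of customers buying A and B under the WTP-choice process."""
    can_a = wtp[:, 0] >= p_a                # positive surplus from A
    can_b = wtp[:, 1] >= p_b
    buy_a = can_a & (prefers_a | ~can_b)    # A if preferred, or if B is unaffordable
    buy_b = can_b & ~buy_a                  # otherwise B, if affordable
    return buy_a.mean(), buy_b.mean()

d_a, d_b = demand(4.0, 4.0)
# Raising B's price sends some of B's customers to A: each retailer's profit
# depends on the rival's price even though optimal prices decouple.
d_a2, d_b2 = demand(4.0, 6.0)
print(round(d_a, 3), round(d_b, 3), round(d_a2, 3), round(d_b2, 3))
```

With uniform WTP on [0, 10] and symmetric prices of 4, each product's demand is 0.5·0.6 + 0.5·0.6·0.4 = 0.42, and A's demand rises when B's price does — profits are coupled while, per the abstract, equilibrium prices are not.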

18.
This article proposes an intertemporal risk‐value (IRV) model that integrates probability‐time tradeoff, time‐value tradeoff, and risk‐value tradeoff into one unified framework. We obtain a general probability‐time tradeoff, which yields a formal representation form to reflect the psychological distance of a decision maker in evaluating a temporal lottery. This intuition of probability‐time tradeoff is supported by robust empirical findings as well as by psychological theory. Through an explicit formalization of probability‐time tradeoff, an IRV model taking into account three fundamental dimensions, namely, value, probability, and time, is established. The object of evaluation in our framework is a complex lottery. We also give some insights into the structure of the IRV model using a wildcatter problem.

19.
Cross‐training workers is one of the most efficient ways of achieving flexibility in manufacturing and service systems for increasing responsiveness to demand variability. However, it is generally the case that cross‐trained employees are not as productive on a specific task as employees who were originally trained for that task. Also, the productivity of the cross‐trained workers depends on when they are cross‐trained. In this work, we consider a two‐stage model to analyze the effects of variations in productivity levels on cross‐training policies. We define a new metric called achievable capacity and show that it plays a key role in determining the structure of the problem. If cross‐training can be done in a consistent manner, the achievable capacity is not affected by when the training is done, which implies that the cross‐training decisions are independent of the opportunity cost of lost demand and are based on a trade‐off between cross‐training costs at different times. When the productivities of workers trained at different times differ, there is a three‐way trade‐off between cross‐training costs at different times and the opportunity cost of lost demand due to lost achievable capacity. We analyze the effects of variability and show that if the productivity levels of workers trained at different times are consistent, the decision maker is inclined to defer the cross‐training decisions as the variability of demand or productivity levels increases. However, when the productivities of workers trained at different times differ, an increase in the variability may make it preferable to invest more in cross‐training earlier.

20.
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic‐possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility‐probability (probability‐possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context.
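One simple way to turn a possibilistic description into probabilistic samples is alpha‐cut sampling: draw an alpha level uniformly, then draw uniformly within the alpha‐cut of the possibility distribution. The sketch below applies this to a triangular possibility distribution for a basic‐event probability and propagates it through a two‐event OR gate; it is a minimal stand‐in for (not a reproduction of) the article's hybrid framework, with illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Triangular possibility distribution for a basic-event probability:
# core (most plausible value) m, support [a, b]. Illustrative numbers.
a, m, b = 1e-4, 5e-4, 2e-3

def sample_from_possibility(size):
    """Alpha-cut sampling: alpha ~ U(0,1), then uniform on the alpha-cut."""
    alpha = rng.random(size)
    lo = a + alpha * (m - a)                # alpha-cut lower endpoint
    hi = b - alpha * (b - m)                # alpha-cut upper endpoint
    return rng.uniform(lo, hi)

p1 = sample_from_possibility(100_000)       # epistemic samples, basic event 1
p2 = sample_from_possibility(100_000)       # basic event 2 (same shape, for brevity)

# Top event of an OR gate with independent basic events:
# p_top = 1 - (1 - p1)(1 - p2); the sample cloud represents the epistemic
# uncertainty about the top-event probability.
p_top = 1.0 - (1.0 - p1) * (1.0 - p2)
print(p_top.mean(), np.quantile(p_top, 0.95))
```

Narrow alpha‐cuts near the core get sampled more often, so the resulting probability density concentrates around the most plausible value while still respecting the possibilistic support — one of several defensible possibility‐to‐probability transformations.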
