Similar Articles
 Found 20 similar articles (search time: 218 ms)
1.
Sometimes we believe that others receive harmful information. However, Marschak’s value-of-information framework always assigns non-negative value under expected utility: it starts from the decision maker’s own beliefs – and one can never anticipate information’s harmfulness for oneself. The impact of decision makers’ information-processing capabilities and of their expectations remains hidden behind the individual, subjective perspective Marschak’s framework assumes. By introducing a second decision maker as a point of reference, this paper develops a way of evaluating others’ information from a cross-individual, imperfect-expectations perspective for agents maximising expected utility. We define the cross-value of information, which can become negative – in which case the information is “harmful” from a cross-individual perspective – and we define the (mutual) cost of limited information-processing capabilities and imperfect expectations as an opportunity cost from this same point of reference. The simple relationship between these two expected-utility-based concepts and Marschak’s framework is shown, and we discuss evaluating short-term reactions of stock market prices to new information as an important domain for valuing others’ information.
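The non-negativity the abstract attributes to Marschak’s framework can be illustrated with a small numerical sketch (the states, actions, and payoffs below are invented for illustration): the expected utility of choosing after a perfect signal is an average of state-wise maxima, which can never fall below the maximum of averages achieved without the signal.

```python
def value_of_perfect_information(prior, payoff):
    """prior: {state: prob}; payoff: {(action, state): utility}."""
    actions = {a for (a, _s) in payoff}
    # Best expected utility when one action must be chosen before the signal.
    eu_uninformed = max(
        sum(prior[s] * payoff[(a, s)] for s in prior) for a in actions
    )
    # With a perfect signal, the best action can be chosen in each state.
    eu_informed = sum(
        prior[s] * max(payoff[(a, s)] for a in actions) for s in prior
    )
    # Average of maxima >= maximum of averages, so this is never negative.
    return eu_informed - eu_uninformed

prior = {"boom": 0.6, "bust": 0.4}
payoff = {("invest", "boom"): 10, ("invest", "bust"): -5,
          ("hold", "boom"): 2, ("hold", "bust"): 2}
print(value_of_perfect_information(prior, payoff))
```

With these illustrative numbers the uninformed optimum is “invest” (expected utility 4), while the informed strategy earns 6.8, so the value of the signal is positive; the paper’s point is that this difference cannot be negative for the decision maker’s own beliefs.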

2.
Two-sided intergenerational moral hazard arises (i) when the parent’s decision to purchase long-term care (LTC) coverage undermines the child’s incentive to exert effort, because the insurance protects the bequest from the cost of nursing home care, and (ii) when the parent purchases less LTC coverage, relying on the child’s effort to keep him out of the nursing home. However, a “net” moral hazard effect obtains only if the two players’ responses to exogenous shocks fail to neutralize each other, entailing a negative relationship between the child’s effort and parental LTC coverage. We focus on out-of-equilibrium outcomes, interpreting them as a break in the relationship that results in no informal care being provided and hence a high probability of nursing home admission. Changes in the parent’s initial wealth, the LTC subsidy received, and the child’s expected inheritance are shown to induce “net” moral hazard, in contradistinction to changes in the child’s opportunity cost and share in the bequest.

3.
We study ultimatum and dictator variants of the generosity game. In this game, the first mover chooses the amount of money to be distributed between the players within a given interval, knowing that her own share is fixed. Thus, the first mover is not confronted with the typical trade-off between her own and the other’s payoff. For each variant of the game, we study three treatments that vary the range of potential pie sizes so as to assess the influence of these changes on the first movers’ generosity. We find that removing the trade-off inspires significant generosity, which is not always affected by the second mover’s veto power. Moreover, the manipulation of the choice set indicates that choices are influenced by the available alternatives.

4.
On Decomposing Net Final Values: EVA, SVA and Shadow Project
A decomposition model of Net Final Values (NFV), named Systemic Value Added (SVA), is proposed for decision-making purposes, based on a systemic approach introduced in Magni [Magni, C. A. (2003), Bulletin of Economic Research 55(2), 149–176; Magni, C. A. (2004), Economic Modelling 21, 595–617]. The model translates the notion of excess profit, giving formal expression to a counterfactual alternative available to the decision maker. Relations with other decomposition models are studied, among them Stewart’s [Stewart, G. B. (1991), The Quest for Value: The EVA™ Management Guide, HarperCollins Publishers]. The index introduced here differs from Stewart’s Economic Value Added (EVA) in that it rests on a different interpretation of the notion of excess profit, and it is formally connected with the EVA model by means of a shadow project. The SVA is formally and conceptually threefold, admitting economic, financial, and accounting interpretations. Necessary and sufficient conditions for decomposing NFV are provided. Relations between a project’s SVA and its shadow project’s EVA are shown; all results of Pressacco and Stucchi [Pressacco, F. and Stucchi, P. (1997), Rivista di Matematica per le Scienze Economiche e Sociali 20, 165–185] are proved using the systemic approach, and the shadow counterparts of those results are also given.

5.
An extensive literature overlapping economics, statistical decision theory and finance contrasts expected utility (EU) with the more recent mean–variance (MV) framework. A basic proposition is that MV follows from EU under the assumption of quadratic utility. A less recognized proposition, first raised by Markowitz, is that MV is fully justified under EU if and only if utility is quadratic. The existing proof of this proposition relies on an assumption from EU, described here as “Buridan’s axiom” after the French philosopher’s fable of the ass that starved out of indifference between two bales of hay. To satisfy this axiom, MV must represent not only “pure” strategies but also their probability mixtures as points in the (σ, μ) plane. Markowitz and others have argued that probability mixtures are represented sufficiently by (σ, μ) only under quadratic utility, and hence that MV, interpreted as a mathematical re-expression of EU, implies quadratic utility. We prove a stronger form of this theorem, one that neither involves nor contradicts Buridan’s axiom or any more fundamental axiom of utility theory.
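The “basic proposition” mentioned above – that MV follows from EU under quadratic utility – rests on a standard one-line calculation (a textbook derivation, not taken from the paper):

```latex
% Quadratic utility (b > 0, wealth restricted to the range where u is increasing):
u(w) = w - \tfrac{b}{2}\, w^{2}
% Expected utility of a random wealth W with mean \mu and variance \sigma^{2}:
\operatorname{E}[u(W)] = \operatorname{E}[W] - \tfrac{b}{2}\operatorname{E}[W^{2}]
                       = \mu - \tfrac{b}{2}\left(\mu^{2} + \sigma^{2}\right)
% so preferences over gambles depend on the distribution of W
% only through the pair (\sigma, \mu).
```

The paper’s contribution concerns the converse direction: whether representing probability mixtures in the (σ, μ) plane forces utility to be quadratic.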

6.
We start from the Alternate Strike (AS) scheme, a real-life arbitration scheme in which two parties select an arbitrator by alternately crossing off, at each round, one name from a given panel of arbitrators. We find that the AS scheme is not invariant to “bad” alternatives. We then consider another alternating-move scheme, Voting by Alternating Offers and Vetoes (VAOV), which is invariant to bad alternatives. We fully characterize the subgame perfect equilibrium outcome sets of these two schemes in terms of the parties’ rankings over the alternatives alone, and we identify some of their typical equilibria. We then analyze two additional alternating-move schemes in which players’ current proposals must either honor or enhance their previous proposals. We show that the first scheme’s equilibrium outcome set coincides with that of the AS scheme, and the second’s with that of the VAOV scheme. Finally, all schemes’ equilibrium outcome sets converge to the Equal Area solution’s outcome of the cooperative bargaining problem if the alternatives are distributed uniformly over the comprehensive utility possibility set and the number of alternatives tends to infinity. Journal of Economic Literature Classification Number: C72.

7.
This paper uses duality to derive Slutzky equations for risks in quasi-linear decision models extended by independent background risks. Wealth, substitution, and total effects are characterized in terms of mean–variance preferences. Both Pratt and Zeckhauser’s proper risk aversion and Kimball’s standard risk aversion are shown to be sufficient for negative substitution effects.

8.
9.
Axelrod’s [(1970), Conflict of Interest, Markham Publishers, Chicago] index of conflict in 2 × 2 games with two pure-strategy equilibria has the property that a reduction in the cost of holding out corresponds to an increase in conflict. This article takes the opposite view, arguing that if losing becomes less costly, a player is less likely to gamble to win, so conflict will be less frequent. This approach leads to a new power index and a new measure of stubbornness, both anchored in strategic reasoning. The win probability defined as power constitutes an equilibrium refinement that differs from Harsanyi and Selten’s [(1988), A General Theory of Equilibrium Selection in Games, MIT Press, Cambridge]. In contrast, Axelrod’s approach focuses on preferences over divergences from imaginary outmost rewards that cannot be obtained jointly. The player who is less powerful in an asymmetric one-shot game becomes more powerful in the repeated game, provided he or she values the future sufficiently more than the opponent does. This contrasts with the view that repetition induces cooperation, but conforms with the expectation that a more patient player receives a larger share of the pie.

10.
In cooperative Cournot oligopoly games, it is known that the β-core equals the α-core, and both are non-empty if every individual profit function is continuous and concave (Zhao, Games Econ Behav 27:153–168, 1999b). Following Chander and Tulkens (Int J Game Theory 26:379–401, 1997), we assume that firms react to a deviating coalition by choosing individual best-reply strategies. We address the non-emptiness of the induced core, the γ-core, by two different approaches. The first establishes that the associated Cournot oligopoly Transferable Utility (TU) games are balanced if the inverse demand function is differentiable and every individual profit function is continuous and concave on the set of strategy profiles, a step beyond Zhao’s core existence result for this class of games. The second approach, restricted to the class of Cournot oligopoly TU-games with linear cost functions, provides a single-valued allocation rule in the γ-core called the Nash Pro rata (NP) value. This generalizes Funaki and Yamato’s (Int J Game Theory 28:157–171, 1999) core existence result from no capacity constraint to asymmetric capacity constraints. Moreover, we provide an axiomatic characterization of this solution by means of four properties: efficiency, null firm, monotonicity, and non-cooperative fairness.

11.
In ‘Semantic Foundations for the Logic of Preference’ (Rescher, ed., The Logic of Decision and Action, University Press, Pittsburgh, 1967), Nicholas Rescher claims that, on the semantics developed in that paper, a certain principle – call it ‘Q’ – turns out to be ‘unacceptable’. I argue, however, that, given certain assumptions Rescher invokes in that same paper, Q can in fact be shown to be a ‘preference-tautology’, and hence that Q should be classified as ‘acceptable’ on Rescher’s theory.

12.
Ellsberg (The Quarterly Journal of Economics 75, 643–669 (1961); Risk, Ambiguity and Decision, Garland Publishing (2001)) argued that uncertainty is not reducible to risk. At the center of Ellsberg’s argument lies a thought experiment that has come to be known as the three-color example. It has been observed that a significant number of sophisticated decision makers violate the requirements of subjective expected utility theory when confronted with Ellsberg’s three-color example. More generally, such decision makers are in conflict with either the ordering assumption or the independence assumption of subjective expected utility theory. While a clear majority of the theoretical responses to these violations have advocated maintaining ordering while relaxing independence, a persistent minority has advocated abandoning the ordering assumption. The purpose of this paper is to consider a similar dilemma that exists within the context of multiattribute models, where it arises from indeterminacy in the weighting of attributes rather than indeterminacy in the determination of probabilities, as in Ellsberg’s example.

13.
This article analyzes investors’ portfolio selection problems in a two-period dynamic model of Knightian uncertainty. We account for the existence of portfolio inertia in this two-period framework. Furthermore, by incorporating investors’ updating behavior, we analyze how observing new information in the first period affects investors’ behavior in the second. We show that, provided the degree of Knightian uncertainty is sufficiently large, observing new information in the first period expands portfolio inertia in the second period compared with the case in which no new information is observed.

14.
We propose a generalization of expected utility that we call generalized EU (GEU), where a decision maker’s beliefs are represented by plausibility measures and the decision maker’s tastes are represented by general (i.e., not necessarily real-valued) utility functions. We show that every agent, “rational” or not, can be modeled as a GEU maximizer. We then show that we can customize GEU by selectively imposing just the constraints we want. In particular, we show how each of Savage’s postulates corresponds to constraints on GEU.

15.
There are narrowest bounds for P(h) when P(e) = y and P(h/e) = x, bounds that collapse to x as y goes to 1. A theorem for these bounds – Bounds for Probable Modus Ponens – entails a principle for updating on possibly uncertain evidence subject to these bounds, a generalization of the principle for updating by conditioning on certain evidence. This way of updating on possibly uncertain evidence is appropriate whenever updating by ‘probability kinematics’ or ‘Jeffrey-conditioning’ is, and apparently in countless other cases as well. A more complicated theorem due to Karl Wagner – Bounds for Probable Modus Tollens – registers narrowest bounds for P(∼h) when P(∼e) = y and P(e/h) = x. This theorem serves another principle for updating on possibly uncertain evidence that might be termed ‘contraditioning’, though this way of updating seems in practice to be frequently inappropriate. It is definitely not a way of putting down a theory – for example, a random-chance theory of the apparent fine-tuning for life of the parameters of standard physics – merely on the ground that the theory made extremely unlikely conditions of which we are now nearly certain. These theorems for bounds and updating are addressed to standard conditional probabilities defined as ratios of probabilities. Adaptations for Hosiasson-Lindenbaum ‘free-standing’ conditional probabilities are provided. The extended on-line version of this article (URL: ) includes appendices and expansions of several notes. Appendix A contains demonstrations and confirmations of elements of those adaptations. Appendix B discusses and elaborates analogues of modus ponens and modus tollens for probabilities and conditional probabilities found in Elliott Sober’s “Intelligent Design and Probability Reasoning.” Appendix C adds to observations regarding relations of probability kinematics and updating subject to Bounds for Probable Modus Ponens.
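The bounds described above follow from the law of total probability, P(h) = P(h|e)P(e) + P(h|∼e)P(∼e), with the unreported term P(h|∼e) free to range over [0, 1]. A minimal sketch (illustrative; the function name and numbers are not from the paper):

```python
def bounds_probable_modus_ponens(x, y):
    """Narrowest bounds for P(h) given P(h|e) = x and P(e) = y.

    Lower bound: P(h|~e) = 0; upper bound: P(h|~e) = 1.
    """
    lower = x * y
    upper = x * y + (1 - y)
    return lower, upper

# As y -> 1 the interval [xy, xy + (1 - y)] collapses to x:
for y in (0.5, 0.9, 0.99, 1.0):
    lo, hi = bounds_probable_modus_ponens(0.8, y)
    print(f"y = {y}: [{lo:.4f}, {hi:.4f}]")
```

At y = 1 the two bounds coincide at x, recovering ordinary conditioning on certain evidence, exactly as the abstract states.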

16.
In standard belief models, priors are always common knowledge. This prevents such models from representing agents’ probabilistic beliefs about the origins of their priors. By embedding standard models in a larger standard model, however, pre-priors can describe such beliefs. When an agent’s prior and pre-prior are mutually consistent, he must believe that his prior would only have been different in situations where relevant event chances were different, but that variations in other agents’ priors are otherwise completely unrelated to which events are how likely. Due to this, Bayesians who agree enough about the origins of their priors must have the same priors.

17.
In the context of indivisible public objects problems (e.g., candidate selection or qualification) with “separable” preferences, unanimity rule accepts each object if and only if the object is in everyone’s top set. We establish two axiomatizations of unanimity rule. The main axiom is resource monotonicity, saying that resource increase should affect all agents in the same direction. This axiom is considered in combination with simple Pareto (there is no Pareto improvement by addition or subtraction of a single object), independence of irrelevant alternatives, and either path independence or strategy-proofness.
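The unanimity rule described above has a direct computational reading; a minimal sketch (the objects and top sets below are invented for illustration, not the paper’s formal model):

```python
def unanimity_rule(objects, top_sets):
    """Accept an object iff it lies in every agent's top set.

    objects: set of candidate objects; top_sets: list of sets, one per agent.
    """
    return {o for o in objects if all(o in top for top in top_sets)}

objects = {"a", "b", "c"}
top_sets = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}]
print(unanimity_rule(objects, top_sets))  # only "a" is in everyone's top set
```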

18.
The main goal of the experimental study described in this paper is to investigate the sensitivity of probability weighting to the payoff structure of the gambling situation – namely, the level of the consequences at stake and the spacing between them – in the loss domain. For that purpose, three kinds of gambles are introduced: two kinds of homogeneous gambles (involving either small or large losses), and heterogeneous gambles involving both large and small losses. The findings suggest that, at least for moderate to high probabilities of loss, both ‘level’ and ‘spacing’ effects reach significance, with the impact of ‘spacing’ being both opposite to and stronger than the impact of ‘level’. Compared with small-loss gambles, large-loss gambles appear to enhance probabilistic optimism, while heterogeneous gambles tend to increase pessimism.

19.
The problem of asymmetric information causes a winner’s curse in many environments. Given many unsuccessful attempts to eliminate it, we hypothesize that some people ‘prefer’ the lotteries underlying the winner’s curse. Study 1 shows that, after removing the hypothesized cause of error – asymmetric information – half the subjects still prefer winner’s curse lotteries, implying that past efforts to de-bias the winner’s curse may have been more successful than previously recognized, since subjects prefer these lotteries. Study 2 shows that risk-seeking preferences only partially explain lottery preferences, while non-monetary sources of utility may explain the rest. Study 2 also suggests lottery preferences are not independent of context, and offers methods to reduce the winner’s curse.

20.
This paper reports estimates of the ex ante tradeoffs for three specific homeland security policies, all addressing a terrorist attack on commercial aircraft with shoulder-mounted missiles. Our analysis focuses on the willingness to pay for anti-missile laser jamming countermeasures mounted on commercial aircraft, compared with two other policies as well as the prospect of remaining with the status quo. Our findings are based on a stated preference conjoint survey conducted in 2006 and administered to a sample from Knowledge Networks’ national internet panel. The estimates range from $100 to $220 annually per household. Von Winterfeldt and O’Sullivan’s (2006) analysis of the same laser jamming plan suggests that the countermeasures would be preferred if economic losses exceed $74 billion, the probability of attack over 10 years exceeds 0.37, and the cost of the measures is less than about $14 billion. Our results imply that, using the most conservative of our estimates, a program with a cost consistent with their thresholds would yield significant aggregate net benefits.
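The closing claim can be checked with a back-of-envelope calculation. The WTP and cost figures come from the abstract; the household count and the undiscounted 10-year horizon are assumptions introduced here for illustration:

```python
# Figures from the abstract:
wtp_per_household = 100   # USD/year, the most conservative WTP estimate
program_cost = 14e9       # USD, the cited cost threshold

# Assumptions (not in the abstract): ~110 million US households,
# and a simple undiscounted sum over the 10-year attack horizon.
households = 110e6
annual_aggregate_wtp = wtp_per_household * households  # USD per year
ten_year_wtp = 10 * annual_aggregate_wtp
net_benefit = ten_year_wtp - program_cost

print(annual_aggregate_wtp, net_benefit > 0)
```

Under these assumptions, aggregate annual WTP alone is on the order of $11 billion, so a decade of benefits comfortably exceeds a $14 billion program cost, which is consistent with the abstract’s conclusion.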


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号