Similar Literature
20 similar records found (search time: 46 ms)
1.
In cooperative Cournot oligopoly games, it is known that the β-core is equal to the α-core, and both are non-empty if every individual profit function is continuous and concave (Zhao, Games Econ Behav 27:153–168, 1999b). Following Chander and Tulkens (Int J Game Theory 26:379–401, 1997), we assume that firms react to a deviating coalition by choosing individual best reply strategies. We deal with the problem of the non-emptiness of the induced core, the γ-core, by two different approaches. The first establishes that the associated Cournot oligopoly Transferable Utility (TU)-games are balanced if the inverse demand function is differentiable and every individual profit function is continuous and concave on the set of strategy profiles, which is a step forward beyond Zhao's core existence result for this class of games. The second approach, restricted to the class of Cournot oligopoly TU-games with linear cost functions, provides a single-valued allocation rule in the γ-core called the Nash Pro rata (NP)-value. This result generalizes Funaki and Yamato's (Int J Game Theory 28:157–171, 1999) core existence result from no capacity constraint to asymmetric capacity constraints. Moreover, we provide an axiomatic characterization of this solution by means of four properties: efficiency, null firm, monotonicity, and non-cooperative fairness.
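A minimal numerical sketch may help fix the pro rata intuition in the linear-cost case. It assumes linear inverse demand p = a − bQ, constant marginal costs, an interior Cournot–Nash equilibrium without binding capacity constraints, and takes v(N) to be the cartel's monopoly profit shared in proportion to Nash outputs; the parameter values and the name np_value are ours, not the paper's formal construction.

```python
# Illustrative sketch (not the paper's formal NP-value): linear inverse
# demand p = a - b*Q, constant marginal costs, interior Cournot-Nash
# equilibrium, and the grand coalition's worth v(N) taken as the monopoly
# profit of the cheapest technology, shared pro rata to Nash outputs.

def np_value(costs, a=100.0, b=1.0):
    n, total_c = len(costs), sum(costs)
    # interior Cournot-Nash outputs: q_i = (a + sum_{j!=i} c_j - n*c_i) / (b*(n+1))
    q = [(a + total_c - (n + 1) * ci) / (b * (n + 1)) for ci in costs]
    v_grand = (a - min(costs)) ** 2 / (4 * b)   # cartel produces at the lowest cost
    return [v_grand * qi / sum(q) for qi in q]

print(np_value([10.0, 15.0, 20.0]))             # shares fall with marginal cost
```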

2.
Luce and Narens (Journal of Mathematical Psychology, 29:1–72, 1985) showed that rank-dependent utility (RDU) is the most general interval scale utility model for binary lotteries. It can be easily established that this result cannot be generalized to lotteries with more than two outcomes. This article suggests several additional conditions to ensure RDU as the only utility model with the desired property of interval scalability in the general case. The related axiomatizations of some special cases of RDU of independent interest (the quantile utility, expected utility, and Yaari's dual expected utility) are also given.
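Since RDU is the central object here, a short sketch of how a rank-dependent functional evaluates a finite lottery may be useful; the power utility and probability-weighting functions below are illustrative placeholders and no part of Luce and Narens's result.

```python
# A minimal sketch of rank-dependent utility (RDU) evaluation in the
# standard decumulative-weighting form; u and w are illustrative choices.

def rdu(outcomes, probs, u, w):
    """Evaluate sum_i u(x_i) * [w(G_i) - w(G_{i+1})], with outcomes sorted
    from worst to best and G_i the probability of getting x_i or better."""
    ranked = sorted(zip(outcomes, probs), key=lambda t: t[0])
    value, tail = 0.0, 1.0
    for x, p in ranked:
        value += u(x) * (w(tail) - w(tail - p))
        tail -= p
    return value

u = lambda x: x ** 0.5            # illustrative concave utility
w = lambda q: q ** 0.7            # illustrative probability weighting

# binary lottery: 100 with probability 0.3, else 0; reduces to the
# binary RDU form u(100) * w(0.3)
print(rdu([100, 0], [0.3, 0.7], u, w))
```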

3.
We deal with the ranking problem of the nodes in a directed graph. The bilateral relationships specified by a directed graph may reflect the outcomes of a sport competition, the mutual reference structure between websites, or a group preference structure over alternatives. We introduce a class of scoring methods for directed graphs, indexed by a single nonnegative parameter α. This parameter reflects the internal slackening of a node within an underlying iterative process. The class of so-called internal slackening scoring methods, denoted by λα, consists of the limits of these processes. It is seen that λ0 extends the invariant scoring method, while λ∞ extends the fair bets scoring method. Method λ1 corresponds to the existing λ-scoring method of Borm et al. (Ann Oper Res 109(1):61–75, 2002) and can be seen as a compromise between λ0 and λ∞. In particular, an explicit proportionality relation between λα and λ1 is derived. Moreover, the internal slackening scoring methods are applied to the setting of social choice situations, where they give rise to a class of social choice correspondences that refine both the Top cycle correspondence and the Uncovered set correspondence.
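Since the abstract defines λα only as the limit of an iterative process, the sketch below is one illustrative reading rather than the paper's formal definition: a fair-bets-style power iteration in which each node keeps a self-loop of weight α, so that α governs how much score a node retains internally at each step.

```python
# Illustrative sketch of an internal-slackening-style iterative scoring;
# the self-loop of weight alpha is our reading, not the paper's definition.
import numpy as np

def iterative_score(A, alpha=1.0, iters=1000):
    """A[i, j] = number of wins of node i over node j. Each node j
    redistributes its current score to the nodes that beat it, retaining
    a share governed by the slackening parameter alpha (> 0 assumed)."""
    n = A.shape[0]
    losses = A.sum(axis=0)                          # total losses of each node
    M = (A + alpha * np.eye(n)) / (losses + alpha)  # column-stochastic
    s = np.full(n, 1.0 / n)
    for _ in range(iters):
        s = M @ s
        s /= s.sum()
    return s

# 3-node top cycle: 0 beats 1, 1 beats 2, 2 beats 0 (scores come out equal)
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(iterative_score(A, alpha=1.0))
```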

4.
We investigate how choices for uncertain gain and loss prospects are affected by the decision maker's perceived level of knowledge about the underlying domain of uncertainty. Specifically, we test whether Heath and Tversky's (J Risk Uncertain 4:5–28, 1991) competence hypothesis extends from gains to losses. We predict that the commonly observed preference for high knowledge over low knowledge prospects for gains reverses for losses. We employ an empirical setup in which participants make hypothetical choices between gain or loss prospects in which the outcome depends on whether a high or low knowledge event occurs. We infer decision weighting functions for high and low knowledge events from choices using a representative agent preference model. For gains, we replicate the results of Kilka and Weber (Manage Sci 47:1712–1726, 2001), finding that decision makers are more attracted to choices that they feel more knowledgeable about. However, for losses, we find limited support for our extension of the competence effect.

5.
Risk aversion and expected-utility theory: A calibration exercise
Rabin (Econometrica 68(5):1281–1292, 2000) argues that, under expected utility, observed risk aversion over modest stakes implies extremely high risk aversion over large stakes. Cox and Sadiraj (Games Econ Behav 56(1):45–60, 2006) have replied that this is a problem for expected utility of wealth, but that expected utility of income does not share it. We combine experimental data on moderate-scale risky choices with survey data on income to estimate coefficients of relative risk aversion under expected utility of consumption. Assuming individuals cannot save implies an average coefficient of relative risk aversion of 1.92. Assuming instead that they can decide between consuming today and saving for the future, a more realistic assumption, implies quadruple-digit coefficients. This provides empirical evidence for narrow bracketing.
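To make the calibration exercise concrete, here is a minimal sketch, under assumptions of our own (a 50/50 gamble, CRRA utility over total consumption, illustrative numbers), of how one backs out the relative risk aversion coefficient that rationalizes an observed moderate-stakes choice; it is not the paper's estimator.

```python
# Back out the CRRA coefficient rho at which an agent is indifferent
# between a safe amount and a 50/50 gamble, both evaluated over total
# consumption. All numbers are illustrative.
import math
from scipy.optimize import brentq

def crra(c, rho):
    """CRRA utility of consumption c with relative risk aversion rho."""
    return math.log(c) if rho == 1 else (c ** (1 - rho) - 1) / (1 - rho)

def implied_rho(base, safe, win, lose, p=0.5):
    gap = lambda rho: (p * crra(base + win, rho)
                       + (1 - p) * crra(base + lose, rho)
                       - crra(base + safe, rho))
    return brentq(gap, 1e-2, 30.0)    # root of the indifference condition

# indifferent between a sure 10 and a 50/50 gamble over +30 / -5,
# on baseline consumption 100
print(implied_rho(100.0, safe=10.0, win=30.0, lose=-5.0))
```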

6.
This article explores rationalizability issues for finite sets of observations of stochastic choice in the framework introduced by Bandyopadhyay et al. (Journal of Economic Theory, 84(1), 95–110, 1999). It is argued that a useful approach is to consider indirect preferences on budgets instead of direct preferences on commodity bundles. A new rationalizability condition for stochastic choices, “rationalizable in terms of stochastic orderings on the normalized price space” (rsop), is defined. rsop is satisfied if and only if there exists a solution to a linear feasibility problem. The existence of a solution also implies rationalizability in terms of stochastic orderings on the commodity space. Furthermore, it is shown that the problem of finding sufficient conditions for binary choice probabilities to be rationalizable bears similarities to the problem considered here.

7.
This paper contributes to the understanding of economic strategic behavior in inter-temporal settings. Comparing the MPE and the OLNE of a widely used class of differential games, it shows (i) what qualifications on behavior a Markov (dynamic) information structure brings about compared with an open-loop (static) information structure, and (ii) what drives intensified or reduced competition between the agents in the long run. The answer depends on whether agents' interactions are characterized by Markov substitutability or Markov complementarity, which can be seen as dynamic translations of the ideas of strategic substitutability and strategic complementarity (Bulow et al. 1985, Journal of Political Economy 93:488–511). In addition, an important practical contribution of the paper for modelers is to show that these results can be deduced directly from the payoff structure, with no need to compute equilibria first. I dedicate this paper to Philippe Michel, who introduced me to the literature on differential games.

8.
Empirical studies such as Goyal et al. (J Polit Econ 114(2):403–412, 2006) or Newman (Proc Natl Acad Sci USA 101(Suppl. 1):5200–5205, 2004) show that scientific collaboration networks exhibit a highly unequal and hierarchical distribution of links. This implies that some researchers can be much more active and productive than others and, consequently, can enjoy a much better scientific reputation. One may think that large intrinsic differences among researchers constitute the main driving force behind these inequalities. Nevertheless, this model shows that, under specific circumstances, very similar individuals may self-organize into unequal and hierarchical structures.

9.
This paper examines the existence of strategic solutions to finite normal form games under the assumption that strategy choices can be described as choices among lotteries where players have security- and potential-level preferences over lotteries (e.g., Cohen, Theory and Decision, 33, 101–104, 1992; Gilboa, Journal of Mathematical Psychology, 32, 405–420, 1988; Jaffray, Theory and Decision, 24, 169–200, 1988). Since security- and potential-level preferences require discontinuous utility representations, standard existence results for Nash equilibria in mixed strategies (Nash, Proceedings of the National Academy of Sciences, 36, 48–49, 1950a; Non-Cooperative Games, Ph.D. Dissertation, Princeton University, 1950b) or for equilibria in beliefs (Crawford, Journal of Economic Theory, 50, 127–154, 1990) do not apply. As a key insight, this paper proves that non-existence of equilibria in beliefs, and therefore non-existence of Nash equilibria in mixed strategies, is possible in finite games with security- and potential-level players. But, as this paper also shows, rationalizable strategies (Bernheim, Econometrica, 52, 1007–1028, 1984; Moulin, Mathematical Social Sciences, 7, 83–102, 1984; Pearce, Econometrica, 52, 1029–1050, 1984) exist for such games. Rationalizability, rather than equilibrium in beliefs, therefore appears to be the more suitable solution concept for games with security- and potential-level players.
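For readers who want to compute rationalizable sets, here is a minimal sketch for a finite two-player game of iterated elimination of pure-strategy strictly dominated strategies; it yields an upper bound on the rationalizable strategies (a full treatment would also test dominance by mixed strategies), and the payoff matrices are illustrative.

```python
# Iterated elimination of pure-strategy strictly dominated strategies.
import numpy as np

def iterated_elimination(U1, U2):
    """U1[i, j], U2[i, j]: payoffs to players 1 and 2 at profile (i, j).
    Returns the surviving strategy indices for each player."""
    rows = list(range(U1.shape[0]))
    cols = list(range(U1.shape[1]))
    changed = True
    while changed:
        changed = False
        for i in rows[:]:   # drop rows strictly dominated on surviving columns
            if any(all(U1[k, j] > U1[i, j] for j in cols) for k in rows if k != i):
                rows.remove(i); changed = True
        for j in cols[:]:   # drop columns strictly dominated on surviving rows
            if any(all(U2[i, k] > U2[i, j] for i in rows) for k in cols if k != j):
                cols.remove(j); changed = True
    return rows, cols

U1 = np.array([[3, 0], [2, 2], [0, 3]])   # player 1: three strategies
U2 = np.array([[2, 1], [1, 1], [1, 2]])   # player 2: two strategies
print(iterated_elimination(U1, U2))
```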

10.
Ranking finite subsets of a given set X of elements is the formal object of analysis in this article. This problem has found a wide range of economic interpretations in the literature. The focus of the article is on the family of rankings that are additively representable. Existing characterizations are too complex and hard to grasp in decisional contexts. Furthermore, Fishburn (1996, Journal of Mathematical Psychology 40, 64–77) showed that the number of sufficient and necessary conditions needed to characterize such a family has no upper bound as the cardinality of X increases. In turn, this article proposes a way to overcome these difficulties and allows for the characterization of a meaningful (sub)family of additively representable rankings of sets by means of a few simple axioms. Pattanaik and Xu's (1990, Recherches Economiques de Louvain 56, 383–390) characterization of the cardinality-based rule is derived from our main result, and other new rules that stem from our general proposal are discussed and characterized in even simpler terms. In particular, we analyze restricted-cardinality-based rules, where the set of “focal” elements is not given ex ante but is brought out by the axioms.

11.
There are narrowest bounds for P(h) when P(e) = y and P(h/e) = x, and these bounds collapse to x as y goes to 1. A theorem for these bounds, Bounds for Probable Modus Ponens, entails a principle for updating on possibly uncertain evidence subject to these bounds that generalizes the principle for updating by conditioning on certain evidence. This way of updating on possibly uncertain evidence is appropriate whenever updating by ‘probability kinematics’ or ‘Jeffrey-conditioning’ is, and apparently in countless other cases as well. A more complicated theorem due to Karl Wagner, Bounds for Probable Modus Tollens, registers narrowest bounds for P(∼h) when P(∼e) = y and P(e/h) = x. This theorem serves another principle for updating on possibly uncertain evidence that might be termed ‘contraditioning’, though it is a way of updating that seems frequently inappropriate in practice. It is definitely not a way of putting down a theory (for example, a random-chance theory of the apparent fine-tuning for life of the parameters of standard physics) merely on the ground that the theory made extremely unlikely conditions of which we are now nearly certain. These theorems for bounds and updating are addressed to standard conditional probabilities defined as ratios of probabilities. Adaptations for Hosiasson-Lindenbaum ‘free-standing’ conditional probabilities are provided. The extended on-line version of this article (URL: ) includes appendices and expansions of several notes. Appendix A contains demonstrations and confirmations of elements of those adaptations. Appendix B discusses and elaborates analogues of modus ponens and modus tollens for probabilities and conditional probabilities found in Elliott Sober's “Intelligent Design and Probability Reasoning.” Appendix C adds to observations made in the article regarding relations of probability kinematics and updating subject to Bounds for Probable Modus Ponens.
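The modus ponens bounds themselves follow from the law of total probability: P(h) = P(h/e)P(e) + P(h/∼e)P(∼e) with P(h/∼e) free in [0, 1], giving P(h) in [xy, xy + (1 − y)]. A minimal worked check (the function name is ours):

```python
def modus_ponens_bounds(x, y):
    """Narrowest bounds on P(h) given P(h/e) = x and P(e) = y."""
    return x * y, x * y + (1 - y)

# the interval collapses to x as y goes to 1, as the abstract notes
for y in (0.5, 0.9, 0.99, 1.0):
    lo, hi = modus_ponens_bounds(0.8, y)
    print(f"P(e) = {y:.2f}: {lo:.3f} <= P(h) <= {hi:.3f}")
```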

12.
A reasonable level of risk aversion with respect to small gambles leads to a high, and absurd, level of risk aversion with respect to large gambles. This was demonstrated by Rabin (Econometrica 68:1281–1292, 2000) for expected utility theory. Later, Safra and Segal (Econometrica, 2008) extended this result by showing that similar arguments apply to many non-expected utility theories, provided they are Gâteaux differentiable. In this paper we drop the differentiability assumption and, by restricting attention to betweenness theories, show that much weaker conditions are sufficient for the derivation of similar calibration results.

13.
On Decomposing Net Final Values: EVA, SVA and Shadow Project
A decomposition model of Net Final Values (NFV), named Systemic Value Added (SVA), is proposed for decision-making purposes, based on a systemic approach introduced in Magni [Magni, C. A. (2003), Bulletin of Economic Research 55(2), 149–176; Magni, C. A. (2004), Economic Modelling 21, 595–617]. The model translates the notion of excess profit by giving formal expression to a counterfactual alternative available to the decision maker. Relations with other decomposition models are studied, among them Stewart's [Stewart, G. B. (1991), The Quest for Value: The EVA™ Management Guide, Harper Collins Publishers]. The index introduced here differs from Stewart's Economic Value Added (EVA) in that it rests on a different interpretation of the notion of excess profit and is formally connected with the EVA model by means of a shadow project. The SVA is formally and conceptually threefold, in that it is economic, financial, and accounting-flavoured. Some results are offered, providing sufficient and necessary conditions for decomposing NFV. Relations between a project's SVA and its shadow project's EVA are shown; all results of Pressacco and Stucchi [Pressacco, F. and Stucchi, P. (1997), Rivista di Matematica per le Scienze Economiche e Sociali 20, 165–185] are proved by making use of the systemic approach, and the shadow counterparts of those results are also shown.
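The decomposition idea builds on the standard residual-income identity, which a few lines verify numerically: with a clean-surplus depreciation schedule, discounted residual income sums exactly to the project's net present value (compounded to the horizon, its NFV). The figures are illustrative, and the sketch shows only the plain EVA-style identity, not the SVA reinterpretation of excess profit.

```python
# Numerical check: discounted residual income equals NPV under clean surplus.
r = 0.10
outlay = 100.0
cash_flows = [40.0, 50.0, 45.0]
depreciation = [30.0, 35.0, 35.0]            # clean surplus: sums to the outlay

npv = -outlay + sum(cf / (1 + r) ** (t + 1) for t, cf in enumerate(cash_flows))

book, disc_ri = outlay, 0.0
for t, (cf, dep) in enumerate(zip(cash_flows, depreciation)):
    ri = (cf - dep) - r * book               # accounting profit minus capital charge
    disc_ri += ri / (1 + r) ** (t + 1)
    book -= dep

print(npv, disc_ri)                          # agree up to floating-point error
```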

14.
Ellsberg (The Quarterly Journal of Economics 75, 643–669 (1961); Risk, Ambiguity and Decision, Garland Publishing (2001)) argued that uncertainty is not reducible to risk. At the center of Ellsberg's argument lies a thought experiment that has come to be known as the three-color example. It has been observed that a significant number of sophisticated decision makers violate the requirements of subjective expected utility theory when they are confronted with Ellsberg's three-color example. More generally, such decision makers are in conflict with either the ordering assumption or the independence assumption of subjective expected utility theory. While a clear majority of the theoretical responses to these violations have advocated maintaining ordering while relaxing independence, a persistent minority has advocated abandoning the ordering assumption. The purpose of this paper is to consider a similar dilemma that exists within the context of multiattribute models, where it arises by considering indeterminacy in the weighting of attributes rather than indeterminacy in the determination of probabilities as in Ellsberg's example.
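As a reminder of the structure the paper transposes to attribute weights, here is a minimal numerical rendering of the classic three-color setup (30 red balls, 60 black or yellow in unknown proportion, equal-stake bets): the modal pattern of betting on red over black but on black-or-yellow over red-or-yellow cannot be rationalized by any single subjective probability for black.

```python
# Scan every subjective probability for black compatible with the urn
# (p_red = 1/3, p_black in [0, 2/3]); no value rationalizes both choices.
def consistent(p_black):
    p_red = 30 / 90
    p_yellow = 1 - p_red - p_black
    prefers_red_over_black = p_red > p_black
    prefers_by_over_ry = p_black + p_yellow > p_red + p_yellow
    return prefers_red_over_black and prefers_by_over_ry

print(any(consistent(k / 1500) for k in range(1001)))   # -> False
```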

15.
16.
The choice of the proper discount rate is important in the analysis of projects whose costs and benefits extend into the future, a particularly striking feature of policies directed at climate change. Much of the literature, including prominent work by Arrow et al. (1996), Stern (2007, 2008), and Dasgupta (2008), employs a reduced-form approach that conflates social value judgments with individuals' risk preferences, the latter being an empirical question about choices under uncertainty rather than a matter for ethical reflection. This article offers a simple, explicit decomposition that clarifies the distinction, reveals unappreciated difficulties with the reduced-form approach, and relates them to the literature. In addition, it explores how significant uncertainty about future consumption, another central factor in climate policy assessment, raises further complications regarding the relationship between social judgments and individuals' risk preferences.
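The reduced-form object at issue in this literature is the Ramsey discounting formula r = δ + ηg, in which the single parameter η does double duty as an ethical judgment and a description of risk preferences. A minimal sketch with illustrative calibrations (a Stern-style pair versus a more conventional one) shows how much hangs on it:

```python
def ramsey_rate(delta, eta, g):
    """Reduced-form consumption discount rate: pure time preference delta
    plus elasticity of marginal utility eta times consumption growth g."""
    return delta + eta * g

# illustrative calibrations, not the article's numbers
print(ramsey_rate(delta=0.001, eta=1.0, g=0.013))  # Stern-style: ~1.4%
print(ramsey_rate(delta=0.03, eta=2.0, g=0.013))   # conventional: ~5.6%
```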

17.
We provide an economic interpretation of the practice of incorporating risk measures as constraints in an expected prospect maximization problem. For what we call the infimum-of-expectations class of risk measures, we show that if the decision maker (DM) maximizes the expectation of a random prospect under the constraint that the risk measure is bounded above, he then behaves as a “generalized expected utility maximizer” in the following sense. The DM exhibits ambiguity with respect to a family of utility functions defined on a larger set of decisions than the original one; he adopts pessimism and first performs a minimization of expected utility over this family, then performs a maximization over a new decision set. This economic behaviour, called “maxmin under risk”, was studied by Maccheroni (Econ Theory 19:823–831, 2002). As an application, we make the link between an expected prospect maximization problem, subject to conditional value-at-risk being less than a threshold value, and a non-expected utility formulation involving “loss aversion”-type utility functions.
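A minimal sketch of the constrained problem in the application, under assumptions of our own (Gaussian scenarios, a one-dimensional position size as the decision, and the standard Rockafellar–Uryasev sample estimator for CVaR); it is meant only to make the "expected prospect under a CVaR cap" structure concrete, not to reproduce the paper's formulation.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Rockafellar-Uryasev sample estimate: mean loss in the worst tail."""
    var = np.quantile(losses, alpha)
    return var + np.maximum(losses - var, 0.0).mean() / (1 - alpha)

rng = np.random.default_rng(0)
returns = rng.normal(0.05, 0.2, size=10_000)      # per-unit random prospect

best = None
for pos in np.linspace(0.0, 10.0, 101):           # decision: position size
    gains = pos * returns
    if cvar(-gains) <= 1.0:                       # risk measure bounded above
        if best is None or gains.mean() > best[1]:
            best = (pos, gains.mean())
print(best)                                       # largest size passing the cap
```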

18.
(1) This paper uses the following binary relations: > (is preferred to); ⩽ (is not preferred to); < (is less preferred than); ~ (is indifferent to). (2) Savage used primitive ⩾, postulated to be connected and transitive on A (the set of acts), to define the others: [x ~ y ⇔ (x ⩽ y and y ⩽ x)]; [y < x ⇔ not x ⩽ y]; [x > y ⇔ y < x]. Independently of the axioms, this definition implies that ⩽ and > are complementary relations on A: [x < y ⇔ not x > y]. (3) Pratt, Raiffa and Schlaifer used primitive ⩽, postulated to be transitive on L (the set of lotteries), to define the others with a different expression for <: [x < y ⇔ (x ⩽ y and not y ⩽ x)]. Thus, ⩽ and > are not necessarily complementary on L, since ⩽ is not postulated to be connected on L, and connected ⩽ is necessary and sufficient for such complementarity. Since the restriction of ⩽ to the subset A of L is connected, ⩽ and > are complementary on A. (4) Fishburn used primitive < on A to define the others with different expressions for ~ and ⩽: [x ~ y ⇔ (not x < y and not y < x)]; [x ⩽ y ⇔ (x < y or x ~ y)]. His version of Savage's theory then assumed that < is asymmetric and negatively transitive on A. Thus, ⩽ and > are complementary, since asymmetric < is necessary and sufficient for such complementarity. (5) This analysis provides a new proof that the same list of elementary properties of binary relations on A applies to all three theories: ⩽ is connected, transitive, weakly connected, reflexive, and negatively transitive; while both < and > are asymmetric, negatively transitive, antisymmetric, irreflexive, and transitive; but only ~ is symmetric.
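Point (3) is easy to machine-check: with < defined from a primitive ⩽ by [x < y ⇔ (x ⩽ y and not y ⩽ x)] and > its converse, ⩽ and > are complementary exactly when ⩽ is connected. A minimal brute-force sketch over all binary relations on a three-element set:

```python
from itertools import product

A = range(3)
pairs = [(x, y) for x in A for y in A]

def check(leq):
    # x > y  iff  y < x  iff  (y <= x and not x <= y)
    gt = {(x, y) for (x, y) in pairs if (y, x) in leq and (x, y) not in leq}
    complementary = all(((x, y) in leq) != ((x, y) in gt) for (x, y) in pairs)
    connected = all((x, y) in leq or (y, x) in leq for (x, y) in pairs)
    return complementary == connected

# enumerate all 2^9 relations on a 3-element set; the equivalence holds for all
print(all(check({p for p, keep in zip(pairs, bits) if keep})
          for bits in product([0, 1], repeat=len(pairs))))     # -> True
```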

19.
Previous work by Diffo Lambo and Moulen [Theory and Decision 53, 313–325 (2002)] and Felsenthal and Machover [The Measurement of Voting Power, Edward Elgar Publishing Limited (1998)] shows that all swap-preserving measures of voting power are ordinally equivalent on any swap-robust simple voting game. Swap-preserving measures include the Banzhaf, the Shapley–Shubik, and other commonly used measures of a priori voting power. In this paper, we completely characterize the achievable hierarchies for any such measure on a swap-robust simple voting game. Each possible hierarchy can be induced by a weighted voting game, and we provide a constructive proof of this result. In particular, the strict hierarchy is always achievable as long as there are at least five players.
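To make the hierarchy notion concrete, here is a minimal sketch computing raw Banzhaf swing counts for a weighted voting game; with the illustrative weights and quota below, the five players' counts come out pairwise distinct, i.e., a strict hierarchy.

```python
from itertools import combinations

def banzhaf_swings(weights, quota):
    """For each player, count winning coalitions that lose without them."""
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:
                for i in coalition:
                    if total - weights[i] < quota:
                        swings[i] += 1
    return swings

# an illustrative 5-player weighted game exhibiting a strict hierarchy
print(banzhaf_swings([10, 9, 5, 4, 3], quota=14))   # -> [9, 7, 5, 3, 1]
```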

20.
The Value of a Probability Forecast from Portfolio Theory
A probability forecast scored ex post using a probability scoring rule (e.g., the Brier score) is analogous to a risky financial security. With only superficial adaptation, the same economic logic by which securities are valued ex ante, in particular portfolio theory and the capital asset pricing model (CAPM), applies to the valuation of probability forecasts. Each available forecast of a given event is valued relative to the others and to the “market” (all available forecasts). A forecast is more valuable the higher its expected score and the lower the covariance of its score with the market aggregate score. Forecasts that score highly in trials where others do poorly are appreciated more than those with equal success in “easy” trials where most forecasts score well. The CAPM defines economically rational (equilibrium) forecast prices at which forecasters can trade shares in each other's ex post scores, or the associated monetary payoffs, thereby balancing forecast risk against return and ultimately forming optimally hedged portfolios. Hedging this way offers risk-averse forecasters an “honest” alternative to the ruse of reporting conservative probability assessments.
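A minimal sketch of the two valuation ingredients named above, computed on simulated data (five forecasters, 200 Bernoulli trials, a quadratic score); the data-generating process and variable names are ours, and the CAPM pricing step itself is not implemented.

```python
import numpy as np

rng = np.random.default_rng(1)
outcomes = rng.integers(0, 2, size=200)                 # realized 0/1 events
# five forecasters with independent noise, forecasts clipped to [0, 1]
forecasts = np.clip(outcomes * 0.6 + rng.uniform(0.0, 0.4, size=(5, 200)), 0, 1)

scores = 1.0 - (forecasts - outcomes) ** 2              # per-trial quadratic score
market = scores.mean(axis=0)                            # market aggregate score

for k in range(5):
    cov = np.cov(scores[k], market)[0, 1]               # covariance with the market
    print(f"forecaster {k}: mean score {scores[k].mean():.3f}, "
          f"covariance with market {cov:+.4f}")
```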
