Similar Documents
20 similar documents found (search time: 31 ms)
1.
Aumann's (1987) theorem shows that correlated equilibrium is an expression of Bayesian rationality. We extend this result to games with incomplete information. First, we rely on Harsanyi's (1967) model and represent the underlying multiperson decision problem as a fixed game with imperfect information. We survey four definitions of correlated equilibrium that have appeared in the literature and show that they are not equivalent to each other. We prove that one of them fits Aumann's framework: the agents' normal form correlated equilibrium is an expression of Bayesian rationality in games with incomplete information. We also follow a universal Bayesian approach based on Mertens and Zamir's (1985) construction of the universal beliefs space. Hierarchies of beliefs over independent variables (states of nature) and dependent variables (actions) are then constructed simultaneously. We establish that the universal set of Bayesian solutions satisfies another extension of Aumann's theorem. We obtain the following corollary: once the types of the players are not fixed by the model, the various definitions of correlated equilibrium previously considered are equivalent.

2.
Focal points in pure coordination games: An experimental investigation
This paper reports an experimental investigation of the hypothesis that in coordination games, players draw on shared concepts of salience to identify focal points on which they can coordinate. The experiment involves games in which equilibria can be distinguished from one another only in terms of the way strategies are labelled. The games are designed to test a number of specific hypotheses about the determinants of salience. These hypotheses are generally confirmed by the results of the experiment.

3.
Nash's solution of a two-person cooperative game prescribes a coordinated mixed-strategy solution involving Pareto-optimal outcomes of the game. Testing this normative solution experimentally presents problems inasmuch as rather detailed explanations must be given to the subjects of the meaning of threat strategy, strategy mixture, expected payoff, etc. To the extent that it is desired to test the solution using naive subjects, the problem arises of imparting to them a minimal level of understanding about the issues involved in the game without actually suggesting the solution. Experiments were performed to test the properties of the solution of a cooperative two-person game as embodied in three of Nash's four axioms: symmetry, Pareto-optimality, and invariance with respect to positive linear transformations. Of these, the last was definitely disconfirmed, suggesting that interpersonal comparison of utilities plays an important part in the negotiations. Some evidence was also found for a conjecture generated by previous experiments, namely that an externally imposed threat (a penalty for non-cooperation) tends to bring the players closer together than the threats generated by the subjects themselves in the process of negotiation.

4.
Dore, Mohammed. Theory and Decision (1997) 43(3): 219-239
This paper critically reviews Ken Binmore's non-utilitarian, game-theoretic solution to the Arrow problem. Binmore's solution belongs to the same family as Rawls's maximin criterion and requires the use of Nash bargaining theory, empathetic preferences, and results from evolutionary game theory. Harsanyi earlier presented a solution that relies on utilitarianism, which requires some exogenous valuation criterion and is therefore incompatible with liberalism. Binmore's rigorous demonstration of the maximin principle for the first time presents a real alternative to a utilitarian solution.

5.
Two institutions that are often implicit or overlooked in noncooperative games are the assumption of Nash behavior to solve a game, and the ability to correlate strategies. We consider two behavioral paradoxes; one in which maximin behavior rules out all Nash equilibria (Chicken), and another in which minimax supergame behavior leads to an inefficient outcome in comparison to the unique stage game equilibrium (asymmetric Deadlock). Nash outcomes are achieved in both paradoxes by allowing for correlated strategies, even when individual behavior remains minimax or maximin. However, the interpretation of correlation as a public institution differs for each case.
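The role of correlation described above can be made concrete with a standard Chicken payoff matrix (the numbers below are illustrative choices, not taken from the paper): a public signal that mixes uniformly over (Dare, Chicken), (Chicken, Dare), and (Chicken, Chicken) is a correlated equilibrium, and the obedience constraints can be checked mechanically.

```python
import itertools

# Chicken payoffs (illustrative numbers): actions 0 = Dare, 1 = Chicken
A = {(0, 0): 0, (0, 1): 7, (1, 0): 2, (1, 1): 6}                      # row player
B = {(a, b): A[(b, a)] for a, b in itertools.product(range(2), repeat=2)}  # symmetric game

# Public signal: mix uniformly over (D,C), (C,D), (C,C)
p = {(0, 1): 1/3, (1, 0): 1/3, (1, 1): 1/3, (0, 0): 0.0}

def is_correlated_eq(p, A, B, tol=1e-9):
    # obedience constraints: no player gains by deviating from a recommendation
    for a, dev in itertools.permutations(range(2), 2):
        if sum(p[(a, b)] * (A[(a, b)] - A[(dev, b)]) for b in range(2)) < -tol:
            return False
        if sum(p[(b, a)] * (B[(b, a)] - B[(b, dev)]) for b in range(2)) < -tol:
            return False
    return True

print(is_correlated_eq(p, A, B))              # True
print(sum(q * A[s] for s, q in p.items()))    # expected payoff 5.0 for each player
```

Note that the expected payoff of 5 exceeds what either pure Nash equilibrium gives the "chickening" player, which is the sense in which the public correlating institution improves on uncoordinated play.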

6.
In reply to McClennen, the paper argues that his criticism is based on a mistaken assumption about the meaning of rationality postulates, to be called the Implication Principle. Once we realize that the Implication Principle has no validity, McClennen's criticisms of what he calls the Reductio Argument and the Incentive Argument fall to the ground. The rest of the paper criticizes the rationality concept McClennen proposes in lieu of that used by orthodox game theory. It is argued that McClennen's concept is inconsistent with the behavior of real-life intelligent egoists; it is incompatible with the way payoffs are defined in game theory; and it would be highly dangerous as a practical guide to human behavior. The author is indebted to the National Science Foundation for financial support through Grant GS-3222, administered through the Center for Research in Management Science, University of California, Berkeley.

7.
Can we rationally learn to coordinate?
In this paper we examine the issue of whether individual rationality considerations are sufficient to guarantee that individuals will learn to coordinate. This question is central in any discussion of whether social phenomena (read: conventions) can be explained in terms of a purely individualistic approach. We argue that the positive answers to this general question that have been obtained in some recent work require assumptions which themselves incorporate some convention. This conclusion may be seen as supporting the viewpoint of institutional individualism in contrast to psychological individualism.

8.
Summary. The objective Bayesian program has as its fundamental tenet (in addition to the three Bayesian postulates) the requirement that, from a given knowledge base, a particular probability function is uniquely appropriate. This amounts to fixing initial probabilities, based on relatively little information, because Bayes' theorem (conditionalization) then determines the posterior probabilities when the belief state is altered by enlarging the knowledge base. Moreover, in order to reconstruct orthodox statistical procedures within a Bayesian framework, only privileged ignorance probability functions will work. To serve all these ends, objective Bayesianism seeks additional principles for specifying ignorance and partial-information probabilities. H. Jeffreys' method of invariance (or Jaynes' modification thereof) is used to solve the former problem, and E. Jaynes' rule of maximizing entropy (subject to invariance for continuous distributions) has recently been thought to solve the latter. I have argued that neither policy is acceptable to a Bayesian, since each is inconsistent with conditionalization. Invariance fails to give a consistent representation of the state of ignorance professed; the difficulties here parallel familiar weaknesses in the old Laplacean principle of insufficient reason. Maximizing entropy is unsatisfactory because the partial information it works with fails to capture the effect of uncertainty about related nuisance factors. The result is a probability function that represents a state richer in empirical content than the belief state targeted for representation. Alternatively, by conditionalizing on information about a nuisance parameter one may move from a distribution of lower to higher entropy, despite the obvious increase in information available. Each of these two complaints appears to me to be a symptom of the program's inability to formulate rules for picking privileged probability distributions that serve to represent ignorance or near-ignorance. Certainly the methods advocated by Jeffreys, Jaynes and Rosenkrantz are mathematically convenient idealizations wherein specified distributions are elevated to the roles of ignorance and partial-information distributions. But the cost that goes with the idealization is a violation of conditionalization, and if that is the ante we must put up to back objective Bayesianism, then I propose we look for a different candidate to earn our support.

9.
Orbell and Dawes develop a non-game-theoretic heuristic that yields a cooperator's advantage by allowing players to project their own cooperate-defect choices onto potential partners (1991, p. 515). With appropriate parameter values their heuristic yields a cooperative environment, but the cooperation depends, simply, on optimism about others' behavior (1991, p. 526). In earlier work, Dawes (1989) established a statistical foundation for such optimism. In this paper, I adapt some of the concerns of Dawes (1989) and develop a game-theoretic model based on a modification of the Harsanyi structure of games with incomplete information (1967-1968). I show that the commonly made conjecture that strategic play is incompatible with cooperation and the cooperator's advantage is false.
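A toy version of the projection mechanism is easy to simulate. Everything below (the Prisoner's Dilemma payoff numbers, the 20% misreading noise, the population mix) is an assumption for illustration, not Orbell and Dawes's actual design: each agent enters an optional-play game only if it expects cooperation, and its expectation is a noisy projection of its own intention.

```python
import random

# A minimal sketch of a projection heuristic in an optional-play
# Prisoner's Dilemma (payoffs and noise level are illustrative assumptions).
R, S, T, P = 3, 0, 5, 1   # reward, sucker, temptation, punishment
EXIT = 0                  # payoff for a declined encounter

def expects_cooperation(own_choice, noise=0.2):
    # project own choice onto the partner, with some misreading noise
    return own_choice if random.random() > noise else not own_choice

def play(pop, rounds=10_000):
    earned = {True: [], False: []}          # keyed by "is a cooperator"
    for _ in range(rounds):
        a, b = random.choice(pop), random.choice(pop)
        if expects_cooperation(a) and expects_cooperation(b):
            pay = {(True, True): (R, R), (True, False): (S, T),
                   (False, True): (T, S), (False, False): (P, P)}[(a, b)]
            earned[a].append(pay[0]); earned[b].append(pay[1])
        else:                               # either side declines: no game
            earned[a].append(EXIT); earned[b].append(EXIT)
    return {k: sum(v) / len(v) for k, v in earned.items()}

random.seed(1)
pop = [True] * 60 + [False] * 40            # 60% intend to cooperate
avg = play(pop)
print(avg[True] > avg[False])               # cooperator's advantage emerges
```

Because cooperators' projected optimism gets them into mutually profitable games far more often than defectors, their average earnings exceed the defectors' despite occasional exploitation, which is the advantage the heuristic produces.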

10.
Rubinstein (1982) considered the problem of dividing a given surplus between two players, and proposed a model in which the two players alternately make and respond to each other's offers through time. He further characterized the perfect equilibrium outcomes, which depend on the players' time preferences and the order of moves. Using both equal and unequal bargaining-cost conditions and an unlimited number of rounds, two experiments were designed to compare the perfect equilibrium model to alternative models based on norms of fairness. We report analyses of final agreements, first offers, and number of bargaining rounds, which provide limited support for the perfect equilibrium model, and conclude by recommending a shift in focus from model testing to specification of the conditions favoring one model over another.
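For the discounting version of the model, the perfect equilibrium division has a well-known closed form. The sketch below assumes common-knowledge discount factors d1 and d2 (the standard textbook case; the experiments' fixed-bargaining-cost condition follows a different formula):

```python
def rubinstein_split(d1, d2):
    """Unique subgame-perfect equilibrium share of the first proposer in
    Rubinstein's (1982) alternating-offers game with discount factors d1, d2."""
    x1 = (1 - d2) / (1 - d1 * d2)
    return x1, 1 - x1

x1, x2 = rubinstein_split(0.9, 0.9)
# indifference check: the responder's share equals the discounted value
# of what she would obtain by rejecting and proposing next round
assert abs(x2 - 0.9 * x1) < 1e-12
print(round(x1, 4))   # 0.5263
```

Note the first-mover advantage (x1 > 1/2 for equal discount factors), which vanishes as impatience disappears: as d1 = d2 approach 1 the split converges to the equal division that fairness-norm models predict directly.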

11.
This paper discusses the relationship between coalitional stability and the robustness of bargaining outcomes to the bargaining procedure. We consider a class of bargaining procedures described by extensive form games, where payoff opportunities are given by a characteristic function (cooperative) game. The extensive form games differ in the probability distribution assigned to chance moves, which determine the order in which players take actions. One way to define mechanism robustness is in terms of the property of no first-mover advantage: an equilibrium is mechanism robust if, for each member, the expected payoff before and after being called to propose is the same. Alternatively, one can define mechanism robustness as a property of equilibrium outcomes: an outcome is said to be mechanism robust if it is supported by some equilibrium in all the extensive form games (mechanisms) within our class. We show that both definitions of mechanism robustness provide an interesting characterization of the core of the underlying cooperative game.

12.
Lattices, bargaining and group decisions
This essay aims at constructing an abstract mathematical system which, when interpreted, serves to portray group choices among alternatives that need not be quantifiable. The system in question is a complete distributive lattice, on which a class of non-negative real-valued homomorphisms is defined. Reinforced with appropriate axioms, this class becomes a convex distributive lattice. If this lattice is equipped with a suitable measure, and if the mentioned class of homomorphisms is equipped with a metric, then the class and its convex sets are seen to possess certain characteristic properties. The main result (Theorem 6) follows from a combination of these results and a famous result due to Choquet. The mathematical scheme is then interpreted in the subject-language of choice among alternatives. It is shown, by means of an example, that the system furnishes all the ingredients for describing multi-group choices. Whether the same ingredients are also adequate for a behavioural theory of multi-group choices is an issue we do not pursue here; however, the example effectively illustrates how a process of bargaining can be described with the aid of the mathematical scheme. In the second example, a class of bargaining situations is modelled in the symbolism of linear programming with several objective functions combined with unknown weights; the cost vectors in such formulations are identified with homomorphisms, and the main theorem of this essay is applied.

13.
The incoherence of agreeing to disagree
The agreeing-to-disagree theorem of Aumann and the no-expected-gain-from-trade theorem of Milgrom and Stokey are reformulated under an operational definition of Bayesian rationality. Common knowledge of beliefs and preferences is achieved through transactions in a contingent claims market, and mutual expectations of Bayesian rationality are defined by the condition of joint coherence, i.e., the collective avoidance of arbitrage opportunities. The existence of a common prior distribution and the impossibility of agreeing to disagree follow from the joint coherence requirement, but the prior must be interpreted as a risk-neutral distribution: a product of probabilities and marginal utilities for money. The failure of heterogeneous information to create disagreements or incentives to trade is shown to be an artifact of overlooking the potential role of trade in constructing the initial state of common knowledge.

14.
This paper discusses several concepts that can be used to provide a foundation for a unified theory of rational economic behavior. First, decision-making is defined to be a process that takes place with reference to both subjective and objective time, that distinguishes between plans and actions and between information and states, and that explicitly incorporates the collection and processing of information. This conception of decision-making is then related to several important aspects of behavioral economics: the dependence of values on experience, the use of behavioral rules, the occurrence of multiple goals, and environmental feedback. Our conclusions are: (1) the non-transitivity of observed or revealed preferences is a characteristic of learning and hence is to be expected of rational decision-makers; (2) the learning of values through experience suggests the sensibleness of short time horizons and the making of choices according to flexible utility; (3) certain rules of thumb used to allow for risk are closely related to principles of safety-first and can also be based directly on the hypothesis that the feeling of risk (the probability of disaster) is identified with extreme departures from recently executed decisions; (4) the maximization of a hierarchy of goals, or of a lexicographical utility function, is closely related to the search for feasibility and the practice of satisficing; (5) when the dim perception of environmental feedback and the effect of learning on values are acknowledged, the intertemporal optimality of planned decision trajectories is seen to be a characteristic of subjective, not objective, time, which explains why decision-making is so often best characterized by rolling plans. In short, we find that economic man, like any other, is an existential being whose plans are based on hopes and fears and whose every act involves a leap of faith. This paper is based on a talk presented at the conference New Beginnings in Economics, Akron, Ohio, March 15, 1969. Work on this paper was supported by a grant from the National Science Foundation.

15.
We study the uncertain dichotomous choice model. In this model a set of decision makers is required to select one of two alternatives, say support or reject a certain proposal. Applications of this model are relevant to many areas, such as political science, economics, business and management. The purpose of this paper is to estimate and compare the probabilities that different decision rules may be optimal. We consider the expert rule, the majority rule and a few in-between rules. The information on the decisional skills is incomplete, and these skills arise from an exponential distribution. It turns out that the probability that the expert rule is optimal far exceeds the probability that the majority rule is optimal, especially as the number of the decision makers becomes large.
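The flavor of this comparison can be sketched by simulation. The parameterization below treats each member's competence as a logistic transform of an exponentially distributed log-odds skill; this and the use of correctness probabilities (rather than the paper's exact optimality calculation) are assumptions made purely for illustration:

```python
import itertools
import math
import random

def majority_correct_prob(ps):
    """Exact probability that a simple-majority vote is correct,
    given independent individual competences ps."""
    n = len(ps)
    total = 0.0
    for votes in itertools.product([0, 1], repeat=n):   # 1 = votes correctly
        if sum(votes) > n / 2:
            pr = 1.0
            for v, p in zip(votes, ps):
                pr *= p if v else 1 - p
            total += pr
    return total

def sample_skill(lam=1.0):
    # Illustrative assumption: log-odds of competence is exponentially
    # distributed, so every competence exceeds 1/2.
    t = random.expovariate(lam)
    return 1 / (1 + math.exp(-t))

random.seed(0)
trials, wins = 2000, 0
for _ in range(trials):
    ps = sorted(sample_skill() for _ in range(5))
    if ps[-1] > majority_correct_prob(ps):  # expert rule: follow the best member
        wins += 1
print(wins / trials)   # fraction of groups where the expert rule beats majority
```

Under this parameterization the expert frequently outperforms the majority because exponential skills are heavily skewed: one highly competent member often carries more information than four mediocre ones combined.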

16.
An agent who violates independence can avoid dynamic inconsistency in sequential choice if he is sophisticated enough to make use of backward induction in planning. However, Seidenfeld has demonstrated that such a sophisticated agent with dependent preferences is bound to violate the principle of dynamic substitution, according to which the admissibility of a plan is preserved under substitution of indifferent options at various choice nodes in the decision tree. Since Seidenfeld considers dynamic substitution to be a coherence condition on dynamic choice, he concludes that sophistication cannot save a violator of independence from incoherence. In response to McClennen's objection that relying on dynamic substitution when independence is at stake must be question-begging, Seidenfeld undertakes to prove that dynamic substitution follows from the principle of backward induction alone, provided we assume that the agent's admissible choices from different sets of feasible plans are all based on a fixed underlying preference ordering of plans. This paper shows that Seidenfeld's proof fails: depending on the interpretation, it is either invalid or based on an unacceptable assumption.

17.
Counterexamples to two results by Stalnaker (Theory and Decision, 1994) are given and a corrected version of one of the two results is proved. Stalnaker's proposed results are: (1) if at the true state of an epistemic model of a perfect-information game there is common belief in the rationality of every player and common belief that no player has false beliefs (he calls this joint condition strong rationalizability), then the true (or actual) strategy profile is path-equivalent to a Nash equilibrium; (2) in a normal-form game a strategy profile is strongly rationalizable if and only if it belongs to C, the set of profiles that survive the iterative deletion of inferior profiles.

18.
In this paper a problem for utility theory is reviewed: the theory would have an agent who was compelled to play Russian roulette with one revolver or another pay as much to have a six-shooter with four bullets relieved of one bullet before playing with it as he would be willing to pay to have a six-shooter with two bullets emptied. A less demanding Bayesian theory is described that would have an agent maximize the expected values of the possible total consequences of his actions. Utility theory is located within that theory as valid for agents who satisfy certain formal conditions, that is, for agents who are, in terms of that more general theory, indifferent to certain dimensions of risk. Raiffa- and Savage-style arguments for its more general validity are then resisted. Addenda are concerned with implications for game theory, and with relations between utilities and values.

19.
Both Popper and Good have noted that a deterministic microscopic physical approach to probability requires subjective assumptions about the statistical distribution of initial conditions. However, they did not use this fact for defining an a priori probability, but rather resorted to the standard observation of repetitive events. This observational probability may be hard to assess for real-life decision problems under uncertainty, which very often are, strictly speaking, non-repetitive one-time events. This may be a reason for the popularity of subjective probability in decision models. Unfortunately, such subjective probabilities often merely reflect attitudes towards risk, and not the underlying physical processes. In order to get as objective as possible a definition of probability for one-time events, this paper identifies the origin of randomness in individual chance processes. By focusing on the dynamics of the process, rather than on the (static) device, it is found that any process contains two components: observer-independent (objective) and observer-dependent (subjective). Randomness, if present, arises from the subjective definition of the rules of the game, and is not, as in Popper's propensity, a physical property of the chance device. In this way, the classical definition of probability is no longer a primitive notion based upon equally possible cases, but is derived from the underlying microscopic processes, plus a subjective, clearly identified estimate of the branching ratios in an event tree. That is, equipossibility is not an intrinsic property of the object/subject system but is forced upon the system via the rules of the game/measurement. Also, the typically undefined concept of symmetry in games of chance is broken down into objective and subjective components; it is found that macroscopic symmetry may hold under microscopic asymmetry. A similar analysis of urn drawings shows no conceptual difference from other games of chance (contrary to Allais' opinion). Finally, the randomness in Lande's knife problem is due not to objective fortuity (as in Popper's view) but to the rules of the game (the theoretical difficulties arise from intermingling microscopic trajectories and macroscopic events). Dedicated to Professor Maurice Allais on the occasion of the Nobel Prize in Economics awarded December, 1988.

20.
We report an experiment on two treatments of an ultimatum minigame. In one treatment, responders' reactions are hidden from proposers. We observe high rejection rates reflecting responders' intrinsic resistance to unfairness. In the second treatment, proposers are informed, allowing for dynamic effects over eight rounds of play. The higher rejection rates there can be attributed to responders' provision of a public good: punishment creates a group reputation for being tough and effectively educates proposers. Since rejection rates with informed proposers drop to the level of the treatment with non-informed proposers, the hypothesis that responders enjoy overt punishment is not supported.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号