Similar Documents
20 similar documents found (search time: 78 ms)
1.
In reply to McClennen, the paper argues that his criticism is based on a mistaken assumption about the meaning of rationality postulates, to be called the Implication Principle. Once we realize that the Implication Principle has no validity, McClennen's criticisms of what he calls the Reductio Argument and what he calls the Incentive Argument fall to the ground. The rest of the paper criticizes the rationality concept McClennen proposes in lieu of that used by orthodox game theory. It is argued that McClennen's concept is inconsistent with the behavior of real-life intelligent egoists; that it is incompatible with the way payoffs are defined in game theory; and that it would be highly dangerous as a practical guide to human behavior. The author is indebted to the National Science Foundation for financial support through Grant GS-3222, administered through the Center for Research in Management Science, University of California, Berkeley.

2.
Nash's solution of a two-person cooperative game prescribes a coordinated mixed-strategy solution involving Pareto-optimal outcomes of the game. Testing this normative solution experimentally presents problems, inasmuch as the subjects must be given rather detailed explanations of the meaning of threat strategy, strategy mixture, expected payoff, etc. To the extent that it is desired to test the solution using naive subjects, the problem arises of imparting to them a minimal level of understanding of the issues involved in the game without actually suggesting the solution. Experiments were performed to test the properties of the solution of a cooperative two-person game as these are embodied in three of Nash's four axioms: Symmetry, Pareto-optimality, and Invariance with respect to positive linear transformations. Of these, the last was definitely disconfirmed, suggesting that interpersonal comparison of utilities plays an important part in the negotiations. Some evidence was also found for a conjecture generated by previous experiments, namely that an externally imposed threat (a penalty for non-cooperation) tends to bring the players closer together than do the threats generated by the subjects themselves in the process of negotiation.
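As a minimal illustration of the invariance axiom being tested (a sketch with hypothetical payoffs, not the authors' experimental procedure), the Nash solution maximizes the product of the players' gains over the disagreement point, and rescaling one player's utilities by a positive linear transformation leaves the selected outcome unchanged:

    import numpy as np

    # Hypothetical feasible frontier u2 = 10 - u1, disagreement point d = (2, 1).
    d = np.array([2.0, 1.0])
    u1 = np.linspace(2.0, 9.0, 10001)
    u2 = 10.0 - u1

    def nash_point(u1, u2, d):
        # Nash solution: maximize the product of gains over the disagreement point.
        gains = (u1 - d[0]) * (u2 - d[1])
        i = np.argmax(gains)
        return u1[i], u2[i]

    print(nash_point(u1, u2, d))  # -> (5.5, 4.5) on the original scale

    # Rescale player 2 by the positive linear transformation v = 3u + 5;
    # the invariance axiom says the same physical outcome must be selected.
    print(nash_point(u1, 3*u2 + 5, np.array([d[0], 3*d[1] + 5])))  # u1 = 5.5 again

The experimental finding that this axiom fails is what suggests the subjects were making interpersonal utility comparisons, which positive linear rescaling deliberately destroys.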

3.
Summary: The objective Bayesian program has as its fundamental tenet (in addition to the three Bayesian postulates) the requirement that, from a given knowledge base, a particular probability function is uniquely appropriate. This amounts to fixing initial probabilities, based on relatively little information, because Bayes' theorem (conditionalization) then determines the posterior probabilities when the belief state is altered by enlarging the knowledge base. Moreover, in order to reconstruct orthodox statistical procedures within a Bayesian framework, only privileged ignorance probability functions will work. To serve all these ends, objective Bayesianism seeks additional principles for specifying ignorance and partial-information probabilities. H. Jeffreys' method of invariance (or Jaynes' modification thereof) is used to solve the former problem, and E. Jaynes' rule of maximizing entropy (subject to invariance for continuous distributions) has recently been thought to solve the latter. I have argued that neither policy is acceptable to a Bayesian, since each is inconsistent with conditionalization. Invariance fails to give a consistent representation to the state of ignorance professed; the difficulties here parallel familiar weaknesses in the old Laplacean principle of insufficient reason. Maximizing entropy is unsatisfactory because the partial information it works with fails to capture the effect of uncertainty about related nuisance factors. The result is a probability function that represents a state richer in empirical content than the belief state targeted for representation. Alternatively, by conditionalizing on information about a nuisance parameter one may move from a distribution of lower to higher entropy, despite the obvious increase in information available. Each of these two complaints appears to me to be a symptom of the program's inability to formulate rules for picking privileged probability distributions that serve to represent ignorance or near ignorance. Certainly the methods advocated by Jeffreys, Jaynes and Rosenkrantz are mathematically convenient idealizations wherein specified distributions are elevated to the roles of ignorance and partial-information distributions. But the cost that goes with the idealization is a violation of conditionalization, and if that is the ante that we must put up to back objective Bayesianism, then I propose we look for a different candidate to earn our support.
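For concreteness, Jaynes' maximum-entropy rule in its simplest discrete form selects the distribution of Gibbs form p_i ∝ exp(-λx_i) matching a stated mean. A minimal sketch (the die-with-known-mean setting is a standard textbook illustration, not an example from this paper):

    import numpy as np
    from scipy.optimize import brentq

    # Max-entropy distribution on {1,...,6} subject only to E[X] = m.
    x = np.arange(1, 7)

    def mean_given_lam(lam):
        w = np.exp(-lam * x)       # Gibbs form: p_i proportional to exp(-lam*x_i)
        return (w / w.sum()) @ x

    m = 4.5                        # the only partial information supplied
    lam = brentq(lambda l: mean_given_lam(l) - m, -10.0, 10.0)
    p = np.exp(-lam * x); p /= p.sum()
    print(p.round(4), float(p @ x))  # the flattest distribution with that mean

The complaint above is aimed at exactly this construction: the selected distribution carries more empirical content than the constraint warrants, and conditionalizing it on nuisance information can even raise its entropy.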

4.
Two institutions that are often implicit or overlooked in noncooperative games are the assumption of Nash behavior to solve a game and the ability to correlate strategies. We consider two behavioral paradoxes: one in which maximin behavior rules out all Nash equilibria (Chicken), and another in which minimax supergame behavior leads to an inefficient outcome in comparison with the unique stage-game equilibrium (asymmetric Deadlock). Nash outcomes are achieved in both paradoxes by allowing for correlated strategies, even when individual behavior remains minimax or maximin. However, the interpretation of correlation as a public institution differs in each case.
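A minimal sketch of the Chicken case (hypothetical payoffs of our choosing): maximin play picks the cautious action that no pure Nash equilibrium contains, while a public randomizing device over the two pure equilibria yields a correlated strategy whose realized outcome is always a Nash outcome of the stage game:

    import numpy as np

    # Hypothetical Chicken payoffs for the row player; the game is symmetric.
    # Rows/columns: 0 = Swerve, 1 = Dare.
    A = np.array([[3, 1],
                  [4, 0]])

    # Maximin: for each pure strategy take the worst case, then pick the best.
    worst = A.min(axis=1)                          # Swerve guarantees 1, Dare 0
    print("maximin action:", int(worst.argmax()))  # Swerve: not part of any pure NE

    # A public coin selecting (Swerve, Dare) or (Dare, Swerve) with prob 1/2
    # each puts all probability on the two pure Nash equilibria.
    corr = {(0, 1): 0.5, (1, 0): 0.5}
    ev_row = sum(p * A[i, j] for (i, j), p in corr.items())
    print("row's expected payoff under correlation:", ev_row)  # 2.5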

5.
A new investigation is launched into the problem of decision-making in the face of complete ignorance, and linked to the problem of social choice. In the first section the author introduces a set of properties which might characterize a criterion for decision-making under complete ignorance. Two of these properties are novel: independence of non-discriminating states, and weak pessimism. The second section provides a new characterization of the so-called principle of insufficient reason. In the third section, lexicographic maximin and maximax criteria are characterized. Finally, the author's results are linked to the problem of social choice.
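The two lexicographic criteria are easy to state operationally: order each act's outcomes from worst to best (leximin) or best to worst (leximax) and compare the resulting vectors lexicographically. A sketch with invented acts and outcomes:

    # Lexicographic maximin: compare worst outcomes first, then second-worst, etc.
    def leximin_key(outcomes):
        return tuple(sorted(outcomes))                # ascending: worst first

    # Lexicographic maximax: compare best outcomes first, then second-best, etc.
    def leximax_key(outcomes):
        return tuple(sorted(outcomes, reverse=True))  # descending: best first

    # Hypothetical acts; columns are states about which we are completely ignorant.
    acts = {"a": [1, 5, 9], "b": [1, 6, 8], "c": [0, 9, 9]}

    print(max(acts, key=lambda k: leximin_key(acts[k])))  # 'b': ties 'a' on 1, wins on 6
    print(max(acts, key=lambda k: leximax_key(acts[k])))  # 'c': best best-case outcome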

6.
Can we rationally learn to coordinate?
In this paper we examine the question of whether individual rationality considerations are sufficient to guarantee that individuals will learn to coordinate. This question is central to any discussion of whether social phenomena (read: conventions) can be explained in terms of a purely individualistic approach. We argue that the positive answers to this general question that have been obtained in some recent work require assumptions which themselves incorporate some convention. This conclusion may be seen as supporting the viewpoint of institutional individualism in contrast to psychological individualism.
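The paper's point can be glimpsed even in the simplest learning dynamic. Below is fictitious play in a 2x2 pure coordination game (an illustration of the kind of learning rule at issue, not the authors' construction): the players do converge, but only because np.argmax breaks the payoff tie identically for both, which is itself a small shared convention built into the dynamic:

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [0.0, 1.0]])    # both players get 1 iff their actions match

    # counts[i] = empirical counts of player i's past actions (the other's data).
    counts = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    for t in range(50):
        belief_about_0 = counts[0] / counts[0].sum()
        belief_about_1 = counts[1] / counts[1].sum()
        a0 = int(np.argmax(A @ belief_about_1))    # player 0 best-responds
        a1 = int(np.argmax(A.T @ belief_about_0))  # player 1 best-responds
        counts[0][a0] += 1
        counts[1][a1] += 1
    print(a0, a1)  # coordinated on action 0, via argmax's shared tie-breaking rule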

7.
A Comparison of Some Distance-Based Choice Rules in Ranking Environments
We discuss the relationships between positional rules (such as plurality and approval voting, as well as the Borda count) and Dodgson's, Kemeny's and Litvak's methods of reaching consensus. The discrepancies between the methods are seen as results of different intuitive conceptions of consensus goal states and of different ways of measuring distances therefrom. Saari's geometric methodology is resorted to in the analysis of the consensus-reaching methods.
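To make the contrast concrete, here is a sketch (with an invented nine-voter profile) of two of the rules discussed: the Borda count, a positional rule, and Kemeny's rule, which picks the ranking minimizing total Kendall tau distance to the ballots:

    from itertools import permutations

    # Hypothetical profile: each ballot is a strict ranking, best first.
    ballots = [("a", "b", "c")] * 4 + [("b", "c", "a")] * 3 + [("c", "b", "a")] * 2

    def borda(ballots):
        m = len(ballots[0])
        scores = {}
        for b in ballots:
            for pos, cand in enumerate(b):
                scores[cand] = scores.get(cand, 0) + (m - 1 - pos)
        return scores

    def kendall_tau(r1, r2):
        # Number of candidate pairs on which the two rankings disagree.
        p1 = {c: i for i, c in enumerate(r1)}
        p2 = {c: i for i, c in enumerate(r2)}
        return sum((p1[x] - p1[y]) * (p2[x] - p2[y]) < 0
                   for x in p1 for y in p1 if x < y)

    def kemeny(ballots):
        # Consensus ranking minimizing total Kendall tau distance to the ballots.
        return min(permutations(ballots[0]),
                   key=lambda r: sum(kendall_tau(r, b) for b in ballots))

    print(borda(ballots))   # {'a': 8, 'b': 12, 'c': 7} -- Borda winner b
    print(kemeny(ballots))  # ('b', 'c', 'a'); the majority tournament is transitive here

Brute-force enumeration is only viable for a handful of candidates; computing a Kemeny ranking is NP-hard in general, which is part of what separates distance-based consensus methods computationally.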

8.
Rawls' Difference Principle asserts that a basic economic structure is just if it makes the worst-off people as well off as is feasible. How well off someone is is to be measured by an index of primary social goods. It is this index that gives content to the principle, and Rawls gives no adequate directions for constructing it. In this essay a version of the Difference Principle is proposed that fits much of what Rawls says but makes use of no index. Instead of invoking an index of primary social goods, the principle formulated here invokes a partial ordering of prospects for opportunities.

9.
A soundness proof for an axiomatization of common belief in minimal neighbourhood semantics is provided, thereby leaving aside all assumptions of monotonicity in agents' reasoning. Minimality properties of common belief are thus emphasized, in contrast to the more usual fixed-point properties. The proof relies on the existence of transfinite fixed points of sequences of neighbourhood systems even when they are not closed under supersets. An obvious shortcoming of the note is the lack of a completeness proof.

10.
The present paper deals with the Galbraithian theory of the managerial firm. Galbraith has stressed corporate size and has questioned the effectiveness of the market-demand, technology and capital-market constraints which, in conventional theory, restrict the size of the firm. Galbraith represents the objectives of the corporation in terms of a conventional lexicographic objective function, with some minimal level of profits (in terms of cash flow) ranked as the dominant objective. In his treatment of the corporate constraints, too, Galbraith does not move much beyond the current state of knowledge. The assumption of consumer sovereignty has long been relegated to the textbook literature, and the firm's control over the quality of its product (its price elasticity) has been generally recognized. Similarly, it has been known that the capital market is not perfect, so that it is unlikely to constrain the expansion of the firm with some given investor-determined earnings constraint. In his attempt to show the technostructure's ability to plan the rate and direction of technological development, however, Galbraith did not meet with wide support from empirical research and analysis. It is extremely difficult to test the firm's control over its production technology, and while the few industry studies available can hardly be used to reject the Galbraithian position, there is not sufficient evidence to support a generalization of Galbraith's conjecture. While individually these constraints have been analyzed and discussed in the literature, Galbraith has combined these results and has been able to show that in the industrial state the qualitative laws of economic common sense do not hold. The importance of this conclusion is not only academic. Efforts to control corporate allocations through rate controls, antitrust litigation, and other means emanate from the conventional theory of firms and markets and do not fit the industrial state. In this state corporate size does matter and cannot be treated as random: the larger the corporation, the more perfect the control it assumes over its environment and the higher the efficiency with which it plans its over-all operations. We acknowledge the helpful comments of a referee of this journal.

11.
Given a finite state space and common priors, common knowledge of the identity of an agent with the minimal (or maximal) expectation of a random variable implies consensus, i.e., common knowledge of common expectations. This extremist statistic induces consensus when repeatedly announced and yet, with n agents, requires at most log₂ n bits to broadcast.
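A back-of-envelope check of the communication bound: identifying which of n agents holds the extreme expectation takes only an agent index, i.e. about log₂ n bits, rather than the transmission of a real-valued expectation:

    import math

    # Bits needed to broadcast an agent index among n agents.
    for n in (2, 10, 1000, 10**6):
        print(n, math.ceil(math.log2(n)))   # 1, 4, 10, 20 bits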

12.
In this paper, a problem for utility theory is reviewed: the theory would have an agent who is compelled to play Russian roulette with one revolver or another pay as much to have a six-shooter containing four bullets relieved of one bullet before playing with it as he would be willing to pay to have a six-shooter containing two bullets emptied. A less demanding Bayesian theory is described that would have an agent maximize the expected values of the possible total consequences of his actions. Utility theory is then located within that theory as valid for agents who satisfy certain formal conditions, that is, for agents who are, in terms of that more general theory, indifferent to certain dimensions of risk. Raiffa- and Savage-style arguments for its more general validity are then resisted. Addenda are concerned with implications for game theory and with the relations between utilities and values.
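For readers who want the arithmetic behind the compulsion, here is a standard reconstruction (ours, not necessarily the paper's), assuming the utility of death is a constant normalized to zero and writing u(w) for the utility of surviving with wealth w. The maximum payment p for reducing four bullets to three and the maximum payment q for emptying a two-bullet gun satisfy

\[
\tfrac{3}{6}\,u(w-p) = \tfrac{2}{6}\,u(w)
\;\Longrightarrow\; u(w-p) = \tfrac{2}{3}\,u(w),
\qquad
u(w-q) = \tfrac{4}{6}\,u(w) = \tfrac{2}{3}\,u(w),
\]

so u(w-p) = u(w-q) and hence p = q: expected utility theory forces equal payments in the two cases, however different they feel intuitively.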

13.
This paper considers two fundamental aspects of the analysis of dynamic choice under risk: the issue of the dynamic consistency of the strategies of a non-EU maximizer, and the possibility that an individual whose preferences are nonlinear in probabilities may choose a strategy which is, in some appropriate sense, dominated by other strategies. A proposed way of dealing with these problems, due to Karni and Safra and called behavioral consistency, is described. The implications of this notion of behavioral consistency are explored, and it is shown that while the Karni and Safra approach obtains dynamically consistent behavior under nonlinear preferences, it may imply the choice of dominated strategies even in very simple decision trees.

14.
Scientists often disagree about whether a new theory is better than the current theory. From this, some (e.g., Thomas Kuhn) have inferred that the values of science are changing and subjective, and hence that science is an irrational enterprise. As an alternative, this paper develops a rational model of the scientific enterprise according to which the scope and elegance of theories are important elements in the scientist's utility function. The varied speed of acceptance of new theories by scientists can be explained in terms of the optimal allocation of time among different scientific activities. The model thus accounts for the rationality of science in a way that is broadly consistent with the empirical evidence on the history and practice of science.

15.
This paper discusses several concepts that can be used to provide a foundation for a unified theory of rational economic behavior. First, decision-making is defined to be a process that takes place with reference to both subjective and objective time, that distinguishes between plans and actions and between information and states, and that explicitly incorporates the collection and processing of information. This conception of decision-making is then related to several important aspects of behavioral economics: the dependence of values on experience, the use of behavioral rules, the occurrence of multiple goals, and environmental feedback. Our conclusions are: (1) the non-transitivity of observed or revealed preferences is a characteristic of learning and hence is to be expected of rational decision-makers; (2) the learning of values through experience suggests the sensibleness of short time horizons and the making of choices according to flexible utility; (3) certain rules of thumb used to allow for risk are closely related to principles of Safety-First and can also be based directly on the hypothesis that the feeling of risk (the probability of disaster) is identified with extreme departures from recently executed decisions; (4) the maximization of a hierarchy of goals, or of a lexicographical utility function, is closely related to the search for feasibility and the practice of satisficing; (5) when the dim perception of environmental feedback and the effect of learning on values are acknowledged, the intertemporal optimality of planned decision trajectories is seen to be a characteristic of subjective, not objective, time, which explains why decision making is so often best characterized by rolling plans. In short, we find that economic man - like any other - is an existential being whose plans are based on hopes and fears and whose every act involves a leap of faith. This paper is based on a talk presented at the conference New Beginnings in Economics, Akron, Ohio, March 15, 1969. Work on this paper was supported by a grant from the National Science Foundation.

16.
Aumann's (1987) theorem shows that correlated equilibrium is an expression of Bayesian rationality. We extend this result to games with incomplete information. First, we rely on Harsanyi's (1967) model and represent the underlying multiperson decision problem as a fixed game with imperfect information. We survey four definitions of correlated equilibrium which have appeared in the literature. We show that these definitions are not equivalent to each other. We prove that one of them fits Aumann's framework: the agents' normal form correlated equilibrium is an expression of Bayesian rationality in games with incomplete information. We also follow a universal Bayesian approach based on Mertens and Zamir's (1985) construction of the universal beliefs space. Hierarchies of beliefs over independent variables (states of nature) and dependent variables (actions) are then constructed simultaneously. We establish that the universal set of Bayesian solutions satisfies another extension of Aumann's theorem. We get the following corollary: once the types of the players are not fixed by the model, the various definitions of correlated equilibrium previously considered are equivalent.

17.
The paper presents results from two new experiments designed to test between the rational choice hypothesis and the random error hypothesis for intransitive choice. Error probabilities and population shares for transitive and intransitive preference types are estimated from data collected in the first experiment. An unrestricted model (which treats intransitive patterns as true patterns) performs no better than a model that is restricted to transitive patterns. Analysis of the conditional distributions of choice patterns, using data from the second experiment, confirms more directly the main results of the first experiment: that observed intransitive choice patterns are due to random error.
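The random error hypothesis can be made concrete with a toy likelihood model (our illustration; the data and coding below are invented, not the paper's): a transitive true preference plus an independent flip probability e on each binary choice generates occasional intransitive observed patterns, and e can be estimated by maximum likelihood:

    import numpy as np

    # True transitive type: a>b, b>c, a>c, coded as (1, 1, 1).
    true_pattern = (1, 1, 1)
    # Hypothetical counts of observed patterns; (1, 1, 0) and (0, 0, 1) are cycles.
    counts = {(1, 1, 1): 70, (1, 1, 0): 9, (1, 0, 1): 8,
              (0, 1, 1): 9, (0, 0, 1): 2, (1, 0, 0): 2}

    def loglik(e):
        ll = 0.0
        for pat, n in counts.items():
            k = sum(p != t for p, t in zip(pat, true_pattern))  # flips needed
            ll += n * (k * np.log(e) + (3 - k) * np.log(1 - e))
        return ll

    grid = np.linspace(0.001, 0.499, 499)
    print("MLE of error rate:", grid[np.argmax([loglik(e) for e in grid])])

The restricted/unrestricted comparison in the paper asks, in effect, whether letting intransitive patterns be true types improves on such an error model at all; the authors find that it does not.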

18.
This paper falls within the field of distributive justice and (as the title indicates) addresses itself specifically to the meshing problem. Briefly stated, the meshing problem is the difficulty encountered when one tries to aggregate the two parameters of beneficence and equity in a way that results in determining which of two or more alternative utility distributions is most just. A solution to this problem, in the form of a formal welfare measure, is presented in the paper. This formula incorporates the notions of equity and beneficence (which are defined earlier by the author) and weighs them against each other to compute a numerical value representing the degree of justice a given distribution possesses. This value can in turn be used comparatively to select which utility scheme, of those being considered, is best. Three fundamental adequacy requirements, which any acceptable welfare-measuring method must satisfy, are presented and subsequently demonstrated to be formally deducible as theorems of the author's system. A practical application of the method is then considered, as well as a comparison of it with Nicholas Rescher's method (found in his book Distributive Justice). The conclusion reached is that Rescher's system is unacceptable, since it computes counter-intuitive results. Objections to the author's welfare measure are considered and answered. Finally, a suggestion is made for expanding the system to cover cases it was not originally designed to handle (i.e., situations where two alternative utility distributions vary with regard to the number of individuals they contain). The conclusion reached at the close of the paper is that an acceptable solution to the meshing problem has been established. I would like to gratefully acknowledge the assistance of Michael Tooley, whose positive suggestions and critical comments were invaluable in the writing of this paper.

19.
The idea that an individual's behavior is a function of its utility or value represents a very common and fundamental assumption in the study of human conduct. In this paper an attempt is made to determine the nature of this function more precisely. Once a probabilistic conception of human action is adopted, it appears that an exponential function perfectly satisfies the empirical as well as the formal conditions which it seems necessary to impose upon it initially. Empirical research into behavioral change lends additional support to the function thus constructed.
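Read as a probabilistic choice rule, the exponential form is the familiar softmax/logit model in which the probability of an act is proportional to the exponential of its value (an illustrative reading of ours; the paper derives the exponential form from its own conditions):

    import numpy as np

    def choice_probs(values, temperature=1.0):
        # P(a) proportional to exp(V(a) / temperature).
        v = np.asarray(values, dtype=float) / temperature
        v -= v.max()                 # stabilize the exponentials numerically
        w = np.exp(v)
        return w / w.sum()

    print(choice_probs([1.0, 2.0, 3.0]))        # higher value, higher probability
    print(choice_probs([1.0, 2.0, 3.0], 0.1))   # low temperature: near-deterministic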

20.
We study the uncertain dichotomous choice model. In this model a set of decision makers is required to select one of two alternatives, say, to support or reject a certain proposal. Applications of this model are relevant to many areas, such as political science, economics, business and management. The purpose of this paper is to estimate and compare the probabilities that different decision rules may be optimal. We consider the expert rule, the majority rule and a few in-between rules. The information on the decisional skills is incomplete, and these skills arise from an exponential distribution. It turns out that the probability that the expert rule is optimal far exceeds the probability that the majority rule is optimal, especially as the number of decision makers becomes large.
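A Monte Carlo sketch of the comparison (the modeling choices here are our assumptions, not necessarily the paper's exact setup): in the Nitzan-Paroush framework the optimal rule is the weighted majority rule with weights w_i = ln(p_i/(1-p_i)); the expert rule is optimal iff the largest weight outweighs all the others combined, and simple majority is optimal iff every majority coalition outweighs its complement, i.e. the (n+1)/2 smallest weights outweigh the (n-1)/2 largest. Drawing the weights i.i.d. from an exponential distribution:

    import numpy as np

    rng = np.random.default_rng(0)

    def estimate(n, trials=100_000):
        # Each row: sorted skill weights for one simulated panel of n members.
        w = np.sort(rng.exponential(size=(trials, n)), axis=1)
        expert = w[:, -1] > w[:, :-1].sum(axis=1)        # top weight beats the rest
        k = (n + 1) // 2
        majority = w[:, :k].sum(axis=1) > w[:, k:].sum(axis=1)
        return expert.mean(), majority.mean()

    for n in (3, 5, 7, 9, 11):
        print(n, estimate(n))   # the expert-rule probability dominates as n grows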
