Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Critiques two social choice principles employed in Webster's analysis of using information to resolve Sen's paradox of the Impossibility of a Paretian Liberal.

2.
This paper discusses several concepts that can be used to provide a foundation for a unified theory of rational economic behavior. First, decision-making is defined to be a process that takes place with reference to both subjective and objective time, that distinguishes between plans and actions and between information and states, and that explicitly incorporates the collection and processing of information. This conception of decision-making is then related to several important aspects of behavioral economics: the dependence of values on experience, the use of behavioral rules, the occurrence of multiple goals, and environmental feedback. Our conclusions are: (1) the non-transitivity of observed or revealed preferences is a characteristic of learning and hence is to be expected of rational decision-makers; (2) the learning of values through experience suggests the sensibleness of short time horizons and the making of choices according to flexible utility; (3) certain rules of thumb used to allow for risk are closely related to principles of Safety-First and can also be based directly on the hypothesis that the feeling of risk (the probability of disaster) is identified with extreme departures from recently executed decisions; (4) the maximization of a hierarchy of goals, or of a lexicographic utility function, is closely related to the search for feasibility and the practice of satisficing; (5) when the dim perception of environmental feedback and the effect of learning on values are acknowledged, the intertemporal optimality of planned decision trajectories is seen to be a characteristic of subjective, not objective, time. This explains why decision-making is so often best characterized by rolling plans.
In short, we find that economic man, like any other, is an existential being whose plans are based on hopes and fears and whose every act involves a leap of faith. This paper is based on a talk presented at the conference New Beginnings in Economics, Akron, Ohio, March 15, 1969. Work on this paper was supported by a grant from the National Science Foundation.

3.
A new investigation is launched into the problem of decision-making in the face of complete ignorance, and linked to the problem of social choice. In the first section the author introduces a set of properties which might characterize a criterion for decision-making under complete ignorance. Two of these properties are novel: independence of non-discriminating states, and weak pessimism. The second section provides a new characterization of the so-called principle of insufficient reason. In the third part, lexicographic maximin and maximax criteria are characterized. Finally, the author's results are linked to the problem of social choice.
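The maximin and maximax criteria the abstract refers to can be sketched in a few lines. This is our own illustration, not code from the paper; the paper characterizes lexicographic versions, which refine the simple rules below.

```python
def maximin(actions):
    """Wald's maximin rule: choose the action whose worst-case payoff
    is largest. actions maps an action name to its payoffs across the
    (completely unknown) states of the world."""
    return max(actions, key=lambda a: min(actions[a]))

def maximax(actions):
    """The optimistic counterpart: choose the action whose best-case
    payoff is largest."""
    return max(actions, key=lambda a: max(actions[a]))

acts = {"safe": [2, 2, 2], "risky": [0, 1, 5]}
print(maximin(acts))  # -> safe  (worst case 2 beats worst case 0)
print(maximax(acts))  # -> risky (best case 5 beats best case 2)
```

Under complete ignorance no probabilities are available, which is why both rules look only at the range of payoffs rather than at expectations.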

4.
A complete classification theorem for voting processes on a smooth choice space W of dimension w is presented. Any voting process is classified by two integers v* and w (each determined by the process), in terms of the existence or otherwise of the optima set IO and the cycle set IC. In dimension below v* the cycle set is always empty, and in dimension above w the optima set is nearly always empty while the cycle set is open, dense, and path-connected. In the latter case agenda manipulation can produce any outcome. For admissible (compact, convex) choice spaces, the two sets are related by the general equilibrium result that the union of IO and IC is non-empty. This in turn implies existence of optima in low dimensions. The equilibrium theorem is used to examine voting games with an infinite electorate, and the nature of structure-induced equilibria induced by jurisdictional restrictions. This material is based on work supported by a Nuffield Foundation grant.

5.
Let (μ1, σ1) and (μ2, σ2) be mean-standard deviation pairs of two probability distributions on the real line. Mean-variance analyses presume that the preferred distribution depends solely on these pairs, with primary preference given to larger mean and smaller variance. This presumption, in conjunction with the assumption that one distribution is better than a second distribution if the mass of the first is completely to the right of the mass of the second, implies that (μ1, σ1) is preferred to (μ2, σ2) if and only if either μ1 > μ2 or (μ1 = μ2 and σ1 < σ2), provided that the set of distributions is sufficiently rich. The latter provision fails if the outcomes of all distributions lie in a finite interval, but then it is still possible to arrive at more liberal dominance conclusions between (μ1, σ1) and (μ2, σ2). This research was supported by the Office of Naval Research.
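The lexicographic rule stated in the abstract (larger mean first, then smaller standard deviation as a tie-breaker) can be sketched directly; the function name is ours:

```python
def msd_preferred(a, b):
    """Lexicographic preference over mean-standard deviation pairs:
    (mu1, sigma1) is preferred to (mu2, sigma2) iff mu1 > mu2, or
    mu1 == mu2 and sigma1 < sigma2."""
    mu1, sd1 = a
    mu2, sd2 = b
    return mu1 > mu2 or (mu1 == mu2 and sd1 < sd2)

print(msd_preferred((10.0, 2.0), (9.0, 1.0)))   # higher mean wins: True
print(msd_preferred((10.0, 2.0), (10.0, 1.0)))  # equal means, larger sd: False
```

Note that the rule compares standard deviations only when the means are exactly equal, which is what makes the ordering lexicographic rather than a trade-off.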

6.
The paper presents results from two new experiments designed to test between the rational choice hypothesis and the random error hypothesis for intransitive choice. Error probabilities and population shares for transitive and intransitive preference types are estimated from data collected in the first experiment. An unrestricted model (which treats intransitive patterns as true patterns) performs no better than a model that is restricted to transitive patterns. Analysis of the conditional distributions of choice patterns, using data from the second experiment, confirms more directly the main results of the first experiment: that observed intransitive choice patterns are due to random error.

7.
Far-sighted equilibria in 2 × 2, non-cooperative, repeated games
Consider a two-person simultaneous-move game in strategic form. Suppose this game is played over and over at discrete points in time. Suppose, furthermore, that communication is not possible, but that we nevertheless observe some regularity in the sequence of outcomes. The aim of this paper is to explain why such regularity might persist for many (i.e., infinitely many) periods. Each player, when contemplating a deviation, considers a sequential-move game, roughly of the following form: if I change my strategy this period, then next period my opponent will switch to his strategy b, and afterwards I can switch to my strategy a; but then I am worse off, since at that outcome my opponent has no incentive to change anymore, whatever I do. Theoretically, however, there is no end to such reaction chains. If deviating gives a player less utility in the long run than before the deviation, we say that the original regular sequence of outcomes is far-sighted stable for that player. It is a far-sighted equilibrium if it is far-sighted stable for both players.

8.
A fixed-agenda social choice correspondence on outcome set X maps each profile of individual preferences into a nonempty subset of X. If the correspondence satisfies an analogue of Arrow's independence of irrelevant alternatives condition, then either its range contains exactly two alternatives, or else there is at most one individual whose preferences have any bearing on it. This is the case even if the correspondence is not defined for any proper subset of X.

9.
We report a surprising property of (μ, σ)-preferences: the assumption of nonincreasing relative risk aversion implies that the optimal portfolio is riskless. We discuss in detail a resolution of that paradox using wealth-dependent utility functions. Using revealed preference theory, we show that general (i.e., not necessarily (μ, σ)-) wealth-dependent utility functions can be characterized by Wald's axiom.

10.
Aumann's (1987) theorem shows that correlated equilibrium is an expression of Bayesian rationality. We extend this result to games with incomplete information. First, we rely on Harsanyi's (1967) model and represent the underlying multiperson decision problem as a fixed game with imperfect information. We survey four definitions of correlated equilibrium which have appeared in the literature. We show that these definitions are not equivalent to each other. We prove that one of them fits Aumann's framework; the agents' normal-form correlated equilibrium is an expression of Bayesian rationality in games with incomplete information. We also follow a universal Bayesian approach based on Mertens and Zamir's (1985) construction of the universal beliefs space. Hierarchies of beliefs over independent variables (states of nature) and dependent variables (actions) are then constructed simultaneously. We establish that the universal set of Bayesian solutions satisfies another extension of Aumann's theorem. We obtain the following corollary: once the types of the players are not fixed by the model, the various definitions of correlated equilibrium previously considered are equivalent.

11.
Singular causal explanations cite explicitly, or may be paraphrased to cite explicitly, a particular factor as the cause of another particular factor. During recent years there has emerged a consensus account of the nature of an important feature of such explanations, the distinction between a factor regarded correctly in a given context of inquiry as the cause of a given result and those other causally relevant factors, sometimes called mere conditions, which are not regarded correctly in that context of inquiry as the cause of that result. In this paper that consensus account is characterized and developed. The developed version is then used to illuminate some recent discussions of singular causal explanations. Work on this paper was supported by a University of Maryland Faculty Research Award. Earlier versions were read at the University of Minnesota and at the 1971 Western Division meetings of the American Philosophical Association. I have profited from criticisms raised on these occasions. I am especially grateful for the comments of James Lesher, Peter Machamer, John Vollrath, and the students in my Macalester College seminar.

12.
Given a finite state space and common priors, common knowledge of the identity of an agent with the minimal (or maximal) expectation of a random variable implies consensus, i.e., common knowledge of common expectations. This extremist statistic induces consensus when repeatedly announced, and yet, with n agents, requires at most log2 n bits to broadcast.
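The bit-count claim above (at most log2 n bits to broadcast the extremist agent's identity among n agents) amounts to a simple counting observation: the identity is one of n possibilities. A minimal sketch, with a function name of our own:

```python
import math

def bits_to_announce_extremist(n):
    """Bits needed to broadcast which of n agents holds the minimal
    (or maximal) conditional expectation: the identity is one of n
    possibilities, so ceil(log2 n) bits suffice."""
    return math.ceil(math.log2(n)) if n > 1 else 0

print(bits_to_announce_extremist(8))    # -> 3
print(bits_to_announce_extremist(100))  # -> 7
```

The point of the paper's result is that this tiny message, repeatedly announced, carries enough information to drive the agents to consensus.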

13.
This discussion examines Robert Nozick's claim in Anarchy, State, and Utopia (New York, 1974) that his entitlement theory of justice avoids the paradox of collective choice shown by A. K. Sen in Collective Choice and Social Welfare (San Francisco, 1970). Nozick argues that his system is a stable principle of distributive justice. The author shows that Nozick's principle of justice in transfer qualifies as a social decision function in Sen's sense because it is a collective choice rule and meets necessary and sufficient conditions for the existence of a choice function. Next the author demonstrates that Nozick's principle of justice in transfer requires Sen's conditions of unrestricted domain, the Pareto principle, and liberalism, which are the conditions of the Sen paradox Nozick claims to avoid. Thus, Nozick's principle of justice in transfer is shown not to be a stable principle of distributive justice.

14.
Dore, Mohammed. Theory and Decision, 1997, 43(3): 219-239
This paper critically reviews Ken Binmore's non-utilitarian, game-theoretic solution to the Arrow problem. Binmore's solution belongs to the same family as Rawls' maximin criterion and requires the use of Nash bargaining theory, empathetic preferences, and results in evolutionary game theory. Harsanyi earlier presented a solution that relies on utilitarianism, which requires some exogenous valuation criterion and is therefore incompatible with liberalism. Binmore's rigorous demonstration of the maximin principle presents, for the first time, a real alternative to a utilitarian solution.

15.
Summary. The objective Bayesian program has as its fundamental tenet (in addition to the three Bayesian postulates) the requirement that, from a given knowledge base, a particular probability function is uniquely appropriate. This amounts to fixing initial probabilities, based on relatively little information, because Bayes' theorem (conditionalization) then determines the posterior probabilities when the belief state is altered by enlarging the knowledge base. Moreover, in order to reconstruct orthodox statistical procedures within a Bayesian framework, only privileged ignorance probability functions will work. To serve all these ends objective Bayesianism seeks additional principles for specifying ignorance and partial-information probabilities. H. Jeffreys' method of invariance (or Jaynes' modification thereof) is used to solve the former problem, and E. Jaynes' rule of maximizing entropy (subject to invariance for continuous distributions) has recently been thought to solve the latter. I have argued that neither policy is acceptable to a Bayesian, since each is inconsistent with conditionalization. Invariance fails to give a consistent representation to the state of ignorance professed. The difficulties here parallel familiar weaknesses in the old Laplacean principle of insufficient reason. Maximizing entropy is unsatisfactory because the partial information it works with fails to capture the effect of uncertainty about related nuisance factors. The result is a probability function that represents a state richer in empirical content than the belief state targeted for representation.
Alternatively, by conditionalizing on information about a nuisance parameter one may move from a distribution of lower to higher entropy, despite the obvious increase in information available. Each of these two complaints appears to me to be a symptom of the program's inability to formulate rules for picking privileged probability distributions that serve to represent ignorance or near ignorance. Certainly the methods advocated by Jeffreys, Jaynes and Rosenkrantz are mathematically convenient idealizations wherein specified distributions are elevated to the roles of ignorance and partial-information distributions. But the cost that goes with the idealization is a violation of conditionalization, and if that is the ante that we must put up to back objective Bayesianism then I propose we look for a different candidate to earn our support.

16.
A rule for the acceptance of scientific hypotheses called the principle of cost-benefit dominance is shown to be more effective and efficient than the well-known principle of the maximization of expected (epistemic) utility. Harvey's defense of his theory of the circulation of blood in animals is examined as a historical paradigm case of a successful defense of a scientific hypothesis and as an implicit application of the cost-benefit dominance rule advocated here. Finally, various concepts of dominance are considered by means of which the effectiveness of our rule may be increased. The number of friends who have kindly given me suggestions and encouragement is almost embarrassingly large, but I would like to express my gratitude to Myles Brand, Cliff Hooker, David Hull, Scott Kleiner, Hugh Lehman, Werner Leinfellner, Andrew McLaughlin and Tom W. Settle.

17.
Choices between gambles show systematic violations of stochastic dominance. For example, most people choose ($6, .05; $91, .03; $99, .92) over ($6, .02; $8, .03; $99, .95), violating dominance. Choices also violate two cumulative independence conditions: (1) if S = (z, r; x, p; y, q) ≻ R = (z, r; x′, p; y′, q), then S″ = (x′, r; y, p + q) ≻ R″ = (x′, r + p; y′, q); (2) if S′ = (x, p; y, q; z′, r) ≺ R′ = (x′, p; y′, q; z′, r), then S‴ = (x, p + q; y′, r) ≺ R‴ = (x′, p; y′, q + r), where 0 < z < x′ < x < y < y′ < z′. Violations contradict any utility theory satisfying transitivity, outcome monotonicity, coalescing, and comonotonic independence. Because rank- and sign-dependent utility theories, including cumulative prospect theory (CPT), satisfy these properties, they cannot explain these results. However, the configural weight model of Birnbaum and McIntosh (1996) predicted the observed violations of stochastic dominance, cumulative independence, and branch independence. This model assumes the utility of a gamble is a weighted average of the outcomes' utilities, where each configural weight is a function of the rank order of the outcome's value among distinct values and that outcome's probability. The configural-weight TAX model, with the same number of parameters as CPT, fit the data of most individuals better than the model of CPT.
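The dominance violation in the example pair can be checked mechanically. The sketch below is our own illustration (not the paper's code); it tests first-order stochastic dominance for gambles given as outcome/probability lists, with probabilities as integer percentages to keep the arithmetic exact:

```python
def prob_at_least(gamble, t):
    """P(outcome >= t) for a gamble given as [(outcome, prob_percent), ...]."""
    return sum(p for x, p in gamble if x >= t)

def first_order_dominates(g, h):
    """g first-order stochastically dominates h when P_g(X >= t) is at
    least P_h(X >= t) at every outcome level t, with strict inequality
    somewhere."""
    levels = {x for x, _ in g} | {x for x, _ in h}
    weakly = all(prob_at_least(g, t) >= prob_at_least(h, t) for t in levels)
    strictly = any(prob_at_least(g, t) > prob_at_least(h, t) for t in levels)
    return weakly and strictly

popular = [(6, 5), (91, 3), (99, 92)]   # the gamble most subjects choose
dominant = [(6, 2), (8, 3), (99, 95)]   # the gamble that dominates it
print(first_order_dominates(dominant, popular))  # -> True
print(first_order_dominates(popular, dominant))  # -> False
```

So the second gamble in the abstract's example dominates the first, and choosing the first is the violation the experiments document.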

18.
The present paper deals with the Galbraithian theory of the managerial firm. Galbraith has stressed corporate size and has questioned the effectiveness of the market-demand, technology, and capital-market constraints which in conventional theory restrict the size of the firm. Galbraith represents the objectives of the corporation in terms of a conventional lexicographic objective function, with some minimal level of profits (in terms of cash flow) ranked as the dominant objective. Also in his treatment of the corporate constraints, Galbraith does not move much beyond the current state of knowledge. The assumption of consumer sovereignty has long been relegated to the textbook literature, and the firm's control over the quality of its product (its price elasticity) has been generally recognized. Similarly, it has been known that the capital market is not perfect, so that it is unlikely to constrain the expansion of the firm with some given investor-determined earning constraint. In his attempt to show the technostructure's ability to plan the rate and the direction of technological development, however, Galbraith did not meet with wide support from empirical research and analysis. It is extremely difficult to test the firm's control over its production technology, and while the few industry studies available can hardly be used to reject the Galbraithian position, there is not sufficient evidence to support a generalization of Galbraith's conjecture. While individually these constraints have been analyzed and discussed in the literature, Galbraith has combined these results and has been able to show that in the industrial state the qualitative laws of economic common sense do not hold. The importance of this conclusion is not only academic. Efforts to control corporate allocations through rate controls, antitrust litigation, and in other ways emanate from the conventional theory of firms and markets and do not fit the industrial state.
In this state corporate size does matter and cannot be treated as random: the larger the corporation, the more perfect the control it assumes over its environment and the higher the efficiency with which it plans its over-all operations. We acknowledge the helpful comments of a referee of this journal.

19.
Rawls' Difference Principle asserts that a basic economic structure is just if it makes the worst off people as well off as is feasible. How well off someone is is to be measured by an index of primary social goods. It is this index that gives content to the principle, and Rawls gives no adequate directions for constructing it. In this essay a version of the difference principle is proposed that fits much of what Rawls says, but that makes use of no index. Instead of invoking an index of primary social goods, the principle formulated here invokes a partial ordering of prospects for opportunities.

20.
A Comparison of Some Distance-Based Choice Rules in Ranking Environments
We discuss the relationships between positional rules (such as plurality and approval voting as well as the Borda count) and Dodgson's, Kemeny's, and Litvak's methods of reaching consensus. The discrepancies between methods are seen as results of different intuitive conceptions of consensus goal states and ways of measuring distances therefrom. Saari's geometric methodology is resorted to in the analysis of the consensus-reaching methods.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | ICP license 京ICP备09084417号