Similar Articles
20 similar articles found.
1.
Let (μ1, σ1) and (μ2, σ2) be mean-standard deviation pairs of two probability distributions on the real line. Mean-variance analyses presume that the preferred distribution depends solely on these pairs, with primary preference given to larger mean and smaller variance. This presumption, in conjunction with the assumption that one distribution is better than a second distribution if the mass of the first is completely to the right of the mass of the second, implies that (μ1, σ1) is preferred to (μ2, σ2) if and only if either μ1 > μ2 or (μ1 = μ2 and σ1 < σ2), provided that the set of distributions is sufficiently rich. The latter provision fails if the outcomes of all distributions lie in a finite interval, but then it is still possible to arrive at more liberal dominance conclusions between (μ1, σ1) and (μ2, σ2). This research was supported by the Office of Naval Research.
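As a reading aid only, here is a minimal sketch (not from the paper) of the lexicographic mean-standard-deviation rule stated above; the function name and the example pairs are illustrative assumptions.

    # Sketch: (mu1, s1) is preferred to (mu2, s2) iff mu1 > mu2,
    # or mu1 == mu2 and s1 < s2 (the rule described in the abstract above).
    def prefers(pair1, pair2):
        mu1, s1 = pair1
        mu2, s2 = pair2
        return mu1 > mu2 or (mu1 == mu2 and s1 < s2)

    print(prefers((5.0, 2.0), (4.0, 1.0)))   # True: the larger mean decides first
    print(prefers((5.0, 1.0), (5.0, 2.0)))   # True: equal means, smaller SD wins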

2.
Choices between gambles show systematic violations of stochastic dominance. For example, most people choose ($6, .05; $91, .03; $99, .92) over ($6, .02; $8, .03; $99, .95), violating dominance. Choices also violate two cumulative independence conditions: (1) If S = (z, r; x, p; y, q) ≻ R = (z, r; x', p; y', q) then S'' = (x', r; y, p + q) ≻ R'' = (x', r + p; y', q). (2) If S' = (x, p; y, q; z', r) ≺ R' = (x', p; y', q; z', r) then S''' = (x, p + q; y', r) ≺ R''' = (x', p; y', q + r), where 0 < z < x' < x < y < y' < z'. Violations contradict any utility theory satisfying transitivity, outcome monotonicity, coalescing, and comonotonic independence. Because rank- and sign-dependent utility theories, including cumulative prospect theory (CPT), satisfy these properties, they cannot explain these results. However, the configural weight model of Birnbaum and McIntosh (1996) predicted the observed violations of stochastic dominance, cumulative independence, and branch independence. This model assumes the utility of a gamble is a weighted average of the outcomes' utilities, where each configural weight is a function of the rank order of the outcome's value among distinct values and that outcome's probability. The configural-weight TAX model, with the same number of parameters as CPT, fit the data of most individuals better than the model of CPT.
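For orientation only, here is a small sketch (not from the paper) that checks first-order stochastic dominance between the two gambles quoted above; representing gambles as (outcome, probability) lists and the function names are my assumptions.

    # Sketch: check whether one gamble first-order stochastically dominates another.
    def prob_at_least(gamble, threshold):
        # gamble: list of (outcome, probability) pairs
        return sum(p for x, p in gamble if x >= threshold)

    def dominates(g_better, g_worse, tol=1e-9):
        # P(X >= t) must be at least as large for g_better at every outcome level t,
        # and strictly larger at some level (tolerance guards floating-point noise).
        levels = sorted({x for x, _ in g_better + g_worse})
        diffs = [prob_at_least(g_better, t) - prob_at_least(g_worse, t) for t in levels]
        return all(d >= -tol for d in diffs) and any(d > tol for d in diffs)

    g_dominant = [(6, .02), (8, .03), (99, .95)]   # the dominant gamble above
    g_chosen   = [(6, .05), (91, .03), (99, .92)]  # the gamble most people chose
    print(dominates(g_dominant, g_chosen))          # True: the popular choice is dominated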

3.
This paper discusses several concepts that can be used to provide a foundation for a unified theory of rational economic behavior. First, decision-making is defined to be a process that takes place with reference to both subjective and objective time, that distinguishes between plans and actions and between information and states, and that explicitly incorporates the collection and processing of information. This conception of decision-making is then related to several important aspects of behavioral economics: the dependence of values on experience, the use of behavioral rules, the occurrence of multiple goals, and environmental feedback. Our conclusions are: (1) the non-transitivity of observed or revealed preferences is a characteristic of learning and hence is to be expected of rational decision-makers; (2) the learning of values through experience suggests the sensibleness of short time horizons and the making of choices according to flexible utility; (3) certain rules of thumb used to allow for risk are closely related to principles of Safety-First and can also be based directly on the hypothesis that the feeling of risk (the probability of disaster) is identified with extreme departures from recently executed decisions; (4) the maximization of a hierarchy of goals, or of a lexicographical utility function, is closely related to the search for feasibility and the practice of satisficing; (5) when the dim perception of environmental feedback and the effect of learning on values are acknowledged, the intertemporal optimality of planned decision trajectories is seen to be a characteristic of subjective, not objective, time. This explains why decision-making is so often best characterized by rolling plans. In short, we find that economic man - like any other - is an existential being whose plans are based on hopes and fears and whose every act involves a leap of faith. This paper is based on a talk presented at the conference New Beginnings in Economics, Akron, Ohio, March 15, 1969. Work on this paper was supported by a grant from the National Science Foundation.

4.
In this paper, a problem for utility theory is reviewed: the theory would have an agent who was compelled to play Russian Roulette with one revolver or another pay as much to have a six-shooter with four bullets relieved of one bullet before playing with it as he would be willing to pay to have a six-shooter with two bullets emptied. A less demanding Bayesian theory is described that would have an agent maximize expected values of the possible total consequences of his actions. Utility theory is located within that theory as valid for agents who satisfy certain formal conditions, that is, for agents who are, in terms of that more general theory, indifferent to certain dimensions of risk. Raiffa- and Savage-style arguments for its more general validity are then resisted. Addenda are concerned with implications for game theory, and relations between utilities and values.
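To make the puzzle concrete, here is one standard reconstruction of the expected-utility argument (mine, not quoted from the paper), under the usual auxiliary assumptions that death has a fixed utility D unaffected by money spent and that surviving with wealth w has utility L(w); x is the maximum payment to go from four bullets to three and y the maximum payment to go from two bullets to none, each defined by indifference:

    % Indifference conditions for a six-chamber revolver (a sketch, not the paper's notation)
    \begin{align*}
    \tfrac{4}{6}D + \tfrac{2}{6}L(w) &= \tfrac{3}{6}D + \tfrac{3}{6}L(w-x)
      &&\Longrightarrow & L(w-x) &= \tfrac{1}{3}\bigl(D + 2L(w)\bigr),\\
    \tfrac{2}{6}D + \tfrac{4}{6}L(w) &= L(w-y)
      &&\Longrightarrow & L(w-y) &= \tfrac{1}{3}\bigl(D + 2L(w)\bigr).
    \end{align*}
    % Hence L(w-x) = L(w-y); if L is strictly increasing, x = y, i.e. the agent
    % must pay exactly as much for the 4-to-3 reduction as for the 2-to-0 reduction.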

5.
We report an experiment on two treatments of an ultimatum minigame. In one treatment, responders' reactions are hidden from proposers. We observe high rejection rates, reflecting responders' intrinsic resistance to unfairness. In the second treatment, proposers are informed, allowing for dynamic effects over eight rounds of play. The higher rejection rates in this treatment can be attributed to responders' provision of a public good: punishment creates a group reputation for being tough and effectively educates proposers. Since rejection rates with informed proposers drop to the level of the treatment with non-informed proposers, the hypothesis that responders enjoy overt punishment is not supported.

6.
We introduce two types of protection premia. The unconstrained protection premium, u, is the individual's willingness to pay for certain protection efficiency given flexibility to adjust optimally the investment in protection. The constrained protection premium, c, measures willingness to pay for certain protection efficiency given no flexibility to adjust the investment in protection. u depends on tastes and wealth as well as protection technology whereas c depends only on technology. We show that c cannot exceed u and develop necessary conditions for c = u. Optimal protection for an individual with decision flexibility may be larger or smaller than that desired under no flexibility. Journal Paper No. J-15504 of the Iowa Agriculture and Home Economics Experiment Station, Ames, Iowa. Project No. 3048.

7.
A fixed-agenda social choice correspondence on outcome set X maps each profile of individual preferences into a nonempty subset of X. If the correspondence satisfies an analogue of Arrow's independence of irrelevant alternatives condition, then either its range contains exactly two alternatives, or else there is at most one individual whose preferences have any bearing on it. This is the case even if the correspondence is not defined for any proper subset of X.

8.
This paper considers two fundamental aspects of the analysis of dynamic choices under risk: the issue of the dynamic consistency of the strategies of a non-EU maximizer, and the issue that an individual whose preferences are nonlinear in probabilities may choose a strategy which is, in some appropriate sense, dominated by other strategies. A proposed way of dealing with these problems, due to Karni and Safra and called behavioral consistency, is described. The implications of this notion of behavioral consistency are explored, and it is shown that while the Karni and Safra approach obtains dynamically consistent behavior under nonlinear preferences, it may imply the choice of dominated strategies even in very simple decision trees.

9.
A new investigation is launched into the problem of decision-making in the face of complete ignorance, and linked to the problem of social choice. In the first section the author introduces a set of properties which might characterize a criterion for decision-making under complete ignorance. Two of these properties are novel: independence of non-discriminating states, and weak pessimism. The second section provides a new characterization of the so-called principle of insufficient reason. In the third part, lexicographic maximin and maximax criteria are characterized. Finally, the author's results are linked to the problem of social choice.
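Purely as an illustration (not from the paper), the sketch below contrasts three standard criteria for choice under complete ignorance mentioned above: maximin, maximax, and the principle of insufficient reason (treating all states as equally likely); the payoff table is an invented example.

    # Each action maps to its payoffs across unknown states (invented numbers).
    actions = {
        "a1": [3, 3, 3],
        "a2": [0, 0, 10],
        "a3": [2, 4, 5],
    }

    maximin = max(actions, key=lambda a: min(actions[a]))    # best worst case
    maximax = max(actions, key=lambda a: max(actions[a]))    # best best case
    insufficient_reason = max(actions, key=lambda a: sum(actions[a]) / len(actions[a]))

    print(maximin, maximax, insufficient_reason)   # a1 a2 a3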

10.
Nash's solution of a two-person cooperative game prescribes a coordinated mixed-strategy solution involving Pareto-optimal outcomes of the game. Testing this normative solution experimentally presents problems, inasmuch as rather detailed explanations must be given to the subjects of the meaning of threat strategy, strategy mixture, expected payoff, etc. To the extent that it is desired to test the solution using naive subjects, the problem arises of imparting to them a minimal level of understanding about the issues involved in the game without actually suggesting the solution. Experiments were performed to test the properties of the solution of a cooperative two-person game as these are embodied in three of Nash's four axioms: symmetry, Pareto-optimality, and invariance with respect to positive linear transformations. Of these, the last was definitely disconfirmed, suggesting that interpersonal comparison of utilities plays an important part in the negotiations. Some evidence was also found for a conjecture generated by previous experiments, namely that an externally imposed threat (a penalty for non-cooperation) tends to bring the players closer together than do the threats generated by the subjects themselves in the process of negotiation.
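As a side illustration (not the authors' experimental design), the sketch below checks the invariance axiom on an invented finite bargaining problem: the agreement maximizing the product of utility gains over the disagreement point is unchanged when one player's utilities undergo a positive linear transformation. This is the fixed-threat Nash product; the variable-threat negotiations studied in the paper are richer.

    # Candidate agreements as (u1, u2) pairs, with a disagreement point d
    # (numbers invented for illustration).
    candidates = [(4, 6), (5, 5), (6, 3), (7, 1)]
    d = (1, 1)

    def nash_solution(points, d):
        # maximize the product of gains over the disagreement point
        feasible = [p for p in points if p[0] >= d[0] and p[1] >= d[1]]
        return max(feasible, key=lambda p: (p[0] - d[0]) * (p[1] - d[1]))

    best = nash_solution(candidates, d)

    # Rescale player 2's utilities by a positive linear transformation u2 -> 3*u2 + 10.
    rescale = lambda p: (p[0], 3 * p[1] + 10)
    best_rescaled = nash_solution([rescale(p) for p in candidates], rescale(d))

    print(best, best_rescaled)   # (5, 5) and its rescaled image (5, 25): same agreement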

11.
In general, the technical apparatus of decision theory is well developed. It has loads of theorems, and they can be proved from axioms. Many of the theorems are interesting, and useful both from a philosophical and a practical perspective. But decision theory does not have a well agreed upon interpretation. Its technical terms, in particular 'utility' and 'preference', do not have a single clear and uncontroversial meaning. How to interpret these terms depends, of course, on the purposes in pursuit of which one wants to put decision theory to use. One might want to use it as a model of economic decision-making, in order to predict the behavior of corporations or of the stock market. In that case, it might be useful to interpret the technical term 'utility' as meaning money profit. Decision theory would then be an empirical theory. I want to look into the question of what 'utility' could mean, if we want decision theory to function as a theory of practical rationality. I want to know whether it makes good sense to think of practical rationality as fully or even partly accounted for by decision theory. I shall lay my cards on the table: I hope it does make good sense to think of it that way. For, I think, if Humeans are right about practical rationality, then decision theory must play a very large part in their account. And I think Humeanism has very strong attractions.

12.
A Comparison of Some Distance-Based Choice Rules in Ranking Environments
We discuss the relationships between positional rules (such as plurality and approval voting as well as the Borda count) and Dodgson's, Kemeny's, and Litvak's methods of reaching consensus. The discrepancies between methods are seen as results of different intuitive conceptions of consensus goal states and ways of measuring distances therefrom. Saari's geometric methodology is resorted to in the analysis of the consensus-reaching methods.
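Purely as an illustration (not from the paper), here is a short sketch contrasting plurality, the Borda count, and a Kemeny-style distance-minimizing ranking on an invented three-candidate profile; the ballot representation and all names are assumptions.

    from itertools import permutations

    # An invented profile: each ballot ranks candidates best-to-worst.
    ballots = [("a", "b", "c")] * 4 + [("b", "c", "a")] * 3 + [("c", "b", "a")] * 2

    # Plurality: count first places.
    plurality = max("abc", key=lambda x: sum(b[0] == x for b in ballots))

    # Borda: m-1 points for first place, m-2 for second, and so on.
    def borda_score(x):
        m = 3
        return sum(m - 1 - b.index(x) for b in ballots)
    borda_winner = max("abc", key=borda_score)

    # Kemeny-style rule: pick the ranking minimizing total pairwise disagreement
    # (Kendall tau distance) with the ballots.
    def disagreements(ranking, ballot):
        pairs = [(x, y) for i, x in enumerate(ranking) for y in ranking[i + 1:]]
        return sum(ballot.index(x) > ballot.index(y) for x, y in pairs)
    kemeny_ranking = min(permutations("abc"),
                         key=lambda r: sum(disagreements(r, b) for b in ballots))

    print(plurality, borda_winner, kemeny_ranking)   # a  b  ('b', 'c', 'a')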

13.
Given a finite state space and common priors, common knowledge of the identity of an agent with the minimal (or maximal) expectation of a random variable implies consensus, i.e., common knowledge of common expectations. This extremist statistic induces consensus when repeatedly announced, and yet, with n agents, requires at most log₂ n bits to broadcast.
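A quick arithmetic illustration of the broadcast bound (mine, not the paper's): naming one agent out of n under a fixed binary encoding takes at most ⌈log₂ n⌉ bits.

    import math

    # Bits needed to name one agent out of n, assuming a fixed binary encoding
    # of agent identities.
    def bits_to_broadcast(n):
        return math.ceil(math.log2(n))

    print(bits_to_broadcast(8), bits_to_broadcast(100))   # 3 7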

14.
Theories of economic behavior often use as-if languages: for example, analytical sentences or definitions are used as if they were synthetic and factual, and normative theoretical constructs are used as if they were empirical concepts. Such as-if languages impede the acquisition of knowledge and are apt to encourage the wrong assessment of actual research strategies. The author's criticism is first leveled at revealed-preference theory: in this theory, observed behavior is often understood in an empirical sense although it is a pure theoretical construct. Another example can be found in von Mises' representations of market behavior: here theoretical valuations are used to achieve a spurious streamlining of reality. Result: scientists should not ogle with reality if they have nothing to say about it.

15.
The author tries to formulate what a determinist believes to be true. The formulation is based on some concepts defined in a systems-theoretical manner, mainly on the concept of an experiment over the sets A^m (a set of m-tuples of input values) and B^n (a set of n-tuples of output values) in the time interval (t_1, ..., t_k) (symbolically E[t_1, ..., t_k, A^m, B^n]), on the concept of a behavior of the system S_{m,n} (= (A^m, B^n)) on the basis of the experiment E[t_1, ..., t_k, A^m, B^n] and, indeed, on the concept of deterministic behavior. The resulting formulation of the deterministic hypothesis shows that this hypothesis expresses a belief that we always could find some hidden parameters.
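One plausible reading, offered entirely as my own sketch rather than the author's formal definitions: behavior recorded in an experiment is deterministic if the same input history is never followed by different outputs.

    # Each trial is a sequence of (input_tuple, output_tuple) pairs observed at
    # times t_1, ..., t_k (the representation and names are assumptions).
    def is_deterministic(trials):
        seen = {}
        for trial in trials:
            inputs = []
            for step, (x, y) in enumerate(trial):
                inputs.append(x)
                key = (step, tuple(inputs))
                if key in seen and seen[key] != y:
                    return False          # same input history, different output
                seen[key] = y
        return True

    trials = [
        [((0,), (1,)), ((1,), (0,))],
        [((0,), (1,)), ((1,), (1,))],     # same input history, different output at t_2
    ]
    print(is_deterministic(trials))       # False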

16.
Far-sighted equilibria in 2 × 2, non-cooperative, repeated games
Consider a two-person simultaneous-move game in strategic form. Suppose this game is played over and over at discrete points in time. Suppose, furthermore, that communication is not possible, but that nevertheless we observe some regularity in the sequence of outcomes. The aim of this paper is to explain why such regularity might persist for many (i.e., infinitely many) periods. Each player, when contemplating a deviation, considers a sequential-move game, roughly of the following form: if I change my strategy this period, then in the next my opponent will switch to his strategy b, and afterwards I can switch to my strategy a, but then I am worse off, since at that outcome my opponent has no incentive to change anymore, whatever I do. Theoretically, however, there is no end to such reaction chains. If deviating gives some player less utility in the long run than he had before deviating, we say that the original regular sequence of outcomes is far-sighted stable for that player. It is a far-sighted equilibrium if it is far-sighted stable for both players.

17.
This paper falls within the field of Distributive Justice and (as the title indicates) addresses itself specifically to the meshing problem. Briefly stated, the meshing problem is the difficulty encountered when one tries to aggregate the two parameters of beneficence and equity in a way that results in determining which of two or more alternative utility distributions is most just. A solution to this problem, in the form of a formal welfare measure, is presented in the paper. This formula incorporates the notions of equity and beneficence (which are defined earlier by the author) and weighs them against each other to compute a numerical value which represents the degree of justice a given distribution possesses. This value can in turn be used comparatively to select which utility scheme, of those being considered, is best. Three fundamental adequacy requirements, which any acceptable welfare-measuring method must satisfy, are presented and subsequently demonstrated to be formally deducible as theorems of the author's system. A practical application of the method is then considered, as well as a comparison of it with Nicholas Rescher's method (found in his book, Distributive Justice). The conclusion reached is that Rescher's system is unacceptable, since it computes counter-intuitive results. Objections to the author's welfare measure are considered and answered. Finally, a suggestion is made for expanding the system to cover cases it was not originally designed to handle (i.e. situations where two alternative utility distributions vary with regard to the number of individuals they contain). The conclusion reached at the close of the paper is that an acceptable solution to the meshing problem has been established. I would like to gratefully acknowledge the assistance of Michael Tooley, whose positive suggestions and critical comments were invaluable in the writing of this paper.

18.
The Ellsberg Paradox documented the aversion to ambiguity in the probability of winning a prize. Using an original sample of 266 business owners and managers facing risks from climate change, this paper documents the presence of departures from rationality in both directions: both ambiguity-seeking and ambiguity-averse behavior are evident. People exhibit fear effects of ambiguity for small probabilities of suffering a loss and hope effects for large probabilities. Estimates of the crossover point from ambiguity aversion (fear) to ambiguity seeking (hope) place this value between 0.3 and 0.7 for the risk-per-decade lotteries considered, with empirical estimates indicating a crossover mean risk of about 0.5. Attitudes toward the degree of ambiguity also reverse at the crossover point.

19.
Tiebreak rules are necessary for revealing indifference in non-sequential decisions. I focus on a preference relation that satisfies Ordering and fails Independence in the following way: lotteries a and b are indifferent, but the compound lottery (0.5 f, 0.5 b) is strictly preferred to the compound lottery (0.5 f, 0.5 a). Using tiebreak rules, the following is shown here: in sequential decisions when backward induction is applied, a preference like the one just described must alter the preference relation between a and b at certain choice nodes, i.e., indifference between a and b is not stable. Using this result, I answer a question posed by Rabinowicz (1997) concerning admissibility in sequential decisions when indifferent options are substituted at choice nodes.

20.
Three rival views of the nature of society are sketched: individualism, holism, and systemism. The ontological and methodological components of these doctrines are formulated and analyzed. Individualism is found wanting for making no room for social relations or emergent properties; holism, for refusing to analyze both of them and for losing sight of the individual. A systems view is then sketched, and it is essentially this: a society is a system of interrelated individuals sharing an environment. This commonsensical idea is formalized as follows: a society σ is representable as an ordered triple ⟨Composition of σ, Environment of σ, Structure of σ⟩, where the structure of σ is the collection of relations (in particular connections) among components of σ. Included in the structure of any σ are the relations of work and of managing, which are regarded as typical of human society in contrast to animal societies. Other concepts formalized in the paper are those of subsystem (in particular social subsystem), resultant property, and emergent or gestalt property. The notion of subsystem is used to build the notion of an F-sector of a society, defined as the set of all social subsystems performing a certain function F (e.g. the set of all schools). In turn, an F-institution is defined as the family of all F-sectors. Being abstractions, institutions should not be attributed a life and a mind of their own. But, since an institution is analyzable in terms of concrete totalities (namely social subsystems), it does not comply with the individualist requirement either. It is also shown that the systems view is inherent in any mathematical model in social science, since any such schema is essentially a set of individuals endowed with a certain structure. And it is stressed that the systems view combines the desirable features of both individualism and holism.
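A minimal data-structure sketch (my illustration of the abstract's wording, not Bunge's own formalization) of the ordered triple and of an F-sector; all class and field names are assumptions.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Society:
        composition: frozenset    # the individuals
        environment: frozenset    # items outside the society that it interacts with
        structure: frozenset      # relations, here labelled pairs of members

    @dataclass(frozen=True)
    class Subsystem:
        members: frozenset
        function: str             # the function F this social subsystem performs

    def f_sector(subsystems, f):
        # the F-sector: all social subsystems performing function f
        return {s for s in subsystems if s.function == f}

    village = Society(
        composition=frozenset({"ann", "bo", "cy"}),
        environment=frozenset({"river"}),
        structure=frozenset({("ann", "works_for", "bo")}),
    )
    schools = f_sector(
        {Subsystem(frozenset({"ann", "bo"}), "education"),
         Subsystem(frozenset({"cy"}), "trade")},
        "education",
    )
    print(len(village.composition), len(schools))   # 3 members, 1 school in this toy society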
