Similar Articles
20 similar articles retrieved.
1.
Often the preferences of decision-makers are sufficiently inconsistent so as to preclude the existence of a utility function in the classical sense. Several alternatives for dealing with this situation are discussed. One alternative, that of modifying classical demands on utility functions, is emphasized and described in the context of the theory of measurement developed in recent years by behavioral scientists. The measurement theory approach is illustrated by discussing the concept of the dimension of a partial order. Even if we cannot assign numerical utility or worth values which reflect preferences in the classical utility function sense, from the measurement theory point of view we can still learn a lot about the preferences by finding several measures of worth so that a given alternative x is preferred to an alternative y if and only if x is ranked higher than y on each of the worth scales. If such measures can be found, it follows that the preferences define a partial order, and the smallest number of such scales needed is called the dimension of the partial order. If one-dimensional preferences (those amenable to classical utility assignments) cannot be found, then the next best thing is to search for partially ordered preferences with as small a dimension as possible. Several conditions under which a partial order is two-dimensional are described. The author acknowledges the helpful comments of Joel Spencer and Ralph Strauch. He also thanks Kirby Baker and Peter Fishburn for permission to quote freely from earlier joint work on two-dimensional partial orders.
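A minimal sketch (not from the paper) of the two-scale reading of a two-dimensional partial order: an alternative is preferred to another exactly when it ranks at least as high on both worth scales. All names and scores below are hypothetical.

```python
# Illustrative sketch, not the paper's construction: a two-dimensional partial
# order induced by two hypothetical worth scales.

worth = {            # alternative -> (score on scale 1, score on scale 2); made-up numbers
    "a": (3, 1),
    "b": (2, 2),
    "c": (1, 3),
    "d": (0, 0),
}

def preferred(x, y):
    """x is preferred to y iff x is at least as high on both scales and the scores differ."""
    (x1, x2), (y1, y2) = worth[x], worth[y]
    return x1 >= y1 and x2 >= y2 and (x1, x2) != (y1, y2)

# a, b and c are mutually incomparable, yet each is preferred to d:
print([(x, y) for x in worth for y in worth if preferred(x, y)])
# [('a', 'd'), ('b', 'd'), ('c', 'd')]
```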

2.
Stochastic dominance is a notion in expected-utility decision theory which has been developed to facilitate the analysis of risky or uncertain decision alternatives when the full form of the decision maker's von Neumann-Morgenstern utility function on the consequence space X is not completely specified. For example, if f and g are probability functions on X which correspond to two risky alternatives, then f first-degree stochastically dominates g if, for every consequence x in X, the chance of getting a consequence that is preferred to x is as great under f as under g. When this is true, the expected utility of f must be as great as the expected utility of g. Most work in stochastic dominance has been based on increasing utility functions on X with X an interval on the real line. The present paper, following [1], formulates appropriate notions of first-degree and second-degree stochastic dominance when X is an arbitrary finite set. The only structure imposed on X arises from the decision maker's preferences. It is shown how typical analyses with stochastic dominance can be enriched by applying the notion to convex combinations of probability functions. The potential applications of convex stochastic dominance include analyses of simple-majority voting on risky alternatives when voters have similar preference orders on the consequences.
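A small sketch of the first-degree condition on a finite consequence set: for every consequence, the chance of getting something preferred to it must be at least as great under f as under g. The set X and the probabilities below are hypothetical.

```python
# Illustrative first-degree stochastic dominance check on a finite set X whose
# elements are listed from least to most preferred (hypothetical probabilities).

X = ["x1", "x2", "x3", "x4"]                      # preference order, worst to best
f = {"x1": 0.1, "x2": 0.2, "x3": 0.3, "x4": 0.4}
g = {"x1": 0.2, "x2": 0.3, "x3": 0.3, "x4": 0.2}

def first_degree_dominates(f, g, order):
    """For every x, P(outcome preferred to x) under f >= the same probability under g."""
    for i in range(len(order)):
        above = order[i + 1:]                     # consequences preferred to order[i]
        if sum(f[x] for x in above) < sum(g[x] for x in above) - 1e-12:
            return False
    return True

print(first_degree_dominates(f, g, X))  # True: f shifts probability toward preferred outcomes
print(first_degree_dominates(g, f, X))  # False
```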

3.
We present an axiomatic model of preferences over menus that is motivated by three assumptions. First, the decision maker is uncertain ex ante (i.e., at the time of choosing a menu) about her ex post (i.e., at the time of choosing an option within her chosen menu) preferences over options, and she anticipates that this subjective uncertainty will not resolve before the ex post stage. Second, she is averse to ex post indecisiveness (i.e., to having to choose between options that she cannot rank with certainty). Third, when evaluating a menu she discards options that are dominated (i.e., inferior to another option whatever her ex post preferences may be) and restricts attention to the undominated ones. Under these assumptions, the decision maker has a preference for commitment in the sense of preferring menus with fewer undominated alternatives. We derive a representation in which the decision maker’s uncertainty about her ex post preferences is captured by means of a subjective state space, which in turn determines which options are undominated in a given menu, and in which the decision maker fears, whenever indecisive, to choose an option that will turn out to be the worst (undominated) one according to the realization of her ex post preferences.

4.
When preferences are such that there is no unique additive prior, the issue of which updating rule to use is of extreme importance. This paper presents an axiomatization of the rule which requires updating of all the priors by Bayes rule. The decision maker has conditional preferences over acts. It is assumed that preferences over acts conditional on event E happening do not depend on lotteries received on E^c, obey axioms which lead to maxmin expected utility representation with multiple priors, and have common induced preferences over lotteries. The paper shows that when all priors give positive probability to an event E, a certain coherence property between conditional and unconditional preferences is satisfied if and only if the set of subjective probability measures considered by the agent given E is obtained by updating all subjective prior probability measures using Bayes rule.
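A brief sketch, with made-up numbers, of the updating rule being axiomatized: condition every prior in the set on E by Bayes rule, then evaluate acts by their worst conditional expected utility (maxmin). The state labels, priors, and act utilities are all hypothetical.

```python
# Sketch of prior-by-prior ("full Bayesian") updating of a set of priors.
# States are s1..s3; event E = {s1, s2}; an act maps states to utilities.

priors = [
    {"s1": 0.5, "s2": 0.3, "s3": 0.2},
    {"s1": 0.2, "s2": 0.5, "s3": 0.3},
]
E = {"s1", "s2"}

def bayes_update(p, E):
    """Condition a single prior on E (requires p(E) > 0)."""
    pE = sum(p[s] for s in E)
    return {s: p[s] / pE for s in E}

def maxmin_eu(act, prior_set):
    """Maxmin expected utility: the worst expected utility over the set of priors."""
    return min(sum(p[s] * act[s] for s in p) for p in prior_set)

posteriors = [bayes_update(p, E) for p in priors]
act = {"s1": 10.0, "s2": 0.0, "s3": 4.0}
act_on_E = {s: u for s, u in act.items() if s in E}

print(posteriors)                       # each prior updated by Bayes rule on E
print(maxmin_eu(act_on_E, posteriors))  # conditional maxmin evaluation of the act
```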

5.
This paper falls within the field of Distributive Justice and (as the title indicates) addresses itself specifically to the meshing problem. Briefly stated, the meshing problem is the difficulty encountered when one tries to aggregate the two parameters of beneficence and equity in a way that results in determining which of two or more alternative utility distributions is most just. A solution to this problem, in the form of a formal welfare measure, is presented in the paper. This formula incorporates the notions of equity and beneficence (which are defined earlier by the author) and weighs them against each other to compute a numerical value which represents the degree of justice a given distribution possesses. This value can in turn be used comparatively to select which utility scheme, of those being considered, is best. Three fundamental adequacy requirements, which any acceptable welfare measuring method must satisfy, are presented and subsequently demonstrated to be formally deducible as theorems of the author's system. A practical application of the method is then considered as well as a comparison of it with Nicholas Rescher's method (found in his book, Distributive Justice). The conclusion reached is that Rescher's system is unacceptable, since it computes counter-intuitive results. Objections to the author's welfare measure are considered and answered. Finally, a suggestion for expanding the system to cover cases it was not originally designed to handle (i.e. situations where two alternative utility distributions vary with regard to the number of individuals they contain) is made. The conclusion reached at the close of the paper is that an acceptable solution to the meshing problem has been established. I would like to gratefully acknowledge the assistance of Michael Tooley whose positive suggestions and critical comments were invaluable in the writing of this paper.

6.
In binary choice between discrete outcome lotteries, an individual may prefer lottery L1 to lottery L2 when the probability that L1 delivers a better outcome than L2 is higher than the probability that L2 delivers a better outcome than L1. Such a preference can be rationalized by three standard axioms (solvability, convexity and symmetry) and one less standard axiom (a fanning-in). A preference for the most probable winner can be represented by a skew-symmetric bilinear utility function. Such a utility function has the structure of a regret theory when lottery outcomes are perceived as ordinal and the assumption of regret aversion is replaced with a preference for a win. The empirical evidence supporting the proposed system of axioms is discussed.
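A sketch of the "most probable winner" comparison for two independent discrete lotteries. The lotteries and payoffs below are hypothetical; they are chosen so that the most probable winner is not the lottery with the higher expected value.

```python
# Illustrative "most probable winner" criterion for two independent discrete
# lotteries (hypothetical outcomes and probabilities).

L1 = [(2, 1.0)]                     # a sure 2
L2 = [(100, 0.4), (0, 0.6)]         # 100 with probability 0.4, otherwise 0

def prob_beats(A, B):
    """Probability that lottery A delivers a strictly better outcome than B,
    assuming the two lotteries are independent."""
    return sum(pa * pb for a, pa in A for b, pb in B if a > b)

print(prob_beats(L1, L2))  # 0.6 -> L1 is the most probable winner...
print(prob_beats(L2, L1))  # 0.4 ...even though L2 has the much higher expected value
```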

7.
An extensive literature overlapping economics, statistical decision theory and finance contrasts expected utility [EU] with the more recent framework of mean–variance (MV). A basic proposition is that MV follows from EU under the assumption of quadratic utility. A less recognized proposition, first raised by Markowitz, is that MV is fully justified under EU, if and only if utility is quadratic. The existing proof of this proposition relies on an assumption from EU, described here as “Buridan’s axiom” after the French philosopher’s fable of the ass that starved out of indifference between two bales of hay. To satisfy this axiom, MV must represent not only “pure” strategies, but also their probability mixtures, as points in the (σ, μ) plane. Markowitz and others have argued that probability mixtures are represented sufficiently by (σ, μ) only under quadratic utility, and hence that MV, interpreted as a mathematical re-expression of EU, implies quadratic utility. We prove a stronger form of this theorem, not involving or contradicting Buridan’s axiom, nor any more fundamental axiom of utility theory.
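For reference, the familiar direction of the basic proposition as a worked equation: under quadratic utility, expected utility depends on the distribution of wealth only through its mean and variance.

```latex
% Quadratic utility u(w) = w - b w^2 with b > 0, on the range where u is increasing.
% For random wealth W with mean \mu and variance \sigma^2:
\mathrm{E}[u(W)] \;=\; \mathrm{E}[W] - b\,\mathrm{E}[W^2]
                 \;=\; \mu - b\left(\sigma^2 + \mu^2\right),
% so expected utility is a function of the pair (\sigma, \mu) alone.
```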

8.
Several experimental studies have reported that an otherwise robust regularity—the disparity between Willingness-To-Accept and Willingness-To-Pay—tends to be greatly reduced in repeated markets, posing a serious challenge to existing reference-dependent and reference-independent models alike. This article offers a new account of the evidence, based on the assumptions that individuals are affected by good and bad deals relative to the expected transaction price (price sensitivity), with bad deals having a larger impact on their utility (‘bad-deal’ aversion). These features of preferences explain the existing evidence better than alternative approaches, including the most recent developments of loss aversion models.
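A toy numerical stylization of these two assumptions (not the article's own specification): deals above or below an expected transaction price r enter utility with asymmetric weights, with bad deals weighted more heavily, and the resulting WTA-WTP gap shrinks as r converges to the good's underlying value, as it plausibly would with market experience. The functional form, parameter names, and numbers are all hypothetical.

```python
# Toy illustration (not the article's specification) of price sensitivity with
# 'bad-deal' aversion: deviations from the expected price r enter utility with
# asymmetric weights, beta (bad deals) > alpha (good deals).

def wtp(v, r, alpha, beta):
    """Highest buying price p with buyer utility v - p + deal_term >= 0."""
    p = (v + beta * r) / (1 + beta)            # candidate on the bad-deal branch (p > r)
    return p if p >= r else (v + alpha * r) / (1 + alpha)

def wta(v, r, alpha, beta):
    """Lowest selling price p with seller utility p - v + deal_term >= 0."""
    p = (v + alpha * r) / (1 + alpha)          # candidate on the good-deal branch (p > r)
    return p if p >= r else (v + beta * r) / (1 + beta)

v, alpha, beta = 10.0, 0.5, 2.0                # underlying value and deal weights (made up)
print(wta(v, 6.0, alpha, beta) - wtp(v, 6.0, alpha, beta))    # > 0: a WTA-WTP gap
print(wta(v, 10.0, alpha, beta) - wtp(v, 10.0, alpha, beta))  # 0.0: the gap closes once
                                               # the expected price equals the value
```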

9.
Gilboa, Itzhak and Samuelson, Larry. Theory and Decision, 2022, 92(3-4): 625-645

It has been argued that Pareto-improving trade is not as compelling under uncertainty as it is under certainty. The former may involve agents with different beliefs, who might wish to execute trades that are no more than betting. In response, the concept of no-betting Pareto dominance was introduced, requiring that putative Pareto improvements must be rationalizable by some common probabilities, even though the participants’ beliefs may differ. In this paper, we argue that this definition might be too narrow for use when agents are not Bayesian. Agents who face ambiguity might wish to trade in ways that can be justified by common ambiguity, though not necessarily by common probabilities. We accordingly extend the notion of no-betting Pareto dominance to characterize trades that are “no-betting Pareto” ranked according to the maxmin expected utility model.


10.
The widely observed preference for lotteries involving precise rather than vague or ambiguous probabilities is called ambiguity aversion. Ambiguity aversion cannot be predicted or explained by conventional expected utility models. For the subjectively weighted linear utility (SWLU) model, we define both probability and payoff premiums for ambiguity, and introduce a local ambiguity aversion function a(u) that is proportional to these ambiguity premiums for small uncertainties. We show that one individual's ambiguity premiums are globally larger than another's if and only if his a(u) function is everywhere larger. Ambiguity aversion has been observed to increase 1) when the mean probability of gain increases and 2) when the mean probability of loss decreases. We show that such behavior is equivalent to a(u) increasing in both the gain and loss domains. Increasing ambiguity aversion also explains the observed excess of sellers' over buyers' prices for insurance against an ambiguous probability of loss.

11.
The paper first summarizes the author's decision-theoretical model of moral behavior, in order to compare the moral implications of the act-utilitarian and of the rule-utilitarian versions of utilitarian theory. This model is then applied to three voting examples. It is argued that the moral behavior of act-utilitarian individuals will have the nature of a noncooperative game, played in the extensive mode, and involving action-by-action maximization of social utility by each player. In contrast, the moral behavior of rule-utilitarian individuals will have the nature of a cooperative game, played in the normal mode, and involving a firm commitment by each player to a specific moral strategy (viz. to the strategy selected by the rule-utilitarian choice criterion) — even if some individual actions prescribed by this strategy, when considered in isolation, should fail to maximize social utility. The most important advantage that rule utilitarianism as an ethical theory has over act utilitarianism lies in its ability to give full recognition to the moral and social importance of individual rights and personal obligations. It is easy to verify that action-by-action maximization of social utility, as required by act utilitarianism, would destroy these rights and obligations. In contrast, rule utilitarianism can fully recognize the moral validity of these rights and obligations precisely because of its commitment to an overall moral strategy, independent of action-by-action social-utility maximization. The paper ends with a discussion of the voter's paradox problem. The conventional theory of rational behavior cannot avoid the paradoxical conclusion that, in any large electorate, voting is always an irrational activity because one's own individual vote is extremely unlikely to make any difference to the outcome of any election. But it can be shown that, by using the principles of rule-utilitarian theory, this paradox can easily be resolved and that, in actual fact, voting, even in large electorates, may be a perfectly rational action. More generally, the example of rule utilitarianism shows what an important role the concept of a rational commitment can play in the analysis of rational behavior.

12.
This paper develops an arbitration scheme for resolving a distribution of wealth problem by applying Nash's assumptions to marginal rather than to total utilities. The problem considered is that of distributing a fixed amount of wealth between two claimants, and the paper compares properties of this problem's Nash solution with those of the marginal utility solution. The Nash solution is shown to emphasize application of symmetry considerations to the status quo ante, the marginal utility solution their application to the players' post-arbitration positions (as measured by functions that fully describe the players' utilities but that are independent of positive linear transformations). It is argued that while the Nash assumptions are appropriate for many arbitration problems in which a solution reflects the players' status quo ante positions, the marginal utility assumptions are useful when it is desired that a solution attempt to minimize post-arbitration differences between the players' positions. The latter seems to be preferable in contexts where an arbitrator weights the solution's effects on outcomes more heavily than he weights considerations of the status quo ante. Examples are situations such as those involving income redistribution, where attempts to reduce inequality may guide the redistribution decisions. The effects on each type of solution of changes in the status quo ante are also investigated and related to the risk preference properties of the players' utilities.
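For reference, a sketch of the standard Nash solution to the fixed-wealth division problem: it maximizes the product of the claimants' utility gains over their status quo ante. The utilities, the amount W, and the grid search are hypothetical choices for illustration; the paper's marginal-utility scheme, which applies the symmetry considerations to marginal utilities instead, is not reproduced here.

```python
# Standard Nash bargaining solution for dividing a fixed amount of wealth W
# between two claimants (shown for comparison with the paper's marginal-utility scheme).

import math

def nash_split(W, u1, u2, d1=0.0, d2=0.0, steps=10_000):
    """Grid-search the split x (claimant 1 gets x, claimant 2 gets W - x) that
    maximizes the Nash product (u1(x) - d1) * (u2(W - x) - d2)."""
    best_x, best_val = 0.0, -math.inf
    for i in range(steps + 1):
        x = W * i / steps
        val = (u1(x) - d1) * (u2(W - x) - d2)
        if val > best_val:
            best_x, best_val = x, val
    return best_x

# Hypothetical utilities: claimant 1 is risk averse (square root), claimant 2 is linear.
W = 100.0
x1 = nash_split(W, u1=math.sqrt, u2=lambda y: y)
print(round(x1, 1), round(W - x1, 1))   # roughly 33.3 / 66.7: claimant 1 gets less than half
```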

13.
Most decisions in life involve ambiguity, where probabilities cannot be meaningfully specified, as much as they involve probabilistic uncertainty. In such conditions, the aspiration to utility maximization may be self-deceptive. We propose “robust satisficing” as an alternative to utility maximizing as the normative standard for rational decision making in such circumstances. Instead of seeking to maximize the expected value, or utility, of a decision outcome, robust satisficing aims to maximize the robustness to uncertainty of a satisfactory outcome. That is, robust satisficing asks, “what is a ‘good enough’ outcome,” and then seeks the option that will produce such an outcome under the widest set of circumstances. We explore the conditions under which robust satisficing is a more appropriate norm for decision making than utility maximizing.
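A stylized sketch contrasting the two norms under one simple (assumed, info-gap-style) formalization: the expected-utility maximizer picks the option with the highest expected payoff at a nominal probability estimate, while the robust satisficer picks the option whose "good enough" payoff survives the widest deviation from that estimate. The aspiration level, nominal probability, and payoffs are all hypothetical.

```python
# Robust satisficing vs. expected-utility maximizing (hypothetical numbers).
# Two options pay off in a binary state; the nominal probability of the "good"
# state is p0, but the true p may lie anywhere in [p0 - h, p0 + h].

def expected(payoff_good, payoff_bad, p):
    return p * payoff_good + (1 - p) * payoff_bad

def robustness(payoff_good, payoff_bad, p0, aspiration, grid=1000):
    """Largest deviation h from p0 such that the expected payoff stays at or above
    the aspiration for every p in [p0 - h, p0 + h]. The expectation is linear in p,
    so it suffices to check the interval's endpoints."""
    best = 0.0
    for i in range(grid + 1):
        h = i / grid
        lo, hi = max(0.0, p0 - h), min(1.0, p0 + h)
        worst = min(expected(payoff_good, payoff_bad, lo),
                    expected(payoff_good, payoff_bad, hi))
        if worst >= aspiration:
            best = h
    return best

p0, aspiration = 0.7, 50.0
A = (200.0, 0.0)    # high upside, nothing in the bad state
B = (80.0, 60.0)    # modest but safe

print(expected(*A, p0), expected(*B, p0))   # 140 vs 74: A maximizes expected payoff
print(robustness(*A, p0, aspiration),       # ~0.45: A's "good enough" guarantee breaks sooner
      robustness(*B, p0, aspiration))       # 1.0: B satisfies the aspiration for any p
```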

14.
Empirical evidence from both utility and psychophysical experiments suggests that people respond quite differently—perhaps discontinuously—to stimulus pairs when one consequence or signal is set to “zero.” Such stimuli are called unitary. The author's earlier theories assumed otherwise. In particular, the key property of segregation relating gambles and joint receipts (or presentations) involves unitary stimuli. Also, the representation of unitary stimuli was assumed to be separable (i.e., multiplicative). The theories developed here do not invoke separability. Four general cases based on two distinctions are explored. The first distinction is between commutative joint receipts, which are relevant to utility, and the non-commutative ones, which are relevant to psychophysics. The second distinction concerns how stimuli of the form (x, C; y) and the operation of joint receipt are linked: by segregation, which mixes stimuli and unitary ones, and by distributivity, which does not involve any unitary stimuli. A class of representations more general than rank-dependent utility (RDU) is found in which monotonic functions of increments U(x)-U(y), where U is an order preserving representation of gambles, and joint receipt play a role. This form and its natural generalization to gambles with n > 2 consequences, which is also axiomatized, appear to encompass models of configural weights and decision affect. When joint receipts are not commutative, somewhat similar representations of stimuli arise, and joint receipts are shown to have a conjoint additive representation and in some cases a constant bias independent of signal intensity is predicted.

15.
A decision theoretic model of the American war in Vietnam
This paper presents a decision theoretic model of the American side of the Vietnam war. That is, we only consider the U.S. government's declared objectives and assign them utilities from that point of view. We assume that the involvement of the U.S. in this war was the outcome of a deliberate decision and, moreover, that this decision was taken on the basis of a careful weighing of goals and means. Hence decision theory is applicable in this case - and probably it was applied. We make hypotheses on the utilities of the goals and on those of the negative side effects. We also assess the probabilities of the four main possible courses of action available to achieve those goals: total war, advising, negotiating, and staying out. The total efficiencies of these turn out to be -0.30, -0.20, +0.51, and -0.11 respectively. This result explains why neutrality was not tried and why the advisory policy was eventually given up. But it does not explain why war, which has been not just inefficient but countereffective, was preferred over negotiating from the start or keeping neutral. Unless of course one assumes that the strategists either (a) paid no attention to any decision theoretic models or (b) used models that had fatal flaws. If the first alternative is discarded because of the prestige enjoyed by decision theory amongst American executives, we must conclude that the decision theoretic models employed by the U.S. high command had either of the following defects: (a) they ignored or underrated the negative side effects accompanying the implementation of every goal, or (b) they were not supplemented by mathematical models of the decisions likely to be made by the other side. In either case the decision to adopt the strategy with minimal expected utility was, at best, rational but extremely ill informed. It may have been one more victory of ideology over science.
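The comparison rests on ordinary expected-utility arithmetic: each course of action is scored by a probability-weighted sum of outcome utilities (net of side effects), and the recommendation is the action with the largest score. The sketch below uses placeholder probabilities and utilities, not the paper's inputs; with these made-up numbers the ranking happens to match the order of the reported efficiencies.

```python
# Expected-utility comparison of courses of action.
# The probabilities and utilities below are placeholders, not the paper's inputs.

actions = {
    # action: list of (probability of outcome, net utility of outcome incl. side effects)
    "total war":   [(0.3,  1.0), (0.7, -1.0)],
    "advising":    [(0.4,  0.5), (0.6, -0.6)],
    "negotiating": [(0.7,  0.8), (0.3, -0.2)],
    "staying out": [(1.0, -0.1)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for a, outcomes in sorted(actions.items(), key=lambda kv: -expected_utility(kv[1])):
    print(f"{a:12s} {expected_utility(outcomes):+.2f}")
# With these placeholder inputs, negotiating ranks first and total war last.
```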

16.
Ellsberg (The Quarterly Journal of Economics 75, 643–669 (1961); Risk, Ambiguity and Decision, Garland Publishing (2001)) argued that uncertainty is not reducible to risk. At the center of Ellsberg’s argument lies a thought experiment that has come to be known as the three-color example. It has been observed that a significant number of sophisticated decision makers violate the requirements of subjective expected utility theory when they are confronted with Ellsberg’s three-color example. More generally, such decision makers are in conflict with either the ordering assumption or the independence assumption of subjective expected utility theory. While a clear majority of the theoretical responses to these violations have advocated maintaining ordering while relaxing independence, a persistent minority has advocated abandoning the ordering assumption. The purpose of this paper is to consider a similar dilemma that exists within the context of multiattribute models, where it arises by considering indeterminacy in the weighting of attributes rather than indeterminacy in the determination of probabilities as in Ellsberg’s example.
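The three-color example itself, as standardly presented in the literature: an urn holds 30 red balls and 60 balls that are black or yellow in unknown proportion, and bets pay a fixed stake. The modal pattern (betting on red over black, yet on black-or-yellow over red-or-yellow) cannot be rationalized by any single prior, as a short search confirms. The stake and grid are arbitrary illustration choices.

```python
# Standard Ellsberg three-color urn: 30 red balls, 60 black-or-yellow in unknown
# proportion. Modal choices: red over black, but black-or-yellow over red-or-yellow.
# With a fixed stake, comparing bets reduces to comparing win probabilities, so
# risk-neutral payoffs suffice for the illustration.

def eu(bet, prior, stake=100.0):
    """Expected payoff of a bet (set of winning colors) under a prior over colors."""
    return stake * sum(prior[c] for c in bet)

def rationalized_by_some_prior(grid=600):
    """Search priors with P(red) = 1/3 (the known composition) for one that makes
    both modal choices optimal."""
    for k in range(grid + 1):
        p_black = (2 / 3) * k / grid
        prior = {"red": 1 / 3, "black": p_black, "yellow": 2 / 3 - p_black}
        prefers_red = eu({"red"}, prior) > eu({"black"}, prior)
        prefers_by = eu({"black", "yellow"}, prior) > eu({"red", "yellow"}, prior)
        if prefers_red and prefers_by:
            return prior
    return None

print(rationalized_by_some_prior())  # None: the modal pattern violates expected utility
```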

17.
Sometimes we believe that others receive harmful information. However, Marschak’s value of information framework always assigns non-negative value under expected utility: it starts from the decision maker’s beliefs – and one can never anticipate information’s harmfulness for oneself. The impact of decision makers’ capabilities to process information and of their expectations remains hidden behind the individual and subjective perspective Marschak’s framework assumes. By introducing a second decision maker as a point of reference, this paper introduces a way for evaluating others’ information from a cross-individual, imperfect expectations perspective for agents maximising expected utility. We define the cross-value of information that can become negative – then the information is “harmful” from a cross-individual perspective – and we define (mutual) cost of limited information processing capabilities and imperfect expectations as an opportunity cost from this same point of reference. The simple relationship between these two expected utility-based concepts and Marschak’s framework is shown, and we discuss evaluating short-term reactions of stock market prices to new information as an important domain of valuing others’ information.
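The benchmark the paper starts from, in a small sketch with hypothetical numbers: under expected utility evaluated with the decision maker's own prior, the Marschak-style value of information (the expected utility of deciding after observing a signal minus that of the best uninformed act) cannot be negative. The states, acts, and likelihoods below are made up.

```python
# Value of information under expected utility (hypothetical numbers).
# From the decision maker's own beliefs this difference is never negative.

states = {"good": 0.4, "bad": 0.6}                   # prior over states
acts = {"invest": {"good": 10.0, "bad": -8.0},       # utility of each act in each state
        "hold":   {"good": 1.0,  "bad": 1.0}}
likelihood = {"good": {"up": 0.9, "down": 0.1},      # P(signal | state)
              "bad":  {"up": 0.2, "down": 0.8}}

def best_eu(beliefs):
    """Expected utility of the best act under the given beliefs."""
    return max(sum(beliefs[s] * u[s] for s in beliefs) for u in acts.values())

eu_without = best_eu(states)

eu_with = 0.0
for sig in ["up", "down"]:
    p_sig = sum(states[s] * likelihood[s][sig] for s in states)
    posterior = {s: states[s] * likelihood[s][sig] / p_sig for s in states}
    eu_with += p_sig * best_eu(posterior)

print(eu_with - eu_without)   # the value of information; >= 0 under expected utility
```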

18.
In a previous article (see [3]) a system of axioms is proposed stating conditions which are necessary and sufficient to determine a cardinal utility function on any set, finite or infinite, of outcomes X. The present paper discusses and interprets the meaning of those axioms, and compares this new approach to cardinal utility with the utility differences approach proposed by Alt and Frisch, among others, and with the expected utility approach of von Neumann and Morgenstern. The notion of repetition of the same choice situation is presented and its interpretation discussed. It is then argued that this notion leads naturally to the system of axioms presented in On Cardinal Utility. It is also argued that this notion must be used if we want to have a clearer understanding of the meaning of the axioms proposed by Alt and Frisch. Finally, it is remarked that since uncertainty is not present in the new approach, it is free of the paradoxes that have plagued the expected utility hypothesis.

19.
Transitivity is a compelling requirement of rational choice, and a transitivity axiom is included in all classical theories of both individual and group choice. Nonetheless, choice contexts exist in which choice might well be systematically intransitive. Moreover, this can occur even when the context is transparent, and the decision maker is reflective. The present paper catalogues such choice contexts, dividing them roughly into the following classes:
1.  Contexts where the intransitivity results from the employment of a choice rule which is justified on ethical or moral grounds (typically, choice by or on behalf of a group).
2.  Contexts where the intransitivity results from the employment of a choice rule that is justified on economic or pragmatic grounds (typically, multi-attribute choice).
3.  Contexts where the choice is intrinsically comparative, namely, where the utility from any chosen alternative depends intrinsically on the rejected alternative(s) as well (typically, certain competitive contexts).
In the latter, independence from irrelevant alternatives may be violated, as well as transitivity. However, the classical money-pump argument against intransitive choice cycles is inapplicable to these contexts. We conclude that the requirement for transitivity, though powerful, is not always overriding.
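A minimal instance of the first class listed above (group choice by an ethically motivated rule): pairwise majority voting over three transitive individual rankings yields an intransitive group preference, the Condorcet cycle. The voter rankings are hypothetical.

```python
# Condorcet cycle: pairwise majority voting over three transitive individual
# rankings produces an intransitive group preference.

voters = [["a", "b", "c"],   # each list is one voter's ranking, best first
          ["b", "c", "a"],
          ["c", "a", "b"]]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

print(majority_prefers("a", "b"),   # True
      majority_prefers("b", "c"),   # True
      majority_prefers("c", "a"))   # True -> a beats b beats c beats a: a cycle
```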

20.
Among the violations of expected utility (E.U.) theory which have been observed by experimenters, the violation of its independence axiom is, by far, the most common. It seems that, in many cases, these inconsistencies can be ascribed to the desire for security - called the security factor by L. Lopes (1986) - which makes people attach special importance to the worst outcomes of risky decisions as well as to the sole outcomes of riskless decisions (certainty effect). J.-Y. Jaffray (1988) has proposed a model which generalizes E.U. theory by taking into account this factor and is then able to account for certain violations. However, especially in experiments on choice involving prospective losses, violations of the von Neumann-Morgenstern independence axiom cannot be explained by the security factor alone and have to be partially ascribed to the potential factor (L. Lopes, 1986) which reflects heightened attention to the best outcomes of decisions, especially when the best outcome is the status quo. In this paper, we construct an axiomatic model for subjects taking into account simultaneously or alternatively the security factor and the potential factor. For this, as in Jaffray's model, it has been necessary to weaken not only the standard independence axiom but also the continuity axiom and, at the same time, to reinforce the dominance axiom. In the resulting model, choices are partially determined by the mere comparison of the (security level, potential level) (i.e. the (worst outcome, best outcome)) pairs offered, and completed by the maximization of an affine function of the expected utility, the coefficients of which depend on both the security level and potential level. In this model, a decision maker who (i) has constant marginal utility for money, (ii) is sensitive to the security factor alone in the domain of gains, (iii) is sensitive to the potential factor alone in the domain of losses, behaves as a risk averter for gains and a risk seeker for losses.
