Similar Documents
20 similar documents found (search time: 359 ms)
1.
Separating marginal utility and probabilistic risk aversion (total citations: 10; self-citations: 0; citations by others: 10)
This paper is motivated by the search for one cardinal utility for decisions under risk, welfare evaluations, and other contexts. This cardinal utility should have meaning prior to risk, with risk depending on cardinal utility, not the other way around. The rank-dependent utility model can reconcile such a view of utility with the position that risk attitude consists of more than marginal utility, by providing a separate risk component: a probabilistic risk attitude towards probability mixtures of lotteries, modeled through a transformation of cumulative probabilities. While this separation of risk attitude into two independent components is the characteristic feature of rank-dependent utility, it had not yet been axiomatized; doing so is the purpose of this paper. To that end, the second part extends Yaari's axiomatization to nonlinear utility and provides separate axiomatizations for increasing/decreasing marginal utility and for optimistic/pessimistic probability transformations. This is generalized to interpersonal comparability. It is also shown that two elementary and often-discussed properties, quasi-convexity (aversion) of preferences with respect to probability mixtures and convexity (pessimism) of the probability transformation, are equivalent.
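The rank-dependent utility computation the abstract refers to can be sketched in a few lines. The utility function and probability-weighting function below are illustrative assumptions, not taken from the paper; the point is only to show the two separate components: marginal utility u and the transformation w of decumulative probabilities.

```python
def rdu(outcomes, probs, u, w):
    """Rank-dependent utility of a finite lottery.

    Outcomes are ranked from worst to best; each one receives the
    marginal transformed weight w(tail) - w(tail - p), where `tail`
    is the probability of getting that outcome or anything better.
    """
    ranked = sorted(zip(outcomes, probs), key=lambda op: u(op[0]))
    total = 0.0
    tail = 1.0  # probability mass at or above the current rank
    for x, p in ranked:
        total += (w(tail) - w(tail - p)) * u(x)
        tail -= p
    return total

u = lambda x: x ** 0.5   # assumed concave utility (decreasing marginal utility)
w = lambda q: q ** 2     # assumed convex weighting (pessimism)

# A 50-50 lottery over 0 and 100 versus 50 for sure:
risky = rdu([0.0, 100.0], [0.5, 0.5], u, w)
safe = rdu([50.0], [1.0], u, w)
```

With w the identity, `rdu` reduces to ordinary expected utility, which is one way to see that the probability transformation is a genuinely separate risk component.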

2.
Nash's solution of a two-person cooperative game prescribes a coordinated mixed strategy solution involving Pareto-optimal outcomes of the game. Testing this normative solution experimentally presents problems inasmuch as rather detailed explanations must be given to the subjects of the meaning of threat strategy, strategy mixture, expected payoff, etc. To the extent that it is desired to test the solution using naive subjects, the problem arises of imparting to them a minimal level of understanding of the issues involved in the game without actually suggesting the solution.

Experiments were performed to test the properties of the solution of a cooperative two-person game as these are embodied in three of Nash's four axioms: symmetry, Pareto-optimality, and invariance with respect to positive linear transformations. Of these, the last was definitely discorroborated, suggesting that interpersonal comparison of utilities plays an important part in the negotiations.

Some evidence was also found for a conjecture generated by previous experiments, namely that an externally imposed threat (a penalty for non-cooperation) tends to bring the players closer together than the threats generated by the subjects themselves in the process of negotiation.
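The invariance axiom the experiments test can be illustrated numerically: the maximizer of the Nash product is unchanged when one player's payoffs undergo a positive linear transformation. A minimal sketch over a finite feasible set (the payoff pairs and disagreement point are invented for illustration):

```python
def nash_solution(feasible, disagreement):
    """Feasible payoff pair maximizing the Nash product (u1-d1)*(u2-d2)."""
    d1, d2 = disagreement
    candidates = [(u1, u2) for u1, u2 in feasible if u1 >= d1 and u2 >= d2]
    return max(candidates, key=lambda uv: (uv[0] - d1) * (uv[1] - d2))

feasible = [(0.0, 8.0), (2.0, 7.0), (4.0, 5.0), (6.0, 2.0), (7.0, 0.0)]
point = nash_solution(feasible, (0.0, 0.0))

# Invariance: rescale player 1's utilities (u1 -> 3*u1 + 1) and the
# disagreement point accordingly; the same physical agreement is chosen.
rescaled = [(3 * u1 + 1, u2) for u1, u2 in feasible]
point2 = nash_solution(rescaled, (1.0, 0.0))
```

The experiment's finding that invariance was discorroborated means real subjects do not behave like this maximizer when payoff scales differ between players.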

3.
R. Kast, Theory and Decision, 1991, 31(2-3): 175-197
A rational statistical decision maker whose preferences satisfy Savage's axioms will minimize a Bayesian risk function: the expectation, with respect to a revealed (or subjective) probability distribution, of a loss (or negative utility) function over the consequences of the statistical decision problem. However, the nice expected utility form of the Bayesian risk criterion is nothing but a representation of special preferences. The subjective probability is defined together with the utility (or loss) function, and it is not possible, in general, to use a given loss function, say a quadratic loss, and to elicit a subjective distribution independently.

I construct the Bayesian risk criterion from a set of five axioms, each with a simple mathematical implication. This construction clearly shows that the subjective probability revealed by a decider's preferences is nothing but a (Radon) measure equivalent to a linear functional (the criterion). The functions on which the criterion operates are expected utilities in the von Neumann-Morgenstern sense. It then becomes clear that the subjective distribution cannot be elicited a priori, independently of the utility function on consequences.

However, if one considers a statistical decision problem by itself, losses, defined by a given loss function, become the consequences of the decisions. It can be imagined that experienced statisticians are used to dealing with different losses and are able to compare them (i.e. have preferences, or fears, over a set of possible losses). Using suitable axioms over these preferences, one can represent them by a (linear) criterion: this criterion is the expectation of losses with respect to a (revealed) distribution. It must be noted that such a distribution is a measure and need not be a probability distribution.
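The Bayesian risk criterion itself is just an expected loss, and minimizing it over a decision set is mechanical. A small sketch (the states, subjective distribution, and loss function are assumptions for illustration); under quadratic loss the minimizer over a fine grid approaches the mean of the subjective distribution.

```python
def bayes_risk(decision, states, prior, loss):
    """Expected loss of a decision under a subjective distribution."""
    return sum(p * loss(decision, s) for s, p in zip(states, prior))

def bayes_decision(decisions, states, prior, loss):
    """Decision minimizing the Bayesian risk."""
    return min(decisions, key=lambda d: bayes_risk(d, states, prior, loss))

states = [0.0, 1.0, 2.0]
prior = [0.2, 0.5, 0.3]                 # assumed subjective distribution
quadratic = lambda d, s: (d - s) ** 2   # assumed quadratic loss

grid = [i / 100 for i in range(201)]    # candidate decisions on [0, 2]
best = bayes_decision(grid, states, prior, quadratic)
```

The paper's point is that `prior` cannot in general be elicited independently of the loss function, even though the computation above treats them as separate inputs.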

4.
Summary. The objective Bayesian program has as its fundamental tenet (in addition to the three Bayesian postulates) the requirement that, from a given knowledge base, a particular probability function is uniquely appropriate. This amounts to fixing initial probabilities, based on relatively little information, because Bayes' theorem (conditionalization) then determines the posterior probabilities when the belief state is altered by enlarging the knowledge base. Moreover, in order to reconstruct orthodox statistical procedures within a Bayesian framework, only privileged ignorance probability functions will work.

To serve all these ends, objective Bayesianism seeks additional principles for specifying ignorance and partial-information probabilities. H. Jeffreys' method of invariance (or Jaynes' modification thereof) is used to solve the former problem, and E. Jaynes' rule of maximizing entropy (subject to invariance for continuous distributions) has recently been thought to solve the latter. I have argued that neither policy is acceptable to a Bayesian, since each is inconsistent with conditionalization. Invariance fails to give a consistent representation to the state of ignorance professed; the difficulties here parallel familiar weaknesses in the old Laplacean principle of insufficient reason. Maximizing entropy is unsatisfactory because the partial information it works with fails to capture the effect of uncertainty about related nuisance factors. The result is a probability function that represents a state richer in empirical content than the belief state targeted for representation. Alternatively, by conditionalizing on information about a nuisance parameter one may move from a distribution of lower to higher entropy, despite the obvious increase in information available.

Each of these two complaints appears to me to be a symptom of the program's inability to formulate rules for picking privileged probability distributions that serve to represent ignorance or near-ignorance. Certainly the methods advocated by Jeffreys, Jaynes and Rosenkrantz are mathematically convenient idealizations wherein specified distributions are elevated to the roles of ignorance and partial-information distributions. But the cost that goes with the idealization is a violation of conditionalization, and if that is the ante that we must put up to back objective Bayesianism, then I propose we look for a different candidate to earn our support.

5.
Self-reflecting signed orders on a set A and its anti-set A* were introduced previously as a way to account for negative as well as positive feelings about the inclusion of items in A in potential subsets of choice. The present paper extends the notion of signed orders to lotteries on A ∪ A*, describes reflection axioms for the lottery context, and shows how these axioms simplify utility representations for preference between lotteries. The simplified representations are then used to guide procedures for extending preferences from A ∪ A* and its lotteries to preferences between subsets of items.

6.
Lattices, bargaining and group decisions (total citations: 1; self-citations: 1; citations by others: 0)
This essay aims at constructing an abstract mathematical system which, when interpreted, serves to portray group choices among alternatives that need not be quantifiable. The system in question is a complete distributive lattice, on which a class of non-negative real-valued homomorphisms is defined. Reinforced with appropriate axioms, this class becomes a convex distributive lattice. If this lattice is equipped with a suitable measure, and if the mentioned class of homomorphisms is equipped with a metric, then the class and its convex sets are seen to possess certain characteristic properties. The main result (Theorem 6) follows from a combination of these results and a famous result due to Choquet.

The mathematical scheme is then interpreted in the subject-language of choice among alternatives. It is shown, by means of an example, that the system furnishes all the ingredients for describing multi-group choices. Whether or not the same ingredients are also adequate for a behavioural theory of multi-group choices is an issue that will not be gone into. However, the example effectively illustrates how a process of bargaining can be described with the aid of the mathematical scheme.

In the second example, a class of bargaining situations is modelled in the symbolism of linear programming with several objective functions combined with unknown weights; the cost vectors in such formulations are identified with homomorphisms, and the main theorem of this essay is applied.

7.
Can we rationally learn to coordinate? (total citations: 1; self-citations: 0; citations by others: 1)
In this paper we examine whether individual rationality considerations are sufficient to guarantee that individuals will learn to coordinate. This question is central to any discussion of whether social phenomena (read: conventions) can be explained in terms of a purely individualistic approach. We argue that the positive answers to this general question that have been obtained in some recent work require assumptions which themselves incorporate a convention. This conclusion may be seen as supporting the viewpoint of institutional individualism in contrast to psychological individualism.

8.
The Shapley value is the unique value defined on the class of cooperative games in characteristic function form which satisfies certain intuitively reasonable axioms. Alternatively, the Banzhaf value is the unique value satisfying a different set of axioms. The main drawback of the latter value is that it does not satisfy the efficiency axiom, so that the sum of the values assigned to the players need not equal the worth of the grand coalition. By definition, the normalized Banzhaf value satisfies the efficiency axiom, but not the usual axiom of additivity.

In this paper we generalize the axiom of additivity by introducing a positive real-valued function μ on the class of cooperative games in characteristic function form. The so-called axiom of μ-additivity generalizes the classical axiom of additivity by putting the weight μ(v) on the value of the game v. We show that any additive function μ determines a unique share function satisfying the axioms of efficient shares, the null player property, symmetry, and μ-additivity on the subclass of games on which μ is positive and which contains all positively scaled unanimity games. The axiom of efficient shares means that the sum of the shares equals one; hence the share function gives the shares of the players in the worth of the grand coalition. The corresponding value function is obtained by multiplying the shares by the worth of the grand coalition. By defining the function μ appropriately we obtain the share functions corresponding to the Shapley value and the Banzhaf value, so for both values the corresponding share functions belong to this class. Moreover, this shows that our approach provides an axiomatization of the normalized Banzhaf value. We also discuss some other choices of the function μ and the corresponding share functions. Furthermore, we consider the axiomatization on the subclass of monotone simple games.
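The efficiency contrast the abstract turns on is easy to exhibit numerically. A brute-force sketch of the two values on a small invented game (a 3-player majority game, chosen only for illustration): the Shapley values sum to the worth of the grand coalition, the Banzhaf values do not, and dividing the Banzhaf values by their sum gives the normalized (share-like) version.

```python
from itertools import combinations, permutations
from math import factorial

def shapley(players, v):
    """Average marginal contribution over all n! player orderings."""
    value = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            value[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = factorial(len(players))
    return {p: value[p] / n_fact for p in value}

def banzhaf(players, v):
    """Average marginal contribution over all coalitions of the others."""
    value = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(len(others) + 1):
            for coal in combinations(others, r):
                s = frozenset(coal)
                total += v(s | {p}) - v(s)
        value[p] = total / 2 ** len(others)
    return value

# Majority game: a coalition is worth 1 iff it has at least 2 players.
v = lambda s: 1.0 if len(s) >= 2 else 0.0
players = ["a", "b", "c"]
sh = shapley(players, v)   # 1/3 each: efficient
bz = banzhaf(players, v)   # 1/2 each: sums to 1.5, not efficient
```

Both routines enumerate exponentially many coalitions, so they are only practical for small games, which suffices to check the axioms discussed above.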

9.
In reply to McClennen, the paper argues that his criticism is based on a mistaken assumption about the meaning of rationality postulates, to be called the Implication Principle. Once we realize that the Implication Principle has no validity, McClennen's criticisms of what he calls the Reductio Argument and the Incentive Argument fall to the ground. The rest of the paper criticizes the rationality concept McClennen proposes in lieu of that used by orthodox game theory. It is argued that McClennen's concept is inconsistent with the behavior of real-life intelligent egoists; that it is incompatible with the way payoffs are defined in game theory; and that it would be highly dangerous as a practical guide to human behavior.

The author is indebted to the National Science Foundation for financial support through Grant GS-3222, administered through the Center for Research in Management Science, University of California, Berkeley.

10.
The author tries to formulate what a determinist believes to be true. The formulation is based on some concepts defined in a systems-theoretical manner: mainly on the concept of an experiment over the sets A_m (a set of m-tuples of input values) and B_n (a set of n-tuples of output values) in the time interval (t_1, ..., t_k) (symbolically, E[t_1, ..., t_k, A_m, B_n]); on the concept of a behavior of the system S_{m,n} (= (A_m, B_n)) on the basis of the experiment E[t_1, ..., t_k, A_m, B_n]; and, indeed, on the concept of deterministic behavior. The resulting formulation of the deterministic hypothesis shows that this hypothesis expresses a belief that we could always find some hidden parameters.

11.
Stochastic dominance is a notion in expected-utility decision theory which has been developed to facilitate the analysis of risky or uncertain decision alternatives when the full form of the decision maker's von Neumann-Morgenstern utility function on the consequence space X is not completely specified. For example, if f and g are probability functions on X which correspond to two risky alternatives, then f first-degree stochastically dominates g if, for every consequence x in X, the chance of getting a consequence that is preferred to x is as great under f as under g. When this is true, the expected utility of f must be as great as the expected utility of g.

Most work in stochastic dominance has been based on increasing utility functions on X with X an interval on the real line. The present paper, following [1], formulates appropriate notions of first-degree and second-degree stochastic dominance when X is an arbitrary finite set. The only structure imposed on X arises from the decision maker's preferences. It is shown how typical analyses with stochastic dominance can be enriched by applying the notion to convex combinations of probability functions. The potential applications of convex stochastic dominance include analyses of simple-majority voting on risky alternatives when voters have similar preference orders on the consequences.
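The finite-set definition of first-degree dominance quoted above translates directly into a tail-probability comparison. A minimal sketch (the consequence set, its preference ranking, and the two probability functions are invented for illustration):

```python
def first_degree_dominates(f, g, ranked):
    """f FSD g: for every consequence x, the probability of an outcome
    at least as good as x is no smaller under f than under g.

    `ranked` lists the consequences from worst to best (the decision
    maker's preference order); f and g map consequence -> probability.
    """
    tail_f = tail_g = 0.0
    for x in reversed(ranked):          # accumulate from the best outcome down
        tail_f += f.get(x, 0.0)
        tail_g += g.get(x, 0.0)
        if tail_f < tail_g - 1e-12:     # g puts more mass on good outcomes
            return False
    return True

ranked = ["bad", "ok", "good"]          # assumed preference order on X
f = {"bad": 0.1, "ok": 0.3, "good": 0.6}
g = {"bad": 0.3, "ok": 0.3, "good": 0.4}
```

Note that only the ordering of consequences is used, matching the paper's point that the sole structure on X comes from preferences; the convex-combination extension would apply the same test to mixtures like 0.5f + 0.5g.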

12.
Mohammed Dore, Theory and Decision, 1997, 43(3): 219-239
This paper critically reviews Ken Binmore's non-utilitarian, game-theoretic solution to the Arrow problem. Binmore's solution belongs to the same family as Rawls' maximin criterion and requires the use of Nash bargaining theory, empathetic preferences, and results in evolutionary game theory. Harsanyi had earlier presented a solution that relies on utilitarianism, which requires some exogenous valuation criterion and is therefore incompatible with liberalism. Binmore's rigorous demonstration of the maximin principle for the first time presents a real alternative to a utilitarian solution.

13.
Michael Ruse, Theory and Decision, 1974, 5(4): 413-440
In this paper I consider the problem of man's evolution, in particular the evolutionary problems raised when we consider man as a cultural animal as well as a biological one. I argue that any adequate cultural evolutionary theory must have the notion of adaptation as a central concept, where this must be construed in a fairly literal (biological) sense, that is, as something which aids its possessors (i.e. men) to survive and reproduce. I argue against theories which treat adaptation in a metaphorical sense, particularly those speaking of the adaptation of cultures without reference to men: iron tools per se are not better adapted than bronze tools; it is the men with iron tools who are better adapted than men with bronze tools. I show that by taking this approach one can fruitfully apply some conclusions of biological evolutionary theory directly to men and their cultures. I conclude with a brief discussion of methodological issues raised by cultural evolutionary theories, particularly those of confirmation and falsification.

14.
This paper studies two models of rational behavior under uncertainty whose predictions are invariant under ordinal transformations of utility. The quantile utility model assumes that the agent maximizes some quantile of the distribution of utility. The utility mass model assumes maximization of the probability of obtaining an outcome whose utility is higher than some fixed critical value. Both models satisfy weak stochastic dominance; lexicographic refinements satisfy strong dominance.

The study of these utility models suggests a significant generalization of traditional ideas of riskiness and risk preference. We define one action to be riskier than another if the utility distribution of the latter crosses that of the former from below. The single crossing property is equivalent to a min-max spread of a random variable. With relative risk defined by the single crossing criterion, the risk preference of a quantile utility maximizer increases with the utility distribution quantile that he maximizes; the risk preference of a utility mass maximizer increases with his critical utility value.
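Both criteria are simple to compute for finite lotteries, which makes the comparative-statics claim at the end of the abstract easy to check on an example. The lotteries below are invented for illustration: a sure payoff versus a 50-50 gamble.

```python
def quantile_utility(lottery, tau):
    """tau-quantile of the utility distribution of a lottery,
    given as a list of (utility, probability) pairs."""
    acc = 0.0
    for u, p in sorted(lottery):        # sort by utility, ascending
        acc += p
        if acc >= tau - 1e-12:
            return u
    return max(u for u, _ in lottery)   # guard against rounding in probs

def utility_mass(lottery, critical):
    """Probability of obtaining utility strictly above a critical value."""
    return sum(p for u, p in lottery if u > critical)

safe = [(5.0, 1.0)]
risky = [(0.0, 0.5), (10.0, 0.5)]

# A median (tau = 0.5) maximizer evaluates the gamble at its bad outcome;
# a tau = 0.75 maximizer evaluates it at its good outcome and so takes risk.
```

Consistent with the abstract, raising tau (or raising the critical utility value) flips the agent from preferring `safe` to preferring `risky`, illustrating how each parameter indexes risk preference.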

15.
This paper falls within the field of distributive justice and, as the title indicates, addresses itself specifically to the meshing problem. Briefly stated, the meshing problem is the difficulty encountered when one tries to aggregate the two parameters of beneficence and equity in a way that results in determining which of two or more alternative utility distributions is most just. A solution to this problem, in the form of a formal welfare measure, is presented in the paper. This formula incorporates the notions of equity and beneficence (which are defined earlier by the author) and weighs them against each other to compute a numerical value which represents the degree of justice a given distribution possesses. This value can in turn be used comparatively to select which of the utility schemes being considered is best.

Three fundamental adequacy requirements, which any acceptable welfare-measuring method must satisfy, are presented and subsequently demonstrated to be formally deducible as theorems of the author's system. A practical application of the method is then considered, as well as a comparison with Nicholas Rescher's method (found in his book Distributive Justice). The conclusion reached is that Rescher's system is unacceptable, since it computes counter-intuitive results. Objections to the author's welfare measure are considered and answered. Finally, a suggestion is made for expanding the system to cover cases it was not originally designed to handle (i.e. situations where two alternative utility distributions vary with regard to the number of individuals they contain). The conclusion reached at the close of the paper is that an acceptable solution to the meshing problem has been established.

I would like to gratefully acknowledge the assistance of Michael Tooley, whose positive suggestions and critical comments were invaluable in the writing of this paper.

16.
This paper considers two fundamental aspects of the analysis of dynamic choice under risk: the dynamic consistency of the strategies of a non-EU maximizer, and the possibility that an individual whose preferences are nonlinear in probabilities may choose a strategy which is, in some appropriate sense, dominated by other strategies. A proposed way of dealing with these problems, due to Karni and Safra and called behavioral consistency, is described. The implications of this notion of behavioral consistency are explored, and it is shown that while the Karni and Safra approach obtains dynamically consistent behavior under nonlinear preferences, it may imply the choice of dominated strategies even in very simple decision trees.

17.
In the fifties, Popper defended an interactionist version of body-mind dualism. It distinguished between the world of physical bodies and states and the world of mental states. Later he added a third world of objective thought contents. He claims that the assumption that this third world exists is a necessary presupposition of problem-solving in general and of his philosophy of science in particular. The present article contains separate arguments to the effect that this presupposition is neither necessary nor even possible. It is further argued that postulating the existence of entities makes sense only relative to a criterion of ontological commitment, which Popper does not mention and evidently does not have, and that it moreover presupposes a theory which is tentatively accepted as true and which, according to the criterion, implies the existence of the entities. But as yet there is no testable theory involving terms like mind, intention, etc., that would make it even plausible that it, or its terms, is essentially different from what is already known in the empirical sciences. The body-mind controversy is therefore still pointless. Popper's stand on it seems to be but a reflex of his anti-behavioristic and anti-psychologistic attitude.

18.
In general, the technical apparatus of decision theory is well developed. It has loads of theorems, and they can be proved from axioms. Many of the theorems are interesting and useful from both a philosophical and a practical perspective. But decision theory does not have a well-agreed-upon interpretation. Its technical terms, in particular utility and preference, do not have a single clear and uncontroversial meaning.

How to interpret these terms depends, of course, on the purposes to which one wants to put decision theory. One might want to use it as a model of economic decision-making, in order to predict the behavior of corporations or of the stock market. In that case, it might be useful to interpret the technical term utility as meaning money profit. Decision theory would then be an empirical theory. I want to look into the question of what utility could mean if we want decision theory to function as a theory of practical rationality. I want to know whether it makes good sense to think of practical rationality as fully, or even partly, accounted for by decision theory. I shall lay my cards on the table: I hope it does make good sense to think of it that way. For, I think, if Humeans are right about practical rationality, then decision theory must play a very large part in their account. And I think Humeanism has very strong attractions.

19.
Statistical analysis for negotiation support (total citations: 2; self-citations: 0; citations by others: 2)
In this paper we provide an overview of the issues involved in using statistical analysis to support the process of international negotiation. We illustrate how the approach can contribute to a negotiator's understanding and control of the interactions that occur during the course of a negotiation. The techniques are suited to the analysis of data collected from ongoing discussions and moves made by the parties. The analyses are used to illuminate influences and processes as they operate in particular cases or in negotiations in general. They do not identify a best strategy or outcome from among alternatives suggested from theoretical assumptions about rationality and information-processing (see Munier and Rullière's paper in this issue), from personal preference structures (see Spector's paper in this issue), or from a rule-based modeling system (see Kersten's paper in this issue). This distinction should be evident in the discussion to follow, organized into several sections: From Empirical to Normative Analysis; Statistical Analysis for Situational Diagnosis; Time-Series Analysis of Cases; and Knowledge as Leverage over the Negotiation Process. In a final section, we consider the challenge posed by attempts to implement these techniques with practitioners.

20.
The random preference, Fechner (or white noise), and constant error (or tremble) models of stochastic choice under risk are compared. Various combinations of these approaches are used with expected utility and rank-dependent theory. The resulting models are estimated in a random-effects framework using experimental data from two samples of 46 subjects who each faced 90 pairwise choice problems. The best-fitting model uses the random preference approach with a tremble mechanism, in conjunction with rank-dependent theory. As subjects gain experience, trembles become less frequent and there is less deviation from behaviour consistent with expected utility theory.
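The random preference plus tremble combination the study favours can be sketched as a simple two-stage choice rule. The parameter values below are invented for illustration: with probability `tremble` the subject answers at random; otherwise a preference is drawn and followed. The resulting binary choice probability is (1 - tremble) * pref_prob + tremble / 2.

```python
import random

def choose(pref_prob, tremble, rng):
    """One simulated binary choice under random preference + tremble.

    pref_prob: probability that the randomly drawn preference ranks
    option A above option B; tremble: probability of a pure error,
    resolved by a fair coin. Returns True iff A is chosen.
    """
    if rng.random() < tremble:
        return rng.random() < 0.5       # tremble: choose at random
    return rng.random() < pref_prob     # otherwise follow the drawn preference

rng = random.Random(0)
n = 100_000
picks = sum(choose(0.8, 0.1, rng) for _ in range(n))
rate = picks / n
# Theoretical choice probability: 0.9 * 0.8 + 0.1 / 2 = 0.77.
```

In estimation one would fit `pref_prob` (via a rank-dependent preference distribution) and `tremble` jointly by maximum likelihood; the simulation only shows how the two stochastic components combine.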


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)