Similar Articles
20 similar articles found (search time: 296 ms).
1.
This paper studies two models of rational behavior under uncertainty whose predictions are invariant under ordinal transformations of utility. The quantile utility model assumes that the agent maximizes some quantile of the distribution of utility. The utility mass model assumes maximization of the probability of obtaining an outcome whose utility is higher than some fixed critical value. Both models satisfy weak stochastic dominance; lexicographic refinements satisfy strong dominance. The study of these utility models suggests a significant generalization of traditional ideas of riskiness and risk preference. We define one action to be riskier than another if the utility distribution of the latter crosses that of the former from below. The single crossing property is equivalent to a minmax spread of a random variable. With relative risk defined by the single crossing criterion, the risk preference of a quantile utility maximizer increases with the utility distribution quantile that he maximizes. The risk preference of a utility mass maximizer increases with his critical utility value.
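A minimal sketch of the two ordinal criteria described above, using hypothetical lotteries and utility values (not from the paper). It also shows the claimed comparative statics: the utility-mass maximizer's taste for the risky action depends on the critical value.

```python
# Sketch of the two ordinal choice criteria: quantile utility and utility mass.
# Lotteries are lists of (utility, probability) pairs -- illustrative values only.

def quantile_utility(lottery, tau):
    """Return the tau-quantile of the lottery's utility distribution."""
    cum = 0.0
    for u, p in sorted(lottery):
        cum += p
        if cum >= tau:
            return u
    return max(u for u, _ in lottery)

def utility_mass(lottery, critical):
    """Probability of obtaining an outcome with utility above the critical value."""
    return sum(p for u, p in lottery if u > critical)

safe  = [(50, 1.0)]                # certain moderate utility
risky = [(0, 0.4), (100, 0.6)]     # spread-out utility distribution

# A median (tau = 0.5) maximizer prefers the risky action here...
assert quantile_utility(risky, 0.5) == 100 > quantile_utility(safe, 0.5) == 50
# ...but a more pessimistic tau = 0.3 maximizer prefers the safe one,
# illustrating risk preference increasing with the quantile maximized.
assert quantile_utility(risky, 0.3) == 0 < quantile_utility(safe, 0.3)
# Utility-mass maximizers: a high critical value (60) favors the risky action,
# a low critical value (40) favors the safe one.
assert utility_mass(risky, 60) == 0.6 > utility_mass(safe, 60)
assert utility_mass(safe, 40) == 1.0 > utility_mass(risky, 40)
```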

2.
The random preference, Fechner (or white noise), and constant error (or tremble) models of stochastic choice under risk are compared. Various combinations of these approaches are used with expected utility and rank-dependent theory. The resulting models are estimated in a random effects framework using experimental data from two samples of 46 subjects who each faced 90 pairwise choice problems. The best fitting model uses the random preference approach with a tremble mechanism, in conjunction with rank-dependent theory. As subjects gain experience, trembles become less frequent and there is less deviation from behaviour consistent with expected utility theory.
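How a Fechner (white-noise) choice probability composes with a constant-error tremble can be sketched as follows; the expected utilities, noise scale sigma, and tremble rate omega are hypothetical, not estimates from the study.

```python
import math

def fechner_prob(eu_a, eu_b, sigma):
    """P(choose A) when the utility difference is perturbed by N(0, sigma) noise."""
    z = (eu_a - eu_b) / (sigma * math.sqrt(2))
    return 0.5 * (1 + math.erf(z))   # standard normal CDF of (EU_A - EU_B)/sigma

def with_tremble(p_core, omega):
    """With probability omega the agent trembles and picks an option at random."""
    return (1 - omega) * p_core + omega * 0.5

p = fechner_prob(10.0, 9.0, sigma=2.0)   # core model slightly favours A
assert 0.5 < p < 1.0
# A tremble pulls the choice probability toward 1/2...
assert abs(with_tremble(p, 0.1) - 0.5) < abs(p - 0.5)
# ...and as trembles become less frequent (omega -> 0), behaviour
# approaches the core model, matching the experience effect reported above.
assert with_tremble(p, 0.0) == p
```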

3.
Choices between gambles show systematic violations of stochastic dominance. For example, most people choose ($6, .05; $91, .03; $99, .92) over ($6, .02; $8, .03; $99, .95), violating dominance. Choices also violate two cumulative independence conditions: (1) if S = (z, r; x, p; y, q) ≻ R = (z, r; x′, p; y′, q), then S″ = (x′, r; y, p + q) ≻ R″ = (x′, r + p; y′, q); (2) if S′ = (x, p; y, q; z′, r) ≺ R′ = (x′, p; y′, q; z′, r), then S‴ = (x, p + q; y′, r) ≺ R‴ = (x′, p; y′, q + r), where 0 < z < x′ < x < y < y′ < z′. Violations contradict any utility theory satisfying transitivity, outcome monotonicity, coalescing, and comonotonic independence. Because rank- and sign-dependent utility theories, including cumulative prospect theory (CPT), satisfy these properties, they cannot explain these results. However, the configural weight model of Birnbaum and McIntosh (1996) predicted the observed violations of stochastic dominance, cumulative independence, and branch independence. This model assumes the utility of a gamble is a weighted average of the outcomes' utilities, where each configural weight is a function of the rank order of the outcome's value among distinct values and that outcome's probability. The configural weight TAX model, with the same number of parameters as CPT, fit the data of most individuals better than CPT.
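The dominance violation in the example can be verified directly: the second gamble's decumulative (survival) probabilities weakly exceed the first's at every outcome level, with strict inequality somewhere, so the second first-order stochastically dominates the first.

```python
# Verify the dominance relation between the two gambles quoted above:
# G = ($6,.05; $91,.03; $99,.92) and H = ($6,.02; $8,.03; $99,.95).
# H dominates G, so the modal choice of G violates stochastic dominance.

def survival(gamble, t):
    """P(outcome >= t) for a gamble given as (outcome, probability) pairs."""
    return sum(p for x, p in gamble if x >= t)

G = [(6, .05), (91, .03), (99, .92)]
H = [(6, .02), (8, .03), (99, .95)]

thresholds = sorted({x for x, _ in G} | {x for x, _ in H})
# H's chance of reaching each outcome level is at least G's everywhere...
assert all(survival(H, t) >= survival(G, t) - 1e-12 for t in thresholds)
# ...and strictly higher at some level (e.g. P(>= $99): .95 vs .92).
assert any(survival(H, t) > survival(G, t) + 1e-12 for t in thresholds)
```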

4.
In this paper, a problem for utility theory is reviewed: it would have an agent who is compelled to play Russian roulette with one revolver or another pay as much to have a six-shooter with four bullets relieved of one bullet before playing with it as he would be willing to pay to have a six-shooter with two bullets emptied. A less demanding Bayesian theory is described, which would have an agent maximize the expected value of the possible total consequences of his actions. Utility theory is located within that theory as valid for agents who satisfy certain formal conditions, that is, for agents who are, in terms of that more general theory, indifferent to certain dimensions of risk. Raiffa- and Savage-style arguments for its more general validity are then resisted. Addenda are concerned with implications for game theory, and with relations between utilities and values.

5.
Scientists often disagree about whether a new theory is better than the current theory. From this some (e.g., Thomas Kuhn) have inferred that the values of science are changing and subjective, and hence that science is an irrational enterprise. As an alternative, this paper develops a rational model of the scientific enterprise according to which the scope and elegance of theories are important elements in the scientist's utility function. The varied speed of acceptance of new theories by scientists can be explained in terms of the optimal allocation of time among different scientific activities. The model thus accounts for the rationality of science in a way that is broadly consistent with the empirical evidence on the history and practice of science.

6.
We report a surprising property of (μ, σ)-preferences: the assumption of nonincreasing relative risk aversion implies that the optimal portfolio is riskless. We discuss in detail a resolution of this paradox using wealth-dependent utility functions. Using revealed preference theory, we show that (general, i.e. not necessarily (μ, σ)-) wealth-dependent utility functions can be characterized by Wald's axiom.

7.
A new investigation is launched into the problem of decision-making in the face of complete ignorance, and linked to the problem of social choice. In the first section the author introduces a set of properties which might characterize a criterion for decision-making under complete ignorance. Two of these properties are novel: independence of non-discriminating states, and weak pessimism. The second section provides a new characterization of the so-called principle of insufficient reason. In the third section, lexicographic maximin and maximax criteria are characterized. Finally, the author's results are linked to the problem of social choice.

8.
R. Kast, Theory and Decision, 1991, 31(2-3): 175-197
A rational statistical decision maker whose preferences satisfy Savage's axioms will minimize a Bayesian risk function: the expectation, with respect to a revealed (or subjective) probability distribution, of a loss (or negative utility) function over the consequences of the statistical decision problem. However, the nice expected utility form of the Bayesian risk criterion is nothing but a representation of special preferences. The subjective probability is defined together with the utility (or loss) function, and it is not possible, in general, to use a given loss function - say, a quadratic loss - and to elicit a subjective distribution independently. I construct the Bayesian risk criterion from a set of five axioms, each with a simple mathematical implication. This construction clearly shows that the subjective probability revealed by a decision maker's preferences is nothing but a (Radon) measure equivalent to a linear functional (the criterion). The functions on which the criterion operates are expected utilities in the von Neumann-Morgenstern sense. It then becomes clear that the subjective distribution cannot be elicited a priori, independently of the utility function on consequences. However, if one considers a statistical decision problem by itself, losses, defined by a given loss function, become the consequences of the decisions. It can be imagined that experienced statisticians are used to dealing with different losses and are able to compare them (i.e. have preferences, or fears, over a set of possible losses). Using suitable axioms over these preferences, one can represent them by a (linear) criterion: this criterion is the expectation of losses with respect to a (revealed) distribution. It must be noted that such a distribution is a measure and need not be a probability distribution.
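The mechanics of the Bayesian risk criterion itself (not the paper's axiomatic construction) can be sketched with a discrete, illustrative example; the states, prior, and quadratic loss below are all hypothetical.

```python
# Minimal sketch of a Bayesian risk criterion: choose the action that
# minimizes expected loss under a subjective distribution over states.
# States, prior, targets, and candidate actions are illustrative only.

def bayes_risk(action, states, prior, loss):
    """Expectation of the loss function with respect to the prior."""
    return sum(prior[s] * loss(action, s) for s in states)

states = ["low", "high"]
prior  = {"low": 0.7, "high": 0.3}
target = {"low": 1.0, "high": 4.0}
loss = lambda a, s: (a - target[s]) ** 2   # quadratic loss, as in the text

# With quadratic loss, the risk-minimizing point estimate is the prior
# mean of the target: 0.7*1.0 + 0.3*4.0 = 1.9.
candidates = (0.0, 0.5, 1.0, 1.9, 2.5, 4.0)
best = min(candidates, key=lambda a: bayes_risk(a, states, prior, loss))
prior_mean = sum(prior[s] * target[s] for s in states)
assert best == 1.9 and abs(best - prior_mean) < 1e-9
```

The paper's point is that the prior and the loss are revealed jointly by preferences; the sketch simply shows the functional form being axiomatized.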

9.
In general, the technical apparatus of decision theory is well developed. It has loads of theorems, and they can be proved from axioms. Many of the theorems are interesting, and useful both from a philosophical and a practical perspective. But decision theory does not have a well agreed upon interpretation. Its technical terms, in particular 'utility' and 'preference', do not have a single clear and uncontroversial meaning. How to interpret these terms depends, of course, on the purposes for which one wants to put decision theory to use. One might want to use it as a model of economic decision-making, in order to predict the behavior of corporations or of the stock market. In that case, it might be useful to interpret the technical term 'utility' as meaning money profit. Decision theory would then be an empirical theory. I want to look into the question of what 'utility' could mean if we want decision theory to function as a theory of practical rationality. I want to know whether it makes good sense to think of practical rationality as fully or even partly accounted for by decision theory. I shall lay my cards on the table: I hope it does make good sense to think of it that way. For, I think, if Humeans are right about practical rationality, then decision theory must play a very large part in their account. And I think Humeanism has very strong attractions.

10.
Harrod introduced a refinement to crude Utilitarianism with the aim of reconciling it with common-sense ethics. It is shown (a) that this refinement (later known as Rule Utilitarianism) does not maximize utility, and (b) that the principle which truly maximizes utility, that marginal private benefit equal marginal social cost, requires that a number of forbidden acts, such as lying, be performed. Hence Harrod's claim that his refined Utilitarianism is the foundation of moral institutions cannot be sustained. Some more modern forms of Utilitarianism are reinterpreted in this paper as utility-maximizing decision rules. While they produce more utility than Harrod's rule, they require breaking the moral rules some of the time, just like the marginal rule mentioned above. However, Harrod's rule is useful in warning the members of a group, considered as a single moral agent, of the externalities that lie beyond the immediate consequences of the collective action.

11.
Sometimes conducting an experiment to ascertain the state of a system changes the state of the system being measured. Kahneman and Tversky modelled this effect with support theory. Quantum physics models this effect with probability amplitude mechanics. As this paper shows, probability amplitude mechanics is similar to support theory. Additionally, Viscusi's proposed generalized expected utility model has an analogy in quantum mechanics.
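The structural feature that distinguishes probability amplitude mechanics from classical probability is non-additivity: squaring a sum of amplitudes produces an interference cross term. A tiny numeric illustration (the amplitude values are arbitrary):

```python
# In amplitude mechanics, the "probability" of a combined event is the
# squared modulus of the summed amplitudes, not the sum of the separate
# probabilities; the difference is the interference term 2*Re(a * conj(b)).
# The amplitudes below are arbitrary illustrative complex numbers.

a, b = complex(0.6, 0.2), complex(0.3, -0.5)

p_separate = abs(a) ** 2 + abs(b) ** 2   # classical additive rule
p_combined = abs(a + b) ** 2             # amplitude-mechanics rule

interference = p_combined - p_separate
assert abs(interference - 2 * (a * b.conjugate()).real) < 1e-12
assert interference != 0   # the two rules genuinely disagree here
```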

12.
We study the uncertain dichotomous choice model. In this model a set of decision makers is required to select one of two alternatives, say support or reject a certain proposal. Applications of this model are relevant to many areas, such as political science, economics, business and management. The purpose of this paper is to estimate and compare the probabilities that different decision rules may be optimal. We consider the expert rule, the majority rule and a few in-between rules. The information on the decisional skills is incomplete, and these skills arise from an exponential distribution. It turns out that the probability that the expert rule is optimal far exceeds the probability that the majority rule is optimal, especially as the number of the decision makers becomes large.
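For a fixed skill profile, the correctness probability of any dichotomous decision rule can be computed exactly by enumerating vote patterns. The three-member skill profile below is illustrative (the paper draws skills from an exponential distribution and compares optimality probabilities, which this sketch does not reproduce).

```python
from itertools import product

def rule_correct_prob(skills, rule):
    """Exact probability that the rule selects the correct alternative,
    given each member's independent probability of voting correctly."""
    total = 0.0
    for votes in product([1, 0], repeat=len(skills)):   # 1 = votes correctly
        prob = 1.0
        for v, p in zip(votes, skills):
            prob *= p if v else (1 - p)
        if rule(votes):
            total += prob
    return total

majority = lambda votes: sum(votes) > len(votes) / 2
expert   = lambda votes: votes[0] == 1    # always follow the most skilled member

skills = [0.9, 0.6, 0.55]   # one clear expert, two weakly informed members

p_expert   = rule_correct_prob(skills, expert)     # = 0.9
p_majority = rule_correct_prob(skills, majority)   # = 0.771
# With a sufficiently dominant expert, the expert rule beats majority rule.
assert abs(p_expert - 0.9) < 1e-12
assert p_expert > p_majority
```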

13.
Far-sighted equilibria in 2 × 2, non-cooperative, repeated games
Consider a two-person simultaneous-move game in strategic form. Suppose this game is played over and over at discrete points in time. Suppose, furthermore, that communication is not possible, but that we nevertheless observe some regularity in the sequence of outcomes. The aim of this paper is to explain why such regularity might persist for many (i.e., infinitely many) periods. Each player, when contemplating a deviation, considers a sequential-move game, roughly of the following form: if I change my strategy this period, then in the next my opponent will take his strategy b, and afterwards I can switch to my strategy a; but then I am worse off, since at that outcome my opponent has no incentive to change anymore, whatever I do. Theoretically, however, there is no end to such reaction chains. If deviating gives a player less utility in the long run than before the deviation, we say that the original regular sequence of outcomes is far-sighted stable for that player. It is a far-sighted equilibrium if it is far-sighted stable for both players.
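The far-sightedness test described above compares long-run payoffs along the anticipated reaction chain with the status quo. A rough sketch with a hypothetical prisoner's-dilemma payoff matrix (the specific numbers and the one-step reaction chain are assumptions for illustration, not the paper's construction):

```python
# Far-sightedness test, sketched: a deviation is judged by the long-run
# average payoff of the reaction chain it triggers, not the one-shot gain.
# Hypothetical prisoner's-dilemma payoffs for the row player:
# payoff[(my_move, opponent_move)], C = cooperate, D = defect.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def long_run_average(path, tail_start):
    """Average per-period payoff when play ends up cycling from tail_start on."""
    tail = path[tail_start:]
    return sum(payoff[o] for o in tail) / len(tail)

# Status quo: the regular sequence (C, C) repeated forever.
status_quo = long_run_average([("C", "C")], 0)            # 3.0 per period
# Anticipated chain after deviating: I defect once (one-shot gain of 5),
# my opponent then switches to D, and we are stuck at (D, D) forever.
deviation_chain = [("D", "C"), ("D", "D")]
after_deviation = long_run_average(deviation_chain, 1)    # 1.0 per period

# Deviating yields less utility in the long run, so repeated (C, C)
# is far-sighted stable for the row player despite the short-run temptation.
assert after_deviation < status_quo
```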

14.
The objective Bayesian program has as its fundamental tenet (in addition to the three Bayesian postulates) the requirement that, from a given knowledge base, a particular probability function is uniquely appropriate. This amounts to fixing initial probabilities, based on relatively little information, because Bayes' theorem (conditionalization) then determines the posterior probabilities when the belief state is altered by enlarging the knowledge base. Moreover, in order to reconstruct orthodox statistical procedures within a Bayesian framework, only privileged ignorance probability functions will work. To serve all these ends, objective Bayesianism seeks additional principles for specifying ignorance and partial-information probabilities. H. Jeffreys' method of invariance (or Jaynes' modification thereof) is used to solve the former problem, and E. Jaynes' rule of maximizing entropy (subject to invariance for continuous distributions) has recently been thought to solve the latter. I have argued that neither policy is acceptable to a Bayesian, since each is inconsistent with conditionalization. Invariance fails to give a consistent representation to the state of ignorance professed. The difficulties here parallel familiar weaknesses in the old Laplacean principle of insufficient reason. Maximizing entropy is unsatisfactory because the partial information it works with fails to capture the effect of uncertainty about related nuisance factors. The result is a probability function that represents a state richer in empirical content than the belief state targeted for representation. Alternatively, by conditionalizing on information about a nuisance parameter one may move from a distribution of lower to higher entropy, despite the obvious increase in information available. Each of these two complaints appears to me to be a symptom of the program's inability to formulate rules for picking privileged probability distributions that serve to represent ignorance or near ignorance. Certainly the methods advocated by Jeffreys, Jaynes and Rosenkrantz are mathematically convenient idealizations wherein specified distributions are elevated to the roles of ignorance and partial-information distributions. But the cost that goes with the idealization is a violation of conditionalization, and if that is the ante that we must put up to back objective Bayesianism, then I propose we look for a different candidate to earn our support.
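For concreteness, the maximum-entropy rule being criticized works as follows in the classic discrete case: the max-ent distribution on {1, ..., 6} with a fixed mean is an exponential family p_k proportional to exp(lam * k), with lam solved from the mean constraint. This sketch (in the style of Jaynes' Brandeis dice problem; the target mean 4.5 is illustrative) shows the mechanics only, not the paper's objection to them.

```python
import math

def maxent_mean(lam, n=6):
    """Mean of the distribution p_k proportional to exp(lam * k), k = 1..n."""
    w = [math.exp(lam * k) for k in range(1, n + 1)]
    z = sum(w)
    return sum(k * wk for k, wk in zip(range(1, n + 1), w)) / z

def solve_lambda(target, lo=-5.0, hi=5.0, iters=100):
    """Bisection for the multiplier matching the mean constraint
    (maxent_mean is strictly increasing in lam)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if maxent_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

lam = solve_lambda(4.5)
assert abs(maxent_mean(lam) - 4.5) < 1e-9
assert lam > 0   # a mean above 3.5 tilts the distribution toward high faces
```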

15.
Both Popper and Good have noted that a deterministic microscopic physical approach to probability requires subjective assumptions about the statistical distribution of initial conditions. However, they did not use this fact to define an a priori probability, but rather resorted to the standard observation of repetitive events. This observational probability may be hard to assess for real-life decision problems under uncertainty, which very often are - strictly speaking - non-repetitive, one-time events. This may be a reason for the popularity of subjective probability in decision models. Unfortunately, such subjective probabilities often merely reflect attitudes towards risk, and not the underlying physical processes. In order to get as objective as possible a definition of probability for one-time events, this paper identifies the origin of randomness in individual chance processes. By focusing on the dynamics of the process, rather than on the (static) device, it is found that any process contains two components: one observer-independent (objective) and one observer-dependent (subjective). Randomness, if present, arises from the subjective definition of the rules of the game, and is not - as in Popper's propensity - a physical property of the chance device. In this way, the classical definition of probability is no longer a primitive notion based upon equally possible cases, but is derived from the underlying microscopic processes, plus a subjective, clearly identified estimate of the branching ratios in an event tree. That is, equipossibility is not an intrinsic property of the object/subject system but is forced upon the system via the rules of the game/measurement. Also, the typically undefined concept of symmetry in games of chance is broken down into objective and subjective components. It is found that macroscopic symmetry may hold under microscopic asymmetry. A similar analysis of urn drawings shows no conceptual difference from other games of chance (contrary to Allais' opinion). Finally, the randomness in Landé's knife problem is due not to objective fortuity (as in Popper's view) but to the rules of the game (the theoretical difficulties arise from intermingling microscopic trajectories and macroscopic events). Dedicated to Professor Maurice Allais on the occasion of the Nobel Prize in Economics awarded December, 1988.

16.
Endogenous risk implies that an individual perceives he can influence the likelihood that a state of nature will occur. To add structure to endogenous risk models, I define a protection premium for reduced uncertainty about protection efficiency when a stochastic variable enters the probability function p(x) rather than the utility function. For a binary lottery, a measure of aversion to uncertain protection efficiency, φ(x) = −p″(x)/p′(x), is defined to unambiguously determine the effects of increased risk on an individual's voluntary contribution to public good supply earmarked to reduce the probability of an undesirable state. Finally, I examine the protection premium in an n-state discrete lottery and when uncertainty exists in both the probability and the utility function.
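The aversion measure, rendered here as φ(x) = −p″(x)/p′(x) by analogy with the Arrow-Pratt coefficient (the primes were garbled in the source, so the exact form is a reconstruction), can be evaluated numerically for a sample protection function; p(x) = 1 − exp(−x) is an illustrative choice, for which φ is constant at 1.

```python
import math

def phi(p, x, h=1e-4):
    """Numerical phi(x) = -p''(x)/p'(x) via central differences."""
    d1 = (p(x + h) - p(x - h)) / (2 * h)            # first derivative
    d2 = (p(x + h) - 2 * p(x) + p(x - h)) / h ** 2  # second derivative
    return -d2 / d1

# Illustrative protection probability: p' = exp(-x) > 0, p'' = -exp(-x) < 0,
# so phi(x) = 1 for all x (the constant-aversion case).
p = lambda x: 1 - math.exp(-x)
for x in (0.5, 1.0, 2.0):
    assert abs(phi(p, x) - 1.0) < 1e-4
```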

17.
This paper discusses several concepts that can be used to provide a foundation for a unified theory of rational, economic behavior. First, decision-making is defined to be a process that takes place with reference to both subjective and objective time, that distinguishes between plans and actions and between information and states, and that explicitly incorporates the collection and processing of information. This conception of decision-making is then related to several important aspects of behavioral economics: the dependence of values on experience, the use of behavioral rules, the occurrence of multiple goals, and environmental feedback. Our conclusions are: (1) the non-transitivity of observed or revealed preferences is a characteristic of learning and hence is to be expected of rational decision-makers; (2) the learning of values through experience suggests the sensibleness of short time horizons and of making choices according to flexible utility; (3) certain rules of thumb used to allow for risk are closely related to principles of Safety-First, and can also be based directly on the hypothesis that the feeling of risk (the probability of disaster) is identified with extreme departures from recently executed decisions; (4) the maximization of a hierarchy of goals, or of a lexicographical utility function, is closely related to the search for feasibility and the practice of satisficing; (5) when the dim perception of environmental feedback and the effect of learning on values are acknowledged, the intertemporal optimality of planned decision trajectories is seen to be a characteristic of subjective, not objective, time. This explains why decision making is so often best characterized by rolling plans. In short, we find that economic man - like any other - is an existential being whose plans are based on hopes and fears and whose every act involves a leap of faith. This paper is based on a talk presented at the conference "New Beginnings in Economics," Akron, Ohio, March 15, 1969. Work on this paper was supported by a grant from the National Science Foundation.

18.
Theories of economic behavior often use as-if languages: for example, analytical sentences or definitions are used as if they were synthetic, and factual-normative theoretical constructs are used as if they were empirical concepts. Such as-if languages impede the acquisition of knowledge and are apt to encourage the wrong assessment of actual research strategies. The author's criticism is first leveled at revealed-preference theory: in this theory, observed behavior is often understood in an empirical sense although it is a pure theoretical construct. Another example can be found in von Mises' representations of marketing behavior, where theoretical valuations are used to achieve a spurious streamlining of reality. Result: scientists should not ogle at reality if they have nothing to say about it.

19.
The Ellsberg paradox documented aversion to ambiguity in the probability of winning a prize. Using an original sample of 266 business owners and managers facing risks from climate change, this paper documents departures from rationality in both directions: both ambiguity-seeking and ambiguity-averse behavior are evident. People exhibit fear effects of ambiguity for small probabilities of suffering a loss and hope effects for large probabilities. Estimates of the crossover point from ambiguity aversion (fear) to ambiguity seeking (hope) place this value between 0.3 and 0.7 for the risk-per-decade lotteries considered, with empirical estimates indicating a crossover mean risk of about 0.5. Attitudes toward the degree of ambiguity also reverse at the crossover point.

20.
This paper considers two fundamental aspects of the analysis of dynamic choices under risk: the dynamic consistency of the strategies of a non-EU maximizer, and the possibility that an individual whose preferences are nonlinear in probabilities may choose a strategy which is, in some appropriate sense, dominated by other strategies. A proposed way of dealing with these problems, due to Karni and Safra and called behavioral consistency, is described. The implications of this notion are explored, and it is shown that while the Karni-Safra approach obtains dynamically consistent behavior under nonlinear preferences, it may imply the choice of dominated strategies even in very simple decision trees.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号