20 similar documents found; search took 31 ms.
1.
William S. Neilson, Journal of Risk and Uncertainty, 2010, 41(2): 113-124
This paper takes the Anscombe–Aumann framework with horse and roulette lotteries, and applies the Savage axioms to the horse
lotteries and the von Neumann–Morgenstern axioms to the roulette lotteries. The resulting representation of preferences yields
a subjective probability measure over states and two utility functions, one governing risk attitudes and one governing ambiguity
attitudes. The model is able to accommodate the Ellsberg paradox and preferences for reductions in ambiguity.
2.
A necessary and sufficient condition for linear aggregation of SSB utility functionals is presented. Harsanyi's social aggregation theorem for von Neumann–Morgenstern utility functions is shown to be a corollary to this result. Two generalizations of Fishburn and Gehrlein's conditional linear aggregation theorem for SSB utility functionals are also established.
3.
In many real-world gambles, a non-trivial amount of time passes before the uncertainty is resolved but after a choice is made. An individual may have a preference between gambles with identical probability distributions over final outcomes if they differ in the timing of resolution of uncertainty. In this domain, utility consists not only of the consumption of outcomes, but also of the psychological utility induced by an unresolved gamble. We term this utility anxiety. Since a reflective decision maker may want to include anxiety explicitly in analysis of unresolved lotteries, a multiple-outcome model for evaluating lotteries with delayed resolution of uncertainty is developed. The result is a rank-dependent utility representation (e.g., Quiggin, 1982), in which period weighting functions are related iteratively. Substitution rules are proposed for evaluating compound temporal lotteries. The representation is appealing for a number of reasons. First, probability weights can be interpreted as the cognitive attention allocated to certain outcomes. Second, the model disaggregates strength of preference from temporal risk aversion and thus provides some insight into the old debate about the relationship between von Neumann–Morgenstern utility functions and strength of preference value functions.
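As a concrete illustration of the rank-dependent form cited above (Quiggin, 1982): the following is a minimal sketch of the atemporal RDU formula, not the paper's iterated period-weighting model; the weighting function `w` and utility `u` are hypothetical choices for illustration.

```python
def rdu(outcomes, probs, u, w):
    """Rank-dependent utility (Quiggin, 1982): each outcome's weight is a
    difference of the probability-weighting function w applied to
    decumulative ("this outcome or better") probabilities."""
    pairs = sorted(zip(outcomes, probs))   # worst outcome first
    total, tail = 0.0, 1.0                 # tail = P(this outcome or better)
    for x, p in pairs:
        # Clamp guards against tiny negative values from float round-off.
        total += (w(tail) - w(max(tail - p, 0.0))) * u(x)
        tail -= p
    return total

w = lambda q: q ** 0.7   # hypothetical weighting: overweights small tail probabilities
u = lambda x: x          # linear utility, for illustration
value = rdu([0, 100], [0.9, 0.1], u, w)
```

With `w(q) = q` the formula collapses to ordinary expected utility; the concave power weighting above overweights the 10% chance of 100, so `value` exceeds the expected value of 10.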
4.
The expected utility maximization problem is one of the most useful tools in mathematical finance, decision analysis and economics.
Motivated by statistical model selection, via the principle of expected utility maximization, Friedman and Sandow (J Mach
Learn Res 4:257–291, 2003a) considered the model performance question from the point of view of an investor who evaluates
models based on the performance of the optimal strategies that the models suggest. They interpreted their performance measures
in information theoretic terms and provided new generalizations of Shannon entropy and Kullback–Leibler relative entropy and
called them U-entropy and U-relative entropy. In this article, a utility-based criterion for independence of two random variables is defined. Then, Markov’s
inequality for probabilities is extended from the U-entropy viewpoint. Moreover, a lower bound for the U-relative entropy is obtained. Finally, a link between conditional U-entropy and conditional Rényi entropy is derived.
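The U-entropy extension of Markov's inequality is not reproduced in the abstract; as background, the classical inequality it generalizes, P(X ≥ a) ≤ E[X]/a for nonnegative X, can be checked by simulation (the exponential distribution and threshold here are arbitrary illustrative choices):

```python
import random

random.seed(0)
xs = [random.expovariate(1.0) for _ in range(100_000)]   # nonnegative, E[X] = 1

def markov_check(samples, a):
    """Return the empirical tail probability P(X >= a) and the Markov
    bound E[X]/a; the inequality says the first never exceeds the second."""
    mean = sum(samples) / len(samples)
    tail = sum(1 for x in samples if x >= a) / len(samples)
    return tail, mean / a

tail, bound = markov_check(xs, a=3.0)   # tail near exp(-3) ~ 0.05, bound near 1/3
```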
5.
Giovanni Parmigiani, Theory and Decision, 1992, 33(3): 241-252
Discussing the foundations of the minimax principle, Savage (1954) argued that it is utterly untenable for statistics because it is ultrapessimistic when applied to negative income, but claimed that this objection is not relevant when the principle is applied to regret. In this paper I rebut the latter claim. I first present an example where ultrapessimism, as Savage understood it, applies to minimax regret but not to minimax negative income. Then, for sequential decision problems with two terminal acts and a finite number of states of nature, I give necessary and sufficient conditions for a decision rule to be ultrapessimistic, and show that for every payoff table with at least three states, be it in regret form or not, there exists an experiment such that the minimax rule is ultrapessimistic. I conclude with some more general remarks on information and the value of experimentation for a minimax agent.
6.
Several decision rules, including the minimax regret rule, have been posited to suggest optimizing strategies for an individual when neither objective nor subjective probabilities can be associated with the various states of the world. These all share the shortcoming of focusing only on extreme outcomes. This paper suggests an alternative approach of tempered regrets which may more closely replicate the decision process of individuals in those situations in which avoiding the worst outcome tempers the loss from not achieving the best outcome. The assumption of total ignorance of the probabilities associated with the various states is maintained. Applications and illustrations from standard neoclassical theory are discussed.
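The classical minimax-regret rule that this entry starts from can be sketched directly (the paper's tempered-regrets variant is not specified in the abstract, so only the standard rule is shown; the payoff table is made up for illustration):

```python
def minimax_regret(payoffs):
    """Classical minimax-regret choice under total ignorance.
    payoffs[a][s] is the payoff of action a in state s."""
    n_states = len(payoffs[0])
    # Regret in state s: shortfall from the best payoff achievable in s.
    best = [max(row[s] for row in payoffs) for s in range(n_states)]
    regrets = [[best[s] - row[s] for s in range(n_states)] for row in payoffs]
    worst = [max(r) for r in regrets]   # each action's maximum regret
    return min(range(len(payoffs)), key=worst.__getitem__)

# Hypothetical 3-action, 2-state payoff table.
table = [[0, 100],   # risky: bad in state 0, great in state 1
         [40, 40],   # safe
         [30, 70]]   # hedge
choice = minimax_regret(table)
```

Here maximin would pick the safe action (index 1, worst payoff 40), while minimax regret picks the hedge (index 2, maximum regret 30), illustrating how the rules focus on different extreme outcomes.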
7.
Jeffrey Helzner, Theory and Decision, 2009, 66(4): 301-315
Ellsberg (The Quarterly Journal of Economics 75, 643–669 (1961); Risk, Ambiguity and Decision, Garland Publishing (2001)) argued that uncertainty is not reducible to risk. At the center of Ellsberg’s argument lies a
thought experiment that has come to be known as the three-color example. It has been observed that a significant number of
sophisticated decision makers violate the requirements of subjective expected utility theory when they are confronted with
Ellsberg’s three-color example. More generally, such decision makers are in conflict with either the ordering assumption or
the independence assumption of subjective expected utility theory. While a clear majority of the theoretical responses to
these violations have advocated maintaining ordering while relaxing independence, a persistent minority has advocated abandoning
the ordering assumption. The purpose of this paper is to consider a similar dilemma that exists within the context of multiattribute
models, where it arises by considering indeterminacy in the weighting of attributes rather than indeterminacy in the determination
of probabilities as in Ellsberg’s example.
8.
Experimental studies have discovered behavior that is inconsistent with the expected utility model (EU) of risky choice (von Neumann and Morgenstern, 1953). Two approaches to addressing these paradoxes are tested: generalized expected utility models (GEU) and models incorporating decision-making limits or costs through question similarity. Tests are carried out over risky pairs related to well-known examples from Kahneman and Tversky's (1979) influential work. Statistical analysis reveals that GEU models of choice are significantly violated for choice patterns consistent with the similarity hypothesis. Additional tests point to shortcomings in the similarity approach that are consistent with fanning out behavior.
9.
Peter C. Fishburn, Journal of Risk and Uncertainty, 1989, 2(2): 127-157
This article offers an exegesis of the passages in von Neumann and Morgenstern (1944, 1947, 1953) that discuss their conception of utility. It is occasioned by two factors. First, as we approach the semicentennial of the publication of Theory of Games and Economic Behavior, its immense impact on economic thought in the intervening years encourages serious reflection on its authors' ideas. Second, misleading statements about the theory continue to appear. The article will have accomplished its purpose if it helps others appreciate the genius and spirit of the theory of utility fashioned by John von Neumann and Oskar Morgenstern.
10.
This paper introduces the likelihood method for decision under uncertainty. The method allows the quantitative determination
of subjective beliefs or decision weights without invoking additional separability conditions, and generalizes the Savage–de
Finetti betting method. It is applied to a number of popular models for decision under uncertainty. In each case, preference
foundations result from the requirement that no inconsistencies are to be revealed by the version of the likelihood method
appropriate for the model considered. A unified treatment of subjective decision weights results for most of the decision
models popular today. Savage’s derivation of subjective expected utility can now be generalized and simplified. In addition
to the intuitive and empirical contributions of the likelihood method, we provide a number of technical contributions: We
generalize Savage’s nonatomicity condition (“P6”) and his assumption of σ-algebras of events, while fully maintaining
his flexibility regarding the outcome set. Derivations of Choquet expected utility and probabilistic sophistication are generalized
and simplified similarly. The likelihood method also reveals a common intuition underlying many other conditions for uncertainty,
such as definitions of ambiguity aversion and pessimism.
11.
We provide an economic interpretation of the practice of incorporating risk measures as constraints in an expected
prospect maximization problem. For what we call the infimum of expectations class of risk measures, we show that if the decision
maker (DM) maximizes the expectation of a random prospect under constraint that the risk measure is bounded above, he then
behaves as a “generalized expected utility maximizer” in the following sense. The DM exhibits ambiguity with respect to a
family of utility functions defined on a larger set of decisions than the original one; he adopts pessimism and performs first
a minimization of expected utility over this family, then performs a maximization over a new decisions set. This economic
behaviour is called “maxmin under risk” and studied by Maccheroni (Econ Theory 19:823–831, 2002). As an application, we make the link between an expected prospect maximization problem, subject to conditional value-at-risk
being less than a threshold value, and a non-expected utility economic formulation involving “loss aversion”-type utility
functions.
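The conditional value-at-risk constraint mentioned in the application can be made concrete with a simple empirical estimator (a hedged sketch: this discrete version averages the worst α-fraction of outcomes, treats losses as positive, and ignores the fractional-tail interpolation of the exact continuous definition):

```python
def empirical_cvar(losses, alpha):
    """Simple discrete conditional value-at-risk at level alpha: the
    average of the worst alpha-fraction of losses (losses positive).
    A constraint of the kind in the abstract would bound this above."""
    xs = sorted(losses, reverse=True)            # worst losses first
    k = max(1, int(round(alpha * len(xs))))      # size of the tail
    return sum(xs[:k]) / k

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
cvar_20 = empirical_cvar(losses, 0.2)   # mean of the two worst losses, 10 and 9
```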
12.
The response mode bias, in which subjects exhibit different risk attitudes when assessing certainty equivalents versus indifference
probabilities, is a well-known phenomenon in the assessment of utility functions. In this empirical study, we develop and
apply a cardinal measure of risk attitudes to analyze not only the existence, but also the strength of this phenomenon. Since
probability levels involved in decision problems are already known to have a strong impact on behavior, we use this approach
to study the impact of probabilities on the extent of the response mode bias. We find that the direction in which probabilities
influence measured risk aversion is the opposite in the certainty equivalence (CE) method versus in the probability equivalence
(PE) method. Utilizing the CE elicitation approach leads to an increase in risk seeking for gambles involving high probabilities. With the PE method, subjects tend to behave in a risk-averse manner for gambles with high probabilities. This behavior is reversed in the gain domain. This “tailwhip” effect is consistently replicated in several experiments involving both the loss and gain domains of lotteries.
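The two elicitation modes can be stated precisely: for an assumed utility u, the CE method asks for the sure amount satisfying u(CE) = expected utility of the gamble, while the PE method asks for the win probability that makes a given sure amount indifferent to the gamble. A sketch with a hypothetical square-root utility (all numbers illustrative):

```python
import math

def certainty_equivalent(p, x_lo, x_hi, u, u_inv):
    """CE method: the sure amount whose utility equals the gamble's expected utility."""
    return u_inv(p * u(x_hi) + (1 - p) * u(x_lo))

def probability_equivalent(c, x_lo, x_hi, u):
    """PE method: the win probability making (x_hi, p; x_lo) indifferent to sure c."""
    return (u(c) - u(x_lo)) / (u(x_hi) - u(x_lo))

u = math.sqrt                # hypothetical risk-averse utility over gains
u_inv = lambda v: v * v
ce = certainty_equivalent(0.5, 0.0, 100.0, u, u_inv)   # 25.0, below the EV of 50
pe = probability_equivalent(50.0, 0.0, 100.0, u)       # ~0.707, above 0.5
```

Under a fixed u the two answers are consistent by construction; the response mode bias is the empirical finding that elicited CEs and PEs imply different risk attitudes.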
13.
This paper provides an experimental test of stochastic choice models. Models that admit a Fechnerian structure are tested through repeated pairwise choice problems. The results refute the Fechner hypothesis that the probability of selecting a given prospect increases in how strongly it is preferred to the alternative choices. However, the experimental data support characterizing an individual’s binary choice probability as a scalable function of the von Neumann–Morgenstern utilities in the risky context.
14.
Ilia Tsetlin, Theory and Decision, 2006, 61(1): 51-62
Designing a mechanism that provides a direct incentive for an individual to report her utility function over several alternatives
is a difficult task. A framework for such mechanism design is the following: an individual (a decision maker) is faced with
an optimization problem (e.g., maximization of expected utility), and a mechanism designer observes the decision maker’s action.
The mechanism reveals the individual’s utility truthfully if the mechanism designer, having observed the decision maker’s
action, infers the decision maker’s utilities over several alternatives. This paper studies an example of such a mechanism
and discusses its application to the problem of optimal social choice. Under certain simplifying assumptions about individuals’
utility functions and about how voters choose their voting strategies, this mechanism selects the alternative that maximizes
Harsanyi’s social utility function and is Pareto-efficient.
15.
Peter Bardsley, Theory and Decision, 1993, 34(2): 109-118
In Machina's approach to generalised expected utility theory, decision makers maximise a choice functional which is smooth but not linear in the probabilities. When evaluating small changes, the choice functional can be approximated by the expectation of a local utility function. This local utility function is not however invariant under large changes in risk. This paper gives a simple explicit formula which can be used to write down the local utility functions of some common decision rules.
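The local utility function is the Gateaux derivative of the choice functional toward a point mass. As a hedged numerical illustration (not the paper's formula), take the common mean-variance functional V(F) = μ − kσ² and differentiate it numerically:

```python
def mv_functional(points, probs, k=0.5):
    """Mean-variance choice functional V(F) = mu - k * sigma**2 over a
    discrete distribution F given by points and probabilities."""
    mu = sum(p * x for x, p in zip(points, probs))
    var = sum(p * (x - mu) ** 2 for x, p in zip(points, probs))
    return mu - k * var

def local_utility(x, points, probs, V, t=1e-6):
    """Machina local utility at x, computed as a one-sided numerical
    Gateaux derivative: d/dt V((1 - t) F + t * delta_x) at t = 0."""
    mixed_points = list(points) + [x]
    mixed_probs = [p * (1 - t) for p in probs] + [t]
    return (V(mixed_points, mixed_probs) - V(points, probs)) / t

pts, pr = [0.0, 10.0], [0.5, 0.5]   # base lottery F: mu = 5, variance = 25
```

For this functional the derivative matches the closed form u(x; F) = (1 + 2kμ)(x − μ) − k(x² − E[X²]), giving 5 at x = 10 and −5 at x = 0 for the base lottery above; the explicit dependence on F is exactly the non-invariance the abstract describes.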
16.
We examine risk attitudes under regret theory and derive analytical expressions for two components—the resolution and regret premiums—of the risk premium under regret theory. We posit that regret-averse decision makers are risk seeking (resp., risk averse) for low (resp., high) probabilities of gains and that feedback concerning the foregone option reinforces risk attitudes. We test these hypotheses experimentally and estimate empirically both the resolution premium and the regret premium. Our results confirm the predominance of regret aversion but not the risk attitudes predicted by regret theory; they also clarify how feedback affects attitudes toward both risk and regret.
17.
Jordan Howard Sobel, Theory and Decision, 2009, 66(2): 103-148
There are narrowest bounds for P(h) when P(e) = y and P(h/e) = x, which bounds collapse to x as y goes to 1. A theorem for these bounds – Bounds for Probable Modus Ponens – entails a principle for updating on possibly uncertain evidence subject to these bounds that is a generalization of the
principle for updating by conditioning on certain evidence. This way of updating on possibly uncertain evidence is appropriate
when updating by ‘probability kinematics’ or ‘Jeffrey-conditioning’ is, and apparently in countless other cases as well. A
more complicated theorem due to Karl Wagner – Bounds for Probable Modus Tollens – registers narrowest bounds for P(∼h) when P(∼e) = y and P(e/h) = x. This theorem serves another principle for updating on possibly uncertain evidence that might be termed ‘contraditioning’,
though it is for a way of updating that seems in practice to be frequently not appropriate. It is definitely not a way of
putting down a theory – for example, a random-chance theory of the apparent fine-tuning for life of the parameters of standard
physics – merely on the ground that the theory made extremely unlikely conditions of which we are now nearly certain. These
theorems for bounds and updating are addressed to standard conditional probabilities defined as ratios of probabilities. Adaptations
for Hosiasson-Lindenbaum ‘free-standing’ conditional probabilities are provided. The extended on-line version of this article
(URL: ) includes appendices and expansions of several notes. Appendix A contains demonstrations and confirmations of elements of
those adaptations. Appendix B discusses and elaborates analogues of modus ponens and modus tollens for probabilities and conditional probabilities found in Elliott Sober’s “Intelligent Design and Probability Reasoning.”
Appendix C adds to observations made below regarding relations of Probability Kinematics and updating subject to Bounds for
Probable Modus Ponens.
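The narrowest bounds in the abstract follow from the law of total probability: P(h) = P(h|e)·P(e) + P(h|¬e)·(1 − P(e)), with P(h|¬e) free to range over [0, 1]. A small sketch:

```python
def probable_modus_ponens_bounds(x, y):
    """Narrowest bounds on P(h) given P(h|e) = x and P(e) = y.
    Total probability gives P(h) = x*y + P(h|~e)*(1 - y), and P(h|~e)
    can be anything in [0, 1], so P(h) lies in [x*y, x*y + 1 - y]."""
    return x * y, x * y + (1 - y)

lo, hi = probable_modus_ponens_bounds(0.9, 0.95)   # interval of width 1 - y = 0.05
```

The interval width is 1 − y, so the bounds collapse to x as y goes to 1, recovering updating by conditioning on certain evidence.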
18.
An extensive literature overlapping economics, statistical decision theory and finance, contrasts expected utility (EU) with
the more recent framework of mean–variance (MV). A basic proposition is that MV follows from EU under the assumption of quadratic
utility. A less recognized proposition, first raised by Markowitz, is that MV is fully justified under EU, if and only if
utility is quadratic. The existing proof of this proposition relies on an assumption from EU, described here as “Buridan’s
axiom” after the French philosopher’s fable of the ass that starved out of indifference between two bales of hay. To satisfy
this axiom, MV must represent not only “pure” strategies, but also their probability mixtures, as points in the (σ, μ) plane. Markowitz and others have argued that probability mixtures are represented sufficiently by (σ, μ) only under quadratic utility, and hence that MV, interpreted as a mathematical re-expression of EU, implies quadratic utility.
We prove a stronger form of this theorem, not involving or contradicting Buridan’s axiom, nor any more fundamental axiom of
utility theory.
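The basic proposition, that MV follows from EU under quadratic utility, can be checked directly: with u(x) = x − bx², E[u(X)] = μ − b(σ² + μ²), a function of (σ, μ) alone, so any two lotteries with matching mean and variance receive the same expected utility. A sketch with hypothetical numbers:

```python
def expected_quadratic_utility(points, probs, b=0.01):
    """E[u(X)] for quadratic utility u(x) = x - b*x**2; algebraically this
    equals mu - b*(sigma**2 + mu**2), a function of (sigma, mu) alone."""
    return sum(p * (x - b * x * x) for x, p in zip(points, probs))

# Two different lotteries with identical mean (5) and variance (25).
A = ([0.0, 10.0], [0.5, 0.5])
B = ([-5.0, 5.0, 15.0], [0.125, 0.75, 0.125])
eu_A = expected_quadratic_utility(*A)   # 5 - 0.01 * (25 + 25) = 4.5
eu_B = expected_quadratic_utility(*B)   # identical, despite different supports
```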
19.
20.
Game trees (or extensive-form games) were first defined by von Neumann and Morgenstern in 1944. In this paper we examine the use of game trees for representing Bayesian decision problems. We propose a method for solving game trees using local computation. This method is a special case of a method due to Wilson for computing equilibria in 2-person games. Game trees differ from decision trees in the representations of information constraints and uncertainty. We compare the game tree representation and solution technique with other techniques for decision analysis such as decision trees, influence diagrams, and valuation networks.