Similar Articles
20 similar articles found; search time 31 ms.
1.
Decision making theory in general, and mental models in particular, associate judgment and choice: decision choice follows probability estimates, and errors in choice derive mainly from errors in judgment. In the studies reported here we use the Monty Hall dilemma to illustrate that judgment and choice do not always go together, and that such a dissociation can lead to better decision making. Specifically, we demonstrate that in certain decision problems, exceeding working memory limitations can actually improve decision choice. We show across four experiments that increasing the number of choice alternatives forces people to collapse choices together, resulting in better decision making. While choice performance improves, probability judgments do not change, demonstrating an important dissociation between choice and probability judgments. We propose the Collapsing Choice Theory (CCT), which explains how working memory capacity, probability estimation, choice alternatives, judgment, and regret interact and affect decision quality.
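The effect of adding doors can be checked with a quick simulation. The sketch below is hypothetical code (not the authors' experimental materials): it generalizes the Monty Hall game to n doors, where the host opens all but one of the unchosen doors before the player decides whether to switch.

```python
import random

def monty_hall_trial(n_doors, switch, rng):
    """One round of the n-door Monty Hall game. The host opens all but
    one of the remaining doors, always avoiding the prize, then the
    player may switch to the single unopened door."""
    prize = rng.randrange(n_doors)
    pick = rng.randrange(n_doors)
    if switch:
        # After the host opens n_doors - 2 losing doors, switching wins
        # exactly when the original pick was not the prize.
        return pick != prize
    return pick == prize

def win_rate(n_doors, switch, trials=100_000, seed=0):
    rng = random.Random(seed)
    wins = sum(monty_hall_trial(n_doors, switch, rng) for _ in range(trials))
    return wins / trials

# Switching wins with probability (n-1)/n; staying wins with 1/n.
print(win_rate(3, switch=True))    # ≈ 2/3
print(win_rate(100, switch=True))  # ≈ 0.99
```

With many doors the advantage of collapsing the unchosen alternatives into a single switch option becomes overwhelming, which is the probability structure behind the abstract's claim.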

2.
We revisit the Nowak (Int J Game Theory 26:137–141, 1997) characterization of the Banzhaf value via 2-efficiency, the Dummy player axiom, symmetry, and marginality. In particular, we provide a brief proof that also works within the classes of superadditive games and of simple games. Within the intersection of these classes, one can even drop marginality. Further, we show that marginality and symmetry can be replaced by van den Brink fairness/differential marginality. For this axiomatization, 2-efficiency can be relaxed into superadditivity on the full domain of games.
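For concreteness, the Banzhaf value itself can be computed by brute force as each player's average marginal contribution over all coalitions of the other players. The weighted majority game below is an illustrative example, not one from the paper.

```python
from itertools import combinations

def banzhaf(players, v):
    """Banzhaf value: player i's marginal contribution v(S ∪ {i}) - v(S),
    averaged over all 2^(n-1) coalitions S not containing i."""
    n = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                total += v(frozenset(S) | {i}) - v(frozenset(S))
        values[i] = total / 2 ** (n - 1)
    return values

# A simple (weighted majority) game: quota 3, weights {1: 2, 2: 1, 3: 1}.
weights = {1: 2, 2: 1, 3: 1}
v = lambda S: 1.0 if sum(weights[p] for p in S) >= 3 else 0.0
print(banzhaf([1, 2, 3], v))  # {1: 0.75, 2: 0.25, 3: 0.25}
```

The game is both simple and superadditive, so it lies in the intersection of classes discussed in the abstract.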

3.
We study, from the standpoint of coherence, comparative probabilities on an arbitrary family E of conditional events. Given a binary relation ⪯, coherence conditions on ⪯ are related to de Finetti's coherent betting system: we consider their connections to the usual properties of comparative probability and to the possibility of numerical representations of ⪯. In this context, the numerical reference frame is that of de Finetti's coherent subjective conditional probability, which is not introduced (as in Kolmogorov's approach) through a ratio between probability measures. Another relevant feature of our approach is that the family E need not have any particular algebraic structure, so that the ordering can be initially given for a few conditional events of interest and then possibly extended by a step-by-step procedure, preserving coherence.

4.
Özkal-Sanver (Theory Decis 59:193–205, 2005) studies the stability and efficiency of partitions of agents in two-sided matching markets in which agents can form partitions by individual moves only, and a matching rule determines the matching in each coalition of a partition. In this study, we analyze the relationship between stability and efficiency of partitions for several matching rules and under various membership property rights codes, now allowing coalitional moves.

5.
What determines risk attraction or aversion? We experimentally examine three factors: the gain-loss dichotomy, the probabilities (0.2 vs. 0.8), and the money at risk (seven amounts). We find that the majority display risk attraction for small amounts of money and risk aversion for larger amounts, yet the frequency of risk attraction varies with the gain-loss dichotomy and the probabilities. Kahneman and Tversky studied gain-loss reflections. We submit that a reflection can be decomposed into a translation and a probability switch. We find significant translation and switch effects of comparable magnitude, a result equidistant from the two diverging popular views inspired by Prospect Theory: the gain-loss asymmetry and the fourfold pattern.
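The claimed decomposition is easy to make concrete for binary lotteries. In this illustrative sketch (the stake of 7 is a hypothetical amount, not data from the study), a gain-loss reflection is produced by a translation of the outcomes followed by a switch of the two probabilities.

```python
def translate(lottery, t):
    """Shift every outcome by t, keeping probabilities fixed."""
    return [(x + t, p) for x, p in lottery]

def switch(lottery):
    """Swap the probabilities of the two outcomes of a binary lottery."""
    (x1, p1), (x2, p2) = lottery
    return [(x1, p2), (x2, p1)]

# Reflection of "win 7 with probability 0.2, else 0" is
# "lose 7 with probability 0.2, else 0".
gain = [(7, 0.2), (0, 0.8)]
step1 = translate(gain, -7)   # [(0, 0.2), (-7, 0.8)] — translation
reflected = switch(step1)     # [(0, 0.8), (-7, 0.2)] — probability switch
print(reflected)
```

Running the two steps separately is what lets the translation effect and the switch effect be measured independently, as the abstract describes.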

6.
An efficient method of value assessment of a set of exchangeable alternatives A = {a_1, a_2, …, a_n} is presented. It particularly applies to situations where certain preferences may be easily evaluated or are already known, while other binary comparisons may not at once be available. Further applications are to ranking partial tournaments and to the emergence and characterisation of organisational hierarchy. By sequentially performing transitively efficient assessments of uncompared pairs, an initial weakly acyclical preference structure on A is transformed into an ordering of A in echelons. We call these nicely surveyable preference structures echelon orders. Theoretical properties of echelon orders are investigated, including a characterisation and a numerical representation.

7.
Sacrificing Civil Liberties to Reduce Terrorism Risks
Our survey results demonstrate that targeted screening of airline passengers raises conflicting concerns of efficiency and equity. Support for profiling increases if there is a substantial reduction in avoided delays to other passengers. The time cost and benefit components of targeting affect support for targeted screening in an efficiency-oriented manner. Nonwhite respondents are more reluctant than whites to support targeting or to be targeted. Terrorism risk assessments are highly diffuse, reflecting considerable risk ambiguity. People fear highly severe worst case terrorism outcomes, but their best estimates of the risk are more closely related to their lower bound estimates than their upper bound estimates. Anomalies evident in other risk perception contexts, such as hindsight biases and embeddedness effects, are particularly evident for terrorism risk beliefs.

8.
Cumulative Prospect Theory (CPT) does not explain the St. Petersburg Paradox. We show that the solutions based on probability weighting proposed to resolve this paradox (Blavatskyy, Management Science 51:677–678, 2005; Rieger and Wang, Economic Theory 28:665–679, 2006) face limitations: in that framework, CPT fails to accommodate both gambling and insurance behavior. We suggest replacing the weighting functions generally proposed in the literature with a specification that respects the following properties: (1) to solve the paradox, the slope at zero must be finite; (2) to account for the fourfold pattern of risk attitudes, the probability weighting must be sufficiently strong.
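The role of the slope at zero can be illustrated numerically. The sketch below is not the authors' specification: the Prelec weighting, the power value function, and the alternative weighting w(p) = (1 − e^(−3p))/(1 − e^(−3)) (which is monotone with a finite slope at zero) are stand-in choices, used only to show how truncated CPT values of the St. Petersburg lottery behave as more terms are included.

```python
import math

def cpt_value(value_fn, weight_fn, K):
    """CPT value of the truncated St. Petersburg lottery (win 2^k with
    probability 2^-k, k = 1..K), with rank-dependent decision weights
    taken on decumulative probabilities."""
    total = 0.0
    for k in range(1, K + 1):
        # P(X >= 2^k) = 2^-(k-1),  P(X > 2^k) = 2^-k
        pi_k = weight_fn(2.0 ** -(k - 1)) - weight_fn(2.0 ** -k)
        total += pi_k * value_fn(2.0 ** k)
    return total

v = lambda x: x ** 0.88  # a power value function (stand-in parameter)
prelec = lambda p: math.exp(-(-math.log(p)) ** 0.65) if p > 0 else 0.0
finite = lambda p: (1 - math.exp(-3 * p)) / (1 - math.exp(-3))  # finite slope at 0

# Under the Prelec weighting the truncated values keep growing with K
# (the paradox survives); with a finite slope at zero they level off.
for K in (10, 20, 40):
    print(K, cpt_value(v, prelec, K), cpt_value(v, finite, K))
```

The intuition matches property (1): with a finite slope at zero, the decision weight of the k-th tail outcome shrinks like 2^(−k), fast enough to dominate the subunitary power growth of the value function.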

9.
We interpret solution rules on a class of simple allocation problems as data on the choices of a policy maker. We analyze conditions under which the policy maker's choices are (i) rational, (ii) transitive-rational, and (iii) representable; that is, they coincide with maximization of a (i) binary relation, (ii) transitive binary relation, or (iii) numerical function on the allocation space. Our main results are as follows: (i) a well-known property, contraction independence (a.k.a. IIA), is equivalent to rationality; (ii) every contraction independent and other-c monotonic rule is transitive-rational; and (iii) every contraction independent and other-c monotonic rule, if additionally continuous, can be represented by a numerical function.

10.
In binary choice between discrete outcome lotteries, an individual may prefer lottery L1 to lottery L2 when the probability that L1 delivers a better outcome than L2 is higher than the probability that L2 delivers a better outcome than L1. Such a preference can be rationalized by three standard axioms (solvability, convexity and symmetry) and one less standard axiom (a fanning-in). A preference for the most probable winner can be represented by a skew-symmetric bilinear utility function. Such a utility function has the structure of a regret theory when lottery outcomes are perceived as ordinal and the assumption of regret aversion is replaced with a preference for a win. The empirical evidence supporting the proposed system of axioms is discussed.
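The rule itself is straightforward to compute. The sketch below implements it for independent discrete lotteries; the three-lottery example (the classic intransitive dice, chosen here for illustration and not taken from the paper) shows that the most-probable-winner preference can cycle, which is why a skew-symmetric bilinear representation rather than an ordinary utility function is needed.

```python
from itertools import product

def prob_beats(L1, L2):
    """P(L1 delivers a strictly better outcome than L2), for independent
    discrete lotteries given as lists of (outcome, probability) pairs."""
    return sum(p1 * p2
               for (x1, p1), (x2, p2) in product(L1, L2)
               if x1 > x2)

def prefers(L1, L2):
    """Most-probable-winner rule: L1 is preferred when it is more likely
    to win the comparison than to lose it."""
    return prob_beats(L1, L2) > prob_beats(L2, L1)

# Three uniform lotteries over die faces.
A = [(x, 1/3) for x in (2, 4, 9)]
B = [(x, 1/3) for x in (1, 6, 8)]
C = [(x, 1/3) for x in (3, 5, 7)]
print(prefers(A, B), prefers(B, C), prefers(C, A))  # True True True — a cycle
```

Each pairwise win probability in the example is 5/9, so every comparison has a strict most probable winner, yet no lottery is best overall.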

11.
A theory of coarse utility

12.
We examine differences in the value of statistical life (VSL) across potential wage levels in panel data using quantile regressions with intercept heterogeneity. Latent heterogeneity is econometrically important and affects the estimated VSL. Our findings indicate that a reasonable average cost per expected life saved cut-off for health and safety regulations is $7 million to $8 million per life saved, but the VSL varies considerably across the labor force. Our results reconcile previous discrepancies between hedonic VSL estimates and the values implied by theories linked to the coefficient of relative risk aversion. Because the VSL varies elastically with income, regulatory agencies should regularly update the VSL used in benefit assessments, increasing it proportionally with changes in income over time.

13.
Framing, probability distortions, and insurance decisions
A series of studies examines whether certain biases in probability assessments and perceptions of loss, previously found in experimental studies, affect consumers' decisions about insurance. Framing manipulations lead the consumers studied here to make hypothetical insurance-purchase choices that violate basic laws of probability and value. Subjects exhibit distortions in their perception of risk and framing effects in evaluating premiums and benefits. Illustrations from insurance markets suggest that the same effects occur when consumers make actual insurance purchases. Presented at the Conference on Making Decisions about Liability and Insurance, The Wharton School, University of Pennsylvania, Philadelphia, PA, 6–7 December 1991. This research was supported by National Science Foundation Grant SES88-09299. The authors thank Jon Baron, Colin Camerer, Neil Doherty, Paul Kleindorfer, Amos Tversky, and two anonymous referees for many helpful comments. We particularly acknowledge the efforts of Matthew Robinson and Penny Pollister for their help with data analysis.

14.
Schematic conflict occurs when evidence is interpreted in different ways (for example, by different people, who have learned to approach the given evidence with different schemata). Such conflicts are resolved either by weighting some schemata more heavily than others, or by finding common-ground inferences for several schemata, or by a combination of these two processes. Belief functions, interpreted as representations of evidence strength, provide a natural model for weighting schemata, and can be utilized in several distinct ways to compute common-ground inferences. In two examples, different computations seem to be required for reasonable common-ground inference. In the first, competing scientific theories produce distinct, logically independent inferences based on the same data. In this example, the simple product of the competing belief functions is a plausible evaluation of common ground. In the second example (sensitivity analysis), the conflict is among alternative statistical assumptions. Here, a product of belief functions will not do, but the upper envelope of normalized likelihood functions provides a reasonable definition of common ground. Different inference contexts thus seem to require different methods of conflict resolution. A class of such methods is described, and one characteristic property of this class is proved.

15.
Rawling, Piers. Theory and Decision 43(3): 253–277 (1997).
The two envelopes problem has generated a significant number of publications (I have benefitted from reading many of them, only some of which I cite; see the epilogue for a historical note). Part of my purpose here is to provide a review of previous results (with somewhat simpler demonstrations). In addition, I hope to clear up what I see as some misconceptions concerning the problem. Within a countably additive probability framework, the problem illustrates a breakdown of dominance with respect to infinite partitions in circumstances of infinite expected utility. Within a probability framework that is only finitely additive, there are failures of dominance with respect to infinite partitions in circumstances of bounded utility with finitely many consequences (see the epilogue).

16.
When developing community mental health services to support people with psychiatric disabilities, European countries are advocating evidence-based practice (EBP). Individual Placement and Support (IPS) is an evidence-based model designed to support people in acquiring and maintaining competitive employment. Implementation science is a growing research field focused on the components that affect the process of implementing EBP programmes. In this multiple case study, we followed three IPS demonstration sites for two years in order to describe and analyze barriers to and facilitators of implementation, according to the constructs described in the Consolidated Framework for Implementation Research (Damschroder et al. 2009). The results highlight the importance of strategic networking, as well as the need for planning and preparation before the start of an EBP programme, since deficiencies related to these constructs are difficult to compensate for.

17.
The Value of a Probability Forecast from Portfolio Theory
A probability forecast scored ex post using a probability scoring rule (e.g. Brier) is analogous to a risky financial security. With only superficial adaptation, the same economic logic by which securities are valued ex ante – in particular, portfolio theory and the capital asset pricing model (CAPM) – applies to the valuation of probability forecasts. Each available forecast of a given event is valued relative to each other and to the “market” (all available forecasts). A forecast is seen to be more valuable the higher its expected score and the lower the covariance of its score with the market aggregate score. Forecasts that score highly in trials when others do poorly are appreciated more than those with equal success in “easy” trials where most forecasts score well. The CAPM defines economically rational (equilibrium) forecast prices at which forecasters can trade shares in each other’s ex post score – or associated monetary payoff – thereby balancing forecast risk against return and ultimately forming optimally hedged portfolios. Hedging this way offers risk averse forecasters an “honest” alternative to the ruse of reporting conservative probability assessments.

20.
In 2010, in an article in this journal, I argued that declassified documents implicated Central Intelligence Agency (CIA) physicians in the conduct of unethical research on enhanced interrogation using detainee subjects. The focus, then as now, is upon physicians at the Office of Medical Services (OMS). The 2010 article highlighted the heavily redacted “Draft OMS Guidelines on Medical and Psychological Support to Detainee Interrogations” (the Draft). This commentary focuses upon the recently declassified final version of that document revealing further culpable evidence of unethical human subject research. The commentary locates that unethical research in historical context and the development of the Nuremberg Code. The commentary also locates enhanced interrogation in contemporary political context and considers how to hold OMS physicians accountable for the conduct of unethical human research using detainee subjects.

