Similar articles
 Found 20 similar articles (search time: 625 ms)
1.
2.
There exists no completely satisfactory theory of risk attitude in current normative decision theories. Existing notions confound attitudes to pure risk with unrelated psychological factors such as strength of preference for certain outcomes, and probability weighting. In addition, traditional measures of risk attitude frequently cannot be applied to non-numerical consequences, and are not psychologically intuitive. I develop Pure Risk theory which resolves these problems – it is consistent with existing normative theories, and both internalises and generalises the intuitive notion of risk being related to the probability of not achieving one’s aspirations. Existing models which ignore pure risk attitudes may be misspecified, and effects hitherto modelled as loss aversion or utility curvature may be due instead to Pure Risk attitudes.
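The central intuition here, that risk relates to the probability of not achieving one's aspirations, can be sketched numerically. The lottery and aspiration level below are invented for illustration; the paper's formal theory is not reproduced:

```python
# Illustrative sketch: "pure risk" as the probability of falling short
# of an aspiration level. The lottery and aspiration value are invented
# for illustration only.

def pure_risk(lottery, aspiration):
    """Probability that the realized outcome falls below the aspiration."""
    return sum(p for outcome, p in lottery if outcome < aspiration)

# A lottery as (outcome, probability) pairs.
lottery = [(0, 0.1), (50, 0.4), (100, 0.5)]

print(pure_risk(lottery, aspiration=50))  # probability of ending below 50
```

Note that this measure applies equally to non-numerical consequences, as long as "falls short of the aspiration" is well defined for each outcome.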

3.
In this paper, the empirical performance of several preference functionals is assessed using individual and group experimental data. We investigate whether there is a risky choice theory that fits group decisions better than alternative theories, and whether there are significant differences between individual and group choices. Experimental findings reported in this paper provide answers to both of those questions, showing that expected utility gains a “winning” position over higher-level functionals (we considered disappointment aversion and two variants of rank-dependent utility) when risky choices are undertaken by individuals as well as by small groups. However, in the group experiment, the alternatives (and, most notably, disappointment aversion) improve their relative performance, a fact that hints at the existence of differences between individual and group choices. We interpret this result as evidence that feelings like disappointment aversion become stronger in group decisions.

4.
Common ratio effects should be ruled out if subjects’ preferences satisfy compound independence, reduction of compound lotteries, and coalescing. In other words, at least one of these axioms should be violated in order to generate a common ratio effect. Relying on a simple experiment, we investigate which failure of these axioms is concomitant with the empirical observation of common ratio effects. We observe that compound independence and reduction of compound lotteries hold, whereas coalescing is systematically violated. This result provides support for theories which explain the common ratio effect by violations of coalescing (i.e., configural weight theory) instead of violations of compound independence (i.e., rank-dependent utility or cumulative prospect theory).
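The common ratio structure can be made concrete with Allais-style lotteries (the payoffs below are illustrative, not the paper's stimuli). Under expected utility, scaling both winning probabilities by a common ratio cannot reverse the ranking, which is why an observed reversal must violate at least one of the axioms:

```python
# Sketch of the common ratio effect (illustrative payoffs). Under
# expected utility, multiplying both winning probabilities by a common
# factor k preserves the EU ordering of the two lotteries.

def expected_utility(lottery, u):
    return sum(p * u(x) for x, p in lottery)

def scale(lottery, k):
    """Multiply each winning probability by k; the remainder goes to 0."""
    scaled = [(x, k * p) for x, p in lottery if x != 0]
    scaled.append((0, 1 - sum(p for _, p in scaled)))
    return scaled

u = lambda x: x ** 0.5          # any fixed utility function

A = [(3000, 1.0)]               # sure thing
B = [(4000, 0.8), (0, 0.2)]     # risky option

# Scale winning probabilities by the "common ratio" k = 0.25.
A2, B2 = scale(A, 0.25), scale(B, 0.25)

# EU ranks A over B iff it ranks A2 over B2; the empirically observed
# reversal (A over B, but B2 over A2) is the common ratio effect.
print(expected_utility(A, u) > expected_utility(B, u),
      expected_utility(A2, u) > expected_utility(B2, u))
```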

5.
The random preference, Fechner (or white noise), and constant error (or tremble) models of stochastic choice under risk are compared. Various combinations of these approaches are used with expected utility and rank-dependent theory. The resulting models are estimated in a random effects framework using experimental data from two samples of 46 subjects who each faced 90 pairwise choice problems. The best fitting model uses the random preference approach with a tremble mechanism, in conjunction with rank-dependent theory. As subjects gain experience, trembles become less frequent and there is less deviation from behaviour consistent with expected utility theory.
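The constant error ("tremble") mechanism can be sketched in isolation. The tremble rate and the deterministic preference below are placeholders, not estimates from the paper:

```python
import random

# Sketch of a constant-error ("tremble") choice model: with probability
# (1 - tremble) the agent chooses according to an underlying preference,
# and with probability tremble picks an option at random. The tremble
# rate here is an arbitrary illustration, not an estimate from the paper.

def choose(prefers_a: bool, tremble: float, rng: random.Random) -> str:
    if rng.random() < tremble:
        return rng.choice(["A", "B"])      # pure noise
    return "A" if prefers_a else "B"       # deterministic core

rng = random.Random(0)
tremble = 0.1
choices = [choose(True, tremble, rng) for _ in range(10_000)]

# Predicted P(choose A) = (1 - tremble) + tremble / 2 = 0.95
print(choices.count("A") / len(choices))
```

In the random preference variant, the deterministic core would itself be drawn from a distribution over preference functionals rather than fixed, which is the combination the paper finds fits best.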

6.
A substantial body of empirical evidence shows that individuals overweight extreme events and act in conflict with the expected utility theory. These findings were the primary motivation behind the development of a rank-dependent utility theory for choice under uncertainty. The purpose of this paper is to demonstrate that some simple empirical rules of thumb for choice under uncertainty are consistent with the rank-dependent utility theory.
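Rank-dependent utility itself can be sketched in a few lines: outcomes are ranked and receive decision weights given by increments of a probability weighting function applied to cumulative probabilities. The inverse-S weighting form and parameter below are one common choice (of the Tversky–Kahneman form), used here purely for illustration:

```python
# Sketch of rank-dependent utility (RDU) for a gamble over gains.
# Outcomes are ranked best to worst; each gets a decision weight equal
# to an increment of the weighting function w over cumulative
# probabilities. This is what lets RDU overweight extreme events.
# The weighting form and parameter below are illustrative.

def w(p, gamma=0.61):
    """Inverse-S probability weighting function (Tversky-Kahneman form)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def rdu(lottery, u=lambda x: x):
    ranked = sorted(lottery, key=lambda xp: xp[0], reverse=True)
    total, cum = 0.0, 0.0
    for x, p in ranked:
        total += u(x) * (w(cum + p) - w(cum))  # decision weight
        cum += p
    return total

lottery = [(100, 0.05), (10, 0.95)]
print(rdu(lottery))                     # rare best outcome is overweighted
print(sum(p * x for x, p in lottery))   # expected value, for comparison
```

With linear utility, the RDU value exceeds the expected value here precisely because the small probability of the extreme outcome receives a decision weight larger than 0.05.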

7.
A number of classical as well as quite new utility representations for gains are explored with the aim of understanding the behavioral conditions that are necessary and sufficient for various subfamilies of successively stronger representations to hold. Among the utility representations are: ranked additive, weighted, rank-dependent (which includes cumulative prospect theory as a special case), gains decomposition, subjective expected, and independent increments*, where * denotes something new in this article. Among the key behavioral conditions are: idempotence, general event commutativity*, coalescing, gains decomposition, and component summing*. The structure of relations is sufficiently simple that certain key experiments are able to exclude entire classes of representations. For example, the class of rank-dependent utility models is very likely excluded because of empirical results about the failure of coalescing. Figures 1–3 summarize some of the primary results. JEL Classification: D46, D81

8.
9.
Objective. Despite the rich discussions about the role of information disclosure programs in environmental policy, our theoretical understanding of how and why information disclosure programs work lacks a clear framework. This article begins to fill that void by laying out some fundamental theories and concepts that underlie the empirical work on the subject. Methods. Basic theories arising from our knowledge of economics, psychology, and politics are compared. Previous research is analyzed with these theories in mind. Results. Research results confirm the plausibility of each of these theories, though the most compelling evidence so far suggests that shock and shame are key motivating factors for improved environmental performance by industry. Conclusions. The argument is made that our theoretical foundations must be understood better if we are to make sense of the empirical work on the subject. Policy implications are addressed.

10.
The paper considers the conjecture that forecasts from preferred economic models or theories d-separate forecasts from less preferred models or theories from the actual realization of the variable for which a scientific explanation is sought. D-separation provides a succinct notion to represent forecast dominance of one set of forecasts over another; it provides, as well, a criterion for model preference as a fundamental device for progress in economic science. We demonstrate these ideas with examples from three areas of economic modeling.

11.
Empirical evidence from both utility and psychophysical experiments suggests that people respond quite differently—perhaps discontinuously—to stimulus pairs when one consequence or signal is set to ‘zero’. Such stimuli are called unitary. The author's earlier theories assumed otherwise. In particular, the key property of segregation relating gambles and joint receipts (or presentations) involves unitary stimuli. Also, the representation of unitary stimuli was assumed to be separable (i.e., multiplicative). The theories developed here do not invoke separability. Four general cases based on two distinctions are explored. The first distinction is between commutative joint receipts, which are relevant to utility, and the non-commutative ones, which are relevant to psychophysics. The second distinction concerns how stimuli of the form (x, C; y) and the operation of joint receipt are linked: by segregation, which mixes such stimuli and unitary ones, and by distributivity, which does not involve any unitary stimuli. A class of representations more general than rank-dependent utility (RDU) is found in which monotonic functions of increments U(x)-U(y), where U is an order preserving representation of gambles, and joint receipt play a role. This form and its natural generalization to gambles with n > 2 consequences, which is also axiomatized, appear to encompass models of configural weights and decision affect. When joint receipts are not commutative, somewhat similar representations of stimuli arise, and joint receipts are shown to have a conjoint additive representation; in some cases a constant bias independent of signal intensity is predicted.

12.
This paper introduces the likelihood method for decision under uncertainty. The method allows the quantitative determination of subjective beliefs or decision weights without invoking additional separability conditions, and generalizes the Savage–de Finetti betting method. It is applied to a number of popular models for decision under uncertainty. In each case, preference foundations result from the requirement that no inconsistencies are to be revealed by the version of the likelihood method appropriate for the model considered. A unified treatment of subjective decision weights results for most of the decision models popular today. Savage’s derivation of subjective expected utility can now be generalized and simplified. In addition to the intuitive and empirical contributions of the likelihood method, we provide a number of technical contributions: We generalize Savage’s nonatomicity condition (“P6”) and his assumption of (sigma) algebras of events, while fully maintaining his flexibility regarding the outcome set. Derivations of Choquet expected utility and probabilistic sophistication are generalized and simplified similarly. The likelihood method also reveals a common intuition underlying many other conditions for uncertainty, such as definitions of ambiguity aversion and pessimism.

13.
This paper is about satisficing behaviour. Rather tautologically, this is when decision-makers are satisfied with achieving some objective, rather than with obtaining the best outcome. The term was coined by Simon (Q J Econ 69:99–118, 1955), and has stimulated many discussions and theories. Prominent amongst these theories are models of incomplete preferences, models of behaviour under ambiguity, theories of rational inattention, and search theories. Most of these, however, seem to lack an answer to at least one of two key questions: when should the decision-maker (DM) satisfice; and how should the DM satisfice. In a sense, search models answer the latter question (in that the theory tells the DM when to stop searching), but not the former; moreover, usually the question as to whether any search at all is justified is left to a footnote. A recent paper by Manski (Theory Decis. doi: 10.1007/s11238-017-9592-1, 2017) fills the gaps in the literature and answers the questions: when and how to satisfice? He achieves this by setting the decision problem in an ambiguous situation (so that probabilities do not exist, and many preference functionals can therefore not be applied) and by using the Minimax Regret criterion as the preference functional. The results are simple and intuitive. This paper reports on an experimental test of his theory. The results show that some of his propositions (those relating to the ‘how’) appear to be empirically valid while others (those relating to the ‘when’) are less so.
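The Minimax Regret criterion itself is standard and can be sketched over a small payoff table. The actions, states, and payoffs below are invented; they do not come from Manski's model or the experiment:

```python
# Sketch of the Minimax Regret criterion under ambiguity (no
# probabilities over states). The payoff table is invented.

# payoffs[action][state]
payoffs = {
    "satisfice": [60, 60, 60],   # safe action meeting the aspiration
    "optimize":  [100, 10, 50],  # best in one state, poor in others
}

states = range(3)
best_in_state = [max(payoffs[a][s] for a in payoffs) for s in states]

def max_regret(action):
    """Worst-case regret of an action across all states."""
    return max(best_in_state[s] - payoffs[action][s] for s in states)

# Choose the action whose worst-case regret is smallest.
choice = min(payoffs, key=max_regret)
print(choice, {a: max_regret(a) for a in payoffs})
```

The criterion needs only the payoff table, not state probabilities, which is why it remains applicable in the ambiguous setting the paper studies.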

14.
A hybrid preference framework is proposed for strategic conflict analysis to integrate preference strength and preference uncertainty into the paradigm of the graph model for conflict resolution (GMCR) under multiple decision makers. This structure offers decision makers a more flexible mechanism for preference expression, which can include strong or mild preference of one state or scenario over another, as well as equal preference. In addition, preference between two states can be uncertain. The result is a preference framework that is more general than existing models which consider preference strength and preference uncertainty separately. Within the hybrid preference structure, four kinds of stability are defined as solution concepts, along with a post-stability analysis, called status quo analysis, which can be used to track the evolution of a given conflict. Algorithms are provided for implementing the key inputs of stability analysis and status quo analysis within the extended preference structure. The new stability concepts under the hybrid preference structure can be used to model complex strategic conflicts arising in practical applications, and can provide new insights into the conflicts. The method is illustrated using the conflict over proposed bulk water exports from Lake Gisborne in Newfoundland, Canada.

15.
16.
Coalescing, Event Commutativity, and Theories of Utility
Preferences satisfying rank-dependent utility exhibit three necessary properties: coalescing (forming the union of events having the same consequence), status-quo event commutativity, and rank-dependent additivity. The major result is that, under a few additional, relatively non-controversial, necessary conditions on binary gambles and assuming mappings are onto intervals, the converse is true. A number of other utility representations are checked for each of these three properties (see Table 2, Section 7).
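Coalescing, the first of the three properties, is easy to state concretely: branches of a gamble that share the same consequence are merged by summing their probabilities. A minimal sketch, with an invented gamble:

```python
# Sketch of coalescing: two branches of a gamble with the same
# consequence are merged by adding their probabilities. Theories that
# violate coalescing treat the split and merged forms differently.

def coalesce(lottery):
    merged = {}
    for x, p in lottery:
        merged[x] = merged.get(x, 0.0) + p
    return sorted(merged.items())

split    = [(100, 0.1), (100, 0.1), (0, 0.8)]   # "split" form
combined = coalesce(split)                      # "coalesced" form

print(combined)   # [(0, 0.8), (100, 0.2)]
```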

17.
This paper extends the existing literature concerning the relationship between two-parameter decision models and those based on expected utility in two main directions. The first relaxes Meyer's location and scale (or Sinn's linear class) condition and shows that a two-parameter representation of preferences over uncertain prospects and the expected utility representation yield consistent rankings of random variables when the decision maker's choice set is restricted to random variables differing by mean shifts and monotone mean-preserving spreads. The second shows that the rank-dependent expected utility model is also consistent with two-parameter ranking methods if the probability transform satisfies certain dominance conditions. The main implication of these results is that the simple two-parameter model can be used to analyze the comparative statics properties of a wide variety of economic models, including those with multiple sources of uncertainty when the random variables are comonotonic. To illustrate this point, we apply our results to the problem of optimal portfolio investment with random initial wealth. We find that it is relatively easy to obtain strong global comparative statics results even if preferences do not satisfy the independence axiom.
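The role of mean-preserving spreads can be illustrated numerically: a mean-preserving spread leaves the mean fixed but lowers expected utility for any concave utility function, which is the kind of ranking consistency the two-parameter model exploits. The lotteries and utility below are invented for illustration:

```python
# Sketch: a mean-preserving spread keeps the mean unchanged but lowers
# expected utility under any concave utility function. Lotteries and
# the utility function are illustrative, not from the paper.

def mean(lottery):
    return sum(p * x for x, p in lottery)

def expected_utility(lottery, u):
    return sum(p * u(x) for x, p in lottery)

base   = [(50, 1.0)]               # degenerate lottery, mean 50
spread = [(20, 0.5), (80, 0.5)]    # a mean-preserving spread of it

u = lambda x: x ** 0.5             # a concave utility

print(mean(base), mean(spread))    # identical means
print(expected_utility(base, u) > expected_utility(spread, u))
```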

18.
The research into the typical behavioral pattern, motivational structure, and the value system of psychopaths can shed light on at least three aspects related to the analysis of moral agency. First, it can help elucidate the emotive and cognitive conditions necessary for moral performance. Secondly, it can provide empirical evidence supporting the externalist theories of moral motivation. Finally, it can bring into greater focus our intuitive notion of the limits of moral responsibility. In this paper I shall concentrate on the last one—the question of responsibility of the amoralists, but the discussion will have an indirect bearing on the other two themes as well. My main reason for holding psychopaths morally responsible breaks down into two claims: the assumption that most ordinary people are morally responsible for their intentional actions (i.e., the rejection of hard determinism) and the denial that the psychopaths are qualitatively different from the non-psychopaths. This thesis is further defended against two objections. First, I argue that the genetically based emotive deficiency of the psychopaths cannot be seen as the factor condemning them to amoral existence, and thus cannot be cited as an exempting condition. Secondly, my position is defended against the claim that psychopaths are partly responsible for their actions. It is argued that the notion of partial responsibility is either incoherent or else rests on a false empirical premise. My conclusion is in agreement with, and provides a theoretical justification for, the position of most classifications of the persons with antisocial personality disorder in the DSM-IV.

19.
Summary and Conclusion. The present paper addresses the problem of comparing models of payoff disbursement in coalition formation studies which make point, line, or area predictions. A satisfactory solution to this problem is critical for model comparison, which has been the major focus of research on coalition forming behavior during the last decade. The goal of this paper is to devise and subsequently apply a test procedure which, in comparing the models to each other, offsets the advantage that the less specific model has over its competitor. In addition, the test procedure should employ measures of error which yield intuitive results and are consistent with the principles underlying present coalition theories. It was contended that both the error measure of Bonacich and the net rate of success of Selten and Krischker suffer from serious deficiencies. Bonacich's approach allows for degrees of confirmation of a model but employs an index of error which yields counterintuitive results. The approach of Selten and Krischker also defies intuition and common practice by treating all payoff vectors that do not fall in the model's prediction set in exactly the same manner. The test procedure proposed in the present paper allows the prediction set of a model to expand uniformly in all the directions (dimensions) of the outcome space until it encompasses all the observed payoff vectors which lie in this space. In doing so it generates a function, called a support function, which relates the cumulative proportion of observed payoff vectors within the expanded set of predictions to the relative size of this set.
By comparing to each other the cumulative proportions for two different models when the relative sizes of their expanded prediction sets are held equal, the procedure offsets the advantage possessed by the less specific model which initially prescribes a larger or more dispersed prediction set. Like the index of error E devised by Bonacich, the procedure proposed in the present paper incorporates the intuitive idea that different outcomes differentially confirm a theory if they are not contained in its prediction set. Error is allowed to be continuous even if the theory under consideration is algebraic. Statistical tests of algebraic theories in other areas of psychology are almost always based on this assumption. The procedure also incorporates the shortest rather than the mean squared distance between a payoff vector and a set of predicted payoff vectors as the appropriate measure of error. The shortest distance is appropriate because coalition theories are mute with respect to the degree of importance, representativeness, or 'typicality' of the predictions they make. The procedure seems to yield satisfactory results. When applied to the two studies by Rapoport and Kahan (1976) and Kahan and Rapoport (1980), it has not favored models making line predictions over models making point predictions. It has established either strong or weak dominance relations between all pairs of models tested in these two studies. And it has confirmed the major conclusions of the two studies, which had been originally reached by less rigorous tests of a smaller number of models.
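The expansion procedure described above can be sketched as follows: grow the prediction set by a radius r and record the proportion of observed payoff vectors whose shortest distance to the set is at most r. All vectors below are invented; the actual studies use coalition payoff data:

```python
import math

# Sketch of the support-function idea: expand the prediction set by a
# radius r and record the cumulative proportion of observed payoff
# vectors lying within shortest distance r of the set. Data invented.

def shortest_distance(obs, prediction_set):
    """Shortest Euclidean distance from an observed vector to the set."""
    return min(math.dist(obs, pred) for pred in prediction_set)

# A point-prediction model (single predicted payoff vector) and some
# observed three-player payoff vectors.
predictions = [(40.0, 35.0, 25.0)]
observed = [(42.0, 33.0, 25.0), (50.0, 30.0, 20.0), (40.0, 35.0, 25.0)]

def support(r):
    """Proportion of observations within distance r of the prediction set."""
    hits = sum(shortest_distance(o, predictions) <= r for o in observed)
    return hits / len(observed)

for r in (0.0, 3.0, 13.0):
    print(r, support(r))
```

Holding the expanded set sizes of two models equal and comparing their support values is what offsets the advantage of the less specific model.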

20.
ABSTRACT: In order to delimit the realm of social phenomena, sociologists refer implicitly or explicitly to a distinction between living human beings and other entities; that is, sociologists equate the social world with the world of living humans. This consensus has been questioned by only a few authors, such as Luckmann, and some scholars of science studies. According to these approaches, it would be ethnocentric to treat as self-evident the premise that only living human beings can be social actors. The methodological consequence of such critique is a radical deanthropologization of sociological research. It must be considered an open question whether or not only living humans can be social actors. The paper starts with a discussion of the methodological problems posed by such an analysis of the borders of the social world, and presents the results of an empirical analysis of these borders in the fields of intensive care and neurological rehabilitation. Within these fields it must be determined whether a body is a living human body or a symbol-using human body. The analysis of these elementary border phenomena challenges basic sociological concepts. The relevant contemporary sociological theories refer to a dyadic constellation as the systematic starting point of their concept of sociality. The complex relationship between at least two entities is understood as the basis of the development of a novel order that functions as a mediating structure between the involved parties. Based upon empirical data, I argue that it is necessary to change this foundational assumption. Not the dyad but the triad must be understood as the foundational constellation. This implies a new understanding of the third actor, which is distinct from the concepts developed by Simmel and Berger and Luckmann.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)