Similar Documents
 20 similar documents found (search time: 31 ms)
1.
This paper discusses several concepts that can provide a foundation for a unified theory of rational economic behavior. First, decision-making is defined as a process that takes place with reference to both subjective and objective time, that distinguishes between plans and actions and between information and states, and that explicitly incorporates the collection and processing of information. This conception of decision making is then related to several important aspects of behavioral economics: the dependence of values on experience, the use of behavioral rules, the occurrence of multiple goals, and environmental feedback. Our conclusions are: (1) the non-transitivity of observed or revealed preferences is a characteristic of learning and hence is to be expected of rational decision-makers; (2) the learning of values through experience suggests the sensibleness of short time horizons and the making of choices according to flexible utility; (3) certain rules of thumb used to allow for risk are closely related to principles of Safety-First and can also be based directly on the hypothesis that the feeling of risk (the probability of disaster) is identified with extreme departures from recently executed decisions; (4) the maximization of a hierarchy of goals, or of a lexicographical utility function, is closely related to the search for feasibility and the practice of satisficing; (5) when the dim perception of environmental feedback and the effect of learning on values are acknowledged, the intertemporal optimality of planned decision trajectories is seen to be a characteristic of subjective, not objective, time. This explains why decision making is so often best characterized by rolling plans. In short, we find that economic man - like any other - is an existential being whose plans are based on hopes and fears and whose every act involves a leap of faith. This paper is based on a talk presented at the conference New Beginnings in Economics, Akron, Ohio, March 15, 1969. Work on this paper was supported by a grant from the National Science Foundation.

2.
Rao Tummala, V. M.; Ling, Hong. Theory and Decision, 1998, 44(3): 221-230
In this paper, we use Saaty's Eigenvector Method and the Power Method, together with two sets from which the pairwise comparison judgments are assigned at random, Ω = {1, 2, …, 9, 1/2, 1/3, …, 1/9} and Ω′ = {1, 2, …, 9, 1, 1/2, …, 1/9}, to examine the variation in the values determined for the mean random consistency index. By extensive simulation analysis, we found that both methods produce the same values for the mean random consistency index. We also found that the reason two different sets of values have been reported is the use of Ω vs. Ω′, not the choice between the Power Method and Saaty's Eigenvector Method.
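A minimal sketch of the simulation described above may help. It uses the standard AHP consistency index CI = (λmax − n)/(n − 1); the two judgment sets are reconstructions of the garbled sets in the abstract (the second differs only in counting 1 twice), and the names, trial count, and matrix size are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reconstructed judgment sets: 17 values vs. 18 values (the value 1 counted twice).
SET_17 = [float(k) for k in range(1, 10)] + [1.0 / k for k in range(2, 10)]
SET_18 = [float(k) for k in range(1, 10)] + [1.0] + [1.0 / k for k in range(2, 10)]

def mean_random_ci(n, values, trials=2000):
    """Estimate the mean consistency index CI = (lambda_max - n)/(n - 1)
    of random positive reciprocal matrices whose upper-triangle entries
    are drawn uniformly from the given judgment set."""
    cis = []
    for _ in range(trials):
        A = np.ones((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                a = rng.choice(values)
                A[i, j], A[j, i] = a, 1.0 / a
        lam = max(np.linalg.eigvals(A).real)  # principal (Perron) eigenvalue
        cis.append((lam - n) / (n - 1))
    return float(np.mean(cis))

print(mean_random_ci(5, SET_17), mean_random_ci(5, SET_18))
```

Because the two sets assign different sampling weight to the value 1, they yield systematically different mean indices, regardless of which eigenvalue routine is used.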

3.
Nash's solution of a two-person cooperative game prescribes a coordinated mixed-strategy solution involving Pareto-optimal outcomes of the game. Testing this normative solution experimentally presents problems, inasmuch as rather detailed explanations of the meaning of threat strategy, strategy mixture, expected payoff, etc. must be given to the subjects. To the extent that it is desired to test the solution using naive subjects, the problem arises of imparting to them a minimal level of understanding of the issues involved in the game without actually suggesting the solution. Experiments were performed to test the properties of the solution of a cooperative two-person game as these are embodied in three of Nash's four axioms: Symmetry, Pareto-optimality, and Invariance with respect to positive linear transformations. Of these, the last was definitely discorroborated, suggesting that interpersonal comparison of utilities plays an important part in the negotiations. Some evidence was also found for a conjecture generated by previous experiments, namely that an externally imposed threat (a penalty for non-cooperation) tends to bring the players closer together than the threats generated by the subjects themselves in the process of negotiation.

4.
Can we rationally learn to coordinate?
In this paper we examine whether individual rationality considerations are sufficient to guarantee that individuals will learn to coordinate. This question is central to any discussion of whether social phenomena (read: conventions) can be explained in terms of a purely individualistic approach. We argue that the positive answers to this general question obtained in some recent work require assumptions which themselves incorporate a convention. This conclusion may be seen as supporting the viewpoint of institutional individualism, in contrast to psychological individualism.

5.
Let (μ₁, σ₁) and (μ₂, σ₂) be the mean-standard deviation pairs of two probability distributions on the real line. Mean-variance analyses presume that the preferred distribution depends solely on these pairs, with primary preference given to larger mean and smaller variance. This presumption, in conjunction with the assumption that one distribution is better than a second distribution if the mass of the first lies completely to the right of the mass of the second, implies that (μ₁, σ₁) is preferred to (μ₂, σ₂) if and only if either μ₁ > μ₂ or (μ₁ = μ₂ and σ₁ < σ₂), provided that the set of distributions is sufficiently rich. The latter provision fails if the outcomes of all distributions lie in a finite interval, but then it is still possible to arrive at more liberal dominance conclusions between (μ₁, σ₁) and (μ₂, σ₂). This research was supported by the Office of Naval Research.
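The induced preference is a simple lexicographic rule, easy to state in code. A minimal sketch (the function name and example numbers are illustrative, not from the paper):

```python
def prefer(a, b):
    """Lexicographic mean-standard-deviation rule from the abstract:
    (mu1, s1) is preferred to (mu2, s2) iff mu1 > mu2,
    or mu1 == mu2 and s1 < s2."""
    (mu1, s1), (mu2, s2) = a, b
    return mu1 > mu2 or (mu1 == mu2 and s1 < s2)

assert prefer((5.0, 2.0), (4.0, 1.0))  # higher mean wins regardless of sigma
assert prefer((5.0, 1.0), (5.0, 2.0))  # equal means: smaller sigma wins
```

The striking part of the result is that the mean is given absolute priority: no amount of extra dispersion can offset even a tiny advantage in mean, once the set of distributions is rich enough.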

6.
The Ellsberg Paradox documented aversion to ambiguity in the probability of winning a prize. Using an original sample of 266 business owners and managers facing risks from climate change, this paper documents departures from rationality in both directions: both ambiguity-seeking and ambiguity-averse behavior are evident. People exhibit fear effects of ambiguity for small probabilities of suffering a loss and hope effects for large probabilities. Estimates of the crossover point from ambiguity aversion (fear) to ambiguity seeking (hope) place this value between 0.3 and 0.7 for the risk-per-decade lotteries considered, with empirical estimates indicating a mean crossover risk of about 0.5. Attitudes toward the degree of ambiguity also reverse at the crossover point.

7.
A soundness proof for an axiomatization of common belief in minimal neighbourhood semantics is provided, thereby leaving aside all assumptions of monotonicity in agents' reasoning. Minimality properties of common belief are thus emphasized, in contrast to the more usual fixed-point properties. The proof relies on the existence of transfinite fixed points of sequences of neighbourhood systems even when they are not closed under supersets. An obvious shortcoming of the note is the lack of a completeness proof.

8.
Two institutions that are often implicit or overlooked in noncooperative games are the assumption of Nash behavior to solve a game and the ability to correlate strategies. We consider two behavioral paradoxes: one in which maximin behavior rules out all Nash equilibria (Chicken), and another in which minimax supergame behavior leads to an inefficient outcome in comparison with the unique stage-game equilibrium (asymmetric Deadlock). Nash outcomes are achieved in both paradoxes by allowing for correlated strategies, even when individual behavior remains minimax or maximin. However, the interpretation of correlation as a public institution differs in each case.
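To see the Chicken paradox concretely, here is a small sketch with hypothetical payoff numbers (the paper's actual matrices are not given in the abstract). Maximin play selects Swerve for both players, an outcome that is not a Nash equilibrium since either player gains by deviating to Dare; a public coin that correlates play on the two pure Nash outcomes restores a Nash outcome each round:

```python
import random

# Hypothetical Chicken payoffs: 0 = Swerve, 1 = Dare; entries (row, col).
PAYOFF = [[(6, 6), (2, 7)],
          [(7, 2), (0, 0)]]

def maximin_action(player):
    """Pure action maximizing the player's worst-case (security) payoff."""
    worst = []
    for a in (0, 1):
        payoffs = [PAYOFF[a][b][0] if player == 0 else PAYOFF[b][a][1]
                   for b in (0, 1)]
        worst.append(min(payoffs))
    return max((0, 1), key=lambda a: worst[a])

# Both players' maximin action is Swerve -> outcome (6, 6), not a Nash equilibrium.
print(maximin_action(0), maximin_action(1))

# Public correlation device: a fair coin selecting between the two pure
# Nash outcomes (Swerve, Dare) and (Dare, Swerve).
random.seed(1)
row_act, col_act = random.choice([(0, 1), (1, 0)])
print(PAYOFF[row_act][col_act])  # a Nash outcome is played each round
```

The coin acts as the "public institution" of the abstract: each player, knowing the other follows the signal, has no incentive to deviate, even though unilateral maximin reasoning alone would never reach either Nash outcome.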

9.
We report a surprising property of (μ, σ)-preferences: the assumption of nonincreasing relative risk aversion implies that the optimal portfolio is riskless. We discuss in detail a solution to this paradox using wealth-dependent utility functions. Using revealed preference theory, we show that (general, i.e. not necessarily (μ, σ)-) wealth-dependent utility functions can be characterized by Wald's axiom.

10.
We report an experiment on two treatments of an ultimatum minigame. In one treatment, responders' reactions are hidden from proposers. We observe high rejection rates reflecting responders' intrinsic resistance to unfairness. In the second treatment, proposers are informed, allowing for dynamic effects over eight rounds of play. The higher rejection rates there can be attributed to responders' provision of a public good: punishment creates a group reputation for being tough and effectively educates proposers. Since rejection rates with informed proposers drop to the level of the treatment with non-informed proposers, the hypothesis that responders enjoy overt punishment is not supported.

11.
In reply to McClennen, the paper argues that his criticism is based on a mistaken assumption about the meaning of rationality postulates, to be called the Implication Principle. Once we realize that the Implication Principle has no validity, McClennen's criticisms of what he calls the Reductio Argument and the Incentive Argument fall to the ground. The rest of the paper criticizes the rationality concept McClennen proposes in lieu of that used by orthodox game theory. It is argued that McClennen's concept is inconsistent with the behavior of real-life intelligent egoists; it is incompatible with the way payoffs are defined in game theory; and it would be highly dangerous as a practical guide to human behavior. The author is indebted to the National Science Foundation for financial support through Grant GS-3222, administered through the Center for Research in Management Science, University of California, Berkeley.

12.
Dore, Mohammed. Theory and Decision, 1997, 43(3): 219-239
This paper critically reviews Ken Binmore's non-utilitarian, game-theoretic solution to the Arrow problem. Binmore's solution belongs to the same family as Rawls' maximin criterion and requires the use of Nash bargaining theory, empathetic preferences, and results in evolutionary game theory. Harsanyi earlier presented a solution that relies on utilitarianism, which requires some exogenous valuation criterion and is therefore incompatible with liberalism. Binmore's rigorous demonstration of the maximin principle presents, for the first time, a real alternative to a utilitarian solution.

13.
Summary. The objective Bayesian program has as its fundamental tenet (in addition to the three Bayesian postulates) the requirement that, from a given knowledge base, a particular probability function is uniquely appropriate. This amounts to fixing initial probabilities, based on relatively little information, because Bayes' theorem (conditionalization) then determines the posterior probabilities when the belief state is altered by enlarging the knowledge base. Moreover, in order to reconstruct orthodox statistical procedures within a Bayesian framework, only privileged ignorance probability functions will work. To serve all these ends, objective Bayesianism seeks additional principles for specifying ignorance and partial-information probabilities. H. Jeffreys' method of invariance (or Jaynes' modification thereof) is used to solve the former problem, and E. Jaynes' rule of maximizing entropy (subject to invariance for continuous distributions) has recently been thought to solve the latter. I have argued that neither policy is acceptable to a Bayesian, since each is inconsistent with conditionalization. Invariance fails to give a consistent representation to the state of ignorance professed; the difficulties here parallel familiar weaknesses in the old Laplacean principle of insufficient reason. Maximizing entropy is unsatisfactory because the partial information it works with fails to capture the effect of uncertainty about related nuisance factors. The result is a probability function that represents a state richer in empirical content than the belief state targeted for representation. Alternatively, by conditionalizing on information about a nuisance parameter one may move from a distribution of lower to higher entropy, despite the obvious increase in information available. Each of these two complaints appears to me to be a symptom of the program's inability to formulate rules for picking privileged probability distributions that serve to represent ignorance or near ignorance. Certainly the methods advocated by Jeffreys, Jaynes and Rosenkrantz are mathematically convenient idealizations wherein specified distributions are elevated to the roles of ignorance and partial-information distributions. But the cost that goes with the idealization is a violation of conditionalization, and if that is the ante we must put up to back objective Bayesianism, then I propose we look for a different candidate to earn our support.
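The last complaint, that conditionalization can move a distribution to higher entropy, is easy to illustrate with a toy example (the numbers below are mine, not the paper's): start from a concentrated prior and condition on evidence that rules out the most probable state.

```python
import math

def entropy(dist):
    """Shannon entropy in bits, ignoring zero-probability states."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Toy numbers: evidence rules out the most probable state.
prior = [0.90, 0.05, 0.05]
posterior = [0.0, 0.5, 0.5]      # after conditionalizing on that evidence
print(entropy(prior))             # ~0.57 bits
print(entropy(posterior))         # 1.0 bit: entropy rose despite more information
```

This is the tension the abstract points to: a rule that always picks the maximum-entropy distribution compatible with the evidence cannot in general agree with the distribution obtained by Bayesian conditionalization on that same evidence.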

14.
Choices between gambles show systematic violations of stochastic dominance. For example, most people choose ($6, .05; $91, .03; $99, .92) over ($6, .02; $8, .03; $99, .95), violating dominance. Choices also violate two cumulative independence conditions: (1) if S = (z, r; x, p; y, q) ≻ R = (z, r; x′, p; y′, q), then S″ = (x′, r; y, p + q) ≻ R″ = (x′, r + p; y′, q); (2) if S′ = (x, p; y, q; z′, r) ≺ R′ = (x′, p; y′, q; z′, r), then S‴ = (x, p + q; y′, r) ≺ R‴ = (x′, p; y′, q + r), where 0 < z < x′ < x < y < y′ < z′. These violations contradict any utility theory satisfying transitivity, outcome monotonicity, coalescing, and comonotonic independence. Because rank- and sign-dependent utility theories, including cumulative prospect theory (CPT), satisfy these properties, they cannot explain these results. However, the configural weight model of Birnbaum and McIntosh (1996) predicted the observed violations of stochastic dominance, cumulative independence, and branch independence. This model assumes the utility of a gamble is a weighted average of the outcomes' utilities, where each configural weight is a function of the rank order of the outcome's value among distinct values and that outcome's probability. The configural-weight TAX model, with the same number of parameters as CPT, fit the data of most individuals better than the model of CPT.
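A sketch of the "special TAX" computation may help. Under the prior parameter values Birnbaum cites in later expositions, t(p) = p^0.7, linear utility, δ = 1, with each lower-ranked branch taking δ·t(p)/(n+1) of each higher-ranked branch's weight, the model assigns the dominated gamble from the example above a higher value than the dominant one. These parameter values are assumptions from that later literature, not necessarily the ones fitted in this paper:

```python
def tax_value(gamble, gamma=0.7, delta=1.0):
    """Special TAX model sketch: t(p) = p**gamma, u(x) = x.
    Each lower-outcome branch takes delta*t(p)/(n+1) of the weight of
    each higher-outcome branch (risk-averse transfer for delta > 0)."""
    branches = sorted(gamble, key=lambda b: b[0])  # ascending by outcome
    n = len(branches)
    t = [p ** gamma for (_, p) in branches]
    value = sum(x * w for (x, _), w in zip(branches, t))
    for j in range(n):            # j: higher-outcome branch losing weight
        for i in range(j):        # i: lower-outcome branch gaining weight
            omega = delta * t[j] / (n + 1)
            value += omega * (branches[i][0] - branches[j][0])
    return value / sum(t)

dominated = [(6, .05), (91, .03), (99, .92)]
dominant  = [(6, .02), (8, .03),  (99, .95)]
print(tax_value(dominated))  # ~66.2
print(tax_value(dominant))   # ~46.8 -> model prefers the dominated gamble
```

Intuitively, the transfers shift weight toward lower outcomes, so splitting the dominant gamble's probability mass into extra low branches hurts it more than the tiny cumulative advantage helps, reproducing the observed dominance violation.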

15.
The traditional or orthodox decision rule of maximizing conditional expected utility has recently come under attack by critics who advance alternative causal decision theories. The traditional theory has, however, been defended, and these defenses have in turn been criticized. Here, I examine two objections to such defenses and advance a theory about the dynamics of deliberation (a diachronic theory of the process of deliberation) within whose framework both objections to the defenses of the traditional theory fail.

16.
In the fifties, Popper defended an interactionistic version of body-mind dualism. It distinguished between the world of physical bodies and states and the world of mental states. Later he added a third world of objective thought contents. He claims that the assumption that the third world exists is a necessary presupposition of problem-solving in general and of his philosophy of science in particular. The present article contains separate arguments to the effect that this presupposition is neither necessary nor even possible. It is further argued that postulating the existence of entities makes sense only relative to a criterion of ontological commitment, which Popper does not mention and evidently does not have, and that it moreover presupposes a theory that is tentatively accepted as true and that, according to the criterion, implies the existence of the entities. But as yet there is no testable theory involving terms like 'mind', 'intention', etc. that would make the notion that it or its terms are essentially different from what is already known in the empirical sciences even plausible. The body-mind controversy is therefore still pointless. Popper's stand on it seems to be but a reflex of his anti-behavioristic and anti-psychologistic attitude.

17.
An agent who violates independence can avoid dynamic inconsistency in sequential choice if he is sophisticated enough to make use of backward induction in planning. However, Seidenfeld has demonstrated that such a sophisticated agent with dependent preferences is bound to violate the principle of dynamic substitution, according to which admissibility of a plan is preserved under substitution of indifferent options at various choice nodes in the decision tree. Since Seidenfeld considers dynamic substitution to be a coherence condition on dynamic choice, he concludes that sophistication cannot save a violator of independence from incoherence. In response to McClennen's objection that relying on dynamic substitution when independence is at stake must be question-begging, Seidenfeld undertakes to prove that dynamic substitution follows from the principle of backward induction alone, provided we assume that the agent's admissible choices from different sets of feasible plans are all based on a fixed underlying preference ordering of plans. This paper shows that Seidenfeld's proof fails: depending on the interpretation, it is either invalid or based on an unacceptable assumption.

18.
We study the uncertain dichotomous choice model. In this model a set of decision makers is required to select one of two alternatives, say to support or reject a certain proposal. Applications of this model are relevant to many areas, such as political science, economics, business and management. The purpose of this paper is to estimate and compare the probabilities that different decision rules are optimal. We consider the expert rule, the majority rule, and a few in-between rules. The information on the decision makers' skills is incomplete, and these skills arise from an exponential distribution. It turns out that the probability that the expert rule is optimal far exceeds the probability that the majority rule is optimal, especially as the number of decision makers becomes large.
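A Monte Carlo sketch of the expert-versus-majority comparison follows. Purely for illustration, I assume each member's log-odds of deciding correctly is exponentially distributed (the abstract does not specify how the exponential skills map to correctness probabilities), and I count how often following the single most skilled member outperforms simple majority rule on a five-member panel:

```python
import itertools, math, random

random.seed(0)

def draw_skills(n):
    # Illustrative assumption: log-odds of correctness ~ Exponential(1),
    # so each competence p = 1/(1 + exp(-z)) lies in [0.5, 1).
    return [1.0 / (1.0 + math.exp(-random.expovariate(1.0))) for _ in range(n)]

def p_majority(ps):
    """Probability that a simple majority of independent members is correct."""
    n, total = len(ps), 0.0
    for votes in itertools.product([0, 1], repeat=n):  # 1 = correct vote
        if 2 * sum(votes) > n:
            prob = 1.0
            for v, p in zip(votes, ps):
                prob *= p if v else (1 - p)
            total += prob
    return total

n, trials, expert_wins = 5, 2000, 0
for _ in range(trials):
    ps = draw_skills(n)
    if max(ps) > p_majority(ps):  # expert rule beats majority for this panel
        expert_wins += 1
print(expert_wins / trials)
```

This only compares the two extreme rules rather than searching over all weighted rules as the paper does, but it conveys the mechanism: when skills are drawn from a skewed distribution, one strong expert frequently outweighs the pooled competence of a mediocre majority.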

19.
Statistical analysis for negotiation support
In this paper we provide an overview of the issues involved in using statistical analysis to support the process of international negotiation. We illustrate how the approach can contribute to a negotiator's understanding and control of the interactions that occur during the course of a negotiation. The techniques are suited to the analysis of data collected from ongoing discussions and moves made by the parties. The analyses are used to illuminate influences and processes as they operate in particular cases or in negotiations in general. They do not identify a best strategy or outcome from among alternatives suggested either from theoretical assumptions about rationality and information-processing (see Munier and Rullière's paper in this issue), from personal preference structures (see Spector's paper in this issue), or from a rule-based modeling system (see Kersten's paper in this issue). This distinction should be evident in the discussion to follow, organized into several sections: From Empirical to Normative Analysis; Statistical Analysis for Situational Diagnosis; Time-Series Analysis of Cases; and Knowledge as Leverage over the Negotiation Process. In a final section, we consider the challenge posed by attempts to implement these techniques with practitioners.

20.
Harrod introduced a refinement to crude Utilitarianism with the aim of reconciling it with common-sense ethics. It is shown (a) that this refinement (later known as Rule Utilitarianism) does not maximize utility, and (b) that the principle which truly maximizes utility, marginal private benefit equals marginal social cost, requires that a number of forbidden acts, like lying, be performed. Hence Harrod's claim that his refined Utilitarianism is the foundation of moral institutions cannot be sustained. Some more modern forms of Utilitarianism are reinterpreted in this paper as utility-maximizing decision rules. While they produce more utility than Harrod's rule, they require breaking the moral rules some of the time, just like the marginal rule mentioned above. However, Harrod's rule is useful in warning the members of a group, considered as a single moral agent, of the externalities that lie beyond the immediate consequences of the collective action.
