Similar Literature: 20 similar documents found (search time: 31 ms)
1.
This paper discusses several concepts that can be used to provide a foundation for a unified theory of rational economic behavior. First, decision-making is defined to be a process that takes place with reference to both subjective and objective time, that distinguishes between plans and actions and between information and states, and that explicitly incorporates the collection and processing of information. This conception of decision-making is then related to several important aspects of behavioral economics: the dependence of values on experience, the use of behavioral rules, the occurrence of multiple goals, and environmental feedback. Our conclusions are: (1) the non-transitivity of observed or revealed preferences is a characteristic of learning and hence is to be expected of rational decision-makers; (2) the learning of values through experience suggests the sensibleness of short time horizons and the making of choices according to flexible utility; (3) certain rules of thumb used to allow for risk are closely related to principles of Safety-First and can also be based directly on the hypothesis that the feeling of risk (the probability of disaster) is identified with extreme departures from recently executed decisions; (4) the maximization of a hierarchy of goals, or of a lexicographical utility function, is closely related to the search for feasibility and the practice of satisficing; (5) when the dim perception of environmental feedback and the effect of learning on values are acknowledged, the intertemporal optimality of planned decision trajectories is seen to be a characteristic of subjective, not objective, time. This explains why decision-making is so often best characterized by rolling plans.
In short, we find that economic man - like any other - is an existential being whose plans are based on hopes and fears and whose every act involves a leap of faith. This paper is based on a talk presented at the conference New Beginnings in Economics, Akron, Ohio, March 15, 1969. Work on this paper was supported by a grant from the National Science Foundation.

2.
A new investigation is launched into the problem of decision-making in the face of complete ignorance, and linked to the problem of social choice. In the first section the author introduces a set of properties which might characterize a criterion for decision-making under complete ignorance. Two of these properties are novel: independence of non-discriminating states, and weak pessimism. The second section provides a new characterization of the so-called principle of insufficient reason. In the third part, lexicographic maximin and maximax criteria are characterized. Finally, the author's results are linked to the problem of social choice.
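The maximin-family criteria mentioned in the abstract can be made concrete with a small sketch (illustrative only, not from the paper): an act under complete ignorance is a vector of payoffs over states, and the lexicographic maximin (leximin) criterion compares worst outcomes first, breaking ties by the next-worst, and so on.

```python
# Illustrative sketch (not from the paper): comparing acts under complete
# ignorance. An act is a vector of payoffs, one per state; leximin compares
# worst outcomes first, breaking ties by the second-worst, and so on.

def leximin_key(act):
    """Sort payoffs ascending so that tuple comparison is leximin comparison."""
    return tuple(sorted(act))

def leximin_best(acts):
    """Return the act preferred by the leximin criterion."""
    return max(acts, key=leximin_key)

def maximax_best(acts):
    """Return the act preferred by the maximax criterion (best best-case)."""
    return max(acts, key=lambda act: max(act))

acts = {
    "safe":   (2, 2, 2),
    "risky":  (0, 1, 9),
    "middle": (1, 2, 5),
}
print(leximin_best(acts.values()))   # worst-case comparison favors "safe"
print(maximax_best(acts.values()))   # best-case comparison favors "risky"
```

Leximin refines plain maximin: two acts with the same worst outcome are separated by their second-worst outcomes rather than declared indifferent.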

3.
This paper reviews a problem for utility theory: the theory would have an agent who is compelled to play Russian Roulette with one revolver or another pay as much to have a six-shooter with four bullets relieved of one bullet before playing with it as he would be willing to pay to have a six-shooter with two bullets emptied. A less demanding Bayesian theory is described that would have an agent maximize the expected values of the possible total consequences of his actions. Utility theory is located within that theory as valid for agents who satisfy certain formal conditions, that is, for agents who are, in terms of that more general theory, indifferent to certain dimensions of risk. Raiffa- and Savage-style arguments for its more general validity are then resisted. Addenda are concerned with implications for game theory and with relations between utilities and values.
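The puzzle can be checked numerically. In this sketch (assumptions mine: utility of death normalized to zero, logarithmic utility of wealth, initial wealth 100), the reservation price for removing one bullet from four comes out equal to the price for emptying a two-bullet revolver, exactly as the problem states.

```python
# Illustrative sketch (assumptions mine, not the paper's): under expected
# utility with u(death) = 0 and u(wealth) = ln(wealth), the most an agent
# will pay to go from 4 bullets to 3 equals the most he will pay to go
# from 2 bullets to 0 -- the puzzle the paper reviews.
import math

WEALTH = 100.0

def survival(bullets):
    """Probability of surviving one pull of a six-shooter's trigger."""
    return (6 - bullets) / 6

def reservation_price(bullets_before, bullets_after, wealth=WEALTH):
    """Largest p with survival(after) * u(w - p) = survival(before) * u(w),
    found by bisection; u(x) = ln(x), and death contributes zero utility."""
    target = survival(bullets_before) * math.log(wealth)
    lo, hi = 0.0, wealth - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2
        if survival(bullets_after) * math.log(wealth - mid) > target:
            lo = mid   # still better off paying more
        else:
            hi = mid
    return lo

p_four_to_three = reservation_price(4, 3)
p_two_to_zero = reservation_price(2, 0)
print(p_four_to_three, p_two_to_zero)  # the two prices coincide
```

The equality is forced by the survival-probability ratios being equal ((1/3)/(1/2) = (2/3)/1), whatever increasing utility function is used, once death is assigned a fixed utility.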

4.
Operational researchers, management scientists, and industrial engineers have been asked by Russell Ackoff to become systems scientists, yet he stated that systems science is not a science (TIMS Interfaces, 2 (4), 41). A. C. Fabergé (Science 184, 1330) notes that the original intent of operational researchers was that they be scientists, trained to observe. Hugh J. Miser (Operations Research 22, 903) views operations research as a science, noting that its progress is indeed of a cyclic nature. The present paper delineates explicitly the attributes of simulation methodology. Simulation is shown to be both an art and a science; its methodology, properly used, is founded both on confirmed (validated) observation and on scrutinised (verified) art work. The paper delineates the existing procedures by which computer-directed models can be cyclically scrutinised and confirmed, and therefore deemed credible. The complexities of the phenomena observed by social scientists are amenable to human understanding through properly applied simulation: the methodology of the scientist of systems (the systemic scientist).

5.
This paper considers two fundamental aspects of the analysis of dynamic choices under risk: the dynamic consistency of the strategies of a non-EU maximizer, and the possibility that an individual whose preferences are nonlinear in probabilities may choose a strategy which is, in some appropriate sense, dominated by other strategies. A proposed way of dealing with these problems, due to Karni and Safra and called behavioral consistency, is described. The implications of this notion of behavioral consistency are explored, and it is shown that while the Karni and Safra approach obtains dynamically consistent behavior under nonlinear preferences, it may imply the choice of dominated strategies even in very simple decision trees.

6.
This paper falls within the field of Distributive Justice and (as the title indicates) addresses itself specifically to the meshing problem. Briefly stated, the meshing problem is the difficulty encountered when one tries to aggregate the two parameters of beneficence and equity in a way that determines which of two or more alternative utility distributions is most just. A solution to this problem, in the form of a formal welfare measure, is presented in the paper. This formula incorporates the notions of equity and beneficence (which are defined earlier by the author) and weighs them against each other to compute a numerical value which represents the degree of justice a given distribution possesses. This value can in turn be used comparatively to select which utility scheme, of those being considered, is best. Three fundamental adequacy requirements, which any acceptable welfare-measuring method must satisfy, are presented and subsequently demonstrated to be formally deducible as theorems of the author's system. A practical application of the method is then considered, as well as a comparison of it with Nicholas Rescher's method (found in his book Distributive Justice). The conclusion reached is that Rescher's system is unacceptable, since it computes counter-intuitive results. Objections to the author's welfare measure are considered and answered. Finally, a suggestion is made for expanding the system to cover cases it was not originally designed to handle (i.e., situations where two alternative utility distributions vary with regard to the number of individuals they contain). The conclusion reached at the close of the paper is that an acceptable solution to the meshing problem has been established. I would like to gratefully acknowledge the assistance of Michael Tooley, whose positive suggestions and critical comments were invaluable in the writing of this paper.

7.
Scientists often disagree about whether a new theory is better than the current theory. From this some (e.g., Thomas Kuhn) have inferred that the values of science are changing and subjective, and hence that science is an irrational enterprise. As an alternative, this paper develops a rational model of the scientific enterprise according to which the scope and elegance of theories are important elements in the scientist's utility function. The varied speed of acceptance of new theories by scientists can be explained in terms of the optimal allocation of time among different scientific activities. The model thus accounts for the rationality of science in a way that is broadly consistent with the empirical evidence on the history and practice of science.

8.
In general, the technical apparatus of decision theory is well developed. It has loads of theorems, and they can be proved from axioms. Many of the theorems are interesting and useful from both a philosophical and a practical perspective. But decision theory does not have a well-agreed-upon interpretation. Its technical terms, in particular utility and preference, do not have a single clear and uncontroversial meaning. How to interpret these terms depends, of course, on the purposes to which one wants to put decision theory. One might want to use it as a model of economic decision-making, in order to predict the behavior of corporations or of the stock market. In that case, it might be useful to interpret the technical term utility as meaning money profit. Decision theory would then be an empirical theory. I want to look into the question of what utility could mean if we want decision theory to function as a theory of practical rationality. I want to know whether it makes good sense to think of practical rationality as fully, or even partly, accounted for by decision theory. I shall lay my cards on the table: I hope it does make good sense to think of it that way. For, I think, if Humeans are right about practical rationality, then decision theory must play a very large part in their account. And I think Humeanism has very strong attractions.

9.
A soundness proof for an axiomatization of common belief in minimal neighbourhood semantics is provided, thereby leaving aside all assumptions of monotonicity in agents' reasoning. Minimality properties of common belief are thus emphasized, in contrast to the more usual fixed-point properties. The proof relies on the existence of transfinite fixed points of sequences of neighbourhood systems even when they are not closed under supersets. An obvious shortcoming of the note is the lack of a completeness proof.

10.
Separating marginal utility and probabilistic risk aversion
This paper is motivated by the search for one cardinal utility for decisions under risk, welfare evaluations, and other contexts. This cardinal utility should have meaning prior to risk, with risk depending on cardinal utility, not the other way around. The rank-dependent utility model can reconcile such a view on utility with the position that risk attitude consists of more than marginal utility, by providing a separate risk component: a probabilistic risk attitude towards probability mixtures of lotteries, modeled through a transformation of cumulative probabilities. While this separation of risk attitude into two independent components is the characteristic feature of rank-dependent utility, it had not yet been axiomatized; doing so is the purpose of this paper. In the second part, the paper extends Yaari's axiomatization to nonlinear utility and provides separate axiomatizations for increasing/decreasing marginal utility and for optimistic/pessimistic probability transformations. This is generalized to interpersonal comparability. It is also shown that two elementary and often-discussed properties, quasi-convexity (aversion) of preferences with respect to probability mixtures and convexity (pessimism) of the probability transformation, are equivalent.
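A minimal sketch of the rank-dependent evaluation described above (the weighting function w(p) = p**2 is my choice for illustration; any convex transformation is "pessimistic" in the paper's sense):

```python
# Illustrative sketch of rank-dependent utility (RDU). Outcomes are ranked
# from best to worst; each receives the weight its transformed decumulative
# probability gains over the next-better outcome. The convex transformation
# w(p) = p**2 (chosen here only for illustration) shifts decision weight
# toward worse outcomes.

def rdu(lottery, u=lambda x: x, w=lambda p: p ** 2):
    """lottery: list of (outcome, probability) pairs; u: utility;
    w: probability transformation applied to decumulative probabilities."""
    ranked = sorted(lottery, key=lambda pair: pair[0], reverse=True)
    value, decumulative = 0.0, 0.0
    for outcome, prob in ranked:
        weight = w(decumulative + prob) - w(decumulative)
        value += weight * u(outcome)
        decumulative += prob
    return value

lottery = [(100, 0.5), (0, 0.5)]
expected_value = sum(x * p for x, p in lottery)  # 50.0
print(rdu(lottery))  # 25.0: below expected value, reflecting pessimism
```

With the identity transformation w(p) = p the formula collapses to ordinary expected utility, which is exactly the separation the paper axiomatizes: marginal utility sits in u, probabilistic risk attitude in w.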

11.
Summary. The objective Bayesian program has as its fundamental tenet (in addition to the three Bayesian postulates) the requirement that, from a given knowledge base, a particular probability function is uniquely appropriate. This amounts to fixing initial probabilities based on relatively little information, because Bayes' theorem (conditionalization) then determines the posterior probabilities when the belief state is altered by enlarging the knowledge base. Moreover, in order to reconstruct orthodox statistical procedures within a Bayesian framework, only privileged ignorance probability functions will work. To serve all these ends, objective Bayesianism seeks additional principles for specifying ignorance and partial-information probabilities. H. Jeffreys' method of invariance (or Jaynes' modification thereof) is used to solve the former problem, and E. Jaynes' rule of maximizing entropy (subject to invariance for continuous distributions) has recently been thought to solve the latter. I have argued that neither policy is acceptable to a Bayesian, since each is inconsistent with conditionalization. Invariance fails to give a consistent representation to the state of ignorance professed; the difficulties here parallel familiar weaknesses in the old Laplacean principle of insufficient reason. Maximizing entropy is unsatisfactory because the partial information it works with fails to capture the effect of uncertainty about related nuisance factors. The result is a probability function that represents a state richer in empirical content than the belief state targeted for representation.
Alternatively, by conditionalizing on information about a nuisance parameter one may move from a distribution of lower to higher entropy, despite the obvious increase in information available. Each of these two complaints appears to me to be a symptom of the program's inability to formulate rules for picking privileged probability distributions that serve to represent ignorance or near-ignorance. Certainly the methods advocated by Jeffreys, Jaynes, and Rosenkrantz are mathematically convenient idealizations wherein specified distributions are elevated to the roles of ignorance and partial-information distributions. But the cost that goes with the idealization is a violation of conditionalization, and if that is the ante that we must put up to back objective Bayesianism, then I propose we look for a different candidate to earn our support.
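The maximum-entropy rule under discussion can be illustrated on Jaynes' own Brandeis dice example: among all distributions on the faces 1-6 whose mean is constrained to 4.5 (rather than the uniform 3.5), the entropy maximizer has the Gibbs form p_i proportional to exp(lam * i), with the multiplier pinned down by the mean constraint. This sketch (mine, not the author's) solves for it by bisection:

```python
# Illustrative sketch of Jaynes' maximum-entropy rule on the "Brandeis
# dice" example: constrain the mean of a die to 4.5. The maximizer is the
# Gibbs distribution p_i proportional to exp(lam * i); the multiplier lam
# is found by bisection on the mean constraint.
import math

FACES = range(1, 7)
TARGET_MEAN = 4.5

def gibbs(lam):
    """Gibbs distribution on the faces for multiplier lam."""
    weights = [math.exp(lam * i) for i in FACES]
    total = sum(weights)
    return [wgt / total for wgt in weights]

def mean(probs):
    return sum(i * p for i, p in zip(FACES, probs))

lo, hi = -10.0, 10.0              # mean(gibbs(lam)) is increasing in lam
for _ in range(100):
    mid = (lo + hi) / 2
    if mean(gibbs(mid)) < TARGET_MEAN:
        lo = mid
    else:
        hi = mid

probs = gibbs(lo)
entropy = -sum(p * math.log(p) for p in probs)
print(probs)     # probability tilted toward the high faces
print(entropy)   # below log(6), the entropy of the uniform distribution
```

The author's complaint is not with this mechanics but with its epistemic standing: the resulting distribution encodes more empirical content than the constraint alone warrants, and the rule can clash with conditionalization.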

12.
Singular causal explanations cite explicitly, or may be paraphrased to cite explicitly, a particular factor as the cause of another particular factor. During recent years there has emerged a consensus account of the nature of an important feature of such explanations: the distinction between a factor regarded correctly in a given context of inquiry as the cause of a given result and those other causally relevant factors, sometimes called mere conditions, which are not regarded correctly in that context of inquiry as the cause of that result. In this paper that consensus account is characterized and developed. The developed version is then used to illuminate some recent discussions of singular causal explanations. Work on this paper was supported by a University of Maryland Faculty Research Award. Earlier versions were read at the University of Minnesota and at the 1971 Western Division meetings of the American Philosophical Association. I have profited from criticisms raised on these occasions. I am especially grateful for the comments of James Lesher, Peter Machamer, John Vollrath, and the students in my Macalester College seminar.

13.
We study the uncertain dichotomous choice model. In this model a set of decision makers is required to select one of two alternatives, say to support or reject a certain proposal. Applications of this model are relevant to many areas, such as political science, economics, business, and management. The purpose of this paper is to estimate and compare the probabilities that different decision rules may be optimal. We consider the expert rule, the majority rule, and a few in-between rules. The information on the decisional skills is incomplete, and these skills arise from an exponential distribution. It turns out that the probability that the expert rule is optimal far exceeds the probability that the majority rule is optimal, especially as the number of decision makers becomes large.
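A Monte Carlo sketch of the comparison (assumptions mine: by the Nitzan-Paroush characterization, the optimal rule in this model is a weighted majority with weights log(p_i / (1 - p_i)), and the expert rule is optimal exactly when the top weight exceeds the sum of all the others; here the weights themselves are taken i.i.d. unit-exponential, which is only one reading of the paper's skill distribution):

```python
# Illustrative Monte Carlo sketch (assumptions mine, see lead-in): the
# expert rule -- follow the most skilled member alone -- is optimal
# exactly when the top log-odds weight exceeds the sum of the rest.
import random

def prob_expert_rule_optimal(n_members, trials=100_000, seed=1):
    """Estimate P(expert rule is optimal) with i.i.d. exponential weights."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        weights = [rng.expovariate(1.0) for _ in range(n_members)]
        top = max(weights)
        if top > sum(weights) - top:   # expert outweighs everyone else
            hits += 1
    return hits / trials

for n in (3, 5, 9):
    print(n, prob_expert_rule_optimal(n))
# The probability falls as the group grows; with unit-exponential weights
# it is n / 2**(n - 1) analytically: 3/4 for n = 3, 5/16 for n = 5.
```

The qualitative pattern matches the abstract: the expert rule is optimal with substantial probability for small groups, and simple majority has to wait for the weights to be nearly balanced, which becomes ever less likely.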

14.
We report an experiment on two treatments of an ultimatum minigame. In one treatment, responders' reactions are hidden from proposers. We observe high rejection rates reflecting responders' intrinsic resistance to unfairness. In the second treatment, proposers are informed, allowing for dynamic effects over eight rounds of play. The higher rejection rates can be attributed to responders' provision of a public good: punishment creates a group reputation for being tough and effectively educates proposers. Since rejection rates with informed proposers drop to the level of the treatment with non-informed proposers, the hypothesis that responders enjoy overt punishment is not supported.

15.
Two institutions that are often implicit or overlooked in noncooperative games are the assumption of Nash behavior to solve a game and the ability to correlate strategies. We consider two behavioral paradoxes: one in which maximin behavior rules out all Nash equilibria (Chicken), and another in which minimax supergame behavior leads to an inefficient outcome in comparison to the unique stage-game equilibrium (asymmetric Deadlock). Nash outcomes are achieved in both paradoxes by allowing for correlated strategies, even when individual behavior remains minimax or maximin. However, the interpretation of correlation as a public institution differs for each case.
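A minimal sketch of the Chicken case and its correlated resolution (the payoff numbers are mine, chosen only to satisfy Chicken's ordering):

```python
# Illustrative sketch (payoff numbers mine): in Chicken, maximin play
# guarantees only the cautious payoff, while publicly correlating on a
# fair coin over the two pure Nash outcomes yields each player more,
# without either player wanting to deviate from the recommendation.

# Row player's payoffs; the game is symmetric. D = dare, C = chicken.
PAYOFF = {("D", "D"): 0, ("D", "C"): 7, ("C", "D"): 2, ("C", "C"): 6}

def maximin_value():
    """Best guaranteed payoff over pure strategies for the row player."""
    return max(min(PAYOFF[(own, other)] for other in "DC") for own in "DC")

def correlated_value():
    """Row player's expected payoff when a public fair coin selects
    (D, C) or (C, D) and both players follow the recommendation."""
    return 0.5 * PAYOFF[("D", "C")] + 0.5 * PAYOFF[("C", "D")]

print(maximin_value())     # 2: play C, the cautious strategy
print(correlated_value())  # 4.5, above the maximin guarantee
```

Obedience is easy to check by hand: a player told C faces an opponent playing D, and C (payoff 2) beats D (payoff 0); a player told D faces C, and D (payoff 7) beats C (payoff 6). The public coin is exactly the kind of institution the abstract has in mind.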

16.
Harrod introduced a refinement to crude Utilitarianism with the aim of reconciling it with common-sense ethics. It is shown (a) that this refinement (later known as Rule Utilitarianism) does not maximize utility, and (b) that the principle which truly maximizes utility, equating marginal private benefit with marginal social cost, requires that a number of forbidden acts, like lying, be performed. Hence Harrod's claim that his refined Utilitarianism is the foundation of moral institutions cannot be sustained. Some more modern forms of Utilitarianism are reinterpreted in this paper as utility-maximizing decision rules. While they produce more utility than Harrod's rule, they require breaking the moral rules some of the time, just like the marginal rule mentioned above. However, Harrod's rule is useful in warning the members of a group, considered as a single moral agent, of the externalities that lie beyond the immediate consequences of the collective action.

17.
Statistical analysis for negotiation support
In this paper we provide an overview of the issues involved in using statistical analysis to support the process of international negotiation. We illustrate how the approach can contribute to a negotiator's understanding and control of the interactions that occur during the course of a negotiation. The techniques are suited to the analysis of data collected from ongoing discussions and moves made by the parties. The analyses are used to illuminate influences and processes as they operate in particular cases or in negotiations in general. They do not identify a best strategy or outcome from among alternatives suggested either from theoretical assumptions about rationality and information-processing (see Munier and Rullière's paper in this issue), from personal preference structures (see Spector's paper in this issue), or from a rule-based modeling system (see Kersten's paper in this issue). This distinction should be evident in the discussion to follow, organized into several sections: From Empirical to Normative Analysis; Statistical Analysis for Situational Diagnosis; Time-Series Analysis of Cases; and Knowledge as Leverage Over the Negotiation Process. In a final section, we consider the challenge posed by attempts to implement these techniques with practitioners.

18.
Nash's solution of a two-person cooperative game prescribes a coordinated mixed-strategy solution involving Pareto-optimal outcomes of the game. Testing this normative solution experimentally presents problems, inasmuch as rather detailed explanations must be given to the subjects of the meaning of threat strategy, strategy mixture, expected payoff, etc. To the extent that it is desired to test the solution using naive subjects, the problem arises of imparting to them a minimal level of understanding of the issues involved in the game without actually suggesting the solution. Experiments were performed to test the properties of the solution of a cooperative two-person game as these are embodied in three of Nash's four axioms: symmetry, Pareto-optimality, and invariance with respect to positive linear transformations. Of these, the last was definitely discorroborated, suggesting that interpersonal comparison of utilities plays an important part in the negotiations. Some evidence was also found for a conjecture generated by previous experiments, namely that an externally imposed threat (a penalty for non-cooperation) tends to bring the players closer together than do the threats generated by the subjects themselves in the process of negotiation.
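The invariance axiom the experiments discorroborated can be illustrated with a small sketch (setup mine): the Nash solution maximizes the product of utility gains over the disagreement point, and rescaling one player's utility by a positive linear transformation leaves the selected agreement unchanged, even though it changes any interpersonal comparison of the payoffs.

```python
# Illustrative sketch (setup mine): Nash bargaining over a unit pie with
# disagreement point at the status quo. A positive linear transformation
# of player 2's utility (scale > 0, any shift) does not move the solution:
# this is the invariance axiom that subjects violated.

def nash_split(u2_scale=1.0, u2_shift=0.0, steps=1000):
    """Grid-search the Nash product; player 1 gets x, player 2 gets 1 - x.
    Disagreement utilities are 0 for player 1 and u2_shift for player 2."""
    def nash_product(x):
        gain1 = x - 0.0
        gain2 = (u2_scale * (1 - x) + u2_shift) - u2_shift
        return gain1 * gain2
    grid = [i / steps for i in range(steps + 1)]
    return max(grid, key=nash_product)

print(nash_split())                           # 0.5: the symmetric split
print(nash_split(u2_scale=3.0, u2_shift=5.0)) # still 0.5 after rescaling
```

The subjects' behavior, by contrast, did shift with such rescalings, which is why the authors read the data as evidence of interpersonal utility comparison.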

19.
In reply to McClennen, the paper argues that his criticism is based on a mistaken assumption about the meaning of rationality postulates, to be called the Implication Principle. Once we realize that the Implication Principle has no validity, McClennen's criticisms of what he calls the Reductio Argument and what he calls the Incentive Argument fall to the ground. The rest of the paper criticizes the rationality concept McClennen proposes in lieu of that used by orthodox game theory. It is argued that McClennen's concept is inconsistent with the behavior of real-life intelligent egoists; it is incompatible with the way payoffs are defined in game theory; and it would be highly dangerous as a practical guide to human behavior. The author is indebted to the National Science Foundation for financial support through Grant GS-3222, administered through the Center for Research in Management Science, University of California, Berkeley.

20.
Rawls' Difference Principle asserts that a basic economic structure is just if it makes the worst-off people as well off as is feasible. How well off someone is is to be measured by an index of primary social goods. It is this index that gives content to the principle, and Rawls gives no adequate directions for constructing it. In this essay a version of the difference principle is proposed that fits much of what Rawls says but makes use of no index. Instead of invoking an index of primary social goods, the principle formulated here invokes a partial ordering of prospects for opportunities.
