Similar Articles
1.
This paper discusses several concepts that can be used to provide a foundation for a unified theory of rational economic behavior. First, decision-making is defined to be a process that takes place with reference to both subjective and objective time, that distinguishes between plans and actions and between information and states, and that explicitly incorporates the collection and processing of information. This conception of decision-making is then related to several important aspects of behavioral economics: the dependence of values on experience, the use of behavioral rules, the occurrence of multiple goals, and environmental feedback. Our conclusions are: (1) the non-transitivity of observed or revealed preferences is a characteristic of learning and hence is to be expected of rational decision-makers; (2) the learning of values through experience suggests the sensibleness of short time horizons and the making of choices according to flexible utility; (3) certain rules of thumb used to allow for risk are closely related to principles of Safety-First and can also be based directly on the hypothesis that the feeling of risk (the probability of disaster) is identified with extreme departures from recently executed decisions; (4) the maximization of a hierarchy of goals, or of a lexicographical utility function, is closely related to the search for feasibility and the practice of satisficing; (5) when the dim perception of environmental feedback and the effect of learning on values are acknowledged, the intertemporal optimality of planned decision trajectories is seen to be a characteristic of subjective, not objective, time. This explains why decision-making is so often best characterized by rolling plans. In short, we find that economic man - like any other - is an existential being whose plans are based on hopes and fears and whose every act involves a leap of faith. This paper is based on a talk presented at the conference New Beginnings in Economics, Akron, Ohio, March 15, 1969. Work on this paper was supported by a grant from the National Science Foundation.

2.
Operational researchers, management scientists, and industrial engineers have been asked by Russell Ackoff to become systems scientists, yet he stated that Systems Science is not a science (TIMS Interfaces, 2(4), 41). A. C. Fabergé (Science 184, 1330) notes that the original intent of operational researchers was that they be scientists, trained to observe. Hugh J. Miser (Operations Research 22, 903) views operations research as a science, noting that its progress is indeed cyclic. The present paper delineates explicitly the attributes of simulation methodology. Simulation is shown to be both an art and a science; its methodology, properly used, is founded both on confirmed (validated) observation and scrutinised (verified) artwork. The paper delineates the existing procedures by which computer-directed models can be cyclically scrutinised and confirmed, and therefore deemed credible. The complexities of the phenomena observed by social scientists are amenable to human understanding through properly applied simulation: the methodology of the scientist of systems (the systemic scientist).

3.
Nash's solution of a two-person cooperative game prescribes a coordinated mixed strategy solution involving Pareto-optimal outcomes of the game. Testing this normative solution experimentally presents problems, inasmuch as rather detailed explanations must be given to the subjects of the meaning of threat strategy, strategy mixture, expected payoff, etc. To the extent that it is desired to test the solution using naive subjects, the problem arises of imparting to them a minimal level of understanding of the issues involved in the game without actually suggesting the solution. Experiments were performed to test the properties of the solution of a cooperative two-person game as these are embodied in three of Nash's four axioms: symmetry, Pareto-optimality, and invariance with respect to positive linear transformations. Of these, the last was definitely disconfirmed, suggesting that interpersonal comparison of utilities plays an important part in the negotiations. Some evidence was also found for a conjecture generated by previous experiments, namely that an externally imposed threat (a penalty for non-cooperation) tends to bring the players closer together than the threats generated by the subjects themselves in the process of negotiation.
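As a concrete illustration of the solution being tested, the sketch below (not from the paper; the feasible outcomes and disagreement point are hypothetical) selects the Pareto-optimal outcome maximizing the product of the players' gains, and checks the invariance axiom that the experiments call into question.

```python
# A minimal sketch of Nash's bargaining solution on a finite set of feasible
# payoff pairs: choose the outcome maximizing the product of gains over the
# disagreement point. Outcomes and disagreement point are hypothetical.

def nash_solution(outcomes, disagreement):
    d1, d2 = disagreement
    feasible = [(u1, u2) for u1, u2 in outcomes if u1 >= d1 and u2 >= d2]
    return max(feasible, key=lambda o: (o[0] - d1) * (o[1] - d2))

outcomes = [(8, 2), (6, 5), (4, 7), (2, 8)]
print(nash_solution(outcomes, (1, 1)))  # (6, 5): largest product of gains

# The invariance axiom the experiments disconfirm behaviorally: rescaling one
# player's utilities by a positive linear transformation (u1 -> 10*u1 + 3)
# must not change which physical outcome is selected.
rescaled = [(10 * u1 + 3, u2) for u1, u2 in outcomes]
print(nash_solution(rescaled, (13, 1)))  # (63, 5): the same outcome, rescaled
```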

4.
The objective Bayesian program has as its fundamental tenet (in addition to the three Bayesian postulates) the requirement that, from a given knowledge base, a particular probability function is uniquely appropriate. This amounts to fixing initial probabilities, based on relatively little information, because Bayes' theorem (conditionalization) then determines the posterior probabilities when the belief state is altered by enlarging the knowledge base. Moreover, in order to reconstruct orthodox statistical procedures within a Bayesian framework, only privileged ignorance probability functions will work. To serve all these ends, objective Bayesianism seeks additional principles for specifying ignorance and partial-information probabilities. H. Jeffreys' method of invariance (or Jaynes' modification thereof) is used to solve the former problem, and E. Jaynes' rule of maximizing entropy (subject to invariance for continuous distributions) has recently been thought to solve the latter. I have argued that neither policy is acceptable to a Bayesian, since each is inconsistent with conditionalization. Invariance fails to give a consistent representation to the state of ignorance professed. The difficulties here parallel familiar weaknesses in the old Laplacean principle of insufficient reason. Maximizing entropy is unsatisfactory because the partial information it works with fails to capture the effect of uncertainty about related nuisance factors. The result is a probability function that represents a state richer in empirical content than the belief state targeted for representation. Alternatively, by conditionalizing on information about a nuisance parameter one may move from a distribution of lower to higher entropy, despite the obvious increase in information available. Each of these two complaints appears to me to be a symptom of the program's inability to formulate rules for picking privileged probability distributions that serve to represent ignorance or near ignorance. Certainly the methods advocated by Jeffreys, Jaynes and Rosenkrantz are mathematically convenient idealizations wherein specified distributions are elevated to the roles of ignorance and partial-information distributions. But the cost that goes with the idealization is a violation of conditionalization, and if that is the ante that we must put up to back objective Bayesianism, then I propose we look for a different candidate to earn our support.
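For readers unfamiliar with Jaynes' rule, the sketch below (my illustration; the three-point support and target mean are hypothetical) shows what maximizing entropy under a mean constraint actually computes: an exponentially tilted distribution, reducing to the uniform "ignorance" distribution when no constraint binds.

```python
import math

# Maximum entropy over support {1, 2, 3} with only the mean constrained:
# the maxent solution has the exponential form p_i proportional to
# exp(lam * x_i). We find lam by bisection (the mean is increasing in lam).
xs = [1, 2, 3]

def maxent_dist(lam):
    ws = [math.exp(lam * x) for x in xs]
    z = sum(ws)
    return [w / z for w in ws]

def mean(p):
    return sum(pi * x for pi, x in zip(p, xs))

target = 2.5
lo, hi = -50.0, 50.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean(maxent_dist(mid)) < target:
        lo = mid
    else:
        hi = mid

p = maxent_dist(lo)
print([round(pi, 4) for pi in p], round(mean(p), 4))
# With no constraint at all (lam = 0) the rule returns the uniform
# distribution [1/3, 1/3, 1/3] -- Laplace's principle of insufficient reason.
```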

5.
A soundness proof for an axiomatization of common belief in minimal neighbourhood semantics is provided, thereby leaving aside all assumptions of monotonicity in agents' reasoning. Minimality properties of common belief are thus emphasized, in contrast to the more usual fixed-point properties. The proof relies on the existence of transfinite fixed points of sequences of neighbourhood systems even when they are not closed under supersets. An obvious shortcoming of the note is the lack of a completeness proof.

6.
This paper reviews a problem for utility theory: it would have an agent who was compelled to play Russian Roulette with one revolver or another pay as much to have a six-shooter with four bullets relieved of one bullet before playing with it as he would be willing to pay to have a six-shooter with two bullets emptied. A less demanding Bayesian theory is described that would have an agent maximize the expected values of the possible total consequences of his actions. Utility theory is then located within that theory as valid for agents who satisfy certain formal conditions, that is, for agents who are, in terms of that more general theory, indifferent to certain dimensions of risk. Raiffa- and Savage-style arguments for its more general validity are then resisted. Addenda are concerned with implications for game theory and with relations between utilities and values.
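The arithmetic behind the alleged problem can be made explicit. The sketch below is my reconstruction under standard expected-utility assumptions (utility of death fixed at 0 regardless of money paid; a hypothetical wealth of 100 and u(w) = sqrt(w)); both indifference payments come out identical, which is the implication the paper examines.

```python
import math

W = 100.0       # hypothetical wealth
u = math.sqrt   # hypothetical utility of surviving with wealth w

def price(k_before, k_after):
    """Payment x making the agent indifferent between playing with k_before
    bullets for free and paying x to play with k_after bullets (6 chambers).
    Found by bisection; utility of death is 0 whether or not x was paid."""
    survive_now, survive_then = (6 - k_before) / 6, (6 - k_after) / 6
    lo, hi = 0.0, W
    for _ in range(100):
        x = (lo + hi) / 2
        if survive_then * u(W - x) > survive_now * u(W):
            lo = x  # still worth paying more
        else:
            hi = x
    return lo

print(round(price(4, 3), 4))  # remove one bullet of four: ~55.56
print(round(price(2, 0), 4))  # empty a two-bullet revolver: the same price
```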

7.
Two institutions that are often implicit or overlooked in noncooperative games are the assumption of Nash behavior to solve a game and the ability to correlate strategies. We consider two behavioral paradoxes: one in which maximin behavior rules out all Nash equilibria (Chicken), and another in which minimax supergame behavior leads to an inefficient outcome in comparison to the unique stage game equilibrium (asymmetric Deadlock). Nash outcomes are achieved in both paradoxes by allowing for correlated strategies, even when individual behavior remains minimax or maximin. However, the interpretation of correlation as a public institution differs for each case.
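The first paradox is easy to check by enumeration. The sketch below (hypothetical Chicken payoffs) computes each player's maximin action and the game's pure-strategy Nash equilibria; the maximin profile is not an equilibrium.

```python
from itertools import product

A = ["swerve", "dare"]
payoff = {("swerve", "swerve"): (2, 2), ("swerve", "dare"): (1, 3),
          ("dare", "swerve"): (3, 1), ("dare", "dare"): (0, 0)}

def maximin(player):
    """Action maximizing the player's worst-case payoff."""
    def worst(a):
        return min(payoff[(a, b) if player == 0 else (b, a)][player] for b in A)
    return max(A, key=worst)

def pure_nash():
    """All profiles where neither player can gain by deviating."""
    eqs = []
    for r, c in product(A, A):
        if (payoff[(r, c)][0] >= max(payoff[(a, c)][0] for a in A) and
                payoff[(r, c)][1] >= max(payoff[(r, b)][1] for b in A)):
            eqs.append((r, c))
    return eqs

print(maximin(0), maximin(1))  # swerve swerve: the cautious profile...
print(pure_nash())             # [('swerve','dare'), ('dare','swerve')]
# ...which is not among the pure Nash equilibria: each player would deviate.
```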

8.
The paper presents results from two new experiments designed to test between the rational choice hypothesis and the random error hypothesis for intransitive choice. Error probabilities and population shares for transitive and intransitive preference types are estimated from data collected in the first experiment. An unrestricted model (which treats intransitive patterns as true patterns) performs no better than a model that is restricted to transitive patterns. Analysis of the conditional distributions of choice patterns, using data from the second experiment, confirms more directly the main results of the first experiment: that observed intransitive choice patterns are due to random error.
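To see how random error alone can generate intransitive patterns, consider the simulation below (my illustration, not the paper's experimental design or estimator): a truly transitive subject with preference a > b > c misreports each pairwise choice independently with probability eps, and cyclic triples appear at the predictable rate eps(1-eps)^2 + eps^2(1-eps).

```python
import random

random.seed(1)
eps, trials = 0.1, 100_000

def choose(prefer_first):
    """Report the true preference, flipped with probability eps."""
    return prefer_first if random.random() > eps else not prefer_first

cycles = 0
for _ in range(trials):
    ab = choose(True)   # True means a chosen over b
    bc = choose(True)   # b over c
    ac = choose(True)   # a over c
    # the two cyclic patterns among the 8 possible response triples
    if (ab and bc and not ac) or (not ab and not bc and ac):
        cycles += 1

expected = eps * (1 - eps) ** 2 + eps ** 2 * (1 - eps)  # = eps*(1-eps) = 0.09
print(cycles / trials, expected)
```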

9.
Aumann's (1987) theorem shows that correlated equilibrium is an expression of Bayesian rationality. We extend this result to games with incomplete information. First, we rely on Harsanyi's (1967) model and represent the underlying multiperson decision problem as a fixed game with imperfect information. We survey four definitions of correlated equilibrium which have appeared in the literature. We show that these definitions are not equivalent to each other. We prove that one of them fits Aumann's framework; the agents' normal-form correlated equilibrium is an expression of Bayesian rationality in games with incomplete information. We also follow a universal Bayesian approach based on Mertens and Zamir's (1985) construction of the universal beliefs space. Hierarchies of beliefs over independent variables (states of nature) and dependent variables (actions) are then constructed simultaneously. We establish that the universal set of Bayesian solutions satisfies another extension of Aumann's theorem. We get the following corollary: once the types of the players are not fixed by the model, the various definitions of correlated equilibrium previously considered are equivalent.

10.
Rawls' Difference Principle asserts that a basic economic structure is just if it makes the worst off people as well off as is feasible. How well off someone is is to be measured by an index of primary social goods. It is this index that gives content to the principle, and Rawls gives no adequate directions for constructing it. In this essay a version of the difference principle is proposed that fits much of what Rawls says, but that makes use of no index. Instead of invoking an index of primary social goods, the principle formulated here invokes a partial ordering of prospects for opportunities.

11.
A rule for the acceptance of scientific hypotheses called the principle of cost-benefit dominance is shown to be more effective and efficient than the well-known principle of the maximization of expected (epistemic) utility. Harvey's defense of his theory of the circulation of blood in animals is examined as a historical paradigm case of a successful defense of a scientific hypothesis and as an implicit application of the cost-benefit dominance rule advocated here. Finally, various concepts of dominance are considered by means of which the effectiveness of our rule may be increased.The number of friends who have kindly given me suggestions and encouragement is almost embarrassingly large, but I would like to express my gratitude to Myles Brand, Cliff Hooker, David Hull, Scott Kleiner, Hugh Lehman, Werner Leinfellner, Andrew McLaughlin and Tom W. Settle.
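The dominance idea admits a simple formal reading. The sketch below is a hedged reconstruction (not the paper's actual rule; the criteria and scores are hypothetical): accept a hypothesis whose cost-benefit profile is at least as good on every criterion and strictly better on at least one.

```python
def dominates(h, g):
    """h, g: dicts of criterion -> score, higher = better on every criterion
    (costs entered as negatives). Weak dominance with one strict inequality."""
    at_least_as_good = all(h[k] >= g[k] for k in h)
    return at_least_as_good and any(h[k] > g[k] for k in h)

# Hypothetical scores for two rival hypotheses:
harvey = {"explanatory scope": 3, "predictive success": 3, "simplicity": 2}
galen  = {"explanatory scope": 2, "predictive success": 1, "simplicity": 2}
print(dominates(harvey, galen))  # True: accept the dominant hypothesis
```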

12.
If K is an index of relative voting power for simple voting games, the bicameral postulate requires that the distribution of K-power within a voting assembly, as measured by the ratios of the powers of the voters, be independent of whether the assembly is viewed as a separate legislature or as one chamber of a bicameral system, provided that there are no voters common to both chambers. We argue that a reasonable index – if it is to be used as a tool for analysing abstract, uninhabited decision rules – should satisfy this postulate. We show that, among known indices, only the Banzhaf measure does so. Moreover, the Shapley–Shubik, Deegan–Packel and Johnston indices sometimes witness a reversal under these circumstances, with voter x less powerful than y when measured in the simple voting game G1, but more powerful than y when G1 is bicamerally joined with a second chamber G2. Thus these three indices violate a weaker, and correspondingly more compelling, form of the bicameral postulate. It is also shown that these indices are not always co-monotonic with the Banzhaf index and that as a result they infringe another intuitively plausible condition – the price monotonicity condition. We discuss implications of these findings, in light of recent work showing that only the Shapley–Shubik index, among known measures, satisfies another compelling principle known as the bloc postulate. We also propose a distinction between two separate aspects of voting power: power as share in a fixed purse (P-power) and power as influence (I-power).
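For reference, the Banzhaf measure counts "swings": winning coalitions that would become losing without a given voter. A minimal sketch with hypothetical weights and quota:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Raw Banzhaf counts: for each voter, the number of coalitions that win
    with the voter and lose without them."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            for i in coalition:
                if total >= quota and total - weights[i] < quota:
                    swings[i] += 1
    return swings

raw = banzhaf([4, 3, 2, 1], quota=6)   # [5, 3, 3, 1]
total = sum(raw)
print(raw, [s / total for s in raw])   # the power ratios the postulate fixes
```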

13.
Separating marginal utility and probabilistic risk aversion
This paper is motivated by the search for one cardinal utility for decisions under risk, welfare evaluations, and other contexts. This cardinal utility should have meaning prior to risk, with risk depending on cardinal utility, not the other way around. The rank-dependent utility model can reconcile such a view on utility with the position that risk attitude consists of more than marginal utility, by providing a separate risk component: a probabilistic risk attitude towards probability mixtures of lotteries, modeled through a transformation for cumulative probabilities. While this separation of risk attitude into two independent components is the characteristic feature of rank-dependent utility, it had not yet been axiomatized. Doing that is the purpose of this paper. Therefore, in the second part, the paper extends Yaari's axiomatization to nonlinear utility, and provides separate axiomatizations for increasing/decreasing marginal utility and for optimistic/pessimistic probability transformations. This is generalized to interpersonal comparability. It is also shown that two elementary and often-discussed properties – quasi-convexity (aversion) of preferences with respect to probability mixtures, and convexity (pessimism) of the probability transformation – are equivalent.
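The functional form at issue can be written out directly. Below is a minimal sketch (the lottery, u, and w are hypothetical choices): utility is applied to outcomes, while the transformation w reshapes cumulative, rank-ordered probabilities; w(p) = p gives back expected utility, and a convex ("pessimistic") w shifts decision weight toward worse outcomes.

```python
import math

def rdu(lottery, u, w):
    """Rank-dependent utility. lottery: list of (outcome, prob).
    Outcomes are ranked best-first; w is applied to the cumulative
    probability of doing at least this well."""
    ranked = sorted(lottery, key=lambda op: -op[0])
    total, cum_prev = 0.0, 0.0
    for x, p in ranked:
        cum = cum_prev + p
        total += (w(cum) - w(cum_prev)) * u(x)  # decision weight * utility
        cum_prev = cum
    return total

lottery = [(100, 0.5), (0, 0.5)]
u = math.sqrt
print(rdu(lottery, u, w=lambda p: p))       # identity w: plain EU = 5.0
print(rdu(lottery, u, w=lambda p: p ** 2))  # convex (pessimistic) w: 2.5
```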

14.
The present paper deals with the Galbraithian theory of the managerial firm. Galbraith has stressed corporate size and has questioned the effectiveness of the market demand, technology and capital market constraints, which in conventional theory restrict the size of the firm. Galbraith represents the objectives of the corporation in terms of a conventional lexicographic objective function, with some minimal level of profits (in terms of cash flow) being ranked the dominant objective. Also in his treatment of the corporate constraints, Galbraith does not move much beyond the current state of knowledge. The assumption of consumer sovereignty has long been relegated to the text-book literature, and the firm's control over the quality of its product (its price elasticity) has been generally recognized. Similarly, it has been known that the capital market is not perfect, so that it is unlikely to constrain the expansion of the firm with some given investor-determined earnings constraint. In his attempt to show the technostructure's ability to plan the rate and direction of technological development, however, Galbraith did not meet with wide support from empirical research and analysis. It is extremely difficult to test the firm's control over its production technology, and while the few industry studies available can hardly be used to reject the Galbraithian position, there is not sufficient evidence to support a generalization of Galbraith's conjecture. While individually these constraints have been analyzed and discussed in the literature, Galbraith has combined these results and has been able to show that in the industrial state the qualitative laws of economic common sense do not hold. The importance of this conclusion is not only academic. Efforts to control corporate allocations through rate controls, antitrust litigation, and other means emanate from the conventional theory of firms and markets and do not fit the industrial state. In this state corporate size does matter and cannot be treated as random: the larger the corporation, the more perfect the control it assumes over its environment and the higher the efficiency with which it plans its overall operations. We acknowledge the helpful comments of a referee of this journal.

15.
A new investigation is launched into the problem of decision-making in the face of complete ignorance and linked to the problem of social choice. In the first section the author introduces a set of properties which might characterize a criterion for decision-making under complete ignorance. Two of these properties are novel: independence of non-discriminating states, and weak pessimism. The second section provides a new characterization of the so-called principle of insufficient reason. In the third part, lexicographic maximin and maximax criteria are characterized. Finally, the author's results are linked to the problem of social choice.
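To fix ideas, the sketch below (hypothetical payoff table; standard textbook definitions, not the paper's axiomatic characterizations) compares the criteria under discussion on a small decision problem. Note how lexicographic maximin breaks the tie that plain maximin leaves.

```python
# Acts as rows, completely unknown states as columns (hypothetical payoffs).
acts = {"a1": [0, 10, 10], "a2": [1, 1, 12], "a3": [1, 2, 9]}

def maximin(acts):  # best worst case (ties broken by dict order)
    return max(acts, key=lambda a: min(acts[a]))

def maximax(acts):  # best best case
    return max(acts, key=lambda a: max(acts[a]))

def leximin(acts):  # lexicographic maximin: compare sorted payoff vectors
    return max(acts, key=lambda a: sorted(acts[a]))

def insufficient_reason(acts):  # equal weight to every state
    return max(acts, key=lambda a: sum(acts[a]) / len(acts[a]))

# a2, a2, a3, a1: leximin breaks the maximin tie between a2 and a3.
print(maximin(acts), maximax(acts), leximin(acts), insufficient_reason(acts))
```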

16.
Far-sighted equilibria in 2 × 2, non-cooperative, repeated games
Consider a two-person simultaneous-move game in strategic form. Suppose this game is played over and over at discrete points in time. Suppose, furthermore, that communication is not possible, but that we nevertheless observe some regularity in the sequence of outcomes. The aim of this paper is to explain why such regularity might persist for many (i.e., infinitely many) periods. Each player, when contemplating a deviation, considers a sequential-move game, roughly of the following form: if I change my strategy this period, then in the next my opponent will take his strategy b, and afterwards I can switch to my strategy a, but then I am worse off, since at that outcome my opponent has no incentive to change anymore, whatever I do. Theoretically, however, there is no end to such reaction chains. In case deviating gives some player less utility in the long run than before the deviation, we say that the original regular sequence of outcomes is far-sighted stable for that player. It is a far-sighted equilibrium if it is far-sighted stable for both players.

17.
This paper considers two fundamental aspects of the analysis of dynamic choices under risk: the issue of the dynamic consistency of the strategies of a non-EU maximizer, and the issue that an individual whose preferences are nonlinear in probabilities may choose a strategy which is, in some appropriate sense, dominated by other strategies. A proposed way of dealing with these problems, due to Karni and Safra and called behavioral consistency, is described. The implications of this notion of behavioral consistency are explored, and it is shown that while the Karni and Safra approach obtains dynamically consistent behavior under nonlinear preferences, it may imply the choice of dominated strategies even in very simple decision trees.

18.
The idea that an individual's behavior is a function of its utility or value represents a very common and fundamental assumption in the study of human conduct. This paper attempts to determine the nature of this function more precisely. Adopting a probabilistic conception of human action, it appears that an exponential function satisfies both the empirical and the formal conditions which it seems necessary to impose on it initially. Empirical research into behavioral change lends additional support to the function thus constructed.
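One way to make the exponential hypothesis concrete (my illustration; the abstract does not give the paper's exact functional form) is a Luce-style choice rule in which choice probability is proportional to exp(value), so equal differences in value yield equal probability ratios.

```python
import math

def choice_probs(values):
    """Probability of each action proportional to exp of its value."""
    ws = {a: math.exp(v) for a, v in values.items()}
    z = sum(ws.values())
    return {a: w / z for a, w in ws.items()}

print(choice_probs({"act_a": 1.0, "act_b": 2.0}))
# p(act_b) / p(act_a) = exp(2 - 1) = e, regardless of the common level
# of the two values: only value differences matter.
```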

19.
This paper studies two models of rational behavior under uncertainty whose predictions are invariant under ordinal transformations of utility. The quantile utility model assumes that the agent maximizes some quantile of the distribution of utility. The utility mass model assumes maximization of the probability of obtaining an outcome whose utility is higher than some fixed critical value. Both models satisfy weak stochastic dominance; lexicographic refinements satisfy strong dominance. The study of these utility models suggests a significant generalization of traditional ideas of riskiness and risk preference. We define one action to be riskier than another if the utility distribution of the latter crosses that of the former from below. The single crossing property is equivalent to a minmax spread of a random variable. With relative risk defined by the single crossing criterion, the risk preference of a quantile utility maximizer increases with the utility distribution quantile that he maximizes. The risk preference of a utility mass maximizer increases with his critical utility value.
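Both decision rules are a few lines of code. The sketch below (hypothetical lotteries) evaluates a safe and a risky option under each model and shows the comparative-statics claim: raising the maximized quantile tau, or the critical level c, makes the agent more risk-prone.

```python
def quantile(lottery, tau):
    """lottery: list of (utility, prob); returns the tau-quantile of utility."""
    cum = 0.0
    for u, p in sorted(lottery):
        cum += p
        if cum >= tau:
            return u
    return sorted(lottery)[-1][0]

def utility_mass(lottery, c):
    """Probability of an outcome whose utility exceeds the critical value c."""
    return sum(p for u, p in lottery if u > c)

safe  = [(5, 1.0)]
risky = [(0, 0.4), (10, 0.6)]
print(quantile(risky, 0.3), quantile(risky, 0.5))     # 0 vs 10: higher tau,
#   the tau = 0.3 agent prefers safe (5 > 0); the median agent picks risky
print(utility_mass(safe, 4), utility_mass(risky, 4))  # 1.0 vs 0.6: low c, safe
print(utility_mass(safe, 6), utility_mass(risky, 6))  # 0.0 vs 0.6: high c, risky
```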

20.
This paper falls within the field of distributive justice and (as the title indicates) addresses itself specifically to the meshing problem. Briefly stated, the meshing problem is the difficulty encountered when one tries to aggregate the two parameters of beneficence and equity in a way that determines which of two or more alternative utility distributions is most just. A solution to this problem, in the form of a formal welfare measure, is presented in the paper. This formula incorporates the notions of equity and beneficence (which are defined earlier by the author) and weighs them against each other to compute a numerical value which represents the degree of justice a given distribution possesses. This value can in turn be used comparatively to select which utility scheme, of those being considered, is best. Three fundamental adequacy requirements, which any acceptable welfare-measuring method must satisfy, are presented and subsequently demonstrated to be formally deducible as theorems of the author's system. A practical application of the method is then considered, as well as a comparison of it with Nicholas Rescher's method (found in his book, Distributive Justice). The conclusion reached is that Rescher's system is unacceptable, since it computes counter-intuitive results. Objections to the author's welfare measure are considered and answered. Finally, a suggestion is made for expanding the system to cover cases it was not originally designed to handle (i.e. situations where two alternative utility distributions vary with regard to the number of individuals they contain). The conclusion reached at the close of the paper is that an acceptable solution to the meshing problem has been established. I would like to gratefully acknowledge the assistance of Michael Tooley, whose positive suggestions and critical comments were invaluable in the writing of this paper.
