Similar documents (20 results)
1.
The traditional or orthodox decision rule of maximizing conditional expected utility has recently come under attack by critics who advance alternative causal decision theories. The traditional theory has, however, been defended, and these defenses have in turn been criticized. Here, I examine two objections to such defenses and advance a theory about the dynamics of deliberation (a diachronic theory about the process of deliberation) within the framework of which both objections to the defenses of the traditional theory fail.
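A toy Newcomb-style calculation (illustrative numbers, not drawn from the paper) shows how the two rules can diverge. Suppose a predictor of accuracy 0.99 puts 1,000,000 in an opaque box just in case it predicts you will take only that box, and a transparent box holds 1,000. Conditional expected utility: one-boxing yields 0.99 × 1,000,000 = 990,000, while two-boxing yields 0.01 × 1,000,000 + 1,000 = 11,000, so the orthodox rule recommends one box. A causal decision theory holds the prediction fixed: whatever was predicted, taking both boxes pays exactly 1,000 more, so it recommends two boxes.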

2.
A fixed agenda social choice correspondence on outcome set X maps each profile of individual preferences into a nonempty subset of X. If the correspondence satisfies an analogue of Arrow's independence of irrelevant alternatives condition, then either its range contains exactly two alternatives, or else there is at most one individual whose preferences have any bearing on the chosen set. This is the case even if the correspondence is not defined for any proper subset of X.

3.
A complete classification theorem for voting processes on a smooth choice space W of dimension w is presented. Any voting process σ is classified by two integers v*(σ) and w(σ), in terms of the existence or otherwise of the optima set IO(σ) and the cycle set IC(σ). In dimension below v*(σ) the cycle set is always empty, and in dimension above w(σ) the optima set is nearly always empty while the cycle set is open dense and path connected. In the latter case agenda manipulation results in any outcome. For admissible (compact, convex) choice spaces, the two sets are related by the general equilibrium result that IO(σ) ∪ IC(σ) is non-empty. This in turn implies existence of optima in low dimensions. The equilibrium theorem is used to examine voting games with an infinite electorate, and the nature of structure-induced equilibria induced by jurisdictional restrictions. This material is based on work supported by a Nuffield Foundation grant.
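As an illustration of the high-dimensional case (a standard example, not taken from the paper): under simple majority rule with three voters whose ideal points in a two-dimensional space are not collinear, no alternative is unbeaten, so the optima set is empty; McKelvey's well-known result that majority cycles then connect essentially the whole space is exactly the sense in which a suitably chosen agenda can lead to any outcome.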

4.
Chipman (1979) proves that for an expected utility maximizer choosing from a domain of normal distributions with mean μ and variance σ², the induced preference function V(μ, σ) satisfies a differential equation known as the heat equation. The purpose of this note is to provide a generalization and simple proof of this result which does not depend on the normality assumption.
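A minimal sketch of the normal-distribution case (my notation, not the note's): write t = σ² and V(μ, t) = ∫ u(x) p(x; μ, t) dx, where p(·; μ, t) is the density of N(μ, t). The Gaussian kernel satisfies ∂p/∂t = (1/2) ∂²p/∂μ², so differentiating under the integral sign gives ∂V/∂t = (1/2) ∂²V/∂μ²: the induced preference function satisfies the one-dimensional heat equation, with the mean playing the role of space and the variance the role of time.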

5.
This article reports a test of the predictive accuracy of solution concepts in cooperative non-sidepayment n-person games with empty core. Six solutions were tested. Three of these were value solutions (i.e., the λ-transfer value, λ-transfer nucleolus, and λ-transfer disruption value) and three were equilibrium solutions (deterrence set, stable set, and imputation set). The test was based on a laboratory experiment utilizing 5-person, 2-choice normal form games with empty core; other related data sets were also analyzed. Goodness-of-fit results based on discrepancy scores show that the three value solutions are about equally accurate in predicting outcomes, and that all three are substantially more accurate than the other solutions tested.

6.
The idea that an individual's behavior is a function of its utility or value is a very common and fundamental assumption in the study of human conduct. This paper attempts to determine the nature of this function more precisely. Adopting a probabilistic conception of human action, an exponential function turns out to satisfy both the empirical and the formal conditions that it seems necessary to impose at the outset. Empirical research into behavioral change lends additional support to the function thus constructed.
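One natural exponential specification consistent with this description (an illustration only; the paper's exact function may differ) is a choice rule in which the probability of an act increases exponentially with its utility, P(a) = exp(U(a)/τ) / Σ_b exp(U(b)/τ), where the parameter τ > 0 controls how sharply behavior concentrates on the highest-utility acts.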

7.
Given a finite state space and common priors, common knowledge of the identity of an agent with the minimal (or maximal) expectation of a random variable implies consensus, i.e., common knowledge of common expectations. This extremist statistic induces consensus when repeatedly announced, and yet, with n agents, requires at most log₂ n bits to broadcast.
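For example, with n = 8 agents the extremist's identity is just an index between 1 and 8, which can be broadcast with ⌈log₂ 8⌉ = 3 bits, however finely the expectations themselves would have to be described.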

8.
Two institutions that are often implicit or overlooked in noncooperative games are the assumption of Nash behavior to solve a game, and the ability to correlate strategies. We consider two behavioral paradoxes: one in which maximin behavior rules out all Nash equilibria (Chicken), and another in which minimax supergame behavior leads to an inefficient outcome in comparison to the unique stage game equilibrium (asymmetric Deadlock). Nash outcomes are achieved in both paradoxes by allowing for correlated strategies, even when individual behavior remains minimax or maximin. However, the interpretation of correlation as a public institution differs for each case.
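An illustrative Chicken payoff matrix (numbers mine, not the paper's) makes the first paradox concrete. With actions Swerve (S) and Dare (D) and payoffs (row, column) given by (S,S) = (3,3), (S,D) = (2,4), (D,S) = (4,2), (D,D) = (1,1), the row player's security levels are min(3,2) = 2 for S and min(4,1) = 1 for D, so maximin play selects S for both players. The resulting outcome (S,S) is not a Nash equilibrium, whereas the pure Nash equilibria (S,D) and (D,S) are exactly the profiles that maximin reasoning avoids.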

9.
Orbell and Dawes develop a non-game-theoretic heuristic that yields a cooperator's advantage by allowing players to project their own cooperate-defect choices onto potential partners (1991, p. 515). With appropriate parameter values their heuristic yields a cooperative environment, but the cooperation depends, simply, on optimism about others' behavior (1991, p. 526). In earlier work, Dawes (1989) established a statistical foundation for such optimism. In this paper, I adapt some of the concerns of Dawes (1989) and develop a game-theoretic model based on a modification of the Harsanyi structure of games with incomplete information (1967–1968). I show that the commonly made conjecture that strategic play is incompatible with cooperation and the cooperator's advantage is false.
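An illustrative calculation in the spirit of Dawes (1989) (numbers mine): suppose a player holds a uniform, Beta(1,1), prior over the population's cooperation rate and treats her own inclination as one draw from that population. Conditional on finding herself willing to cooperate, the posterior is Beta(2,1), with mean 2/3 rather than 1/2, so her expectation that a randomly matched partner cooperates rises. This is the statistical sense in which one's own choice can rationally ground optimism about others' behavior.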

10.
Operational researchers, management scientists, and industrial engineers have been asked by Russell Ackoff to become systems scientists, yet he stated that systems science is not a science (TIMS Interfaces, 2(4), 41). A. C. Fabergé (Science 184, 1330) notes that the original intent of operational researchers was that they be scientists, trained to observe. Hugh J. Miser (Operations Research 22, 903) views operations research as a science, noting that its progress is indeed of a cyclic nature. The present paper delineates explicitly the attributes of simulation methodology. Simulation is shown to be both an art and a science; its methodology, properly used, is founded both on confirmed (validated) observation and scrutinised (verified) art work. The paper delineates the existing procedures by which computer-directed models can be cyclically scrutinised and confirmed, and therefore deemed credible. The complexities of the phenomena observed by social scientists are amenable to human understanding by properly applied simulation: the methodology of the scientist of systems (the systemic scientist).

11.
This paper falls within the field of Distributive Justice and (as the title indicates) addresses itself specifically to the meshing problem. Briefly stated, the meshing problem is the difficulty encountered when one tries to aggregate the two parameters of beneficence and equity in a way that results in determining which of two or more alternative utility distributions is most just. A solution to this problem, in the form of a formal welfare measure, is presented in the paper. This formula incorporates the notions of equity and beneficence (which are defined earlier by the author) and weighs them against each other to compute a numerical value which represents the degree of justice a given distribution possesses. This value can in turn be used comparatively to select which utility scheme, of those being considered, is best. Three fundamental adequacy requirements, which any acceptable welfare measuring method must satisfy, are presented and subsequently demonstrated to be formally deducible as theorems of the author's system. A practical application of the method is then considered, as well as a comparison of it with Nicholas Rescher's method (found in his book, Distributive Justice). The conclusion reached is that Rescher's system is unacceptable, since it computes counter-intuitive results. Objections to the author's welfare measure are considered and answered. Finally, a suggestion is made for expanding the system to cover cases it was not originally designed to handle (i.e., situations where two alternative utility distributions vary with regard to the number of individuals they contain). The conclusion reached at the close of the paper is that an acceptable solution to the meshing problem has been established. I would like to gratefully acknowledge the assistance of Michael Tooley, whose positive suggestions and critical comments were invaluable in the writing of this paper.
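A toy instance of the meshing problem (my numbers, not the author's): compare the two-person utility distributions (12, 4) and (7, 7). The first scores higher on beneficence (total utility 16 versus 14), the second higher on equity (no spread versus a spread of 8). A welfare measure of the kind the paper proposes must weigh these two parameters against each other to decide which distribution is the more just.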

12.
This paper is concerned with selecting an appropriate perspective from which to understand and evaluate social-decision procedures. Distinguishing between agent-rationality and option-rationality, the author argues that a rational agent may choose a social-decision procedure that is not itself agent-rational (but merely option-rational). The argument puts the voter's paradox in a context allowing evaluation of (a) its general import and (b) practical proposals for avoiding it in particular cases. Arrow's four conditions for a social-decision procedure are shown to have little relevance to the understanding or evaluation of constitutions. The author concludes that the more fruitful perspective for discussing social-decision procedures is that of option-rationality rather than (as Arrow, Wolff, and others have supposed) that of agent-rationality.
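The voter's paradox in its simplest form (a standard illustration, not taken from the paper): three voters rank three alternatives as x > y > z, y > z > x, and z > x > y. Pairwise majority voting then prefers x to y, y to z, and z to x, so the social comparison cycles even though every individual ranking is transitive.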

13.
The author tries to formulate what a determinist believes to be true. The formulation is based on some concepts defined in a systems-theoretical manner: mainly on the concept of an experiment over the sets A^m (a set of m-tuples of input values) and B^n (a set of n-tuples of output values) in the time interval (t_1, ..., t_k), symbolically E[t_1, ..., t_k, A^m, B^n]; on the concept of a behavior of the system S_{m,n} (= (A^m, B^n)) on the basis of the experiment E[t_1, ..., t_k, A^m, B^n]; and, indeed, on the concept of deterministic behavior. The resulting formulation of the deterministic hypothesis shows that this hypothesis expresses a belief that we could always find some hidden parameters.

14.
Let (μ₁, σ₁) and (μ₂, σ₂) be mean-standard deviation pairs of two probability distributions on the real line. Mean-variance analyses presume that the preferred distribution depends solely on these pairs, with primary preference given to larger mean and smaller variance. This presumption, in conjunction with the assumption that one distribution is better than a second distribution if the mass of the first is completely to the right of the mass of the second, implies that (μ₁, σ₁) is preferred to (μ₂, σ₂) if and only if either μ₁ > μ₂ or (μ₁ = μ₂ and σ₁ < σ₂), provided that the set of distributions is sufficiently rich. The latter provision fails if the outcomes of all distributions lie in a finite interval, but then it is still possible to arrive at more liberal dominance conclusions between (μ₁, σ₁) and (μ₂, σ₂). This research was supported by the Office of Naval Research.

15.
Tiebreak rules are necessary for revealing indifference in non-sequential decisions. I focus on a preference relation that satisfies Ordering and fails Independence in the following way. Lotteries a and b are indifferent, but the compound lottery (0.5 f, 0.5 b) is strictly preferred to the compound lottery (0.5 f, 0.5 a). Using tiebreak rules, the following is shown here: in sequential decisions, when backward induction is applied, a preference like the one just described must alter the preference relation between a and b at certain choice nodes, i.e., indifference between a and b is not stable. Using this result, I answer a question posed by Rabinowicz (1997) concerning admissibility in sequential decisions when indifferent options are substituted at choice nodes.

16.
This paper discusses several concepts that can be used to provide a foundation for a unified theory of rational economic behavior. First, decision-making is defined to be a process that takes place with reference to both subjective and objective time, that distinguishes between plans and actions and between information and states, and that explicitly incorporates the collection and processing of information. This conception of decision-making is then related to several important aspects of behavioral economics: the dependence of values on experience, the use of behavioral rules, the occurrence of multiple goals, and environmental feedback. Our conclusions are: (1) the non-transitivity of observed or revealed preferences is a characteristic of learning and hence is to be expected of rational decision-makers; (2) the learning of values through experience suggests the sensibleness of short time horizons and the making of choices according to flexible utility; (3) certain rules of thumb used to allow for risk are closely related to principles of Safety-First and can also be based directly on the hypothesis that the feeling of risk (the probability of disaster) is identified with extreme departures from recently executed decisions; (4) the maximization of a hierarchy of goals, or of a lexicographical utility function, is closely related to the search for feasibility and the practice of satisficing; (5) when the dim perception of environmental feedback and the effect of learning on values are acknowledged, the intertemporal optimality of planned decision trajectories is seen to be a characteristic of subjective, not objective, time. This explains why decision-making is so often best characterized by rolling plans. In short, we find that economic man, like any other, is an existential being whose plans are based on hopes and fears and whose every act involves a leap of faith. This paper is based on a talk presented at the Conference, New Beginnings in Economics, Akron, Ohio, March 15, 1969. Work on this paper was supported by a grant from the National Science Foundation.

17.
In general, the technical apparatus of decision theory is well developed. It has loads of theorems, and they can be proved from axioms. Many of the theorems are interesting, and useful both from a philosophical and a practical perspective. But decision theory does not have a well-agreed-upon interpretation. Its technical terms, in particular utility and preference, do not have a single clear and uncontroversial meaning. How to interpret these terms depends, of course, on the purposes for which one wants to put decision theory to use. One might want to use it as a model of economic decision-making, in order to predict the behavior of corporations or of the stock market. In that case, it might be useful to interpret the technical term utility as meaning money profit. Decision theory would then be an empirical theory. I want to look into the question of what utility could mean, if we want decision theory to function as a theory of practical rationality. I want to know whether it makes good sense to think of practical rationality as fully or even partly accounted for by decision theory. I shall lay my cards on the table: I hope it does make good sense to think of it that way. For, I think, if Humeans are right about practical rationality, then decision theory must play a very large part in their account. And I think Humeanism has very strong attractions.

18.
This paper develops the idea of a choice as a mapping of subsets of a set X into their respective subsets and the idea of the comparison, as a relation between elements of X, that is determined or revealed by a choice. It then studies how certain properties of a choice imply or are implied by certain properties, such as acyclicity, quasi-transitivity, pseudo-transitivity and transitivity, of the comparison revealed, finally giving a complete logical diagram of all the implications between these latter properties of the comparison.
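A small sketch (my own construction, using one common way of revealing the comparison from pairwise choices; the paper's exact definitions may differ) of deriving a revealed relation from a choice function and testing two of the listed properties:

    from itertools import combinations, permutations

    def revealed(X, choice):
        # x R y iff x is among the alternatives chosen from the pair {x, y}
        # (one common construction of the revealed comparison; hypothetical here).
        R = set()
        for x, y in combinations(sorted(X), 2):
            picked = choice(frozenset({x, y}))
            if x in picked:
                R.add((x, y))
            if y in picked:
                R.add((y, x))
        return R

    def is_transitive(R):
        # x R y and y R z must imply x R z
        return all((x, z) in R
                   for (x, y) in R
                   for (w, z) in R if w == y)

    def is_acyclic(R, X):
        # acyclicity: the strict part P of R (x P y iff x R y and not y R x)
        # contains no cycle; checked by brute force over small X.
        P = {(x, y) for (x, y) in R if (y, x) not in R}
        for k in range(2, len(X) + 1):
            for cycle in permutations(sorted(X), k):
                if all((cycle[i], cycle[(i + 1) % k]) in P for i in range(k)):
                    return False
        return True

    # Example: the choice picks the maximum of each pair, so the revealed
    # comparison is the usual order on integers (transitive, hence acyclic).
    X = {1, 2, 3}
    R = revealed(X, lambda s: {max(s)})
    print(is_transitive(R), is_acyclic(R, X))  # True True

Transitivity implies acyclicity but not conversely, which is the kind of one-way implication the paper's logical diagram records.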

19.
Aumann's (1987) theorem shows that correlated equilibrium is an expression of Bayesian rationality. We extend this result to games with incomplete information. First, we rely on Harsanyi's (1967) model and represent the underlying multiperson decision problem as a fixed game with imperfect information. We survey four definitions of correlated equilibrium which have appeared in the literature. We show that these definitions are not equivalent to each other. We prove that one of them fits Aumann's framework: the agent normal form correlated equilibrium is an expression of Bayesian rationality in games with incomplete information. We also follow a universal Bayesian approach based on Mertens and Zamir's (1985) construction of the universal beliefs space. Hierarchies of beliefs over independent variables (states of nature) and dependent variables (actions) are then constructed simultaneously. We establish that the universal set of Bayesian solutions satisfies another extension of Aumann's theorem. We get the following corollary: once the types of the players are not fixed by the model, the various definitions of correlated equilibrium previously considered are equivalent.
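For reference, Aumann's complete-information notion (the standard definition, not the paper's extensions): a distribution μ on the set of action profiles A = A_1 × ... × A_n is a correlated equilibrium if, for every player i and every pair of actions a_i, a_i′ in A_i, Σ over a_{-i} of μ(a_i, a_{-i}) [u_i(a_i, a_{-i}) − u_i(a_i′, a_{-i})] ≥ 0; that is, no player can gain by deviating from the action recommended to him, given the information the recommendation conveys.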

20.
If K is an index of relative voting power for simple voting games, the bicameral postulate requires that the distribution of K-power within a voting assembly, as measured by the ratios of the powers of the voters, be independent of whether the assembly is viewed as a separate legislature or as one chamber of a bicameral system, provided that there are no voters common to both chambers. We argue that a reasonable index – if it is to be used as a tool for analysing abstract, uninhabited decision rules – should satisfy this postulate. We show that, among known indices, only the Banzhaf measure does so. Moreover, the Shapley–Shubik, Deegan–Packel and Johnston indices sometimes witness a reversal under these circumstances, with voter x less powerful than y when measured in the simple voting game G1, but more powerful than y when G1 is bicamerally joined with a second chamber G2. Thus these three indices violate a weaker, and correspondingly more compelling, form of the bicameral postulate. It is also shown that these indices are not always co-monotonic with the Banzhaf index and that as a result they infringe another intuitively plausible condition, the price monotonicity condition. We discuss implications of these findings, in light of recent work showing that only the Shapley–Shubik index, among known measures, satisfies another compelling principle known as the bloc postulate. We also propose a distinction between two separate aspects of voting power: power as share in a fixed purse (P-power) and power as influence (I-power).
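An illustrative Banzhaf computation (a standard textbook example, not taken from the paper): in the weighted voting game [3; 2, 1, 1] with voters A, B, C, the winning coalitions are {A,B}, {A,C} and {A,B,C}. Voter A is critical in all three, B only in {A,B}, and C only in {A,C}, so the raw Banzhaf counts are (3, 1, 1) and the normalized Banzhaf measure is (3/5, 1/5, 1/5). The bicameral postulate asks that such power ratios, here 3:1:1, be preserved when the assembly is bicamerally joined with a disjoint second chamber.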

