Similar documents
Found 20 similar documents (search time: 312 ms)
1.
A fixed agenda social choice correspondence on outcome set X maps each profile of individual preferences into a nonempty subset of X. If the correspondence satisfies an analogue of Arrow's independence of irrelevant alternatives condition, then either its range contains exactly two alternatives, or else there is at most one individual whose preferences have any bearing on it. This is the case even if the correspondence is not defined for any proper subset of X.

2.
This paper develops the idea of a choice as a mapping that carries each subset of a set X into one of its own subsets, and the idea of the comparison, a relation between elements of X, that is determined or revealed by a choice. It then studies how certain properties of a choice imply or are implied by certain properties of the revealed comparison, such as acyclicity, quasi-transitivity, pseudo-transitivity and transitivity, finally giving a complete logical diagram of all the implications between these latter properties of the comparison.
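A minimal sketch of the revealed-comparison idea described above, assuming an illustrative choice rule and illustrative property tests (none of the names below are from the paper):

```python
from itertools import permutations

X = {"a", "b", "c"}

def choice(S):
    # Illustrative choice rule: pick the alphabetically first element.
    return {min(S)} if S else set()

def revealed(choice_fn, X):
    # x is revealed (weakly) better than y if x is chosen from {x, y}.
    return {(x, y) for x in X for y in X
            if x != y and x in choice_fn({x, y})}

def is_acyclic(R, X):
    # No strict cycle x1 > x2 > ... > xk > x1, where x > y means
    # (x, y) in R and (y, x) not in R.
    strict = {(x, y) for (x, y) in R if (y, x) not in R}
    for k in range(2, len(X) + 1):
        for cyc in permutations(X, k):
            if all(p in strict for p in zip(cyc, cyc[1:] + cyc[:1])):
                return False
    return True

def is_transitive(R):
    return all((x, z) in R
               for (x, y) in R for (y2, z) in R if y == y2)

R = revealed(choice, X)
print(sorted(R))          # [('a', 'b'), ('a', 'c'), ('b', 'c')]
print(is_acyclic(R, X))   # True
print(is_transitive(R))   # True
```

This choice rule reveals a linear order, so the stronger properties hold; a cyclic choice rule would fail `is_acyclic` while still being a well-defined choice.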

3.
The main object of this paper is to provide the logical machinery needed for a viable basis for talking of the consequences, the content, or of equivalences between inconsistent sets of premisses. With reference to its maximal consistent subsets (m.c.s.), two kinds of consequences of a propositional set S are defined. A proposition P is a weak consequence (W-consequence) of S if it is a logical consequence of at least one m.c.s. of S, and P is an inevitable consequence (I-consequence) of S if it is a logical consequence of all the m.c.s. of S. The set of W-consequences of a set S determines (up to logical equivalence) its m.c.s. (This enables us to define a normal form for every set, such that any two sets having the same W-consequences have the same normal form.) Neither the W-consequences nor the I-consequences will do to define the content of a set S: the first notion is too broad, since it may include mutually inconsistent propositions, while the second is too narrow. A via media between these concepts is accordingly defined: P is a 𝒫-consequence of S, where 𝒫 is some preference criterion yielding some of the m.c.s. of S as preferred to others, if P is a consequence of all of the 𝒫-preferred m.c.s. of S. The bulk of the paper is devoted to discussion of various preference criteria, and it also surveys the application of this machinery in diverse contexts, for example in connection with the processing of mutually inconsistent reports.
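The m.c.s. machinery can be sketched by brute force over truth assignments; the premiss set, atom names, and helper functions below are illustrative, not the paper's:

```python
from itertools import combinations, product

ATOMS = ("p", "q")

# An inconsistent premiss set {p, q, not-p}; each premiss is a Boolean
# function of a truth assignment.
S = {
    "p":     lambda v: v["p"],
    "q":     lambda v: v["q"],
    "not_p": lambda v: not v["p"],
}

def assignments():
    for bits in product([True, False], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def consistent(names):
    return any(all(S[n](v) for n in names) for v in assignments())

def maximal_consistent_subsets():
    mcs = []
    for k in range(len(S), 0, -1):            # largest subsets first
        for sub in combinations(sorted(S), k):
            if consistent(sub) and not any(set(sub) < m for m in mcs):
                mcs.append(set(sub))
    return mcs

mcs = maximal_consistent_subsets()
print(sorted(sorted(m) for m in mcs))          # [['not_p', 'q'], ['p', 'q']]

def entails(names, prop):
    # prop holds in every assignment satisfying all premisses in names.
    return all(prop(v) for v in assignments()
               if all(S[n](v) for n in names))

q_prop = lambda v: v["q"]
p_prop = lambda v: v["p"]
# q follows from every m.c.s., so it is an I-consequence (hence also weak).
print(all(entails(m, q_prop) for m in mcs))    # True
# p is a W-consequence (it follows from {p, q}) but not an I-consequence.
print(any(entails(m, p_prop) for m in mcs))    # True
print(all(entails(m, p_prop) for m in mcs))    # False
```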

4.
Stochastic dominance is a notion in expected-utility decision theory which has been developed to facilitate the analysis of risky or uncertain decision alternatives when the full form of the decision maker's von Neumann-Morgenstern utility function on the consequence space X is not completely specified. For example, if f and g are probability functions on X which correspond to two risky alternatives, then f first-degree stochastically dominates g if, for every consequence x in X, the chance of getting a consequence that is preferred to x is as great under f as under g. When this is true, the expected utility of f must be as great as the expected utility of g. Most work in stochastic dominance has been based on increasing utility functions on X with X an interval on the real line. The present paper, following [1], formulates appropriate notions of first-degree and second-degree stochastic dominance when X is an arbitrary finite set. The only structure imposed on X arises from the decision maker's preferences. It is shown how typical analyses with stochastic dominance can be enriched by applying the notion to convex combinations of probability functions. The potential applications of convex stochastic dominance include analyses of simple-majority voting on risky alternatives when voters have similar preference orders on the consequences.
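A hedged sketch of the finite-set notion: with consequences listed in decreasing preference order, f first-degree dominates g when the cumulative probability of each upper preference set is never smaller under f. The consequence names and lottery values are invented for illustration:

```python
# The only structure on X is the decision maker's preference order.
prefs = ["best", "middle", "worst"]            # decreasing preference on X

f = {"best": 0.5, "middle": 0.3, "worst": 0.2}
g = {"best": 0.3, "middle": 0.3, "worst": 0.4}

def fsd(f, g, prefs):
    # f dominates g if, scanning from the most preferred consequence down,
    # the cumulative probability under f is never smaller than under g.
    cum_f = cum_g = 0.0
    for x in prefs:
        cum_f += f.get(x, 0.0)
        cum_g += g.get(x, 0.0)
        if cum_f < cum_g - 1e-12:              # tolerance for rounding
            return False
    return True

print(fsd(f, g, prefs))  # True: f first-degree dominates g
print(fsd(g, f, prefs))  # False
```

Because the scan only uses the preference order, the same code works for any finite X, which is the point of the paper's generalization away from real intervals.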

5.
Operational researchers, management scientists, and industrial engineers have been asked by Russell Ackoff to become systems scientists, yet he stated that Systems Science is not a science (TIMS Interfaces, 2 (4), 41). A. C. Fabergé (Science 184, 1330) notes that the original intent of operational researchers was that they be scientists, trained to observe. Hugh J. Miser (Operations Research 22, 903) views operations research as a science, noting that its progress indeed is of a cyclic nature. The present paper delineates explicitly the attributes of simulation methodology. Simulation is shown to be both an art and a science; its methodology, properly used, is founded both on confirmed (validated) observation and scrutinised (verified) art work. The paper delineates the existing procedures by which computer-directed models can be cyclically scrutinised and confirmed and therefore deemed credible. The complexities of the phenomena observed by social scientists are amenable to human understanding by properly applied simulation: the methodology of the scientist of systems (the systemic scientist).

6.
This paper falls within the field of Distributive Justice and (as the title indicates) addresses itself specifically to the meshing problem. Briefly stated, the meshing problem is the difficulty encountered when one tries to aggregate the two parameters of beneficence and equity in a way that results in determining which of two or more alternative utility distributions is most just. A solution to this problem, in the form of a formal welfare measure, is presented in the paper. This formula incorporates the notions of equity and beneficence (which are defined earlier by the author) and weighs them against each other to compute a numerical value which represents the degree of justice a given distribution possesses. This value can in turn be used comparatively to select which utility scheme, of those being considered, is best. Three fundamental adequacy requirements, which any acceptable welfare measuring method must satisfy, are presented and subsequently demonstrated to be formally deducible as theorems of the author's system. A practical application of the method is then considered, as well as a comparison of it with Nicholas Rescher's method (found in his book, Distributive Justice). The conclusion reached is that Rescher's system is unacceptable, since it computes counter-intuitive results. Objections to the author's welfare measure are considered and answered. Finally, a suggestion is made for expanding the system to cover cases it was not originally designed to handle (i.e. situations where two alternative utility distributions vary with regard to the number of individuals they contain). The conclusion reached at the close of the paper is that an acceptable solution to the meshing problem has been established. I would like to gratefully acknowledge the assistance of Michael Tooley, whose positive suggestions and critical comments were invaluable in the writing of this paper.

7.
This article is concerned with extensions of a continuous ordering R on a set X to a subset P(X) of the power set of X. The underlying topology will be the Hausdorff metric topology. We will see that continuous extensions of R do not require that P(X) contain every nonempty finite subset of X. Therefore, the analysis can be applied to consumer theory and inverse choice functions. In analogy to these functions, budget correspondences are established which relate alternatives x with certain subsets of X, according to the extended ordering.

8.
The author tries to formulate what a determinist believes to be true. The formulation is based on some concepts defined in a systems-theoretical manner: mainly on the concept of an experiment over the sets A^m (a set of m-tuples of input values) and B^n (a set of n-tuples of output values) in the time interval (t_1, ..., t_k) (symbolically E[t_1, ..., t_k, A^m, B^n]), on the concept of a behavior of the system S_{m,n} (= (A^m, B^n)) on the basis of the experiment E[t_1, ..., t_k, A^m, B^n], and, indeed, on the concept of deterministic behavior. The resulting formulation of the deterministic hypothesis shows that this hypothesis expresses a belief that we could always find some hidden parameters.

9.
We study, from the standpoint of coherence, comparative probabilities on an arbitrary family E of conditional events. Given a binary relation on E, coherence conditions on the relation are related to de Finetti's coherent betting system: we consider their connections to the usual properties of comparative probability and to the possibility of numerical representations of the relation. In this context, the numerical reference frame is that of de Finetti's coherent subjective conditional probability, which is not introduced (as in Kolmogoroff's approach) through a ratio between probability measures. Another relevant feature of our approach is that the family E need not have any particular algebraic structure, so that the ordering can be initially given for a few conditional events of interest and then possibly extended by a step-by-step procedure, preserving coherence.

10.
In general, the technical apparatus of decision theory is well developed. It has loads of theorems, and they can be proved from axioms. Many of the theorems are interesting, and useful both from a philosophical and a practical perspective. But decision theory does not have a well agreed upon interpretation. Its technical terms, in particular 'utility' and 'preference', do not have a single clear and uncontroversial meaning. How to interpret these terms depends, of course, on the purposes to which one wants to put decision theory. One might want to use it as a model of economic decision-making, in order to predict the behavior of corporations or of the stock market. In that case, it might be useful to interpret the technical term 'utility' as meaning money profit. Decision theory would then be an empirical theory. I want to look into the question of what 'utility' could mean, if we want decision theory to function as a theory of practical rationality. I want to know whether it makes good sense to think of practical rationality as fully or even partly accounted for by decision theory. I shall lay my cards on the table: I hope it does make good sense to think of it that way. For, I think, if Humeans are right about practical rationality, then decision theory must play a very large part in their account. And I think Humeanism has very strong attractions.

11.
An observer attempts to infer the unobserved ranking of two ideal objects, A and B, from observed rankings in which these objects are 'accompanied' by 'noise' components, C and D. In the first ranking, A is accompanied by C and B is accompanied by D, while in the second ranking, A is accompanied by D and B is accompanied by C. In both rankings, noisy-A is ranked above noisy-B. The observer infers that ideal-A is ranked above ideal-B. This commonly used inference rule is formalized for the case in which A, B, C, D are sets. Let X be a finite set and let ≽ be a linear ordering on 2^X. The following condition is imposed on ≽: for every quadruple (A, B, C, D) ∈ Y, where Y is some domain in (2^X)^4, if A∪C ≽ B∪D and A∪D ≽ B∪C, then A ≽ B. The implications and interpretation of this condition for various domains Y are discussed.
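An illustrative check of the inference rule in words above (noisy-A ranked above noisy-B in both pairings, so ideal-A is inferred above ideal-B): when disjoint subsets are ranked by the sum of hypothetical element values, the rule always holds, because the noise contributions of C and D cancel across the two rankings. The values below are invented:

```python
val = {"a": 4, "b": 1, "c": 2, "d": 3}   # hypothetical "ideal" values

def rank(S):
    return sum(val[x] for x in S)

def better(S, T):
    # S is ranked above T under the additive ranking.
    return rank(S) > rank(T)

A, B, C, D = {"a"}, {"b"}, {"c"}, {"d"}

# Both noisy rankings place A's side on top ...
print(better(A | C, B | D), better(A | D, B | C))  # True True
# ... and the observer's inference about the ideal objects holds here.
print(better(A, B))                                # True
```

For disjoint sets the cancellation is an identity: rank(A∪C) + rank(A∪D) > rank(B∪D) + rank(B∪C) reduces to 2·rank(A) > 2·rank(B), so additive rankings satisfy the condition on that domain.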

12.
A complete classification theorem for voting processes on a smooth choice space W of dimension w is presented. Any voting process σ is classified by two integers v*(σ) and w(σ), in terms of the existence or otherwise of the optima set IO(σ) and the cycle set IC(σ). In dimension below v*(σ) the cycle set is always empty, and in dimension above w(σ) the optima set is nearly always empty while the cycle set is open dense and path connected. In the latter case agenda manipulation can result in any outcome. For admissible (compact, convex) choice spaces, the two sets are related by the general equilibrium result that IO(σ) union IC(σ) is non-empty. This in turn implies the existence of optima in low dimensions. The equilibrium theorem is used to examine voting games with an infinite electorate, and the nature of structure-induced equilibria, induced by jurisdictional restrictions. This material is based on work supported by a Nuffield Foundation grant.

13.
This paper presents a metatheorem with the following property: Given a proven axiom-free lemma, which interrelates some of the elementary properties of a binary relation, the metatheorem mechanically transforms this lemma into its uniquely corresponding complementary lemma. Therefore, the metatheorem nearly doubles our knowledge about the elementary properties of binary relations, for application to statistical decision theory. At present we do not know whether there exists a nontrivial axiom-free lemma that is its own complementary lemma.

14.
Far-sighted equilibria in 2 × 2, non-cooperative, repeated games
Consider a two-person simultaneous-move game in strategic form. Suppose this game is played over and over at discrete points in time. Suppose, furthermore, that communication is not possible, but nevertheless we observe some regularity in the sequence of outcomes. The aim of this paper is to explain why such regularity might persist for many (i.e., infinitely many) periods. Each player, when contemplating a deviation, considers a sequential-move game, roughly speaking of the following form: if I change my strategy this period, then in the next my opponent will take his strategy b and afterwards I can switch to my strategy a, but then I am worse off, since at that outcome my opponent has no incentive to change anymore, whatever I do. Theoretically, however, there is no end to such reaction chains. In case deviating gives a player less utility in the long run than before the deviation, we say that the original regular sequence of outcomes is far-sighted stable for that player. It is a far-sighted equilibrium if it is far-sighted stable for both players.

15.
This paper discusses several concepts that can be used to provide a foundation for a unified theory of rational economic behavior. First, decision-making is defined to be a process that takes place with reference to both subjective and objective time, that distinguishes between plans and actions and between information and states, and that explicitly incorporates the collection and processing of information. This conception of decision making is then related to several important aspects of behavioral economics: the dependence of values on experience, the use of behavioral rules, the occurrence of multiple goals, and environmental feedback. Our conclusions are (1) the non-transitivity of observed or revealed preferences is a characteristic of learning and hence is to be expected of rational decision-makers; (2) the learning of values through experience suggests the sensibleness of short time horizons and the making of choices according to flexible utility; (3) certain rules of thumb used to allow for risk are closely related to principles of Safety-First and can also be based directly on the hypothesis that the feeling of risk (the probability of disaster) is identified with extreme departures from recently executed decisions; (4) the maximization of a hierarchy of goals, or of a lexicographical utility function, is closely related to the search for feasibility and the practice of satisficing; (5) when the dim perception of environmental feedback and the effect of learning on values are acknowledged, the intertemporal optimality of planned decision trajectories is seen to be a characteristic of subjective, not objective, time. This explains why decision making is so often best characterized by rolling plans.
In short, we find that economic man, like any other, is an existential being whose plans are based on hopes and fears and whose every act involves a leap of faith. This paper is based on a talk presented at the Conference, New Beginnings in Economics, Akron, Ohio, March 15, 1969. Work on this paper was supported by a grant from the National Science Foundation.

16.
Choices between gambles show systematic violations of stochastic dominance. For example, most people choose ($6, .05; $91, .03; $99, .92) over ($6, .02; $8, .03; $99, .95), violating dominance. Choices also violate two cumulative independence conditions: (1) if S = (z, r; x, p; y, q) ≻ R = (z, r; x′, p; y′, q), then S″ = (x′, r; y, p + q) ≻ R″ = (x′, r + p; y, q); (2) if S′ = (x, p; y, q; z′, r) ≺ R′ = (x′, p; y′, q; z′, r), then S‴ = (x, p + q; y′, r) ≺ R‴ = (x′, p; y′, q + r), where 0 < z < x < x′ < y < y′ < z′. Violations contradict any utility theory satisfying transitivity, outcome monotonicity, coalescing, and comonotonic independence. Because rank- and sign-dependent utility theories, including cumulative prospect theory (CPT), satisfy these properties, they cannot explain these results. However, the configural weight model of Birnbaum and McIntosh (1996) predicted the observed violations of stochastic dominance, cumulative independence, and branch independence. This model assumes the utility of a gamble is a weighted average of the outcomes' utilities, where each configural weight is a function of the rank order of the outcome's value among distinct values and that outcome's probability. The configural-weight TAX model, with the same number of parameters as CPT, fit the data of most individuals better than the model of CPT.
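The dominance claim in the opening example can be checked mechanically. This sketch treats a gamble as a list of (outcome, probability) branches; the helper names are ours, not the paper's:

```python
G1 = [(6, .05), (91, .03), (99, .92)]   # the gamble most people choose
G2 = [(6, .02), (8, .03), (99, .95)]    # the gamble that dominates it

def p_at_least(g, x):
    # Probability of receiving an outcome of at least $x.
    return sum(p for (o, p) in g if o >= x)

def first_degree_dominates(g, h):
    # g dominates h if, at every outcome level, the chance of doing at
    # least that well is never smaller under g than under h.
    xs = {o for (o, _) in g} | {o for (o, _) in h}
    return all(p_at_least(g, x) >= p_at_least(h, x) - 1e-12 for x in xs)

print(first_degree_dominates(G2, G1))  # True
print(first_degree_dominates(G1, G2))  # False
```

The check confirms the abstract's point: G2 gives at least $99 with probability .95 versus .92, and at most $6 with probability .02 versus .05, so choosing G1 over G2 violates first-degree stochastic dominance.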

17.
We introduce two types of protection premia. The unconstrained protection premium, π_u, is the individual's willingness to pay for certain protection efficiency given flexibility to optimally adjust the investment in protection. The constrained protection premium, π_c, measures willingness to pay for certain protection efficiency given no flexibility to adjust the investment in protection. π_u depends on tastes and wealth as well as on protection technology, whereas π_c depends only on technology. We show that π_c cannot exceed π_u and develop necessary conditions for π_c = π_u. Optimal protection for an individual with decision flexibility may be larger or smaller than that desired under no flexibility. Journal Paper No. J-15504 of the Iowa Agriculture and Home Economics Experiment Station, Ames, Iowa. Project No. 3048.

18.
In a previous article (see [3]) a system of axioms is proposed stating conditions which are necessary and sufficient to determine a cardinal utility function on any set, finite or infinite, of outcomes X. The present paper discusses and interprets the meaning of those axioms, and compares this new approach to cardinal utility with the utility differences approach proposed by Alt and Frisch, among others, and with the expected utility approach of von Neumann and Morgenstern. The notion of repetition of the same choice situation is presented and its interpretation discussed. It is then argued that this notion leads naturally to the system of axioms presented in On Cardinal Utility. It is also argued that this notion must be used if we want to have a clearer understanding of the meaning of the axioms proposed by Alt and Frisch. Finally, it is remarked that since uncertainty is not present in the new approach, it is free of the paradoxes that have plagued the expected utility hypothesis.

19.
Logical principles, in particular the law of noncontradiction and the law of the excluded middle, play different roles at different levels of discourse: valid formulae in an axiomatic calculus, methodological requirements (of consistency and completeness) for formalized systems. When postulated as formal laws, p ∨ ¬p and ¬(p · ¬p), they are totally interdefinable and equivalent as well (De Morgan's transformations are proof of this). If postulated as methodological requirements, the principles are not equivalent, although they could still be said in some sense to be interdefinable (the existence of consistent yet incomplete systems shows that the requirements are not equivalent; still, completeness of a system can be defined in terms of consistency of another system which keeps a definite relationship with the first one). There exists a third level of discourse: scientific praxis. At this level, the principles come even farther apart: they neither have the same logical value nor is one definable in terms of the other. However, they keep a family resemblance which justifies our dealing with them jointly. Let us call the principles at this level pragmatic imperatives. They deal with paradoxes, which are of two types: knots (conflicts) and blanks (gaps in the scientific pattern). The left-hand pragmatic imperative says: be intolerant with knots, try to remove (dissolve) them. The right-hand pragmatic imperative says: try to remove (fill) all blanks. The knot-dissolving and the blank-filling imperatives are prior to and more important than the laws of noncontradiction and excluded middle and the requirements of consistency and completeness. Logical principles are not prime categories: pragmatic imperatives are primordial.
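At the formal-law level, the claimed interdefinability is a short truth-table check. A sketch with the two laws written as Boolean functions (the function names are ours):

```python
def excluded_middle(p):
    # p or not-p
    return p or (not p)

def noncontradiction(p):
    # not (p and not-p)
    return not (p and (not p))

def de_morgan_transform(p):
    # De Morgan on noncontradiction: not(p and not-p) == not-p or not-not-p
    return (not p) or (not (not p))

for p in (True, False):
    # Both laws are tautologies, and each is a De Morgan transform
    # of the other, so they coincide at every valuation.
    assert excluded_middle(p) and noncontradiction(p)
    assert excluded_middle(p) == de_morgan_transform(p) == noncontradiction(p)

print("both laws hold, and agree, at every valuation")
```

The check only establishes equivalence at the level of formulae; as the abstract stresses, the corresponding methodological requirements and pragmatic imperatives come apart.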

20.
The theory of games recently proposed by John C. Harsanyi in A General Theory of Rational Behavior in Game Situations (Econometrica, Vol. 34, No. 3) has one anomalous feature, viz., that it generates for a special class of non-cooperative games solutions which are not equilibrium points. It is argued that this feature of the theory turns on an argument concerning the instability of weak equilibrium points, and that this argument, in turn, involves appeal to an unrestricted version of a postulate subsequently included in the theory in restricted form. It is then shown that if this line of reasoning is permitted, then one must, by parity of reasoning, permit another instability argument. But if both of these instability arguments are permitted in the construction of the theory, the resultant theory must be incomplete, in the sense that there will be simple non-cooperative games for which such a theory cannot yield solutions. This result is then generalized and shown to be endemic to all theories which have made the equilibrium condition central to the treatment of non-cooperative games. Some suggestions are then offered concerning how this incompleteness problem can be resolved, and what one might expect concerning the postulate structure and implications of a theory of games which embodies the revisions necessitated by a resolution of this problem. This research was supported by a grant to the author from the City University of New York Faculty Research Award Program.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号