Similar articles (20 results)
1.
It is shown in this paper that a very mild form of the Pareto principle is compatible with a set of restrictive conditions. Deriving a choice set identical with the set of alternatives in the case of the paradox of voting amounts to begging the problem. If we require that the choice set be a proper subset of the original set, the paradox is revived. Realistically, liberalism may well be treated as an outcome of choice rather than as a basic value judgement. Choosing the Rules of the Game ought to be the first step; only then can society seek the optimal situation under those Rules.

I am very grateful to P. K. Pattanaik for helpful discussions and valuable comments on the first draft. I am also grateful to Prof. Amartya Sen, whose lectures at the Delhi School of Economics introduced me to the theory of social choice.

2.
In general, the technical apparatus of decision theory is well developed. It has loads of theorems, and they can be proved from axioms. Many of the theorems are interesting, and useful from both a philosophical and a practical perspective. But decision theory does not have a well-agreed-upon interpretation. Its technical terms, in particular utility and preference, do not have a single clear and uncontroversial meaning.

How to interpret these terms depends, of course, on the purposes for which one wants to put decision theory to use. One might want to use it as a model of economic decision-making, in order to predict the behavior of corporations or of the stock market. In that case, it might be useful to interpret the technical term utility as meaning money profit. Decision theory would then be an empirical theory. I want to look into the question of what utility could mean if we want decision theory to function as a theory of practical rationality. I want to know whether it makes good sense to think of practical rationality as fully, or even partly, accounted for by decision theory. I shall lay my cards on the table: I hope it does make good sense to think of it that way. For, I think, if Humeans are right about practical rationality, then decision theory must play a very large part in their account. And I think Humeanism has very strong attractions.

3.
A soundness proof for an axiomatization of common belief in minimal neighbourhood semantics is provided, thereby leaving aside all assumptions of monotonicity in agents' reasoning. Minimality properties of common belief are thus emphasized, in contrast to the more usual fixed-point properties. The proof relies on the existence of transfinite fixed points of sequences of neighbourhood systems even when they are not closed under supersets. An obvious shortcoming of the note is the lack of a completeness proof.

4.
Far-sighted equilibria in 2 × 2 non-cooperative repeated games
Consider a two-person simultaneous-move game in strategic form. Suppose this game is played over and over at discrete points in time. Suppose, furthermore, that communication is not possible, but that we nevertheless observe some regularity in the sequence of outcomes. The aim of this paper is to explain why such regularity might persist for many (i.e., infinitely many) periods.

Each player, when contemplating a deviation, considers a sequential-move game, roughly of the following form: if I change my strategy this period, then in the next my opponent will take his strategy b, and afterwards I can switch to my strategy a; but then I am worse off, since at that outcome my opponent has no incentive to change anymore, whatever I do. Theoretically, however, there is no end to such reaction chains. In case deviating gives a player less utility in the long run than not deviating, we say that the original regular sequence of outcomes is far-sighted stable for that player. It is a far-sighted equilibrium if it is far-sighted stable for both players.

5.
This article presents the thesis that a critique of decisions is not necessarily (except in the trivial sense) a critique of preferences. This thesis runs contrary to the fundamental assumption in economic theory that a critique of decisions is always simultaneously a critique of (subjective) preferences, since decision behavior is, after all, a manifestation of preferences. If this thesis is right, then the paradigm of so-called instrumental rationality is in serious trouble, not for external reasons but because of immanent inconsistencies. The thesis is developed in five parts: I. A preliminary remark on the economic theory of rationality in general. II. The cooperation problem as a challenge to the economic theory of rationality. III. An account of the most interesting attempt to save the theory. IV. A critique of that attempt. V. The conclusion: practical reason is concerned with actions, not with preferences.

6.
Dominance and Efficiency in Multicriteria Decision under Uncertainty
Ben Abdelaziz, F., Lang, P., Nadeau, R. Theory and Decision, 1999, 47(3): 191-211
This paper proposes several concepts of efficient solutions for multicriteria decision problems under uncertainty. We show how alternative notions of efficiency may be grounded on different decision contexts, depending on what is known about the Decision Maker's (DM) preference structure and probabilistic anticipations. We define efficient sets arising naturally from polar decision contexts. We investigate these sets from the points of view of their relative inclusions and point out some particular subsets which may be especially relevant to some decision situations.

7.
This paper considers two fundamental aspects of the analysis of dynamic choice under risk: the issue of the dynamic consistency of the strategies of a non-EU maximizer, and the issue that an individual whose preferences are nonlinear in probabilities may choose a strategy which is, in some appropriate sense, dominated by other strategies. A proposed way of dealing with these problems, due to Karni and Safra and called behavioral consistency, is described. The implications of this notion of behavioral consistency are explored, and it is shown that while the Karni and Safra approach obtains dynamically consistent behavior under nonlinear preferences, it may imply the choice of dominated strategies even in very simple decision trees.

8.
If K is an index of relative voting power for simple voting games, the bicameral postulate requires that the distribution of K-power within a voting assembly, as measured by the ratios of the powers of the voters, be independent of whether the assembly is viewed as a separate legislature or as one chamber of a bicameral system, provided that there are no voters common to both chambers. We argue that a reasonable index – if it is to be used as a tool for analysing abstract, uninhabited decision rules – should satisfy this postulate. We show that, among known indices, only the Banzhaf measure does so. Moreover, the Shapley–Shubik, Deegan–Packel and Johnston indices sometimes witness a reversal under these circumstances, with voter x less powerful than y when measured in the simple voting game G1, but more powerful than y when G1 is bicamerally joined with a second chamber G2. Thus these three indices violate a weaker, and correspondingly more compelling, form of the bicameral postulate. It is also shown that these indices are not always co-monotonic with the Banzhaf index and that as a result they infringe another intuitively plausible condition – the price monotonicity condition. We discuss implications of these findings, in light of recent work showing that only the Shapley–Shubik index, among known measures, satisfies another compelling principle known as the bloc postulate. We also propose a distinction between two separate aspects of voting power: power as share in a fixed purse (P-power) and power as influence (I-power).
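The Banzhaf measure discussed in this abstract counts, for each voter, the coalitions of the other voters that the voter can swing from losing to winning. As a minimal illustrative sketch (not the paper's own code; the weighted-game setup and the function name `banzhaf` are my own), the normalized index can be computed by brute-force enumeration:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf index for a weighted simple voting game.
    A voter i 'swings' a coalition S of the others if S loses but
    S plus i wins (total weight reaches the quota)."""
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):  # coalition sizes 0 .. n-1 among the others
            for coal in combinations(others, r):
                w = sum(weights[j] for j in coal)
                if w < quota <= w + weights[i]:
                    swings[i] += 1
    total = sum(swings)
    return [s / total for s in swings]

# Example: weights (3, 2, 1) with quota 4.
print(banzhaf([3, 2, 1], 4))  # → [0.6, 0.2, 0.2]
```

Enumeration is exponential in the number of voters, so this is only suitable for small assemblies; it suffices to check claims such as the bicameral postulate on toy examples.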

9.
Tiebreak rules are necessary for revealing indifference in non-sequential decisions. I focus on a preference relation that satisfies Ordering and fails Independence in the following way: lotteries a and b are indifferent, but the compound lottery (0.5 f, 0.5 b) is strictly preferred to the compound lottery (0.5 f, 0.5 a). Using tiebreak rules, the following is shown here: in sequential decisions, when backward induction is applied, a preference like the one just described must alter the preference relation between a and b at certain choice nodes; i.e., indifference between a and b is not stable. Using this result, I answer a question posed by Rabinowicz (1997) concerning admissibility in sequential decisions when indifferent options are substituted at choice nodes.

10.
Acker, Mary H. Theory and Decision, 1997, 42(3): 207-213
Several decision rules, including the minimax regret rule, have been posited to suggest optimizing strategies for an individual when neither objective nor subjective probabilities can be associated with the various states of the world. These all share the shortcoming of focusing only on extreme outcomes. This paper suggests an alternative approach of tempered regrets, which may more closely replicate the decision process of individuals in those situations in which avoiding the worst outcome tempers the loss from not achieving the best outcome. The assumption of total ignorance of the probabilities associated with the various states is maintained. Applications and illustrations from standard neoclassical theory are discussed.
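The minimax regret rule that this abstract takes as its point of departure is straightforward to state computationally. A minimal sketch (my own illustration, not Acker's tempered-regret proposal): build the regret matrix from a payoff table with unknown state probabilities, then pick the action whose worst-case regret is smallest.

```python
def minimax_regret(payoffs):
    """payoffs[a][s] = payoff of action a in state s (state probabilities
    unknown). Regret of a in s is the best achievable payoff in s minus
    payoffs[a][s]; the rule selects the action minimizing maximum regret."""
    n_states = len(payoffs[0])
    best = [max(row[s] for row in payoffs) for s in range(n_states)]
    max_regret = [max(best[s] - row[s] for s in range(n_states))
                  for row in payoffs]
    return min(range(len(payoffs)), key=lambda a: max_regret[a])

# Three actions, two states: the middle action hedges between extremes.
print(minimax_regret([[10, 2], [5, 5], [0, 8]]))  # → 1
```

Note how the rule looks only at the extreme (worst-case) regret of each action, which is exactly the feature the abstract criticizes and tempers.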

11.
Pareto-inefficient perfect equilibria can be represented by the liberal-paradox approach of Sen, appropriately reconfigured to model intertemporal decision-making by an individual. We show that the preference profile used by Grout (1982) to construct a case in which naive choice Pareto-dominates sophisticated choice can be so represented, if tastes change and if the individual can make decisions at time t that restrict or determine the opportunities available in period t + 1 and beyond. This ability to make a decision that binds oneself in the future is a form of rights assignment. We also show how two resolutions of the liberal paradox work out in the individual decision framework.

12.
A complete classification theorem for voting processes on a smooth choice space W of dimension w is presented. Any voting process σ is classified by two integers v*(σ) and w(σ), in terms of the existence or otherwise of the optima set IO(σ) and the cycle set IC(σ). In dimension below v*(σ) the cycle set is always empty, and in dimension above w(σ) the optima set is nearly always empty while the cycle set is open, dense and path-connected. In the latter case agenda manipulation can yield any outcome. For admissible (compact, convex) choice spaces, the two sets are related by the general equilibrium result that the union of IO(σ) and IC(σ) is non-empty. This in turn implies the existence of optima in low dimensions. The equilibrium theorem is used to examine voting games with an infinite electorate, and the nature of structure-induced equilibria induced by jurisdictional restrictions.

This material is based on work supported by a Nuffield Foundation grant.

13.
Three different methods of obtaining certainty equivalent valuations of four simple gambles were used with a sample of 358 people paid entirely according to their decisions. The results caution against oversimplistic utility models, and exhibit various characteristics that invite further investigation, including: a marked tendency to round valuations up or down; a tendency to value riskier actions more highly than less risky actions; and multimodal distributions of valuations which, despite their unusual shape, appeared to constitute an identifiable pattern of behaviour.

14.
A rule for the acceptance of scientific hypotheses called the principle of cost-benefit dominance is shown to be more effective and efficient than the well-known principle of the maximization of expected (epistemic) utility. Harvey's defense of his theory of the circulation of blood in animals is examined as a historical paradigm case of a successful defense of a scientific hypothesis and as an implicit application of the cost-benefit dominance rule advocated here. Finally, various concepts of dominance are considered by means of which the effectiveness of our rule may be increased.

The number of friends who have kindly given me suggestions and encouragement is almost embarrassingly large, but I would like to express my gratitude to Myles Brand, Cliff Hooker, David Hull, Scott Kleiner, Hugh Lehman, Werner Leinfellner, Andrew McLaughlin and Tom W. Settle.

15.
A system-based decision logic predicated on subjective and objective probabilities is developed, incorporating the Bayesian learning process. Selection of specific analytical instruments for generating informational stock pertaining to the system under investigation is described by learning curves which either empirically treat growth in raw information stocks or, as a corollary, empirically measure the reduction in expected error associated with models of system phenomena. The decision logic is extended to handle shifts in instrumental modalities, that is, switching from one instrumental category to another during the analysis process. Thus, the selection of analytical instruments, and the development of system-analysis strategies, need not be totally a priori. Although the procedural paradigm presented here is still somewhat immature, it may help focus attention on opportunities for optimizing analytical and system-modelling processes.
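The abstract's pairing of Bayesian learning with a declining expected-error curve can be illustrated by the simplest conjugate case. This is a hedged sketch of the general idea only (the paper's own formalism is not given in the abstract; the Beta-Bernoulli setup and names `beta_update` and `posterior_variance` are my assumptions):

```python
def beta_update(alpha, beta, successes, failures):
    """One Bayesian learning step for a Bernoulli system parameter:
    a Beta(alpha, beta) prior updated on observed successes/failures."""
    return alpha + successes, beta + failures

def posterior_variance(alpha, beta):
    """Variance of Beta(alpha, beta): the expected squared error about
    the parameter, which shrinks as observations accrue - an empirical
    'learning curve' of the kind the abstract describes."""
    n = alpha + beta
    return alpha * beta / (n * n * (n + 1))

# Starting from a flat prior, ten observations sharpen the posterior.
a, b = beta_update(1, 1, 7, 3)
print(posterior_variance(a, b) < posterior_variance(1, 1))  # → True
```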

16.
In reply to McClennen, the paper argues that his criticism is based on a mistaken assumption about the meaning of rationality postulates, to be called the Implication Principle. Once we realize that the Implication Principle has no validity, McClennen's criticisms of what he calls the Reductio Argument and what he calls the Incentive Argument fall to the ground. The rest of the paper criticizes the rationality concept McClennen proposes in lieu of that used by orthodox game theory. It is argued that McClennen's concept is inconsistent with the behavior of real-life intelligent egoists; it is incompatible with the way payoffs are defined in game theory; and it would be highly dangerous as a practical guide to human behavior.

The author is indebted to the National Science Foundation for financial support through Grant GS-3222, administered through the Center for Research in Management Science, University of California, Berkeley.

17.
This paper falls within the field of Distributive Justice and (as the title indicates) addresses itself specifically to the meshing problem. Briefly stated, the meshing problem is the difficulty encountered when one tries to aggregate the two parameters of beneficence and equity in a way that results in determining which of two or more alternative utility distributions is most just. A solution to this problem, in the form of a formal welfare measure, is presented in the paper. This formula incorporates the notions of equity and beneficence (which are defined earlier by the author) and weighs them against each other to compute a numerical value which represents the degree of justice a given distribution possesses. This value can in turn be used comparatively to select which utility scheme, of those being considered, is best.

Three fundamental adequacy requirements, which any acceptable welfare-measuring method must satisfy, are presented and subsequently demonstrated to be formally deducible as theorems of the author's system. A practical application of the method is then considered, as well as a comparison of it with Nicholas Rescher's method (found in his book, Distributive Justice). The conclusion reached is that Rescher's system is unacceptable, since it computes counter-intuitive results. Objections to the author's welfare measure are considered and answered. Finally, a suggestion is made for expanding the system to cover cases it was not originally designed to handle (i.e., situations where two alternative utility distributions vary with regard to the number of individuals they contain). The conclusion reached at the close of the paper is that an acceptable solution to the meshing problem has been established.

I would like to gratefully acknowledge the assistance of Michael Tooley, whose positive suggestions and critical comments were invaluable in the writing of this paper.

18.
Singular causal explanations cite explicitly, or may be paraphrased to cite explicitly, a particular factor as the cause of another particular factor. During recent years there has emerged a consensus account of the nature of an important feature of such explanations: the distinction between a factor regarded correctly in a given context of inquiry as the cause of a given result and those other causally relevant factors, sometimes called mere conditions, which are not regarded correctly in that context of inquiry as the cause of that result. In this paper that consensus account is characterized and developed. The developed version is then used to illuminate some recent discussions of singular causal explanations.

Work on this paper was supported by a University of Maryland Faculty Research Award. Earlier versions were read at the University of Minnesota and at the 1971 Western Division meetings of the American Philosophical Association. I have profited from criticisms raised on these occasions. I am especially grateful for the comments of James Lesher, Peter Machamer, John Vollrath, and the students in my Macalester College seminar.

19.
Summary. The objective Bayesian program has as its fundamental tenet (in addition to the three Bayesian postulates) the requirement that, from a given knowledge base, a particular probability function is uniquely appropriate. This amounts to fixing initial probabilities, based on relatively little information, because Bayes' theorem (conditionalization) then determines the posterior probabilities when the belief state is altered by enlarging the knowledge base. Moreover, in order to reconstruct orthodox statistical procedures within a Bayesian framework, only privileged ignorance probability functions will work.

To serve all these ends, objective Bayesianism seeks additional principles for specifying ignorance and partial-information probabilities. H. Jeffreys' method of invariance (or Jaynes' modification thereof) is used to solve the former problem, and E. Jaynes' rule of maximizing entropy (subject to invariance for continuous distributions) has recently been thought to solve the latter. I have argued that neither policy is acceptable to a Bayesian, since each is inconsistent with conditionalization. Invariance fails to give a consistent representation of the state of ignorance professed. The difficulties here parallel familiar weaknesses in the old Laplacean principle of insufficient reason. Maximizing entropy is unsatisfactory because the partial information it works with fails to capture the effect of uncertainty about related nuisance factors. The result is a probability function that represents a state richer in empirical content than the belief state targeted for representation. Alternatively, by conditionalizing on information about a nuisance parameter one may move from a distribution of lower to higher entropy, despite the obvious increase in information available.

Each of these two complaints appears to me to be a symptom of the program's inability to formulate rules for picking privileged probability distributions that serve to represent ignorance or near-ignorance. Certainly the methods advocated by Jeffreys, Jaynes and Rosenkrantz are mathematically convenient idealizations wherein specified distributions are elevated to the roles of ignorance and partial-information distributions. But the cost that goes with the idealization is a violation of conditionalization, and if that is the ante we must put up to back objective Bayesianism, then I propose we look for a different candidate to earn our support.
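For concreteness about what Jaynes' rule actually computes, here is a minimal sketch (my own illustration, not the paper's formalism): the maximum-entropy distribution over a finite outcome set subject to a mean constraint belongs to the exponential family p_i proportional to exp(λ·v_i), and the multiplier λ can be found by bisection. The function name `maxent_mean` and the dice-style example are assumptions for illustration.

```python
import math

def maxent_mean(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution over `values` with the given mean.
    The solution has p_i proportional to exp(lam * v_i); the mean is
    increasing in lam, so lam is found by bisection.
    Assumes min(values) < target_mean < max(values)."""
    def mean(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * x for v, x in zip(values, w)) / z
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [x / z for x in w]

# With the unconstrained mean (3.5 for a fair die), maxent recovers
# the uniform distribution; a higher target mean tilts mass upward.
p = maxent_mean([1, 2, 3, 4, 5, 6], 3.5)
```

The abstract's complaint can be read off this construction: the output is a single sharp distribution, however thin the information fed in, which is what allegedly conflicts with conditionalization.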

20.
Scientists often disagree about whether a new theory is better than the current theory. From this some (e.g., Thomas Kuhn) have inferred that the values of science are changing and subjective, and hence that science is an irrational enterprise. As an alternative, this paper develops a rational model of the scientific enterprise according to which the scope and elegance of theories are important elements in the scientist's utility function. The varied speed of acceptance of new theories by scientists can be explained in terms of the optimal allocation of time among different scientific activities. The model thus accounts for the rationality of science in a way that is broadly consistent with the empirical evidence on the history and practice of science.
