Similar Articles
Found 20 similar articles; search time: 31 ms.
1.
Ruse, Michael. Theory and Decision, 1974, 5(4): 413-440
In this paper I consider the problem of man's evolution - in particular the evolutionary problems raised when we consider man as a cultural animal as well as a biological one. I argue that any adequate cultural evolutionary theory must have the notion of adaptation as a central concept, where this must be construed in a fairly literal (biological) sense, that is as something which aids its possessors (i.e. men) to survive and reproduce. I argue against theories which treat adaptation in a metaphorical sense, particularly those speaking of the adaptation of cultures without reference to men. Iron tools per se are not better adapted than bronze tools - it is the men with iron tools who are better adapted than men with bronze tools. I show that by taking the approach that I do, one can apply at once in a fruitful manner some conclusions of biological evolutionary theory directly to men and their cultures. I conclude with a brief discussion of methodological issues raised by cultural evolutionary theories, particularly those of confirmation and falsification.

2.
The author tries to formulate what a determinist believes to be true. The formulation is based on some concepts defined in a systems-theoretical manner: mainly on the concept of an experiment over the sets A^m (a set of m-tuples of input values) and B^n (a set of n-tuples of output values) in the time interval (t_1, ..., t_k) (symbolically E[t_1, ..., t_k, A^m, B^n]), on the concept of a behavior of the system S_{m,n} (= (A^m, B^n)) on the basis of the experiment E[t_1, ..., t_k, A^m, B^n], and, indeed, on the concept of deterministic behavior. The resulting formulation of the deterministic hypothesis shows that this hypothesis expresses a belief that we could always find some hidden parameters.

3.
The main object of this paper is to provide the logical machinery needed for a viable basis for talking of the consequences, the content, or of equivalences between inconsistent sets of premisses. With reference to its maximal consistent subsets (m.c.s.), two kinds of consequences of a propositional set S are defined. A proposition P is a weak consequence (W-consequence) of S if it is a logical consequence of at least one m.c.s. of S, and P is an inevitable consequence (I-consequence) of S if it is a logical consequence of all the m.c.s. of S. The set of W-consequences of a set S determines (up to logical equivalence) its m.c.s. (This enables us to define a normal form for every set, such that any two sets having the same W-consequences have the same normal form.) Neither the W-consequences nor the I-consequences will do to define the content of a set S: the first is too broad and may include mutually inconsistent propositions; the second is too narrow. A via media between these concepts is accordingly defined: P is a 𝒫-consequence of S, where 𝒫 is some preference criterion yielding some of the m.c.s. of S as preferred to others, and P is a consequence of all of the 𝒫-preferred m.c.s. of S. The bulk of the paper is devoted to discussion of various preference criteria, and also surveys the application of this machinery in diverse contexts - for example, in connection with the processing of mutually inconsistent reports.
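The m.c.s. machinery in this abstract is easy to make concrete by brute force. The following sketch is ours, not Rescher's (all names are hypothetical): propositions are boolean functions over a small set of atoms, consistency and consequence are checked by truth tables, and W- and I-consequences are computed from the maximal consistent subsets.

```python
from itertools import combinations, product

ATOMS = ("p", "q")

def assignments():
    # All truth-value assignments over the atoms.
    for vals in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, vals))

def consistent(props):
    # A set of propositions is consistent iff some assignment satisfies them all.
    return any(all(f(a) for f in props) for a in assignments())

def entails(props, goal):
    # Logical consequence: every assignment satisfying all of `props` satisfies `goal`.
    return all(goal(a) for a in assignments() if all(f(a) for f in props))

def maximal_consistent_subsets(s):
    # Consistent subsets not properly contained in another consistent subset.
    cons = [set(c) for r in range(len(s) + 1)
            for c in combinations(s, r) if consistent(c)]
    return [c for c in cons if not any(c < d for d in cons)]

def w_consequence(s, goal):
    # Weak consequence: follows from at least one m.c.s.
    return any(entails(m, goal) for m in maximal_consistent_subsets(s))

def i_consequence(s, goal):
    # Inevitable consequence: follows from every m.c.s.
    return all(entails(m, goal) for m in maximal_consistent_subsets(s))

# Inconsistent premiss set {p, not-p, q}: its m.c.s. are {p, q} and {not-p, q}.
p     = lambda a: a["p"]
not_p = lambda a: not a["p"]
q     = lambda a: a["q"]
S = [p, not_p, q]
```

On this example both p and not-p are W-consequences (each follows from one m.c.s.), illustrating why the W-consequences are "too broad", while q is the representative I-consequence.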

4.
A new investigation is launched into the problem of decision-making in the face of complete ignorance, and linked to the problem of social choice. In the first section the author introduces a set of properties which might characterize a criterion for decision-making under complete ignorance. Two of these properties are novel: independence of non-discriminating states, and weak pessimism. The second section provides a new characterization of the so-called principle of insufficient reason. In the third part, lexicographic maximin and maximax criteria are characterized. Finally, the author's results are linked to the problem of social choice.
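The lexicographic maximin and maximax criteria characterized in the third part can be sketched in a few lines (an illustrative rendering under our own naming, not the paper's formalism): under complete ignorance an act is just its vector of possible outcomes, and leximin compares acts by their outcome vectors sorted worst-first, lexicographically.

```python
def leximin_key(act):
    # Sort outcomes worst-first; lexicographic comparison of these tuples
    # implements the lexicographic maximin criterion.
    return tuple(sorted(act))

def leximax_key(act):
    # Sort outcomes best-first for the lexicographic maximax criterion.
    return tuple(sorted(act, reverse=True))

def best(acts, key):
    return max(acts, key=key)

# Three acts with outcomes under three states (complete ignorance: the
# state labels carry no probabilistic information).
acts = {
    "a": (1, 5, 5),
    "b": (1, 6, 4),
    "c": (0, 9, 9),
}

leximin_choice = best(acts, key=lambda k: leximin_key(acts[k]))
leximax_choice = best(acts, key=lambda k: leximax_key(acts[k]))
```

Here leximin breaks the tie between "a" and "b" (equal worst outcomes) on the second-worst outcome, while leximax picks the act with the best best outcome.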

5.
Chipman (1979) proves that for an expected utility maximizer choosing from a domain of normal distributions with mean μ and variance σ², the induced preference function V(μ, σ) satisfies a differential equation known as the heat equation. The purpose of this note is to provide a generalization and a simple proof of this result which does not depend on the normality assumption.
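The heat-equation property referred to here has a standard one-line derivation under normality (sketched below in our notation, not the note's: u is the von Neumann-Morgenstern utility and φ the standard normal density):

```latex
V(\mu,\sigma) = \int_{-\infty}^{\infty} u(\mu + \sigma z)\,\varphi(z)\,dz,
\qquad
\frac{\partial V}{\partial \sigma}
  = \int_{-\infty}^{\infty} z\,u'(\mu + \sigma z)\,\varphi(z)\,dz
  = \sigma \int_{-\infty}^{\infty} u''(\mu + \sigma z)\,\varphi(z)\,dz
  = \sigma\,\frac{\partial^2 V}{\partial \mu^2},
```

where the middle step uses φ'(z) = -zφ(z) and one integration by parts. Substituting t = σ²/2 turns V_σ = σ V_μμ into the heat equation V_t = V_μμ.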

6.
Separating marginal utility and probabilistic risk aversion   (Cited 10 times: 0 self-citations, 10 by others)
This paper is motivated by the search for one cardinal utility for decisions under risk, welfare evaluations, and other contexts. This cardinal utility should have meaning prior to risk, with risk depending on cardinal utility, not the other way around. The rank-dependent utility model can reconcile such a view on utility with the position that risk attitude consists of more than marginal utility, by providing a separate risk component: a probabilistic risk attitude towards probability mixtures of lotteries, modeled through a transformation of cumulative probabilities. While this separation of risk attitude into two independent components is the characteristic feature of rank-dependent utility, it had not yet been axiomatized; doing so is the purpose of this paper. Therefore, in the second part, the paper extends Yaari's axiomatization to nonlinear utility, and provides separate axiomatizations for increasing/decreasing marginal utility and for optimistic/pessimistic probability transformations. This is generalized to interpersonal comparability. It is also shown that two elementary and often-discussed properties — quasi-convexity (aversion) of preferences with respect to probability mixtures, and convexity (pessimism) of the probability transformation — are equivalent.

7.
Let (μ1, σ1) and (μ2, σ2) be mean-standard deviation pairs of two probability distributions on the real line. Mean-variance analyses presume that the preferred distribution depends solely on these pairs, with primary preference given to larger mean and smaller variance. This presumption, in conjunction with the assumption that one distribution is better than a second distribution if the mass of the first is completely to the right of the mass of the second, implies that (μ1, σ1) is preferred to (μ2, σ2) if and only if either μ1 > μ2, or (μ1 = μ2 and σ1 < σ2), provided that the set of distributions is sufficiently rich. The latter provision fails if the outcomes of all distributions lie in a finite interval, but then it is still possible to arrive at more liberal dominance conclusions between (μ1, σ1) and (μ2, σ2). This research was supported by the Office of Naval Research.
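The preference order derived in this abstract is lexicographic in the (mean, standard deviation) pair: mean first, then (on ties) standard deviation. A minimal sketch (function name and pair encoding are ours):

```python
def prefer(a, b):
    # True iff pair a = (mean_a, sd_a) is preferred to b = (mean_b, sd_b)
    # under the lexicographic rule: higher mean first, then lower
    # standard deviation on equal means.
    mean_a, sd_a = a
    mean_b, sd_b = b
    return mean_a > mean_b or (mean_a == mean_b and sd_a < sd_b)
```

Note that the rule is a strict order: neither of two identical pairs is preferred to the other.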

8.
A complete classification theorem for voting processes on a smooth choice space W of dimension w is presented. Any voting process σ is classified by two integers v*(σ) and w(σ), in terms of the existence or otherwise of the optima set IO(σ) and the cycle set IC(σ). In dimension below v*(σ) the cycle set is always empty, and in dimension above w(σ) the optima set is nearly always empty while the cycle set is open dense and path connected. In the latter case agenda manipulation can result in any outcome. For admissible (compact, convex) choice spaces, the two sets are related by the general equilibrium result that IO(σ) union IC(σ) is non-empty. This in turn implies existence of optima in low dimensions. The equilibrium theorem is used to examine voting games with an infinite electorate, and the nature of structure-induced equilibria, induced by jurisdictional restrictions. This material is based on work supported by a Nuffield Foundation grant.

9.
Nash's solution of a two-person cooperative game prescribes a coordinated mixed strategy solution involving Pareto-optimal outcomes of the game. Testing this normative solution experimentally presents problems, inasmuch as rather detailed explanations must be given to the subjects of the meaning of threat strategy, strategy mixture, expected payoff, etc. To the extent that it is desired to test the solution using naive subjects, the problem arises of imparting to them a minimal level of understanding about the issues involved in the game without actually suggesting the solution. Experiments were performed to test the properties of the solution of a cooperative two-person game as these are embodied in three of Nash's four axioms: Symmetry, Pareto-optimality, and Invariance with respect to positive linear transformations. Of these, the last was definitely discorroborated, suggesting that interpersonal comparison of utilities plays an important part in the negotiations. Some evidence was also found for a conjecture generated by previous experiments, namely that an externally imposed threat (penalty for non-cooperation) tends to bring the players closer together than the threats generated by the subjects themselves in the process of negotiation.

10.
Far-sighted equilibria in 2 × 2, non-cooperative, repeated games   (Cited 1 time: 1 self-citation, 0 by others)
Consider a two-person simultaneous-move game in strategic form. Suppose this game is played over and over at discrete points in time. Suppose, furthermore, that communication is not possible, but that we nevertheless observe some regularity in the sequence of outcomes. The aim of this paper is to explain why such regularity might persist for many (i.e., infinitely many) periods. Each player, when contemplating a deviation, considers a sequential-move game, roughly of the following form: if I change my strategy this period, then in the next my opponent will take his strategy b, and afterwards I can switch to my strategy a; but then I am worse off, since at that outcome my opponent has no incentive to change anymore, whatever I do. Theoretically, however, there is no end to such reaction chains. If deviating gives some player less utility in the long run than he had before deviating, we say that the original regular sequence of outcomes is far-sighted stable for that player. It is a far-sighted equilibrium if it is far-sighted stable for both players.

11.
Both Popper and Good have noted that a deterministic microscopic physical approach to probability requires subjective assumptions about the statistical distribution of initial conditions. However, they did not use this fact to define an a priori probability, but rather fell back on the standard observation of repetitive events. This observational probability may be hard to assess for real-life decision problems under uncertainty, which very often are - strictly speaking - non-repetitive, one-time events. This may be a reason for the popularity of subjective probability in decision models. Unfortunately, such subjective probabilities often merely reflect attitudes towards risk, and not the underlying physical processes. In order to get as objective as possible a definition of probability for one-time events, this paper identifies the origin of randomness in individual chance processes. By focusing on the dynamics of the process, rather than on the (static) device, it is found that any process contains two components: observer-independent (= objective) and observer-dependent (= subjective). Randomness, if present, arises from the subjective definition of the rules of the game, and is not - as in Popper's propensity - a physical property of the chance device. In this way, the classical definition of probability is no longer a primitive notion based upon equally possible cases, but is derived from the underlying microscopic processes, plus a subjective, clearly identified estimate of the branching ratios in an event tree. That is, equipossibility is not an intrinsic property of the object/subject system but is forced upon the system via the rules of the game/measurement. Also, the typically undefined concept of symmetry in games of chance is broken down into objective and subjective components. It is found that macroscopic symmetry may hold under microscopic asymmetry.
A similar analysis of urn drawings shows no conceptual difference from other games of chance (contrary to Allais' opinion). Finally, the randomness in Landé's knife problem is not due to objective fortuity (as in Popper's view) but to the rules of the game (the theoretical difficulties arise from intermingling microscopic trajectories and macroscopic events). Dedicated to Professor Maurice Allais on the occasion of the Nobel Prize in Economics awarded December, 1988.

12.
This paper discusses several concepts that can be used to provide a foundation for a unified theory of rational economic behavior. First, decision-making is defined to be a process that takes place with reference to both subjective and objective time, that distinguishes between plans and actions and between information and states, and that explicitly incorporates the collection and processing of information. This conception of decision-making is then related to several important aspects of behavioral economics: the dependence of values on experience, the use of behavioral rules, the occurrence of multiple goals, and environmental feedback. Our conclusions are: (1) the non-transitivity of observed or revealed preferences is a characteristic of learning and hence is to be expected of rational decision-makers; (2) the learning of values through experience suggests the sensibleness of short time horizons and the making of choices according to flexible utility; (3) certain rules of thumb used to allow for risk are closely related to principles of Safety-First, and can also be based directly on the hypothesis that the feeling of risk (the probability of disaster) is identified with extreme departures from recently executed decisions; (4) the maximization of a hierarchy of goals, or of a lexicographical utility function, is closely related to the search for feasibility and the practice of satisficing; (5) when the dim perception of environmental feedback and the effect of learning on values are acknowledged, the intertemporal optimality of planned decision trajectories is seen to be a characteristic of subjective, not objective, time. This explains why decision-making is so often best characterized by rolling plans.
In short, we find that economic man - like any other - is an existential being whose plans are based on hopes and fears and whose every act involves a leap of faith. This paper is based on a talk presented at the Conference "New Beginnings in Economics," Akron, Ohio, March 15, 1969. Work on this paper was supported by a grant from the National Science Foundation.

13.
This paper studies two models of rational behavior under uncertainty whose predictions are invariant under ordinal transformations of utility. The quantile utility model assumes that the agent maximizes some quantile of the distribution of utility. The utility mass model assumes maximization of the probability of obtaining an outcome whose utility is higher than some fixed critical value. Both models satisfy weak stochastic dominance. Lexicographic refinements satisfy strong dominance. The study of these utility models suggests a significant generalization of traditional ideas of riskiness and risk preference. We define one action to be riskier than another if the utility distribution of the latter crosses that of the former from below. The single crossing property is equivalent to a minmax spread of a random variable. With relative risk defined by the single crossing criterion, the risk preference of a quantile utility maximizer increases with the utility distribution quantile that he maximizes. The risk preference of a utility mass maximizer increases with his critical utility value.
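The two criteria described here can be sketched for discrete lotteries (an illustrative rendering under our own naming; the paper works with general distributions): the quantile utility model evaluates a lottery by a quantile of its utility distribution, and the utility mass model by the probability of exceeding a critical utility level.

```python
def quantile_utility(lottery, tau):
    # tau-quantile of the utility distribution of a discrete lottery,
    # given as (utility, probability) pairs.
    total = 0.0
    for u, p in sorted(lottery):
        total += p
        if total >= tau:
            return u
    return max(u for u, _ in lottery)

def utility_mass(lottery, critical):
    # Probability of obtaining utility strictly above the critical value.
    return sum(p for u, p in lottery if u > critical)

safe  = [(5, 1.0)]               # utility 5 for sure
risky = [(0, 0.5), (10, 0.5)]    # even chance of utility 0 or 10
```

A median (tau = 0.5) maximizer prefers `safe` here, while a utility mass maximizer with critical value 7 prefers `risky`, illustrating the abstract's claim that risk preference rises with the critical utility value.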

14.
This article reports an experimental study of decision-making outcomes in cooperative non-sidepayment games. The objective of this test was to determine which characteristic function, Vα(S) or Vβ(S), provides the more accurate basis for payoff predictions from solution concepts. The experiment tested three solution concepts (core, stable set, imputation set) in the context of 5-person, 2-strategy non-sidepayment games. Predictions from each of the three solution concepts were computed on the basis of both Vα(S) and Vβ(S), making a total of six predictive theories under test. Consistent with earlier studies (Michener et al., 1984a; Michener et al., 1985), two basic findings emerged. First, the data show that, for each of the solutions tested, the prediction from any solution concept computed from Vα(S) was more accurate than the prediction from the same solution concept computed from Vβ(S). Second, the α-core was the most accurate of the six theories tested. Overall, these results support the view that Vα(S) is superior to Vβ(S) as a basis for payoff predictions in cooperative non-sidepayment games.

15.
Focal points in pure coordination games: An experimental investigation   (Cited 2 times: 0 self-citations, 2 by others)
This paper reports an experimental investigation of the hypothesis that in coordination games, players draw on shared concepts of salience to identify focal points on which they can coordinate. The experiment involves games in which equilibria can be distinguished from one another only in terms of the way strategies are labelled. The games are designed to test a number of specific hypotheses about the determinants of salience. These hypotheses are generally confirmed by the results of the experiment.

16.
Statistical analysis for negotiation support   (Cited 2 times: 0 self-citations, 2 by others)
In this paper we provide an overview of the issues involved in using statistical analysis to support the process of international negotiation. We illustrate how the approach can contribute to a negotiator's understanding and control of the interactions that occur during the course of a negotiation. The techniques are suited to the analysis of data collected from ongoing discussions and moves made by the parties. The analyses are used to illuminate influences and processes as they operate in particular cases or in negotiations in general. They do not identify a best strategy or outcome from among alternatives suggested either from theoretical assumptions about rationality and information-processing (see Munier and Rullière's paper in this issue), from personal preference structures (see Spector's paper in this issue), or from a rule-based modeling system (see Kersten's paper in this issue). This distinction should be evident in the discussion to follow, organized into several sections: From Empirical to Normative Analysis; Statistical Analysis for Situational Diagnosis; Time-Series Analysis of Cases; and Knowledge as Leverage Over the Negotiation Process. In a final section, we consider the challenge posed by attempts to implement these techniques with practitioners.

17.
This paper studies how an overall fuzzy preference relation can be constructed in the compensatory context of the simple additive difference model, when imprecision on the trade-offs has to be taken into account. Three credibility indices of preferences are analysed and illustrated by a numerical example. Arguments are presented supporting the use of the third index, for which an interesting transitivity property (which was an open problem) is proved.

18.
A rule for the acceptance of scientific hypotheses called the principle of cost-benefit dominance is shown to be more effective and efficient than the well-known principle of the maximization of expected (epistemic) utility. Harvey's defense of his theory of the circulation of blood in animals is examined as a historical paradigm case of a successful defense of a scientific hypothesis and as an implicit application of the cost-benefit dominance rule advocated here. Finally, various concepts of dominance are considered by means of which the effectiveness of our rule may be increased. The number of friends who have kindly given me suggestions and encouragement is almost embarrassingly large, but I would like to express my gratitude to Myles Brand, Cliff Hooker, David Hull, Scott Kleiner, Hugh Lehman, Werner Leinfellner, Andrew McLaughlin and Tom W. Settle.

19.
The random preference, Fechner (or white noise), and constant error (or tremble) models of stochastic choice under risk are compared. Various combinations of these approaches are used with expected utility and rank-dependent theory. The resulting models are estimated in a random effects framework using experimental data from two samples of 46 subjects who each faced 90 pairwise choice problems. The best fitting model uses the random preference approach with a tremble mechanism, in conjunction with rank-dependent theory. As subjects gain experience, trembles become less frequent and there is less deviation from behaviour consistent with expected utility theory.
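The constant-error ("tremble") layer mentioned in this abstract is easy to state concretely. The sketch below is our own simplification, not the authors' estimation code (names and the deterministic expected-utility core are assumptions): with probability 1 − ω the agent picks the higher-expected-utility option, and with probability ω she trembles and chooses at random.

```python
def choice_prob_tremble(eu_a, eu_b, omega):
    # Probability of choosing option A given the expected utilities of A
    # and B: a deterministic EU core (ties split evenly), perturbed by a
    # constant tremble probability omega of choosing uniformly at random.
    core = 1.0 if eu_a > eu_b else (0.5 if eu_a == eu_b else 0.0)
    return (1.0 - omega) * core + omega * 0.5
```

A declining fitted ω across experimental rounds is one way to express the abstract's finding that trembles become less frequent as subjects gain experience.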

20.
Singular causal explanations cite explicitly, or may be paraphrased to cite explicitly, a particular factor as the cause of another particular factor. During recent years there has emerged a consensus account of the nature of an important feature of such explanations, the distinction between a factor regarded correctly in a given context of inquiry as the cause of a given result and those other causally relevant factors, sometimes called mere conditions, which are not regarded correctly in that context of inquiry as the cause of that result. In this paper that consensus account is characterized and developed. The developed version is then used to illuminate some recent discussions of singular causal explanations. Work on this paper was supported by a University of Maryland Faculty Research Award. Earlier versions were read at the University of Minnesota and at the 1971 Western Division meetings of the American Philosophical Association. I have profited from criticisms raised on these occasions. I am especially grateful for the comments of James Lesher, Peter Machamer, John Vollrath, and the students in my Macalester College seminar.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). ICP license: 京ICP备09084417号.