Similar Literature
Retrieved 20 similar documents (search time: 15 ms)
1.
In ‘Semantic Foundations for the Logic of Preference’ (Rescher, ed., The Logic of Decision and Action, University Press, Pittsburgh, 1967), Nicholas Rescher claims that, on the semantics developed in that paper, a certain principle – call it ‘Q’ – turns out to be ‘unacceptable’. I argue, however, that, given certain assumptions that Rescher invokes in that same paper, Q can in fact be shown to be a ‘preference-tautology’, and hence Q should be classified as ‘acceptable’ on Rescher's theory.

2.
There are narrowest bounds for P(h) when P(e) = y and P(h/e) = x, which bounds collapse to x as y goes to 1. A theorem for these bounds – Bounds for Probable Modus Ponens – entails a principle for updating on possibly uncertain evidence subject to these bounds that is a generalization of the principle for updating by conditioning on certain evidence. This way of updating on possibly uncertain evidence is appropriate when updating by ‘probability kinematics’ or ‘Jeffrey-conditioning’ is, and apparently in countless other cases as well. A more complicated theorem due to Carl Wagner – Bounds for Probable Modus Tollens – registers narrowest bounds for P(∼h) when P(∼e) = y and P(e/h) = x. This theorem serves another principle for updating on possibly uncertain evidence that might be termed ‘contraditioning’, though it is for a way of updating that seems in practice to be frequently not appropriate. It is definitely not a way of putting down a theory – for example, a random-chance theory of the apparent fine-tuning for life of the parameters of standard physics – merely on the ground that the theory made extremely unlikely conditions of which we are now nearly certain. These theorems for bounds and updating are addressed to standard conditional probabilities defined as ratios of probabilities. Adaptations for Hosiasson-Lindenbaum ‘free-standing’ conditional probabilities are provided. The extended on-line version of this article (URL: ) includes appendices and expansions of several notes. Appendix A contains demonstrations and confirmations of elements of those adaptations. Appendix B discusses and elaborates analogues of modus ponens and modus tollens for probabilities and conditional probabilities found in Elliott Sober’s “Intelligent Design and Probability Reasoning.” Appendix C adds to observations made below regarding relations of Probability Kinematics and updating subject to Bounds for Probable Modus Ponens.
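The abstract does not restate the bounds themselves, but they follow from the law of total probability: with P(h/e) = x and P(e) = y, P(h) = xy + P(h/∼e)(1 − y), and since P(h/∼e) can lie anywhere in [0, 1], the narrowest bounds are [xy, xy + (1 − y)], which indeed collapse to x as y → 1. A minimal sketch under that elementary derivation (the function name is illustrative, not from the paper):

```python
def probable_modus_ponens_bounds(x, y):
    """Narrowest bounds for P(h), given P(h|e) = x and P(e) = y.

    By total probability, P(h) = x*y + P(h|~e)*(1 - y); since P(h|~e)
    ranges over [0, 1], the narrowest bounds are [x*y, x*y + (1 - y)].
    (Derived here from first principles as an illustration.)
    """
    lower = x * y
    upper = x * y + (1 - y)
    return lower, upper

print(probable_modus_ponens_bounds(0.8, 0.9))  # ~ (0.72, 0.82)
print(probable_modus_ponens_bounds(0.8, 1.0))  # collapses to (0.8, 0.8)
```

As the evidence becomes certain (y = 1), updating subject to these bounds reduces to ordinary conditioning.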

3.
Previous work by Diffo Lambo and Moulen [Theory and Decision 53, 313–325 (2002)] and Felsenthal and Machover [The Measurement of Voting Power, Edward Elgar Publishing Limited (1998)] shows that all swap-preserving measures of voting power are ordinally equivalent on any swap-robust simple voting game. Swap-preserving measures include the Banzhaf, the Shapley–Shubik, and other commonly used measures of a priori voting power. In this paper, we completely characterize the achievable hierarchies for any such measure on a swap-robust simple voting game. Each possible hierarchy can be induced by a weighted voting game, and we provide a constructive proof of this result. In particular, the strict hierarchy is always achievable as long as there are at least five players.
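As a hedged illustration of the kind of measure at issue, raw Banzhaf swing counts for a small weighted voting game can be computed by brute force. The game [q; w1, w2, w3] = [4; 3, 2, 1] below is an invented example, not one from the paper; it induces the hierarchy "player 0 above players 1 and 2, who tie":

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Raw Banzhaf counts: for each player, the number of winning
    coalitions in which that player is critical (a 'swing')."""
    n = len(weights)
    counts = [0] * n
    for r in range(n + 1):
        for coal in combinations(range(n), r):
            total = sum(weights[i] for i in coal)
            if total >= quota:
                for i in coal:
                    if total - weights[i] < quota:  # removal makes it lose
                        counts[i] += 1
    return counts

print(banzhaf([3, 2, 1], 4))  # [3, 1, 1]: player 0 strictly above 1 and 2
```

Any swap-preserving measure ranks the players of this game in the same order, which is the ordinal-equivalence point of the cited work.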

4.
We introduce two extreme methods to compare ordered lists of the same length pairwise, viz. the comonotonic and the countermonotonic comparison method, and show that these methods are, respectively, related to the copula T_M (the minimum operator) and the Łukasiewicz copula T_L used to join marginal cumulative distribution functions into bivariate cumulative distribution functions. Given a collection of ordered lists of the same length, we generate by means of T_M and T_L two probabilistic relations Q_M and Q_L and identify their type of transitivity. Finally, it is shown that any probabilistic relation with rational elements on a 3-dimensional space of alternatives which possesses one of these types of transitivity can be generated by three ordered lists and at least one of the two extreme comparison methods.
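For concreteness, the two copulas named in the abstract are T_M(x, y) = min(x, y) and T_L(x, y) = max(x + y − 1, 0). The sketch below joins empirical marginal CDFs of two lists into a bivariate CDF value with each of them; the lists and helper names are illustrative, and the paper's construction of Q_M and Q_L from comparison methods is more involved:

```python
def t_m(x, y):
    """Minimum copula (associated with the comonotonic method)."""
    return min(x, y)

def t_l(x, y):
    """Lukasiewicz copula (associated with the countermonotonic method)."""
    return max(x + y - 1.0, 0.0)

def joint_cdf(copula, xs, ys, s, t):
    """Bivariate CDF value H(s, t) = C(F_X(s), F_Y(t)) built from
    empirical marginal CDFs of the two lists."""
    fx = sum(1 for v in xs if v <= s) / len(xs)
    fy = sum(1 for v in ys if v <= t) / len(ys)
    return copula(fx, fy)

a = [1, 2, 3, 4]
b = [2, 2, 5, 9]
print(joint_cdf(t_m, a, b, 3, 5), joint_cdf(t_l, a, b, 3, 5))  # 0.75 0.5
```

T_M and T_L are the pointwise largest and smallest copulas, which is what makes the two comparison methods "extreme".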

5.
In cooperative Cournot oligopoly games, it is known that the β-core is equal to the α-core, and both are non-empty if every individual profit function is continuous and concave (Zhao, Games Econ Behav 27:153–168, 1999b). Following Chander and Tulkens (Int J Game Theory 26:379–401, 1997), we assume that firms react to a deviating coalition by choosing individual best reply strategies. We deal with the problem of the non-emptiness of the induced core, the γ-core, by two different approaches. The first establishes that the associated Cournot oligopoly Transferable Utility (TU)-games are balanced if the inverse demand function is differentiable and every individual profit function is continuous and concave on the set of strategy profiles, which is a step forward beyond Zhao’s core existence result for this class of games. The second approach, restricted to the class of Cournot oligopoly TU-games with linear cost functions, provides a single-valued allocation rule in the γ-core called Nash Pro rata (NP)-value. This result generalizes Funaki and Yamato’s (Int J Game Theory 28:157–171, 1999) core existence result from no capacity constraint to asymmetric capacity constraints. Moreover, we provide an axiomatic characterization of this solution by means of four properties: efficiency, null firm, monotonicity, and non-cooperative fairness.  相似文献   

6.
The Value of a Probability Forecast from Portfolio Theory
A probability forecast scored ex post using a probability scoring rule (e.g. Brier) is analogous to a risky financial security. With only superficial adaptation, the same economic logic by which securities are valued ex ante – in particular, portfolio theory and the capital asset pricing model (CAPM) – applies to the valuation of probability forecasts. Each available forecast of a given event is valued relative to each other and to the “market” (all available forecasts). A forecast is seen to be more valuable the higher its expected score and the lower the covariance of its score with the market aggregate score. Forecasts that score highly in trials when others do poorly are appreciated more than those with equal success in “easy” trials where most forecasts score well. The CAPM defines economically rational (equilibrium) forecast prices at which forecasters can trade shares in each other’s ex post score – or associated monetary payoff – thereby balancing forecast risk against return and ultimately forming optimally hedged portfolios. Hedging this way offers risk averse forecasters an “honest” alternative to the ruse of reporting conservative probability assessments.
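A toy illustration of the two valuation inputs the abstract highlights – a forecast's expected score and the covariance of its score with the market aggregate. The data and names are invented, the negative Brier score serves as the payoff so that higher is better, and the CAPM pricing step itself is not implemented:

```python
def brier(p, outcome):
    """Brier score of probability p against a binary outcome (1 = occurred)."""
    return (p - outcome) ** 2

outcomes = [1, 0, 1, 1, 0]                      # five invented trials
forecasts = {
    "A": [0.9, 0.2, 0.8, 0.7, 0.1],
    "B": [0.6, 0.4, 0.6, 0.9, 0.3],
}
# negate the score so that a higher payoff means a better forecast
payoffs = {k: [-brier(p, o) for p, o in zip(v, outcomes)]
           for k, v in forecasts.items()}
# the "market": average payoff across all forecasters, trial by trial
market = [sum(p[t] for p in payoffs.values()) / len(payoffs)
          for t in range(len(outcomes))]

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

for name, pay in payoffs.items():
    print(name, round(mean(pay), 4), round(cov(pay, market), 6))
```

In the CAPM analogy, forecast A would be valued for its higher expected payoff, discounted or boosted by how its payoff co-moves with the market.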

7.
A note on uncertainty and discounting in models of economic growth
The implications of uncertainty for appropriate discounting in models of economic growth have been studied at some length, notably (Review of Economic Studies, 36:153–163, 1969) and (Journal of Public Economics, 85:149–166, 2002). A detailed account has now appeared in Journal of Risk and Uncertainty, 37:141–169, 2008, sections 4 and 5 (pp. 160–166). One interesting, if perhaps minor, aspect is that under certain circumstances there appeared to be no solution, or at least no satisfactory one. More importantly, the formulas are usually given for the log normal case and are somewhat complicated and hard to interpret intuitively. I show here that assuming a general distribution for returns to capital gives simpler and more understandable results.
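The standard effect in this literature can be illustrated with a discrete two-state example: under an uncertain rate, the certainty-equivalent discount rate implied by the expected discount factor falls toward the lowest possible rate as the horizon grows (Jensen's inequality). This is a sketch of the textbook effect with invented numbers, not the paper's general-distribution derivation:

```python
import math

def certainty_equiv_rate(rates, probs, t):
    """Discount rate implied by the expected discount factor at horizon t:
    r*(t) = -(1/t) * ln( E[exp(-r t)] )."""
    edf = sum(p * math.exp(-r * t) for r, p in zip(rates, probs))
    return -math.log(edf) / t

rates, probs = [0.01, 0.07], [0.5, 0.5]   # invented two-state uncertainty
for t in [1, 10, 100]:
    # the effective rate declines with the horizon, from near the mean
    # rate (0.04) toward the lowest possible rate (0.01)
    print(t, round(certainty_equiv_rate(rates, probs, t), 4))
```

Because exp(−rt) is convex in r, E[exp(−rt)] > exp(−E[r]t), so the effective rate is always below the mean rate, and increasingly so at long horizons.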

8.
On Decomposing Net Final Values: EVA, SVA and Shadow Project
A decomposition model of Net Final Values (NFV), named Systemic Value Added (SVA), is proposed for decision-making purposes, based on a systemic approach introduced in Magni [Magni, C. A. (2003), Bulletin of Economic Research 55(2), 149–176; Magni, C. A. (2004), Economic Modelling 21, 595–617]. The model translates the notion of excess profit, giving formal expression to a counterfactual alternative available to the decision maker. Relations with other decomposition models are studied, among which Stewart's [Stewart, G. B. (1991), The Quest for Value: The EVA™ Management Guide, HarperCollins Publishers]. The index introduced here differs from Stewart's Economic Value Added (EVA) in that it rests on a different interpretation of the notion of excess profit and is formally connected with the EVA model by means of a shadow project. The SVA is formally and conceptually threefold, in that it is economic, financial, and accounting-flavoured. Some results are offered, providing sufficient and necessary conditions for decomposing NFV. Relations between a project's SVA and its shadow project's EVA are shown; all results of Pressacco and Stucchi [Pressacco, F. and Stucchi, P. (1997), Rivista di Matematica per le Scienze Economiche e Sociali 20, 165–185] are proved by making use of the systemic approach, and the shadow counterparts of those results are also shown.
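One well-known identity behind EVA-style decompositions (sometimes attributed to Lücke) is that discounted residual incomes sum to net present value for any depreciation schedule. The numbers below are invented, and this is only the standard identity, not the paper's SVA model:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the (negative) initial investment."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

def discounted_residual_income(rate, investment, cashflows, depreciation):
    """Sum of discounted EVA-style residual incomes,
    RI_t = (cash_t - dep_t) - rate * book_value_{t-1}."""
    book = investment
    total = 0.0
    for t, (c, d) in enumerate(zip(cashflows, depreciation), start=1):
        ri = (c - d) - rate * book   # accounting profit less capital charge
        total += ri / (1 + rate) ** t
        book -= d
    return total

r = 0.10
print(npv(r, [-100, 60, 60]))                                  # ~ 4.1322
print(discounted_residual_income(r, 100, [60, 60], [50, 50]))  # same value
```

The identity holds for any clean-surplus depreciation schedule, which is why a decomposition model must add further structure (as SVA does) to be informative period by period.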

9.
Scientific ideas neither arise nor develop in a vacuum. They are always nurtured against a background of prior, partially conflicting ideas. Systemic hypothesis-testing is the problem of testing scientific hypotheses relative to various systems of background knowledge. This paper shows how the problem of systemic hypothesis-testing (Sys HT) can be systematically expressed as a constrained maximization problem. It is also shown how the error of the third kind (E_III) is fundamental to the theory of Sys HT. The error of the third kind is defined as the probability of having solved the ‘wrong’ problem when one should have solved the ‘right’ problem. This paper shows how E_III can be given both a systematic as well as a systemic treatment. Sys HT gives rise to a whole host of new decision problems, puzzles, and paradoxes.

10.
This paper examines the existence of strategic solutions to finite normal form games under the assumption that strategy choices can be described as choices among lotteries where players have security- and potential level preferences over lotteries (e.g., Cohen, Theory and Decision, 33, 101–104, 1992, Gilboa, Journal of Mathematical Psychology, 32, 405–420, 1988, Jaffray, Theory and Decision, 24, 169–200, 1988). Since security- and potential level preferences require discontinuous utility representations, standard existence results for Nash equilibria in mixed strategies (Nash, Proceedings of the National Academy of Sciences, 36, 48–49, 1950a, Non-Cooperative Games, Ph.D. Dissertation, Princeton University Press, 1950b) or for equilibria in beliefs (Crawford, Journal of Economic Theory, 50, 127–154, 1990) do not apply. As a key insight this paper proves that non-existence of equilibria in beliefs, and therefore non-existence of Nash equilibria in mixed strategies, is possible in finite games with security- and potential level players. But, as this paper also shows, rationalizable strategies (Bernheim, Econometrica, 52, 1007–1028, 1984, Moulin, Mathematical Social Sciences, 7, 83–102, 1984, Pearce, Econometrica, 52, 1029–1050, 1984) exist for such games. Rationalizability rather than equilibrium in beliefs therefore appears to be a more favorable solution concept for games with security- and potential level players.

11.
This paper shows that a relatively easy algorithm for computing the (unique) outcome of a sophisticated voting procedure called sequential voting by veto (SVV) applies to a more general situation than considered hitherto. According to this procedure, a sequence of n voters must select s out of m + s options (s > 0, m ≥ n ≥ 2). The ith voter, when his turn comes, vetoes k_i options (k_i ≥ 1, Σ k_i = m). The s remaining non-vetoed options are selected. Every voter is assumed to be fully informed of all other voters' total (linear) preference orderings among the competing options, as well as of the order in which the veto votes are cast. This algorithm was proposed by Mueller (1978) for the special case where s and the k_i are all equal to 1, and extended by Moulin (1983) to the somewhat more general case where the k_i are arbitrary but s is still 1. Some theoretical and practical issues of voting by veto are discussed.
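Because every voter is fully informed, the sophisticated outcome can always be computed by brute-force backward induction. The sketch below does this for the special case s = 1 and k_i = 1 with invented preferences; it is not Mueller's efficient algorithm, only the game-theoretic definition made executable:

```python
def svv_outcome(prefs, options):
    """Sophisticated outcome of sequential voting by veto, each voter
    vetoing one option, computed by backward induction (full information).
    prefs[i] lists voter i's options best-first; len(options) = len(prefs)+1.
    """
    def play(i, remaining):
        if i == len(prefs):
            return remaining[0]          # the single non-vetoed option wins
        results = {}
        for opt in remaining:
            rest = tuple(o for o in remaining if o != opt)
            results[opt] = play(i + 1, rest)
        # veto the option whose removal yields voter i's best final outcome
        veto = min(results, key=lambda opt: prefs[i].index(results[opt]))
        return results[veto]
    return play(0, tuple(options))

# two voters, three options: voter 0 ranks a > b > c, voter 1 ranks b > c > a
print(svv_outcome([["a", "b", "c"], ["b", "c", "a"]], ["a", "b", "c"]))  # b
```

Mueller's contribution was to show that this exponential recursion can be replaced by a simple direct algorithm; the brute force here only serves to define what that algorithm must reproduce.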

12.
The expected utility maximization problem is one of the most useful tools in mathematical finance, decision analysis, and economics. Motivated by statistical model selection, via the principle of expected utility maximization, Friedman and Sandow (J Mach Learn Res 4:257–291, 2003a) considered the model performance question from the point of view of an investor who evaluates models based on the performance of the optimal strategies that the models suggest. They interpreted their performance measures in information-theoretic terms and provided new generalizations of Shannon entropy and Kullback–Leibler relative entropy, which they called U-entropy and U-relative entropy. In this article, a utility-based criterion for independence of two random variables is defined. Then, Markov's inequality for probabilities is extended from the U-entropy viewpoint. Moreover, a lower bound for the U-relative entropy is obtained. Finally, a link between conditional U-entropy and conditional Rényi entropy is derived.

13.
A second-order probability Q(P) may be understood as the probability that the true probability of something has the value P. ‘True’ may be interpreted as the value that would be assigned if certain information were available, including information from reflection, calculation, other people, or ordinary evidence. A rule for combining evidence from two independent sources may be derived, if each source i provides a function Q_i(P). Belief functions of the sort proposed by Shafer (1976) also provide a formula for combining independent evidence, Dempster's rule, and a way of representing ignorance of the sort that makes us unsure about the value of P. Dempster's rule is shown to be at best a special case of the rule derived in connection with second-order probabilities. Belief functions thus represent a restriction of a full Bayesian analysis.
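As a hedged sketch of product-style combination of independent second-order densities (an illustration of the general idea, not necessarily the paper's exact derived rule), two discretized densities over P can be combined by normalized pointwise multiplication:

```python
# Discretize P on a grid and represent each source's Q_i(P) as a
# normalized vector of weights over that grid.
grid = [i / 100 for i in range(101)]

def normalize(q):
    s = sum(q)
    return [v / s for v in q]

# two invented independent sources: one favors high P, one favors low P
q1 = normalize([p for p in grid])
q2 = normalize([1 - p for p in grid])

# combine by normalized pointwise product
combined = normalize([a * b for a, b in zip(q1, q2)])

peak = grid[max(range(len(grid)), key=lambda i: combined[i])]
print(peak)  # the product peaks at P = 0.5, between the two sources
```

The combined density concentrates where both sources put mass, which is the qualitative behavior any such combination rule for independent evidence should exhibit.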

14.
There are at least two plausible generalisations of subjective expected utility (SEU) theory: cumulative prospect theory (which relaxes the independence axiom) and Levi’s decision theory (which relaxes at least ordering). These theories call for a re-assessment of the minimal requirements of rational choice. Here, I consider how an analysis of sequential decision making contributes to this assessment. I criticise Hammond’s (Economica 44(176):337–350, 1977; Econ Philos 4:292–297, 1988a; Risk, decision and rationality, 1988b; Theory Decis 25:25–78, 1988c) ‘consequentialist’ argument for the SEU preference axioms, but go on to formulate a related ‘diachronic-Dutch-book-style’ argument that better achieves Hammond’s aims. Some deny the importance of Dutch-book sure losses, however, in which case, Seidenfeld’s (Econ Philos 4:267–290, 1988a) argument that distinguishes between theories that relax independence and those that relax ordering is relevant. I unravel Seidenfeld’s argument in light of the various criticisms of it and show that the crux of the argument is somewhat different and much more persuasive than what others have taken it to be; the critical issue is the modelling of future choices between ‘indifferent’ decision-tree branches in the sequential setting. Finally, I consider how Seidenfeld’s conclusions might nonetheless be resisted.

15.
Given an extensive game, with every node x and every player i a subset K_i(x) of the set of terminal nodes is associated, and is given the interpretation of player i's knowledge (or information) at node x. A belief of player i is a function that associates with every node x an element of the set K_i(x). A belief system is an n-tuple of beliefs, one for each player. A belief system is rational if it satisfies some natural consistency properties. The main result of the paper is that the notion of rational belief system gives rise to a refinement of the notion of subgame-perfect equilibrium.

16.
Ranking finite subsets of a given set X of elements is the formal object of analysis in this article. This problem has found a wide range of economic interpretations in the literature. The focus of the article is on the family of rankings that are additively representable. Existing characterizations are too complex and hard to grasp in decisional contexts. Furthermore, Fishburn (1996, Journal of Mathematical Psychology 40, 64–77) showed that the number of sufficient and necessary conditions that are needed to characterize such a family has no upper bound as the cardinality of X increases. In turn, this article proposes a way to overcome these difficulties and allows for the characterization of a meaningful (sub)family of additively representable rankings of sets by means of a few simple axioms. Pattanaik and Xu's (1990, Recherches Economiques de Louvain 56, 383–390) characterization of the cardinality-based rule will be derived from our main result, and other new rules that stem from our general proposal are discussed and characterized in even simpler terms. In particular, we analyze restricted-cardinality based rules, where the set of “focal” elements is not given ex ante, but brought out by the axioms.

17.
McGarvey (Econometrica, 21(4), 608–610, 1953) has shown that any irreflexive and anti-symmetric relation can be obtained as a relation induced by majority rule. We address the analogous issue for dominance relations of finite cooperative games with non-transferable utility (coalitional NTU games). We find that any irreflexive relation over a finite set can be obtained as the dominance relation of some finite coalitional NTU game. We also show that any such dominance relation is induced by a non-cooperative game through β-effectivity. Dominance relations obtainable through α-effectivity, however, have to comply with a more restrictive condition, which we refer to as the edge-mapping property.
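McGarvey's device itself is easy to reproduce: for each desired edge (a, b), add two voters whose rankings agree on (a, b) but are reversed everywhere else, so every other pairwise comparison cancels. The sketch below builds a profile whose majority margins realize a three-alternative cycle (the alternative labels are invented):

```python
def mcgarvey_profile(alternatives, edges):
    """Voter profile whose strict majority relation equals `edges`
    (an asymmetric relation), via two voters per edge: the pair agrees
    that a > b but ranks all other alternatives in opposite orders."""
    profile = []
    for a, b in edges:
        rest = [x for x in alternatives if x not in (a, b)]
        profile.append([a, b] + rest)                   # a > b > rest
        profile.append(list(reversed(rest)) + [a, b])   # rest reversed > a > b
    return profile

def majority_margin(profile, a, b):
    """Number of voters preferring a to b, minus those preferring b to a."""
    return sum(1 if v.index(a) < v.index(b) else -1 for v in profile)

alts = ["x", "y", "z"]
cycle = [("x", "y"), ("y", "z"), ("z", "x")]   # a majority cycle
prof = mcgarvey_profile(alts, cycle)
print([(a, b, majority_margin(prof, a, b)) for a, b in cycle])  # margins of +2
```

Each edge gets margin +2 and every other pair is tied, so the strict majority relation of the six-voter profile is exactly the prescribed cycle.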

18.
Envy is sometimes suggested as an underlying motive in the assessment of different economic allocations. In the theoretical literature on fair division, following Foley [Foley, D. (1967), Yale Economic Essays, 7, 45–98], the term “envy” refers to an intrapersonal comparison of different consumption bundles. By contrast, in its everyday use “envy” involves interpersonal comparisons of well-being. We present and discuss results from free-form bargaining experiments on fair division problems in which inter- and intrapersonal criteria can be distinguished. We find that interpersonal comparisons play the dominant role. The effect of the intrapersonal criterion of envy freeness is limited to situations in which other fairness criteria are not applicable.

19.
Signaling games with reinforcement learning have been used to model the evolution of term languages (Lewis 1969, Convention, Cambridge, MA: Harvard University Press; Skyrms 2006, ‘Signals’, Presidential Address to the Philosophy of Science Association). In this article, syntactic games, extensions of David Lewis's original sender–receiver game, are used to illustrate how a language that exploits available syntactic structure might evolve to code for states of the world. The evolution of a language occurs in the context of available vocabulary and syntax – the role played by each component is compared in the context of simple reinforcement learning.

20.
It is observed that the measure S_u = u′′′/u′ − (3/2)(u′′/u′)², previously shown to be a relevant measure of the degree of downside risk aversion, is known in the mathematics literature as the Schwarzian derivative. The Schwarzian derivative has invariance properties under composition of functions that make it particularly well-behaved as a ranking of downside risk aversion. Indeed, it has the same invariance properties as the measure R_u = −u′′/u′, familiar to economists as a ranking of utility functions by degree of Arrow-Pratt risk aversion.
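For a concrete instance, take u(x) = ln x, so u′ = 1/x, u′′ = −1/x², u′′′ = 2/x³; then S_u(x) = 2/x² − (3/2)(1/x)² = 1/(2x²) and R_u(x) = 1/x. A quick numeric check, with the derivatives hard-coded for this one utility function:

```python
def schwarzian_log(x):
    """Schwarzian derivative S_u = u'''/u' - (3/2)(u''/u')**2 for u = ln x."""
    u1, u2, u3 = 1 / x, -1 / x**2, 2 / x**3
    return u3 / u1 - 1.5 * (u2 / u1) ** 2   # equals 1 / (2 x**2)

def arrow_pratt_log(x):
    """Arrow-Pratt measure R_u = -u''/u' for u = ln x."""
    u1, u2 = 1 / x, -1 / x**2
    return -u2 / u1                          # equals 1 / x

x = 2.0
print(schwarzian_log(x), arrow_pratt_log(x))  # 0.125 0.5
```

Both measures are positive everywhere for log utility, reflecting its downside risk aversion and its Arrow-Pratt risk aversion respectively.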


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号