Similar Literature
20 similar records found (search time: 484 ms)
1.
A decision maker (DM) is characterized by two binary relations. The first reflects choices that are rational in an “objective” sense: the DM can convince others that she is right in making them. The second relation models choices that are rational in a “subjective” sense: the DM cannot be convinced that she is wrong in making them. In the context of decision under uncertainty, we propose axioms that the two notions of rationality might satisfy. These axioms allow a joint representation by a single set of prior probabilities and a single utility index. It is “objectively rational” to choose f in the presence of g if and only if the expected utility of f is at least as high as that of g given each and every prior in the set. It is “subjectively rational” to choose f rather than g if and only if the minimal expected utility of f (with respect to all priors in the set) is at least as high as that of g. In other words, the objective and subjective rationality relations admit, respectively, a representation à la Bewley (2002) and à la Gilboa and Schmeidler (1989). Our results thus provide a bridge between these two classic models, as well as a novel foundation for the latter.
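The two representations admit a compact numerical illustration. Below is a toy sketch (our own, not the paper's formalism; the function names and example priors are hypothetical) in which acts are payoff vectors over states and priors are probability vectors:

```python
import numpy as np

def objectively_preferred(f, g, priors):
    """Bewley-style unanimity: f is (weakly) objectively preferred to g iff
    its expected utility is at least g's under every prior in the set."""
    return all(p @ f >= p @ g for p in priors)

def subjectively_preferred(f, g, priors):
    """Gilboa-Schmeidler maxmin: compare worst-case expected utilities."""
    return min(p @ f for p in priors) >= min(p @ g for p in priors)

priors = [np.array([0.6, 0.4]), np.array([0.3, 0.7])]
f = np.array([1.0, 1.0])  # constant (hedged) act
g = np.array([2.0, 0.0])  # act that pays only in state 1

# The priors disagree about g (expected utility 1.2 vs. 0.6), so f is not
# objectively preferred to g, yet f is subjectively preferred: min EU 1.0 vs. 0.6.
print(objectively_preferred(f, g, priors))   # False
print(subjectively_preferred(f, g, priors))  # True
```

Note how the unanimity rule leaves many pairs of acts incomparable, while the maxmin rule completes the ranking cautiously, which is exactly the bridge the abstract describes.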

2.
Harsanyi (1974) criticized the von Neumann–Morgenstern (vNM) stable set for its presumption that coalitions are myopic about their prospects. He proposed a new dominance relation incorporating farsightedness, but retained another feature of the stable set: that a coalition S can impose any imputation as long as its restriction to S is feasible for it. This implicitly gives an objecting coalition complete power to arrange the payoffs of players elsewhere, which is clearly unsatisfactory. While this assumption is largely innocuous for myopic dominance, it is of crucial significance for its farsighted counterpart. Our modification of the Harsanyi set respects “coalitional sovereignty.” The resulting farsighted stable set is very different from both the Harsanyi and the vNM sets. We provide a necessary and sufficient condition for the existence of a farsighted stable set containing just a single‐payoff allocation. This condition roughly establishes an equivalence between core allocations and the union of allocations over all single‐payoff farsighted stable sets. We then conduct a comprehensive analysis of the existence and structure of farsighted stable sets in simple games. This last exercise throws light on both single‐payoff and multi‐payoff stable sets, and suggests that they do not coexist.

3.
Empirical studies using survey data on expectations have frequently observed that forecasts are biased and have concluded that agents are not rational. We establish that existing rationality tests are not robust to even small deviations from symmetric loss and hence have little ability to tell whether the forecaster is irrational or the loss function is asymmetric. We quantify the trade‐off between forecast inefficiency and asymmetric loss leading to identical outcomes of standard rationality tests and explore new and more general methods for testing forecast rationality jointly with flexible families of loss functions that embed squared loss as a special case. Empirical applications to survey data on forecasts of real output growth and inflation suggest that rejections of rationality may largely have been driven by the assumption of squared loss. Moreover, our results suggest that agents are averse to “bad” outcomes such as lower‐than‐expected real output growth and higher‐than‐expected inflation and that they incorporate such loss aversion into their forecasts. (JEL: C22, C53, E37)
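The wedge between squared loss and asymmetric loss can be made concrete with a simulation. The sketch below (our own illustration, with a hypothetical lin-lin loss; it is not the paper's testing procedure) shows that the loss-minimizing forecast under asymmetric loss is a quantile of the outcome distribution rather than its mean, so a biased forecast can still be rational:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, 100_000)  # simulated outcomes, e.g. inflation realizations

# Lin-lin loss: cost a per unit of under-prediction, b per unit of over-prediction.
a, b = 3.0, 1.0  # under-predicting inflation is assumed 3x as costly

def expected_loss(forecast):
    e = y - forecast
    return np.mean(a * np.maximum(e, 0) + b * np.maximum(-e, 0))

grid = np.linspace(0.0, 4.0, 401)
best = grid[np.argmin([expected_loss(f) for f in grid])]

# The loss-minimizing forecast is the a/(a+b) quantile of y (here the 75th
# percentile), which sits above the mean of 2.0 -- an upward "bias" that a
# squared-loss rationality test would misread as irrationality.
print(best, np.quantile(y, a / (a + b)))
```

Under squared loss (a = b) the same grid search recovers the mean, which is why the symmetric-loss assumption drives the rejections discussed above.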

4.
In this paper we view bargaining and cooperation as an interaction superimposed on a game in strategic form. A multistage bargaining procedure for N players, the “proposer commitment” procedure, is presented. It is inspired by Nash's two‐player variable‐threat model; a key feature is the commitment to “threats.” We establish links to classical cooperative game theory solutions, such as the Shapley value in the transferable utility case. However, we show that even in standard pure exchange economies, the traditional coalitional function may not be adequate when utilities are not transferable. (JEL: C70, C71, C78, D70)

5.
We prove the folk theorem for discounted repeated games under private, almost‐perfect monitoring. Our result covers all finite, n‐player games that satisfy the usual full‐dimensionality condition. Mixed strategies are allowed in determining the individually rational payoffs. We assume no cheap‐talk communication between players and no public randomization device.

6.
Mechanism design enables a social planner to obtain a desired outcome by leveraging the players' rationality and their beliefs. It is thus a fundamental, yet unproven, intuition that the higher the level of rationality of the players, the better the set of obtainable outcomes. In this paper, we prove this fundamental intuition for players with possibilistic beliefs, a model long considered in epistemic game theory. Specifically, • We define a sequence of monotonically increasing revenue benchmarks for single‐good auctions, G0 ≤ G1 ≤ G2 ≤ ⋯, where each Gi is defined over the players' beliefs and G0 is the second‐highest valuation (i.e., the revenue benchmark achieved by the second‐price mechanism). • We (1) construct a single, interim individually rational, auction mechanism that, without any clue about the rationality level of the players, guarantees revenue Gk if all players have rationality levels ≥ k+1, and (2) prove that no such mechanism can guarantee revenue even close to Gk when at least two players are at most level‐k rational.
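Only the base benchmark G0 can be computed without reference to the players' beliefs. A minimal sketch (the function name is ours):

```python
def second_price_revenue(valuations):
    """G0: under truthful bidding, the second-price (Vickrey) auction's
    revenue is the second-highest valuation among the bidders."""
    return sorted(valuations, reverse=True)[1]

# Three bidders: the highest-valuing bidder wins and pays the runner-up's value.
print(second_price_revenue([10, 7, 3]))  # 7
```

The higher benchmarks G1, G2, … in the abstract are defined over beliefs about valuations, so they cannot be reduced to a formula of the valuations alone.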

7.
In this article, we study behavior in a series of two‐player supply chain game experiments. Each player simultaneously chooses a capacity before demand is realized, and sales are given by the minimum of realized demand and chosen capacities. We focus on the differences in behavior under fixed pairs and random rematching. Intuition suggests that long‐run relations should lead to more profitable outcomes. However, our results go against this intuition. While subjects' capacity choices are better aligned (i.e., closer together) under fixed pairs, average profits are more variable. Moreover, learning is slower under fixed pairs—so much so that over the last five periods, average profits are actually higher under random rematching. The underlying cause for this finding appears to be a “first‐impressions” bias, present only under fixed matching, in which the greater the misalignment in initial choices, the lower are average profits.

8.
This paper develops a nonparametric theory of preferences over one's own and others' monetary payoffs. We introduce “more altruistic than” (MAT), a partial ordering over such preferences, and interpret it with known parametric models. We also introduce and illustrate “more generous than” (MGT), a partial ordering over opportunity sets. Several recent studies focus on two‐player extensive form games of complete information in which the first mover (FM) chooses a more or less generous opportunity set for the second mover (SM). Here reciprocity can be formalized as the assertion that an MGT choice by the FM will elicit MAT preferences in the SM. A further assertion is that the effect on preferences is stronger for acts of commission by FM than for acts of omission. We state and prove propositions on the observable consequences of these assertions. Finally, empirical support for the propositions is found in existing data from investment and dictator games, the carrot and stick game, and the Stackelberg duopoly game and in new data from Stackelberg mini‐games.

9.
Many violations of the independence axiom of expected utility can be traced to subjects' attraction to risk‐free prospects. The key axiom in this paper, negative certainty independence (Dillenberger, 2010), formalizes this tendency. Our main result is a utility representation of all preferences over monetary lotteries that satisfy negative certainty independence together with basic rationality postulates. Such preferences can be represented as if the agent were unsure of how to evaluate a given lottery p; instead, she has in mind a set of possible utility functions over outcomes and displays a cautious behavior: she computes the certainty equivalent of p with respect to each possible function in the set and picks the smallest one. The set of utilities is unique in a well‐defined sense. We show that our representation can also be derived from a “cautious” completion of an incomplete preference relation.
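The cautious evaluation rule can be sketched directly. In the toy example below (our own; the two-element set of utilities is hypothetical), the agent computes the certainty equivalent of a lottery under each utility function in her set and keeps the smallest:

```python
import numpy as np

def certainty_equivalent(u, u_inv, outcomes, probs):
    """Certainty equivalent of a lottery under utility u: u_inv(E[u(x)])."""
    return u_inv(np.dot(probs, u(outcomes)))

def cautious_value(lottery, utilities):
    """Cautious-EU evaluation: the smallest certainty equivalent across the set."""
    outcomes, probs = lottery
    return min(certainty_equivalent(u, u_inv, outcomes, probs)
               for u, u_inv in utilities)

# Hypothetical set: linear (risk-neutral) and square-root (risk-averse) utility,
# each paired with its inverse.
utilities = [
    (lambda x: x, lambda y: y),
    (np.sqrt,     lambda y: y ** 2),
]
lottery = (np.array([0.0, 100.0]), np.array([0.5, 0.5]))

# Risk-neutral CE = 50; square-root CE = (0.5 * 10)^2 = 25.
print(cautious_value(lottery, utilities))  # 25.0
```

A sure payment of, say, 30 would be evaluated at 30 under every utility in the set, so the agent prefers it to the lottery, which is the attraction to risk-free prospects the axiom formalizes.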

10.
Mortality effects of exposure to air pollution and other environmental hazards are often described by the estimated number of “premature” or “attributable” deaths and the economic value of a reduction in exposure as the product of an estimate of “statistical lives saved” and a “value per statistical life.” These terms can be misleading because the number of deaths advanced by exposure cannot be determined from mortality data alone, whether from epidemiology or randomized trials (it is not statistically identified). The fraction of deaths “attributed” to exposure is conventionally derived as the hazard fraction (R – 1)/R, where R is the relative risk of mortality between high and low exposure levels. The fraction of deaths advanced by exposure (the “etiologic” fraction) can be substantially larger or smaller: it can be as large as one and as small as 1/e (≈0.37) times the hazard fraction (if the association is causal and zero otherwise). Recent literature reveals misunderstanding about these concepts. Total life years lost in a population due to exposure can be estimated but cannot be disaggregated by age or cause of death. Economic valuation of a change in exposure-related mortality risk to a population is not affected by inability to know the fraction of deaths that are etiologic. When individuals facing larger or smaller changes in mortality risk cannot be identified, the mean change in population hazard is sufficient for valuation; otherwise, the economic value can depend on the distribution of risk reductions.
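The relation between the two fractions is simple arithmetic. A short sketch (our own, following the bounds stated in the abstract; function names are ours):

```python
import math

def hazard_fraction(R):
    """Conventional attributable fraction (R - 1) / R for relative risk R."""
    return (R - 1) / R

def etiologic_fraction_bounds(R):
    """Bounds on the fraction of deaths actually advanced by exposure,
    assuming the association is causal: between (1/e) * hazard fraction and 1."""
    hf = hazard_fraction(R)
    return (hf / math.e, 1.0)

R = 1.25  # e.g., 25% higher mortality at the high exposure level
print(hazard_fraction(R))  # 0.2, i.e. "20% of deaths attributable"
lo, hi = etiologic_fraction_bounds(R)
print(lo, hi)
```

Even in this mild example, the etiologic fraction is only pinned down to a wide interval (roughly 0.074 to 1), which is why mortality data alone cannot identify the number of deaths advanced by exposure.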

11.
We introduce a class of strategies that generalizes examples constructed in two‐player games under imperfect private monitoring. A sequential equilibrium is belief‐free if, after every private history, each player's continuation strategy is optimal independently of his belief about his opponents' private histories. We provide a simple and sharp characterization of equilibrium payoffs using those strategies. While such strategies support a large set of payoffs, they are not rich enough to generate a folk theorem in most games besides the prisoner's dilemma, even when noise vanishes.

12.
In this study, we examined optimal pricing strategies for “pay‐per‐time,” “pay‐per‐volume,” and “pay‐per‐both‐time‐and‐volume” based leasing of data networks in a monopoly environment. Conventionally, network capacity distribution includes short‐/long‐term bandwidth and/or usage time leasing. When customers choose connection‐time–based pricing, their rational behavior is to fully utilize the bandwidth capacity within a fixed time period, which may overload the network. Conversely, when customers choose volume‐based strategies, their rational behavior is to send only the minimum bytes necessary (even for time‐fixed tasks in real‐time applications), causing the quality of the task to decrease, which in turn creates an opportunity cost for the provider. Choosing a hybrid pay‐per‐both‐time‐and‐volume pricing scheme allows customers to take advantage of both pricing strategies while lessening the disadvantages of each, because consumers generally have both time‐ and size‐fixed tasks such as batch data transactions. One of the key contributions of this study is to show that pay‐per‐both‐time‐and‐volume pricing is a viable and often preferable alternative to offerings based on only time or only volume, and that judicious use of such a pricing policy is profitable to the network provider.

13.
Consider a two‐person intertemporal bargaining problem in which players choose actions and offers each period, and collect payoffs (as a function of that period's actions) while bargaining proceeds. This can alternatively be viewed as an infinitely repeated game wherein players can offer one another enforceable contracts that govern play for the rest of the game. Theory is silent with regard to how the surplus is likely to be split, because a folk theorem applies. Perturbing such a game with a rich set of behavioral types for each player yields a specific asymptotic prediction for how the surplus will be divided, as the perturbation probabilities approach zero. Behavioral types may follow nonstationary strategies and respond to the opponent's play. In equilibrium, rational players initially choose a behavioral type to imitate and a war of attrition ensues. How much should a player try to get and how should she behave while waiting for the resolution of bargaining? In both respects she should build her strategy around the advice given by the “Nash bargaining with threats” (NBWT) theory developed for two‐stage games. In any perfect Bayesian equilibrium, she can guarantee herself virtually her NBWT payoff by imitating a behavioral type with the following simple strategy: in every period, ask for (and accept nothing less than) that player's NBWT share and, while waiting for the other side to concede, take the action Nash recommends as a threat in his two‐stage game. The results suggest that there are forces at work in some dynamic games that favor certain payoffs over all others. This is in stark contrast to the classic folk theorems, to the further folk theorems established for repeated games with two‐sided reputational perturbations, and to the permissive results obtained in the literature on bargaining with payoffs as you go.

14.
We study network games in which users choose routes in computerized networks susceptible to congestion. In the “unsplittable” condition, route choices are completely unregulated, players are symmetric, and each player controls a single unit of flow and chooses a single origin–destination (O–D) path. In the “splittable” condition, which is the main focus of this study, route choices are partly regulated, players are asymmetric, and each player controls multiple units of flow and chooses multiple O–D paths to distribute her fleet. In each condition, users choose routes in two types of network: a basic network with three parallel routes and an augmented network with five routes sharing joint links. We construct and subsequently test equilibrium solutions for each combination of condition and network type, and then propose a Markov revision protocol to account for the dynamics of play. In both conditions, route choice behavior approaches equilibrium and the Braess Paradox is clearly manifested.
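The Braess Paradox can be reproduced with the textbook four-node example (our own illustration; the latency functions are the standard ones, not necessarily those used in the experiment):

```python
# A continuum of unit demand from s to t. Link latency depends on the fraction
# x of flow using the link. Basic network: route P1 = s->a->t with latency
# x + 1, and route P2 = s->b->t with latency 1 + x.
flow = 0.5                          # symmetric equilibrium: half on each route
basic_equilibrium_time = flow + 1   # 1.5 on either route

# Augmented network: add a zero-latency shortcut a->b, creating route
# P3 = s->a->b->t with latency x_sa + 0 + x_bt. P3 is always weakly better,
# so in equilibrium every driver takes it and both variable links carry
# the full flow of 1.
augmented_equilibrium_time = 1.0 + 0.0 + 1.0  # 2.0

# The paradox: adding capacity raises everyone's equilibrium travel time.
assert augmented_equilibrium_time > basic_equilibrium_time
print(basic_equilibrium_time, augmented_equilibrium_time)
```

No driver can gain by deviating in the augmented network (both original routes also cost 1 + 1 = 2 when the variable links are saturated), so the worse outcome is indeed the equilibrium.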

15.
This paper proposes a structural nonequilibrium model of initial responses to incomplete‐information games based on “level‐k” thinking, which describes behavior in many experiments with complete‐information games. We derive the model's implications in first‐ and second‐price auctions with general information structures, compare them to equilibrium and Eyster and Rabin's (2005) “cursed equilibrium,” and evaluate the model's potential to explain nonequilibrium bidding in auction experiments. The level‐k model generalizes many insights from equilibrium auction theory. It allows a unified explanation of the winner's curse in common‐value auctions and overbidding in those independent‐private‐value auctions without the uniform value distributions used in most experiments.

16.
Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson‐distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional “single‐hit” dose‐response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose‐response models in terms of probability generating functions. It is shown formally that the theoretical single‐hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single‐hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single‐hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose‐response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose‐response assessment as well as practical risk characterization are discussed.
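The probability-generating-function formulation makes the comparison easy to check numerically. The sketch below (our own; the negative binomial is the clustered special case named in the abstract) computes the single-hit infection risk 1 − G(1 − r), where G is the PGF of the dose and r the per-pathogen infection probability:

```python
import math

def single_hit_risk_poisson(mu, r):
    """1 - G(1 - r) with Poisson PGF G(s) = exp(mu * (s - 1))."""
    return 1.0 - math.exp(-mu * r)

def single_hit_risk_negbin(mu, k, r):
    """1 - G(1 - r) with negative binomial PGF G(s) = (1 + mu*(1-s)/k)**(-k);
    smaller dispersion k means stronger clustering (overdispersion)."""
    return 1.0 - (1.0 + mu * r / k) ** (-k)

mu, r = 10.0, 0.1  # mean dose of 10 pathogens, per-pathogen infection prob. 0.1
p_pois = single_hit_risk_poisson(mu, r)          # ~0.632
p_clustered = single_hit_risk_negbin(mu, 0.5, r) # ~0.423

# At identical mean dose, clustering lowers the theoretical single-hit risk.
assert p_clustered < p_pois
print(p_pois, p_clustered)
```

As k grows large the negative binomial risk approaches the Poisson risk, consistent with the abstract's finding that moderate clustering has only a modest effect.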

17.
While the literature on nonclassical measurement error traditionally relies on the availability of an auxiliary data set containing correctly measured observations, we establish that the availability of instruments enables the identification of a large class of nonclassical nonlinear errors‐in‐variables models with continuously distributed variables. Our main identifying assumption is that, conditional on the value of the true regressors, some “measure of location” of the distribution of the measurement error (e.g., its mean, mode, or median) is equal to zero. The proposed approach relies on the eigenvalue–eigenfunction decomposition of an integral operator associated with specific joint probability densities. The main identifying assumption is used to “index” the eigenfunctions so that the decomposition is unique. We propose a convenient sieve‐based estimator, derive its asymptotic properties, and investigate its finite‐sample behavior through Monte Carlo simulations.

18.
Human subjects in the newsvendor game place suboptimal orders: orders are typically between the expected profit‐maximizing quantity and mean demand (“pull‐to‐center bias”). In previous work, we have shown that impulse balance equilibrium (IBE), which is based on a simple ex post rationality principle along with an equilibrium condition, can predict ordering decisions in the laboratory. In this study, we extend IBE to standing orders and multiple‐period feedback and show that it predicts—in line with previous findings—that constraining newsvendors to make a standing order for a sequence of periods moves the average of submitted orders toward the optimum.
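For context, the expected-profit-maximizing order in the newsvendor model is the critical fractile of the demand distribution. A minimal sketch (our own, with hypothetical uniform demand and prices; it illustrates the benchmark, not IBE itself):

```python
# Newsvendor with demand ~ Uniform(0, 100): the optimal order is the critical
# fractile F^{-1}(cu / (cu + co)), where cu is the underage cost (lost margin)
# and co the overage cost (unsold unit cost).
price, cost = 12.0, 3.0        # hypothetical high-margin product
cu, co = price - cost, cost    # cu = 9, co = 3
q_star = 100 * cu / (cu + co)  # uniform quantile: 100 * 0.75 = 75.0

# Mean demand is 50, so the pull-to-center bias predicts human orders landing
# somewhere between 50 and the optimum of 75.
print(q_star)
```

For a low-margin product (cu < co) the optimum falls below mean demand, and the observed bias pulls orders upward toward the mean instead.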

19.
Past research in modeling human judgments has been accompanied by continuing debate as to the necessity and effectiveness of using “configural” models, vis-à-vis “first-order” models, to represent complex decision processes. The power of first-order models to adequately represent apparently configural processes has repeatedly been demonstrated, to the frustration of those researchers who intuitively feel that people decide in a complex and configural manner. This paper presents the theory that the apparent weakness of configural models may be attributed to the assumption that interaction effects are “continuous” phenomena when in fact they are “discrete,” or local, interactions. The definition of subspaces of the predictor set, over which “local” first-order hyperplanes may be used, is investigated as a viable means of representing “discrete” interactions while preserving some of the parsimony of the “continuous” formulations. The Automatic Interaction Detection technique is applied to define subspaces and local models, with positive results for an installment loan officer. A comparison with the results of a recent study that used various “continuous” formulations then shows a definite superiority of the “local” modeling approach.

20.
We develop an equilibrium framework that relaxes the standard assumption that people have a correctly specified view of their environment. Each player is characterized by a (possibly misspecified) subjective model, which describes the set of feasible beliefs over payoff‐relevant consequences as a function of actions. We introduce the notion of a Berk–Nash equilibrium: Each player follows a strategy that is optimal given her belief, and her belief is restricted to be the best fit among the set of beliefs she considers possible. The notion of best fit is formalized in terms of minimizing the Kullback–Leibler divergence, which is endogenous and depends on the equilibrium strategy profile. Standard solution concepts such as Nash equilibrium and self‐confirming equilibrium constitute special cases where players have correctly specified models. We provide a learning foundation for Berk–Nash equilibrium by extending and combining results from the statistics literature on misspecified learning and the economics literature on learning in games.
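The best-fit restriction can be sketched with discrete distributions. Below is a toy illustration (our own; the feasible belief set is hypothetical) of picking, among misspecified beliefs, the one minimizing KL divergence from the true consequence distribution:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) between discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def best_fit_belief(true_dist, feasible_beliefs):
    """Berk-Nash-style belief restriction: among the (possibly misspecified)
    beliefs the player considers possible, select the one minimizing KL
    divergence from the true distribution of consequences."""
    return min(feasible_beliefs, key=lambda q: kl(true_dist, q))

# True distribution over 3 consequences; the player's subjective model admits
# only two beliefs, neither of which is correct (a misspecified model).
true_dist = [0.5, 0.3, 0.2]
feasible = [[0.6, 0.2, 0.2], [0.2, 0.4, 0.4]]
print(best_fit_belief(true_dist, feasible))  # [0.6, 0.2, 0.2]
```

In the full framework this minimization is endogenous: the true distribution of consequences itself depends on the equilibrium strategy profile, so belief and behavior must be mutually consistent.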


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号