Similar Articles
20 similar articles found.
1.
We axiomatize preferences that can be represented by a monotonic aggregation of subjective expected utilities generated by a utility function and some set of i.i.d. probability measures over a product state space, S. For such preferences, we define relevant measures, show that they are treated as if they were the only marginals possibly governing the state space, and connect them with the measures appearing in the aforementioned representation. These results allow us to interpret relevant measures as reflecting part of perceived ambiguity, meaning subjective uncertainty about probabilities over states. Under mild conditions, we show that increases or decreases in ambiguity aversion cannot affect the relevant measures. This property, necessary for the conclusion that these measures reflect only perceived ambiguity, distinguishes the set of relevant measures from the leading alternative in the literature. We apply our findings to a number of well‐known models of ambiguity‐sensitive preferences. For each model, we identify the set of relevant measures and the implications of comparative ambiguity aversion.

2.
Consider a group of individuals with unobservable perspectives (subjective prior beliefs) about a sequence of states. In each period, each individual receives private information about the current state and forms an opinion (a posterior belief). She also chooses a target individual and observes the target's opinion. This choice involves a trade‐off between well‐informed targets, whose signals are precise, and well‐understood targets, whose perspectives are well known. Opinions are informative about the target's perspective, so observed individuals become better understood over time. We identify a simple condition under which long‐run behavior is history independent. When this fails, each individual restricts attention to a small set of experts and observes the most informed among these. A broad range of observational patterns can arise with positive probability, including opinion leadership and information segregation. In an application to areas of expertise, we show how these mechanisms generate own field bias and large field dominance.

3.
We characterize optimal mechanisms for the multiple‐good monopoly problem and provide a framework to find them. We show that a mechanism is optimal if and only if a measure μ derived from the buyer's type distribution satisfies certain stochastic dominance conditions. This measure expresses the marginal change in the seller's revenue under marginal changes in the rent paid to subsets of buyer types. As a corollary, we characterize the optimality of grand‐bundling mechanisms, strengthening several results in the literature, where only sufficient optimality conditions have been derived. As an application, we show that the optimal mechanism for n independent uniform items each supported on [c,c+1] is a grand‐bundling mechanism, as long as c is sufficiently large, extending Pavlov's (2011) result for two items. At the same time, our characterization also implies that, for all c and for all sufficiently large n, the optimal mechanism for n independent uniform items supported on [c,c+1] is not a grand‐bundling mechanism.
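
A minimal Monte Carlo sketch of the grand-bundling comparison (not the paper's measure-theoretic characterization): for uniform values on [c,c+1] with c large, a posted price for the grand bundle earns more than posting prices item by item. Function names, grid sizes, and draw counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def separate_revenue(c):
    # One uniform[c, c+1] item at posted price p earns p * (c + 1 - p);
    # for c >= 1 the maximum is at the corner p = c (every type buys).
    p = max(c, (c + 1) / 2)
    return p * (c + 1 - p)

def bundle_revenue(c, n, draws=200_000):
    # Grid-search the best posted price for the grand bundle by simulation.
    values = rng.uniform(c, c + 1, size=(draws, n)).sum(axis=1)
    prices = np.linspace(n * c, n * (c + 1), 400)
    return max(p * np.mean(values >= p) for p in prices)

c, n = 5.0, 2
print("separate:", n * separate_revenue(c))  # = n * c = 10.0 for c >= 1
print("bundle:  ", bundle_revenue(c, n))     # ~ 10.05: the bundle wins
```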

4.
An extension to Ellsberg's experiment demonstrates that attitudes to ambiguity and compound objective lotteries are tightly associated. The sample is decomposed into three main groups: subjective expected utility subjects, who reduce compound objective lotteries and are ambiguity neutral, and two groups that exhibit different forms of association between preferences over compound lotteries and ambiguity, corresponding to alternative theoretical models that account for ambiguity averse or seeking behavior.

5.
This paper considers mechanism design problems in environments with ambiguity‐sensitive individuals. The novel idea is to introduce ambiguity in mechanisms so as to exploit the ambiguity sensitivity of individuals. Deliberate engineering of ambiguity, through ambiguous mediated communication, can allow (partial) implementation of social choice functions that are not incentive compatible with respect to prior beliefs. We provide a complete characterization of social choice functions partially implementable by ambiguous mechanisms.

6.
The availability of high frequency financial data has generated a series of estimators based on intra‐day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create "feasible statistics" for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θ_t dt, where the spot parameter process θ can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two‐scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as those for leverage effects, high frequency betas, and semivariance.
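
The observed-AVAR construction itself is beyond an abstract-sized sketch, but the two-scales idea it builds on can be illustrated with the classical two-scales realized variance of Zhang, Mykland, and Aït-Sahalia (2005). The snippet below is that classical estimator, offered only to convey the flavor of the two-scales device, not the paper's observed-AVAR estimator.

```python
import numpy as np

def tsrv(log_prices, K=30):
    # Two-scales realized variance: a K-step subsampled RV, bias-corrected
    # by the one-step RV, with a small-sample adjustment factor.
    y = np.asarray(log_prices)
    n = len(y) - 1
    rv_fast = np.sum(np.diff(y) ** 2)            # dominated by noise
    rv_slow = np.sum((y[K:] - y[:-K]) ** 2) / K  # averaged K-step RV
    n_bar = (n - K + 1) / K
    return (rv_slow - (n_bar / n) * rv_fast) / (1 - n_bar / n)

# Brownian motion with integrated variance 0.04 plus microstructure noise.
rng = np.random.default_rng(1)
n = 23_400
x = np.cumsum(rng.normal(0, np.sqrt(0.04 / n), n + 1))
y = x + rng.normal(0, 5e-4, n + 1)
print(tsrv(y))  # ~ 0.04, while np.sum(np.diff(y)**2) is near 0.052
```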

7.
In this article, we study the competitive interactions between a firm producing standard products and a firm producing custom products. Consumers with heterogeneous preferences choose between n standard products, which may not meet their preferences exactly but are available immediately, and a custom product, available only after a certain lead time l. Standard products incur a variety cost that increases with n and custom products incur a lead time cost that is decreasing in the lead time l. We consider a two‐stage game wherein at stage 1, the standard product firm chooses the variety and the custom firm chooses the lead time and then both firms set prices simultaneously. We characterize the subgame‐perfect Nash equilibrium of the game. We find that both firms can coexist in equilibrium, either sharing the market as local monopolists or in a price‐competitive mode. The standard product firm may offer significant or minimal variety depending on the equilibrium outcome. We provide several interesting insights on the variety, lead time, and prices of the products offered and on the impact of problem parameters on the equilibrium outcomes. For instance, we show that the profit margin and price of the custom product are likely to be higher than those of standard products in equilibrium under certain conditions. Also, custom firms are more likely to survive and succeed in product markets with larger potential market sizes. Another interesting insight is that increased consumer sensitivity to product fit may result in a lower lead time for the custom product.

8.
Two fundamental axioms in social choice theory are consistency with respect to a variable electorate and consistency with respect to components of similar alternatives. In the context of traditional non‐probabilistic social choice, these axioms are incompatible with each other. We show that in the context of probabilistic social choice, these axioms uniquely characterize a function proposed by Fishburn (1984). Fishburn's function returns so‐called maximal lotteries, that is, lotteries that correspond to optimal mixed strategies in the symmetric zero‐sum game induced by the pairwise majority margins. Maximal lotteries are guaranteed to exist due to von Neumann's Minimax Theorem, are almost always unique, and can be efficiently computed using linear programming.
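
Since maximal lotteries are computable by linear programming, a minimal sketch is easy to give (function names are ours): find a mixed strategy guaranteeing the value 0 of the skew-symmetric majority-margin game.

```python
import numpy as np
from scipy.optimize import linprog

def maximal_lottery(margins):
    # Maximal lottery: an optimal mixed strategy of the symmetric zero-sum
    # game given by the skew-symmetric majority-margin matrix `margins`.
    m = len(margins)
    # Find p >= 0 with sum(p) = 1 and p @ margins >= 0 componentwise
    # (the game's value is 0 because the matrix is skew-symmetric).
    res = linprog(
        c=np.zeros(m),
        A_ub=-np.asarray(margins).T, b_ub=np.zeros(m),
        A_eq=np.ones((1, m)), b_eq=[1.0],
        bounds=[(0, 1)] * m,
    )
    return res.x

# Condorcet cycle a > b > c > a with unit margins: here the maximal
# lottery is unique and uniform.
M = np.array([[0, 1, -1], [-1, 0, 1], [1, -1, 0]])
print(maximal_lottery(M))  # approximately [1/3, 1/3, 1/3]
```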

9.
We develop an extension of Luce's random choice model to study violations of the weak axiom of revealed preference. We introduce the notion of a stochastic preference and show that it implies the Luce model. Then, to address well‐known difficulties of the Luce model, we define the attribute rule and establish that the existence of a well‐defined stochastic preference over attributes characterizes it. We prove that the set of attribute rules and the set of random utility maximizers are essentially the same. Finally, we show that both the Luce and attribute rules have a unique consistent extension to dynamic problems.
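
For reference, the baseline model being extended is Luce's rule, under which the probability of choosing x from a menu A is proportional to a positive weight u(x):

```latex
\rho(x, A) \;=\; \frac{u(x)}{\sum_{y \in A} u(y)}, \qquad x \in A, \; u > 0.
```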

10.
We consider an agent who chooses an option after receiving some private information. This information, however, is unobserved by an analyst, so from the latter's perspective, choice is probabilistic or random. We provide a theory in which information can be fully identified from random choice. In addition, the analyst can perform the following inferences even when information is unobservable: (1) directly compute ex ante valuations of menus from random choice and vice versa, (2) assess which agent has better information by using choice dispersion as a measure of informativeness, (3) determine if the agent's beliefs about information are dynamically consistent, and (4) test to see if these beliefs are well‐calibrated or rational.

11.
Differences in preferences are important to explain variation in individuals' behavior. There is, however, no consensus on how to take these differences into account when evaluating policies. While prominent in the economic literature, the standard utilitarian criterion is controversial. According to some, interpersonal comparability of utilities involves value judgments with little objective basis. Others argue that social justice is primarily about the distribution of commodities assigned to individuals, rather than their subjective satisfaction or happiness. In this paper, we propose and axiomatically characterize a criterion, named opportunity‐equivalent utilitarian, that addresses these claims. First, our criterion ranks social alternatives on the basis of individuals' ordinal preferences. Second, it compares individuals based on the fairness of their assignments. Opportunity‐equivalent utilitarianism requires society to maximize the sum of specific indices of well‐being that are cardinal, interpersonally comparable, and represent each individual's preferences.

12.
This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) models as special cases. These models are often ill‐posed and hence it is difficult to verify whether a (possibly nonlinear) functional is root‐n estimable or not. We provide computationally simple, unified inference procedures that are asymptotically valid regardless of whether a functional is root‐n estimable or not. We establish the following new useful results: (1) the asymptotic normality of a plug‐in penalized sieve minimum distance (PSMD) estimator of a (possibly nonlinear) functional; (2) the consistency of simple sieve variance estimators for the plug‐in PSMD estimator, and hence the asymptotic chi‐square distribution of the sieve Wald statistic; (3) the asymptotic chi‐square distribution of an optimally weighted sieve quasi likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotic tight distribution of a non‐optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) asymptotic properties of sieve Wald and QLR statistics for functionals of increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented.

13.
We develop a theory of how the value of an agent's information advantage depends on the persistence of information. We focus on strategic situations with strict conflict of interest, formalized as stochastic zero‐sum games where only one of the players observes the state that evolves according to a Markov operator. Operator Q is said to be better for the informed player than operator P if the value of the game under Q is higher than under P regardless of the stage game. We show that this defines a convex partial order on the space of ergodic Markov operators. Our main result is a full characterization of this partial order, interpretable as an ordinal notion of persistence relevant for games. The analysis relies on a novel characterization of the value of a stochastic game with incomplete information.

14.
A sequence of experiments documents static and dynamic "preference reversals" between sooner‐smaller and later‐larger rewards, when the sooner reward could be immediate. The theoretically motivated design permits separate identification of time consistent, stationary, and time invariant choices. At least half of the subjects are time consistent, but only three‐quarters of them exhibit stationary choices. About half of subjects with time‐inconsistent choices have stationary preferences. These results challenge the view that present‐biased preferences are the main source of time‐inconsistent choices.

15.
This paper makes the following original contributions to the literature. (i) We develop a simpler analytical characterization and numerical algorithm for Bayesian inference in structural vector autoregressions (VARs) that can be used for models that are overidentified, just‐identified, or underidentified. (ii) We analyze the asymptotic properties of Bayesian inference and show that in the underidentified case, the asymptotic posterior distribution of contemporaneous coefficients in an n‐variable VAR is confined to the set of values that orthogonalize the population variance–covariance matrix of ordinary least squares residuals, with the height of the posterior proportional to the height of the prior at any point within that set. For example, in a bivariate VAR for supply and demand identified solely by sign restrictions, if the population correlation between the VAR residuals is positive, then even if one has available an infinite sample of data, any inference about the demand elasticity is coming exclusively from the prior distribution. (iii) We provide analytical characterizations of the informative prior distributions for impulse‐response functions that are implicit in the traditional sign‐restriction approach to VARs, and we note, as a special case of result (ii), that the influence of these priors does not vanish asymptotically. (iv) We illustrate how Bayesian inference with informative priors can be both a strict generalization and an unambiguous improvement over frequentist inference in just‐identified models. (v) We propose that researchers need to explicitly acknowledge and defend the role of prior beliefs in influencing structural conclusions and we illustrate how this could be done using a simple model of the U.S. labor market.
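
A sketch of the traditional sign-restriction algorithm whose implicit prior result (iii) characterizes, for the bivariate supply-demand example: draw rotations uniformly (Haar measure), keep those whose impact responses carry the assumed signs. Σ is taken as given here (in practice it would be drawn from its posterior), and all names, the sign conventions, and draw counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_rotation(n):
    # Uniform (Haar) draw from the orthogonal group via QR with sign fix.
    Z = rng.normal(size=(n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

def accepted_impacts(Sigma, n_draws=10_000):
    # Keep impact matrices A = chol(Sigma) @ Q whose columns look like a
    # demand shock (quantity up, price up) and a supply shock (quantity up,
    # price down); row 0 = quantity, row 1 = price.
    C = np.linalg.cholesky(Sigma)
    out = []
    for _ in range(n_draws):
        A = C @ haar_rotation(2)
        A = A * np.sign(A[0, :])      # normalize: quantity responses > 0
        if A[1, 0] < 0 < A[1, 1]:
            A = A[:, ::-1]            # reorder columns as (demand, supply)
        if A[1, 0] > 0 > A[1, 1]:
            out.append(A)
    return np.array(out)

Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])  # positive residual correlation
draws = accepted_impacts(Sigma)
# The spread of the accepted draws is the implicit prior on the impact
# responses -- per result (iii), it does not vanish as the sample grows.
print(len(draws), draws.mean(axis=0))
```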

16.
We propose a novel technique to boost the power of testing a high‐dimensional hypothesis H0 : θ = 0 against sparse alternatives where the null hypothesis is violated by only a few components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low powers due to the accumulation of errors in estimating high‐dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to the slow convergence. Based on a screening technique, we introduce a "power enhancement component," which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. The proposed methods are then applied to testing the factor pricing models and validating the cross‐sectional independence in panel data models.
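
A schematic version of the construction. The threshold and the standardized quadratic form below are common illustrative choices, not the paper's exact tuning, and the pivotal part assumes independent studentized components.

```python
import numpy as np

def power_enhanced_stat(theta_hat, se, n):
    # J = J0 + J1: the screening component J0 vanishes with high probability
    # under H0 (theta = 0) but diverges under sparse alternatives; J1 is an
    # asymptotically pivotal Wald-type statistic.
    t = theta_hat / se                              # studentized components
    p = len(t)
    delta = np.sqrt(np.log(p)) * np.log(np.log(n))  # screening threshold
    J0 = np.sqrt(p) * np.sum(t[np.abs(t) > delta] ** 2)   # power enhancement
    J1 = (np.sum(t ** 2) - p) / np.sqrt(2 * p)      # ~ N(0,1) under H0
    return J0 + J1                                  # one-sided normal test

rng = np.random.default_rng(3)
p, n = 500, 1000
print(power_enhanced_stat(rng.normal(size=p), np.ones(p), n))  # null: ~ N(0,1)
theta = np.zeros(p); theta[:3] = 8 / np.sqrt(n)                # 3 violations
print(power_enhanced_stat(theta + rng.normal(size=p) / np.sqrt(n),
                          np.ones(p) / np.sqrt(n), n))         # J0 diverges
```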

17.
We consider empirical measurement of equivalent variation (EV) and compensating variation (CV) resulting from price change of a discrete good using individual‐level data when there is unobserved heterogeneity in preferences. We show that for binary and unordered multinomial choice, the marginal distributions of EV and CV can be expressed as simple closed‐form functionals of conditional choice probabilities under essentially unrestricted preference distributions. These results hold even when the distribution and dimension of unobserved heterogeneity are neither known nor identified, and utilities are neither quasilinear nor parametrically specified. The welfare distributions take simple forms that are easy to compute in applications. In particular, average EV for a price rise equals the change in average Marshallian consumer surplus and is smaller than average CV for a normal good. These nonparametric point‐identification results fail for ordered choice if the unit price is identical for all alternatives, thereby providing a connection to Hausman–Newey's (2014) partial identification results for the limiting case of continuous choice.
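
In the binary-choice case, the surplus statement in the abstract takes a compact form. Writing q(p) for the conditional choice probability at price p (our notation, for a price rise from p0 to p1):

```latex
\mathbb{E}[\mathrm{EV}]
\;=\; -\int_{p_0}^{p_1} q(p)\,dp
\;=\; \Delta\,\overline{\mathrm{CS}}_{\text{Marshall}},
\qquad
\mathbb{E}[\mathrm{CV}] \;\ge\; \mathbb{E}[\mathrm{EV}]
\ \text{for a normal good.}
```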

18.
Mechanism design enables a social planner to obtain a desired outcome by leveraging the players' rationality and their beliefs. It is thus a fundamental, but as yet unproven, intuition that the higher the level of rationality of the players, the better the set of obtainable outcomes. In this paper, we prove this fundamental intuition for players with possibilistic beliefs, a model long considered in epistemic game theory. Specifically:
• We define a sequence of monotonically increasing revenue benchmarks for single‐good auctions, G0 ≤ G1 ≤ G2 ≤ ⋯, where each Gi is defined over the players' beliefs and G0 is the second‐highest valuation (i.e., the revenue benchmark achieved by the second‐price mechanism).
• We (1) construct a single, interim individually rational auction mechanism that, without any clue about the rationality level of the players, guarantees revenue Gk whenever all players have rationality levels ≥ k+1, and (2) prove that no such mechanism can guarantee revenue even close to Gk when at least two players are at most level‐k rational.

19.
The demand for assets as prices and initial wealth vary identifies beliefs and attitudes towards risk. We derive conditions that guarantee identification with no knowledge either of the cardinal utility index (attitudes towards risk) or of the distribution of future endowments or payoffs of assets; the argument applies even if the asset market is incomplete and demand is observed only locally.

20.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data‐rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function‐valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced‐form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post‐regularization and post‐selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced‐form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment‐condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function‐valued) parameters within this general framework where any high‐quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high‐dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity‐based estimation of regression functions for function‐valued outcomes.
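
A minimal cross-fitted sketch of the orthogonal (doubly robust) moment for the exogenous-treatment ATE special case. The paper's framework covers far more (LATE, LQTE, function-valued outcomes); the learners and names below are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def aipw_ate(X, d, y, n_folds=5, seed=0):
    # Orthogonal score: psi = m1 - m0 + d*(y - m1)/e - (1 - d)*(y - m0)/(1 - e).
    # Cross-fitting keeps the ML estimation error of (e, m0, m1) from
    # contaminating the moment condition.
    psi = np.zeros(len(y))
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        e = RandomForestClassifier(random_state=seed).fit(
            X[train], d[train]).predict_proba(X[test])[:, 1]
        e = np.clip(e, 0.01, 0.99)          # trim extreme propensities
        m = {}
        for a in (0, 1):                    # outcome regression in each arm
            arm = train[d[train] == a]
            m[a] = RandomForestRegressor(random_state=seed).fit(
                X[arm], y[arm]).predict(X[test])
        psi[test] = (m[1] - m[0]
                     + d[test] * (y[test] - m[1]) / e
                     - (1 - d[test]) * (y[test] - m[0]) / (1 - e))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(y))  # ATE, std. error
```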
