Similar Literature
20 similar documents retrieved.
1.
The demand for assets as prices and initial wealth vary identifies beliefs and attitudes towards risk. We derive conditions that guarantee identification with no knowledge either of the cardinal utility index (attitudes towards risk) or of the distribution of future endowments or payoffs of assets; the argument applies even if the asset market is incomplete and demand is observed only locally.

2.
We axiomatize preferences that can be represented by a monotonic aggregation of subjective expected utilities generated by a utility function and some set of i.i.d. probability measures over a product state space, S. For such preferences, we define relevant measures, show that they are treated as if they were the only marginals possibly governing the state space, and connect them with the measures appearing in the aforementioned representation. These results allow us to interpret relevant measures as reflecting part of perceived ambiguity, meaning subjective uncertainty about probabilities over states. Under mild conditions, we show that increases or decreases in ambiguity aversion cannot affect the relevant measures. This property, necessary for the conclusion that these measures reflect only perceived ambiguity, distinguishes the set of relevant measures from the leading alternative in the literature. We apply our findings to a number of well-known models of ambiguity-sensitive preferences. For each model, we identify the set of relevant measures and the implications of comparative ambiguity aversion.

3.
Many violations of the independence axiom of expected utility can be traced to subjects' attraction to risk-free prospects. The key axiom in this paper, negative certainty independence (Dillenberger, 2010), formalizes this tendency. Our main result is a utility representation of all preferences over monetary lotteries that satisfy negative certainty independence together with basic rationality postulates. Such preferences can be represented as if the agent were unsure of how to evaluate a given lottery p; instead, she has in mind a set of possible utility functions over outcomes and displays cautious behavior: she computes the certainty equivalent of p with respect to each possible function in the set and picks the smallest one. The set of utilities is unique in a well-defined sense. We show that our representation can also be derived from a "cautious" completion of an incomplete preference relation.
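For orientation, the cautious rule described above can be written compactly as follows; this is a standard way of stating such a representation, and the notation (the set of utilities \(\mathcal{V}\)) is ours rather than the paper's:
\[
V(p) \;=\; \min_{v \in \mathcal{V}} \, v^{-1}\!\bigl(\mathbb{E}_p[v]\bigr),
\]
that is, the lottery p is evaluated by the smallest of its certainty equivalents computed under the utility functions the agent considers possible.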

4.
We develop a new parametric estimation procedure for option panels observed with error. We exploit asymptotic approximations assuming an ever increasing set of option prices in the moneyness (cross-sectional) dimension, but with a fixed time span. We develop consistent estimators for the parameters and the dynamic realization of the state vector governing the option price dynamics. The estimators converge stably to a mixed-Gaussian law and we develop feasible estimators for the limiting variance. We also provide semiparametric tests for the option price dynamics based on the distance between the spot volatility extracted from the options and one constructed nonparametrically from high-frequency data on the underlying asset. Furthermore, we develop new tests for the day-by-day model fit over specific regions of the volatility surface and for the stability of the risk-neutral dynamics over time. A comprehensive Monte Carlo study indicates that the inference procedures work well in empirically realistic settings. In an empirical application to S&P 500 index options, guided by the new diagnostic tests, we extend existing asset pricing models by allowing for a flexible dynamic relation between volatility and priced jump tail risk. Importantly, we document that the priced jump tail risk typically responds in a more pronounced and persistent manner than volatility to large negative market shocks.

5.
Modeling intergenerational altruism is crucial to evaluate the long-term consequences of current decisions, and requires a set of principles guiding such altruism. We axiomatically develop a theory of pure, direct altruism: altruism is pure if it concerns the total utility (rather than the mere consumption utility) of future generations, and direct if it directly incorporates the utility of all future generations. Our axioms deliver a new class of altruistic, forward-looking preferences, in which the weight put on the consumption of a future generation generally depends on the consumption of other generations. The only preferences lacking this dependence correspond to the quasi-hyperbolic discounting model, which our theory characterizes. Our approach provides a framework to analyze welfare in the presence of altruistic preferences and addresses technical challenges stemming from the interdependent nature of such preferences.
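For reference, the quasi-hyperbolic (beta-delta) discounting model mentioned above is conventionally written as (standard statement, with notation not taken from the paper)
\[
U_t \;=\; u(c_t) \;+\; \beta \sum_{k=1}^{\infty} \delta^{k}\, u(c_{t+k}), \qquad 0 < \beta \le 1,\; 0 < \delta < 1,
\]
where beta = 1 recovers exponential discounting and beta < 1 captures present bias.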

6.
This article considers a class of fresh-product supply chains in which products need to be transported by the upstream producer from a production base to a distant retail market. Due to high perishability, a portion of the products being shipped may decay during transportation and, therefore, become unsaleable. We consider a supply chain consisting of a single producer and a single distributor, and investigate two commonly adopted business models: (i) in the "pull" model, the distributor places an order, then the producer determines the shipping quantity, taking into account potential product decay during transportation, and transports the products to the destination market of the distributor; (ii) in the "push" model, the producer ships a batch of products to a distant wholesale market, and then the distributor purchases and resells to end customers. By considering a price-sensitive end-customer demand, we investigate the optimal decisions for supply chain members, including order quantity, shipping quantity, and retail price. Our research shows that both the producer and the distributor (and thus the supply chain) will perform better if the pull model is adopted. To improve supply chain performance, we propose a fixed inventory-plus factor (FIPF) strategy, in which the producer announces a predetermined inventory-plus factor and the distributor compensates the producer for any surplus inventory that would otherwise be wasted. We show that this strategy is a Pareto improvement over the pull and push models for both parties. Finally, numerical experiments are conducted, which reveal some interesting managerial insights on the comparison between different business models.

7.
A mixed manna contains goods (that everyone likes) and bads (that everyone dislikes), as well as items that are goods to some agents, but bads or satiated to others. If all items are goods and utility functions are homogeneous of degree 1 and concave (and monotone), the competitive division maximizes the Nash product of utilities (Gale–Eisenberg): hence it is welfarist (determined by the set of feasible utility profiles), unique, continuous, and easy to compute. We show that the competitive division of a mixed manna is still welfarist. If the zero utility profile is Pareto dominated, the competitive profile is strictly positive and still uniquely maximizes the product of utilities. If the zero profile is unfeasible (for instance, if all items are bads), the competitive profiles are strictly negative and are the critical points of the product of disutilities on the efficiency frontier. The latter allows for multiple competitive utility profiles, from which no single-valued selection can be continuous or resource monotonic. Thus the implementation of competitive fairness under linear preferences in interactive platforms like SPLIDDIT will be more difficult when the manna contains bads that overwhelm the goods.
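As a reminder of the Gale–Eisenberg benchmark invoked above (standard statement; z ranges over feasible allocations), the competitive division in the all-goods case solves
\[
\max_{z \ \text{feasible}} \ \prod_{i} u_i(z_i) \quad\Longleftrightarrow\quad \max_{z \ \text{feasible}} \ \sum_{i} \log u_i(z_i),
\]
whereas, per the abstract, when the zero profile is unfeasible the competitive utility profiles are instead the critical points of the product of disutilities on the efficiency frontier.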

8.
Subjects were instructed on how to use simple subjective probability and utility scales, and they were asked to actively role-play a decision maker in seven risk-dilemma situations. Each scenario provided subjects with specific subjective expected utility (SEU) information for both a certain and an uncertain decision alternative, but left out one critical SEU component. Subjects supplied either the lowest probability or the lowest utility for success that they found necessary before they would select the uncertain over the certain alternative in each dilemma. Three experiments examined: (a) the degree to which subjects' estimates deviated from a pattern predicted by SEU models; (b) differences in choice patterns induced by response-format variations (e.g., probability vs. utility estimation); (c) the effects of the subject's sex; and (d) the effects of the sex-role framing of the decision problems. Subjects generally chose in accord with SEU maximization principles and did so with decreasing deviations from theoretical values as practice over situations increased (Experiments I, II, and III). Decisions were initially more conservative on items requesting probability estimates (Experiment I), but this effect washed out over situations. Sex differences were revealed (Experiments I and III), but in limited fashion. Rather, a replicable three-way interaction of sex, sex-role appropriateness, and response format was found (Experiments I, II, and III), in which females responded "rationally" under both probability- and utility-estimation conditions and under both role sets (male and female). Males, however, responded extremely conservatively under female-framed, probability-estimation conditions. Subjects' choices were stable over a three-week interval (Experiment III).
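The elicitation task can be summarized by the usual SEU indifference condition (a generic illustration, not the authors' exact scales): with a fixed utility scale, the subject should prefer the uncertain alternative whenever
\[
p\,u(\text{success}) + (1-p)\,u(\text{failure}) \;\ge\; u(\text{certain}),
\]
so the lowest acceptable probability is p* = [u(certain) - u(failure)] / [u(success) - u(failure)]; symmetric algebra gives the lowest acceptable utility for success when the probability is held fixed instead.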

9.
This paper proposes a method for aggregating individual preferences in the context of uncertainty. Individuals are assumed to abide by Savage's model of Subjective Expected Utility, in which everyone has his/her own utility and subjective probability. Disagreement on probabilities among individuals gives rise to uncertainty at the societal level, and thus society may entertain a set of probabilities rather than only one. We assume that social preference admits a Maxmin Expected Utility representation. In this context, two Pareto-type conditions are shown to be equivalent to social utility being a weighted average of individual utilities and the social set of priors containing only weighted averages of individual priors. Thus, society respects consensus among individuals' beliefs and does not add ambiguity beyond disagreement on beliefs. We also deal with the case in which society does not rule out any individual belief.
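For orientation, the Maxmin Expected Utility representation assumed for society is the standard Gilboa–Schmeidler criterion (standard statement; C denotes the social set of priors):
\[
V(f) \;=\; \min_{p \in C} \int_{S} u\bigl(f(s)\bigr)\, dp(s),
\]
and the Pareto-type conditions of the abstract pin down u as a weighted average of the individual utilities while requiring every prior in C to be a weighted average of the individual priors.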

10.
Our paper provides a complete characterization of leverage and default in binomial economies with financial assets serving as collateral. Our Binomial No-Default Theorem states that any equilibrium is equivalent (in real allocations and prices) to another equilibrium in which there is no default. Thus actual default is irrelevant, though the potential for default drives the equilibrium and limits borrowing. This result is valid with arbitrary preferences and endowments, contingent or noncontingent promises, many assets and consumption goods, production, and multiple periods. We also show that only no-default equilibria would be selected if there were the slightest cost of using collateral or handling default. Our Binomial Leverage Theorem shows that the equilibrium loan-to-value ratio (LTV) for noncontingent debt contracts is the ratio of the worst-case return of the asset to the riskless gross rate of interest. In binomial economies, leverage is determined by down risk and not by volatility.
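In symbols, the Binomial Leverage Theorem stated above reads (notation ours):
\[
\mathrm{LTV} \;=\; \frac{\underline{R}}{1+r},
\]
where \(\underline{R}\) is the asset's worst-case (down-state) gross return and \(1+r\) is the riskless gross rate of interest.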

11.
We explore the set of preferences defined over temporal lotteries in an infinite horizon setting. We provide utility representations for all preferences that are both recursive and monotone. Our results indicate that the class of monotone recursive preferences includes Uzawa–Epstein preferences and risk-sensitive preferences, but leaves aside several of the recursive models suggested by Epstein and Zin (1989) and Weil (1990). Our representation result is derived in great generality using Lundberg's (1982, 1985) work on functional equations.
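As one concrete member of the monotone recursive class, risk-sensitive preferences are usually written in the Hansen–Sargent form (a standard statement, not taken from the paper):
\[
V_t \;=\; u(c_t) \;-\; \frac{\beta}{\theta} \log \mathbb{E}_t\!\left[e^{-\theta V_{t+1}}\right], \qquad \theta > 0,
\]
which collapses to the time-additive recursion \(V_t = u(c_t) + \beta\,\mathbb{E}_t[V_{t+1}]\) as \(\theta \to 0\).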

12.
Consider a group of individuals with unobservable perspectives (subjective prior beliefs) about a sequence of states. In each period, each individual receives private information about the current state and forms an opinion (a posterior belief). She also chooses a target individual and observes the target's opinion. This choice involves a trade-off between well-informed targets, whose signals are precise, and well-understood targets, whose perspectives are well known. Opinions are informative about the target's perspective, so observed individuals become better understood over time. We identify a simple condition under which long-run behavior is history independent. When this fails, each individual restricts attention to a small set of experts and observes the most informed among these. A broad range of observational patterns can arise with positive probability, including opinion leadership and information segregation. In an application to areas of expertise, we show how these mechanisms generate own field bias and large field dominance.

13.
We develop an econometric methodology to infer the path of risk premia from a large unbalanced panel of individual stock returns. We estimate the time-varying risk premia implied by conditional linear asset pricing models where the conditioning includes both instruments common to all assets and asset-specific instruments. The estimator uses simple weighted two-pass cross-sectional regressions, and we show its consistency and asymptotic normality under increasing cross-sectional and time series dimensions. We address consistent estimation of the asymptotic variance by hard thresholding, and testing for asset pricing restrictions induced by the no-arbitrage assumption. We derive the restrictions given by a continuum of assets in a multi-period economy under an approximate factor structure robust to asset repackaging. The empirical analysis on returns for about ten thousand U.S. stocks from July 1964 to December 2009 shows that risk premia are large and volatile in crisis periods. They exhibit large positive and negative strays from time-invariant estimates, follow the macroeconomic cycles, and do not match risk premia estimates on standard sets of portfolios. The asset pricing restrictions are rejected for a conditional four-factor model capturing market, size, value, and momentum effects.
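To fix ideas, here is a minimal, unweighted two-pass (Fama–MacBeth-style) sketch of the kind of cross-sectional estimation the abstract builds on. The paper's estimator is a weighted version designed for a large unbalanced panel with conditioning instruments; none of that is reproduced here, and the function name is ours.

    import numpy as np

    def two_pass_risk_premia(R, F):
        """Toy two-pass estimator.

        R: (T, N) array of excess returns; F: (T, K) array of factor realizations.
        First pass: time-series OLS of each asset on the factors -> betas.
        Second pass: per-period cross-sectional OLS of returns on betas -> lambda_t.
        Returns the time-averaged premia and the per-period path lambda_t.
        """
        T, N = R.shape
        X = np.column_stack([np.ones(T), F])            # intercept + factors
        B = np.linalg.lstsq(X, R, rcond=None)[0]        # (K+1, N) coefficients
        betas = B[1:].T                                 # (N, K) factor loadings
        Z = np.column_stack([np.ones(N), betas])        # cross-sectional regressors
        lambdas = np.array([np.linalg.lstsq(Z, R[t], rcond=None)[0] for t in range(T)])
        return lambdas[:, 1:].mean(axis=0), lambdas[:, 1:]

The per-period estimates lambda_t are the raw material from which a time-varying risk-premium path is built in this style of analysis.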

14.
We develop an equilibrium framework that relaxes the standard assumption that people have a correctly specified view of their environment. Each player is characterized by a (possibly misspecified) subjective model, which describes the set of feasible beliefs over payoff-relevant consequences as a function of actions. We introduce the notion of a Berk–Nash equilibrium: each player follows a strategy that is optimal given her belief, and her belief is restricted to be the best fit among the set of beliefs she considers possible. The notion of best fit is formalized in terms of minimizing the Kullback–Leibler divergence, which is endogenous and depends on the equilibrium strategy profile. Standard solution concepts such as Nash equilibrium and self-confirming equilibrium constitute special cases where players have correctly specified models. We provide a learning foundation for Berk–Nash equilibrium by extending and combining results from the statistics literature on misspecified learning and the economics literature on learning in games.
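The "best fit" notion mentioned above is Kullback–Leibler divergence; for discrete distributions it is the familiar quantity (standard definition)
\[
D_{\mathrm{KL}}(P \,\|\, Q_\theta) \;=\; \sum_{y} P(y)\,\log \frac{P(y)}{Q_\theta(y)},
\]
where, in a Berk–Nash equilibrium, P is the consequence distribution actually induced by the equilibrium strategy profile and \(Q_\theta\) is the distribution predicted by a parameter \(\theta\) of the player's subjective model; her belief is restricted to the \(\theta\)'s that minimize this divergence.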

15.
We develop an extension of Luce's random choice model to study violations of the weak axiom of revealed preference. We introduce the notion of a stochastic preference and show that it implies the Luce model. Then, to address well-known difficulties of the Luce model, we define the attribute rule and establish that the existence of a well-defined stochastic preference over attributes characterizes it. We prove that the set of attribute rules and random utility maximizers are essentially the same. Finally, we show that both the Luce and attribute rules have a unique consistent extension to dynamic problems.
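For reference, the Luce rule that the paper extends assigns choice probabilities (standard statement)
\[
\rho(x, A) \;=\; \frac{w(x)}{\sum_{y \in A} w(y)}
\]
for some strictly positive weight function w over alternatives; the stochastic-preference notion and the attribute rule are the paper's own constructions and are not reproduced here.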

16.
We consider a decision maker who ranks actions according to the smooth ambiguity criterion of Klibanoff, Marinacci, and Mukerji (2005). An action is justifiable if it is a best reply to some belief over probabilistic models. We show that higher ambiguity aversion expands the set of justifiable actions. A similar result holds for risk aversion. Our results follow from a generalization of the duality lemma of Wald (1949) and Pearce (1984).
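The smooth ambiguity criterion of Klibanoff, Marinacci, and Mukerji (2005) referred to above evaluates an action a by (standard statement)
\[
V(a) \;=\; \int_{\Delta(S)} \phi\!\left( \int_{S} u(a, s)\, d\pi(s) \right) d\mu(\pi),
\]
where \(\mu\) is a belief over probabilistic models \(\pi\) and greater concavity of \(\phi\) corresponds to greater ambiguity aversion.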

17.
We propose a semiparametric two-step inference procedure for a finite-dimensional parameter based on moment conditions constructed from high-frequency data. The population moment conditions take the form of temporally integrated functionals of state-variable processes that include the latent stochastic volatility process of an asset. In the first step, we nonparametrically recover the volatility path from high-frequency asset returns. The nonparametric volatility estimator is then used to form sample moment functions in the second-step GMM estimation, which requires the correction of a high-order nonlinearity bias from the first step. We show that the proposed estimator is consistent and asymptotically mixed Gaussian and propose a consistent estimator for the conditional asymptotic variance. We also construct a Bierens-type consistent specification test. These infill asymptotic results are based on a novel empirical-process-type theory for general integrated functionals of noisy semimartingale processes.
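A deliberately simplified sketch of the first step described above: recovering a spot-variance path from high-frequency returns by local averaging of squared returns. The paper's estimator, its high-order bias correction, and the second-step moment construction are more involved; the function name and the block-window choice here are illustrative only.

    import numpy as np

    def spot_variance_path(returns, dt, window):
        """Local realized-variance estimate of the spot variance path.

        returns: high-frequency log returns sampled every dt time units.
        window:  number of returns per local block.
        Returns one spot-variance estimate per non-overlapping block.
        """
        n_blocks = len(returns) // window
        r = np.asarray(returns)[: n_blocks * window].reshape(n_blocks, window)
        return (r ** 2).sum(axis=1) / (window * dt)

    # Second step (schematic): average a moment function g(spot_var, theta)
    # over blocks and choose theta to set the sample moments close to zero (GMM).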

18.
This paper makes the following original contributions to the literature. (i) We develop a simpler analytical characterization and numerical algorithm for Bayesian inference in structural vector autoregressions (VARs) that can be used for models that are overidentified, just-identified, or underidentified. (ii) We analyze the asymptotic properties of Bayesian inference and show that in the underidentified case, the asymptotic posterior distribution of contemporaneous coefficients in an n-variable VAR is confined to the set of values that orthogonalize the population variance-covariance matrix of ordinary least squares residuals, with the height of the posterior proportional to the height of the prior at any point within that set. For example, in a bivariate VAR for supply and demand identified solely by sign restrictions, if the population correlation between the VAR residuals is positive, then even if one has available an infinite sample of data, any inference about the demand elasticity is coming exclusively from the prior distribution. (iii) We provide analytical characterizations of the informative prior distributions for impulse-response functions that are implicit in the traditional sign-restriction approach to VARs, and we note, as a special case of result (ii), that the influence of these priors does not vanish asymptotically. (iv) We illustrate how Bayesian inference with informative priors can be both a strict generalization and an unambiguous improvement over frequentist inference in just-identified models. (v) We propose that researchers need to explicitly acknowledge and defend the role of prior beliefs in influencing structural conclusions, and we illustrate how this could be done using a simple model of the U.S. labor market.
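The "traditional sign-restriction approach" discussed in point (iii) is typically implemented along the following lines (a generic bivariate sketch, not the authors' algorithm; the variable ordering and the particular restrictions are illustrative assumptions): draw random rotations of a Cholesky factor of the residual covariance matrix and keep those whose impact responses satisfy the signs.

    import numpy as np

    def sign_restricted_impacts(sigma_u, n_draws=10_000, seed=0):
        """Accepted impact matrices for a bivariate (quantity, price) VAR.

        sigma_u: 2x2 reduced-form residual covariance matrix.
        Illustrative restrictions: a demand shock (column 0) moves quantity and
        price in the same direction; a supply shock (column 1) moves them in
        opposite directions.
        """
        rng = np.random.default_rng(seed)
        P = np.linalg.cholesky(sigma_u)
        accepted = []
        for _ in range(n_draws):
            Q, R = np.linalg.qr(rng.standard_normal((2, 2)))
            Q = Q @ np.diag(np.sign(np.diag(R)))   # normalize so rotations are Haar-uniform
            A = P @ Q                              # candidate impact matrix
            if A[0, 0] * A[1, 0] > 0 and A[0, 1] * A[1, 1] < 0:
                accepted.append(A)
        return accepted

The spread of the accepted impact matrices does not shrink as the sample grows, which is the asymptotic role of the implicit prior that results (ii) and (iii) above formalize.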

19.
We propose a novel model of stochastic choice: the single-crossing random utility model (SCRUM). This is a random utility model in which the collection of preferences satisfies the single-crossing property. We offer a characterization of SCRUMs based on two easy-to-check properties: the classic Monotonicity property and a novel condition, Centrality. The identified collection of preferences and associated probabilities is unique. We show that SCRUMs nest both single-peaked and single-dipped random utility models and establish a stochastic monotone comparative result for the case of SCRUMs.
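For orientation, a random utility model assigns to each alternative x in a menu A the probability (standard statement; \(\succ_1, \dots, \succ_n\) are strict preferences and \(\mu\) a probability over them)
\[
\rho(x, A) \;=\; \sum_{i \,:\, x \succ_i y \ \forall\, y \in A \setminus \{x\}} \mu(\succ_i),
\]
and a SCRUM additionally requires the collection \(\succ_1, \dots, \succ_n\) to satisfy the single-crossing property with respect to a fixed linear order on the alternatives.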

20.
We study the incentives that drive an online firm to make various types of innovations in a competitive environment. We develop and use a simplified price-competition model between two retailers, one online and one offline. A given fraction of consumers, called the Internet penetration, comparison shop online, independent of their customer type, thereby creating two markets for the offline retailer: a captive market and a competitive market. The online product has the steeper of the two linear utility functions, which means that the customers who buy online in our model are high end. We focus on the competitive region in which both retailers are (strictly) profitable in the competitive market and consider innovations that increase high-end appeal, increase low-end appeal, and/or reduce unit cost. We find that the online firm has a strong incentive to invest in innovations that reduce unit cost or, equivalently, increase the appeal to all consumers equally. Investments of this type are strategic complements: implementing one increases the value of another, so the value of two innovations of this type is more than the sum of the values of each individually. We identify a relative strength measure of the online firm such that, as its high-end appeal increases and/or its unit cost decreases, we say that the online firm is stronger. This strength measure facilitates drawing an explicit dividing line between strong and weak online firms. If Internet penetration increases, the online firm's profits increase if and only if it is strong. If penetration increases over time, it is possible for a strong firm to turn weak and see its profits decrease and possibly disappear completely. A strong online firm has more opportunity to profit from low-end innovations than does a weak one, while the opposite is true for high-end innovations. Interestingly, some innovations may actually decrease the online firm's profits. We discuss the implications of our results for existing and future online innovations.
