Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper extends the long‐term factorization of the stochastic discount factor introduced and studied by Alvarez and Jermann (2005) in discrete‐time ergodic environments and by Hansen and Scheinkman (2009) and Hansen (2012) in Markovian environments to general semimartingale environments. The transitory component discounts at the stochastic rate of return on the long bond and is factorized into discounting at the long‐term yield and a positive semimartingale that extends the principal eigenfunction of Hansen and Scheinkman (2009) to the semimartingale setting. The permanent component is a martingale that accomplishes a change of probabilities to the long forward measure, the limit of T‐forward measures. The change of probabilities from the data‐generating measure to the long forward measure absorbs the long‐term risk‐return trade‐off, which allows the long forward measure to be interpreted as the long‐term risk‐neutral measure.
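In the Markovian setting of Hansen and Scheinkman (2009) that this factorization generalizes, the decomposition can be written explicitly. A schematic version, with X the Markov state, ρ the long‐term yield, π the principal eigenfunction, and M the permanent martingale component (notation chosen here for illustration):

$$
S_t \;=\; \underbrace{e^{-\rho t}\,\frac{\pi(X_0)}{\pi(X_t)}}_{\text{transitory component}} \;\times\; \underbrace{M_t}_{\text{permanent component}}, \qquad \mathbb{E}\left[M_{t+s}\mid\mathcal{F}_t\right] = M_t .
$$

The martingale M accomplishes the change of probabilities to the long forward measure.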

2.
This paper develops a theory of optimal provision of commitment devices to people who value both commitment and flexibility and whose preferences differ in the degree of time inconsistency. If time inconsistency is observable, both a planner and a monopolist provide devices that help each person commit to the efficient level of flexibility. However, the combination of unobservable time inconsistency and preference for flexibility causes an adverse‐selection problem. To solve this problem, the monopolist and (possibly) the planner curtail flexibility in the device for a more inconsistent person at both ends of the efficient choice range; moreover, they may have to add unused options to the device for a less inconsistent person and also distort his actual choices. This theory has normative and positive implications for private and public provision of commitment devices.

3.
A major challenge for managers in turbulent environments is to make sound decisions quickly. Dynamic capabilities have been proposed as a means for addressing turbulent environments by helping managers extend, modify, and reconfigure existing operational capabilities into new ones that better match the environment. However, because dynamic capabilities have been viewed as an elusive black box, it is difficult for managers to make sound decisions in turbulent environments if they cannot effectively measure dynamic capabilities. Therefore, we first seek to propose a measurable model of dynamic capabilities by conceptualizing, operationalizing, and measuring dynamic capabilities. Specifically, drawing upon the dynamic capabilities literature, we identify a set of capabilities—sensing the environment, learning, coordinating, and integrating—that help reconfigure existing operational capabilities into new ones that better match the environment. Second, we propose a structural model where dynamic capabilities influence performance by reconfiguring existing operational capabilities in the context of new product development (NPD). Data from 180 NPD units support both the measurable model of dynamic capabilities and the structural model by which dynamic capabilities influence performance in NPD by reconfiguring operational capabilities, particularly at higher levels of environmental turbulence. By capturing the elusive black box of dynamic capabilities, the study offers implications for managerial decision making in turbulent environments.

4.
We provide general conditions under which principal‐agent problems with either one or multiple agents admit mechanisms that are optimal for the principal. Our results cover as special cases pure moral hazard and pure adverse selection. We allow multidimensional types, actions, and signals, as well as both financial and non‐financial rewards. Our results extend to situations in which there are ex ante or interim restrictions on the mechanism, and allow the principal to have decisions in addition to choosing the agent's contract. Beyond measurability, we require no a priori restrictions on the space of mechanisms. It is not unusual for randomization to be necessary for optimality and so it (should be and) is permitted. Randomization also plays an essential role in our proof. We also provide conditions under which some forms of randomization are unnecessary.

5.
A mixed manna contains goods (that everyone likes) and bads (that everyone dislikes), as well as items that are goods to some agents, but bads or satiated to others. If all items are goods and utility functions are homogeneous of degree 1 and concave (and monotone), the competitive division maximizes the Nash product of utilities (Gale–Eisenberg): hence it is welfarist (determined by the set of feasible utility profiles), unique, continuous, and easy to compute. We show that the competitive division of a mixed manna is still welfarist. If the zero utility profile is Pareto dominated, the competitive profile is strictly positive and still uniquely maximizes the product of utilities. If the zero profile is infeasible (for instance, if all items are bads), the competitive profiles are strictly negative and are the critical points of the product of disutilities on the efficiency frontier. The latter allows for multiple competitive utility profiles, from which no single‐valued selection can be continuous or resource monotonic. Thus the implementation of competitive fairness under linear preferences in interactive platforms like SPLIDDIT will be more difficult when the manna contains bads that overwhelm the goods.
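For the goods-only case, the Gale–Eisenberg result reduces competitive division to maximizing the Nash product of utilities. A minimal brute-force sketch for two agents with hypothetical linear valuations over two divisible goods; this illustrates the objective only, not the paper's method:

```python
# Two agents, two divisible goods; v[i][j] = agent i's value per unit of good j.
# Competitive division (Gale-Eisenberg, goods only) maximizes the Nash product u1 * u2.
v = [[2.0, 1.0],
     [1.0, 2.0]]  # hypothetical valuations

best = None
steps = 100
for a in range(steps + 1):
    for b in range(steps + 1):
        t1, t2 = a / steps, b / steps          # agent 1's share of each good
        u1 = v[0][0] * t1 + v[0][1] * t2
        u2 = v[1][0] * (1 - t1) + v[1][1] * (1 - t2)
        if best is None or u1 * u2 > best[0]:
            best = (u1 * u2, t1, t2)

nash_product, t1, t2 = best
```

At the optimum each agent receives all of the good she values more, so both utilities equal 2 and the Nash product is 4.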

6.
We develop an econometric methodology to infer the path of risk premia from a large unbalanced panel of individual stock returns. We estimate the time‐varying risk premia implied by conditional linear asset pricing models where the conditioning includes both instruments common to all assets and asset‐specific instruments. The estimator uses simple weighted two‐pass cross‐sectional regressions, and we show its consistency and asymptotic normality under increasing cross‐sectional and time series dimensions. We address consistent estimation of the asymptotic variance by hard thresholding, and testing for asset pricing restrictions induced by the no‐arbitrage assumption. We derive the restrictions given by a continuum of assets in a multi‐period economy under an approximate factor structure robust to asset repackaging. The empirical analysis on returns for about ten thousand U.S. stocks from July 1964 to December 2009 shows that risk premia are large and volatile in crisis periods. They exhibit large positive and negative strays from time‐invariant estimates, follow the macroeconomic cycles, and do not match risk premia estimates on standard sets of portfolios. The asset pricing restrictions are rejected for a conditional four‐factor model capturing market, size, value, and momentum effects.
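The weighted two-pass estimator itself involves conditioning instruments and thresholding, but the plain two-pass (Fama–MacBeth-style) logic it builds on is easy to illustrate: a time-series pass recovers each asset's beta, then a cross-sectional pass recovers the premium. A noise-free sketch on hypothetical data, not the paper's estimator:

```python
import statistics

# Hypothetical single-factor data, constructed so the premium is exactly 0.5.
true_lambda = 0.5
f = [1.0, -1.0, 2.0, 0.0, -2.0, 1.5]                 # factor realizations
f_dm = [x - statistics.mean(f) for x in f]           # de-meaned factor
betas_true = [0.5, 1.0, 1.5]
returns = [[b * x + b * true_lambda for x in f_dm] for b in betas_true]

# Pass 1: time-series regression of each asset's return on the factor -> beta_i.
var_f = sum(x * x for x in f_dm) / len(f_dm)
betas = []
for r in returns:
    r_dm = [x - statistics.mean(r) for x in r]
    cov = sum(a * b for a, b in zip(r_dm, f_dm)) / len(f_dm)
    betas.append(cov / var_f)

# Pass 2: cross-sectional regression (no intercept) of mean returns on betas -> lambda.
mean_r = [statistics.mean(r) for r in returns]
lam = sum(b * m for b, m in zip(betas, mean_r)) / sum(b * b for b in betas)
```

With noiseless data the second pass recovers the premium of 0.5 exactly; the paper's contribution is the behavior of the weighted version under large N and T with noise.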

7.
What does contract negotiation look like when some parties hold private information and negotiation frictions are negligible? This paper analyzes this question and provides a foundation for renegotiation‐proof contracts in this environment. The model extends the framework of the Coase conjecture to situations in which the quantity or quality of the good is endogenously determined and to more general environments in which preferences are nonseparable in the traded goods. As frictions become negligible, all equilibria converge to a unique outcome which is separating, efficient, and straightforward to characterize.

8.
This paper analyzes a sequential search model with adverse selection. We study information aggregation by the price—how close the equilibrium prices are to the full‐information prices—when search frictions are small. We identify circumstances under which prices fail to aggregate information well even when search frictions are small. We trace this to a strong form of the winner's curse that is present in the sequential search model. The failure of information aggregation may result in inefficient allocations.

9.
We study a dynamic principal–agent relationship with adverse selection and limited commitment. We show that when the relationship is subject to productivity shocks, the principal may be able to improve her value over time by progressively learning the agent's private information. She may even achieve her first‐best payoff in the long run. The relationship may also exhibit path dependence, with early shocks determining the principal's long‐run value. These findings contrast sharply with the results of the ratchet effect literature, in which the principal persistently obtains low payoffs, giving up substantial informational rents to the agent.

10.
We extend Kyle's (1985) model of insider trading to the case where noise trading volatility follows a general stochastic process. We determine conditions under which, in equilibrium, price impact and price volatility are both stochastic, driven by shocks to uninformed volume even though the fundamental value is constant. The volatility of price volatility appears ‘excessive’ because insiders choose to trade more aggressively (and thus more information is revealed) when uninformed volume is higher and price impact is lower. This generates a positive relation between price volatility and trading volume, giving rise to an endogenous subordinate stochastic process for prices.
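As a point of reference, in the static benchmark of Kyle (1985) with normally distributed fundamental value (standard deviation sigma_v) and noise-trader order flow (standard deviation sigma_u), price impact has the closed form lambda = sigma_v / (2 * sigma_u), so higher uninformed volume lowers price impact; the comparative static the stochastic-volatility extension makes dynamic. The formulas below are the textbook static benchmark, not the paper's model:

```python
def kyle_lambda(sigma_v, sigma_u):
    # Price impact in the single-period Kyle (1985) model:
    # lambda = sigma_v / (2 * sigma_u).  The insider's demand is
    # x = beta * (v - p0) with beta = sigma_u / sigma_v, so she trades
    # more aggressively exactly when uninformed volume is higher.
    return sigma_v / (2.0 * sigma_u)

quiet = kyle_lambda(1.0, 0.5)   # low uninformed volume -> high price impact
busy = kyle_lambda(1.0, 2.0)    # high uninformed volume -> low price impact
```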

11.
Consider a group of individuals with unobservable perspectives (subjective prior beliefs) about a sequence of states. In each period, each individual receives private information about the current state and forms an opinion (a posterior belief). She also chooses a target individual and observes the target's opinion. This choice involves a trade‐off between well‐informed targets, whose signals are precise, and well‐understood targets, whose perspectives are well known. Opinions are informative about the target's perspective, so observed individuals become better understood over time. We identify a simple condition under which long‐run behavior is history independent. When this fails, each individual restricts attention to a small set of experts and observes the most informed among these. A broad range of observational patterns can arise with positive probability, including opinion leadership and information segregation. In an application to areas of expertise, we show how these mechanisms generate own field bias and large field dominance.

12.
This paper axiomatizes an intertemporal version of the maxmin expected‐utility model. It employs two axioms specific to a dynamic setting. The first requires that smoothing consumption across states of the world is more beneficial to the individual than smoothing consumption across time. Such behavior is viewed as the intertemporal manifestation of ambiguity aversion. The second axiom extends Koopmans' notion of stationarity from deterministic to stochastic environments.

13.
We study the estimation of (joint) moments of microstructure noise based on high frequency data. The estimation is conducted under a nonparametric setting, which allows the underlying price process to have jumps, the observation times to be irregularly spaced, and the noise to be dependent on the price process and to have diurnal features. Estimators of arbitrary orders of (joint) moments are provided, for which we establish consistency as well as central limit theorems. In particular, we provide estimators of autocovariances and autocorrelations of the noise. Simulation studies demonstrate excellent performance of our estimators in the presence of jumps, irregular observation times, and even rounding. Empirical studies reveal (moderate) positive autocorrelations of microstructure noise for the stocks tested.
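To fix ideas about the target quantities, here is the sample autocovariance and lag-1 autocorrelation of a directly observed MA(1) noise series. This is a stylized illustration with a hypothetical data-generating process; the paper's estimators are different in kind, since the noise is never observed directly there:

```python
import random

random.seed(0)
# Hypothetical MA(1) microstructure noise: eps_t = e_t + 0.5 * e_{t-1},
# which has population lag-1 autocorrelation 0.5 / (1 + 0.25) = 0.4.
e = [random.gauss(0.0, 1.0) for _ in range(20001)]
eps = [e[t] + 0.5 * e[t - 1] for t in range(1, len(e))]

def autocov(x, lag):
    # sample autocovariance of the series x at the given lag
    m = sum(x) / len(x)
    n = len(x) - lag
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n)) / n

rho1 = autocov(eps, 1) / autocov(eps, 0)   # should be close to 0.4
```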

14.
This paper studies regulated health insurance markets known as exchanges, motivated by the increasingly important role they play in both public and private insurance provision. We develop a framework that combines data on health outcomes and insurance plan choices for a population of insured individuals with a model of a competitive insurance exchange to predict outcomes under different exchange designs. We apply this framework to examine the effects of regulations that govern insurers' ability to use health status information in pricing. We investigate the welfare implications of these regulations with an emphasis on two potential sources of inefficiency: (i) adverse selection and (ii) premium reclassification risk. We find substantial adverse selection leading to full unraveling of our simulated exchange, even when age can be priced. While the welfare cost of adverse selection is substantial when health status cannot be priced, that of reclassification risk is five times larger when insurers can price based on some health status information. We investigate several extensions including (i) contract design regulation, (ii) self‐insurance through saving and borrowing, and (iii) insurer risk adjustment transfers.

15.
Deviations from requirements during the product development process can be considered glitches. Fixing glitches, or problems, during the product development process consumes valuable resources, which may adversely affect product development time and hamper the firm's goal to pursue a first‐mover advantage. It is posited that an integrated organizational response can diminish incidences of glitches and improve the ability of the firm to respond to engineering changes, subsequently leading to improved market success. This organizational response frequently includes heavyweight product development managers who are seen as essential catalysts for internal integration. Though internal integration is vital, it is equally important to integrate with customers and suppliers alike because such network partners can provide access to information, knowledge, and unique and complementary resources that are otherwise unavailable to the firm. Findings, which are based on a sample of 191 product development projects in the automotive industry, suggest that some integration routines have a positive impact on product development outcomes and market success, while other routines can in fact hamper the collective effort.

16.
This paper introduces time‐varying grouped patterns of heterogeneity in linear panel data models. A distinctive feature of our approach is that group membership is left unrestricted. We estimate the parameters of the model using a “grouped fixed‐effects” estimator that minimizes a least squares criterion with respect to all possible groupings of the cross‐sectional units. Recent advances in the clustering literature allow for fast and efficient computation. We provide conditions under which our estimator is consistent as both dimensions of the panel tend to infinity, and we develop inference methods. Finally, we allow for grouped patterns of unobserved heterogeneity in the study of the link between income and democracy across countries.
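The grouped fixed-effects criterion, least squares minimized over all assignments of units to groups, can be approximated by a Lloyd-style alternation between assigning units and refitting group time profiles, in the spirit of k-means on the units' outcome paths. A toy sketch with no covariates; the data and group count are hypothetical:

```python
# Outcome paths for 4 units over 3 periods; two latent groups (hypothetical data).
Y = [[0.1, 1.0, 2.1],
     [0.0, 0.9, 2.0],
     [5.1, 4.9, 5.0],
     [4.9, 5.1, 5.2]]
G = 2

# Alternate: assign each unit to the group whose time profile fits it best
# in least squares, then refit each group profile as the member mean.
alpha = [list(Y[0]), list(Y[2])]          # initial group time profiles
for _ in range(20):
    groups = []
    for y in Y:
        errs = [sum((a - b) ** 2 for a, b in zip(y, alpha[g])) for g in range(G)]
        groups.append(errs.index(min(errs)))
    for g in range(G):
        members = [y for y, gi in zip(Y, groups) if gi == g]
        if members:
            alpha[g] = [sum(col) / len(col) for col in zip(*members)]
```

Like k-means, this alternation only finds a local minimum of the criterion; the paper's computational point is that clustering-literature techniques make the global search over groupings tractable.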

17.
This paper concerns the two‐stage game introduced in Nash (1953). It formalizes a suggestion made (but not pursued) by Nash regarding equilibrium selection in that game, and hence offers an arguably more solid foundation for the “Nash bargaining with endogenous threats” solution. Analogous reasoning is then applied to an infinite horizon game to provide equilibrium selection in two‐person repeated games with contracts. In this setting, issues about enforcement of threats are much less problematic than in Nash's static setting. The analysis can be extended to stochastic games with contracts.

18.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data‐rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function‐valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced‐form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post‐regularization and post‐selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced‐form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment‐condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions.
We provide results on honest inference for (function‐valued) parameters within this general framework where any high‐quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high‐dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity‐based estimation of regression functions for function‐valued outcomes.
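The role of orthogonal moments can be seen in the simplest linear case: estimating theta in y = theta * d + g(x) + u by first partialling x out of both y and d (Frisch–Waugh logic), which makes the final moment insensitive to small errors in the first-stage fits. A noise-free toy example with one control and linear nuisance functions; all data here are hypothetical:

```python
import statistics

def slope(z, w):
    # OLS slope of w on z (single regressor plus intercept)
    mz, mw = statistics.mean(z), statistics.mean(w)
    return sum((a - mz) * (b - mw) for a, b in zip(z, w)) / sum((a - mz) ** 2 for a in z)

def residualize(w, z):
    # residuals of w after OLS of w on z
    s, mz, mw = slope(z, w), statistics.mean(z), statistics.mean(w)
    return [wi - (mw + s * (zi - mz)) for wi, zi in zip(w, z)]

# y = theta * d + 3 * x, with d correlated with x (noise-free construction).
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
d = [2 * xi + s for xi, s in zip(x, [-0.5, 0.5, -0.5, 0.5, -0.5, 0.5])]
theta = 1.5
y = [theta * di + 3 * xi for di, xi in zip(d, x)]

# Orthogonal ("partialling-out") estimate: regress residuals on residuals.
theta_hat = slope(residualize(d, x), residualize(y, x))
```

In high dimensions the two residualizations are done with machine learning on split samples; the orthogonality of the final moment is what delivers uniformly valid inference despite regularization bias in those fits.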

19.
This paper proposes a perfectly competitive model of a market with adverse selection. Prices are determined by zero‐profit conditions, and the set of traded contracts is determined by free entry. Crucially for applications, contract characteristics are endogenously determined, consumers may have multiple dimensions of private information, and an equilibrium always exists. Equilibrium corresponds to the limit of a differentiated products Bertrand game. We apply the model to establish theoretical results on the equilibrium effects of mandates. Mandates can increase efficiency but have unintended consequences. With adverse selection, an insurance mandate reduces the price of low‐coverage policies, which necessarily has indirect effects such as increasing adverse selection on the intensive margin and causing some consumers to purchase less coverage.

20.
Online markets like eBay, Amazon, and others rely on electronic reputation or feedback systems to curtail adverse selection and moral hazard risks and promote trust among participants in the marketplace. These systems are based on the idea that providing information about a trader's past behavior (performance on previous market transactions) allows market participants to form judgments regarding the trustworthiness of potential interlocutors in the marketplace. It is often assumed, however, that traders correctly process the data presented by these systems when updating their initial beliefs. In this article, we demonstrate that this assumption does not hold. Using a controlled laboratory experiment simulating an online auction site with 127 participants acting as buyers, we find that participants interpret seller feedback information in a biased (non‐Bayesian) fashion, overemphasizing the compositional strength (i.e., the proportion of positive ratings) of the reputational information and underemphasizing the weight (predictive validity) of the evidence as represented by the total number of transactions rated. Significantly, we also find that the degree to which buyers misweigh seller feedback information is moderated by the presentation format of the feedback system as well as attitudinal and psychological attributes of the buyer. Specifically, we find that buyers process feedback data presented in an Amazon‐like format—a format that more prominently emphasizes the strength dimension of feedback information—in a more biased (less‐Bayesian) manner than identical ratings data presented using an eBay‐like format. We further find that participants with greater institution‐based trust (i.e., structural assurance) and prior online shopping experience interpreted feedback data in a more biased (less‐Bayesian) manner. The implications of these findings for both research and practice are discussed.
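The Bayesian benchmark against which this strength/weight bias is measured can be sketched with a simple Beta–binomial update. The uniform prior and the rating counts below are hypothetical, chosen only to illustrate the trade-off, and are not taken from the experiment:

```python
def posterior_mean_positive(positives, total, a=1.0, b=1.0):
    # Beta(a, b) prior on the seller's true positive-rating rate;
    # posterior mean after observing `positives` out of `total` ratings.
    return (positives + a) / (total + a + b)

# 100% positive over 10 ratings vs. 97% positive over 300 ratings.
small_perfect = posterior_mean_positive(10, 10)     # high strength, low weight
large_good = posterior_mean_positive(291, 300)      # lower strength, high weight
```

A strength-focused buyer prefers the 100% seller, while the Bayesian posterior mean favors the larger sample (about 0.967 vs. about 0.917); the experiment's subjects systematically underweight the sample-size side of this comparison.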


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号