Similar Articles
20 similar articles found.
1.
We characterize a generalization of discounted logistic choice that incorporates a parameter to capture different views the agent might have about the costs and benefits of larger choice sets. The discounted logit model used in the empirical literature is the special case that displays a “preference for flexibility” in the sense that the agent always prefers to add additional items to a menu. Other cases display varying levels of “choice aversion,” where the agent prefers to remove items from a menu if their ex ante value is below a threshold. We show that higher choice aversion, as measured by dislike of bigger menus, also corresponds to an increased preference for putting decisions off for as long as possible.
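A minimal numeric sketch of the threshold idea, not the paper's axiomatized representation: value a menu by the logit logsumexp term minus an assumed menu-size penalty kappa * log|A|. With kappa = 0, adding any item raises the menu's value (preference for flexibility); for larger kappa, a low ex ante value item is no longer worth adding. All parameter values are illustrative.

```python
import math

def menu_value(utilities, kappa, eta=1.0):
    """Logsumexp (logit) value of a menu, minus an assumed
    menu-size penalty kappa * log|A| (illustrative parameterization)."""
    lse = (1.0 / eta) * math.log(sum(math.exp(eta * u) for u in utilities))
    return lse - kappa * math.log(len(utilities))

small = [1.0, 0.8]
big = small + [0.1]            # append one low ex ante value item
for kappa in (0.0, 1.0, 2.0):
    print(kappa, menu_value(big, kappa) > menu_value(small, kappa))
# kappa = 0 prints True (flexibility); for larger kappa the low-value
# item falls below the threshold and the agent prefers the smaller menu.
```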

2.
Supply chain risk uncertainty can create severe repercussions, so it is not surprising that research interest in supply chain risk has been growing. While extant inquiry is informative, there is a lack of investigations centering on supply chain investment decisions made under high levels of risk uncertainty. Given the potential dollar value involved in these decisions, an understanding of how they are made is of significant theoretical and practical importance. Real options theory, with its focus on decision making under conditions of uncertainty, is an appealing theoretical lens for this endeavor. In essence, real options theory asserts that managerial decisions center on creating and then exercising or not exercising certain opportunities. To date, theorizing about and investigations of real options have taken the firm as their focus; real options within supply chains, which cross firm boundaries and drive much of the competitive activity in the modern economy, have not yet been examined. Accordingly, we extend real options theory to the supply chain context by examining how different types of options are approached relative to supply chain project investments. Specifically, we theorize how the options will be related to perceived value under conditions of high supply chain risk uncertainty. Overall, our investigation builds knowledge by extending real options theory to the supply chain context and by providing evidence suggesting that some options operate differently in supply chains than they do in firms.

3.
This paper develops a dynamic model of neighborhood choice along with a computationally light multi‐step estimator. The proposed empirical framework captures observed and unobserved preference heterogeneity across households and locations in a flexible way. We estimate the model using a newly assembled data set that matches demographic information from mortgage applications to the universe of housing transactions in the San Francisco Bay Area from 1994 to 2004. The results provide the first estimates of the marginal willingness to pay for several non‐marketed amenities—neighborhood air pollution, violent crime, and racial composition—in a dynamic framework. Comparing these estimates with those from a static version of the model highlights several important biases that arise when dynamic considerations are ignored.

4.
This paper studies the impact of time‐varying idiosyncratic risk at the establishment level on unemployment fluctuations over 1972–2009. I build a tractable directed search model with firm dynamics and time‐varying idiosyncratic volatility. The model allows for endogenous separations, entry and exit, and job‐to‐job transitions. I show that the model can replicate salient features of the microeconomic behavior of firms and that the introduction of volatility improves the fit of the model for standard business cycle moments. In a series of counterfactual experiments, I show that time‐varying risk is important to account for the magnitude of fluctuations in aggregate unemployment for past U.S. recessions. Though the model can account for about 40% of the total increase in unemployment for the 2007–2009 recession, uncertainty alone is not sufficient to explain the magnitude and persistence of unemployment during that episode.

5.
Can increased uncertainty about the future cause a contraction in output and its components? An identified uncertainty shock in the data causes significant declines in output, consumption, investment, and hours worked. Standard general‐equilibrium models with flexible prices cannot reproduce this comovement. However, uncertainty shocks can easily generate comovement with countercyclical markups through sticky prices. Monetary policy plays a key role in offsetting the negative impact of uncertainty shocks during normal times. Higher uncertainty has even more negative effects if monetary policy can no longer perform its usual stabilizing function because of the zero lower bound. We calibrate our uncertainty shock process using fluctuations in implied stock market volatility, and show that the model with nominal price rigidity is consistent with empirical evidence from a structural vector autoregression. We argue that increased uncertainty about the future likely played a role in worsening the Great Recession. The economic mechanism we identify applies to a large set of shocks that change expectations of the future without changing current fundamentals.

6.
I introduce a model of undirected dyadic link formation which allows for assortative matching on observed agent characteristics (homophily) as well as unrestricted agent‐level heterogeneity in link surplus (degree heterogeneity). As in fixed effects panel data analyses, the joint distribution of observed and unobserved agent‐level characteristics is left unrestricted. Two estimators for the (common) homophily parameter, β0, are developed and their properties studied under an asymptotic sequence involving a single network growing large. The first, tetrad logit (TL), estimator conditions on a sufficient statistic for the degree heterogeneity. The second, joint maximum likelihood (JML), estimator treats the degree heterogeneity parameters {A_i0}, i = 1, …, N, as additional (incidental) parameters to be estimated. The TL estimate is consistent under both sparse and dense graph sequences, whereas consistency of the JML estimate is shown only under dense graph sequences.
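A minimal simulation of the dyadic logit data-generating process described above. The homophily regressor w_ij = -|x_i - x_j| and all parameter values are illustrative assumptions, and the sketch stops at simulation rather than implementing the TL or JML estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
N, beta0 = 50, 1.5                         # illustrative values

x = rng.normal(size=N)                     # observed characteristic
A = rng.normal(scale=0.5, size=N)          # unobserved degree heterogeneity

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

links = np.zeros((N, N), dtype=int)
for i in range(N):
    for j in range(i + 1, N):
        w_ij = -abs(x[i] - x[j])           # homophily: similar agents link more
        p = sigmoid(beta0 * w_ij + A[i] + A[j])
        links[i, j] = links[j, i] = int(rng.random() < p)

print("density:", links.sum() / (N * (N - 1)))
print("degree min/max:", links.sum(axis=1).min(), links.sum(axis=1).max())
```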

7.
The paper analyzes dynamic principal–agent models with short period lengths. The two main contributions are: (i) an analytic characterization of the values of optimal contracts in the limit as the period length goes to 0, and (ii) the construction of relatively simple (almost) optimal contracts for fixed period lengths. Our setting is flexible and includes the pure hidden action or pure hidden information models as special cases. We show how such details of the underlying information structure affect the optimal provision of incentives and the value of the contracts. The dependence is very tractable and we obtain sharp comparative statics results. The results are derived with a novel method that uses a quadratic approximation of the Pareto boundary of the equilibrium value set.

8.
This paper axiomatizes an intertemporal version of the maxmin expected‐utility model. It employs two axioms specific to a dynamic setting. The first requires that smoothing consumption across states of the world is more beneficial to the individual than smoothing consumption across time. Such behavior is viewed as the intertemporal manifestation of ambiguity aversion. The second axiom extends Koopmans' notion of stationarity from deterministic to stochastic environments.

9.
The availability of high frequency financial data has generated a series of estimators based on intra‐day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create “feasible statistics” for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θ_t dt, where the spot parameter process θ_t can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two‐scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as those for leverage effects, high‐frequency betas, and semivariance.

10.
This paper extends the long‐term factorization of the stochastic discount factor introduced and studied by Alvarez and Jermann (2005) in discrete‐time ergodic environments and by Hansen and Scheinkman (2009) and Hansen (2012) in Markovian environments to general semimartingale environments. The transitory component discounts at the stochastic rate of return on the long bond and is factorized into discounting at the long‐term yield and a positive semimartingale that extends the principal eigenfunction of Hansen and Scheinkman (2009) to the semimartingale setting. The permanent component is a martingale that accomplishes a change of probabilities to the long forward measure, the limit of T‐forward measures. The change of probabilities from the data‐generating to the long forward measure absorbs the long‐term risk‐return trade‐off and interprets the latter as the long‐term risk‐neutral measure.

11.
U.S. data reveal three facts: (1) the share of goods in total expenditure declines at a constant rate over time, (2) the price of goods relative to services declines at a constant rate over time, and (3) poor households spend a larger fraction of their budget on goods than do rich households. I provide a macroeconomic model with non‐Gorman preferences that rationalizes these facts, along with the aggregate Kaldor facts. The model is parsimonious and admits an analytical solution. Its functional form allows a decomposition of U.S. structural change into an income and substitution effect. Estimates from micro data show each of these effects to be of roughly equal importance.

12.
We solve a general class of dynamic rational inattention problems in which an agent repeatedly acquires costly information about an evolving state and selects actions. The solution resembles the choice rule in a dynamic logit model, but it is biased toward an optimal default rule that is independent of the realized state. The model provides the same fit to choice data as dynamic logit, but, because of the bias, yields different counterfactual predictions. We apply the general solution to the study of (i) the status quo bias; (ii) inertia in actions leading to lagged adjustments to shocks; and (iii) the tradeoff between accuracy and delay in decision‐making.
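The dynamic solution is involved, but its static analogue, the rational-inattention logit of Matějka and McKay (2015), can be computed with a short Blahut–Arimoto-style fixed-point iteration and exhibits the same structure: logit choice tilted toward a state-independent default rule. A sketch under illustrative parameters:

```python
import numpy as np

def ri_logit(u, mu, lam, iters=500):
    """Static rational-inattention choice via Blahut-Arimoto iteration.

    u   : (n_states, n_actions) payoff matrix
    mu  : prior over states
    lam : Shannon information cost (lower = more attentive)
    Returns the default rule p0 and conditional probabilities P[theta, a].
    """
    n_states, n_actions = u.shape
    p0 = np.full(n_actions, 1.0 / n_actions)
    w = np.exp(u / lam)
    for _ in range(iters):
        P = p0 * w
        P /= P.sum(axis=1, keepdims=True)   # logit, tilted toward default p0
        p0 = mu @ P                         # default = prior-weighted average rule
    return p0, P

u = np.array([[1.0, 0.0],
              [0.0, 1.0]])                  # each action is right in one state
mu = np.array([0.7, 0.3])
p0, P = ri_logit(u, mu, lam=1.0)
print(p0)   # default biased toward the action right in the likelier state
print(P)    # conditional choices: logit probabilities biased toward p0
```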

13.
We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (2012). Their implementation of the nested fixed point algorithm used successive approximations to solve the inner fixed point problem (NFXP‐SA). We redo their comparison using the more efficient version of NFXP proposed by Rust (1987), which combines successive approximations and Newton–Kantorovich iterations to solve the fixed point problem (NFXP‐NK). We show that MPEC and NFXP are similar in speed and numerical performance when the more efficient NFXP‐NK variant is used.
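A generic sketch of the NFXP-NK poly-algorithm applied to a logit (logsumexp) Bellman operator: run a few successive-approximation warm-up steps, then switch to Newton–Kantorovich iterations, each of which solves a linear system in the Fréchet derivative of the operator. The fixed warm-up length and all parameter values below are simplifications; Rust's implementation switches between the two phases adaptively.

```python
import numpy as np

def solve_logit_bellman(u, P, beta, n_warmup=20, tol=1e-12):
    """Fixed point of T(V)(s) = log sum_a exp(u[a,s] + beta*(P[a] @ V)(s)),
    via successive approximations followed by Newton-Kantorovich steps.

    u : (n_actions, n_states) flow utilities
    P : (n_actions, n_states, n_states) row-stochastic transitions
    """
    n_actions, n_states = u.shape
    V = np.zeros(n_states)

    def T(V):
        z = u + beta * np.einsum('aij,j->ai', P, V)      # action-specific values
        m = z.max(axis=0)
        return m + np.log(np.exp(z - m).sum(axis=0))     # stable logsumexp

    for _ in range(n_warmup):                            # contraction warm-up
        V = T(V)

    I = np.eye(n_states)
    for _ in range(50):                                  # Newton-Kantorovich
        z = u + beta * np.einsum('aij,j->ai', P, V)
        p = np.exp(z - z.max(axis=0))
        p /= p.sum(axis=0)                               # choice probabilities
        Tprime = beta * sum(p[a][:, None] * P[a] for a in range(n_actions))
        step = np.linalg.solve(I - Tprime, T(V) - V)     # (I-T')^{-1}(T(V)-V)
        V = V + step
        if np.abs(step).max() < tol:                     # quadratic convergence
            break
    return V

rng = np.random.default_rng(1)
n = 5
u = rng.normal(size=(2, n))
P = rng.random((2, n, n))
P /= P.sum(axis=2, keepdims=True)
print(solve_logit_bellman(u, P, beta=0.99).round(3))
```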

14.
Two fundamental axioms in social choice theory are consistency with respect to a variable electorate and consistency with respect to components of similar alternatives. In the context of traditional non‐probabilistic social choice, these axioms are incompatible with each other. We show that in the context of probabilistic social choice, these axioms uniquely characterize a function proposed by Fishburn (1984). Fishburn's function returns so‐called maximal lotteries, that is, lotteries that correspond to optimal mixed strategies in the symmetric zero‐sum game induced by the pairwise majority margins. Maximal lotteries are guaranteed to exist due to von Neumann's Minimax Theorem, are almost always unique, and can be efficiently computed using linear programming.
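A sketch of the linear-programming computation: given the skew-symmetric matrix of pairwise majority margins, a maximal lottery is a maximin strategy of the induced symmetric zero-sum game, whose value is 0. The example is a three-alternative Condorcet cycle, for which the maximal lottery is uniform. The LP formulation and solver choice below are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def maximal_lottery(M):
    """Maximal lottery for a skew-symmetric majority-margin matrix M:
    solve max_p min_y sum_x p(x) * M[x, y] as a linear program."""
    m = M.shape[0]
    # variables: (p_1, ..., p_m, v); maximize v -> minimize -v
    c = np.zeros(m + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-M.T, np.ones((m, 1))])   # v - (M^T p)_y <= 0 for all y
    b_ub = np.zeros(m)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                           # probabilities sum to 1
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]   # p >= 0, v free
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.x[:m]

# Majority margins for a Condorcet cycle a > b > c > a (margin 1 each)
M = np.array([[ 0,  1, -1],
              [-1,  0,  1],
              [ 1, -1,  0]], dtype=float)
print(maximal_lottery(M))   # uniform lottery (1/3, 1/3, 1/3)
```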

15.
We develop a theory of parent‐child relations that rationalizes the choice between alternative parenting styles (as set out in Baumrind, 1967). Parents maximize an objective function that combines Beckerian altruism and paternalism towards children. They can affect their children's choices via two channels: either by influencing children's preferences or by imposing direct restrictions on their choice sets. Different parenting styles (authoritarian, authoritative, and permissive) emerge as equilibrium outcomes and are affected both by parental preferences and by the socioeconomic environment. Parenting style, in turn, feeds back into the children's welfare and economic success. The theory is consistent with the decline of authoritarian parenting observed in industrialized countries and with the greater prevalence of more permissive parenting in countries characterized by low inequality.

16.
We explore the impact of private information in sealed‐bid first‐price auctions. For a given symmetric and arbitrarily correlated prior distribution over values, we characterize the lowest winning‐bid distribution that can arise across all information structures and equilibria. The information and equilibrium attaining this minimum leave bidders indifferent between their equilibrium bids and all higher bids. Our results provide lower bounds for bids and revenue with asymmetric distributions over values. We also report further characterizations of revenue and bidder surplus including upper bounds on revenue. Our work has implications for the identification of value distributions from data on winning bids and for the informationally robust comparison of alternative auction mechanisms.

17.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data‐rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function‐valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced‐form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post‐regularization and post‐selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced‐form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment‐condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function‐valued) parameters within this general framework where any high‐quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high‐dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity‐based estimation of regression functions for function‐valued outcomes.
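The full LATE/LQTE machinery is beyond a short example, but the key ingredient, orthogonal (doubly robust) moment conditions combined with cross-fitting, can be sketched in the partially linear special case y = d*theta + g(X) + e: partial out E[y|X] and E[d|X] with any ML learner, then regress residuals on residuals. The learners, data-generating process, and plug-in standard error below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def double_ml_plm(y, d, X, n_folds=5, seed=0):
    """Cross-fitted partialling-out estimate of theta in
    y = d*theta + g(X) + e, using an orthogonal (Neyman) score."""
    res_y = np.zeros_like(y, dtype=float)
    res_d = np.zeros_like(d, dtype=float)
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        my = RandomForestRegressor(random_state=seed).fit(X[train], y[train])
        md = RandomForestRegressor(random_state=seed).fit(X[train], d[train])
        res_y[test] = y[test] - my.predict(X[test])   # partial out E[y|X]
        res_d[test] = d[test] - md.predict(X[test])   # partial out E[d|X]
    theta = (res_d @ res_y) / (res_d @ res_d)
    psi = (res_y - theta * res_d) * res_d             # orthogonal score
    se = np.sqrt(np.mean(psi ** 2)) / (np.mean(res_d ** 2) * np.sqrt(len(y)))
    return theta, se

rng = np.random.default_rng(0)
n, p = 2000, 10
X = rng.normal(size=(n, p))
d = np.sin(X[:, 0]) + rng.normal(size=n)              # confounded treatment
y = 0.5 * d + X[:, 0] ** 2 + rng.normal(size=n)       # true theta = 0.5
print(double_ml_plm(y, d, X))
```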

18.
The theory of continuous time games (Simon and Stinchcombe (1989), Bergin and MacLeod (1993)) shows that continuous time interactions can generate very different equilibrium behavior than conventional discrete time interactions. We introduce new laboratory methods that allow us to eliminate natural inertia in subjects' decisions in continuous time experiments, thereby satisfying critical premises of the theory and enabling a first‐time direct test. Applying these new methods to a simple timing game, we find strikingly large gaps in behavior between discrete and continuous time as the theory suggests. Reintroducing natural inertia into these games causes continuous time behavior to collapse to discrete time‐like levels in some settings as predicted by subgame perfect Nash equilibrium. However, contra this prediction, the strength of this effect is fundamentally shaped by the severity of inertia: behavior tends towards discrete time benchmarks as inertia grows large and perfectly continuous time benchmarks as it falls towards zero. We provide evidence that these results are due to changes in the nature of strategic uncertainty as inertia approaches the continuous limit.

19.
A firm's distribution channels represent a key portfolio of resources that can be leveraged for competitive advantage. One approach to this portfolio that has become increasingly important in recent years is multichannel distribution (MCD). While this strategy has important benefits in terms of market coverage and firm performance, the use of multiple channels seriously affects downstream channel roles such as service delivery, as the financial rewards to channel members and the services they offer are separated. A channel member who offers poor or no service can free‐ride on the services offered to the same customer from a different channel. We draw on agency theory to explain these negative consequences. Additionally, the resource‐based view of the firm along with capabilities theory provides two key means of alleviating these consequences: channel tracking capabilities and reward alignment capabilities. The study, conducted in an industry facing serious MCD issues (the outdoor sporting goods industry), used key informant data matched to secondary data. Our results show that managers can reap the performance rewards of MCD strategies while minimizing their negative consequences. In particular, monitoring practices such as frequent site visits and phone contact with customers develop the firm's channel tracking capabilities, allowing managers to better monitor downstream activities. This becomes particularly important as the complexity from having multiple channels increases. Likewise, reward alignment capabilities such as retail price maintenance agreements and cooperative advertising enable the manager to minimize conflict among channel participants by ensuring sufficient profitability for all channel members.

20.
Mechanism design enables a social planner to obtain a desired outcome by leveraging the players' rationality and their beliefs. It is thus a fundamental, but as yet unproven, intuition that the higher the level of rationality of the players, the better the set of obtainable outcomes. In this paper, we prove this fundamental intuition for players with possibilistic beliefs, a model long considered in epistemic game theory. Specifically:
• We define a sequence of monotonically increasing revenue benchmarks for single‐good auctions, G0 ≤ G1 ≤ G2 ≤ ⋯, where each Gi is defined over the players' beliefs and G0 is the second‐highest valuation (i.e., the revenue benchmark achieved by the second‐price mechanism).
• We (1) construct a single, interim individually rational auction mechanism that, without any clue about the rationality level of the players, guarantees revenue Gk if all players have rationality level ≥ k + 1, and (2) prove that no such mechanism can guarantee revenue even close to Gk when at least two players are at most level‐k rational.
