Similar Documents
20 similar documents found (search time: 31 ms)
1.
The past forty years have seen a rapid rise in top income inequality in the United States. While there are many existing theories of the Pareto tail of the long‐run income distribution, almost none of these address the fast rise in top inequality observed in the data. We show that standard theories, which build on a random growth mechanism, generate transition dynamics that are too slow relative to those observed in the data. We then suggest two parsimonious deviations from the canonical model that can explain such changes: “scale dependence,” which may arise from changes in skill prices, and “type dependence,” that is, the presence of some “high‐growth types.” These deviations are consistent with theories in which the increase in top income inequality is driven by the rise of “superstar” entrepreneurs or managers.
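
As a rough illustration of the transition-speed point made above, the following Python sketch simulates log incomes under (a) a pure random-growth process with a common drift and volatility and (b) the same process with a small share of hypothetical "high-growth types" who receive an extra drift. The population size, horizon, and all parameter values are illustrative assumptions, not the paper's calibration.

import numpy as np

rng = np.random.default_rng(0)
N, T = 100_000, 40                  # agents and years (illustrative values)
log_y = rng.normal(0.0, 0.5, N)     # initial log incomes

def top_share(log_income, q=0.99):
    y = np.exp(log_income)
    cutoff = np.quantile(y, q)
    return y[y >= cutoff].sum() / y.sum()

# (a) pure random growth: identical drift and volatility for everyone
# (b) type dependence: 1% of agents are hypothetical "high-growth types" with extra drift
high = rng.random(N) < 0.01
y_a, y_b = log_y.copy(), log_y.copy()
for t in range(T):
    y_a += rng.normal(0.02, 0.15, N)
    y_b += rng.normal(0.02, 0.15, N) + 0.08 * high

print("top-1% share, random growth only   :", round(top_share(y_a), 3))
print("top-1% share, with high-growth types:", round(top_share(y_b), 3))

Running this shows the top-1% share rising far more quickly in scenario (b), which mimics the fast transition that, per the abstract, the canonical random-growth model produces only slowly.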

2.
We provide the first analysis of altruism in networks. Agents are embedded in a fixed network and care about the well‐being of their network neighbors. Depending on incomes, they may provide financial support to their poorer friends. We study the Nash equilibria of the resulting game of transfers. We show that equilibria maximize a concave potential function. We establish existence, uniqueness of equilibrium consumption, and generic uniqueness of equilibrium transfers. We characterize the geometry of the network of transfers and highlight the key role played by transfer intermediaries. We then study comparative statics. A positive income shock to an individual benefits all. For small changes in incomes, agents in a component of the network of transfers act as if they were organized in an income‐pooling community. A decrease in income inequality or expansion of the altruism network may increase consumption inequality.

3.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data‐rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function‐valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced‐form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post‐regularization and post‐selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced‐form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment‐condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function‐valued) parameters within this general framework where any high‐quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high‐dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity‐based estimation of regression functions for function‐valued outcomes.
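
The orthogonal (doubly robust) moment idea mentioned above can be illustrated in its simplest setting, a partially linear model with exogenous treatment, by using lasso to residualize both the outcome and the treatment before a final regression of residuals on residuals. This is only a minimal sketch of the partialling-out construction on simulated data; the variable names, the two-fold cross-fitting, and the lasso tuning are illustrative assumptions, and none of this reproduces the paper's LATE/LQTE machinery.

import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 1000, 200
X = rng.normal(size=(n, p))
g = X[:, 0] + 0.5 * X[:, 1]               # sparse outcome nuisance
m = 0.8 * X[:, 0] - 0.5 * X[:, 2]         # sparse treatment nuisance
d = m + rng.normal(size=n)                # "treatment"
theta = 0.5
y = theta * d + g + rng.normal(size=n)

# Cross-fitted partialling-out: residualize y and d on X with lasso,
# then regress the y-residuals on the d-residuals (an orthogonal moment).
folds = np.array_split(rng.permutation(n), 2)
ry, rd = np.zeros(n), np.zeros(n)
for k in range(2):
    test, train = folds[k], folds[1 - k]
    ry[test] = y[test] - LassoCV(cv=5).fit(X[train], y[train]).predict(X[test])
    rd[test] = d[test] - LassoCV(cv=5).fit(X[train], d[train]).predict(X[test])

theta_hat = (rd @ ry) / (rd @ rd)
se = np.sqrt(np.mean((rd * (ry - theta_hat * rd)) ** 2) / n) / np.mean(rd ** 2)
print(f"theta_hat = {theta_hat:.3f} (true {theta}), se ~ {se:.3f}")

Because the final moment condition is orthogonal to small errors in the two lasso fits, the estimate of theta remains approximately unbiased even though the nuisance functions are estimated with regularization.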

4.
Consider a group of individuals with unobservable perspectives (subjective prior beliefs) about a sequence of states. In each period, each individual receives private information about the current state and forms an opinion (a posterior belief). She also chooses a target individual and observes the target's opinion. This choice involves a trade‐off between well‐informed targets, whose signals are precise, and well‐understood targets, whose perspectives are well known. Opinions are informative about the target's perspective, so observed individuals become better understood over time. We identify a simple condition under which long‐run behavior is history independent. When this fails, each individual restricts attention to a small set of experts and observes the most informed among these. A broad range of observational patterns can arise with positive probability, including opinion leadership and information segregation. In an application to areas of expertise, we show how these mechanisms generate own field bias and large field dominance.

5.
We propose a novel technique to boost the power of testing a high‐dimensional vector H0 : θ = 0 against sparse alternatives where the null hypothesis is violated by only a few components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low powers due to the accumulation of errors in estimating high‐dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to the slow convergence. Based on a screening technique, we introduce a “power enhancement component,” which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. The proposed methods are then applied to testing the factor pricing models and validating the cross‐sectional independence in panel data models.
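
A toy version of the construction described above, assuming independent components with known standard errors so that the pivotal part can be a standardized sum of squared t-statistics. The screening threshold used below is a simplified stand-in for the paper's slowly diverging threshold, and the whole example is a sketch under these assumptions, not the paper's implementation.

import numpy as np
from scipy.stats import norm

def power_enhanced_test(theta_hat, se, alpha=0.05):
    """Toy power-enhancement test of H0: theta = 0, assuming independent components."""
    p = theta_hat.size
    t = theta_hat / se                          # standardized components
    # J1: asymptotically pivotal quadratic-form statistic, standardized so that
    # J1 -> N(0, 1) under H0 when the components are independent.
    J1 = (np.sum(t ** 2) - p) / np.sqrt(2 * p)
    # J0: screening-based power enhancement component; zero with high probability
    # under H0, diverging under sparse alternatives.
    delta = 2.0 * np.sqrt(np.log(p))
    J0 = np.sqrt(p) * np.sum(t[np.abs(t) > delta] ** 2)
    return J0 + J1, (J0 + J1) > norm.ppf(1 - alpha)

rng = np.random.default_rng(2)
p, n = 500, 400
se = np.full(p, 1 / np.sqrt(n))
_, reject_null = power_enhanced_test(rng.normal(0.0, se), se)        # H0 true
theta = np.zeros(p); theta[:3] = 0.3                                 # sparse alternative
_, reject_alt = power_enhanced_test(rng.normal(theta, se), se)
print("reject under H0:", reject_null, "| reject under sparse alternative:", reject_alt)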

6.
We argue that poverty can perpetuate itself by undermining the capacity for self‐control. In line with a distinguished psychological literature, we consider modes of self‐control that involve the self‐imposed use of contingent punishments and rewards. We study settings in which consumers with quasi‐hyperbolic preferences confront an otherwise standard intertemporal allocation problem with credit constraints. Our main result demonstrates that low initial assets can limit self‐control, trapping people in poverty, while individuals with high initial assets can accumulate indefinitely. Thus, even temporary policies that initiate accumulation among the poor may be effective. We examine implications concerning the effect of access to credit on saving, the demand for commitment devices, the design of financial accounts to promote accumulation, and the variation of the marginal propensity to consume across income from different sources. We also explore the nature of optimal self‐control, demonstrating that it has a simple and behaviorally plausible structure that is immune to self‐renegotiation.

7.
This paper uses the information contained in the joint dynamics of individuals' labor earnings and consumption‐choice decisions to quantify both the amount of income risk that individuals face and the extent to which they have access to informal insurance against this risk. We accomplish this task by using indirect inference to estimate a structural consumption–savings model, in which individuals both learn about the nature of their income process and partly insure shocks via informal mechanisms. In this framework, we estimate (i) the degree of partial insurance, (ii) the extent of systematic differences in income growth rates, (iii) the precision with which individuals know their own income growth rates when they begin their working lives, (iv) the persistence of typical labor income shocks, (v) the tightness of borrowing constraints, and (vi) the amount of measurement error in the data. In implementing indirect inference, we find that an auxiliary model that approximates the true structural equations of the model (which are not estimable) works very well, with negligible small sample bias. The main substantive findings are that income shocks are moderately persistent, systematic differences in income growth rates are large, individuals have substantial amounts of information about their income growth rates, and about one‐half of income shocks are smoothed via partial insurance. Putting these findings together, the amount of uninsurable lifetime income risk that individuals perceive is substantially smaller than what is typically assumed in calibrated macroeconomic models with incomplete markets.
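
Indirect inference itself can be demonstrated on a much smaller problem than the consumption–savings model above. The sketch below estimates the persistence of a latent AR(1) income process observed with measurement error by matching an auxiliary OLS autoregression between observed and simulated data; the data-generating process, the auxiliary model, and all tuning choices are illustrative assumptions rather than anything from the paper.

import numpy as np
from scipy.optimize import minimize_scalar

def simulate(rho, T=2000, sigma_me=0.5, seed=0):
    """Toy structural model: latent AR(1) income observed with measurement error."""
    r = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + r.normal()
    return y + sigma_me * r.normal(size=T)

def auxiliary(series):
    """Auxiliary model: OLS slope of y_t on y_{t-1} (attenuated by the noise)."""
    x, z = series[:-1], series[1:]
    return (x @ z) / (x @ x)

observed = simulate(rho=0.9, seed=42)      # stand-in for the real data
beta_data = auxiliary(observed)

# Indirect inference: pick rho so that the auxiliary coefficient computed on
# simulated data matches the one computed on the observed data.
def distance(rho):
    sims = [auxiliary(simulate(rho, seed=s)) for s in range(10)]
    return (np.mean(sims) - beta_data) ** 2

result = minimize_scalar(distance, bounds=(0.0, 0.99), method="bounded")
print("indirect-inference estimate of rho:", round(result.x, 3))

The auxiliary regression is misspecified for the structural parameter (the noise attenuates the slope), yet matching it between real and simulated data still recovers the persistence, which is the essence of the approach.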

8.
Stochastic discount factor (SDF) processes in dynamic economies admit a permanent‐transitory decomposition in which the permanent component characterizes pricing over long investment horizons. This paper introduces an empirical framework to analyze the permanent‐transitory decomposition of SDF processes. Specifically, we show how to estimate nonparametrically the solution to the Perron–Frobenius eigenfunction problem of Hansen and Scheinkman (2009). Our empirical framework allows researchers to (i) construct time series of the estimated permanent and transitory components and (ii) estimate the yield and the change of measure which characterize pricing over long investment horizons. We also introduce nonparametric estimators of the continuation value function in a class of models with recursive preferences by reinterpreting the value function recursion as a nonlinear Perron–Frobenius problem. We establish consistency and convergence rates of the eigenfunction estimators and asymptotic normality of the eigenvalue estimator and estimators of related functionals. As an application, we study an economy where the representative agent is endowed with recursive preferences, allowing for general (nonlinear) consumption and earnings growth dynamics.
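
A rough sieve (Galerkin) sketch of the Perron–Frobenius eigenfunction problem described above: approximate the eigenfunction with a finite basis and turn the conditional-expectation eigenvalue equation E[m_{t+1} phi(X_{t+1}) | X_t] = rho * phi(X_t) into a generalized matrix eigenproblem built from sample moments. The AR(1) state, the toy SDF increments, and the polynomial basis are illustrative assumptions, and the sketch omits the paper's estimation theory and its recursive-preference application.

import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(4)
T = 50_000
x = np.zeros(T)                                   # illustrative AR(1) state
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + 0.1 * rng.normal()
m = np.exp(-0.02 - 1.5 * (x[1:] - x[:-1]))        # toy one-period SDF increments m_{t+1}

def basis(v, K=5):
    """Simple polynomial sieve in the state variable."""
    return np.column_stack([v ** k for k in range(K)])

B0, B1 = basis(x[:-1]), basis(x[1:])
G = B0.T @ B0 / (T - 1)                           # sample analog of E[b(X_t) b(X_t)']
M = B0.T @ (m[:, None] * B1) / (T - 1)            # sample analog of E[b(X_t) m_{t+1} b(X_{t+1})']

# Galerkin step: with phi approximated by b'c, the eigenfunction equation becomes
# the generalized eigenproblem M c = rho G c.
vals, vecs = eig(M, G)
k = int(np.argmax(vals.real))
rho, c = vals.real[k], vecs[:, k].real
phi = B0 @ c                                      # eigenfunction values at sample points
if phi.mean() < 0:                                # Perron-Frobenius solution has constant sign
    c, phi = -c, -phi
print("estimated principal eigenvalue rho:", round(rho, 4))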

9.
Many violations of the independence axiom of expected utility can be traced to subjects' attraction to risk‐free prospects. The key axiom in this paper, negative certainty independence (Dillenberger (2010)), formalizes this tendency. Our main result is a utility representation of all preferences over monetary lotteries that satisfy negative certainty independence together with basic rationality postulates. Such preferences can be represented as if the agent were unsure of how to evaluate a given lottery p; instead, she has in mind a set of possible utility functions over outcomes and displays a cautious behavior: she computes the certainty equivalent of p with respect to each possible function in the set and picks the smallest one. The set of utilities is unique in a well‐defined sense. We show that our representation can also be derived from a “cautious” completion of an incomplete preference relation.
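
The cautious evaluation rule described above is easy to compute once a candidate set of utilities is fixed. The sketch below uses a small set of CRRA utilities (an illustrative assumption; the representation theorem does not restrict the set to this family) and returns the smallest certainty equivalent.

import numpy as np

def cautious_certainty_equivalent(outcomes, probs, risk_aversions):
    """Smallest certainty equivalent of a lottery over a set of CRRA utilities."""
    outcomes, probs = np.asarray(outcomes, float), np.asarray(probs, float)
    ces = []
    for g in risk_aversions:
        if abs(g - 1.0) < 1e-12:                      # log utility
            ces.append(np.exp(np.sum(probs * np.log(outcomes))))
        else:
            eu = np.sum(probs * outcomes ** (1 - g) / (1 - g))
            ces.append((eu * (1 - g)) ** (1 / (1 - g)))
    return min(ces)

# 50-50 lottery over 100 and 10, evaluated with three candidate utilities.
ce = cautious_certainty_equivalent([100.0, 10.0], [0.5, 0.5], risk_aversions=[0.0, 0.5, 2.0])
print(round(ce, 1))   # ~18.2: the most risk-averse utility in the set drives the evaluation

With this candidate set, the lottery is valued at roughly 18, so a cautious agent would take a sure 30 even though the lottery's expected value is 55, capturing the attraction to risk-free prospects that motivates the axiom.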

10.
In this paper, I construct players' prior beliefs and show that these prior beliefs lead the players to learn to play an approximate Nash equilibrium uniformly in any infinitely repeated slightly perturbed game with discounting and perfect monitoring. That is, given any ε > 0, there exists a (single) profile of players' prior beliefs that leads play to almost surely converge to an ε‐Nash equilibrium uniformly for any (finite normal form) stage game with slight payoff perturbation and any discount factor less than 1.

11.
In this article, we study the competitive interactions between a firm producing standard products and a firm producing custom products. Consumers with heterogeneous preferences choose between n standard products, which may not meet their preferences exactly but are available immediately, and a custom product, available only after a certain lead time l. Standard products incur a variety cost that increases with n and custom products incur a lead time cost that is decreasing in the lead time l. We consider a two‐stage game in which, at stage 1, the standard product firm chooses its variety and the custom firm chooses its lead time, and at stage 2 both firms set prices simultaneously. We characterize the subgame‐perfect Nash equilibrium of the game. We find that both firms can coexist in equilibrium, either sharing the market as local monopolists or in a price‐competitive mode. The standard product firm may offer significant or minimal variety depending on the equilibrium outcome. We provide several interesting insights on the variety, lead time, and prices of the products offered and on the impact of problem parameters on the equilibrium outcomes. For instance, we show that the profit margin and price of the custom product are likely to be higher than those of standard products in equilibrium under certain conditions. Also, custom firms are more likely to survive and succeed in product markets with larger potential market sizes. Another interesting insight is that increased consumer sensitivity to product fit may result in lower lead time for the custom product.

12.
Our paper provides a complete characterization of leverage and default in binomial economies with financial assets serving as collateral. Our Binomial No‐Default Theorem states that any equilibrium is equivalent (in real allocations and prices) to another equilibrium in which there is no default. Thus actual default is irrelevant, though the potential for default drives the equilibrium and limits borrowing. This result is valid with arbitrary preferences and endowments, contingent or noncontingent promises, many assets and consumption goods, production, and multiple periods. We also show that only no‐default equilibria would be selected if there were the slightest cost of using collateral or handling default. Our Binomial Leverage Theorem shows that equilibrium Loan to Value (LTV) for noncontingent debt contracts is the ratio of the worst‐case return of the asset to the riskless gross rate of interest. In binomial economies, leverage is determined by down risk and not by volatility.
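
The Binomial Leverage Theorem stated above reduces to a one-line computation once the two payoffs and the riskless rate are fixed; the numbers below are purely illustrative and not taken from the paper.

# Binomial Leverage Theorem with illustrative numbers:
# an asset worth 1 today pays 1.2 in the up state and 0.8 in the down state,
# and the riskless gross interest rate is 1.05.
worst_case_return = 0.8            # down-state payoff per unit of today's price
riskless_gross_rate = 1.05

ltv = worst_case_return / riskless_gross_rate
print(f"equilibrium LTV = {ltv:.3f}")
# A loan of 0.762 today grows to 0.762 * 1.05 = 0.8, exactly the down-state payoff,
# so this is the largest promise per unit of collateral that never defaults.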

13.
We characterize optimal mechanisms for the multiple‐good monopoly problem and provide a framework to find them. We show that a mechanism is optimal if and only if a measure μ derived from the buyer's type distribution satisfies certain stochastic dominance conditions. This measure expresses the marginal change in the seller's revenue under marginal changes in the rent paid to subsets of buyer types. As a corollary, we characterize the optimality of grand‐bundling mechanisms, strengthening several results in the literature, where only sufficient optimality conditions have been derived. As an application, we show that the optimal mechanism for n independent uniform items each supported on [c,c+1] is a grand‐bundling mechanism, as long as c is sufficiently large, extending Pavlov's (2011) result for two items. At the same time, our characterization also implies that, for all c and for all sufficiently large n, the optimal mechanism for n independent uniform items supported on [c,c+1] is not a grand‐bundling mechanism.
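
A small numerical check of why grand bundling can beat separate sales, using two i.i.d. U[0,1] items; this is the classic uniform-on-[0,1] example rather than the paper's [c, c+1] regime, and the Monte Carlo grid search below is only an illustration.

import numpy as np

rng = np.random.default_rng(5)
vals = rng.random((200_000, 2))                 # two i.i.d. U[0,1] valuations per buyer

def revenue_separate(price):
    return 2 * price * np.mean(vals >= price)   # one posted price per item, sold separately

def revenue_bundle(price):
    return price * np.mean(vals.sum(axis=1) >= price)

best_sep = max(revenue_separate(p) for p in np.linspace(0.01, 1.0, 200))
best_bun = max(revenue_bundle(p) for p in np.linspace(0.01, 2.0, 400))
print(f"best separate-sale revenue ~ {best_sep:.3f}")   # ~0.50 at price 1/2 per item
print(f"best grand-bundle revenue  ~ {best_bun:.3f}")   # ~0.54 at a bundle price near 0.82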

14.
We develop a theory of parent‐child relations that rationalizes the choice between alternative parenting styles (as set out in Baumrind, 1967). Parents maximize an objective function that combines Beckerian altruism and paternalism towards children. They can affect their children's choices via two channels: either by influencing children's preferences or by imposing direct restrictions on their choice sets. Different parenting styles (authoritarian, authoritative, and permissive) emerge as equilibrium outcomes and are affected both by parental preferences and by the socioeconomic environment. Parenting style, in turn, feeds back into the children's welfare and economic success. The theory is consistent with the decline of authoritarian parenting observed in industrialized countries and with the greater prevalence of more permissive parenting in countries characterized by low inequality.

15.
This paper develops the fixed‐smoothing asymptotics in a two‐step generalized method of moments (GMM) framework. Under this type of asymptotics, the weighting matrix in the second‐step GMM criterion function converges weakly to a random matrix and the two‐step GMM estimator is asymptotically mixed normal. Nevertheless, the Wald statistic, the GMM criterion function statistic, and the Lagrange multiplier statistic remain asymptotically pivotal. It is shown that critical values from the fixed‐smoothing asymptotic distribution are high‐order correct under the conventional increasing‐smoothing asymptotics. When an orthonormal series covariance estimator is used, the critical values can be approximated very well by the quantiles of a noncentral F distribution. A simulation study shows that statistical tests based on the new fixed‐smoothing approximation are much more accurate in size than existing tests.
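
An orthonormal series covariance estimator of the kind mentioned above can be sketched for a scalar moment process: project the series onto a small number of mean-zero orthonormal basis functions and average the squared projections. The Fourier cosine basis, the number of terms K, and the AR(1) example below are illustrative assumptions; in the fixed-smoothing theory it is the fact that K is held fixed that yields the F-type reference distributions mentioned in the abstract.

import numpy as np

def series_lrv(u, K=8):
    """Orthonormal-series long-run variance estimator for a demeaned scalar series."""
    u = np.asarray(u, float)
    T = u.size
    r = np.arange(1, T + 1) / T
    # phi_j(r) = sqrt(2) cos(2 pi j r) is orthonormal on [0, 1] and integrates to zero;
    # each projection Lambda_j gives an approximately independent estimate of the long-run scale.
    lam = np.array([np.sqrt(2.0 / T) * np.sum(np.cos(2 * np.pi * j * r) * u)
                    for j in range(1, K + 1)])
    return np.mean(lam ** 2)        # average of K squared projections (chi-square_K / K form)

rng = np.random.default_rng(6)
rho, T = 0.5, 2000
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + rng.normal()
print("series LRV estimate:", round(series_lrv(u - u.mean()), 2),
      "| true long-run variance:", round(1 / (1 - rho) ** 2, 2))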

16.
The availability of high frequency financial data has generated a series of estimators based on intra‐day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create “feasible statistics” for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θ_t dt, where the spot parameter process θ can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two‐scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as estimators of leverage effects, high frequency betas, and semivariance.

17.
Most countries have automatic rules in their tax‐and‐transfer systems that are partly intended to stabilize economic fluctuations. This paper measures their effect on the dynamics of the business cycle. We put forward a model that merges the standard incomplete‐markets model of consumption and inequality with the new Keynesian model of nominal rigidities and business cycles, and that includes most of the main potential stabilizers in the U.S. data and the theoretical channels by which they may work. We find that the conventional argument that stabilizing disposable income will stabilize aggregate demand plays a negligible role in the dynamics of the business cycle, whereas tax‐and‐transfer programs that affect inequality and social insurance can have a larger effect on aggregate volatility. However, as currently designed, the set of stabilizers in place in the United States has had little effect on the volatility of aggregate output fluctuations or on their welfare costs despite stabilizing aggregate consumption. The stabilizers have a more important role when monetary policy is constrained by the zero lower bound, and they affect welfare significantly through the provision of social insurance.

18.
We show that deterioration in household balance sheets, or the housing net worth channel, played a significant role in the sharp decline in U.S. employment between 2007 and 2009. Counties with a larger decline in housing net worth experience a larger decline in non‐tradable employment. This result is not driven by industry‐specific supply‐side shocks, exposure to the construction sector, policy‐induced business uncertainty, or contemporaneous credit supply tightening. We find little evidence of labor market adjustment in response to the housing net worth shock. There is no significant expansion of the tradable sector in counties with the largest decline in housing net worth. Further, there is little evidence of wage adjustment within or emigration out of the hardest hit counties.

19.
I introduce a model of undirected dyadic link formation which allows for assortative matching on observed agent characteristics (homophily) as well as unrestricted agent‐level heterogeneity in link surplus (degree heterogeneity). As in fixed effects panel data analyses, the joint distribution of observed and unobserved agent‐level characteristics is left unrestricted. Two estimators for the (common) homophily parameter, β0, are developed and their properties studied under an asymptotic sequence involving a single network growing large. The first, tetrad logit (TL), estimator conditions on a sufficient statistic for the degree heterogeneity. The second, joint maximum likelihood (JML), estimator treats the degree heterogeneity {Ai0}, i = 1, …, N, as additional (incidental) parameters to be estimated. The TL estimate is consistent under both sparse and dense graph sequences, whereas consistency of the JML estimate is shown only under dense graph sequences.
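
The tetrad-conditioning idea can be illustrated directly: within a quadruple of nodes {i, j, k, l}, the configuration "ij and kl linked, ik and jl not" and its swap imply the same within-tetrad degrees, so the node fixed effects cancel and the conditional probability of the first configuration is logistic in the regressor contrast x_ij + x_kl - x_ik - x_jl. The simulation design below, the use of a single pairing per quadruple, and the plain logistic fit (which ignores the cross-tetrad dependence that proper inference would account for) are all illustrative simplifications rather than the paper's estimator.

import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
N, beta0 = 50, 1.0
A = rng.normal(-0.5, 0.5, N)                    # degree heterogeneity (node fixed effects)
Z = rng.normal(size=N)                          # observed node attribute
X = -np.abs(Z[:, None] - Z[None, :])            # dyadic homophily regressor
U = np.triu(rng.logistic(size=(N, N)), 1)
U = U + U.T                                     # symmetric logistic link-surplus shocks
D = ((beta0 * X + A[:, None] + A[None, :] - U) > 0).astype(int)
np.fill_diagonal(D, 0)

rows, ys = [], []
for i, j, k, l in combinations(range(N), 4):
    # configuration 1: ij and kl linked, ik and jl not; configuration 2: the swap.
    # Both imply the same within-tetrad degrees, so the fixed effects cancel.
    c1 = D[i, j] and D[k, l] and not D[i, k] and not D[j, l]
    c2 = D[i, k] and D[j, l] and not D[i, j] and not D[k, l]
    if c1 or c2:
        rows.append(X[i, j] + X[k, l] - X[i, k] - X[j, l])
        ys.append(1 if c1 else 0)

contrast = np.asarray(rows).reshape(-1, 1)
fit = LogisticRegression(fit_intercept=False, C=1e6).fit(contrast, ys)
print("tetrad-logit style estimate of beta:", round(float(fit.coef_[0][0]), 3), "(true 1.0)")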

20.
We develop a continuum player timing game that subsumes standard wars of attrition and pre‐emption games, and introduces a new rushes phenomenon. Payoffs are continuous and single‐peaked functions of the stopping time and stopping quantile. We show that if payoffs are hump‐shaped in the quantile, then a sudden “rush” of players stops in any Nash or subgame perfect equilibrium. Fear relaxes the first mover advantage in pre‐emption games, asking that the least quantile beat the average; greed relaxes the last mover advantage in wars of attrition, asking just that the last quantile payoff exceed the average. With greed, play is inefficiently late: an accelerating war of attrition starting at optimal time, followed by a rush. With fear, play is inefficiently early: a slowing pre‐emption game, ending at the optimal time, preceded by a rush. The theory predicts the length, duration, and intensity of stopping, and the size and timing of rushes, and offers insights for many common timing games.
