Similar Articles
(20 similar articles found)
1.
We develop a new quantile‐based panel data framework to study the nature of income persistence and the transmission of income shocks to consumption. Log‐earnings are the sum of a general Markovian persistent component and a transitory innovation. The persistence of past shocks to earnings is allowed to vary according to the size and sign of the current shock. Consumption is modeled as an age‐dependent nonlinear function of assets, unobservable tastes, and the two earnings components. We establish the nonparametric identification of the nonlinear earnings process and of the consumption policy rule. Exploiting the enhanced consumption and asset data in recent waves of the Panel Study of Income Dynamics, we find that the earnings process features nonlinear persistence and conditional skewness. We confirm these results using population register data from Norway. We then show that the impact of earnings shocks varies substantially across earnings histories, and that this nonlinearity drives heterogeneous consumption responses. The framework provides new empirical measures of partial insurance in which the transmission of income shocks to consumption varies systematically with assets, the level of the shock, and the history of past shocks.
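A minimal sketch, in our own notation rather than the authors' exact specification, of the kind of nonlinear earnings process the abstract describes: log-earnings are the sum of a persistent Markovian component and a transitory innovation, and persistence is read off the derivative of a conditional quantile function.

\[
y_{it} = \eta_{it} + \varepsilon_{it}, \qquad
\eta_{it} = Q_t\!\left(\eta_{i,t-1}, u_{it}\right), \quad u_{it} \sim \mathrm{Uniform}(0,1),
\qquad
\rho_t(\eta,\tau) = \frac{\partial Q_t(\eta,\tau)}{\partial \eta}.
\]

Here \(Q_t\) is the conditional quantile function of the persistent component, and \(\rho_t(\eta,\tau)\) measures how strongly a household at \(\eta\) carries past shocks forward when its current shock has rank \(\tau\); a linear AR(1) is the special case in which \(\rho_t\) is constant in both arguments.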

2.
We analyze the implications of household‐level adjustment costs for the dynamics of aggregate consumption. We show that an economy in which agents have “consumption commitments” is approximately equivalent to a habit formation model in which the habit stock is a weighted average of past consumption if idiosyncratic risk is large relative to aggregate risk. Consumption commitments can thus explain the empirical regularity that consumption is excessively sensitive and excessively smooth, findings that are typically attributed to habit formation. Unlike habit formation and other theories, but consistent with empirical evidence, the consumption commitments model also predicts that excess sensitivity and smoothness vanish for large shocks. These results suggest that behavior previously attributed to habit formation may be better explained by adjustment costs. We develop additional testable predictions to further distinguish the commitment and habit models and show that the two models have different welfare implications.

3.
The past forty years have seen a rapid rise in top income inequality in the United States. While there are many existing theories of the Pareto tail of the long‐run income distribution, almost none of these address the fast rise in top inequality observed in the data. We show that standard theories, which build on a random growth mechanism, generate transition dynamics that are too slow relative to those observed in the data. We then suggest two parsimonious deviations from the canonical model that can explain such changes: “scale dependence” that may arise from changes in skill prices, and “type dependence,” that is, the presence of some “high‐growth types.” These deviations are consistent with theories in which the increase in top income inequality is driven by the rise of “superstar” entrepreneurs or managers.
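For reference, a standard random-growth benchmark (our illustration, not the paper's full model) shows where the Pareto tail comes from: if log income \(x_{it}\) follows a random-growth process with drift \(\mu\), volatility \(\sigma\), and exit or reset at rate \(\delta\), the stationary income distribution has a Pareto upper tail whose exponent solves a simple quadratic.

\[
dx_{it} = \mu\,dt + \sigma\,dW_{it}, \qquad \Pr(X > x) \propto x^{-\zeta},
\qquad \tfrac{\sigma^{2}}{2}\zeta^{2} + \mu\zeta - \delta = 0 .
\]

The abstract's point is that, after a change in parameters such as \(\mu\) or \(\sigma\), top shares in this benchmark converge to their new steady state too slowly to match the observed rise, which is what motivates adding scale dependence or high-growth types.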

4.
This paper shows that the problem of testing hypotheses in moment condition models without any assumptions about identification may be considered as a problem of testing with an infinite‐dimensional nuisance parameter. We introduce a sufficient statistic for this nuisance parameter in a Gaussian problem and propose conditional tests. These conditional tests have uniformly correct asymptotic size for a large class of models and test statistics. We apply our approach to construct tests based on quasi‐likelihood ratio statistics, which we show are efficient in strongly identified models and perform well relative to existing alternatives in two examples.

5.
We propose a method to set identify bounds on the sharing rule for a general collective household consumption model. Unlike the effects of distribution factors, the level of the sharing rule cannot be uniquely identified without strong assumptions on preferences across households. Our new results show that, though not point identified without these assumptions, strong bounds on the sharing rule can be obtained. We get these bounds by applying revealed preference restrictions implied by the collective model to the household's continuous aggregate demand functions. We obtain informative bounds even if nothing is known about whether each good is public, private, or assignable within the household, though having such information tightens the bounds. We apply our method to US PSID data, obtaining narrow bounds that yield useful conclusions regarding the effects of income and wages on intrahousehold resource sharing, and on the prevalence of individual (as opposed to household level) poverty.

6.
It is costly to learn about market conditions elsewhere, especially in developing countries. This paper examines how such information frictions affect trade. Using data on regional agricultural trade in the Philippines, I first document a number of observed patterns in trade flows and prices that suggest the presence of information frictions. I then incorporate information frictions into a perfect competition trade model by embedding a costly sequential search process in which heterogeneous producers decide where to sell their produce. I show that introducing information frictions reconciles the theory with the observed patterns in the data. Structural estimation of the model finds that information frictions are quantitatively important: roughly half the observed regional price dispersion is due to information frictions. Furthermore, incorporating information frictions improves the out‐of‐sample predictive power of the model.
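For intuition, a textbook sequential-search condition of the sort such a model embeds (our sketch, not the paper's estimating equation): a producer who pays a cost \(c\) per inquiry and faces a distribution \(F\) of prices across destinations keeps searching until a price of at least the reservation level \(p^{*}\) is found, where \(p^{*}\) equates the marginal cost and the expected marginal benefit of one more search.

\[
c = \int_{p^{*}}^{\infty} \left(p - p^{*}\right) dF(p).
\]

Higher information costs lower \(p^{*}\), so more price dispersion can persist across regions, consistent with the estimate that roughly half of observed regional price dispersion is attributable to information frictions.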

7.
In the regression‐discontinuity (RD) design, units are assigned to treatment based on whether their value of an observed covariate exceeds a known cutoff. In this design, local polynomial estimators are now routinely employed to construct confidence intervals for treatment effects. The performance of these confidence intervals in applications, however, may be seriously hampered by their sensitivity to the specific bandwidth employed. Available bandwidth selectors typically yield a “large” bandwidth, leading to data‐driven confidence intervals that may be biased, with empirical coverage well below their nominal target. We propose new theory‐based, more robust confidence interval estimators for average treatment effects at the cutoff in sharp RD, sharp kink RD, fuzzy RD, and fuzzy kink RD designs. Our proposed confidence intervals are constructed using a bias‐corrected RD estimator together with a novel standard error estimator. For practical implementation, we discuss mean squared error optimal bandwidths, which are by construction not valid for conventional confidence intervals but are valid with our robust approach, and consistent standard error estimators based on our new variance formulas. In a special case of practical interest, our procedure amounts to running a quadratic instead of a linear local regression. More generally, our results give a formal justification to simple inference procedures based on increasing the order of the local polynomial estimator employed. We find in a simulation study that our confidence intervals exhibit close‐to‐correct empirical coverage and good empirical interval length on average, remarkably improving upon the alternatives available in the literature. All results are readily available in R and STATA using our companion software packages described in Calonico, Cattaneo, and Titiunik (2014d, 2014b).
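To fix ideas, here is a minimal sharp-RD point estimator based on local polynomial regression, written as a generic Python sketch; it is not the rdrobust implementation, and all names are ours. In line with the special case mentioned in the abstract, calling it with order=2 rather than order=1 amounts to running a quadratic instead of a linear local regression.

    import numpy as np

    def rd_estimate(y, x, cutoff=0.0, h=1.0, order=1):
        """Sharp RD estimate: difference of intercepts from weighted local
        polynomial fits on each side of the cutoff (triangular kernel)."""
        y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
        xc = x - cutoff
        w = np.maximum(1.0 - np.abs(xc) / h, 0.0)      # triangular kernel weights
        def intercept(mask):
            X = np.vander(xc[mask], N=order + 1, increasing=True)  # [1, x, x^2, ...]
            W = np.diag(w[mask])
            beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[mask])
            return beta[0]                              # fitted limit at the cutoff
        right = (xc >= 0) & (w > 0)
        left = (xc < 0) & (w > 0)
        return intercept(right) - intercept(left)

Bandwidth selection, the bias correction, and the robust standard errors developed in the paper are exactly what this sketch leaves out.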

8.
This paper provides conditions under which the inequality constraints generated by either single agent optimizing behavior or the best response condition of multiple agent problems can be used as a basis for estimation and inference. An application illustrates how the use of these inequality constraints can simplify the analysis of complex behavioral models.

9.
Internet advertising has been the fastest growing advertising channel in recent years, with paid search ads comprising the bulk of this revenue. We present results from a series of large‐scale field experiments conducted at eBay that were designed to measure the causal effectiveness of paid search ads. Because search clicks and purchase intent are correlated, non‐experimental estimates overstate the returns to advertising; we show that the causal returns from paid search are only a fraction of those estimates. As an extreme case, we show that brand keyword ads have no measurable short‐term benefits. For non‐brand keywords, we find that new and infrequent users are positively influenced by ads but that more frequent users whose purchasing behavior is not influenced by ads account for most of the advertising expenses, resulting in average returns that are negative.

10.
A growing number of school districts use centralized assignment mechanisms to allocate school seats in a manner that reflects student preferences and school priorities. Many of these assignment schemes use lotteries to ration seats when schools are oversubscribed. The resulting random assignment opens the door to credible quasi‐experimental research designs for the evaluation of school effectiveness. Yet the question of how best to separate the lottery‐generated randomization integral to such designs from non‐random preferences and priorities remains open. This paper develops easily implemented empirical strategies that fully exploit the random assignment embedded in a wide class of mechanisms, while also revealing why seats are randomized at one school but not another. We use these methods to evaluate charter schools in Denver, one of a growing number of districts that combine charter and traditional public schools in a unified assignment system. The resulting estimates show large achievement gains from charter school attendance. Our approach generates efficiency gains over ad hoc methods, such as those that focus on schools ranked first, while also identifying a more representative average causal effect. We also show how to use centralized assignment mechanisms to identify causal effects in models with multiple school sectors.
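A stripped-down illustration of the lottery logic (our sketch, not the paper's estimator, which derives assignment propensities from the mechanism itself): within each randomization cell, compare offered and non-offered applicants and rescale by the offer's effect on attendance, then average across cells. This is a plain stratum-weighted Wald/IV calculation, with strata standing in for the mechanism-generated randomization cells.

    import numpy as np

    def wald_by_stratum(y, d, z, stratum):
        """Stratum-weighted Wald (IV) estimate: lottery offers z instrument
        school attendance d; strata stand in for randomization cells."""
        y, d, z, stratum = map(np.asarray, (y, d, z, stratum))
        effects, weights = [], []
        for s in np.unique(stratum):
            m = stratum == s
            if z[m].min() == z[m].max():               # no offer variation in this cell
                continue
            itt = y[m][z[m] == 1].mean() - y[m][z[m] == 0].mean()   # outcome gap
            fs  = d[m][z[m] == 1].mean() - d[m][z[m] == 0].mean()   # attendance gap
            if fs != 0:
                effects.append(itt / fs)
                weights.append(m.sum())
        return np.average(effects, weights=weights)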

11.
The bootstrap is a convenient tool for calculating standard errors of the parameter estimates of complicated econometric models. Unfortunately, the fact that these models are complicated often makes the bootstrap extremely slow or even practically infeasible. This paper proposes an alternative to the bootstrap that relies only on the estimation of one‐dimensional parameters. We introduce the idea in the context of M and GMM estimators. A modification of the approach can be used to estimate the variance of two‐step estimators.
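For context, the baseline the proposed alternative is designed to avoid is the standard nonparametric bootstrap, which re-estimates the entire model on every resample (a minimal sketch; estimator stands for any user-supplied fitting routine returning a scalar parameter):

    import numpy as np

    def bootstrap_se(estimator, data, B=500, seed=0):
        """Nonparametric bootstrap standard error: resample rows with
        replacement and re-run the full estimation B times."""
        rng = np.random.default_rng(seed)
        data = np.asarray(data)
        n = len(data)
        draws = [estimator(data[rng.integers(0, n, size=n)]) for _ in range(B)]
        return np.std(draws, ddof=1)

When a single estimation is expensive, the B-fold repetition is what makes this impractical; the paper's alternative instead relies only on the estimation of one-dimensional parameters.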

12.
We develop a model of the market for federal funds that explicitly accounts for its two distinctive features: banks have to search for a suitable counterparty, and once they meet, both parties negotiate the size of the loan and the repayment. The theory is used to answer a number of positive and normative questions: What are the determinants of the fed funds rate? How does the market reallocate funds? Is the market able to achieve an efficient reallocation of funds? We also use the model for theoretical and quantitative analyses of policy issues facing modern central banks.

13.
Propensity score matching estimators (Rosenbaum and Rubin (1983)) are widely used in evaluation research to estimate average treatment effects. In this article, we derive the large sample distribution of propensity score matching estimators. Our derivations take into account that the propensity score is itself estimated in a first step, prior to matching. We prove that first step estimation of the propensity score affects the large sample distribution of propensity score matching estimators, and derive adjustments to the large sample variances of propensity score matching estimators of the average treatment effect (ATE) and the average treatment effect on the treated (ATET). The adjustment for the ATE estimator is negative (or zero in some special cases), implying that matching on the estimated propensity score is more efficient than matching on the true propensity score in large samples. However, for the ATET estimator, the sign of the adjustment term depends on the data generating process, and ignoring the estimation error in the propensity score may lead to confidence intervals that are either too large or too small.
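A bare-bones propensity-score matching estimate of the ATE, for reference (our sketch, with a logit first step and nearest-neighbor matching on the estimated score; the paper's contribution is the large-sample variance adjustment for that estimated first step, which this sketch does not compute):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def psm_ate(y, d, X, k=1):
        """ATE via k-nearest-neighbor matching on an estimated propensity score."""
        y, d, X = np.asarray(y, dtype=float), np.asarray(d), np.asarray(X)
        ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
        treated, control = np.where(d == 1)[0], np.where(d == 0)[0]
        y1, y0 = np.empty(len(y)), np.empty(len(y))
        for i in range(len(y)):
            pool = control if d[i] == 1 else treated     # match to the other arm
            nn = pool[np.argsort(np.abs(ps[pool] - ps[i]))[:k]]
            if d[i] == 1:
                y1[i], y0[i] = y[i], y[nn].mean()
            else:
                y0[i], y1[i] = y[i], y[nn].mean()
        return float(np.mean(y1 - y0))

A plug-in variance that treats the estimated score as known would, per the paper, be conservative for the ATE but could go either way for the ATET.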

14.
We solve a general class of dynamic rational inattention problems in which an agent repeatedly acquires costly information about an evolving state and selects actions. The solution resembles the choice rule in a dynamic logit model, but it is biased toward an optimal default rule that is independent of the realized state. The model provides the same fit to choice data as dynamic logit, but, because of the bias, yields different counterfactual predictions. We apply the general solution to the study of (i) the status quo bias; (ii) inertia in actions leading to lagged adjustments to shocks; and (iii) the tradeoff between accuracy and delay in decision‐making.
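As a hedged point of comparison, the static analogue of the choice rule the abstract describes is a logit tilted toward a state-independent default rule (the familiar form from static rational inattention with a mutual-information cost \(\lambda\); notation is ours):

\[
\Pr(a \mid \theta) \;=\; \frac{q(a)\, e^{u(a,\theta)/\lambda}}{\sum_{b} q(b)\, e^{u(b,\theta)/\lambda}},
\]

where \(u(a,\theta)\) is the payoff of action \(a\) in state \(\theta\) and \(q\) is the optimal default (unconditional) choice distribution. The dynamic solution in the paper adds history dependence while, as the abstract notes, retaining the bias toward an optimal default rule that is independent of the realized state.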

15.
Both aristocratic privileges and constitutional constraints in traditional monarchies can be derived from a ruler's incentive to minimize expected costs of moral‐hazard rents for high officials. We consider a dynamic moral‐hazard model of governors serving a sovereign prince, who must deter them from rebellion and hidden corruption, which could cause costly crises. To minimize costs, a governor's rewards for good performance should be deferred up to the maximal credit that the prince can be trusted to pay. In the long run, we find that high officials can become an entrenched aristocracy with low turnover and large claims on the ruler. Dismissals for bad performance should be randomized to avoid inciting rebellions, but the prince can profit from reselling vacant offices, and so his decisions to dismiss high officials require institutionalized monitoring. A soft budget constraint that forgives losses for low‐credit governors can become efficient when costs of corruption are low.

16.
We analyze the internal consistency of using the market price of a firm's equity to trigger a contractual change in the firm's capital structure, given that the value of the equity itself depends on the firm's capital structure. Of particular interest is the case of contingent capital for banks, in the form of debt that converts to equity, when conversion is triggered by a decline in the bank's stock price. We analyze the problem of existence and uniqueness of equilibrium values for a firm's liabilities in this context, meaning values consistent with a market‐price trigger. Discrete‐time dynamics allow multiple equilibria. In contrast, we show that the possibility of multiple equilibria can largely be ruled out in continuous time, where the price of the triggering security adjusts in anticipation of breaching the trigger. Our main condition for existence of an equilibrium requires that the consequences of triggering a conversion be consistent with the direction in which the trigger is crossed. For the design of contingent capital with a stock price trigger, this condition may be interpreted to mean that conversion should be disadvantageous to shareholders, and it is satisfied by setting the trigger sufficiently high. Uniqueness follows provided the trigger is sufficiently accessible by all candidate equilibria. We illustrate precise formulations of these conditions with a variety of applications.

17.
We study how long it takes for large populations of interacting agents to come close to Nash equilibrium when they adapt their behavior using a stochastic better reply dynamic. Prior work considers this question mainly for 2 × 2 games and potential games; here we characterize convergence times for general weakly acyclic games, including coordination games, dominance solvable games, games with strategic complementarities, potential games, and many others with applications in economics, biology, and distributed control. If players' better replies are governed by idiosyncratic shocks, the convergence time can grow exponentially in the population size; moreover, this is true even in games with very simple payoff structures. However, if their responses are sufficiently correlated due to aggregate shocks, the convergence time is greatly accelerated; in fact, it is bounded for all sufficiently large populations. We provide explicit bounds on the speed of convergence as a function of key structural parameters including the number of strategies, the length of the better reply paths, the extent to which players can influence the payoffs of others, and the desired degree of approximation to Nash equilibrium.

18.
We present a flexible and scalable method for computing global solutions of high‐dimensional stochastic dynamic models. Within a time iteration or value function iteration setup, we interpolate functions using an adaptive sparse grid algorithm. With increasing dimensions, sparse grids grow much more slowly than standard tensor product grids. Moreover, adaptivity adds a second layer of sparsity, as grid points are added only where they are most needed, for instance, in regions with steep gradients or at nondifferentiabilities. To further speed up the solution process, our implementation is fully hybrid parallel, combining distributed and shared memory parallelization paradigms, and thus permits an efficient use of high‐performance computing architectures. To demonstrate the broad applicability of our method, we solve two very different types of dynamic models: first, high‐dimensional international real business cycle models with capital adjustment costs and irreversible investment; second, multiproduct menu‐cost models with temporary sales and economies of scope in price setting.
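To see why sparse grids help in high dimensions, a standard non-adaptive counting benchmark (not specific to the paper's implementation): with roughly \(2^{L}\) points per dimension, grid sizes compare as

\[
N_{\text{tensor}}(L,d) = O\!\left(2^{Ld}\right)
\qquad \text{versus} \qquad
N_{\text{sparse}}(L,d) = O\!\left(2^{L}\, L^{\,d-1}\right),
\]

at comparable accuracy for sufficiently smooth functions. Adaptivity then prunes even the sparse set further by refining only where the interpolation error remains large, for example near kinks or steep gradients of the policy function.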

19.
We develop a new parametric estimation procedure for option panels observed with error. We exploit asymptotic approximations assuming an ever increasing set of option prices in the moneyness (cross‐sectional) dimension, but with a fixed time span. We develop consistent estimators for the parameters and the dynamic realization of the state vector governing the option price dynamics. The estimators converge stably to a mixed‐Gaussian law and we develop feasible estimators for the limiting variance. We also provide semiparametric tests for the option price dynamics based on the distance between the spot volatility extracted from the options and one constructed nonparametrically from high‐frequency data on the underlying asset. Furthermore, we develop new tests for the day‐by‐day model fit over specific regions of the volatility surface and for the stability of the risk‐neutral dynamics over time. A comprehensive Monte Carlo study indicates that the inference procedures work well in empirically realistic settings. In an empirical application to S&P 500 index options, guided by the new diagnostic tests, we extend existing asset pricing models by allowing for a flexible dynamic relation between volatility and priced jump tail risk. Importantly, we document that the priced jump tail risk typically responds in a more pronounced and persistent manner than volatility to large negative market shocks.

20.
This paper makes the following original contributions to the literature. (i) We develop a simpler analytical characterization and numerical algorithm for Bayesian inference in structural vector autoregressions (VARs) that can be used for models that are overidentified, just‐identified, or underidentified. (ii) We analyze the asymptotic properties of Bayesian inference and show that in the underidentified case, the asymptotic posterior distribution of contemporaneous coefficients in an n‐variable VAR is confined to the set of values that orthogonalize the population variance–covariance matrix of ordinary least squares residuals, with the height of the posterior proportional to the height of the prior at any point within that set. For example, in a bivariate VAR for supply and demand identified solely by sign restrictions, if the population correlation between the VAR residuals is positive, then even if one has available an infinite sample of data, any inference about the demand elasticity is coming exclusively from the prior distribution. (iii) We provide analytical characterizations of the informative prior distributions for impulse‐response functions that are implicit in the traditional sign‐restriction approach to VARs, and we note, as a special case of result (ii), that the influence of these priors does not vanish asymptotically. (iv) We illustrate how Bayesian inference with informative priors can be both a strict generalization and an unambiguous improvement over frequentist inference in just‐identified models. (v) We propose that researchers need to explicitly acknowledge and defend the role of prior beliefs in influencing structural conclusions and we illustrate how this could be done using a simple model of the U.S. labor market.
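A compact way to state result (ii) in standard notation (ours, not necessarily the authors'): the reduced-form residual covariance matrix \(\Sigma\) identifies the structural impact matrix only up to an orthogonal rotation, so under sign restrictions alone the identified set is

\[
\mathcal{A}(\Sigma) \;=\; \bigl\{\, A = \operatorname{chol}(\Sigma)\, Q \;:\; Q Q' = I_n,\ A \text{ satisfies the sign restrictions} \,\bigr\}.
\]

Because the data pin down only \(\Sigma\), beliefs about where the truth lies inside \(\mathcal{A}(\Sigma)\), and hence about objects such as the demand elasticity in the bivariate example, must come from the prior even with an infinite sample.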
