Similar Literature (20 results)
1.
2.
The past forty years have seen a rapid rise in top income inequality in the United States. While there are many existing theories of the Pareto tail of the long-run income distribution, almost none of these address the fast rise in top inequality observed in the data. We show that standard theories, which build on a random growth mechanism, generate transition dynamics that are too slow relative to those observed in the data. We then suggest two parsimonious deviations from the canonical model that can explain such changes: “scale dependence” that may arise from changes in skill prices, and “type dependence,” that is, the presence of some “high-growth types.” These deviations are consistent with theories in which the increase in top income inequality is driven by the rise of “superstar” entrepreneurs or managers.
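A minimal simulation sketch (not the paper's model) may help fix ideas: a Kesten-type random growth process is the canonical mechanism generating a Pareto upper tail, and its tail exponent adjusts only slowly after a change in the growth-rate distribution. The parameter values, the rough Hill estimator, and the regime-change experiment below are all illustrative choices.

```python
# Illustrative Kesten-type random growth process x_{t+1} = a_t * x_t + b_t: a canonical way
# to generate a Pareto upper tail in the cross-section. After a change in the distribution
# of growth rates a_t, the tail exponent converges to its new value only slowly.
# All parameter values are arbitrary choices made for this illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # agents in the cross-section

def simulate(x0, sigma, periods):
    """Iterate x <- a*x + b with log a ~ N(-0.02, sigma^2) and a small positive additive shock b."""
    x = x0.copy()
    for _ in range(periods):
        a = np.exp(rng.normal(-0.02, sigma, N))
        b = np.abs(rng.normal(1.0, 0.1, N))
        x = a * x + b
    return x

def hill(x, k=2000):
    """Rough Hill estimate of the Pareto tail exponent from the k largest observations."""
    top = np.sort(x)[-k:]
    return 1.0 / np.mean(np.log(top / top[0]))

x_old = simulate(np.ones(N), sigma=0.15, periods=400)   # approximately stationary old regime
x_mid = simulate(x_old, sigma=0.25, periods=30)         # 30 "years" after growth becomes riskier
x_new = simulate(x_old, sigma=0.25, periods=400)        # close to the new stationary distribution

for label, x in [("old regime", x_old), ("30 periods after change", x_mid), ("new regime", x_new)]:
    print(f"{label}: estimated tail exponent {hill(x):.2f}")
# In simulations like this, the exponent 30 periods after the change typically remains far
# from its new long-run value: pure random growth produces slow transitions, which is why
# the paper adds scale dependence and high-growth types.
```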

3.
Jump Regressions     
We develop econometric tools for studying jump dependence of two processes from high-frequency observations on a fixed time interval. In this context, only segments of data around a few outlying observations are informative for the inference. We derive an asymptotically valid test for stability of a linear jump relation over regions of the jump size domain. The test has power against general forms of nonlinearity in the jump dependence as well as temporal instabilities. We further propose an efficient estimator for the linear jump regression model that is formed by optimally weighting the detected jumps with weights based on the diffusive volatility around the jump times. We derive the asymptotic limit of the estimator, a semiparametric lower efficiency bound for the linear jump regression, and show that our estimator attains the latter. The analysis covers both deterministic and random jump arrivals. In an empirical application, we use the developed inference techniques to test the temporal stability of market jump betas.
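The core idea can be sketched in a few lines under strong simplifying assumptions: a single constant jump beta, a crude truncation rule for flagging jumps, and a rolling bipower-variation proxy for local diffusive volatility. None of the tuning choices below are the paper's.

```python
# Minimal sketch: flag large high-frequency returns as jumps using a truncation threshold
# tied to local (bipower) volatility, then run a weighted regression of the flagged returns
# of Y on those of X. Threshold, window, and weights are illustrative, not the paper's rules.
import numpy as np

rng = np.random.default_rng(1)
n, dt = 23400, 1.0 / 23400          # one "day" of one-second returns
beta_true = 1.5

dX = 0.2 * np.sqrt(dt) * rng.standard_normal(n)           # diffusive part of X
dY = 0.3 * np.sqrt(dt) * rng.standard_normal(n)           # idiosyncratic diffusive part of Y
jump_times = rng.choice(n, size=5, replace=False)
jump_sizes = np.sign(rng.standard_normal(5)) * 0.02 + rng.normal(0.0, 0.005, 5)
dX[jump_times] += jump_sizes
dY[jump_times] += beta_true * jump_sizes                  # common jumps with a linear relation

def local_variance(r, window=300):
    """Rolling bipower variation as a jump-robust estimate of the local diffusive variance."""
    bp = (np.pi / 2) * np.abs(r[1:]) * np.abs(r[:-1]) / dt
    out = np.convolve(bp, np.ones(window) / window, mode="same")
    return np.concatenate([[out[0]], out])                # pad back to length n

sigma2 = local_variance(dX)
flagged = np.abs(dX) > 4.0 * np.sqrt(sigma2 * dt)         # "4 local standard deviations" rule

w = 1.0 / sigma2[flagged]                                 # down-weight noisy (high-volatility) jump times
beta_hat = np.sum(w * dX[flagged] * dY[flagged]) / np.sum(w * dX[flagged] ** 2)
print(f"flagged returns: {flagged.sum()}, estimated jump beta: {beta_hat:.3f} (true {beta_true})")
```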

4.
We propose a novel technique to boost the power of testing a high-dimensional vector H0 : θ = 0 against sparse alternatives where the null hypothesis is violated by only a few components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low power due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to slow convergence. Based on a screening technique, we introduce a “power enhancement component,” which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. The proposed methods are then applied to testing factor pricing models and validating cross-sectional independence in panel data models.
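A toy simulation, assuming independent estimation errors with known variance, illustrates how the power enhancement component works; the screening threshold, the standardized quadratic statistic used as the pivotal part, and the sparse alternative are illustrative stand-ins for the paper's construction.

```python
# Toy setup: theta_hat ~ N(theta, I_N / n). J1 is a standardized quadratic (Wald-type)
# statistic with a pivotal N(0,1) limit under the null; J0 screens the standardized
# components, is zero with high probability under the null, and diverges under sparse
# alternatives. The threshold and critical value are illustrative choices.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
N, n, reps = 500, 200, 2000
delta = 2.0 * np.sqrt(np.log(N))         # screening threshold
crit = norm.ppf(0.95)                    # critical value from the N(0,1) limit of J1

def one_test(theta):
    z = np.sqrt(n) * (theta + rng.standard_normal(N) / np.sqrt(n))   # standardized estimates
    J1 = (np.sum(z ** 2) - N) / np.sqrt(2 * N)                       # pivotal quadratic statistic
    J0 = np.sqrt(N) * np.sum(z[np.abs(z) > delta] ** 2)              # power enhancement component
    return J1 > crit, (J0 + J1) > crit

def rejection_rates(theta):
    plain, enhanced = zip(*(one_test(theta) for _ in range(reps)))
    return np.mean(plain), np.mean(enhanced)

null_theta = np.zeros(N)
sparse_alt = np.zeros(N)
sparse_alt[:2] = 5.0 / np.sqrt(n)        # only two components violate the null

print("rejection rate under the null   (J1, J0+J1):", rejection_rates(null_theta))
print("rejection rate under sparse alt (J1, J0+J1):", rejection_rates(sparse_alt))
# Under the null J0 is almost always exactly zero, so the size is essentially unchanged;
# under the sparse alternative the screened components make J0 diverge and power jumps.
```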

5.
We provide general conditions under which principal-agent problems with either one or multiple agents admit mechanisms that are optimal for the principal. Our results cover as special cases pure moral hazard and pure adverse selection. We allow multidimensional types, actions, and signals, as well as both financial and non-financial rewards. Our results extend to situations in which there are ex ante or interim restrictions on the mechanism, and allow the principal to have decisions in addition to choosing the agent's contract. Beyond measurability, we require no a priori restrictions on the space of mechanisms. It is not unusual for randomization to be necessary for optimality, and so it is permitted. Randomization also plays an essential role in our proof. We also provide conditions under which some forms of randomization are unnecessary.

6.
We estimate demand for residential broadband using high-frequency data from subscribers facing a three-part tariff. The three-part tariff makes data usage during the billing cycle a dynamic problem, thus generating variation in the (shadow) price of usage. We provide evidence that subscribers respond to this variation, and we use their dynamic decisions to estimate a flexible distribution of willingness to pay for different plan characteristics. Using the estimates, we simulate demand under alternative pricing and find that usage-based pricing eliminates low-value traffic. Furthermore, we show that the costs associated with investment in fiber-optic networks are likely recoverable in some markets, but that there is a large gap between social and private incentives to invest.
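A stylized back-of-the-envelope calculation (not the paper's structural model) shows where the shadow price comes from: with a monthly allowance and a per-GB overage fee, and treating future usage as exogenous, an extra gigabyte today costs the overage fee times the probability that the allowance ends up binding. The plan terms and usage process below are hypothetical.

```python
# Stylized calculation: under a three-part tariff with monthly allowance A (GB) and overage
# price p ($/GB), and with future usage treated as an exogenous random process, the shadow
# price of one more GB today is p * Pr(rest-of-cycle usage exceeds the remaining allowance).
# The plan terms and the usage process below are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
allowance, overage_price = 50.0, 10.0    # A in GB, p in $/GB
daily_mean, daily_sd = 1.4, 0.8          # daily usage in GB

def shadow_price(used_so_far, days_left, sims=100_000):
    daily = np.maximum(rng.normal(daily_mean, daily_sd, (sims, days_left)), 0.0)
    prob_bind = np.mean(used_so_far + daily.sum(axis=1) > allowance)
    return overage_price * prob_bind

for day, used in [(5, 10.0), (15, 25.0), (25, 30.0), (25, 45.0)]:
    print(f"day {day:2d}, {used:4.1f} GB used so far: shadow price ${shadow_price(used, 30 - day):.2f}/GB")
# The same gigabyte is nearly free for a subscriber far from the cap but costs close to the
# full overage price for one who is almost certain to exceed the allowance; this within-cycle
# variation in the shadow price is what the subscribers' dynamic decisions respond to.
```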

7.
We axiomatize preferences that can be represented by a monotonic aggregation of subjective expected utilities generated by a utility function and some set of i.i.d. probability measures over a product state space, S. For such preferences, we define relevant measures, show that they are treated as if they were the only marginals possibly governing the state space, and connect them with the measures appearing in the aforementioned representation. These results allow us to interpret relevant measures as reflecting part of perceived ambiguity, meaning subjective uncertainty about probabilities over states. Under mild conditions, we show that increases or decreases in ambiguity aversion cannot affect the relevant measures. This property, necessary for the conclusion that these measures reflect only perceived ambiguity, distinguishes the set of relevant measures from the leading alternative in the literature. We apply our findings to a number of well-known models of ambiguity-sensitive preferences. For each model, we identify the set of relevant measures and the implications of comparative ambiguity aversion.

8.
We present new identification results for nonparametric models of differentiated products markets, using only market level observables. We specify a nonparametric random utility discrete choice model of demand allowing rich preference heterogeneity, product/market unobservables, and endogenous prices. Our supply model posits nonparametric cost functions, allows latent cost shocks, and nests a range of standard oligopoly models. We consider identification of demand, identification of changes in aggregate consumer welfare, identification of marginal costs, identification of firms' marginal cost functions, and discrimination between alternative models of firm conduct. We explore two complementary approaches. The first demonstrates identification under the same nonparametric instrumental variables conditions required for identification of regression models. The second treats demand and supply in a system of nonparametric simultaneous equations, leading to constructive proofs exploiting exogenous variation in demand shifters and cost shifters. We also derive testable restrictions that provide the first general formalization of Bresnahan's (1982) intuition for empirically distinguishing between alternative models of oligopoly competition. From a practical perspective, our results clarify the types of instrumental variables needed with market level data, including tradeoffs between functional form and exclusion restrictions.

9.
The ill-posedness of the nonparametric instrumental variable (NPIV) model leads to estimators that may suffer from poor statistical performance. In this paper, we explore the possibility of imposing shape restrictions to improve the performance of the NPIV estimators. We assume that the function to be estimated is monotone and consider a sieve estimator that enforces this monotonicity constraint. We define a constrained measure of ill-posedness that is relevant for the constrained estimator and show that, under a monotone IV assumption and certain other mild regularity conditions, this measure is bounded uniformly over the dimension of the sieve space. This finding is in stark contrast to the well-known result that the unconstrained sieve measure of ill-posedness that is relevant for the unconstrained estimator grows to infinity with the dimension of the sieve space. Based on this result, we derive a novel non-asymptotic error bound for the constrained estimator. The bound gives a set of data-generating processes for which the monotonicity constraint has a particularly strong regularization effect and considerably improves the performance of the estimator. The form of the bound implies that the regularization effect can be strong even in large samples and even if the function to be estimated is steep, particularly so if the NPIV model is severely ill-posed. Our simulation study confirms these findings and reveals the potential for large performance gains from imposing the monotonicity constraint.
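A simplified sketch of a monotonicity-constrained sieve NPIV estimator is given below, using a Bernstein polynomial sieve (nondecreasing coefficients are a sufficient condition for a nondecreasing function) and bound-constrained least squares after projecting onto the instrument space; the basis, sieve dimensions, and data-generating process are illustrative, and the paper's estimator and theory are more general.

```python
# Simplified monotone sieve NPIV sketch: Bernstein polynomial sieve for g (nondecreasing
# coefficients are sufficient for a nondecreasing function), instrument-space projection,
# and bound-constrained least squares. Basis, sieve dimensions, and DGP are illustrative.
import numpy as np
from scipy.special import comb
from scipy.optimize import lsq_linear

rng = np.random.default_rng(4)
n, K = 2000, 6                                        # sample size, Bernstein degree for g

w = rng.uniform(0, 1, n)                              # instrument
v = rng.standard_normal(n)                            # first-stage error component
x = np.clip(0.7 * w + 0.3 * (v > 0), 0, 1)            # endogenous regressor
u = 0.3 * v + 0.2 * rng.standard_normal(n)            # structural error, correlated with x via v
g0 = lambda t: 1.0 / (1.0 + np.exp(-6 * (t - 0.5)))   # true monotone function
y = g0(x) + u

def bernstein(t, deg):
    k = np.arange(deg + 1)
    return comb(deg, k) * t[:, None] ** k * (1 - t[:, None]) ** (deg - k)

B = bernstein(x, K)                                   # sieve for g
W = bernstein(w, K + 2)                               # instrument sieve
Pw = W @ np.linalg.pinv(W)                            # projection onto the instrument space

# Reparameterize beta_k = c + d_1 + ... + d_k with d_j >= 0, so monotonicity = sign constraints.
Ccum = np.tril(np.ones((K + 1, K + 1)))
fit = lsq_linear(Pw @ B @ Ccum, Pw @ y,
                 bounds=(np.r_[-np.inf, np.zeros(K)], np.full(K + 1, np.inf)))
beta_hat = Ccum @ fit.x

grid = np.linspace(0.05, 0.95, 5)
print("constrained g_hat:", np.round(bernstein(grid, K) @ beta_hat, 2))
print("true g0          :", np.round(g0(grid), 2))
```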

10.
The bootstrap is a convenient tool for calculating standard errors of the parameter estimates of complicated econometric models. Unfortunately, the fact that these models are complicated often makes the bootstrap extremely slow or even practically infeasible. This paper proposes an alternative to the bootstrap that relies only on the estimation of one-dimensional parameters. We introduce the idea in the context of M and GMM estimators. A modification of the approach can be used to estimate the variance of two-step estimators.

11.
We develop a new quantile-based panel data framework to study the nature of income persistence and the transmission of income shocks to consumption. Log-earnings are the sum of a general Markovian persistent component and a transitory innovation. The persistence of past shocks to earnings is allowed to vary according to the size and sign of the current shock. Consumption is modeled as an age-dependent nonlinear function of assets, unobservable tastes, and the two earnings components. We establish the nonparametric identification of the nonlinear earnings process and of the consumption policy rule. Exploiting the enhanced consumption and asset data in recent waves of the Panel Study of Income Dynamics, we find that the earnings process features nonlinear persistence and conditional skewness. We confirm these results using population register data from Norway. We then show that the impact of earnings shocks varies substantially across earnings histories, and that this nonlinearity drives heterogeneous consumption responses. The framework provides new empirical measures of partial insurance in which the transmission of income shocks to consumption varies systematically with assets, the level of the shock, and the history of past shocks.
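The notion of nonlinear persistence can be illustrated with a toy exercise: simulate an earnings component whose memory is destroyed by large negative shocks, fit conditional quantiles of the component given its lag, and read off persistence as the slope of each quantile curve. The simulated process and the cubic quantile specification are illustrative, not the paper's model.

```python
# Toy illustration of nonlinear persistence: a persistent earnings component whose memory is
# wiped out by large negative shocks. Conditional quantiles of eta_t given eta_{t-1} are fit
# with a cubic quantile regression, and persistence is read off as the slope of each quantile
# curve. Both the simulated process and the specification are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T = 30_000
eta = np.zeros(T)
for t in range(1, T):
    shock = rng.normal(0, 0.35)
    rho = 0.95 if shock > -0.4 else 0.3          # big negative shocks destroy persistence
    eta[t] = rho * eta[t - 1] + shock

lag, cur = eta[:-1], eta[1:]
X = sm.add_constant(np.column_stack([lag, lag ** 2, lag ** 3]))

for tau in (0.1, 0.5, 0.9):
    b = np.asarray(sm.QuantReg(cur, X).fit(q=tau).params)        # [const, lag, lag^2, lag^3]
    slope = lambda z: b[1] + 2 * b[2] * z + 3 * b[3] * z ** 2    # d Q_tau / d eta_{t-1}
    print(f"tau={tau}: persistence at eta_lag=-1: {slope(-1.0):.2f}, at eta_lag=+1: {slope(1.0):.2f}")
# Persistence varies with the quantile (size/sign of the shock) and with the position in the
# earnings distribution, which is the kind of nonlinearity the paper documents.
```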

12.
We explore the impact of private information in sealed-bid first-price auctions. For a given symmetric and arbitrarily correlated prior distribution over values, we characterize the lowest winning-bid distribution that can arise across all information structures and equilibria. The information and equilibrium attaining this minimum leave bidders indifferent between their equilibrium bids and all higher bids. Our results provide lower bounds for bids and revenue with asymmetric distributions over values. We also report further characterizations of revenue and bidder surplus including upper bounds on revenue. Our work has implications for the identification of value distributions from data on winning bids and for the informationally robust comparison of alternative auction mechanisms.

13.
This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) models as special cases. These models are often ill-posed and hence it is difficult to verify whether a (possibly nonlinear) functional is root-n estimable or not. We provide computationally simple, unified inference procedures that are asymptotically valid regardless of whether a functional is root-n estimable or not. We establish the following new useful results: (1) the asymptotic normality of a plug-in penalized sieve minimum distance (PSMD) estimator of a (possibly nonlinear) functional; (2) the consistency of simple sieve variance estimators for the plug-in PSMD estimator, and hence the asymptotic chi-square distribution of the sieve Wald statistic; (3) the asymptotic chi-square distribution of an optimally weighted sieve quasi likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotically tight distribution of a non-optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) asymptotic properties of sieve Wald and QLR statistics for functionals of increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented.

14.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data-rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function-valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized controlled trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced-form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post-regularization and post-selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced-form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment-condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function-valued) parameters within this general framework where any high-quality machine learning methods (e.g., boosted trees, deep neural networks, random forests, and their aggregated and hybrid versions) can be used to learn the nonparametric/high-dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity-based estimation of regression functions for function-valued outcomes.
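The role of orthogonal (doubly robust) moment conditions can be sketched with a generic cross-fitted ATE estimator in which machine-learning fits of the outcome regressions and the propensity score are plugged into the AIPW score; this is in the spirit of the framework but is not the paper's LATE/LQTE procedure, and the data-generating process, learners, and fold count are illustrative.

```python
# Generic cross-fitted, doubly robust (orthogonal-moment) ATE estimator: ML fits of the
# outcome regressions and the propensity score are plugged into the AIPW score, whose
# orthogonality makes the point estimate insensitive to first-order errors in those fits.
# The DGP, the random-forest learners, and the number of folds are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(6)
n, p = 4000, 20
X = rng.standard_normal((n, p))
pscore = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))   # confounded treatment
D = rng.binomial(1, pscore)
tau_true = 2.0
Y = tau_true * D + X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.standard_normal(n)

psi = np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    m1 = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=0)
    m0 = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=0)
    ps = RandomForestClassifier(n_estimators=200, min_samples_leaf=20, random_state=0)
    m1.fit(X[train][D[train] == 1], Y[train][D[train] == 1])       # E[Y | X, D=1]
    m0.fit(X[train][D[train] == 0], Y[train][D[train] == 0])       # E[Y | X, D=0]
    ps.fit(X[train], D[train])                                     # P(D=1 | X)
    g1, g0 = m1.predict(X[test]), m0.predict(X[test])
    eh = np.clip(ps.predict_proba(X[test])[:, 1], 0.02, 0.98)      # trim extreme propensities
    psi[test] = (g1 - g0
                 + D[test] * (Y[test] - g1) / eh
                 - (1 - D[test]) * (Y[test] - g0) / (1 - eh))      # AIPW (orthogonal) score

ate, se = psi.mean(), psi.std(ddof=1) / np.sqrt(n)
print(f"ATE estimate {ate:.2f} (true {tau_true}); 95% CI [{ate - 1.96 * se:.2f}, {ate + 1.96 * se:.2f}]")
```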

15.
A principal wishes to screen an agent along several dimensions of private information simultaneously. The agent has quasilinear preferences that are additively separable across the various components. We consider a robust version of the principal's problem, in which she knows the marginal distribution of each component of the agent's type, but does not know the joint distribution. Any mechanism is evaluated by its worst-case expected profit, over all joint distributions consistent with the known marginals. We show that the optimum for the principal is simply to screen along each component separately. This result does not require any assumptions (such as single crossing) on the structure of preferences within each component. The proof technique involves a generalization of the concept of virtual values to arbitrary screening problems. Sample applications include monopoly pricing and a stylized dynamic taxation model.

16.
Call an economic model incomplete if it does not generate a probabilistic prediction even given knowledge of all parameter values. We propose a method of inference about unknown parameters for such models that is robust to heterogeneity and dependence of unknown form. The key is a Central Limit Theorem for belief functions; robust confidence regions are then constructed in a fashion paralleling the classical approach. Monte Carlo simulations support tractability of the method and demonstrate its enhanced robustness relative to existing methods.

17.
Internet advertising has been the fastest growing advertising channel in recent years, with paid search ads comprising the bulk of this revenue. We present results from a series of large-scale field experiments done at eBay that were designed to measure the causal effectiveness of paid search ads. Because search clicks and purchase intent are correlated, non-experimental methods overstate this effectiveness; we show that returns from paid search are a fraction of non-experimental estimates. As an extreme case, we show that brand keyword ads have no measurable short-term benefits. For non-brand keywords, we find that new and infrequent users are positively influenced by ads, but that more frequent users, whose purchasing behavior is not influenced by ads, account for most of the advertising expenses, resulting in average returns that are negative.

18.
In this study, we examine how the different incentive structures inherent in two primary contract types, time and materials (T&M) and fixed price (FP), influence the quality provided by the vendor in the software development outsourcing industry. We argue that the incentive structure of FP contracts motivates a vendor to be more efficient in the software development process, which results in higher quality than on projects executed under a T&M contract. We further argue that vendors consistently staff FP projects with better-trained personnel because they bear more risk on these contracts, resulting in better outcomes on these projects. We extend our analysis to propose that providing higher quality is associated with higher profit margins for the vendor only under FP contracts. We develop and test these hypotheses on data collected from 100 software projects completed by a leading Indian offshore vendor. The results provide strong support for our fundamental thesis that the drivers of and returns to quality vary by contract type. We discuss the implications of our research for both researchers and practitioners.

19.
An endogenous growth model is developed in which firms invest each period in researching and developing new ideas. An idea increases a firm's productivity; by how much depends on the technological propinquity between the idea and the firm's line of business. Ideas can be bought and sold on a market for patents. A firm can sell an idea that is not relevant to its business or buy one if it fails to innovate. The model is matched with stylized facts about the market for patents in the United States. The analysis gauges how efficiency in the patent market affects growth.

20.
This paper provides a novel mechanism for identifying and estimating latent group structures in panel data using penalized techniques. We consider both linear and nonlinear models where the regression coefficients are heterogeneous across groups but homogeneous within a group and the group membership is unknown. Two approaches are considered—penalized profile likelihood (PPL) estimation for the general nonlinear models without endogenous regressors, and penalized GMM (PGMM) estimation for linear models with endogeneity. In both cases, we develop a new variant of Lasso called classifier-Lasso (C-Lasso) that serves to shrink individual coefficients to the unknown group-specific coefficients. C-Lasso achieves simultaneous classification and consistent estimation in a single step and the classification exhibits the desirable property of uniform consistency. For PPL estimation, C-Lasso also achieves the oracle property so that group-specific parameter estimators are asymptotically equivalent to infeasible estimators that use individual group identity information. For PGMM estimation, the oracle property of C-Lasso is preserved in some special cases. Simulations demonstrate good finite-sample performance of the approach in both classification and estimation. Empirical applications to both linear and nonlinear models are presented.
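To convey what a latent group structure is, here is a cruder two-step alternative to C-Lasso, clearly not the paper's estimator: estimate each unit's coefficient from its own time series, then cluster the estimates with k-means. C-Lasso instead classifies and estimates jointly in a single penalized step and enjoys the oracle property; the baseline below only illustrates the object being recovered. All simulation settings are illustrative.

```python
# NOT the C-Lasso: a cruder two-step baseline that conveys the latent-group idea. Each unit's
# slope is estimated from its own T observations, and the estimates are then clustered with
# k-means. C-Lasso instead classifies and estimates jointly in one penalized step.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
N, T, K = 200, 40, 2
group_betas = np.array([0.5, 1.5])                 # group-specific slopes
membership = rng.integers(0, K, N)                 # latent group labels (unknown in practice)

beta_hat = np.zeros(N)
for i in range(N):
    x = rng.standard_normal(T)
    y = group_betas[membership[i]] * x + 0.5 * rng.standard_normal(T)
    beta_hat[i] = (x @ y) / (x @ x)                # unit-by-unit OLS slope

km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(beta_hat.reshape(-1, 1))
print("estimated group slopes:", np.round(np.sort(km.cluster_centers_.ravel()), 2))
accuracy = max(np.mean(km.labels_ == membership), np.mean(km.labels_ != membership))
print(f"classification accuracy (up to label switching): {accuracy:.2%}")
```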

