Similar Articles
20 similar articles found (search time: 531 ms)
1.
The ill‐posedness of the nonparametric instrumental variable (NPIV) model leads to estimators that may suffer from poor statistical performance. In this paper, we explore the possibility of imposing shape restrictions to improve the performance of the NPIV estimators. We assume that the function to be estimated is monotone and consider a sieve estimator that enforces this monotonicity constraint. We define a constrained measure of ill‐posedness that is relevant for the constrained estimator and show that, under a monotone IV assumption and certain other mild regularity conditions, this measure is bounded uniformly over the dimension of the sieve space. This finding is in stark contrast to the well‐known result that the unconstrained sieve measure of ill‐posedness that is relevant for the unconstrained estimator grows to infinity with the dimension of the sieve space. Based on this result, we derive a novel non‐asymptotic error bound for the constrained estimator. The bound gives a set of data‐generating processes for which the monotonicity constraint has a particularly strong regularization effect and considerably improves the performance of the estimator. The form of the bound implies that the regularization effect can be strong even in large samples and even if the function to be estimated is steep, particularly so if the NPIV model is severely ill‐posed. Our simulation study confirms these findings and reveals the potential for large performance gains from imposing the monotonicity constraint.
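As a hedged illustration (not the authors' implementation), the constrained sieve idea can be sketched in Python with a Bernstein basis, under which monotonicity of the estimated function reduces to ordered coefficients; the basis choice, function names, and tuning constants below are assumptions.

```python
import numpy as np
from scipy.optimize import lsq_linear
from scipy.stats import binom

def bernstein_basis(x, J):
    # J+1 Bernstein polynomials on [0, 1]; a monotone function corresponds
    # to non-decreasing basis coefficients
    return np.column_stack([binom.pmf(j, J, x) for j in range(J + 1)])

def monotone_npiv(y, x, w, J=5, K=8):
    """Sieve NPIV with a monotonicity constraint (illustrative sketch).
    y: outcome; x: endogenous regressor in [0, 1]; w: instrument in [0, 1]."""
    psi = bernstein_basis(x, J)                 # basis for the structural function
    b = bernstein_basis(w, K)                   # instrument basis (K >= J)
    proj = b @ np.linalg.pinv(b.T @ b) @ b.T    # projection onto instrument space
    # Reparametrize c = A d with d[1:] >= 0 so the coefficients are non-decreasing.
    A = np.tril(np.ones((J + 1, J + 1)))
    lb = np.r_[-np.inf, np.zeros(J)]
    res = lsq_linear(proj @ psi @ A, proj @ y, bounds=(lb, np.inf))
    c = A @ res.x
    return lambda x_new: bernstein_basis(np.asarray(x_new), J) @ c
```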

2.
We introduce methods for estimating nonparametric, nonadditive models with simultaneity. The methods are developed by directly connecting the elements of the structural system to be estimated with features of the density of the observable variables, such as ratios of derivatives or averages of products of derivatives of this density. The estimators are therefore easily computed functionals of a nonparametric estimator of the density of the observable variables. We consider in detail a model where to each structural equation there corresponds an exclusive regressor and a model with one equation of interest and one instrument that is included in a second equation. For both models, we provide new characterizations of observational equivalence on a set, in terms of the density of the observable variables and derivatives of the structural functions. Based on those characterizations, we develop two estimation methods. In the first method, the estimators of the structural derivatives are calculated by a simple matrix inversion and matrix multiplication, analogous to a standard least squares estimator, but with the elements of the matrices being averages of products of derivatives of nonparametric density estimators. In the second method, the estimators of the structural derivatives are calculated in two steps. In a first step, values of the instrument are found at which the density of the observable variables satisfies some properties. In the second step, the estimators are calculated directly from the values of derivatives of the density of the observable variables evaluated at the found values of the instrument. We show that both pointwise estimators are consistent and asymptotically normal.
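A hedged sketch of the basic building block — a kernel estimator of the density of the observables and of its derivatives, whose products and ratios the estimators above average; the Gaussian kernel and function names are assumptions.

```python
import numpy as np

def kde_and_gradient(data, points, h):
    """Gaussian-kernel estimates of a density and its gradient (sketch) --
    the ingredients of 'averages of products of derivatives of
    nonparametric density estimators' described in the abstract."""
    n, d = data.shape
    f = np.zeros(len(points))
    grad = np.zeros((len(points), d))
    for i, x in enumerate(points):
        u = (x - data) / h                                  # (n, d) scaled distances
        k = np.exp(-0.5 * np.sum(u ** 2, axis=1)) / (2 * np.pi) ** (d / 2)
        f[i] = k.sum() / (n * h ** d)                       # density estimate
        # gradient of the Gaussian kernel: grad K(u) = -u K(u), scaled by 1/h
        grad[i] = (-(u / h) * k[:, None]).sum(axis=0) / (n * h ** d)
    return f, grad
```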

3.
In the regression‐discontinuity (RD) design, units are assigned to treatment based on whether their value of an observed covariate exceeds a known cutoff. In this design, local polynomial estimators are now routinely employed to construct confidence intervals for treatment effects. The performance of these confidence intervals in applications, however, may be seriously hampered by their sensitivity to the specific bandwidth employed. Available bandwidth selectors typically yield a “large” bandwidth, leading to data‐driven confidence intervals that may be biased, with empirical coverage well below their nominal target. We propose new theory‐based, more robust confidence interval estimators for average treatment effects at the cutoff in sharp RD, sharp kink RD, fuzzy RD, and fuzzy kink RD designs. Our proposed confidence intervals are constructed using a bias‐corrected RD estimator together with a novel standard error estimator. For practical implementation, we discuss mean squared error optimal bandwidths, which are by construction not valid for conventional confidence intervals but are valid with our robust approach, and consistent standard error estimators based on our new variance formulas. In a special case of practical interest, our procedure amounts to running a quadratic instead of a linear local regression. More generally, our results give a formal justification to simple inference procedures based on increasing the order of the local polynomial estimator employed. We find in a simulation study that our confidence intervals exhibit close‐to‐correct empirical coverage and good empirical interval length on average, remarkably improving upon the alternatives available in the literature. All results are readily available in R and STATA using our companion software packages described in Calonico, Cattaneo, and Titiunik (2014d, 2014b).
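The special case noted in the abstract — running a quadratic instead of a linear local regression — can be sketched as follows; the triangular kernel, bandwidth handling, and function names are assumptions, and in practice the companion software packages cited above implement the full robust procedure.

```python
import numpy as np

def local_poly_rd(y, x, h, p=1, cutoff=0.0):
    """Sharp RD point estimate from local polynomial fits on each side (sketch)."""
    def fit_side(mask):
        xc = x[mask] - cutoff
        k = np.maximum(1 - np.abs(xc) / h, 0)          # triangular kernel weights
        X = np.vander(xc, p + 1, increasing=True)      # [1, xc, xc^2, ...]
        W = np.diag(k)
        coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[mask])
        return coef[0]                                 # intercept = level at cutoff
    band = np.abs(x - cutoff) < h
    return fit_side(band & (x >= cutoff)) - fit_side(band & (x < cutoff))

# In the spirit of the abstract's special case:
# tau_conventional = local_poly_rd(y, x, h, p=1)   # local linear
# tau_bias_aware   = local_poly_rd(y, x, h, p=2)   # quadratic instead of linear
```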

4.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data‐rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function‐valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced‐form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post‐regularization and post‐selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced‐form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment‐condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. We provide results on honest inference for (function‐valued) parameters within this general framework where any high‐quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high‐dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity‐based estimation of regression functions for function‐valued outcomes.
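A hedged sketch of the key ingredient — an orthogonal (doubly robust) moment condition for the ATE, combined with cross-fitting so that generic machine learning methods can estimate the reduced-form components; the random-forest learners, clipping constants, and function names are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import KFold

def aipw_ate(y, d, X, n_folds=5, seed=0):
    """Cross-fitted augmented IPW (orthogonal moment) estimate of the ATE (sketch)."""
    psi = np.zeros_like(y, dtype=float)
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        # reduced-form components learned by ML on the training folds
        m1 = RandomForestRegressor(random_state=seed).fit(
            X[train][d[train] == 1], y[train][d[train] == 1])
        m0 = RandomForestRegressor(random_state=seed).fit(
            X[train][d[train] == 0], y[train][d[train] == 0])
        ps = RandomForestClassifier(random_state=seed).fit(X[train], d[train])
        p = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)
        g1, g0 = m1.predict(X[test]), m0.predict(X[test])
        # Neyman-orthogonal score: regression adjustment plus weighted residuals
        psi[test] = (g1 - g0
                     + d[test] * (y[test] - g1) / p
                     - (1 - d[test]) * (y[test] - g0) / (1 - p))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(y))
```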

5.
We present a methodology for estimating the distributional effects of an endogenous treatment that varies at the group level when there are group‐level unobservables, a quantile extension of Hausman and Taylor (1981). Because of the presence of group‐level unobservables, standard quantile regression techniques are inconsistent in our setting even if the treatment is independent of unobservables. In contrast, our estimation technique is consistent as well as computationally simple, consisting of group‐by‐group quantile regression followed by two‐stage least squares. Using the Bahadur representation of quantile estimators, we derive weak conditions on the growth of the number of observations per group that are sufficient for consistency and asymptotic zero‐mean normality of our estimator. As in Hausman and Taylor (1981), micro‐level covariates can be used as internal instruments for the endogenous group‐level treatment if they satisfy relevance and exogeneity conditions. Our approach applies to a broad range of settings including labor, public finance, industrial organization, urban economics, and development; we illustrate its usefulness with several such examples. Finally, an empirical application of our estimator finds that low‐wage earners in the United States from 1990 to 2007 were significantly more affected by increased Chinese import competition than high‐wage earners.
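A hedged sketch of the two-step logic — group-by-group quantile regression followed by two-stage least squares of the estimated group effects on the group-level treatment, instrumented by micro-level covariate means; the variable names and the just-identified setup are assumptions.

```python
import numpy as np
import statsmodels.api as sm

def grouped_qr_2sls(y, x, group, treat, z, tau=0.5):
    """Group-by-group quantile regression, then 2SLS of the estimated
    group effects on the group-level treatment (illustrative sketch)."""
    groups = np.unique(group)
    alpha = np.empty(len(groups))          # quantile-specific group effects
    for j, g in enumerate(groups):
        m = group == g
        alpha[j] = sm.QuantReg(y[m], sm.add_constant(x[m])).fit(q=tau).params[0]
    D = np.column_stack([np.ones(len(groups)),
                         [treat[group == g][0] for g in groups]])
    # internal instrument: group means of a micro-level covariate z
    Z = np.column_stack([np.ones(len(groups)),
                         [z[group == g].mean() for g in groups]])
    # just-identified 2SLS: beta = (Z'D)^{-1} Z'alpha
    return np.linalg.solve(Z.T @ D, Z.T @ alpha)
```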

6.
We propose a semiparametric two‐step inference procedure for a finite‐dimensional parameter based on moment conditions constructed from high‐frequency data. The population moment conditions take the form of temporally integrated functionals of state‐variable processes that include the latent stochastic volatility process of an asset. In the first step, we nonparametrically recover the volatility path from high‐frequency asset returns. The nonparametric volatility estimator is then used to form sample moment functions in the second‐step GMM estimation, which requires the correction of a high‐order nonlinearity bias from the first step. We show that the proposed estimator is consistent and asymptotically mixed Gaussian and propose a consistent estimator for the conditional asymptotic variance. We also construct a Bierens‐type consistent specification test. These infill asymptotic results are based on a novel empirical‐process‐type theory for general integrated functionals of noisy semimartingale processes.
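A hedged sketch of the first step — recovering the volatility path from high-frequency returns by a local realized-variance average; the window and scaling conventions are assumptions.

```python
import numpy as np

def spot_volatility(returns, dt, k_n):
    """Local realized variance estimate of the spot variance path (sketch).

    returns: high-frequency log returns; dt: sampling interval (in years);
    k_n: local window length (k_n -> inf, k_n * dt -> 0 in infill asymptotics).
    """
    n = len(returns)
    sigma2 = np.full(n, np.nan)
    for i in range(n - k_n + 1):
        # average squared returns over the local window, scaled by 1/dt
        sigma2[i] = np.mean(returns[i:i + k_n] ** 2) / dt
    return sigma2

# e.g. 5-minute returns: dt = 1 / (252 * 78), with k_n of order sqrt(n)
```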

7.
The present paper examines the wage effects of continuous training programs using individual-level data from the German Socio-Economic Panel (GSOEP). In order to account for selectivity in training participation, we estimate average treatment effects (ATE and ATT) of general and firm-specific continuous training programs using several state-of-the-art propensity score matching (PSM) estimators. Additionally, we apply a combined matching difference-in-differences (MDiD) estimator to account for unobserved individual characteristics (e.g. motivation, ability). While the estimated ATE and ATT for general training are significant, ranging between about 4% and 7.5%, the corresponding wage effects of firm-specific training are mostly insignificant. Using the more appropriate MDiD estimator, however, we find a more precise and highly significant wage effect of about 5–6%, though only for general training and not for firm-specific training. These results are consistent with standard human capital theory insofar as general training is associated with larger wage increases than firm-specific training. Furthermore, we conclude that firms may intend to use specific training to adjust to new job requirements, while career-relevant changes may be conditioned on general training.
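A hedged sketch of the MDiD idea — matching on an estimated propensity score but comparing wage changes, so that time-invariant unobservables such as motivation or ability difference out; the learners and variable names are assumptions, not the paper's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def matching_did(dw, d, X):
    """Matching difference-in-differences (MDiD) sketch: match trainees to
    non-trainees on the propensity score and compare wage *changes* dw."""
    ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    treated, control = np.where(d == 1)[0], np.where(d == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched = control[idx[:, 0]]
    return np.mean(dw[treated] - dw[matched])   # ATT on wage growth
```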

8.
I introduce a model of undirected dyadic link formation which allows for assortative matching on observed agent characteristics (homophily) as well as unrestricted agent‐level heterogeneity in link surplus (degree heterogeneity). As in fixed effects panel data analyses, the joint distribution of observed and unobserved agent‐level characteristics is left unrestricted. Two estimators for the (common) homophily parameter, β_0, are developed and their properties studied under an asymptotic sequence involving a single network growing large. The first, tetrad logit (TL), estimator conditions on a sufficient statistic for the degree heterogeneity. The second, joint maximum likelihood (JML), estimator treats the degree heterogeneity {A_{i0}}_{i=1}^{N} as additional (incidental) parameters to be estimated. The TL estimate is consistent under both sparse and dense graph sequences, whereas consistency of the JML estimate is shown only under dense graph sequences.

9.
Matching estimators for average treatment effects are widely used in evaluation research despite the fact that their large sample properties have not been established in many cases. The absence of formal results in this area may be partly due to the fact that standard asymptotic expansions do not apply to matching estimators with a fixed number of matches because such estimators are highly nonsmooth functionals of the data. In this article we develop new methods for analyzing the large sample properties of matching estimators and establish a number of new results. We focus on matching with replacement with a fixed number of matches. First, we show that matching estimators are not N^{1/2}‐consistent in general and describe conditions under which matching estimators do attain N^{1/2}‐consistency. Second, we show that even in settings where matching estimators are N^{1/2}‐consistent, simple matching estimators with a fixed number of matches do not attain the semiparametric efficiency bound. Third, we provide a consistent estimator for the large sample variance that does not require consistent nonparametric estimation of unknown functions. Software for implementing these methods is available in Matlab, Stata, and R.
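A hedged sketch of the estimator studied here — matching with replacement with a fixed number of matches M, imputing each unit's missing potential outcome from its M nearest neighbors in the opposite arm; the distance metric and names are assumptions (the cited Matlab/Stata/R software implements the full method, including the variance estimator).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def matching_ate(y, d, X, M=4):
    """Simple matching estimator with a fixed number of matches (sketch)."""
    y_imp = np.empty_like(y, dtype=float)
    for arm in (0, 1):
        donors = np.where(d == 1 - arm)[0]        # opposite treatment arm
        nn = NearestNeighbors(n_neighbors=M).fit(X[donors])
        _, idx = nn.kneighbors(X[d == arm])
        # impute the missing potential outcome by the M-neighbor average
        y_imp[d == arm] = y[donors[idx]].mean(axis=1)
    y1 = np.where(d == 1, y, y_imp)               # observed or imputed Y(1)
    y0 = np.where(d == 0, y, y_imp)               # observed or imputed Y(0)
    return np.mean(y1 - y0)
```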

10.
This paper provides a novel mechanism for identifying and estimating latent group structures in panel data using penalized techniques. We consider both linear and nonlinear models where the regression coefficients are heterogeneous across groups but homogeneous within a group and the group membership is unknown. Two approaches are considered—penalized profile likelihood (PPL) estimation for the general nonlinear models without endogenous regressors, and penalized GMM (PGMM) estimation for linear models with endogeneity. In both cases, we develop a new variant of Lasso called classifier‐Lasso (C‐Lasso) that serves to shrink individual coefficients to the unknown group‐specific coefficients. C‐Lasso achieves simultaneous classification and consistent estimation in a single step and the classification exhibits the desirable property of uniform consistency. For PPL estimation, C‐Lasso also achieves the oracle property so that group‐specific parameter estimators are asymptotically equivalent to infeasible estimators that use individual group identity information. For PGMM estimation, the oracle property of C‐Lasso is preserved in some special cases. Simulations demonstrate good finite‐sample performance of the approach in both classification and estimation. Empirical applications to both linear and nonlinear models are presented.
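In assumed notation (a sketch, not reproduced from the paper), the C-Lasso idea for a linear panel can be written compactly: the penalty is a product of distances to K candidate group centers, so it vanishes exactly when an individual coefficient coincides with some group-specific coefficient.

```latex
% Sketch of a C-Lasso penalized least-squares criterion (assumed notation):
% the product term is zero iff beta_i equals one of the K group centers alpha_k,
% so minimization shrinks each individual coefficient to some group coefficient.
\[
Q_{\lambda}(\beta, \alpha)
  = \frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\bigl(y_{it} - \beta_i' x_{it}\bigr)^2
  + \frac{\lambda}{N}\sum_{i=1}^{N}\prod_{k=1}^{K}\bigl\lVert \beta_i - \alpha_k \bigr\rVert
\]
```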

11.
We study the asymptotic distribution of three‐step estimators of a finite‐dimensional parameter vector where the second step consists of one or more nonparametric regressions on a regressor that is estimated in the first step. The first‐step estimator is either parametric or nonparametric. Using Newey's (1994) path‐derivative method, we derive the contribution of the first‐step estimator to the influence function. In this derivation, it is important to account for the dual role that the first‐step estimator plays in the second‐step nonparametric regression, that is, that of conditioning variable and that of argument.

12.
The bootstrap is a convenient tool for calculating standard errors of the parameter estimates of complicated econometric models. Unfortunately, the fact that these models are complicated often makes the bootstrap extremely slow or even practically infeasible. This paper proposes an alternative to the bootstrap that relies only on the estimation of one‐dimensional parameters. We introduce the idea in the context of M and GMM estimators. A modification of the approach can be used to estimate the variance of two‐step estimators.
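For context, a hedged sketch of the conventional nonparametric bootstrap that becomes infeasible for complicated models — the baseline the proposed one-dimensional approach is designed to replace; the function names are assumptions.

```python
import numpy as np

def bootstrap_se(estimator, data, B=200, seed=0):
    """Conventional nonparametric bootstrap standard errors (sketch).
    Each replication re-runs the full (possibly expensive) estimator;
    `estimator` maps a data array to a parameter vector."""
    rng = np.random.default_rng(seed)
    n = len(data)
    draws = np.array([estimator(data[rng.integers(0, n, n)])
                      for _ in range(B)])
    return draws.std(axis=0, ddof=1)

# usage: se = bootstrap_se(lambda d: np.atleast_1d(d.mean()), data)
```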

13.
In this paper, we study the least squares (LS) estimator in a linear panel regression model with an unknown number of factors appearing as interactive fixed effects. Assuming that the number of factors used in estimation is larger than the true number of factors in the data, we establish the limiting distribution of the LS estimator for the regression coefficients as the number of time periods and the number of cross‐sectional units jointly go to infinity. The main result of the paper is that under certain assumptions, the limiting distribution of the LS estimator is independent of the number of factors used in the estimation as long as this number is not underestimated. The important practical implication of this result is that for inference on the regression coefficients, one does not necessarily need to estimate the number of interactive fixed effects consistently.
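A hedged sketch of a least-squares iteration for an interactive fixed effects model Y_it = X_it'β + λ_i'f_t + e_it, alternating pooled OLS with principal components extraction; this follows the standard iteration in the literature rather than the paper's exact algorithm, and R may exceed the true number of factors, per the abstract's robustness result.

```python
import numpy as np

def ls_interactive_fe(Y, X, R, n_iter=100):
    """LS estimator with interactive fixed effects (illustrative sketch).
    Y: (T, N) panel; X: (T, N, K) regressors; R: number of factors used."""
    T, N, K = X.shape
    Xmat = X.reshape(T * N, K)
    beta = np.linalg.lstsq(Xmat, Y.reshape(-1), rcond=None)[0]  # pooled OLS start
    for _ in range(n_iter):
        W = Y - (X @ beta)                       # (T, N) residual panel
        # principal components: factors = top-R eigenvectors of W W' / (N T)
        eigval, eigvec = np.linalg.eigh(W @ W.T / (N * T))
        F = np.sqrt(T) * eigvec[:, -R:]          # (T, R) factors, F'F/T = I
        L = W.T @ F / T                          # (N, R) loadings
        beta = np.linalg.lstsq(Xmat, (Y - F @ L.T).reshape(-1), rcond=None)[0]
    return beta
```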

14.
In weighted moment condition models, we show a subtle link between identification and estimability that limits the practical usefulness of estimators based on these models. In particular, if it is necessary for (point) identification that the weights take arbitrarily large values, then the parameter of interest, though point identified, cannot be estimated at the regular (parametric) rate and is said to be irregularly identified. This rate depends on relative tail conditions and can be as slow in some examples as n^{-1/4}. This nonstandard rate of convergence can lead to numerical instability and/or large standard errors. We examine two weighted model examples: (i) the binary response model under mean restriction introduced by Lewbel (1997) and further generalized to cover endogeneity and selection, where the estimator in this class of models is weighted by the density of a special regressor, and (ii) the treatment effect model under exogenous selection (Rosenbaum and Rubin (1983)), where the resulting estimator of the average treatment effect is one that is weighted by a variant of the propensity score. Without strong relative support conditions, these models, similar to well-known “identified at infinity” models, lead to estimators that converge at slower than parametric rate, since, essentially, to ensure point identification, one requires some variables to take values on sets with arbitrarily small probabilities, or thin sets. For the two models above, we derive some rates of convergence and propose that one conducts inference using rate adaptive procedures that are analogous to Andrews and Schafgans (1998) for the sample selection model.
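A hedged sketch of example (ii): the propensity-weighted ATE, whose weights take arbitrarily large values as the propensity score approaches 0 or 1 — the thin-set phenomenon behind the slow rates; the trimming device shown is a crude stand-in for the rate-adaptive procedures discussed above, and the names are assumptions.

```python
import numpy as np

def ipw_ate(y, d, p, trim=0.0):
    """Propensity-weighted ATE (sketch). When p is near 0 or 1 the weights
    1/p and 1/(1-p) blow up: the parameter stays point identified, but the
    estimator becomes unstable -- the irregular-identification point above."""
    keep = (p > trim) & (p < 1 - trim)       # crude trimming of thin sets
    y, d, p = y[keep], d[keep], p[keep]
    return np.mean(d * y / p - (1 - d) * y / (1 - p))
```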

15.
We develop an econometric methodology to infer the path of risk premia from a large unbalanced panel of individual stock returns. We estimate the time‐varying risk premia implied by conditional linear asset pricing models where the conditioning includes both instruments common to all assets and asset‐specific instruments. The estimator uses simple weighted two‐pass cross‐sectional regressions, and we show its consistency and asymptotic normality under increasing cross‐sectional and time series dimensions. We address consistent estimation of the asymptotic variance by hard thresholding, and testing for asset pricing restrictions induced by the no‐arbitrage assumption. We derive the restrictions given by a continuum of assets in a multi‐period economy under an approximate factor structure robust to asset repackaging. The empirical analysis on returns for about ten thousand U.S. stocks from July 1964 to December 2009 shows that risk premia are large and volatile in crisis periods. They exhibit large positive and negative strays from time‐invariant estimates, follow the macroeconomic cycles, and do not match risk premia estimates on standard sets of portfolios. The asset pricing restrictions are rejected for a conditional four‐factor model capturing market, size, value, and momentum effects.
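A hedged sketch of an unconditional, unweighted simplification of the two-pass idea (the paper's estimator adds conditioning instruments and weighting): time-series regressions deliver betas, then period-by-period cross-sectional regressions deliver a path of risk-premia estimates; the names are assumptions.

```python
import numpy as np

def two_pass_risk_premia(R, F):
    """Unconditional two-pass (Fama-MacBeth-style) estimator (sketch).
    R: (T, N) excess returns; F: (T, K) factor realizations."""
    T, N = R.shape
    Fc = np.column_stack([np.ones(T), F])
    # Pass 1: time-series betas for each asset
    B = np.linalg.lstsq(Fc, R, rcond=None)[0][1:].T        # (N, K) betas
    Bc = np.column_stack([np.ones(N), B])
    # Pass 2: cross-sectional regression of returns on betas, each period
    lambdas = np.linalg.lstsq(Bc, R.T, rcond=None)[0]      # (K+1, T) slopes
    return lambdas[1:].mean(axis=1)                        # time-averaged premia
```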

16.
This paper develops the fixed‐smoothing asymptotics in a two‐step generalized method of moments (GMM) framework. Under this type of asymptotics, the weighting matrix in the second‐step GMM criterion function converges weakly to a random matrix and the two‐step GMM estimator is asymptotically mixed normal. Nevertheless, the Wald statistic, the GMM criterion function statistic, and the Lagrange multiplier statistic remain asymptotically pivotal. It is shown that critical values from the fixed‐smoothing asymptotic distribution are high order correct under the conventional increasing‐smoothing asymptotics. When an orthonormal series covariance estimator is used, the critical values can be approximated very well by the quantiles of a noncentral F distribution. A simulation study shows that statistical tests based on the new fixed‐smoothing approximation are much more accurate in size than existing tests.
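A hedged sketch of an orthonormal series covariance estimator of the kind mentioned above, using a cosine basis; under fixed-smoothing asymptotics the number of basis functions K is held fixed, which is what yields the (noncentral) F approximation. The basis choice and names are assumptions.

```python
import numpy as np

def series_lrv(u, K):
    """Orthonormal series long-run variance estimator (sketch).
    u: (T, m) demeaned moment process; K: number of basis functions,
    held fixed under fixed-smoothing asymptotics."""
    T, m = u.shape
    t = (np.arange(1, T + 1) - 0.5) / T
    Omega = np.zeros((m, m))
    for k in range(1, K + 1):
        phi = np.sqrt(2) * np.cos(np.pi * k * t)   # orthonormal cosine basis
        Lam = phi @ u / np.sqrt(T)                 # (m,) basis projection
        Omega += np.outer(Lam, Lam)
    return Omega / K
```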

17.
Stochastic discount factor (SDF) processes in dynamic economies admit a permanent‐transitory decomposition in which the permanent component characterizes pricing over long investment horizons. This paper introduces an empirical framework to analyze the permanent‐transitory decomposition of SDF processes. Specifically, we show how to estimate nonparametrically the solution to the Perron–Frobenius eigenfunction problem of Hansen and Scheinkman (2009). Our empirical framework allows researchers to (i) construct time series of the estimated permanent and transitory components and (ii) estimate the yield and the change of measure which characterize pricing over long investment horizons. We also introduce nonparametric estimators of the continuation value function in a class of models with recursive preferences by reinterpreting the value function recursion as a nonlinear Perron–Frobenius problem. We establish consistency and convergence rates of the eigenfunction estimators and asymptotic normality of the eigenvalue estimator and estimators of related functionals. As an application, we study an economy where the representative agent is endowed with recursive preferences, allowing for general (nonlinear) consumption and earnings growth dynamics.
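A hedged sketch of a sieve approach to the Perron–Frobenius eigenfunction problem: project the one-period pricing operator onto a finite basis and solve the resulting generalized eigenproblem. The basis interface and the assumption that one-period SDF realizations are available are illustrative, not the paper's estimator.

```python
import numpy as np
from scipy.linalg import eig

def sieve_pf_eigenfunction(X_t, X_next, m_vals, basis):
    """Sieve estimate of the Perron-Frobenius pair (rho, phi) (sketch).
    Solves G c = rho * M c, where G is the sample projection of the
    one-period pricing operator onto the basis, M the basis Gram matrix;
    m_vals are realized one-period SDF values m(X_t, X_{t+1})."""
    B_t, B_next = basis(X_t), basis(X_next)          # (n, J) basis evaluations
    n = len(m_vals)
    G = B_t.T @ (m_vals[:, None] * B_next) / n
    M = B_t.T @ B_t / n
    rho, vecs = eig(G, M)
    j = np.argmax(rho.real)                          # principal eigenvalue
    c = vecs[:, j].real
    return rho[j].real, lambda x: basis(x) @ c       # eigenvalue, eigenfunction
```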

18.
We develop a new parametric estimation procedure for option panels observed with error. We exploit asymptotic approximations assuming an ever increasing set of option prices in the moneyness (cross‐sectional) dimension, but with a fixed time span. We develop consistent estimators for the parameters and the dynamic realization of the state vector governing the option price dynamics. The estimators converge stably to a mixed‐Gaussian law and we develop feasible estimators for the limiting variance. We also provide semiparametric tests for the option price dynamics based on the distance between the spot volatility extracted from the options and one constructed nonparametrically from high‐frequency data on the underlying asset. Furthermore, we develop new tests for the day‐by‐day model fit over specific regions of the volatility surface and for the stability of the risk‐neutral dynamics over time. A comprehensive Monte Carlo study indicates that the inference procedures work well in empirically realistic settings. In an empirical application to S&P 500 index options, guided by the new diagnostic tests, we extend existing asset pricing models by allowing for a flexible dynamic relation between volatility and priced jump tail risk. Importantly, we document that the priced jump tail risk typically responds in a more pronounced and persistent manner than volatility to large negative market shocks.

19.
We consider forecasting with uncertainty about the choice of predictor variables. The researcher wants to select a model, estimate the parameters, and use the parameter estimates for forecasting. We investigate the distributional properties of a number of different schemes for model choice and parameter estimation, including: in‐sample model selection using the Akaike information criterion; out‐of‐sample model selection; and splitting the data into subsamples for model selection and parameter estimation. Using a weak‐predictor local asymptotic scheme, we provide a representation result that facilitates comparison of the distributional properties of the procedures and their associated forecast risks. This representation isolates the source of inefficiency in some of these procedures. We develop a simulation procedure that improves the accuracy of the out‐of‐sample and split‐sample methods uniformly over the local parameter space. We also examine how bootstrap aggregation (bagging) affects the local asymptotic risk of the estimators and their associated forecasts. Numerically, we find that for many values of the local parameter, the out‐of‐sample and split‐sample schemes perform poorly if implemented in the conventional way. But they perform well, if implemented in conjunction with our risk‐reduction method or bagging.
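A hedged sketch of the split-sample scheme in its conventional implementation (which the abstract warns can perform poorly without the proposed risk-reduction step or bagging): select the predictor set by out-of-sample MSE on one subsample, then re-estimate on the other; the names are assumptions.

```python
import numpy as np

def split_sample_forecast(y, X_list, split=0.5):
    """Conventional split-sample scheme (sketch): choose the candidate
    regressor set with the best out-of-sample MSE on the evaluation
    subsample, then re-estimate on the estimation subsample.
    X_list: candidate regressor matrices, each (T, k_j), aligned with y."""
    T = len(y)
    s = int(split * T)
    mse = []
    for X in X_list:
        b = np.linalg.lstsq(X[:s], y[:s], rcond=None)[0]
        mse.append(np.mean((y[s:] - X[s:] @ b) ** 2))
    X_best = X_list[int(np.argmin(mse))]
    b = np.linalg.lstsq(X_best[:s], y[:s], rcond=None)[0]   # re-estimate
    return X_best, b
```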

20.
Jump Regressions     
We develop econometric tools for studying jump dependence of two processes from high‐frequency observations on a fixed time interval. In this context, only segments of data around a few outlying observations are informative for the inference. We derive an asymptotically valid test for stability of a linear jump relation over regions of the jump size domain. The test has power against general forms of nonlinearity in the jump dependence as well as temporal instabilities. We further propose an efficient estimator for the linear jump regression model that is formed by optimally weighting the detected jumps with weights based on the diffusive volatility around the jump times. We derive the asymptotic limit of the estimator, a semiparametric lower efficiency bound for the linear jump regression, and show that our estimator attains the latter. The analysis covers both deterministic and random jump arrivals. In an empirical application, we use the developed inference techniques to test the temporal stability of market jump betas.
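A hedged sketch of the ingredients: flag jumps by truncating returns at a threshold proportional to local volatility, then run a regression through the origin on the few flagged increments. The threshold constants are assumptions, and the paper's efficient estimator additionally weights the detected jumps by the diffusive volatility around the jump times.

```python
import numpy as np

def jump_regression(rx, ry, dt, alpha=4.0):
    """Linear jump regression beta from high-frequency returns (sketch).
    rx, ry: returns of the two processes over a fixed interval; increments
    of x exceeding ~ alpha * sigma * dt**0.49 are flagged as jumps, and
    only those few observations enter the regression."""
    sigma = np.sqrt(np.mean(rx ** 2) / dt)        # crude diffusive vol proxy
    jumps = np.abs(rx) > alpha * sigma * dt ** 0.49
    # OLS through the origin on the detected jump increments
    return np.sum(rx[jumps] * ry[jumps]) / np.sum(rx[jumps] ** 2)
```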
