Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Jump Regressions     
We develop econometric tools for studying jump dependence of two processes from high‐frequency observations on a fixed time interval. In this context, only segments of data around a few outlying observations are informative for the inference. We derive an asymptotically valid test for stability of a linear jump relation over regions of the jump size domain. The test has power against general forms of nonlinearity in the jump dependence as well as temporal instabilities. We further propose an efficient estimator for the linear jump regression model that is formed by optimally weighting the detected jumps with weights based on the diffusive volatility around the jump times. We derive the asymptotic limit of the estimator, a semiparametric lower efficiency bound for the linear jump regression, and show that our estimator attains the latter. The analysis covers both deterministic and random jump arrivals. In an empirical application, we use the developed inference techniques to test the temporal stability of market jump betas.
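The optimally weighted linear jump regression described above amounts to weighted least squares over the detected jumps, with weights inversely proportional to the local diffusive variance around each jump time. A minimal sketch (the function name and toy inputs are illustrative, not from the paper):

```python
import numpy as np

def weighted_jump_beta(dx, dy, spot_var):
    """WLS estimate of beta in the linear jump relation dy = beta * dx,
    weighting each detected jump pair by the inverse of the estimated
    diffusive variance around its jump time."""
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    w = 1.0 / np.asarray(spot_var, float)   # precision weights
    return np.sum(w * dx * dy) / np.sum(w * dx * dx)

# Toy check: with an exact linear jump relation, any positive weights
# recover the same beta; weighting matters only for efficiency under noise.
dx = np.array([1.5, -2.0, 0.8])
dy = 0.7 * dx
beta = weighted_jump_beta(dx, dy, spot_var=[0.3, 0.1, 0.5])
```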

2.
We develop a new parametric estimation procedure for option panels observed with error. We exploit asymptotic approximations assuming an ever increasing set of option prices in the moneyness (cross‐sectional) dimension, but with a fixed time span. We develop consistent estimators for the parameters and the dynamic realization of the state vector governing the option price dynamics. The estimators converge stably to a mixed‐Gaussian law and we develop feasible estimators for the limiting variance. We also provide semiparametric tests for the option price dynamics based on the distance between the spot volatility extracted from the options and one constructed nonparametrically from high‐frequency data on the underlying asset. Furthermore, we develop new tests for the day‐by‐day model fit over specific regions of the volatility surface and for the stability of the risk‐neutral dynamics over time. A comprehensive Monte Carlo study indicates that the inference procedures work well in empirically realistic settings. In an empirical application to S&P 500 index options, guided by the new diagnostic tests, we extend existing asset pricing models by allowing for a flexible dynamic relation between volatility and priced jump tail risk. Importantly, we document that the priced jump tail risk typically responds in a more pronounced and persistent manner than volatility to large negative market shocks.

3.
This paper considers inference on functionals of semi/nonparametric conditional moment restrictions with possibly nonsmooth generalized residuals, which include all of the (nonlinear) nonparametric instrumental variables (IV) as special cases. These models are often ill‐posed and hence it is difficult to verify whether a (possibly nonlinear) functional is root‐n estimable or not. We provide computationally simple, unified inference procedures that are asymptotically valid regardless of whether a functional is root‐n estimable or not. We establish the following new useful results: (1) the asymptotic normality of a plug‐in penalized sieve minimum distance (PSMD) estimator of a (possibly nonlinear) functional; (2) the consistency of simple sieve variance estimators for the plug‐in PSMD estimator, and hence the asymptotic chi‐square distribution of the sieve Wald statistic; (3) the asymptotic chi‐square distribution of an optimally weighted sieve quasi likelihood ratio (QLR) test under the null hypothesis; (4) the asymptotic tight distribution of a non‐optimally weighted sieve QLR statistic under the null; (5) the consistency of generalized residual bootstrap sieve Wald and QLR tests; (6) local power properties of sieve Wald and QLR tests and of their bootstrap versions; (7) asymptotic properties of sieve Wald and SQLR for functionals of increasing dimension. Simulation studies and an empirical illustration of a nonparametric quantile IV regression are presented.

4.
We develop an econometric methodology to infer the path of risk premia from a large unbalanced panel of individual stock returns. We estimate the time‐varying risk premia implied by conditional linear asset pricing models where the conditioning includes both instruments common to all assets and asset‐specific instruments. The estimator uses simple weighted two‐pass cross‐sectional regressions, and we show its consistency and asymptotic normality under increasing cross‐sectional and time series dimensions. We address consistent estimation of the asymptotic variance by hard thresholding, and test the asset pricing restrictions induced by the no‐arbitrage assumption. We derive the restrictions given by a continuum of assets in a multi‐period economy under an approximate factor structure robust to asset repackaging. The empirical analysis on returns for about ten thousand U.S. stocks from July 1964 to December 2009 shows that risk premia are large and volatile in crisis periods. They exhibit large positive and negative strays from time‐invariant estimates, follow the macroeconomic cycles, and do not match risk premia estimates on standard sets of portfolios. The asset pricing restrictions are rejected for a conditional four‐factor model capturing market, size, value, and momentum effects.
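A stripped-down, balanced-panel illustration of the two-pass logic (time-series betas, then a cross-sectional regression of mean returns on those betas). The paper's actual estimator is a weighted two-pass regression on a large unbalanced panel with conditioning instruments, which this sketch omits; all simulated quantities are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, K = 200, 50, 2                        # periods, assets, factors
lam_true = np.array([0.5, 0.3])             # risk premia
f = rng.normal(size=(T, K))                 # mean-zero factor shocks
B = rng.normal(1.0, 0.3, size=(N, K))       # asset betas
R = (f + lam_true) @ B.T + 0.1 * rng.normal(size=(T, N))   # excess returns

# Pass 1: time-series regression of each asset on the factors -> betas
X = np.column_stack([np.ones(T), f])
betas = np.linalg.lstsq(X, R, rcond=None)[0][1:].T          # (N, K)

# Pass 2: cross-sectional regression of average returns on betas -> premia
lam_hat = np.linalg.lstsq(betas, R.mean(axis=0), rcond=None)[0]
```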

5.
Stochastic discount factor (SDF) processes in dynamic economies admit a permanent‐transitory decomposition in which the permanent component characterizes pricing over long investment horizons. This paper introduces an empirical framework to analyze the permanent‐transitory decomposition of SDF processes. Specifically, we show how to estimate nonparametrically the solution to the Perron–Frobenius eigenfunction problem of Hansen and Scheinkman (2009). Our empirical framework allows researchers to (i) construct time series of the estimated permanent and transitory components and (ii) estimate the yield and the change of measure which characterize pricing over long investment horizons. We also introduce nonparametric estimators of the continuation value function in a class of models with recursive preferences by reinterpreting the value function recursion as a nonlinear Perron–Frobenius problem. We establish consistency and convergence rates of the eigenfunction estimators and asymptotic normality of the eigenvalue estimator and estimators of related functionals. As an application, we study an economy where the representative agent is endowed with recursive preferences, allowing for general (nonlinear) consumption and earnings growth dynamics.
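In a toy finite-state economy, the Perron–Frobenius problem reduces to a matrix eigenproblem: find ρ and a positive φ with E[M′ φ(X′) | X = i] = ρ φ(i), i.e. the principal eigenpair of the pricing matrix. The numbers below are illustrative; the paper's contribution is the nonparametric (sieve) version of this computation:

```python
import numpy as np

# Two-state Markov chain with a state-dependent one-period SDF (toy values).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # transition probabilities
m = np.array([0.99, 0.97])        # SDF realization as a function of the next state
Q = P * m[None, :]                # pricing operator acting on functions of the state

# Principal (Perron-Frobenius) eigenpair: Q phi = rho phi with phi > 0.
eigvals, eigvecs = np.linalg.eig(Q)
k = np.argmax(eigvals.real)
rho = eigvals.real[k]             # principal eigenvalue: long-run discount factor
phi = np.abs(eigvecs[:, k].real)  # positive eigenfunction
long_run_yield = -np.log(rho)     # yield characterizing long-horizon pricing
```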

6.
We present a methodology for estimating the distributional effects of an endogenous treatment that varies at the group level when there are group‐level unobservables, a quantile extension of Hausman and Taylor (1981). Because of the presence of group‐level unobservables, standard quantile regression techniques are inconsistent in our setting even if the treatment is independent of unobservables. In contrast, our estimation technique is consistent as well as computationally simple, consisting of group‐by‐group quantile regression followed by two‐stage least squares. Using the Bahadur representation of quantile estimators, we derive weak conditions on the growth of the number of observations per group that are sufficient for consistency and asymptotic zero‐mean normality of our estimator. As in Hausman and Taylor (1981), micro‐level covariates can be used as internal instruments for the endogenous group‐level treatment if they satisfy relevance and exogeneity conditions. Our approach applies to a broad range of settings including labor, public finance, industrial organization, urban economics, and development; we illustrate its usefulness with several such examples. Finally, an empirical application of our estimator finds that low‐wage earners in the United States from 1990 to 2007 were significantly more affected by increased Chinese import competition than high‐wage earners.

7.
This paper considers the problem of testing a finite number of moment inequalities. We propose a two‐step approach. In the first step, a confidence region for the moments is constructed. In the second step, this set is used to provide information about which moments are “negative.” A Bonferroni‐type correction is used to account for the fact that, with some probability, the moments may not lie in the confidence region. It is shown that the test controls size uniformly over a large class of distributions for the observed data. An important feature of the proposal is that it remains computationally feasible, even when the number of moments is large. The finite‐sample properties of the procedure are examined via a simulation study, which demonstrates, among other things, that the proposal remains competitive with existing procedures while being computationally more attractive.
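A minimal numpy sketch of the two-step idea with Bonferroni-style critical values. The exact construction in the paper differs; the levels `alpha` and `beta`, and the normal approximation used here, are simplifying assumptions:

```python
import numpy as np
from statistics import NormalDist

def two_step_test(M, alpha=0.05, beta=0.005):
    """Test H0: E[m_j] <= 0 for all j, from an (n, k) sample of moments.
    Step 1: a one-sided confidence region flags moments that are clearly
    negative and can be dropped. Step 2: a Bonferroni critical value over
    the remaining moments, with the level reduced by beta to account for
    the possibility that Step 1's region misses the true moments."""
    n, k = M.shape
    t = M.mean(axis=0) / (M.std(axis=0, ddof=1) / np.sqrt(n))
    z = NormalDist().inv_cdf
    keep = t > -z(1 - beta / k)                 # not clearly negative
    k_eff = max(int(keep.sum()), 1)
    return bool(t.max() > z(1 - (alpha - beta) / k_eff))

rng = np.random.default_rng(1)
rej_when_null_holds = two_step_test(rng.normal(-1.0, 1.0, size=(500, 20)))
rej_when_violated = two_step_test(rng.normal(0.5, 1.0, size=(500, 20)))
```

Dropping clearly negative moments in Step 1 is what keeps the Step 2 critical value from growing with the total number of moments, which is the source of the power gains the abstract describes.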

8.
The ill‐posedness of the nonparametric instrumental variable (NPIV) model leads to estimators that may suffer from poor statistical performance. In this paper, we explore the possibility of imposing shape restrictions to improve the performance of the NPIV estimators. We assume that the function to be estimated is monotone and consider a sieve estimator that enforces this monotonicity constraint. We define a constrained measure of ill‐posedness that is relevant for the constrained estimator and show that, under a monotone IV assumption and certain other mild regularity conditions, this measure is bounded uniformly over the dimension of the sieve space. This finding is in stark contrast to the well‐known result that the unconstrained sieve measure of ill‐posedness that is relevant for the unconstrained estimator grows to infinity with the dimension of the sieve space. Based on this result, we derive a novel non‐asymptotic error bound for the constrained estimator. The bound gives a set of data‐generating processes for which the monotonicity constraint has a particularly strong regularization effect and considerably improves the performance of the estimator. The form of the bound implies that the regularization effect can be strong even in large samples and even if the function to be estimated is steep, particularly so if the NPIV model is severely ill‐posed. Our simulation study confirms these findings and reveals the potential for large performance gains from imposing the monotonicity constraint.

9.
We introduce methods for estimating nonparametric, nonadditive models with simultaneity. The methods are developed by directly connecting the elements of the structural system to be estimated with features of the density of the observable variables, such as ratios of derivatives or averages of products of derivatives of this density. The estimators are therefore easily computed functionals of a nonparametric estimator of the density of the observable variables. We consider in detail a model where to each structural equation there corresponds an exclusive regressor and a model with one equation of interest and one instrument that is included in a second equation. For both models, we provide new characterizations of observational equivalence on a set, in terms of the density of the observable variables and derivatives of the structural functions. Based on those characterizations, we develop two estimation methods. In the first method, the estimators of the structural derivatives are calculated by a simple matrix inversion and matrix multiplication, analogous to a standard least squares estimator, but with the elements of the matrices being averages of products of derivatives of nonparametric density estimators. In the second method, the estimators of the structural derivatives are calculated in two steps. In a first step, values of the instrument are found at which the density of the observable variables satisfies some properties. In the second step, the estimators are calculated directly from the values of derivatives of the density of the observable variables evaluated at the found values of the instrument. We show that both pointwise estimators are consistent and asymptotically normal.

10.
The availability of high frequency financial data has generated a series of estimators based on intra‐day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create “feasible statistics” for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θt dt, where the spot parameter process θ can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two‐scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as those for leverage effects, high frequency betas, and semivariance.

11.
This paper develops the fixed‐smoothing asymptotics in a two‐step generalized method of moments (GMM) framework. Under this type of asymptotics, the weighting matrix in the second‐step GMM criterion function converges weakly to a random matrix and the two‐step GMM estimator is asymptotically mixed normal. Nevertheless, the Wald statistic, the GMM criterion function statistic, and the Lagrange multiplier statistic remain asymptotically pivotal. It is shown that critical values from the fixed‐smoothing asymptotic distribution are high order correct under the conventional increasing‐smoothing asymptotics. When an orthonormal series covariance estimator is used, the critical values can be approximated very well by the quantiles of a noncentral F distribution. A simulation study shows that statistical tests based on the new fixed‐smoothing approximation are much more accurate in size than existing tests.

12.
This paper shows that the problem of testing hypotheses in moment condition models without any assumptions about identification may be considered as a problem of testing with an infinite‐dimensional nuisance parameter. We introduce a sufficient statistic for this nuisance parameter in a Gaussian problem and propose conditional tests. These conditional tests have uniformly correct asymptotic size for a large class of models and test statistics. We apply our approach to construct tests based on quasi‐likelihood ratio statistics, which we show are efficient in strongly identified models and perform well relative to existing alternatives in two examples.

13.
Building on the distribution theory of measurement errors, this paper incorporates the effect of microstructure noise into the variance of the measurement error and constructs an HARQ-N model that accounts for microstructure noise. Using Monte Carlo simulations and high-frequency data from the Chinese stock market, we compare the estimation and forecasting performance of the HAR, HARQ, HARQ-N, and HAR-RV-N-CJ models. We find that the measurement-error correction terms in the HARQ and HARQ-N models have statistically significant negative coefficients on volatility, and that the coefficient on the measurement-error term in the HARQ-N model is much larger in magnitude than in the HARQ model, so it attenuates the impact of contemporaneous microstructure noise and measurement error to a greater degree. Moreover, both the in-sample and out-of-sample forecasts of the HARQ-N model, which accounts for microstructure noise and measurement error, are statistically significantly better than those of the HAR, HARQ, and HAR-RV-N-CJ models.
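For reference, the baseline HARQ regression that the HARQ-N model extends interacts lagged realized variance (RV) with the square root of realized quarticity (RQ), so that noisier RV observations receive a (typically negative) correction. A sketch on simulated inputs; the series, lag windows, and coefficient values are illustrative, and the HARQ-N noise adjustment itself is not implemented here:

```python
import numpy as np

def harq_design(rv, rq):
    """HARQ regressors: daily, weekly (5-day), monthly (22-day) RV averages
    plus the sqrt(RQ) x RV measurement-error correction term."""
    X, y = [], []
    for t in range(22, len(rv)):
        rv_d = rv[t - 1]
        rv_w = rv[t - 5:t].mean()
        rv_m = rv[t - 22:t].mean()
        X.append([1.0, rv_d, rv_w, rv_m, np.sqrt(rq[t - 1]) * rv_d])
        y.append(rv[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(2)
rv = np.exp(rng.normal(0.0, 0.5, 2000))            # toy positive RV series
rq = rv ** 2 * (1.0 + rng.gamma(2.0, 0.5, 2000))   # toy quarticity proxy
X, y = harq_design(rv, rq)
coef = np.linalg.lstsq(X, y, rcond=None)[0]        # [const, daily, weekly, monthly, RQ term]
```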

14.
This paper introduces time‐varying grouped patterns of heterogeneity in linear panel data models. A distinctive feature of our approach is that group membership is left unrestricted. We estimate the parameters of the model using a “grouped fixed‐effects” estimator that minimizes a least squares criterion with respect to all possible groupings of the cross‐sectional units. Recent advances in the clustering literature allow for fast and efficient computation. We provide conditions under which our estimator is consistent as both dimensions of the panel tend to infinity, and we develop inference methods. Finally, we allow for grouped patterns of unobserved heterogeneity in the study of the link between income and democracy across countries.
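The grouped fixed-effects criterion can be minimized by alternating between group assignment and within-group estimation, much like k-means (the abstract's reference to the clustering literature points to exactly this connection). A toy sketch for the pure grouped time-effects model y_it = a_{g(i),t} + e_it, with illustrative data; the paper's full model also includes covariates, which this sketch omits:

```python
import numpy as np

def grouped_fe(Y, G, iters=25):
    """Alternate (i) assigning each unit to the group time-profile closest in
    squared error and (ii) re-estimating each profile as the within-group
    mean. Deterministic farthest-point seeding keeps the toy example stable."""
    N, T = Y.shape
    seeds = [0]
    for _ in range(G - 1):
        d = np.min([((Y - Y[s]) ** 2).sum(axis=1) for s in seeds], axis=0)
        seeds.append(int(d.argmax()))
    A = Y[seeds].astype(float)                       # (G, T) group profiles
    for _ in range(iters):
        d = ((Y[:, None, :] - A[None, :, :]) ** 2).sum(axis=2)
        g = d.argmin(axis=1)                         # group assignments
        A = np.vstack([Y[g == k].mean(axis=0) for k in range(G)])
    return g, A

# Toy panel: two latent groups with distinct time profiles.
rng = np.random.default_rng(3)
N, T = 60, 40
true_g = np.repeat([0, 1], N // 2)
profiles = np.vstack([np.sin(np.linspace(0, 3, T)), np.cos(np.linspace(0, 3, T))])
Y = profiles[true_g] + 0.2 * rng.normal(size=(N, T))
g_hat, _ = grouped_fe(Y, G=2)
```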

15.
I introduce a model of undirected dyadic link formation which allows for assortative matching on observed agent characteristics (homophily) as well as unrestricted agent‐level heterogeneity in link surplus (degree heterogeneity). As in fixed effects panel data analyses, the joint distribution of observed and unobserved agent‐level characteristics is left unrestricted. Two estimators for the (common) homophily parameter, β0, are developed and their properties studied under an asymptotic sequence involving a single network growing large. The first, tetrad logit (TL), estimator conditions on a sufficient statistic for the degree heterogeneity. The second, joint maximum likelihood (JML), estimator treats the degree heterogeneity {A_i0 : i = 1, …, N} as additional (incidental) parameters to be estimated. The TL estimate is consistent under both sparse and dense graph sequences, whereas consistency of the JML estimate is shown only under dense graph sequences.

16.
In this paper, we provide efficient estimators and honest confidence bands for a variety of treatment effects including local average (LATE) and local quantile treatment effects (LQTE) in data‐rich environments. We can handle very many control variables, endogenous receipt of treatment, heterogeneous treatment effects, and function‐valued outcomes. Our framework covers the special case of exogenous receipt of treatment, either conditional on controls or unconditionally as in randomized control trials. In the latter case, our approach produces efficient estimators and honest bands for (functional) average treatment effects (ATE) and quantile treatment effects (QTE). To make informative inference possible, we assume that key reduced‐form predictive relationships are approximately sparse. This assumption allows the use of regularization and selection methods to estimate those relations, and we provide methods for post‐regularization and post‐selection inference that are uniformly valid (honest) across a wide range of models. We show that a key ingredient enabling honest inference is the use of orthogonal or doubly robust moment conditions in estimating certain reduced‐form functional parameters. We illustrate the use of the proposed methods with an application to estimating the effect of 401(k) eligibility and participation on accumulated assets. The results on program evaluation are obtained as a consequence of more general results on honest inference in a general moment‐condition framework, which arises from structural equation models in econometrics. Here, too, the crucial ingredient is the use of orthogonal moment conditions, which can be constructed from the initial moment conditions. 
We provide results on honest inference for (function‐valued) parameters within this general framework where any high‐quality, machine learning methods (e.g., boosted trees, deep neural networks, random forest, and their aggregated and hybrid versions) can be used to learn the nonparametric/high‐dimensional components of the model. These include a number of supporting auxiliary results that are of major independent interest: namely, we (1) prove uniform validity of a multiplier bootstrap, (2) offer a uniformly valid functional delta method, and (3) provide results for sparsity‐based estimation of regression functions for function‐valued outcomes.
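The role of orthogonal moments can be illustrated with the simplest member of this family: cross-fitted "partialling-out" of a treatment effect, here with OLS standing in for the regularized or machine-learning nuisance learners the text allows. All names and the simulated design are illustrative:

```python
import numpy as np

def orthogonal_effect(y, d, X, folds=2, seed=0):
    """Cross-fitted partialling-out estimate of theta in the orthogonal
    moment E[(y - l(X) - theta * (d - m(X))) * (d - m(X))] = 0.
    Nuisance functions l (outcome) and m (treatment) are fit by OLS on the
    training fold and residualized on the held-out fold (cross-fitting)."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    y_res, d_res = np.empty(n), np.empty(n)
    Xc = np.column_stack([np.ones(n), X])
    for k in range(folds):
        test = idx[k::folds]
        train = np.setdiff1d(idx, test)
        by = np.linalg.lstsq(Xc[train], y[train], rcond=None)[0]
        bd = np.linalg.lstsq(Xc[train], d[train], rcond=None)[0]
        y_res[test] = y[test] - Xc[test] @ by
        d_res[test] = d[test] - Xc[test] @ bd
    return np.sum(d_res * y_res) / np.sum(d_res ** 2)

rng = np.random.default_rng(4)
n, p = 2000, 5
X = rng.normal(size=(n, p))
d = 0.5 * X @ np.ones(p) + rng.normal(size=n)       # confounded treatment
y = 1.0 * d + X @ np.ones(p) + rng.normal(size=n)   # true effect = 1.0
theta = orthogonal_effect(y, d, X)
```

Because the moment is orthogonal to the nuisance functions, small estimation errors in `l` and `m` have only a second-order effect on `theta`, which is what makes post-regularization inference uniformly valid.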

17.
This paper develops a dynamic model of neighborhood choice along with a computationally light multi‐step estimator. The proposed empirical framework captures observed and unobserved preference heterogeneity across households and locations in a flexible way. We estimate the model using a newly assembled data set that matches demographic information from mortgage applications to the universe of housing transactions in the San Francisco Bay Area from 1994 to 2004. The results provide the first estimates of the marginal willingness to pay for several non‐marketed amenities—neighborhood air pollution, violent crime, and racial composition—in a dynamic framework. Comparing these estimates with those from a static version of the model highlights several important biases that arise when dynamic considerations are ignored.

18.
The econometric literature of high frequency data often relies on moment estimators which are derived from assuming local constancy of volatility and related quantities. We here study this local‐constancy approximation as a general approach to estimation in such data. We show that the technique yields asymptotic properties (consistency, normality) that are correct subject to an ex post adjustment involving asymptotic likelihood ratios. These adjustments are derived and documented. Several examples of estimation are provided: powers of volatility, leverage effect, and integrated betas. The first order approximations based on local constancy can be over the period of one observation or over blocks of successive observations. It has the advantage of gaining in transparency in defining and analyzing estimators. The theory relies heavily on the interplay between stable convergence and measure change, and on asymptotic expansions for martingales.

19.
This paper proposes an asymptotically efficient method for estimating models with conditional moment restrictions. Our estimator generalizes the maximum empirical likelihood estimator (MELE) of Qin and Lawless (1994). Using a kernel smoothing method, we efficiently incorporate the information implied by the conditional moment restrictions into our empirical likelihood‐based procedure. This yields a one‐step estimator which avoids estimating optimal instruments. Our likelihood ratio‐type statistic for parametric restrictions does not require the estimation of variance, and achieves asymptotic pivotalness implicitly. The estimation and testing procedures we propose are normalization invariant. Simulation results suggest that our new estimator works remarkably well in finite samples.

20.
When estimating the spot volatility of asset prices with jumps, a threshold filter is needed to remove the influence of the jumps. In finite samples, threshold filtering produces both over-filtering bias (continuous returns falsely removed) and under-filtering bias (jumps missed), reducing estimation accuracy. The over-filtering bias can be corrected by imputing the falsely filtered observations, but because the timing of missed jumps is unknown, the under-filtering bias cannot be corrected directly and can only be reduced through the design of the estimator. This paper is the first to propose a spot volatility estimator based on threshold bipower variation, using kernel smoothing to estimate spot volatility nonparametrically, which effectively reduces the bias caused by falsely filtered jumps. Using limit theory for random arrays, we prove the consistency and asymptotic normality of the estimator and, based on an analysis of its finite-sample bias, provide a bias-correction method. Monte Carlo simulations show that the under-filtering bias of the proposed estimator is markedly smaller than that of estimators constructed from quadratic variation, a substantive improvement in the properties of spot volatility estimation. An empirical analysis of the CSI 300 index using the Kupiec dynamic VaR accuracy test shows that the proposed spot volatility estimates better capture the volatility characteristics of asset returns.
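The core construction, kernel-smoothed threshold bipower variation, can be sketched as follows. The kernel, bandwidth, threshold, and normalization choices here are illustrative, and the paper's finite-sample bias correction is omitted:

```python
import numpy as np

def spot_var_tbpv(r, t, s, h, u):
    """Spot variance at time s from high-frequency returns r observed at
    times t: kernel-weighted bipower products |r_i||r_{i+1}|, with products
    involving a return exceeding the threshold u discarded as
    jump-contaminated. The pi/2 factor is the usual bipower correction,
    since E|Z| = sqrt(2/pi) for standard normal Z."""
    r, t = np.asarray(r, float), np.asarray(t, float)
    dt = np.diff(t).mean()
    keep = (np.abs(r[:-1]) <= u) & (np.abs(r[1:]) <= u)    # threshold filter
    w = np.exp(-0.5 * ((t[:-1] - s) / h) ** 2) * keep      # Gaussian kernel weights
    bp = np.abs(r[:-1]) * np.abs(r[1:])
    return (np.pi / 2) * np.sum(w * bp) / (np.sum(w) * dt)

# Toy check: constant volatility sigma = 0.3 plus two large jumps;
# the threshold removes the jumps, so the estimate stays near sigma^2 = 0.09.
rng = np.random.default_rng(5)
n, sigma = 10_000, 0.3
t = np.linspace(0.0, 1.0, n + 1)[1:]
r = sigma * np.sqrt(1.0 / n) * rng.normal(size=n)
r[3000] += 0.05
r[7000] -= 0.04
est = spot_var_tbpv(r, t, s=0.5, h=0.05, u=5 * sigma * np.sqrt(1.0 / n))
```

Because each bipower product multiplies two adjacent returns, a single missed jump enters only through its absolute value times a small neighboring return, which is why bipower-based designs shrink the under-filtering bias relative to quadratic-variation-based ones.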
