Similar Literature
20 similar documents retrieved.
1.
This paper studies nonparametric estimation of conditional moment restrictions in which the generalized residual functions can be nonsmooth in the unknown functions of endogenous variables. This is a nonparametric nonlinear instrumental variables (IV) problem. We propose a class of penalized sieve minimum distance (PSMD) estimators, which are minimizers of a penalized empirical minimum distance criterion over a collection of sieve spaces that are dense in the infinite-dimensional function parameter space. Some of the PSMD procedures use slowly growing finite-dimensional sieves with flexible penalties or without any penalty; others use large dimensional sieves with lower semicompact and/or convex penalties. We establish their consistency and the convergence rates in Banach space norms (such as a sup-norm or a root mean squared norm), allowing for possibly noncompact infinite-dimensional parameter spaces. For both mildly and severely ill-posed nonlinear inverse problems, our convergence rates in Hilbert space norms (such as a root mean squared norm) achieve the known minimax optimal rate for the nonparametric mean IV regression. We illustrate the theory with a nonparametric additive quantile IV regression. We present a simulation study and an empirical application of estimating nonparametric quantile IV Engel curves.

2.
This paper considers model averaging as a way to construct optimal instruments for the two-stage least squares (2SLS), limited information maximum likelihood (LIML), and Fuller estimators in the presence of many instruments. We propose averaging across least squares predictions of the endogenous variables obtained from many different choices of instruments and then using the average predicted value of the endogenous variables in the estimation stage. The weights for averaging are chosen to minimize the asymptotic mean squared error of the model averaging version of the 2SLS, LIML, or Fuller estimator. This can be done by solving a standard quadratic programming problem.
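A minimal numpy/scipy sketch of the mechanics described above, under simplified assumptions: the candidate first stages are nested instrument sets, and the simplex weights are chosen here by an in-sample first-stage MSE criterion as a stand-in for the paper's asymptotic-MSE quadratic program. All variable names and the data-generating process are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, K = 500, 8
Z = rng.normal(size=(n, K))                                  # candidate instruments
u = rng.normal(size=n)                                       # confounder
x = Z[:, :3] @ np.array([1.0, 0.5, 0.25]) + u                # endogenous regressor
y = 1.0 * x + u + rng.normal(size=n)                         # structural equation, true beta = 1

# First-stage fits from nested instrument sets Z[:, :m], m = 1..K.
fits = np.column_stack([
    Z[:, :m] @ np.linalg.lstsq(Z[:, :m], x, rcond=None)[0] for m in range(1, K + 1)
])

# Placeholder weight choice over the simplex: in-sample first-stage MSE
# (the paper instead minimizes the asymptotic MSE of the resulting estimator).
objective = lambda w: np.mean((x - fits @ w) ** 2)
cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
w = minimize(objective, np.full(K, 1.0 / K), bounds=[(0, 1)] * K, constraints=cons).x

xhat = fits @ w                                              # averaged first-stage prediction
beta_ma = (xhat @ y) / (xhat @ x)                            # IV step using the averaged prediction
print(round(beta_ma, 3))
```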

3.
We consider nonparametric estimation of a regression function that is identified by requiring a specified quantile of the regression "error" conditional on an instrumental variable to be zero. The resulting estimating equation is a nonlinear integral equation of the first kind, which generates an ill-posed inverse problem. The integral operator and distribution of the instrumental variable are unknown and must be estimated nonparametrically. We show that the estimator is mean-square consistent, derive its rate of convergence in probability, and give conditions under which this rate is optimal in a minimax sense. The results of Monte Carlo experiments show that the estimator behaves well in finite samples.

4.
In econometrics there are many occasions where knowledge of the structural relationship among dependent variables is required to answer questions of interest. This paper gives identification and estimation results for nonparametric conditional moment restrictions. We characterize identification of structural functions as completeness of certain conditional distributions, and give sufficient identification conditions for exponential families and discrete variables. We also give a consistent, nonparametric estimator of the structural function. The estimator is nonparametric two-stage least squares based on series approximation, which overcomes an ill-posed inverse problem by placing bounds on integrals of higher-order derivatives.
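The series-based nonparametric two-stage least squares idea lends itself to a short sketch: approximate the unknown function with a polynomial sieve, project each sieve term onto an instrument basis, and run least squares on the projected terms. The basis choice, sieve dimensions, and data-generating process below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
z = rng.uniform(-1, 1, n)                                    # instrument
u = rng.normal(size=n)                                       # confounder
x = 0.8 * z + 0.6 * u + 0.2 * rng.normal(size=n)             # endogenous regressor
y = np.sin(np.pi * x) + u                                    # true structural function g0(x) = sin(pi*x)

def poly_basis(v, k):
    """Polynomial sieve terms 1, v, ..., v^k (one simple basis choice)."""
    return np.column_stack([v ** j for j in range(k + 1)])

P = poly_basis(x, 4)                                         # approximating basis for g(x)
Q = poly_basis(z, 6)                                         # instrument basis (at least as rich)

P_hat = Q @ np.linalg.lstsq(Q, P, rcond=None)[0]             # first stage: project basis terms on instruments
coef = np.linalg.lstsq(P_hat, y, rcond=None)[0]              # second stage: regress y on projected terms

grid = np.linspace(-1, 1, 5)
print(np.round(poly_basis(grid, 4) @ coef, 2))               # estimated g0 on a small grid
```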

5.
We present a methodology for estimating the distributional effects of an endogenous treatment that varies at the group level when there are group-level unobservables, a quantile extension of Hausman and Taylor (1981). Because of the presence of group-level unobservables, standard quantile regression techniques are inconsistent in our setting even if the treatment is independent of unobservables. In contrast, our estimation technique is consistent as well as computationally simple, consisting of group-by-group quantile regression followed by two-stage least squares. Using the Bahadur representation of quantile estimators, we derive weak conditions on the growth of the number of observations per group that are sufficient for consistency and asymptotic zero-mean normality of our estimator. As in Hausman and Taylor (1981), micro-level covariates can be used as internal instruments for the endogenous group-level treatment if they satisfy relevance and exogeneity conditions. Our approach applies to a broad range of settings including labor, public finance, industrial organization, urban economics, and development; we illustrate its usefulness with several such examples. Finally, an empirical application of our estimator finds that low-wage earners in the United States from 1990 to 2007 were significantly more affected by increased Chinese import competition than high-wage earners.
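A compact sketch of the two-step procedure on an illustrative simulated design: within each group, a quantile regression on the micro covariate yields a group-level effect (the intercept at quantile tau), and these effects are then regressed on the group-level treatment by 2SLS, using the group mean of the micro covariate as an internal instrument. The design, the names, and the instrument choice are assumptions for illustration only.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(2)
G, n_g, tau = 50, 200, 0.5
alpha, treat, zbar = [], [], []
for g in range(G):
    z = rng.normal(size=n_g)                                 # micro covariate (internal instrument)
    c = rng.normal()                                         # group-level unobservable
    d = z.mean() + c + rng.normal()                          # endogenous group-level treatment
    y = 1.0 + 0.5 * d + 0.3 * z + c + rng.normal(size=n_g)
    # Step 1: within-group quantile regression; the intercept at quantile tau
    # absorbs the group-level treatment and unobservable.
    res = QuantReg(y, sm.add_constant(z)).fit(q=tau)
    alpha.append(res.params[0]); treat.append(d); zbar.append(z.mean())
alpha, treat, zbar = map(np.asarray, (alpha, treat, zbar))

# Step 2: 2SLS of the group effects on the treatment, instrumenting with the
# group mean of the micro covariate (relevance and exogeneity assumed).
W = np.column_stack([np.ones(G), treat])                     # regressors: constant + treatment
Zg = np.column_stack([np.ones(G), zbar])                     # instruments: constant + covariate mean
W_hat = Zg @ np.linalg.lstsq(Zg, W, rcond=None)[0]
print(np.linalg.lstsq(W_hat, alpha, rcond=None)[0])          # [intercept, treatment effect at tau]
```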

6.
Counterfactual distributions are important ingredients for policy analysis and decomposition analysis in empirical economics. In this article, we develop modeling and inference tools for counterfactual distributions based on regression methods. The counterfactual scenarios that we consider consist of ceteris paribus changes in either the distribution of covariates related to the outcome of interest or the conditional distribution of the outcome given covariates. For either of these scenarios, we derive joint functional central limit theorems and bootstrap validity results for regression-based estimators of the status quo and counterfactual outcome distributions. These results allow us to construct simultaneous confidence sets for function-valued effects of the counterfactual changes, including the effects on the entire distribution and quantile functions of the outcome as well as on related functionals. These confidence sets can be used to test functional hypotheses such as no-effect, positive effect, or stochastic dominance. Our theory applies to general counterfactual changes and covers the main regression methods including classical, quantile, duration, and distribution regressions. We illustrate the results with an empirical application to wage decompositions using data for the United States. As a part of developing the main results, we introduce distribution regression as a comprehensive and flexible tool for modeling and estimating the entire conditional distribution. We show that distribution regression encompasses the Cox duration regression and represents a useful alternative to quantile regression. We establish functional central limit theorems and bootstrap validity results for the empirical distribution regression process and various related functionals.
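As a rough illustration of distribution regression and a covariate-shift counterfactual: fit a logit of 1{Y <= t} on covariates at each threshold t, then average the predicted probabilities over the observed and the counterfactual covariate distributions. The threshold grid, the logit link, and the simulated data are illustrative; the paper's functional limit theory and bootstrap inference are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 2))                                  # observed covariates
y = 1.0 + X @ np.array([0.8, -0.5]) + rng.normal(size=n)     # outcome
X_cf = X + np.array([0.5, 0.0])                              # counterfactual covariate distribution

thresholds = np.quantile(y, np.linspace(0.05, 0.95, 19))
F_obs, F_cf = [], []
for t in thresholds:
    d = (y <= t).astype(int)
    model = LogisticRegression(C=1e6).fit(X, d)              # logit distribution regression at threshold t
    F_obs.append(model.predict_proba(X)[:, 1].mean())        # status-quo distribution F_Y(t)
    F_cf.append(model.predict_proba(X_cf)[:, 1].mean())      # counterfactual distribution

print(np.column_stack([thresholds, F_obs, F_cf]).round(3))
```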

7.
Conditional skewness is a typical feature of financial markets, and portfolio decisions that ignore it often fail to diversify financial risk effectively. This paper therefore builds a portfolio selection model that incorporates conditional skewness and describes how it is constructed. First, a MIDAS-QR model is used to improve the measurement of conditional skewness. Second, based on a CRRA utility function, the portfolio weights are specified as a linear combination of conditional skewness and characteristic variables, yielding the portfolio model and its solution procedure. Finally, an empirical study of ten representative constituent stocks of the CSI 300 index compares the skewness-based model with the equal-weight scheme and the mean-variance model in terms of return, risk, and the Sharpe ratio, in order to assess the role of conditional skewness in portfolio choice. The empirical results show that MIDAS-QR is an effective way to measure conditional skewness, being stable and little affected by outliers, and that conditional skewness has a significant impact on portfolio decisions: the model that incorporates it effectively reduces investment risk and delivers higher risk-adjusted returns.
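One hedged way to make the weighting rule concrete is a parametric-portfolio sketch: weights equal an equal-weight benchmark plus a tilt that is linear in a (cross-sectionally demeaned) conditional-skewness characteristic, with the tilt chosen to maximize average CRRA utility. The MIDAS-QR estimation of conditional skewness is not implemented here; the characteristic, the returns, and all parameters are simulated placeholders.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
T, N, gamma = 250, 10, 5.0                                   # periods, assets, CRRA coefficient
raw = rng.normal(size=(T, N))                                # stand-in for MIDAS-QR conditional skewness
char = raw - raw.mean(axis=1, keepdims=True)                 # cross-sectionally demeaned characteristic
ret = 0.01 + 0.02 * raw + 0.05 * rng.normal(size=(T, N))     # next-period asset returns (simulated)

def neg_avg_crra_utility(theta):
    w = 1.0 / N + theta * char                               # weights: equal weight plus characteristic tilt
    gross = 1.0 + (w * ret).sum(axis=1)                      # gross portfolio returns
    gross = np.clip(gross, 1e-6, None)                       # keep CRRA utility well defined
    return -np.mean(gross ** (1.0 - gamma) / (1.0 - gamma))

theta_hat = minimize(neg_avg_crra_utility, x0=np.array([0.0])).x
print(theta_hat)                                             # estimated tilt toward high conditional skewness
```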

8.
We propose a method to correct for sample selection in quantile regression models. Selection is modeled via the cumulative distribution function, or copula, of the percentile error in the outcome equation and the error in the participation decision. Copula parameters are estimated by minimizing a method-of-moments criterion. Given these parameter estimates, the percentile levels of the outcome are readjusted to correct for selection, and quantile parameters are estimated by minimizing a rotated "check" function. We apply the method to correct wage percentiles for selection into employment, using data for the UK for the period 1978–2000. We also extend the method to account for the presence of equilibrium effects when performing counterfactual exercises.

9.
This paper considers the problem of selection of weights for averaging across least squares estimates obtained from a set of models. Existing model average methods are based on exponential Akaike information criterion (AIC) and Bayesian information criterion (BIC) weights. In contrast, this paper proposes selecting the weights by minimizing a Mallows criterion, which is an estimate of the average squared error from the model average fit. We show that our new Mallows model average (MMA) estimator is asymptotically optimal in the sense of achieving the lowest possible squared error in a class of discrete model average estimators. In a simulation experiment we show that the MMA estimator compares favorably with those based on AIC and BIC weights. The proof of the main result is an application of the work of Li (1987).
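The Mallows criterion is simple enough to sketch directly: with the candidate fits stacked in a matrix, minimize the penalized sum of squared residuals over simplex weights, where the penalty is twice an error-variance estimate times the weighted model dimensions. The nested-model setup and simulated data below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, K = 200, 6
X = rng.normal(size=(n, K))
y = X @ (0.8 ** np.arange(1, K + 1)) + rng.normal(size=n)    # coefficients decline geometrically

# Candidate models: nested least-squares fits using the first m regressors.
preds = np.column_stack([
    X[:, :m] @ np.linalg.lstsq(X[:, :m], y, rcond=None)[0] for m in range(1, K + 1)
])
dims = np.arange(1, K + 1)

resid_full = y - preds[:, -1]
sigma2 = resid_full @ resid_full / (n - K)                   # error-variance estimate from the largest model

def mallows(w):                                              # C_n(w) = ||y - P w||^2 + 2 * sigma2 * (k'w)
    e = y - preds @ w
    return e @ e + 2.0 * sigma2 * (dims @ w)

cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
w_hat = minimize(mallows, np.full(K, 1.0 / K), bounds=[(0, 1)] * K, constraints=cons).x
y_mma = preds @ w_hat                                        # Mallows model-average fit
print(np.round(w_hat, 3))
```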

10.
Properties of instrumental variable estimators are sensitive to the choice of valid instruments, even in large cross-section applications. In this paper we address this problem by deriving simple mean-square error criteria that can be minimized to choose the instrument set. We develop these criteria for two-stage least squares (2SLS), limited information maximum likelihood (LIML), and a bias adjusted version of 2SLS (B2SLS). We give a theoretical derivation of the mean-square error and show optimality. In Monte Carlo experiments we find that the instrument choice generally yields an improvement in performance. Also, in the Angrist and Krueger (1991) returns to education application, when the instrument set is chosen in the way we consider, it turns out that both 2SLS and LIML give similar (large) returns to education.

11.
Measuring and identifying financial risk is a central task of risk management. The commonly used risk measures, standard deviation, VaR, and ES, have well-known shortcomings, and the expectile was proposed to remedy them, attracting wide discussion and application in the theoretical literature. This paper extends the expectile to asset allocation, introduces the concept of the Adjexpectile, and analyzes its properties as a coherent risk measure, its stochastic dominance and convexity, its relationship to the standard deviation, VaR, and shortfall, and its risk-contribution and risk-decomposition properties. Using compounded weekly return data on six asset indices (the SSE Treasury Bond Index, the SSE Corporate Bond Index, the SSE 180 Index, the Shenzhen 100 Index, the 深成长40p Index, and a spot gold index) for portfolio optimization, we find that the Adjexpectile has certain advantages in handling asymmetric return data, in the portfolio frontier, and in risk diversification.
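For readers unfamiliar with expectiles, a small self-contained sketch: the tau-expectile solves an asymmetric least-squares problem and can be computed by an iteratively reweighted mean. The Adjexpectile adjustment proposed in the paper is not reproduced; the return series below is simulated.

```python
import numpy as np

def expectile(x, tau=0.05, tol=1e-10, max_iter=200):
    """tau-expectile: argmin_e E[|tau - 1{x <= e}| * (x - e)^2], via reweighted means."""
    e = x.mean()
    for _ in range(max_iter):
        w = np.where(x > e, tau, 1.0 - tau)                  # asymmetric weights
        e_new = np.sum(w * x) / np.sum(w)                    # first-order condition as a fixed point
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e

rng = np.random.default_rng(6)
returns = 0.01 * rng.standard_t(df=5, size=5000)             # heavy-tailed weekly returns (simulated)
print(expectile(returns, tau=0.05))                          # low-tau expectile as a downside risk measure
```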

12.
Existing models for evaluating the performance of securities investment funds share a bias toward negative estimates of market-timing ability, which is economically implausible. Building on the Treynor-Mazuy quadratic regression model, this paper constructs a conditional quadratic model with a skewness adjustment. An empirical analysis of the market-timing ability of Chinese securities investment funds shows that the model effectively eliminates the negative bias in timing estimates. Because the excess returns on the net asset values of individual funds are cross-correlated, running separate time-series least-squares (OLS or GLS) regressions fund by fund yields inefficient results, so the paper adopts a seemingly unrelated regression (SURE) approach; the SURE results prove more reliable.
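The baseline Treynor-Mazuy timing regression underlying the paper is easy to sketch; the skewness-adjusted conditional specification and the SURE system estimation are not reproduced here, and the simulated fund and market returns are placeholders.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T = 300
mkt = 0.005 + 0.04 * rng.normal(size=T)                      # market excess returns (simulated)
fund = 0.001 + 1.1 * mkt + 0.5 * mkt ** 2 + 0.01 * rng.normal(size=T)   # fund excess returns

X = sm.add_constant(np.column_stack([mkt, mkt ** 2]))        # Treynor-Mazuy regressors
res = sm.OLS(fund, X).fit()
alpha, beta, gamma = res.params                              # a positive gamma indicates timing ability
print(np.round([alpha, beta, gamma], 3))
```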

13.
Studies of the relation between order imbalance and stock returns face two difficulties: the effect of the former on the latter is heterogeneous across market conditions, and the analysis typically involves very large data sets. We therefore apply quantile regression for large-scale data, which both reveals the heterogeneous effect of order imbalance on returns at different quantiles, giving a detailed picture of their relationship, and accommodates the modeling demands of large data, yielding more reliable results. Applying the method to Shanghai and Shenzhen A shares delivers more useful information than mean regression. The empirical results show, first, that lag-one order imbalance has a positive and increasing effect on returns at high quantiles but a negative effect at low quantiles; and second, that after controlling for contemporaneous order imbalance, lagged order imbalance has a negative effect on returns that declines as the quantile rises. These findings suggest that order imbalance has some explanatory and predictive power for stock returns.
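A toy version of the core regression: quantile regressions of next-period returns on lag-one order imbalance at several quantile levels, with ordinary quantile regression standing in for the paper's large-scale algorithm and a simulated series in place of A-share data.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(8)
T = 5000
oib = rng.normal(size=T)                                     # order imbalance (simulated series)
u = rng.uniform(size=T - 1)                                  # shock that makes the response heterogeneous
ret = (0.02 + 0.08 * u) * oib[:-1] + 0.01 * (u - 0.5)        # next-period returns

X = sm.add_constant(oib[:-1])                                # lag-1 order imbalance
for q in (0.1, 0.5, 0.9):
    res = QuantReg(ret, X).fit(q=q)
    print(q, round(res.params[1], 3))                        # slope differs across quantile levels
```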

14.
This paper considers the appropriate stabilization objectives for monetary policy in a micro-founded model with staggered price-setting. Rotemberg and Woodford (1997) and Woodford (2002) have shown that under certain conditions, a local approximation to the expected utility of the representative household in a model of this kind is related inversely to the expected discounted value of a conventional quadratic loss function, in which each period's loss is a weighted average of squared deviations of inflation and an output gap measure from their optimal values (zero). However, those derivations rely on an assumption of the existence of an output or employment subsidy that offsets the distortion due to the market power of monopolistically competitive price-setters, so that the steady state under a zero-inflation policy involves an efficient level of output. Here we show how to dispense with this unappealing assumption, so that a valid linear-quadratic approximation to the optimal policy problem is possible even when the steady state is distorted to an arbitrary extent (allowing for tax distortions as well as market power), and when, as a consequence, it is necessary to take account of the effects of stabilization policy on the average level of output. We again obtain a welfare-theoretic loss function that involves both inflation and an appropriately defined output gap, though the degree of distortion of the steady state affects both the weights on the two stabilization objectives and the definition of the welfare-relevant output gap. In the light of these results, we reconsider the conditions under which complete price stability is optimal, and find that they are more restrictive in the case of a distorted steady state. We also consider the conditions under which pure randomization of monetary policy can be welfare-improving, and find that this is possible in the case of a sufficiently distorted steady state, though the parameter values required are probably not empirically realistic. (JEL: D61, E52, E61)
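For reference, the conventional quadratic loss approximation discussed above is usually written in the following generic form; with a distorted steady state, both the relative weight and the definition of the welfare-relevant gap (and its target) depend on the degree of distortion. This is a statement of the standard form, not the paper's exact expression.

```latex
% Conventional quadratic loss approximation: discounted weighted squared deviations
% of inflation and the welfare-relevant output gap from their optimal values.
\[
  \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t} L_t ,
  \qquad
  L_t = \pi_t^{2} + \lambda \,(x_t - x^{*})^{2}.
\]
% With a distorted steady state, both the weight \lambda and the definition of
% the gap x_t (and of the target x^*) depend on the degree of distortion.
```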

15.
The ability to accurately measure the recovery rate of infrastructure systems and communities impacted by disasters is vital to ensure effective response and resource allocation before, during, and after a disruption. However, a challenge in quantifying such measures resides in the lack of data, as community recovery information is seldom recorded. To provide accurate community recovery measures, a hierarchical Bayesian kernel model (HBKM) is developed to predict the recovery rate of communities experiencing power outages during storms. The performance of the proposed method is evaluated using cross-validation and compared with two models, the hierarchical Bayesian regression model and the Poisson generalized linear model. A case study focusing on the recovery of communities in Shelby County, Tennessee after severe storms between 2007 and 2017 is presented to illustrate the proposed approach. The predictive accuracy of the models is evaluated using the log-likelihood and root mean squared error. The HBKM yields on average the highest out-of-sample predictive accuracy. This approach can help assess the recoverability of a community when data are scarce and inform decision making in the aftermath of a disaster. An illustrative example is presented demonstrating how accurate measures of community resilience can help reduce the cost of infrastructure restoration.

16.
We consider the situation when there is a large number of series, N, each with T observations, and each series has some predictive ability for some variable of interest. A methodology of growing interest is first to estimate common factors from the panel of data by the method of principal components and then to augment an otherwise standard regression with the estimated factors. In this paper, we show that the least squares estimates obtained from these factor-augmented regressions are consistent and asymptotically normal if √T/N → 0. The conditional mean predicted by the estimated factors is consistent and asymptotically normal. Except when T/N goes to zero, inference should take into account the effect of "estimated regressors" on the estimated conditional mean. We present analytical formulas for prediction intervals that are valid regardless of the magnitude of N/T and that can also be used when the factors are nonstationary.
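A short sketch of the two-step factor-augmented regression: extract r principal-components factors from the standardized panel, then run least squares of the target variable on the estimated factors. The dimensions, the number of factors, and the simulated panel are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
T, N, r = 200, 120, 2                                        # T observations, N series, r factors
F = rng.normal(size=(T, r))                                  # latent factors
Lam = rng.normal(size=(N, r))                                # loadings
panel = F @ Lam.T + rng.normal(size=(T, N))                  # large panel of predictors
y = F @ np.array([1.0, -0.5]) + 0.5 * rng.normal(size=T)     # variable of interest

# Step 1: principal-components factors from the standardized panel.
Z = (panel - panel.mean(0)) / panel.std(0)
eigval, eigvec = np.linalg.eigh(Z @ Z.T / (T * N))
F_hat = np.sqrt(T) * eigvec[:, -r:]                          # eigenvectors of the largest eigenvalues

# Step 2: augment an otherwise standard regression with the estimated factors.
X = np.column_stack([np.ones(T), F_hat])
print(np.round(np.linalg.lstsq(X, y, rcond=None)[0], 2))
```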

17.
This paper provides weak conditions under which there is nonparametric interval identification of local features of a structural function that depends on a discrete endogenous variable and is nonseparable in latent variates. The function delivers values of a discrete or continuous outcome and instruments may be discrete valued. Application of the analog principle leads to quantile regression based interval estimators of values and partial differences of structural functions. The results are used to investigate the nonparametric identifying power of the quarter-of-birth instruments used in Angrist and Krueger's 1991 study of the returns to schooling.

18.
This paper considers tests of the parameter on an endogenous variable in an instrumental variables regression model. The focus is on determining tests that have some optimal power properties. We start by considering a model with normally distributed errors and known error covariance matrix. We consider tests that are similar and satisfy a natural rotational invariance condition. We determine a two-sided power envelope for invariant similar tests. This allows us to assess and compare the power properties of tests such as the conditional likelihood ratio (CLR), the Lagrange multiplier, and the Anderson–Rubin tests. We find that the CLR test is quite close to being uniformly most powerful invariant among a class of two-sided tests. The finite-sample results of the paper are extended to the case of unknown error covariance matrix and possibly nonnormal errors via weak instrument asymptotics. Strong instrument asymptotic results also are provided because we seek tests that perform well under both weak and strong instruments.
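For a single endogenous regressor, the conditional likelihood ratio statistic discussed above is commonly written in the following form, with Q_S = S'S, Q_T = T'T, and Q_ST = S'T built from standardized statistics S and T; this is stated from the weak-instrument literature generally rather than quoted from the paper, and critical values are taken from the distribution of LR conditional on Q_T.

```latex
\[
  \mathrm{LR} \;=\; \tfrac{1}{2}\Bigl[\,\hat{Q}_S - \hat{Q}_T
    + \sqrt{\bigl(\hat{Q}_S + \hat{Q}_T\bigr)^{2}
            - 4\bigl(\hat{Q}_S \hat{Q}_T - \hat{Q}_{ST}^{2}\bigr)}\,\Bigr]
\]
```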

19.
This paper analyzes the properties of standard estimators, tests, and confidence sets (CS's) for parameters that are unidentified or weakly identified in some parts of the parameter space. The paper also introduces methods to make the tests and CS's robust to such identification problems. The results apply to a class of extremum estimators and corresponding tests and CS's that are based on criterion functions that satisfy certain asymptotic stochastic quadratic expansions and that depend on the parameter that determines the strength of identification. This covers a class of models estimated using maximum likelihood (ML), least squares (LS), quantile, generalized method of moments, generalized empirical likelihood, minimum distance, and semi-parametric estimators. The consistency/lack-of-consistency and asymptotic distributions of the estimators are established under a full range of drifting sequences of true distributions. The asymptotic sizes (in a uniform sense) of standard and identification-robust tests and CS's are established. The results are applied to the ARMA(1, 1) time series model estimated by ML and to the nonlinear regression model estimated by LS. In companion papers, the results are applied to a number of other models.

20.
Multi-period VaR is driven mainly by two variables, the holding period and volatility, and determining whether their influence is linear or nonlinear is crucial for accurate VaR measurement. Nonlinear quantile regression models overcome the limitation of linear quantile regression, which can only capture linear dependence between multi-period VaR and its drivers, and thereby improve measurement accuracy. Combining volatility models with two nonlinear quantile regression methods, QRNN and SVQR, we obtain three families of multi-period VaR estimators: the pure volatility-model approach, the QRNN-plus-volatility-model approach, and the SVQR-plus-volatility-model approach. Using three stock price indices and six different volatility specifications, 18 multi-period VaR methods are compared empirically. The results show that the choice of volatility model affects multi-period VaR measurement, that the SVQR-plus-volatility-model approach is slightly better than the QRNN-plus-volatility-model approach, and that both significantly outperform the pure volatility-model approach.
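A sketch of the simplest of the three routes, the pure volatility-model approach: an EWMA variance recursion plus square-root-of-time scaling and a normal quantile give the h-period VaR. The QRNN and SVQR refinements are not reproduced; the decay parameter, horizon, and simulated returns are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
r = 0.01 * rng.standard_t(df=6, size=2000)                   # daily index returns (simulated)

lam, h, alpha = 0.94, 10, 0.05                               # EWMA decay, holding period, tail level
sigma2 = np.empty_like(r)
sigma2[0] = r.var()
for t in range(1, len(r)):                                   # RiskMetrics-style EWMA variance recursion
    sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * r[t - 1] ** 2

var_h = -norm.ppf(alpha) * np.sqrt(h * sigma2[-1])           # square-root-of-time scaling to h periods
print(f"{h}-day {100 * (1 - alpha):.0f}% VaR: {var_h:.4f}")
```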
