Similar Documents
20 similar documents found (search took 109 ms)
1.
This paper considers model averaging as a way to construct optimal instruments for the two‐stage least squares (2SLS), limited information maximum likelihood (LIML), and Fuller estimators in the presence of many instruments. We propose averaging across least squares predictions of the endogenous variables obtained from many different choices of instruments and then using the average predicted value of the endogenous variables in the estimation stage. The weights for averaging are chosen to minimize the asymptotic mean squared error of the model averaging version of the 2SLS, LIML, or Fuller estimator. This can be done by solving a standard quadratic programming problem.
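As a rough illustration of the idea in item 1, the sketch below averages several first-stage predictions and plugs the average into a second stage. The simplex-constrained weights here minimize in-sample first-stage squared error, a simplified stand-in for the asymptotic mean-squared-error criterion described in the abstract; the data and candidate instrument sets are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=(n, 6))            # pool of available instruments
u = rng.normal(size=n)                 # structural error, correlated with x
x = z[:, :3] @ np.array([1.0, 0.5, 0.25]) + 0.8 * u + rng.normal(size=n)
y = 1.0 * x + u                        # true coefficient on x is 1.0

# Candidate first stages: nested instrument sets of growing size
sets = [z[:, :k] for k in (2, 4, 6)]
preds = []
for Z in sets:
    coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
    preds.append(Z @ coef)
P = np.column_stack(preds)             # n x 3 matrix of candidate predictions

# Simplex-constrained weights minimizing in-sample first-stage error
# (a stand-in for the asymptotic-MSE criterion of the paper)
obj = lambda w: np.sum((x - P @ w) ** 2)
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
res = minimize(obj, np.full(3, 1 / 3), bounds=[(0, 1)] * 3, constraints=cons)
xhat = P @ res.x                       # averaged first-stage prediction

# Second stage: IV estimate using the averaged prediction as instrument
beta_ma2sls = (xhat @ y) / (xhat @ x)
print(round(beta_ma2sls, 3))
```

Because the averaged prediction is built from instruments only, it is (asymptotically) uncorrelated with the structural error, so the second-stage estimate is close to the true coefficient despite the endogeneity of x.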

2.
This paper presents research on the problem of selecting a proper surrogate for a forecast error cost criterion in the production smoothing problem. Various forecast models were used to estimate future values of selected demand processes. The resulting error costs were computed, and the degree to which selecting a forecast model on the basis of least error cost coincided with selection by the various error measures was noted. The error measures used were the mean absolute deviation, the average algebraic error (bias), and the mean squared error. The computations necessary to develop the mathematical form of the error cost criterion are presented in an Appendix, along with the penalty costs of using an error measure as a surrogate for an error cost criterion.

3.
This paper introduces realized measures, which carry rich intraday high-frequency information, into the basic stochastic volatility (SV) model, and, accounting for their bias correction as well as volatility asymmetry and long memory, constructs a two-factor asymmetric realized SV (2FARSV) model. A maximum likelihood estimation method for the 2FARSV model parameters is then developed based on a continuous particle filter. Monte Carlo simulation experiments show that the proposed estimation method is effective. Using intraday high-frequency data on the Shanghai Composite Index and the Shenzhen Component Index, realized volatility (RV) and realized range volatility (RRV) are computed and the 2FARSV model is studied empirically. The results show that both RV and RRV are (downward-)biased estimates of true daily volatility, but RRV is a more efficient volatility estimator than RV; the Shanghai and Shenzhen stock markets exhibit strong volatility persistence and significant volatility asymmetry (leverage and size effects); and the 2FARSV model fits the data better than other realized volatility models, fully capturing the dynamic features of volatility in the two markets (time variation, clustering, asymmetry, and long memory).

4.
This paper considers a generalized method of moments (GMM) estimation problem in which one has a vector of moment conditions, some of which are correct and some incorrect. The paper introduces several procedures for consistently selecting the correct moment conditions. The procedures also can consistently determine whether there is a sufficient number of correct moment conditions to identify the unknown parameters of interest. The paper specifies moment selection criteria that are GMM analogues of the widely used BIC and AIC model selection criteria. (The latter is not consistent.) The paper also considers downward and upward testing procedures. All of the moment selection procedures discussed in this paper are based on the minimized values of the GMM criterion function for different vectors of moment conditions. The procedures are applicable in time-series and cross-sectional contexts. Application of the results of the paper to instrumental variables estimation problems yields consistent procedures for selecting instrumental variables.

5.
Weighted composite quantile regression for dynamic VaR risk measurement
Value at Risk (VaR), being simple and intuitive, has become one of the most widely used risk measures internationally, and computing unconditional risk measures from autoregressive (AR) time-series models is common practice in industry. Based on quantile regression theory, this paper proposes an estimation method for the AR model: the weighted composite quantile regression (WCQR) estimator. The method exploits information from multiple quantiles to improve the efficiency of parameter estimation, and assigns different weights to the different quantile regressions to make the estimator more efficient; the asymptotic normality of the estimator is established. Finite-sample simulations show that when the residuals follow a non-normal distribution, the statistical properties of the WCQR estimator are close to those of the maximum likelihood estimator, yet WCQR does not require knowledge of the residual distribution, which makes it more competitive. The method performs well in forecasting the dynamic VaR of asset returns. Applying the proposed theory to nine closed-end funds in China, the empirical analysis finds that the VaR obtained with WCQR is very close to that obtained by nonparametric methods, while WCQR additionally allows computing dynamic VaR and forecasting the VaR of asset returns.
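A minimal sketch of the composite-quantile idea behind item 5, on simulated data. This is not the paper's WCQR: for simplicity the quantile weights are equal rather than efficiency-optimal. A common AR(1) slope is fit jointly across several quantiles by minimizing a weighted sum of check (pinball) losses with per-quantile intercepts.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, b_true = 1500, 0.6
e = rng.standard_t(df=3, size=T)       # heavy-tailed (non-normal) residuals
y = np.zeros(T)
for t in range(1, T):
    y[t] = b_true * y[t - 1] + e[t]

taus = np.array([0.2, 0.4, 0.6, 0.8])
w = np.full(len(taus), 1 / len(taus))  # equal weights as a simplification

def check(u, tau):
    # Pinball loss: tau*u for u >= 0, (tau-1)*u for u < 0
    return u * (tau - (u < 0))

def loss(theta):
    b, q = theta[0], theta[1:]          # common slope, per-quantile intercepts
    u = y[1:, None] - b * y[:-1, None] - q[None, :]
    return np.sum(w * np.mean(check(u, taus), axis=0))

res = minimize(loss, np.zeros(1 + len(taus)), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
b_wcqr = res.x[0]
print(round(b_wcqr, 3))
```

Pooling several quantiles stabilizes the slope estimate under heavy-tailed errors, which is the efficiency gain the abstract refers to; the fitted quantile lines can then feed a conditional VaR forecast.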

6.
In the regression‐discontinuity (RD) design, units are assigned to treatment based on whether their value of an observed covariate exceeds a known cutoff. In this design, local polynomial estimators are now routinely employed to construct confidence intervals for treatment effects. The performance of these confidence intervals in applications, however, may be seriously hampered by their sensitivity to the specific bandwidth employed. Available bandwidth selectors typically yield a “large” bandwidth, leading to data‐driven confidence intervals that may be biased, with empirical coverage well below their nominal target. We propose new theory‐based, more robust confidence interval estimators for average treatment effects at the cutoff in sharp RD, sharp kink RD, fuzzy RD, and fuzzy kink RD designs. Our proposed confidence intervals are constructed using a bias‐corrected RD estimator together with a novel standard error estimator. For practical implementation, we discuss mean squared error optimal bandwidths, which are by construction not valid for conventional confidence intervals but are valid with our robust approach, and consistent standard error estimators based on our new variance formulas. In a special case of practical interest, our procedure amounts to running a quadratic instead of a linear local regression. More generally, our results give a formal justification to simple inference procedures based on increasing the order of the local polynomial estimator employed. We find in a simulation study that our confidence intervals exhibit close‐to‐correct empirical coverage and good empirical interval length on average, remarkably improving upon the alternatives available in the literature. All results are readily available in R and STATA using our companion software packages described in Calonico, Cattaneo, and Titiunik (2014d, 2014b).

7.
Quantile regression (QR) fits a linear model for conditional quantiles just as ordinary least squares (OLS) fits a linear model for conditional means. An attractive feature of OLS is that it gives the minimum mean‐squared error linear approximation to the conditional expectation function even when the linear model is misspecified. Empirical research using quantile regression with discrete covariates suggests that QR may have a similar property, but the exact nature of the linear approximation has remained elusive. In this paper, we show that QR minimizes a weighted mean‐squared error loss function for specification error. The weighting function is an average density of the dependent variable near the true conditional quantile. The weighted least squares interpretation of QR is used to derive an omitted variables bias formula and a partial quantile regression concept, similar to the relationship between partial regression and OLS. We also present asymptotic theory for the QR process under misspecification of the conditional quantile function. The approximation properties of QR are illustrated using wage data from the U.S. census. These results point to major changes in inequality from 1990 to 2000.

8.
The best-worst method (BWM) is a multi-criteria decision-making method which finds the optimal weights of a set of criteria based on the preferences of only one decision-maker (DM) (or evaluator). However, it cannot amalgamate the preferences of multiple decision-makers/evaluators in the so-called group decision-making problem. A typical way of aggregating the preferences of multiple DMs is to use an average operator, e.g., the arithmetic or geometric mean. However, averages are sensitive to outliers and provide restricted information regarding the overall preferences of all DMs. In this paper, a Bayesian BWM is introduced to find the aggregated final weights of criteria for a group of DMs at once. To this end, the BWM framework is meaningfully viewed from a probabilistic angle, and a Bayesian hierarchical model is tailored to compute the weights in the presence of a group of DMs. We further introduce a new ranking scheme for decision criteria, called credal ranking, in which a confidence level measures the extent to which a group of DMs prefers one criterion over another. A weighted directed graph visualizes the credal ranking, from which the interrelations of criteria and the associated confidence levels can be readily understood. A numerical example validates the results obtained by the Bayesian BWM while showing that it yields much more information than the original BWM.

9.
This paper considers studentized tests in time series regressions with nonparametrically autocorrelated errors. The studentization is based on robust standard errors with truncation lag M=bT for some constant b∈(0, 1] and sample size T. It is shown that the nonstandard fixed‐b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small‐b limit distribution. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long‐run variance estimator. A plug‐in procedure for implementing this optimal bandwidth is suggested and simulations (not reported here) confirm that the new plug‐in procedure works well in finite samples.
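The robust standard errors in item 9 are built from a kernel long-run variance estimator with truncation lag M = bT. A minimal Bartlett-kernel version is sketched below on simulated AR(1) errors; the value of b is an arbitrary choice for the demo, not the paper's plug-in rule.

```python
import numpy as np

def lrv_bartlett(u, b):
    """Long-run variance estimate with Bartlett kernel, truncation lag M = b*T."""
    T = len(u)
    u = u - u.mean()
    M = max(1, int(b * T))
    s = u @ u / T                       # lag-0 autocovariance
    for j in range(1, M + 1):
        gj = u[j:] @ u[:-j] / T         # lag-j autocovariance
        s += 2 * (1 - j / (M + 1)) * gj # Bartlett weight
    return s

rng = np.random.default_rng(3)
T, rho = 2000, 0.5
e = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + e[t]

# True long-run variance of this AR(1): 1 / (1 - rho)^2 = 4
est = lrv_bartlett(u, 0.02)
print(round(est, 2))
```

Dividing a regression t-statistic by a standard error built from such an estimate gives the nonparametrically studentized test; the paper's point is that with M = bT, b fixed, the usual normal critical values are a poor approximation and fixed-b critical values do better.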

10.
In this paper a new algorithm is proposed to produce an automatic dynamic compound estimator of the labour force based on an iterative scheme. The proposed algorithm, JARES, builds on Jaynes's probability estimator, which rests on the notion of maximum entropy of a probability distribution subject to a constraint on the average of external information. The iterative scheme is based on the solution of a set of linear equations representing the algebraic relationships between the weights and the estimates.
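The building block of item 10, Jaynes's maximum-entropy distribution under a mean constraint, has the closed form p_i ∝ exp(λ x_i), with λ chosen so the constraint holds. The sketch below solves for λ numerically on a toy six-point support; the JARES labour-force scheme itself is not reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)                    # support of the distribution
target_mean = 4.5                      # external information: constrained average

def mean_at(lam):
    # Maximum-entropy solution for a given multiplier: p_i ∝ exp(lam * x_i)
    p = np.exp(lam * x)
    p /= p.sum()
    return p @ x

# Solve for the multiplier that makes the constrained mean hold exactly
lam = brentq(lambda l: mean_at(l) - target_mean, -5.0, 5.0)
p = np.exp(lam * x)
p /= p.sum()
print(np.round(p, 4), round(p @ x, 3))
```

Since the target mean (4.5) exceeds the uniform mean (3.5), the solved λ is positive and the probabilities tilt monotonically toward the larger support points, which is exactly the least-biased distribution consistent with the constraint.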

11.
To address the problem that different Data Envelopment Analysis (DEA) models yield different performance evaluation results, this paper proposes a method that fuses the results of multiple DEA models in a principled way based on the Gini criterion. First, an information purity measure is defined on the Gini criterion to quantify the certainty of each DEA model's results and to assign weights; weighted fusion then yields a single, objective composite efficiency score. In addition, an interactive multi-DEA-model/Gini-criterion method is proposed that incorporates the evaluator's preference information or prior knowledge. Whereas previous studies evaluated the operational performance of universities using a single DEA model from a single perspective, multiple DEA models taken from different perspectives can provide a more comprehensive and objective evaluation. The method is applied to an empirical analysis of the 2011 operational performance of 25 Chinese science and engineering universities; the results confirm that the method measures university operational performance reasonably and effectively and offers practical guidance for research on evaluating university operations.

12.
We consider semiparametric estimation of the memory parameter in a model that includes as special cases both long‐memory stochastic volatility and fractionally integrated exponential GARCH (FIEGARCH) models. Under our general model the logarithms of the squared returns can be decomposed into the sum of a long‐memory signal and a white noise. We consider periodogram‐based estimators using a local Whittle criterion function. We allow the optional inclusion of an additional term to account for possible correlation between the signal and noise processes, as would occur in the FIEGARCH model. We also allow for potential nonstationarity in volatility by allowing the signal process to have a memory parameter d* ≥ 1/2. We show that the local Whittle estimator is consistent for d*∈(0,1). We also show that the local Whittle estimator is asymptotically normal for d*∈(0,3/4) and essentially recovers the optimal semiparametric rate of convergence for this problem. In particular, if the spectral density of the short‐memory component of the signal is sufficiently smooth, a convergence rate of n^{2/5−δ} for d*∈(0,3/4) can be attained, where n is the sample size and δ>0 is arbitrarily small. This represents a strong improvement over the performance of existing semiparametric estimators of persistence in volatility. We also prove that the standard Gaussian semiparametric estimator is asymptotically normal if d*=0. This yields a test for long memory in volatility.

13.
This paper studies the problem of identification and estimation in nonparametric regression models with a misclassified binary regressor where the measurement error may be correlated with the regressors. We show that the regression function is nonparametrically identified in the presence of an additional random variable that is correlated with the unobserved true underlying variable but unrelated to the measurement error. Identification for semiparametric and parametric regression functions follows straightforwardly from the basic identification result. We propose a kernel estimator based on the identification strategy, derive its large sample properties, and discuss alternative estimation procedures. We also propose a test for misclassification in the model based on an exclusion restriction that is straightforward to implement.

14.
In this paper we propose a new estimator for a model with one endogenous regressor and many instrumental variables. Our motivation comes from the recent literature on the poor properties of standard instrumental variables estimators when the instrumental variables are weakly correlated with the endogenous regressor. Our proposed estimator puts a random coefficients structure on the relation between the endogenous regressor and the instruments. The variance of the random coefficients is modelled as an unknown parameter. In addition to proposing a new estimator, our analysis yields new insights into the properties of the standard two‐stage least squares (TSLS) and limited‐information maximum likelihood (LIML) estimators in the case with many weak instruments. We show that in some interesting cases, TSLS and LIML can be approximated by maximizing the random effects likelihood subject to particular constraints. We show that statistics based on comparisons of the unconstrained estimates of these parameters to the implicit TSLS and LIML restrictions can be used to identify settings when standard large sample approximations to the distributions of TSLS and LIML are likely to perform poorly. We also show that with many weak instruments, LIML confidence intervals are likely to have under‐coverage, even though its finite sample distribution is approximately centered at the true value of the parameter. In an application with real data and simulations around this data set, the proposed estimator performs markedly better than TSLS and LIML, both in terms of coverage rate and in terms of risk.

15.
The behavior of research subjects in many areas of economics, finance, and business management can be characterized by moment-restriction models. However, parameter estimates in such models are very sensitive to the choice of moment conditions. How to select optimal moment conditions, and thereby obtain more accurate parameter estimates and more precise statistical inference, is an important problem facing empirical research. From the perspective of minimizing the estimator's mean squared error (MSE), this paper studies optimal moment selection for the two-step efficient generalized method of moments (GMM) estimator in general moment-restriction models. First, using an iterative approach, the higher-order MSE of the two-step efficient GMM estimator is derived, and its approximate MSE is obtained through a Nagar decomposition. Then, based on the approximate MSE expression, a general theory of moment selection for the two-step efficient GMM estimator is developed: optimal moment conditions are defined, an optimal moment selection criterion is proposed, and the asymptotic validity of the criterion is proved. Simulation results show that the proposed moment selection method substantially improves the finite-sample performance of the two-step efficient GMM estimator and reduces its effective-sample bias. This study provides a theoretical basis for the moment selection problem faced in empirical research.

16.
The choices of high-frequency covariance matrix estimator and of forecasting model jointly determine the quality of covariance forecasts, and hence the performance of volatility-timing portfolio strategies. When the asset dimension is high, constructing a high-frequency covariance estimator discards a large amount of data because of non-synchronous trading, reducing the efficiency of information use. This paper therefore applies the KEM estimator, which can make full use of intraday asset price information, to estimate high-dimensional covariance matrices of Chinese stock-market assets, and compares it with two commonly used covariance estimators. The three estimators are further used for out-of-sample forecasting with the multivariate heterogeneous autoregressive model, the exponentially weighted moving average model, and short-, medium-, and long-horizon moving average models, and the economic value is compared under three risk-based portfolio strategies. An empirical study using tick-by-tick high-frequency data on 20 constituent stocks of the SSE 50 index with differing liquidity finds: (1) in both calm and highly turbulent market periods, the long-horizon moving average model is the best choice for forecasting with high-dimensional covariance estimators, achieving the lowest costs and highest returns across all volatility-timing strategies; (2) in calm market periods the KEM estimator is the best choice for high-dimensional covariance estimation, generally delivering the lowest costs and highest returns across strategies, while in turbulent periods volatility timing with the KEM estimator retains its cost advantage but is not dominant in returns; (3) in both periods, the lowest costs are achieved with the equal-risk-contribution portfolio, while the highest returns are achieved with the minimum-variance portfolio. The study is the first to examine the applicability of the KEM estimator in common volatility-timing strategies and the first to document empirically the superiority of the simple long-horizon moving average model for forecasting high-dimensional covariance matrices, with important implications for investment decision-making and risk management.
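Among the forecasting models named in item 16, the exponentially weighted moving average is the simplest; a minimal sketch on simulated two-asset returns is given below. The decay λ = 0.97 is an arbitrary choice for the demo, and the KEM estimator itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
T, lam = 3000, 0.97
A = np.array([[1.0, 0.3],
              [0.0, 0.8]])
Sigma = A @ A.T                         # true covariance of the returns
r = rng.normal(size=(T, 2)) @ A.T       # simulated return series

S = np.cov(r[:50].T)                    # initialize from a short burn-in window
for t in range(50, T):
    # EWMA recursion: shrink yesterday's estimate toward today's outer product
    S = lam * S + (1 - lam) * np.outer(r[t], r[t])

print(np.round(S, 2))
```

The recursion keeps the estimate symmetric and positive semidefinite by construction, and its effective memory of roughly 1/(1−λ) observations is what distinguishes it from the plain moving averages compared in the abstract.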

17.
A cooperative-game approach to determining weight coefficients in combined evaluation
Using the principles of cooperative game theory, this paper treats the single evaluation methods with common attributes used in a combined evaluation as the players in a cooperative game. The mean value is taken as the reference benchmark for each single method's deviation from the combined conclusion, and the resulting sum of squared errors of the combined evaluation is viewed as the outcome of the cooperation. Each single method is then weighted by the contribution of its deviation to the total deviation from the combined conclusion. Finally, a case study is simulated and comparatively analyzed using the self-developed EVversion1.0 software built on MATLAB.
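One simple reading of item 17's weighting idea is sketched below on toy scores, with the cooperative-game allocation replaced by plain inverse-deviation weights as a stand-in: methods whose conclusions deviate less from the mean benchmark receive larger weights in the combined evaluation.

```python
import numpy as np

# scores[m, j]: score that evaluation method m assigns to item j (toy data)
scores = np.array([[0.9, 0.6, 0.3],
                   [0.8, 0.7, 0.2],
                   [0.5, 0.9, 0.4]])

benchmark = scores.mean(axis=0)                 # mean value as reference benchmark
sse = ((scores - benchmark) ** 2).sum(axis=1)   # each method's squared deviation
w = (1 / sse) / (1 / sse).sum()                 # inverse-deviation weights
combined = w @ scores                           # combined evaluation scores
print(np.round(w, 3), np.round(combined, 3))
```

With these toy numbers the second method tracks the consensus most closely and so gets the largest weight; the paper's actual allocation distributes weight by each method's marginal contribution within the cooperative game rather than by this simple inverse rule.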

18.
Jump Regressions     
We develop econometric tools for studying jump dependence of two processes from high‐frequency observations on a fixed time interval. In this context, only segments of data around a few outlying observations are informative for the inference. We derive an asymptotically valid test for stability of a linear jump relation over regions of the jump size domain. The test has power against general forms of nonlinearity in the jump dependence as well as temporal instabilities. We further propose an efficient estimator for the linear jump regression model that is formed by optimally weighting the detected jumps with weights based on the diffusive volatility around the jump times. We derive the asymptotic limit of the estimator, a semiparametric lower efficiency bound for the linear jump regression, and show that our estimator attains the latter. The analysis covers both deterministic and random jump arrivals. In an empirical application, we use the developed inference techniques to test the temporal stability of market jump betas.

19.
The purpose of this note is to show how semiparametric estimators with a small bias property can be constructed. The small bias property (SBP) of a semiparametric estimator is that its bias converges to zero faster than the pointwise and integrated bias of the nonparametric estimator on which it is based. We show that semiparametric estimators based on twicing kernels have the SBP. We also show that semiparametric estimators where nonparametric kernel estimation does not affect the asymptotic variance have the SBP. In addition we discuss an interpretation of series and sieve estimators as idempotent transformations of the empirical distribution that helps explain the known result that they lead to the SBP. In Monte Carlo experiments we find that estimators with the SBP have mean‐square error that is smaller and less sensitive to bandwidth than those that do not have the SBP.

20.
In this paper we study identification and estimation of a correlated random coefficients (CRC) panel data model. The outcome of interest varies linearly with a vector of endogenous regressors. The coefficients on these regressors are heterogenous across units and may covary with them. We consider the average partial effect (APE) of a small change in the regressor vector on the outcome (cf. Chamberlain (1984), Wooldridge (2005a)). Chamberlain (1992) calculated the semiparametric efficiency bound for the APE in our model and proposed a √N‐consistent estimator. Nonsingularity of the APE's information bound, and hence the appropriateness of Chamberlain's (1992) estimator, requires (i) the time dimension of the panel (T) to strictly exceed the number of random coefficients (p) and (ii) strong conditions on the time series properties of the regressor vector. We demonstrate irregular identification of the APE when T = p and for more persistent regressor processes. Our approach exploits the different identifying content of the subpopulations of stayers—or units whose regressor values change little across periods—and movers—or units whose regressor values change substantially across periods. We propose a feasible estimator based on our identification result and characterize its large sample properties. While irregularity precludes our estimator from attaining parametric rates of convergence, its limiting distribution is normal and inference is straightforward to conduct. Standard software may be used to compute point estimates and standard errors. We use our methods to estimate the average elasticity of calorie consumption with respect to total outlay for a sample of poor Nicaraguan households.
