Similar Articles (20 found)
1.
This paper reports estimates of the effects of JTPA training programs on the distribution of earnings. The estimation uses a new instrumental variable (IV) method that measures program impacts on quantiles. The quantile treatment effects (QTE) estimator reduces to quantile regression when selection for treatment is exogenously determined. QTE can be computed as the solution to a convex linear programming problem, although this requires first‐step estimation of a nuisance function. We develop distribution theory for the case where the first step is estimated nonparametrically. For women, the empirical results show that the JTPA program had the largest proportional impact at low quantiles. Perhaps surprisingly, however, JTPA training raised the quantiles of earnings for men only in the upper half of the trainee earnings distribution.
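The QTE machinery builds on the check (pinball) loss that underlies quantile regression. As a minimal numpy-only sketch of that building block (not the IV/QTE estimator itself, with an invented skewed sample standing in for earnings), minimizing the average check loss over a constant recovers the sample quantile:

```python
import numpy as np

def pinball_loss(c, y, tau):
    """Average check (pinball) loss of the constant predictor c at level tau."""
    u = y - c
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=0.5, size=5000)   # skewed stand-in for earnings

tau = 0.25
grid = np.linspace(y.min(), y.max(), 4001)           # brute-force search over constants
c_hat = grid[np.argmin([pinball_loss(c, y, tau) for c in grid])]
print(c_hat, np.quantile(y, tau))                    # minimizer matches the sample quantile
```

The full estimator replaces the constant with a model of treatment effects on quantiles and reweights observations using the first-step nuisance function.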

2.
This paper develops a method for inference in dynamic discrete choice models with serially correlated unobserved state variables. Estimation of these models involves computing high‐dimensional integrals that are present in the solution to the dynamic program and in the likelihood function. First, the paper proposes a Bayesian Markov chain Monte Carlo estimation procedure that can handle the problem of multidimensional integration in the likelihood function. Second, the paper presents an efficient algorithm for solving the dynamic program suitable for use in conjunction with the proposed estimation procedure.
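The MCMC idea the paper relies on can be illustrated far more simply than in a dynamic discrete choice model. The sketch below is a generic random-walk Metropolis sampler for the posterior mean of normal data (all numbers invented), checked against the conjugate closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=200)     # likelihood: N(mu, 1)
prior_mu, prior_sd = 0.0, 10.0            # prior: N(0, 10^2)

def log_post(mu):
    return (-0.5 * np.sum((data - mu) ** 2)
            - 0.5 * ((mu - prior_mu) / prior_sd) ** 2)

# Random-walk Metropolis: accept a proposal with prob min(1, posterior ratio)
draws, mu = [], 0.0
for _ in range(20000):
    prop = mu + rng.normal(0, 0.2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop
    draws.append(mu)
post_mean = np.mean(draws[5000:])         # discard burn-in

# Conjugate closed form for comparison
prec = len(data) + 1 / prior_sd ** 2
exact = (data.sum() + prior_mu / prior_sd ** 2) / prec
print(post_mean, exact)
```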

3.
This paper establishes that instruments enable the identification of nonparametric regression models in the presence of measurement error by providing a closed form solution for the regression function in terms of Fourier transforms of conditional expectations of observable variables. For parametrically specified regression functions, we propose a root n consistent and asymptotically normal estimator that takes the familiar form of a generalized method of moments estimator with a plugged‐in nonparametric kernel density estimate. Both the identification and the estimation methodologies rely on Fourier analysis and on the theory of generalized functions. The finite‐sample properties of the estimator are investigated through Monte Carlo simulations.
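The identification argument works through characteristic functions (Fourier transforms of distributions): the characteristic function of the mismeasured variable factors into that of the latent variable times that of the error. A minimal simulated sketch of that deconvolution step, under the simplifying assumption (not the paper's instrument-based setting) that the error distribution is known:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200000
x = rng.normal(1.0, 1.0, n)          # latent variable
u = rng.normal(0.0, 0.5, n)          # measurement error, distribution assumed known
w = x + u                            # observed mismeasured variable

t = 0.8                              # evaluate characteristic functions at one point
cf_w = np.mean(np.exp(1j * t * w))             # empirical cf of W = X + U
cf_u = np.exp(-0.5 * (0.5 * t) ** 2)           # known cf of N(0, 0.5^2)
cf_x_hat = cf_w / cf_u                         # deconvolution: cf_X = cf_W / cf_U
cf_x_true = np.exp(1j * t * 1.0 - 0.5 * t ** 2)
print(abs(cf_x_hat - cf_x_true))               # small: latent cf recovered
```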

4.
This paper presents a new approach to estimation and inference in panel data models with a general multifactor error structure. The unobserved factors and the individual‐specific errors are allowed to follow arbitrary stationary processes, and the number of unobserved factors need not be estimated. The basic idea is to filter the individual‐specific regressors by means of cross‐section averages such that asymptotically as the cross‐section dimension (N) tends to infinity, the differential effects of unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by least squares applied to auxiliary regressions where the observed regressors are augmented with cross‐sectional averages of the dependent variable and the individual‐specific regressors. A number of estimators (referred to as common correlated effects (CCE) estimators) are proposed and their asymptotic distributions are derived. The small sample properties of mean group and pooled CCE estimators are investigated by Monte Carlo experiments, showing that the CCE estimators have satisfactory small sample properties even under a substantial degree of heterogeneity and dynamics, and for relatively small values of N and T.
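The core of the CCE idea — augmenting each unit's regression with cross-sectional averages so the common factor is projected out — fits in a few lines. A simulated sketch with one factor and invented loadings, comparing the mean-group CCE estimate with naive pooled OLS:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, beta = 50, 50, 1.5
f = rng.normal(size=T)                              # unobserved common factor
gam = rng.uniform(0.5, 1.5, N)                      # factor loadings in y
dlt = rng.uniform(0.5, 1.5, N)                      # factor loadings in x
x = dlt[:, None] * f + rng.normal(size=(N, T))      # regressor loads on the factor
y = beta * x + gam[:, None] * f + rng.normal(size=(N, T))

xbar, ybar = x.mean(0), y.mean(0)                   # cross-section averages proxy for f

def cce_unit(i):
    # Unit-level regression of y_i on [x_i, ybar, xbar, 1]; first coefficient is beta_i
    Z = np.column_stack([x[i], ybar, xbar, np.ones(T)])
    return np.linalg.lstsq(Z, y[i], rcond=None)[0][0]

beta_cce = np.mean([cce_unit(i) for i in range(N)])  # mean-group CCE

# Naive pooled OLS ignoring the factor is badly biased here
beta_ols = np.sum(x * y) / np.sum(x * x)
print(beta_cce, beta_ols)
```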

5.
Beyond Markowitz with multiple criteria decision aiding
The paper is about portfolio selection in a non-Markowitz way, modeling uncertainty in terms of a series of meaningful quantiles of probability distributions. Treating the quantiles as evaluation criteria for the portfolios leads to a multiobjective optimization problem, which needs to be solved using a Multiple Criteria Decision Aiding (MCDA) method. The primary method we propose for solving this problem is an Interactive Multiobjective Optimization (IMO) method based on the Dominance-based Rough Set Approach (DRSA). IMO-DRSA alternates between two phases: a computation phase and a dialogue phase. In the computation phase, a sample of feasible portfolio solutions is calculated and presented to the Decision Maker (DM). In the dialogue phase, the DM indicates the portfolios that are relatively attractive in the given sample. This binary classification of the sample portfolios into 'good' and 'others' is the preference information analyzed by DRSA, which produces decision rules relating conditions on particular quantiles to the qualification of a portfolio as 'good'. The rule that best fits the DM's current preferences is then used to constrain the multiobjective optimization, so that the next computation phase yields a sample containing better portfolios; the procedure loops as many times as needed to arrive at the most preferred portfolio. We compare IMO-DRSA with two representative MCDA methods based on traditional preference models: a value function (the UTA method) and an outranking relation (the ELECTRE IS method). The comparison, which is methodological in nature, is illustrated by a didactic example.

6.
We propose a new regression method to evaluate the impact of changes in the distribution of the explanatory variables on quantiles of the unconditional (marginal) distribution of an outcome variable. The proposed method consists of running a regression of the (recentered) influence function (RIF) of the unconditional quantile on the explanatory variables. The influence function, a widely used tool in robust estimation, is easily computed for quantiles, as well as for other distributional statistics. Our approach, thus, can be readily generalized to other distributional statistics.
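The defining property of the recentered influence function of a quantile is that its mean equals the quantile itself, which is why regressing the RIF on covariates approximates unconditional quantile effects. A sketch verifying the mean property on simulated data, with the density at the quantile estimated by a simple Gaussian kernel:

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(0, 1, 100000)
tau = 0.9
q = np.quantile(y, tau)

# Density at q via a Gaussian kernel estimate (rule-of-thumb bandwidth)
h = 1.06 * y.std() * len(y) ** (-1 / 5)
f_q = np.mean(np.exp(-0.5 * ((y - q) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

# RIF of the tau-quantile: q + (tau - 1{y <= q}) / f(q)
rif = q + (tau - (y <= q)) / f_q
print(rif.mean(), q)            # mean of the RIF recovers the quantile
```

In the paper's method, the next step would be an ordinary least-squares regression of `rif` on explanatory variables.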

7.
Model averaging is preferred over single-model estimation of the benchmark dose (BMD) in dichotomous dose–response analysis, but challenges remain in implementing these methods for general analyses before model averaging is feasible in many risk assessment applications, and there is little work on Bayesian methods that include informative prior information for both the models and the parameters of the constituent models. This article introduces a novel approach that addresses many of these challenges while providing a fully Bayesian framework. Furthermore, in contrast to methods that use Markov chain Monte Carlo, we approximate the posterior density using maximum a posteriori (MAP) estimation. The approximation allows for an accurate and reproducible estimate while maintaining the speed of maximum likelihood, which is crucial in applications such as processing massive high-throughput data sets. We assess this method by applying it to empirical laboratory dose–response data and measuring the coverage of confidence limits for the BMD, comparing the coverage to that of other approaches using the same set of models. The simulation study shows the method to be markedly superior to the traditional approach of selecting a single preferred model (e.g., from the U.S. EPA BMD software) for the analysis of dichotomous data, and comparable or superior to the other approaches.
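The MAP step can be illustrated on a single dose group, far short of the article's model-averaged BMD machinery: with a binomial response count and a Beta prior (counts and prior parameters invented), the posterior mode found numerically matches the closed form:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# One dose group: 7 responders out of 20; Beta(2, 2) prior on response prob p
x_resp, n_sub, a, b = 7, 20, 2.0, 2.0

def neg_log_post(p):
    # Negative log posterior (up to a constant): binomial likelihood x Beta prior
    return -(x_resp * np.log(p) + (n_sub - x_resp) * np.log(1 - p)
             + (a - 1) * np.log(p) + (b - 1) * np.log(1 - p))

res = minimize_scalar(neg_log_post, bounds=(1e-6, 1 - 1e-6), method="bounded")
p_map = res.x
p_map_exact = (x_resp + a - 1) / (n_sub + a + b - 2)   # Beta posterior mode
p_mle = x_resp / n_sub                                  # MLE for comparison
print(p_map, p_map_exact, p_mle)
```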

8.
Application of weighted composite quantile regression to dynamic VaR risk measurement
Value at Risk (VaR), being simple and intuitive, has become one of the most widely used risk measures internationally, and computing unconditional risk measures from autoregressive (AR) time-series models is common practice in industry. Building on quantile regression theory, this paper proposes an estimation method for the AR model: the weighted composite quantile regression (WCQR) estimator. The method pools information across multiple quantiles to improve the efficiency of parameter estimation, and assigns different weights to the different quantile regressions to make the estimation more efficient; the asymptotic normality of the estimator is established. Finite-sample simulations show that when the residuals are non-normally distributed, the statistical performance of the WCQR estimator is close to that of maximum likelihood, yet WCQR requires no knowledge of the residual distribution, which makes it the more competitive estimator. The method performs well in forecasting the dynamic VaR of asset returns. Applying the proposed theory to nine closed-end funds in China, the empirical analysis finds that the VaR obtained with WCQR is very close to that obtained with nonparametric methods, while WCQR can also compute dynamic VaR and forecast the VaR of asset returns.
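A sketch of the composite quantile regression idea for an AR(1) model — one common slope fitted jointly across several quantile levels — using equal weights rather than the data-driven weights of WCQR (all settings invented):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, phi = 2000, 0.6
e = rng.standard_t(df=3, size=n)            # heavy-tailed, non-normal innovations
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + e[t]
Y, X = y[1:], y[:-1]

taus = np.array([0.25, 0.5, 0.75])          # quantile levels, equally weighted here

def cqr_loss(theta):
    # theta = (phi, b_25, b_50, b_75): one common slope, one intercept per level
    ph, bs = theta[0], theta[1:]
    total = 0.0
    for tau, b in zip(taus, bs):
        u = Y - ph * X - b
        total += np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))
    return total

theta0 = np.concatenate([[0.0], np.quantile(Y, taus)])
res = minimize(cqr_loss, theta0, method="Nelder-Mead",
               options={"maxiter": 5000, "fatol": 1e-10, "xatol": 1e-8})
phi_hat = res.x[0]
print(phi_hat)                               # close to the true phi = 0.6
```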

9.
Discrete Probability Distributions for Probabilistic Fracture Mechanics
Recently, discrete probability distributions (DPDs) have been suggested for use in risk analysis calculations to simplify the numerical computations which must be performed to determine failure probabilities. Specifically, DPDs have been developed to investigate probabilistic functions, that is, functions whose exact form is uncertain. The analysis of defect growth in materials by probabilistic fracture mechanics (PFM) models provides an example in which probabilistic functions play an important role. This paper compares and contrasts Monte Carlo simulation and DPDs as tools for calculating material failure due to fatigue crack growth. For the problem studied, the DPD method takes approximately one third the computation time of the Monte Carlo approach for comparable accuracy. It is concluded that the DPD method has considerable promise in low-failure-probability calculations of importance in risk assessment. In contrast to Monte Carlo, the computation time for the DPD approach is relatively insensitive to the magnitude of the probability being estimated.
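The DPD approach amounts to exact enumeration over the support of each discrete distribution, in place of sampling. A toy load-versus-capacity failure probability (all values invented) illustrates the contrast with Monte Carlo:

```python
import numpy as np
from itertools import product

# Discrete probability distributions: lists of (value, probability) pairs
load = [(40, 0.5), (55, 0.3), (70, 0.2)]
capacity = [(50, 0.4), (65, 0.4), (80, 0.2)]

# DPD approach: enumerate all combinations; failure when load exceeds capacity
p_fail_dpd = sum(pl * pc for (l, pl), (c, pc) in product(load, capacity) if l > c)

# Monte Carlo approach for comparison
rng = np.random.default_rng(6)
ls = rng.choice([v for v, _ in load], size=200000, p=[p for _, p in load])
cs = rng.choice([v for v, _ in capacity], size=200000, p=[p for _, p in capacity])
p_fail_mc = np.mean(ls > cs)
print(p_fail_dpd, p_fail_mc)    # 0.28 exactly vs. a noisy estimate of 0.28
```

Because the DPD answer is an exact sum, its cost does not grow as the target probability shrinks, matching the abstract's point about low-failure-probability calculations.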

10.
This paper studies the estimation of dynamic discrete games of incomplete information. Two main econometric issues appear in the estimation of these models: the indeterminacy problem associated with the existence of multiple equilibria and the computational burden in the solution of the game. We propose a class of pseudo maximum likelihood (PML) estimators that deals with these problems, and we study the asymptotic and finite sample properties of several estimators in this class. We first focus on two‐step PML estimators, which, although they are attractive for their computational simplicity, have some important limitations: they are seriously biased in small samples; they require consistent nonparametric estimators of players' choice probabilities in the first step, which are not always available; and they are asymptotically inefficient. Second, we show that a recursive extension of the two‐step PML, which we call nested pseudo likelihood (NPL), addresses those drawbacks at a relatively small additional computational cost. The NPL estimator is particularly useful in applications where consistent nonparametric estimates of choice probabilities either are not available or are very imprecise, e.g., models with permanent unobserved heterogeneity. Finally, we illustrate these methods in Monte Carlo experiments and in an empirical application to a model of firm entry and exit in oligopoly markets using Chilean data from several retail industries.

11.
The application of an ISO standard procedure (the Guide to the Expression of Uncertainty in Measurement (GUM)) is discussed here as a way to quantify uncertainty in human risk estimation under chronic exposure to hazardous chemical compounds. The procedure was previously applied to a simple model; in this article a much more complex model is used, with multiple compounds and multiple exposure pathways. Risk was evaluated using the usual methodologies: the deterministic reasonable maximum exposure (RME) and the statistical Monte Carlo method. In both cases, the procedures to evaluate uncertainty in the risk values are detailed. Uncertainties were evaluated by different methodologies to account for the peculiarities of the information about each variable. The GUM procedure enables the ranking of variables by their contribution to uncertainty, and thus provides a criterion for choosing variables for deeper analysis. The results show that the GUM procedure is an easy and straightforward way to quantify the uncertainty and variability of risk estimates. The health risk estimation is based on literature data for a water table contaminated by three volatile organic compounds, with daily intake through either ingestion of water or inhalation during showering. The results identify one of the substances as the main contaminant and give a criterion for identifying the key component around which treatment selection and treatment process design should be oriented in order to reduce risk.

12.
Parisian options are complex path-dependent options developed from barrier options: the holder may buy or sell the underlying asset at a pre-agreed (strike) price provided the underlying price stays above or below a given price level (the barrier), continuously or cumulatively, for a pre-specified length of time. The main numerical methods currently used to price Parisian options are binomial trees, finite differences, and Monte Carlo. Our results show that, at a given accuracy ε, multilevel Monte Carlo reduces the computational cost of standard Monte Carlo from O(ε^{-3}) to O(ε^{-2}(log ε)^{2}); conversely, at a given computational budget, multilevel Monte Carlo converges to a neighborhood of the true value faster than standard Monte Carlo. Applying the method to Parisian option pricing broadens the set of numerical algorithms available for these options and improves pricing accuracy.
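The multilevel mechanics — a telescoping sum of coupled coarse/fine estimators, with most samples spent on the cheap coarse levels — can be sketched on a plain European call under geometric Brownian motion rather than a Parisian option (all parameters invented; the Black–Scholes value here is roughly 10.45):

```python
import numpy as np

rng = np.random.default_rng(7)
S0, r, sig, T, K = 100.0, 0.05, 0.2, 1.0, 100.0

def payoff(ST):
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

def level_estimator(level, n_paths):
    """E[P_l - P_{l-1}] from coupled coarse/fine Euler paths sharing Brownian increments."""
    nf = 2 ** level                       # fine time steps on this level
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, nf))
    Sf = np.full(n_paths, S0)
    for k in range(nf):                   # fine Euler path
        Sf = Sf * (1 + r * dt + sig * dW[:, k])
    if level == 0:
        return payoff(Sf).mean()
    Sc = np.full(n_paths, S0)             # coarse path: pairs of increments summed
    for k in range(nf // 2):
        Sc = Sc * (1 + r * 2 * dt + sig * (dW[:, 2 * k] + dW[:, 2 * k + 1]))
    return (payoff(Sf) - payoff(Sc)).mean()

# Telescoping sum over levels: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
# with the number of paths shrinking geometrically as levels get finer
price = sum(level_estimator(l, 200000 // 2 ** l + 1000) for l in range(6))
print(price)
```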

13.
Studies of the relation between order imbalance and stock returns face two difficulties: first, the effect of order imbalance on returns is heterogeneous across market conditions; second, the data sets involved are large. We therefore apply quantile regression for large-scale data, which both reveals the heterogeneous effect of order imbalance on returns at different quantiles, characterizing the relationship in detail, and meets the modeling demands of large data sets, yielding more reliable results. Using Shanghai and Shenzhen A-share data, large-scale quantile regression extracts more useful information than mean regression. The empirical results show, first, that at high quantiles the one-period-lagged order imbalance has a positive effect on returns that increases with the quantile, while at low quantiles the effect is negative; and second, that after controlling for contemporaneous order imbalance, lagged order imbalance has a negative effect on returns that declines as the quantile increases. These results imply that order imbalance has some explanatory and predictive power for stock returns.

14.
A valid Edgeworth expansion is established for the limit distribution of density‐weighted semiparametric averaged derivative estimates of single index models. The leading term that corrects the normal limit varies in magnitude, depending on the choice of bandwidth and kernel order. In general this term has order larger than the n^{−1/2} that prevails in standard parametric problems, but we find circumstances in which it is O(n^{−1/2}), thereby extending the achievement of an n^{−1/2} Berry‐Esseen bound in Robinson (1995a). A valid empirical Edgeworth expansion is also established. We also provide theoretical and empirical Edgeworth expansions for a studentized statistic, where some correction terms are different from those for the unstudentized case. We report a Monte Carlo study of finite sample performance.

15.
Long memory in ultra-high-frequency duration series in the Chinese stock market
For ultra-high-frequency duration series in stock markets, we propose a long-memory stochastic conditional duration (LMSCD) model and design a spectral-likelihood parameter estimation method based on a chaotic tabu genetic algorithm; Monte Carlo simulation experiments verify the feasibility of the method. Using ultra-high-frequency data on Shanghai Pudong Development Bank stock, we then fit LMSCD models to trade durations, price durations, and volume durations, confirming the presence of long memory in ultra-high-frequency duration series in the Chinese stock market.

16.
A Monte Carlo method for integrated risk measurement of defaultable zero-coupon bonds
Defaultable zero-coupon bonds face two main types of risk: default risk and market risk (interest-rate risk). Unlike traditional approaches that measure different risk types separately, and unlike approaches that measure the two risks separately and then aggregate them or link them through a copula, this paper builds on an intensity-based credit risk pricing model to construct a Monte Carlo method for the integrated VaR of defaultable zero-coupon bonds that accounts simultaneously for credit risk, market risk, and the dependence between them. The method yields a loss distribution reflecting both risks over a single risk horizon and the quantile of that distribution at a given confidence level, and hence an integrated VaR, so that both types of risk are captured within one framework. The technical details of the Monte Carlo simulation are given, including the simulation of the default time and of the underlying state-vector process. Finally, the integrated risk-measurement model is applied to short-term commercial paper, producing an integrated VaR that is compared with the VaR of interest-rate risk and of credit risk measured in isolation.
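A stripped-down version of the simulation loop — an exponential default time from a constant intensity, a Vasicek-style draw for the short rate, and a loss quantile — conveys how both risks enter one loss distribution. Every parameter below is invented, and the intensity is held constant, unlike the paper's state-dependent model:

```python
import numpy as np

rng = np.random.default_rng(10)
n_sims, horizon = 100000, 1.0
lam = 0.03                                   # default intensity (assumed constant)
r0, a, b_r, sig_r = 0.03, 0.5, 0.04, 0.01    # illustrative Vasicek short-rate parameters
face, recovery, maturity = 100.0, 40.0, 5.0

# Short rate at the horizon via the exact Vasicek transition
rT = (r0 * np.exp(-a * horizon) + b_r * (1 - np.exp(-a * horizon))
      + sig_r * np.sqrt((1 - np.exp(-2 * a * horizon)) / (2 * a))
      * rng.normal(size=n_sims))

# Default time: exponential with rate lam
tau_def = rng.exponential(1 / lam, size=n_sims)

# Bond value at the horizon: recovery if default occurred, else crude discounting at rT
value = np.where(tau_def <= horizon, recovery,
                 face * np.exp(-rT * (maturity - horizon)))
loss = np.median(value) - value              # loss relative to a central revaluation
var_99 = np.quantile(loss, 0.99)             # integrated 99% VaR from one distribution
print(var_99)
```

With these numbers the one-year default probability (about 3%) exceeds 1%, so the 99% loss quantile is driven by default scenarios rather than rate moves.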

17.
We develop a new specification test for IV estimators adopting a particular second order approximation of Bekker. The new specification test compares the difference of the forward (conventional) 2SLS estimator of the coefficient of the right‐hand side endogenous variable with the reverse 2SLS estimator of the same unknown parameter when the normalization is changed. Under the null hypothesis that conventional first order asymptotics provide a reliable guide to inference, the two estimates should be very similar. Our test sees whether the resulting difference in the two estimates satisfies the results of second order asymptotic theory. Essentially the same idea is applied to develop another new specification test using second‐order unbiased estimators of the type first proposed by Nagar. If the forward and reverse Nagar‐type estimators are not significantly different we recommend estimation by LIML, which we demonstrate is the optimal linear combination of the Nagar‐type estimators (to second order). We also demonstrate the high degree of similarity for k‐class estimators between the approach of Bekker and the Edgeworth expansion approach of Rothenberg. An empirical example and Monte Carlo evidence demonstrate the operation of the new specification test.
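The forward and reverse 2SLS estimates at the heart of the test can be computed with basic linear algebra. A simulated sketch (invented design) in which instruments are strong and first-order asymptotics are reliable, so the two estimates nearly coincide:

```python
import numpy as np

rng = np.random.default_rng(8)
n, beta = 5000, 2.0
z = rng.normal(size=(n, 3))                       # instruments
v = rng.normal(size=n)
x = z @ np.array([1.0, 0.8, 0.6]) + v             # endogenous regressor
y = beta * x + 0.7 * v + rng.normal(size=n)       # error correlated with x through v

def tsls(dep, endog, Z):
    # 2SLS with one endogenous regressor: project endog on Z, then IV-regress dep
    fitted = Z @ np.linalg.solve(Z.T @ Z, Z.T @ endog)
    return (fitted @ dep) / (fitted @ endog)

b_fwd = tsls(y, x, z)          # forward normalization: y on x
b_rev = 1.0 / tsls(x, y, z)    # reverse normalization: x on y, then invert
print(b_fwd, b_rev)            # both near beta = 2, and near each other
```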

18.
We consider semiparametric estimation of the memory parameter in a model that includes as special cases both long‐memory stochastic volatility and fractionally integrated exponential GARCH (FIEGARCH) models. Under our general model the logarithms of the squared returns can be decomposed into the sum of a long‐memory signal and a white noise. We consider periodogram‐based estimators using a local Whittle criterion function. We allow the optional inclusion of an additional term to account for possible correlation between the signal and noise processes, as would occur in the FIEGARCH model. We also allow for potential nonstationarity in volatility by allowing the signal process to have a memory parameter d* ≥ 1/2. We show that the local Whittle estimator is consistent for d* ∈ (0,1). We also show that the local Whittle estimator is asymptotically normal for d* ∈ (0,3/4) and essentially recovers the optimal semiparametric rate of convergence for this problem. In particular, if the spectral density of the short‐memory component of the signal is sufficiently smooth, a convergence rate of n^{2/5−δ} for d* ∈ (0,3/4) can be attained, where n is the sample size and δ > 0 is arbitrarily small. This represents a strong improvement over the performance of existing semiparametric estimators of persistence in volatility. We also prove that the standard Gaussian semiparametric estimator is asymptotically normal if d* = 0. This yields a test for long memory in volatility.
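The local Whittle criterion itself is short. A sketch that applies it to white noise (true memory parameter 0), minimizing R(d) = log(mean(λ_j^{2d} I_j)) − 2d·mean(log λ_j) over the first m Fourier frequencies (the bandwidth m is chosen arbitrarily here, and the signal-plus-noise structure of the paper is omitted):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(9)
n = 4096
x = rng.normal(size=n)                      # short-memory series: true d = 0

# Periodogram at the first m Fourier frequencies
m = 150
lam = 2 * np.pi * np.arange(1, m + 1) / n
I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)

def whittle(d):
    # Local Whittle objective R(d)
    return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))

d_hat = minimize_scalar(whittle, bounds=(-0.49, 0.99), method="bounded").x
print(d_hat)                                # close to 0 for white noise
```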

19.
Structural breaks are pervasive in macroeconomics, and the quality of model estimators is sensitive to the size of the estimation sample. For time-varying parameter models, this paper establishes a rolling-window width selection criterion: the window width is chosen by minimizing an approximate quadratic loss function of the estimator while maximizing the Manhattan distance between the estimators across subsamples, trading off the two conflicting goals of estimator accuracy and time variability. Monte Carlo simulation experiments show that the proposed method works under various forms of structural break, applies to time-varying parameter models with both linear and nonlinear relationships, and is robust throughout. Applying the method to identifying structural breaks in China's financial network markedly improves on the results of traditional window-width selection methods.

20.
Building on the intrinsic connection between monofractal models and the multifractal models represented by the wavelet-leader method, we analyze an inherent flaw of the wavelet-leader method and propose a correction, comparing the traditional and corrected wavelet-leader methods by Monte Carlo simulation; within the corrected framework, we use an approximation-based Bayesian method to estimate the multifractal spectrum and related parameters of the Shanghai Composite Index return series. Theoretical analysis shows that in the traditional wavelet-leader method the mother wavelet has "total energy equal to one", which is inconsistent both with the classical R/S method and with actual stock markets, and leads to severe underestimation of the scaling exponent. Empirically, the corrected wavelet-leader method eliminates the underestimation of the scaling exponent and the distortion of the estimated multifractal spectrum; after using the corrected method to characterize stock-market volatility, the approximation-based Bayesian estimator not only reduces the number of parameters to be estimated but also accurately identifies the main turning points in the long-run trend of China's stock market, and the results are highly robust.
