Similar Literature
 Found 20 similar documents (search time: 46 ms)
1.
As the underlying of financial derivatives, whether the CSI 300 index exhibits jumps, and what dynamics those jumps follow, matters greatly for asset pricing and risk management. This paper proposes a new spot-variance estimation method and constructs a jump test statistic with better asymptotic properties, which is applied to 5-minute CSI 300 data to test for jumps at individual time points; on this basis the dynamics of the jumps are analyzed. The empirical results show that the CSI 300 index does jump; the number of jump occurrences follows a Poisson process, but the jump probability varies over time; and the jump-size distribution is heavy-tailed and right-skewed, changes over time, and does not satisfy an identical-distribution assumption. The results provide baseline empirical conclusions for related research.
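The abstract does not spell out its spot-variance estimator or test statistic, so the following is only a minimal sketch of a well-known test of the same flavor: a Lee-Mykland-style intraday jump test in which each 5-minute return is standardized by a local bipower-variation estimate of spot variance and compared against a Gumbel-based critical value. The window length K and level alpha are illustrative assumptions, not values from the paper.

```python
import numpy as np

def jump_flags_lm(returns, K=78, alpha=0.01):
    """Flag intraday returns as jumps when they are large relative to a local
    bipower-variation estimate of spot variance (Lee-Mykland-style sketch)."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    mu1 = np.sqrt(2.0 / np.pi)                 # E|Z| for Z ~ N(0, 1)

    # Gumbel-type critical value for the maximum of n standardized statistics
    a = np.sqrt(2.0 * np.log(n))
    c_n = a - (np.log(np.pi) + np.log(np.log(n))) / (2.0 * a)
    s_n = 1.0 / a
    crit = c_n - s_n * np.log(-np.log(1.0 - alpha))

    flags = np.zeros(n, dtype=bool)
    for t in range(K, n):
        w = r[t - K:t]                          # local window, excludes r[t]
        spot_var = np.sum(np.abs(w[1:]) * np.abs(w[:-1])) / (mu1 ** 2 * (K - 1))
        flags[t] = np.abs(r[t]) / np.sqrt(spot_var) > crit
    return flags
```

Counting the flagged returns per day yields the kind of jump-arrival series whose Poisson behavior and time-varying intensity the abstract examines.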

2.
Nonseparable panel models are important in a variety of economic settings, including discrete choice. This paper gives identification and estimation results for nonseparable models under time‐homogeneity conditions that are like “time is randomly assigned” or “time is an instrument.” Partial‐identification results for average and quantile effects are given for discrete regressors, under static or dynamic conditions, in fully nonparametric and in semiparametric models, with time effects. It is shown that the usual, linear, fixed‐effects estimator is not a consistent estimator of the identified average effect, and a consistent estimator is given. A simple estimator of identified quantile treatment effects is given, providing a solution to the important problem of estimating quantile treatment effects from panel data. Bounds for overall effects in static and dynamic models are given. The dynamic bounds provide a partial‐identification solution to the important problem of estimating the effect of state dependence in the presence of unobserved heterogeneity. The impact of T, the number of time periods, is shown by deriving shrinkage rates for the identified set as T grows. We also consider semiparametric, discrete‐choice models and find that semiparametric panel bounds can be much tighter than nonparametric bounds. Computationally convenient methods for semiparametric models are presented. We propose a novel inference method that applies in panel data and other settings and show that it produces uniformly valid confidence regions in large samples. We give empirical illustrations.

3.
Based on semimartingale theory and nonparametric statistical inference, this paper uses the asymptotic properties of realized power variation to construct test statistics and, within a unified analytical framework, studies stochastic volatility, jumps, and microstructure noise in financial asset prices in a comprehensive and systematic way. The empirical analysis uses high-frequency data on Shanghai Stock Exchange stocks from different industries and on the SSE 50 index and its constituents. The results show that noise trading is significant in China's A-share market; about 43% of risk stems from the stochastic-volatility component of the asset return process and can be hedged with equity options; the sources of risk, ranked by importance, are stochastic-volatility risk, systematic jump risk, and idiosyncratic jump risk; and more liquid stocks show stronger evidence of jumps, especially of infinitely small jumps.
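As a hedged illustration of the building blocks such an analysis rests on (not the paper's own statistics), the sketch below splits one day's realized variance into a jump-robust bipower-variation part and a jump residual, and adds a crude Roll-type autocovariance proxy for microstructure noise. The function name and the noise proxy are illustrative choices.

```python
import numpy as np

def rv_decomposition(returns):
    """Decompose one day's realized variance into a jump-robust continuous
    part (bipower variation) and a jump residual, with a crude noise proxy."""
    r = np.asarray(returns, dtype=float)
    mu1 = np.sqrt(2.0 / np.pi)                        # E|Z|, Z ~ N(0, 1)

    rv = np.sum(r ** 2)                               # total realized variance
    bv = np.sum(np.abs(r[1:]) * np.abs(r[:-1])) / mu1 ** 2   # continuous part
    jump = max(rv - bv, 0.0)                          # jump contribution
    noise_var = max(-np.mean(r[1:] * r[:-1]), 0.0)    # Roll-type noise proxy
    return {"RV": rv, "BV": bv,
            "jump_share": jump / rv if rv > 0 else 0.0,
            "noise_var_proxy": noise_var}
```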

4.
We develop a new parametric estimation procedure for option panels observed with error. We exploit asymptotic approximations assuming an ever increasing set of option prices in the moneyness (cross‐sectional) dimension, but with a fixed time span. We develop consistent estimators for the parameters and the dynamic realization of the state vector governing the option price dynamics. The estimators converge stably to a mixed‐Gaussian law and we develop feasible estimators for the limiting variance. We also provide semiparametric tests for the option price dynamics based on the distance between the spot volatility extracted from the options and one constructed nonparametrically from high‐frequency data on the underlying asset. Furthermore, we develop new tests for the day‐by‐day model fit over specific regions of the volatility surface and for the stability of the risk‐neutral dynamics over time. A comprehensive Monte Carlo study indicates that the inference procedures work well in empirically realistic settings. In an empirical application to S&P 500 index options, guided by the new diagnostic tests, we extend existing asset pricing models by allowing for a flexible dynamic relation between volatility and priced jump tail risk. Importantly, we document that the priced jump tail risk typically responds in a more pronounced and persistent manner than volatility to large negative market shocks.

5.
In the setting of ‘affine’ jump‐diffusion state processes, this paper provides an analytical treatment of a class of transforms, including various Laplace and Fourier transforms as special cases, that allow an analytical treatment of a range of valuation and econometric problems. Example applications include fixed‐income pricing models, with a role for intensity‐based models of default, as well as a wide range of option‐pricing applications. An illustrative example examines the implications of stochastic volatility and jumps for option valuation. This example highlights the impact on option ‘smirks’ of the joint distribution of jumps in volatility and jumps in the underlying asset price, through both jump amplitude as well as jump timing.
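The transform analysis itself is analytical; as a simple numerical companion (a sketch under illustrative assumptions, not the paper's transform method), the code below simulates a Heston-type stochastic-volatility model with compound-Poisson lognormal price jumps under the risk-neutral measure and prices a European call by Monte Carlo. Varying the jump parameters then shows the kind of smirk effects the paper's illustrative example discusses; all parameter names and default values are assumptions.

```python
import numpy as np

def svj_call_price_mc(s0=100.0, strike=100.0, t=0.5, r=0.02,
                      v0=0.04, kappa=2.0, theta=0.04, sigma_v=0.3, rho=-0.7,
                      lam=0.5, mu_j=-0.1, sig_j=0.15,
                      n_paths=100_000, n_steps=125, seed=0):
    """Monte Carlo price of a European call under stochastic volatility with
    lognormal price jumps (Euler scheme with full truncation of the variance)."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    comp = lam * (np.exp(mu_j + 0.5 * sig_j ** 2) - 1.0)    # jump compensator

    log_s = np.full(n_paths, np.log(s0))
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                             # full truncation
        nj = rng.poisson(lam * dt, n_paths)                 # number of jumps
        jumps = nj * mu_j + np.sqrt(nj) * sig_j * rng.standard_normal(n_paths)
        log_s += (r - comp - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1 + jumps
        v += kappa * (theta - vp) * dt + sigma_v * np.sqrt(vp * dt) * z2

    payoff = np.maximum(np.exp(log_s) - strike, 0.0)
    return np.exp(-r * t) * payoff.mean()
```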

6.
Motivated by the mean reversion and volatility clustering of the VIX index, as well as the jump self-excitation recently documented in empirical studies, this paper models VIX jumps with a self-exciting Hawkes process and builds an affine jump-diffusion model for VIX option pricing. The conditional characteristic function of the Hawkes jump-diffusion process is derived, and a Fourier-transform method is then used within the risk-neutral pricing framework to obtain a valuation formula for VIX options. The empirical results show that the model not only overcomes the large fitting errors of standard mean-reverting models but also generates a positive implied-volatility skew and an implied-volatility smile; moreover, because jump self-excitation is taken into account, the model further improves VIX option price forecasts relative to a mean-reverting model with Poisson jumps.
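A minimal sketch of the self-excitation mechanism (not the paper's pricing transform): simulating jump times from a Hawkes process with an exponential kernel via Ogata's thinning algorithm, so that each jump temporarily raises the intensity of further jumps. Parameter names and the kernel choice are illustrative assumptions.

```python
import numpy as np

def simulate_hawkes_jumps(mu, alpha, beta, horizon, seed=0):
    """Simulate jump times of a self-exciting Hawkes process by Ogata thinning.

    Intensity: lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
    rng = np.random.default_rng(seed)
    times = []
    t = 0.0
    while True:
        past = np.array(times)
        lam_bar = mu + alpha * np.exp(-beta * (t - past)).sum()  # upper bound
        t += rng.exponential(1.0 / lam_bar)       # candidate next event time
        if t >= horizon:
            break
        lam_t = mu + alpha * np.exp(-beta * (t - past)).sum()    # true intensity
        if rng.uniform() <= lam_t / lam_bar:      # thinning: accept candidate
            times.append(t)
    return np.array(times)
```

Because the exponential kernel decays between events, the intensity right after the last event bounds the intensity until the next one, which is what makes the thinning step valid.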

7.
In expected utility theory, risk attitudes are modeled entirely in terms of utility. In the rank‐dependent theories, a new dimension is added: chance attitude, modeled in terms of nonadditive measures or nonlinear probability transformations that are independent of utility. Most empirical studies of chance attitude assume probabilities given and adopt parametric fitting for estimating the probability transformation. Only a few qualitative conditions have been proposed or tested as yet, usually quasi‐concavity or quasi‐convexity in the case of given probabilities. This paper presents a general method of studying qualitative properties of chance attitude such as optimism, pessimism, and the “inverse‐S shape” pattern, both for risk and for uncertainty. These qualitative properties can be characterized by permitting appropriate, relatively simple, violations of the sure‐thing principle. In particular, this paper solves a hitherto open problem: the preference axiomatization of convex (“pessimistic” or “uncertainty averse”) nonadditive measures under uncertainty. The axioms of this paper preserve the central feature of rank‐dependent theories, i.e. the separation of chance attitude and utility.

8.
Understanding the mechanism behind jumps and disentangling different types of risk is crucial for volatility estimation and modeling, and lies at the core of risk management. Research in this area using high-frequency data is still immature, and much remains to be explored. Based on nonparametric methods combined with the A-J jump test statistic, this paper constructs new jump-variance and continuous-sample-path-variance measures and models the jump variance. Using high-frequency data on the Shanghai Composite Index, it analyzes the statistical characteristics of jump variance, the contribution of jump variance, jump sizes, and the relation between jumps and economic news. The results show that jump variance exhibits leptokurtosis, fat tails, and volatility clustering; the contribution of jump variance to total variance is similar across sampling frequencies; positive and negative jump sizes are asymmetric, and standardized returns with jumps removed are close to normally distributed; and economic news releases are consistently positively correlated with jumps, which helps explain several anomalies. Given the complexity of volatility and jumps, this research helps investors improve their investment strategies and provides regulators with a basis for supervision.

9.
Adam M. Finkel, Risk Analysis, 2014, 34(10): 1785-1794
If exposed to an identical concentration of a carcinogen, every human being would face a different level of risk, determined by his or her genetic, environmental, medical, and other uniquely individual characteristics. Various lines of evidence indicate that this susceptibility variable is distributed rather broadly in the human population, with perhaps a factor of 25‐ to 50‐fold between the center of this distribution and either of its tails, but cancer risk assessment at the EPA and elsewhere has always treated every (adult) human as identically susceptible. The National Academy of Sciences “Silver Book” concluded that EPA and the other agencies should fundamentally correct their mis‐computation of carcinogenic risk in two ways: (1) adjust individual risk estimates upward to provide information about the upper tail; and (2) adjust population risk estimates upward (by about sevenfold) to correct an underestimation due to a mathematical property of the interindividual distribution of human susceptibility, in which the susceptibility averaged over the entire (right‐skewed) population exceeds the median value for the typical human. In this issue of Risk Analysis, Kenneth Bogen disputes the second adjustment and endorses the first, though he also relegates the problem of underestimated individual risks to the realm of “equity concerns” that he says should have little if any bearing on risk management policy. In this article, I show why the basis for the population risk adjustment that the NAS recommended is correct—that current population cancer risk estimates, whether they are derived from animal bioassays or from human epidemiologic studies, likely provide estimates of the median with respect to human variation, which in turn must be an underestimate of the mean. If cancer risk estimates have larger “conservative” biases embedded in them, a premise I have disputed in many previous writings, such a defect would not excuse ignoring this additional bias in the direction of underestimation. I also demonstrate that sensible, legally appropriate, and ethical risk policy must not only inform the public when the tail of the individual risk distribution extends into the “high‐risk” range, but must alter benefit‐cost balancing to account for the need to try to reduce these tail risks preferentially.
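The mean-versus-median point can be illustrated with a purely hypothetical calculation (not the NAS's or Finkel's own numbers): if susceptibility is lognormal and its 95th percentile sits 25- to 50-fold above the median, then the population mean exceeds the median by a several-fold factor, broadly consistent with the "about sevenfold" adjustment quoted above at the lower end of that range.

```python
import numpy as np
from scipy.stats import norm

def mean_over_median(tail_fold, q=0.95):
    """If susceptibility is lognormal and its q-th percentile is `tail_fold`
    times the median, return the ratio of the population mean to the median.
    (quantile_q / median = exp(z_q * sigma); mean / median = exp(sigma^2 / 2))"""
    sigma = np.log(tail_fold) / norm.ppf(q)
    return np.exp(0.5 * sigma ** 2)

for fold in (25, 50):
    print(f"{fold}-fold spread to the 95th percentile -> "
          f"mean/median ~ {mean_over_median(fold):.1f}")
```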

10.
Different people may use different strategies, or decision rules, when solving complex decision problems. We provide a new Bayesian procedure for drawing inferences about the nature and number of decision rules present in a population, and use it to analyze the behaviors of laboratory subjects confronted with a difficult dynamic stochastic decision problem. Subjects practiced before playing for money. Based on money round decisions, our procedure classifies subjects into three types, which we label “Near Rational,” “Fatalist,” and “Confused.” There is clear evidence of continuity in subjects' behaviors between the practice and money rounds: types who performed best in practice also tended to perform best when playing for money. However, the agreement between practice and money play is far from perfect. The divergences appear to be well explained by a combination of type switching (due to learning and/or increased effort in money play) and errors in our probabilistic type assignments.

11.
Modeling and forecasting stock index futures volatility is an important way to uncover the dynamics of that volatility and the associated market risk. This paper builds four groups of HAR models based on jumps, good and bad volatility, and signed jumps, and proposes single-stage bias-corrected HARQ-type models and multi-stage bias-corrected HARQF-type models; it uses them to empirically characterize the volatility dynamics of stock index futures and evaluates the models with the MCS test. The HAR specifications consider two realized-volatility decompositions: continuous versus jump volatility, and good versus bad volatility. To reduce estimation bias, the optimal sampling frequency is chosen by minimizing MSE, jumps are identified with an ADS detection method corrected by the realized kernel, and the good/bad volatilities and signed jumps are corrected with realized kernel estimates. The empirical study on CSI 300 stock index futures shows that continuous volatility contributes more than jump volatility to future realized volatility; good and bad volatility have asymmetric impacts, while signed jumps have a negative impact on future volatility; the good/bad decomposition outperforms the continuous/jump decomposition; the median realized quarticity significantly improves both in-sample and out-of-sample forecasting of HAR-type models; in contrast to the in-sample results, the single-stage bias-corrected HARQ-type models outperform the multi-stage HARQF-type models out of sample; and the MCS test identifies the HARQ-RV-SJ model as the best performer. These conclusions and their implications matter for understanding stock index futures volatility dynamics and market risk.
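For reference, the common baseline on which these specifications build (a sketch of the plain HAR-RV regression, not the paper's bias-corrected HARQ/HARQF variants) projects next-day realized variance on daily, weekly, and monthly averages of past realized variance. The horizons (1, 5, 22 days) are the usual conventions.

```python
import numpy as np

def har_rv_forecast(rv):
    """Baseline HAR-RV regression: one-day-ahead realized variance on daily,
    weekly (5-day) and monthly (22-day) average realized variances.

    rv : 1-D array of daily realized variances, oldest first.
    Returns OLS coefficients (const, daily, weekly, monthly) and fitted values."""
    rv = np.asarray(rv, dtype=float)
    rv_d = rv[21:-1]                                          # RV_t
    rv_w = np.array([rv[t - 4:t + 1].mean() for t in range(21, len(rv) - 1)])
    rv_m = np.array([rv[t - 21:t + 1].mean() for t in range(21, len(rv) - 1)])
    y = rv[22:]                                               # RV_{t+1}

    X = np.column_stack([np.ones_like(rv_d), rv_d, rv_w, rv_m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, X @ beta
```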

12.
The availability of high frequency financial data has generated a series of estimators based on intra‐day data, improving the quality of large areas of financial econometrics. However, estimating the standard error of these estimators is often challenging. The root of the problem is that traditionally, standard errors rely on estimating a theoretically derived asymptotic variance, and often this asymptotic variance involves substantially more complex quantities than the original parameter to be estimated. Standard errors are important: they are used to assess the precision of estimators in the form of confidence intervals, to create “feasible statistics” for testing, to build forecasting models based on, say, daily estimates, and also to optimize the tuning parameters. The contribution of this paper is to provide an alternative and general solution to this problem, which we call Observed Asymptotic Variance. It is a general nonparametric method for assessing asymptotic variance (AVAR). It provides consistent estimators of AVAR for a broad class of integrated parameters Θ = ∫ θ_t dt, where the spot parameter process θ can be a general semimartingale, with continuous and jump components. The observed AVAR is implemented with the help of a two‐scales method. Its construction works well in the presence of microstructure noise, and when the observation times are irregular or asynchronous in the multivariate case. The methodology is valid for a wide variety of estimators, including the standard ones for variance and covariance, and also for more complex estimators, such as, of leverage effects, high frequency betas, and semivariance.

13.
Specification tests for the volatility function in a high-frequency data environment are easily distorted by jumps in the data. To address this, this paper uses the nearest neighbor truncation approach and a partial-sum process built from residuals to construct specification tests for the parametric form of the volatility function, and analyzes the tests' approximate limiting properties under the null hypothesis together with a bootstrap testing procedure. The proposed specification tests asymptotically eliminate the influence of the drift and jump components of a jump-diffusion process. Monte Carlo simulations show that the tests are robust to jumps and have reasonable size and power. An empirical analysis of the Shanghai Interbank Offered Rate (Shibor) shows that the proposed jump-robust volatility function tests discriminate better than their non-jump-robust counterparts.
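The test statistic itself (the residual partial-sum process and its bootstrap) is not reproduced here, but the nearest-neighbor-truncation volatility estimators it builds on are standard; as a minimal sketch, the code below implements the MinRV and MedRV estimators, which remain consistent for integrated variance in the presence of jumps. Function names are illustrative.

```python
import numpy as np

def min_rv(returns):
    """MinRV nearest-neighbor-truncation estimator of integrated variance:
    scaled sum of squared minima of adjacent absolute returns (jump-robust)."""
    r = np.abs(np.asarray(returns, dtype=float))
    n = len(r)
    m = np.minimum(r[:-1], r[1:])
    return (np.pi / (np.pi - 2.0)) * (n / (n - 1.0)) * np.sum(m ** 2)

def med_rv(returns):
    """MedRV estimator: scaled sum of squared medians of three adjacent
    absolute returns (also jump-robust)."""
    r = np.abs(np.asarray(returns, dtype=float))
    n = len(r)
    med = np.median(np.column_stack([r[:-2], r[1:-1], r[2:]]), axis=1)
    scale = np.pi / (6.0 - 4.0 * np.sqrt(3.0) + np.pi)
    return scale * (n / (n - 2.0)) * np.sum(med ** 2)
```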

14.
We propose a new methodology for structural estimation of infinite horizon dynamic discrete choice models. We combine the dynamic programming (DP) solution algorithm with the Bayesian Markov chain Monte Carlo algorithm into a single algorithm that solves the DP problem and estimates the parameters simultaneously. As a result, the computational burden of estimating a dynamic model becomes comparable to that of a static model. Another feature of our algorithm is that even though the number of grid points on the state variable is small per solution‐estimation iteration, the number of effective grid points increases with the number of estimation iterations. This is how we help ease the “curse of dimensionality.” We simulate and estimate several versions of a simple model of entry and exit to illustrate our methodology. We also prove that under standard conditions, the parameters converge in probability to the true posterior distribution, regardless of the starting values.

15.
This paper proposes a new method for identifying social interactions using conditional variance restrictions. The method provides a consistent estimate of the social multiplier when social interactions take the “linear‐in‐means” form (Manski (1993)). When social interactions are not of the linear‐in‐means form, the estimator, under certain conditions, continues to form the basis of a consistent test of the no social interactions null with correct large sample size. The methods are illustrated using data from the Tennessee class size reduction experiment Project STAR. The application suggests that differences in peer group quality were an important source of individual‐level variation in the academic achievement of Project STAR kindergarten students.

16.
Using 5-minute high-frequency data, this paper studies high-frequency jumps and cojumps in stock and bond market asset prices and their relation to regularly scheduled macroeconomic announcements. The results show that both markets exhibit significant jumps and cojumps; the probability of a jump is far higher in the bond market than in the stock market, while jump sizes are far larger in the stock market than in the bond market. Unexpected macroeconomic news significantly affects not only the jump sizes in both markets but also their cojumps. Regularly released indicators such as GDP, fixed-asset investment, the consumer price index, the purchasing managers' index, the producer price index, the trade balance, and industrial value added significantly affect stock-bond cojumps.

17.
We propose an approximation method for analyzing Ericson and Pakes (1995)‐style dynamic models of imperfect competition. We define a new equilibrium concept that we call oblivious equilibrium, in which each firm is assumed to make decisions based only on its own state and knowledge of the long‐run average industry state, but where firms ignore current information about competitors' states. The great advantage of oblivious equilibria is that they are much easier to compute than are Markov perfect equilibria. Moreover, we show that, as the market becomes large, if the equilibrium distribution of firm states obeys a certain “light‐tail” condition, then oblivious equilibria closely approximate Markov perfect equilibria. This theorem justifies using oblivious equilibria to analyze Markov perfect industry dynamics in Ericson and Pakes (1995)‐style models with many firms.

18.
We study inference in structural models with a jump in the conditional density, where location and size of the jump are described by regression curves. Two prominent examples are auction models, where the bid density jumps from zero to a positive value at the lowest cost, and equilibrium job‐search models, where the wage density jumps from one positive level to another at the reservation wage. General inference in such models remained a long‐standing, unresolved problem, primarily due to nonregularities and computational difficulties caused by discontinuous likelihood functions. This paper develops likelihood‐based estimation and inference methods for these models, focusing on optimal (Bayes) and maximum likelihood procedures. We derive convergence rates and distribution theory, and develop Bayes and Wald inference. We show that Bayes estimators and confidence intervals are attractive both theoretically and computationally, and that Bayes confidence intervals, based on posterior quantiles, provide a valid large sample inference method.

19.
Jump Regressions     
We develop econometric tools for studying jump dependence of two processes from high‐frequency observations on a fixed time interval. In this context, only segments of data around a few outlying observations are informative for the inference. We derive an asymptotically valid test for stability of a linear jump relation over regions of the jump size domain. The test has power against general forms of nonlinearity in the jump dependence as well as temporal instabilities. We further propose an efficient estimator for the linear jump regression model that is formed by optimally weighting the detected jumps with weights based on the diffusive volatility around the jump times. We derive the asymptotic limit of the estimator, a semiparametric lower efficiency bound for the linear jump regression, and show that our estimator attains the latter. The analysis covers both deterministic and random jump arrivals. In an empirical application, we use the developed inference techniques to test the temporal stability of market jump betas.
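A hedged sketch of the basic idea (a simplified stand-in, not the paper's efficient estimator or its optimal weights): flag market jumps with a local bipower-variation threshold, then estimate the jump beta by regressing the asset's contemporaneous returns on the flagged market jump returns, weighting each jump by the inverse of the local diffusive variance. The threshold multiple k and window length are illustrative assumptions.

```python
import numpy as np

def jump_beta(market_ret, asset_ret, k=4.0, window=50):
    """Simplified linear jump regression: threshold-detected market jumps,
    weighted least squares through the origin with inverse local-variance weights."""
    x = np.asarray(market_ret, dtype=float)
    y = np.asarray(asset_ret, dtype=float)
    n = len(x)
    mu1 = np.sqrt(2.0 / np.pi)

    num = den = 0.0
    for t in range(window, n - window):
        local = np.delete(x[t - window:t + window], window)   # drop candidate
        bv = np.sum(np.abs(local[1:]) * np.abs(local[:-1])) / (mu1 ** 2 * (len(local) - 1))
        if abs(x[t]) > k * np.sqrt(bv):           # flag a market jump
            w = 1.0 / max(bv, 1e-12)              # inverse local-variance weight
            num += w * x[t] * y[t]
            den += w * x[t] ** 2
    return num / den if den > 0 else np.nan
```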

20.
A challenge for large‐scale environmental health investigations such as the National Children's Study (NCS) is characterizing exposures to multiple, co‐occurring chemical agents with varying spatiotemporal concentrations and consequences modulated by biochemical, physiological, behavioral, socioeconomic, and environmental factors. Such investigations can benefit from systematic retrieval, analysis, and integration of diverse extant information on both contaminant patterns and exposure‐relevant factors. This requires development, evaluation, and deployment of informatics methods that support flexible access and analysis of multiattribute data across multiple spatiotemporal scales. A new “Tiered Exposure Ranking” (TiER) framework, developed to support various aspects of risk‐relevant exposure characterization, is described here, with examples demonstrating its application to the NCS. TiER utilizes advances in informatics computational methods, extant database content and availability, and integrative environmental/exposure/biological modeling to support both “discovery‐driven” and “hypothesis‐driven” analyses. “Tier 1” applications focus on “exposomic” pattern recognition for extracting information from multidimensional data sets, whereas second and higher tier applications utilize mechanistic models to develop risk‐relevant exposure metrics for populations and individuals. In this article, “tier 1” applications of TiER explore identification of potentially causative associations among risk factors, for prioritizing further studies, by considering publicly available demographic/socioeconomic, behavioral, and environmental data in relation to two health endpoints (preterm birth and low birth weight). A “tier 2” application develops estimates of pollutant mixture inhalation exposure indices for NCS counties, formulated to support risk characterization for these endpoints. Applications of TiER demonstrate the feasibility of developing risk‐relevant exposure characterizations for pollutants using extant environmental and demographic/socioeconomic data.
