Similar Documents
19 similar documents found (search time: 171 ms)
1.
A Comparative Study of the SCD and ACD Models
耿克红  张世英 《管理学报》2008,5(1):44-48,117
Focusing on the ACD and SCD models that have emerged in recent years for studying ultra-high-frequency financial market series, this paper first explores the theoretical relationships among the ACD, SCD, and ARMA models, showing that both duration models can be rewritten in ARMA form and are therefore closely related. It then empirically compares how well the autocorrelation functions of the two models capture the sample autocorrelations of real data, and uses a simulation-based likelihood ratio test to compare their goodness of fit to duration series. The results indicate that the SCD model outperforms the ACD model in fitting ultra-high-frequency duration data from financial markets.
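The ACD recursion mentioned in this abstract is easy to make concrete. Below is a minimal, hypothetical sketch (not code from the paper; all parameter values are invented) that simulates an ACD(1,1) duration process, in which the conditional expected duration follows ψ_i = ω + αx_{i-1} + βψ_{i-1} and the observed duration is x_i = ψ_i·ε_i with unit-exponential innovations:

```python
# Hypothetical ACD(1,1) simulation sketch; omega/alpha/beta are illustrative,
# not estimates from the paper.
import random

def simulate_acd(n, omega=0.1, alpha=0.1, beta=0.8, seed=42):
    """Simulate n durations from an ACD(1,1) model with Exp(1) innovations."""
    rng = random.Random(seed)
    psi = omega / (1.0 - alpha - beta)  # start at the unconditional mean
    x = psi
    out = []
    for _ in range(n):
        psi = omega + alpha * x + beta * psi  # conditional expected duration
        x = psi * rng.expovariate(1.0)        # duration = psi * Exp(1) draw
        out.append(x)
    return out

durations = simulate_acd(50_000)
mean_duration = sum(durations) / len(durations)
# The unconditional mean is omega / (1 - alpha - beta) = 1.0 here.
```

Substituting u_i = x_i − ψ_i (a martingale difference) into the recursion gives x_i = ω + (α+β)x_{i-1} + u_i − βu_{i-1}, which is the ARMA(1,1) representation the abstract alludes to.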

2.
曹广喜 《管理科学》2013,26(1):89-100
The dynamic relationship between stock and foreign exchange markets is of great reference value for macro-level financial policy making and micro-level investment decisions. To account for the long-memory feature of financial time series, this paper extends Primiceri's time-varying VAR (vector autoregression) model into a long-memory dynamic VAR model. Using daily price data from August 1, 2005 to October 20, 2011 on the RMB/USD exchange rate, the Shanghai Composite Index, and the US S&P 500 index, the model is applied to analyze the dynamic shock relationships between the Chinese foreign exchange market and the Chinese and US stock markets. The empirical results show that the RMB exchange rate and Chinese stock return series exhibit long memory, while US stock returns exhibit anti-persistence; there is a one-way shock from RMB exchange rate volatility to Chinese and US stock market volatility, with the shock lasting about 7 days and the effects in the first 3 days showing some time variation that takes the concrete form of structural breaks; and the time variation of the exchange rate's shock on stock markets is related to exchange rate regime reform policies in the short run and to drastic changes in the financial environment in the long run.

3.
A Review of Duration-Series Models for Financial Markets Based on Ultra-High-Frequency Data
Since statistical modeling of ultra-high-frequency data can effectively remedy the shortcomings of traditional fixed-interval data modeling and helps reveal financial market microstructure, the study of ultra-high-frequency financial data has become a new field of financial econometrics in recent years. This paper surveys the development and main results of duration-series modeling and parameter estimation methods for financial markets under ultra-high-frequency data over the past decade, compares these duration models and their estimation methods, and points out the main open problems currently and prospectively facing this research area.

4.
This paper applies the FIAPARCH model, a long-memory conditional variance model, under normal, Student-t, and skewed-t distributions to study the volatility of New York gold futures time series. Comparison with other commonly used variance models via the AIC, the SC criterion, and log-likelihood values shows that the FIAPARCH-skt model provides the best fit. Finally, drawing on the model's own features, the clustering, asymmetry, and leverage effects of the gold futures series are characterized, yielding a deeper understanding of its volatility features.

5.
Based on semiparametric estimation methods, this paper studies the aggregation of long memory in stock market volatility from two directions. In the first, the index return series is converted into a volatility series, and the long memory of that volatility series after aggregation is examined; in the second, the return series is aggregated first, and the long memory of the aggregated series' volatility is then examined. The former concerns the aggregation behavior of the long-memory parameter within the volatility series; the latter concerns the effect of data frequency on the volatility long-memory parameter. Starting from the realities of the Chinese stock market and combining the two perspectives, the paper provides empirical evidence that the long memory of stock market volatility is invariant under aggregation, and also demonstrates the good performance of the semiparametric methods.
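One widely used semiparametric estimator of the long-memory parameter d is the GPH log-periodogram regression. The sketch below is only illustrative (the paper's exact estimator and bandwidth choice are not specified in the abstract, so the m = √n rule here is an assumption): it regresses log-periodogram ordinates on log(4·sin²(λ_j/2)) at low Fourier frequencies and returns minus the slope.

```python
# Hypothetical GPH log-periodogram regression sketch; bandwidth m = sqrt(n)
# is a common convention, not necessarily the paper's choice.
import cmath
import math
import random

def gph_d(x, m=None):
    """Estimate the long-memory parameter d of series x via GPH regression."""
    n = len(x)
    if m is None:
        m = int(n ** 0.5)
    mu = sum(x) / n
    ys, rs = [], []
    for j in range(1, m + 1):
        lam = 2.0 * math.pi * j / n
        dft = sum((x[t] - mu) * cmath.exp(-1j * lam * t) for t in range(n))
        per = abs(dft) ** 2 / (2.0 * math.pi * n)  # periodogram ordinate
        ys.append(math.log(per))
        rs.append(math.log(4.0 * math.sin(lam / 2.0) ** 2))
    rbar = sum(rs) / m
    ybar = sum(ys) / m
    slope = sum((r - rbar) * (y - ybar) for r, y in zip(rs, ys)) / \
            sum((r - rbar) ** 2 for r in rs)
    return -slope  # log I(lam_j) = c - d*log(4 sin^2(lam_j/2)) + error

rng = random.Random(7)
white_noise = [rng.gauss(0.0, 1.0) for _ in range(2048)]
d_hat = gph_d(white_noise)  # white noise has d = 0, so d_hat should be small
```

Applying the estimator to aggregated versus unaggregated volatility series is exactly the kind of comparison the abstract describes.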

6.
A Multivariate Long-Memory SV Model and Its Application to the Shanghai and Shenzhen Stock Markets
This paper models multivariate long-memory stochastic volatility, presents a corresponding spectral likelihood estimation method, and gives a testing procedure for fractional cointegration within the multivariate stochastic volatility framework. Finally, the proposed model and methods are examined empirically using data from Shanghai and Shenzhen.

7.
郝清民  杨军 《管理学报》2010,7(7):1091-1096
Addressing the problem that different methodologies for studying long memory in stock market time series lead to different conclusions, this paper adopts a step-by-step analysis. The results show that the long memory found in stock return series by classical methods weakens once conditional heteroskedasticity is controlled for, and essentially disappears once a Markov switching model is used to control for short-run dependence and structural change. This indicates that the long memory many scholars have found in stock return series arises mainly from short-run correlation and structural change in the series. Hence, for the Shenzhen and Shanghai markets, stock market volatility has some influence on apparent long memory, while structural change matters more for changes in stock returns.

8.
This paper extends the non-Gaussian OU stochastic volatility model using CGMY and GIG processes, building a non-Gaussian OU stochastic volatility model driven by a continuous superposition of Lévy processes, and gives the model's shot-noise representation and approximation. On this basis, to capture volatility dependence, the retrospective sampling method is extended to this model, and a Bayesian parameter inference method is designed for the Lévy-driven non-Gaussian OU stochastic volatility model. Finally, the different models and estimation methods are validated and compared using real financial market data. Both the theoretical and empirical results show that extending the non-Gaussian OU stochastic volatility model with CGMY and GIG processes markedly improves model performance and better captures the volatility dynamics of financial asset returns, and that the proposed Bayesian inference method is efficient and overcomes shortcomings of earlier studies. The empirical study also finds jumps in both the returns and volatility of the Shanghai Composite Index, as well as pronounced long memory in the volatility series.

9.
李锬  齐中英  雷莹 《中国管理科学》2007,15(Z1):268-274
This paper builds a simulation model of a commodity futures market based on traders' heterogeneous beliefs and, while controlling the influence of external information on the market, studies how the evolution of those heterogeneous beliefs affects market prices. Brenner's stochastic belief learning model is used to capture individual traders' learning and imitation behavior, remedying the inability of earlier models borrowed from evolutionary biology, such as genetic algorithms, to link macro-level price dynamics to individual behavior. Tests show that the simulated futures price series exhibits the stylized facts of real futures prices, namely fat-tailed and peaked distributions, volatility clustering, and long memory, which validates the model. The results indicate that external information is not the source of these stylized features of return series; rather, belief switching through individual learning and the mutual influence among traders are their fundamental cause.

10.
To account for the stylized facts of financial market conditional returns, namely skewed fat-tailed distributions, asymmetric volatility, and long memory, this paper uses models of the ARFIMA-FIGARCH-SKST type to measure dynamic stock market risk, and examines the reliability of the measurement models empirically via the LRT and DQR methods within formal backtesting. Several valuable empirical results are obtained: asymmetric-structure risk models with and without the long-memory constraint show no substantive difference in their ability to measure dynamic risk in the mainland Chinese Shanghai and Shenzhen markets; the ARFIMA-FIAPARCH-SKST model can measure dynamic stock market risk accurately; and the asymmetric-structure constraint, in particular, must not be abandoned when measuring extreme stock market risk.
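One standard likelihood-ratio test used in VaR backtesting is Kupiec's unconditional coverage test on the violation count; whether this is exactly the "LRT" variant used in the paper is an assumption, so the sketch below is only a reference implementation of the common form:

```python
# Kupiec unconditional coverage LR test for VaR violations.
# H0: the true violation probability equals the nominal level p.
import math

def kupiec_lr(n, x, p):
    """LR statistic for x violations in n observations at nominal level p.
    Asymptotically chi-square with 1 degree of freedom under H0."""
    if x == 0:
        return -2.0 * n * math.log(1.0 - p)
    if x == n:
        return -2.0 * n * math.log(p)
    phat = x / n
    ll0 = (n - x) * math.log(1.0 - p) + x * math.log(p)        # restricted
    ll1 = (n - x) * math.log(1.0 - phat) + x * math.log(phat)  # unrestricted
    return -2.0 * (ll0 - ll1)

# 3 violations in 250 trading days at the 1% level: the statistic is well
# below the 5% chi-square(1) critical value 3.84, so coverage is not rejected.
lr = kupiec_lr(250, 3, 0.01)
```

A model whose violation count fails this test at conventional levels would be judged an unreliable dynamic risk measure.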

11.
In this paper we define and estimate measures of labor market frictions using data on job durations. We compare different estimation methods and different types of data. We propose and apply an unconditional inference method that can be applied to aggregate duration data. It does not require wage data, it is invariant to the way in which wages are determined, and it allows workers to care about other job characteristics. The empirical analysis focuses on France, but we perform separate analyses for the United States of America, the United Kingdom, Germany, and the Netherlands. We quantify the monopsony power due to search frictions and we examine the policy effects of the minimum wage, unemployment benefits, and search frictions. (JEL: J63, J64)

12.
Kum-Khiong Yang 《Omega》1998,26(6):729-738
This research examines the performance of 13 dispatching rules for executing a resource-constrained project whose estimated activity durations may differ from the actual activity durations. The dispatching rules are tested in environments characterized by three factors, namely, the order strength of the precedence relationship, the level of resource availability, and the level of estimation errors in the activity durations. The results show that the project environment affects only the performance differences, not the grouping of the better dispatching rules. The greatest-number-of-successors, rank positional weight, greatest-cumulative-resource-requirement, and minimum-activity-slack dispatching rules consistently perform better than the other dispatching rules, unaffected by the accuracy of the estimated activity durations. This finding validates the results of many past studies in the deterministic project environment for choosing the right dispatching rule for projects both with and without well-estimated activity durations.

13.
It has been shown that bathroom-type water uses dominate personal exposure to water-borne contaminants in the home. Therefore, in assessing exposure of specific population groups to the contaminants in the water, understanding population water-use behavior for bathroom activities as a function of demographic characteristics is vital to realistic exposure estimates. In this article, shower and bath frequencies and durations are analyzed, presented, and compared for various demographic groups derived from analyses of the National Human Activities Pattern Survey (NHAPS) database and the Residential End Uses of Water Study (REUWS) database as well as from a review of current literature. Analysis showed that age and level of education significantly influenced shower and bath frequency and duration. The frequency of showering and bathing reported in NHAPS agreed reasonably well with previous studies; however, durations of these events were found to be significantly longer. Showering frequency reported in REUWS was slightly less than that reported for NHAPS; however, durations of showers reported in REUWS are consistent with other studies. After considering the strengths and weaknesses of each data set and comparing their results to previous studies, it is concluded that NHAPS provides more reliable frequency data, while REUWS provides more reliable duration data. The shower- and bath-use behavior parameters recommended in this article can aid modelers in appropriately specifying water-use behavior as a function of demographic group in order to conduct reasonable assessments of exposure to contaminants that enter the home via the water supply.

14.
Acute Exposure Guideline Level (AEGL) recommendations are developed for 10-minute, 30-minute, 1-hour, 4-hour, and 8-hour exposure durations and are designated for three levels of severity: AEGL-1 represents concentrations above which acute exposures may cause noticeable discomfort including irritation; AEGL-2 represents concentrations above which acute exposure may cause irreversible health effects or impaired ability to escape; and AEGL-3 represents concentrations above which exposure may cause life-threatening health effects or death. The default procedure for setting AEGL values across durations when applicable data are unavailable involves estimation based on Haber's rule, which has an underlying assumption that cumulative exposure is the determinant of toxicity. For acute exposure to trichloroethylene (TCE), however, experimental data indicate that momentary tissue concentration, and not the cumulative amount of exposure, is important. We employed an alternative approach to duration adjustments in which a physiologically-based pharmacokinetic (PBPK) model was used to predict the arterial blood concentrations [TCE(a)] associated with adverse outcomes appropriate for AEGL-1, -2, or -3-level effects. The PBPK model was then used to estimate the atmospheric concentration that produces equivalent [TCE(a)] at each of the AEGL-specific exposure durations. This approach yielded [TCE(a)] values of 4.89 mg/l for AEGL-1, 18.7 mg/l for AEGL-2, and 310 mg/l for AEGL-3. Duration adjustments based on equivalent target tissue doses should provide similar degrees of toxicity protection at different exposure durations.
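The default duration adjustment that this abstract contrasts with the PBPK approach is the generalized Haber's rule, Cⁿ·t = k: holding toxicity constant, the concentration at a new duration is C₂ = C₁·(t₁/t₂)^(1/n). The sketch below shows only that default rule (the exponent and concentration values are illustrative, not taken from the TCE assessment):

```python
# Generalized Haber's rule duration adjustment, C^n * t = constant.
# The exponent n and the example values are illustrative only.

def haber_adjust(c1, t1, t2, n=1.0):
    """Concentration at duration t2 giving the same C^n * t as (c1, t1)."""
    return c1 * (t1 / t2) ** (1.0 / n)

# e.g., a 100 ppm 10-minute value scaled to 30 minutes under n = 1
# becomes one third of the 10-minute concentration:
c_30min = haber_adjust(100.0, 10.0, 30.0)
```

The paper's point is that for TCE this cumulative-exposure assumption fails, so equivalent arterial blood concentrations from a PBPK model are used instead.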

15.
We consider nonparametric identification and estimation in a nonseparable model where a continuous regressor of interest is a known, deterministic, but kinked function of an observed assignment variable. We characterize a broad class of models in which a sharp “Regression Kink Design” (RKD or RK Design) identifies a readily interpretable treatment‐on‐the‐treated parameter (Florens, Heckman, Meghir, and Vytlacil (2008)). We also introduce a “fuzzy regression kink design” generalization that allows for omitted variables in the assignment rule, noncompliance, and certain types of measurement errors in the observed values of the assignment variable and the policy variable. Our identifying assumptions give rise to testable restrictions on the distributions of the assignment variable and predetermined covariates around the kink point, similar to the restrictions delivered by Lee (2008) for the regression discontinuity design. Using a kink in the unemployment benefit formula, we apply a fuzzy RKD to empirically estimate the effect of benefit rates on unemployment durations in Austria.

16.
We consider the stochastic, single‐machine earliness/tardiness problem (SET), with the sequence of processing of the jobs and their due‐dates as decisions and the objective of minimizing the sum of the expected earliness and tardiness costs over all the jobs. In a recent paper, Baker (2014) shows the optimality of the Shortest‐Variance‐First (SVF) rule under the following two assumptions: (a) The processing duration of each job follows a normal distribution. (b) The earliness and tardiness cost parameters are the same for all the jobs. In this study, we consider problem SET under assumption (b). We generalize Baker's result by establishing the optimality of the SVF rule for more general distributions of the processing durations and a more general objective function. Specifically, we show that the SVF rule is optimal under the assumption of dilation ordering of the processing durations. Since convex ordering implies dilation ordering (under finite means), the SVF sequence is also optimal under convex ordering of the processing durations. We also study the effect of variability of the processing durations of the jobs on the optimal cost. An application of problem SET in surgical scheduling is discussed.
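The SVF rule itself is simple to state in code: jobs are sequenced in nondecreasing order of processing-duration variance. The toy sketch below (job names and moments are invented, loosely echoing the surgical-scheduling application) illustrates the rule, not the paper's optimality proof:

```python
# Shortest-Variance-First (SVF) sequencing sketch; job data are made up.

def svf_sequence(jobs):
    """Return job names in Shortest-Variance-First order.
    `jobs` maps name -> (mean_duration, variance)."""
    return sorted(jobs, key=lambda name: jobs[name][1])

jobs = {
    "biopsy":   (30.0, 4.0),     # (mean minutes, variance of minutes)
    "bypass":   (240.0, 900.0),
    "cataract": (25.0, 1.0),
}
order = svf_sequence(jobs)  # ['cataract', 'biopsy', 'bypass']
```

The paper's contribution is showing that this ordering remains optimal well beyond normal durations, namely under dilation (and hence convex) ordering of the processing durations.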

17.
We study the estimation of (joint) moments of microstructure noise based on high frequency data. The estimation is conducted under a nonparametric setting, which allows the underlying price process to have jumps, the observation times to be irregularly spaced, and the noise to be dependent on the price process and to have diurnal features. Estimators of arbitrary orders of (joint) moments are provided, for which we establish consistency as well as central limit theorems. In particular, we provide estimators of autocovariances and autocorrelations of the noise. Simulation studies demonstrate excellent performance of our estimators in the presence of jumps, irregular observation times, and even rounding. Empirical studies reveal (moderate) positive autocorrelations of microstructure noise for the stocks tested.

18.
Screening methods are used for hazard identification. Assays for heritable mutations in mammals are used for the confirmation of short-term test results and for the quantification of the genetic risk. There are two main approaches in making genetic risk estimates. One of these, termed the direct method, expresses risk in terms of the expected frequency of genetic changes induced per unit dose. The other, referred to as the doubling dose method or the indirect method, expresses risk in relation to the observed incidence of genetic disorders now present in man. The indirect method uses experimental data only for the calculation of the doubling dose. The quality of the risk estimation depends on the assumption of persistence of the induced mutations and the ability to determine the current incidence of genetic diseases. The difficulties of improving the estimates of current incidences of genetic diseases or the persistence of the genes in the population led us to the development of an alternative method, the direct estimation of the genetic risk. The direct estimation uses experimental data for the induced frequency for dominant mutations in mice. For the verification of these quantifications one can use the data of Hiroshima and Nagasaki. According to the estimation with the direct method, one would expect less than 1 radiation-induced dominant cataract in 19,000 children with one or both parents exposed. The expected overall frequency of dominant mutations in the first generation would be 20-25, based on radiation-induced dominant cataract mutations. It is estimated that 10 times more recessive than dominant mutations are induced. The same approaches can be used to determine the impact of chemical mutagens.

19.
This paper uses optimal point estimation methods from statistical theory to estimate the VaR and CVaR of financial market risk, which both avoids the extensive simulation and parameter estimation work required by existing methods and improves estimation accuracy. Under the asset-normal model, three optimal estimators are provided for each of the two risk measures according to different estimation requirements: the uniformly minimum-variance unbiased estimator, the best linear unbiased estimator based on order statistics, and the best linear equivariant estimator based on order statistics. An empirical analysis is also provided.
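For reference, the quantities being estimated have closed forms under the asset-normal model: for a Normal(μ, σ) loss, VaR_α = μ + σ·z_α and CVaR_α = μ + σ·φ(z_α)/(1−α). The sketch below computes these targets only; it does not reproduce the paper's optimal estimators (UMVU or the order-statistic-based ones):

```python
# Closed-form VaR/CVaR of a normal loss distribution, as a reference for the
# asset-normal setting; this is not the paper's estimation procedure.
from statistics import NormalDist

def normal_var_cvar(mu, sigma, alpha=0.99):
    """VaR and CVaR of a Normal(mu, sigma) loss at confidence level alpha."""
    z = NormalDist().inv_cdf(alpha)  # standard normal quantile z_alpha
    var = mu + sigma * z
    cvar = mu + sigma * NormalDist().pdf(z) / (1.0 - alpha)  # tail mean
    return var, cvar

var99, cvar99 = normal_var_cvar(0.0, 1.0, 0.99)  # roughly 2.33 and 2.67
```

Replacing the known (μ, σ) with estimators of the kinds listed in the abstract is what distinguishes the three proposed risk estimates.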


Copyright©北京勤云科技发展有限公司  京ICP备09084417号