Found 20 similar documents (search time: 62 ms)
1.
2.
Building on a fractionally integrated SETAR model, this article characterizes the long-memory behavior of Chinese stock-market volatility under swings of different magnitudes. The results show that Chinese stock-market volatility exhibits long memory, and that large swings have a more persistent impact on the market than small ones. Moreover, compared with the relatively developed Hong Kong market (proxied by the Hang Seng Index), the Chinese market shows weaker long memory in large swings and stronger long memory in small swings, so the persistence of volatility shocks has characteristics of its own.
3.
4.
Addressing the question of long memory in the Chinese stock market, this paper compares various long-memory tests and then applies an improved method to test for long memory in the daily returns of China's stock markets. The results show that the daily return series of both the Shanghai and Shenzhen markets exhibit long memory, with the Shenzhen market showing a stronger degree of long memory than the Shanghai market.
5.
6.
Combining cointegration theory for fiscal revenue and expenditure with fractionally integrated time-series methods, this article re-examines the sustainability of China's public finances over 2009-2013. Based on the empirical results, it concludes: (1) China's monthly fiscal revenue and expenditure data are typical long-memory series, meaning that once revenue or expenditure is pushed away from its mean by an exogenous shock, the disturbance persists longer than it would for a unit-root or stationary series; (2) China's public finances are only weakly sustainable, indicating that the short-term risk of the post-2008 round of fiscal expansion remains controllable, but weak sustainability also implies declining medium- and long-term government bond financing capacity, so medium- and long-term fiscal risk remains.
7.
An analysis of the driving factors behind long memory in Shanghai Composite Index volatility based on high-frequency data. Cited by: 5 (self-citations: 1, citations by others: 4)
Using an optimal-sampling method for high-frequency data, an efficient realized-volatility estimate of Shanghai Composite Index volatility is obtained, and, after studying the properties of realized volatility, econometric models are used to examine its long-memory features. The HAR-RV model is found to capture the long memory of Shanghai index volatility more effectively than the FARIMA model, and its out-of-sample forecasts are far superior, suggesting that the long memory in Shanghai index volatility is spurious: the apparent long memory arises from the superposition of short-memory components generated by short-, medium- and long-horizon investors. Because the HAR-RV model combines realized volatilities at different time scales, it also provides deeper evidence of the heterogeneity of the Chinese stock market and of the leverage effect in volatility.
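The HAR-RV regression referred to in the abstract can be sketched in a few lines: next-day realized volatility is regressed on today's value and its 5-day and 22-day averages (the standard daily/weekly/monthly horizons). The window lengths, the plain OLS fit, and the toy AR(1)-in-logs data below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def har_rv_design(rv, w=5, m=22):
    """Build the HAR-RV regression: RV_{t+1} on daily RV_t, the weekly
    (w-day) mean, and the monthly (m-day) mean of realized volatility."""
    T = len(rv)
    rows, y = [], []
    for t in range(m - 1, T - 1):
        daily = rv[t]
        weekly = rv[t - w + 1 : t + 1].mean()
        monthly = rv[t - m + 1 : t + 1].mean()
        rows.append([1.0, daily, weekly, monthly])
        y.append(rv[t + 1])
    return np.array(rows), np.array(y)

# Toy persistent volatility series (AR(1) in logs) standing in for
# realized volatility computed from high-frequency data.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.95 * x[t - 1] + 0.1 * rng.standard_normal()
rv = np.exp(x)

X, y = har_rv_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # [const, beta_daily, beta_weekly, beta_monthly]
```

Out-of-sample comparison against a FARIMA fit, as in the paper, would require a fractional-integration library and is not shown here.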
8.
Using the daily return series of six style assets published by CITIC/S&P as the empirical sample, this article selects an skt-ARFIMA-HYGARCH model via four information criteria to study the fractal features of the series and to capture their dual long memory. The results show that all six style-asset return series exhibit significant dual long memory, and that an ARFIMA(1, d1, 1)-HYGARCH(1, d2, 0) model with skt-distributed innovations captures the actual distribution of the daily returns well; a modified Pearson goodness-of-fit test confirms that the skt distribution is a reasonable choice.
9.
A study of cointegration between the Chinese and international stock markets. Cited by: 1 (self-citations: 0, citations by others: 1)
The stock market is the most important capital market in a market economy. Using daily closing data for the Chinese and major international stock markets over 2005-2008, this paper applies unit-root and cointegration tests to examine whether cointegrating relationships exist between the Shanghai Composite Index and the Hang Seng China Enterprises (H-share) Index, the Hang Seng Index, the Dow Jones index and the NASDAQ index, and likewise between the Shenzhen Component Index and the same four indices, in order to assess whether China's stock market has become integrated with international markets.
10.
11.
Guglielmo Maria Caporale, Luis Gil-Alana & Alex Plastun, Journal of Statistical Computation and Simulation 2019, 89(10):1763-1779
This paper investigates persistence in financial time series at three different frequencies (daily, weekly and monthly). The analysis is carried out for various financial markets (stock markets, FOREX, commodity markets) over the period from 2000 to 2016 using two different long memory approaches (R/S analysis and fractional integration) for robustness purposes. The results indicate that persistence is higher at lower frequencies, for both returns and their volatility. This is true of the stock markets (both developed and emerging) and partially of the FOREX and commodity markets examined. Such evidence against the random walk behaviour implies predictability and is inconsistent with the Efficient Market Hypothesis (EMH), since abnormal profits can be made using trading strategies based on trend analysis.
12.
Journal of Statistical Computation and Simulation 2012, 82(11):1355-1370
This paper deals with the analysis of cointegration in a bivariate system. However, we depart from the classic concept of cointegration in two aspects. First, we permit fractional degrees of integration in both the parent series and in their linear combination. Second, instead of assuming that the pole or singularity in the spectrum takes places at the zero frequency, we consider the case where the singularity occurs at a frequency λ in the interval (0, π]. We use a procedure that follows the same lines as the two-step testing strategy of R.F. Engle, and C.W.J. Granger, [Cointegration and error correction model. Representation, estimation and testing, Econometrica 55 (1987), pp. 251–276]. Thus, we test first the order of integration in the individual series, which are specified in terms of the Gegenbauer polynomials. Then, if the two series share the same degree of integration at a given frequency, we test the null hypothesis of no cointegration against the alternative of fractional cyclical cointegration, by testing the order of integration on the estimated residuals from the cointegrating regression. Finite sample critical values are obtained, and the power properties of the test are examined. An empirical application is also carried out at the end of the article.
13.
Testing for long memory in Chinese stock-market returns and volatility. Cited by: 2 (self-citations: 0, citations by others: 2)
Using modified R/S and V/S analysis, the long memory of return and volatility series is compared across a broad sample: the two main market indices (the Shanghai Composite and the Shenzhen Component) and 20 individual stocks. The results show that the return series of the two indices exhibit long memory, stronger in Shenzhen than in Shanghai, while individual-stock returns generally do not. Volatility series, for indices and individual stocks alike, show significant long memory, and across the three volatility proxies long memory is strongest for modified log-squared returns, followed by absolute returns and then squared returns.
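For illustration, a minimal rescaled-range (R/S) estimator of the Hurst exponent might look like the sketch below. This is the classic R/S statistic, not the modified (Lo) R/S or V/S statistics the abstract uses; the dyadic window sizes and the white-noise sanity check are assumptions for the sketch.

```python
import numpy as np

def rs_hurst(x, min_n=8):
    """Classic rescaled-range (R/S) estimate of the Hurst exponent:
    slope of log(R/S) against log(n) over dyadic window sizes n."""
    x = np.asarray(x, dtype=float)
    ns, rs = [], []
    n = min_n
    while n <= len(x) // 2:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())   # cumulative deviations from the mean
            R = dev.max() - dev.min()       # range of cumulative deviations
            S = w.std()                     # window standard deviation
            if S > 0:
                vals.append(R / S)
        ns.append(n)
        rs.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope

rng = np.random.default_rng(1)
h_white = rs_hurst(rng.standard_normal(4096))
print(h_white)  # near 0.5 for white noise (classic R/S is slightly biased upward)
```

Lo's modified R/S replaces S with a long-run variance estimate that corrects for short-range dependence, which is the refinement the study relies on.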
14.
Fitting an ARFIMA-HYGARCH-t model to Chinese monthly inflation from January 1985 to December 2015, statistical tests show that both the level of inflation and its uncertainty exhibit "dual long memory". Given this behavior, VAR, ARFIMA-HYGARCH-M-t and ARFIMA-GJR-t models are used to test the relationship between inflation and inflation uncertainty, including its direction and magnitude. The results support the Friedman-Ball hypothesis, and positive inflation shocks generate a greater degree of uncertainty than negative shocks. Policy operations should therefore aim both to keep inflation stable and to take the long-term structure of policy horizons into account.
15.
Owing to geographic proximity and economic and cultural ties, the Shanghai, Hong Kong and Taiwan stock markets show correlated return volatility. Granger causality tests and Chi-plot diagnostics are used to study volatility transmission among the three markets. Both methods consistently show that volatility transmission between the Shanghai and Taiwan markets is bidirectional, although relatively weak, while between the Shanghai and Hong Kong markets there is only a significant one-way "spillover effect" from Hong Kong to Shanghai. Return volatility in all three markets displays a clear "leverage effect".
16.
Long-memory volatility of the Shanghai copper index with open interest introduced. Cited by: 1 (self-citations: 0, citations by others: 1)
Cointegration tests, an error-correction model, a vector autoregression, Granger causality tests and impulse-response functions establish the necessity of including the open-interest series when building the model. Using modified R/S analysis, ARFIMA, FIGARCH and ARFIMA-FIGARCH models of Shanghai copper index return volatility are built and applied to the return series r_t, the volatility series |r_t| and the residual series |ε_t|. The results show that the ARFIMA(0, d1, 0)-FIGARCH(1, d2, 1) model gives the best forecasts.
17.
Journal of Statistical Computation and Simulation 2012, 82(3):301-313
This article deals with the efficiency of fractional integration parameter estimators. This study was based on Monte Carlo experiments involving simulated stochastic processes with integration orders in the range (−1, 1). The evaluated estimation methods were classified into two groups: heuristic and semiparametric/maximum likelihood (ML). The study revealed that the comparative efficiency of the estimators, measured by the smaller mean squared error, depends on the stationarity/non-stationarity and persistence/anti-persistence of the series. The ML estimator was shown to be superior for stationary persistent processes; the wavelet spectrum-based estimators were better for non-stationary mean-reverting and invertible anti-persistent processes; the weighted periodogram-based estimator was shown to be superior for non-invertible anti-persistent processes.
18.
Since the seminal paper of Granger & Joyeux (1980), the concept of long memory has focused the attention of many statisticians and econometricians trying to model and measure the persistence of stationary processes. Many methods for estimating d, the long-range dependence parameter, have been suggested since the work of Hurst (1951). They can be summarized in three classes: heuristic methods, semi-parametric methods and maximum likelihood methods. In this paper, we verify by simulation the two main properties of the estimator d̂: consistency and asymptotic normality. Hence, it is very important for practitioners to compare the performance of the various classes of estimators. The results indicate that only the semi-parametric and maximum likelihood methods give good estimators. They also suggest that the AR component of the ARFIMA(1, d, 0) process has an important impact on the properties of the different estimators, and that the Whittle method is the best one, since it has the smallest mean squared error. We finally carry out an empirical application using the monthly seasonally adjusted US inflation series, in order to illustrate the usefulness of the different estimation methods in the context of real data.
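A small semi-parametric example in the spirit of the Whittle approach compared in the abstract is the local Whittle estimator of d (Robinson 1995), sketched below. The bandwidth choice m = n^0.65, the grid search, and the white-noise sanity check are illustrative assumptions, not choices from the paper.

```python
import numpy as np

def local_whittle(x, m=None):
    """Local Whittle estimate of the memory parameter d: minimize
    R(d) = log(mean(lam_j^{2d} I_j)) - 2d * mean(log lam_j)
    over the first m Fourier frequencies lam_j = 2*pi*j/n."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.65)  # bandwidth: an illustrative rule of thumb
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    fx = np.fft.fft(x - x.mean())
    I = (np.abs(fx[1:m + 1]) ** 2) / (2 * np.pi * n)  # periodogram

    def R(d):
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))

    grid = np.linspace(-0.49, 0.49, 197)  # stationary-invertible range
    return grid[np.argmin([R(d) for d in grid])]

rng = np.random.default_rng(3)
d_white = local_whittle(rng.standard_normal(8192))
print(d_white)  # near 0 for white noise
```

The parametric Whittle estimator favored in the paper instead fits the full ARFIMA spectral density over all frequencies; the local version trades efficiency for robustness to short-memory misspecification.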
19.
Rishideep Roy, Communications in Statistics: Theory and Methods 2014, 43(14):2859-2869
The lack of memory property is a characterizing property of the exponential distribution in the continuous domain. In the bivariate setup different generalizations of the same are available in terms of survival function. We extend this lack of memory property in terms of bivariate probability density function and examine its characterization properties. In this process the density version of the lack of memory property can be interlinked with conditionally specified exponential distribution, bivariate reciprocal coordinate subtangent of the density curve and a few other derived measures.
20.
Journal of Statistical Computation and Simulation 2012, 82(4):902-915
Approximate normality and unbiasedness of the maximum likelihood estimate (MLE) of the long-memory parameter H of a fractional Brownian motion hold reasonably well for sample sizes as small as 20 if the mean and scale parameter are known. We show in a Monte Carlo study that if the latter two parameters are unknown the bias and variance of the MLE of H both increase substantially. We also show that the bias can be reduced by using a parametric bootstrap procedure. In very large samples, maximum likelihood estimation becomes problematic because of the large dimension of the covariance matrix that must be inverted. To overcome this difficulty, we propose a maximum likelihood method based upon first differences of the data. These first differences form a short-memory process. We split the data into a number of contiguous blocks consisting of a relatively small number of observations. Computation of the likelihood function in a block then presents no computational problem. We form a pseudo-likelihood function consisting of the product of the likelihood functions in each of the blocks and provide a formula for the standard error of the resulting estimator of H. This formula is shown in a Monte Carlo study to provide a good approximation to the true standard error. The computation time required to obtain the estimate and its standard error from large data sets is an order of magnitude less than that required to obtain the widely used Whittle estimator. Application of the methodology is illustrated on two data sets.
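The blockwise idea described above can be sketched as follows: difference the fBm path to obtain fractional Gaussian noise, split it into contiguous blocks, and sum per-block Gaussian log-likelihoods with the scale profiled out. The block size, the grid search over H, and the simulated data are assumptions for this sketch; the paper's exact pseudo-likelihood and its standard-error formula are not reproduced.

```python
import numpy as np

def fgn_cov(n, H):
    """Autocovariance matrix of unit-variance fractional Gaussian noise:
    gamma(k) = 0.5 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})."""
    k = np.arange(n)
    g = 0.5 * ((k + 1.0) ** (2 * H) - 2.0 * k ** (2 * H)
               + np.abs(k - 1.0) ** (2 * H))
    return g[np.abs(np.subtract.outer(k, k))]

def block_pseudo_loglik(diffs, H, block=128):
    """Sum of per-block Gaussian log-likelihoods of fGn, with the
    scale parameter profiled out within each block."""
    ll = 0.0
    for s in range(0, len(diffs) - block + 1, block):
        w = diffs[s:s + block]
        R = fgn_cov(block, H)
        _, logdet = np.linalg.slogdet(R)
        q = w @ np.linalg.solve(R, w)
        ll += -0.5 * (block * np.log(q / block) + logdet)
    return ll

# Simulate fGn (the first differences of an fBm path) with H = 0.7.
rng = np.random.default_rng(2)
H_true, n = 0.7, 1024
L = np.linalg.cholesky(fgn_cov(n, H_true) + 1e-10 * np.eye(n))
diffs = L @ rng.standard_normal(n)

# Maximize the pseudo-likelihood over a coarse grid of H values.
grid = np.round(np.arange(0.55, 0.851, 0.05), 2)
H_hat = max(grid, key=lambda H: block_pseudo_loglik(diffs, H))
print(H_hat)
```

Each block only requires factorizing a block-by-block covariance matrix, which is what makes the approach tractable for very long series.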