Similar Articles (20 results)
1.
Summary.  We develop Markov chain Monte Carlo methodology for Bayesian inference for non-Gaussian Ornstein–Uhlenbeck stochastic volatility processes. The approach introduced involves expressing the unobserved stochastic volatility process in terms of a suitable marked Poisson process. We introduce two specific classes of Metropolis–Hastings algorithms which correspond to different ways of jointly parameterizing the marked point process and the model parameters. The performance of the methods is investigated for different types of simulated data. The approach is extended to consider the case where the volatility process is expressed as a superposition of Ornstein–Uhlenbeck processes. We apply our methodology to the US dollar–Deutschmark exchange rate.
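To make the decay-plus-jumps structure behind this representation concrete: a Lévy-driven Ornstein–Uhlenbeck variance process decays exponentially between the positive jumps of its driving subordinator. The sketch below is an illustrative simulation only (all parameter names are hypothetical, and Poisson jump arrivals are approximated by Bernoulli draws on a fine grid); it is not the paper's marked-Poisson-process algorithm.

```python
import math
import random

def simulate_ou_variance(sigma2_0, lam, jump_rate, jump_mean, T, n_steps, seed=0):
    """Illustrative Levy-driven OU variance path: exponential decay at rate
    lam between jumps, with positive exponential jumps whose Poisson
    arrivals are approximated by Bernoulli(jump_rate * dt) draws."""
    rng = random.Random(seed)
    dt = T / n_steps
    path = [sigma2_0]
    s2 = sigma2_0
    for _ in range(n_steps):
        s2 *= math.exp(-lam * dt)                   # mean-reverting decay
        if rng.random() < jump_rate * dt:           # approximate Poisson arrival
            s2 += rng.expovariate(1.0 / jump_mean)  # positive jump, mean jump_mean
        path.append(s2)
    return path
```

With `jump_rate = 0` the path reduces to pure exponential decay, which is a convenient sanity check; the variance stays non-negative by construction, mirroring the role of the subordinator.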

2.
This article considers the problem of testing for an explosive bubble in financial data in the presence of time-varying volatility. We propose a weighted least squares-based variant of the Phillips et al. test for explosive autoregressive behavior. We find that such an approach has appealing asymptotic power properties, with the potential to deliver substantially greater power than the established OLS-based approach for many volatility and bubble settings. Given that the OLS-based test can outperform the weighted least squares-based test for other volatility and bubble specifications, we also suggest a union of rejections procedure that succeeds in capturing the better power available from the two constituent tests for a given alternative. Our approach involves a nonparametric kernel-based volatility function estimator for computation of the weighted least squares-based statistic, together with the use of a wild bootstrap procedure applied jointly to both individual tests, delivering a powerful testing procedure that is asymptotically size-robust to a wide range of time-varying volatility specifications.
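As a rough illustration of the union-of-rejections idea described above: the procedure rejects when either constituent statistic exceeds its critical value, with both critical values inflated by a common scaling constant so that the joint procedure retains correct size. The sketch below uses hypothetical names, and in practice the scaling constant would be delivered by the wild bootstrap rather than fixed by hand.

```python
def union_of_rejections(stat_ols, stat_wls, cv_ols, cv_wls, psi=1.0):
    """Union-of-rejections rule: reject the no-bubble null if either the
    OLS-based or the weighted-least-squares-based statistic exceeds its
    critical value scaled by psi (psi > 1 restores overall size)."""
    return (stat_ols > psi * cv_ols) or (stat_wls > psi * cv_wls)
```

The design choice is that the union inherits the better of the two powers at the cost of a mild inflation of both critical values.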

3.
This paper illustrates a new approach to the statistical modeling of non-linear dependence and leptokurtosis in exchange rate data. The Student's t autoregressive model with dynamic heteroskedasticity (STAR) of Spanos (1992) is shown to provide a parsimonious and statistically adequate representation of the probabilistic information in exchange rate data. For the STAR model, volatility predictions are formed via a sequentially updated weighting scheme which uses all the past history of the series. The estimated STAR models are shown to statistically dominate alternative ARCH-type formulations and suggest that volatility predictions are not necessarily as large or as variable as other models indicate.

4.
Stochastic Volatility models have been considered as a real alternative to conditional variance models, assuming that volatility follows a process different from the observed one. However, issues like the unobservable nature of volatility and the creation of "rich" dynamics give rise to the use of non-linear transformations for the volatility process. The Box–Cox transformation and its Yeo–Johnson variation, by nesting both the linear and the non-linear case, can be considered as natural functions to specify non-linear Stochastic Volatility models. In this framework, a fully Bayesian approach is used for parameter and log-volatility estimation. The new models are then investigated for their within-sample and out-of-sample performance against alternative Stochastic Volatility models using real financial data series.
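For reference, both transformations nest the logarithmic (λ = 0) and linear (λ = 1) cases. A minimal sketch of the two transforms for scalar inputs (illustrative only, not the paper's estimation code):

```python
import math

def box_cox(x, lam):
    """Box-Cox transform of a positive value: log(x) at lam == 0,
    otherwise (x**lam - 1) / lam; lam = 1 is a shifted identity."""
    if lam == 0.0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def yeo_johnson(x, lam):
    """Yeo-Johnson variation: defined for all real x, reducing to a
    Box-Cox transform of (x + 1) on the non-negative half-line."""
    if x >= 0.0:
        return math.log1p(x) if lam == 0.0 else ((x + 1.0) ** lam - 1.0) / lam
    if lam == 2.0:
        return -math.log1p(-x)
    return -(((-x + 1.0) ** (2.0 - lam)) - 1.0) / (2.0 - lam)
```

Because the Yeo-Johnson variant handles negative arguments, it can be applied directly to log-volatility-type quantities without a positivity restriction.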

6.
In this paper, using an estimating function approach, a new optimal volatility estimator is introduced, and, based on the recursive form of the estimator, a data-driven generalized EWMA model for value-at-risk (VaR) forecasting is proposed. An appropriate data-driven model for volatility is identified by the relationship between absolute deviation and standard deviation for symmetric distributions with finite variance. It is shown that the asymptotic variance of the proposed volatility estimator is smaller than that of conventional estimators and is more appropriate for financial data with larger kurtosis. For IBM, Microsoft, and Apple stocks and the S&P 500 index, the proposed method is used to identify the model, estimate the volatility, and obtain minimum mean square error (MMSE) forecasts of VaR.
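In its simplest special case, the recursive volatility estimator alluded to above reduces to a RiskMetrics-style exponentially weighted average of squared returns, with the VaR forecast obtained by scaling the volatility by a distributional quantile. The sketch below uses that simple special case with a fixed λ = 0.94 and a Gaussian 95% quantile; the paper's estimator is data-driven, so these constants are placeholders.

```python
import math

def ewma_volatility(returns, lam=0.94, sigma2_0=None):
    """RiskMetrics-style EWMA variance recursion:
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2."""
    if sigma2_0 is None:
        sigma2_0 = sum(r * r for r in returns) / len(returns)
    sigma2 = sigma2_0
    for r in returns:
        sigma2 = lam * sigma2 + (1.0 - lam) * r * r
    return math.sqrt(sigma2)

def value_at_risk(sigma, quantile=1.645):
    """One-step-ahead VaR under a symmetric distribution: VaR = z * sigma
    (1.645 is the Gaussian 95% quantile, used here only as an example)."""
    return quantile * sigma
```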

7.
This paper develops a Bayesian procedure for estimation and forecasting of the volatility of multivariate time series. The foundation of this work is the matrix-variate dynamic linear model, for the volatility of which we adopt a multiplicative stochastic evolution, using Wishart and singular multivariate beta distributions. A diagonal matrix of discount factors is employed in order to discount the variances element by element, thereby allowing a flexible and pragmatic variance modelling approach. Diagnostic tests and sequential model monitoring are discussed in some detail. The proposed estimation theory is applied to a four-dimensional time series, comprising spot prices of aluminium, copper, lead and zinc of the London Metal Exchange. The empirical findings suggest that the proposed Bayesian procedure can be effectively applied to financial data, overcoming many of the disadvantages of existing volatility models.

8.
Lin Jinguan et al., Statistical Research (《统计研究》), 2018, 35(5): 99-109
The relationship between returns and volatility in stock markets plays an important role in financial securities research, and stochastic volatility models fit this relationship well. This paper applies quasi-likelihood and asymptotic quasi-likelihood methods to parameter estimation in stochastic volatility models; the asymptotic quasi-likelihood method avoids the bias caused by misspecification of the model structure and is therefore more robust. We carry out simulation studies of parameter estimation using the quasi-likelihood and asymptotic quasi-likelihood methods and compare them with two existing estimation methods; the results show that both proposed methods estimate the model parameters well. In the empirical study, the S&P 500 index over 2000-2015 is taken as the object of analysis, and the selected data exhibit the usual features of financial time series. This paper offers some insight into the relationship between stock returns and volatility and its applications in financial securities research.

9.
Estimating parameters in a stochastic volatility (SV) model is a challenging task. Among other estimation methods and approaches, efficient simulation methods based on importance sampling have been developed for the Monte Carlo maximum likelihood estimation of univariate SV models. This paper shows that importance sampling methods can be used in a general multivariate SV setting. The sampling methods are computationally efficient. To illustrate the versatility of this approach, three different multivariate stochastic volatility models are estimated for a standard data set. The empirical results are compared to those from earlier studies in the literature. Monte Carlo simulation experiments, based on parameter estimates from the standard data set, are used to show the effectiveness of the importance sampling methods.

10.
The Heston-STAR model is a new class of stochastic volatility models defined by generalizing the Heston model to allow the volatility of the volatility process, as well as the correlation between asset log-returns and variance shocks, to change across different regimes via smooth transition autoregressive (STAR) functions. The form of the STAR functions is very flexible, much more so than the functions introduced in Jones (J Econom 116:181–224, 2003), and provides the framework for a wide range of stochastic volatility models. A Bayesian inference approach using data augmentation techniques is used to estimate the parameters of our model. We also assess the goodness of fit of our Heston-STAR model. Our analysis of the S&P 500 and VIX indices demonstrates that the Heston-STAR model is more capable of dealing with large market fluctuations (such as in 2008) than the standard Heston model.
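A standard choice of smooth transition function in the STAR literature is the logistic one, which moves a parameter continuously between two regimes as a transition variable crosses a threshold. The sketch below illustrates that generic mechanism with hypothetical names; the Heston-STAR specification itself may use different transition variables and functional forms.

```python
import math

def logistic_transition(s, gamma, c):
    """Smooth transition G(s) = 1 / (1 + exp(-gamma * (s - c))): gamma sets
    the transition speed, c the threshold; G moves from 0 to 1 in s."""
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

def regime_parameter(s, low, high, gamma, c):
    """A parameter that shifts smoothly between two regimes:
    theta(s) = low + (high - low) * G(s)."""
    return low + (high - low) * logistic_transition(s, gamma, c)
```

At the threshold s = c the parameter sits exactly halfway between the two regime values, and large gamma recovers an abrupt threshold model.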

12.
Econometric Reviews, 2008, 27(1): 139-162
The quality of the asymptotic normality of realized volatility can be poor if sampling does not occur at very high frequencies. In this article we consider an alternative approximation to the finite sample distribution of realized volatility based on Edgeworth expansions. In particular, we show how confidence intervals for integrated volatility can be constructed using these Edgeworth expansions. The Monte Carlo study we conduct shows that the intervals based on the Edgeworth corrections have improved properties relative to the conventional intervals based on the normal approximation. Unlike the bootstrap, the Edgeworth approach is analytical and easily implemented, without requiring any resampling of one's data. A comparison between the bootstrap and the Edgeworth expansion shows that the bootstrap outperforms the Edgeworth corrected intervals. Thus, if we are willing to incur the additional computational cost involved in computing bootstrap intervals, these are preferred over the Edgeworth intervals. Nevertheless, if we are not willing to incur this additional cost, our results suggest that Edgeworth corrected intervals should replace the conventional intervals based on the first-order normal approximation.
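The conventional first-order interval that the Edgeworth corrections improve upon is built from realized variance together with a feasible estimate of its asymptotic variance. A sketch of that baseline, using the standard feasible standard error sqrt((2/3) * sum r_i^4); the Edgeworth-corrected interval itself is not reproduced here.

```python
import math

def realized_volatility(returns):
    """Realized variance: the sum of squared intraday returns."""
    return sum(r * r for r in returns)

def rv_normal_ci(returns, z=1.96):
    """Conventional first-order normal confidence interval for integrated
    variance: RV +/- z * sqrt((2/3) * sum r_i^4), the feasible standard
    error from standard realized-volatility asymptotics."""
    rv = realized_volatility(returns)
    se = math.sqrt((2.0 / 3.0) * sum(r ** 4 for r in returns))
    return rv - z * se, rv + z * se
```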

13.
We develop a discrete-time affine stochastic volatility model with time-varying conditional skewness (SVS). Importantly, we disentangle the dynamics of conditional volatility and conditional skewness in a coherent way. Our approach allows current asset returns to be asymmetric conditional on current factors and past information, which we term contemporaneous asymmetry. Conditional skewness is an explicit combination of the conditional leverage effect and contemporaneous asymmetry. We derive analytical formulas for various return moments that are used for generalized method of moments (GMM) estimation. Applying our approach to S&P 500 index daily returns and option data, we show that one- and two-factor SVS models provide a better fit for both the historical and the risk-neutral distribution of returns, compared to existing affine generalized autoregressive conditional heteroscedasticity (GARCH) and stochastic volatility with jumps (SVJ) models. Our results are not due to an overparameterization of the model: the one-factor SVS models have the same number of parameters as their one-factor GARCH competitors and fewer than the SVJ benchmark.

14.
This paper proposes an adaptive quasi-maximum likelihood estimation (QMLE) approach for forecasting the volatility of financial data with the generalized autoregressive conditional heteroscedasticity (GARCH) model. When the distribution of the data is unspecified or heavy-tailed, we work out an adaptive QMLE based on the data, using the scale parameter ηf to identify the discrepancy between the wrongly specified innovation density and the true innovation density. Under only a few assumptions, this adaptive approach is consistent and asymptotically normal. Moreover, it gains efficiency when the innovation error is heavy-tailed. Finally, simulation studies and an application show its advantages.
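For context, the Gaussian quasi-likelihood that adaptive QMLE refines is built on the GARCH(1,1) conditional-variance recursion. A minimal sketch of that baseline (function and parameter names are illustrative; the adaptive rescaling step with ηf is not reproduced here):

```python
import math

def garch11_filter(returns, omega, alpha, beta):
    """GARCH(1,1) conditional-variance recursion:
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1},
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = omega / (1.0 - alpha - beta)
    path = []
    for r in returns:
        path.append(sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2
    return path

def gaussian_quasi_loglik(returns, sigma2_path):
    """Gaussian quasi-log-likelihood: consistent for the GARCH parameters
    even when the true innovation density is not normal."""
    return -0.5 * sum(math.log(2.0 * math.pi * s2) + r * r / s2
                      for r, s2 in zip(returns, sigma2_path))
```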

15.
We devise a convenient way to estimate stochastic volatility and its volatility. Our method is applicable to both cross-sectional and time series data, and to both high-frequency and low-frequency data. Moreover, this method, when applied to cross-sectional data (a portfolio of risky assets), provides a great simplification: estimating the volatility of the portfolio does not require estimating a volatility matrix (the volatilities of the individual assets in the portfolio and their correlations). Furthermore, there is no need to generate volatility data.

17.
High-frequency trading is a common phenomenon in today's financial markets, with enormous amounts of high-frequency trading data generated by huge numbers of market participants every trading day. The availability of this information allows researchers to further examine the statistical properties of the efficient market hypothesis (EMH). The heterogeneous market hypothesis (HMH) is one of the important extensions of the EMH literature: it introduces nonlinear trading behaviors of heterogeneous market participants in place of the homogeneous-participant assumption underlying the EMH. In this study, we explore additional high-frequency volatility estimators for examining the HMH, namely the bipower, tripower, and quadpower variation integrated volatility estimates used in Heterogeneous AutoRegressive (HAR) models. The empirical findings show that these alternative multipower variation (MPV) estimators provide better estimation and out-of-sample forecast evaluations than the standard realized volatility; in other words, the MPV estimators better explain the HMH statistically. Finally, market risk determination is illustrated using a value-at-risk approach.
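To illustrate two of the ingredients named above: bipower variation rescales sums of products of adjacent absolute returns by μ1 = E|Z| for a standard normal Z, and a HAR regression explains today's realized measure with daily, weekly (5-day), and monthly (22-day) averages. A sketch under those standard conventions (function names are illustrative):

```python
import math

MU1 = math.sqrt(2.0 / math.pi)  # E|Z| for standard normal Z

def bipower_variation(returns):
    """Jump-robust bipower variation: mu1**-2 * sum |r_i| * |r_{i-1}|."""
    s = sum(abs(a) * abs(b) for a, b in zip(returns[1:], returns[:-1]))
    return s / (MU1 ** 2)

def har_regressors(rv, t):
    """HAR regressors for forecasting rv[t]: yesterday's value and the
    trailing 5-day and 22-day averages (requires t >= 22)."""
    daily = rv[t - 1]
    weekly = sum(rv[t - 5:t]) / 5.0
    monthly = sum(rv[t - 22:t]) / 22.0
    return daily, weekly, monthly
```

Tripower and quadpower variants follow the same pattern with products of three and four adjacent absolute returns raised to fractional powers.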

18.
Summary.  We propose a flexible generalized auto-regressive conditional heteroscedasticity type of model for the prediction of volatility in financial time series. The approach relies on the idea of using multivariate B-splines of lagged observations and volatilities. Estimation of such a B-spline basis expansion is constructed within the likelihood framework for non-Gaussian observations. As the dimension of the B-spline basis is large, i.e. many parameters, we use regularized and sparse model fitting with a boosting algorithm. Our method is computationally attractive and feasible for large dimensions. We demonstrate its strong predictive potential for financial volatility on simulated and real data, and also in comparison with other approaches, and we present some supporting asymptotic arguments.

19.
Financial time series data are typically observed to have heavy tails and time-varying volatility. Conditional heteroskedastic models to describe this behaviour have received considerable attention. In the present paper, our purpose is to examine some of these models in a general setting under some non-normal distributions. A likelihood-based approach to estimation is used. New comparisons of estimators and their efficiencies are discussed.

20.
A general approach for modeling the volatility process in continuous-time is based on the convolution of a kernel with a non-decreasing Lévy process, which is non-negative if the kernel is non-negative. Within the framework of Continuous-time Auto-Regressive Moving-Average (CARMA) processes, we derive a necessary condition for the kernel to be non-negative, and propose a numerical method for checking the non-negativity of a kernel function. These results can be lifted to solving a similar problem with another approach to modeling volatility via the COntinuous-time Generalized Auto-Regressive Conditional Heteroscedastic (COGARCH) processes.
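The numerical check mentioned above can be caricatured as evaluating the kernel on a fine grid: a grid evaluation can refute non-negativity but never prove it, so the sketch below (with illustrative test kernels, not actual CARMA kernels from the paper) is a necessary-condition screen only.

```python
import math

def nonnegative_on_grid(kernel, t_max, n=1000, tol=-1e-12):
    """Evaluate kernel(t) on an equally spaced grid over [0, t_max] and
    report whether every value is numerically non-negative. A grid check
    can only refute non-negativity, not establish it."""
    return all(kernel(i * t_max / n) >= tol for i in range(n + 1))
```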
