Similar Articles
20 similar articles found (search time: 0 ms)
1.
A bivariate stochastic volatility model is employed to measure the effect of intervention by the Bank of Japan (BOJ) on daily returns and volume in the USD/YEN foreign exchange market. Missing observations are accounted for, and a data-based Wishart prior for the precision matrix of the errors to the transition equation that is in line with the likelihood is suggested. Empirical results suggest there is strong conditional heteroskedasticity in the mean-corrected volume measure, as well as contemporaneous correlation in the errors to both the observation and transition equations. A threshold model is used for the BOJ reaction function, which is estimated jointly with the bivariate stochastic volatility model via Markov chain Monte Carlo. This accounts for endogeneity between volatility in the market and the BOJ reaction function, something that has hindered much previous empirical analysis in the literature on central bank intervention. The empirical results suggest there was a shift in behavior by the BOJ, with a movement away from a policy of market stabilization and toward a role of support for domestic monetary policy objectives. Throughout, we observe “leaning against the wind” behavior, something that is a feature of most previous empirical analysis of central bank intervention. A comparison with a bivariate EGARCH model suggests that the bivariate stochastic volatility model produces estimates that better capture spikes in in-sample volatility. This is important in improving estimates of a central bank reaction function because it is at these periods of high daily volatility that central banks more frequently intervene.
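A minimal univariate sketch of the kind of stochastic-volatility dynamics and threshold intervention rule described above (the paper's model is bivariate and estimated by MCMC; all parameter values and the cutoff here are illustrative assumptions):

```python
import math
import random

random.seed(0)

# Univariate sketch: log-volatility follows an AR(1),
#   h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t,
# and returns are r_t = exp(h_t / 2) * eps_t. Parameters are illustrative.
mu, phi, sigma_eta = -1.0, 0.95, 0.2
T = 1000
h = mu
returns, vols = [], []
for _ in range(T):
    h = mu + phi * (h - mu) + sigma_eta * random.gauss(0, 1)
    vol = math.exp(h / 2)
    vols.append(vol)
    returns.append(vol * random.gauss(0, 1))

# A toy threshold "reaction function": intervene when the absolute return
# exceeds a cutoff, mimicking "leaning against the wind" (cutoff assumed).
cutoff = 2 * math.exp(mu / 2)
interventions = sum(1 for r in returns if abs(r) > cutoff)
print(len(returns), interventions)
```

In the paper the threshold parameters are estimated jointly with the volatility process, which is what resolves the endogeneity between market volatility and the intervention rule.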

2.
Abstract

Although stochastic volatility and GARCH (generalized autoregressive conditional heteroscedasticity) models have successfully described the volatility dynamics of univariate asset returns, extending them to multivariate models with dynamic correlations has been difficult due to several major problems. First, there are too many parameters to estimate if the available data are only daily returns, which results in unstable estimates. One solution to this problem is to incorporate additional observations based on intraday asset returns, such as realized covariances. Second, since multivariate asset returns are not synchronously traded, we have to use the largest time intervals such that all asset returns are observed to compute the realized covariance matrices. However, this means that we fail to make full use of the available intraday information when some assets are traded less frequently. Third, it is not straightforward to guarantee that the estimated (and the realized) covariance matrices are positive definite.

Our contributions are the following: (1) we obtain stable parameter estimates for the dynamic correlation models using the realized measures, (2) we make full use of intraday information by using pairwise realized correlations, (3) the covariance matrices are guaranteed to be positive definite, (4) we avoid the arbitrariness of the ordering of asset returns, (5) we propose a flexible correlation structure model (e.g., setting some correlations to zero when necessary), and (6) we propose a parsimonious specification for the leverage effect. Our proposed models are applied to the daily returns of nine U.S. stocks with their realized volatilities and pairwise realized correlations and are shown to outperform the existing models with respect to portfolio performance.
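The pairwise realized measures mentioned above can be sketched as follows: a realized covariance and correlation computed from synchronized intraday returns of two assets. The two simulated assets share a common factor, so their realized correlation should come out positive (all numbers are illustrative, not from the paper's data):

```python
import math
import random

random.seed(1)

# Simulate synchronized intraday returns for two assets with a common factor.
n = 390  # e.g. one-minute returns over a trading day (assumed)
common = [random.gauss(0, 0.001) for _ in range(n)]
r1 = [c + random.gauss(0, 0.001) for c in common]
r2 = [c + random.gauss(0, 0.001) for c in common]

rcov = sum(a * b for a, b in zip(r1, r2))   # realized covariance
rv1 = sum(a * a for a in r1)                # realized variances
rv2 = sum(b * b for b in r2)
rcorr = rcov / math.sqrt(rv1 * rv2)         # realized correlation, in [-1, 1]
print(round(rcorr, 2))
```

Computing such correlations pair by pair lets each pair use its own synchronization grid, which is how the paper avoids discarding intraday data for less frequently traded assets.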

3.
This paper proposes and analyses two types of asymmetric multivariate stochastic volatility (SV) models, namely, (i) the SV with leverage (SV-L) model, which is based on the negative correlation between the innovations in the returns and volatility, and (ii) the SV with leverage and size effect (SV-LSE) model, which is based on the signs and magnitude of the returns. The paper derives the state space form for the logarithm of the squared returns, which follow the multivariate SV-L model, and develops estimation methods for the multivariate SV-L and SV-LSE models based on the Monte Carlo likelihood (MCL) approach. The empirical results show that the multivariate SV-LSE model fits the bivariate and trivariate returns of the S&P 500, the Nikkei 225, and the Hang Seng indexes with respect to AIC and BIC more accurately than does the multivariate SV-L model. Moreover, the empirical results suggest that the univariate models should be rejected in favor of their bivariate and trivariate counterparts.
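The leverage effect underlying the SV-L model can be sketched in one dimension: the return innovation at time t and the volatility innovation at time t+1 are negatively correlated, so a negative return tends to raise future volatility (rho and the other parameters below are illustrative assumptions):

```python
import math
import random

random.seed(2)

# Univariate SV-L sketch: corr(eps_{t-1}, eta_t) = rho < 0 is the leverage effect.
rho, mu, phi, sigma_eta = -0.5, -1.0, 0.95, 0.2
T = 20000
h, eps_prev = mu, 0.0
pairs = []
for _ in range(T):
    z = random.gauss(0, 1)
    eta = rho * eps_prev + math.sqrt(1 - rho ** 2) * z  # volatility innovation
    h = mu + phi * (h - mu) + sigma_eta * eta           # log-volatility update
    pairs.append((eps_prev, eta))                       # (eps_{t-1}, eta_t)
    eps_prev = random.gauss(0, 1)                       # next return innovation

# The sample correlation of the recorded pairs should be close to rho.
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
cov = sum((x - mx) * (y - my) for x, y in pairs) / n
sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs) / n)
sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs) / n)
corr = cov / (sx * sy)
print(round(corr, 2))
```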

5.
In this paper, efficient importance sampling (EIS) is used to perform a classical and Bayesian analysis of univariate and multivariate stochastic volatility (SV) models for financial return series. EIS provides a highly generic and very accurate procedure for the Monte Carlo (MC) evaluation of high-dimensional interdependent integrals. It can be used to carry out ML-estimation of SV models as well as simulation smoothing where the latent volatilities are sampled at once. Based on this EIS simulation smoother, a Bayesian Markov chain Monte Carlo (MCMC) posterior analysis of the parameters of SV models can be performed.
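A minimal importance-sampling sketch (not the full EIS algorithm): estimate E[exp(X)] for X ~ N(0, 1) by drawing from a shifted proposal N(1, 1) and reweighting. For this toy integrand, N(1, 1) happens to be the zero-variance importance density, which is exactly the kind of proposal EIS searches for numerically:

```python
import math
import random

random.seed(3)

# Importance sampling of E[exp(X)], X ~ N(0,1), with proposal N(1,1).
# Exact answer: exp(1/2).
N = 100000
total = 0.0
for _ in range(N):
    x = random.gauss(1.0, 1.0)                         # proposal draw
    w = math.exp(-0.5 * x * x + 0.5 * (x - 1.0) ** 2)  # target/proposal density ratio
    total += w * math.exp(x)                           # here w * exp(x) is constant
estimate = total / N
print(round(estimate, 4))  # exact value: exp(0.5) ≈ 1.6487
```

Because the proposal is optimal, every weighted draw equals exp(0.5) exactly; with a generic proposal the estimate would carry Monte Carlo noise, and EIS minimizes that noise within a parametric family of proposals.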

7.
In this paper Bayesian methods are applied to a stochastic volatility model using both the prices of the asset and the prices of options written on the asset. Posterior densities for all model parameters, latent volatilities and the market price of volatility risk are produced via a Markov Chain Monte Carlo (MCMC) sampling algorithm. Candidate draws for the unobserved volatilities are obtained in blocks by applying the Kalman filter and simulation smoother to a linearization of a nonlinear state space representation of the model. Crucially, information from both the spot and option prices affects the draws via the specification of a bivariate measurement equation, with implied Black-Scholes volatilities used to proxy observed option prices in the candidate model. Alternative models nested within the Heston (1993) framework are ranked via posterior odds ratios, as well as via fit, predictive and hedging performance. The method is illustrated using Australian News Corporation spot and option price data.
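The standard linearization behind such Kalman-filter candidate draws can be checked numerically: with r_t = exp(h_t / 2) * eps_t, taking logs of squared returns gives log(r_t^2) = h_t + log(eps_t^2), a linear observation equation whose non-Gaussian noise log(eps_t^2) follows a log chi-squared(1) law with mean -(gamma + log 2) ≈ -1.27:

```python
import math
import random

random.seed(4)

# Monte Carlo check of E[log(eps^2)] for eps ~ N(0,1); theory gives
# -(Euler gamma + log 2) ≈ -1.2704.
N = 200000
mean_log_eps2 = sum(math.log(random.gauss(0, 1) ** 2) for _ in range(N)) / N
print(round(mean_log_eps2, 2))  # theory: ≈ -1.27
```

This known mean offset is what lets a Gaussian state space approximation generate good block proposals for the latent volatilities.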

9.
We consider a set of data from 80 stations in the Venezuelan state of Guárico consisting of accumulated monthly rainfall in a time span of 16 years. The problem of modelling rainfall accumulated over fixed periods of time and recorded at meteorological stations at different sites is studied by using a model based on the assumption that the data follow a truncated and transformed multivariate normal distribution. The spatial correlation is modelled by using an exponentially decreasing correlation function and an interpolating surface for the means. Missing data and dry periods are handled within a Markov chain Monte Carlo framework using latent variables. We estimate the amount of rainfall as well as the probability of a dry period by using the predictive density of the data. We considered a model based on a full second-degree polynomial over the spatial co-ordinates as well as the first two Fourier harmonics to describe the variability during the year. Predictive inferences on the data show very realistic results, capturing the typical rainfall variability in time and space for that region. Important extensions of the model are also discussed.
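The exponentially decreasing spatial correlation function can be sketched directly: corr(s_i, s_j) = exp(-d_ij / range), with a few hypothetical station coordinates and an assumed range parameter (not values from the Guárico data):

```python
import math

# Illustrative station coordinates and range parameter.
stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 4.0)]
range_param = 2.0

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Correlation matrix: exponential decay in distance.
R = [[math.exp(-dist(si, sj) / range_param) for sj in stations] for si in stations]

# Diagonal is 1, the matrix is symmetric, and correlation decays with distance.
print(round(R[0][1], 3), round(R[0][3], 3))  # → 0.607 0.082
```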

10.
Summary.  When modelling multivariate financial data, the problem of structural learning is compounded by the fact that the covariance structure changes with time. Previous work has focused on modelling those changes by using multivariate stochastic volatility models. We present an alternative to these models that focuses instead on the latent graphical structure that is related to the precision matrix. We develop a graphical model for sequences of Gaussian random vectors when changes in the underlying graph occur at random times, and a new block of data is created with the addition or deletion of an edge. We show how a Bayesian hierarchical model incorporates both the uncertainty about that graph and the time variation thereof.
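The link between the precision matrix and the latent graph can be sketched in a few lines: a zero off-diagonal entry of the precision matrix K means the two Gaussian variables are conditionally independent given the rest, i.e. the graph has no edge between them (the matrix below is illustrative):

```python
# Illustrative 3x3 precision matrix; K[0][2] = 0 encodes the missing edge (0, 2).
K = [
    [2.0, -0.5, 0.0],
    [-0.5, 2.0, -0.3],
    [0.0, -0.3, 2.0],
]

# Recover the graph's edge set from the nonzero off-diagonal pattern.
edges = [(i, j) for i in range(3) for j in range(i + 1, 3) if K[i][j] != 0.0]
print(edges)  # → [(0, 1), (1, 2)]
```

In the paper it is this edge set, rather than the covariance values themselves, that changes at random times.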

11.
The paper considers a lognormal model for the survival times and obtains a Bayes solution by means of the Gibbs sampler algorithm when the priors for the parameters are vague. The formulation given in the paper is mainly focused on censored-data problems, though it applies equally well to complete data. For numerical illustration, we consider two real data sets on head and neck cancer patients treated with either radiotherapy or chemotherapy followed by radiotherapy. The paper not only compares the survival functions for the two therapies under the lognormal model but also provides a model compatibility study based on predictive simulation, so that the choice of the lognormal model can be justified for the two data sets. The ease of our analysis compared with an earlier approach is a further advantage.
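The lognormal survival function being compared across the two therapies is S(t) = 1 - Phi((log t - mu) / sigma); a short sketch with illustrative parameter values (not estimates from the head-and-neck cancer data):

```python
import math

# Illustrative lognormal parameters.
mu, sigma = 3.0, 1.0

def survival(t):
    """Lognormal survival S(t) = 1 - Phi((log t - mu) / sigma)."""
    z = (math.log(t) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))  # 1 - Phi(z) via erfc

# The median survival time is exp(mu), where S equals one half.
print(round(survival(math.exp(mu)), 2))  # → 0.5
```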

12.
We consider the competing risks set-up. In many practical situations, the conditional probability of the cause of failure given the failure time is of direct interest. We propose to model the competing risks by the overall hazard rate and the conditional probabilities rather than the cause-specific hazards. We adopt a Bayesian smoothing approach for both quantities of interest. Illustrations are given at the end.
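The reparameterization described above can be shown in arithmetic: each cause-specific hazard is the overall hazard times the conditional probability of that cause given failure at that time (numbers are illustrative):

```python
# Overall hazard and conditional cause probabilities at some time t (assumed).
overall_hazard = 0.10          # overall hazard rate
cond_prob = [0.7, 0.3]         # P(cause = k | failure at t), sums to one

# Cause-specific hazards recovered from the proposed parameterization.
cause_specific = [overall_hazard * p for p in cond_prob]
print([round(c, 2) for c in cause_specific])  # → [0.07, 0.03]
```

Summing the cause-specific hazards returns the overall hazard, which is why the two parameterizations carry the same information.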

13.
Bayesian analysis for a simple but widely applied dynamic programming model is obtained. The setting is the prototypal job-search model. The general case of wage and duration data, with potential censoring, is studied. The optimality condition implied by the dynamic programming setup is fully imposed. The posterior distribution reveals a “ridge” reflecting the characteristic nonstandard nature of the inference problem. Marginal distributions and moments are obtained in a canonical parameterization after a suitable approximation. The adequacy of the approximation is easily assessed. Simulation is applied to study alternative parameterizations and prior robustness and to facilitate prior elicitations. Finally, we illustrate the applicability of our methods by giving posterior distributions for the elasticities of unemployment durations and reemployment wages with respect to unemployment income. Our analysis is easy to implement and all computations are simple to perform.

14.
A stochastic epidemic model with several kinds of susceptible is used to analyse temporal disease outbreak data from a Bayesian perspective. Prior distributions are used to model uncertainty in the actual numbers of susceptibles initially present. The posterior distribution of the parameters of the model is explored via Markov chain Monte Carlo methods. The methods are illustrated using two datasets, and the results are compared where possible to results obtained by previous analyses.
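A generic chain-binomial sketch of an epidemic with two kinds of susceptible, each with its own per-infective escape probability; this is a standard discrete-time construction with assumed parameter values, not the paper's fitted model:

```python
import random

random.seed(7)

q = [0.95, 0.90]   # per-infective escape probability, by susceptible type (assumed)
S = [50, 30]       # initial susceptibles of each type (assumed)
I = 1              # initial infectives
total_infected = I
while I > 0:
    new_I = 0
    for k in range(2):
        p_inf = 1 - q[k] ** I   # prob. a type-k susceptible is infected this step
        cases = sum(1 for _ in range(S[k]) if random.random() < p_inf)
        S[k] -= cases
        new_I += cases
    total_infected += new_I
    I = new_I
print(total_infected)
```

In a Bayesian treatment the initial counts in `S` would themselves carry prior distributions, as the abstract describes.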

15.
We incorporate a random effect into a multivariate discrete proportional hazards model and propose an efficient semiparametric Bayesian estimation method. By introducing a prior process for the parameters of the baseline hazards, we consider a nonparametric estimation of the baseline hazards function. Using a state space representation, we derive a dynamic modeling of the baseline hazards function and propose an efficient block sampler for the Markov chain Monte Carlo method. A numerical example using kidney patient data is given.

16.
A stochastic volatility in mean model with correlated errors using the symmetrical class of scale mixtures of normal distributions is introduced in this article. The scale mixture of normal distributions is an attractive class of symmetric distributions that includes the normal, Student-t, slash and contaminated normal distributions as special cases, providing a robust alternative to estimation in stochastic volatility in mean models in the absence of normality. Using a Bayesian paradigm, an efficient method based on Markov chain Monte Carlo (MCMC) is developed for parameter estimation. The methods developed are applied to analyze daily stock return data from the São Paulo Stock, Mercantile & Futures Exchange index (IBOVESPA). The Bayesian predictive information criteria (BPIC) and the logarithm of the marginal likelihood are used as model selection criteria. The results reveal that the stochastic volatility in mean model with correlated errors and slash distribution provides a significant improvement in model fit for the IBOVESPA data over the usual normal model.
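The scale-mixture-of-normals representation the article exploits can be sketched for the Student-t case: dividing a standard normal by the square root of an independent Gamma(nu/2, scale 2/nu) mixing variable yields a Student-t draw with nu degrees of freedom (nu and N below are illustrative):

```python
import math
import random

random.seed(5)

# Student-t via scale mixture of normals: t_nu = Z / sqrt(W),
# Z ~ N(0,1), W ~ Gamma(nu/2, scale 2/nu) independent.
nu, N = 10.0, 200000
draws = []
for _ in range(N):
    w = random.gammavariate(nu / 2, 2 / nu)        # mixing variable, mean 1
    draws.append(random.gauss(0, 1) / math.sqrt(w))

sample_var = sum(d * d for d in draws) / N         # mean is 0 by symmetry
print(round(sample_var, 2))  # theory: nu / (nu - 2) = 1.25
```

The slash and contaminated normal cases differ only in the law of the mixing variable, which is why one MCMC scheme covers the whole class.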

17.
We present the censored regression model with the error term following the asymmetric exponential power distribution. We propose three Markov chain Monte Carlo (MCMC) algorithms: the first one uses the probability integral transformation; the second one uses a combination of the probability integral transformation and random walk draws; while the third one uses random walk draws. Using simulated data we compare the performance of the three MCMC algorithms. Then we compare the posterior means, or Bayes estimates, with maximum likelihood estimates. We estimate the stock option portion of executive compensation as an example of the empirical application.
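The probability integral transformation used by the first algorithm can be sketched with a simpler target: a Uniform(0, 1) draw pushed through an inverse CDF becomes a draw from that distribution (an Exponential(rate = 2) here for illustration, rather than the asymmetric exponential power law):

```python
import math
import random

random.seed(6)

# Inverse-CDF (probability integral transform) sampling from Exponential(rate).
rate, N = 2.0, 200000
draws = [-math.log(1.0 - random.random()) / rate for _ in range(N)]
sample_mean = sum(draws) / N
print(round(sample_mean, 2))  # theory: 1 / rate = 0.5
```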

18.
19.
Summary.  Traffic safety in the UK is one of the increasing number of areas where central government sets targets based on 'outcome-focused' performance indicators (PIs). Judgments about such PIs are often based solely on rankings of raw indicators and simple league tables dominate centrally published analyses. There is a considerable statistical literature examining health and education issues which has tended to use the generalized linear mixed model (GLMM) to address variability in the data when drawing inferences about relative performance from headline PIs. This methodology could obviously be applied in contexts such as traffic safety. However, when such models are applied to the fairly crude data sets that are currently available, the interval estimates generated, e.g. in respect of rankings, are often too broad to allow much real differentiation between the traffic safety performance of the units that are being considered. Such results sit uncomfortably with the ethos of 'performance management' and raise the question of whether the inference from such data sets about relative performance can be improved in some way. Motivated by consideration of a set of nine road safety performance indicators measured on English local authorities in the year 2000, the paper considers methods to strengthen the weak inference that is obtained from GLMMs of individual indicators by simultaneous, multivariate modelling of a range of related indicators. The correlation structure between indicators is used to reduce the uncertainty that is associated with rankings of any one of the individual indicators. The results demonstrate that credible intervals can be substantially narrowed by the use of the multivariate GLMM approach and that multivariate modelling of multiple PIs may therefore have considerable potential for introducing more robust and realistic assessments of differential performance in some contexts.

20.
Hou Xiaohui, Zhang Guoping. Statistical Research (《统计研究》), 2007, 24(11): 80-84
Abstract: Using Monte Carlo simulation, we define each simulation run by assuming that the data-generating mechanism is a translog stochastic frontier production function of the "10" model type, generate simulated samples from it, and then use a translog "00" model (the scaling-property model) to compute estimates of the parameters, in particular of the inefficiency term. We then assess whether these estimates are consistent with the "true" inefficiency terms of the original "10" model. We find that every correlation coefficient between the true inefficiency terms and the inefficiency estimates computed from the scaling-property model is negative; the estimated efficiency ranks are therefore inconsistent with the "true" efficiency ranks.
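The rank-consistency check in this abstract amounts to a rank correlation between true and estimated inefficiency; a sketch with illustrative vectors, deliberately reverse-ordered so the ranks disagree completely:

```python
# Illustrative "true" and estimated inefficiency terms (not the paper's data).
true_ineff = [0.1, 0.2, 0.3, 0.4, 0.5]
est_ineff = [0.5, 0.4, 0.3, 0.2, 0.1]   # perfectly reversed ordering

def ranks(xs):
    """Ranks 1..n of the entries of xs (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

# Spearman's rho via 1 - 6 * sum(d^2) / (n * (n^2 - 1)) (no ties).
n = len(true_ineff)
d2 = sum((a - b) ** 2 for a, b in zip(ranks(true_ineff), ranks(est_ineff)))
rho = 1 - 6 * d2 / (n * (n * n - 1))
print(rho)  # → -1.0
```

A negative rho of this kind is what the study reports: the scaling-property model's efficiency ranking contradicts the true one.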
