20 similar documents found; search took 15 ms
1.
A bivariate stochastic volatility model is employed to measure the effect of intervention by the Bank of Japan (BOJ) on daily returns and volume in the USD/YEN foreign exchange market. Missing observations are accounted for, and a data-based Wishart prior for the precision matrix of the errors to the transition equation that is in line with the likelihood is suggested. Empirical results suggest there is strong conditional heteroskedasticity in the mean-corrected volume measure, as well as contemporaneous correlation in the errors to both the observation and transition equations. A threshold model is used for the BOJ reaction function, which is estimated jointly with the bivariate stochastic volatility model via Markov chain Monte Carlo. This accounts for endogeneity between volatility in the market and the BOJ reaction function, something that has hindered much previous empirical analysis in the literature on central bank intervention. The empirical results suggest there was a shift in behavior by the BOJ, with a movement away from a policy of market stabilization and toward a role of support for domestic monetary policy objectives. Throughout, we observe “leaning against the wind” behavior, something that is a feature of most previous empirical analysis of central bank intervention. A comparison with a bivariate EGARCH model suggests that the bivariate stochastic volatility model produces estimates that better capture spikes in in-sample volatility. This is important in improving estimates of a central bank reaction function because it is at these periods of high daily volatility that central banks more frequently intervene.
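The state-space structure behind such a model can be illustrated with a toy simulation (this is not the authors' estimation code, and the parameter values are invented for illustration): log-volatility follows an AR(1) transition equation and the observed return is scaled by the exponential of half the log-volatility.

```python
import math
import random

def simulate_sv(n, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    """Simulate a univariate stochastic volatility process:
        h_t = mu + phi*(h_{t-1} - mu) + eta_t,  eta_t ~ N(0, sigma_eta^2)
        y_t = exp(h_t / 2) * eps_t,             eps_t ~ N(0, 1)
    """
    rng = random.Random(seed)
    h = mu  # start the latent log-volatility at its stationary mean
    hs, ys = [], []
    for _ in range(n):
        h = mu + phi * (h - mu) + rng.gauss(0.0, sigma_eta)
        y = math.exp(h / 2.0) * rng.gauss(0.0, 1.0)
        hs.append(h)
        ys.append(y)
    return hs, ys

hs, ys = simulate_sv(500)
```

In the bivariate version studied in the paper, returns and mean-corrected volume share correlated errors in both equations; the sketch above only shows the univariate skeleton that MCMC-based estimation targets.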
2.
3.
Discrete time modelling of disease incidence time series by using Markov chain Monte Carlo methods (cited by 1: 0 self-citations, 1 by others)
Alexander Morton Bärbel F. Finkenstädt 《Journal of the Royal Statistical Society. Series C, Applied statistics》2005,54(3):575-594
Summary. A stochastic discrete time version of the susceptible–infected–recovered model for infectious diseases is developed. Disease is transmitted within and between communities when infected and susceptible individuals interact. Markov chain Monte Carlo methods are used to make inference about these unobserved populations and the unknown parameters of interest. The algorithm is designed specifically for modelling time series of reported measles cases although it can be adapted for other infectious diseases with permanent immunity. The application to observed measles incidence series motivates extensions to incorporate age structure as well as spatial epidemic coupling between communities.
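A minimal discrete-time stochastic SIR of this flavour can be sketched as a chain-binomial recursion (an illustrative single-community sketch with made-up rates, not the paper's measles model, which adds reporting, age structure and spatial coupling):

```python
import math
import random

def chain_binomial_sir(S0, I0, beta, gamma, steps, seed=1):
    """Discrete-time stochastic SIR: each susceptible escapes infection in a
    time step with probability exp(-beta*I/N); each infective recovers with
    probability gamma. Returns the (S, I, R) path."""
    rng = random.Random(seed)
    N = S0 + I0
    S, I, R = S0, I0, 0
    path = [(S, I, R)]
    for _ in range(steps):
        p_inf = 1.0 - math.exp(-beta * I / N)
        new_inf = sum(1 for _ in range(S) if rng.random() < p_inf)
        new_rec = sum(1 for _ in range(I) if rng.random() < gamma)
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
        path.append((S, I, R))
    return path

path = chain_binomial_sir(S0=99, I0=1, beta=0.5, gamma=0.3, steps=50)
```

MCMC inference then treats the unobserved S and I populations as latent variables to be sampled jointly with beta and gamma.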
4.
C. P. Robert T. Rydén & D. M. Titterington 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2000,62(1):57-75
Hidden Markov models form an extension of mixture models which provides a flexible class of models exhibiting dependence and a possibly large degree of variability. We show how reversible jump Markov chain Monte Carlo techniques can be used to estimate the parameters as well as the number of components of a hidden Markov model in a Bayesian framework. We employ a mixture of zero-mean normal distributions as our main example and apply this model to three sets of data from finance, meteorology and geomagnetism.
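The likelihood evaluated inside such samplers comes from the forward recursion; for the paper's main example (zero-mean normal emissions with state-specific variances) it can be sketched for a fixed number of states as follows. This is a fixed-dimension sketch only; the reversible jump moves between different numbers of states, which is not shown here.

```python
import math
import random

def hmm_forward_loglik(ys, trans, sds):
    """Log-likelihood of ys under a hidden Markov model whose states emit
    zero-mean normals with standard deviations sds, via the scaled
    forward recursion."""
    k = len(sds)

    def dens(y, sd):
        return math.exp(-0.5 * (y / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    alpha = [1.0 / k] * k          # uniform initial state distribution
    loglik = 0.0
    for y in ys:
        # one-step prediction through the transition matrix, then update
        pred = [sum(alpha[i] * trans[i][j] for i in range(k)) for j in range(k)]
        upd = [pred[j] * dens(y, sds[j]) for j in range(k)]
        c = sum(upd)               # scaling constant = predictive density
        loglik += math.log(c)
        alpha = [u / c for u in upd]
    return loglik

rng = random.Random(2)
ys = [rng.gauss(0.0, 1.0) for _ in range(100)]
ll = hmm_forward_loglik(ys, trans=[[0.9, 0.1], [0.2, 0.8]], sds=[0.5, 2.0])
```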
5.
This paper proposes and analyses two types of asymmetric multivariate stochastic volatility (SV) models, namely, (i) the SV with leverage (SV-L) model, which is based on the negative correlation between the innovations in the returns and volatility, and (ii) the SV with leverage and size effect (SV-LSE) model, which is based on the signs and magnitude of the returns. The paper derives the state space form for the logarithm of the squared returns, which follow the multivariate SV-L model, and develops estimation methods for the multivariate SV-L and SV-LSE models based on the Monte Carlo likelihood (MCL) approach. The empirical results show that the multivariate SV-LSE model fits the bivariate and trivariate returns of the S&P 500, the Nikkei 225, and the Hang Seng indexes with respect to AIC and BIC more accurately than does the multivariate SV-L model. Moreover, the empirical results suggest that the univariate models should be rejected in favor of their bivariate and trivariate counterparts.
6.
7.
Håvard Rue 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2001,63(2):325-338
This paper demonstrates how Gaussian Markov random fields (conditional autoregressions) can be sampled quickly by using numerical techniques for sparse matrices. The algorithm is general and efficient, and expands easily to various forms for conditional simulation and evaluation of normalization constants. We demonstrate its use by constructing efficient block updates in Markov chain Monte Carlo algorithms for disease mapping.
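The core trick can be sketched in a few lines: factor the precision matrix Q = L Lᵀ and solve Lᵀx = z for standard normal z, which yields a draw with covariance Q⁻¹. The dense pure-Python version below only illustrates the algebra; the paper's point is that for a sparse Q the Cholesky factor is itself sparse, so the factorisation and solve are fast.

```python
import math
import random

def cholesky(Q):
    """Lower-triangular L with Q = L L^T (textbook dense Cholesky)."""
    n = len(Q)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(Q[i][i] - s)
            else:
                L[i][j] = (Q[i][j] - s) / L[j][j]
    return L

def sample_gmrf(Q, seed=3):
    """Draw x ~ N(0, Q^{-1}) by solving L^T x = z with z ~ N(0, I)."""
    n = len(Q)
    L = cholesky(Q)
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # back-substitution on L^T
        s = sum(L[j][i] * x[j] for j in range(i + 1, n))
        x[i] = (z[i] - s) / L[i][i]
    return x

# Tridiagonal precision of a first-order autoregression on a chain of 3 sites
Q = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
x = sample_gmrf(Q)
```

Since x = L⁻ᵀz, its covariance is L⁻ᵀL⁻¹ = (LLᵀ)⁻¹ = Q⁻¹, i.e. an exact GMRF draw.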
8.
Håvard Rue Ingelin Steinsland Sveinung Erland 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(4):877-892
Summary. Gaussian Markov random-field (GMRF) models are frequently used in a wide variety of applications. In most cases parts of the GMRF are observed through mutually independent data; hence the full conditional of the GMRF, a hidden GMRF (HGMRF), is of interest. We are concerned with the case where the likelihood is non-Gaussian, leading to non-Gaussian HGMRF models. Several researchers have constructed block sampling Markov chain Monte Carlo schemes based on approximations of the HGMRF by a GMRF, using a second-order expansion of the log-density at or near the mode. This is possible as the GMRF approximation can be sampled exactly with a known normalizing constant. The Markov property of the GMRF approximation yields computational efficiency. The main contribution in the paper is to go beyond the GMRF approximation and to construct a class of non-Gaussian approximations which adapt automatically to the particular HGMRF that is under study. The accuracy can be tuned by intuitive parameters to nearly any precision. These non-Gaussian approximations share the same computational complexity as those which are based on GMRFs and can be sampled exactly with computable normalizing constants. We apply our approximations in spatial disease mapping and model-based geostatistical models with different likelihoods, obtain procedures for block updating and construct Metropolized independence samplers.
9.
This article presents a Bayesian analysis of a multinomial probit model by building on previous work that specified priors on identified parameters. The main contribution of our article is to propose a prior on the covariance matrix of the latent utilities that permits elements of the inverse of the covariance matrix to be identically zero. This allows a parsimonious representation of the covariance matrix when such parsimony exists. The methodology is applied to both simulated and real data, and its ability to obtain more efficient estimators of the covariance matrix and regression coefficients is assessed using simulated data.
10.
Raquel Prado Mike West & Andrew D. Krystal 《Journal of the Royal Statistical Society. Series C, Applied statistics》2001,50(1):95-109
Multiple time series of scalp electrical potential activity are generated routinely in electroencephalographic (EEG) studies. Such recordings provide important non-invasive data about brain function in human neuropsychiatric disorders. Analyses of EEG traces aim to isolate characteristics of their spatiotemporal dynamics that may be useful in diagnosis, may improve understanding of the underlying neurophysiology, or may improve treatment through identifying predictors and indicators of clinical outcomes. We discuss the development and application of non-stationary time series models for multiple EEG series generated from individual subjects in a clinical neuropsychiatric setting. The subjects are depressed patients experiencing generalized tonic–clonic seizures elicited by electroconvulsive therapy (ECT) as antidepressant treatment. Two varieties of models—dynamic latent factor models and dynamic regression models—are introduced and studied. We discuss model motivation and form, and aspects of statistical analysis including parameter identifiability, posterior inference and implementation of these models via Markov chain Monte Carlo techniques. In an application to the analysis of a typical set of 19 EEG series recorded during an ECT seizure at different locations over a patient's scalp, these models reveal time-varying features across the series that are strongly related to the placement of the electrodes. We illustrate various model outputs, the exploration of such time-varying spatial structure and its relevance in the ECT study, and in basic EEG research in general.
11.
A. E. Brockwell N. H. Chan P. K. Lee 《Journal of the Royal Statistical Society. Series C, Applied statistics》2003,52(4):417-430
Summary. The development of time series models for traffic volume data constitutes an important step in constructing automated tools for the management of computing infrastructure resources. We analyse two traffic volume time series: one is the volume of hard disc activity, aggregated into half-hour periods, measured on a workstation, and the other is the volume of Internet requests made to a workstation. Both of these time series exhibit features that are typical of network traffic data, namely strong seasonal components and highly non-Gaussian distributions. For these time series, a particular class of non-linear state space models is proposed, and practical techniques for model fitting and forecasting are demonstrated.
12.
Bayesian Inference for Stochastic Epidemics in Populations with Random Social Structure (cited by 1: 0 self-citations, 1 by others)
A single-population Markovian stochastic epidemic model is defined so that the underlying social structure of the population is described by a Bernoulli random graph. The parameters of the model govern the rate of infection, the length of the infectious period, and the probability of social contact with another individual in the population. Markov chain Monte Carlo methods are developed to facilitate Bayesian inference for the parameters of both the epidemic model and underlying unknown social structure. The methods are applied in various examples of both illustrative and real-life data, with two different kinds of data structure considered.
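The generative side of such a model can be sketched directly: draw a Bernoulli (Erdős–Rényi) random graph for the social structure, then spread an epidemic along its edges. The paper's model is continuous-time Markovian; the sketch below uses a discrete-time approximation with invented rates purely for illustration.

```python
import random

def epidemic_on_bernoulli_graph(n, p_edge, p_trans, p_rec, steps, seed=4):
    """Draw a Bernoulli random graph on n vertices, then run a
    discrete-time SIR epidemic along its edges from one initial case."""
    rng = random.Random(seed)
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < p_edge}
    status = ['S'] * n
    status[0] = 'I'                     # a single initial infective
    for _ in range(steps):
        new_status = status[:]
        for i, j in edges:
            # infection can pass either way along an edge
            for a, b in ((i, j), (j, i)):
                if status[a] == 'I' and status[b] == 'S' and rng.random() < p_trans:
                    new_status[b] = 'I'
        for v in range(n):
            if status[v] == 'I' and rng.random() < p_rec:
                new_status[v] = 'R'
        status = new_status
    return status

final = epidemic_on_bernoulli_graph(n=30, p_edge=0.2, p_trans=0.3,
                                    p_rec=0.2, steps=20)
```

Bayesian inference then treats the unobserved edge set itself as a latent quantity sampled alongside the infection and contact parameters.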
13.
Margaret R. Donald Chris Strickland Clair L. Alston Rick Young Kerrie L. Mengersen 《Journal of applied statistics》2012,39(7):1455-1474
In this paper, we describe an analysis for data collected on a three-dimensional spatial lattice with treatments applied at the horizontal lattice points. Spatial correlation is accounted for using a conditional autoregressive model. Observations are defined as neighbours only if they are at the same depth. This allows the corresponding variance components to vary by depth. We use the Markov chain Monte Carlo method with block updating, together with Krylov subspace methods, for efficient estimation of the model. The method is applicable to both regular and irregular horizontal lattices and hence to data collected at any set of horizontal sites for a set of depths or heights, for example, water column or soil profile data. The model for the three-dimensional data is applied to agricultural trial data for five separate days taken roughly six months apart in order to determine possible relationships over time. The purpose of the trial is to determine a form of cropping that leads to less moist soils in the root zone and beyond. We estimate moisture for each date, depth and treatment accounting for spatial correlation and determine relationships of these and other parameters over time.
14.
A new Markov chain Monte Carlo method for the Bayesian analysis of finite mixture distributions with an unknown number of components is presented. The sampler is characterized by a state space consisting only of the number of components and the latent allocation variables. Its main advantage is that it can be used, with minimal changes, for mixtures of components from any parametric family, under the assumption that the component parameters can be integrated out of the model analytically. Artificial and real data sets are used to illustrate the method and mixtures of univariate and of multivariate normals are explicitly considered. The problem of label switching, when parameter inference is of interest, is addressed in a post-processing stage.
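The assumption that component parameters integrate out analytically is what keeps the state space down to allocations and the number of components. For one concrete case, a Normal component with known variance and a conjugate Normal prior on its mean, the marginal likelihood of a component's allocated data is available in closed form as a product of one-step-ahead predictive densities (a sketch of the conjugacy argument, not the paper's sampler):

```python
import math

def log_marginal_normal(ys, s2, m0, t2):
    """log p(ys) with y_i ~ N(mu, s2), mu ~ N(m0, t2), mu integrated out,
    computed as a product of sequential Normal predictive densities."""
    logp = 0.0
    m, v = m0, t2                      # current posterior mean/variance of mu
    for y in ys:
        pv = v + s2                    # predictive variance for the next point
        logp += -0.5 * math.log(2 * math.pi * pv) - 0.5 * (y - m) ** 2 / pv
        # conjugate Normal-Normal update of the posterior for mu
        m = m + (v / pv) * (y - m)
        v = v * s2 / pv
    return logp

lp = log_marginal_normal([0.1, -0.2, 0.3], s2=1.0, m0=0.0, t2=1.0)
```

With such closed forms available per component, a sampler only needs to propose changes to the allocations and to the number of components, scoring each proposal by these integrated likelihoods.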
15.
16.
A general framework for exact simulation of Markov random fields using the Propp–Wilson coupling from the past approach is proposed. Our emphasis is on situations lacking the monotonicity properties that have been exploited in previous studies. A critical aspect is the convergence time of the algorithm; this we study both theoretically and experimentally. Our main theoretical result in this direction says, roughly, that if interactions are sufficiently weak, then the expected running time of a carefully designed implementation is O(N log N), where N is the number of interacting components of the system. Computer experiments are carried out for random q-colourings and for the Widom–Rowlinson lattice gas model.
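For a small finite state space, coupling from the past can be sketched without any monotonicity at all by running a grand coupling from every state: start all copies at time -T with a shared random stream, and if they have coalesced by time 0 the common value is an exact draw from the stationary distribution; otherwise double T, reusing the same randomness for the more recent steps. The transition matrix below is an arbitrary toy example, not one of the paper's models.

```python
import random

def update(state, u, P):
    """Deterministic update function: move from `state` using the shared
    uniform u and transition matrix P (inverse-CDF coupling)."""
    acc = 0.0
    for j, pj in enumerate(P[state]):
        acc += pj
        if u < acc:
            return j
    return len(P) - 1

def cftp(P, seed=5):
    """Propp-Wilson exact sampling for a small finite chain via a grand
    coupling over all states."""
    rng = random.Random(seed)
    us = []                             # us[t] drives the step at time -T + t
    T = 1
    while True:
        while len(us) < T:
            us.insert(0, rng.random())  # extend the stream further into the past
        current = list(range(len(P)))   # one copy of the chain per state
        for t in range(T):              # run from time -T up to time 0
            current = [update(s, us[t], P) for s in current]
        if len(set(current)) == 1:      # all copies coalesced: exact draw
            return current[0]
        T *= 2

P = [[0.5, 0.5, 0.0], [0.25, 0.5, 0.25], [0.0, 0.5, 0.5]]
draw = cftp(P)
```

Prepending new uniforms keeps the randomness attached to each past time step fixed as T grows, which is what makes the output an exact (unbiased) sample rather than an approximate one.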
17.
Bayesian texture segmentation of weed and crop images using reversible jump Markov chain Monte Carlo methods (cited by 1: 0 self-citations, 1 by others)
Ian L. Dryden Mark R. Scarr Charles C. Taylor 《Journal of the Royal Statistical Society. Series C, Applied statistics》2003,52(1):31-50
Summary. A Bayesian method for segmenting weed and crop textures is described and implemented. The work forms part of a project to identify weeds and crops in images so that selective crop spraying can be carried out. An image is subdivided into blocks and each block is modelled as a single texture. The number of different textures in the image is assumed unknown. A hierarchical Bayesian procedure is used where the texture labels have a Potts model (colour Ising Markov random field) prior and the pixels within a block are distributed according to a Gaussian Markov random field, with the parameters dependent on the type of texture. We simulate from the posterior distribution by using a reversible jump Metropolis–Hastings algorithm, where the number of different texture components is allowed to vary. The methodology is applied to a simulated image and then we carry out texture segmentation on the weed and crop images that motivated the work.
18.
Probability density estimation via an infinite Gaussian mixture model: application to statistical process monitoring (cited by 1: 0 self-citations, 1 by others)
Tao Chen Julian Morris Elaine Martin 《Journal of the Royal Statistical Society. Series C, Applied statistics》2006,55(5):699-715
Summary. The primary goal of multivariate statistical process performance monitoring is to identify deviations from normal operation within a manufacturing process. The basis of the monitoring schemes is historical data that have been collected when the process is running under normal operating conditions. These data are then used to establish confidence bounds to detect the onset of process deviations. In contrast with the traditional approaches that are based on the Gaussian assumption, this paper proposes the application of the infinite Gaussian mixture model (GMM) for the calculation of the confidence bounds, thereby relaxing the previous restrictive assumption. The infinite GMM is a special case of Dirichlet process mixtures and is introduced as the limit of the finite GMM, i.e. when the number of mixtures tends to ∞. On the basis of the estimation of the probability density function, via the infinite GMM, the confidence bounds are calculated by using the bootstrap algorithm. The methodology proposed is demonstrated through its application to a simulated continuous chemical process, and a batch semiconductor manufacturing process.
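The "limit of the finite GMM" view corresponds to a Dirichlet process mixture, whose component weights can be generated by the stick-breaking construction. The sketch below truncates the infinite sequence for illustration; the concentration parameter and truncation level are arbitrary choices, not values from the paper.

```python
import random

def stick_breaking_weights(alpha, k, seed=6):
    """First k Dirichlet-process mixture weights by stick breaking:
    v_j ~ Beta(1, alpha), w_j = v_j * prod_{i<j} (1 - v_i)."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(k):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)   # break off a fraction of the stick
        remaining *= 1.0 - v            # what is left for later components
    return weights

w = stick_breaking_weights(alpha=1.0, k=20)
```

Pairing each weight with a Gaussian component drawn from a base measure gives a draw from the infinite GMM prior; larger alpha spreads mass over more components.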
19.
Trevor C. Bailey Paul J. Hewson 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2004,167(3):501-517
Summary. Traffic safety in the UK is one of the increasing number of areas where central government sets targets based on 'outcome-focused' performance indicators (PIs). Judgments about such PIs are often based solely on rankings of raw indicators and simple league tables dominate centrally published analyses. There is a considerable statistical literature examining health and education issues which has tended to use the generalized linear mixed model (GLMM) to address variability in the data when drawing inferences about relative performance from headline PIs. This methodology could obviously be applied in contexts such as traffic safety. However, when such models are applied to the fairly crude data sets that are currently available, the interval estimates generated, e.g. in respect of rankings, are often too broad to allow much real differentiation between the traffic safety performance of the units that are being considered. Such results sit uncomfortably with the ethos of 'performance management' and raise the question of whether the inference from such data sets about relative performance can be improved in some way. Motivated by consideration of a set of nine road safety performance indicators measured on English local authorities in the year 2000, the paper considers methods to strengthen the weak inference that is obtained from GLMMs of individual indicators by simultaneous, multivariate modelling of a range of related indicators. The correlation structure between indicators is used to reduce the uncertainty that is associated with rankings of any one of the individual indicators. The results demonstrate that credible intervals can be substantially narrowed by the use of the multivariate GLMM approach and that multivariate modelling of multiple PIs may therefore have considerable potential for introducing more robust and realistic assessments of differential performance in some contexts.
20.
Dan Cornford Lehel Csató David J. Evans Manfred Opper 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(3):609-626
Summary. The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
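A Gaussian process prior of the kind used here can be sampled at a handful of inputs by factoring the kernel matrix. The squared-exponential kernel and its hyperparameters below are illustrative assumptions; the paper's contribution lies in the likelihood modelling and the sparse sequential inference, not in this basic draw.

```python
import math
import random

def se_kernel(x1, x2, variance=1.0, lengthscale=1.0):
    """Squared-exponential covariance function."""
    return variance * math.exp(-0.5 * ((x1 - x2) / lengthscale) ** 2)

def gp_prior_sample(xs, jitter=1e-8, seed=7):
    """Draw f ~ N(0, K) at inputs xs via a dense Cholesky factor of the
    kernel matrix (jitter on the diagonal for numerical stability)."""
    n = len(xs)
    K = [[se_kernel(xs[i], xs[j]) + (jitter if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    # textbook Cholesky K = L L^T
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(K[i][i] - s) if i == j else (K[i][j] - s) / L[j][j]
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

f = gp_prior_sample([0.0, 0.5, 1.0, 1.5, 2.0])
```

The O(n³) factorisation here is exactly the cost that the paper's sparse, sequential algorithm is designed to avoid on large scatterometer data sets.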