1.
In this paper we present a parsimonious multivariate model for exchange rate volatilities based on logarithmic high–low ranges of daily exchange rates. The multivariate stochastic volatility model decomposes the log range of each exchange rate into two independent latent factors, which can be interpreted as the underlying currency-specific components. Owing to the empirical normality of the logarithmic range measure, the model can be estimated conveniently with standard Kalman filter methodology. Our results show that the model fits the exchange rate data quite well. Exchange rate news appears to be currency-specific and allows identification of each currency's contribution to both exchange rate levels and exchange rate volatilities.
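As an illustration of the estimation machinery described above (a minimal sketch, not the authors' code), the fragment below evaluates the Kalman-filter likelihood of one log-range series driven by two independent latent AR(1) factors; all parameter values and function names are hypothetical. In practice one would maximize this likelihood numerically over the parameters.

```python
import numpy as np

def kalman_loglik(y, phi, q, r):
    """Gaussian log-likelihood of y_t = x1_t + x2_t + e_t, where each
    latent factor follows x_i,t = phi_i * x_i,t-1 + eta_i,t.
    phi: (2,) AR coefficients; q: (2,) state innovation variances;
    r: measurement noise variance."""
    T_mat = np.diag(phi)                 # state transition
    Q = np.diag(q)                       # state innovation covariance
    Z = np.ones((1, 2))                  # measurement loads both factors
    x = np.zeros(2)                      # state mean
    P = np.diag(q / (1.0 - phi**2))      # stationary state covariance
    ll = 0.0
    for yt in y:
        # predict
        x = T_mat @ x
        P = T_mat @ P @ T_mat.T + Q
        # innovation and its variance
        v = yt - (Z @ x)[0]
        F = (Z @ P @ Z.T)[0, 0] + r
        ll += -0.5 * (np.log(2 * np.pi * F) + v**2 / F)
        # update
        K = (P @ Z.T)[:, 0] / F
        x = x + K * v
        P = P - np.outer(K, Z @ P)
    return ll

# toy usage with simulated "log ranges" (hypothetical parameters)
rng = np.random.default_rng(0)
y = rng.normal(size=500)
print(kalman_loglik(y, phi=np.array([0.95, 0.5]),
                    q=np.array([0.05, 0.1]), r=0.2))
```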
2.
Bayesian estimates of the two unknown parameters and of the reliability function of the exponentiated Weibull model are obtained based on generalized order statistics. Markov chain Monte Carlo (MCMC) methods are used to compute the Bayes estimates of the target parameters. Our computations are based on the balanced loss function, which contains the symmetric and asymmetric loss functions as special cases. The results are specialized to progressively Type-II censored data and upper record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
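To make the mechanics concrete, here is a hedged sketch for the complete-sample case only (the paper's generalized-order-statistics and censoring structure is omitted): a random-walk Metropolis sampler for the two exponentiated Weibull parameters, followed by a balanced-loss point estimate, which under squared error is a weighted average of a target estimator (here the MLE) and the posterior mean. All tuning constants and priors are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def ew_loglik(theta, x):
    """Log-likelihood of the exponentiated Weibull with parameters
    alpha, beta (scale fixed at 1): F(x) = (1 - exp(-x**beta))**alpha."""
    a, b = theta
    if a <= 0 or b <= 0:
        return -np.inf
    u = 1.0 - np.exp(-x**b)
    return np.sum(np.log(a) + np.log(b) + (b - 1) * np.log(x)
                  - x**b + (a - 1) * np.log(u))

def metropolis(x, n_iter=20000, step=0.1, seed=1):
    """Random-walk Metropolis on (log alpha, log beta), flat prior on logs."""
    rng = np.random.default_rng(seed)
    cur = np.zeros(2)                    # start at alpha = beta = 1
    cur_ll = ew_loglik(np.exp(cur), x)
    draws = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = cur + step * rng.normal(size=2)
        prop_ll = ew_loglik(np.exp(prop), x)
        if np.log(rng.uniform()) < prop_ll - cur_ll:
            cur, cur_ll = prop, prop_ll
        draws[i] = np.exp(cur)
    return draws[n_iter // 2:]           # drop burn-in

rng = np.random.default_rng(0)
x = rng.weibull(1.5, size=200)           # toy data (alpha = 1 case)
post = metropolis(x)
mle = minimize(lambda t: -ew_loglik(t, x), x0=[1.0, 1.0],
               method="Nelder-Mead").x
omega = 0.3                              # weight on the target estimator
balanced = omega * mle + (1 - omega) * post.mean(axis=0)
print("posterior mean:", post.mean(axis=0), "balanced-loss:", balanced)
```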
3.
Estimation of the spatial zero-inefficiency stochastic frontier (ZISF) model and Monte Carlo simulation
We introduce spatial effects and a discontinuity in the technical inefficiency term into the stochastic frontier model, yielding a spatial zero-inefficiency stochastic frontier (ZISF) model, and obtain estimates of the parameters and of technical efficiency by maximum likelihood together with the JLMS method. Monte Carlo simulations show that (1) the inverse likelihood ratio test identifies the true model with high accuracy; (2) the proposed method performs well both in parameter estimation and in the estimation of technical efficiency; and (3) if the true model is the spatial ZISF model but a spatial stochastic frontier model is used by mistake, both the parameter estimates and the technical efficiency estimates perform poorly. The spatial ZISF model is therefore a necessary addition.
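The JLMS step referenced above has a simple closed form in the benchmark (non-spatial) normal/half-normal frontier; the sketch below shows that building block only. The spatial and zero-inefficiency extensions are beyond a few lines, so this is an illustrative simplification, with all values hypothetical.

```python
import numpy as np
from scipy.stats import norm

def jlms_efficiency(eps, sigma_u, sigma_v):
    """JLMS point estimate E[u | eps] for a production frontier
    y = X b + v - u, with v ~ N(0, sigma_v^2) and u half-normal(sigma_u^2).
    eps: composed residuals y - X b."""
    s2 = sigma_u**2 + sigma_v**2
    mu_star = -eps * sigma_u**2 / s2             # conditional mean location
    sigma_star = sigma_u * sigma_v / np.sqrt(s2)
    z = mu_star / sigma_star
    u_hat = mu_star + sigma_star * norm.pdf(z) / norm.cdf(z)
    return np.exp(-u_hat)   # one common technical-efficiency convention

# toy residuals (hypothetical parameter values)
rng = np.random.default_rng(0)
eps = rng.normal(0, 0.3, size=10) - np.abs(rng.normal(0, 0.2, size=10))
print(jlms_efficiency(eps, sigma_u=0.2, sigma_v=0.3))
```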
4.
We examine alternative generalized method of moments procedures for estimation of a stochastic autoregressive volatility model by Monte Carlo methods. We document the existence of a tradeoff between the number of moments, or information, included in estimation and the quality, or precision, of the objective function used for estimation. Furthermore, an approximation to the optimal weighting matrix is used to explore the impact of the weighting matrix for estimation, specification testing, and inference procedures. The results provide guidelines that help achieve desirable small-sample properties in settings characterized by strong conditional heteroscedasticity and correlation among the moments.
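The moment/weighting-matrix trade-off described here can be illustrated with a deliberately simple two-step GMM problem (a toy mean-variance model, not the paper's stochastic volatility moments); the moment set and weighting choices below are the hypothetical parts.

```python
import numpy as np
from scipy.optimize import minimize

def moments(theta, x):
    """Moment conditions: the first two identify (mu, sigma2); the third
    (zero third central moment) is the 'extra' moment."""
    mu, s2 = theta
    e = x - mu
    return np.column_stack([e, e**2 - s2, e**3])

def gmm(x, W=None):
    """One GMM step: minimize gbar' W gbar over theta."""
    def obj(theta):
        g = moments(theta, x).mean(axis=0)
        Wm = np.eye(3) if W is None else W
        return g @ Wm @ g
    return minimize(obj, x0=[0.0, 1.0], method="Nelder-Mead").x

rng = np.random.default_rng(0)
x = rng.normal(1.0, 2.0, size=1000)
step1 = gmm(x)                           # identity weighting
S = np.cov(moments(step1, x).T)          # covariance of the moments
step2 = gmm(x, W=np.linalg.inv(S))       # efficient weighting matrix
print("step 1:", step1, "step 2:", step2)
```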
5.
In this paper we show that fully likelihood-based estimation and comparison of multivariate stochastic volatility (SV) models can be easily performed via the freely available Bayesian software package WinBUGS. Moreover, we introduce to the literature several new specifications that are natural extensions of certain existing models, one of which allows for time-varying correlation coefficients. The ideas are illustrated by fitting nine multivariate SV models to bivariate time series data on weekly exchange rates, including specifications with Granger causality in volatility, time-varying correlations, heavy-tailed error distributions, an additive factor structure, and a multiplicative factor structure. Empirical results suggest that the best specifications are those that allow for time-varying correlation coefficients.
6.
Olivier Lopez, Communications in Statistics – Theory and Methods, 2013, 42(15): 2639–2660
In a regression model with univariate censored responses, a new estimator of the joint distribution function of the covariates and the response is proposed, under the assumption that the response and the censoring variable are independent conditionally on the covariates. This estimator is based on the conditional Kaplan–Meier estimator of Beran (1981) and extends the multivariate empirical distribution function used in the uncensored case. We derive asymptotic i.i.d. representations for integrals with respect to the measure defined by this estimated distribution function. These representations hold even when the covariates are multidimensional, under an additional assumption on the censoring. Applications to censored regression and to density estimation are considered.
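A minimal sketch of the Beran conditional Kaplan–Meier estimator that anchors this construction follows; the Gaussian kernel weights and the bandwidth are arbitrary illustrative choices, not the paper's recommendations.

```python
import numpy as np

def beran_survival(t_grid, x0, X, T, delta, h):
    """Beran (1981) conditional Kaplan-Meier estimate of S(t | x0) with
    Nadaraya-Watson (Gaussian kernel) weights. T: observed times;
    delta: 1 = event, 0 = censored; h: bandwidth (hypothetical choice)."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    w = w / w.sum()
    order = np.argsort(T)
    T, delta, w = T[order], delta[order], w[order]
    # kernel weight still at risk just before each ordered time
    at_risk = np.cumsum(w[::-1])[::-1]
    factors = np.where(delta == 1, 1.0 - w / at_risk, 1.0)
    surv = np.cumprod(factors)
    # step function evaluated on the grid
    return np.array([surv[T <= t][-1] if np.any(T <= t) else 1.0
                     for t in t_grid])

rng = np.random.default_rng(0)
n = 300
X = rng.uniform(0, 1, n)
T_true = rng.exponential(1 + X)          # response depends on covariate
C = rng.exponential(2.0, n)              # censoring independent given X
T = np.minimum(T_true, C)
delta = (T_true <= C).astype(int)
print(beran_survival([0.5, 1.0, 2.0], x0=0.5, X=X, T=T, delta=delta, h=0.1))
```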
7.
We give a set of identifying conditions for p-dimensional (p ≥ 2) simultaneous equation systems (SES) with heteroscedasticity in the framework of Gaussian quasi-maximum likelihood (QML). Our conditions rely on the presence of heteroscedasticity in the data rather than identifying restrictions traditionally employed in the literature. The QML estimator is shown to be consistent for the true parameter point and asymptotically normal. Monte Carlo experiments indicate that the QML estimator performs well in comparison to the generalized method of moments (GMM) estimator in finite samples, even when the conditional variance is mildly misspecified. We analyze the relationship between traded stock prices and volumes in the setting of SES. Based on a sample of the Russell 3000 stocks, our findings provide new evidence against perfectly elastic demand and supply schedules for equities.
8.
Dinghai Xu, Communications in Statistics – Simulation and Computation, 2013, 42(7): 1403–1421
This article investigates an efficient estimation method for a class of switching regressions based on the characteristic function (CF). We show that, with an exponential weighting function, the CF-based estimator can be obtained by minimizing a closed-form distance measure. Because the analytical structure of the asymptotic covariance is available, an iterative estimation procedure is developed that minimizes a precision measure of the asymptotic covariance matrix. Numerical examples are given via a set of Monte Carlo experiments examining the implementation, finite-sample properties, and efficiency of the proposed estimator.
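The CF-distance idea can be sketched for the simplest switching model, a two-component normal mixture, by minimizing the exponentially weighted gap between the empirical and model characteristic functions on a grid. The grid, weight parameter, and starting values below are hypothetical, and the paper's iterative covariance-based refinement is omitted.

```python
import numpy as np
from scipy.optimize import minimize

def model_cf(t, p, m1, s1, m2, s2):
    """CF of a two-component normal mixture (a switching model with
    no covariates, kept deliberately simple)."""
    return (p * np.exp(1j * m1 * t - 0.5 * s1**2 * t**2)
            + (1 - p) * np.exp(1j * m2 * t - 0.5 * s2**2 * t**2))

def cf_distance(theta, x, t_grid, b=1.0):
    """Weighted L2 distance between empirical and model CFs with
    exponential weight exp(-b t^2), approximated on a grid."""
    p, m1, ls1, m2, ls2 = theta
    if not 0 < p < 1:
        return np.inf
    ecf = np.exp(1j * np.outer(t_grid, x)).mean(axis=1)
    diff = ecf - model_cf(t_grid, p, m1, np.exp(ls1), m2, np.exp(ls2))
    w = np.exp(-b * t_grid**2)
    return np.trapz(np.abs(diff)**2 * w, t_grid)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(2, 1.0, 700)])
t_grid = np.linspace(-5, 5, 201)
res = minimize(cf_distance, x0=[0.5, -0.5, 0.0, 1.5, 0.0],
               args=(x, t_grid), method="Nelder-Mead",
               options={"maxiter": 5000})
p, m1, ls1, m2, ls2 = res.x
print(p, m1, np.exp(ls1), m2, np.exp(ls2))
```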
9.
Martin Crowder, Scandinavian Journal of Statistics, 1998, 25(1): 53–67
A parametric multivariate failure time distribution is derived from a frailty-type model with a particular frailty distribution. It covers as special cases certain distributions that have been used for multivariate survival data in recent years. Some properties of the distribution are derived: its marginal and conditional distributions lie within the parametric family, and association between the component variates can be positive or, to a limited extent, negative. The simple closed form of the survivor function is useful for right-censored data, which occur commonly in survival analysis, and for calculating uniform residuals. Also featured is the distribution of ratios of paired failure times. The model is applied to data from the literature.
10.
A Monte Carlo simulation is used to study the performance of the Wald, likelihood ratio and Lagrange multiplier tests for regression coefficients when least absolute value regression is used. The simulation results provide support for use of the Lagrange multiplier test, especially when certain computational advantages are considered.
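For readers who want to experiment, the sketch below fits a least absolute value (median) regression with statsmodels and forms a simple Wald statistic for the slope; the data-generating design is hypothetical, and the LR and LM variants studied in the paper are not shown.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
# heavy-tailed errors, where LAV (median) regression is attractive
y = 1.0 + 0.5 * x + rng.standard_t(df=2, size=n)

X = sm.add_constant(x)
fit = QuantReg(y, X).fit(q=0.5)          # LAV = median regression
# Wald statistic for H0: slope = 0 (asymptotic chi-square, 1 df)
wald = (fit.params[1] / fit.bse[1]) ** 2
print(fit.params, "Wald:", wald)
```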
11.
Robert K. Rayner, Journal of Business & Economic Statistics, 2013, 31(2): 251–263
The small-sample behavior of the bootstrap is investigated as a method for estimating p values and power in the stationary first-order autoregressive model. Monte Carlo methods are used to examine the bootstrap and Student-t approximations to the true distribution of the test statistic frequently used for testing hypotheses on the underlying slope parameter. In contrast to Student's t, the results suggest that the bootstrap can accurately estimate p values and power in this model in sample sizes as small as 5–10.
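A compact residual-bootstrap implementation of the p-value calculation for the AR(1) slope is sketched below; the sample size, replication count, and null value are illustrative choices, not the paper's design.

```python
import numpy as np

def ols_ar1(y):
    """OLS of y_t on (1, y_{t-1}); returns (a, b), residuals, slope s.e."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    z = y[1:]
    beta = np.linalg.lstsq(X, z, rcond=None)[0]
    e = z - X @ beta
    s2 = e @ e / (len(z) - 2)
    se_b = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta, e, se_b

def bootstrap_pvalue(y, b0, n_boot=999, seed=0):
    """Residual bootstrap p-value for H0: slope = b0 in an AR(1)."""
    rng = np.random.default_rng(seed)
    beta, e, se_b = ols_ar1(y)
    t_obs = (beta[1] - b0) / se_b
    e = e - e.mean()
    t_boot = np.empty(n_boot)
    for i in range(n_boot):
        # rebuild a series under the null with resampled residuals
        eb = rng.choice(e, size=len(y) - 1, replace=True)
        yb = np.empty(len(y))
        yb[0] = y[0]
        for t in range(1, len(y)):
            yb[t] = beta[0] + b0 * yb[t - 1] + eb[t - 1]
        bb, _, seb = ols_ar1(yb)
        t_boot[i] = (bb[1] - b0) / seb
    return (np.abs(t_boot) >= np.abs(t_obs)).mean()

rng = np.random.default_rng(1)
y = np.zeros(50)
for t in range(1, 50):
    y[t] = 0.5 * y[t - 1] + rng.normal()
print("bootstrap p-value for H0: b = 0.5:", bootstrap_pvalue(y, b0=0.5))
```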
12.
Drew Creal, Econometric Reviews, 2013, 32(3): 245–296
This article serves as an introduction and survey, for economists, of the field of sequential Monte Carlo methods, also known as particle filters. Sequential Monte Carlo methods are simulation-based algorithms used to compute the high-dimensional and/or complex integrals that arise regularly in applied work, and they are becoming increasingly popular in economics and finance, from dynamic stochastic general equilibrium models in macroeconomics to option pricing. The objective of this article is to explain the basics of the methodology, provide references to the literature, and cover some of the theoretical results that justify the methods in practice.
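The canonical first example in this literature is the bootstrap particle filter applied to a basic stochastic volatility model; here is a hedged sketch in which the parameters and particle count are arbitrary choices.

```python
import numpy as np

def bootstrap_pf(y, phi, sigma, beta, n_part=2000, seed=0):
    """Bootstrap particle filter for the basic stochastic volatility model
    x_t = phi x_{t-1} + sigma eta_t,  y_t = beta exp(x_t / 2) eps_t.
    Returns the log-likelihood estimate and filtered E[x_t | y_{1:t}]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0, sigma / np.sqrt(1 - phi**2), n_part)  # stationary init
    ll = 0.0
    means = np.empty(len(y))
    for t, yt in enumerate(y):
        x = phi * x + sigma * rng.normal(size=n_part)       # propagate
        s = beta * np.exp(x / 2)
        logw = -0.5 * np.log(2 * np.pi * s**2) - 0.5 * (yt / s) ** 2
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                          # likelihood inc.
        w /= w.sum()
        means[t] = w @ x
        idx = rng.choice(n_part, size=n_part, p=w)          # resample
        x = x[idx]
    return ll, means

# toy data simulated from the same model (hypothetical parameters)
rng = np.random.default_rng(1)
T, phi, sigma, beta = 300, 0.95, 0.2, 1.0
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma * rng.normal()
y = beta * np.exp(x / 2) * rng.normal(size=T)
ll, means = bootstrap_pf(y, phi, sigma, beta)
print("log-likelihood estimate:", ll)
```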
13.
Markov Chain Monte Carlo Techniques for Studying Interoccasion and Intersubject Variability: Application to Pharmacokinetic Data
David J. Lunn & Leon J. Aarons, Journal of the Royal Statistical Society, Series C (Applied Statistics), 1997, 46(1): 73–91
Values of pharmacokinetic parameters may seem to vary randomly between dosing occasions. An accurate explanation of the pharmacokinetic behaviour of a particular drug within a population therefore requires two major sources of variability to be accounted for, namely interoccasion variability and intersubject variability. A hierarchical model that recognizes these two sources of variation has been developed. Standard Bayesian techniques were applied to this statistical model, and a mathematical algorithm based on a Gibbs sampling strategy was derived. The accuracy of this algorithm's determination of the interoccasion and intersubject variation in pharmacokinetic parameters was evaluated from various population analyses of several sets of simulated data. A comparison of results from these analyses with those obtained from parallel maximum likelihood analyses (NONMEM) showed that, for simple problems, the outputs from the two algorithms agreed well, whereas for more complex situations the NONMEM approach may be less accurate. Statistical analyses of a multioccasion data set of pharmacokinetic measurements on the drug metoprolol (the measurements being of concentrations of drug in blood plasma from human subjects) revealed substantial interoccasion variability for all structural model parameters. For some parameters, interoccasion variability appears to be the primary source of pharmacokinetic variation.
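A drastically simplified Gibbs sampler showing how intersubject and interoccasion variance components can be separated in a two-level normal model follows; the structural pharmacokinetic model, priors, and dimensions here are hypothetical placeholders for the paper's hierarchy.

```python
import numpy as np

def gibbs_variance_components(y, n_iter=5000, seed=0):
    """Gibbs sampler for y_ij = mu + a_i + e_ij, a_i ~ N(0, s2_sub)
    (intersubject), e_ij ~ N(0, s2_occ) (interoccasion). y: (n_sub, n_occ).
    Flat prior on mu; vague inverse-gamma(0.01, 0.01) on both variances."""
    rng = np.random.default_rng(seed)
    n_sub, n_occ = y.shape
    mu, a = y.mean(), np.zeros(n_sub)
    s2_sub, s2_occ = 1.0, 1.0
    out = np.empty((n_iter, 2))
    for it in range(n_iter):
        # subject effects a_i | rest (normal-normal conjugacy)
        prec = n_occ / s2_occ + 1.0 / s2_sub
        mean = ((y - mu).sum(axis=1) / s2_occ) / prec
        a = mean + rng.normal(size=n_sub) / np.sqrt(prec)
        # grand mean mu | rest (flat prior)
        resid = y - a[:, None]
        mu = rng.normal(resid.mean(), np.sqrt(s2_occ / y.size))
        # variances | rest (inverse-gamma conjugacy)
        e = y - mu - a[:, None]
        s2_occ = 1.0 / rng.gamma(0.01 + y.size / 2,
                                 1.0 / (0.01 + (e**2).sum() / 2))
        s2_sub = 1.0 / rng.gamma(0.01 + n_sub / 2,
                                 1.0 / (0.01 + (a**2).sum() / 2))
        out[it] = (s2_sub, s2_occ)
    return out[n_iter // 2:]             # drop burn-in

rng = np.random.default_rng(1)
a = rng.normal(0, 0.5, size=30)                      # intersubject sd 0.5
y = 2.0 + a[:, None] + rng.normal(0, 0.3, (30, 4))   # interoccasion sd 0.3
draws = gibbs_variance_components(y)
print("posterior means (s2_sub, s2_occ):", draws.mean(axis=0))
```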
14.
Econometric Reviews, 2013, 32(1): 25–52
This paper argues that Fisher's paradox can be explained away in terms of estimator choice. We analyse by means of Monte Carlo experiments the small-sample properties of a large set of estimators (including virtually all available single-equation estimators), and compute critical values based on the empirical distributions of the t-statistics, for a variety of data generation processes (DGPs) allowing for structural breaks, ARCH effects, etc. We show that precisely the estimators most commonly used in the literature, namely OLS, dynamic OLS (DOLS) and non-prewhitened FMLS, have the worst performance in small samples and produce rejections of the Fisher hypothesis. If one employs the estimators with the most desirable properties (i.e., the smallest downward bias and the minimum shift in the distribution of the associated t-statistics), or if one uses the empirical critical values, the evidence based on US data is strongly supportive of the Fisher relation, consistent with many theoretical models.
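The notion of empirical critical values can be demonstrated in miniature: simulate a Fisher-type regression under the null, collect the t-statistics, and read off their quantiles. The DGP below, with a persistent AR(1) regressor, is a hypothetical stand-in for the paper's much richer designs.

```python
import numpy as np

def tstat(y, x, beta0):
    """OLS t-statistic for H0: slope = beta0 in y = a + b x + u."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b
    s2 = e @ e / (len(y) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return (b[1] - beta0) / se

def empirical_critical_values(T=100, n_rep=2000, rho=0.95, seed=0):
    """Distribution of the OLS t-statistic under a Fisher-type DGP
    i_t = a + 1.0 * pi_t + u_t with highly persistent inflation pi_t."""
    rng = np.random.default_rng(seed)
    stats = np.empty(n_rep)
    for r in range(n_rep):
        pi = np.zeros(T)
        for t in range(1, T):
            pi[t] = rho * pi[t - 1] + rng.normal()
        i_rate = 1.0 + 1.0 * pi + rng.normal(size=T)   # Fisher slope = 1
        stats[r] = tstat(i_rate, pi, beta0=1.0)
    return np.quantile(stats, [0.025, 0.975])

print("empirical 2.5%/97.5% critical values:", empirical_critical_values())
```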
15.
Sarantis Tsiaplias, Econometric Reviews, 2013, 32(2): 244–271
Theoretical models of contagion and spillovers allow for asset-specific shocks that can be directly transmitted from one asset to another, as well as indirectly transmitted across uncorrelated assets through some intermediary mechanism. Standard multivariate Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models, however, provide estimates of volatilities and correlations based only on the direct transmission of shocks across assets. As such, spillover effects via an intermediary asset or market are not considered. In this article, a multivariate GARCH model is constructed that provides estimates of volatilities and correlations based on both directly and indirectly transmitted shocks. The model is applied to exchange rate and equity returns data. The results suggest that if a spillover component is observed in the data, the spillover augmented models provide significantly different volatility estimates compared to standard multivariate GARCH models.
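To show what a spillover term looks like in a variance recursion, the sketch below augments a univariate GARCH(1,1) update for asset 1 with the lagged squared shock of asset 2; this simplified filter is a hypothetical stand-in for the article's full multivariate specification.

```python
import numpy as np

def spillover_garch_filter(e1, e2, omega, alpha, beta, gamma):
    """Conditional variance of asset 1 with a volatility spillover term:
    h_t = omega + alpha * e1_{t-1}^2 + beta * h_{t-1} + gamma * e2_{t-1}^2.
    gamma captures shocks transmitted from asset 2 (gamma = 0 recovers
    a standard GARCH(1,1))."""
    h = np.empty(len(e1))
    h[0] = e1.var()                      # initialize at the sample variance
    for t in range(1, len(e1)):
        h[t] = (omega + alpha * e1[t - 1]**2
                + beta * h[t - 1] + gamma * e2[t - 1]**2)
    return h

rng = np.random.default_rng(0)
e1 = rng.normal(size=1000)
e2 = rng.normal(size=1000)
h = spillover_garch_filter(e1, e2, omega=0.05, alpha=0.05,
                           beta=0.9, gamma=0.03)
print("mean conditional variance:", h.mean())
```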
16.
Since the seminal paper of Granger & Joyeux (1980), the concept of long memory has attracted the attention of many statisticians and econometricians trying to model and measure the persistence of stationary processes. Many methods for estimating d, the long-range dependence parameter, have been suggested since the work of Hurst (1951). They fall into three classes: heuristic methods, semi-parametric methods, and maximum likelihood methods. In this paper we verify by simulation the two main properties of d̂: consistency and asymptotic normality. It is thus important for practitioners to compare the performance of the various classes of estimators. The results indicate that only the semi-parametric and maximum likelihood methods give good estimators. They also suggest that the AR component of the ARFIMA(1, d, 0) process has an important impact on the properties of the different estimators, and that the Whittle method is the best one, since it has the smallest mean squared error. We finally carry out an empirical application using the monthly seasonally adjusted US inflation series, in order to illustrate the usefulness of the different estimation methods in the context of real data.
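Among the semi-parametric estimators this abstract refers to, the Geweke–Porter–Hudak (GPH) log-periodogram regression is the easiest to sketch; the bandwidth choice below and the simulated fractional-noise series are illustrative only.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak log-periodogram estimate of the long-memory
    parameter d: regress log I(lambda_j) on -log(4 sin^2(lambda_j / 2))
    over the first m Fourier frequencies."""
    n = len(x)
    m = m or int(n ** 0.5)                   # common bandwidth choice
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    # periodogram at the Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)
    reg = -np.log(4 * np.sin(lam / 2) ** 2)
    X = np.column_stack([np.ones(m), reg])
    coef = np.linalg.lstsq(X, np.log(I), rcond=None)[0]
    return coef[1]                           # slope estimates d

# fractional noise via the MA representation of (1 - L)^(-d), d = 0.3
rng = np.random.default_rng(0)
n, d = 2000, 0.3
k = np.arange(1, n)
psi = np.concatenate([[1.0], np.cumprod((k - 1 + d) / k)])
e = rng.normal(size=2 * n)
x = np.convolve(e, psi, mode="full")[n:2 * n]
print("GPH estimate of d:", gph_estimate(x))
```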
17.
Vasco J. Gabriel, Econometric Reviews, 2003, 22(4): 411–435
The aim of this paper is to compare the relative performance of several tests for the null hypothesis of cointegration, in terms of size and power in finite samples. This is carried out using Monte Carlo simulations for a range of plausible data-generating processes. We also analyze the impact on size and power of choosing different procedures to estimate the long-run variance of the errors. We find that the parametrically adjusted test of McCabe et al. (1997) is the best-balanced test, displaying good power and relatively few size distortions.
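Since the choice of long-run variance estimator is central here, the following sketch implements the Newey–West (Bartlett kernel) estimator that such tests commonly plug in; the bandwidth rule and the AR(1) example are illustrative.

```python
import numpy as np

def long_run_variance(e, bandwidth=None):
    """Newey-West (Bartlett kernel) estimate of the long-run variance
    of a residual series e."""
    n = len(e)
    L = bandwidth or int(np.floor(4 * (n / 100.0) ** (2.0 / 9.0)))
    e = e - e.mean()
    lrv = e @ e / n
    for k in range(1, L + 1):
        w = 1.0 - k / (L + 1.0)              # Bartlett weight
        lrv += 2.0 * w * (e[k:] @ e[:-k]) / n
    return lrv

rng = np.random.default_rng(0)
u = np.zeros(500)
for t in range(1, 500):                      # AR(1) errors, rho = 0.6
    u[t] = 0.6 * u[t - 1] + rng.normal()
# true long-run variance = 1 / (1 - 0.6)^2 = 6.25
print("Newey-West LRV:", long_run_variance(u))
```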