Similar Articles
1.
This paper proposes a copula directional dependence measure based on a bivariate Gaussian copula beta regression with stochastic volatility (SV) models for the marginal distributions. Using the asymmetric copula generated by composing two Plackett copulas, we show that our SV-based copula directional dependence is superior to the asymmetric-GARCH-based copula directional dependence of Kim and Hwang (2016) in terms of the percent relative efficiency of bias and mean squared error. To validate the proposed method on real data, we use the Brent Crude Daily Price (BRENT), the West Texas Intermediate Daily Price (WTI), the Standard & Poor’s 500 (SP) and the US 10-Year Treasury Constant Maturity Rate (TCM), and find that our copula SV directional dependence is again superior overall in precision, as measured by the percent relative efficiency of mean squared error. In terms of forecasting with these data, we also show that a Bayesian SV model fitted to the uniform-transformed data obtained from a copula conditional distribution improves on volatility models such as GARCH and SV.
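A minimal sketch of how a directional dependence measure of this general kind can be computed (this replaces the paper's Gaussian copula beta regression with SV marginals by a generic logit-link beta regression on rank-transformed data; all names are illustrative): regress the pseudo-observations of one series on the other by beta regression and take rho^2_{U->V} = Var(E[V|U]) / Var(V).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def pseudo_obs(x):
    """Rank-transform a series to (0,1) pseudo-observations."""
    ranks = np.argsort(np.argsort(x)) + 1
    return ranks / (len(x) + 1.0)

def beta_reg_negloglik(params, u, v):
    """Negative log-likelihood of a logit-link beta regression of v on u."""
    b0, b1, log_phi = params
    mu = expit(b0 + b1 * u)            # conditional mean in (0,1)
    phi = np.exp(log_phi)              # precision parameter
    a, b = mu * phi, (1.0 - mu) * phi
    ll = (gammaln(phi) - gammaln(a) - gammaln(b)
          + (a - 1.0) * np.log(v) + (b - 1.0) * np.log(1.0 - v))
    return -np.sum(ll)

def directional_dependence(x, y):
    """rho^2_{U->V}: variance of the fitted conditional mean over Var(V)."""
    u, v = pseudo_obs(x), pseudo_obs(y)
    res = minimize(beta_reg_negloglik, x0=[0.0, 0.0, 0.0], args=(u, v))
    b0, b1, _ = res.x
    return np.var(expit(b0 + b1 * u)) / np.var(v)

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = 0.6 * x + rng.standard_normal(500)      # toy dependent pair
print(directional_dependence(x, y), directional_dependence(y, x))
```

The asymmetry of the two printed values is what "directional" dependence captures: the strength of dependence from U to V need not match that from V to U.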

2.
A Local Weighted Least-Squares Solution for the Variable-Weight Combination Forecasting Model   (Cited by: 2; self-citations: 0; citations by others: 2)
As science and technology have advanced, forecasting methods have developed considerably, and dozens of methods are now in common use. Combination forecasting combines different forecasting methods so as to exploit the information each of them provides; its performance is often better than that of any single method, which explains its wide adoption. This paper studies the combination forecasting model from the perspective of varying-coefficient models: obtaining the variable weights is recast as estimating the coefficient functions of a varying-coefficient model, which can then be solved by local weighted least squares, with the smoothing parameter selected by cross-validation. The results show that the proposed method attains high forecasting accuracy and outperforms the other methods.
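A minimal sketch of this estimation idea, with all names and the kernel choice illustrative (the paper selects the smoothing parameter by cross-validation; a fixed bandwidth h is used here for brevity): the combination weights are treated as smooth functions of time and estimated at each point by kernel-weighted least squares on the competing forecasts.

```python
import numpy as np

def local_wls_weights(t_grid, t_obs, F, y, h=0.1):
    """Varying combination weights w(t): at each t0, solve a weighted
    least-squares fit of y on the forecast matrix F with Gaussian
    kernel weights K((t - t0)/h)."""
    W = []
    for t0 in t_grid:
        k = np.exp(-0.5 * ((t_obs - t0) / h) ** 2)   # kernel weights
        Fw = F * k[:, None]
        w = np.linalg.solve(F.T @ Fw, Fw.T @ y)      # weighted normal equations
        W.append(w)
    return np.array(W)

# Toy example: two forecasters whose relative accuracy drifts over time.
rng = np.random.default_rng(1)
n = 300
t = np.linspace(0, 1, n)
target = np.sin(2 * np.pi * t)
f1 = target + rng.normal(0, 0.1 + 0.4 * t, n)     # accurate early, poor late
f2 = target + rng.normal(0, 0.5 - 0.4 * t, n)     # poor early, accurate late
F = np.column_stack([f1, f2])
weights = local_wls_weights(np.linspace(0.05, 0.95, 10), t, F, target)
print(weights.round(2))     # weight on f1 should fall as t grows
```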

3.
This paper is concerned with the volatility modeling of a set of South African Rand (ZAR) exchange rates; we note that this is a data set from a developing country. We investigate the quasi-maximum-likelihood (QML) estimator based on the Kalman filter and explore how well a selection of stochastic volatility (SV) models fits the data. The main results are: (1) the SV model parameter estimates are in line with those reported from the analysis of high-frequency data for developed countries; (2) the SV models we considered, along with their corresponding QML estimators, fit the data well; (3) using the range instead of the absolute return as a volatility proxy produces QML estimates that are both less biased and less variable; (4) although the log range of the ZAR exchange rates has a distribution that is quite far from normal, the corresponding QML estimator performs better than the one based on the log absolute return.
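The QML approach can be sketched as follows, under the standard linearization and assuming a basic log-normal SV model with the log-squared-return proxy (the log-range variant replaces y_t accordingly): setting y_t = log r_t^2 turns the SV model into a linear state-space model whose measurement noise, the log of a chi-squared(1) variable, is approximated by a normal with mean -1.27 and variance pi^2/2, so the Kalman filter delivers a Gaussian quasi-likelihood to maximize.

```python
import numpy as np
from scipy.optimize import minimize

def sv_qml_negloglik(params, y):
    """Gaussian (quasi) log-likelihood of the linearized SV model
        y_t = omega + h_t + eps_t,   h_t = phi * h_{t-1} + eta_t,
    with eps_t ~ N(-1.27, pi^2/2) approximating log(chi^2_1)."""
    omega, phi, sig_eta = params
    var_eps = np.pi ** 2 / 2.0
    a, P = 0.0, sig_eta ** 2 / max(1e-8, 1.0 - phi ** 2)  # stationary init
    ll = 0.0
    for yt in y:
        v = yt - (omega - 1.27 + a)       # innovation
        F = P + var_eps                   # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * F) + v ** 2 / F)
        K = P / F                         # Kalman gain
        a_filt, P_filt = a + K * v, P * (1.0 - K)
        a = phi * a_filt                  # one-step state prediction
        P = phi ** 2 * P_filt + sig_eta ** 2
    return -ll

# Simulate an SV series and recover the parameters by QML.
rng = np.random.default_rng(2)
T, phi_true, sig_true = 2000, 0.95, 0.2
h = np.zeros(T)
for t in range(1, T):
    h[t] = phi_true * h[t - 1] + sig_true * rng.standard_normal()
r = np.exp(h / 2) * rng.standard_normal(T)
y = np.log(r ** 2)
res = minimize(sv_qml_negloglik, x0=[0.0, 0.9, 0.3], args=(y,),
               bounds=[(-5, 5), (0.01, 0.999), (0.01, 2.0)])
print(res.x)    # rough estimates of (omega, phi, sigma_eta)
```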

4.
The class of beta regression models proposed by Ferrari and Cribari-Neto [Beta regression for modelling rates and proportions, Journal of Applied Statistics 31 (2004), pp. 799–815] is useful for modelling data that assume values in the standard unit interval (0, 1). The dependent variable relates to a linear predictor that includes regressors and unknown parameters through a link function. The model is also indexed by a precision parameter, which is typically taken to be constant for all observations. Some authors have used, however, variable dispersion beta regression models, i.e., models that include a regression submodel for the precision parameter. In this paper, we show how to perform testing inference on the parameters that index the mean submodel without having to model the data precision. This strategy is useful as it is typically harder to model dispersion effects than mean effects. The proposed inference procedure is accurate even under variable dispersion. We present the results of extensive Monte Carlo simulations where our testing strategy is contrasted to that in which the practitioner models the underlying dispersion and then performs testing inference. An empirical application that uses real (not simulated) data is also presented and discussed.
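For reference, the model class being discussed can be written compactly; this is the standard Ferrari and Cribari-Neto parameterization, not new material from this paper:

```latex
% Beta density in the mean/precision parameterization of
% Ferrari and Cribari-Neto (2004):
f(y;\mu,\phi) = \frac{\Gamma(\phi)}{\Gamma(\mu\phi)\,\Gamma((1-\mu)\phi)}
                \, y^{\mu\phi-1}(1-y)^{(1-\mu)\phi-1}, \qquad 0<y<1,
% with E(y)=\mu and Var(y)=\mu(1-\mu)/(1+\phi),
% mean submodel g(\mu_i)=x_i^\top\beta and, under variable dispersion,
% a precision submodel h(\phi_i)=z_i^\top\gamma.
```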

5.
In this paper we show that fully likelihood-based estimation and comparison of multivariate stochastic volatility (SV) models can be easily performed via a freely available Bayesian software called WinBUGS. Moreover, we introduce to the literature several new specifications that are natural extensions to certain existing models, one of which allows for time-varying correlation coefficients. Ideas are illustrated by fitting, to a bivariate time series data of weekly exchange rates, nine multivariate SV models, including the specifications with Granger causality in volatility, time-varying correlations, heavy-tailed error distributions, additive factor structure, and multiplicative factor structure. Empirical results suggest that the best specifications are those that allow for time-varying correlation coefficients.

6.
Estimating parameters in a stochastic volatility (SV) model is a challenging task. Among other estimation methods and approaches, efficient simulation methods based on importance sampling have been developed for the Monte Carlo maximum likelihood estimation of univariate SV models. This paper shows that importance sampling methods can be used in a general multivariate SV setting. The sampling methods are computationally efficient. To illustrate the versatility of this approach, three different multivariate stochastic volatility models are estimated for a standard data set. The empirical results are compared to those from earlier studies in the literature. Monte Carlo simulation experiments, based on parameter estimates from the standard data set, are used to show the effectiveness of the importance sampling methods.
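A toy illustration of the importance sampling step for a single latent volatility (the actual method samples whole volatility paths from a high-dimensional proposal; the names and the Laplace-approximation proposal here are illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def is_likelihood(y, sig_h=1.0, n_draws=10_000, seed=0):
    """Estimate p(y) = integral of N(y; 0, e^h) N(h; 0, sig_h^2) dh by
    importance sampling with a Gaussian proposal centered at the
    posterior mode (a one-dimensional Laplace approximation)."""
    def neg_log_joint(h):
        return -(norm.logpdf(y, 0.0, np.exp(h / 2)) +
                 norm.logpdf(h, 0.0, sig_h))
    m = minimize_scalar(neg_log_joint).x           # posterior mode
    eps = 1e-4                                     # numerical curvature
    hess = (neg_log_joint(m + eps) - 2 * neg_log_joint(m)
            + neg_log_joint(m - eps)) / eps ** 2
    s = 1.0 / np.sqrt(hess)                        # proposal scale
    rng = np.random.default_rng(seed)
    h = rng.normal(m, s, n_draws)                  # proposal draws
    log_w = (norm.logpdf(y, 0.0, np.exp(h / 2)) +
             norm.logpdf(h, 0.0, sig_h) -
             norm.logpdf(h, m, s))                 # log importance weights
    return np.exp(log_w).mean()

print(is_likelihood(0.8))   # MC estimate of the marginal likelihood at y = 0.8
```

Centering the proposal on the posterior mode keeps the importance weights well behaved, which is the essence of the efficiency claims in this line of work.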

7.
The GARCH and stochastic volatility (SV) models are two competing, well-known and often used models to explain the volatility of financial series. In this paper, we consider a closed-form estimator for a stochastic volatility model and derive its asymptotic properties. We confirm our theoretical results by a simulation study. In addition, we propose a set of simple, strongly consistent decision rules to compare the ability of the GARCH and the SV model to fit the characteristic features observed in high-frequency financial data, such as high kurtosis and a slowly decaying autocorrelation function of the squared observations. These rules are based on a number of moment conditions that is allowed to increase with sample size. We show that our selection procedure leads to choosing the model that fits best, or the simplest model under equivalence, with probability one as the sample size increases. The finite-sample behavior of our procedure is analyzed via simulations. Finally, we provide an application to stocks in the Dow Jones Industrial Average index.
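As a hedged illustration of what a closed-form, moment-based SV estimator can look like (a generic moment-matching sketch, not necessarily the estimator derived in the paper): for the log-normal SV model, the kurtosis of returns and the lag-1 autocorrelation of squared returns identify Var(h_t) and the persistence phi in closed form.

```python
import numpy as np
from scipy.stats import kurtosis

def sv_moment_estimates(r):
    """Moment-matching estimates for the log-normal SV model
        r_t = exp(h_t/2) eps_t,  h_t = phi * h_{t-1} + eta_t,
    using kurt(r) = 3*exp(s2) and
    corr(r_t^2, r_{t-1}^2) = (exp(s2*phi) - 1) / (3*exp(s2) - 1),
    where s2 = Var(h_t)."""
    k = kurtosis(r, fisher=False)          # raw (Pearson) kurtosis
    s2 = np.log(k / 3.0)                   # Var(h) from excess kurtosis
    x = r ** 2
    rho1 = np.corrcoef(x[1:], x[:-1])[0, 1]
    phi = np.log1p(rho1 * (3.0 * np.exp(s2) - 1.0)) / s2
    return s2, phi

rng = np.random.default_rng(3)
T, phi_true, sig_eta = 50_000, 0.95, 0.26
h = np.zeros(T)
for t in range(1, T):
    h[t] = phi_true * h[t - 1] + sig_eta * rng.standard_normal()
r = np.exp(h / 2) * rng.standard_normal(T)
print(sv_moment_estimates(r))   # roughly (0.26**2/(1-0.95**2), 0.95)
```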

8.
In this paper, we investigate robust parameter estimation and variable selection for binary regression models with grouped data. We consider estimation procedures based on the minimum-distance approach. In particular, we employ minimum Hellinger and minimum symmetric chi-squared distance criteria and propose regularized minimum-distance estimators. These estimators appear to possess a certain degree of automatic robustness against model misspecification and/or potential outliers. We show that the proposed non-penalized and penalized minimum-distance estimators are efficient under the model and simultaneously have excellent robustness properties. We study their asymptotic properties such as consistency, asymptotic normality and oracle properties. Using Monte Carlo studies, we examine the small-sample and robustness properties of the proposed estimators and compare them with traditional likelihood estimators. We also study two real-data applications to illustrate our methods. The numerical studies indicate the satisfactory finite-sample performance of our procedures.
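A minimal sketch of the non-penalized minimum Hellinger distance criterion for grouped binary data (the regularized versions would add a penalty to this objective; all names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def hellinger_objective(beta, X, y, n):
    """Sum over groups of the squared Hellinger distance between the
    empirical success probability y_i/n_i and the model probability
    pi_i = expit(x_i' beta), weighted by group size."""
    p_hat = y / n
    pi = expit(X @ beta)
    hd2 = ((np.sqrt(p_hat) - np.sqrt(pi)) ** 2
           + (np.sqrt(1 - p_hat) - np.sqrt(1 - pi)) ** 2)
    return np.sum(n * hd2)

# Toy grouped data: 20 dose groups, logistic truth, one contaminated group.
rng = np.random.default_rng(4)
x = np.linspace(-2, 2, 20)
X = np.column_stack([np.ones_like(x), x])
n = np.full(20, 50)
pi_true = expit(-0.5 + 1.2 * x)
y = rng.binomial(n, pi_true).astype(float)
y[3] = 48.0                        # outlying group
res = minimize(hellinger_objective, x0=np.zeros(2), args=(X, y, n))
print(res.x)                       # MHD estimate, resistant to the outlier
```

The square-root transform bounds the contribution of any single group, which is the source of the "automatic robustness" the abstract refers to.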

9.
In this article, we assess Bayesian estimation and prediction using the integrated nested Laplace approximation (INLA) on a stochastic volatility (SV) model. This was performed through a Monte Carlo study with 1,000 simulated time series. To evaluate the estimation method, two criteria were considered: the bias and the square root of the mean squared error (smse). The criteria used for prediction are the one-step-ahead forecast of volatility and the one-day Value at Risk (VaR). The main findings are that the INLA approximations are fairly accurate and relatively robust to the choice of prior distribution on the persistence parameter. Additionally, VaR estimates are computed and compared for the return series of three financial indexes.

10.
For two independent non-homogeneous Poisson processes with unknown intensities we propose a test of the hypothesis that the ratio of the intensities is constant against the alternative that it is increasing on (0,t]. The existing procedures for testing such relative trends are based on conditioning on the number of failures observed in (0,t] from the two processes. Our test is unconditional and is based on the original time-truncated data, which enables meaningful asymptotics. We obtain the asymptotic null distribution (as t becomes large) of the proposed test statistic and show that the proposed test is consistent against several large classes of alternatives. It was observed by Park and Kim (IEEE Trans. Reliab. 40(1), 1992, 107–111) that it is difficult to distinguish between the power-law and log-linear processes for certain parameter values. We show that our test is consistent for such alternatives as well.

11.
We consider optimal testing procedures for specific models of early and instantaneous failures in reliability studies. These models are typically used to accommodate lifetime data that have a higher concentration of failures near time zero. We show that it is possible to derive uniformly most powerful tests, for testing the mixing parameter in the instantaneous failure model, for general lifetime distributions. A novel procedure to test for early failures, which uses an invariance property of the maximum likelihood estimate of the nuisance parameter, is also presented.

12.
When some explanatory variables in a regression are correlated with the disturbance term, instrumental variable methods are typically employed to make reliable inferences. Furthermore, to avoid difficulties associated with weak instruments, identification-robust methods are often proposed. However, it is hard to assess whether an instrumental variable is valid in practice because instrument validity is based on the questionable assumption that some of them are exogenous. In this paper, we focus on structural models and analyze the effects of instrument endogeneity on two identification-robust procedures, the Anderson–Rubin (1949, AR) and the Kleibergen (2002, K) tests, with or without weak instruments. Two main setups are considered: (1) the level of “instrument” endogeneity is fixed (does not depend on the sample size) and (2) the instruments are locally exogenous, i.e. the parameter which controls instrument endogeneity approaches zero as the sample size increases. In the first setup, we show that both test procedures are in general consistent against the presence of invalid instruments (hence asymptotically invalid for the hypothesis of interest), whether the instruments are “strong” or “weak”. We also describe cases where test consistency may not hold, but the asymptotic distribution is modified in a way that would lead to size distortions in large samples. These include, in particular, cases where the 2SLS estimator remains consistent, but the AR and K tests are asymptotically invalid. In the second setup, we find (non-degenerate) asymptotic non-central chi-square distributions in all cases, and describe cases where the non-centrality parameter is zero and the asymptotic distribution remains the same as in the case of valid instruments (despite the presence of invalid instruments). Overall, our results underscore the importance of checking for the presence of possibly invalid instruments when applying “identification-robust” tests.
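For concreteness, a sketch of the Anderson–Rubin statistic itself, whose behavior under instrument endogeneity the paper studies (a setup without included exogenous regressors; all names are illustrative):

```python
import numpy as np
from scipy.stats import f as f_dist

def anderson_rubin_test(y, Y, Z, beta0):
    """AR(beta0): regress y - Y @ beta0 on the instruments Z and test
    that all instrument coefficients are zero; F(k, n-k) under valid,
    exogenous instruments, whether they are weak or strong."""
    n, k = Z.shape
    u = y - Y @ beta0
    Pu = Z @ np.linalg.solve(Z.T @ Z, Z.T @ u)     # projection on span(Z)
    rss_explained = Pu @ Pu
    rss_residual = u @ u - rss_explained
    ar = (rss_explained / k) / (rss_residual / (n - k))
    return ar, f_dist.sf(ar, k, n - k)             # statistic, p-value

# Toy simultaneous-equations data with one endogenous regressor.
rng = np.random.default_rng(5)
n = 500
Z = rng.standard_normal((n, 3))                          # three instruments
v = rng.standard_normal(n)
Y = (Z @ np.array([0.5, 0.3, 0.2]) + v).reshape(-1, 1)   # first stage
y = Y[:, 0] + 0.8 * v + rng.standard_normal(n)           # structural beta = 1
print(anderson_rubin_test(y, Y, Z, np.array([1.0])))
```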

13.
Stefan Fremdt, Statistics, 2015, 49(1): 128–155
In a variety of settings, cumulative sum (CUSUM) procedures have been applied for the sequential detection of structural breaks in the parameters of stochastic models. Yet their performance depends strongly on the time of the change and is best under early-change scenarios; for later changes their finite-sample behavior is rather questionable. We therefore propose modified CUSUM procedures for the detection of abrupt changes in the regression parameter of multiple time series regression models, which show higher stability with respect to the time of change than ordinary CUSUM procedures. The asymptotic distributions of the test statistics and the consistency of the procedures are provided. A simulation study shows that the proposed procedures behave well in finite samples. Finally, the procedures are applied to a set of capital asset pricing data related to the Fama–French extension of the CAPM.
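As a point of reference, a sketch of an ordinary CUSUM monitoring scheme of the kind the paper modifies (the boundary constant c and exponent gamma are illustrative, not calibrated critical values):

```python
import numpy as np

def cusum_monitor(X, y, m, c=1.5, gamma=0.25):
    """Monitor a regression for a parameter change after a stable
    training sample of size m: cumulate standardized one-step residuals
    and flag when |Q_k| crosses the boundary
    c * sqrt(m) * (1 + k/m) * (k/(k+m))**gamma."""
    beta_hat, *_ = np.linalg.lstsq(X[:m], y[:m], rcond=None)
    sigma = (y[:m] - X[:m] @ beta_hat).std(ddof=X.shape[1])
    Q = 0.0
    for k in range(1, len(y) - m + 1):
        t = m + k - 1
        Q += (y[t] - X[t] @ beta_hat) / sigma       # standardized residual
        bound = c * np.sqrt(m) * (1 + k / m) * (k / (k + m)) ** gamma
        if abs(Q) > bound:
            return t                    # first alarm time
    return None

# Toy example: the intercept shifts at t = 300.
rng = np.random.default_rng(6)
n, m = 600, 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([0.0, 1.0]) + rng.standard_normal(n)
y[300:] += 1.0                          # abrupt change in the intercept
print(cusum_monitor(X, y, m))           # alarm shortly after 300
```

Because the detector cumulates from the start of monitoring, its delay grows with the change time; the paper's modified procedures are designed to reduce exactly this dependence.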

14.
Clinical trials usually involve efficiency and ethics objectives such as maximizing power and minimizing the total number of failures. Interim analysis is now a standard technique in practice for achieving these objectives, and randomized urn models have been studied extensively in the literature. In this paper, we propose to perform interim analysis on clinical trials based on urn models and study its properties. We show that the urn composition, the allocation of patients and the parameter estimators can be approximated by a joint Gaussian process. Consequently, the sequential test statistics of the proposed procedure converge to a Brownian motion in distribution and asymptotically satisfy the canonical joint distribution defined in Jennison & Turnbull (2000, Group Sequential Methods with Applications to Clinical Trials, Chapman and Hall/CRC). These results provide a solid foundation, and open the door, for performing interim analysis on randomized clinical trials with urn models in practice. Furthermore, we demonstrate our proposal through examples and simulations, applying sequential monitoring and stochastic curtailment techniques. The Canadian Journal of Statistics 40: 550–568; 2012 © 2012 Statistical Society of Canada
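A simulation sketch of one classical urn scheme covered by results of this kind, the randomized play-the-winner rule (parameters illustrative); an interim analysis would evaluate the sequential test statistic at interim looks of such a trial:

```python
import numpy as np

def play_the_winner(p_a, p_b, n_patients, seed=0):
    """Randomized play-the-winner urn: draw a ball to assign treatment;
    on success add a ball of the same type, on failure add one of the
    opposite type. Returns the allocation proportion to treatment A."""
    rng = np.random.default_rng(seed)
    balls = {"A": 1.0, "B": 1.0}
    n_a = 0
    for _ in range(n_patients):
        arm = "A" if rng.random() < balls["A"] / (balls["A"] + balls["B"]) else "B"
        n_a += arm == "A"
        success = rng.random() < (p_a if arm == "A" else p_b)
        reinforced = arm if success else ("B" if arm == "A" else "A")
        balls[reinforced] += 1.0
    return n_a / n_patients

# The limiting allocation to A is q_b / (q_a + q_b), q = failure probability.
print(play_the_winner(p_a=0.7, p_b=0.4, n_patients=5000))  # approx. 0.667
```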

15.
Risk of investing in a financial asset is quantified by functionals of squared returns. Discrete time stochastic volatility (SV) models impose a convenient and practically relevant time series dependence structure on the log-squared returns. Different long-term risk characteristics are postulated by short-memory SV and long-memory SV models. It is therefore important to test which of these two alternatives is suitable for a specific asset. Most standard tests are confounded by deterministic trends. This paper introduces a new, wavelet-based, test of the null hypothesis of short versus long memory in volatility which is robust to deterministic trends. In finite samples, the test performs better than currently available tests which are based on the Fourier transform.
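A rough illustration of the wavelet idea behind such tests, not the paper's test statistic (assumes the PyWavelets package; the slope reading is a heuristic): for an I(d)-type series, the variance of the level-j detail coefficients grows like 2^(2dj), so a regression of log2 variance on scale separates short memory (slope near 0) from long memory (positive slope).

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_memory_slope(x, wavelet="db4", level=7):
    """Estimate the memory parameter d from the slope of the log2
    detail-coefficient variance against the level j: roughly 0 for
    short memory, d > 0 for an I(d)-type long-memory series."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    details = coeffs[1:][::-1]                   # finest scale (j = 1) first
    j = np.arange(1, len(details) + 1)
    logvar = np.array([np.log2(np.var(d)) for d in details])
    slope = np.polyfit(j, logvar, 1)[0]
    return slope / 2.0                           # rough estimate of d

rng = np.random.default_rng(7)
noise = rng.standard_normal(2 ** 14)             # short memory: d = 0
walk = np.cumsum(noise)                          # integrated series: d = 1
print(wavelet_memory_slope(noise), wavelet_memory_slope(walk))
```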

16.
In this paper, we consider a binary response model for the analysis of the two-treatment, two-period, four-sequence crossover design. We introduce an intra-patient drug-dependency parameter into the model and provide two tests for the hypothesis of equal treatment effects. We use Monte Carlo simulation to compare our tests with a test that works under a parallel design, on the basis of type I error rate and power, and find that our procedures dominate the competitor in power. Finally, we use a data set to illustrate the applicability of our procedure.

17.
Longitudinal studies with a repeatedly measured dependent variable (outcome) and time-invariant covariates are common in biomedical and epidemiological studies. A useful statistical tool to evaluate the effects of covariates on the outcome variable over time is varying-coefficient regression, which considers a linear relationship between the covariates and the outcome at a specific time point but assumes the linear coefficients to be smooth curves over time. In order to provide adequate smoothing for each coefficient curve, Wu and Chiang (1999) proposed a class of component-wise kernel estimators and determined the large-sample convergence rates and some of the constant terms of the mean squared errors of their estimators. In this paper we calculate the explicit large-sample mean squared errors, including the convergence rates and all the constant terms, and the asymptotic distributions of the kernel estimators of Wu and Chiang (1999). These asymptotic distributions are used to construct point-wise confidence intervals and Bonferroni-type confidence bands for the coefficient curves. Through a Monte Carlo simulation, we show that our confidence regions have adequate coverage probabilities. Applying our procedures to an NIH fetal growth study, we show that our procedures are useful for determining the effects of maternal height, cigarette smoking and alcohol consumption on the growth of fetal abdominal circumference over time during pregnancy.

18.
In predictive models that contain near-integrated time series, endogeneity makes the Scheffé test of the parameters overly conservative in size, which correspondingly lowers its power. We correct for this with a dynamic augmentation that adds leads and lags of the differenced explanatory variables, and we compare the finite-sample properties of the statistic before and after the correction by simulation. The results show that the correction effectively reduces the impact of endogeneity on the Scheffé test. In small samples, the modified Scheffé test not only raises the power of the statistic but also markedly reduces size distortion.
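A sketch of the leads-and-lags augmentation described here, in the spirit of dynamic OLS (the lead/lag order p and all names are illustrative): the static regression of y on the near-integrated x is augmented with leads and lags of the differenced regressor so that the regression error is purged of its correlation with the regressor's innovations.

```python
import numpy as np

def leads_lags_design(y, x, p=2):
    """Augment the static regression of y on x with leads and lags
    -p..p of the differenced regressor dx, trimming the sample ends."""
    dx = np.diff(x)                     # dx[t] = x[t+1] - x[t]
    n = len(y)
    rows = range(p, n - p - 1)          # indices with a full lead/lag window
    X = [[1.0, x[t]] + [dx[t + j] for j in range(-p, p + 1)] for t in rows]
    return np.array(X), y[p:n - p - 1]

# Toy near-integrated regressor with endogenous errors.
rng = np.random.default_rng(8)
n = 500
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.98 * x[t - 1] + e[t]       # near unit root
u = 0.6 * e + rng.standard_normal(n)    # error correlated with x innovations
y = 2.0 * x + u
X, yy = leads_lags_design(y, x, p=2)
beta = np.linalg.lstsq(X, yy, rcond=None)[0]
print(beta[1])                          # slope estimate, near the true 2
```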
