Similar Documents
20 similar documents found.
1.
Long memory in conditional variance is one of the empirical features exhibited by many financial time series. One class of models suggested to capture this behavior is the so-called Fractionally Integrated GARCH (Baillie, Bollerslev and Mikkelsen 1996), in which the ideas of fractional integration originally introduced by Granger (1980) and Hosking (1981) for processes for the mean are applied to a GARCH framework. In this paper we derive analytic expressions for the second-order derivatives of the log-likelihood function of FIGARCH processes, with a view to the gains in computational speed and estimation accuracy they afford. The comparison is computationally intensive given the typical sample size of the time series involved and the way the likelihood function is built. An illustration is provided on exchange rate and stock index data. A preliminary version of this paper was presented at the conference S.Co. 2001 in Bressanone. We would like to thank Silvano Bordignon for being an insightful and constructive discussant and Luisa Bisaglia and Giorgio Calzolari for providing useful comments. We also thank Tim Bollerslev for providing the data on the DEM/USD exchange rate used in Baillie, Bollerslev and Mikkelsen (1996).
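The building block of any FIGARCH likelihood evaluation is the fractional-differencing filter (1-L)^d, whose coefficients obey a simple recursion. Below is a minimal numpy sketch of that filter and of the conditional-variance recursion of a simplified FIGARCH(0,d,0), sigma_t^2 = omega + [1 - (1-L)^d] eps_t^2, with the filter truncated at a fixed lag; the parameter values and truncation are illustrative assumptions, not the authors' estimation code, and the analytic second derivatives the paper derives are not reproduced here.

```python
import numpy as np

def frac_diff_weights(d, n_lags):
    """Coefficients pi_k of (1 - L)^d = sum_k pi_k L^k, via the
    standard recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    pi = np.empty(n_lags + 1)
    pi[0] = 1.0
    for k in range(1, n_lags + 1):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    return pi

def figarch_variance(eps2, omega, d, n_lags=1000):
    """Conditional variances of a simplified FIGARCH(0,d,0):
    sigma_t^2 = omega + (1 - (1-L)^d) eps_t^2, truncated at n_lags."""
    psi = -frac_diff_weights(d, n_lags)[1:]      # psi_k >= 0 for 0 < d < 1
    sigma2 = np.full_like(eps2, omega)
    for t in range(1, len(eps2)):
        k = min(t, n_lags)
        sigma2[t] = omega + psi[:k] @ eps2[t - 1::-1][:k]
    return sigma2

rng = np.random.default_rng(0)
eps2 = rng.standard_normal(500) ** 2             # placeholder squared returns
sig2 = figarch_variance(eps2, omega=0.1, d=0.4)
```

The slowly decaying weights psi_k are what make each likelihood evaluation expensive, which is why analytic (rather than numerical) derivatives pay off in Newton-type optimization.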

2.
A frequent question raised by practitioners doing unit root tests is whether these tests are sensitive to the presence of heteroscedasticity. Theoretically this is not the case for a wide range of heteroscedastic models. However, for some limiting cases such as degenerate and integrated heteroscedastic processes it is not obvious whether this holds. In this paper we report a Monte Carlo study analyzing the implications of various types of heteroscedasticity for three unit root tests: the usual Dickey-Fuller test, Phillips' (1987) semi-parametric test and a Dickey-Fuller type test using White's (1980) heteroscedasticity-consistent standard errors. The kinds of heteroscedasticity we examine are the GARCH model of Bollerslev (1986) and the Exponential ARCH model of Nelson (1991). In particular, we call attention to situations where the conditional variances exhibit a high degree of persistence, as is frequently observed for returns of financial time series, and the case where the variance process for the first class of models in fact becomes degenerate.
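A hedged sketch of the kind of Monte Carlo experiment described: simulate a random walk whose innovations follow a persistent GARCH(1,1), then record how often the augmented Dickey-Fuller test rejects its (true) unit root null at the 5% level. The parameter values and replication count are illustrative only.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
T, n_rep = 250, 500
omega, alpha, beta = 0.05, 0.15, 0.80    # alpha + beta = 0.95: persistent variance

rejections = 0
for _ in range(n_rep):
    eps = np.empty(T)
    sigma2 = omega / (1 - alpha - beta)  # start at the unconditional variance
    for t in range(T):
        eps[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * eps[t] ** 2 + beta * sigma2
    y = np.cumsum(eps)                   # random walk: the DF null is true
    pval = adfuller(y, regression="c")[1]
    rejections += pval < 0.05

print(f"empirical size at nominal 5%: {rejections / n_rep:.3f}")
```

An empirical size close to 0.05 is what the theory predicts for well-behaved heteroscedastic models; the interesting cases in the paper are the near-degenerate and integrated-variance limits.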

3.
This article considers short memory characteristics in a long memory process. We derive new asymptotic results for the sample autocorrelation difference ratios and use these results to develop a new portmanteau test that determines whether short memory parameters are statistically significant. In simulations, the new test detects short memory components more often than the Ljung-Box test when these components are in fact embedded in a long memory process. Interestingly, our test finds short memory autocorrelations in U.S. inflation rate data, whereas the Ljung-Box test fails to find them. Modeling these short memory autocorrelations of the inflation rate data leads to improved model accuracy and more precise prediction.
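The paper's own autocorrelation-difference-ratio statistic is not reproduced here, but the Ljung-Box benchmark it is compared against is standard; a minimal statsmodels sketch (the series and lag choices are placeholders):

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
x = rng.standard_normal(500)             # placeholder series

# Ljung-Box Q(m) = n(n+2) * sum_{k=1}^{m} r_k^2 / (n - k), chi^2(m) under H0
lb = acorr_ljungbox(x, lags=[10, 20], return_df=True)
print(lb)                                # columns: lb_stat, lb_pvalue
```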

4.
Risk of investing in a financial asset is quantified by functionals of squared returns. Discrete time stochastic volatility (SV) models impose a convenient and practically relevant time series dependence structure on the log-squared returns. Different long-term risk characteristics are postulated by short-memory SV and long-memory SV models. It is therefore important to test which of these two alternatives is suitable for a specific asset. Most standard tests are confounded by deterministic trends. This paper introduces a new, wavelet-based, test of the null hypothesis of short versus long memory in volatility which is robust to deterministic trends. In finite samples, the test performs better than currently available tests which are based on the Fourier transform.

5.
The first two stages in modelling time series are hypothesis testing and estimation. For long memory time series, the second stage was studied in M. Boutahar et al. [Estimation methods of the long memory parameter: Monte Carlo analysis and application, J. Appl. Statist. 34(3), pp. 261–301], in which we presented some estimation methods for the long memory parameter. The present paper is intended for the first stage, and hence completes the former, by exploring tests for detecting long memory in time series. We consider two kinds of tests: the non-parametric class and the semi-parametric one. We derive the limiting distribution of the non-parametric tests under the null of short memory and show that they are consistent against the alternative of long memory. We also perform Monte Carlo simulations to analyse the size distortion and the power of all proposed tests. We conclude that for large sample sizes the two classes are equivalent, but for small sample sizes the non-parametric class is better than the semi-parametric one.
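As one concrete member of the non-parametric class studied, here is a hedged sketch of the classical rescaled-range (R/S) statistic: under short memory R/S grows roughly like n^{1/2}, while long memory pushes the growth rate up to n^{d+1/2}, which is what consistency against the long memory alternative exploits. The normalization shown is illustrative; the paper's exact statistics may differ.

```python
import numpy as np

def rescaled_range(x):
    """Classical R/S statistic: range of the demeaned partial-sum
    process divided by the sample standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std(ddof=1)

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
print(rescaled_range(x) / np.sqrt(len(x)))   # O(1) under short memory
```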

6.
In accelerated life testing (ALT), products are exposed to stress levels higher than those at normal use in order to obtain failure information in a timely manner. Past work on planning ALT is predominantly about a single cause of failure. This article presents methods for planning ALT in the presence of k competing risks. Expressions for computing the Fisher information matrix are presented for the case where the risks are independently lognormally distributed. Optimal test plans are obtained under criteria based on determinants and maximum likelihood estimation. The proposed method is demonstrated on ALT of motor insulation.
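A hedged sketch of the log-likelihood underlying such Fisher information computations, for k = 2 independent lognormal competing risks: each unit contributes the density of the cause that failed times the survival of the other. The data, parameterization, and optimizer call are illustrative assumptions; the authors' stress-dependent models and design optimization are not reproduced.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_loglik(theta, t, cause):
    """theta = (mu1, log_s1, mu2, log_s2); t = failure times,
    cause in {0, 1} = index of the risk that failed first."""
    mu = np.array([theta[0], theta[2]])
    s = np.exp([theta[1], theta[3]])
    z = (np.log(t)[:, None] - mu) / s        # standardized log-times, shape (n, 2)
    logf = norm.logpdf(z) - np.log(s) - np.log(t)[:, None]  # lognormal log-density
    logS = norm.logsf(z)                     # lognormal log-survival
    n = len(t)
    ll = logf[np.arange(n), cause] + logS[np.arange(n), 1 - cause]
    return -ll.sum()

# illustrative data: observed time = min of two latent lognormal lifetimes
rng = np.random.default_rng(3)
latent = np.exp(rng.normal([4.0, 4.5], [0.6, 0.4], size=(200, 2)))
t, cause = latent.min(axis=1), latent.argmin(axis=1)
fit = minimize(neg_loglik, x0=[4, 0, 4, 0], args=(t, cause), method="BFGS")
print(fit.x)   # MLEs; fit.hess_inv approximates the inverse observed information
```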

7.
Two wavelet-based estimators are considered in this paper for the two parameters that characterize long range dependence processes. The first one is linear and is based on the statistical properties of the coefficients of a discrete wavelet transform of long range dependence processes. The estimator consists of measuring the slope (related to the long memory parameter) and the intercept (related to the variance of the process) of a linear regression after a discrete wavelet transform is performed (Veitch and Abry, 1999). In this paper its properties are reviewed, and analytic evidence is produced that the linear estimator is applicable only when the second parameter is unknown. To overcome this limitation, a non-linear wavelet-based estimator, which takes into account that the intercept depends on the long memory parameter, is proposed here for the cases in which the second parameter is known or the only parameter of interest is the long memory parameter. Under the same hypotheses assumed for the linear estimator, the non-linear estimator is shown to be asymptotically more efficient for the long memory parameter. Numerical simulations show that, even for small data sets, the bias is very small and the variance close to optimal. An application to ATM-based Internet traffic is presented. Financial support from the Italian Ministry of University and Scientific Research (MIUR), also in the context of the COFIN 2002 ALINWEB (Algorithms for the Internet and the Web) Project, is gratefully acknowledged.
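A minimal sketch of the linear (Veitch-Abry) step: take a discrete wavelet transform, regress the log2 energy per octave on the octave index, and read the memory parameter off the slope. Two simplifications are assumed here: the convention that the slope is about 2d for a spectral density behaving like |λ|^{-2d} near zero, and an unweighted regression, whereas the published estimator weights octaves by their variances.

```python
import numpy as np
import pywt

def wavelet_logscale(x, wavelet="db3", levels=8):
    """Return octaves j and log2 of the mean squared detail
    coefficients at each octave (the 'logscale diagram')."""
    coeffs = pywt.wavedec(np.asarray(x, float), wavelet, level=levels)
    details = coeffs[1:][::-1]               # reorder as octaves j = 1..levels
    mu = np.array([np.mean(d ** 2) for d in details])
    return np.arange(1, levels + 1), np.log2(mu)

rng = np.random.default_rng(4)
x = rng.standard_normal(4096)                # white noise: slope should be near 0
j, y = wavelet_logscale(x)
slope, intercept = np.polyfit(j, y, 1)
print(f"slope = {slope:.3f}  =>  d_hat = {slope / 2:.3f}")
```

The paper's point is visible in this setup: slope and intercept are estimated jointly, so using the intercept as a free parameter when it in fact depends on d leaves efficiency on the table.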

8.
This paper studies well-known tests by Kim et al. (J Econom 109:389–392, 2002) and Busetti and Taylor (J Econom 123:33–66, 2004) for the null hypothesis of short memory against a change to nonstationarity, I(1). The potential break point is not assumed to be known but is estimated from the data. First, we show that the tests are also applicable for a change from I(0) to a fractional order of integration I(d) with d > 0 (long memory), in that the tests are consistent. The rates of divergence of the test statistics are derived as functions of the sample size T and d. Second, we compare their finite sample power experimentally. Third, we consider break point estimation for a change from I(0) to I(d) in finite samples in computer simulations. It turns out that the estimators proposed for the integer case (d = 1) are practically reliable only if d is close enough to 1.

9.
For testing normality we investigate the power of several tests: first of all, the well-known test of Jarque & Bera (1980), and furthermore the tests of Kuiper (1960) and Shapiro & Wilk (1965), as well as tests of Kolmogorov–Smirnov and Cramér-von Mises type. The tests of normality are based, first, on independent random variables (model I) and, second, on the residuals in the classical linear regression (model II). We investigate the exact critical values of the Jarque–Bera test and of the Kolmogorov–Smirnov and Cramér-von Mises tests, in the latter case for the original and standardized observations where the unknown parameters μ and σ have to be estimated. The power comparison is carried out via Monte Carlo simulation assuming the model of contaminated normal distributions with varying parameters μ and σ and different proportions of contamination. It turns out that for the Jarque–Bera test the approximation of critical values by the chi-square distribution does not work very well. The test is superior in power to its competitors for symmetric distributions with medium to long tails and for slightly skewed distributions with long tails. The power of the Jarque–Bera test is poor for distributions with short tails, especially if the shape is bimodal; sometimes the test is even biased. In this case a modification of the Cramér-von Mises test or the Shapiro–Wilk test may be recommended.
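A hedged sketch of the power experiment described: draw data from a contaminated normal, (1-p)N(0,1) + pN(μ,σ²), and record the Jarque-Bera rejection rate. All parameters are illustrative, and the chi-square(2) approximation is used for the critical value, which is precisely the approximation the paper finds inaccurate (exact simulated critical values would replace it).

```python
import numpy as np
from scipy.stats import jarque_bera

rng = np.random.default_rng(5)
n, n_rep, p, mu_c, sd_c = 100, 2000, 0.1, 0.0, 3.0  # 10% longer-tailed contamination

rejections = 0
for _ in range(n_rep):
    contaminated = rng.random(n) < p
    x = np.where(contaminated, rng.normal(mu_c, sd_c, n), rng.standard_normal(n))
    stat, pval = jarque_bera(x)
    rejections += pval < 0.05       # chi-square(2) approximation of the null

print(f"empirical power: {rejections / n_rep:.3f}")
```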

10.
We establish the limiting distributions for empirical estimators of the coefficients of skewness and kurtosis and for the Jarque–Bera normality test statistic for long memory linear processes. We show that these estimators, contrary to the short memory case, are neither $\sqrt{n}$-consistent nor asymptotically normal. The normalizations needed to obtain the limiting distributions depend on the long memory parameter d. A direct consequence is that if the data are long memory, then testing normality with the Jarque–Bera test using chi-squared critical values is not valid. Statistical inference based on skewness, kurtosis, and the Jarque–Bera normality test therefore requires rescaling the corresponding statistics and computing new critical values from their nonstandard limiting distributions.

11.
邓露 (Deng Lu), 《统计研究》 (Statistical Research), 2010, 27(9): 97–102.
Using Monte Carlo simulation, this paper analyzes the distributional properties, and in particular the bias, of three semiparametric estimators of the long memory parameter in small samples. The results show that when long memory and short memory are both present, the estimators remain normally distributed in most cases, so t-statistics can still be constructed to test parameter significance in small samples. However, the short-run parameters bias the distributions of the estimators, which distorts both estimation and testing. Moreover, when the true data-generating process is close to nonstationary or is overdifferenced, the distributions of the semiparametric estimators change as well.
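The best-known semiparametric estimator of the kind studied is the Geweke-Porter-Hudak (GPH) log-periodogram regression; a minimal numpy sketch follows (the bandwidth m = n^{0.5} is a common illustrative choice, and the short-memory contamination the paper analyzes is exactly what biases this regression at larger bandwidths).

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """GPH estimator: regress log I(lambda_j) on
    -log(4 sin^2(lambda_j / 2)) for j = 1..m; the slope is d_hat."""
    x = np.asarray(x, float)
    n = len(x)
    m = int(n ** power)                      # bandwidth: number of frequencies
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    reg = -np.log(4 * np.sin(lam / 2) ** 2)
    d_hat = np.polyfit(reg, np.log(I), 1)[0]
    return d_hat

rng = np.random.default_rng(6)
print(gph_estimate(rng.standard_normal(2000)))   # white noise: d_hat near 0
```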

12.
The concept of fractional cointegration (Cheung and Lai in J Bus Econ Stat 11:103–112, 1993) was introduced to generalize traditional cointegration (Engle and Granger in Econometrica 55:251–276, 1987) to the long memory framework. In this work we propose a test for fractional cointegration based on the sieve bootstrap and compare by simulation the performance of our proposal with other semiparametric methods in the literature: the three-step technique of Marinucci and Robinson (J Econom 105:225–247, 2001) and the procedure to determine the fractional cointegration rank of Robinson and Yajima (J Econom 106:217–241, 2002).
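The sieve bootstrap itself is a generic algorithm: fit a long autoregression, resample the centered AR residuals with replacement, and rebuild bootstrap series from the fitted recursion. A hedged sketch of that resampling step is below (the fractional cointegration test statistic built on top of it is not reproduced; lag order and series are placeholders).

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def sieve_bootstrap(x, p, n_boot, rng):
    """Generate n_boot bootstrap copies of x from a fitted AR(p) sieve."""
    res = AutoReg(x, lags=p, trend="c").fit()
    const, phi = res.params[0], res.params[1:]
    eps = res.resid - res.resid.mean()       # centered residuals
    n = len(x)
    out = np.empty((n_boot, n))
    for b in range(n_boot):
        e = rng.choice(eps, size=n, replace=True)
        xb = np.zeros(n)
        xb[:p] = x[:p]                       # initialize with observed values
        for t in range(p, n):
            xb[t] = const + phi @ xb[t - p:t][::-1] + e[t]
        out[b] = xb
    return out

rng = np.random.default_rng(7)
x = np.convolve(rng.standard_normal(400), [1, 0.5], mode="same")  # toy series
boot = sieve_bootstrap(x, p=4, n_boot=99, rng=rng)
```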

13.
Long-memory tests are often complicated by the presence of deterministic trends, so an additional detrending step is necessary. The typical way to detrend a suspected long-memory series is to use OLS or BSP residuals. Applying the method of sensitivity analysis, we address the question of how robust these residuals are in the presence of potential long memory components. Unlike short-memory ARMA processes, long-memory I(d) processes cause sensitivity in the OLS/BSP residuals. We therefore develop a finite sample measure of the sensitivity of a detrended series based on the residuals, and from it propose a "rule of thumb" for practitioners choosing between the two detrending methods.
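A hedged sketch of the OLS detrending step whose sensitivity is at issue: regress the series on a constant and linear trend, then inspect the residual autocorrelations, which decay slowly when a long memory component is present. The BSP alternative and the paper's sensitivity measure are not reproduced.

```python
import numpy as np

def ols_detrend(y):
    """Residuals from an OLS regression of y on a constant and linear trend."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def acf(x, nlags):
    x = x - x.mean()
    denom = x @ x
    return np.array([x[:-k] @ x[k:] / denom for k in range(1, nlags + 1)])

rng = np.random.default_rng(8)
y = 0.01 * np.arange(1000) + rng.standard_normal(1000)  # trend + short-memory noise
print(acf(ols_detrend(y), nlags=5))                     # near zero for short memory
```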

14.
Testing against ordered alternatives in the c-sample location problem plays an important role in statistical practice. The parametric test proposed by Barlow et al. (in the following called the 'B-test') is an appropriate test under the model of normality. For non-normal data, however, there are rank tests with higher power than the B-test, such as the Jonckheere test or the so-called Jonckheere-type tests introduced and studied by Büning and Kössler. However, we usually have no information about the underlying distribution. Thus, an adaptive test should be applied which takes the given data set into account. Two versions of such an adaptive test are proposed, based on the concept introduced by Hogg in 1974. These adaptive tests are compared with each of the single Jonckheere-type tests in the adaptive scheme and also with the B-test. It is shown via Monte Carlo simulation that the adaptive tests behave well over a broad class of symmetric distributions with short, medium and long tails, as well as for asymmetric distributions.
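A hedged sketch of the basic Jonckheere-Terpstra statistic around which the Jonckheere-type and adaptive tests are built: sum the Mann-Whitney counts over all ordered pairs of samples and standardize with the no-ties null moments. Hogg's selector statistics and the modified scores of the Büning-Kössler tests are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def jonckheere(samples):
    """Jonckheere-Terpstra test against an increasing-location ordering.
    Returns the standardized statistic and a one-sided p-value (no ties)."""
    J = sum((x[:, None] < y[None, :]).sum()
            for i, x in enumerate(samples) for y in samples[i + 1:])
    n = np.array([len(s) for s in samples])
    N = n.sum()
    mean = (N ** 2 - (n ** 2).sum()) / 4
    var = (N ** 2 * (2 * N + 3) - (n ** 2 * (2 * n + 3)).sum()) / 72
    z = (J - mean) / np.sqrt(var)
    return z, norm.sf(z)

rng = np.random.default_rng(9)
groups = [rng.normal(loc, 1.0, 30) for loc in (0.0, 0.3, 0.6)]  # ordered shifts
print(jonckheere([np.asarray(g) for g in groups]))
```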

15.
In a recent volume of this journal, Holden [Testing the normality assumption in the Tobit Model, J. Appl. Stat. 31 (2004), pp. 521–532] presents Monte Carlo evidence comparing several tests for departures from normality in the Tobit Model. This study adds to the work of Holden by considering another test, and several information criteria, for detecting departures from normality in the Tobit Model. The test given here is a modified likelihood ratio statistic based on a partially adaptive estimator of the Censored Regression Model using the approach of Caudill [A partially adaptive estimator for the Censored Regression Model based on a mixture of normal distributions, Working Paper, Department of Economics, Auburn University, 2007]. The information criteria examined include Akaike's Information Criterion (AIC), the Consistent AIC (CAIC), the Bayesian Information Criterion (BIC), and Akaike's BIC (ABIC). In terms of fewest 'rejections' of a true null, the best performance is exhibited by the CAIC and the BIC, although, like some of the statistics examined by Holden, each presents computational difficulties.
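The criteria compared differ only in the penalty they attach to the number of parameters k. A minimal sketch given a maximized log-likelihood follows; the CAIC shown is Bozdogan's consistent AIC, while definitions of the "ABIC" vary across sources, so the sample-size-adjusted form shown is an assumption and is flagged as such in the comment.

```python
import numpy as np

def info_criteria(loglik, k, n):
    """Common information criteria: smaller is better."""
    return {
        "AIC":  -2 * loglik + 2 * k,
        "BIC":  -2 * loglik + k * np.log(n),
        "CAIC": -2 * loglik + k * (np.log(n) + 1),       # Bozdogan's consistent AIC
        "ABIC": -2 * loglik + k * np.log((n + 2) / 24),  # assumed adjusted-BIC form
    }

print(info_criteria(loglik=-512.3, k=4, n=300))
```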

16.
To test the extreme value condition, Cramér-von Mises type tests were recently proposed by Drees et al. (2006) and Dietrich et al. (2002). Hüsler and Li (2006) presented a simulation study on the behavior of these tests and verified that they are not robust for models in the domain of attraction of a max-semistable distribution function. In this work we develop a test statistic that distinguishes quite well between distribution functions belonging to a max-stable domain of attraction and those in a max-semistable one. The limit law is deduced and results from a numerical simulation study are presented.

17.
A Cointegration Analysis of the Relationship between China's Grain Prices and Inflation: 2001–2005
Using monthly data on the CPI and wholesale grain prices from 2001 to 2005, this article performs a cointegration analysis and builds an equilibrium-correction model to test the relationship between the two empirically. The results show that inflation Granger-causes grain prices in both the long and the short run, supporting the hypothesis that inflation affects grain prices by changing the expectations of grain market participants. In the long run, grain prices also Granger-cause inflation: persistently high grain prices lead to inflation. In the short run, however, the effect of grain prices on inflation is weak, which does not support the view that rising grain prices trigger inflation in the short term.
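A hedged sketch of the toolchain described (an Engle-Granger cointegration test followed by Granger causality tests), run here on simulated placeholder series standing in for the monthly CPI and wholesale grain price data:

```python
import numpy as np
from statsmodels.tsa.stattools import coint, grangercausalitytests

rng = np.random.default_rng(10)
common = np.cumsum(rng.standard_normal(60))           # shared stochastic trend
cpi = common + 0.3 * rng.standard_normal(60)          # placeholder monthly CPI
grain = 0.8 * common + 0.3 * rng.standard_normal(60)  # placeholder grain prices

tstat, pvalue, _ = coint(grain, cpi)                  # Engle-Granger residual test
print(f"cointegration p-value: {pvalue:.3f}")

# does CPI Granger-cause grain prices? (second column tested as cause of first)
res = grangercausalitytests(np.column_stack([grain, cpi]), maxlag=3)
```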

18.
This paper deals with revisions of macroeconomic Italian data, with the aim of investigating whether consecutive vintages published by the National Statistical Institute contain useful information for economic analysis and forecasting. The rationality of the revision process is tested on the complete history of the data, and an application showing how revisions can improve the precision of forecasts is proposed. The results on Italian GDP show that embedding the revision process in a dynamic factor model helps to reduce the forecast error in the short term.

19.
We test for the presence of long memory in daily stock returns and their squares using the robust semiparametric procedure of Lobato and Robinson. Spurious results can be produced by nonstationarity and aggregation. We address these problems by analyzing subperiods of returns and by using individual stocks. The test results show no evidence of long memory in the returns. By contrast, there is strong evidence of long memory in the squared returns.
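A sketch of a Lobato-Robinson-type LM statistic as it is commonly written: with I(λ_j) the periodogram at the first m Fourier frequencies and ν_j = log j minus its mean, t = √m · Σν_j I_j / ΣI_j is compared with a standard normal under the short memory null. This form and the bandwidth rule are assumptions for illustration; the exact statistic in the paper may differ.

```python
import numpy as np

def lobato_robinson_t(x, power=0.5):
    """LM-type test for long memory: t approx N(0,1) under d = 0 (assumed
    form); large negative values point toward long memory (d > 0)."""
    x = np.asarray(x, float)
    n = len(x)
    m = int(n ** power)                      # illustrative bandwidth choice
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    j = np.arange(1, m + 1)
    nu = np.log(j) - np.log(j).mean()
    return np.sqrt(m) * (nu @ I) / I.sum()

rng = np.random.default_rng(11)
r = rng.standard_normal(2000)                # placeholder daily returns
print(lobato_robinson_t(r), lobato_robinson_t(r ** 2))
```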

20.
Model selection problems arise when constructing unbiased or asymptotically unbiased estimators of measures known as discrepancies in order to find the best model. Most of the usual criteria are based on goodness-of-fit and parsimony, and aim to maximize a transformed version of the likelihood. For linear regression models with normally distributed errors, the situation is less clear when two models are equivalent: are they close to or far from the unknown true model? In this work, based on stochastic simulation and parametric simulation, we study the results of Vuong's test, Cox's test, Akaike's information criterion, the Bayesian information criterion, the Kullback information criterion and the bias-corrected Kullback information criterion, and the ability of these tests to discriminate between non-nested linear models.
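Of the tools compared, the Vuong test for non-nested models has a particularly simple closed form: standardize the mean pointwise log-likelihood difference between the two fitted models. A minimal sketch under the assumption of strictly non-nested models, with no variance-degeneracy correction, on illustrative inputs:

```python
import numpy as np
from scipy.stats import norm

def vuong_test(loglik1, loglik2):
    """Vuong statistic for strictly non-nested models.
    loglik1, loglik2: per-observation log-likelihoods under each model."""
    m = np.asarray(loglik1) - np.asarray(loglik2)
    n = len(m)
    v = np.sqrt(n) * m.mean() / m.std(ddof=1)
    return v, 2 * norm.sf(abs(v))    # large |v| favors one model decisively

# illustrative: per-observation log-likelihoods from two fitted models
rng = np.random.default_rng(12)
ll1, ll2 = rng.normal(-1.4, 0.5, 200), rng.normal(-1.45, 0.5, 200)
print(vuong_test(ll1, ll2))
```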
