Similar Literature
A total of 20 similar records were retrieved.
1.
Econometric Reviews, 2013, 32(3): 215-228
Abstract

Decisions based on econometric model estimates may not have the expected effect if the model is misspecified. Thus, specification tests should precede any analysis. Bierens' specification test is consistent and has optimality properties against some local alternatives. A shortcoming is that the test statistic is not distribution free, even asymptotically, which makes the test infeasible in practice. There have been many suggestions to circumvent this problem, including the use of upper bounds for the critical values. However, these suggestions lead to tests that lose power and optimality against local alternatives. In this paper we show that bootstrap methods allow us to recover the power and optimality of Bierens' original test. The bootstrap also provides reliable p-values, which have a central role in Fisher's theory of hypothesis testing. The paper also includes a discussion of the properties of the bootstrap nonlinear least squares estimator under local alternatives.
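As an illustration of how a bootstrap can deliver p-values for a Bierens-type statistic, the sketch below computes the integrated conditional moment statistic in its closed form under a Gaussian weighting measure and calibrates it with a wild (Rademacher) bootstrap. This is a minimal sketch under those assumptions, not the paper's exact procedure; the function names, the weighting measure and the OLS null model in the example are illustrative choices.

```python
import numpy as np

def icm_statistic(resid, X):
    """Bierens-type integrated conditional moment statistic with a Gaussian
    weighting measure; closed form (1/n) * e' W e with W_ij = exp(-0.5*||x_i - x_j||^2)."""
    diffs = X[:, None, :] - X[None, :, :]
    W = np.exp(-0.5 * np.sum(diffs ** 2, axis=-1))
    n = len(resid)
    return resid @ W @ resid / n

def wild_bootstrap_pvalue(y, X, fit_fn, n_boot=499, rng=None):
    """Wild-bootstrap p-value for the ICM statistic.
    fit_fn(y, X) must return fitted values under the null model."""
    rng = np.random.default_rng(rng)
    fitted = fit_fn(y, X)
    resid = y - fitted
    t_obs = icm_statistic(resid, X)
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        # Rademacher weights preserve the conditional variance of the residuals
        v = rng.choice([-1.0, 1.0], size=len(y))
        y_star = fitted + resid * v
        resid_star = y_star - fit_fn(y_star, X)
        t_boot[b] = icm_statistic(resid_star, X)
    return (1 + np.sum(t_boot >= t_obs)) / (n_boot + 1)

if __name__ == "__main__":
    # Example: test a linear null model on simulated data
    rng = np.random.default_rng(0)
    n = 100
    X = rng.normal(size=(n, 1))
    y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=n)

    def ols_fit(y, X):
        Z = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return Z @ beta

    print("bootstrap p-value:", wild_bootstrap_pvalue(y, X, ols_fit, rng=1))
```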

2.
A frequent question raised by practitioners doing unit root tests is whether these tests are sensitive to the presence of heteroscedasticity. Theoretically this is not the case for a wide range of heteroscedastic models. However, for some limiting cases, such as degenerate and integrated heteroscedastic processes, it is not obvious whether there will be an effect. In this paper we report a Monte Carlo study analyzing the implications of various types of heteroscedasticity for three unit root tests: the usual Dickey-Fuller test, Phillips' (1987) semi-parametric test, and a Dickey-Fuller-type test using White's (1980) heteroscedasticity-consistent standard errors. The forms of heteroscedasticity we examine are the GARCH model of Bollerslev (1986) and the Exponential ARCH model of Nelson (1991). In particular, we call attention to situations where the conditional variances exhibit a high degree of persistence, as is frequently observed for returns of financial time series, and to the case where the variance process for the first class of models in fact becomes degenerate.
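A minimal Monte Carlo sketch of the kind of experiment described: a random walk driven by persistent GARCH(1,1) innovations is simulated and the augmented Dickey-Fuller test from statsmodels is applied to measure the empirical rejection rate under the unit-root null. The parameter values and the use of adfuller, rather than the exact battery of tests compared in the paper, are assumptions made for illustration.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def simulate_garch_errors(n, omega=0.05, alpha=0.1, beta=0.85, rng=None):
    """Simulate GARCH(1,1) innovations; alpha + beta close to 1 gives
    highly persistent conditional variances."""
    rng = np.random.default_rng(rng)
    eps = np.empty(n)
    h = omega / (1.0 - alpha - beta)  # unconditional variance as start value
    for t in range(n):
        eps[t] = np.sqrt(h) * rng.standard_normal()
        h = omega + alpha * eps[t] ** 2 + beta * h
    return eps

def rejection_rate(n_obs=250, n_rep=1000, level=0.05, rng=0):
    """Empirical size of the ADF test when the unit-root null is true
    but the innovations follow a persistent GARCH(1,1) process."""
    rng = np.random.default_rng(rng)
    rejections = 0
    for _ in range(n_rep):
        eps = simulate_garch_errors(n_obs, rng=rng.integers(1 << 31))
        y = np.cumsum(eps)            # random walk: unit root holds
        pval = adfuller(y, regression="c", autolag="AIC")[1]
        rejections += pval < level
    return rejections / n_rep

if __name__ == "__main__":
    print("empirical size at 5% level:", rejection_rate(n_rep=200))
```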

3.
When prediction intervals are constructed using unobserved component models (UCM), problems can arise due to the possible existence of components that may or may not be conditionally heteroscedastic. Accurate coverage depends on correctly identifying the source of the heteroscedasticity. Different proposals for testing heteroscedasticity have been applied to UCM; however, in most cases, these procedures are unable to identify the heteroscedastic component correctly. The main issue is that the test statistics are affected by the presence of serial correlation, so that the distribution of the statistic under conditional homoscedasticity remains unknown. We propose a nonparametric statistic for testing heteroscedasticity based on the well-known Wilcoxon rank statistic. We study the asymptotic validity of the statistic and examine bootstrap procedures for approximating its finite-sample distribution. Simulation results show an improvement in the size of the homoscedasticity tests and a power that is clearly comparable with the best alternative in the literature. We also apply the test to real inflation data. Testing for a conditionally heteroscedastic effect in the error terms, we reach conclusions that differ in almost all cases from those given by the alternative test statistics presented in the literature.
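The paper's Wilcoxon-based statistic is not reproduced in the abstract; purely as an illustration of a rank-based heteroscedasticity check, the hypothetical sketch below applies a rank-sum test to the squared residuals of the first and second halves of the sample (a Goldfeld-Quandt-style split). The split rule and the function names are assumptions, not the proposed test.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def rank_heteroscedasticity_check(resid):
    """Illustrative Wilcoxon-type check: under homoscedasticity the squared
    residuals in the first and second halves of the sample should have the
    same location, so a rank-sum test should not reject."""
    resid = np.asarray(resid, dtype=float)
    sq = resid ** 2
    half = len(sq) // 2
    stat, pval = mannwhitneyu(sq[:half], sq[len(sq) - half:],
                              alternative="two-sided")
    return stat, pval

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    n = 200
    homosked = rng.standard_normal(n)
    # variance grows over time -> heteroscedastic
    heterosked = rng.standard_normal(n) * np.linspace(0.5, 2.0, n)
    print("homoscedastic   p-value:", rank_heteroscedasticity_check(homosked)[1])
    print("heteroscedastic p-value:", rank_heteroscedasticity_check(heterosked)[1])
```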

4.
ABSTRACT

This paper proposes an adaptive quasi-maximum likelihood estimation (QMLE) approach for forecasting the volatility of financial data with the generalized autoregressive conditional heteroscedasticity (GARCH) model. When the distribution of the volatility data is unspecified or heavy-tailed, we work out an adaptive QMLE, based on the data, that uses the scale parameter ηf to identify the discrepancy between the wrongly specified innovation density and the true innovation density. Under only a few assumptions, this adaptive approach is consistent and asymptotically normal. Moreover, it gains efficiency when the innovation error is heavy-tailed. Finally, simulation studies and an application illustrate its advantage.
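For orientation, the sketch below fits a GARCH(1,1) model by standard Gaussian quasi-maximum likelihood with scipy; it does not implement the paper's adaptive step involving the scale parameter ηf, which would additionally require estimating the innovation density. Start values, parameter constraints and the simulated heavy-tailed example are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_filter(params, returns):
    """Conditional variance recursion for a GARCH(1,1) model."""
    omega, alpha, beta = params
    h = np.empty_like(returns)
    h[0] = np.var(returns)               # simple start value
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h

def neg_gaussian_quasi_loglik(params, returns):
    """Gaussian quasi-log-likelihood (up to a constant); a valid estimating
    criterion even if the true innovations are non-Gaussian."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf
    h = garch11_filter(params, returns)
    return 0.5 * np.sum(np.log(h) + returns ** 2 / h)

def fit_garch11_qmle(returns):
    res = minimize(neg_gaussian_quasi_loglik, x0=[0.05, 0.05, 0.9],
                   args=(returns,), method="Nelder-Mead")
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # simulate GARCH(1,1) data with heavy-tailed (Student t) innovations
    n, omega, alpha, beta = 2000, 0.1, 0.1, 0.8
    r, h = np.empty(n), omega / (1 - alpha - beta)
    for t in range(n):
        z = rng.standard_t(df=5) / np.sqrt(5 / 3)   # unit-variance t(5)
        r[t] = np.sqrt(h) * z
        h = omega + alpha * r[t] ** 2 + beta * h
    print("QMLE estimates (omega, alpha, beta):", fit_garch11_qmle(r))
```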

5.
ABSTRACT

This paper analyses the behaviour of goodness-of-fit tests for regression models. To this end, it uses statistics based on an estimate of the integrated regression function when there are missing observations either in the response variable or in some of the covariates. It proposes several versions of one empirical process, constructed from a preliminary estimate, that either use only the complete observations or replace the missing observations with imputed values. In the case of missing covariates, a link model is used to fill in the missing observations from other, complete covariates. In all situations, bootstrap methodology is used to calibrate the distribution of the test statistics. A broad simulation study compares the different procedures based on the empirical-regression methodology with smoothed tests previously studied in the literature. The comparison reflects the effect of the correlation between the covariates on the tests based on the imputed sample for missing covariates. In addition, the paper proposes a computational binning strategy to evaluate the tests based on an empirical process for large data sets. Finally, two applications to real data illustrate the performance of the tests.

6.
The quantile-quantile plot is widely used to check normality, and the plot depends on the plotting positions. Many commonly used plotting positions do not depend on the sample values. We propose an adaptive plotting position that depends on the relative distances of the two neighbouring sample values. The correlation coefficient obtained from the adaptive plotting position is used to test normality. The test using the adaptive plotting position performs better than the Shapiro-Wilk W test for small samples and has larger power than tests based on Hazen's and Blom's plotting positions for symmetric alternatives with shorter tails than the normal and for skewed alternatives when n is 20 or larger. The Brown-Hettmansperger T* test is designed to detect bad tail behaviour, so it has no power against symmetric alternatives with shorter tails than the normal, but it is generally better than the other tests when β2 is greater than 3.25.
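The adaptive plotting positions proposed in the paper depend on neighbouring sample values and are not spelled out in the abstract; the sketch below therefore illustrates only the general correlation-coefficient normality test with the classical Blom and Hazen positions, using a Monte Carlo critical value. The function names and chosen positions are assumptions.

```python
import numpy as np
from scipy.stats import norm

def qq_correlation(x, plotting_position="blom"):
    """Correlation between the ordered data and normal quantiles evaluated at
    fixed plotting positions (Blom: (i - 0.375) / (n + 0.25))."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    if plotting_position == "blom":
        p = (i - 0.375) / (n + 0.25)
    else:                                  # Hazen's positions
        p = (i - 0.5) / n
    q = norm.ppf(p)
    return np.corrcoef(x, q)[0, 1]

def mc_critical_value(n, level=0.05, n_rep=5000, rng=0):
    """Monte Carlo lower critical value of the correlation under normality:
    reject normality when the observed correlation falls below it."""
    rng = np.random.default_rng(rng)
    r = np.array([qq_correlation(rng.standard_normal(n)) for _ in range(n_rep)])
    return np.quantile(r, level)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    sample = rng.exponential(size=30)        # skewed alternative
    r_obs = qq_correlation(sample)
    r_crit = mc_critical_value(30)
    print(f"r = {r_obs:.4f}, critical value = {r_crit:.4f}, "
          f"reject normality: {r_obs < r_crit}")
```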

7.
ABSTRACT

A simple test based on Gini's mean difference is proposed for testing the hypothesis of equality of population variances. Using 2000 replicated samples and empirical distributions, we show that the test compares favourably with Bartlett's and Levene's tests for the normal population. It is also more powerful than Bartlett's and Levene's tests under some alternative hypotheses for some non-normal distributions, and more robust than the other two tests for large sample sizes under some alternative hypotheses. We also give an approximate distribution for the test statistic to enable one to calculate nominal levels and p-values.
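The paper's exact statistic and its approximate distribution are not given in the abstract; as a generic illustration of using Gini's mean difference to compare scales, the sketch below forms a ratio of group Gini mean differences and calibrates it by permutation after median-centering. The ratio statistic and the centering step are assumptions, not the proposed test.

```python
import numpy as np

def gini_mean_difference(x):
    """Gini's mean difference: average absolute difference over all pairs."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.abs(x[:, None] - x[None, :]).sum() / (n * (n - 1))

def scale_ratio_statistic(samples):
    """Ratio of the largest to the smallest Gini mean difference across groups."""
    g = np.array([gini_mean_difference(s) for s in samples])
    return g.max() / g.min()

def permutation_test_equal_scale(samples, n_perm=2000, rng=0):
    """Permutation p-value for equality of scale: groups are centered at
    their medians, pooled, and reassigned at random."""
    rng = np.random.default_rng(rng)
    centered = [np.asarray(s, float) - np.median(s) for s in samples]
    sizes = [len(s) for s in centered]
    pooled = np.concatenate(centered)
    t_obs = scale_ratio_statistic(centered)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        groups = np.split(perm, np.cumsum(sizes)[:-1])
        count += scale_ratio_statistic(groups) >= t_obs
    return (count + 1) / (n_perm + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 2, 30)    # twice the standard deviation
    print("p-value:", permutation_test_equal_scale([a, b]))
```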

8.
This paper examines the robustness of the Welch test, the James test and Tan's ANOVA test (referred to as the Fβ test) for testing the parallelism of k straight lines under heteroscedasticity and nonnormality. Results of Monte Carlo studies demonstrate the robustness of all three tests with respect to departures from normality. Further, there is hardly any difference between these methods with respect to either the power or the size of the test.

9.
This paper surveys different types of tests (parametric, nonparametric, robustified and adaptive) and applies them to the two-sided c-sample location problem. Some concepts of robustness are discussed, such as the breakdown point, the influence function, gross-error sensitivity and especially α- and β-robustness. A robustness study of the level α under heteroscedasticity and nonnormal distributions is carried out via Monte Carlo methods, along with a power comparison of all the tests considered. It turns out that robustified versions of the F-test and the Welch test, in which the original observations are replaced by their ranks, behave well over a broad class of distributions, both symmetric ones with different tail weights and asymmetric ones, but, on the whole, an adaptive test is to be preferred.

10.
ABSTRACT

In this paper, the maximum value test is proposed and considered for the two-sample problem with lifetime data. The test is distribution free in the absence of censoring but is not distribution free under censoring. A formula for the limit distribution of the proposed maximum value test is given in the general case, and the distribution of the test statistic has been studied experimentally. We also propose an estimate for the p-value of the maximum value test that avoids Monte Carlo simulation. The test is useful and applicable when one would otherwise have to choose among the logrank test, the Cox-Mantel test, the Q test and the generalized Wilcoxon tests, for instance Gehan's generalized Wilcoxon test and Peto and Peto's generalized Wilcoxon test.

11.
This paper presents a study of different types of tests for the two-sided c-sample scale problem. We consider the classical parametric test of Bartlett [M.S. Bartlett, Properties of sufficiency and statistical tests, Proc. R. Stat. Soc. Ser. A 160 (1937), pp. 268-282] and several nonparametric tests, especially the test of Fligner and Killeen [M.A. Fligner and T.J. Killeen, Distribution-free two-sample tests for scale, J. Amer. Statist. Assoc. 71 (1976), pp. 210-213], the test of Levene [H. Levene, Robust tests for equality of variances, in Contributions to Probability and Statistics, I. Olkin, ed., Stanford University Press, Palo Alto, 1960, pp. 278-292] and a robust version of it introduced by Brown and Forsythe [M.B. Brown and A.B. Forsythe, Robust tests for the equality of variances, J. Amer. Statist. Assoc. 69 (1974), pp. 364-367], as well as two adaptive tests proposed by Büning [H. Büning, Adaptive tests for the c-sample location problem - the case of two-sided alternatives, Comm. Statist. Theory Methods 25 (1996), pp. 1569-1582] and Büning [H. Büning, An adaptive test for the two sample scale problem, Nr. 2003/10, Diskussionsbeiträge des Fachbereich Wirtschaftswissenschaft der Freien Universität Berlin, Volkswirtschaftliche Reihe, 2003], which are based on the principle of Hogg [R.V. Hogg, Adaptive robust procedures: A partial review and some suggestions for future applications and theory, J. Amer. Statist. Assoc. 69 (1974), pp. 909-927]. For all the tests we also use bootstrap sampling strategies. We compare all the tests via Monte Carlo methods by investigating the level α and power β for distributions with different degrees of tail weight and skewness and for various sample sizes. It turns out that the test of Fligner and Killeen in combination with the bootstrap is the best among all tests considered.
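A minimal sketch of the combination singled out as best, the Fligner-Killeen test with bootstrap calibration: the null distribution of scipy's Fligner-Killeen statistic is approximated by resampling from the median-centered pooled sample. This particular resampling scheme is an assumption; the cited work may implement the bootstrap differently.

```python
import numpy as np
from scipy.stats import fligner

def bootstrap_fligner_killeen(samples, n_boot=2000, rng=0):
    """Bootstrap calibration of the Fligner-Killeen scale test: the null
    distribution of the statistic is approximated by resampling, with
    replacement, from the median-centered pooled sample."""
    rng = np.random.default_rng(rng)
    samples = [np.asarray(s, dtype=float) for s in samples]
    t_obs = fligner(*samples).statistic
    pooled = np.concatenate([s - np.median(s) for s in samples])
    sizes = [len(s) for s in samples]
    count = 0
    for _ in range(n_boot):
        groups = [rng.choice(pooled, size=m, replace=True) for m in sizes]
        count += fligner(*groups).statistic >= t_obs
    return t_obs, (count + 1) / (n_boot + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # three samples from a skewed distribution with unequal scales
    x1 = rng.exponential(scale=1.0, size=20)
    x2 = rng.exponential(scale=1.0, size=20)
    x3 = rng.exponential(scale=2.0, size=20)
    stat, pval = bootstrap_fligner_killeen([x1, x2, x3])
    print(f"FK statistic = {stat:.3f}, bootstrap p-value = {pval:.4f}")
```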

12.
The purpose of this note is to derive, by a single approach, simple testing procedures for ANOVA under heteroscedasticity that are equivalent to the existing results in the literature obtained by the parametric bootstrap and the generalized fiducial approach. By a similar approach, researchers are encouraged to derive generalized tests in other applications, as alternatives to parametric bootstrap tests and fiducial tests, including ANCOVA and MANOVA under heteroscedasticity and especially mixed-model applications, where the bootstrap approach fails.

13.
The growth rate of gross domestic product (GDP) usually exhibits heteroscedasticity, asymmetry and fat tails. In this study three important and significantly heteroscedastic GDP growth series are examined. A normal distribution, a normal mixture (NM), a normal-asymmetric Laplace distribution and a Student's t-asymmetric Laplace (TAL) mixture distribution are considered when comparing distributional fits to the GDP growth series after removing heteroscedasticity. The parameters of the distributions have been estimated by the maximum likelihood method. Based on the results of different accuracy measures, goodness-of-fit tests and plots, we find that for asymmetric, heteroscedastic and highly leptokurtic data the TAL distribution fits better than the alternatives, whereas for asymmetric, heteroscedastic but less leptokurtic data the NM fit is superior. Furthermore, a simulation study has been carried out to obtain standard errors for the estimated parameters. The results of this study might be used, for example, in density forecasting of GDP growth series or to compare different economies.
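As a partial illustration of the model-comparison exercise, the sketch below fits a single normal distribution and a two-component normal mixture (NM) to a simulated growth-like series and compares them by AIC. The asymmetric Laplace and TAL mixtures of the paper are not implemented, and the simulated data and the use of AIC as the accuracy measure are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def compare_normal_vs_mixture(growth):
    """Compare a single normal fit with a two-component normal mixture (NM)
    by AIC; the lower AIC indicates the better-fitting candidate."""
    growth = np.asarray(growth, dtype=float).reshape(-1, 1)

    # Normal: ML estimates are the sample mean and (biased) standard deviation
    mu, sigma = growth.mean(), growth.std()
    loglik_normal = norm.logpdf(growth, mu, sigma).sum()
    aic_normal = 2 * 2 - 2 * loglik_normal          # 2 parameters

    # Two-component normal mixture fitted by EM
    gm = GaussianMixture(n_components=2, random_state=0).fit(growth)
    aic_mixture = gm.aic(growth)

    return {"AIC normal": aic_normal, "AIC mixture": aic_mixture}

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    # stylized growth-like data: mostly moderate growth, occasional slumps
    growth = np.concatenate([rng.normal(0.6, 0.4, 180), rng.normal(-1.5, 1.0, 20)])
    print(compare_normal_vs_mixture(growth))
```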

14.
In the last few years, two adaptive tests for paired data have been proposed. One, proposed by Freidlin et al. [On the use of the Shapiro-Wilk test in two-stage adaptive inference for paired data from moderate to very heavy tailed distributions, Biom. J. 45 (2003), pp. 887-900], is a two-stage procedure that uses a selection statistic to determine which of three rank scores to use in the computation of the test statistic. The other, proposed by O'Gorman [Applied Adaptive Statistical Methods: Tests of Significance and Confidence Intervals, Society for Industrial and Applied Mathematics, Philadelphia, 2004], uses a weighted t-test with the weights determined by the data. These two methods, and an earlier rank-based adaptive test proposed by Randles and Hogg [Adaptive distribution-free tests, Commun. Stat. 2 (1973), pp. 337-356], are compared with the t-test and with Wilcoxon's signed-rank test. For sample sizes between 15 and 50, the results show that the adaptive tests proposed by Freidlin et al. and by O'Gorman have higher power than the other tests over a range of moderate- to long-tailed symmetric distributions. The results also show that the test proposed by O'Gorman has greater power than the other tests for short-tailed distributions. For sample sizes greater than 50, as well as for small sample sizes, the adaptive test proposed by O'Gorman has the highest power for most distributions.
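A simplified sketch of a two-stage adaptive paired test in the spirit of Freidlin et al.: a Shapiro-Wilk pre-test on the differences selects between the paired t-test and Wilcoxon's signed-rank test. The published procedure selects among three rank scores via a selection statistic, so this is an illustration under stated assumptions, not the authors' method.

```python
import numpy as np
from scipy.stats import shapiro, ttest_rel, wilcoxon

def adaptive_paired_test(x, y, alpha_select=0.05):
    """Simplified two-stage adaptive test for paired data: a Shapiro-Wilk
    pre-test on the differences selects either the paired t-test (differences
    look normal) or Wilcoxon's signed-rank test (they do not)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    sw_pvalue = shapiro(d)[1]
    if sw_pvalue >= alpha_select:
        stat, pval = ttest_rel(x, y)
        chosen = "paired t-test"
    else:
        stat, pval = wilcoxon(x, y)
        chosen = "Wilcoxon signed-rank"
    return chosen, stat, pval

if __name__ == "__main__":
    rng = np.random.default_rng(21)
    n = 40
    before = rng.standard_t(df=3, size=n)        # heavy-tailed measurements
    after = before + 0.5 + 0.5 * rng.standard_t(df=3, size=n)
    print(adaptive_paired_test(after, before))
```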

15.
ABSTRACT

For two-way layouts in a between-subjects analysis of variance design, the parametric F-test is compared with seven nonparametric methods: the rank transform (RT), the inverse normal transform (INT), the aligned rank transform (ART), a combination of ART and INT, Puri and Sen's L statistic, the Van der Waerden test, and Akritas and Brunner's ANOVA-type statistic (ATS). The type I error rates and the power are computed for 16 normal and nonnormal distributions, with and without homogeneity of variances, for balanced and unbalanced designs, and for several models including the null and the full model. The aim of this study is to identify a method that is applicable without too much preliminary testing of the attributes of the plot. The Van der Waerden test shows the best overall performance, though there are some situations in which it is disappointing. The Puri and Sen and ATS tests generally show very low power, and these two and the other methods fail to keep the type I error rate under control in too many situations. Especially in the case of lognormal distributions, the use of any of the rank-based procedures can be dangerous for cell sizes above 10. As many other authors have already shown, nonnormal distributions do not invalidate the parametric F-test, but unequal variances do, and heterogeneity of variances leads to an inflated error rate, more or less, for the nonparametric methods as well. Finally, it should be noted that some procedures show rising error rates with increasing cell sizes: the ART, especially for discrete variables, and the RT, Puri and Sen, and the ATS in the case of heteroscedasticity.
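A sketch of two of the compared procedures, the rank transform (RT) and the inverse normal transform (INT), run through an ordinary two-way ANOVA with statsmodels alongside the parametric F-test on the raw data. The lognormal example data, cell sizes and transform details are assumptions; the ART, Puri and Sen's L, Van der Waerden and ATS procedures are not shown.

```python
import numpy as np
import pandas as pd
from scipy.stats import rankdata, norm
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def two_way_anova(df, response):
    """Two-way between-subjects ANOVA with interaction on the given response."""
    model = smf.ols(f"{response} ~ C(A) * C(B)", data=df).fit()
    return anova_lm(model, typ=2)

if __name__ == "__main__":
    rng = np.random.default_rng(13)
    n_per_cell = 15
    rows = []
    for a in ["a1", "a2"]:
        for b in ["b1", "b2", "b3"]:
            shift = 0.8 if a == "a2" else 0.0    # only factor A shifts the distribution
            y = rng.lognormal(mean=shift, sigma=1.0, size=n_per_cell)
            rows += [{"A": a, "B": b, "y": v} for v in y]
    df = pd.DataFrame(rows)

    # Rank transform (RT): replace observations by their overall ranks
    df["rt"] = rankdata(df["y"])
    # Inverse normal transform (INT): map ranks to standard normal quantiles
    df["inv_norm"] = norm.ppf((rankdata(df["y"]) - 0.5) / len(df))

    print(two_way_anova(df, "y"))         # parametric F-test on raw data
    print(two_way_anova(df, "rt"))        # RT
    print(two_way_anova(df, "inv_norm"))  # INT
```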

16.
All the usual heteroscedasticity tests in the statistics and econometrics literature are based on raw residuals. However, even when the raw residuals are heteroscedastic, the studentized residuals can still be homoscedastic. In this study, a version of Çelik's RCEV heteroscedasticity test based on studentized residuals is introduced.

17.
In two-phase linear regression models, it is a standard assumption that the random errors of the two phases have constant variances. However, this assumption is not necessarily appropriate. This paper is devoted to tests for variance heterogeneity in these models. We first discuss a simultaneous test for variance heterogeneity in the two phases. When the simultaneous test shows that significant heteroscedasticity occurs in the whole model, we construct two individual tests to investigate whether both phases, or only one of them, exhibit significant heteroscedasticity. Several score statistics and their adjustments based on Cox and Reid [D.R. Cox and N. Reid, Parameter orthogonality and approximate conditional inference, J. Roy. Statist. Soc. Ser. B 49 (1987), pp. 1-39] are obtained and illustrated with Australian onion data. The simulated powers of the test statistics are investigated through Monte Carlo methods.

18.
It is common in a linear regression model for the error terms to display some form of heteroscedasticity while, at the same time, the regressors are also linearly correlated. Both of these problems have a serious impact on the ordinary least squares (OLS) estimates. In the presence of heteroscedasticity, the OLS estimator becomes inefficient, and a similar adverse impact can also be found on the ridge regression estimator that is used as an alternative to cope with the problem of multicollinearity. In the literature, the adaptive estimator has been shown to be more efficient than the OLS estimator when there is heteroscedasticity of unknown form. The present article proposes a similar adaptation for the ridge regression setting in an attempt to obtain a more efficient estimator. Our numerical results, based on Monte Carlo simulations, show a very attractive performance of the proposed estimator in terms of efficiency. Three different existing methods have been used for the selection of the biasing parameter. Moreover, three different distributions of the error term, namely the normal, Student's t and F distributions, have been studied to evaluate the proposed estimator.
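An illustrative sketch of the general idea of an adaptive, variance-function-weighted ridge estimator: the skedastic function is estimated from the log squared OLS residuals, the data are reweighted, and ridge regression is applied to the weighted data. The variance-function model, the weighting scheme and the fixed biasing parameter are assumptions; the article's estimator and its biasing-parameter selection methods may differ.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

def adaptive_ridge(X, y, alpha=1.0):
    """Illustrative 'adaptive' ridge under heteroscedasticity of unknown form:
    1) OLS residuals, 2) variance function estimated by regressing the log
    squared residuals on X, 3) ridge fitted to the reweighted data."""
    ols = LinearRegression().fit(X, y)
    resid = y - ols.predict(X)
    # step 2: skedastic-function estimate; exp() keeps fitted variances positive
    var_model = LinearRegression().fit(X, np.log(resid ** 2 + 1e-8))
    sigma2_hat = np.exp(var_model.predict(X))
    w = 1.0 / np.sqrt(sigma2_hat)
    # step 3: ridge on weighted data (a weighted ridge loss)
    ridge = Ridge(alpha=alpha, fit_intercept=False)
    Xw = np.column_stack([np.ones(len(y)), X]) * w[:, None]
    ridge.fit(Xw, y * w)
    return ridge.coef_          # [intercept, slopes...]

if __name__ == "__main__":
    rng = np.random.default_rng(17)
    n = 200
    x1 = rng.normal(size=n)
    x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)      # nearly collinear regressor
    X = np.column_stack([x1, x2])
    sigma = 0.5 + np.abs(x1)                       # heteroscedastic errors
    y = 1.0 + 2.0 * x1 + 1.0 * x2 + sigma * rng.normal(size=n)
    print("adaptive ridge coefficients:", adaptive_ridge(X, y, alpha=5.0))
```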

19.
In this paper we consider the problem of comparing several means under heteroscedasticity and nonnormality. By combining Huber's M-estimators with the Brown-Forsythe test, several robust procedures were developed; these procedures were compared, through computer simulation studies, with the Tan-Tabatabai procedure, which was developed by combining Tiku's MML estimators with the Brown-Forsythe test. The numerical results indicate clearly that the Tan-Tabatabai procedure is considerably more powerful than the tests based on Huber's M-estimators over a wide range of nonnormal distributions.

20.
ABSTRACT

In non-normal populations, it is more convenient to use the coefficient of quartile variation than the coefficient of variation. This study compares the percentile and t-bootstrap confidence intervals with Bonett's confidence interval for the coefficient of quartile variation. We show that the empirical coverage of the bootstrap confidence intervals is closer to the nominal coverage (0.95) for small sample sizes (n = 5, 6, 7, 8, 9, 10 and 15) for most of the distributions studied. The bootstrap confidence intervals also have smaller average width. Thus, we propose using bootstrap confidence intervals for the coefficient of quartile variation when the sample size is small.
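A minimal sketch of the percentile bootstrap confidence interval for the coefficient of quartile variation, CQV = (Q3 - Q1)/(Q3 + Q1). The number of resamples, the quartile definition and the lognormal example are assumptions, and the t-bootstrap and Bonett intervals compared in the paper are not shown.

```python
import numpy as np

def coefficient_of_quartile_variation(x):
    """CQV = (Q3 - Q1) / (Q3 + Q1), a robust measure of relative dispersion."""
    q1, q3 = np.percentile(x, [25, 75])
    return (q3 - q1) / (q3 + q1)

def percentile_bootstrap_ci(x, level=0.95, n_boot=5000, rng=0):
    """Percentile bootstrap confidence interval for the CQV."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    boot = np.array([
        coefficient_of_quartile_variation(rng.choice(x, size=len(x), replace=True))
        for _ in range(n_boot)
    ])
    alpha = 1.0 - level
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    small_sample = rng.lognormal(mean=1.0, sigma=0.6, size=10)   # non-normal, n = 10
    print("CQV estimate:", coefficient_of_quartile_variation(small_sample))
    print("95% percentile bootstrap CI:", percentile_bootstrap_ci(small_sample))
```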
