Similar Articles

20 similar articles found.
1.
Econometric Reviews, 2013, 32(3): 217–237
Abstract

The debate on whether macroeconomic series are trend or difference stationary, initiated by Nelson and Plosser [Nelson, C. R., Plosser, C. I. (1982). Trends and random walks in macroeconomic time series: some evidence and implications. Journal of Monetary Economics 10:139–162], remains unresolved. The main objective of the paper is to contribute toward a resolution of this issue by bringing into the discussion the problem of statistical adequacy. The paper revisits the empirical results of Nelson and Plosser (1982) and Perron [Perron, P. (1989). The great crash, the oil price shock, and the unit root hypothesis. Econometrica 57:1361–1401] and shows that several of their estimated models are misspecified. Respecification with a view to ensuring statistical adequacy gives rise to heteroskedastic AR(k) models for some of the price series. Based on estimated models which are statistically adequate, the main conclusion of the paper is that the majority of the data series are trend stationary.

2.
In this paper, we use simulated data to investigate the power of different causality tests in a two-dimensional vector autoregressive (VAR) model. The data are presented in a nonlinear environment that is modelled using a logistic smooth transition autoregressive function. We use both linear and nonlinear causality tests to investigate the unidirectional causality relationship and compare the power of these tests. The linear test is the commonly used Granger causality F test. The nonlinear test is a non-parametric test based on Baek and Brock [A general test for non-linear Granger causality: Bivariate model. Tech. Rep., Iowa State University and University of Wisconsin, Madison, WI, 1992] and Hiemstra and Jones [Testing for linear and non-linear Granger causality in the stock price–volume relation, J. Finance 49(5) (1994), pp. 1639–1664]. When implementing the nonlinear test, we use separately the original data, the linear VAR filtered residuals, and the wavelet decomposed series based on wavelet multiresolution analysis. The VAR filtered residuals and the wavelet decomposition series are used to extract the nonlinear structure of the original data. The simulation results show that the non-parametric test based on the wavelet decomposition series (which is a model-free approach) has the highest power to explore the causality relationship in nonlinear models.
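A minimal sketch of the linear benchmark, assuming statsmodels is available: a bivariate series is simulated in which x drives y through a logistic smooth transition link (the transition parameters and lag length are illustrative), and the standard Granger F test is applied. The nonlinear Baek–Brock/Hiemstra–Jones test has no mainstream library implementation and is not shown.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    G = 1.0 / (1.0 + np.exp(-2.0 * x[t - 1]))   # logistic transition in (0, 1)
    y[t] = 0.3 * y[t - 1] + 0.8 * G * x[t - 1] + rng.normal()

# Column order matters: the test asks whether the SECOND column Granger-causes the first.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
print(res[2][0]['ssr_ftest'][1])   # p-value of the F test at lag 2
```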

3.
Econometric Reviews, 2013, 32(1): 53–70
Abstract

We review the different block bootstrap methods for time series, and present them in a unified framework. We then revisit a recent result of Lahiri [Lahiri, S. N. (1999b). Theoretical comparisons of block bootstrap methods, Ann. Statist. 27:386–404] comparing the different methods and give a corrected bound on their asymptotic relative efficiency; we also introduce a new notion of finite-sample “attainable” relative efficiency. Finally, based on the notion of spectral estimation via the flat-top lag-windows of Politis and Romano [Politis, D. N., Romano, J. P. (1995). Bias-corrected nonparametric spectral estimation. J. Time Series Anal. 16:67–103], we propose practically useful estimators of the optimal block size for the aforementioned block bootstrap methods. Our estimators are characterized by the fastest possible rate of convergence, which adapts to the strength of the correlation of the time series as measured by the correlogram.
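For concreteness, a minimal sketch of one of the reviewed methods, the moving block bootstrap, assuming only NumPy. The block length b is fixed here; estimating its optimal value is precisely what the flat-top lag-window proposal addresses.

```python
import numpy as np

def moving_block_bootstrap(x, b, n_boot=1000, seed=0):
    """Bootstrap distribution of the sample mean via overlapping blocks of length b."""
    rng = np.random.default_rng(seed)
    n = len(x)
    blocks = np.array([x[i:i + b] for i in range(n - b + 1)])  # all overlapping blocks
    k = int(np.ceil(n / b))                                    # blocks per replicate
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(blocks), size=k)
        series = np.concatenate(blocks[idx])[:n]               # trim to original length
        stats[i] = series.mean()
    return stats

# AR(1) example: bootstrap standard error of the sample mean
rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.normal()
print(moving_block_bootstrap(x, b=12).std())
```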

4.
Chen and Balakrishnan [Chen, G. and Balakrishnan, N., 1995, A general purpose approximate goodness-of-fit test. Journal of Quality Technology, 27, 154–161] proposed an approximate method of goodness-of-fit testing that avoids the use of extensive tables. This procedure first transforms the data to normality and then applies the classical tests for normality based on the empirical distribution function and their critical points. In this paper, we investigate the potential of this method in comparison with a corresponding goodness-of-fit test which, instead of the empirical distribution function, utilizes the empirical characteristic function. Both methods apply in full generality, as they may be applied to arbitrary laws with continuous distribution function, provided that an efficient method of estimation exists for the parameters of the hypothesized distribution.
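A minimal sketch of the transformation step, assuming SciPy and a gamma null chosen purely for illustration: the fitted CDF and the standard normal quantile function map the data to approximate normality, after which a classical EDF normality test can be applied. Parameter estimation affects the null distribution, so the printed critical values are only indicative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=200)    # data to be tested

a, loc, scale = stats.gamma.fit(x, floc=0)       # fit the hypothesized distribution
u = stats.gamma.cdf(x, a, loc=loc, scale=scale)  # probability integral transform
z = stats.norm.ppf(u)                            # approximately N(0, 1) under H0

print(stats.anderson(z, dist='norm'))            # classical EDF test on the transformed data
```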

5.
Econometric Reviews, 2013, 32(1): 29–58
Abstract

Approximation formulae are developed for the bias of ordinary and generalized Least Squares Dummy Variable (LSDV) estimators in dynamic panel data models. Results from Kiviet [Kiviet, J. F. (1995). On bias, inconsistency, and efficiency of various estimators in dynamic panel data models, J. Econometrics 68:53–78; Kiviet, J. F. (1999). Expectations of expansions for estimators in a dynamic panel data model: some results for weakly exogenous regressors, In: Hsiao, C., Lahiri, K., Lee, L-F., Pesaran, M. H., eds., Analysis of Panels and Limited Dependent Variables, Cambridge: Cambridge University Press, pp. 199–225] are extended to higher-order dynamic panel data models with general covariance structure. The focus is on estimation of both short- and long-run coefficients. The results show that proper modelling of the disturbance covariance structure is indispensable. The bias approximations are used to construct bias corrected estimators which are then applied to quarterly data from 14 European Union countries. Money demand functions for M1, M2 and M3 are estimated for the EU area as a whole for the period 1991:I–1995:IV. Significant spillovers between countries are found, reflecting the dependence of domestic money demand on foreign developments. The empirical results show that in general plausible long-run effects are obtained by the bias corrected estimators. Moreover, finite sample bias, although of moderate magnitude, is present, underlining the importance of more refined estimation techniques. Also, the efficiency gains obtained by exploiting the heteroscedasticity and cross-correlation patterns between countries are sometimes considerable.

6.
Abstract

Use of the MVUE for the inverse-Gaussian distribution has been recently proposed by Nguyen and Dinh [Nguyen, T. T., Dinh, K. T. (2003). Exact EDF goodness-of-fit tests for inverse Gaussian distributions. Comm. Statist. (Simulation and Computation) 32(2):505–516], where a sequential application based on Rosenblatt's transformation [Rosenblatt, M. (1952). Remarks on a multivariate transformation. Ann. Math. Statist. 23:470–472] led the authors to solve the composite goodness-of-fit problem by solving the surrogate simple goodness-of-fit problem of testing uniformity of the independent transformed variables. In this note, we observe first that the proposal is not new, since it was proposed in a rather general setting in O'Reilly and Quesenberry [O'Reilly, F., Quesenberry, C. P. (1973). The conditional probability integral transformation and applications to obtain composite chi-square goodness-of-fit tests. Ann. Statist. 1:74–83]. It is shown, on the other hand, that the results in the paper of Nguyen and Dinh (2003) are incorrect in their Sec. 4, especially the Monte Carlo figures reported. Power simulations are provided here comparing these corrected results with two previously reported goodness-of-fit tests for the inverse-Gaussian; the modified Kolmogorov–Smirnov test in Edgeman et al. [Edgeman, R. L., Scott, R. C., Pavur, R. J. (1988). A modified Kolmogorov-Smirnov test for inverse Gaussian distribution with unknown parameters. Comm. Statist. 17(B):1203–1212] and the A²-based method in O'Reilly and Rueda [O'Reilly, F., Rueda, R. (1992). Goodness of fit for the inverse Gaussian distribution. Can. J. Statist. 20(4):387–397]. The results show clearly that there is a large loss of power in the method explored in Nguyen and Dinh (2003) due to an implicit exogenous randomization.

7.
Abstract

It is well known that prior application of GLS detrending, as advocated by Elliot et al. [Elliot, G., Rothenberg, T., Stock, J. (1996). Efficient tests for an autoregressive unit root. Econometrica 64:813–836], can produce a significant increase in power to reject the unit root null over that obtained from a conventional OLS-based Dickey and Fuller [Dickey, D., Fuller, W. (1979). Distribution of the estimators for autoregressive time series with a unit root. J. Am. Statist. Assoc. 74:427–431] testing equation. However, this paper employs Monte Carlo simulation to demonstrate that this increase in power is not necessarily obtained when breaks occur in either level or trend. It is found that neither OLS nor GLS-based tests are robust to level or trend breaks, their size and power properties both deteriorating as the break size increases.
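As a hedged illustration, assuming the third-party arch package (the AR parameter, trend specification and break size below are arbitrary), one can reproduce the flavour of the experiment by applying both tests to a trend-stationary series with a mid-sample level break:

```python
import numpy as np
from arch.unitroot import ADF, DFGLS

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)
u = np.zeros(n)
for s in range(1, n):
    u[s] = 0.7 * u[s - 1] + rng.normal()     # stationary AR(1) errors
y = 0.05 * t + 2.0 * (t >= n // 2) + u       # linear trend plus a level break

print(ADF(y, trend='ct').pvalue)             # conventional OLS-based Dickey-Fuller
print(DFGLS(y, trend='ct').pvalue)           # GLS-detrended variant
```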

8.
We investigate the instability problem of the covariance structure of time series by combining the non-parametric approach based on the evolutionary spectral density theory of Priestley [Evolutionary spectra and non-stationary processes, J. R. Statist. Soc., 27 (1965), pp. 204–237; Wavelets and time-dependent spectral analysis, J. Time Ser. Anal., 17 (1996), pp. 85–103] and the parametric approach based on linear regression models of Bai and Perron [Estimating and testing linear models with multiple structural changes, Econometrica 66 (1998), pp. 47–78]. A Monte Carlo study is presented to evaluate the performance of some parametric testing and estimation procedures for models characterized by breaks in variance. We attempt to see whether these procedures perform in the same way as for models characterized by mean shifts, as investigated by Bai and Perron [Multiple structural change models: a simulation analysis, in: Econometric Theory and Practice: Frontiers of Analysis and Applied Research, D. Corbea, S. Durlauf, and B.E. Hansen, eds., Cambridge University Press, 2006, pp. 212–237]. We also provide an analysis of financial data series for which the stability of the covariance function is doubtful.

9.
Testing the order of integration of economic and financial time series has become a conventional procedure prior to any modelling exercise. In this paper, we investigate and compare the finite sample properties of the frequency-domain tests proposed by Robinson [Efficient tests of nonstationary hypotheses, J. Amer. Statist. Assoc. 89(428) (1994), pp. 1420–1437] and the time-domain procedure proposed by Hassler, Rodrigues, and Rubia [Testing for general fractional integration in the time domain, Econometric Theory 25 (2009), pp. 1793–1828] when applied to seasonal data. The results presented are of empirical relevance as they provide some guidance regarding the finite sample properties of these tests.

10.
Abstract

We derive concentration inequalities for the cross-validation estimate of the generalization error for empirical risk minimizers. In the general setting, we show that the worst-case error of this estimate is not much worse than that of the training error estimate; see Kearns M, Ron D. [Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Neural Comput. 1999;11:1427–1453]. General loss functions and classes of predictors with finite VC-dimension are considered. Our focus is on proving the consistency of the various cross-validation procedures. We point out the merits of each cross-validation procedure in terms of its rate of convergence. An interesting consequence is that the size of the test sample is not required to grow to infinity for the consistency of the cross-validation procedure.
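A minimal sketch contrasting the optimistic training-error estimate with a k-fold cross-validation estimate for an empirical risk minimizer, assuming scikit-learn (the dataset and fully grown tree are illustrative). Note that the size of each held-out fold stays bounded as the number of folds grows, echoing the point that it need not grow to infinity.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = DecisionTreeClassifier(random_state=0)   # fully grown tree: an ERM over a rich class

train_err = 1.0 - clf.fit(X, y).score(X, y)    # typically near zero (overfit)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
cv_err = 1.0 - cross_val_score(clf, X, y, cv=cv).mean()
print(train_err, cv_err)                       # CV error is the honest estimate
```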

11.
In this paper we apply the sequential bootstrap method proposed by Collet et al. [Bootstrap Central Limit theorem for chains of infinite order via Markov approximations, Markov Processes and Related Fields 11(3) (2005), pp. 443–464] to estimate the variance of the empirical mean of a special class of chains of infinite order called sparse chains. For this process, we show that the true value of the standard error can be computed numerically to within any fixed error.

Our main goal is to present a comparison, for sparse chains, among the sequential bootstrap, the block bootstrap method proposed by Künsch [The jackknife and the Bootstrap for general stationary observations, Ann. Statist. 17 (1989), pp. 1217–1241] and improved by Liu and Singh [Moving blocks jackknife and Bootstrap capture weak dependence, in Exploring the Limits of the Bootstrap, R. Lepage and L. Billard, eds., Wiley, New York, 1992, pp. 225–248], and the bootstrap method proposed by Bühlmann [Blockwise bootstrapped empirical process for stationary sequences, Ann. Statist. 22 (1994), pp. 995–1012].

12.
A Gaussian process (GP) can be thought of as an infinite collection of random variables with the property that any subset, say of dimension n, of these variables has a multivariate normal distribution of dimension n, mean vector β and covariance matrix Σ [O'Hagan, A., 1994, Kendall's Advanced Theory of Statistics, Vol. 2B, Bayesian Inference (John Wiley & Sons, Inc.)]. The elements of the covariance matrix are routinely specified through the multiplication of a common variance by a correlation function. It is important to use a correlation function that provides a valid (positive definite) covariance matrix. Further, it is well known that the smoothness of a GP is directly related to the specification of its correlation function. Also, from a Bayesian point of view, a prior distribution must be assigned to the unknowns of the model. Therefore, when using a GP to model a phenomenon, the researcher faces two challenges: the need to specify a correlation function and a prior distribution for its parameters. In the literature there are many classes of correlation functions which provide a valid covariance structure, and there are many suggestions of prior distributions to be used for the parameters involved in these functions. We aim to investigate how sensitive GPs are to the (sometimes arbitrary) choices of their correlation functions. For this, we simulated 25 data sets, each of size 64, over the square [0, 5]×[0, 5] with a specific correlation function and fixed values of the GP's parameters. We then fit different correlation structures to these data, with different prior specifications, and check the performance of the adjusted models using different model comparison criteria.
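A minimal sketch of the sensitivity question (not the authors' simulation design), assuming only NumPy: draws from a GP on a grid over [0, 5]×[0, 5] under two different valid correlation functions, squared exponential versus exponential, with the same variance and length-scale, via a Cholesky factorization of the implied covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.linspace(0, 5, 8)
X = np.array([(a, b) for a in g for b in g])                # 64 sites, as in the paper
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distance matrix

sigma2, ell = 1.0, 1.0                                      # illustrative GP parameters
R_sqexp = np.exp(-(D / ell) ** 2)                           # smooth sample paths
R_exp = np.exp(-D / ell)                                    # rough sample paths

for R in (R_sqexp, R_exp):
    L = np.linalg.cholesky(sigma2 * R + 1e-8 * np.eye(len(X)))  # jitter keeps it PD
    z = L @ rng.normal(size=len(X))                         # one GP realization
    print(z[:3])
```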

13.
This article examines the spurious regression phenomenon between long memory series when the generating mechanism of each individual series is assumed to follow a stationary/nonstationary process with mis-specified breaks. Under least-squares regression, the t-ratio becomes divergent and spurious regression is present. The intuition behind this is that long memory series with change points can increase persistence in the level of the regression errors and cause such a spurious relationship. Simulation results indicate that the extent of spurious regression relies heavily on the memory index, the sample size, and the location of the break. As a remedy, we employ a four-stage procedure motivated by Maynard, Smallwood, and Wohar [Long memory regressors and predictive testing: a two-stage rebalancing approach, Econometric Reviews 32 (2013), pp. 318–360] to alleviate the size distortions. Finally, an empirical illustration using stock price data from the Shanghai Stock Exchange is reported.
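A minimal sketch of the core spurious-regression experiment, assuming only NumPy and omitting the breaks: two independent fractionally integrated ARFIMA(0, d, 0) series are generated by truncated MA(∞) filtering, and the OLS t-ratio of one on the other is computed; with long memory this t-ratio is often far outside the nominal ±1.96 band even though the series are unrelated.

```python
import numpy as np

def arfima0d0(n, d, rng):
    """Fractionally integrated noise via psi_j = psi_{j-1} * (j - 1 + d) / j."""
    psi = np.ones(n)
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    e = rng.normal(size=n)
    return np.array([psi[:t + 1][::-1] @ e[:t + 1] for t in range(n)])

rng = np.random.default_rng(0)
n, d = 500, 0.4
y, x = arfima0d0(n, d, rng), arfima0d0(n, d, rng)  # independent by construction

X = np.column_stack([np.ones(n), x])               # OLS of y on a constant and x
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - 2)
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
print(beta[1] / se)                                # |t| frequently far exceeds 1.96
```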

14.
Tests for the equality of variances are of interest in many areas such as quality control, agricultural production systems, experimental education, pharmacology and biology, as well as being a preliminary to the analysis of variance, dose–response modelling or discriminant analysis. The literature is vast. Traditional non-parametric tests are due to Mood, Miller and Ansari–Bradley. A test which usually stands out in terms of power and robustness against non-normality is the W50 Brown and Forsythe [Robust tests for the equality of variances, J. Am. Stat. Assoc. 69 (1974), pp. 364–367] modification of the Levene test [Robust tests for equality of variances, in Contributions to Probability and Statistics, I. Olkin, ed., Stanford University Press, Stanford, 1960, pp. 278–292]. This paper deals with the two-sample scale problem and in particular with Levene type tests. We consider 10 Levene type tests: the W50, the M50 and L50 tests [G. Pan, On a Levene type test for equality of two variances, J. Stat. Comput. Simul. 63 (1999), pp. 59–71], the R-test [R.G. O'Brien, A general ANOVA method for robust tests of additive models for variances, J. Am. Stat. Assoc. 74 (1979), pp. 877–880], as well as the bootstrap and permutation versions of the W50, L50 and R tests. We also consider the F-test, the modified Fligner and Killeen [Distribution-free two-sample tests for scale, J. Am. Stat. Assoc. 71 (1976), pp. 210–213] test, an adaptive test due to Hall and Padmanabhan [Adaptive inference for the two-sample scale problem, Technometrics 23 (1997), pp. 351–361] and the two tests due to Shoemaker [Tests for differences in dispersion based on quantiles, Am. Stat. 49(2) (1995), pp. 179–182; Interquantile tests for dispersion in skewed distributions, Commun. Stat. Simul. Comput. 28 (1999), pp. 189–205]. The aim is to identify effective methods for detecting scale differences. Our study differs from previous ones since it focuses on resampling versions of the Levene type tests, and many of the tests considered here have never before been proposed or compared. The computationally simplest test found to be robust is the W50. Higher power, while preserving robustness, is achieved by considering the resampling versions of Levene type tests, such as the permutation R-test (recommended for normal- and light-tailed distributions) and the bootstrap L50 test (recommended for heavy-tailed and skewed distributions). Among non-Levene type tests, the best one is the adaptive test due to Hall and Padmanabhan.
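A minimal sketch, assuming SciPy: scipy.stats.levene with center='median' is exactly the W50 (Brown–Forsythe) modification singled out above, and a naive permutation version of the same statistic is also shown (sample sizes and distributions are illustrative).

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
a = rng.standard_t(df=3, size=40)            # heavy-tailed sample
b = 2.0 * rng.standard_t(df=3, size=40)      # same shape, doubled scale

stat, p = levene(a, b, center='median')      # the W50 test
print(stat, p)

# Permutation version: re-randomize group labels and recompute the statistic.
pooled = np.concatenate([a, b])
n_perm, count = 2000, 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    s, _ = levene(perm[:40], perm[40:], center='median')
    count += s >= stat
print((count + 1) / (n_perm + 1))            # permutation p-value
```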

15.
This article considers the maximum likelihood estimation (MLE) of a class of stationary and invertible vector autoregressive fractionally integrated moving-average (VARFIMA) processes considered in Equation (26) of Luceño [A fast likelihood approximation for vector general linear processes with long series: Application to fractional differencing, Biometrika 83 (1996), pp. 603–614] or Model A of Lobato [Consistency of the averaged cross-periodogram in long memory series, J. Time Ser. Anal. 18 (1997), pp. 137–155], where each component y i, t is a fractionally integrated process of order d i , i=1, …, r. Under the conditions outlined in Assumption 1 of this article, the conditional likelihood function of this class of VARFIMA models can be efficiently and exactly calculated with a conditional likelihood Durbin–Levinson (CLDL) algorithm proposed herein. This CLDL algorithm is based on the multivariate Durbin–Levinson algorithm of Whittle [On the fitting of multivariate autoregressions and the approximate canonical factorization of a spectral density matrix, Biometrika 50 (1963), pp. 129–134] and the conditional likelihood principle of Box and Jenkins [Time Series Analysis, Forecasting, and Control, 2nd ed., Holden-Day, San Francisco, CA]. Furthermore, the conditions in the aforementioned Assumption 1 are general enough to include the model considered in Andersen et al. [Modeling and forecasting realized volatility, Econometrica 71 (2003), pp. 579–625] for describing the behaviour of realized volatility and the model studied in Haslett and Raftery [Space–time modelling with long-memory dependence: Assessing Ireland's wind power resource, Appl. Statist. 38 (1989), pp. 1–50] for spatial data as special cases. As the computational cost of implementing the CLDL algorithm is much lower than that of the algorithms proposed in Sowell [Maximum likelihood estimation of fractionally integrated time series models, Working paper, Carnegie-Mellon University], we are able to conduct a Monte Carlo experiment to investigate the finite sample performance of the CLDL algorithm for 3-dimensional VARFIMA processes with a sample size of 400. The simulation results are very satisfactory and reveal the great potential of using the CLDL method for empirical applications.
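A minimal sketch of the univariate Durbin–Levinson recursion underlying the CLDL idea, assuming only NumPy (the paper's algorithm is the multivariate Whittle generalization): given autocovariances γ(0), …, γ(p), it returns the best-linear-predictor coefficients and innovation variances at each order, which is what makes exact Gaussian likelihood evaluation cheap.

```python
import numpy as np
from math import gamma as gfun

def durbin_levinson(acvf):
    """Levinson recursion: AR coefficients phi[k, 1:k+1] and innovation variances v[k]."""
    p = len(acvf) - 1
    phi = np.zeros((p + 1, p + 1))
    v = np.zeros(p + 1)
    v[0] = acvf[0]
    for k in range(1, p + 1):
        acc = acvf[k] - phi[k - 1, 1:k] @ acvf[1:k][::-1]
        phi[k, k] = acc / v[k - 1]                        # partial autocorrelation
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, 1:k][::-1]
        v[k] = v[k - 1] * (1.0 - phi[k, k] ** 2)          # innovation variance
    return phi, v

# ARFIMA(0, d, 0) autocovariances: acvf[h] = acvf[h-1] * (h - 1 + d) / (h - d)
d, p = 0.3, 10
acvf = np.zeros(p + 1)
acvf[0] = gfun(1 - 2 * d) / gfun(1 - d) ** 2
for h in range(1, p + 1):
    acvf[h] = acvf[h - 1] * (h - 1 + d) / (h - d)
phi, v = durbin_levinson(acvf)
print(phi[p, 1:], v[p])
```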

16.
This paper proposes an approximation to the distribution of a goodness-of-fit statistic proposed recently by Balakrishnan et al. [Balakrishnan, N., Ng, H.K.T. and Kannan, N., 2002, A test of exponentiality based on spacings for progressively Type-II censored data. In: C. Huber-Carol et al. (Eds.), Goodness-of-Fit Tests and Model Validity (Boston: Birkhäuser), pp. 89–111] for testing exponentiality based on progressively Type-II right censored data. The moments of this statistic can be easily calculated, but its distribution is not known in an explicit form. We first obtain the exact moments of the statistic using Basu's theorem and then the density approximants based on these exact moments of the statistic, expressed in terms of Laguerre polynomials, are proposed. A comparative study of the proposed approximation to the exact critical values, computed by Balakrishnan and Lin [Balakrishnan, N. and Lin, C.T., 2003, On the distribution of a test for exponentiality based on progressively Type-II right censored spacings. Journal of Statistical Computation and Simulation, 73(4), 277–283], is carried out. This reveals that the proposed approximation is very accurate.

17.
This paper proposes various double unit root tests for cross-sectionally dependent panel data. The cross-sectional correlation is handled by the projection method [P.C.B. Phillips and D. Sul, Dynamic panel estimation and homogeneity testing under cross section dependence, Econom. J. 6 (2003), pp. 217–259; H.R. Moon and B. Perron, Testing for a unit root in panels with dynamic factors, J. Econom. 122 (2004), pp. 81–126] or the subtraction method [J. Bai and S. Ng, A PANIC attack on unit roots and cointegration, Econometrica 72 (2004), pp. 1127–1177]. Pooling or averaging is applied to combine results from different panel units. Also, to estimate autoregressive parameters the ordinary least squares estimation [D.P. Hasza and W.A. Fuller, Estimation for autoregressive processes with unit roots, Ann. Stat. 7 (1979), pp. 1106–1120] or the symmetric estimation [D.L. Sen and D.A. Dickey, Symmetric test for second differencing in univariate time series, J. Bus. Econ. Stat. 5 (1987), pp. 463–473] are used, and to adjust mean functions the ordinary mean adjustment or the recursive mean adjustment are used. Combinations of different methods in defactoring to eliminate the cross-sectional dependency, integrating results from panel units, estimating the parameters, and adjusting mean functions yields various available tests for double unit roots in panel data. Simple asymptotic distributions of the proposed test statistics are derived, which can be used to find critical values of the test statistics.

We perform a Monte Carlo experiment to compare the performance of these tests and to suggest optimal tests for a given panel data set. Application of the proposed tests to real data, the yearly export panel data sets of several Latin American countries for the past 50 years, illustrates the usefulness of the proposed tests for panel data, in that they reveal stronger evidence of double unit roots than the componentwise double unit root tests of Hasza and Fuller [Estimation for autoregressive processes with unit roots, Ann. Stat. 7 (1979), pp. 1106–1120] or Sen and Dickey [Symmetric test for second differencing in univariate time series, J. Bus. Econ. Stat. 5 (1987), pp. 463–473].


18.
In this paper we present data-driven smooth tests for the extreme value distribution. These tests are based on a general idea of construction of data-driven smooth tests for composite hypotheses introduced by Inglot, T., Kallenberg, W. C. M. and Ledwina, T. [(1997). Data-driven smooth tests for composite hypotheses. Ann. Statist., 25, 1222–1250] and its modification for location-scale family proposed in Janic-Wróblewska, A. [(2004). Data-driven smooth test for a location-scale family. Statistics, in press]. Results of power simulations show that the newly introduced test performs very well for a wide range of alternatives and is competitive with other commonly used tests for the extreme value distribution.

19.
In this paper, we perform an empirical comparison of the classification error of several ensemble methods based on classification trees. This comparison is performed by using 14 data sets that are publicly available and that were used by Lim, Loh and Shih [Lim, T., Loh, W. and Shih, Y.-S., 2000, A comparison of prediction accuracy, complexity, and training time of thirty-three old and new classification algorithms. Machine Learning, 40, 203–228]. The methods considered are a single tree, Bagging, Boosting (Arcing) and random forests (RF). They are compared from different perspectives. More precisely, we look at the effects of noise and of allowing linear combinations in the construction of the trees, the differences between some splitting criteria and, specifically for RF, the effect of the number of variables from which to choose the best split at each given node. Moreover, we compare our results with those obtained by Lim et al. (2000). In this study, the best overall results are obtained with RF. In particular, RF are the most robust against noise. The effect of allowing linear combinations and the differences between splitting criteria are small on average, but can be substantial for some data sets.
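A minimal sketch of the comparison design, assuming scikit-learn: a single tree, bagging, boosting and a random forest scored by 10-fold cross-validation on one public dataset, a stand-in for the paper's 14. The max_features parameter of the forest is the "number of variables from which to choose the best split" knob discussed above.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    'single tree': DecisionTreeClassifier(random_state=0),
    'bagging': BaggingClassifier(random_state=0),
    'boosting': AdaBoostClassifier(random_state=0),
    'random forest': RandomForestClassifier(max_features='sqrt', random_state=0),
}
for name, m in models.items():
    err = 1.0 - cross_val_score(m, X, y, cv=10).mean()   # classification error
    print(f'{name}: {err:.3f}')
```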

20.
Econometric Reviews, 2013, 32(4): 341–370
Abstract

The power of Pearson's overall goodness-of-fit test and the components-of-chi-squared or “Pearson analog” tests of Anderson [Anderson, G. (1994). Simple tests of distributional form. J. Econometrics 62:265–276] to detect rejections due to shifts in location, scale, skewness and kurtosis is studied as the number and position of the partition points are varied. Simulations are conducted for small and moderate sample sizes. It is found that smaller numbers of classes than are used in practice may be appropriate, and that the choice of non-equiprobable classes can result in substantial gains in power.
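A minimal sketch of the design question, assuming SciPy (the partitions and the location shift are illustrative, and the null parameters are taken as known, so the degrees of freedom are simply the number of classes minus one): the Pearson statistic for an N(0, 1) null is computed under an equiprobable and a non-equiprobable partition of the line.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=200)        # location shift under H1

def pearson_stat(x, probs):
    """Pearson chi-squared statistic for an N(0,1) null with given class probabilities."""
    cuts = stats.norm.ppf(np.cumsum(probs)[:-1])    # interior class boundaries
    obs = np.bincount(np.searchsorted(cuts, x), minlength=len(probs))
    exp = len(x) * np.asarray(probs)
    return ((obs - exp) ** 2 / exp).sum()

equi = np.full(5, 0.2)                              # 5 equiprobable classes
non_equi = np.array([0.05, 0.25, 0.4, 0.25, 0.05])  # heavy centre, light tails
for probs in (equi, non_equi):
    s = pearson_stat(x, probs)
    print(s, stats.chi2.sf(s, df=len(probs) - 1))   # statistic and p-value
```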
