Similar Literature
 20 similar documents retrieved.
1.
We propose a general bootstrap procedure to approximate the null distribution of non-parametric frequency domain tests about the spectral density matrix of a multivariate time series. Under a set of easy-to-verify conditions, we establish asymptotic validity of the bootstrap procedure proposed. We apply a version of this procedure together with a new statistic to test the hypothesis that the spectral densities of not necessarily independent time series are equal. The test statistic proposed is based on an L2-distance between the non-parametrically estimated individual spectral densities and an overall, 'pooled' spectral density, the latter being obtained by using the whole set of m time series considered. The effects of the dependence between the time series on the power behaviour of the test are investigated. Some simulations are presented and a real life data example is discussed.
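As a rough illustration of the kind of L2-type statistic described in this abstract, the sketch below compares smoothed-periodogram estimates of each series' spectral density with their pooled average. The moving-average smoothing, the normalization and the toy AR(1) data are illustrative assumptions, not the authors' exact estimator, and the bootstrap approximation of the null distribution is omitted.

```python
import numpy as np

def smoothed_periodogram(x, bandwidth=5):
    """Crude smoothed-periodogram estimate of a spectral density
    (moving-average smoothing of the raw periodogram; illustrative only)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    per = np.abs(np.fft.rfft(x)) ** 2 / (2.0 * np.pi * n)      # raw periodogram
    kernel = np.ones(2 * bandwidth + 1) / (2 * bandwidth + 1)
    return np.convolve(per, kernel, mode="same")

def l2_pooled_statistic(series):
    """Sum over series and frequencies of the squared difference between each
    estimated spectral density and the pooled ('average') estimate."""
    spectra = np.array([smoothed_periodogram(x) for x in series])
    pooled = spectra.mean(axis=0)
    return float(np.sum((spectra - pooled) ** 2))

# toy example: three AR(1) series, two sharing the same spectral density
rng = np.random.default_rng(0)
def ar1(phi, n=512):
    x, e = np.zeros(n), rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

print(l2_pooled_statistic([ar1(0.5), ar1(0.5), ar1(0.9)]))
```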

2.
In this article, we present a goodness-of-fit test for a distribution based on some comparisons between the empirical characteristic function c_n(t) and the characteristic function of a random variable under the simple null hypothesis, c_0(t). We do this by introducing a suitable distance measure. Empirical critical values for the new test statistic for testing normality are computed. In addition, the new test is compared via simulation to other omnibus tests for normality and it is shown that this new test is more powerful than others.
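A minimal sketch of the idea: compute the empirical characteristic function c_n(t) on a grid, compare it with the standard normal characteristic function c_0(t), and accumulate the squared modulus of the difference. The grid, the studentization step and the unweighted integration are assumptions for illustration, not the article's specific distance measure, and critical values would have to be obtained by simulation.

```python
import numpy as np

def ecf_normality_distance(x, t_grid=None):
    """Integrated squared distance between the empirical characteristic function
    c_n(t) and the N(0,1) characteristic function c_0(t) = exp(-t^2/2)."""
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std(ddof=1)                    # studentize before testing normality
    if t_grid is None:
        t_grid = np.linspace(-3.0, 3.0, 121)
    c_n = np.exp(1j * np.outer(t_grid, x)).mean(axis=1)   # empirical CF at each grid point
    c_0 = np.exp(-t_grid ** 2 / 2.0)                      # standard normal CF
    dt = t_grid[1] - t_grid[0]
    return float(np.sum(np.abs(c_n - c_0) ** 2) * dt)     # Riemann-sum approximation

rng = np.random.default_rng(1)
print(ecf_normality_distance(rng.standard_normal(200)))    # small under the null
print(ecf_normality_distance(rng.exponential(size=200)))   # larger under a skewed alternative
```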

3.
The author proposes a general method for constructing nonparametric tests of hypotheses for umbrella alternatives. Such alternatives are relevant when the treatment effect changes in direction after reaching a peak. The author's class of tests is based on the ranks of the observations. His general approach consists of defining two sets of rankings: the first is induced by the alternative and the other by the data itself. His test statistic measures the distance between the two sets. The author determines the asymptotic distribution for some special cases of distances under both the null and the alternative hypothesis when the location of the peak is known or unknown. He shows the good power of his tests through a limited simulation study.

4.
Given that the Euclidean distance between the parameter estimates of autoregressive expansions of autoregressive moving average models can be used to classify stationary time series into groups, a test of hypothesis is proposed to determine whether two stationary series in a particular group have significantly different generating processes. Based on this test a new clustering algorithm is also proposed. The results of Monte Carlo simulations are given.
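A sketch of the distance underlying this approach: expand each ARMA model into its AR(∞) representation by power-series division of the lag polynomials, truncate, and take the Euclidean distance between the coefficient vectors (in practice these would be the estimated parameters of fitted models). The truncation length and the toy models are assumptions; the hypothesis test and the clustering algorithm built on this distance are not reproduced.

```python
import numpy as np

def ar_inf_weights(phi, theta, n_terms=30):
    """Coefficients of the AR(infinity) expansion pi(B) = phi(B)/theta(B) of an ARMA
    model, obtained by power-series division of the two lag polynomials."""
    a = np.r_[1.0, -np.asarray(phi, dtype=float)]    # phi(B) coefficients
    b = np.r_[1.0, np.asarray(theta, dtype=float)]   # theta(B) coefficients
    c = np.zeros(n_terms + 1)
    for k in range(n_terms + 1):
        ak = a[k] if k < len(a) else 0.0
        s = sum(b[j] * c[k - j] for j in range(1, min(k, len(b) - 1) + 1))
        c[k] = ak - s
    return -c[1:]                                    # pi_1, pi_2, ...

def ar_expansion_distance(model1, model2, n_terms=30):
    """Euclidean distance between truncated AR-expansion coefficient vectors,
    the metric used to compare the generating processes of two fitted models."""
    p1 = ar_inf_weights(*model1, n_terms)
    p2 = ar_inf_weights(*model2, n_terms)
    return float(np.sqrt(np.sum((p1 - p2) ** 2)))

# distance between an ARMA(1,1) and a pure AR(1); in practice the coefficients would be
# parameter *estimates*, and this distance feeds the test / clustering step
print(ar_expansion_distance(([0.6], [0.3]), ([0.6], [])))
```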

5.
We examine robust estimators and tests using the family of generalized negative exponential disparities, which contains the Pearson's chi‐square and the ordinary negative exponential disparity as special cases. The influence function and α‐influence function of the proposed estimators are discussed and their breakdown points derived. Under the model, the estimators are asymptotically efficient, and are shown to have an asymptotic breakdown point of 50%. The proposed tests are shown to be equivalent to the likelihood ratio test under the null hypothesis, and their breakdown points are obtained. The competitive performance of the proposed estimators and tests relative to those based on the Hellinger distance is illustrated through examples and simulation results. Unlike the Hellinger distance, several members of this family of generalized negative exponential disparities generate estimators which also possess excellent inlier‐controlling capability. The corresponding tests of hypothesis are shown to have better power breakdown than the Hellinger deviance test in the cases examined.

6.
A test for the null hypothesis that a time series has characteristic equations with two unit roots is presented. The test, based on a standard regression computation, is shown to have good power properties when compared to previously existing tests.

7.
In many situations, we want to verify the existence of a relationship between multivariate time series. In this paper, we generalize the procedure developed by Haugh (1976) for univariate time series in order to test the hypothesis of noncorrelation between two multivariate stationary ARMA series. The test statistics are based on residual cross-correlation matrices. Under the null hypothesis of noncorrelation, we show that an arbitrary vector of residual cross-correlations asymptotically follows the same distribution as the corresponding vector of cross-correlations between the two innovation series. From this result, it follows that the test statistics considered are asymptotically distributed as chi-square random variables. Two test procedures are described. The first one is based on the residual cross-correlation matrix at a particular lag, whilst the second one is based on a portmanteau type statistic that generalizes Haugh's statistic. We also discuss how the procedures for testing noncorrelation can be adapted to determine the directions of causality in the sense of Granger (1969) between the two series. An advantage of the proposed procedures is that their application does not require the estimation of a global model for the two series. The finite-sample properties of the statistics introduced were studied by simulation under the null hypothesis. It led to modified statistics whose upper quantiles are much better approximated by those of the corresponding chi-square distribution. Finally, the procedures developed are applied to two different sets of economic data.
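The sketch below illustrates the general shape of such a portmanteau procedure for two multivariate residual series: cross-covariance matrices at lags −M,…,M are standardized by the lag-zero covariance matrices and summed into a statistic referred to a chi-square distribution. It is a simplified stand-in for the statistics in the paper (no finite-sample modification, and the residuals are assumed to have been obtained from fitted ARMA models beforehand).

```python
import numpy as np
from scipy import stats

def cross_cov(u, v, lag):
    """Sample cross-covariance matrix C_uv(lag) between two multivariate
    residual series u, v of shape (n, d)."""
    n = u.shape[0]
    u = u - u.mean(axis=0)
    v = v - v.mean(axis=0)
    if lag >= 0:
        return u[lag:].T @ v[: n - lag] / n
    return u[: n + lag].T @ v[-lag:] / n

def portmanteau_noncorrelation(u, v, max_lag):
    """Haugh-type portmanteau statistic from residual cross-covariances at lags
    -max_lag..max_lag; asymptotically chi-square with (2*max_lag+1)*d1*d2 degrees
    of freedom under non-correlation (simplified sketch of the procedure)."""
    n, d1 = u.shape
    d2 = v.shape[1]
    suu_inv = np.linalg.inv(cross_cov(u, u, 0))
    svv_inv = np.linalg.inv(cross_cov(v, v, 0))
    stat = 0.0
    for j in range(-max_lag, max_lag + 1):
        c = cross_cov(u, v, j)
        stat += n * np.trace(suu_inv @ c @ svv_inv @ c.T)
    df = (2 * max_lag + 1) * d1 * d2
    return stat, stats.chi2.sf(stat, df)

rng = np.random.default_rng(2)
u, v = rng.standard_normal((300, 2)), rng.standard_normal((300, 2))
print(portmanteau_noncorrelation(u, v, max_lag=5))
```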

8.
Risk of investing in a financial asset is quantified by functionals of squared returns. Discrete time stochastic volatility (SV) models impose a convenient and practically relevant time series dependence structure on the log-squared returns. Different long-term risk characteristics are postulated by short-memory SV and long-memory SV models. It is therefore important to test which of these two alternatives is suitable for a specific asset. Most standard tests are confounded by deterministic trends. This paper introduces a new, wavelet-based, test of the null hypothesis of short versus long memory in volatility which is robust to deterministic trends. In finite samples, the test performs better than currently available tests which are based on the Fourier transform.

9.
In this paper we propose a family of relatively simple nonparametric tests for a unit root in a univariate time series. Almost all the tests proposed in the literature test the unit root hypothesis against the alternative that the time series involved is stationary or trend stationary. In this paper we take the (trend) stationarity hypothesis as the null and the unit root hypothesis as the alternative. Another difference from most of the tests proposed in the literature is that in all four cases the asymptotic null distribution is of a well-known type, namely standard Cauchy. In the first instance we propose four Cauchy tests of the stationarity hypothesis against the unit root hypothesis. Under H1 the four test statistics involved, divided by the sample size n, converge weakly to a non-central Cauchy distribution, to one, and to the product of two normal variates, respectively. Hence, the absolute values of these test statistics converge in probability to infinity (at order n). The tests involved are therefore consistent against the unit root hypothesis. Moreover, the small sample performance of these tests is compared by Monte Carlo simulations. Furthermore, we propose two additional Cauchy tests of the trend stationarity hypothesis against the alternative of a unit root with drift.

10.
We propose tests for hypotheses on the parameters of the deterministic trend function of a univariate time series. The tests do not require knowledge of the form of serial correlation in the data, and they are robust to strong serial correlation. The data can contain a unit root, and the tests still have the correct size asymptotically. The tests that we analyze are standard heteroscedasticity-autocorrelation robust tests based on nonparametric kernel variance estimators. We analyze these tests using the fixed-b asymptotic framework recently proposed by Kiefer and Vogelsang. This framework allows us to study the power properties of the tests with regard to bandwidth and kernel choices. Our analysis shows that among popular kernels, specific kernel and bandwidth choices deliver tests with maximal power within a specific class of tests. Based on the theoretical results, we propose a data-dependent bandwidth rule that maximizes integrated power. Our recommended test is shown to have power that dominates a related test proposed by Vogelsang. We apply the recommended test to the logarithm of a net barter terms of trade series and we find that this series has a statistically significant negative slope. This finding is consistent with the well-known Prebisch–Singer hypothesis.
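A bare-bones version of the kind of test statistic involved: regress the series on an intercept and a linear trend, form a Bartlett-kernel HAC covariance matrix for the coefficient estimates, and compute the t-statistic on the slope. The kernel and bandwidth here are illustrative choices; the fixed-b critical values and the data-dependent bandwidth rule proposed in the paper are not reproduced.

```python
import numpy as np

def hac_cov(X, u, bandwidth):
    """Bartlett-kernel HAC covariance matrix of the OLS coefficients,
    built from the moment contributions v_t = x_t * u_t."""
    n = X.shape[0]
    V = X * u[:, None]
    omega = V.T @ V / n
    for j in range(1, bandwidth + 1):
        w = 1.0 - j / (bandwidth + 1.0)              # Bartlett weights
        gamma = V[j:].T @ V[:n - j] / n
        omega += w * (gamma + gamma.T)
    XtX_inv = np.linalg.inv(X.T @ X)
    return n * XtX_inv @ omega @ XtX_inv

def trend_slope_t(y, bandwidth):
    """HAC t-statistic on the slope of y_t = a + b*t + u_t; under fixed-b asymptotics its
    critical values depend on the kernel and on b = bandwidth/n (not tabulated here)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    X = np.column_stack([np.ones(n), np.arange(1.0, n + 1.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    cov = hac_cov(X, u, bandwidth)
    return float(beta[1] / np.sqrt(cov[1, 1]))

rng = np.random.default_rng(5)
n = 200
y = 0.02 * np.arange(n) + rng.standard_normal(n)      # linear trend plus noise
print(trend_slope_t(y, bandwidth=int(n ** (1 / 3))))
```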

11.
A consistent approach to the problem of testing non‐correlation between two univariate infinite‐order autoregressive models was proposed by Hong (1996). His test is based on a weighted sum of squares of residual cross‐correlations, with weights depending on a kernel function. In this paper, the author follows Hong's approach to test non‐correlation of two cointegrated (or partially non‐stationary) ARMA time series. The test of Pham, Roy & Cédras (2003) may be seen as a special case of his approach, as it corresponds to the choice of a truncated uniform kernel. The proposed procedure remains valid for testing non‐correlation between two stationary invertible multivariate ARMA time series. The author derives the asymptotic distribution of his test statistics under the null hypothesis and proves that his procedures are consistent. He also studies the level and power of his proposed tests in finite samples through simulation. Finally, he presents an illustration based on real data.

12.
In nonparametric statistics, a hypothesis testing problem based on the ranks of the data gives rise to two separate permutation sets corresponding to the null and to the alternative hypothesis, respectively. A modification of Critchlow's unified approach to hypothesis testing is proposed. By defining the distance between permutation sets to be the average distance between pairs of permutations, one from each set, various test statistics are derived for the multi-sample location problem and the two-way layout. The asymptotic distributions of the test statistics are computed under both the null and alternative hypotheses. Some comparisons are made on the basis of the asymptotic relative efficiency.
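A toy illustration of this construction, assuming a Spearman-type squared distance between permutations: the statistic is the average pairwise distance between the set of rankings compatible with the alternative and the ranking(s) induced by the data. The three-group location example and the choice of metric are assumptions made only to show the mechanics.

```python
import numpy as np

def spearman_distance(p, q):
    """Spearman-type squared distance between two permutations (rank vectors)."""
    p, q = np.asarray(p), np.asarray(q)
    return float(np.sum((p - q) ** 2))

def permutation_set_distance(set_a, set_b, dist=spearman_distance):
    """Average pairwise distance between two permutation sets,
    one permutation taken from each set."""
    return float(np.mean([[dist(p, q) for q in set_b] for p in set_a]))

# three-group location problem: the alternative set holds the ordering implied by H1
# (here mu_1 < mu_2 < mu_3), the data set holds the ordering of the observed group means
alternative_set = [(0, 1, 2)]
data_set = [tuple(np.argsort([4.1, 3.7, 5.2]))]   # -> (1, 0, 2)
print(permutation_set_distance(alternative_set, data_set))
```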

13.
Recently, Perron has carried out tests of the unit-root hypothesis against the alternative hypothesis of trend stationarity with a break in the trend occurring at the Great Crash of 1929 or at the 1973 oil-price shock. His analysis covers the Nelson–Plosser macroeconomic data series as well as a postwar quarterly real gross national product (GNP) series. His tests reject the unit-root null hypothesis for most of the series. This article takes issue with the assumption used by Perron that the Great Crash and the oil-price shock can be treated as exogenous events. A variation of Perron's test is considered in which the breakpoint is estimated rather than fixed. We argue that this test is more appropriate than Perron's because it circumvents the problem of data-mining. The asymptotic distribution of the estimated breakpoint test statistic is determined. The data series considered by Perron are reanalyzed using this test statistic. The empirical results make use of the asymptotics developed for the test statistic as well as extensive finite-sample corrections obtained by simulation. The effect on the empirical results of fat-tailed and temporally dependent innovations is investigated. In brief, by treating the breakpoint as endogenous, we find that there is less evidence against the unit-root hypothesis than Perron finds for many of the data series, but stronger evidence against it for several of the series, including the Nelson–Plosser industrial-production, nominal-GNP, and real-GNP series.
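A stripped-down sketch of an estimated-breakpoint unit-root test in the spirit described above: for each candidate break date, run a Dickey–Fuller-type regression of the differenced series on an intercept, a linear trend, a post-break intercept shift and the lagged level, and keep the break date yielding the most negative t-statistic on the lagged level. Lag augmentation, alternative break specifications and the simulated critical values used in the article are omitted; this only shows the search over break dates.

```python
import numpy as np

def min_t_break_unit_root(y, trim=0.15):
    """Search over candidate break dates for the one giving the most negative
    t-statistic on y_{t-1} in a Dickey-Fuller-type regression with intercept, trend
    and a post-break intercept shift (no lag augmentation; a simplified sketch)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    dy, ylag, trend = np.diff(y), y[:-1], np.arange(1.0, n)
    best_t, best_tb = np.inf, None
    for tb in range(int(trim * n), int((1 - trim) * n)):
        du = (trend > tb).astype(float)                 # intercept shift after the break
        X = np.column_stack([np.ones(n - 1), trend, du, ylag])
        beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
        resid = dy - X @ beta
        s2 = resid @ resid / (n - 1 - X.shape[1])
        t_stat = beta[3] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[3, 3])
        if t_stat < best_t:
            best_t, best_tb = t_stat, tb
    return best_t, best_tb    # compare best_t with simulated critical values

rng = np.random.default_rng(6)
print(min_t_break_unit_root(np.cumsum(rng.standard_normal(200))))   # unit-root null data
```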

14.
It is well known that the traditional Pearson correlation in many cases fails to capture non-linear dependence structures in bivariate data. Other scalar measures capable of capturing non-linear dependence exist. A common disadvantage of such measures, however, is that they cannot distinguish between negative and positive dependence, and typically the alternative hypothesis of the accompanying test of independence is simply “dependence”. This paper discusses how a newly developed local dependence measure, the local Gaussian correlation, can be used to construct local and global tests of independence. A global measure of dependence is constructed by aggregating local Gaussian correlation on subsets of \(\mathbb{R}^{2}\), and an accompanying test of independence is proposed. Choice of bandwidth is based on likelihood cross-validation. Properties of this measure and asymptotics of the corresponding estimate are discussed. A bootstrap version of the test is implemented and tried out on both real and simulated data. The performance of the proposed test is compared to the Brownian distance covariance test. Finally, when the hypothesis of independence is rejected, local independence tests are used to investigate the cause of the rejection.

15.
In this paper we consider testing that an economic time series follows a martingale difference process. The martingale difference hypothesis has typically been tested using information contained in the second moments of a process, that is, using test statistics based on the sample autocovariances or periodograms. Tests based on these statistics are inconsistent since they cannot detect nonlinear alternatives. In this paper we consider tests that detect linear and nonlinear alternatives. Given that the asymptotic distributions of the test statistics considered depend on the data generating process, we propose to implement the tests using a modified wild bootstrap procedure. The paper theoretically justifies the proposed tests and examines their finite sample behavior by means of Monte Carlo experiments.
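A minimal sketch of the wild-bootstrap mechanics described here, using a simple autocorrelation-based statistic as a placeholder (the article's statistics are designed to also pick up nonlinear alternatives): the bootstrap multiplies the series by i.i.d. Rademacher weights and recomputes the statistic to approximate its null distribution.

```python
import numpy as np

def autocorr_stat(y, max_lag):
    """Box-Pierce-type statistic built from sample autocorrelations (placeholder
    statistic; second-moment based, so blind to purely nonlinear dependence)."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    n = len(y)
    g0 = np.dot(y, y) / n
    r = np.array([np.dot(y[k:], y[:n - k]) / (n * g0) for k in range(1, max_lag + 1)])
    return n * float(np.sum(r ** 2))

def wild_bootstrap_pvalue(y, max_lag=10, n_boot=499, seed=0):
    """Wild-bootstrap p-value: i.i.d. Rademacher multipliers preserve the martingale
    difference structure under the null while mimicking the statistic's distribution."""
    rng = np.random.default_rng(seed)
    stat = autocorr_stat(y, max_lag)
    boot = np.array([autocorr_stat(y * rng.choice([-1.0, 1.0], size=len(y)), max_lag)
                     for _ in range(n_boot)])
    return (1 + np.sum(boot >= stat)) / (n_boot + 1)

rng = np.random.default_rng(7)
e = rng.standard_normal(400)
print(wild_bootstrap_pvalue(e))           # white noise: large p-value expected
x = np.zeros(400)
for t in range(1, 400):
    x[t] = 0.5 * x[t - 1] + e[t]          # AR(1): not a martingale difference
print(wild_bootstrap_pvalue(x))           # small p-value expected
```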

16.
Graphical analysis of complex brain networks is a fundamental area of modern neuroscience. Functional connectivity is important since many neurological and psychiatric disorders, including schizophrenia, are described as ‘dys-connectivity’ syndromes. Using electroencephalogram time series collected on each of a group of 15 individuals with a common medical diagnosis of positive syndrome schizophrenia, we seek to build a single, representative, brain functional connectivity group graph. Disparity/distance measures between spectral matrices are identified and used to define the normalized graph Laplacian enabling clustering of the spectral matrices for detecting ‘outlying’ individuals. Two such individuals are identified. For each remaining individual, we derive a test for each edge in the connectivity graph based on average estimated partial coherence over frequencies, and associated p-values are found. For each edge these are used in a multiple hypothesis test across individuals and the proportion rejecting the hypothesis of no edge is used to construct a connectivity group graph. This study provides a framework for integrating results on multiple individuals into a single overall connectivity structure.
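The following sketch shows one way to turn a symmetric disparity matrix between individuals' spectral matrices into a normalized graph Laplacian whose eigen-decomposition can be used for clustering and for spotting outlying individuals. The Gaussian affinity, the scale choice and the toy disparity matrix are assumptions; the particular disparity measures and the multiple-testing step of the study are not reproduced.

```python
import numpy as np

def normalized_laplacian(disparity, scale=None):
    """Normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2} built from a symmetric
    disparity matrix via a Gaussian affinity (illustrative choice of affinity)."""
    d = np.asarray(disparity, dtype=float)
    if scale is None:
        scale = np.median(d[d > 0])
    W = np.exp(-(d / scale) ** 2)           # affinity from disparity
    np.fill_diagonal(W, 0.0)
    deg_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    return np.eye(len(d)) - (W * deg_inv_sqrt[:, None]) * deg_inv_sqrt[None, :]

# toy disparities for 15 individuals: 13 mutually similar, 2 far from the rest;
# the leading eigenvectors embed the individuals and expose the two 'outliers'
levels = np.r_[np.zeros(13), 5.0, 9.0]
disp = np.abs(np.subtract.outer(levels, levels))
evals, evecs = np.linalg.eigh(normalized_laplacian(disp))
print(np.round(evals[:4], 3))
```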

17.
Econometric Reviews, 2013, 32(4): 351–377.
In this paper we consider testing that an economic time series follows a martingale difference process. The martingale difference hypothesis has typically been tested using information contained in the second moments of a process, that is, using test statistics based on the sample autocovariances or periodograms. Tests based on these statistics are inconsistent since they cannot detect nonlinear alternatives. In this paper we consider tests that detect linear and nonlinear alternatives. Given that the asymptotic distributions of the test statistics considered depend on the data generating process, we propose to implement the tests using a modified wild bootstrap procedure. The paper theoretically justifies the proposed tests and examines their finite sample behavior by means of Monte Carlo experiments.

18.
This article considers the problem of testing the null hypothesis of stochastic stationarity in time series characterized by variance shifts at some (known or unknown) point in the sample. It is shown that existing stationarity tests can be severely biased in the presence of such shifts, either oversized or undersized, with associated spurious power gains or losses, depending on the values of the breakpoint parameter and on the ratio of the prebreak to postbreak variance. Under the assumption of a serially independent Gaussian error term with known break date and known variance ratio, a locally best invariant (LBI) test of the null hypothesis of stationarity in the presence of variance shifts is then derived. Both the test statistic and its asymptotic null distribution depend on the breakpoint parameter and also, in general, on the variance ratio. Modifications of the LBI test statistic are proposed for which the limiting distribution is independent of such nuisance parameters and belongs to the family of Cramér–von Mises distributions. One such modification is particularly appealing in that it is simultaneously exact invariant to variance shifts and to structural breaks in the slope and/or level of the series. Monte Carlo simulations demonstrate that the power loss from using our modified statistics in place of the LBI statistic is not large, even in the neighborhood of the null hypothesis, and particularly for series with shifts in the slope and/or level. The tests are extended to cover the cases of weakly dependent error processes and unknown breakpoints. The implementation of the tests is illustrated using output, inflation, and exchange rate data series.
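For reference, a sketch of the standard (unmodified, i.i.d.-error, level case) stationarity statistic that this line of work starts from: partial sums of demeaned observations scaled by the error variance. The locally best invariant modifications for variance shifts, and the versions robust to slope/level breaks and weak dependence, are the contribution of the article and are not reproduced here.

```python
import numpy as np

def stationarity_stat_level(y):
    """Standard KPSS-type stationarity statistic for the level case with serially
    independent errors: (1/n^2) * sum of squared partial sums of the demeaned series,
    divided by the estimated error variance."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    e = y - y.mean()                 # residuals under the level-stationarity null
    s = np.cumsum(e)                 # partial sums of residuals
    sigma2 = np.dot(e, e) / n        # error variance (i.i.d.-error version)
    return float(np.sum(s ** 2) / (n ** 2 * sigma2))

rng = np.random.default_rng(4)
print(stationarity_stat_level(rng.standard_normal(500)))             # stationary: small
print(stationarity_stat_level(np.cumsum(rng.standard_normal(500))))  # random walk: large
```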

19.
The main purpose of this paper is to introduce first a new family of empirical test statistics for testing a simple null hypothesis when the vector of parameters of interest is defined through a specific set of unbiased estimating functions. This family of test statistics is based on a distance between two probability vectors, with the first probability vector obtained by maximizing the empirical likelihood (EL) on the vector of parameters, and the second vector defined from the fixed vector of parameters under the simple null hypothesis. The distance considered for this purpose is the phi-divergence measure. The asymptotic distribution is then derived for this family of test statistics. The proposed methodology is illustrated through the well-known data of Newcomb's measurements on the passage time for light. A simulation study is carried out to compare its performance with that of the EL ratio test when confidence intervals are constructed based on the respective statistics for small sample sizes. The results suggest that the ‘empirical modified likelihood ratio test statistic’ provides a competitive alternative to the EL ratio test statistic, and is also more robust than the EL ratio test statistic in the presence of contamination in the data. Finally, we propose empirical phi-divergence test statistics for testing a composite null hypothesis and present some asymptotic as well as simulation results for evaluating the performance of these test procedures.
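Two standard members of the phi-divergence family between probability vectors, of the kind used to compare the empirical-likelihood weight vector with the weight vector fixed under the null; which member is used, and the scaling that turns the divergence into the test statistic, follow the paper and are not reproduced here. The weight vectors in the example are hypothetical.

```python
import numpy as np

def pearson_chisq_divergence(p, q):
    """Pearson chi-square divergence, one member of the phi-divergence family."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum((p - q) ** 2 / q))

def kullback_leibler_divergence(p, q):
    """Kullback-Leibler divergence, another member of the phi-divergence family."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# first vector: hypothetical empirical-likelihood weights obtained by maximizing EL;
# second vector: weights implied under the null (here taken as uniform 1/n for illustration)
p_el = np.array([0.15, 0.22, 0.18, 0.25, 0.20])
q_null = np.full(5, 0.2)
print(pearson_chisq_divergence(p_el, q_null), kullback_leibler_divergence(p_el, q_null))
```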

20.
This paper considers the likelihood ratio (LR) tests of stationarity, common trends and cointegration for multivariate time series. As the distribution of these tests is not known, a bootstrap version is proposed via a state-space representation. The bootstrap samples are obtained from the Kalman filter innovations under the null hypothesis. Monte Carlo simulations for the Gaussian univariate random walk plus noise model show that the bootstrap LR test achieves higher power for medium-sized deviations from the null hypothesis than a locally optimal and one-sided Lagrange Multiplier (LM) test that has a known asymptotic distribution. The power gains of the bootstrap LR test are significantly larger for testing the hypothesis of common trends and cointegration in multivariate time series, as the alternative asymptotic procedure – obtained as an extension of the LM test of stationarity – does not possess properties of optimality. Finally, it is shown that the (pseudo-)LR tests maintain good size and power properties also for the non-Gaussian series. An empirical illustration is provided.
