Similar Literature
20 similar records found.
1.
This article builds on the test proposed by Lyhagen [The seasonal KPSS statistic, Econom. Bull. 3 (2006), pp. 1–9] for seasonal time series, which has the null hypothesis of level stationarity against the alternative of unit-root behaviour at some or all of the zero and seasonal frequencies. This new test is referred to as the seasonal-frequency Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test; as originally proposed, it is not supported by a regression framework.

The purpose of this paper is twofold. First, we propose a model-based regression method that provides a clear illustration of Lyhagen's test, and we establish its asymptotic theory in the time domain. Second, we use the Monte Carlo method to study the finite-sample performance of the seasonal KPSS test in the presence of additive outliers. Our simulation analysis shows that the test is robust to the magnitude and the number of outliers, and the results indicate overall good finite-sample performance of the test.
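As context for the abstract above, the zero-frequency KPSS level-stationarity statistic that Lyhagen's seasonal test extends can be sketched as follows. This is a minimal illustration with an assumed Bartlett-kernel bandwidth; it is not Lyhagen's seasonal procedure or the authors' regression framework.

```python
import numpy as np

def kpss_level(y, lags=4):
    """Zero-frequency KPSS level-stationarity statistic.

    eta = T^-2 * sum_t S_t^2 / s2(l), where S_t are partial sums of the
    demeaned series and s2(l) is a Bartlett-kernel long-run variance estimate.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    e = y - y.mean()                 # residuals from a level-only regression
    S = np.cumsum(e)                 # partial-sum process
    s2 = e @ e / T                   # short-run variance
    for l in range(1, lags + 1):     # add Bartlett-weighted autocovariances
        w = 1.0 - l / (lags + 1.0)
        s2 += 2.0 * w * (e[l:] @ e[:-l]) / T
    return S @ S / (T**2 * s2)

rng = np.random.default_rng(0)
stat_stationary = kpss_level(rng.standard_normal(500))          # small under H0
stat_unit_root = kpss_level(np.cumsum(rng.standard_normal(500)))  # diverges
```

Under level stationarity the statistic should fall below the 5% critical value of roughly 0.463; under a unit root it grows with the sample size.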

2.
The Dickey-Fuller ρ̂τ and τ̂τ tests are based on a regression of a variable on its lagged value, an intercept, and a trend term. The distributions of both statistics depend on the coefficient of the trend, and the usual Dickey-Fuller tabulations assume that this coefficient equals zero. This paper provides tabulations for the case that the coefficient of the trend is non-zero.
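The tabulation exercise described above can be illustrated by Monte Carlo simulation of the t-type statistic from regressing Δy_t on an intercept, a trend, and y_{t-1}. This sketch assumes a zero trend coefficient in the data-generating process (the standard case); sample size and replication count are arbitrary choices.

```python
import numpy as np

def df_tau(y):
    """t-statistic on rho in: dy_t = a + b*t + rho*y_{t-1} + e_t (OLS)."""
    dy = np.diff(y)
    T = len(dy)
    X = np.column_stack([np.ones(T), np.arange(1, T + 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ beta
    s2 = e @ e / (T - 3)                      # residual variance, 3 regressors
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[2] / np.sqrt(cov[2, 2])

rng = np.random.default_rng(1)
stats = [df_tau(np.cumsum(rng.standard_normal(200))) for _ in range(2000)]
crit5 = float(np.percentile(stats, 5))        # simulated 5% critical value
```

The simulated 5% point should be near the familiar tabulated value of about -3.43 for this sample size.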

3.
Summary: In this paper the seasonal unit root test of Hylleberg et al. (1990) is generalized to cover a heterogeneous panel. The procedure follows the work of Im, Pesaran and Shin (2002) and was independently proposed by Otero et al. (2004). Test statistics are given and critical values are obtained by simulation. Moreover, the properties of the tests are analyzed for different deterministic and dynamic specifications. Evidence is presented that for a small time series dimension the power is low even for an increasing cross-section dimension. It therefore seems necessary for the time series dimension to exceed the cross-section dimension. The test is applied to unemployment data in industrialized countries. In some cases seasonal unit roots are detected. However, the null hypotheses of panel seasonal unit roots are rejected. The null hypothesis of a unit root at the zero frequency is not rejected, thereby supporting the presence of hysteresis effects. * The research of this paper was supported by the Deutsche Forschungsgemeinschaft. The paper was presented at the workshop “Unit roots and cointegration in panel data” in Frankfurt, October 2004 and in the poster session at the EC2 meeting in Marseille, December 2004. We are grateful to the participants of the workshops and an anonymous referee for their helpful comments.

4.
Based on the maximal invariant principle, we derive two ratio tests (a locally best invariant test and a point optimal test) for a unit root and compare them with previously proposed ratio tests. We also show that our ratio tests tend to have better power than the Dickey-Fuller test and the modified Dickey-Fuller test.

5.
The autoregressive Cauchy estimator uses the sign of the first lag as instrumental variable (IV); under independent and identically distributed (i.i.d.) errors, the resulting IV t-type statistic is known to have a standard normal limiting distribution in the unit root case. With unconditional heteroskedasticity, the ordinary least squares (OLS) t statistic is affected in the unit root case; but the paper shows that, by using some nonlinear transformation behaving asymptotically like the sign as instrument, limiting normality of the IV t-type statistic is maintained when the series to be tested has no deterministic trends. Neither estimation of the so-called variance profile nor bootstrap procedures are required to this end. The Cauchy unit root test has power in the same 1/T neighborhoods as the usual unit root tests, including for a wide range of initial-value magnitudes. It is furthermore shown to be competitive with other, bootstrap-based, robust tests. When the series exhibits a linear trend, however, the null distribution of the Cauchy test for a unit root becomes nonstandard, reminiscent of the Dickey-Fuller distribution. In this case, inference robust to nonstationary volatility is obtained via the wild bootstrap.
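The Cauchy IV statistic described above is simple to sketch. The following illustrates limiting normality under the unit root null with i.i.d. errors and no deterministic trend; replication settings are arbitrary, and this is not the paper's heteroskedasticity-robust variant.

```python
import numpy as np

def cauchy_iv_t(y):
    """IV t-type statistic for rho = 1, instrumenting y_{t-1} by sign(y_{t-1})."""
    y0, y1 = y[:-1], y[1:]
    z = np.sign(y0)                          # the Cauchy instrument
    rho = (z @ y1) / (z @ y0)                # IV estimator of rho
    e = y1 - rho * y0
    s2 = e @ e / len(e)
    # (rho - 1) * sum|y_{t-1}| / sqrt(s2 * n) -> N(0,1) under the null
    return (rho - 1.0) * (z @ y0) / np.sqrt(s2 * len(e))

rng = np.random.default_rng(2)
tstats = [cauchy_iv_t(np.cumsum(rng.standard_normal(300))) for _ in range(2000)]
```

The simulated statistics should have mean near 0 and standard deviation near 1, matching the standard normal limit.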

6.
In this paper we evaluate the performance of three methods for testing the existence of a unit root in a time series, when the models under consideration in the null hypothesis do not display autocorrelation in the error term. In such cases, simple versions of the Dickey-Fuller test are the most appropriate, rather than the well-known augmented Dickey-Fuller or Phillips-Perron tests. Through Monte Carlo simulations we show that, apart from a few cases, testing for a unit root yields actual type I error rates and power very close to their nominal levels. Additionally, when the random walk null hypothesis is true, by gradually increasing the sample size we observe that p-values for the drift in the unrestricted model fluctuate at low levels with small variance, and the Durbin-Watson (DW) statistic approaches 2 in both the unrestricted and restricted models. If, however, the null hypothesis of a random walk is false, taking a larger sample, the DW statistic in the restricted model starts to deviate from 2, while in the unrestricted model it continues to approach 2. It is also shown that the probability of not rejecting that the errors are uncorrelated, when they are indeed uncorrelated, is higher when the DW test is applied at the 1% nominal level of significance.
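The behaviour of the DW statistic described above can be illustrated under a true random walk; the "restricted"/"unrestricted" labels follow the abstract, and the sample size is an arbitrary choice.

```python
import numpy as np

def dw(e):
    """Durbin-Watson statistic of a residual series."""
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))

rng = np.random.default_rng(3)
y = np.cumsum(rng.standard_normal(5000))      # data generated as a random walk

# Unrestricted model: y_t = a + b*y_{t-1} + e_t, estimated by OLS
X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
dw_unres = dw(y[1:] - X @ beta)

# Restricted model (random walk imposed): residuals are the first differences
dw_res = dw(np.diff(y))
```

With the random walk null true, both statistics should be close to 2, as the abstract describes.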

7.
A residual-based test for cointegration is proposed. The method of two-stage least squares is used to estimate the cointegration model parameters. The residuals are then tested for the existence of a unit root using the augmented Dickey-Fuller test.
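The two-step procedure can be sketched as follows. OLS stands in for two-stage least squares in the first step, and a plain (non-augmented) Dickey-Fuller statistic is computed on the residuals, so this is an illustration of the idea rather than the proposed test.

```python
import numpy as np

def df_stat(e):
    """Dickey-Fuller t-statistic (no deterministic terms) on residuals e."""
    de, lag = np.diff(e), e[:-1]
    rho = (lag @ de) / (lag @ lag)
    u = de - rho * lag
    s2 = u @ u / (len(u) - 1)
    return float(rho / np.sqrt(s2 / (lag @ lag)))

rng = np.random.default_rng(4)
x = np.cumsum(rng.standard_normal(500))          # I(1) regressor
y = 2.0 + 0.5 * x + rng.standard_normal(500)     # cointegrated with x

# First stage: estimate the cointegrating regression (OLS stand-in for 2SLS)
X = np.column_stack([np.ones(500), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
stat = df_stat(resid)                            # strongly negative => stationary residuals
```

Under cointegration the residuals are stationary, so the statistic is far below typical residual-based critical values (around -3.3 at the 5% level in the Engle-Granger tabulations).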

8.
This paper introduces a modified one-sample test of goodness-of-fit based on the cumulative distribution function. Damico [A new one-sample test for goodness-of-fit. Commun Stat – Theory Methods. 2004;33:181–193] proposed a goodness-of-fit test for a univariate distribution that partitions the probability range into n intervals of equal probability mass 1/n and verifies that the hypothesized distribution, evaluated at the observed data, would place one case into each interval. The present paper extends this notion by allowing for m intervals of probability mass r/n, where r≥1 and n=m×r. A simulation study for small and moderate sample sizes demonstrates that the proposed test with two observations per interval is more powerful under various alternatives than the test proposed by Damico (2004).
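The interval-partition idea can be sketched with a chi-square-style count discrepancy; note that `count_statistic` below is illustrative and is not claimed to be Damico's or the authors' exact test statistic.

```python
import math
import numpy as np

def interval_counts(x, cdf, m):
    """Counts of observations in m equal-probability intervals of the hypothesized cdf."""
    u = np.array([cdf(v) for v in x])      # probability integral transform: U(0,1) under H0
    bins = np.clip(np.floor(u * m).astype(int), 0, m - 1)
    return np.bincount(bins, minlength=m)

def count_statistic(x, cdf, r):
    """Sketch of the partition idea: with n = m*r observations, each of the m
    intervals should hold about r cases under H0; sum squared deviations."""
    m = len(x) // r
    c = interval_counts(x, cdf, m)
    return float(np.sum((c - r) ** 2) / r)

std_norm_cdf = lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0)))

rng = np.random.default_rng(5)
stat_h0 = count_statistic(rng.standard_normal(200), std_norm_cdf, r=2)       # N(0,1) data
stat_h1 = count_statistic(rng.standard_normal(200) + 1.0, std_norm_cdf, r=2)  # shifted
```

Under the shifted alternative the counts pile up in the upper intervals, so the discrepancy is much larger than under the null.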

9.
For ethical reasons, group sequential trials were introduced to allow trials to stop early in the event of extreme results. Endpoints in such trials are usually mortality or irreversible morbidity. For a given endpoint, the norm is to use a single test statistic and to use that same statistic for each analysis. This approach is risky because the test statistic has to be specified before the study is unblinded, and there is a loss in power if the assumptions that ensure optimality for each analysis are not met. To minimize the risk of moderate to substantial loss in power due to a suboptimal choice of statistic, a robust method was developed for nonsequential trials. The concept is analogous to diversification of financial investments to minimize risk. The method is based on combining P values from multiple test statistics for formal inference while controlling the type I error rate at its designated value. This article evaluates the performance of 2 P value combining methods for group sequential trials. The emphasis is on time to event trials, although results from less complex trials are also included. The gain or loss in power with the combination method relative to a single statistic is asymmetric in its favor. Depending on the power of each individual test, the combination method can give more power than any single test or give power that is closer to the test with the most power. The versatility of the method is that it can combine P values from different test statistics for analysis at different times. The robustness of results suggests that inference from group sequential trials can be strengthened with the use of combined tests.
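The abstract leaves the particular combination rule open; Fisher's classic method is one concrete way to combine independent P values while controlling the type I error rate. This is a sketch, not the authors' procedure; the closed-form survival function below holds because the chi-square degrees of freedom 2k are even.

```python
import math

def fisher_combine(pvals):
    """Fisher's method: -2 * sum(log p_i) ~ chi-square with 2k df under H0,
    assuming the k p-values are independent. Returns the combined p-value."""
    stat = -2.0 * sum(math.log(p) for p in pvals)
    k = len(pvals)
    # Survival function of chi2(2k) in closed form for even df:
    # P(X > x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    term, partial = 1.0, 0.0
    for j in range(k):
        partial += term
        term *= (stat / 2.0) / (j + 1)
    return partial * math.exp(-stat / 2.0)

p_combined = fisher_combine([0.04, 0.20, 0.11])
```

Three individually unremarkable p-values can combine into clear evidence; here the combined value is below 0.05 even though only one component is.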

10.
We investigate the properties of several statistical tests for comparing treatment groups with respect to multivariate survival data, based on the marginal analysis approach introduced by Wei, Lin and Weissfeld [Regression analysis of multivariate incomplete failure time data by modelling marginal distributions, JASA vol. 84, pp. 1065–1073]. We consider two types of directional tests, based on a constrained maximization and on linear combinations of the unconstrained maximizer of the working likelihood function, and the omnibus test arising from the same working likelihood. The directional tests are members of a larger class of tests, from which an asymptotically optimal test can be found. We compare the asymptotic powers of the tests under general contiguous alternatives for a variety of settings, and also consider the choice of the number of survival times to include in the multivariate outcome. We illustrate the results with simulations and with the results from a clinical trial examining recurring opportunistic infections in persons with HIV.

11.
This paper provides a means of accurately simulating explosive autoregressive processes and uses this method to analyze the distribution of the likelihood ratio test statistic for a unit root in an explosive second-order autoregressive process. While the standard Dickey-Fuller distribution is known to apply in this case, simulations of statistics in the explosive region are beset by the magnitude of the numbers involved, which causes numerical inaccuracies. This has previously constituted a bar on supporting asymptotic results by simulation and on analyzing the finite-sample properties of tests in the explosive region.

12.
A frequent question raised by practitioners doing unit root tests is whether these tests are sensitive to the presence of heteroscedasticity. Theoretically this is not the case for a wide range of heteroscedastic models. However, for some limiting cases, such as degenerate and integrated heteroscedastic processes, it is not obvious whether this will have an effect. In this paper we report a Monte Carlo study analyzing the implications of various types of heteroscedasticity for three types of unit root tests: the usual Dickey-Fuller test, Phillips' (1987) semi-parametric test, and a Dickey-Fuller type test using White's (1980) heteroscedasticity-consistent standard errors. The sorts of heteroscedasticity we examine are the GARCH model of Bollerslev (1986) and the Exponential ARCH model of Nelson (1991). In particular, we call attention to situations where the conditional variances exhibit a high degree of persistence, as is frequently observed for returns of financial time series, and the case where, in fact, the variance process for the first class of models becomes degenerate.
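A minimal version of the experiment described above simulates a random walk with GARCH(1,1) errors and records Dickey-Fuller statistics. The parameter values are assumed for illustration; the persistence alpha + beta = 0.95 mimics financial returns, as the abstract emphasizes.

```python
import math
import numpy as np

def garch11_noise(T, omega, alpha, beta, rng):
    """GARCH(1,1) errors: e_t = sqrt(h_t) z_t, h_t = omega + alpha e_{t-1}^2 + beta h_{t-1}."""
    e = np.empty(T)
    h = omega / (1.0 - alpha - beta)      # start at the unconditional variance
    for t in range(T):
        e[t] = math.sqrt(h) * rng.standard_normal()
        h = omega + alpha * e[t] ** 2 + beta * h
    return e

def df_t(y):
    """Dickey-Fuller t-statistic with no deterministic terms."""
    dy, lag = np.diff(y), y[:-1]
    rho = (lag @ dy) / (lag @ lag)
    u = dy - rho * lag
    s2 = u @ u / (len(u) - 1)
    return rho / math.sqrt(s2 / (lag @ lag))

rng = np.random.default_rng(6)
stats = [df_t(np.cumsum(garch11_noise(300, 0.05, 0.10, 0.85, rng)))
         for _ in range(1000)]
crit5 = float(np.percentile(stats, 5))    # compare with the DF 5% value of about -1.95
```

Consistent with the theory the abstract cites, the simulated 5% point under persistent (but non-degenerate) GARCH stays close to the homoscedastic Dickey-Fuller value.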

13.
This paper presents a consistent Generalized Method of Moments (GMM) residuals-based test of functional form for time series models. By relating two moments we deliver a vector moment condition in which at least one element must be nonzero if the model is misspecified. The test will never fail to detect misspecification of any form in large samples, and is asymptotically chi-squared under the null, allowing for fast and simple inference. A simulation study reveals that randomly selecting the nuisance parameter leads to more power than supremum tests, and can obtain empirical power nearly equivalent to the most powerful test even for relatively small n.

14.
When testing treatment effects in multi‐arm clinical trials, the Bonferroni method or the method of Simes (1986) is used to adjust for the multiple comparisons. When control of the family‐wise error rate is required, these methods are combined with the closed testing principle of Marcus et al. (1976). Under weak assumptions, the resulting p‐values all give rise to valid tests provided that the basic test used for each treatment is valid. However, standard tests can be far from valid, especially when the endpoint is binary and when sample sizes are unbalanced, as is common in multi‐arm clinical trials. This paper looks at the relationship between size deviations of the component test and size deviations of the multiple comparison test. The conclusion is that multiple comparison tests are as imperfect as the basic tests at nominal size α/m, where m is the number of treatments. This, admittedly not unexpected, conclusion implies that these methods should only be used when the component test is very accurate at small nominal sizes. For binary endpoints, this suggests use of the parametric bootstrap test. All these conclusions are supported by a detailed numerical study.

15.
This article proposes a modified p-value for the two-sided test of the location of the normal distribution when the parameter space is restricted. A commonly used test for the two-sided test of the normal distribution is the uniformly most powerful unbiased (UMPU) test, which is also the likelihood ratio test. The p-value of the test is used as evidence against the null hypothesis. Note that the usual p-value does not depend on the parameter space but only on the observation and the assumption of the null hypothesis. When the parameter space is known to be restricted, the usual p-value cannot sufficiently utilize this information to make a more accurate decision. In this paper, a modified p-value (also called the rp-value) dependent on the parameter space is proposed, and the test derived from the modified p-value is also shown to be the UMPU test.
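For reference, the usual (parameter-space-free) two-sided p-value that the article modifies can be computed directly for a normal mean with known variance; the restricted-space rp-value itself is not reproduced here.

```python
import math

def two_sided_p(xbar, mu0, sigma, n):
    """Usual two-sided p-value for H0: mu = mu0 with normal data and known sigma.

    p = 2 * (1 - Phi(|z|)), z = (xbar - mu0) * sqrt(n) / sigma; this depends only
    on the observation and H0, not on any restriction of the parameter space.
    """
    z = (xbar - mu0) * math.sqrt(n) / sigma
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))   # standard normal CDF
    return 2.0 * (1.0 - phi)

p = two_sided_p(xbar=0.4, mu0=0.0, sigma=1.0, n=25)   # z = 2.0
```

With z = 2.0 the p-value is about 0.046, regardless of any known restriction on the mean; that insensitivity is exactly what the rp-value is designed to fix.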

16.
In this paper, we consider tests for assessing whether two stationary and independent time series have the same spectral densities (or, equivalently, the same autocovariance functions). Both frequency domain and time domain test statistics for this purpose are reviewed. The adaptive Neyman tests are then introduced and their performances are investigated. Our tests are adaptive, that is, they are constructed entirely from the data and do not involve any unknown smoothing parameters. Simulation studies show that our proposed tests are at least comparable to the current tests in most cases. Furthermore, our tests are much more powerful in some cases, such as against long-order autoregressive moving average (ARMA) models, for example seasonal ARMA series.

17.

In time series, it is essential to check the independence of data by means of a proper method or an appropriate statistical test before any further analysis. Among the various independence tests, a powerful and productive test was introduced by Matilla-García and Marín via an m-dimensional vectorial process, in which the value of the process at time t consists of the m-history of the primary process. However, this construction induces dependence among the vectors even when the underlying random variables are assumed independent. Accounting for this dependence, a modified test is obtained in this article by deriving a new asymptotic distribution based on weighted chi-square random variables. Further alterations to the test are made via the bootstrap method and by controlling the overlap. Compared with the original test, the modified test is shown to be not only more accurate but also more powerful.
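Constructing the m-dimensional vectorial process is straightforward: overlapping m-histories are dependent even when the underlying series is i.i.d., which is exactly the dependence the modified test addresses. A minimal sketch using NumPy's sliding-window view:

```python
import numpy as np

def m_histories(x, m):
    """Stack the overlapping m-histories of a series: row t is (x_t, ..., x_{t+m-1}).

    Adjacent rows share m-1 entries, so the rows are dependent by construction
    even if the entries of x are i.i.d.
    """
    return np.lib.stride_tricks.sliding_window_view(np.asarray(x), m)

x = np.arange(6)
H = m_histories(x, 3)    # 4 overlapping 3-histories of the series 0..5
```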

18.
The cause-of-death test of Peto et al. (1980) pools information from a Hoel-Walburg test on incidental tumors with information from a logrank test on fatal tumors in order to compare the tumor rate of a group of rodents exposed to a carcinogen against the tumor rate of a group of unexposed animals. The cause-of-death test, which can arise as a partial likelihood score test from a model that assumes proportional odds for tumor prevalence and proportional hazards for tumor mortality, is not, in general, a direct test for equality of tumor onset distributions for occult tumors that are observed in both fatal and incidental contexts. This paper develops a direct cause-of-death test for comparing distributions of time to onset of occult tumors. The test is derived as a partial likelihood score test under an assumed proportional hazards model for tumor onset distributions. The size and power of the proposed test are compared in a Monte Carlo simulation study to the size and power of competing procedures, including procedures that do not require cause-of-death information.

19.
Hartley's test for homogeneity of k normal‐distribution variances is based on the ratio between the maximum sample variance and the minimum sample variance. In this paper, the author uses the same statistic to test for equivalence of k variances. Equivalence is defined in terms of the ratio between the maximum and minimum population variances, and one concludes equivalence when Hartley's ratio is small. Exact critical values for this test are obtained by using an integral expression for the power function and some theoretical results about the power function. These exact critical values are available both when sample sizes are equal and when sample sizes are unequal. One related result in the paper is that Hartley's test for homogeneity of variances is no longer unbiased when the sample sizes are unequal. The Canadian Journal of Statistics 38: 647–664; 2010 © 2010 Statistical Society of Canada
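Hartley's statistic itself is elementary to compute; the following sketch uses assumed group sizes, and the exact critical values derived in the paper are not reproduced here.

```python
import numpy as np

def hartley_fmax(samples):
    """Hartley's statistic: max sample variance / min sample variance."""
    v = [np.var(s, ddof=1) for s in samples]
    return float(max(v) / min(v))

rng = np.random.default_rng(7)
equal = [rng.normal(0, 1, 30) for _ in range(4)]            # homogeneous variances
unequal = [rng.normal(0, s, 30) for s in (1, 1, 1, 3)]      # one inflated variance
f_equal = hartley_fmax(equal)
f_unequal = hartley_fmax(unequal)
```

For the homogeneity test one rejects when the ratio is large; for the equivalence test in the paper, one concludes equivalence when the ratio is small.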

20.
Using kernel density estimation, this paper describes the real income distribution and how it evolved over time in Italy. Data are cross-sectional samples from the population of Italian households during the period 1987–1998. A nonparametric test is applied to assess whether the observed changes in the distribution are statistically significant, while the presence of more than one mode in the distributions is investigated by a bootstrap test. Empirical results show that the Italian income distribution changed significantly over time, with a decreasing inequality pattern until 1991 and a subsequent rise. No marked income gains were perceived, while the real “losers” of the decade seem to be households in the middle-upper income range. Supported by the MURST project 98-13-45. We would like to thank Nicholas Longford for his valuable support and encouragement, two anonymous referees, the participants of the seminar at CEPS/INSTEAD in Luxembourg, and of the 40th SIS Conference in Florence, for helpful comments and suggestions. The usual disclaimers apply.
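Counting the modes of a kernel density estimate is the core ingredient of Silverman-type bootstrap multimodality tests like the one applied above. This is a sketch with an assumed Gaussian kernel and bandwidth, not the authors' exact procedure.

```python
import math
import numpy as np

def kde(x, data, h):
    """Gaussian kernel density estimate evaluated at the points x."""
    z = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * h * math.sqrt(2 * math.pi))

def n_modes(data, h, grid_size=512):
    """Count local maxima of the KDE on a grid for bandwidth h."""
    grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, grid_size)
    f = kde(grid, data, h)
    return int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))

rng = np.random.default_rng(8)
unimodal = rng.normal(0, 1, 400)
bimodal = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
m1 = n_modes(unimodal, h=0.8)
m2 = n_modes(bimodal, h=0.8)
```

A Silverman-style bootstrap test then asks how small the bandwidth must be before a second mode appears, and resamples to judge whether that critical bandwidth is unusually large.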


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号