Similar Literature
20 similar documents found (search time: 31 ms)
1.
This article considers the problem of testing for an explosive bubble in financial data in the presence of time-varying volatility. We propose a weighted least squares-based variant of the Phillips et al. test for explosive autoregressive behavior. We find that such an approach has appealing asymptotic power properties, with the potential to deliver substantially greater power than the established OLS-based approach for many volatility and bubble settings. Given that the OLS-based test can outperform the weighted least squares-based test for other volatility and bubble specifications, we also suggest a union of rejections procedure that succeeds in capturing the better power available from the two constituent tests for a given alternative. Our approach involves a nonparametric kernel-based volatility function estimator for computation of the weighted least squares-based statistic, together with a wild bootstrap procedure applied jointly to both individual tests, delivering a powerful testing procedure that is asymptotically size-robust to a wide range of time-varying volatility specifications.
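
Since the abstract centers on a right-tailed test for explosive autoregressive behavior, a minimal numpy sketch of a forward-expanding (sup) Dickey-Fuller recursion of the general Phillips et al. type is given below. The function names (`df_tstat`, `sup_df`), the minimum window fraction, and the toy series are illustrative assumptions; the paper's weighted least squares variant, kernel volatility estimator, wild bootstrap, and union of rejections procedure are not reproduced here.

```python
import numpy as np

def df_tstat(y):
    """OLS t-statistic on rho in  dy_t = mu + rho * y_{t-1} + e_t."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def sup_df(y, min_frac=0.1):
    """Right-tailed sup of DF t-statistics over forward-expanding samples."""
    T = len(y)
    r0 = max(int(np.floor(min_frac * T)), 10)
    return max(df_tstat(y[:t]) for t in range(r0, T + 1))

# Toy example: a random walk that turns mildly explosive over its last 40 observations.
rng = np.random.default_rng(0)
y = list(np.cumsum(rng.standard_normal(200)))
for _ in range(40):
    y.append(1.05 * y[-1] + rng.standard_normal())
print(sup_df(np.asarray(y)))  # large positive values point towards explosive behaviour
```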

2.
In a recent paper, Kwiatkowski et al. (1992) propose the so-called KPSS statistic for testing the null hypothesis of stationarity against the alternative of a unit root process. The statistic employs a spectral estimator which can be shown to diverge with increasing sample size when the alternative is true. Here, we suggest a modified spectral estimator which is shown to stabilize for moving average models. The resulting test statistic is shown to uniformly outperform the KPSS statistic in an MA(1) model. Furthermore, a two-step nonparametric correction procedure is suggested, giving a test statistic with asymptotic properties similar to those of the original KPSS statistic; in small samples, however, this correction performs better, especially in detecting large random walk components. This paper was written while the author was a post-doctoral fellow at the University of Amsterdam. The author would like to thank Peter Boswijk, Inge van den Doel, Noud van Giersbergen and Jan F. Kiviet for their help during that time. Moreover, I would like to thank an anonymous referee for a number of helpful comments.
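
For reference, a minimal sketch of the original KPSS level-stationarity statistic with a Bartlett-kernel long-run variance estimator is given below, written from the standard textbook formula. The bandwidth rule of thumb and the function name `kpss_level` are assumptions for illustration; the modified spectral estimator and the two-step correction proposed in the paper are not implemented.

```python
import numpy as np

def kpss_level(y, lags=None):
    """KPSS statistic for level stationarity: T^-2 * sum(S_t^2) / s^2(l)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    e = y - y.mean()                      # residuals from a mean-only regression
    S = np.cumsum(e)                      # partial sums
    if lags is None:
        lags = int(np.floor(4 * (T / 100.0) ** 0.25))   # common rule of thumb
    s2 = e @ e / T
    for l in range(1, lags + 1):
        w = 1.0 - l / (lags + 1.0)        # Bartlett weights
        s2 += 2.0 * w * (e[l:] @ e[:-l]) / T
    return (S @ S) / (T ** 2 * s2)

rng = np.random.default_rng(1)
print(kpss_level(rng.standard_normal(500)))              # stationary series: small value
print(kpss_level(np.cumsum(rng.standard_normal(500))))   # random walk: large value
```

The long-run variance estimate s^2(l) in the denominator is the spectral quantity whose behaviour under the unit root alternative motivates the modification discussed in the abstract.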

3.
In this paper, we examine the issue of detecting explosive behavior in economic and financial time series when an explosive episode is both ongoing at the end of the sample and of finite length. We propose a testing strategy based on a subsampling method in which a suitable test statistic is calculated on a finite number of end-of-sample observations, with a critical value obtained using subsample test statistics calculated on the remaining observations. This approach also has the practical advantage that, by virtue of how the critical values are obtained, it can deliver tests which are robust to, among other things, conditional heteroskedasticity and serial correlation in the driving shocks. We also explore modifications of the raw statistics to account for unconditional heteroskedasticity using studentization and a White-type correction. We evaluate the finite sample size and power properties of our proposed procedures and find that they offer promising levels of power, suggesting the possibility of earlier detection of end-of-sample bubble episodes compared to existing procedures.
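
A schematic numpy sketch of the subsampling idea described above follows: a statistic computed on the last m observations is compared with the empirical distribution of the same statistic over subsamples taken from the earlier observations. The end-of-sample statistic used here (standardized cumulated first differences) and the names `end_stat`, `end_of_sample_test`, and `m` are placeholders chosen for illustration, not the statistics studied in the paper.

```python
import numpy as np

def end_stat(x):
    """A simple end-of-sample statistic: standardised cumulated first differences."""
    d = np.diff(x)
    return d.sum() / (np.sqrt(len(d)) * d.std(ddof=1))

def end_of_sample_test(y, m=20, level=0.95):
    """Compare the statistic on the last m observations with a subsample critical value."""
    stat = end_stat(y[-m:])
    base = y[:-m]
    subs = np.array([end_stat(base[i:i + m]) for i in range(len(base) - m + 1)])
    crit = np.quantile(subs, level)
    return stat, crit, stat > crit

rng = np.random.default_rng(2)
y = np.cumsum(rng.standard_normal(300))                    # random walk history
burst = np.cumsum(0.5 * np.arange(1, 21) + rng.standard_normal(20))
y = np.concatenate([y, y[-1] + burst])                     # upward end-of-sample episode
print(end_of_sample_test(y, m=20))
```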

4.
The surveillance of multivariate processes has received growing attention during the last decade. Several generalizations of well-known methods such as Shewhart, CUSUM and EWMA charts have been proposed. Many of these multivariate procedures are based on a univariate summary statistic of the multivariate observations, usually the likelihood ratio statistic. In this paper we consider the surveillance of multivariate observation processes for a shift between two fully specified alternatives. The effect of the dimension reduction achieved by using likelihood ratio statistics is discussed in the context of sufficiency properties, and an example of the loss of efficiency when the univariate sufficient statistic is not used is given. Furthermore, a likelihood ratio method, the LR method, for constructing surveillance procedures is suggested for multivariate surveillance situations. It is shown to produce univariate surveillance procedures based on the sufficient likelihood ratios. As the LR procedure has several optimality properties in the univariate case, it is also used here as a benchmark for comparisons between multivariate surveillance procedures.

5.
This paper provides a means of accurately simulating explosive autoregressive processes and uses this method to analyze the distribution of the likelihood ratio test statistic for a unit root in an explosive second-order autoregressive process. While the standard Dickey–Fuller distribution is known to apply in this case, simulations of statistics in the explosive region are beset by the magnitude of the numbers involved, which causes numerical inaccuracies. This has previously constituted a bar to supporting asymptotic results by means of simulation and to analyzing the finite sample properties of tests in the explosive region.

7.
In survival analysis, it is often of interest to test whether or not two survival time distributions are equal, specifically in the presence of censored data. One very popular test statistic utilized in this testing procedure is the weighted logrank statistic. Much attention has been focused on finding flexible weight functions to use within the weighted logrank statistic, and we propose yet another. We demonstrate our weight function to be more stable than one of the most popular, which is given by Fleming and Harrington, by means of asymptotic normal tests, bootstrap tests and permutation tests performed on two datasets with a variety of characteristics.
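
For concreteness, a minimal numpy sketch of the weighted logrank statistic with Fleming-Harrington G(p, q) weights, the comparator mentioned in the abstract, is shown below. The proposed new weight function is not reproduced, and the toy data, the censoring mechanism, and the function name `weighted_logrank` are assumptions for illustration.

```python
import numpy as np

def weighted_logrank(time, event, group, p=1.0, q=1.0):
    """Weighted logrank Z-statistic with Fleming-Harrington G(p, q) weights."""
    time, event, group = (np.asarray(a) for a in (time, event, group))
    S = 1.0                                   # pooled Kaplan-Meier value S(t-), left-continuous
    num, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):     # distinct event times
        at_risk = time >= t
        Y, Y1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = S ** p * (1.0 - S) ** q           # Fleming-Harrington weight at S(t-)
        num += w * (d1 - Y1 * d / Y)          # observed minus expected in group 1
        if Y > 1:
            var += w ** 2 * d * (Y1 / Y) * (1 - Y1 / Y) * (Y - d) / (Y - 1)
        S *= 1.0 - d / Y                      # step the pooled KM estimate past t
    return num / np.sqrt(var)

# Toy usage: two exponential samples with independent exponential censoring.
rng = np.random.default_rng(3)
t_true = np.concatenate([rng.exponential(1.0, 60), rng.exponential(1.5, 60)])
group = np.r_[np.zeros(60, int), np.ones(60, int)]
c = rng.exponential(2.0, 120)
event = (t_true <= c).astype(int)
time = np.minimum(t_true, c)
print(weighted_logrank(time, event, group, p=0, q=1))   # compare |Z| with N(0,1) quantiles
```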

8.
In this paper we evaluate the performance of three methods for testing the existence of a unit root in a time series when the models under consideration in the null hypothesis do not display autocorrelation in the error term. In such cases, simple versions of the Dickey-Fuller test should be used as the most appropriate ones instead of the well-known augmented Dickey-Fuller or Phillips-Perron tests. Through Monte Carlo simulations we show that, apart from a few cases, tests of the existence of a unit root attain actual type I error and power very close to their nominal levels. Additionally, when the random walk null hypothesis is true, by gradually increasing the sample size we observe that p-values for the drift in the unrestricted model fluctuate at low levels with small variance, and the Durbin-Watson (DW) statistic approaches 2 in both the unrestricted and restricted models. If, however, the null hypothesis of a random walk is false, then as the sample grows the DW statistic in the restricted model starts to deviate from 2 while in the unrestricted model it continues to approach 2. It is also shown that the probability of not rejecting that the errors are uncorrelated, when they are indeed uncorrelated, is higher when the DW test is applied at the 1% nominal level of significance.
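
A small numpy sketch of the main ingredients described above follows: a simple (non-augmented) Dickey-Fuller regression with drift, together with the Durbin-Watson statistic computed from the residuals of the unrestricted and the restricted (drift-only) models. The data-generating processes and all names are illustrative assumptions, not the paper's exact Monte Carlo design.

```python
import numpy as np

def durbin_watson(e):
    """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2)."""
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

def df_with_dw(y):
    """Simple DF regression dy_t = a + rho*y_{t-1} + e_t; returns the t-statistic on rho
    and the DW statistics from the unrestricted and restricted (drift-only) models."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(dy)), ylag])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e_u = dy - X @ b
    s2 = e_u @ e_u / (len(dy) - 2)
    t_rho = b[1] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    e_r = dy - dy.mean()                      # residuals under the random-walk-with-drift null
    return t_rho, durbin_watson(e_u), durbin_watson(e_r)

rng = np.random.default_rng(4)
rw = np.cumsum(rng.standard_normal(1000))                 # true random walk
ar = np.empty(1000); ar[0] = 0.0
for t in range(1, 1000):
    ar[t] = 0.8 * ar[t - 1] + rng.standard_normal()       # stationary AR(1), null is false
print(df_with_dw(rw))   # DW near 2 in both models
print(df_with_dw(ar))   # restricted-model DW moves away from 2
```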

9.
The performance of Anderson's classification statistic based on a post-stratified random sample is examined. It is assumed that the training sample is a random sample from a stratified population consisting of two strata with unknown stratum weights. The sample is first segregated into the two strata by post-stratification. The unknown parameters for each of the two populations are then estimated and used in the construction of the plug-in discriminant. Under this procedure, it is shown that additional estimation of the stratum weight will not seriously affect the performance of Anderson's classification statistic. Furthermore, our discriminant enjoys a much higher efficiency than the procedure based on an unclassified sample from a mixture of normals investigated by Ganesalingam and McLachlan (1978).

10.
A statistical test procedure is proposed to check whether the parameters in the parametric component of the partially linear spatial autoregressive models satisfy certain linear constraint conditions, in which a residual-based bootstrap procedure is suggested to derive the p-value of the test. Some simulations are conducted to assess the performance of the test and the results show that the bootstrap approximation to the null distribution of the test statistic is valid and the test is of satisfactory power. Furthermore, a real-world example is given to demonstrate the application of the proposed test.

11.
Since the treatment effect of an experimental drug is generally not known at the onset of a clinical trial, it may be wise to allow for an adjustment of the sample size after an interim analysis of the unblinded data. Using a particular adaptive test statistic, a procedure is demonstrated for finding the optimal design. Both the timing of the interim analysis and the way the sample size is adjusted can influence the power of the resulting procedure. Even when the initial estimate of the treatment effect is wrong, the adaptive test statistic can yield a smaller average sample size than a standard test statistic with no interim look would require under a correct initial estimate of the effect.

12.
Clustering high-dimensional data is often a challenging task, both because of the computational burden of running any technique and because the difficulty of interpreting clusters generally increases with the data dimension. In this work, a method for finding low-dimensional representations of high-dimensional data is discussed, specifically conceived to preserve possible clusters in the data. It is based on the critical bandwidth, a nonparametric statistic for testing unimodality that is related to kernel density estimation. Some useful properties of this statistic are highlighted, and an adjustment that allows it to serve as a basis for reducing dimensionality is suggested. The method is illustrated with simulated and real data examples.
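
Because the method rests on the critical bandwidth, a small numpy sketch of that statistic is given below: the smallest Gaussian kernel bandwidth at which a kernel density estimate of the data has at most k modes, found here by grid-based mode counting and bisection. The grid size, the starting interval, and the function names are assumptions for illustration; the paper's dimension-reduction adjustment is not implemented.

```python
import numpy as np

def n_modes(x, h, grid_size=512):
    """Number of local maxima of a Gaussian KDE with bandwidth h, counted on a grid."""
    g = np.linspace(x.min() - 3 * h, x.max() + 3 * h, grid_size)
    dens = np.exp(-0.5 * ((g[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
    return int(np.sum((dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])))

def critical_bandwidth(x, k=1, tol=1e-3):
    """Smallest bandwidth at which the Gaussian KDE has at most k modes (bisection)."""
    lo, hi = tol, 2.0 * x.std(ddof=1) * len(x) ** (-0.2) * 10
    while n_modes(x, hi) > k:                 # make sure the upper end is smooth enough
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if n_modes(x, mid) <= k:
            hi = mid
        else:
            lo = mid
    return hi

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)])  # two clusters
print(critical_bandwidth(x, k=1))   # large: heavy smoothing needed to erase bimodality
print(critical_bandwidth(x, k=2))   # small: two modes appear readily
```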

13.
A general rank test procedure based on an underlying multinomial distribution is suggested for randomized block experiments with multifactor treatment combinations within each block. The Wald statistic for the multinomial is used to test hypotheses about the within-block rankings. This statistic is shown to be related to the one-sample Hotelling's T2 statistic, suggesting a method for computing the test statistic using standard statistical computer packages.
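
For reference, a short sketch of the familiar one-sample Hotelling T2 statistic to which the abstract relates the rank-based Wald statistic is shown below; it uses only numpy and scipy and does not reproduce the multinomial rank construction itself.

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, mu0):
    """One-sample Hotelling T^2 = n (xbar - mu0)' S^{-1} (xbar - mu0), with exact F p-value."""
    X = np.asarray(X, float)
    n, p = X.shape
    diff = X.mean(axis=0) - mu0
    S = np.cov(X, rowvar=False)               # sample covariance matrix
    t2 = n * diff @ np.linalg.solve(S, diff)
    f = (n - p) / (p * (n - 1)) * t2          # F(p, n - p) under multivariate normality
    return t2, stats.f.sf(f, p, n - p)

rng = np.random.default_rng(6)
X = rng.multivariate_normal([0.3, 0.0, 0.1], np.eye(3), size=40)
print(hotelling_t2(X, np.zeros(3)))
```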

14.
In this paper, estimation of the coefficients of a simultaneous linear partially explosive model of higher order with moving average errors is considered. It is shown that the above model can be decomposed into a purely explosive model and an autoregressive model. A two-stage estimation procedure is carried out to propose estimators for the partially explosive model. The asymptotic properties of these estimators are also studied.

15.
Han Meng et al., 《统计研究》 (Statistical Research), 2018, 35(6): 97-108
To identify structural breaks in the factor loading matrix of a dynamic factor model endogenously (including changes in the number of factors), this paper constructs a cumulative sum of squares statistic from the pseudo-factor series obtained by principal component estimation to test for structural breaks in the factor loading matrix, and further uses an iterated cumulative sum of squares algorithm to locate multiple break points. We find that the proposed test statistic is robust to misspecification of the number of factors, and that the test has good finite sample and asymptotic properties. In addition, an empirical analysis finds that the log-return series of manufacturing companies listed on the Shanghai A-share market share common factors subject to structural breaks.
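
A minimal numpy sketch of the basic centered cumulative sum of squares statistic (the Inclan-Tiao-type building block behind the iterated algorithm described above) applied to a single series is given below. Applying it to pseudo-factor series estimated by principal components and iterating it over segments, as the paper does, is not reproduced, and the names are illustrative.

```python
import numpy as np

def centered_cusum_of_squares(a):
    """Statistic sqrt(T/2) * max_k |C_k / C_T - k / T| and the location of the maximum,
    where C_k is the cumulative sum of squares up to observation k."""
    a = np.asarray(a, float)
    T = len(a)
    C = np.cumsum(a ** 2)
    D = C / C[-1] - np.arange(1, T + 1) / T
    k = int(np.argmax(np.abs(D)))
    return np.sqrt(T / 2.0) * np.abs(D[k]), k + 1

rng = np.random.default_rng(7)
x = np.concatenate([rng.standard_normal(300), 2.0 * rng.standard_normal(300)])  # variance break at t = 300
print(centered_cusum_of_squares(x))   # large statistic, break located near 300
```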

16.
On the basis of the outcome of a preliminary test of significance for the population correlation coefficient, it is decided whether the variance ratio or the sample correlation coefficient between u = (x+y)/2 and v = (x-y)/2 is to be used as the test statistic for testing the equality of variances. A method for determining the critical points for the preliminary test and the main test is suggested. The power of the test procedure is compared with those of standard tests.
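
A small sketch of the correlation-based branch of this procedure (a Pitman-Morgan-type statistic based on the correlation of u and v for paired observations) is given below; the preliminary test and the rule for switching to the variance ratio are not implemented, and the toy data are an assumption.

```python
import numpy as np
from scipy import stats

def pitman_morgan(x, y):
    """Test equality of variances of paired (x, y) via corr(u, v), u=(x+y)/2, v=(x-y)/2."""
    u, v = (x + y) / 2.0, (x - y) / 2.0
    r = np.corrcoef(u, v)[0, 1]               # cov(u, v) is proportional to var(x) - var(y)
    n = len(x)
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
    return r, 2 * stats.t.sf(abs(t), n - 2)   # correlation and two-sided p-value

rng = np.random.default_rng(8)
z = rng.standard_normal(100)
x = z + rng.standard_normal(100)              # variance 2
y = z + 0.5 * rng.standard_normal(100)        # variance 1.25
print(pitman_morgan(x, y))
```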

17.
Following a procedure applied to the Erlang-2 distribution in a recent paper, an adjusted Kolmogorov-Smirnov statistic and critical values are developed for the Erlang-3 and Erlang-4 cases using data from Monte Carlo simulations. The resulting test statistic is compact and easy to implement, and it is quite accurate for sample sizes as low as ten.
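
To make the setting concrete, a plain (unadjusted) Kolmogorov-Smirnov distance between a sample and an Erlang-3 CDF with a rate estimated from the data is sketched below. The adjusted statistic and the simulated critical values developed in the paper are not reproduced; estimating the rate from the sample is exactly why adjusted critical values are needed rather than the standard KS tables.

```python
import numpy as np
from math import factorial

def erlang_cdf(x, k, lam):
    """Erlang(k, lam) CDF: 1 - exp(-lam*x) * sum_{j<k} (lam*x)^j / j!."""
    x = np.asarray(x, float)
    s = sum((lam * x) ** j / factorial(j) for j in range(k))
    return 1.0 - np.exp(-lam * x) * s

def ks_statistic(x, cdf):
    """Two-sided Kolmogorov-Smirnov distance between the sample and a given CDF."""
    x = np.sort(x)
    n = len(x)
    F = cdf(x)
    d_plus = np.max(np.arange(1, n + 1) / n - F)
    d_minus = np.max(F - np.arange(0, n) / n)
    return max(d_plus, d_minus)

rng = np.random.default_rng(9)
sample = rng.gamma(shape=3, scale=1.0, size=25)              # Erlang-3 with rate 1
lam_hat = 3.0 / sample.mean()                                # rate estimated from the data
print(ks_statistic(sample, lambda x: erlang_cdf(x, 3, lam_hat)))
```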

18.
In this article, we study the varying coefficient partially nonlinear model with measurement errors in the nonparametric part. A local corrected profile nonlinear least-squares estimation procedure is proposed and the asymptotic properties of the resulting estimators are established. Further, a generalized likelihood ratio (GLR) statistic is proposed to test whether the varying coefficients are constant. The asymptotic null distribution of the statistic is obtained and a residual-based bootstrap procedure is employed to compute the p-value of the statistic. Some simulations are conducted to evaluate the performance of the proposed methods. The results show that the estimation and testing procedures work well in finite samples.

19.
Structural inference as a method of statistical analysis seems to have escaped the attention of many statisticians. This paper focuses on Fraser’s necessary analysis of structural models as a tool to derive classical distribution results.

A structural model analyzed by Zacks (1971) by means of conventional statistical methods and fiducial theory is re-examined by the structural method. It is shown that results obtained by the former methods come as easy consequences of the latter analysis of the structural model. In the process we also simplify Zacks' methods of obtaining a minimum risk equivariant estimator of a parameter of the model.

A theorem of Basu (1955), often used to prove independence of a complete sufficient statistic and an ancillary statistic, is also re-examined in the light of the structural method. It is found that for structural models more can be achieved by necessary analysis without the use of Basu's theorem. Bain's (1972) application of Basu's theorem to constructing confidence intervals for Weibull reliability is given as an example.

20.
An imputation procedure is a procedure by which each missing value in a data set is replaced (imputed) by an observed value using a predetermined resampling procedure. The distribution of a statistic computed from a data set consisting of observed and imputed values, called a completed data set, is affected by the imputation procedure used. In a Monte Carlo experiment, three imputation procedures are compared with respect to the empirical behavior of the goodness-of-fit chi-square statistic computed from a completed data set. The results show that each imputation procedure affects the distribution of the goodness-of-fit chi-square statistic in a different manner. However, when the empirical behavior of the goodness-of-fit chi-square statistic is compared to its appropriate asymptotic distribution, there are no substantial differences between these imputation procedures.
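
A toy numpy sketch in the spirit of the experiment described above: categorical data with values missing at random are completed by a hot-deck-style random draw from the observed values, and the Pearson goodness-of-fit chi-square statistic is computed from the completed data set. The missingness rate, the category probabilities, and the imputation rule are invented for illustration and are not the three procedures compared in the paper.

```python
import numpy as np

def chisq_gof(counts, probs):
    """Pearson goodness-of-fit statistic sum((O - E)^2 / E)."""
    counts = np.asarray(counts, float)
    expected = counts.sum() * np.asarray(probs, float)
    return np.sum((counts - expected) ** 2 / expected)

rng = np.random.default_rng(10)
k, n, probs = 4, 400, np.full(4, 0.25)
data = rng.integers(0, k, size=n).astype(float)
data[rng.random(n) < 0.15] = np.nan                         # about 15% missing at random

observed = data[~np.isnan(data)]
# Hot-deck style imputation: each missing value replaced by a randomly drawn observed value.
imputed = np.where(np.isnan(data), rng.choice(observed, size=n), data)
counts = np.bincount(imputed.astype(int), minlength=k)
print(chisq_gof(counts, probs))                             # compare with chi2(k - 1) quantiles
```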
