Similar Documents
20 similar documents found (search time: 750 ms)
1.
In this paper, we develop an info-metric framework for testing hypotheses about structural instability in nonlinear, dynamic models estimated from the information in population moment conditions. Our methods are designed to distinguish between three states of the world: (i) the model is structurally stable in the sense that the population moment condition holds at the same parameter value throughout the sample; (ii) the model parameters change at some point in the sample but otherwise the model is correctly specified; and (iii) the model exhibits more general forms of instability than a single shift in the parameters. An advantage of the info-metric approach is that the null hypotheses concerned are formulated in terms of distances between various choices of probability measures constrained to satisfy (i) and (ii), and the empirical measure of the sample. Under the alternative hypotheses considered, the model is assumed to exhibit structural instability at a single point in the sample, referred to as the break point; our analysis allows for the break point to be either fixed a priori or treated as occurring at some unknown point within a certain fraction of the sample. We propose various test statistics that can be thought of as sample analogs of the distances described above, and derive their limiting distributions under the appropriate null hypothesis. The limiting distributions of our statistics are nonstandard but coincide with various distributions that arise in the literature on structural instability testing within the Generalized Method of Moments framework. A small simulation study illustrates the finite sample performance of our test statistics.

2.
Many spatial data, such as those in climatology or environmental monitoring, are collected over irregular geographical locations. Furthermore, it is common to have multivariate observations at each location. We propose a method of segmentation of a region of interest based on such data that can be carried out in two steps: (1) clustering or classification of irregularly sampled points and (2) segmentation of the region based on the classified points.

We develop a spatially constrained clustering algorithm for segmentation of the sample points by incorporating a geographical constraint into the standard clustering methods. Both hierarchical and nonhierarchical methods are considered. The latter is a modification of the seeded region growing method known in image analysis. Both algorithms work on a suitable neighbourhood structure, which can, for example, be defined by the Delaunay triangulation of the sample points. The number of clusters is estimated by testing the significance of successive changes in the within-cluster sum of squares relative to a null permutation distribution. The methodology is validated on simulated data and used to construct a climatology map of Ireland based on meteorological data of daily rainfall records from 1294 stations over a period of 37 years.

3.
A stepwise algorithm for selecting categories for the chi-squared goodness-of-fit test with completely specified continuous null and alternative distributions is described in this paper. The procedure's starting point is an initial partitioning of the sample space into a large number of categories. A second partition with one fewer category is constructed by combining two categories of the original partition. The procedure continues until there are only two categories; the partition in the sequence with the highest estimated power is the one chosen. For illustrative purposes, the performance of the algorithm is evaluated for several hypothesis tests of the form H0: normal distribution vs. H1: a specific mixed normal distribution. For each test considered, the partition identified by the algorithm was compared to several equiprobable partitions, including the equiprobable partition with the highest estimated power. In all cases but one, the algorithm identified a partition with higher estimated power than the best equiprobable partition. Applications of the procedure are discussed.
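The merging loop described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the power estimate here uses a normal approximation to the noncentral chi-square distribution, and `chi2_crit` uses the Wilson-Hilferty approximation to the central chi-square 95% quantile, both of which are my assumptions about how "estimated power" might be computed.

```python
import math

Z95 = 1.6449  # upper 5% point of the standard normal distribution

def chi2_crit(df, z=Z95):
    # Wilson-Hilferty approximation to the central chi-square 95% quantile
    a = 2.0 / (9.0 * df)
    return df * (1.0 - a + z * math.sqrt(a)) ** 3

def approx_power(p0, p1, n):
    # normal approximation to the power of the chi-squared GOF test:
    # noncentrality ncp = n * sum((p1 - p0)^2 / p0), df = cells - 1
    df = len(p0) - 1
    ncp = n * sum((b - a) ** 2 / a for a, b in zip(p0, p1))
    crit = chi2_crit(df)
    mean, var = df + ncp, 2.0 * (df + 2.0 * ncp)
    return 0.5 * math.erfc((crit - mean) / math.sqrt(2.0 * var))

def merged(p, i):
    # combine adjacent cells i and i+1
    return p[:i] + [p[i] + p[i + 1]] + p[i + 2:]

def stepwise_partition(p0, p1, n):
    # start from a fine partition and repeatedly merge the adjacent pair
    # that yields the highest approximate power; return the best partition
    # seen anywhere in the merging sequence
    best = (approx_power(p0, p1, n), p0, p1)
    while len(p0) > 2:
        power, p0, p1 = max(
            (approx_power(merged(p0, i), merged(p1, i), n),
             merged(p0, i), merged(p1, i))
            for i in range(len(p0) - 1))
        if power > best[0]:
            best = (power, p0, p1)
    return best
```

Given cell probabilities under the null (`p0`) and alternative (`p1`) for an initial fine partition, the function traces the whole merging sequence and keeps the partition with the largest approximate power.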

4.
Many neuroscience experiments record sequential trajectories where each trajectory consists of oscillations and fluctuations around zero. Such trajectories can be viewed as zero-mean functional data. When there are structural breaks in higher-order moments, it is not always easy to spot these by mere visual inspection. Motivated by this challenging problem in brain signal analysis, we propose a detection and testing procedure to find the change point in functional covariance. The detection procedure is based on the cumulative sum statistics (CUSUM). The fully functional testing procedure relies on a null distribution which depends on infinitely many unknown parameters, though in practice only a finite number of these parameters can be included for the hypothesis test of the existence of change point. This paper provides some theoretical insights on the influence of the number of parameters. Meanwhile, the asymptotic properties of the estimated change point are developed. The effectiveness of the proposed method is numerically validated in simulation studies and an application to investigate changes in rat brain signals following an experimentally-induced stroke.  相似文献   
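As a toy illustration of the CUSUM idea (for a scalar series and a variance change, rather than the paper's functional-covariance setting), the change point can be located by scanning partial sums of squares:

```python
def cusum_changepoint(x):
    # scan k and maximize |S_k - (k/n) * S_n|, where S_k is the running
    # sum of squares; the argmax estimates the variance change point
    n = len(x)
    sq = [v * v for v in x]
    total = sum(sq)
    best_k, best_val, run = 1, -1.0, 0.0
    for k in range(1, n):
        run += sq[k - 1]
        val = abs(run - k * total / n)
        if val > best_val:
            best_k, best_val = k, val
    return best_k, best_val / n ** 0.5  # estimate and scaled CUSUM statistic
```

On a series whose variance jumps at index 100, the scan recovers that index; calibrating the statistic against a null distribution is the harder part the paper addresses.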

5.
Several testing procedures are proposed that can detect change-points in the error distribution of non-parametric regression models. Different settings are considered where the change-point either occurs at some time point or at some value of the covariate. Fixed as well as random covariates are considered. Weak convergence of the suggested difference of sequential empirical processes based on non-parametrically estimated residuals to a Gaussian process is proved under the null hypothesis of no change-point. In the case of testing for a change in the error distribution that occurs with increasing time in a model with random covariates, the test statistic is asymptotically distribution free and the asymptotic quantiles can be used for the test. This special test statistic can also detect a change in the regression function. In all other cases the asymptotic distribution depends on unknown features of the data-generating process and a bootstrap procedure is proposed in these cases. The small sample performance of the proposed tests is investigated by means of a simulation study and the tests are applied to a data example.

6.
This paper studies well-known tests by Kim et al. (J Econom 109:389–392, 2002) and Busetti and Taylor (J Econom 123:33–66, 2004) for the null hypothesis of short memory against a change to nonstationarity, I(1). The potential break point is not assumed to be known but is estimated from the data. First, we show that the tests are also applicable to a change from I(0) to a fractional order of integration I(d) with d > 0 (long memory), in that the tests are consistent. The rates of divergence of the test statistics are derived as functions of the sample size T and d. Second, we compare their finite sample power experimentally. Third, we consider break point estimation for a change from I(0) to I(d) for finite samples in computer simulations. It turns out that the estimators proposed for the integer case (d = 1) are practically reliable only if d is close enough to 1.
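A minimal sketch of a ratio-type statistic in this spirit: mean squared partial sums of the demeaned post-break segment over the same quantity for the pre-break segment, which diverges when persistence appears after the break. The exact demeaning and normalization vary between the Kim-type and Busetti-Taylor statistics; this is an illustrative variant, not either paper's formula.

```python
def mean_squared_partial_sums(seg):
    # (1/m^2) * sum of squared partial sums of the demeaned segment
    m = len(seg)
    mean = sum(seg) / m
    acc, total = 0.0, 0.0
    for v in seg:
        acc += v - mean
        total += acc * acc
    return total / (m * m)

def ratio_statistic(y, tau):
    # post-break over pre-break mean squared partial sums; large values
    # indicate a change from I(0) toward persistence after index tau
    return mean_squared_partial_sums(y[tau:]) / mean_squared_partial_sums(y[:tau])
```

For a series that is noise-like before `tau` and trending afterward, the ratio is huge; reversing the series sends it toward zero.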

7.
Change point monitoring for distributional changes in time-series models is an important issue. In this article, we propose two monitoring procedures to detect distributional changes of squared residuals in GARCH models. The asymptotic properties of our monitoring statistics are derived under both the null of no change in distribution and the alternative of a change in distribution. The finite sample properties are investigated in a simulation study.

8.
We consider a nonparametric autoregression model under conditional heteroscedasticity with the aim to test whether the innovation distribution changes in time. To this end, we develop an asymptotic expansion for the sequential empirical process of nonparametrically estimated innovations (residuals). We suggest a Kolmogorov–Smirnov statistic based on the difference of the estimated innovation distributions built from the first ⌊ns⌋ and the last n − ⌊ns⌋ residuals, respectively (0 ≤ s ≤ 1). Weak convergence of the underlying stochastic process to a Gaussian process is proved under the null hypothesis of no change point. The result implies that the test is asymptotically distribution-free. Consistency against fixed alternatives is shown. The small sample performance of the proposed test is investigated in a simulation study and the test is applied to a data example.
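The split-and-compare construction can be imitated for a plain observed sequence as follows (estimating the nonparametric residuals themselves is omitted; feeding in raw values instead of residuals is the simplifying assumption here):

```python
import bisect

def ks_two_sample(a, b):
    # two-sample Kolmogorov-Smirnov distance between empirical CDFs
    sa, sb = sorted(a), sorted(b)
    d = 0.0
    for x in set(a) | set(b):
        fa = bisect.bisect_right(sa, x) / len(sa)
        fb = bisect.bisect_right(sb, x) / len(sb)
        d = max(d, abs(fa - fb))
    return d

def max_ks_over_splits(res, grid=None):
    # largest KS distance between the first floor(n*s) and the remaining
    # n - floor(n*s) residuals, over a grid of split fractions s
    n = len(res)
    grid = grid or [i / 10 for i in range(1, 10)]
    best_s, best_d = None, -1.0
    for s in grid:
        k = int(n * s)
        if 0 < k < n:
            d = ks_two_sample(res[:k], res[k:])
            if d > best_d:
                best_s, best_d = s, d
    return best_s, best_d
```

When the distribution shifts exactly at the midpoint, the scan attains its maximum KS distance of 1 at s = 0.5.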

9.
A two-stage procedure is studied for estimating changes in the parameters of the multi-parameter exponential family, given a sample X1, …, Xn. The first step is a likelihood ratio test of the hypothesis H0 of no change. Upon rejection of this hypothesis, the change point index and pre- and post-change parameters are estimated by maximum likelihood. The asymptotic (n → ∞) distribution of the log-likelihood ratio statistic is obtained under both H0 and local alternatives. The maximum likelihood estimators of the pre- and post-change parameters are shown to be asymptotically jointly normal. The distribution of the change point estimate is obtained under local alternatives. Performance of the procedure for moderate samples is studied by Monte Carlo methods.
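For the simplest exponential-family member, a normal mean with known unit variance, the two-stage idea reduces to a profile likelihood-ratio scan. This sketch covers only that special case, not the general multi-parameter procedure:

```python
def mean_change_scan(x):
    # profile 2*log-likelihood-ratio for a single mean change in N(mu, 1) data:
    # 2 log LR(k) = k*(m1 - m)^2 + (n - k)*(m2 - m)^2, maximized over k
    n = len(x)
    total = sum(x)
    m = total / n
    best = (0.0, None, None, None)
    run = 0.0
    for k in range(1, n):
        run += x[k - 1]
        m1, m2 = run / k, (total - run) / (n - k)
        lr = k * (m1 - m) ** 2 + (n - k) * (m2 - m) ** 2
        if lr > best[0]:
            best = (lr, k, m1, m2)
    return best  # (2 log LR, change index, pre-change mean, post-change mean)
```

Comparing the maximized 2 log LR against the appropriate null quantiles gives the first-stage test; the argmax and segment means are the second-stage estimates.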

10.
A two-step method is proposed for evaluating the bootstrap null distribution function of some useful test statistics appropriate for two-sample and multi-sample comparisons. In the first step, the characteristic function of the bootstrap null distribution is determined by recursive equations; in the second, a numerical inversion by the Fast Fourier Transform is performed to evaluate the null distribution function. A simulation experiment is performed to show how computer timings increase with the pooled sample size.

11.
Goodness-of-fit tests for the innovation distribution in GARCH models based on measuring deviations between the empirical characteristic function of the residuals and the characteristic function under the null hypothesis have been proposed in the literature. The asymptotic distributions of these test statistics depend on unknown quantities, so their null distributions are usually estimated through parametric bootstrap (PB). Although easy to implement, the PB can become very computationally expensive for large sample sizes, which is typically the case in applications of these models. This work proposes to approximate the null distribution through a weighted bootstrap. The procedure is studied both theoretically and numerically. Its asymptotic properties are similar to those of the PB, but, from a computational point of view, it is more efficient.

12.
A truncated sequential sign test for location shift is studied when the null or target location has been estimated from a prior, fixed sample. If the randomness of the target is ignored, then the test is shown to be strongly anticonservative, the degree being proportional to the ratio of the truncation point to the fixed sample size. The test is distribution-free under the hypothesis of no shift, enabling exact Type I errors and null expected sample sizes to be calculated and compared to a modified Brownian motion approximation. A Monte Carlo power study shows that the test compares favorably with the test against a known target. An abbreviated table of critical values is given.
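A bare-bones version of such a test (ignoring the estimated-target refinement that the paper actually studies) runs a random walk of signs and stops at a boundary or at the truncation point:

```python
def sequential_sign_test(xs, target, bound, truncation):
    # random walk of signs relative to the target; reject as soon as the
    # walk hits +/- bound, accept if the truncation point is reached first
    # (observations equal to the target count as negative, for simplicity)
    walk = 0
    for i, x in enumerate(xs[:truncation], start=1):
        walk += 1 if x > target else -1
        if abs(walk) >= bound:
            return "reject", i
    return "accept", min(len(xs), truncation)
```

The paper's point is that when `target` is itself an estimate, treating it as fixed makes this boundary-crossing rule reject too often.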

13.
Tests for the cointegrating rank of a vector autoregressive process are considered that allow for possible exogenous shifts in the mean of the data-generation process. The break points are assumed to be known a priori. It is proposed to estimate and remove the deterministic terms, such as the mean, a linear trend term, and a shift, in a first step. Then systems cointegration tests are applied to the adjusted series. The resulting tests are shown to have known limiting null distributions that are free of nuisance parameters and do not depend on the break point. The tests are applied to analyze the number of cointegrating relations in two German money-demand systems.

14.
Statistical agencies make changes to the data collection methodology of their surveys to improve the quality of the data collected or to improve the efficiency with which they are collected. For reasons of cost it may not be possible to estimate the effect of such a change on survey estimates or response rates reliably without conducting an experiment embedded in the survey, which involves enumerating some respondents using the new method and others using the existing method. Embedded experiments are often designed for repeated and overlapping surveys; however, previous methods use sample data from only one occasion. The paper focuses on estimating the effect of a methodological change on estimates in the case of repeated surveys with overlapping samples from several occasions. Efficient design of an embedded experiment that covers more than one time point is also discussed. All inference is unbiased over an assumed measurement model, the experimental design and the complex sample design. Other benefits of the proposed approach include the following: it exploits the correlation between the samples on each occasion to improve estimates of treatment effects; treatment effects are allowed to vary over time; it is robust against incorrectly rejecting the null hypothesis of no treatment effect; and it allows a wide set of alternative experimental designs. The methodology is applied to the Australian Labour Force Survey to measure the effect of replacing pen-and-paper interviewing with computer-assisted interviewing. This application considered alternative experimental designs in terms of their statistical efficiency and their risks to maintaining a consistent series. The proposed approach is significantly more efficient than using only one month of sample data in estimation.

15.
The limiting distribution of the log-likelihood-ratio statistic for testing the number of components in finite mixture models can be very complex. We propose two alternative methods. One method is generalized from a locally most powerful test. The test statistic is asymptotically normal, but its asymptotic variance depends on the true null distribution. The other method is to use a bootstrap log-likelihood-ratio statistic, which has a uniform limiting distribution on [0,1]. When tested against local alternatives, both methods have the same power asymptotically. Simulation results indicate that the asymptotic results become applicable when the sample size reaches 200 for the bootstrap log-likelihood-ratio test, but the generalized locally most powerful test needs larger sample sizes. In addition, the asymptotic variance of the locally most powerful test statistic must be estimated from the data. The bootstrap method avoids this problem but needs more computational effort. The user may choose the bootstrap method and let the computer do the extra work, or choose the locally most powerful test and spend the time deriving the asymptotic variance for the given model.

16.
Two types of state-switching models for U.S. real output have been proposed: models that switch randomly between states and models that switch states deterministically, as in the threshold autoregressive model of Potter. These models have been justified primarily by how well they fit the sample data, yielding statistically significant estimates of the model coefficients. Here we propose a new approach to the evaluation of an estimated nonlinear time series model that complements existing methods based on in-sample fit or on out-of-sample forecasting. In this new approach, a battery of distinct nonlinearity tests is applied to the sample data, resulting in a set of p-values for rejecting the null hypothesis of a linear generating mechanism. This set of p-values is taken to be a “stylized fact” characterizing the nonlinear serial dependence in the generating mechanism of the time series. The effectiveness of an estimated nonlinear model for this time series is then evaluated in terms of the congruence between this stylized fact and a set of nonlinearity test results obtained from data simulated using the estimated model. In particular, we derive a portmanteau statistic based on this set of nonlinearity test p-values that allows us to test the proposition that a given model adequately captures the nonlinear serial dependence in the sample data. We apply the method to several estimated state-switching models of U.S. real output.
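The paper's portmanteau statistic measures congruence between the sample's p-value profile and profiles from simulated data. As a generic stand-in, Fisher's method shows one standard way to pool a battery of p-values into a single statistic; this is explicitly not the authors' statistic, just the classical pooling device:

```python
import math

def fisher_portmanteau(pvalues):
    # Fisher's method: -2 * sum(log p_i) is chi-squared with 2m degrees of
    # freedom when the m p-values are independent and uniform under the null
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    return stat, 2 * len(pvalues)
```

Small p-values from several nonlinearity tests compound into a large combined statistic.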

17.
It is generally assumed that the likelihood ratio statistic for testing the null hypothesis that data arise from a homoscedastic normal mixture distribution versus the alternative hypothesis that data arise from a heteroscedastic normal mixture distribution has an asymptotic χ² reference distribution, with degrees of freedom equal to the difference in the number of parameters estimated under the alternative and null models, under some regularity conditions. When the restrictions suggested by Hathaway (Ann. Stat. 13:795–800, 1985) are imposed on the component variances to ensure that the likelihood is bounded under the alternative, simulations show that the χ² reference distribution gives a reasonable approximation for the likelihood ratio test only when the sample size is 2000 or more and the mixture components are well separated. For small and medium sample sizes, parametric bootstrap tests appear to work well for determining whether data arise from a normal mixture with equal variances or a normal mixture with unequal variances.

18.
In this article, we are concerned with whether the nonparametric functions from two partial linear models are parallel, and we propose a test statistic to check the difference between the two functions. The unknown constant α is estimated by the method of moments under the null model. The nonparametric functions under both the null and full models are estimated by the local linear method. The asymptotic properties of the parametric and nonparametric components are derived. The test statistic under the null hypothesis is calculated and shown to be asymptotically normal.

19.
Kleinbaum (1973) developed a generalized growth curve model for analyzing incomplete longitudinal data. In this paper the small sample properties of several related test statistics are investigated via Monte Carlo techniques. The covariance matrix is estimated by each of three non-iterative methods. The null and non-null distributions of these test statistics are examined.

20.
In this paper, bootstrap detection and ratio estimation are proposed to analyze mean changes in heavy-tailed distributions. First, the test statistic is constructed in ratio form from the CUSUM process. Then, the asymptotic distribution of the test statistic is obtained and the consistency of the test is proved. To solve the problem that the null distribution of the test statistic contains an unknown tail index, we present a bootstrap approximation method to determine the critical values of the null distribution. We also discuss how to estimate the change point based on the ratio method. The consistency and rate of convergence of the change-point estimator are established. Finally, the excellent performance of our method is demonstrated through simulations using artificial and real data sets; in particular, the bootstrap test outperforms an existing alternative in these simulations.
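A self-normalized sketch of the ratio idea, with a resampling loop for the critical value. This is an illustrative variant under my own assumptions: the paper's statistic and bootstrap scheme differ in detail, and dividing the maximal CUSUM deviation by the sum of absolute deviations is just one way to make an unknown common scale cancel.

```python
import random

def ratio_cusum(x):
    # self-normalized CUSUM: max partial-sum deviation divided by the sum
    # of absolute deviations, so a common heavy-tailed scale cancels
    n = len(x)
    m = sum(x) / n
    acc, cmax = 0.0, 0.0
    for v in x:
        acc += v - m
        cmax = max(cmax, abs(acc))
    denom = sum(abs(v - m) for v in x)
    return cmax / (denom + 1e-12)  # epsilon guards a constant sample

def bootstrap_critical(x, alpha=0.1, B=200, seed=1):
    # approximate the null critical value by resampling the centered
    # observations with replacement and recomputing the statistic
    rng = random.Random(seed)
    m = sum(x) / len(x)
    centered = [v - m for v in x]
    stats = sorted(ratio_cusum([rng.choice(centered) for _ in x])
                   for _ in range(B))
    return stats[int((1.0 - alpha) * B) - 1]
```

A series with a large mid-sample mean shift yields a statistic well above the bootstrap critical value, while a stable series stays below it.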
