Similar Articles
20 similar articles found (search time: 15 ms)
1.
Dimension reduction in regression is an efficient method of overcoming the curse of dimensionality in non-parametric regression. Motivated by recent developments for dimension reduction in time series, this paper performs an empirical extension of the central mean subspace in time series to a single-input transfer function model. Here, we use the central mean subspace as a dimension-reduction tool for bivariate time series in the case when the dimension and lag are known, and estimate the central mean subspace through the Nadaraya–Watson kernel smoother. Furthermore, we develop a data-dependent approach based on a modified Schwarz Bayesian criterion to estimate the unknown dimension and lag. Finally, we show that the approach works well for bivariate time series using an expository demonstration, two simulations, and a real data analysis of the El Niño and fish population series.
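The Nadaraya–Watson kernel smoother used above for estimating the central mean subspace is straightforward to sketch. The following is a minimal illustration only, not the paper's estimator: the function name, the Gaussian kernel, the bandwidth, and the toy bivariate data are all assumptions.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Nadaraya-Watson kernel estimate of E[y | x] with a Gaussian kernel."""
    # Scaled distances between every evaluation point and every training point
    u = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * u ** 2)             # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)  # locally weighted average

# Toy bivariate pair: y depends nonlinearly on x, plus small noise
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.sin(x) + 0.1 * rng.standard_normal(500)

grid = np.linspace(-1.5, 1.5, 7)
fit = nadaraya_watson(x, y, grid, bandwidth=0.3)
```

The bandwidth here is fixed by hand; in practice it would be chosen data-dependently, as the abstract's modified Schwarz Bayesian criterion does for the dimension and lag.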

2.
We consider the problem of testing time series linearity. Existing time domain and spectral domain tests are discussed. A new approach relying on spectral domain properties of a time series under the null hypothesis of linearity is suggested. Under linearity, the normalized bispectral density function Z is a constant. Under the null hypothesis of linearity, properly constructed estimators of 2|Z|² have a non-central chi-squared distribution with two degrees of freedom and constant non-centrality parameter 2|Z|². If the null hypothesis is false, the non-centrality parameter is non-constant. This suggests that goodness-of-fit tests might be effective in diagnosing non-linearity. Several approaches are introduced.

3.
Various nonparametric approaches for Bayesian spectral density estimation of stationary time series have been suggested in the literature, mostly based on the Whittle likelihood approximation. A generalization of this approximation, involving a nonparametric correction of a parametric likelihood, has been proposed, with a proof of posterior consistency for spectral density estimation in combination with the Bernstein–Dirichlet process prior for Gaussian time series. In this article, we extend the posterior consistency result to non-Gaussian time series by employing a general consistency theorem for dependent data and misspecified models. As a special case, posterior consistency for the spectral density under the Whittle likelihood is also extended to non-Gaussian time series. Small-sample properties of this approach are illustrated with several examples of non-Gaussian time series.

4.
5.
New measures of skewness for real-valued random variables are proposed. The measures are based on a functional representation of real-valued random variables. Specifically, the expected value of the transformed random variable can be used to characterize the distribution of the original variable. Firstly, estimators of the proposed skewness measures are analyzed. Secondly, asymptotic tests for symmetry are developed. The tests are consistent for both discrete and continuous distributions. Bootstrap versions improving the empirical results for moderate and small samples are provided. Some simulations illustrate the performance of the tests in comparison to other methods. The results show that our procedures are competitive and have some practical advantages.

6.
The circulant embedding method for generating statistically exact simulations of time series from certain Gaussian distributed stationary processes is attractive because of its advantage in computational speed over a competitive method based upon the modified Cholesky decomposition. We demonstrate that the circulant embedding method can be used to generate simulations from stationary processes whose spectral density functions are dictated by a number of popular nonparametric estimators, including all direct spectral estimators (a special case being the periodogram), certain lag window spectral estimators, all forms of Welch's overlapped segment averaging spectral estimator and all basic multitaper spectral estimators. One application for this technique is to generate time series for bootstrapping various statistics. When used with bootstrapping, our proposed technique avoids some – but not all – of the pitfalls of previously proposed frequency domain methods for simulating time series.
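The basic circulant embedding recipe can be sketched as follows. This is a minimal illustration under the assumption that the embedded covariance sequence is nonnegative definite; the AR(1) autocovariance used here is chosen for demonstration, whereas the paper drives the method with nonparametric spectral estimators.

```python
import numpy as np

def circulant_embedding_sim(acvs, rng):
    """Draw one exact Gaussian series with autocovariance sequence `acvs`.

    `acvs[0..n-1]` is embedded in a circulant sequence of length 2n - 2;
    the method is exact when the FFT of that sequence is nonnegative.
    """
    n = len(acvs)
    circ = np.concatenate([acvs, acvs[-2:0:-1]])   # circular embedding
    lam = np.fft.fft(circ).real                    # eigenvalues of the circulant
    if lam.min() < -1e-8 * lam.max():
        raise ValueError("embedding is not nonnegative definite")
    lam = np.clip(lam, 0.0, None)                  # zero out tiny negatives
    m = len(circ)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    w = np.fft.fft(np.sqrt(lam / m) * z)
    return w.real[:n]                              # keep the first n values

# AR(1) autocovariance: gamma(h) = phi^h / (1 - phi^2)
rng = np.random.default_rng(1)
phi = 0.6
acvs = phi ** np.arange(256) / (1 - phi ** 2)
x = circulant_embedding_sim(acvs, rng)
```

Each call costs two FFTs of length 2n - 2, which is the source of the speed advantage over Cholesky-based simulation.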

7.
Multivariate (or, interchangeably, multichannel) autoregressive (MCAR) modeling of stationary and nonstationary time series data is achieved one channel at a time, using only scalar computations on instantaneous data. The one-channel-at-a-time modeling is achieved via an instantaneous-response multichannel autoregressive model with orthogonal innovations variance. Conventional MCAR models are expressible as linear algebraic transformations of the instantaneous-response orthogonal-innovations models. By modeling multichannel time series one channel at a time, the problems of modeling multichannel time series are reduced to problems in the modeling of scalar autoregressive time series. Three longstanding time series modeling problems are addressed using this paradigm: achieving a relatively parsimonious MCAR representation, spectral estimation for multichannel stationary time series, and the modeling of nonstationary covariance time series.

8.
A procedure for simultaneously testing the parametric forms of the conditional mean and the conditional variance functions of a real-valued heteroscedastic time series model is proposed. The Wald test statistic is based on a vector whose components are suitably normalized sums of some weighted residual series. The test is consistent under some fixed alternatives. The local power under two sequences of local alternatives is studied. A LAN property for the parametric model of interest is also established. The experiments conducted show that the test performs well on the examples tested.

9.
We propose a general bootstrap procedure to approximate the null distribution of non-parametric frequency domain tests about the spectral density matrix of a multivariate time series. Under a set of easy-to-verify conditions, we establish asymptotic validity of the proposed bootstrap procedure. We apply a version of this procedure, together with a new statistic, to test the hypothesis that the spectral densities of not necessarily independent time series are equal. The proposed test statistic is based on an L2-distance between the non-parametrically estimated individual spectral densities and an overall, 'pooled' spectral density, the latter being obtained by using the whole set of m time series considered. The effects of the dependence between the time series on the power behaviour of the test are investigated. Some simulations are presented and a real-life data example is discussed.

10.
This paper studies cyclic long-memory processes with Gegenbauer-type spectral densities. For a semiparametric statistical model, new simultaneous estimates for singularity location and long-memory parameters are proposed. This generalized filtered method-of-moments approach is based on general filter transforms that include wavelet transformations as a particular case. It is proved that the estimates are almost surely convergent to the true values of the parameters. Solutions of the estimation equations are studied, and adjusted statistics are proposed. Monte Carlo simulation results are presented to confirm the theoretical findings.

11.
We propose tests for hypotheses on the parameters of the deterministic trend function of a univariate time series. The tests do not require knowledge of the form of serial correlation in the data, and they are robust to strong serial correlation. The tests have the correct size asymptotically even when the data contain a unit root. The tests that we analyze are standard heteroscedasticity-autocorrelation robust tests based on nonparametric kernel variance estimators. We analyze these tests using the fixed-b asymptotic framework recently proposed by Kiefer and Vogelsang. This analysis allows us to study the power properties of the tests with regard to bandwidth and kernel choices. Our analysis shows that among popular kernels, specific kernel and bandwidth choices deliver tests with maximal power within a specific class of tests. Based on the theoretical results, we propose a data-dependent bandwidth rule that maximizes integrated power. Our recommended test is shown to have power that dominates a related test proposed by Vogelsang. We apply the recommended test to the logarithm of a net barter terms of trade series and find that this series has a statistically significant negative slope. This finding is consistent with the well-known Prebisch–Singer hypothesis.

12.
In statistical data analysis it is often important to compare, classify, and cluster different time series. For these purposes various methods have been proposed in the literature, but they usually assume time series with the same sample size. In this article, we propose a spectral domain method for handling time series of unequal length. The method makes the spectral estimates comparable by producing statistics at the same frequency. The procedure is compared with other methods proposed in the literature via a Monte Carlo simulation study. As an illustrative example, the proposed spectral method is applied to cluster industrial production series of some developed countries.
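One simple way to produce spectral statistics at a common set of frequencies for series of unequal length is to interpolate each periodogram onto a shared grid. The sketch below is an illustrative assumption, not necessarily the authors' construction; the grid endpoints and interpolation scheme are choices made here for demonstration.

```python
import numpy as np

def periodogram(x):
    """Periodogram ordinates at the Fourier frequencies j/n, j = 1..n//2."""
    n = len(x)
    I = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n
    freqs = np.arange(len(I)) / n
    return freqs[1:], I[1:]                 # drop the zero frequency

def spectra_on_common_grid(x, y, n_grid=64):
    """Linearly interpolate both periodograms onto one frequency grid."""
    grid = np.linspace(0.02, 0.48, n_grid)  # interior of (0, 1/2)
    fx, Ix = periodogram(x)
    fy, Iy = periodogram(y)
    return grid, np.interp(grid, fx, Ix), np.interp(grid, fy, Iy)
```

Once both estimates live on the same grid, a distance such as the mean squared difference of the log spectra can feed a standard clustering algorithm.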

13.
Stationary time series models built from parametric distributions are, in general, limited in scope due to the assumptions imposed on the residual distribution and autoregression relationship. We present a modeling approach for univariate time series data, which makes no assumptions of stationarity, and can accommodate complex dynamics and capture non-standard distributions. The model for the transition density arises from the conditional distribution implied by a Bayesian nonparametric mixture of bivariate normals. This results in a flexible autoregressive form for the conditional transition density, defining a time-homogeneous, non-stationary Markovian model for real-valued data indexed in discrete time. To obtain a computationally tractable algorithm for posterior inference, we utilize a square-root-free Cholesky decomposition of the mixture kernel covariance matrix. Results from simulated data suggest that the model is able to recover challenging transition densities and non-linear dynamic relationships. We also illustrate the model on time intervals between eruptions of the Old Faithful geyser. Extensions to accommodate higher order structure and to develop a state-space model are also discussed.

14.
Estimation of the long-range dependence parameter in spatial processes using a semiparametric approach is studied. An extended formulation of the averaged periodogram method proposed in Robinson [1994. Semiparametric analysis of long memory time series. Ann. Statist. 22, 515–539] is derived, considering a certain homogeneous and isotropic behaviour of the spectral distribution in the low frequencies. The weak consistency of the estimator proposed is proved.

15.
Resampling methods are proposed to estimate the distributions of sums of m-dependent, possibly differently distributed, real-valued random variables. The random variables are allowed to have varying mean values. A nonparametric resampling method based on the moving blocks bootstrap is proposed for the case in which the mean values are smoothly varying or 'asymptotically equal'. The idea is to resample blocks in pairs. It is also confirmed that a 'circular' block resampling scheme can be used in the case where the mean values are 'asymptotically equal'. A central limit resampling theorem for each of the two cases is proved. The resampling methods have a potential application in time series analysis, to distinguish between two different forecasting models. This is illustrated with an example using Swedish export prices of coated paper products.
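The plain moving-blocks bootstrap that underlies the paired-block scheme can be sketched as follows. This is a generic illustration for an m-dependent series; the paper's pairwise-block variant for smoothly varying means is not reproduced here, and the block length and toy MA(1) data are assumptions.

```python
import numpy as np

def moving_blocks_bootstrap(x, block_len, rng):
    """One moving-blocks bootstrap resample of the series `x`."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [x[s:s + block_len] for s in starts]   # overlapping blocks
    return np.concatenate(blocks)[:n]               # trim to original length

# Approximate the sampling distribution of the mean of a 1-dependent series
rng = np.random.default_rng(3)
e = rng.standard_normal(1001)
x = e[1:] + 0.5 * e[:-1]          # MA(1) series, n = 1000, 1-dependent
boot_means = np.array([moving_blocks_bootstrap(x, 20, rng).mean()
                       for _ in range(2000)])
```

The spread of `boot_means` approximates the sampling variability of the series mean while preserving the short-range dependence within each block.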

16.
Test statistics for checking the independence between the innovations of several time series are developed. The time series models considered allow for general specifications for the conditional mean and variance functions that could depend on common explanatory variables. In testing for independence between more than two time series, checking pairwise independence does not lead to consistent procedures. Thus a finite family of empirical processes relying on multivariate lagged residuals are constructed, and we derive their asymptotic distributions. In order to obtain simple asymptotic covariance structures, Möbius transformations of the empirical processes are studied, and simplifications occur. Under the null hypothesis of independence, we show that these transformed processes are asymptotically Gaussian, independent, and with tractable covariance functions not depending on the estimated parameters. Various procedures are discussed, including Cramér–von Mises test statistics and tests based on non‐parametric measures. The ranks of the residuals are considered in the new methods, giving test statistics which are asymptotically margin‐free. Generalized cross‐correlations are introduced, extending the concept of cross‐correlation to an arbitrary number of time series; portmanteau procedures based on them are discussed. In order to detect the dependence visually, graphical devices are proposed. Simulations are conducted to explore the finite sample properties of the methodology, which is found to be powerful against various types of alternatives when the independence is tested between two and three time series. An application is considered, using the daily log‐returns of Apple, Intel and Hewlett‐Packard traded on the Nasdaq financial market. The Canadian Journal of Statistics 40: 447–479; 2012 © 2012 Statistical Society of Canada

17.
18.
We develop the empirical likelihood approach for a class of vector‐valued, not necessarily Gaussian, stationary processes with unknown parameters. In time series analysis, it is known that the Whittle likelihood is one of the most fundamental tools with which to obtain a good estimator of unknown parameters, and that the score functions are asymptotically normal. Motivated by the Whittle likelihood, we apply the empirical likelihood approach to its derivative with respect to unknown parameters. We also consider the empirical likelihood approach to minimum contrast estimation based on a spectral disparity measure, and apply the approach to the derivative of the spectral disparity. This paper provides rigorous proofs on the convergence of our two empirical likelihood ratio statistics to sums of gamma distributions. Because the fitted spectral model may be different from the true spectral structure, the results enable us to construct confidence regions for various important time series parameters without assuming specified spectral structures and the Gaussianity of the process.

19.
The aim of this paper is to study the concept of separability in multiple nonstationary time series displaying both common stochastic trends and common stochastic cycles. When modeling the dynamics of multiple time series for a panel of several entities, such as countries, sectors, or firms, imposing some form of separability and commonality is often required to restrict the dimension of the parameter space. For this purpose we introduce the concept of common feature separation and investigate the relationships between separation in cointegration and separation in serial correlation common features. Loosely speaking, we investigate whether a set of time series can be partitioned into subsets such that there are serial correlation common features within the sub-groups only. The paper investigates three issues. First, it provides conditions for separating joint cointegrating vectors into marginal cointegrating vectors as well as separating joint short-term dynamics into marginal short-term dynamics. Second, conditions for making permanent-transitory decompositions based on marginal systems are given. Third, issues of weak exogeneity are considered. Likelihood ratio type tests for the different hypotheses under study are proposed. An empirical analysis of the link between economic fluctuations in the United States and Canada shows the practical relevance of the approach proposed in this paper.

20.
A note on the correlation structure of transformed Gaussian random fields
Transformed Gaussian random fields can be used to model continuous time series and spatial data when the Gaussian assumption is not appropriate. The main features of these random fields are specified in a transformed scale, while for modelling and parameter interpretation it is useful to establish connections between these features and those of the random field in the original scale. This paper provides evidence that for many 'normalizing' transformations the correlation function of a transformed Gaussian random field is not very dependent on the transformation that is used. Hence many commonly used transformations of correlated data have little effect on the original correlation structure. The property is shown to hold for some kinds of transformed Gaussian random fields, and a statistical explanation based on the concept of parameter orthogonality is provided. The property is also illustrated using two spatial datasets and several 'normalizing' transformations. Some consequences of this property for modelling and inference are also discussed.
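For the exponential transformation the claim can be checked in closed form: if (Z1, Z2) is bivariate Gaussian with common variance sigma^2 and correlation rho, a standard identity gives corr(exp(Z1), exp(Z2)) = (exp(rho * sigma^2) - 1) / (exp(sigma^2) - 1), which stays close to rho when sigma^2 is moderate. The numerical check below is illustrative only and uses that known identity, not the paper's examples.

```python
import numpy as np

def lognormal_corr(rho, sigma2):
    """Correlation of (exp(Z1), exp(Z2)) for bivariate Gaussian (Z1, Z2)
    with common variance `sigma2` and correlation `rho` (standard identity)."""
    return (np.exp(rho * sigma2) - 1.0) / (np.exp(sigma2) - 1.0)

# How far does exponentiating move the correlation, over a range of rho?
rho = np.linspace(0.0, 0.9, 10)
for s2 in (0.1, 0.5):
    print(s2, np.max(np.abs(lognormal_corr(rho, s2) - rho)))
```

For small sigma^2 the transformed correlation is nearly identical to rho, consistent with the abstract's observation that 'normalizing' transformations have little effect on the correlation structure.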


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) · 京ICP备09084417号