Similar Documents (20 results)
1.
Singular spectrum analysis (SSA) is a non-parametric time series modelling technique in which an observed time series is unfolded into the column vectors of a Hankel-structured matrix, known as a trajectory matrix. For noise-free signals the column vectors of the trajectory matrix lie on a single R-flat. Singular value decomposition (SVD) can be used to find the orthonormal basis vectors of the linear subspace parallel to this R-flat. SSA can essentially handle functions that are governed by a linear recurrent formula (LRF), including the broad class of functions proposed by Buchstaber [1994. Time series analysis and Grassmannians. Amer. Math. Soc. Transl. 162 (2), 1–17]. SSA is useful for modelling time series with complex cyclical patterns that increase over time. Various methods have been studied to extend SSA to several time series; see Golyandina et al. [2003. Variants of the Caterpillar SSA-method for analysis of multidimensional time series (in Russian). http://www.gistatgroup.com/cat/]. Earlier, von Storch and Zwiers (1999) and Allen and Robertson (1996) (see Ghil et al. [2002. Advanced spectral methods for climatic time series. Rev. Geophys. 40 (1), 3.1–3.41]) used multi-channel SSA (M-SSA) to apply SSA to “grand” block matrices. Our approach differs from all of these by using the common principal components approach introduced by Flury [1988. Common Principal Components and Related Multivariate Models. Wiley, New York]. In this paper SSA is extended to several time series which are similar in some respects, e.g. cointegrated, i.e. sharing a common R-flat. Using the common principal component (CPC) approach of Flury [1988], the SSA method is extended to common singular spectrum analysis (CSSA), with which common features of several time series can be studied.
CSSA decomposes the original time series into sums of a small number of common components, related to shared trend and oscillatory components, plus noise. The most likely dimension of the supporting linear subspace is determined using a heuristic approach and a hierarchical selection procedure.

2.
The first two stages in modelling time series are hypothesis testing and estimation. For long memory time series, the second stage was studied in [M. Boutahar et al., Estimation methods of the long memory parameter: Monte Carlo analysis and application, J. Appl. Statist. 34(3), pp. 261–301], where we presented estimation methods for the long memory parameter. The present paper addresses the first stage, and hence completes the former, by exploring tests for detecting long memory in time series. We consider two kinds of tests: a non-parametric class and a semi-parametric one. We derive the limiting distribution of the non-parametric tests under the null hypothesis of short memory and show that they are consistent against the alternative of long memory. We also perform Monte Carlo simulations to analyse the size distortion and power of all proposed tests. We conclude that for large sample sizes the two classes are equivalent, but for small sample sizes the non-parametric class is better than the semi-parametric one.

3.
ABSTRACT

We propose a semiparametric approach to estimating the existence and location of a statistical change-point in a nonlinear multivariate time series contaminated with an additive noise component. In particular, we consider a p-dimensional stochastic process of independent multivariate normal observations whose mean function varies smoothly except at a single change-point. Our approach conducts a Bayesian analysis on the empirical detail coefficients of the original time series after a wavelet transform. If the mean function of the time series can be expressed as a multivariate step function, our Bayesian-wavelet method performs comparably with classical parametric methods such as maximum likelihood estimation. The advantage of our multivariate change-point method is that it applies to a much larger class of mean functions, requiring only general smoothness conditions.
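The key observation behind wavelet change-point methods is that a jump in the mean produces an outlying detail coefficient at the jump's location. The following is a deliberately simplified, non-Bayesian, univariate sketch of that idea using level-1 Haar details, not the paper's method; the series length, noise level, and jump size are illustrative.

```python
import numpy as np

def haar_details(y):
    """Level-1 Haar detail coefficients of an even-length series."""
    y = np.asarray(y, dtype=float)
    return (y[0::2] - y[1::2]) / np.sqrt(2.0)

def change_point_estimate(y):
    """Locate a single mean shift via the largest-magnitude detail coefficient."""
    d = haar_details(y)
    return 2 * int(np.argmax(np.abs(d)))

rng = np.random.default_rng(1)
n, true_cp = 256, 101
y = rng.normal(0.0, 0.2, n)
y[true_cp:] += 2.0          # step in the mean at index 101
cp_hat = change_point_estimate(y)
```

Away from the jump, detail coefficients are pure noise; the pair straddling the jump carries a contribution of order the jump size, which is why thresholding or a Bayesian analysis of the details can detect the change.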

4.
Stochastic Models, 2013, 29(2–3): 643–668
Abstract

We investigate polynomial factorization as a classical analysis method for servers with semi-Markov arrival and service processes. The modeling approach is directly applicable to queueing systems and servers in production lines and telecommunication networks, where the flexibility in adaptation to autocorrelated processes is essential.

Although the method offers a compact form of the solution with favourable computation-time complexity, making it possible to consider large state spaces and system equations of high degree, numerical stability is not guaranteed. We therefore apply interval arithmetic in order to obtain verified results for the workload distributions, or otherwise to indicate that the precision of the computation has to be improved. The paper gives an overview of numerical and performance aspects of factorization in comparison with alternative methods.

5.
This article presents a review of some modern approaches to trend extraction for one-dimensional time series, which is one of the major tasks of time series analysis. The trend of a time series is usually defined as a smooth additive component which contains information about the global change of the time series, and we discuss this and other definitions of the trend. We do not aim to review all the novel approaches, but rather to observe the problem from different viewpoints and different areas of expertise. The article contributes to understanding the concept of a trend and the problem of its extraction. We present an overview of the advantages and disadvantages of the approaches under consideration, which are: the model-based approach (MBA), nonparametric linear filtering, singular spectrum analysis (SSA), and wavelets. The MBA assumes the specification of a stochastic time series model, usually either an autoregressive integrated moving average (ARIMA) model or a state space model. The nonparametric filtering methods do not require specification of a model and are popular because of their simplicity in application. We discuss the Henderson, LOESS, and Hodrick–Prescott filters and their versions derived by exploiting the Reproducing Kernel Hilbert Space methodology. In addition to these prominent approaches, we consider SSA and wavelet methods. SSA is widespread in the geosciences; its algorithm is similar to that of principal components analysis, but SSA is applied to time series. Wavelet methods are the de facto standard for denoising in signal processing, and recent work has revealed their potential in trend analysis.
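Of the filters named above, the Hodrick–Prescott filter has a particularly compact closed form: the trend minimises the sum of the squared fit error and a penalty `lam` on squared second differences, giving a single linear solve. A minimal dense-matrix sketch (the series and the smoothing parameter `lam` are illustrative; production code would use sparse matrices):

```python
import numpy as np

def hodrick_prescott(y, lam=1600.0):
    """Hodrick-Prescott trend: minimise ||y - tau||^2 + lam * ||D2 tau||^2,
    where D2 takes second differences.  Closed form: tau = (I + lam*D2'D2)^-1 y.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Second-difference operator, shape (n-2, n)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

t = np.linspace(0, 1, 120)
y = 3 * t ** 2 + 0.1 * np.sin(40 * t)   # smooth trend plus a fast oscillation
trend = hodrick_prescott(y, lam=400.0)
```

Larger `lam` forces smaller second differences, i.e. a smoother (in the limit, linear) trend; `lam = 1600` is the conventional choice for quarterly economic data.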

6.
A new procedure based on time-varying amplitudes for the classification of cyclical time series is proposed. In many practical situations, the amplitude of a cyclical component of a time series is not constant. Estimated time-varying amplitudes, obtained through complex demodulation of the time series, are used as the discriminating variables in classical discriminant analysis. The aim of this paper is to demonstrate, through simulation studies and applications to well-known data sets, that time-varying amplitudes have very good discriminating power, and hence that their use in classical discriminant analysis is a simple alternative to more complex methods of time series discrimination.
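Complex demodulation itself is a short computation: multiply the series by a complex exponential at the target frequency to shift that cycle to frequency zero, low-pass the result, and read off the amplitude as twice the modulus. A minimal sketch, using a moving average as the low-pass filter (the frequency, window, and test signal are illustrative):

```python
import numpy as np

def time_varying_amplitude(y, freq, window=21):
    """Estimate the time-varying amplitude of a cycle at `freq` (cycles/sample)
    by complex demodulation: shift the target frequency to zero, then smooth."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    demod = y * np.exp(-2j * np.pi * freq * t)   # heterodyne to baseband
    kernel = np.ones(window) / window            # crude low-pass: moving average
    smooth = np.convolve(demod, kernel, mode="same")
    return 2.0 * np.abs(smooth)                  # factor 2 restores the amplitude

# A cycle whose amplitude grows linearly from 1 to 3.
t = np.arange(500)
amp = 1.0 + 2.0 * t / 499
y = amp * np.cos(2 * np.pi * 0.05 * t)
est = time_varying_amplitude(y, freq=0.05, window=41)
```

The estimated amplitude series `est` (one value per time point) is exactly the kind of discriminating variable the abstract describes feeding into discriminant analysis.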

7.

This paper presents a method of customizing goodness-of-fit tests that transforms the empirical distribution function so as to create tests for certain alternatives. Using the (f, g) transform described in Blom (1958), one can create non-parametric tests for an assortment of alternative distributions. As examples, three new (f, g)-corrected Kolmogorov–Smirnov tests for goodness-of-fit are discussed. One of these tests is powerful for testing whether the data come from an alternative that is heavier in the tails. Another identifies whether the data come from an alternative that is heavier in the middle of the distribution. The last identifies whether the data come from an alternative in which the first or third quartile is far from the corresponding quartile of the hypothesized distribution. The behaviour of the three new tests is investigated through a power study.

8.
ABSTRACT

The local linear estimator is a popular method for estimating non-parametric regression functions, and many methods have been derived to estimate its smoothing parameter, the bandwidth. In this article, we propose an information criterion-based bandwidth selection method, with the degrees of freedom originally derived for non-parametric inference. Unlike the plug-in method, the new method does not require preliminary parameters to be chosen in advance, and it is computationally efficient compared with the cross-validation (CV) method. A numerical study shows that the new method performs better than, or comparably to, the existing plug-in and CV methods in terms of estimating the mean functions, and has lower variability than CV selectors. Real data applications illustrate the effectiveness of the new method.
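For reference, the local linear estimator and the CV baseline that the proposed criterion is compared against can be sketched as follows. This is a generic leave-one-out CV illustration, not the article's information criterion; the kernel, bandwidth grid, and test function are illustrative.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y|x=x0] with a Gaussian kernel, bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    # Weighted least squares; the intercept is the fitted value at x0.
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta[0]

def cv_bandwidth(x, y, grid):
    """Pick h from `grid` by leave-one-out cross-validation."""
    n = len(x)
    scores = []
    for h in grid:
        err = 0.0
        for i in range(n):
            mask = np.arange(n) != i
            err += (y[i] - local_linear(x[i], x[mask], y[mask], h)) ** 2
        scores.append(err / n)
    return grid[int(np.argmin(scores))], scores

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(80)
h_cv, scores = cv_bandwidth(x, y, grid=np.array([0.05, 0.1, 0.2, 0.5]))
```

The CV loop refits the estimator n times per candidate bandwidth, which is the computational cost the article's criterion-based selector is designed to avoid.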

9.
ABSTRACT

In this work, we deal with a bivariate time series of wind speed and direction. The observed data have peculiar features, such as informative missing values, unreliable measurements under specific conditions, and interval-censored data, which we take into account in the model specification. We analyse the time series with a non-parametric Bayesian hidden Markov model, introducing a new emission distribution, suitable for our data, based on the invariant wrapped Poisson, the Poisson, and the hurdle density. The model is estimated on simulated datasets and on the real data example that motivated this work.

10.
ABSTRACT

In this article, we obtain an exact expression for the distribution of the time to failure of a discrete-time cold standby repairable system under the classical assumptions that both the working time and the repair time of components are geometric. Our method is based on an alternative representation of the lifetime as a waiting-time random variable on a binary sequence, together with combinatorial arguments. Such an exact expression for the time-to-failure distribution is new in the literature. Furthermore, we obtain the probability generating function and the first two moments of the lifetime random variable.
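The system described above is easy to simulate, which gives a useful sanity check on any exact formula. The sketch below uses one simple set of timing conventions (repair completion is checked before failure in each period; the failure and repair probabilities `p` and `r` are illustrative), not the article's combinatorial derivation. With `r = 0` repairs never complete, so the lifetime is the sum of two geometric working times with mean 2/p.

```python
import numpy as np

def simulate_lifetime(p, r, rng):
    """One run of a two-unit cold-standby system in discrete time.

    Each period the working unit fails with probability p; a unit in repair
    is finished with probability r.  The system fails when the working unit
    fails while the other unit is still being repaired.
    """
    t = 0
    in_repair = False
    while True:
        t += 1
        if in_repair and rng.random() < r:
            in_repair = False          # repaired unit returns to cold standby
        if rng.random() < p:           # working unit fails this period
            if in_repair:
                return t               # no standby available: system failure
            in_repair = True           # swap in the standby, repair the failed unit

rng = np.random.default_rng(3)
n = 5000
no_repair = np.array([simulate_lifetime(0.1, 0.0, rng) for _ in range(n)])
with_repair = np.array([simulate_lifetime(0.1, 0.2, rng) for _ in range(n)])
```

With repair enabled the mean lifetime increases markedly, since a failed unit usually returns to standby before the next failure.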

11.
Summary.  There is a large literature on methods of analysis for randomized trials with noncompliance, which focuses on the effect of treatment on the average outcome. This paper considers evaluating the effect of treatment on the entire outcome distribution and on general functions of this effect. For distributional treatment effects, fully non-parametric and fully parametric approaches have been proposed. The fully non-parametric approach can be inefficient, whereas the fully parametric approach is not robust to violations of distributional assumptions. We develop a semiparametric instrumental variable method based on the empirical likelihood approach. Our method can be applied to general outcomes and general functions of outcome distributions, and allows us to predict a subject's latent compliance class on the basis of an observed outcome value in the observed assignment and received-treatment groups. Asymptotic results for the estimators and the likelihood ratio statistic are derived. A simulation study shows that our estimators of various treatment effects are substantially more efficient than the currently used fully non-parametric estimators. The method is illustrated by an analysis of data from a randomized trial of an encouragement intervention to improve adherence to prescribed depression treatments among depressed elderly patients in primary care practices.

12.
The aim of this research is to apply the singular spectrum analysis (SSA) technique, a relatively new and powerful technique in time series analysis and forecasting, to forecast the 2008 UK recession using eight economic time series. These series were selected because they represent the most important economic indicators in the UK. The ability to understand the underlying structure of these series and to quickly identify turning points, such as the onset of the recent recession, is of key interest to users. In recent years, the SSA technique has been further developed and applied to many practical problems, so these series provide an ideal practical test of the potential benefits of SSA during one of the most challenging periods for econometric analysis in recent years. The results are compared with those obtained using ARIMA and Holt–Winters models, as these are currently used as standard forecasting methods in the Office for National Statistics in the UK.

13.

In time series analysis, the signal extraction model (SEM) is used to estimate an unobserved signal component from observed time series data. Since the parameters of the components in a SEM are often unknown in practice, a common approach is to estimate the unobserved signal component using the maximum likelihood estimates (MLEs) of those parameters. This paper explores an alternative way to estimate the unobserved signal component when the parameters are unknown, making use of importance sampling (IS) with Bayesian inference. The basic idea is to treat the parameters of the components in the SEM as a random vector and compute a posterior probability density function of the parameters using Bayesian inference. The IS method is then applied to integrate out the parameters, yielding estimates of the unobserved signal component that are unconditional on the parameters. The method is illustrated with a real time series. A Monte Carlo study with four different types of time series models then compares its performance with that of the MLE-based approach. The study shows that the IS method with Bayesian inference is computationally feasible and robust, and more efficient in terms of mean square error (MSE) than the commonly used method.
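The "integrate out the parameters by importance sampling" step can be illustrated on a toy conjugate problem where the answer is known exactly: draw parameters from a broad proposal, weight each draw by prior times likelihood over the proposal density, and form a weighted average. This generic sketch is not the paper's SEM; the model, proposal, and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy setting: y_i ~ N(theta, 1), prior theta ~ N(0, 4).  The conjugate
# posterior is available in closed form, which lets us check the IS answer.
y = rng.normal(1.5, 1.0, 50)
prior_var, n = 4.0, len(y)
post_var = 1.0 / (n / 1.0 + 1.0 / prior_var)
post_mean = post_var * y.sum()

# Importance sampling: draw from a broad proposal, weight by prior * likelihood
# divided by the proposal density (constants cancel after normalisation).
theta = rng.normal(0.0, 3.0, 100_000)                   # proposal N(0, 9)
log_w = (-0.5 * theta ** 2 / prior_var                  # log prior
         - 0.5 * ((y[:, None] - theta[None, :]) ** 2).sum(axis=0)  # log likelihood
         + 0.5 * theta ** 2 / 9.0)                      # minus log proposal
w = np.exp(log_w - log_w.max())                         # stabilise before exp
is_mean = (w * theta).sum() / w.sum()
```

Subtracting `log_w.max()` before exponentiating avoids underflow, a standard trick when likelihoods over many observations are involved.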

14.
15.
Dimension reduction in regression is an efficient method of overcoming the curse of dimensionality in non-parametric regression. Motivated by recent developments in dimension reduction for time series, this paper performs an empirical extension of the central mean subspace in time series to a single-input transfer function model. We use the central mean subspace as a tool for dimension reduction in bivariate time series when the dimension and lag are known, and estimate the central mean subspace through the Nadaraya–Watson kernel smoother. Furthermore, we develop a data-dependent approach based on a modified Schwarz Bayesian criterion to estimate the unknown dimension and lag. Finally, we show that the approach works well for bivariate time series using an expository demonstration, two simulations, and a real data analysis of El Niño and fish population series.

16.
RATES OF CONVERGENCE IN SEMI-PARAMETRIC MODELLING OF LONGITUDINAL DATA
We consider the problem of semi-parametric regression modelling when the data consist of a collection of short time series for which measurements within series are correlated. The objective is to estimate a regression function of the form E[Y(t) | x] = x′β + μ(t), where μ(·) is an arbitrary, smooth function of time t, and x is a vector of explanatory variables which may or may not vary with t. For the non-parametric part of the estimation we use a kernel estimator with fixed bandwidth h. When h is chosen without reference to the data, we give exact expressions for the bias and variance of the estimators of β and μ(t), and an asymptotic analysis of the case in which the number of series tends to infinity whilst the number of measurements per series is held fixed. We also report the results of a small-scale simulation study indicating the extent to which the theoretical results continue to hold when h is chosen by a data-based cross-validation method.

17.
Singular spectrum analysis (SSA) is an increasingly popular and widely adopted filtering and forecasting technique currently exploited in a variety of fields. Given its increasing application and superior performance in comparison with other methods, it is pertinent to study and distinguish between the two forecasting variants of SSA: Vector SSA (SSA-V) and Recurrent SSA (SSA-R). The general notion is that SSA-V is more robust and provides better forecasts than SSA-R, especially for time series which are non-stationary and asymmetric, or affected by unit root problems, outliers, or structural breaks. However, no empirical evidence currently exists to prove these notions or to suggest that SSA-V is better than SSA-R. In this paper, we evaluate the out-of-sample forecasting capabilities of the optimised SSA-V and SSA-R forecasting algorithms via a simulation study and an application to 100 real data sets with varying structures, to provide a statistically reliable answer to the question of which SSA algorithm is best for forecasting at both short- and long-run horizons, based on several important criteria.

18.
Abstract

This paper is devoted to the application of singular-spectrum analysis to the sequential detection of changes in time series. An algorithm for change-point detection in time series, based on sequential application of singular-spectrum analysis, is developed and studied. The algorithm is applied to different data sets and studied extensively by numerical means. For specific models, several numerical approximations to the error probabilities and the power function of the algorithm are obtained. Numerical comparisons with other methods are given.

19.
The Shewhart, Bonferroni-adjustment, and analysis of means (ANOM) control charts are typically applied to monitor the mean of a quality characteristic. The Shewhart and Bonferroni procedures are used to recognize special causes in a production process; their control limits are constructed by assuming a normal distribution when the parameters (mean and standard deviation) are known, and an approximately normal distribution when they are unknown. The ANOM method is an alternative to the analysis of variance method, and can be used to establish mean control charts by applying an equicorrelated multivariate non-central t distribution. In this article, we establish new control charts, for phase I and phase II monitoring, based on the normal and t distributions, for the cases where the standard deviation is known or unknown. Our proposed methods are at least as effective as the classical Shewhart methods and have some advantages.
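For context, the classical known-parameter Shewhart chart for subgroup means is a one-line calculation: 3-sigma limits at the process mean plus or minus three standard errors of the subgroup mean. A minimal sketch (the process mean, standard deviation, and subgroup size are illustrative, and here the centre line is taken from the phase-I subgroup means):

```python
import numpy as np

def shewhart_limits(xbar_history, sigma, n):
    """3-sigma Shewhart limits for means of subgroups of size n,
    with known process standard deviation sigma."""
    mu = np.mean(xbar_history)           # centre line from phase-I data
    half_width = 3.0 * sigma / np.sqrt(n)
    return mu - half_width, mu + half_width

rng = np.random.default_rng(6)
subgroups = rng.normal(10.0, 2.0, size=(50, 5))   # 50 in-control subgroups of 5
xbars = subgroups.mean(axis=1)
lcl, ucl = shewhart_limits(xbars, sigma=2.0, n=5)
in_control = np.mean((xbars >= lcl) & (xbars <= ucl))
```

For an in-control normal process, roughly 99.7% of subgroup means fall inside the 3-sigma limits, so false alarms are rare by design.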

20.
R/S analysis is one of the main methods for revealing long memory in financial time series. To address the shortcomings of the classical R/S and modified R/S methods, this paper improves R/S analysis by designing an R/S statistic with a control factor, and uses Monte Carlo simulation to show that the improved method estimates the Hurst exponent H more effectively than the classical and modified R/S methods. The new method is then applied to an empirical analysis of the long memory and average non-periodic cycle lengths of the return series of the Shanghai Composite Index and the Shenzhen Component Index. The results show that the return series of both the Shanghai and Shenzhen markets exhibit long memory; however, the Shanghai return series has no clear average non-periodic cycle length, whereas the Shenzhen return series has an average non-periodic cycle of about 308 days.
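The classical R/S estimate of the Hurst exponent H, the baseline that the abstract's control-factor statistic improves on, can be sketched as follows: compute the rescaled range R/S over blocks of several sizes and regress log(R/S) on log of the block size. The block sizes and series are illustrative, and the control-factor correction itself is not reproduced here.

```python
import numpy as np

def rs_statistic(x):
    """Rescaled range R/S of one block: range of the mean-adjusted
    cumulative sum, divided by the block standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std()

def hurst_rs(x, block_sizes):
    """Classical R/S estimate of H: slope of log(R/S) against log(n)."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in block_sizes:
        k = len(x) // n
        vals = [rs_statistic(x[i * n:(i + 1) * n]) for i in range(k)]
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(5)
white = rng.standard_normal(4096)          # short memory: H should be near 0.5
H = hurst_rs(white, block_sizes=[16, 32, 64, 128, 256, 512])
```

For short-memory series H is near 0.5 (the classical statistic is known to be biased slightly upward in small blocks, which is one motivation for corrected variants); H above 0.5 indicates long memory.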
