Similar Documents
Found 20 similar documents; search time: 437 ms.
1.
This paper suggests a simple nonmetric method for smoothing time series data. The smoothed series is the closest polytone curve to the presmoothed series in terms of least sum of absolute deviations. The method is exemplified on several seasonally adjusted series in order to estimate their trend component.
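A polytone curve is piecewise monotone; its monotone building block under the least-absolute-deviations criterion can be sketched with a median-based pool-adjacent-violators pass. This is a minimal illustration, not the authors' algorithm, and the helper name `l1_isotonic` is hypothetical:

```python
import numpy as np

def l1_isotonic(y):
    """Nondecreasing fit minimizing the sum of absolute deviations,
    via pool-adjacent-violators with block medians."""
    blocks = [[v] for v in y]                     # raw values per block
    fits = [float(np.median(b)) for b in blocks]  # block medians
    i = 0
    while i < len(blocks) - 1:
        if fits[i] > fits[i + 1]:                 # violation: merge blocks
            blocks[i].extend(blocks.pop(i + 1))
            fits.pop(i + 1)
            fits[i] = float(np.median(blocks[i]))
            i = max(i - 1, 0)                     # re-check the previous pair
        else:
            i += 1
    # expand block medians back to observation level
    return [f for f, b in zip(fits, blocks) for _ in b]
```

A full polytone fit would apply such monotone passes segment by segment between the turning points of the curve.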

2.
3.
A new method for calculating the confluent hypergeometric function of vector argument is presented. This function serves to normalize the integral which defines the Bingham distribution. By using a polynomial and series expansion, the double integral can be reduced to a single infinite summation whose coefficients may be computed recursively. A short C program implementing the method is included in the appendix.
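The recursive series of the paper is not reproduced here; as a crude sanity check, the same normalizing quantity (the spherical average of exp(x'Ax), i.e. the integral divided by the surface area) can be estimated by Monte Carlo. Function name and sample size are illustrative:

```python
import numpy as np

def bingham_norm_mc(A, n=100_000, seed=0):
    """Monte Carlo estimate of the spherical average of exp(x'Ax),
    i.e. the Bingham normalizing integral divided by the surface area."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, A.shape[0]))
    x /= np.linalg.norm(x, axis=1, keepdims=True)     # uniform on the sphere
    return float(np.mean(np.exp(np.einsum('ij,jk,ik->i', x, A, x))))
```

For A = 0 the average is exactly 1, and for A = I it is e, since x'x = 1 on the unit sphere; these make convenient checks of the sampler.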

4.
We develop an entropy-based test for randomness of binary time series of finite length. The test uses the frequencies of contiguous blocks of different lengths. A simple condition on the block lengths and the length of the time series enables one to estimate the entropy rate for the data, and this information is used to develop a statistic to test the hypothesis of randomness. This statistic measures the deviation of the estimated entropy of the observed data from the theoretical maximum under the randomness hypothesis. This test offers a real alternative to the conventional runs test. Critical percentage points, based on simulations, are provided for testing the hypothesis of randomness. Power calculations using dependent data show that the proposed test has higher power than the runs test for short series and is comparable to the runs test for long series. The test is applied to two published data sets that were investigated by others with respect to their randomness.
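The core quantity can be sketched as a plug-in block entropy and its deficit from the theoretical maximum (m bits for binary blocks of length m). This is only the raw ingredient, not the authors' calibrated statistic, and the function names are hypothetical:

```python
import math
from collections import Counter

def block_entropy(bits, m):
    """Plug-in Shannon entropy (in bits) of the overlapping
    contiguous blocks of length m."""
    blocks = [tuple(bits[i:i + m]) for i in range(len(bits) - m + 1)]
    n = len(blocks)
    return -sum((c / n) * math.log2(c / n) for c in Counter(blocks).values())

def entropy_deficit(bits, m):
    """Deviation of the estimated block entropy from its maximum m;
    values near zero are consistent with randomness."""
    return m - block_entropy(bits, m)
```

A strongly dependent series such as 0,1,0,1,... shows a large deficit at m = 2 because only two of the four possible blocks ever occur.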

5.
This paper extends the univariate time series smoothing approach provided by penalized least squares to a multivariate setting, thus allowing for joint estimation of several time series trends. The theoretical results are valid for the general multivariate case, but particular emphasis is placed on the bivariate situation from an applied point of view. The proposal is based on a vector signal-plus-noise representation of the observed data that requires the first two sample moments and specifying only one smoothing constant. A measure of the amount of smoothness of an estimated trend is introduced so that an analyst can set in advance a desired percentage of smoothness to be achieved by the trend estimate. The required smoothing constant is determined by the chosen percentage of smoothness. Closed form expressions for the smoothed estimated vector and its variance-covariance matrix are derived from a straightforward application of generalized least squares, thus providing best linear unbiased estimates for the trends. A detailed algorithm applicable for estimating bivariate time series trends is also presented and justified. The theoretical results are supported by a simulation study and two real applications. One corresponds to Mexican and US macroeconomic data within the context of business cycle analysis, and the other one to environmental data pertaining to a monitored site in Scotland.
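The univariate building block that the paper extends is the standard penalized-least-squares trend, whose closed form follows from a single linear solve. A minimal sketch (not the paper's multivariate GLS machinery; `pls_trend` is a name chosen here):

```python
import numpy as np

def pls_trend(y, lam):
    """Penalized least squares trend: minimize
    ||y - tau||^2 + lam * ||second differences of tau||^2."""
    y = np.asarray(y, float)
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```

With lam = 0 the trend reproduces the data, and as lam grows the estimate shrinks toward a straight line; an exactly linear series is left unchanged for any lam, since its second differences vanish.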

6.
Recently, Perron has carried out tests of the unit-root hypothesis against the alternative hypothesis of trend stationarity with a break in the trend occurring at the Great Crash of 1929 or at the 1973 oil-price shock. His analysis covers the Nelson–Plosser macroeconomic data series as well as a postwar quarterly real gross national product (GNP) series. His tests reject the unit-root null hypothesis for most of the series. This article takes issue with the assumption used by Perron that the Great Crash and the oil-price shock can be treated as exogenous events. A variation of Perron's test is considered in which the breakpoint is estimated rather than fixed. We argue that this test is more appropriate than Perron's because it circumvents the problem of data-mining. The asymptotic distribution of the estimated breakpoint test statistic is determined. The data series considered by Perron are reanalyzed using this test statistic. The empirical results make use of the asymptotics developed for the test statistic as well as extensive finite-sample corrections obtained by simulation. The effect on the empirical results of fat-tailed and temporally dependent innovations is also investigated. In brief, by treating the breakpoint as endogenous, we find that there is less evidence against the unit-root hypothesis than Perron finds for many of the data series, but stronger evidence against it for several of the series, including the Nelson–Plosser industrial-production, nominal-GNP, and real-GNP series.

7.
This paper studies the effect of autocorrelation on the smoothness of the trend of a univariate time series estimated by means of penalized least squares. An index of smoothness is deduced for the case of a time series represented by a signal-plus-noise model, where the noise follows an autoregressive process of order one. This index is useful for measuring the distortion of the amount of smoothness induced by autocorrelation. Different autocorrelation values are used to appreciate the numerical effect on smoothness for estimated trends of time series with different sample sizes. For comparative purposes, several graphs of two simulated time series are presented, where the estimated trend is compared with and without autocorrelation in the noise. Some findings are as follows. On the one hand, when the autocorrelation is negative (no matter how large) or positive but small, the estimated trend gets very close to the true trend; even then, the estimation is improved by fixing the index of smoothness according to the sample size. On the other hand, when the autocorrelation is positive and large, the estimated trend lies far away from the true trend. This situation is mitigated by fixing an appropriate index of smoothness for the estimated trend in accordance with the sample size at hand. Finally, an empirical example serves to illustrate the use of the smoothness index when estimating the trend of Mexico's quarterly GDP.

8.
This paper shows how cubic smoothing splines fitted to univariate time series data can be used to obtain local linear forecasts. The approach is based on a stochastic state-space model which allows the use of likelihoods for estimating the smoothing parameter, and which enables easy construction of prediction intervals. The paper shows that the model is a special case of an ARIMA(0, 2, 2) model; it provides a simple upper bound for the smoothing parameter to ensure an invertible model; and it demonstrates that the spline model is not a special case of Holt's local linear trend method. The paper compares the spline forecasts with Holt's forecasts and those obtained from the full ARIMA(0, 2, 2) model, showing that the restricted parameter space does not impair forecast performance. The advantage of this approach over a full ARIMA(0, 2, 2) model is that it gives a smooth trend estimate as well as a linear forecast function.
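The idea of "smooth trend plus linear forecast function" can be caricatured by smoothing with a second-difference penalty (a rough stand-in for the cubic smoothing spline, not the paper's state-space likelihood) and then extending the last level and slope of the fitted trend. The helper name is hypothetical:

```python
import numpy as np

def local_linear_forecast(y, lam, h):
    """Smooth y with a penalized-least-squares trend, then extend the
    last level and slope linearly for h steps ahead."""
    y = np.asarray(y, float)
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)
    tau = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)   # smooth trend
    level, slope = tau[-1], tau[-1] - tau[-2]
    return level + slope * np.arange(1, h + 1)
```

On an exactly linear series the fitted trend equals the data for any penalty, so the forecasts simply continue the line.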

9.
Summary.  We present an approach for correcting for interobserver measurement error in an ordinal logistic regression model, taking into account also the variability of the estimated correction terms. The different scoring behaviour of the 16 examiners complicated the identification of a geographical trend in a recent study on caries experience in 7-year-old Flemish children (Belgium). Since the measurement error is on the response, the factor 'examiner' could be included in the regression model to correct for its confounding effect. However, controlling for examiner largely removed the geographical east–west trend. Instead, we suggest a (Bayesian) ordinal logistic model which corrects for the scoring error (compared with a gold standard) using a calibration data set. The marginal posterior distribution of the regression parameters of interest is obtained by integrating out the correction terms pertaining to the calibration data set. This is done by processing two Markov chains sequentially, whereby one Markov chain samples the correction terms. The sampled correction term is then imputed into the Markov chain pertaining to the regression parameters. The model was fitted to the oral health data of the Signal–Tandmobiel® study. A WinBUGS program was written to perform the analysis.

10.
In this article we consider the problem of detecting changes in level and trend in a time series model in which the number of change-points is unknown. The approach of Bayesian stochastic search model selection is introduced to detect the configuration of changes in a time series. The number and positions of change-points are determined by a sequence of change-dependent parameters. The sequence is estimated from its posterior distribution via maximum a posteriori (MAP) estimation. A Markov chain Monte Carlo (MCMC) method is used to estimate the posterior distributions of the parameters. Several real-data examples, including a time series of traffic accidents and two hydrological time series, are analyzed.
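In the degenerate case of a single level shift with flat priors, the MAP configuration reduces to a least-squares scan over split points; the paper's MCMC stochastic search generalizes this to an unknown number of changes in level and trend. A minimal sketch under those simplifying assumptions:

```python
import numpy as np

def single_level_shift(y):
    """Return the split index minimizing the within-segment sum of
    squares: a one-change-point MAP estimate under flat priors."""
    y = np.asarray(y, float)
    best_sse, best_k = np.inf, None
    for k in range(1, len(y)):        # change occurs after index k - 1
        sse = ((y[:k] - y[:k].mean()) ** 2).sum() \
            + ((y[k:] - y[k:].mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_k = sse, k
    return best_k
```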

11.
This note reconsiders the 'classical' approach to trend estimation and presents a modern treatment of this technique that enables trend filters which incorporate end-effects to be constructed easily and efficiently. The approach is illustrated by estimating recent Northern Hemispheric temperature trends. In so doing, it shows how classical trend models may be selected in empirical applications and indicates how this choice determines the properties of the latest trend estimates.

12.
This paper describes an alternative approach for testing for the existence of a trend in time series. The test is constructed using wavelet analysis, which can decompose a time series into low-frequency (trend) and high-frequency (noise) components. Under the normality assumption, the test is distributed as F. However, using empirically generated critical values, the properties of the test statistic have been investigated under different conditions and for different types of wavelet. The Haar wavelet has been shown to exhibit the highest power among the wavelet types considered.

The methodology here has been applied to real temperature data in Sweden for the period 1850-1999. The results indicate a significant increasing trend which agrees with the 'global warming' hypothesis during the last 100 years.
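The trend/noise split that the test builds on can be sketched with the Haar transform itself: pairwise sums give the low-frequency (trend) part and pairwise differences the high-frequency (noise) part. This is only the decomposition step, not the F-type test, and both function names are chosen here:

```python
import numpy as np

def haar_split(x):
    """One level of the Haar transform: approximation (low-frequency,
    trend) and detail (high-frequency, noise) coefficients."""
    x = np.asarray(x, float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_trend(x, levels):
    """Discard the detail coefficients for `levels` splits and map the
    remaining approximation back to a piecewise-constant trend."""
    x = np.asarray(x, float)
    for _ in range(levels):
        x, _ = haar_split(x)                    # keep low frequencies only
    return np.repeat(x, 2 ** levels) / np.sqrt(2) ** levels
```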

13.
A Bayesian analysis is presented of a time series which is the sum of a stationary component with a smooth spectral density and a deterministic component consisting of a linear combination of a trend and periodic terms. The periodic terms may have known or unknown frequencies. The advantage of our approach is that different features of the data—such as the regression parameters, the spectral density, unknown frequencies and missing observations—are combined in a hierarchical Bayesian framework and estimated simultaneously. A Bayesian test to detect deterministic components in the data is also constructed. By using an asymptotic approximation to the likelihood, the computation is carried out efficiently using the Markov chain Monte Carlo method in O(Mn) operations, where n is the sample size and M is the number of iterations. We show empirically that our approach works well on real and simulated samples.

14.
The theory and properties of trend-free (TF) and nearly trend-free (NTF) block designs are well developed. Applications have been hampered, however, because a methodology for design construction has not been available.

This article begins with a short review of concepts and properties of TF and NTF block designs. The major contribution is provision of an algorithm for the construction of linear and nearly linear TF block designs. The algorithm is incorporated in a computer program in FORTRAN 77 provided in an appendix for the IBM PC or compatible microcomputer, a program adaptable also to other computers. Three sets of block designs generated by the program are given as examples.

A numerical example of analysis of a linear trend-free balanced incomplete block design is provided.

15.
This article discusses the discretization of continuous-time filters for application to discrete time series sampled at any fixed frequency. In this approach, the filter is first set up directly in continuous-time; since the filter is expressed over a continuous range of lags, we also refer to them as continuous-lag filters. The second step is to discretize the filter itself. This approach applies to different problems in signal extraction, including trend or business cycle analysis, and the method allows for coherent design of discrete filters for observed data sampled as a stock or a flow, for nonstationary data with stochastic trend, and for different sampling frequencies. We derive explicit formulas for the mean squared error (MSE) optimal discretization filters. We also discuss the problem of optimal interpolation for nonstationary processes – namely, how to estimate the values of a process and its components at arbitrary times in-between the sampling times. A number of illustrations of discrete filter coefficient calculations are provided, including the local level model (LLM) trend filter, the smooth trend model (STM) trend filter, and the Band Pass (BP) filter. The essential methodology can be applied to other kinds of trend extraction problems. Finally, we provide an extended demonstration of the method on CPI flow data measured at monthly and annual sampling frequencies.

16.
The spectral analysis of Gaussian linear time-series processes is usually based on uni-frequential tools because the spectral density functions of degree 2 and higher are identically zero and there is no polyspectrum in this case. In finite samples, such an approach does not allow the resolution of closely adjacent spectral lines, except by using autoregressive models of excessively high order in the method of maximum entropy. In this article, multi-frequential periodograms designed for the analysis of discrete and mixed spectra are defined and studied for their properties in finite samples. For a given vector of frequencies ω, the sum of squares of the corresponding trigonometric regression model fitted to a time series by unweighted least squares defines the multi-frequential periodogram statistic IM(ω). When ω is unknown, it follows from the properties of nonlinear models whose parameters separate (i.e., the frequencies and the cosine and sine coefficients here) that the least-squares estimator of the frequencies is obtained by maximizing IM(ω). The first-order, second-order and distribution properties of IM(ω) are established theoretically in finite samples, and are compared with those of Schuster's uni-frequential periodogram statistic. In the multi-frequential periodogram analysis, the least-squares estimator of frequencies is proved to be theoretically unbiased in finite samples if the number of periodic components of the time series is correctly estimated. Here, this number is estimated at the end of a stepwise procedure based on pseudo-likelihood ratio tests. Simulations are used to compare the stepwise procedure involving IM(ω) with a stepwise procedure using Schuster's periodogram, to study an approximation of the asymptotic theory for the frequency estimators in finite samples in relation to the proximity and signal-to-noise ratio of the periodic components, and to assess the robustness of IM(ω) against autocorrelation in the analysis of mixed spectra.
Overall, the results show an improvement of the new method over the classical approach when spectral lines are adjacent. Finally, three examples with real data illustrate specific aspects of the method, and extensions (i.e., unequally spaced observations, trend modeling, replicated time series, periodogram matrices) are outlined.
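The statistic IM(ω), as defined above, is the regression sum of squares of a trigonometric model with a cosine/sine pair per frequency, fitted by unweighted least squares. A minimal sketch (the inclusion of an intercept column is an assumption made here):

```python
import numpy as np

def multi_freq_periodogram(y, freqs):
    """I_M(omega): explained sum of squares of the trigonometric
    regression with cosine/sine pairs at each frequency in `freqs`."""
    y = np.asarray(y, float)
    t = np.arange(len(y))
    cols = [np.ones_like(y)]                     # intercept (an assumption)
    for w in freqs:
        cols += [np.cos(w * t), np.sin(w * t)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float((((X @ coef) - y.mean()) ** 2).sum())
```

Maximizing this quantity over candidate frequency vectors yields the least-squares frequency estimator described in the abstract.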

17.
This article presents a review of some modern approaches to trend extraction for one-dimensional time series, which is one of the major tasks of time series analysis. The trend of a time series is usually defined as a smooth additive component which contains information about the global change of the time series, and we discuss this and other definitions of the trend. We do not aim to review all the novel approaches, but rather to observe the problem from different viewpoints and from different areas of expertise. The article contributes to understanding the concept of a trend and the problem of its extraction. We present an overview of advantages and disadvantages of the approaches under consideration, which are: the model-based approach (MBA), nonparametric linear filtering, singular spectrum analysis (SSA), and wavelets. The MBA assumes the specification of a stochastic time series model, which is usually either an autoregressive integrated moving average (ARIMA) model or a state space model. The nonparametric filtering methods do not require specification of a model and are popular because of their simplicity in application. We discuss the Henderson, LOESS, and Hodrick–Prescott filters and their versions derived by exploiting the Reproducing Kernel Hilbert Space methodology. In addition to these prominent approaches, we consider SSA and wavelet methods. SSA is widespread in the geosciences; its algorithm is similar to that of principal components analysis, but SSA is applied to time series. Wavelet methods are the de facto standard for denoising in signal processing, and recent works have revealed their potential in trend analysis.

18.
This paper considers estimating the model coefficients when the observed periodic autoregressive time series is contaminated by a trend. The proposed Yule–Walker estimators are obtained by a two-step procedure. In the first step, the trend is estimated by a weighted local polynomial, and the residuals are obtained by subtracting the trend estimates from the observations; in the second step, the model coefficients are estimated by the well-known Yule–Walker method via the residuals. It is shown that under certain conditions such Yule–Walker estimators are oracally efficient, i.e., they are asymptotically equivalent to those obtained from periodic autoregressive time series without a trend. An easy-to-use implementation procedure is provided. The performance of the estimators is illustrated by simulation studies and real data analysis. In particular, the simulation studies show that the proposed estimator outperforms that obtained from the residuals when the trend is estimated by kernel smoothing without taking the heteroscedasticity into consideration.
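The two-step shape of the procedure (detrend, then Yule–Walker on the residuals) can be sketched for an ordinary AR(p), with a plain global polynomial fit standing in for the paper's weighted local polynomial and without the periodic structure:

```python
import numpy as np

def two_step_yule_walker(y, p, deg=2):
    """Step 1: remove a polynomial trend by OLS (a plain stand-in for the
    paper's weighted local polynomial). Step 2: Yule-Walker AR(p)
    estimation on the residuals."""
    y = np.asarray(y, float)
    t = np.arange(len(y))
    resid = y - np.polyval(np.polyfit(t, y, deg), t)          # detrend
    gam = np.array([resid[:len(resid) - k] @ resid[k:]
                    for k in range(p + 1)]) / len(resid)      # autocovariances
    R = np.array([[gam[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, gam[1:])                        # AR coefficients
```

For p = 1 this reduces to the familiar ratio of the lag-one autocovariance to the variance of the detrended residuals.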

19.
Abstract

We develop and exemplify the application of new classes of dynamic models for time series of nonnegative counts. Our novel univariate models combine dynamic generalized linear models for binary and conditionally Poisson time series with dynamic random effects for over-dispersion. These models estimate dynamic regression coefficients in both the binary and the nonzero count components. Sequential Bayesian analysis allows fast, parallel analysis of sets of decoupled time series. New multivariate models then enable information sharing in contexts where data at a more highly aggregated level provide more incisive inferences on shared patterns such as trends and seasonality. A novel multiscale approach, one new example of the concept of decouple/recouple in time series, enables information sharing across series. This incorporates cross-series linkages while insulating parallel estimation of univariate models, and hence enables scalability in the number of series. The major motivating context is supermarket sales forecasting. Detailed examples drawn from a case study in multistep forecasting of sales of a number of related items showcase forecasting of multiple series, with discussion of forecast accuracy metrics, comparisons with existing methods, and broader questions of probabilistic forecast assessment.

20.
This article proposes a simulation-based theoretical model comparison methodology, with application to two time series road accident models. The model comparison exercise helps to quantify the main differences and similarities between the two models and comprises three main stages: (1) simulation of time series through a true model with predefined properties; (2) estimation of the alternative model using the simulated data; (3) sensitivity analysis to quantify the effect of changes in the true model parameters on the alternative model's parameter estimates, through analysis of variance (ANOVA). The proposed methodology is applied to two time series road accident models: the UCM (unobserved components model) and DRAG (Demand for Road Use, Accidents and their Severity). Assuming that the real data-generating process is the UCM, new datasets approximating the road accident data are generated, and DRAG models are estimated using the simulated data. Since these two methodologies are usually assumed to be equivalent, in the sense that both models accurately capture the true effects of the regressors, we specifically address the modeling of the stochastic trend through the alternative model. The stochastic trend is the time-varying component and is one of the crucial factors in time series road accident data. Theoretically, it can easily be modeled through the UCM, given its modeling properties. However, properly capturing the effect of a non-stationary component such as a stochastic trend in a stationary explanatory model such as DRAG is challenging. After obtaining the parameter estimates of the alternative model (DRAG), the estimates of both the true and alternative models are compared and the differences are quantified through experimental design and ANOVA techniques. It is observed that the effects of the explanatory variables used in the UCM simulation are only partially captured by the respective DRAG coefficients.
A priori, this could be due to multicollinearity, but the results of both the simulation of UCM data and the estimation of DRAG models reveal no significant static correlation among the regressors. In fact, using ANOVA, it is determined that this regression-coefficient estimation bias is caused by the stochastic trend present in the simulated data. Thus, the results of the methodological development suggest that the stochastic component present in the data should be treated accordingly through a preliminary, exploratory data analysis.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号