Similar Literature

20 similar records retrieved.
1.
This paper extends the univariate time series smoothing approach provided by penalized least squares to a multivariate setting, thus allowing for joint estimation of several time series trends. The theoretical results are valid for the general multivariate case, but particular emphasis is placed on the bivariate situation from an applied point of view. The proposal is based on a vector signal-plus-noise representation of the observed data that requires the first two sample moments and specifying only one smoothing constant. A measure of the amount of smoothness of an estimated trend is introduced so that an analyst can set in advance a desired percentage of smoothness to be achieved by the trend estimate. The required smoothing constant is determined by the chosen percentage of smoothness. Closed form expressions for the smoothed estimated vector and its variance-covariance matrix are derived from a straightforward application of generalized least squares, thus providing best linear unbiased estimates for the trends. A detailed algorithm applicable for estimating bivariate time series trends is also presented and justified. The theoretical results are supported by a simulation study and two real applications. One corresponds to Mexican and US macroeconomic data within the context of business cycle analysis, and the other to environmental data pertaining to a monitored site in Scotland.
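As a rough illustration of the signal-plus-noise idea, the sketch below (my illustrative code, not the authors') computes the closed-form penalized least squares trend for a single series; the paper's multivariate extension and its calibration of the smoothing constant from a target percentage of smoothness are not reproduced here.

```python
import numpy as np

def pls_trend(y, lam):
    """Univariate penalized least squares trend (illustrative sketch).

    Solves min_tau ||y - tau||^2 + lam * ||D2 tau||^2, whose closed-form
    GLS-type solution is tau = (I + lam * D2'D2)^{-1} y.
    """
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # second-difference penalty matrix
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# larger lam -> smoother trend; in the paper lam is deduced from a
# prespecified percentage of smoothness rather than chosen by eye
```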

2.
Both kriging and non-parametric regression smoothing can model a non-stationary regression function with spatially correlated errors. However, comparisons have mainly been based on ordinary kriging and smoothing with uncorrelated errors. Ordinary kriging attributes smoothness of the response to spatial autocorrelation, whereas non-parametric regression attributes trends to a smooth regression function. For spatial processes it is reasonable to suppose that the response is due to both trend and autocorrelation. This paper reviews methodology for non-parametric regression with autocorrelated errors, which is a natural compromise between the two methods. Re-analysis of the one-dimensional stationary spatial data of Laslett (1994) and of a clearly non-stationary time series demonstrates the rather surprising result that, for these data, ordinary kriging outperforms more computationally intensive models, including both universal kriging and correlated splines, for spatial prediction. For estimating the regression function, non-parametric regression provides adaptive estimation, but the autocorrelation must be accounted for in selecting the smoothing parameter.
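A minimal sketch of the last point, assuming a Gaussian-kernel smoother and a "leave-a-block-out" cross-validation (my illustrative choices, not necessarily the paper's): omitting the 2l nearest neighbours of each point keeps autocorrelated errors from pushing the bandwidth toward undersmoothing.

```python
import numpy as np

def nw_fit(x0, x, y, h):
    # Gaussian-kernel Nadaraya-Watson estimate at the point x0
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return w @ y / w.sum()

def modified_cv(x, y, h, l=2):
    # leave-(2l+1)-out CV: drop a window of neighbours around each point
    # so correlated errors cannot vote for an undersmoothed fit
    n = len(x)
    idx = np.arange(n)
    errs = [(y[i] - nw_fit(x[i], x[np.abs(idx - i) > l],
                           y[np.abs(idx - i) > l], h)) ** 2
            for i in range(n)]
    return np.mean(errs)

# choose h by minimizing modified_cv over a grid; l = 0 recovers ordinary CV
```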

3.
In model-based estimation of unobserved components, the minimum mean squared error estimator of the noise component is different from white noise. In this article, some of the differences are analyzed. It is seen how the variance of the component is always underestimated, and the smaller the noise variance, the larger the underestimation. Estimators of small-variance noise components will also have large autocorrelations. Finally, in the context of an application, the sample autocorrelation function of the estimated noise is seen to perform well as a diagnostic tool, even when the variance is small and the series is of relatively short length.
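The diagnostic amounts to computing the sample autocorrelation function of the estimated noise and checking it against white-noise bands; a generic sketch (not the article's code):

```python
import numpy as np

def sample_acf(e, max_lag=20):
    # sample autocorrelations of an estimated noise component
    e = np.asarray(e, dtype=float) - np.mean(e)
    denom = e @ e
    return np.array([e[:-k] @ e[k:] / denom for k in range(1, max_lag + 1)])

# under white noise, |acf| should mostly lie within +/- 2/sqrt(len(e));
# persistently large low-lag values suggest the component is not pure noise
```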

4.
Many estimation procedures for quantitative linear models with autocorrelated errors have been proposed in the literature. A number of these procedures have been compared in various ways for different sample sizes and autocorrelation parameter values and for structured or random explanatory variables. In this paper, we revisit three situations that were considered to some extent in previous studies, by comparing ten estimation procedures: Ordinary Least Squares (OLS), Generalized Least Squares (GLS), estimated Generalized Least Squares (six procedures), Maximum Likelihood (ML), and First Differences (FD). The six estimated GLS procedures and the ML procedure differ in the way the error autocovariance matrix is estimated. The three situations can be defined as follows: Case 1, the explanatory variable x in the simple linear regression is fixed; Case 2, x is purely random; and Case 3, x is first-order autoregressive. Following a theoretical presentation, the ten estimation procedures are compared in a Monte Carlo study conducted in the time domain, where the errors are first-order autoregressive in Cases 1-3. The measure of comparison for the estimation procedures is their efficiency relative to OLS. It is evaluated as a function of the time series length and the magnitude and sign of the error autocorrelation parameter. Overall, knowledge of the model of the time series process generating the errors enhances efficiency in estimated GLS. Differences in the efficiency of estimation procedures between Case 1 and Cases 2 and 3, as well as differences in efficiency among procedures in a given situation, are observed and discussed.
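One of the estimated GLS variants can be sketched as a two-step feasible GLS for the simple linear regression with AR(1) errors; the specific autocovariance estimators compared in the paper differ, so treat this as a generic illustration:

```python
import numpy as np

def fgls_ar1(x, y):
    # Step 1: OLS fit and residuals
    X = np.column_stack([np.ones_like(x), x])
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b_ols
    # Step 2: estimate the AR(1) error coefficient from the residuals
    rho = (e[:-1] @ e[1:]) / (e[:-1] @ e[:-1])
    # Step 3: GLS with the AR(1) shape V[i, j] = rho^|i-j|
    n = len(y)
    V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    Vi = np.linalg.inv(V)
    b_gls = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
    return b_ols, b_gls, rho
```

Relative efficiency versus OLS can then be evaluated by Monte Carlo over the sign and magnitude of the error autocorrelation, as in the paper's study.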

5.
We consider the problem of estimating a trend with different amounts of smoothness for segments of a time series subjected to different variability regimes. We propose using an unobserved components model to consider the existence of at least two data segments. We first fix some desired percentages of smoothness for the trend segments and deduce the corresponding smoothing parameters involved. Once the size of each segment is chosen, the smoothing formulas derived here produce trend estimates for all segments with the desired smoothness, as well as their corresponding estimated variances. Empirical examples from demography and economics illustrate our proposal.

6.
This paper concerns model selection for autoregressive time series when the observations are contaminated with trend. We propose an adaptive least absolute shrinkage and selection operator (LASSO) type model selection method, in which the trend is estimated by B-splines, the detrended residuals are calculated, and then the residuals are used as if they were observations to optimize an adaptive LASSO type objective function. The oracle properties of such an adaptive LASSO model selection procedure are established; that is, the proposed method can identify the true model with probability approaching one as the sample size increases, and the asymptotic properties of estimators are not affected by the replacement of observations with detrended residuals. The intensive simulation studies of several constrained and unconstrained autoregressive models also confirm the theoretical results. The method is illustrated by two time series data sets, the annual U.S. tobacco production and annual tree ring width measurements.
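A compact sketch of the two-stage idea, substituting a cubic polynomial detrend for the paper's B-splines and using scikit-learn's Lasso for the shrinkage step (both substitutions are mine):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso_ar(y, t, max_lag=5, alpha=0.05, deg=3):
    # stand-in detrend: cubic polynomial instead of the paper's B-splines
    resid = y - np.polyval(np.polyfit(t, y, deg), t)
    # lagged design matrix built from the detrended residuals
    Z = np.column_stack([resid[max_lag - j:len(resid) - j]
                         for j in range(1, max_lag + 1)])
    z = resid[max_lag:]
    # adaptive weights from an initial (unpenalized) fit
    w = np.abs(LinearRegression(fit_intercept=False).fit(Z, z).coef_)
    # rescale columns, run the LASSO, then undo the rescaling
    fit = Lasso(alpha=alpha, fit_intercept=False).fit(Z * w, z)
    return fit.coef_ * w    # exact zeros identify excluded AR lags
```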

7.
A number of parametric and non-parametric linear trend tests for time series are evaluated in terms of test size and power, also using resampling techniques to form the empirical distribution of the test statistics under the null hypothesis of no linear trend. For resampling, both bootstrap and surrogate data are considered. Monte Carlo simulations were run for several types of residuals (uncorrelated and correlated, with normal and non-normal distributions) and a range of small magnitudes of the trend coefficient. In particular, for AR(1) and ARMA(1, 1) residual processes, we investigate how well strong autocorrelation can be discriminated from linear trend as a function of the sample size. The correct test size is obtained at larger sample sizes as autocorrelation increases, and only when a randomization test that accounts for autocorrelation is used. The overall results show that the type I and II errors of the trend tests are reduced by the use of resampled data. Following the guidelines suggested by the simulation results, we find a significant linear trend in land air temperature and sea surface temperature data.
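As a sketch of one resampling variant (a moving-block bootstrap, my illustrative choice; the paper also considers surrogate data), the OLS slope is compared with a null distribution built from block-resampled, demeaned data, which preserves short-range autocorrelation under the no-trend hypothesis:

```python
import numpy as np

def block_bootstrap_trend_test(y, block=10, B=999, seed=0):
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n = len(y)
    tc = np.arange(n) - (n - 1) / 2.0               # centered time index
    slope = lambda z: (tc @ (z - z.mean())) / (tc @ tc)
    stat = slope(y)
    e = y - y.mean()                                # residuals under H0
    k = int(np.ceil(n / block))
    null = np.empty(B)
    for b in range(B):
        starts = rng.integers(0, n - block + 1, size=k)
        z = np.concatenate([e[i:i + block] for i in starts])[:n]
        null[b] = slope(z)
    p = (1 + np.sum(np.abs(null) >= abs(stat))) / (B + 1)
    return stat, p    # two-sided bootstrap p-value for the linear trend
```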

8.
This article proposes a simulation-based model comparison methodology, with application to two time series road accident models. The comparison exercise helps to quantify the main differences and similarities between the two models and comprises three main stages: (1) simulation of time series through a true model with predefined properties; (2) estimation of the alternative model using the simulated data; (3) sensitivity analysis to quantify the effect of changes in the true model parameters on the alternative model's parameter estimates through analysis of variance (ANOVA). The methodology is applied to two time series road accident models: UCM (unobserved components model) and DRAG (Demand for Road Use, Accidents and their Severity). Assuming that the real data-generating process is the UCM, new datasets approximating the road accident data are generated, and DRAG models are estimated using the simulated data. Since these two methodologies are usually assumed to be equivalent, in the sense that both models accurately capture the true effects of the regressors, we specifically address the modeling of the stochastic trend through the alternative model. The stochastic trend is the time-varying component and is one of the crucial factors in time series road accident data. Theoretically, it can easily be modeled through the UCM, given its modeling properties. However, properly capturing the effect of a non-stationary component such as a stochastic trend in a stationary explanatory model such as DRAG is challenging. After obtaining the parameter estimates of the alternative model (DRAG), the estimates of the true and alternative models are compared and the differences are quantified through experimental design and ANOVA techniques. It is observed that the effects of the explanatory variables used in the UCM simulation are only partially captured by the respective DRAG coefficients. A priori, this could be due to multicollinearity, but the results of both the simulation of UCM data and the estimation of DRAG models reveal no significant static correlation among the regressors. Using ANOVA, it is instead determined that this coefficient estimation bias is caused by the stochastic trend present in the simulated data. Thus, the results suggest that the stochastic component present in the data should be identified and treated accordingly through a preliminary, exploratory data analysis.
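Stage (1) can be sketched generically: simulate a series whose data-generating process contains a random-walk (stochastic) trend plus a regressor effect, which the stationary alternative model must then try to absorb. The parameter names below are illustrative, not the paper's.

```python
import numpy as np

def simulate_ucm(n, beta, q=0.05, r=1.0, seed=0):
    # y_t = mu_t + beta * x_t + eps_t, with stochastic trend
    # mu_t = mu_{t-1} + eta_t; q and r are the trend and noise variances
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)                              # stationary regressor
    mu = np.cumsum(rng.normal(scale=np.sqrt(q), size=n))  # random-walk trend
    y = mu + beta * x + rng.normal(scale=np.sqrt(r), size=n)
    return y, x, mu
```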

9.
In this paper, a new hybrid model combining vector autoregressive moving average (VARMA) models and Bayesian networks is proposed to improve the forecasting performance of multivariate time series. In the proposed model, the VARMA model, a popular linear model in time series forecasting, is specified to capture the linear characteristics. Then the errors of the VARMA model are clustered into trends by the K-means algorithm, with the Krzanowski–Lai cluster validity index determining the number of trends, and a Bayesian network is built to learn the relationship between the data and the trend of its corresponding VARMA error. Finally, the estimated values of the VARMA model are compensated by the probabilities of their corresponding VARMA errors belonging to each trend, which are obtained from the Bayesian network. Compared with VARMA models, the experimental results from a simulation study and two multivariate real-world data sets indicate that the proposed model can effectively improve prediction performance.

10.
We propose generalized linear models for time or age-time tables of seasonal counts, with the goal of better understanding seasonal patterns in the data. The linear predictor contains a smooth component for the trend and the product of a smooth component (the modulation) and a periodic time series of arbitrary shape (the carrier wave). To model rates, a population offset is added. Two-dimensional trends and modulation are estimated using a tensor product B-spline basis of moderate dimension. Further smoothness is ensured using difference penalties on the rows and columns of the tensor product coefficients. The optimal penalty tuning parameters are chosen by minimizing a quasi-information criterion. Computationally efficient estimation is achieved using array regression techniques, avoiding excessively large matrices. The model is applied to female death rates in the US due to cerebrovascular and respiratory diseases.

11.
Traditional control charts assume independence of observations obtained from the monitored process. However, if the observations are autocorrelated, these charts often do not perform as intended by the design requirements. Recently, several control charts have been proposed to deal with autocorrelated observations. The residual chart, modified Shewhart chart, EWMAST chart, and ARMA chart are such charts widely used for monitoring the occurrence of assignable causes in a process when the process exhibits inherent autocorrelation. Besides autocorrelation, one other issue is the unknown values of true process parameters to be used in the control chart design, which are often estimated from a reference sample of in-control observations. Performances of the above-mentioned control charts for autocorrelated processes are significantly affected by the sample size used in a Phase I study to estimate the control chart parameters. In this study, we investigate the effect of Phase I sample size on the run length performance of these four charts for monitoring the changes in the mean of an autocorrelated process, namely an AR(1) process. A discussion of the practical implications of the results and suggestions on the sample size requirements for effective process monitoring are provided.
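A bare-bones sketch of one of the four charts, the residual chart for an AR(1) process, with parameters estimated from a Phase I reference sample (the size of that sample is exactly what drives the run-length effects studied here); this is an illustrative implementation, not the study's code:

```python
import numpy as np

def residual_chart(phase1, phase2, L=3.0):
    # Phase I: estimate the AR(1) coefficient and residual sigma
    x = phase1 - phase1.mean()
    phi = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
    sigma = np.std(x[1:] - phi * x[:-1], ddof=1)
    # Phase II: chart one-step-ahead residuals against +/- L*sigma limits
    z = phase2 - phase1.mean()
    resid = z[1:] - phi * z[:-1]
    signals = np.flatnonzero(np.abs(resid) > L * sigma) + 1
    return phi, sigma, signals   # indices of out-of-control signals
```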

12.
Impacts of complex emergencies or relief interventions have often been evaluated by absolute mortality compared to international standardized mortality rates. A better evaluation would be to compare with the local baseline mortality of the affected populations. A projection of population-based survival data into the time of the emergency or intervention, based on information from before the emergency, may create a local baseline reference. We find that a log-transformed Gaussian time series model in which the standard errors of the estimated rates are included in the variance has the best forecasting capacity. However, if time-at-risk during the forecast period is known, then forecasting might be done using a Poisson time series model with overdispersion. In either case, the standard error of the estimated rates must be included in the variance of the model, either additively in a Gaussian model or multiplicatively, through overdispersion, in a Poisson model. The data on which the forecast is based must be modelled carefully with respect not only to calendar-time trends but also to periods with an excessive frequency of events (epidemics) and to seasonal variations, in order to eliminate residual autocorrelation and to provide a proper reference for comparison, reflecting changes over time during the emergency. Hence, when modelled properly, it is possible to predict a reference for an emergency-affected population based on local conditions. We predicted childhood mortality during the 1998-1999 war in Guinea-Bissau and found increased mortality in the first half-year of the war and mortality in line with the expected level in the last half-year.

13.
The presence of outliers in a time series has important effects on the sample autocorrelation coefficients. If these outliers are not adequately treated, their presence causes errors in the identification of the stochastic process that generated the time series under study. In this respect, Chan has demonstrated that, independent of the underlying process of the outlier-free series, a level shift (LS) at the limit (i.e. asymptotically, and considering an LS of a sufficiently large size) will lead to the identification of non-stationary processes; a temporary change (TC) will lead, again at the limit, to the identification of an AR(1) autoregressive process with a coefficient equal to the dampening factor that defines the TC. The objective of this paper is to analyze, by way of a simulation exercise, how large the LS and TC present in the time series must be for the limiting result to be relevant, in the sense of seriously affecting the instruments used at the identification stage of ARIMA models, i.e. the sample autocorrelation function and the sample partial autocorrelation function.
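Chan's limiting result is easy to reproduce in miniature: contaminating white noise with a large level shift makes the sample ACF decay slowly, as if the process were non-stationary. A hypothetical simulation (shift size and sample size are arbitrary choices):

```python
import numpy as np

def acf(x, max_lag):
    x = x - x.mean()
    d = x @ x
    return np.array([x[:-k] @ x[k:] / d for k in range(1, max_lag + 1)])

rng = np.random.default_rng(1)
e = rng.normal(size=300)                 # outlier-free white noise
ls = e + 8.0 * (np.arange(300) >= 150)   # level shift of 8 sd at t = 150
print(acf(e, 5).round(2))                # near zero at all lags
print(acf(ls, 5).round(2))               # large and slowly decaying
```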

14.
Time series often arise in environmental monitoring settings, which typically involve measuring processes repeatedly over time. In many such applications, observations are irregularly spaced and, additionally, are not normally distributed. An example is water monitoring data collected in Boston Harbor by the Massachusetts Water Resources Authority. We describe a simple robust approach for estimating regression parameters and a first-order autocorrelation parameter in a time series where the observations are irregularly spaced. Estimates are obtained from an estimating equation that is constructed as a linear combination of estimated innovation errors, suitably made robust by symmetric and possibly bounded functions. Under an assumption of data missing completely at random and mild regularity conditions, the proposed estimating equation yields consistent and asymptotically normal estimates. Simulations suggest that our estimator performs well in moderate sample sizes. We demonstrate our method on Secchi depth data collected from Boston Harbor.

15.
New approaches to prior specification and structuring in autoregressive time series models are introduced and developed. We focus on defining classes of prior distributions for parameters and latent variables related to latent components of an autoregressive model for an observed time series. These new priors naturally permit the incorporation of both qualitative and quantitative prior information about the number and relative importance of physically meaningful components that represent low frequency trends, quasi-periodic subprocesses and high frequency residual noise components of observed series. The class of priors also naturally incorporates uncertainty about model order and hence leads in posterior analysis to model order assessment and resulting posterior and predictive inferences that incorporate full uncertainties about model order as well as model parameters. Analysis also formally incorporates uncertainty and leads to inferences about unknown initial values of the time series, as it does for predictions of future values. Posterior analysis involves easily implemented iterative simulation methods, developed and described here. One motivating field of application is climatology, where the evaluation of latent structure, especially quasi-periodic structure, is of critical importance in connection with issues of global climatic variability. We explore the analysis of data from the southern oscillation index, one of several series that has been central in recent high profile debates in the atmospheric sciences about recent apparent trends in climatic indicators.

16.
We study a Bayesian approach to recovering the initial condition for the heat equation from noisy observations of the solution at a later time. We consider a class of prior distributions indexed by a parameter quantifying “smoothness” and show that the corresponding posterior distributions contract around the true parameter at a rate that depends on the smoothness of the true initial condition and the smoothness and scale of the prior. Correct combinations of these characteristics lead to the optimal minimax rate. One type of priors leads to a rate-adaptive Bayesian procedure. The frequentist coverage of credible sets is shown to depend on the combination of the prior and true parameter as well, with smoother priors leading to zero coverage and rougher priors to (extremely) conservative results. In the latter case, credible sets are much larger than frequentist confidence sets, in that the ratio of diameters diverges to infinity. The results are numerically illustrated by a simulated data example.

17.
This study considers testing for a unit root in a time series characterized by a structural change in its mean level. My approach follows the “intervention analysis” of Box and Tiao (1975) in the sense that I consider the change as being exogenous and as occurring at a known date. Standard unit-root tests are shown to be biased toward nonrejection of the hypothesis of a unit root when the full sample is used. Since tests using split sample regressions usually have low power, I design test statistics that allow the presence of a change in the mean of the series under both the null and alternative hypotheses. The limiting distribution of the statistics is derived and tabulated under the null hypothesis of a unit root. My analysis is illustrated by considering the behavior of various univariate time series for which the unit-root hypothesis has been advanced in the literature. This study complements that of Perron (1989), which considered time series with trends.
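The flavour of such a statistic can be sketched as a Dickey-Fuller-type regression augmented with a known-date mean-shift dummy; this is a simplified stand-in, and critical values must come from the limiting distributions tabulated in the paper, not the standard DF tables:

```python
import numpy as np

def df_stat_with_break(y, tb):
    # dy_t = c + theta * DU_t + alpha * y_{t-1} + e_t,  DU_t = 1{t > tb};
    # the unit-root t-statistic is alpha_hat / se(alpha_hat)
    n = len(y)
    dy = np.diff(y)
    du = (np.arange(1, n) > tb).astype(float)   # post-break mean-shift dummy
    X = np.column_stack([np.ones(n - 1), du, y[:-1]])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ b
    s2 = (e @ e) / (len(dy) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[2, 2])
    return b[2] / se
```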

18.
A Bayesian analysis is presented of a time series which is the sum of a stationary component with a smooth spectral density and a deterministic component consisting of a linear combination of a trend and periodic terms. The periodic terms may have known or unknown frequencies. The advantage of our approach is that different features of the data—such as the regression parameters, the spectral density, unknown frequencies and missing observations—are combined in a hierarchical Bayesian framework and estimated simultaneously. A Bayesian test to detect deterministic components in the data is also constructed. By using an asymptotic approximation to the likelihood, the computation is carried out efficiently using the Markov chain Monte Carlo method in O(Mn) operations, where n is the sample size and M is the number of iterations. We show empirically that our approach works well on real and simulated samples.

19.
We consider measurement error models within the time series unobserved component framework. A variable of interest is observed with some measurement error and modelled as an unobserved component. The forecast and the prediction of this variable given the observed values is given by the Kalman filter and smoother along with their conditional variances. By expressing the forecasts and predictions as weighted averages of the observed values, we investigate the effect of estimation error in the measurement and observation noise variances. We also develop corrected standard errors for prediction and forecasting accounting for the fact that the measurement and observation error variances are estimated by the same sample that is used for forecasting and prediction purposes. We apply the theory to the Yellowstone grizzly bears and US index of production datasets.
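For the simplest unobserved component observed with measurement error (a local level in noise), the Kalman filter recursions referred to above reduce to a few lines. In this sketch q and r stand for the state and measurement noise variances and are taken as known, whereas the paper's point is precisely the extra uncertainty from estimating them:

```python
import numpy as np

def local_level_filter(y, q, r):
    # y_t = mu_t + eps_t (var r);  mu_t = mu_{t-1} + eta_t (var q)
    a, p = y[0], r             # crude initialization at the first observation
    means, variances = [], []
    for obs in y:
        p = p + q              # predict: state variance grows by q
        k = p / (p + r)        # Kalman gain
        a = a + k * (obs - a)  # update with the new observation
        p = (1.0 - k) * p
        means.append(a)
        variances.append(p)
    return np.array(means), np.array(variances)
```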

20.
Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in the time or location of sampling, which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, in which time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data, and confounding by trend in the time and date of sampling. Simulation results indicate unbiased trend estimators, while results from a case study of temperature data from the Illinois River, USA, conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference procedure for multilevel models that allows a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates span temperature extremes.
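The nonparametric bootstrap proposed here resamples at the grouping level; a simplified sketch that resamples whole years of observations and bootstraps an overall trend slope (the multilevel random effects themselves are omitted, and dates are assumed coded as numeric arrays):

```python
import numpy as np

def year_bootstrap_slope(years, B=999, seed=0):
    # years: list of (date, temperature) array pairs, one pair per year
    rng = np.random.default_rng(seed)
    def slope(sample):
        t = np.concatenate([s[0] for s in sample])
        y = np.concatenate([s[1] for s in sample])
        X = np.column_stack([np.ones_like(t), t])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]
    est = slope(years)
    idx = rng.integers(0, len(years), size=(B, len(years)))
    boot = np.array([slope([years[i] for i in row]) for row in idx])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return est, (lo, hi)       # slope with a percentile 95% interval
```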
