Similar documents
Found 20 similar documents (search time: 0 ms)
1.
This paper shows how cubic smoothing splines fitted to univariate time series data can be used to obtain local linear forecasts. The approach is based on a stochastic state-space model which allows the use of likelihoods for estimating the smoothing parameter, and which enables easy construction of prediction intervals. The paper shows that the model is a special case of an ARIMA(0, 2, 2) model; it provides a simple upper bound for the smoothing parameter to ensure an invertible model; and it demonstrates that the spline model is not a special case of Holt's local linear trend method. The paper compares the spline forecasts with Holt's forecasts and those obtained from the full ARIMA(0, 2, 2) model, showing that the restricted parameter space does not impair forecast performance. The advantage of this approach over a full ARIMA(0, 2, 2) model is that it gives a smooth trend estimate as well as a linear forecast function.
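As a point of reference for the comparison above, here is a minimal sketch of Holt's local linear trend recursion (the initialization of level and trend, and the parameter names `alpha` and `beta`, are illustrative choices, not taken from the paper):

```python
def holt_linear(y, alpha, beta, h):
    """Holt's local linear trend method: maintain a level and a trend,
    then forecast h steps ahead as level + h * trend.
    Illustrative initialization: level = y[0], trend = y[1] - y[0]."""
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev_level = level
        # Blend the new observation with the one-step prediction.
        level = alpha * obs + (1 - alpha) * (level + trend)
        # Update the trend from the change in level.
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + h * trend
```

On exactly linear data the recursion reproduces the line, so the h-step forecast is exact for any choice of `alpha` and `beta`.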

2.
In this work, we introduce a class of dynamic models for time series taking values on the unit interval. The proposed model follows a generalized linear model approach where the random component, conditioned on the past information, follows a beta distribution, while the conditional mean specification may include covariates and also an extra additive term given by the iteration of a map that can present chaotic behavior. The resulting model is very flexible and its systematic component can accommodate short- and long-range dependence, periodic behavior, laminar phases, etc. We derive easily verifiable conditions for the stationarity of the proposed model, as well as conditions for the law of large numbers and a Birkhoff-type theorem to hold. A Monte Carlo simulation study is performed to assess the finite sample behavior of the partial maximum likelihood approach for parameter estimation in the proposed model. Finally, an application to the proportion of stored hydroelectric energy in Southern Brazil is presented.

3.
Estimation of the lifetime distribution of industrial components and systems yields very important information for manufacturers and consumers. However, obtaining reliability data is time consuming and costly. In this context, degradation tests are a useful alternative approach to lifetime and accelerated life tests in reliability studies. The approximate method is one of the most used techniques for degradation data analysis. It is very simple to understand and easy to implement numerically in any statistical software package. This paper uses time series techniques in order to propose a modified approximate method (MAM). The MAM improves the standard one in two aspects: (1) it uses previous observations in the degradation path as a Markov process for future prediction and (2) it is not necessary to specify a parametric form for the degradation path. Characteristics of interest such as mean or median time to failure and percentiles, among others, are obtained by using the modified method. A simulation study is performed in order to show the improved properties of the modified method over the standard one. Both methods are also used to estimate the failure time distribution of the fatigue-crack-growth data set.

4.
The Lasso has sparked interest in the use of penalization of the log-likelihood for variable selection, as well as for shrinkage. We are particularly interested in the more-variables-than-observations case of characteristic importance for modern data. The Bayesian interpretation of the Lasso as the maximum a posteriori estimate of the regression coefficients, which have been given independent, double exponential prior distributions, is adopted. Generalizing this prior provides a family of hyper-Lasso penalty functions, which includes the quasi-Cauchy distribution of Johnstone and Silverman as a special case. The properties of this approach, including the oracle property, are explored, and an EM algorithm for inference in regression problems is described. The posterior is multi-modal, and we suggest a strategy of using a set of perfectly fitting random starting values to explore modes in different regions of the parameter space. Simulations show that our procedure provides significant improvements on a range of established procedures, and we provide an example from chemometrics.
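For intuition about the Lasso-as-MAP view: with an orthonormal design, the single-coefficient MAP estimate under an independent double-exponential prior reduces to the standard soft-thresholding rule, sketched below. The hyper-Lasso penalties generalize this rule; this sketch is the classical Lasso case, not the paper's estimator.

```python
def soft_threshold(z, lam):
    """Soft-thresholding: the MAP estimate of one regression coefficient
    under a Laplace (double exponential) prior with an orthonormal design.
    Coefficients with |z| <= lam are set exactly to zero (selection);
    larger ones are shrunk toward zero by lam (shrinkage)."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0
```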

5.
This paper is about vector autoregressive-moving average models with time-dependent coefficients to represent non-stationary time series. Contrary to other papers in the univariate case, the coefficients depend on time but not on the series' length n. Under appropriate assumptions, it is shown that a Gaussian quasi-maximum likelihood estimator is almost surely consistent and asymptotically normal. The theoretical results are illustrated by means of two examples of bivariate processes, for which it is shown that the assumptions underlying the theoretical results apply. In the second example, the innovations are marginally heteroscedastic with a correlation ranging from -0.8 to 0.8. In the two examples, the asymptotic information matrix is obtained in the Gaussian case. Finally, the finite-sample behaviour is checked via a Monte Carlo simulation study for n from 25 to 400. The results confirm the validity of the asymptotic properties even for short series and agree with the asymptotic information matrix deduced from the theory.

6.
Incomplete data subject to non-ignorable non-response are often encountered in practice and lead to a non-identifiability problem. A follow-up sample is randomly selected from the set of non-respondents to avoid the non-identifiability problem and obtain complete responses. Glynn, Laird, and Rubin analyzed non-ignorable missing data with a follow-up sample under a pattern mixture model. In this article, maximum likelihood estimation of the parameters of categorical missing data is considered with a follow-up sample under a selection model. To estimate the parameters with non-ignorable missing data, the EM algorithm with weighting, proposed by Ibrahim, is used: in the E-step, the weighted mean is calculated using fractional weights for the imputed data. Variances are estimated using the approximate jackknife method. Simulation results are presented to compare the proposed method with previously presented methods.
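The fractional-weighting idea in the E-step can be sketched schematically as follows. This is a toy illustration of the weighted mean only, with a hypothetical `imputations` structure; it is not the full EM algorithm for the selection model.

```python
def fractional_em_mean(observed, imputations):
    """Weighted mean with fractional weights: each non-respondent
    contributes every plausible value, weighted by its conditional
    probability, and the weights for one unit sum to 1.
    `imputations` is a list (one entry per missing unit) of
    [(value, weight), ...] pairs -- an illustrative data layout."""
    total, n = float(sum(observed)), len(observed)
    for cells in imputations:
        assert abs(sum(w for _, w in cells) - 1.0) < 1e-9
        total += sum(v * w for v, w in cells)
        n += 1
    return total / n
```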

7.
Time-varying coefficient models are widely used in longitudinal data analysis. These models allow the effects of predictors on the response to vary over time. In this article, we consider a mixed-effects time-varying coefficient model to account for within-subject correlation in longitudinal data. We show that when kernel smoothing is used to estimate the smooth functions in time-varying coefficient models for sparse or dense longitudinal data, the asymptotic results in these two situations are essentially different, so a subjective choice between the sparse and dense cases might lead to erroneous conclusions for statistical inference. To solve this problem, we establish a unified self-normalized central limit theorem, based on which a unified inference is proposed without deciding whether the data are sparse or dense. The effectiveness of the proposed unified inference is demonstrated through a simulation study and an analysis of the Baltimore MACS data.
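A minimal sketch of the kind of kernel smoother whose sparse-versus-dense limit theory is at issue: a Nadaraya-Watson estimator with a Gaussian kernel (variable names are illustrative, and this omits the mixed-effects structure of the paper's model):

```python
import math

def nw_smooth(t0, times, values, bandwidth):
    """Nadaraya-Watson kernel estimate of a smooth function at time t0:
    a locally weighted average of the observed values, with Gaussian
    weights that decay with distance from t0."""
    weights = [math.exp(-0.5 * ((t - t0) / bandwidth) ** 2) for t in times]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

A sanity check on the weighting: smoothing constant data returns that constant at any evaluation point.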

8.
Generalized cross-validation (GCV) is frequently applied to select the bandwidth when kernel methods are used to estimate non-parametric mixed-effect models, in which non-parametric mean functions model covariate effects and additive random effects account for overdispersion and correlation; however, the optimality of the GCV in this setting has not yet been explored. In this article, we construct a kernel estimator of the non-parametric mean function. An equivalence between the kernel estimator and a weighted least squares type estimator is provided, and the optimality of the GCV-based bandwidth is investigated. The theoretical derivations also show that kernel-based and spline-based GCV give very similar asymptotic results. This provides us with a solid base for using kernel estimation in mixed-effect models. Simulation studies are undertaken to investigate the empirical performance of the GCV. A real data example is analysed for illustration.

9.
In this article, we propose a new parametric family of models for real-valued spatio-temporal stochastic processes S(x, t) and show how low-rank approximations can be used to overcome the computational problems that arise in fitting the proposed class of models to large datasets. Separable covariance models, in which the spatio-temporal covariance function of S(x, t) factorizes into a product of purely spatial and purely temporal functions, are often used as a convenient working assumption but are too inflexible to cover the range of covariance structures encountered in applications. We define positive and negative non-separability and show that in our proposed family we can capture positive, zero and negative non-separability by varying the value of a single parameter.
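Separability can be checked through the identity C(h, u)·C(0, 0) = C(h, 0)·C(0, u), which any product covariance satisfies exactly; non-separable families violate it with a sign. A toy separable model for illustration (parameter names are made up here, not the paper's family):

```python
import math

def sep_cov(h, u, sigma2=1.0, phi=1.0, psi=1.0):
    """A separable space-time covariance: product of an exponential
    spatial term (lag h) and an exponential temporal term (lag u).
    sigma2, phi, psi are illustrative variance/range parameters."""
    return sigma2 * math.exp(-h / phi) * math.exp(-u / psi)
```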

10.
The author studies state space models for multivariate binomial time series, focusing on the development of the Kalman filter and smoothing for state variables. He proposes a Monte Carlo approach employing the latent variable representation, which transplants the classical Kalman filter and smoothing developed for Gaussian state space models to discrete models and leads to a conceptually simple and computationally convenient approach. The method is illustrated through simulations and concrete examples.
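The Gaussian machinery being transplanted is the classical predict/update recursion. A scalar sketch (random-walk state, identity observation; this is the textbook filter, not the author's Monte Carlo algorithm):

```python
def kalman_step(x, P, z, Q, R):
    """One predict/update cycle of the scalar Kalman filter.
    x, P: current state mean and variance; z: new observation;
    Q, R: state and observation noise variances."""
    # Predict: random-walk state transition.
    x_pred, P_pred = x, P + Q
    # Update: blend prediction and observation via the Kalman gain.
    K = P_pred / (P_pred + R)
    return x_pred + K * (z - x_pred), (1 - K) * P_pred
```

With a vague prior (P = R = 1, Q = 0) and an observation z = 2, the gain is 0.5, so the filtered mean moves halfway to the observation.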

11.
This paper considers quantile regression for a wide class of time series models, including autoregressive and moving average (ARMA) models with asymmetric generalized autoregressive conditional heteroscedasticity (GARCH) errors. The classical mean-variance models are reinterpreted as conditional location-scale models so that the quantile regression method can be naturally geared into the considered models. The consistency and asymptotic normality of the quantile regression estimator are established in location-scale time series models under mild conditions. In the application of this result to ARMA-GARCH models, more primitive conditions are deduced to obtain the asymptotic properties. For illustration, a simulation study and a real data analysis are provided.
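Quantile regression rests on the check (pinball) loss, whose minimization over residuals yields conditional quantiles; a one-line sketch of the standard loss:

```python
def check_loss(u, tau):
    """Check (pinball) loss rho_tau(u) = u * (tau - 1{u < 0}).
    Positive residuals are charged tau per unit, negative residuals
    (1 - tau) per unit, so the minimizer of the summed loss is the
    tau-quantile."""
    return u * (tau - (1.0 if u < 0 else 0.0))
```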

12.
Conservation biology aims at assessing the status of a population based on information which is often incomplete. Integrated population modelling based on state-space models appears to be a powerful and relevant way of combining into a single likelihood several types of information, such as capture-recapture data and population surveys. In this paper, the authors describe the principles of integrated population modelling and evaluate its performance for conservation biology in a case study of the black-footed albatross, a northern Pacific albatross species suspected to be impacted by longline fishing.

13.
We consider the problem of parameter estimation for inhomogeneous space-time shot-noise Cox point processes. We explore the possibility of using a stepwise estimation method and dimensionality-reducing techniques to estimate different parts of the model separately. We discuss the estimation method using projection processes and propose a refined method that avoids projection to the temporal domain. This remedies the main flaw of the projection-based method: clusters that are clearly separated in the original space-time process may overlap in a projection. This issue is more prominent in the temporal projection process, where the amount of information lost by projection is higher than in the spatial projection process. For the refined method, we derive consistency and asymptotic normality results under increasing domain asymptotics and appropriate moment and mixing assumptions. We also present a simulation study which suggests that cluster overlapping is successfully overcome by the refined method.

14.
Supremum score test statistics are often used to evaluate hypotheses with unidentifiable nuisance parameters under the null hypothesis. Although these statistics provide an attractive framework to address non-identifiability under the null hypothesis, little attention has been paid to their distributional properties in small to moderate sample size settings. In situations where there are identifiable nuisance parameters under the null hypothesis, these statistics may behave erratically in realistic samples as a result of a non-negligible bias induced by substituting these nuisance parameters by their estimates under the null hypothesis. In this paper, we propose an adjustment to the supremum score statistics by subtracting the expected bias from the score processes and show that this adjustment does not alter the limiting null distribution of the supremum score statistics. Using a simple example from the class of zero-inflated regression models for count data, we show empirically and theoretically that the adjusted tests are superior in terms of size and power. The practical utility of this methodology is illustrated using count data in HIV research.

15.
Let {Z_t}_{t≥0} be a Lévy process with Lévy measure ν, and let τ_t = ∫_0^t g(X_s) ds be a random clock, where g is a non-negative function and X is an ergodic diffusion independent of Z. Time-changed Lévy models of the form Z_{τ_t} are known to incorporate several important stylized features of asset prices, such as leptokurtic distributions and volatility clustering. In this article, we prove central limit theorems for a type of estimators of the integral parameter β(φ) := ∫ φ(x) ν(dx), valid when both the sampling frequency and the observation time-horizon of the process get larger. Our results combine the long-run ergodic properties of the diffusion process X with the short-term ergodic properties of the Lévy process Z via central limit theorems for martingale differences. The performance of the estimators is illustrated numerically for a Normal Inverse Gaussian process Z and a Cox–Ingersoll–Ross process X.

16.
The development of time series models for traffic volume data constitutes an important step in constructing automated tools for the management of computing infrastructure resources. We analyse two traffic volume time series: one is the volume of hard disc activity, aggregated into half-hour periods, measured on a workstation, and the other is the volume of Internet requests made to a workstation. Both of these time series exhibit features that are typical of network traffic data, namely strong seasonal components and highly non-Gaussian distributions. For these time series, a particular class of non-linear state space models is proposed, and practical techniques for model fitting and forecasting are demonstrated.

17.
Two contributions to the statistical analysis of circular data are given. First, we construct data-driven smooth goodness-of-fit tests for the circular von Mises assumption. Second, we propose a new graphical diagnostic tool for the detection of lack-of-fit for circular distributions. We illustrate our methods on two real datasets.
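The null model for the goodness-of-fit tests is the von Mises distribution; its density is a standard formula, sketched here using NumPy's modified Bessel function of order zero:

```python
import numpy as np

def von_mises_pdf(theta, mu, kappa):
    """Von Mises density on the circle: exp(kappa*cos(theta - mu))
    normalized by 2*pi*I0(kappa), where I0 (np.i0) is the modified
    Bessel function of the first kind, order zero.  mu is the mean
    direction, kappa the concentration."""
    return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * np.i0(kappa))
```

As a check, a Riemann sum of the density over one full circle is (numerically) 1.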

18.
In recent years, modelling count data has become one of the most important and popular topics in time-series analysis. At the same time, variable selection methods have become a widely used statistical modelling tool in many fields. In this paper, we apply variable selection to the first-order Poisson integer-valued autoregressive (PINAR(1)) model with covariates, a model widely used in practice. When this model is applied to real problems, multiple covariates are included because it is impossible to know in advance which of them affect the outcome, so the inclusion of some insignificant covariates is almost unavoidable. Unfortunately, the usual estimation method cannot delete these insignificant covariates, and their presence biases statistical inference. To overcome this defect, we propose a penalised conditional least squares (PCLS) method, which can consistently select the true model. The PCLS estimator is also provided and its asymptotic properties are established. Simulation studies demonstrate that the PCLS method is effective for estimation and variable selection. A practical example is also presented to illustrate the applicability of the PCLS method.
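The PINAR(1) recursion X_t = α ∘ X_{t-1} + ε_t is built on the binomial thinning operator ∘, which keeps each of the X_{t-1} counts independently with probability α. An illustrative implementation of the operator (not the paper's PCLS estimator):

```python
import random

def thin(x, alpha, rng):
    """Binomial thinning alpha ∘ x: the sum of x independent
    Bernoulli(alpha) trials -- the integer-valued analogue of scalar
    multiplication, guaranteeing the result stays a count in {0,...,x}."""
    return sum(rng.random() < alpha for _ in range(x))
```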

19.
This study demonstrates the decomposition of seasonality and long-term trend in seismological data observed at irregular time intervals. The decomposition was applied to the estimation of earthquake detection capability using cubic B-splines and a Bayesian approach, which is similar to the seasonal adjustment model frequently used to analyse economic time-series data. We employed numerical simulation to verify the method and then applied it to real earthquake datasets obtained in and around the northern Honshu island, Japan. With this approach, we obtained the seasonality of the detection capability, related to the annual variation of wind speed, and the long-term trend, corresponding to the recent improvement of the seismic network in the studied region.

20.
This paper is concerned with the analysis of a time series comprising the eruption inter-arrival times of the Old Faithful geyser in 2009. The series is much longer than other well-documented ones and thus gives a more comprehensive insight into the dynamics of the geyser. Basic hidden Markov models with gamma state-dependent distributions and several extensions are implemented. In order to better capture the stochastic dynamics exhibited by Old Faithful, the different non-standard models under consideration seek to increase the flexibility of the basic models in various ways: (i) by allowing non-geometric distributions for the times spent in the different states; (ii) by increasing the memory of the underlying Markov chain, with or without assuming additional structure implied by mixture transition distribution models; and (iii) by incorporating feedback from the observation process on the latent process. In each case it is shown how the likelihood can be formulated as a matrix product which can be conveniently maximized numerically.
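The matrix-product form of the likelihood mentioned above is the standard HMM forward computation: δ' P(x_1) Γ P(x_2) ... Γ P(x_T) 1, where P(x_t) is the diagonal matrix of state-dependent densities at x_t. A sketch, taking those densities as a precomputed array (names are illustrative):

```python
import numpy as np

def hmm_likelihood(delta, Gamma, state_probs):
    """HMM likelihood as a matrix product.  delta: initial state
    distribution (length m); Gamma: m x m transition matrix;
    state_probs: (T, m) array of state-dependent densities evaluated
    at each observation.  Multiplying by diag(p) is done elementwise."""
    phi = delta * state_probs[0]
    for p in state_probs[1:]:
        phi = (phi @ Gamma) * p
    return phi.sum()
```

A useful invariance check: if every state assigns the same density p_t to observation t, the likelihood is the product of the p_t regardless of Γ.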

