Similar Documents
1.
The purpose of the paper is to propose an autocorrelogram estimation procedure for irregularly spaced data which are modelled as subordinated continuous time-series processes. Such processes, also called time-deformed stochastic processes, have been proposed in a variety of contexts. Before entertaining the possibility of modelling such time series, one is interested in examining simple diagnostics and data summaries. With continuous-time processes this is a challenging task which can be accomplished via kernel estimation. The paper develops the conceptual framework, the estimation procedure and its asymptotic properties. An illustrative empirical example is also provided.
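As a hedged illustration of the kernel idea (not the paper's estimator; the function name, Gaussian kernel and normalization are assumptions), each pair of irregularly spaced observations can contribute to the autocorrelation at lag u with a weight reflecting how close its actual time separation is to u:

import numpy as np

def kernel_autocorrelogram(t, x, lags, h):
    # Kernel estimate of the autocorrelation of a series x observed at
    # irregular times t, evaluated at the target lags, with bandwidth h.
    x = x - x.mean()                        # centre the series
    dt = t[None, :] - t[:, None]            # pairwise time separations
    prod = x[:, None] * x[None, :]          # pairwise products
    acf = []
    for u in lags:
        w = np.exp(-0.5 * ((dt - u) / h) ** 2)   # Gaussian kernel weights
        np.fill_diagonal(w, 0.0)                 # exclude zero-lag self-pairs
        acf.append((w * prod).sum() / w.sum())   # weighted autocovariance at u
    return np.array(acf) / x.var()               # scale to autocorrelation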

2.
This article discusses the discretization of continuous-time filters for application to discrete time series sampled at any fixed frequency. In this approach, the filter is first set up directly in continuous time; since such filters are expressed over a continuous range of lags, we also refer to them as continuous-lag filters. The second step is to discretize the filter itself. This approach applies to different problems in signal extraction, including trend or business cycle analysis, and the method allows for coherent design of discrete filters for observed data sampled as a stock or a flow, for nonstationary data with stochastic trend, and for different sampling frequencies. We derive explicit formulas for the mean squared error (MSE) optimal discretization filters. We also discuss the problem of optimal interpolation for nonstationary processes, namely, how to estimate the values of a process and its components at arbitrary times in between the sampling times. A number of illustrations of discrete filter coefficient calculations are provided, including the local level model (LLM) trend filter, the smooth trend model (STM) trend filter, and the band-pass (BP) filter. The essential methodology can be applied to other kinds of trend extraction problems. Finally, we provide an extended demonstration of the method on CPI flow data measured at monthly and annual sampling frequencies.
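To make the two-step construction concrete, here is a naive Riemann-sum discretization in Python. The paper derives MSE-optimal weights instead, so this plain sampling of the continuous-lag kernel, the two-sided exponential stand-in for the LLM trend filter, and the value of q are all illustrative assumptions:

import numpy as np

def discretize_stock_filter(psi, delta, K):
    # Step 2: sample the continuous-lag filter psi(u) on the grid of
    # integer multiples of the sampling interval delta, truncated at
    # +/- K lags, and renormalize so a constant signal passes unchanged.
    lags = np.arange(-K, K + 1)
    b = psi(lags * delta) * delta
    return b / b.sum()

# Step 1: a continuous-lag trend filter. The two-sided exponential is the
# continuous-time analogue of the local level model (LLM) trend filter,
# with signal-to-noise parameter q (set arbitrarily here).
q = 1.0
psi = lambda u: 0.5 * np.sqrt(q) * np.exp(-np.sqrt(q) * np.abs(u))

monthly_weights = discretize_stock_filter(psi, delta=1 / 12, K=60)
annual_weights = discretize_stock_filter(psi, delta=1.0, K=5)

The same continuous-lag filter yields discrete filters at both sampling frequencies, which is the coherent-design property the article emphasizes.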

3.
4.
This article proposes a simulation-based methodology for comparing theoretical models, with application to two time series road accident models. The model comparison exercise helps to quantify the main differences and similarities between the two models and comprises three main stages: (1) simulation of time series from a true model with predefined properties; (2) estimation of the alternative model using the simulated data; (3) sensitivity analysis quantifying, through analysis of variance (ANOVA), the effect of changes in the true model parameters on the alternative model's parameter estimates. The proposed methodology is applied to two time series road accident models: the UCM (unobserved components model) and DRAG (Demand for Road Use, Accidents and their Severity). Assuming that the real data-generating process is the UCM, new datasets approximating the road accident data are generated, and DRAG models are estimated on the simulated data. Although these two methodologies are usually assumed to be equivalent, in the sense that both models accurately capture the true effects of the regressors, we specifically address how the alternative model handles the stochastic trend. The stochastic trend is the time-varying component and is one of the crucial factors in time series road accident data. It can be modeled straightforwardly within the UCM, given its modeling properties; however, properly capturing the effect of such a non-stationary component in a stationary explanatory model such as DRAG is challenging. After obtaining the parameter estimates of the alternative model (DRAG), the estimates of the true and alternative models are compared and the differences are quantified through experimental design and ANOVA techniques. It is observed that the effects of the explanatory variables used in the UCM simulation are only partially captured by the respective DRAG coefficients. A priori this could be due to multicollinearity, but both the simulated UCM data and the estimated DRAG models reveal no significant static correlation among the regressors. Using ANOVA, we instead find that the bias in the regression coefficient estimates is caused by the stochastic trend present in the simulated data. The results therefore suggest that the stochastic component in the data should be identified and treated accordingly through a preliminary, exploratory data analysis.
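A toy Python version of stages (1) and (2), assuming a random-walk local level as the true stochastic trend and a static regression as a stand-in for the stationary alternative model; this sketches the mechanism only, not the full UCM/DRAG pair:

import numpy as np

rng = np.random.default_rng(0)
n, reps, beta_true = 300, 200, 1.5
slopes = []
for _ in range(reps):
    # Stage 1: simulate from the "true" model -- a random-walk stochastic
    # trend plus one stationary regressor effect plus observation noise.
    trend = np.cumsum(rng.normal(0.0, 0.5, n))
    x = rng.normal(size=n)
    y = trend + beta_true * x + rng.normal(0.0, 1.0, n)

    # Stage 2: estimate a static regression that ignores the trend.
    X = np.column_stack([np.ones(n), x])
    slopes.append(np.linalg.lstsq(X, y, rcond=None)[0][1])

# The unmodelled stochastic trend inflates the dispersion of the slope
# estimates far beyond what a well-specified model would deliver.
print(np.mean(slopes), np.std(slopes))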

5.
State-space models provide an important body of techniques for analyzing time series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation history, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error that is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding, and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.
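A one-dimensional toy version of the LGF update step, assuming a Poisson observation with log link (the canonical spike-count decoding case) and a first-order Laplace approximation; the function and variable names are illustrative:

import numpy as np

def laplace_gaussian_update(y, m_pred, v_pred, newton_iters=10):
    # Approximate the filtering distribution of a scalar state with
    # predictive N(m_pred, v_pred) and observation y ~ Poisson(exp(state)).
    # Laplace's method: locate the posterior mode by Newton's method and
    # read the variance off the curvature at the mode.
    m = m_pred
    for _ in range(newton_iters):
        # log posterior (up to a constant):
        #   y*m - exp(m) - (m - m_pred)**2 / (2*v_pred)
        grad = y - np.exp(m) - (m - m_pred) / v_pred
        hess = -np.exp(m) - 1.0 / v_pred
        m -= grad / hess
    return m, -1.0 / hess

Between observations, (m, v) would be propagated through the state dynamics, e.g. m_pred = a*m and v_pred = a**2 * v + q for a linear-Gaussian state equation, keeping the whole recursion fast and deterministic.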

6.
Periodically integrated time series require a periodic differencing filter to remove the stochastic trend. A non-periodic integrated time series needs the first-difference filter for similar reasons. When the changing seasonal fluctuations for the non-periodic integrated series can be described by seasonal dummy variables for which the corresponding parameters are not constant within the sample, such a series may not be easily distinguished from a periodically integrated time series. In this paper, nested and non-nested testing procedures are proposed to distinguish between these two alternative stochastic and non-stochastic seasonal processes when it is assumed there is a single unknown structural break in the seasonal dummy parameters. Several empirical examples using quarterly real macroeconomic time series for the United Kingdom illustrate the nested and non-nested approaches.
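For the quarterly case, the two filters compare as follows (a minimal sketch; the helper and the season-indexing convention are assumptions):

import numpy as np

def periodic_diff(y, alpha, s=4):
    # Periodic differencing filter z_t = y_t - alpha_{s(t)} * y_{t-1},
    # with season-varying coefficients whose product equals one
    # (periodic integration). With alpha = (1, 1, 1, 1) it collapses to
    # the ordinary first-difference filter of a non-periodic integrated
    # series.
    y, alpha = np.asarray(y), np.asarray(alpha)
    assert np.isclose(alpha.prod(), 1.0), "periodic unit root needs prod(alpha) = 1"
    t = np.arange(1, len(y))
    return y[1:] - alpha[t % s] * y[:-1]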

7.
In this paper, we propose a value-at-risk (VaR) estimation technique based on a new stochastic volatility model with leverage effect, nonconstant conditional mean and jumps. To estimate the model parameters and latent state variables, we integrate the particle filter and adaptive Markov chain Monte Carlo (MCMC) algorithms into a novel adaptive particle MCMC (A-PMCMC) algorithm. Comprehensive simulation experiments based on three stock indices and two foreign exchange time series show the effectiveness of the proposed A-PMCMC algorithm and the VaR estimation technique.
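Once the model is fitted, VaR follows as a quantile of the simulated one-step-ahead predictive distribution. The sketch below assumes made-up parameter values, a placeholder filtered log-volatility (in the paper both would come from the A-PMCMC output), and one common timing convention for the leverage correlation:

import numpy as np

rng = np.random.default_rng(1)
mu, phi, sigma_v, rho = 0.0, 0.95, 0.2, -0.5    # rho < 0: leverage effect
jump_prob, jump_scale = 0.01, 3.0
h_t = -1.0                                      # placeholder filtered log-vol

ev = rng.normal(size=100_000)                   # volatility innovations
h_next = phi * h_t + sigma_v * ev               # next-period log-volatility
er = rho * ev + np.sqrt(1 - rho**2) * rng.normal(size=ev.size)
jumps = jump_scale * rng.normal(size=ev.size) * (rng.random(ev.size) < jump_prob)
r_next = mu + np.exp(h_next / 2) * er + jumps   # simulated predictive returns

print(-np.quantile(r_next, 0.05))               # one-period 95% VaR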

8.
Econometric Reviews, 2013, 32(4): 385–424
This paper introduces nonlinear dynamic factor models for various applications related to risk analysis. Traditional factor models represent the dynamics of processes driven by movements of latent variables, called the factors. Our approach extends this setup by introducing factors defined as random dynamic parameters and stochastic autocorrelated simulators. This class of factor models can represent processes with time-varying conditional mean, variance, skewness and excess kurtosis. Applications discussed in the paper include dynamic risk analysis, such as risk in price variations (models with stochastic mean and volatility), extreme risks (models with stochastic tails), risk on asset liquidity (stochastic volatility duration models), and moral hazard in insurance analysis.

We propose estimation procedures for models in which the marginal density of the series and the factor dynamics are parameterized by distinct subsets of parameters. Such a partitioning of the parameter vector, found in many applications, considerably simplifies statistical inference. We develop a two-stage maximum likelihood method, called the Finite Memory Maximum Likelihood, which is easy to implement in the presence of multiple factors. We also discuss simulation-based estimation, testing, prediction and filtering.

9.
In a seminal paper, Godambe [1985. The foundations of finite sample estimation in stochastic processes. Biometrika 72, 419–428] introduced the 'estimating function' approach to estimation of parameters in semi-parametric models under a filtering associated with a martingale structure. Later, Godambe [1987. The foundations of finite sample estimation in stochastic processes II. Bernoulli, Vol. 2. V.N.V. Science Press, 49–54] and Godambe and Thompson [1989. An extension of quasi-likelihood estimation. J. Statist. Plann. Inference 22, 137–172] replaced this filtering by a more flexible conditioning. Abraham et al. [1997. On the prediction for some nonlinear time-series models using estimating functions. In: Basawa, I.V., et al. (Eds.), IMS Selected Proceedings of the Symposium on Estimating Functions, Vol. 32, pp. 259–268] and Thavaneswaran and Heyde [1999. Prediction via estimating functions. J. Statist. Plann. Inference 77, 89–101] invoked the theory of estimating functions for one-step-ahead prediction in time-series models. This paper addresses the problem of simultaneous estimation of parameters and multi-step-ahead prediction of a vector of future random variables in semi-parametric models by extending the inimitable approach of [13] and [14]. The proposed technique is in conformity with the paradigm of the modern theory of estimating functions, leading to finite sample optimality within a chosen class of estimating functions, which in turn are used to construct the predictors. Particular applications of the technique give predictors that enjoy optimality properties with respect to other well-known criteria.
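To fix ideas, here is a minimal estimating-function example in Python, assuming an AR(1) conditional mean with constant conditional variance, in which case the optimal estimating function reduces to the simple form below (illustrative names; not the paper's multi-step vector construction):

import numpy as np
from scipy.optimize import brentq

def estimating_function(theta, y):
    # Optimal estimating function for E[y_t | F_{t-1}] = theta * y_{t-1}
    # with constant conditional variance:
    #   g(theta) = sum_t (y_t - theta * y_{t-1}) * y_{t-1}
    return np.sum((y[1:] - theta * y[:-1]) * y[:-1])

rng = np.random.default_rng(2)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + rng.normal()

theta_hat = brentq(estimating_function, -0.99, 0.99, args=(y,))
print(theta_hat, theta_hat * y[-1])   # estimate and one-step-ahead predictor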

10.
Summary. Many economic and social phenomena are measured by composite indicators computed as weighted averages of a set of elementary time series. Often data are collected by means of large sample surveys, and processing takes a long time, whereas the values of some elementary component series may be available a considerable time before the others and may be used for forecasting the composite index. This problem is addressed within the framework of prediction theory for stochastic processes. A method is proposed for exploiting anticipated information to minimize the mean-square forecast error, and for selecting the most useful elementary series. An application to the Italian general industrial production index is illustrated, which demonstrates that knowledge of anticipated values of some, or even just one, component series may reduce the forecast error considerably.
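The mechanism is simple to sketch; the hypothetical helper below (weights and figures invented) plugs observed values in for the components released early and forecasts in for the rest:

import numpy as np

def nowcast_composite(weights, released, forecasts):
    # Composite index = weighted average of elementary series. Where a
    # component has already been released (anticipated information), use
    # the observed value; otherwise fall back on its forecast.
    values = np.where(np.isnan(released), forecasts, released)
    return weights @ values

w = np.array([0.5, 0.3, 0.2])
released = np.array([101.2, np.nan, np.nan])   # only series 1 available early
forecasts = np.array([101.0, 99.5, 100.4])
print(nowcast_composite(w, released, forecasts))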

11.
The paper has its origin in the finding that the frequency-domain estimation of ARMA models can produce estimates which may be remarkably biased. Both of the frequency-domain estimation methods considered in the paper are based on the frequency-domain likelihood function, which depends on the periodogram ordinates of the time series. It is found that, as estimates of the spectrum ordinates, the corresponding periodogram ordinates may contain a rather remarkable bias, which in turn causes bias in the parameter estimates produced by a frequency-domain estimation method for an ARMA model. The bias is reduced by tapering the observed time series. An example is given of estimation experiments for simulated time series from a pure autoregressive process of order two.
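A sketch of the remedy in Python, assuming a cosine-bell (Tukey) taper; the taper choice, its proportion p and the normalization are illustrative:

import numpy as np

def tapered_periodogram(x, p=0.1):
    # Taper the first and last 100p% of the series with a cosine bell
    # before computing the periodogram; this reduces the leakage bias of
    # the periodogram ordinates as estimates of the spectrum ordinates.
    n = len(x)
    taper = np.ones(n)
    m = int(p * n)
    if m:
        ramp = 0.5 * (1 - np.cos(np.pi * (np.arange(m) + 0.5) / m))
        taper[:m], taper[-m:] = ramp, ramp[::-1]
    xt = (x - x.mean()) * taper
    return np.abs(np.fft.rfft(xt)) ** 2 / (2 * np.pi * np.sum(taper ** 2))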

12.
This paper extends stochastic conditional duration (SCD) models for financial transaction data to allow for correlation between the error process of the observed durations and the innovations of the latent log-duration process. Suitable Markov chain Monte Carlo (MCMC) algorithms are developed to fit the resulting SCD models under various distributional assumptions about the innovation of the measurement equation. Unlike the estimation methods commonly used for SCD models in the literature, we work with the original specification of the model, without subjecting the observation equation to a logarithmic transformation. Results of simulation studies suggest that our proposed models and the corresponding estimation methodology perform quite well. We also apply an auxiliary particle filter technique to construct one-step-ahead in-sample and out-of-sample duration forecasts from the fitted models. Applications to IBM transaction data allow comparison of our models and methods with those existing in the literature.
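A hedged simulation sketch of an SCD model with correlated errors, assuming a lognormal measurement error linked to the latent innovation through a bivariate Gaussian; the paper's distributional menu and timing convention may differ:

import numpy as np

rng = np.random.default_rng(3)
omega, beta, sigma_u, rho = 0.1, 0.9, 0.2, -0.3   # rho: error correlation

psi, durations = 0.0, []
for _ in range(1000):
    # Correlated innovations: z[0] drives the latent log-duration state,
    # z[1] becomes a positive (lognormal) measurement error, so the
    # observation equation stays on the original duration scale.
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
    psi = omega + beta * psi + sigma_u * z[0]
    durations.append(np.exp(psi) * np.exp(0.5 * z[1]))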

13.
In this paper, we study a nonparametric additive regression model suitable for a wide range of time series applications. Our model includes a periodic component, a deterministic time trend, various component functions of stochastic explanatory variables, and an AR(p) error process that accounts for serial correlation in the regression error. We propose an estimation procedure for the nonparametric component functions and the parameters of the error process based on smooth backfitting and quasi-maximum likelihood methods. Our theory establishes convergence rates and the asymptotic normality of our estimators. Moreover, we are able to derive an oracle-type result for the estimators of the AR parameters: under fairly mild conditions, the limiting distribution of our parameter estimators is the same as when the nonparametric component functions are known. Finally, we illustrate our estimation procedure by applying it to a sample of climate and ozone data collected on the Antarctic Peninsula.
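For intuition, here is plain classical backfitting for an additive model with Nadaraya-Watson smoothers; the paper's smooth backfitting differs in its projection step, and the AR error, periodic and trend components are omitted here:

import numpy as np

def backfit_additive(X, y, bandwidth=0.3, iters=20):
    # Cycle over components, smoothing the partial residuals of each
    # against its own covariate until the fits stabilize.
    n, d = X.shape
    f = np.zeros((n, d))
    for _ in range(iters):
        for j in range(d):
            r = y - y.mean() - f.sum(axis=1) + f[:, j]   # partial residuals
            K = np.exp(-0.5 * ((X[:, j][:, None] - X[:, j][None, :]) / bandwidth) ** 2)
            f[:, j] = K @ r / K.sum(axis=1)              # kernel smoother
            f[:, j] -= f[:, j].mean()                    # identifiability
    return f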

14.
This paper describes inference methods for functional data under the assumption that the functional data of interest are smooth latent functions, characterized by a Gaussian process, which have been observed with noise over a finite set of time points. The methods we propose are completely specified in a Bayesian environment that allows all inferences to be performed through a simple Gibbs sampler. Our main focus is on estimating and describing uncertainty in the covariance function. However, these models also encompass functional data estimation, functional regression where the predictors are latent functions, and an automatic approach to smoothing parameter selection. Furthermore, these models require minimal assumptions on the data structure, as the time points for observations need not be equally spaced, the number and placement of observations are allowed to vary among functions, and no special treatment is required when the number of functional observations is less than the dimensionality of those observations. We illustrate the effectiveness of these models in estimating latent functional data, capturing variation in the functional covariance estimate, and selecting appropriate smoothing parameters in both a simulation study and a regression analysis of medfly fertility data.
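The conjugate computation at the heart of such models is short; the sketch below gives the posterior mean of a latent function under a squared-exponential covariance with fixed hyperparameters (in the paper these, and the covariance itself, would be sampled within the Gibbs sampler):

import numpy as np

def gp_posterior_mean(t_obs, y, t_new, ell=0.5, sf=1.0, sn=0.1):
    # Latent smooth function ~ GP with squared-exponential covariance,
    # observed with Gaussian noise at a finite (possibly unequally
    # spaced) set of time points t_obs.
    k = lambda a, b: sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(t_obs, t_obs) + sn**2 * np.eye(len(t_obs))
    return k(t_new, t_obs) @ np.linalg.solve(K, y)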

15.
An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating model from which the latent process can be simulated. Given the presence of a latent long-memory process, we require a modification of the importance sampling technique. In particular, the long-memory process needs to be approximated by a finite dynamic linear process. Two possible approximations are discussed and are compared with each other. We show that an autoregression obtained from minimizing mean squared prediction errors leads to an effective and feasible method. In our empirical study, we analyze ten daily log-return series from the S&P 500 stock index by univariate and multivariate long-memory stochastic volatility models. We compare the in-sample and out-of-sample performance of a number of models within the class of long-memory stochastic volatility models.
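The autoregressive approximation mentioned above amounts to solving the Yule-Walker equations against the long-memory autocovariances, since the AR(p) coefficients minimizing the one-step mean squared prediction error are exactly the Yule-Walker solution. A sketch assuming an ARFIMA(0, d, 0) latent process:

import numpy as np
from math import gamma
from scipy.linalg import solve_toeplitz

def fd_autocov(d, nlags):
    # Autocovariances of ARFIMA(0, d, 0) with unit innovation variance:
    # gamma(0) = Gamma(1-2d)/Gamma(1-d)^2,
    # gamma(k) = gamma(k-1) * (k-1+d) / (k-d).
    g = np.empty(nlags + 1)
    g[0] = gamma(1 - 2 * d) / gamma(1 - d) ** 2
    for k in range(1, nlags + 1):
        g[k] = g[k - 1] * (k - 1 + d) / (k - d)
    return g

def ar_approximation(d, p):
    # AR(p) coefficients minimizing the mean squared prediction error.
    g = fd_autocov(d, p)
    return solve_toeplitz(g[:p], g[1:p + 1])

print(ar_approximation(0.35, 10))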

16.
Markov-switching (MS) models are becoming increasingly popular as efficient tools for modeling various phenomena in different disciplines, in particular non-Gaussian time series. In this article, we propose a broad class of Markov-switching bilinear GARCH processes (MS-BLGARCH hereafter) obtained by adding to an MS-GARCH model one or more interaction components between the observed series and its volatility process. This parameterization offers remarkably rich dynamics and complex behavior for modeling and forecasting financial time-series data that exhibit structural changes. In these models, the parameters of the conditional variance are allowed to vary according to a latent time-homogeneous Markov chain with finite state space, the "regimes." The main aim of the new model is to capture asymmetry, in particular the leverage effect, characterized by negative correlation between return shocks and subsequent shocks to volatility, across different regimes. First, some basic structural properties of the new model are given, including sufficient conditions ensuring the existence of stationary, causal, ergodic solutions, and moment properties. Second, since the second-order structure provides useful information for identifying an appropriate time-series model, we derive the covariance function of an MS-BLGARCH process and of its powers. We find that the second- (resp. higher-) order structure is similar to that of some linear processes, so that an MS-BLGARCH process (resp. its powers) admits an ARMA representation. This finding allows parameter estimation via a GMM procedure, which is validated in a Monte Carlo study and applied to the foreign exchange rate of the Algerian dinar against the single European currency.
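For orientation, a toy simulation of the plain MS-GARCH(1,1) backbone; the bilinear interaction terms that define MS-BLGARCH are omitted, and all parameter values are invented:

import numpy as np

rng = np.random.default_rng(4)
P = np.array([[0.98, 0.02], [0.05, 0.95]])   # regime transition matrix
omega, alpha, beta = [0.1, 0.5], [0.05, 0.2], [0.9, 0.6]

s, h, y = 0, 1.0, np.empty(1000)
for t in range(1000):
    s = rng.choice(2, p=P[s])                # latent Markov regime
    h = omega[s] + alpha[s] * (y[t - 1] ** 2 if t else 0.0) + beta[s] * h
    y[t] = np.sqrt(h) * rng.normal()         # conditionally Gaussian return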

17.
Bayesian model building techniques are developed for data with a strong time series structure and possibly exogenous explanatory variables that have strong explanatory and predictive power. The emphasis is on determining whether, when the data have a strong time series structure, there are explanatory variables that should also be included in the model. We use a time series model that is linear in past observations and that can capture both stochastic and deterministic trend, seasonality and serial correlation. We propose plotting absolute predictive error against predictive standard deviation. A series of such plots is utilized to determine which of several nested and non-nested models is optimal in terms of minimizing the dispersion of the predictive distribution and restricting predictive outliers. We apply the techniques to modelling monthly counts of fatal road crashes in Australia, where economic, consumption and weather variables are available, and we find that three such variables should be included in addition to the time series filter. The approach leads to graphical techniques to determine strengths of relationships between the dependent variable and covariates, to detect model inadequacy, and to obtain useful numerical summaries.
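One way to draw the proposed diagnostic (the 2-standard-deviation reference line and the styling are our assumptions, not the paper's exact figure):

import numpy as np
import matplotlib.pyplot as plt

def predictive_diagnostic_plot(y, pred_mean, pred_sd, label):
    # Absolute one-step predictive error against predictive standard
    # deviation: points far above the 2*sd line are predictive outliers,
    # and a tighter cloud signals a less dispersed predictive distribution.
    plt.scatter(pred_sd, np.abs(y - pred_mean), s=10, label=label)
    grid = np.linspace(pred_sd.min(), pred_sd.max(), 50)
    plt.plot(grid, 2 * grid, "k--")   # ~95% reference band under normality
    plt.xlabel("predictive standard deviation")
    plt.ylabel("absolute predictive error")
    plt.legend()

Overlaying one such cloud per candidate model makes the nested and non-nested comparisons visual.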

18.
Summary. We propose modelling short-term pollutant exposure effects on health by using dynamic generalized linear models. The time series of count data are modelled by a Poisson distribution having mean driven by a latent Markov process; estimation is performed by the extended Kalman filter and smoother. This modelling strategy allows us to take into account possible overdispersion and time-varying effects of the covariates. These ideas are illustrated by reanalysing data on the relationship between daily non-accidental deaths and air pollution in the city of Birmingham, Alabama.
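The update step of such a filter is short; here is a scalar toy version for a dynamic Poisson model with log link, linearized around the predicted state (an illustrative reduction of the extended Kalman filter used above):

import numpy as np

def ekf_poisson_update(y, m_pred, v_pred):
    # Observation y_t ~ Poisson(lambda_t) with log(lambda_t) = theta_t and
    # predictive theta_t ~ N(m_pred, v_pred); linearize the link at m_pred.
    lam = np.exp(m_pred)        # predicted mean count
    H = lam                     # d(lambda)/d(theta) at the predicted state
    S = H ** 2 * v_pred + lam   # innovation variance (Poisson obs. noise)
    K = v_pred * H / S          # Kalman gain
    return m_pred + K * (y - lam), (1 - K * H) * v_pred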

19.
This paper considers model selection and forecasting issues in two closely related models for nonstationary periodic autoregressive time series [PAR]. Periodically integrated seasonal time series [PIAR] need a periodic differencing filter to remove the stochastic trend. On the other hand, when the nonperiodic first-order differencing filter can be applied, one has a periodic model with a nonseasonal unit root [PARI]. In this paper, we discuss and evaluate two testing strategies to select between these two models. Furthermore, we compare the relative forecasting performance of the two models using Monte Carlo simulations and some U.K. macroeconomic seasonal time series. One result is that forecasting with PARI models when the data-generating process is a PIAR process seems to be worse than vice versa.

20.
We present a mathematical theory of objective, frequentist chance phenomena that uses as a model a set of probability measures. In this work, sets of measures are not viewed as a statistical compound hypothesis or as a tool for modeling imprecise subjective behavior. Instead we use sets of measures to model stable (although not stationary in the traditional stochastic sense) physical sources of finite time series data that have highly irregular behavior. Such models give a coarse-grained picture of the phenomena, keeping track of the range of the possible probabilities of the events. We present methods to simulate finite data sequences coming from a source modeled by a set of probability measures, and to estimate the model from finite time series data. The estimation of the set of probability measures is based on the analysis of a set of relative frequencies of events taken along subsequences selected by a collection of rules. In particular, we provide a universal methodology for finding a family of subsequence selection rules that can estimate any set of probability measures with high probability.
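A minimal sketch of the estimation device for binary data; the rules, the helper and the min/max summary are illustrative assumptions:

import numpy as np

def frequency_range(bits, rules):
    # For each selection rule (a function of the observed prefix deciding
    # whether to include the next observation), compute the relative
    # frequency of ones along the selected subsequence; the spread of
    # these frequencies coarsely estimates the set of possible
    # probabilities of the event {x = 1}.
    freqs = []
    for rule in rules:
        sel = [bits[t] for t in range(1, len(bits)) if rule(bits[:t])]
        if sel:
            freqs.append(np.mean(sel))
    return min(freqs), max(freqs)

rules = [lambda h: h[-1] == 0, lambda h: h[-1] == 1, lambda h: True]
bits = np.random.default_rng(5).integers(0, 2, 500)
print(frequency_range(bits, rules))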
