Similar Literature (20 results)
1.
Sequential Monte Carlo methods (also known as particle filters and smoothers) are used for filtering and smoothing in general state-space models. These methods are based on importance sampling. In practice, it is often difficult to find a suitable proposal which allows effective importance sampling. This article develops an original particle filter and an original particle smoother which employ nonparametric importance sampling. The basic idea is to use a nonparametric estimate of the marginally optimal proposal. The proposed algorithms provide a better approximation of the filtering and smoothing distributions than standard methods. The methods’ advantage is most distinct in severely nonlinear situations. In contrast to most existing methods, they allow the use of quasi-Monte Carlo (QMC) sampling. In addition, they do not suffer from weight degeneration, which renders a resampling step unnecessary. For the estimation of model parameters, an efficient on-line maximum-likelihood (ML) estimation technique is proposed which is also based on nonparametric approximations. All suggested algorithms have almost linear complexity for low-dimensional state spaces. This is an advantage over standard smoothing and ML procedures; in particular, all existing sequential Monte Carlo methods that incorporate QMC sampling have quadratic complexity. As an application, stochastic volatility estimation for high-frequency financial data is considered, which is of great practical importance. The computer code is partly available as supplemental material.
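For orientation, below is a minimal sketch of the standard bootstrap (SIR) particle filter, the baseline whose proposal such nonparametric approaches aim to improve on. The transition and observation functions, the stochastic-volatility example, and all parameter values are illustrative assumptions, not the article's algorithm.

import numpy as np

def bootstrap_particle_filter(y, n_particles, f, loglik, x0_sampler, rng=None):
    """Generic bootstrap (SIR) particle filter for a scalar state-space model.
    y          : observations y_1, ..., y_T
    f          : transition sampler, x_t = f(x_{t-1}, rng)
    loglik     : observation log-density log p(y_t | x_t)
    x0_sampler : draws the initial particle cloud
    Returns the sequence of filtering means E[x_t | y_{1:t}].
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0_sampler(n_particles, rng)
    means = np.empty(len(y))
    for t, yt in enumerate(y):
        x = f(x, rng)                              # propagate through the transition density
        logw = loglik(yt, x)                       # weight by the observation density
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * x)                   # filtering-mean estimate
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x = x[idx]                                 # multinomial resampling
    return means

# Illustrative use on a stochastic-volatility-type model (all settings hypothetical)
phi, sigma, beta = 0.95, 0.3, 0.6
f = lambda x, rng: phi * x + sigma * rng.standard_normal(x.shape)
loglik = lambda yt, x: -0.5 * (x + yt ** 2 / (beta ** 2 * np.exp(x)))
x0 = lambda n, rng: sigma / np.sqrt(1 - phi ** 2) * rng.standard_normal(n)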

2.
Most system identification approaches and statistical inference methods rely on the availability of analytic knowledge of the probability distribution function of the system output variables. In the case of dynamic systems modelled by hidden Markov chains or stochastic nonlinear state-space models, these distributions, as well as that of the state variables themselves, can be unknown or intractable. In that situation, the usual particle Monte Carlo filters for system identification, as well as likelihood-based inference and model selection methods, have to rely, whenever possible, on hazardous approximations and are often at risk. This review shows how a recent nonparametric particle filtering approach can be used efficiently in that context, not only for consistent filtering of these systems but also to restore these statistical inference methods, allowing, for example, consistent particle estimation of Bayes factors or the generalisation of sequential tests for detecting changes in model parameters. Real-life applications of these particle approaches to a microbiological growth model are presented as illustrations.

3.
We propose a state-space approach for GARCH models with time-varying parameters, able to deal with the non-stationarity that is usually observed in a wide variety of time series. The parameters of the non-stationary model are allowed to vary smoothly over time through non-negative deterministic functions. We implement the estimation of the time-varying parameters in the time domain through Kalman filter recursive equations, finding a state-space representation of a class of time-varying GARCH models. We provide prediction intervals for time-varying GARCH models and, additionally, propose a simple methodology for handling missing values. Finally, the proposed methodology is applied to the Chilean stock market index (IPSA) and to the American Standard & Poor's 500 index (S&P 500).
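As a simple illustration of the time-varying parameterisation (not the paper's Kalman-filter estimation itself), the conditional-variance recursion of a GARCH(1,1) whose coefficients vary smoothly in rescaled time u = t/T can be sketched as follows; the smooth parameter functions below are hypothetical.

import numpy as np

def tv_garch_variance(y, omega, alpha, beta):
    """Conditional-variance path of a GARCH(1,1) whose coefficients vary
    smoothly with rescaled time u = t/T (an illustrative sketch only)."""
    T = len(y)
    sigma2 = np.empty(T)
    sigma2[0] = np.var(y)                      # crude initialisation
    for t in range(1, T):
        u = t / T
        sigma2[t] = omega(u) + alpha(u) * y[t - 1] ** 2 + beta(u) * sigma2[t - 1]
    return sigma2

# Hypothetical smooth, non-negative parameter paths
omega = lambda u: 0.05 + 0.02 * u
alpha = lambda u: 0.08 + 0.04 * np.sin(np.pi * u)
beta  = lambda u: 0.85 - 0.05 * u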

4.
State-space models provide an important body of techniques for analyzing time series, but their use requires estimating unobserved states. The optimal estimate of the state is its conditional expectation given the observation history, and computing this expectation is hard when there are nonlinearities. Existing filtering methods, including sequential Monte Carlo, tend to be either inaccurate or slow. In this paper, we study a nonlinear filter for nonlinear/non-Gaussian state-space models which uses Laplace's method, an asymptotic series expansion, to approximate the state's conditional mean and variance, together with a Gaussian conditional distribution. This Laplace-Gaussian filter (LGF) gives fast, recursive, deterministic state estimates, with an error that is set by the stochastic characteristics of the model and is, we show, stable over time. We illustrate the estimation ability of the LGF by applying it to the problem of neural decoding and compare it to sequential Monte Carlo both in simulations and with real data. We find that the LGF can deliver superior results in a small fraction of the computing time.
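For intuition, a first-order Laplace (Gaussian) approximation of a scalar state's conditional distribution can be sketched as below: locate the mode of the log conditional density by Newton's method and use the curvature there as the inverse variance. This is only the basic building block; the LGF's higher-order asymptotic corrections are not shown, and the derivative callables d1 and d2 are hypothetical inputs supplied by the model.

def laplace_gaussian_step(d1, d2, x_init, n_newton=20, tol=1e-8):
    """Approximate the conditional distribution of a scalar state by
    N(mode, -1/curvature), with the mode of the log conditional density
    located by Newton's method.
    d1, d2 : first and second derivatives of the log conditional density
    """
    x = x_init
    for _ in range(n_newton):
        step = d1(x) / d2(x)      # Newton step (d2 < 0 near the mode)
        x = x - step
        if abs(step) < tol:
            break
    mean = x
    var = -1.0 / d2(x)            # Gaussian variance from the curvature at the mode
    return mean, var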

5.
Appropriately designing the proposal kernel of a particle filter is an issue of significant importance, since a bad choice may lead to deterioration of the particle sample and, consequently, to wasted computational power. In this paper we introduce a novel algorithm that adaptively approximates the so-called optimal proposal kernel by a mixture of integrated curved exponential distributions with logistic weights. This family of distributions, referred to as a mixture of experts, is broad enough to be used in the presence of multi-modality or strongly skewed distributions. The mixtures are fitted, via online EM methods, to the optimal kernel by minimising the Kullback-Leibler divergence between the auxiliary target and instrumental distributions of the particle filter. At each iteration of the particle filter, the algorithm is required to solve only a single optimisation problem for the whole particle sample, yielding an algorithm with only linear complexity. In addition, we illustrate in a simulation study how the method can be successfully applied to optimal filtering in nonlinear state-space models.

6.
Time series of counts occur in many different contexts, the counts usually being of certain events or objects in specified time intervals. In this paper we introduce a model called the parameter-driven state-space model to analyse integer-valued time series data. A key property of such a model is that, conditional on the latent process, the observed counts are independent, although they are correlated marginally. Our simulations show that the Monte Carlo Expectation Maximization (MCEM) algorithm and the particle method are useful for parameter estimation in the proposed model. In an application to Malaysian dengue data, our model fits better than several other models, including that of Yang et al. (2015).

7.
Due to rapid data growth, statistical analysis of massive datasets often has to be carried out in a distributed fashion, either because several datasets stored in separate physical locations are all relevant to a given problem, or simply to achieve faster (parallel) computation through a divide-and-conquer scheme. In both cases, the challenge is to obtain valid inference that does not require processing all data at a single central computing node. We show that for a very widely used class of spatial low-rank models, which can be written as a linear combination of spatial basis functions plus a fine-scale-variation component, parallel spatial inference and prediction for massive distributed data can be carried out exactly, meaning that the results are the same as for a traditional, non-distributed analysis. The communication cost of our distributed algorithms does not depend on the number of data points. After extending our results to the spatio-temporal case, we illustrate our methodology by carrying out distributed spatio-temporal particle filtering inference on total precipitable water measured by three different satellite sensor systems.

8.
We present a simulation methodology for Bayesian estimation of rate parameters in Markov jump processes arising, for example, in stochastic kinetic models. To handle the problem of missing components and measurement errors in observed data, we embed the Markov jump process into the framework of a general state-space model. We do not use diffusion approximations. Markov chain Monte Carlo and particle-filter-type algorithms are introduced which allow sampling from the posterior distribution of the rate parameters and of the Markov jump process itself, even in data-poor scenarios. The algorithms are illustrated by applying them to rate estimation in a model for prokaryotic auto-regulation and in the stochastic Oregonator, respectively.
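For context, the forward model in such settings is a Markov jump process that can be simulated exactly with the Gillespie algorithm; the sketch below is a generic simulator (the stoichiometry matrix, hazard function, and parameter names are illustrative), not the article's inference scheme.

import numpy as np

def gillespie(x0, S, hazards, theta, t_max, rng=None):
    """Exact (Gillespie) simulation of a Markov jump process.
    x0      : initial copy numbers of the species
    S       : stoichiometry matrix, shape (n_species, n_reactions)
    hazards : function returning the reaction hazards h(x, theta)
    theta   : rate parameters (the quantities the article infers)
    """
    rng = np.random.default_rng() if rng is None else rng
    t, x = 0.0, np.asarray(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        h = hazards(x, theta)
        h0 = h.sum()
        if h0 <= 0:                          # no reaction can fire
            break
        t += rng.exponential(1.0 / h0)       # waiting time to the next reaction
        j = rng.choice(len(h), p=h / h0)     # index of the reaction that fires
        x = x + S[:, j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)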

9.
This article designs a sequential Monte Carlo (SMC) algorithm for estimation of a Bayesian semi-parametric stochastic volatility model for financial data. In particular, it makes use of one of the most recent particle filters, called Particle Learning (PL). SMC methods are especially well suited to state-space models and can be seen as a cost-efficient alternative to Markov chain Monte Carlo (MCMC), since they allow for online inference: the posterior distributions are updated as new data are observed, which is exceedingly costly using MCMC. Also, PL allows for consistent online model comparison using sequential predictive log Bayes factors. Simulated data are used to compare the posterior outputs of the PL and MCMC schemes, which are shown to be almost identical. Finally, a short real-data application is included.
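Concretely, a sequential predictive log Bayes factor between two candidate models M_1 and M_2 at time t accumulates one-step-ahead predictive log-densities, each of which can be approximated from the particle cloud. The expression below is a generic formulation under these assumptions, not the specific PL implementation:

\log B_t = \sum_{s=1}^{t}\Bigl[\log \hat{p}(y_s \mid y_{1:s-1}, M_1) - \log \hat{p}(y_s \mid y_{1:s-1}, M_2)\Bigr],
\qquad
\hat{p}(y_s \mid y_{1:s-1}, M_k) \approx \frac{1}{N}\sum_{i=1}^{N} p\bigl(y_s \mid x_{s}^{(i,k)}, M_k\bigr),

where x_s^{(i,k)} denotes the particles propagated under model M_k.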

10.
Spatio-temporal processes are often high-dimensional, exhibiting complicated variability across space and time. Traditional state-space model approaches to such processes in the presence of uncertain data have been shown to be useful. However, estimation of state-space models in this context is often problematic, since parameter vectors and matrices are of high dimension and can have complicated dependence structures. We propose a spatio-temporal dynamic model formulation with parameter matrices restricted based on prior scientific knowledge and/or common spatial models. Estimation is carried out via the expectation–maximization (EM) algorithm or a general EM algorithm. Several parameterization strategies are proposed, and analytical or computational closed-form EM update equations are derived for each. We apply the methodology to a model based on an advection–diffusion partial differential equation in a simulation study and also to a dimension-reduced model for a Palmer Drought Severity Index (PDSI) data set.

11.
We present particle-based algorithms for sequential filtering and parameter learning in state-space autoregressive (AR) models with structured priors. Non-conjugate priors are specified on the AR coefficients at the system level by imposing uniform or truncated normal priors on the moduli and wavelengths of the reciprocal roots of the AR characteristic polynomial. Sequential Monte Carlo (SMC) algorithms are considered and implemented for on-line filtering and parameter learning within this modeling framework; more specifically, three SMC approaches are compared by applying them to data simulated from different state-space AR models. An analysis of a human electroencephalogram signal is also presented to illustrate the use of the structured state-space AR models in describing biomedical signals.

12.
We apply the particle filter to the quick and accurate estimation of a switching point in a financial market, based on a recently developed theoretical model, the potentials of unbalanced complex kinetics (PUCK) model, which reproduces empirical stylized facts such as the fat-tailed distribution of price changes and anomalous diffusion on short time scales. Using a simulation study, we show the efficiency of an optimized driving force in particle filtering for estimating the parameters of the PUCK model. As an example, we apply the method to the dollar–yen exchange market before and after the biggest earthquake in Japan in March 2011. With this fast and efficient estimation method, we can clearly confirm that the statistics of the exchange-rate time series changed drastically at the time of the arrival of the quake in the Tokyo area, implying that the earthquake acted as a trigger for the market's switching point.

13.
New data collection and storage technologies have given rise to a new field of streaming data analytics: real-time statistical methodology for online data analysis. Most existing online learning methods are based on homogeneity assumptions, which require the samples in a sequence to be independent and identically distributed. However, inter-batch correlation and dynamically evolving batch-specific effects are among the key defining features of real-world streaming data such as electronic health records and mobile health data. This article is built on a state-space mixed-model framework in which the observed data stream is driven by a latent state process that follows a Markov process. In this setting, online maximum likelihood estimation is made challenging by high-dimensional integrals and complex covariance structures. We develop a real-time Kalman-filter-based regression analysis method that updates both point estimates and their standard errors for fixed population-average effects while adjusting for dynamic hidden effects. Both theoretical justification and numerical experiments demonstrate that the proposed online method has statistical properties similar to those of its offline counterpart and enjoys great computational efficiency. We also apply the method to analyze an electronic health record dataset.
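For orientation, a single predict/update step of the standard Kalman filter, on which such online regression updates are built, can be sketched as follows. The matrices and their names are illustrative, and the batch-specific random effects of the article's mixed model are not included.

import numpy as np

def kalman_step(m, P, y, F, Q, H, R):
    """One predict/update step of the standard Kalman filter.
    m, P : current state mean and covariance
    F, Q : state-transition matrix and state-noise covariance
    H, R : observation matrix and observation-noise covariance
    """
    # Predict
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # Update
    v = y - H @ m_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    m_new = m_pred + K @ v
    P_new = (np.eye(len(m_pred)) - K @ H) @ P_pred
    return m_new, P_new                     # np.sqrt(np.diag(P_new)) gives standard errors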

14.
M-quantile models with application to poverty mapping
Over the last decade there has been growing demand for estimates of population characteristics at the small area level. Unfortunately, cost constraints in the design of sample surveys lead to small sample sizes within these areas, and as a result direct estimation, using only the survey data, is inappropriate since it yields estimates with unacceptable levels of precision. Small area models are designed to tackle the small sample size problem. The most popular class of models for small area estimation is random effects models, which include random area effects to account for between-area variation. However, such models depend on strong distributional assumptions, require a formal specification of the random part of the model, and do not easily allow for outlier-robust inference. An alternative approach to small area estimation, based on the use of M-quantile models, was recently proposed by Chambers and Tzavidis (Biometrika 93(2):255–268, 2006) and Tzavidis and Chambers (Robust prediction of small area means and distributions. Working paper, 2007). Unlike traditional random effects models, M-quantile models do not depend on strong distributional assumptions and automatically provide outlier-robust inference. In this paper we illustrate for the first time how M-quantile models can be practically employed for deriving small area estimates of poverty and inequality. The methodology we propose improves on traditional poverty mapping methods in the following ways: (a) it enables estimation of the distribution function of the study variable within the small area of interest under both an M-quantile and a random effects model, (b) it provides analytical, instead of empirical, estimation of the mean squared error of the M-quantile small area mean estimates, and (c) it employs an outlier-robust estimation method. The methodology is applied to data from the 2002 Living Standards Measurement Survey (LSMS) in Albania for estimating (a) district-level estimates of the incidence of poverty in Albania, (b) district-level inequality measures, and (c) the distribution function of household per-capita consumption expenditure in each district. Small area estimates of poverty and inequality show that the poorest Albanian districts are in the mountainous regions (north and north-east), while the wealthiest districts, which are also linked with high levels of inequality, lie in the coastal (south-west) and southern parts of the country. We discuss the practical advantages of our methodology and note the consistency of our results with results from previous studies. We further demonstrate the usefulness of the M-quantile estimation framework through design-based simulations based on two realistic survey data sets containing small area information, and show that the M-quantile approach may be preferable when the aim is to estimate the small area distribution function.
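As background (and not specific to the poverty-mapping extensions above), the coefficients of a linear M-quantile regression of order q are commonly defined as the solution of an asymmetrically weighted estimating equation of the following standard form:

\sum_{i=1}^{n} \psi_{q}\!\left(\frac{y_i - \mathbf{x}_i^{\top}\boldsymbol{\beta}_q}{\hat{\sigma}_q}\right)\mathbf{x}_i = \mathbf{0},
\qquad
\psi_{q}(u) = 2\,\psi(u)\,\bigl\{q\,\mathbb{I}(u>0) + (1-q)\,\mathbb{I}(u\le 0)\bigr\},

where \psi is a bounded influence function (for example, the Huber function) and \hat{\sigma}_q is a robust estimate of scale.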

15.
In this paper, we discuss the problem of estimating the mean and standard deviation of a logistic population based on multiply Type-II censored samples. First, we discuss the best linear unbiased estimation and maximum likelihood estimation methods. Next, by appropriately approximating the likelihood equations, we derive approximate maximum likelihood estimators for the two parameters and show that these estimators are quite useful, as they do not require the construction of any special tables (as needed for the best linear unbiased estimators) and are explicit (unlike the maximum likelihood estimators, which need to be determined by numerical methods). We show that these estimators are also quite efficient, and derive the asymptotic variances and covariance of the estimators. Finally, we present an example to illustrate the methods of estimation discussed in this paper.

16.
Drug discovery is the process of identifying compounds which have potentially meaningful biological activity. A major challenge is that the number of compounds to search over can be quite large, sometimes numbering in the millions, making exhaustive experimental testing intractable. For this reason, computational methods are employed to filter out those compounds which do not exhibit strong biological activity. This filtering step, also called virtual screening, reduces the search space, allowing the remaining compounds to be tested experimentally. In this paper we propose several novel approaches to the problem of virtual screening based on Canonical Correlation Analysis (CCA) and on a kernel-based extension. Spectral learning ideas motivate our proposed new method, called Indefinite Kernel CCA (IKCCA). We show the strong performance of this approach both on a toy problem and on real-world data, with dramatic improvements in the predictive accuracy of virtual screening over an existing methodology.
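A minimal sketch of standard linear CCA, the classical building block that the kernel and indefinite-kernel variants (KCCA, IKCCA) extend, is shown below; the descriptor and activity matrices are hypothetical stand-ins, and the paper's own methods are not reproduced here.

import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical compound-descriptor (X) and activity-profile (Y) matrices
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))   # 200 compounds, 30 chemical descriptors (made up)
Y = rng.standard_normal((200, 5))    # 5 activity measurements per compound (made up)

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)   # canonical variates for each view

# Correlation between each pair of canonical variates
corrs = [np.corrcoef(X_c[:, k], Y_c[:, k])[0, 1] for k in range(2)]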

17.
This paper contributes to the problem of estimating state-space model parameters by proposing estimators for the mean, the autoregressive parameters, and the noise variances which, in contrast to maximum likelihood, can be calculated without assuming any specific distribution for the errors. The suggested estimators widen the scope of application of the generalized method of moments to some heteroscedastic models, such as state-space models with varying coefficients, and sufficient conditions for their consistency are given. The paper includes a simulation study comparing the proposed estimators with maximum likelihood estimators. Finally, these methods are applied to the calibration of a meteorological radar and the estimation of area rainfall.

18.
We propose an estimation method that incorporates the correlation/covariance structure between repeated measurements in covariate-adjusted regression models for distorted longitudinal data. In this distorted data setting, neither the longitudinal response nor the (possibly time-varying) predictors are directly observable. The unobserved response and predictors are assumed to be distorted/contaminated by unknown functions of a common observable confounder. The proposed estimation methodology adjusts for the distortion effects both in the estimation of the covariance structure and in the estimation of the regression parameters via generalized least squares. The finite-sample performance of the proposed estimators is studied numerically by means of simulations. The consistency and convergence rates of the proposed estimators are also established. The proposed method is illustrated with an application to data from a longitudinal study of cognitive and social development in children.
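For reference, the generic generalized least squares estimator that underlies such covariance-adjusted fitting, stated here in its standard form without the distortion adjustments specific to this paper, is

\hat{\boldsymbol{\beta}}_{\mathrm{GLS}} = \bigl(\mathbf{X}^{\top}\hat{\boldsymbol{\Sigma}}^{-1}\mathbf{X}\bigr)^{-1}\mathbf{X}^{\top}\hat{\boldsymbol{\Sigma}}^{-1}\mathbf{y},

where \hat{\boldsymbol{\Sigma}} is the estimated covariance matrix of the repeated measurements; weighting by \hat{\boldsymbol{\Sigma}}^{-1} accounts for within-subject correlation that ordinary least squares ignores.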

19.
In this paper, we consider estimation of the mean squared prediction error (MSPE) of the best linear predictor of (possibly) nonlinear functions of finitely many future observations in a stationary time series. We develop a resampling methodology for estimating the MSPE when the unknown parameters in the best linear predictor are estimated. Further, we propose a bias-corrected MSPE estimator based on the bootstrap and establish its second-order accuracy. Finite-sample properties of the method are investigated through a simulation study.

20.
We study Bayesian dynamic models for detecting changepoints in count time series that present structural breaks. As the inferential approach, we develop a parameter-learning version of the algorithm proposed by Chopin [Chopin N. Dynamic detection of changepoints in long time series. Annals of the Institute of Statistical Mathematics 2007;59:349–366], called the Chopin filter with parameter learning, which allows us to estimate the static parameters of the model. In this extension, the static parameters are handled using the kernel smoothing approximations proposed by Liu and West [Liu J, West M. Combined parameter and state estimation in simulation-based filtering. In: Doucet A, de Freitas N, Gordon N, editors. Sequential Monte Carlo methods in practice. New York: Springer-Verlag; 2001]. The proposed methodology is then applied to both simulated and real data sets, and the time series models include distributions that allow for overdispersion and/or zero inflation. Our procedure is general, robust, and naturally adaptive, since the particle filter approach does not require restrictive specifications to ensure its validity and effectiveness; we therefore believe it is a valuable alternative for dealing with the problem of detecting changepoints in count time series. The proposed methodology is also suitable for count time series with no changepoints and for independent count data.
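For reference, the Liu and West kernel-smoothing step referred to above refreshes each particle's static parameter from a shrunk Gaussian kernel. The expressions below give its standard form, without the changepoint-specific details of this paper:

\theta_t^{(i)} \sim \mathrm{N}\!\left(a\,\theta_{t-1}^{(i)} + (1-a)\,\bar{\theta}_{t-1},\; h^{2} V_{t-1}\right),
\qquad a^{2} + h^{2} = 1,\quad a = \frac{3\delta - 1}{2\delta},

where \bar{\theta}_{t-1} and V_{t-1} are the weighted mean and covariance of the current parameter particles and \delta \in (0,1] is a discount factor; the shrinkage preserves the first two moments of the parameter cloud while avoiding degeneracy of the static parameters.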
