Similar documents: 20 results found.
1.
Solving Bayesian estimation problems where the posterior distribution evolves over time through the accumulation of data has many applications for dynamic models. A large number of algorithms based on particle filtering methods, also known as sequential Monte Carlo algorithms, have recently been proposed to solve these problems. We propose a special particle filtering method which uses random mixtures of normal distributions to represent the posterior distributions of partially observed Gaussian state space models. This algorithm is based on a marginalization idea for improving efficiency and can lead to substantial gains over standard algorithms. It differs from previous algorithms which were only applicable to conditionally linear Gaussian state space models. Computer simulations are carried out to evaluate the performance of the proposed algorithm for dynamic tobit and probit models.

2.
This paper concerns Kalman filtering when the measurements of the process are censored. The censored measurements are addressed by the Tobit model of Type I and are one-dimensional with two censoring limits, while the (hidden) state vectors are multidimensional. For this model, Bayesian estimates for the state vectors are provided through a recursive algorithm of Kalman filtering type. Experiments are presented to illustrate the effectiveness and applicability of the algorithm. The experiments show that the proposed method outperforms other filtering methodologies in minimizing the computational cost as well as the overall Root Mean Square Error (RMSE) for synthetic and real data sets.
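The Tobit Type I measurement model with two censoring limits can be sketched as follows. This is a minimal simulation of the censoring mechanism only, not the paper's recursive filter; the state model (a scalar random walk) and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tobit_measurements(n=200, lower=-1.0, upper=1.0, q=0.05, r=0.1):
    """Simulate a scalar random-walk state and Tobit Type I censored
    measurements: the latent measurement y* = x + noise is observed
    only inside [lower, upper]; outside, it is clamped to the limit."""
    x = np.cumsum(rng.normal(0.0, np.sqrt(q), n))    # hidden state (random walk)
    y_latent = x + rng.normal(0.0, np.sqrt(r), n)    # uncensored measurement
    y = np.clip(y_latent, lower, upper)              # Tobit Type I censoring
    censored = (y_latent < lower) | (y_latent > upper)
    return x, y, censored

x, y, censored = simulate_tobit_measurements()
```

Feeding such clamped observations to a standard Kalman filter as if they were exact is what induces the bias that censoring-aware filters are designed to remove.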

3.
On sequential Monte Carlo sampling methods for Bayesian filtering
In this article, we present an overview of methods for sequential simulation from posterior distributions. These methods are of particular interest in Bayesian filtering for discrete time dynamic models that are typically nonlinear and non-Gaussian. A general importance sampling framework is developed that unifies many of the methods which have been proposed over the last few decades in several different scientific disciplines. Novel extensions to the existing methods are also proposed. We show in particular how to incorporate local linearisation methods similar to those which have previously been employed in the deterministic filtering literature; these lead to very effective importance distributions. Furthermore we describe a method which uses Rao-Blackwellisation in order to take advantage of the analytic structure present in some important classes of state-space models. In a final section we develop algorithms for prediction, smoothing and evaluation of the likelihood in dynamic models.
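The simplest member of the importance-sampling framework surveyed above is the bootstrap filter, which proposes from the prior and weights by the likelihood. A minimal sketch for an assumed scalar linear-Gaussian model (function names and parameters are ours, chosen so the behavior is checkable):

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_filter(y, n_particles=500, q=0.1, r=0.5):
    """Bootstrap (SIR) particle filter for x_t = x_{t-1} + N(0, q),
    y_t = x_t + N(0, r): propagate from the prior, weight by the
    likelihood, then resample."""
    x = rng.normal(0.0, 1.0, n_particles)                  # initial particle cloud
    means = []
    for yt in y:
        x = x + rng.normal(0.0, np.sqrt(q), n_particles)   # propagate from prior
        logw = -0.5 * (yt - x) ** 2 / r                    # Gaussian log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                        # filtered posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)    # multinomial resampling
        x = x[idx]
    return np.array(means)

# track a slowly drifting state through noisy observations
truth = np.cumsum(rng.normal(0, np.sqrt(0.1), 100))
obs = truth + rng.normal(0, np.sqrt(0.5), 100)
est = bootstrap_filter(obs)
```

Because the proposal ignores the current observation, the bootstrap filter is exactly the baseline that the local-linearisation and Rao-Blackwellised importance distributions discussed above improve upon.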

4.
New techniques for the analysis of stochastic volatility models in which the logarithm of conditional variance follows an autoregressive model are developed. A cyclic Metropolis algorithm is used to construct a Markov-chain simulation tool. Simulations from this Markov chain converge in distribution to draws from the posterior distribution enabling exact finite-sample inference. The exact solution to the filtering/smoothing problem of inferring about the unobserved variance states is a by-product of our Markov-chain method. In addition, multistep-ahead predictive densities can be constructed that reflect both inherent model variability and parameter uncertainty. We illustrate our method by analyzing both daily and weekly data on stock returns and exchange rates. Sampling experiments are conducted to compare the performance of Bayes estimators to method of moments and quasi-maximum likelihood estimators proposed in the literature. In both parameter estimation and filtering, the Bayes estimators outperform these other approaches.
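The model class described above, log conditional variance following an AR(1), can be sketched with a small simulation. Parameter values (mu, phi, sigma_eta) are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_sv(n=1000, mu=-1.0, phi=0.95, sigma_eta=0.2):
    """Stochastic volatility model: the log-variance h_t follows an
    AR(1), h_t = mu + phi*(h_{t-1} - mu) + sigma_eta * eta_t, and
    returns are y_t = exp(h_t / 2) * eps_t."""
    h = np.empty(n)
    h[0] = mu
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
    y = np.exp(h / 2) * rng.normal(size=n)
    return y, h

y, h = simulate_sv()
```

Since h is latent and enters the observation equation nonlinearly, the likelihood has no closed form, which is precisely why MCMC schemes like the cyclic Metropolis algorithm above are needed for exact finite-sample inference.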

5.
Justice (1977) presented a Levinson-type solution for the two-dimensional Wiener filtering problem. Since that solution is based on the Szegő polynomials, the bivariate Szegő polynomials must be calculated to obtain it. In this paper, an alternative solution of the problem is proposed, based on the block Cholesky decomposition of the inverse of a Hermitian block Toeplitz matrix. Since the block Cholesky decomposition can be accomplished through the Whittle algorithm, the new solution is easy to implement in a computer program.

6.
The Hodrick–Prescott (HP) filtering is widely applied to decompose macroeconomic time series, such as real Gross Domestic Product, into cyclical and trend components. This paper presents a small but practically useful modification to this approach. The modified filtering is of practical use because it provides not only trend estimates identical to those of the HP filtering but also extrapolations of the trend. We prove this using a ridge regression representation of the modified HP filtering, mainly because that representation enhances our understanding of the approach.
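The penalized least-squares form underlying the HP trend can be sketched directly. This is a minimal NumPy sketch of the standard HP filter, not the paper's modified version; lam = 1600 is the conventional quarterly value:

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """HP trend component via its penalized least-squares form:
    minimize ||y - tau||^2 + lam * ||D2 tau||^2, where D2 takes
    second differences; the solution is tau = (I + lam * D2'D2)^{-1} y."""
    n = len(y)
    d2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * d2.T @ d2, y)

# A linear series has zero second differences, so the penalty vanishes
# and the HP trend recovers the series exactly.
t = np.arange(50, dtype=float)
trend = hp_trend(2.0 + 0.5 * t)
```

The matrix (I + lam * D2'D2)^{-1} is exactly a ridge-regression smoother matrix, which is the representation the paper exploits.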

7.
We propose a new regression-based filter for extracting signals online from multivariate high-frequency time series. It separates relevant signals of several variables from noise and (multivariate) outliers.

Unlike parallel univariate filters, the new procedure takes into account the local covariance structure between the single time series components. It is based on high-breakdown estimates, which makes it robust against (patches of) outliers in one or several of the components as well as against outliers with respect to the multivariate covariance structure. Moreover, the trade-off problem between bias and variance for the optimal choice of the window width is approached by choosing the size of the window adaptively, depending on the current data situation.

Furthermore, we present an advanced algorithm of our filtering procedure that includes the replacement of missing observations in real time. Thus, the new procedure can be applied in online-monitoring practice. Applications to physiological time series from intensive care show the practical effect of the proposed filtering technique.

8.
Sample covariance matrices play a central role in numerous popular statistical methodologies, for example principal components analysis, Kalman filtering and independent component analysis. However, modern random matrix theory indicates that, when the dimension of a random vector is not negligible with respect to the sample size, the sample covariance matrix demonstrates significant deviations from the underlying population covariance matrix. There is an urgent need to develop new estimation tools in such cases with high-dimensional data to recover the characteristics of the population covariance matrix from the observed sample covariance matrix. We propose a novel solution to this problem based on the method of moments. When the parametric dimension of the population spectrum is finite and known, we prove that the proposed estimator is strongly consistent and asymptotically Gaussian. Otherwise, we combine the first estimation method with a cross-validation procedure to select the unknown model dimension. Simulation experiments demonstrate the consistency of the proposed procedure. We also indicate possible extensions of the proposed estimator to the case where the population spectrum has a density.
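The deviation described above is easy to reproduce. In this sketch (dimensions chosen by us for illustration) the population covariance is the identity, yet with p/n = 0.5 the sample eigenvalues spread over roughly [(1 - sqrt(0.5))^2, (1 + sqrt(0.5))^2], about [0.09, 2.91], as predicted by the Marchenko–Pastur law:

```python
import numpy as np

rng = np.random.default_rng(3)

# Population covariance = identity (all eigenvalues equal to 1), but the
# dimension p is not negligible relative to the sample size n.
p, n = 200, 400
X = rng.normal(size=(n, p))     # n i.i.d. N(0, I_p) observations
S = X.T @ X / n                 # sample covariance matrix
eigs = np.linalg.eigvalsh(S)    # its eigenvalues spread far from 1
```

Recovering the flat population spectrum from such spread-out sample eigenvalues is exactly the inverse problem the moment-based estimator above addresses.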

9.
The Hodrick–Prescott (HP) filtering is frequently used in macroeconometrics to decompose time series, such as real gross domestic product, into their trend and cyclical components. Because the HP filtering is a basic econometric tool, it is necessary to have a precise understanding of its nature. This article contributes to the literature by listing several (penalized) least-squares problems that are related to the HP filtering, three of which are newly introduced in the article, and showing their properties. We also remark on their generalization.

10.
The standard Kalman filter cannot handle inequality constraints imposed on the state variables, as state truncation induces a nonlinear and non-Gaussian model. We propose a Rao-Blackwellized particle filter with the optimal importance function for forward filtering and the likelihood function evaluation. The particle filter effectively enforces the state constraints when the Kalman filter violates them. Monte Carlo experiments demonstrate excellent performance of the proposed particle filter with Rao-Blackwellization, in which the Gaussian linear sub-structure is exploited at both the cross-sectional and temporal levels.

11.
Most system identification approaches and statistical inference methods rely on the availability of the analytic knowledge of the probability distribution function of the system output variables. In the case of dynamic systems modelled by hidden Markov chains or stochastic nonlinear state-space models, these distributions, as well as those of the state variables themselves, can be unknown or intractable. In that situation, the usual particle Monte Carlo filters for system identification, and likelihood-based inference and model selection methods, must rely on potentially hazardous approximations, whenever such approximations are even possible. This review shows how a recent nonparametric particle filtering approach can be efficiently used in that context, not only for consistent filtering of these systems but also to restore these statistical inference methods, allowing, for example, consistent particle estimation of Bayes factors or the generalisation of sequential tests for detecting changes in model parameters. Real-life applications of these particle approaches to a microbiological growth model are proposed as illustrations.

12.
Sequential Monte Carlo methods (also known as particle filters and smoothers) are used for filtering and smoothing in general state-space models. These methods are based on importance sampling. In practice, it is often difficult to find a suitable proposal which allows effective importance sampling. This article develops an original particle filter and an original particle smoother which employ nonparametric importance sampling. The basic idea is to use a nonparametric estimate of the marginally optimal proposal. The proposed algorithms provide a better approximation of the filtering and smoothing distributions than standard methods. The methods’ advantage is most distinct in severely nonlinear situations. In contrast to most existing methods, they allow the use of quasi-Monte Carlo (QMC) sampling. In addition, they do not suffer from weight degeneration rendering a resampling step unnecessary. For the estimation of model parameters, an efficient on-line maximum-likelihood (ML) estimation technique is proposed which is also based on nonparametric approximations. All suggested algorithms have almost linear complexity for low-dimensional state-spaces. This is an advantage over standard smoothing and ML procedures. Particularly, all existing sequential Monte Carlo methods that incorporate QMC sampling have quadratic complexity. As an application, stochastic volatility estimation for high-frequency financial data is considered, which is of great importance in practice. The computer code is partly available as supplemental material.
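The weight degeneration that these methods avoid can be illustrated with a small sketch. Here sequential importance sampling runs without resampling, with illustrative Gaussian log-increments standing in for the per-step incremental weights; the effective sample size (ESS) collapses as the weights accumulate:

```python
import numpy as np

rng = np.random.default_rng(4)

def ess(logw):
    """Effective sample size of a set of unnormalized log-weights."""
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

# Sequential importance sampling WITHOUT resampling: log-weights
# accumulate over time and a few particles dominate the rest.
n = 1000
logw = np.zeros(n)
ess_path = []
for t in range(50):
    logw += rng.normal(0.0, 1.0, n)   # assumed per-step log-incremental weights
    ess_path.append(ess(logw))
```

Standard particle filters cure this collapse by resampling whenever the ESS drops; the methods above sidestep the problem by construction.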

13.
Classification of high-dimensional data sets is a big challenge for statistical learning and data mining algorithms. To apply classification methods to high-dimensional data sets effectively, feature selection is an indispensable pre-processing step of the learning process. In this study, we consider the problem of constructing an effective feature selection and classification scheme for data sets that have a small sample size and a large number of features. A novel feature selection approach, named four-Staged Feature Selection, is proposed to overcome the high-dimensional classification problem by selecting informative features. The proposed method first selects candidate features with a number of filtering methods based on different metrics, and then applies semi-wrapper, union and voting stages, respectively, to obtain the final feature subsets. Several statistical learning and data mining methods have been carried out to verify the efficiency of the selected features. To test the adequacy of the proposed method, 10 different microarray data sets are employed, owing to their high number of features and small sample size.

14.
Information before unblinding regarding the success of confirmatory clinical trials is highly uncertain. Current techniques using point estimates of auxiliary parameters for estimating expected blinded sample size: (i) fail to describe the range of likely sample sizes obtained after the anticipated data are observed, and (ii) fail to adjust to the changing patient population. Sequential MCMC-based algorithms are implemented for purposes of sample size adjustments. The uncertainty arising from clinical trials is characterized by filtering later auxiliary parameters through their earlier counterparts and employing posterior distributions to estimate sample size and power. The use of approximate expected power estimates to determine the required additional sample size are closely related to techniques employing Simple Adjustments or the EM algorithm. By contrast with these, our proposed methodology provides intervals for the expected sample size using the posterior distribution of auxiliary parameters. Future decisions about additional subjects are better informed due to our ability to account for subject response heterogeneity over time. We apply the proposed methodologies to a depression trial. Our proposed blinded procedures should be considered for most studies due to ease of implementation.

15.
We study Bayesian dynamic models for detecting changepoints in count time series that present structural breaks. As the inferential approach, we develop a parameter learning version of the algorithm proposed by Chopin [Chopin N. Dynamic detection of changepoints in long time series. Annals of the Institute of Statistical Mathematics 2007;59:349–366.], called the Chopin filter with parameter learning, which allows us to estimate the static parameters in the model. In this extension, the static parameters are addressed by using the kernel smoothing approximations proposed by Liu and West [Liu J, West M. Combined parameters and state estimation in simulation-based filtering. In: Doucet A, de Freitas N, Gordon N, editors. Sequential Monte Carlo methods in practice. New York: Springer-Verlag; 2001]. The proposed methodology is then applied to both simulated and real data sets, and the time series models include distributions that allow for overdispersion and/or zero inflation. Our procedure is general, robust and naturally adaptive, because the particle filter approach does not require restrictive specifications to ensure its validity and effectiveness; we therefore believe it is a valuable alternative for dealing with the problem of detecting changepoints in count time series. The proposed methodology is also suitable for count time series with no changepoints and for independent count data.

16.
This paper proposes an identification method of ARIMA models for seasonal time series using an intermediary model and a filtering method. This method is found to be useful when conventional methods, such as using sample ACF and PACF, fail to reveal a clear-cut model. This filtering identification method is also found to be particularly effective when a seasonal time series is subjected to calendar variations, moving-holiday effects, and interventions.

17.
In this article, using the representation that the Kalman filter recursions in state-space models can be expressed as a matrix-weighted average of prior and sample estimates, we supplement the usual filtering algorithm by an extreme bounds analysis. Specifically, as the covariance matrix of the state error is varied in the class of symmetric and positive-definite matrices, the filtering estimates are shown to be in an ellipsoid.
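The weighted-average representation is easiest to see in the scalar case, where the Kalman gain k plays the role of the matrix weight; the numbers below are illustrative:

```python
# Scalar illustration: the Kalman filtering estimate is a
# precision-weighted average of the prior estimate and the observation.
def kalman_update(prior_mean, prior_var, y, obs_var):
    k = prior_var / (prior_var + obs_var)       # Kalman gain, 0 < k < 1
    post_mean = (1 - k) * prior_mean + k * y    # weighted-average form
    post_var = (1 - k) * prior_var
    return post_mean, post_var

# prior N(0, 4) combined with observation y = 2 of variance 1
m, v = kalman_update(prior_mean=0.0, prior_var=4.0, y=2.0, obs_var=1.0)
```

As the prior (state-error) variance sweeps over all admissible values, k sweeps over (0, 1) and the estimate moves between the prior mean and the observation, which is the scalar shadow of the ellipsoid bound described above.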

18.
A common situation in filtering where classical Kalman filtering does not perform particularly well is tracking in the presence of propagating outliers. This calls for robustness understood in a distributional sense, i.e., we enlarge the distributional assumptions made in the ideal model by suitable neighborhoods. Based on optimality results for distributionally robust Kalman filtering from Ruckdeschel (Ansätze zur Robustifizierung des Kalman-Filters, vol 64, 2001; Optimally (distributional-)robust Kalman filtering, arXiv: 1004.3393, 2010a), we propose new robust recursive filters and smoothers designed for this purpose, as well as specialized versions for non-propagating outliers. We apply these procedures in the context of a GPS problem arising in the car industry. To better understand these filters, we study their behavior at stylized outlier patterns (for which they are not designed) and compare them to other approaches to the tracking problem. Finally, in a simulation study we discuss the efficiency of our procedures in comparison to competitors.
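One way to robustify a Kalman step, in the spirit of the rLS filter cited above, is to clip the correction term so a single outlying observation has bounded influence. A scalar sketch under our own simplifying assumptions (random-walk state; the clipping bound b is a hypothetical tuning constant, not a value from the paper):

```python
import numpy as np

def clipped_kalman_step(m, v, y, q, r, b):
    """One scalar Kalman step where the correction K*(y - m_pred) is
    clipped to [-b, b], limiting the influence of an outlying
    observation (rLS-style; b is a tuning constant)."""
    m_pred, v_pred = m, v + q                         # random-walk prediction
    k = v_pred / (v_pred + r)                         # Kalman gain
    correction = np.clip(k * (y - m_pred), -b, b)     # bounded-influence update
    return m_pred + correction, (1 - k) * v_pred

# an extreme outlier moves the clipped estimate by at most b,
# while the classical step follows it almost all the way
m_rob, _ = clipped_kalman_step(m=0.0, v=1.0, y=1000.0, q=0.1, r=1.0, b=0.5)
m_cls, _ = clipped_kalman_step(m=0.0, v=1.0, y=1000.0, q=0.1, r=1.0, b=np.inf)
```

Choosing b trades efficiency in the ideal Gaussian model against protection in the contamination neighborhood, which is the trade-off the cited optimality results make precise.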

19.
We propose a novel approach for distributed statistical detection of change-points in high-volume network traffic. We consider more specifically the task of detecting and identifying the targets of Distributed Denial of Service (DDoS) attacks. The proposed algorithm, called DTopRank, performs distributed network anomaly detection by aggregating the partial information gathered in a set of network monitors. In order to address massive data while limiting the communication overhead within the network, the approach combines record filtering at the monitor level and a nonparametric rank test for doubly censored time series at the central decision site. The performance of the DTopRank algorithm is illustrated both on synthetic data and on a traffic trace provided by a major Internet service provider.

20.
Drug discovery is the process of identifying compounds which have potentially meaningful biological activity. A major challenge that arises is that the number of compounds to search over can be quite large, sometimes numbering in the millions, making experimental testing intractable. For this reason computational methods are employed to filter out those compounds which do not exhibit strong biological activity. This filtering step, also called virtual screening, reduces the search space, allowing the remaining compounds to be experimentally tested. In this paper we propose several novel approaches to the problem of virtual screening based on Canonical Correlation Analysis (CCA) and on a kernel-based extension. Spectral learning ideas motivate our proposed new method, called Indefinite Kernel CCA (IKCCA). We show the strong performance of this approach both for a toy problem as well as using real-world data, with dramatic improvements in the predictive accuracy of virtual screening over an existing methodology.
