Similar documents
Found 20 similar documents (search time: 31 ms)
1.
Eden UT, Brown EN. Statistica Sinica 2008, 18(4): 1293-1310
Neural spike trains, the primary communication signals in the brain, can be accurately modeled as point processes. For many years, significant theoretical work has been done on the construction of exact and approximate filters for state estimation from point process observations in continuous time. We have previously developed approximate filters for state estimation from point process observations in discrete time and applied them in the study of neural systems. Here, we present a coherent framework for deriving continuous-time filters from their discrete-time counterparts. We present an accessible derivation of the well-known unnormalized conditional density equation for state evolution, construct a new continuous-time filter based on a Gaussian approximation, and propose a method for assessing the validity of the approximation following an approach by Brockett and Clark. We apply these methods to the problem of reconstructing arm reaching movements from simulated neural spiking activity in the primary motor cortex. This work makes explicit the connections between adaptive point process filters for analyzing neural spiking activity in continuous time, and standard continuous-time filters for state estimation from continuous and point process observations.
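As an illustration of the discrete-time side of this framework, here is a minimal sketch of a point-process filter with a Gaussian approximation, assuming a one-dimensional random-walk state and a log-linear conditional intensity. The model, parameter values, and names are illustrative assumptions, not the authors' exact algorithm:

```python
import math
import random

def pp_filter(spikes, dt, beta=1.0, mu=math.log(20.0), q=1e-4, x0=0.0, s0=1.0):
    """Discrete-time point-process filter with a Gaussian approximation.

    State:        x_k = x_{k-1} + w_k,  w_k ~ N(0, q)  (random walk)
    Observation:  spike indicator dN_k in a bin of width dt, with
                  conditional intensity lambda_k = exp(mu + beta * x_k).
    """
    x, s2 = x0, s0
    est = []
    for dN in spikes:
        s2 += q                                          # predict step
        lam = math.exp(mu + beta * x)
        s2 = 1.0 / (1.0 / s2 + beta * beta * lam * dt)   # posterior variance
        x = x + s2 * beta * (dN - lam * dt)              # posterior mean update
        est.append(x)
    return est

# simulate spikes from the assumed model and run the filter
random.seed(0)
dt, n = 0.001, 5000
x_true, truth, spikes = 0.0, [], []
for _ in range(n):
    x_true += random.gauss(0.0, math.sqrt(1e-4))
    truth.append(x_true)
    lam = math.exp(math.log(20.0) + x_true)
    spikes.append(1 if random.random() < lam * dt else 0)
est = pp_filter(spikes, dt)
```

The update step is the standard Gaussian-approximation recursion for point-process observations; a continuous-time filter of the kind the paper derives arises as the small-dt limit of such recursions.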

2.
Recursive and en-bloc approaches to signal extraction
In the literature on unobservable component models, three main statistical instruments have been used for signal extraction: fixed interval smoothing (FIS), which derives from Kalman's seminal work on optimal state-space filter theory in the time domain; Wiener-Kolmogorov-Whittle optimal signal extraction (OSE) theory, which is normally set in the frequency domain and dominates the field of classical statistics; and regularization, which was developed mainly by numerical analysts but is referred to as 'smoothing' in the statistical literature (such as smoothing splines, kernel smoothers and local regression). Although some minor recognition of the interrelationship between these methods can be discerned from the literature, no clear discussion of their equivalence has appeared. This paper exposes clearly the interrelationships between the three methods; highlights important properties of the smoothing filters used in signal extraction; and stresses the advantages of the FIS algorithms as a practical solution to signal extraction and smoothing problems. It also emphasizes the importance of the classical OSE theory as an analytical tool for obtaining a better understanding of the problem of signal extraction.

3.
This article presents a model-based signal extraction seasonal adjustment procedure to extract estimates of the independent unobserved seasonal and nonseasonal components from an observed time series. The decomposition yields a one-sided filter that is optimal for adjusting the most recent observation under the assumption of using only the past observed series. Some advantages of this procedure are that no forecasts are required for implementation and there are no problems of revision of estimates or questions of concurrent adjustment. Comparisons are made with existing procedures using two-sided filters.

4.
Many fields of research need to classify individual systems based on one or more data series, which are obtained by sampling an unknown continuous curve with noise. In other words, the underlying process is an unknown function which the observed variables represent only imperfectly. Although functional logistic regression has many attractive features for this classification problem, the method is applicable only when the number of individuals to be classified (or available to estimate the model) is large compared to the number of curves sampled per individual. To overcome this limitation, we use penalized optimal scoring to construct a new method for the classification of multi-dimensional functional data. The proposed method consists of two stages. First, the series of observed discrete values available for each individual are expressed as a set of continuous curves. Next, the penalized optimal scoring model is estimated on the basis of these curves. A similar penalized optimal scoring method was described in my previous work, but that model is not suitable for the analysis of continuous functions. In this paper we adopt a Gaussian kernel approach to extend the previous model. The high accuracy of the new method is demonstrated on Monte Carlo simulations, and the method is then used to predict defaulting firms on the Japanese Stock Exchange.

5.
Particle filters for mixture models with an unknown number of components
We consider the analysis of data under mixture models where the number of components in the mixture is unknown. We concentrate on mixture Dirichlet process models, and in particular we consider such models under conjugate priors. This conjugacy enables us to integrate out many of the parameters in the model and to discretize the posterior distribution. Particle filters are particularly well suited to such discrete problems, and we propose the use of the particle filter of Fearnhead and Clifford for this problem. The performance of this particle filter, when analyzing both simulated and real data from a Gaussian mixture model, is uniformly better than that of the particle filter algorithm of Chen and Liu. In many situations it outperforms a Gibbs sampler. We also show how models without the required amount of conjugacy can be efficiently analyzed by the same particle filter algorithm.

6.
Time series seasonal extraction techniques are quite often applied in the context of a policy aimed at controlling the nonseasonal components of a time series. Monetary policies targeting the nonseasonal components of monetary aggregates are an example. Such policies can be studied as a quadratic optimal control model in which observations are contaminated by seasonal noise. Optimal extraction filters in such models do not correspond to univariate time series seasonal extraction filters. The linear quadratic control model components are nonorthogonal due to the presence of control feedback. This article presents the Kalman filter as a conceptual and computational device used to extract seasonal noise in the presence of feedback.
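The Kalman filter recursion at the heart of such an approach can be sketched in its simplest (local-level) form. This shows only the basic conceptual device, not the article's feedback-control model; all names and parameters are illustrative:

```python
def kalman_level(y, q=0.1, r=1.0, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a local-level model:
    state        x_k = x_{k-1} + w_k,  w_k ~ N(0, q)
    observation  y_k = x_k + v_k,      v_k ~ N(0, r)
    """
    x, p = x0, p0
    out = []
    for obs in y:
        p += q                    # predict: random-walk state
        k = p / (p + r)           # Kalman gain
        x = x + k * (obs - x)     # measurement update
        p = (1.0 - k) * p
        out.append(x)
    return out

est = kalman_level([5.0] * 50)    # constant signal for illustration
```

Richer models (seasonal states, feedback terms) enter through the state equation; the predict/update structure of the recursion is unchanged.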

7.
Computer simulations of point processes are important either to verify the results of certain theoretical calculations that can be very awkward at times or to obtain practical results when these calculations become almost impossible. One of the most common methods for the simulation of nonstationary Poisson processes is random thinning. Its extension when the intensity becomes random (doubly stochastic Poisson processes) depends on the structure of this intensity. If the random density takes only discrete values, which is a common situation in many physical problems where quantum mechanics introduces discrete states, it is shown that the thinning method can be applied without error. We study in particular the case of binary density and present the kind of theoretical calculations that then become possible. The results of various experiments realized with data obtained by simulation show a fairly good agreement with the theoretical calculations.
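Random thinning itself is straightforward to implement. A minimal sketch for a nonstationary Poisson process, assuming a known upper bound lam_max on the intensity (function and parameter names are illustrative):

```python
import math
import random

def thin_poisson(rate_fn, lam_max, t_end, rng=random):
    """Simulate a nonstationary Poisson process on [0, t_end] by thinning.

    Candidates arrive as a homogeneous Poisson process of rate lam_max
    (which must dominate rate_fn); a candidate at time t is kept with
    probability rate_fn(t) / lam_max.
    """
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)          # next candidate arrival
        if t > t_end:
            return events
        if rng.random() < rate_fn(t) / lam_max:
            events.append(t)

random.seed(1)
ev = thin_poisson(lambda t: 5.0 * (1.0 + math.sin(t)), 10.0, 100.0)
```

For a doubly stochastic process with a discrete-valued intensity, one would first draw the intensity path and then thin conditionally on it, which is why the discrete case admits exact simulation.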

8.
Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly-spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators while results from a case study of temperature data from the Illinois River, USA conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.
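The basic nonparametric bootstrap idea for trend inference can be sketched on a simple linear trend. This is ordinary case resampling for an OLS slope, not the article's multilevel bootstrap; all names and data are illustrative:

```python
import random

def ols_slope(pairs):
    """Ordinary least-squares slope for a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    return sxy / sxx

def bootstrap_slope_ci(pairs, n_boot=2000, alpha=0.05, rng=random):
    """Percentile bootstrap confidence interval for the trend slope,
    resampling whole (x, y) cases with replacement."""
    slopes = sorted(
        ols_slope([rng.choice(pairs) for _ in pairs]) for _ in range(n_boot)
    )
    return slopes[int(n_boot * alpha / 2)], slopes[int(n_boot * (1 - alpha / 2))]

random.seed(3)
data = [(0.5 * i, 0.25 * i + random.gauss(0.0, 0.2)) for i in range(30)]
lo, hi = bootstrap_slope_ci(data)
```

In the multilevel setting one resamples whole clusters (days, years) rather than individual cases, but the distribution-free percentile construction is the same.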

9.
Junbum Lee. Statistics 2017, 51(5): 949-968
In this paper, general quadratic forms of nonstationary, α-mixing time series are considered. Under mixing and moment assumptions, asymptotic normality of these forms is derived. These results do not assume that the variance of the generalized quadratic form has a limit, thus allowing for general types of nonstationarity. However, without well-defined limits, it is not possible to understand the differences in sampling properties between quadratic forms of nonstationary and stationary processes. To understand these differences, the nonstationary process is placed within the locally stationary framework. Under the assumption that the nonstationary process is locally stationary, the asymptotic expectation and variance of the weighted sample covariance of the discrete Fourier transforms (an important class of quadratic forms) are derived and shown to be very different from their stationary counterparts.

10.
The paper concerns the design of nonparametric low-pass filters that have the property of reproducing a polynomial of a given degree. Two approaches are considered. The first is locally weighted polynomial regression (LWPR), which leads to linear filters depending on three parameters: the bandwidth, the order of the fitting polynomial, and the kernel. We find a remarkable linear (hyperbolic) relationship between the cut-off period (frequency) and the bandwidth, conditional on the choices of the order and the kernel, upon which we build the design of a low-pass filter. The second hinges on a generalization of the maximum concentration approach, leading to filters related to discrete prolate spheroidal sequences (DPSS). In particular, we propose a new class of low-pass filters that maximize the concentration over a specified frequency range, subject to polynomial reproducing constraints. The design of generalized DPSS filters depends on three parameters: the bandwidth, the polynomial order, and the concentration frequency. We discuss the properties of the corresponding filters in relation to the LWPR filters, and illustrate their use for the design of low-pass filters by investigating how the three parameters are related to the cut-off frequency.
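The polynomial-reproduction property of LWPR filters is easy to verify numerically. Below is a minimal local linear (degree-one) smoother with a Gaussian kernel, solving the 2x2 weighted normal equations by hand; this is an illustrative sketch, not the paper's filter design:

```python
import math

def local_linear(xs, ys, x0, bandwidth):
    """Locally weighted degree-one polynomial fit at x0 (Gaussian kernel).
    Returns the fitted value, i.e. the intercept of the local line."""
    s0 = s1 = s2 = t0 = t1 = 0.0
    for x, y in zip(xs, ys):
        u = (x - x0) / bandwidth
        w = math.exp(-0.5 * u * u)       # Gaussian kernel weight
        d = x - x0
        s0 += w
        s1 += w * d
        s2 += w * d * d
        t0 += w * y
        t1 += w * d * y
    det = s0 * s2 - s1 * s1              # 2x2 normal-equation determinant
    return (s2 * t0 - s1 * t1) / det

# degree-one polynomial reproduction: a linear input is recovered exactly
xs = [0.1 * i for i in range(50)]
ys = [3.0 * x + 2.0 for x in xs]
fit = local_linear(xs, ys, 2.5, 0.5)
```

Because the local fit is itself a degree-one polynomial, any linear trend passes through the filter unchanged, whatever the kernel and bandwidth; higher-degree reproduction works the same way with higher-order local fits.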

11.
Sequential Monte Carlo methods (also known as particle filters and smoothers) are used for filtering and smoothing in general state-space models. These methods are based on importance sampling. In practice, it is often difficult to find a suitable proposal which allows effective importance sampling. This article develops an original particle filter and an original particle smoother which employ nonparametric importance sampling. The basic idea is to use a nonparametric estimate of the marginally optimal proposal. The proposed algorithms provide a better approximation of the filtering and smoothing distributions than standard methods. The methods’ advantage is most distinct in severely nonlinear situations. In contrast to most existing methods, they allow the use of quasi-Monte Carlo (QMC) sampling. In addition, they do not suffer from weight degeneration rendering a resampling step unnecessary. For the estimation of model parameters, an efficient on-line maximum-likelihood (ML) estimation technique is proposed which is also based on nonparametric approximations. All suggested algorithms have almost linear complexity for low-dimensional state-spaces. This is an advantage over standard smoothing and ML procedures. Particularly, all existing sequential Monte Carlo methods that incorporate QMC sampling have quadratic complexity. As an application, stochastic volatility estimation for high-frequency financial data is considered, which is of great importance in practice. The computer code is partly available as supplemental material.

12.
Particle filters are a powerful and flexible tool for performing inference on state-space models. They involve a collection of samples evolving over time through a combination of sampling and re-sampling steps; the re-sampling step is necessary to avoid weight degeneracy. In several situations of statistical interest it is important to be able to compare the estimates produced by two different particle filters, so being able to couple two particle filter trajectories efficiently is often of paramount importance. In this article, we propose several ways to do so. In particular, we leverage ideas from the optimal transportation literature. Computing the optimal transport map is in general extremely expensive computationally; to deal with this, we introduce computationally tractable approximations to optimal transport couplings. We demonstrate that our resulting algorithms for coupling two particle filter trajectories often perform orders of magnitude more efficiently than standard approaches.
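A minimal bootstrap particle filter, the kind of object such couplings operate on, can be sketched as follows for a linear-Gaussian AR(1) model. The optimal-transport couplings themselves are beyond this sketch, and all names and parameters are illustrative:

```python
import math
import random

def bootstrap_pf(obs, n_part, phi=0.9, q=1.0, r=1.0, rng=random):
    """Bootstrap particle filter for x_k = phi*x_{k-1} + N(0, q),
    y_k = x_k + N(0, r), with multinomial resampling at every step."""
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_part)]
    means = []
    for y in obs:
        # propagate through the state equation
        parts = [phi * x + rng.gauss(0.0, math.sqrt(q)) for x in parts]
        # weight by the Gaussian observation density (up to a constant)
        w = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]
        tot = sum(w)
        means.append(sum(wi * x for wi, x in zip(w, parts)) / tot)
        # multinomial resampling keeps weight degeneracy in check
        parts = rng.choices(parts, weights=w, k=n_part)
    return means

random.seed(2)
x, truth, obs = 0.0, [], []
for _ in range(100):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    truth.append(x)
    obs.append(x + random.gauss(0.0, 1.0))
means = bootstrap_pf(obs, 500)
```

The simplest coupling of two such filters drives both with common random numbers; the hard part, which the article addresses, is matching the two particle clouds through the resampling step, where approximate optimal transport comes in.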

13.
We consider non-parametric estimation of the interarrival-times density of a renewal process. For continuous-time observation, a projection estimator in the orthonormal Laguerre basis is built. Nonstandard decompositions lead to bounds on the mean integrated squared error (MISE), from which rates of convergence on Sobolev-Laguerre spaces are deduced as the length of the observation interval gets large. The more realistic setting of discrete-time observation is more difficult to handle. A first strategy consists in neglecting the discretization error. A more precise strategy aims at taking into account the convolution structure of the data. Under a simplifying 'dead-zone' condition, the corresponding MISE is given for any sampling step. In all three cases, an automatic model selection procedure is described and achieves the best MISE, up to a logarithmic term. The results are illustrated through a simulation study.

14.
A two-parameter discrete gamma distribution is derived from the continuous two-parameter gamma distribution using the general approach for discretizing continuous probability distributions. The one-parameter discrete gamma distribution is obtained as a particular case. A few important distributional and reliability properties of the proposed distribution are examined. Parameter estimation by different methods is discussed, and the performance of the different estimation methods is compared through simulation. Data fitting is carried out to investigate the suitability of the proposed distribution for modeling discrete failure time data and other count data.
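The general discretization construction assigns P(K = k) = S(k) - S(k+1), with S the continuous survival function. It is easy to write down in the shape-one special case, where the gamma reduces to an exponential and the discrete version is geometric; the general gamma case would additionally need the incomplete gamma function (e.g. from a scientific library):

```python
import math

def discrete_exp_pmf(k, rate):
    """P(K = k) = S(k) - S(k+1) for the Exp(rate) survival function S;
    this is the shape = 1 special case of the discrete gamma construction."""
    return math.exp(-rate * k) - math.exp(-rate * (k + 1))
```

By construction the pmf telescopes, so it sums to one over the nonnegative integers, and it coincides with a geometric law with success probability p = 1 - e^(-rate).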

15.
Continuous-time survival models remain the dominant methods of survival analysis. When the survival data are discrete, however, treating them as continuous leads to incorrect results and interpretations. The discrete-time survival model has several advantages in applications: it can handle non-proportional hazards, time-varying covariates and tied observations. Its disadvantages concern the reconstruction of the survival data and the handling of large data sets. Actuaries often rely on complex and large data sets, yet must be quick and efficient for short-term analyses; using the full data set creates inefficient processes and consumes time, so sampling design becomes increasingly important for obtaining reliable results. In this study, we incorporate sampling methods into the discrete-time survival model using a real data set on motor insurance. To assess the efficiency of the proposed methodology, we conduct a simulation study.

16.
Minimax optimal experimental designs are notoriously difficult to study largely because the optimality criterion is not differentiable and there is no effective algorithm for generating them. We apply semi-infinite programming (SIP) to solve minimax design problems for nonlinear models in a systematic way using a discretization based strategy and solvers from the General Algebraic Modeling System (GAMS). Using popular models from the biological sciences, we show our approach produces minimax optimal designs that coincide with the few theoretical and numerical optimal designs in the literature. We also show our method can be readily modified to find standardized maximin optimal designs and minimax optimal designs for more complicated problems, such as when the ranges of plausible values for the model parameters are dependent and we want to find a design to minimize the maximal inefficiency of estimates for the model parameters.

17.
We consider a stochastic process, the homogeneous spatial immigration-death (HSID) process, which is a spatial birth-death process built from two components: (i) an immigration-death (ID) process (a continuous-time Markov chain) and (ii) a probability distribution assigning iid spatial locations to all events. For the ID process, we derive the likelihood function, reduce the likelihood estimation problem to one dimension, and prove consistency and asymptotic normality for the maximum likelihood estimators (MLEs) under a discrete sampling scheme. We additionally prove consistency for the MLEs of HSID processes. In connection with the growth-interaction process, which has an HSID process as its basis, we also fit HSID processes to Scots pine data.

18.
Optimal designs are presented for experiments in which sampling is carried out in stages. There are two Bernoulli populations, and it is assumed that the outcomes of the previous stage are available before the sampling design for the next stage is determined. At each stage, the design specifies the number of observations to be taken and the relative proportion to be sampled from each population. Of particular interest are 2- and 3-stage designs. To illustrate that the designs can be used for experiments of useful sample sizes, they are applied to estimation and optimization problems. Results indicate that, for problems of moderate size, published asymptotic analyses do not always represent the true behavior of the optimal stage sizes, and efficiency may be lost if the analytical results are used instead of the true optimal allocation. The exactly optimal few-stage designs discussed here are generated computationally, and the examples presented indicate the ease with which this approach can be used to solve problems that present analytical difficulties. The algorithms described are flexible and provide for the accurate representation of important characteristics of the problem.

19.
Three sampling designs are considered for estimating the sum of k population means by the sum of the corresponding sample means. These are (a) the optimal design; (b) equal sample sizes from all populations; and (c) sample sizes that render equal variances to all sample means. Designs (b) and (c) are equally inefficient, and may yield a variance up to k times as large as that of (a). Similar results are true when the cost of sampling is introduced, and they depend on the population sampled.
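The advantage of design (a) is quick to verify arithmetically: minimizing Var(sum of sample means) = sum of sigma_i^2 / n_i subject to a fixed total sample size gives the allocation n_i proportional to sigma_i. A small illustrative sketch (function names assumed):

```python
def optimal_allocation(sds, total_n):
    """Allocate total_n observations proportionally to the population
    standard deviations; this minimizes Var(sum of sample means)."""
    s = sum(sds)
    return [total_n * sd / s for sd in sds]

def variance_of_sum(sds, ns):
    """Var(sum of sample means) = sum of sigma_i^2 / n_i."""
    return sum(sd ** 2 / n for sd, n in zip(sds, ns))

sds = [1.0, 2.0, 3.0]
opt = optimal_allocation(sds, 60)        # design (a): [10.0, 20.0, 30.0]
equal = [20.0, 20.0, 20.0]               # design (b)
```

Here design (a) gives variance 0.1 + 0.2 + 0.3 = 0.6 versus 0.05 + 0.2 + 0.45 = 0.7 for equal allocation, illustrating the inefficiency of design (b) on a small example.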

20.