Similar Documents
Found 20 similar documents (search time: 31 ms).
1.

In time series analysis, a signal extraction model (SEM) is used to estimate an unobserved signal component from observed time series data. Since the parameters of the components in an SEM are often unknown in practice, a common approach is to estimate the unobserved signal component by plugging in the maximum likelihood estimates (MLEs) of those parameters. This paper explores an alternative way to estimate the unobserved signal component when the component parameters are unknown. The suggested method makes use of importance sampling (IS) with Bayesian inference: the parameters of the components in the SEM are treated as a random vector, a posterior probability density function of the parameters is computed by Bayesian inference, and the IS method is then applied to integrate out the parameters, yielding estimates of the unobserved signal component that are unconditional on the parameters. The method is illustrated with a real time series, and a Monte Carlo study with four different types of time series models compares its performance with that of the commonly used plug-in method. The study shows that the IS method with Bayesian inference is computationally feasible and robust, and more efficient in terms of mean square errors (MSEs) than the commonly used method.
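As a minimal sketch of the idea (not the paper's model), the code below integrates a smoothing parameter out of a signal estimate by importance sampling: parameter values are drawn from a proposal, weighted by an unnormalized posterior density, and the per-draw signal estimates are averaged. The toy local-level data, the exponential-smoothing signal estimator, the flat prior, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: slowly drifting signal observed with noise.
y = np.cumsum(rng.normal(0, 0.3, 200)) + rng.normal(0, 1.0, 200)

def smooth_and_errors(y, a):
    """Signal estimate given smoothing parameter a (exponential smoothing),
    plus one-step-ahead prediction errors used for the likelihood."""
    s = np.empty_like(y)
    e = np.zeros_like(y)
    s[0] = y[0]
    for t in range(1, len(y)):
        e[t] = y[t] - s[t - 1]        # one-step prediction error
        s[t] = s[t - 1] + a * e[t]
    return s, e

def log_post(a, y):
    """Unnormalized log posterior: Gaussian prediction-error likelihood
    (error variance profiled out) times a flat prior on (0, 1)."""
    if not 0.0 < a < 1.0:
        return -np.inf
    _, e = smooth_and_errors(y, a)
    n = len(y) - 1
    return -0.5 * n * np.log(np.sum(e[1:] ** 2))

# Importance sampling: uniform proposal, posterior-density weights.
a_draws = rng.uniform(0.01, 0.99, 500)
logw = np.array([log_post(a, y) for a in a_draws])
w = np.exp(logw - logw.max())
w /= w.sum()

# Signal estimate with the parameter integrated out.
s_hat = sum(wi * smooth_and_errors(y, a)[0] for wi, a in zip(w, a_draws))
```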

2.
The spectral analysis of Gaussian linear time-series processes is usually based on uni-frequential tools, because the spectral density functions of degree 2 and higher are identically zero and there is no polyspectrum in this case. In finite samples, such an approach does not allow the resolution of closely adjacent spectral lines, except by using autoregressive models of excessively high order in the method of maximum entropy. In this article, multi-frequential periodograms designed for the analysis of discrete and mixed spectra are defined and studied for their properties in finite samples. For a given vector of frequencies ω, the sum of squares of the corresponding trigonometric regression model fitted to a time series by unweighted least squares defines the multi-frequential periodogram statistic I_M(ω). When ω is unknown, it follows from the properties of nonlinear models whose parameters separate (i.e., the frequencies and the cosine and sine coefficients here) that the least-squares estimator of the frequencies is obtained by maximizing I_M(ω). The first-order, second-order and distribution properties of I_M(ω) are established theoretically in finite samples, and are compared with those of Schuster's uni-frequential periodogram statistic. In the multi-frequential periodogram analysis, the least-squares estimator of the frequencies is proved to be theoretically unbiased in finite samples if the number of periodic components of the time series is correctly estimated; here, this number is estimated at the end of a stepwise procedure based on pseudo-F (likelihood ratio) tests. Simulations are used to compare the stepwise procedure involving I_M(ω) with a stepwise procedure using Schuster's periodogram, to study an approximation of the asymptotic theory for the frequency estimators in finite samples in relation to the proximity and signal-to-noise ratio of the periodic components, and to assess the robustness of I_M(ω) against autocorrelation in the analysis of mixed spectra. Overall, the results show an improvement of the new method over the classical approach when spectral lines are adjacent. Finally, three examples with real data illustrate specific aspects of the method, and extensions (i.e., unequally spaced observations, trend modeling, replicated time series, periodogram matrices) are outlined.
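To make the definition concrete, here is a small sketch (our own, assuming an intercept-plus-cosine/sine design) that computes I_M(ω) as the regression sum of squares of the trigonometric model fitted by unweighted least squares, and locates two close frequencies by maximizing it over a grid; all names and settings are illustrative.

```python
import numpy as np

def IM(y, omega):
    """Multi-frequential periodogram: regression sum of squares of the
    trigonometric model with frequencies omega, fitted by unweighted LS."""
    t = np.arange(len(y))
    X = [np.ones_like(t, dtype=float)]
    for w in np.atleast_1d(omega):
        X += [np.cos(w * t), np.sin(w * t)]
    X = np.column_stack(X)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    return np.sum((fitted - y.mean()) ** 2)

# Two adjacent frequencies, recovered by maximizing IM over a 2-D grid.
rng = np.random.default_rng(1)
t = np.arange(200)
y = np.cos(0.60 * t) + np.cos(0.66 * t) + rng.normal(0, 1, 200)
grid = np.linspace(0.4, 0.9, 60)
best = max(((w1, w2) for w1 in grid for w2 in grid if w1 < w2),
           key=lambda ws: IM(y, ws))
print("estimated frequencies:", best)
```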

3.
Bandwidth plays an important role in determining the performance of nonparametric estimators such as the local constant estimator. In this article, we propose a Bayesian approach to bandwidth estimation for local constant estimators of time-varying coefficients in time series models. We establish a large sample theory for the proposed bandwidth estimator and for Bayesian estimators of the unknown parameters involved in the error density. A Monte Carlo simulation study shows that (i) the proposed Bayesian estimators of the bandwidth and of the error-density parameters have satisfactory finite sample performance; and (ii) our proposed Bayesian approach achieves better performance in estimating the bandwidths than the normal reference rule and cross-validation. Moreover, we apply our proposed Bayesian bandwidth estimation method to time-varying coefficient models that explain Okun's law and the relationship between consumption growth and income growth in the U.S. For each model, we also provide calibrated parametric forms of the time-varying coefficients. Supplementary materials for this article are available online.
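For context, a local constant estimator of a time-varying coefficient smooths observations near each time point with a kernel whose bandwidth h controls the fit. The sketch below (our own illustration, not the authors' Bayesian sampler) shows the estimator whose bandwidth the paper's Bayesian method would choose.

```python
import numpy as np

def local_constant_beta(y, x, h):
    """Local constant (Nadaraya-Watson type) estimate of beta(t) in
    y_t = beta(t) * x_t + e_t, Gaussian kernel with bandwidth h."""
    n = len(y)
    tau = np.arange(n) / n
    beta = np.empty(n)
    for i in range(n):
        k = np.exp(-0.5 * ((tau - tau[i]) / h) ** 2)  # kernel weights
        beta[i] = np.sum(k * x * y) / np.sum(k * x * x)
    return beta

# Toy example: slowly varying coefficient recovered with a fixed bandwidth.
rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
true_beta = np.sin(2 * np.pi * np.arange(n) / n)
y = true_beta * x + 0.3 * rng.normal(size=n)
beta_hat = local_constant_beta(y, x, h=0.05)
```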

4.
A new sampling-based Bayesian approach to the long memory stochastic volatility (LMSV) process is presented; the method is motivated by the GPH estimator for fractionally integrated autoregressive moving average (ARFIMA) processes, originally proposed by J. Geweke and S. Porter-Hudak [The estimation and application of long memory time series models, Journal of Time Series Analysis 4 (1983) 221–238]. In this work, we estimate the memory parameter in the Bayesian framework; an estimator is obtained by maximizing the posterior density of the memory parameter. Finally, we compare the GPH estimator and the Bayes estimator by means of a simulation study, and our new approach is illustrated using several stock market indices; the new estimator proves to be relatively stable across the various choices of frequencies used in the regression.
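The GPH estimator the paper builds on regresses the log periodogram at low Fourier frequencies on log(4 sin²(λ/2)); a minimal sketch under standard assumptions (m = √n frequencies, simple periodogram) follows. The paper's Bayesian version instead maximizes a posterior over the memory parameter; this sketch shows only the classical regression.

```python
import numpy as np

def gph_estimate(y, m=None):
    """Classical GPH log-periodogram regression estimate of the
    long-memory parameter d."""
    n = len(y)
    if m is None:
        m = int(np.sqrt(n))               # common bandwidth choice
    # Periodogram at Fourier frequencies lambda_j = 2*pi*j/n, j = 1..m
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(y - y.mean())[1:m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)
    # Regress log I on the GPH regressor; d is minus the slope.
    x = np.log(4 * np.sin(lam / 2) ** 2)
    slope = np.polyfit(x, np.log(I), 1)[0]
    return -slope

rng = np.random.default_rng(3)
print(gph_estimate(rng.normal(size=2048)))   # ~0 for white noise
```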

5.
Quasi-random sequences are known to give efficient numerical integration rules in many Bayesian statistical problems where the posterior distribution can be transformed into a periodic function on the n-dimensional hypercube. From this idea we develop a quasi-random approach to the generation of resamples used for Monte Carlo approximations to bootstrap estimates of bias, variance and distribution functions. We demonstrate a major difference between quasi-random bootstrap resamples, which are generated by deterministic algorithms and have no true randomness, and the usual pseudo-random bootstrap resamples generated by the classical bootstrap approach. Various quasi-random approaches are considered and are shown via a simulation study to result in approximants that are competitive in terms of efficiency when compared with other bootstrap Monte Carlo procedures such as balanced and antithetic resampling.
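As an illustration of the core device, the sketch below draws bootstrap resample indices from a deterministic low-discrepancy (Halton) sequence instead of a pseudo-random generator, then forms the usual Monte Carlo bootstrap estimate of a standard error. The use of scipy's Halton generator and the sample mean as the statistic are our assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(4)
x = rng.exponential(size=50)        # observed sample
n, B = len(x), 256

# Quasi-random resamples: each Halton point in [0,1)^n maps to n indices.
halton = qmc.Halton(d=n, scramble=True, seed=4)
u = halton.random(B)
idx = np.floor(u * n).astype(int)

boot_means = x[idx].mean(axis=1)    # bootstrap replicates of the mean
print("quasi-random bootstrap SE:", boot_means.std(ddof=1))
```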

6.
The Finnish common toad data of Heikkinen and Hogmander are reanalysed using an alternative fully Bayesian model that does not require a pseudolikelihood approximation and an alternative prior distribution for the true presence or absence status of toads in each 10 km×10 km square. Markov chain Monte Carlo methods are used to obtain posterior probability estimates of the square-specific presences of the common toad and these are presented as a map. The results are different from those of Heikkinen and Hogmander and we offer an explanation in terms of the prior used for square-specific presence of the toads. We suggest that our approach is more faithful to the data and avoids unnecessary confounding of effects. We demonstrate how to extend our model efficiently with square-specific covariates and illustrate this by introducing deterministic spatial changes.

7.
8.
A new procedure is proposed for deriving variable bandwidths in univariate kernel density estimation, based upon likelihood cross-validation and an analysis of a Bayesian graphical model. The procedure admits bandwidth selection which is flexible in terms of the amount of smoothing required. In addition, the basic model can be extended to incorporate local smoothing of the density estimate. The method is shown to perform well in both theoretical and practical situations, and we compare our method with those of Abramson (The Annals of Statistics 10: 1217–1223) and Sain and Scott (Journal of the American Statistical Association 91: 1525–1534). In particular, we note that in certain cases, the Sain and Scott method performs poorly even with relatively large sample sizes. We compare various bandwidth selection methods using standard mean integrated square error criteria to assess the quality of the density estimates. We study situations where the underlying density is assumed both known and unknown, and note that in practice, our method performs well when sample sizes are small. In addition, we also apply the methods to real data, and again we believe our methods perform at least as well as existing methods.
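For reference, likelihood cross-validation picks the bandwidth maximizing the summed log leave-one-out density. The sketch below (our own minimal version, with a single fixed bandwidth rather than the paper's variable bandwidths) shows that criterion, which the Bayesian graphical model extends to locally varying smoothing.

```python
import numpy as np

def loo_log_likelihood(x, h):
    """Leave-one-out log likelihood of a Gaussian-kernel density estimate."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d**2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(K, 0.0)                  # leave each point out
    f_loo = K.sum(axis=1) / ((n - 1) * h)
    return np.sum(np.log(f_loo))

rng = np.random.default_rng(5)
x = rng.normal(size=200)
grid = np.linspace(0.05, 1.0, 40)
h_lcv = grid[np.argmax([loo_log_likelihood(x, h) for h in grid])]
print("LCV bandwidth:", h_lcv)
```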

9.
We consider a general class of prior distributions for nonparametric Bayesian estimation which uses finite random series with a random number of terms. A prior is constructed through distributions on the number of basis functions and the associated coefficients. We derive a general result on adaptive posterior contraction rates for all smoothness levels of the target function in the true model by constructing an appropriate 'sieve' and applying the general theory of posterior contraction rates. We apply this general result on several statistical problems such as density estimation, various nonparametric regressions, classification, spectral density estimation and functional regression. The prior can be viewed as an alternative to the commonly used Gaussian process prior, but properties of the posterior distribution can be analysed by relatively simpler techniques. An interesting approximation property of B-spline basis expansion established in this paper allows a canonical choice of prior on coefficients in a random series and allows a simple computational approach without using Markov chain Monte Carlo methods. A simulation study is conducted to show that the accuracy of the Bayesian estimators based on the random series prior and the Gaussian process prior are comparable. We apply the method on Tecator data using functional regression models.

10.
In this paper, we study a new Bayesian approach for the analysis of linearly mixed structures. In particular, we consider the case of hyperspectral images, which have to be decomposed into a collection of distinct spectra, called endmembers, and a set of associated proportions for every pixel in the scene. This problem, often referred to as spectral unmixing, is usually considered on the basis of the linear mixing model (LMM). In unsupervised approaches, the endmember signatures have to be calculated by an endmember extraction algorithm, which generally relies on the supposition that there are pure (unmixed) pixels contained in the image. In practice, this assumption may not hold for highly mixed data, and consequently the extracted endmember spectra differ from the true ones. A way out of this dilemma is to consider the problem under the normal compositional model (NCM). Contrary to the LMM, the NCM treats the endmembers as random Gaussian vectors rather than as deterministic quantities. Existing Bayesian approaches for estimating the proportions under the NCM are restricted to the case where the covariance matrix of the Gaussian endmembers is a multiple of the identity matrix; consequently, that model is not suitable when the variance differs from one spectral channel to another, which is a common phenomenon in practice. In this paper, we first propose a Bayesian strategy for the estimation of the mixing proportions under the assumption of varying variances in the spectral bands. Then we generalize this model to handle the case of a completely unknown covariance structure. For both algorithms, we present Gibbs sampling strategies and compare their performance with other, state-of-the-art unmixing routines on synthetic as well as on real hyperspectral fluorescence spectroscopy data.

11.
We are interested in estimating level sets using a Bayesian non-parametric approach, from an independent and identically distributed sample drawn from an unknown distribution. Under fairly general conditions on the prior, we provide an upper bound on the rate of convergence of the Bayesian level set estimate, via the rate at which the posterior distribution concentrates around the true level set. We then consider, as an application, the log-spline prior in the two-dimensional unit cube. Assuming that the true distribution belongs to a Hölder class, we provide an upper bound on the rate of convergence of the Bayesian level set estimates. We compare our results with existing rates of convergence in the frequentist non-parametric literature: the Bayesian level set estimator proves to be competitive and is also easy to compute, which is of no small importance. A simulation study is given as an illustration.

12.
It may sometimes be clear from background knowledge that a population under investigation proportionally consists of a known number of subpopulations, whose distributions belong to the same, yet unknown, family. While a parametric family is commonly used in practice, one can also consider some nonparametric families to avoid distributional misspecification. In this article, we propose a solution using a mixture-based nonparametric family for the component distribution in a finite mixture model as opposed to some recent research that utilizes a kernel-based approach. In particular, we present a semiparametric maximum likelihood estimation procedure for the model parameters and tackle the bandwidth parameter selection problem via some popular means for model selection. Empirical comparisons through simulation studies and three real data sets suggest that estimators based on our mixture-based approach are more efficient than those based on the kernel-based approach, in terms of both parameter estimation and overall density estimation.

13.
In this paper, we discuss a progressively censored inverted exponentiated Rayleigh distribution. Estimation of the unknown parameters is considered under progressive censoring using maximum likelihood and Bayesian approaches. Bayes estimators of the unknown parameters are derived with respect to different symmetric and asymmetric loss functions using gamma prior distributions, and an importance sampling procedure is used to compute these estimates. Further, highest posterior density intervals for the unknown parameters are constructed, and for comparison purposes bootstrap intervals are also obtained. Prediction of future observations is studied in one- and two-sample situations from classical and Bayesian viewpoints. We further establish optimum censoring schemes using the Bayesian approach. Finally, we conduct a simulation study to compare the performance of the proposed methods and analyse two real data sets for illustration purposes.
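To illustrate the importance-sampling step in a Bayesian estimation like this, here is a sketch: draw parameter values from the gamma prior, weight by the likelihood, and form the posterior-mean (squared-error-loss) Bayes estimate. The exponential likelihood and complete (uncensored) data are simplifying assumptions, not the paper's inverted exponentiated Rayleigh model under progressive censoring.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.exponential(scale=1 / 2.0, size=40)   # data, true rate 2.0

# Importance sampling with the Gamma(a, b) prior as the proposal:
# the weights reduce to the likelihood, and the Bayes estimator under
# squared error loss is the weighted mean of the draws.
a, b = 2.0, 1.0
theta = rng.gamma(a, 1 / b, size=5000)
loglik = len(x) * np.log(theta) - theta * x.sum()
w = np.exp(loglik - loglik.max())
w /= w.sum()
print("Bayes estimate of rate:", np.sum(w * theta))
```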

14.
In this article, static light scattering (SLS) measurements are processed to estimate the particle size distribution of particle systems, incorporating prior information obtained from an alternative experimental technique: scanning electron microscopy (SEM). For this purpose we propose two Bayesian schemes (one parametric and one non-parametric) to solve the stated light scattering problem, and we use the results to summarize some features of the Bayesian approach in the context of inverse problems. These features include the improvement of the results when useful prior information from an alternative experiment is used instead of a non-informative prior, as occurs in deterministic maximum likelihood estimation. This improvement is shown in terms of the accuracy and precision of the corresponding results, and also in terms of reducing the effect of multiple minima by including significant information in the optimization. Both Bayesian schemes are implemented using Markov chain Monte Carlo methods. They have been developed on the basis of the Metropolis–Hastings (MH) algorithm using Matlab® and are tested on simulated and experimental examples of concentrated and semi-concentrated particles. In the simulated examples, SLS measurements were generated using a rigorous model, while the inversion stage was solved using an approximate model in both schemes, and also using the rigorous model in the parametric scheme. Priors from SEM micrographs were likewise considered in both the simulated and experimental cases; the simulated priors were obtained using a Monte Carlo routine. In addition to these features of the Bayesian approach, some other topics are discussed, such as regularization and implementation issues of the proposed schemes, among which we highlight the selection of the parameters used in the MH algorithm.
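Since both schemes rest on the Metropolis–Hastings algorithm, a generic random-walk MH sketch may help fix ideas; the toy target and the proposal scale are illustrative assumptions, not the paper's SLS-specific samplers.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings with a Gaussian proposal."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    lp = log_target(x)
    chain = []
    for _ in range(n_iter):
        prop = x + step * rng.normal(size=x.shape)
        lp_prop = log_target(prop)
        # Symmetric proposal, so the acceptance ratio is the density ratio.
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# Toy target: standard bivariate Gaussian posterior.
chain = metropolis_hastings(lambda z: -0.5 * np.sum(z**2), x0=[3.0, -3.0])
print(chain[1000:].mean(axis=0))   # ~ [0, 0] after burn-in
```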

15.
In this paper we develop a Bayesian approach to detecting unit roots in autoregressive panel data models. Our method is based on the comparison of stationary autoregressive models, with and without individual deterministic trends, to their counterpart models with a unit autoregressive root. This is done under cross-sectional dependence among the error terms of the panel units. Simulation experiments are conducted with the aim of assessing the performance of the suggested inferential procedure, and of investigating whether the Bayesian model comparison approach can distinguish unit root models from stationary autoregressive models under cross-sectional dependence. The approach is applied to real exchange rate series for a panel of the G7 countries and to a panel of US nominal interest rates data.

16.
A new variational Bayesian (VB) algorithm, split and eliminate VB (SEVB), for modeling data via a Gaussian mixture model (GMM) is developed. This new algorithm makes use of component splitting in a way that is more appropriate for analyzing a large number of highly heterogeneous spiky spatial patterns with weak prior information than existing VB-based approaches. SEVB is a highly computationally efficient approach to Bayesian inference and, like any VB-based algorithm, it can perform model selection and parameter value estimation simultaneously. A significant feature of our algorithm is that the fitted number of components is not limited by the initial proposal, giving increased modeling flexibility. We introduce two types of split operation in addition to proposing a new goodness-of-fit measure for evaluating mixture models, and we evaluate their usefulness through empirical studies. In addition, we illustrate the utility of our new approach in an application on modeling human mobility patterns. This application involves large volumes of highly heterogeneous spiky data; it is difficult to model this type of data well using the standard VB approach, as it is too restrictive and lacks the required flexibility. Empirical results suggest that our algorithm has also improved upon the goodness-of-fit that would have been achieved using the standard VB method, and that it is more robust to various initialization settings.

17.
This paper describes a Bayesian approach to make inference for risk reserve processes with an unknown claim-size distribution. A flexible model based on mixtures of Erlang distributions is proposed to approximate the special features frequently observed in insurance claim sizes, such as long tails and heterogeneity. A Bayesian density estimation approach for the claim sizes is implemented using reversible jump Markov chain Monte Carlo methods. An advantage of the considered mixture model is that it belongs to the class of phase-type distributions, and thus explicit evaluations of the ruin probabilities are possible. Furthermore, from a statistical point of view, the parametric structure of the mixtures of the Erlang distribution offers some advantages compared with the whole over-parametrized family of phase-type distributions. Given the observed claim arrivals and claim sizes, we show how to estimate the ruin probabilities, as a function of the initial capital, and predictive intervals that give a measure of the uncertainty in the estimations.

18.
A uniform shrinkage prior (USP) distribution on the unknown variance component of a random-effects model is known to produce good frequency properties. The USP has a parameter that determines the shape of its density function, but whether the USP maintains these good frequency properties regardless of the choice of shape parameter has not been investigated. We investigate which choice of the shape parameter of the USP produces Bayesian interval estimates of random effects that meet their nominal confidence levels better than several existing choices in the literature. Using univariate and multivariate Gaussian hierarchical models, we show that the USP achieves its best frequency properties when its shape parameter makes the USP behave similarly to an improper flat prior distribution on the unknown variance component.
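One common form of the USP places a uniform distribution on the shrinkage factor B = s/(s + τ²), which induces the density p(τ²) = s/(s + τ²)² on the variance component; the sketch below samples τ² this way. This parameterization is our assumption of the standard form, with s standing in for the shape parameter discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(8)
s = 1.5                        # assumed shape parameter of the USP
B = rng.uniform(size=100_000)  # uniform shrinkage factor B = s/(s + tau2)
tau2 = s * (1 - B) / B         # induced draws of the variance component
# Induced density: p(tau2) = s / (s + tau2)^2, heavy-tailed; as s grows,
# the prior flattens out, mimicking an improper flat prior on tau2.
print("median of tau2 draws:", np.median(tau2))   # ~ s
```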

19.
Recently, mixture distributions have become more and more popular in many scientific fields. Statistical computation and analysis of mixture models, however, are extremely complex due to the large number of parameters involved. Both EM algorithms for likelihood inference and MCMC procedures for Bayesian analysis have various difficulties in dealing with mixtures with an unknown number of components. In this paper, we propose a direct sampling approach to the computation of Bayesian finite mixture models with a varying number of components. This approach requires only knowledge of the density function up to a multiplicative constant. It is easy to implement, numerically efficient and very practical in real applications. A simulation study shows that it performs quite satisfactorily on relatively high dimensional distributions. A well-known genetic data set is used to demonstrate the simplicity of this method and its power for the computation of high dimensional Bayesian mixture models.

20.
Various nonparametric approaches for Bayesian spectral density estimation of stationary time series have been suggested in the literature, mostly based on the Whittle likelihood approximation. A generalization of this approximation involving a nonparametric correction of a parametric likelihood has been proposed in the literature with a proof of posterior consistency for spectral density estimation in combination with the Bernstein–Dirichlet process prior for Gaussian time series. In this article, we will extend the posterior consistency result to non-Gaussian time series by employing a general consistency theorem for dependent data and misspecified models. As a special case, posterior consistency for the spectral density under the Whittle likelihood is also extended to non-Gaussian time series. Small sample properties of this approach are illustrated with several examples of non-Gaussian time series.
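The Whittle approximation underlying these approaches replaces the exact Gaussian likelihood with a sum over Fourier frequencies of log f(λ_j) + I(λ_j)/f(λ_j). A minimal sketch (our own, using an AR(1) spectral density and the usual normalizing constants) is below.

```python
import numpy as np

def whittle_loglik(y, f_theta):
    """Whittle log-likelihood: -sum_j [log f(l_j) + I(l_j)/f(l_j)]
    over nonzero Fourier frequencies below pi."""
    n = len(y)
    m = (n - 1) // 2
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(y - y.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    f = f_theta(lam)
    return -np.sum(np.log(f) + I / f)

# AR(1) spectral density f(l) = s2 / (2*pi*|1 - phi*exp(-il)|^2).
def ar1_spec(phi, s2):
    return lambda lam: s2 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * lam)) ** 2)

rng = np.random.default_rng(6)
y = rng.normal(size=512)
phis = np.linspace(-0.9, 0.9, 37)
phi_hat = phis[np.argmax([whittle_loglik(y, ar1_spec(p, 1.0)) for p in phis])]
print("Whittle AR(1) estimate:", phi_hat)   # ~0 for white noise
```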

