Similar articles (20 results found, 31 ms)
1.
The Kim filter (KF) approximation is widely used for likelihood calculation in dynamic linear models with Markov regime-switching parameters. However, despite its popularity, its approximation error has not yet been examined rigorously. This study therefore investigates the reliability of the KF approximation for maximum likelihood (ML) and Bayesian estimation. To measure the approximation error, we compare the outcomes of the KF method with those of the auxiliary particle filter (APF). The APF is a numerical method that requires a longer computing time, but its numerical error can be made arbitrarily small by increasing the simulation size. According to our extensive simulation and empirical studies, the likelihood values obtained from the KF approximation are practically identical to those of the APF. Furthermore, we show that the KF method is reliable, particularly when regimes are persistent and the sample size is small. From the Bayesian perspective, we show that the KF method improves the efficiency of posterior simulation. This study contributes to the literature by providing evidence that justifies the use of the KF method in both ML and Bayesian estimation.
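The discrete mixing step at the heart of the Kim filter can be illustrated with the simpler Hamilton filter for a Markov-switching mean (no linear state, so no collapsing approximation is needed). This is a minimal sketch; the model, parameter values, and function names below are illustrative assumptions, not taken from the article.

```python
import math
import random

def hamilton_loglik(y, mu, sigma, P):
    """Exact log-likelihood of y_t = mu[S_t] + N(0, sigma^2) noise,
    where S_t is a 2-state Markov chain with transition matrix P."""
    xi = [0.5, 0.5]                       # filtered state probabilities
    loglik = 0.0
    for yt in y:
        # prediction step: xi_pred[j] = sum_i xi[i] * P[i][j]
        xi_pred = [xi[0] * P[0][j] + xi[1] * P[1][j] for j in (0, 1)]
        # conditional densities of y_t under each regime
        dens = [math.exp(-0.5 * ((yt - m) / sigma) ** 2) /
                (sigma * math.sqrt(2 * math.pi)) for m in mu]
        f = xi_pred[0] * dens[0] + xi_pred[1] * dens[1]
        loglik += math.log(f)
        # update step (Bayes rule)
        xi = [xi_pred[j] * dens[j] / f for j in (0, 1)]
    return loglik

# simulate a persistent two-regime series (illustrative parameters)
rng = random.Random(1)
P = [[0.95, 0.05], [0.05, 0.95]]
mu_true, sigma = (-1.0, 2.0), 1.0
s, y = 0, []
for _ in range(500):
    s = s if rng.random() < 0.95 else 1 - s
    y.append(mu_true[s] + rng.gauss(0.0, sigma))

print(hamilton_loglik(y, mu_true, sigma, P))
```

The full Kim filter adds a Kalman step per regime and collapses the regime-specific state posteriors after each update; the recursion above is the regime-probability part that both share.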

2.
This article proposes computing sensitivities of upper tail probabilities of random sums by the saddlepoint approximation. The sensitivity considered is the derivative of the upper tail probability with respect to the parameter of the summation-index distribution. Random sums with Poisson- or geometric-distributed summation indices and Gamma- or Weibull-distributed summands are considered. The score method with importance sampling is considered as an alternative approximation. Numerical studies show that both the saddlepoint approximation and the score method with importance sampling are very accurate, but the saddlepoint approximation is substantially faster. The suggested saddlepoint approximation can thus be used conveniently in a variety of scientific problems.
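As a sketch of the kind of computation involved, the following approximates the upper tail probability of a random sum with a Poisson summation index and Gamma summands via the Lugannani-Rice saddlepoint formula, and checks it against plain Monte Carlo. The parameter values are illustrative assumptions; the sensitivity in the Poisson rate discussed in the article could then be obtained by differencing or by differentiating the same formula.

```python
import math
import random

def sp_tail(lam, alpha, beta, y):
    """Lugannani-Rice saddlepoint approximation of P(S > y) for a
    compound Poisson sum S of Gamma(alpha, rate beta) summands.
    CGF: K(s) = lam * ((1 - s/beta)^(-alpha) - 1), s < beta."""
    # the saddlepoint equation K'(s) = y solves in closed form here
    s_hat = beta * (1.0 - (alpha * lam / (beta * y)) ** (1.0 / (alpha + 1.0)))
    K = lam * ((1.0 - s_hat / beta) ** (-alpha) - 1.0)
    K2 = lam * alpha * (alpha + 1.0) / beta ** 2 * (1.0 - s_hat / beta) ** (-alpha - 2.0)
    w = math.copysign(math.sqrt(2.0 * (s_hat * y - K)), s_hat)
    u = s_hat * math.sqrt(K2)
    phi = math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(w / math.sqrt(2.0)))
    return 1.0 - Phi + phi * (1.0 / u - 1.0 / w)

# illustrative check against Monte Carlo: lam=2, Gamma(2, 1) summands, y=8
lam, alpha, beta, y = 2.0, 2.0, 1.0, 8.0
rng = random.Random(7)
n_sim = 200_000
hits = 0
for _ in range(n_sim):
    # draw N ~ Poisson(lam) by inversion, then sum N Gamma variates
    n, p, c, u = 0, math.exp(-lam), math.exp(-lam), rng.random()
    while u > c:
        n += 1
        p *= lam / n
        c += p
    s = sum(rng.gammavariate(alpha, 1.0 / beta) for _ in range(n))
    hits += s > y
print(sp_tail(lam, alpha, beta, y), hits / n_sim)
```

Both numbers should agree to roughly Monte Carlo precision, which is the pattern of accuracy the abstract reports.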

3.
A drawback of a new method for integrating abundance and mark–recapture–recovery data is the need to combine likelihoods describing the different data sets. Often these likelihoods will be formed by using specialist computer programs, which is an obstacle to the joint analysis. This difficulty is easily circumvented by the use of a multivariate normal approximation. We show that it is only necessary to make the approximation for the parameters of interest in the joint analysis. The approximation is evaluated on data sets for two bird species and is shown to be efficient and accurate.
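The idea can be sketched on a toy problem with two binomial data sets sharing a success probability p: each component likelihood is replaced by a (here one-dimensional) normal approximation in its own MLE, and the quadratic pieces are combined instead of multiplying the exact likelihoods. The data values below are illustrative assumptions.

```python
def normal_approx_combine(data):
    """Combine likelihood components via a normal approximation:
    each component contributes a quadratic -(p - p_hat)^2 / (2 v),
    so the combined maximizer is the inverse-variance weighted mean."""
    num = den = 0.0
    for x, n in data:
        p_hat = x / n
        v = p_hat * (1.0 - p_hat) / n    # inverse observed information
        num += p_hat / v
        den += 1.0 / v
    return num / den

data = [(37, 120), (61, 180)]            # (successes, trials) per data set
approx = normal_approx_combine(data)
exact = sum(x for x, _ in data) / sum(n for _, n in data)  # exact joint MLE
print(approx, exact)
```

The combined approximate maximizer is nearly identical to the exact joint MLE, which is what makes the approximation a practical substitute when the component likelihoods come from separate programs.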

4.
Hidden Markov random field models provide an appealing representation of images and other spatial problems. The drawback is that inference is not straightforward for these models as the normalisation constant for the likelihood is generally intractable except for very small observation sets. Variational methods are an emerging tool for Bayesian inference and they have already been successfully applied in other contexts. Focusing on the particular case of a hidden Potts model with Gaussian noise, we show how variational Bayesian methods can be applied to hidden Markov random field inference. To tackle the obstacle of the intractable normalising constant for the likelihood, we explore alternative estimation approaches for incorporation into the variational Bayes algorithm. We consider a pseudo-likelihood approach as well as the more recent reduced dependence approximation of the normalisation constant. To illustrate the effectiveness of these approaches we present empirical results from the analysis of simulated datasets. We also analyse a real dataset and compare results with those of previous analyses as well as those obtained from the recently developed auxiliary variable MCMC method and the recursive MCMC method. Our results show that the variational Bayesian analyses can be carried out much faster than the MCMC analyses and produce good estimates of model parameters. We also found that the reduced dependence approximation of the normalisation constant outperformed the pseudo-likelihood approximation in our analysis of real and synthetic datasets.

5.
We propose an efficient and robust method for variance function estimation in semiparametric longitudinal data analysis. The method utilizes a local log‐linear approximation for the variance function and adopts a generalized estimating equation approach to account for within subject correlations. We show theoretically and empirically that our method outperforms estimators using working independence that ignores the correlations. The Canadian Journal of Statistics 39: 656–670; 2011. © 2011 Statistical Society of Canada

6.
For the semiparametric varying-coefficient regression model, we construct a new test statistic for spatial correlation and derive an approximate formula for its p-value using a third-moment approximation. Monte Carlo simulations show that the statistic detects spatial correlation with high accuracy and reliability. We also examine the power of the test when the error terms follow different distributions, demonstrating the robustness of the method. In addition, we provide a bootstrap version of the test statistic and simulation results on its empirical size.

7.
杨利雄, 张春丽. 《统计研究》 (Statistical Research), 2014, 31(11): 96–100
In general, the location of a structural break in the data is unknown, or the existence of a break cannot be predicted accurately in advance. Enders and Lee (2009, 2011)[1][2] showed that a low-frequency Fourier transformation can handle structural breaks (breaks of heterogeneous form) in unit-root tests fairly precisely. Within a cointegration framework, this paper uses the Fourier transformation to handle structural breaks in the deterministic trend of the cointegrating model, examines the convergence rates of the model parameters, and re-derives the unequal-variance test. The convergence rate of the parameters of the Fourier approximation terms is: . Monte Carlo simulations show that, in the absence of prior knowledge about structural breaks, the low-frequency Fourier transformation handles breaks in the deterministic trend of the cointegrating regression well and significantly improves the estimation efficiency of the cointegrating vector. Applying the improved method to re-examine how closely the Chinese stock market co-moves with international markets, the empirical results more strongly support the conclusion that the diversification benefit Chinese investors obtain from investing in the Australian market is significantly weaker than that from other international markets.

8.
Estimation of finite mixture models when the mixing distribution support is unknown is an important problem. This article gives a new approach based on a marginal likelihood for the unknown support. Motivated by a Bayesian Dirichlet prior model, a computationally efficient stochastic approximation version of the marginal likelihood is proposed and large-sample theory is presented. By restricting the support to a finite grid, a simulated annealing method is employed to maximize the marginal likelihood and estimate the support. Real and simulated data examples show that this novel stochastic approximation and simulated annealing procedure compares favorably with existing methods.

9.
In this article we show the effectiveness and accuracy of the test statistic based on the exponent of the saddlepoint approximation to the density of M-estimators, proposed by Robinson, Ronchetti and Young (1999), for testing simultaneous hypotheses on the mean and the variance of a wrapped normal distribution. We base this test statistic on the trigonometric method of moments estimator proposed by Gatto and Jammalamadaka (1999b), which admits the M-estimator representation necessary for this test. The test statistic has an approximate chi-squared distribution, asymptotically up to the second order, and the high accuracy of this approximation is shown by numerical simulations.

10.
This article addresses the problem of detecting repeats, as used in the comparison of significant repeats in sequences. The case of self-overlapping leftmost repeats in large sequences generated by a homogeneous stationary Markov chain has not been treated in the literature. In this work, we are interested in approximating the distribution of the number of self-overlapping leftmost sufficiently long repeats in a homogeneous stationary Markov chain. Using the Chen–Stein method, we show that this distribution can be approximated by the Poisson distribution. Moreover, we show that the approximation extends to the case where the sequences are generated by an mth-order Markov chain.

11.
We propose a sequential method to estimate monotone convex functions that consists of: (i) monotone regression, via solving a constrained least squares (LS) problem, and (ii) convexification of the monotone regression estimate, via solving a uniform approximation problem with associated constraints. We show that this method is faster than the constrained LS method, and that the ratio of computation times increases with the data size. Moreover, we show that, under an appropriate smoothness condition, the uniform convergence rate achieved by the proposed method is nearly comparable to the best achievable rate for a non-parametric estimate that ignores the shape constraint. Simulation studies show that our method is comparable to the constrained LS method in estimation error. We illustrate the method by analysing ground water level data from wells in Korea.
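A minimal sketch of the two-stage idea, with the paper's uniform-approximation convexification step replaced (as a simplifying assumption) by the greatest convex minorant of the monotone fit:

```python
import random

def pava(y):
    """Step (i): monotone (non-decreasing) least-squares fit
    via the pool-adjacent-violators algorithm."""
    sums, cnts = [], []
    for v in y:
        sums.append(float(v)); cnts.append(1)
        # merge adjacent blocks while their means violate monotonicity
        while len(sums) > 1 and sums[-2] / cnts[-2] > sums[-1] / cnts[-1]:
            s_last, c_last = sums.pop(), cnts.pop()
            sums[-1] += s_last
            cnts[-1] += c_last
    fit = []
    for s, c in zip(sums, cnts):
        fit.extend([s / c] * c)
    return fit

def convexify(x, y):
    """Step (ii): replace the monotone fit by its greatest convex
    minorant (lower convex hull), interpolated back onto the grid x."""
    hull = [0]
    for i in range(1, len(x)):
        hull.append(i)
        while len(hull) >= 3:
            a, b, c = hull[-3], hull[-2], hull[-1]
            # drop the middle point when slopes fail to increase
            if (y[b] - y[a]) * (x[c] - x[b]) >= (y[c] - y[b]) * (x[b] - x[a]):
                del hull[-2]
            else:
                break
    out = [0.0] * len(x)
    for a, b in zip(hull, hull[1:]):
        for j in range(a, b + 1):
            t = (x[j] - x[a]) / (x[b] - x[a])
            out[j] = y[a] + t * (y[b] - y[a])
    return out

rng = random.Random(0)
x = [i / 50.0 for i in range(51)]
data = [xi ** 2 + rng.gauss(0.0, 0.05) for xi in x]   # noisy convex increasing
est = convexify(x, pava(data))
```

Because the monotone fit is non-decreasing, its greatest convex minorant is automatically both convex and non-decreasing, so the two stages compose cleanly.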

12.
Estimation and prediction in generalized linear mixed models are often hampered by intractable high dimensional integrals. This paper provides a framework to solve this intractability, using asymptotic expansions when the number of random effects is large. To that end, we first derive a modified Laplace approximation when the number of random effects is increasing at a lower rate than the sample size. Second, we propose an approximate likelihood method based on the asymptotic expansion of the log-likelihood using the modified Laplace approximation which is maximized using a quasi-Newton algorithm. Finally, we define the second order plug-in predictive density based on a similar expansion to the plug-in predictive density and show that it is a normal density. Our simulations show that in comparison to other approximations, our method has better performance. Our methods are readily applied to non-Gaussian spatial data and as an example, the analysis of the rhizoctonia root rot data is presented.
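The basic first-order Laplace step that the paper refines can be sketched on a one-dimensional Poisson-normal random-effect integral; the modified expansion in the paper adds higher-order corrections as the number of random effects grows. The model and parameter values below are illustrative assumptions.

```python
import math

def integrand(u, y, beta, tau):
    """Poisson(y | exp(beta+u)) times N(u | 0, tau^2) density."""
    lam = math.exp(beta + u)
    log_f = y * (beta + u) - lam - math.lgamma(y + 1)
    log_phi = -0.5 * (u / tau) ** 2 - math.log(tau * math.sqrt(2 * math.pi))
    return math.exp(log_f + log_phi)

def laplace(y, beta, tau):
    """First-order Laplace approximation of the marginal likelihood
    integral over the random effect u."""
    u = 0.0
    for _ in range(50):                   # Newton steps for the mode of h(u)
        g = y - math.exp(beta + u) - u / tau ** 2
        h = -math.exp(beta + u) - 1.0 / tau ** 2
        u -= g / h
    h = -math.exp(beta + u) - 1.0 / tau ** 2
    return integrand(u, y, beta, tau) * math.sqrt(2 * math.pi / -h)

# check against a fine Riemann sum over a wide grid (illustrative values)
y, beta, tau = 3, 0.5, 0.8
grid = [-8.0 + 16.0 * i / 4000 for i in range(4001)]
quad = sum(integrand(u, y, beta, tau) for u in grid) * (16.0 / 4000)
print(laplace(y, beta, tau), quad)
```

Even this unmodified version is typically accurate to a few percent on a one-dimensional integral; the paper's contribution concerns how the error behaves when many such integrals (one per random effect) accumulate.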

13.
In this article, we propose a new method of imputation that makes use of higher-order moments of an auxiliary variable when imputing missing values. The mean, ratio, and regression methods of imputation are shown to be special cases of, and less efficient than, the newly developed method. Analytical comparisons show that the first-order mean squared error approximation for the proposed method is always smaller than that for the regression method of imputation. Finally, the proposed higher-order-moments-based imputation method is applied to a real dataset.
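The regression method of imputation, which the article treats as a benchmark special case, can be sketched as follows; the proposed estimator would additionally bring in higher-order moments of the auxiliary variable x. The data values are illustrative assumptions.

```python
def regression_impute(x, y):
    """Impute missing y-values (None) by the least-squares line fitted
    on the responding units, using the auxiliary variable x."""
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    n = len(obs)
    mx = sum(xi for xi, _ in obs) / n
    my = sum(yi for _, yi in obs) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in obs)
    sxx = sum((xi - mx) ** 2 for xi, _ in obs)
    b = sxy / sxx
    return [yi if yi is not None else my + b * (xi - mx)
            for xi, yi in zip(x, y)]

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, None, 8.1, None, 12.0]    # y roughly 2x with noise
print(regression_impute(x, y))
```

When the observed pairs are exactly linear, the imputed values recover the true responses, which is the sense in which the regression method is a special case of moment-based imputation.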

14.
The empirical best linear unbiased prediction approach is a popular method for the estimation of small area parameters. However, estimating a reliable mean squared prediction error (MSPE) for the empirical best linear unbiased predictor (EBLUP) is a complicated process. In this paper we study the use of resampling methods for MSPE estimation of the EBLUP. A cross-sectional and time-series stationary small area model is used to provide estimates in small areas. Under this model, a parametric bootstrap procedure and a weighted jackknife method are introduced. A Monte Carlo simulation study is conducted to compare the performance of the different resampling-based measures of uncertainty of the EBLUP with the analytical approximation. Our empirical results show that the proposed resampling-based approaches perform better than the analytical approximation in several situations, although in some cases they tend to underestimate the true MSPE of the EBLUP in a larger number of small areas.

15.
This article describes a method for computing approximate statistics for large data sets, when exact computations may not be feasible. Such situations arise in applications such as climatology, data mining, and information retrieval (search engines). The key to our approach is a modular approximation to the cumulative distribution function (cdf) of the data. Approximate percentiles (as well as many other statistics) can be computed from this approximate cdf. This enables the reduction of a potentially overwhelming computational exercise into smaller, manageable modules. We illustrate the properties of this algorithm using a simulated data set. We also examine the approximation characteristics of the approximate percentiles, using a von Mises functional type approach. In particular, it is shown that the maximum error between the approximate cdf and the actual cdf of the data is never more than 1% (or any other preset level). We also show that under assumptions of underlying smoothness of the cdf, the approximation error is much lower in an expected sense. Finally, we derive bounds for the approximation error of the percentiles themselves. Simulation experiments show that these bounds can be quite tight in certain circumstances.
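One simple modular construction in this spirit (an assumption for illustration, not the article's exact algorithm): summarize each data module by every k-th order statistic, then read approximate percentiles off the merged summaries. The rank error is bounded by roughly k per module, which plays the role of the preset error level.

```python
import random

def summarize(chunk, k):
    """Reduce a module to every k-th order statistic."""
    s = sorted(chunk)
    return s[::k]

def approx_percentile(summaries, p):
    """Approximate p-th percentile from the merged module summaries;
    each retained point stands in for about k original observations."""
    merged = sorted(v for s in summaries for v in s)
    idx = min(int(p * len(merged)), len(merged) - 1)
    return merged[idx]

rng = random.Random(42)
data = [rng.random() for _ in range(100_000)]
chunks = [data[i:i + 10_000] for i in range(0, 100_000, 10_000)]
summaries = [summarize(c, k=100) for c in chunks]
median_approx = approx_percentile(summaries, 0.5)
print(median_approx)
```

With 10 modules and k = 100 the total rank error is at most about 1% of the data, so the approximate median of uniform draws lands very close to 0.5 while only 1,000 of the 100,000 values are ever held together.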

16.
We analyse a naive method that uses the sample mean and sample variance to test the convergence of a simulation. We find the method is valid for independently and identically distributed samples, as well as for correlated samples whose correlation dies out over long periods. Our simulation results on approximating a bankruptcy probability (BP) show that the naive method compares well with the Half-Width, Geweke and CUSUM methods in terms of accuracy and time cost. There is clear evidence of variance reduction from tail-distribution sampling for all convergence test methods when the true BP is very low.
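A sketch of such a naive mean-variance check (the details here are assumptions): declare convergence when the means of the first and second halves of the output agree relative to their standard errors. This is valid for i.i.d. draws and, as the abstract notes, for correlated draws whose correlation dies out, though the independence-based standard errors are then optimistic.

```python
import math
import random

def naive_converged(samples, z=5.0):
    """Compare first-half and second-half sample means using their
    (independence-based) standard errors."""
    n = len(samples) // 2
    a, b = samples[:n], samples[n:2 * n]
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    se = math.sqrt(va / n + vb / n)
    return abs(ma - mb) < z * se

rng = random.Random(3)
iid = [rng.gauss(0.0, 1.0) for _ in range(20_000)]
drift = [rng.gauss(0.0, 1.0) + 0.001 * t for t in range(20_000)]
print(naive_converged(iid), naive_converged(drift))
```

A stationary i.i.d. stream passes the check while a slowly drifting (non-converged) stream fails it, which is the behaviour the diagnostic is meant to capture.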

17.
We establish the one-term Edgeworth expansion for various statistics related to the Cox semiparametric regression model when the covariate is one-dimensional and the observations are i.i.d. We show that the bootstrap approximation method is second-order correct, so second-order-correct estimates of the sampling distribution can be obtained without Monte Carlo simulation. We pay special attention to the Studentized versions of the statistics and show that their distributions differ from those of the original statistics to order n

18.
In first-level analyses of functional magnetic resonance imaging (fMRI) data, adjustments for temporal correlation, such as a Satterthwaite approximation or a prewhitening method, are usually implemented in the univariate model to keep the nominal test level. In doing so, the temporal correlation structure of the data is estimated, assuming an autoregressive process of order one. We show that this is applicable in multivariate approaches too, more precisely in the so-called stabilized multivariate test statistics. Furthermore, we propose a block-wise permutation method including a random shift that renders an approximation of the temporal correlation structure unnecessary while still approximately keeping the nominal test level despite the dependence of the sample elements. Although the intentions are different, a comparison of the multivariate methods with the multiple univariate ones shows that the global approach may achieve advantages if applied to suitable regions of interest. This is illustrated using an example from fMRI studies.

19.
A modified normal-based approximation for calculating the percentiles of a linear combination of independent random variables is proposed. This approximation is applicable in situations where expectations and percentiles of the individual random variables can be readily obtained. The merits of the approximation are evaluated for the chi-square and beta distributions using Monte Carlo simulation. An approximation to the percentiles of the ratio of two independent random variables is also given. Solutions based on the approximations are given for some classical problems such as interval estimation of the normal coefficient of variation, the survival probability, and the difference between or the ratio of two binomial proportions, among others. Furthermore, an approximation to the percentiles of a doubly noncentral F distribution is given. For all the problems considered, the approximation provides simple, satisfactory solutions. Two examples are given to show applications of the approximation.
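One version of such a normal-based construction for positive weights and upper percentiles, stated here as our reading rather than the article's exact formula, centers the percentile at the mean of the combination and takes the spread from the marginal percentiles. The chi-square example and its parameter values are illustrative assumptions.

```python
import math
import random

def mna_upper_percentile(coefs, means, percs):
    """Normal-based approximation to the 100p-th percentile (p >= .5) of
    sum_i c_i X_i, built from the means and marginal 100p-th percentiles
    of the X_i, for positive coefficients c_i."""
    center = sum(c * m for c, m in zip(coefs, means))
    spread = math.sqrt(sum((c * (q - m)) ** 2
                           for c, m, q in zip(coefs, means, percs)))
    return center + spread

# illustrative check: 2*X1 + 3*X2 with X1 ~ chi2_5, X2 ~ chi2_3, p = 0.95
coefs, dfs = [2.0, 3.0], [5, 3]
chi2_95 = {5: 11.0705, 3: 7.8147}        # standard chi-square 95th percentiles
approx = mna_upper_percentile(coefs, dfs, [chi2_95[k] for k in dfs])

# Monte Carlo benchmark: chi2_k is Gamma(k/2, scale 2)
rng = random.Random(11)
sims = sorted(2.0 * rng.gammavariate(2.5, 2.0) + 3.0 * rng.gammavariate(1.5, 2.0)
              for _ in range(200_000))
mc = sims[int(0.95 * len(sims))]
print(approx, mc)
```

For this weighted chi-square combination the approximation lands within a couple of percent of the simulated percentile, consistent with the accuracy the abstract claims.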

20.
We discuss parameter estimation for discretely observed, ergodic diffusion processes where the diffusion coefficient does not depend on the parameter. We propose using an approximation of the continuous-time score function as an estimating function. The estimating function can be expressed in simple terms through the drift and the diffusion coefficient and is thus easy to calculate. Simulation studies show that the method performs well.
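For a drift b(x; theta) and a parameter-free diffusion coefficient sigma(x), such an approximate-score estimating function sums terms of the form (d/dtheta) b(X_i) * sigma(X_i)^(-2) * (X_{i+1} - X_i - b(X_i) * dt). For an Ornstein-Uhlenbeck drift b(x; theta) = -theta * x, an illustrative choice not taken from the article, the root is available in closed form:

```python
import math
import random

def ou_estimate(x, dt):
    """Root of the approximate score estimating function for the
    Ornstein-Uhlenbeck model dX = -theta * X dt + sigma dW:
    sum_i (-X_i) * (X_{i+1} - X_i + theta * X_i * dt) = 0."""
    num = sum(xi * (xj - xi) for xi, xj in zip(x, x[1:]))
    den = sum(xi * xi for xi in x[:-1]) * dt
    return -num / den

# simulate the process by Euler discretization (illustrative values)
rng = random.Random(5)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 50_000
x = [0.0]
for _ in range(n):
    x.append(x[-1] - theta * x[-1] * dt
             + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
print(ou_estimate(x, dt))
```

Note that sigma cancels from the root because the diffusion coefficient does not depend on the parameter, which is exactly the setting of the abstract.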
