Similar Literature (20 results)
1.
An automated (Markov chain) Monte Carlo EM algorithm
We present an automated Monte Carlo EM (MCEM) algorithm which efficiently assesses Monte Carlo error in the presence of dependent Monte Carlo, particularly Markov chain Monte Carlo, E-step samples and chooses an appropriate Monte Carlo sample size to minimize this Monte Carlo error with respect to progressive EM step estimates. Monte Carlo error is gauged through an application of the central limit theorem during renewal periods of the MCMC sampler used in the E-step. The resulting normal approximation allows us to construct a rigorous and adaptive rule for updating the Monte Carlo sample size at each iteration of the MCEM algorithm. We illustrate our automated routine and compare its performance with competing MCEM algorithms in an analysis of a data set fit by a generalized linear mixed model.
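For intuition, a minimal MCEM sketch on a hypothetical toy model of my choosing (a latent N(theta, 1) variable observed only through its sign), not the paper's generalized linear mixed model; the paper gauges Monte Carlo error through a CLT over renewal periods of the MCMC sampler, whereas this sketch uses independent E-step draws and a crude step-versus-noise comparison:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=200) > 0         # simulated sign data; true theta = 0.5

def sample_latent(theta, m):
    """Draw m Monte Carlo copies of each latent Z_i given its observed sign."""
    lo = np.where(y, -theta, -np.inf)           # Z > 0  <=>  (Z - theta) > -theta
    hi = np.where(y, np.inf, -theta)
    return theta + truncnorm.rvs(lo, hi, size=(m, y.size), random_state=rng)

theta, m = 0.0, 50
for _ in range(100):
    z = sample_latent(theta, m)                 # E-step by simulation
    theta_new = z.mean()                        # M-step: MLE of a N(theta, 1) mean
    mc_se = z.mean(axis=1).std(ddof=1) / np.sqrt(m)   # MC error of the update
    if abs(theta_new - theta) < 3 * mc_se:      # step indistinguishable from noise:
        m = int(1.5 * m)                        # grow the Monte Carlo sample size
    theta = theta_new
print(theta, m)
```

The stopping logic mirrors the general principle: once the EM increment is of the same order as its Monte Carlo standard error, further progress requires a larger simulation size.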

2.
In the expectation–maximization (EM) algorithm for maximum likelihood estimation from incomplete data, Markov chain Monte Carlo (MCMC) methods have long been used in change-point inference when the expectation step is intractable. However, conventional MCMC algorithms tend to get trapped in local modes when simulating from the posterior distribution of the change points. To overcome this problem, in this paper we propose a stochastic approximation Monte Carlo version of EM (SAMCEM), which combines adaptive Markov chain Monte Carlo with EM-based maximum likelihood estimation. SAMCEM is compared with the stochastic approximation version of EM and the reversible jump Markov chain Monte Carlo version of EM on simulated and real datasets. The numerical results indicate that SAMCEM outperforms the other two methods, producing much more accurate parameter estimates and recovering change-point positions and parameter estimates simultaneously.

3.
In recent years much effort has been devoted to maximum likelihood estimation of generalized linear mixed models. Most of the existing methods use the EM algorithm, with various techniques for handling the intractable E-step. In this paper, a new implementation of a stochastic approximation algorithm with a Markov chain Monte Carlo method is investigated. The proposed algorithm is computationally straightforward and its convergence is guaranteed. A simulation and three real data sets, including the challenging salamander data, are used to illustrate the procedure and to compare it with some existing methods. The results indicate that the proposed algorithm is an attractive alternative for problems with a large number of random effects or with high dimensional intractable integrals in the likelihood function.
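A hedged sketch of the general recipe (a stochastic-approximation update driven by an MCMC-based E-step), on a toy model of my choosing: a latent N(theta, 1) variable observed only through its sign. This is an SAEM-style illustration, not the paper's algorithm or its convergence analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=200) > 0          # observed signs; true theta = 0.5

def log_target(z, theta):
    """Log density of Z_i given its sign and theta, up to a constant."""
    ok = np.where(y, z > 0, z <= 0)
    return np.where(ok, -0.5 * (z - theta) ** 2, -np.inf)

z = np.where(y, 1.0, -1.0)                      # a valid starting latent state
theta = s = 0.0
for k in range(1, 2001):
    prop = z + 0.5 * rng.normal(size=z.size)    # one random-walk Metropolis sweep
    accept = np.log(rng.uniform(size=z.size)) < log_target(prop, theta) - log_target(z, theta)
    z = np.where(accept, prop, z)
    gamma = 1.0 / k                             # Robbins-Monro step size
    s += gamma * (z.mean() - s)                 # stochastic approximation of E[Z | Y]
    theta = s                                   # M-step: theta maximizes given s
print(theta)
```

Unlike MCEM, a single MCMC sweep per iteration suffices here; the decaying step size gamma averages the simulation noise away over iterations.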

4.
Clustered binary data are common in medical research and can be fitted with a logistic regression model with random effects, which belongs to a wider class of models called the generalized linear mixed model. Likelihood-based estimation of the model parameters often has to handle intractable integration, which has led to several estimation methods designed to overcome this difficulty. The penalized quasi-likelihood (PQL) method is popular and computationally efficient in most cases. The expectation–maximization (EM) algorithm yields maximum-likelihood estimates but requires computing possibly intractable integrals in the E-step. Variants of the EM algorithm that evaluate the E-step are introduced: the Monte Carlo EM (MCEM) method approximates the E-step expectation using Monte Carlo samples, while the modified EM (MEM) method approximates it using Laplace's method. All these methods involve several steps of approximation, so the corresponding parameter estimates contain inevitable errors (large or small) induced by the approximations. Understanding and quantifying these discrepancies theoretically is difficult because of the complexity of the approximations in each method, even when the focus is restricted to clustered binary data. As an alternative competitor, we also consider a non-parametric maximum-likelihood (NPML) method. We review and compare the PQL, MCEM, MEM and NPML methods for clustered binary data via a simulation study, which will be useful for researchers when choosing an estimation method for their analysis.
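The MEM variant's Laplace step can be shown in isolation. A minimal sketch, assuming a one-dimensional integrand with a user-supplied second derivative; in the clustered-binary setting this approximation is applied cluster by cluster:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_log_integral(g, d2g):
    """log of integral exp(g(u)) du ~ g(u_hat) + 0.5*log(2*pi / -g''(u_hat))."""
    u_hat = minimize_scalar(lambda u: -g(u)).x  # mode of the integrand
    return g(u_hat) + 0.5 * np.log(2 * np.pi / -d2g(u_hat))

# check on a Gaussian integrand, where the approximation is exact:
# integral of exp(-u^2 / 2) du = sqrt(2*pi)
print(laplace_log_integral(lambda u: -0.5 * u ** 2, lambda u: -1.0),
      0.5 * np.log(2 * np.pi))
```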

5.
We consider the use of Monte Carlo methods to obtain maximum likelihood estimates for random effects models and distinguish between the pointwise and functional approaches. We explore the relationship between the two approaches and compare them with the EM algorithm. The functional approach is more ambitious, but the approximation is local in nature, which we demonstrate graphically using two simple examples. A remedy is to obtain successively better approximations of the relative likelihood function near the true maximum likelihood estimate. To save computing time, we use only one Newton iteration to approximate the maximiser of each Monte Carlo likelihood and show that this is equivalent to the pointwise approach. The procedure is applied to fit a latent process model to a set of polio incidence data. The paper ends with a comparison between the marginal likelihood and the recently proposed hierarchical likelihood, which avoids integration altogether.
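To make the one-Newton-step idea concrete, here is a sketch on a toy latent-variable model of my choosing (a latent N(theta, 1) variable observed through its sign, not the paper's polio data): the relative likelihood is estimated by importance sampling from p(z | y; theta0), and a single Newton iteration is taken from theta0:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(10)
y = rng.normal(0.5, 1.0, size=200) > 0
theta0, m = 0.3, 500                          # simulate from p(z | y; theta0)
lo = np.where(y, -theta0, -np.inf)
hi = np.where(y, np.inf, -theta0)
z = theta0 + truncnorm.rvs(lo, hi, size=(m, y.size), random_state=rng)

def mc_loglik(theta):
    """MC estimate of the relative log-likelihood l(theta) - l(theta0):
    per unit, log of the average ratio f(y, z; theta) / f(y, z; theta0)."""
    logratio = (theta - theta0) * z - 0.5 * (theta ** 2 - theta0 ** 2)
    return np.logaddexp.reduce(logratio, axis=0).sum() - y.size * np.log(m)

h = 1e-3                                      # numerical derivatives at theta0
d1 = (mc_loglik(theta0 + h) - mc_loglik(theta0 - h)) / (2 * h)
d2 = (mc_loglik(theta0 + h) - 2 * mc_loglik(theta0) + mc_loglik(theta0 - h)) / h ** 2
print(theta0 - d1 / d2)                       # one Newton step toward the maximiser
```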

6.
Summary.  Gaussian Markov random-field (GMRF) models are frequently used in a wide variety of applications. In most cases parts of the GMRF are observed through mutually independent data; hence the full conditional of the GMRF, a hidden GMRF (HGMRF), is of interest. We are concerned with the case where the likelihood is non-Gaussian, leading to non-Gaussian HGMRF models. Several researchers have constructed block sampling Markov chain Monte Carlo schemes based on approximations of the HGMRF by a GMRF, using a second-order expansion of the log-density at or near the mode. This is possible as the GMRF approximation can be sampled exactly with a known normalizing constant. The Markov property of the GMRF approximation yields computational efficiency. The main contribution of the paper is to go beyond the GMRF approximation and to construct a class of non-Gaussian approximations which adapt automatically to the particular HGMRF under study. The accuracy can be tuned by intuitive parameters to nearly any precision. These non-Gaussian approximations share the same computational complexity as those based on GMRFs and can be sampled exactly with computable normalizing constants. We apply our approximations in spatial disease mapping and model-based geostatistical models with different likelihoods, obtain procedures for block updating and construct Metropolized independence samplers.
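A minimal sketch of the GMRF approximation these schemes start from, on a toy Poisson-likelihood HGMRF of my choosing (not the paper's disease-mapping models): a second-order expansion of the log-likelihood at the mode gives a Gaussian with precision Q + diag(c), and the mode is found by Newton iterations:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50
Q = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal precision (toy)
Q[0, 0] = Q[-1, -1] = 1.0 + 1e-3                          # keep the prior proper
y = rng.poisson(1.0, size=n)                              # Poisson counts, log link

x = np.zeros(n)
for _ in range(25):                      # Newton iterations to the posterior mode
    c = np.exp(x)                        # -(d^2/dx^2) of the Poisson log-likelihood
    b = y - np.exp(x) + c * x            # gradient rearranged into canonical form
    x_new = np.linalg.solve(Q + np.diag(c), b)
    if np.max(np.abs(x_new - x)) < 1e-10:
        x = x_new
        break
    x = x_new
# the GMRF approximation: N(mean = x, precision = Q + diag(exp(x)))
print(x[:5])
```

The paper's contribution is precisely to go beyond this Gaussian, keeping the computable normalizing constant while correcting the non-Gaussian shape.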

7.
Park, Joonha; Atchadé, Yves. Statistics and Computing (2020) 30(5): 1325–1345.

We explore a general framework in Markov chain Monte Carlo (MCMC) sampling where sequential proposals are tried as candidates for the next state of the Markov chain. This sequential-proposal framework can be applied to various existing MCMC methods, including Metropolis–Hastings algorithms using random proposals and methods that use deterministic proposals such as Hamiltonian Monte Carlo (HMC) or the bouncy particle sampler. Sequential-proposal MCMC methods construct the same Markov chains as those constructed by the delayed rejection method under certain circumstances. In the context of HMC, the sequential-proposal approach has previously been proposed as extra chance generalized hybrid Monte Carlo (XCGHMC). We develop two novel methods in which the trajectories leading to proposals in HMC are automatically tuned to avoid doubling back, as in the No-U-Turn Sampler (NUTS). The numerical efficiency of these new methods compares favorably to that of the NUTS. We additionally show that the sequential-proposal bouncy particle sampler enables the constructed Markov chain to pass through regions of low target density and thus facilitates better mixing of the chain when the target density is multimodal.
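For intuition on the single-threshold "extra chance" rule that connects sequential proposals with XCGHMC, here is a hedged sketch on a standard normal target: one uniform draw serves as the acceptance bar for up to K successive leapfrog endpoints. Step size, trajectory length, and K are arbitrary choices of mine, and this toy omits the paper's automatic trajectory tuning:

```python
import numpy as np

rng = np.random.default_rng(2)

def logpi(q):                           # log target density: standard normal
    return -0.5 * q ** 2

def leapfrog(q, p, eps=0.3, L=5):       # deterministic proposal map of HMC
    for _ in range(L):
        p += 0.5 * eps * (-q)           # grad of logpi is -q for this target
        q += eps * p
        p += 0.5 * eps * (-q)
    return q, p

q, K, draws = 0.0, 3, []
for _ in range(5000):
    p = rng.normal()                    # full momentum refreshment
    logu = np.log(rng.uniform())        # one threshold shared by all K chances
    h0 = logpi(q) - 0.5 * p ** 2        # initial joint log density
    qn, pn = q, p
    for _ in range(K):                  # sequential proposals along the flow
        qn, pn = leapfrog(qn, pn)
        if logpi(qn) - 0.5 * pn ** 2 - h0 >= logu:
            q = qn                      # accept the first one past the bar
            break
    draws.append(q)
print(np.mean(draws), np.var(draws))    # should be close to 0 and 1
```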


8.
The maximum likelihood and Bayesian approaches are considered for the two-parameter generalized exponential distribution based on record values together with the numbers of trials following the record values (inter-record times). The maximum likelihood estimates are obtained under the inverse sampling and the random sampling schemes. It is shown that the maximum likelihood estimator of the shape parameter converges in mean square to the true value when the scale parameter is known. The Bayes estimates of the parameters are developed using Lindley's approximation and Markov chain Monte Carlo methods, owing to the lack of explicit forms under the squared error and the linear-exponential loss functions. Confidence intervals for the parameters are constructed based on asymptotic and Bayesian methods. The Bayes and the maximum likelihood estimators are compared in terms of estimated risk by Monte Carlo simulations. The comparison of the estimators based on the record values alone and on the record values with their corresponding inter-record times is also performed by Monte Carlo simulations.

9.
The coefficient of variation (CV) is extensively used in many areas of applied statistics, including quality control and sampling. It is regarded as a measure of stability or uncertainty and indicates the dispersion of the data relative to the population mean. In this article, based on progressive first-failure-censored data, we study the behavior of the CV of a random variable that follows a Burr-XII distribution. Specifically, we compute the maximum likelihood estimates and the confidence intervals of the CV based on the observed Fisher information matrix, using the asymptotic distribution of the maximum likelihood estimator and also the bootstrapping technique. In addition, we propose to apply Markov chain Monte Carlo techniques to this problem, which allows us to construct credible intervals. A numerical example based on real data is presented to illustrate the implementation of the proposed procedure. Finally, Monte Carlo simulations are performed to observe the behavior of the proposed methods.
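The bootstrap interval for the CV is easy to sketch on ordinary (uncensored) data; note that this simplification does not reproduce the paper's progressive first-failure censoring or its Burr-XII likelihood:

```python
import numpy as np

rng = np.random.default_rng(3)
x = 10.0 * rng.weibull(2.0, size=100)        # stand-in positive lifetime data

cv = x.std(ddof=1) / x.mean()                # sample coefficient of variation
boot = []
for _ in range(2000):
    s = rng.choice(x, size=x.size, replace=True)   # resample with replacement
    boot.append(s.std(ddof=1) / s.mean())
lo, hi = np.percentile(boot, [2.5, 97.5])    # percentile bootstrap 95% interval
print(f"CV = {cv:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```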

10.
Based on hybrid censored data, the problem of making statistical inference on the parameters of a two-parameter Burr Type XII distribution is taken up. The maximum likelihood estimates are developed for the unknown parameters using the EM algorithm. The Fisher information matrix is obtained by applying the missing value principle and is further utilized for constructing approximate confidence intervals. Some Bayes estimates and the corresponding highest posterior density intervals of the unknown parameters are also obtained. Lindley's approximation method and a Markov chain Monte Carlo (MCMC) technique are applied to evaluate these Bayes estimates. Further, MCMC samples are utilized to construct the highest posterior density intervals as well. A numerical comparison is made between the proposed estimates in terms of their mean square error values, and comments are given. Finally, two data sets are analyzed using the proposed methods.

11.
There are two conceptually distinct tasks in Markov chain Monte Carlo (MCMC): a sampler is designed for simulating a Markov chain and then an estimator is constructed on the Markov chain for computing integrals and expectations. In this article, we aim to address the second task by extending the likelihood approach of Kong et al. for Monte Carlo integration. We consider a general Markov chain scheme and use partial likelihood for estimation. Basically, the Markov chain scheme is treated as a random design and a stratified estimator is defined for the baseline measure. Further, we propose useful techniques including subsampling, regulation, and amplification for achieving overall computational efficiency. Finally, we introduce approximate variance estimators for the point estimators. The method can yield substantially improved accuracy compared with Chib's estimator and the crude Monte Carlo estimator, as illustrated with three examples.

12.
Summary.  The expectation–maximization (EM) algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and high dimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Carlo methods to estimate the relevant integrals. Typically, a very large Monte Carlo sample size is required to estimate these integrals within an acceptable tolerance when the algorithm is near convergence. Even if this sample size were known at the onset of implementation of MCEM, its use throughout all iterations is wasteful, especially when accurate starting values are not available. We propose a data-driven strategy for controlling Monte Carlo resources in MCEM. The algorithm proposed improves on similar existing methods by recovering EM's ascent (i.e. likelihood increasing) property with high probability, being more robust to the effect of user-defined inputs and handling classical Monte Carlo and Markov chain Monte Carlo methods within a common framework. Because of the first of these properties we refer to the algorithm as 'ascent-based MCEM'. We apply ascent-based MCEM to a variety of examples, including one where it is used to accelerate the convergence of deterministic EM dramatically.
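The ascent-recovery idea admits a compact sketch: from the same E-step sample, estimate the increase in the EM Q-function and accept the update only if an asymptotic lower confidence bound on that increase is positive; otherwise the caller should enlarge the Monte Carlo sample. The fragment below assumes iid E-step draws (the paper also covers MCMC draws), and logf and z are hypothetical placeholders for a complete-data log-likelihood and an E-step sample:

```python
import numpy as np
from scipy.stats import norm

def ascent_ok(logf, z, theta_new, theta_old, alpha=0.25):
    """Accept the EM update only if a lower confidence bound on the Q-function
    increase is positive; on False the caller should enlarge the E-step sample."""
    d = np.array([logf(zi, theta_new) - logf(zi, theta_old) for zi in z])
    lcb = d.mean() - norm.ppf(1 - alpha) * d.std(ddof=1) / np.sqrt(d.size)
    return lcb > 0
```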

13.
Nonlinear mixed-effect models are often used in the analysis of longitudinal data. However, it sometimes happens that missing values for some of the model covariates are not purely random. Motivated by an application to HIV viral dynamics, where this situation occurs, the author considers likelihood inference for this type of problem. His approach involves a Monte Carlo EM algorithm, along with a Gibbs sampler and rejection/importance sampling methods. A concrete application is provided.

14.
Summary. The task of estimating an integral by Monte Carlo methods is formulated as a statistical model using simulated observations as data. The difficulty in this exercise is that we ordinarily have at our disposal all of the information required to compute integrals exactly by calculus or numerical integration, but we choose to ignore some of the information for simplicity or computational feasibility. Our proposal is to use a semiparametric statistical model that makes explicit what information is ignored and what information is retained. The parameter space in this model is a set of measures on the sample space, which is ordinarily an infinite dimensional object. Nonetheless, from simulated data the baseline measure can be estimated by maximum likelihood, and the required integrals computed by a simple formula previously derived by Vardi and by Lindsay in a closely related model for biased sampling. The same formula was also suggested by Geyer and by Meng and Wong using entirely different arguments. By contrast with Geyer's retrospective likelihood, a correct estimate of simulation error is available directly from the Fisher information. The principal advantage of the semiparametric model is that variance reduction techniques are associated with submodels in which the maximum likelihood estimator in the submodel may have substantially smaller variance than the traditional estimator. The method is applicable to Markov chain and more general Monte Carlo sampling schemes with multiple samplers.
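Since the abstract notes the Meng and Wong connection, here is a hedged sketch of the iterative bridge estimator for a ratio of two normalizing constants, on toy Gaussian densities of my choosing; the paper's semiparametric formulation is more general and also delivers an error estimate from the Fisher information:

```python
import numpy as np

rng = np.random.default_rng(7)
q1 = lambda x: np.exp(-0.5 * x ** 2)         # unnormalized N(0, 1): Z1 = sqrt(2*pi)
q2 = lambda x: np.exp(-x ** 2 / 8.0)         # unnormalized N(0, 4): Z2 = sqrt(8*pi)
x1 = rng.normal(0.0, 1.0, size=5000)         # draws from density 1
x2 = rng.normal(0.0, 2.0, size=5000)         # draws from density 2

s1 = s2 = 0.5                                # sample-size proportions (n1 = n2)
r = 1.0                                      # initial guess for Z1 / Z2
for _ in range(50):                          # fixed-point iteration for the ratio
    num = np.mean(q1(x2) / (s1 * q1(x2) + s2 * r * q2(x2)))
    den = np.mean(q2(x1) / (s1 * q1(x1) + s2 * r * q2(x1)))
    r = num / den
print(r)                                     # should approach Z1 / Z2 = 0.5
```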

15.
This article deals with statistical inference and prediction for the Burr Type XII distribution based on a Type II censored sample. It is observed that the maximum likelihood estimators (MLEs) cannot be obtained in closed form, so we use the expectation-maximization algorithm to compute the MLEs. We also obtain the Bayes estimators under symmetric and asymmetric loss functions, such as squared error and LINEX, by applying Lindley's approximation and a Markov chain Monte Carlo (MCMC) technique. Further, MCMC samples are used to calculate the highest posterior density credible intervals. A Monte Carlo simulation study and two real-life data sets are presented to illustrate all of the methods developed here. Furthermore, we obtain predictions of future order statistics based on the observed order statistics, because of their important applications in fields such as the medical and engineering sciences. A numerical example is carried out to illustrate the procedures obtained for the prediction of future order statistics.

16.
Parameter estimation with missing data is a frequently encountered problem in statistics. Imputation is often used to facilitate parameter estimation by simply applying the complete-sample estimators to the imputed dataset. In this article, we consider the problem of parameter estimation with nonignorable missing data using the approach of parametric fractional imputation proposed by Kim (2011). Using the fractional weights, the E-step of the EM algorithm can be approximated by the weighted mean of the imputed-data likelihood, where the fractional weights are computed from the current value of the parameter estimates. Calibration fractional imputation is also considered as a way of improving the Monte Carlo approximation in fractional imputation. Variance estimation is also discussed. Results from two simulation studies are presented to compare the proposed method with existing methods. A real data example from the Korea Labor and Income Panel Survey (KLIPS) is also presented.
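A minimal sketch of the fractional-weighting step under simplifying assumptions of my own (a normal outcome model and a missingness mechanism that is ignored here, whereas the article targets nonignorable missingness, which additionally requires a response model):

```python
import numpy as np

rng = np.random.default_rng(4)
n, M = 300, 50
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)        # outcome model: y | x ~ N(1 + 2x, 1)
obs = rng.uniform(size=n) < 0.7               # roughly 30% of y treated as missing

yimp = rng.normal(0.0, 3.0, size=(M, n))      # M imputations per unit from h = N(0, 9)
b0, b1, sigma = 1.0, 2.0, 1.0                 # current parameter estimate theta

# fractional weights: w_ij proportional to f(y_ij | x_i; theta) / h(y_ij),
# renormalized within each unit; they are recomputed as theta is updated by EM
logf = -0.5 * ((yimp - b0 - b1 * x) / sigma) ** 2 - np.log(sigma)
logh = -0.5 * (yimp / 3.0) ** 2 - np.log(3.0)
w = np.exp(logf - logh)
w /= w.sum(axis=0)

ybar = np.where(obs, y, (w * yimp).sum(axis=0)).mean()   # fractionally imputed mean
print(ybar)                                   # close to E[y] = 1 at this theta
```

The key point is that the imputed values are drawn once; only the weights change across EM iterations, which keeps the Monte Carlo approximation cheap.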

17.
The latent class model, or multivariate multinomial mixture, is a powerful approach for clustering categorical data. It relies on a conditional independence assumption given the latent class to which a statistical unit belongs. In this paper, we exploit the fact that a fully Bayesian analysis with Jeffreys non-informative prior distributions presents no technical difficulty, and propose an exact expression for the integrated complete-data likelihood, which is known to be a meaningful model selection criterion from a clustering perspective. Similarly, a Monte Carlo approximation of the integrated observed-data likelihood can be obtained in two steps: an exact integration over the parameters is followed by an approximation of the sum over all possible partitions through an importance sampling strategy. The exact and approximate criteria are then compared experimentally with their standard asymptotic BIC approximations for choosing the number of mixture components. Numerical experiments on simulated data and a biological example highlight that the asymptotic criteria are usually dramatically more conservative than the non-asymptotic criteria presented here, not only for moderate sample sizes, as expected, but also for quite large sample sizes. This research highlights that standard asymptotic criteria can often fail to detect interesting structure present in the data.
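The exact integrated complete-data likelihood follows from Dirichlet-multinomial conjugacy; a hedged sketch in my own notation, using Jeffreys Dirichlet(1/2, ..., 1/2) priors on the mixing proportions and on each class-conditional multinomial:

```python
import numpy as np
from scipy.special import gammaln

def log_dirichlet_norm(alpha):
    return gammaln(alpha).sum() - gammaln(alpha.sum())

def icl_exact(X, z, K, a=0.5):
    """Integrated complete-data log-likelihood of a latent class model.
    X: (n, d) integer-coded categorical data; z: (n,) hard class labels;
    a = 0.5 gives Jeffreys Dirichlet priors on all multinomial parameters."""
    n, d = X.shape
    nk = np.bincount(z, minlength=K)
    out = log_dirichlet_norm(a + nk) - log_dirichlet_norm(np.full(K, a))
    for j in range(d):
        levels = X[:, j].max() + 1
        for k in range(K):
            cnt = np.bincount(X[z == k, j], minlength=levels)
            out += log_dirichlet_norm(a + cnt) - log_dirichlet_norm(np.full(levels, a))
    return out

rng = np.random.default_rng(8)
X = rng.integers(0, 3, size=(100, 4))         # toy data: 4 variables, 3 levels each
z = rng.integers(0, 2, size=100)              # a candidate partition into K = 2 classes
print(icl_exact(X, z, K=2))
```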

18.
The lasso is a popular technique for simultaneous estimation and variable selection in many research areas. The marginal posterior mode of the regression coefficients is equivalent to the estimates given by the non-Bayesian lasso when the regression coefficients have independent Laplace priors. Because of its flexibility for statistical inference, the Bayesian approach has attracted a growing body of research in recent years. Current approaches primarily either perform a fully Bayesian analysis using a Markov chain Monte Carlo (MCMC) algorithm or use Monte Carlo expectation maximization (MCEM) methods with an MCMC algorithm in each E-step. However, MCMC-based Bayesian methods carry a heavy computational burden and converge slowly. Tan et al. [An efficient MCEM algorithm for fitting generalized linear mixed models for correlated binary data. J Stat Comput Simul. 2007;77:929–943] proposed a non-iterative sampling approach, the inverse Bayes formula (IBF) sampler, for computing the posteriors of a hierarchical model within the MCEM structure. Motivated by their paper, we develop an IBF sampler within the MCEM structure to obtain the marginal posterior mode of the regression coefficients for the Bayesian lasso, adjusting the importance sampling weights when the full conditional distribution is not explicit. Simulation experiments show that the computational time is much reduced with our EM-based method, and that our method performs comparably with other Bayesian lasso methods in both prediction accuracy and variable selection accuracy, and even better when the sample size is relatively large.

19.
A generalized version of the inverted exponential distribution (IED) is considered in this paper. This lifetime distribution is capable of modeling various shapes of failure rates, and hence various shapes of aging criteria. The model can be considered as another useful two-parameter generalization of the IED. Maximum likelihood and Bayes estimates of the two parameters of the generalized inverted exponential distribution (GIED) are obtained on the basis of a progressively type-II censored sample. We also show the existence, uniqueness and finiteness of the maximum likelihood estimates of the parameters of the GIED based on progressively type-II censored data. Bayesian estimates are obtained using the squared error loss function and are evaluated by applying Lindley's approximation method and an importance sampling technique. The importance sampling technique is used to compute the Bayes estimates and the associated credible intervals. We further consider the Bayes prediction problem based on the observed samples and provide the appropriate predictive intervals. Monte Carlo simulations are performed to compare the performances of the proposed methods, and a data set is analyzed for illustrative purposes.

20.
For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in the parameter space of the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions, such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms than existing state-of-the-art methods.
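A hedged sketch of the surrogate idea under my own simplifications: fit log pi with random Fourier features by ridge regression on previously explored points, then use the cheap surrogate gradient inside the leapfrog integrator; the paper's choice of bases and optimization process is more refined:

```python
import numpy as np

rng = np.random.default_rng(5)

def logpi(Q):                                 # "expensive" log target (toy: std normal)
    return -0.5 * np.sum(Q ** 2, axis=-1)

d, J = 2, 200
W = rng.normal(size=(J, d))                   # random frequencies
b = rng.uniform(0.0, 2.0 * np.pi, size=J)     # random phases

def features(Q):
    return np.cos(Q @ W.T + b)                # random Fourier basis functions

Qtrain = 1.5 * rng.normal(size=(1000, d))     # exploration points (e.g., a pilot run)
Phi = features(Qtrain)
beta = np.linalg.solve(Phi.T @ Phi + 1e-4 * np.eye(J), Phi.T @ logpi(Qtrain))

def surrogate_grad(q):                        # cheap gradient used inside leapfrog
    return -(np.sin(q @ W.T + b) * beta) @ W

q0 = np.array([0.3, -0.7])
print(surrogate_grad(q0), -q0)                # surrogate vs exact gradient (roughly)
```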
