Similar Documents
20 similar documents found (search time: 15 ms)
1.
A quasi-Newton acceleration for high-dimensional optimization algorithms
In many statistical problems, maximum likelihood estimation by an EM or MM algorithm suffers from excruciatingly slow convergence. This tendency limits the application of these algorithms to modern high-dimensional problems in data mining, genomics, and imaging. Unfortunately, most existing acceleration techniques are ill-suited to complicated models involving large numbers of parameters. The squared iterative methods (SQUAREM) recently proposed by Varadhan and Roland constitute one notable exception. This paper presents a new quasi-Newton acceleration scheme that requires only modest increments in computation per iteration and overall storage and rivals or surpasses the performance of SQUAREM on several representative test problems.
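For concreteness, the following is a minimal Python sketch of the SQUAREM scheme named above as the comparator, written against a generic EM fixed-point map. It is not the paper's new quasi-Newton scheme; the function name `em_update` and the choice of the SqS3 steplength are illustrative assumptions.

```python
import numpy as np

def squarem(em_update, theta, tol=1e-8, max_iter=500):
    """SQUAREM acceleration of a fixed-point map theta -> em_update(theta).

    Each cycle takes two EM steps, extrapolates with the SqS3 steplength,
    and finishes with a stabilizing EM step.
    """
    for _ in range(max_iter):
        theta1 = em_update(theta)
        theta2 = em_update(theta1)
        r = theta1 - theta                 # first-order change
        v = (theta2 - theta1) - r          # curvature of the EM map
        if np.linalg.norm(v) < tol:        # EM map locally affine: done
            return theta2
        alpha = -np.linalg.norm(r) / np.linalg.norm(v)  # SqS3 steplength
        theta_new = theta - 2.0 * alpha * r + alpha**2 * v
        theta_new = em_update(theta_new)   # stabilizing EM step
        if np.linalg.norm(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta
```

Because such schemes need only the EM map itself, with no Hessian-sized objects formed or stored, they remain usable in the high-dimensional settings the abstract describes.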

2.
Celebrating the 20th anniversary of the presentation of the paper by Dempster, Laird and Rubin which popularized the EM algorithm, we investigate, after a brief historical account, strategies that aim to make the EM algorithm converge faster while maintaining its simplicity and stability (e.g. automatic monotone convergence in likelihood). First we introduce the idea of a 'working parameter' to facilitate the search for efficient data augmentation schemes and thus fast EM implementations. Second, summarizing various recent extensions of the EM algorithm, we formulate a general alternating expectation–conditional maximization algorithm (AECM) that couples flexible data augmentation schemes with model reduction schemes to achieve efficient computations. We illustrate these methods using multivariate t-models with known or unknown degrees of freedom and Poisson models for image reconstruction. We show, through both empirical and theoretical evidence, the potential for a dramatic reduction in computational time with little increase in human effort. We also discuss the intrinsic connection between EM-type algorithms and the Gibbs sampler, and the possibility of using the techniques presented here to speed up the latter. The main conclusion of the paper is that, with the help of statistical considerations, it is possible to construct algorithms that are simple, stable and fast.

3.
The EM algorithm is a popular method for computing maximum likelihood estimates or posterior modes in models that can be formulated in terms of missing data or latent structure. Although easy implementation and stable convergence help to explain the popularity of the algorithm, its convergence is sometimes notoriously slow. In recent years, however, various adaptations have significantly improved the speed of EM while maintaining its stability and simplicity. One especially successful method for maximum likelihood is known as the parameter expanded EM or PXEM algorithm. Unfortunately, PXEM does not generally have a closed-form M-step when computing posterior modes, even when the corresponding EM algorithm is in closed form. In this paper we confront this problem by adapting the one-step-late EM algorithm to PXEM to establish a fast closed-form algorithm that improves on the one-step-late EM algorithm by ensuring monotone convergence. We use this algorithm to fit a probit regression model and a variety of dynamic linear models, showing computational savings of as much as 99.9%, with the biggest savings occurring when the EM algorithm is slowest to converge.

4.
This paper examines the formation of maximum likelihood estimates of cell means in analysis of variance problems for cells with missing observations. Methods of estimating the means for missing cells have a long history which includes iterative maximum likelihood techniques, approximation techniques and ad hoc techniques. The use of the EM algorithm to form maximum likelihood estimates has resolved most of the issues associated with this problem. Implementation of the EM algorithm entails specification of a reduced model. As demonstrated in this paper, when there are several missing cells, it is possible to specify a reduced model that results in an unidentifiable likelihood. The EM algorithm in this case does not converge, although its slow divergence may often be mistaken for convergence by the unwary. This paper presents a simple matrix method of determining whether or not the reduced model results in an identifiable likelihood, and consequently in an EM algorithm that converges. We also show the EM algorithm in this case to be equivalent to a method which yields a closed-form solution.

5.
Vardi's Expectation-Maximization (EM) algorithm is frequently used for computing the nonparametric maximum likelihood estimator of length-biased right-censored data, which does not admit a closed-form representation. The EM algorithm may converge slowly, particularly for heavily censored data. We studied two algorithms for accelerating the convergence of the EM algorithm, based on the iterative convex minorant algorithm and on Aitken's delta-squared process. Numerical simulations demonstrate that the acceleration algorithms converge more rapidly than the EM algorithm in terms of both the number of iterations and actual computing time. The acceleration method based on a modification of Aitken's delta-squared process performed best under a variety of settings.
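As a rough illustration of the second acceleration route, here is a generic componentwise Aitken delta-squared extrapolation of an EM map. This is a hedged sketch of the classical scheme, not the modified version studied in the paper; `em_update` is a placeholder for the model-specific EM step.

```python
import numpy as np

def aitken_em(em_update, theta, tol=1e-8, max_iter=500, eps=1e-12):
    """Componentwise Aitken delta-squared extrapolation of an EM map."""
    for _ in range(max_iter):
        theta1 = em_update(theta)
        theta2 = em_update(theta1)
        d1 = theta1 - theta                  # first difference
        d2 = theta2 - 2.0 * theta1 + theta   # second difference
        theta_acc = theta2.copy()            # default: plain EM iterate
        safe = np.abs(d2) > eps              # guard tiny denominators
        theta_acc[safe] = theta[safe] - d1[safe] ** 2 / d2[safe]
        if np.linalg.norm(theta_acc - theta) < tol:
            return theta_acc
        theta = theta_acc
    return theta
```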

6.
To obtain maximum likelihood (ML) estimates in factor analysis (FA), we propose in this paper a novel and fast conditional maximization (CM) algorithm with quadratic and monotone convergence, consisting of a sequence of CM log-likelihood (CML) steps. The main contribution of this algorithm is that a closed-form expression for the parameter to be updated in each step can be obtained explicitly, without resorting to any numerical optimization methods. In addition, a new ECME algorithm similar to that of Liu (Biometrika 81, 633–648, 1994) is obtained as a by-product; it turns out to be very close to the simple iteration algorithm proposed by Lawley (Proc. R. Soc. Edinb. 60, 64–82, 1940), but our algorithm is guaranteed to increase the log-likelihood at every iteration and hence to converge. Both algorithms inherit the simplicity and stability of EM, but their convergence behaviors differ markedly, as revealed in our extensive simulations: (1) in most situations, ECME and EM perform similarly; (2) CM outperforms EM and ECME substantially in all situations, whether assessed by CPU time or by the number of iterations. In cases close to the well-known Heywood case it accelerates EM by factors of around 100 or more. Also, CM is much less sensitive to the choice of starting values than EM and ECME.

7.
The expectation–maximization (EM) algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and high dimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Carlo methods to estimate the relevant integrals. Typically, a very large Monte Carlo sample size is required to estimate these integrals within an acceptable tolerance when the algorithm is near convergence. Even if this sample size were known at the onset of implementation of MCEM, its use throughout all iterations is wasteful, especially when accurate starting values are not available. We propose a data-driven strategy for controlling Monte Carlo resources in MCEM. The proposed algorithm improves on similar existing methods by recovering EM's ascent (i.e. likelihood increasing) property with high probability, being more robust to the effect of user-defined inputs, and handling classical Monte Carlo and Markov chain Monte Carlo methods within a common framework. Because of the first of these properties we refer to the algorithm as 'ascent-based MCEM'. We apply ascent-based MCEM to a variety of examples, including one where it is used to accelerate the convergence of deterministic EM dramatically.
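A minimal MCEM skeleton may help fix ideas. This sketch uses a fixed geometric growth schedule for the Monte Carlo sample size, whereas the ascent-based algorithm proposed here chooses the size adaptively from the data; the callables `sample_latent` and `q_maximizer` are hypothetical placeholders for the model-specific simulation and maximization steps.

```python
import numpy as np

def mcem(sample_latent, q_maximizer, theta, m0=100, growth=1.5,
         tol=1e-6, max_iter=100):
    """Generic MCEM driver with a geometrically growing MC sample size.

    sample_latent(theta, m): draw m Monte Carlo copies of the missing data.
    q_maximizer(z, theta):   maximize the Monte Carlo estimate of Q(.|theta).
    """
    m = m0
    for _ in range(max_iter):
        z = sample_latent(theta, m)          # E-step by simulation
        theta_new = q_maximizer(z, theta)    # M-step on the estimated Q
        if np.linalg.norm(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
        m = int(np.ceil(m * growth))         # cut MC error near convergence
    return theta
```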

8.
The iteratively reweighting algorithm is one of the most widely used algorithms for computing the M-estimates of the location and scatter parameters of a multivariate dataset. If the M estimating equations are the maximum likelihood estimating equations from some scale mixture of normal distributions (e.g. from a multivariate t-distribution), the iteratively reweighting algorithm is identified as an EM algorithm, and its convergence behavior is well established. However, as Tyler (J. Roy. Statist. Soc. Ser. B 59 (1997) 550) pointed out, little is known about the theoretical convergence properties of iteratively reweighting algorithms that cannot be identified as EM algorithms. In this paper, we consider the convergence behavior of the iteratively reweighting algorithm induced from M estimating equations which cannot be identified as an EM algorithm. We give some general results on the convergence properties and show that the convergence behavior of a general iteratively reweighting algorithm induced from M estimating equations is similar to that of an EM algorithm, even if it cannot be identified as an EM algorithm.
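The canonical case in which the iteratively reweighting algorithm is an EM algorithm is maximum likelihood for the multivariate t with known degrees of freedom. A minimal sketch of that special case follows, to make the weighting structure concrete; it assumes a data matrix `X` with observations in rows and is an illustration, not the paper's more general setting.

```python
import numpy as np

def t_location_scatter(X, nu=4.0, tol=1e-8, max_iter=200):
    """Iterative reweighting for the location and scatter of a
    multivariate t distribution with known degrees of freedom nu."""
    n, p = X.shape
    mu = X.mean(axis=0)
    sigma = np.cov(X, rowvar=False)
    for _ in range(max_iter):
        diff = X - mu
        # squared Mahalanobis distances of all points
        d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(sigma), diff)
        w = (nu + p) / (nu + d2)             # downweight outlying points
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - mu_new
        sigma = (w[:, None] * diff).T @ diff / n
        if np.linalg.norm(mu_new - mu) < tol:
            return mu_new, sigma
        mu = mu_new
    return mu, sigma
```

Replacing the weight function w with one not arising from a scale mixture of normals gives exactly the non-EM case whose convergence this paper studies.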

9.
Estimators derived from the expectation-maximization (EM) algorithm are not robust since they are based on the maximization of the likelihood function. We propose an iterative proximal-point algorithm based on the EM algorithm to minimize a divergence criterion between a mixture model and the unknown distribution that generates the data. In each iteration the algorithm estimates the proportions and the parameters of the mixture components in two separate steps. The resulting estimators are generally robust against outliers and misspecification of the model. Convergence properties of our algorithm are studied. The convergence of the introduced algorithm is discussed for a two-component Weibull mixture, yielding a condition on the initialization of the EM algorithm that is needed for the latter to converge. Simulations on Gaussian and Weibull mixture models using different statistical divergences confirm the validity of our work and the robustness of the resulting estimators against outliers in comparison with the EM algorithm. An application to a dataset of galaxy velocities is also presented.

10.
Log-location-scale distributions are widely used parametric models that have fundamental importance in both parametric and semiparametric frameworks. The likelihood equations based on a Type II censored sample from location-scale distributions do not provide explicit solutions for the parameters. Statistical software is widely available and is based on iterative methods (such as the Newton-Raphson and EM algorithms), which require starting values near the global maximum. There are also many situations that specialized software does not handle. This paper provides a method for determining explicit estimators for the location and scale parameters by approximating the likelihood function, where the method does not require any starting values. The performance of the proposed approximate method for the Weibull and log-logistic distributions is compared with that of methods based on iterative procedures through simulation studies for a wide range of sample sizes and Type II censoring schemes. We also examine the probability coverages of the pivotal quantities based on asymptotic normality. In addition, two examples are given.

11.
We introduce a class of small-scale simulation models ('prototypes') which reproduce many of the known properties of maximum likelihood and related reconstruction methods used in emission tomography, and greatly simplify the development of new methods. We introduce an iterative Fisher-scoring algorithm and demonstrate, by use of the prototype models, its superior speed of convergence when compared with the standard EM algorithm.

12.
The EM algorithm is the standard method for estimating the parameters in finite mixture models. Yang and Pan [25] proposed a generalized classification maximum likelihood procedure, called the fuzzy c-directions (FCD) clustering algorithm, for estimating the parameters in mixtures of von Mises distributions. Two main drawbacks of the EM algorithm are its slow convergence and the dependence of the solution on the initial value used. The choice of initial values is of great importance in the algorithm-based literature as it can heavily influence the speed of convergence of the algorithm and its ability to locate the global maximum. On the other hand, the algorithmic frameworks of EM and FCD are closely related, so the drawbacks of FCD are the same as those of the EM algorithm. To resolve these problems, this paper proposes another clustering algorithm, which can self-organize local optimal cluster numbers without using cluster validity functions. Our numerical results clearly indicate that the proposed algorithm outperforms the EM and FCD algorithms. Finally, we apply the proposed algorithm to two real data sets.

13.
Nelder and Wedderburn (1972) gave a practical fitting procedure that encompassed a more general family of data distributions than the Gaussian distribution and provided an easily understood conceptual framework. In extending the framework to more than one error structure, the technical difficulties of the fitting procedure have tended to cloud the concepts. Here we show that a simple extension to the fitting procedure is possible, and thus pave the way for a fuller examination of mixed effects models with generalized linear model distributions. It is clear that we should not, and do not have to, confine ourselves to fitting random effects using the Gaussian distribution. In addition, in some quite general mixing distribution problems the application of the EM algorithm to the complete data likelihood leads to iterative schemes that maximize the marginal likelihood of the observed data.

14.
The EM algorithm and its extensions are very popular tools for maximum likelihood estimation in incomplete-data settings. However, one limitation of these methods is their slow convergence. The PX-EM (parameter-expanded EM) algorithm was proposed by Liu, Rubin and Wu to make EM much faster. On the other hand, stochastic versions of EM are powerful alternatives to EM when the E-step is intractable in closed form. In this paper we propose PX-SAEM, a parameter-expanded version of the so-called SAEM algorithm (stochastic approximation version of EM). PX-SAEM is shown to accelerate SAEM and improve convergence toward the maximum likelihood estimate in a parametric framework. Numerical examples illustrate the behavior of PX-SAEM in linear and nonlinear mixed effects models.

15.
In this article we investigate the relationship between the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler, by Gaussian approximation, is equal to that of the corresponding EM-type algorithm. This helps in implementing either of the algorithms, as improvement strategies for one algorithm can be transported directly to the other. In particular, by running the EM algorithm we know approximately how many iterations are needed for the Gibbs sampler to converge. We also show that, under certain conditions, the EM algorithm used for finding maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler used for Bayesian inference. We illustrate our results with a number of realistic examples, all based on generalized linear mixed models.

16.
Acceleration of the EM Algorithm by using Quasi-Newton Methods
The EM algorithm is a popular method for maximum likelihood estimation. Its simplicity in many applications and desirable convergence properties make it very attractive. Its sometimes slow convergence, however, has prompted researchers to propose methods to accelerate it. We review these methods, classifying them into three groups: pure, hybrid and EM-type accelerators. We propose a new pure and a new hybrid accelerator, both based on quasi-Newton methods, and numerically compare these and two other quasi-Newton accelerators. For this we use examples in each of three areas: Poisson mixtures, the estimation of covariance from incomplete data and multivariate normal mixtures. In these comparisons, the new hybrid accelerator was fastest on most of the examples and often dramatically so. In some cases it accelerated the EM algorithm by factors of over 100. The new pure accelerator is very simple to implement and competed well with the other accelerators. It accelerated the EM algorithm in some cases by factors of over 50. To obtain standard errors, we propose to approximate the inverse of the observed information matrix by using auxiliary output from the new hybrid accelerator. A numerical evaluation of these approximations indicates that they may be useful at least for exploratory purposes.
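To illustrate the flavor of a 'pure' accelerator, here is a hedged sketch that applies Broyden's quasi-Newton method to the fixed-point residual of a generic EM map. It is in the spirit of the accelerators reviewed, not a reproduction of the paper's new pure or hybrid method; `em_update` is a placeholder for the model-specific EM step.

```python
import numpy as np

def broyden_em(em_update, theta, tol=1e-8, max_iter=200):
    """Broyden's method applied to the EM fixed-point residual
    g(theta) = em_update(theta) - theta; with A = -I the first
    step reduces to a plain EM step."""
    A = -np.eye(theta.size)               # approximate Jacobian of g
    g = em_update(theta) - theta
    for _ in range(max_iter):
        step = -np.linalg.solve(A, g)     # quasi-Newton step
        theta_new = theta + step
        g_new = em_update(theta_new) - theta_new
        if np.linalg.norm(g_new) < tol:
            return theta_new
        # rank-one secant update so that A_new @ step == g_new - g
        y = g_new - g
        A += np.outer(y - A @ step, step) / (step @ step)
        theta, g = theta_new, g_new
    return theta
```

Practical accelerators of this kind add safeguards (line searches or fallback EM steps) to retain monotonicity; none are shown here.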

17.
Mini-batch algorithms have become increasingly popular due to the need to solve optimization problems based on large-scale data sets. Using an existing online expectation–maximization (EM) algorithm framework, we demonstrate how mini-batch (MB) algorithms may be constructed, and propose a scheme for the stochastic stabilization of the constructed mini-batch algorithms. Theoretical results regarding the convergence of the mini-batch EM algorithms are presented. We then demonstrate how the mini-batch framework may be applied to conduct maximum likelihood (ML) estimation of mixtures of exponential family distributions, with emphasis on ML estimation for mixtures of normal distributions. Via a simulation study, we demonstrate that the mini-batch algorithm for mixtures of normal distributions can outperform the standard EM algorithm. Further evidence of the performance of the mini-batch framework is provided via an application to the famous MNIST data set.
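As one concrete instance, the following sketch implements a mini-batch EM for a univariate normal mixture by averaging the E-step sufficient statistics with a decaying step size. The step-size schedule, initialization, and variance floor are illustrative assumptions; the online EM framework and stabilization scheme used in the paper are more general.

```python
import numpy as np
from scipy.stats import norm

def minibatch_em_gmm(x, k, batch_size=256, n_epochs=10, seed=0):
    """Mini-batch EM for a univariate k-component normal mixture:
    running averages of the E-step sufficient statistics are updated
    one mini-batch at a time with a decaying (Robbins-Monro) step size."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)                       # mixing weights
    mu = rng.choice(x, size=k, replace=False)     # crude initialization
    var = np.full(k, x.var())
    s0, s1, s2 = w.copy(), w * mu, w * (var + mu**2)  # running statistics
    t = 0
    for _ in range(n_epochs):
        splits = max(1, len(x) // batch_size)
        for batch in np.array_split(rng.permutation(x), splits):
            t += 1
            gamma = (t + 2.0) ** -0.6     # step size, exponent in (1/2, 1]
            # E-step on the mini-batch: posterior responsibilities
            dens = w * norm.pdf(batch[:, None], mu, np.sqrt(var))
            r = dens / dens.sum(axis=1, keepdims=True)
            # blend the batch statistics into the running averages
            s0 = (1 - gamma) * s0 + gamma * r.mean(axis=0)
            s1 = (1 - gamma) * s1 + gamma * (r * batch[:, None]).mean(axis=0)
            s2 = (1 - gamma) * s2 + gamma * (r * batch[:, None] ** 2).mean(axis=0)
            # M-step from the running statistics
            w, mu = s0, s1 / s0
            var = np.maximum(s2 / s0 - mu**2, 1e-6)   # variance floor
    return w, mu, var
```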

18.
We propose an iterative method of estimation for discrete missing data problems that is conceptually different from the Expectation–Maximization (EM) algorithm and that does not in general yield the observed data maximum likelihood estimate (MLE). The proposed approach is based conceptually upon weighting the set of possible complete-data MLEs. Its implementation avoids the expectation step of EM, which can sometimes be problematic. In the simple case of Bernoulli trials missing completely at random, the iterations of the proposed algorithm are equivalent to the EM iterations. For a familiar genetics-oriented multinomial problem with missing count data, and for the motivating example with epidemiologic applications that involves a mixture of a left-censored normal distribution with a point mass at zero, we investigate the finite sample performance of the proposed estimator and find it to be competitive with that of the MLE. We give some intuitive justification for the method, and we explore an interesting connection between our algorithm and multiple imputation in order to suggest an approach for estimating standard errors.

19.
Linear mixed models are regularly applied to animal and plant breeding data to evaluate genetic potential. Residual maximum likelihood (REML) is the preferred method for estimating the variance parameters associated with this type of model. Typically an iterative algorithm is required for the estimation of variance parameters. Two algorithms which can be used for this purpose are the expectation-maximisation (EM) algorithm and the parameter-expanded EM (PX-EM) algorithm. Both, and particularly the EM algorithm, can be slow to converge when compared to a Newton-Raphson type scheme such as the average information (AI) algorithm. The EM and PX-EM algorithms require specification of the complete data, including the incomplete and missing data. We consider a new incomplete data specification based on a conditional derivation of REML. We illustrate the use of the resulting new algorithm through two examples: a sire model for lamb weight data and a balanced incomplete block soybean variety trial. In the cases where the AI algorithm failed, a REML PX-EM based on the new incomplete data specification converged in 28% to 30% fewer iterations than the alternative REML PX-EM specification. For the soybean example a REML EM algorithm using the new specification converged in fewer iterations than the current standard specification of a REML PX-EM algorithm. The new specification integrates linear mixed models, Henderson's mixed model equations, REML and the REML EM algorithm into a cohesive framework.

20.
The maximum likelihood estimation of the parameters of the Poisson binomial distribution, based on a sample with exact and grouped observations, is considered by applying the EM algorithm (Dempster et al., 1977). The results of Louis (1982) are used to obtain the observed information matrix and to accelerate the convergence of the EM algorithm substantially. Maximum likelihood estimation from samples consisting entirely of complete (Sprott, 1958) or grouped observations is treated as a special case of the estimation problem mentioned above. A brief account is given of the implementation of the EM algorithm when the sampling distribution is the Neyman Type A, since the latter is a limiting form of the Poisson binomial. Numerical examples based on real data are included.
