Similar Documents (20 results)
1.
This article is concerned with likelihood-based inference for vector autoregressive models with multivariate scaled t-distributed innovations, using the EM-based ECM and ECME algorithms. Both algorithms, which are analytically simple to apply, are used to find the maximum likelihood estimates of the model parameters and are then compared in a simulation study in terms of running time and estimation accuracy. The results demonstrate that ECME is efficient and usable in practice. We also show how the method can be applied to a multivariate dataset.
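As a concrete illustration of this kind of scheme, the sketch below is a minimal ECME sweep for a linear model with multivariate t errors (not the article's code; `ecme_step` and its arguments are hypothetical, with `X` holding the lagged regressors of the VAR): the E-step downweights outlying observations, two CM steps update the coefficients and scale matrix by weighted least squares, and the CML step maximizes the actual observed-data log-likelihood over the degrees of freedom.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def ecme_step(Y, X, B, Sigma, nu):
    """One ECME sweep for a linear model with multivariate t errors."""
    n, m = Y.shape
    E = Y - X @ B                                    # current residuals
    Sinv = np.linalg.inv(Sigma)
    delta = np.einsum('ij,jk,ik->i', E, Sinv, E)     # Mahalanobis distances
    w = (nu + m) / (nu + delta)                      # E-step weights

    XtW = X.T * w                                    # X'W via broadcasting
    B_new = np.linalg.solve(XtW @ X, XtW @ Y)        # CM-step 1: weighted LS
    E_new = Y - X @ B_new
    Sigma_new = (E_new * w[:, None]).T @ E_new / n   # CM-step 2: weighted scatter

    # CML step: maximize the actual observed-data log-likelihood over nu
    def neg_loglik(v):
        d = np.einsum('ij,jk,ik->i', E_new, np.linalg.inv(Sigma_new), E_new)
        _, logdet = np.linalg.slogdet(Sigma_new)
        return -np.sum(gammaln((v + m) / 2) - gammaln(v / 2)
                       - 0.5 * m * np.log(v * np.pi) - 0.5 * logdet
                       - 0.5 * (v + m) * np.log1p(d / v))
    nu_new = minimize_scalar(neg_loglik, bounds=(0.5, 200),
                             method='bounded').x
    return B_new, Sigma_new, nu_new
```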

2.
To obtain maximum likelihood (ML) estimates in factor analysis (FA), we propose in this paper a novel and fast conditional maximization (CM) algorithm with quadratic and monotone convergence, consisting of a sequence of CM log-likelihood (CML) steps. The main contribution of this algorithm is that a closed-form expression for the parameter updated in each step is obtained explicitly, without resorting to numerical optimization. In addition, a new ECME algorithm similar to that of Liu (Biometrika 81, 633–648, 1994) is obtained as a by-product; it turns out to be very close to the simple iteration algorithm proposed by Lawley (Proc. R. Soc. Edinb. 60, 64–82, 1940), but, unlike Lawley's, it is guaranteed to increase the log-likelihood at every iteration and hence to converge. Both algorithms inherit the simplicity and stability of EM, but their convergence behaviors differ markedly, as revealed in our extensive simulations: (1) in most situations, ECME and EM perform similarly; (2) CM outperforms EM and ECME substantially in all situations, whether assessed by CPU time or by the number of iterations. In particular, for cases close to the well-known Heywood case, it accelerates EM by factors of around 100 or more. CM is also far less sensitive than EM and ECME to the choice of starting values.

3.
The EM algorithm is a popular method for computing maximum likelihood estimates. One of its drawbacks is that it does not produce standard errors as a by-product. We consider obtaining standard errors by numerical differentiation. Two approaches are considered. The first differentiates the Fisher score vector to yield the Hessian of the log-likelihood. The second differentiates the EM operator and uses an identity that relates its derivative to the Hessian of the log-likelihood. The well-known SEM algorithm uses the second approach. We consider three additional algorithms: one that uses the first approach and two that use the second. We evaluate the complexity and precision of these three algorithms and SEM in seven examples. The first is a single-parameter example used to give insight. The others are three examples in each of two areas of EM application: Poisson mixture models and the estimation of covariance matrices from incomplete data. The examples show that there are algorithms much simpler and more accurate than SEM; their simplicity should increase the availability of standard error estimates in EM applications. It is shown that, as previously conjectured, a symmetry diagnostic can accurately estimate the error arising from numerical differentiation. Some issues related to the speed of the EM algorithm and of algorithms that differentiate the EM operator are identified.
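A minimal sketch of the first approach, assuming a user-supplied function `score` that returns the Fisher score vector at a parameter value (all names here are illustrative): central differences of the score give the Hessian, whose negative inverse supplies the standard errors, and the asymmetry of the raw difference matrix provides exactly the kind of symmetry diagnostic mentioned above.

```python
import numpy as np

def hessian_from_score(score, theta, h=1e-5):
    """Central-difference Hessian from a user-supplied score function."""
    p = len(theta)
    H = np.zeros((p, p))
    for j in range(p):
        e = np.zeros(p)
        e[j] = h
        H[:, j] = (score(theta + e) - score(theta - e)) / (2 * h)
    asym = np.max(np.abs(H - H.T))     # symmetry diagnostic: large values
    H = (H + H.T) / 2                  # signal numerical-differentiation error
    return H, asym

def em_standard_errors(score, theta_hat):
    H, _ = hessian_from_score(score, theta_hat)
    cov = np.linalg.inv(-H)            # inverse observed information
    return np.sqrt(np.diag(cov))
```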

4.
A developmental trajectory describes the course of a behavior over time. Identifying multiple trajectories within an overall developmental process permits a focus on subgroups of particular interest. We introduce a framework for identifying trajectories by using the Expectation-Maximization (EM) algorithm to fit semiparametric mixtures of logistic distributions to longitudinal binary data. For performance comparison, we consider full maximization algorithms (PROC TRAJ in SAS), standard EM, and two other EM-based algorithms that speed up convergence. Simulations show that the EM methods produce more accurate parameter estimates. The EM methodology is illustrated with a longitudinal dataset on adolescent smoking behavior.

5.
We review the Fisher scoring and EM algorithms for incomplete multivariate data from an estimating-function point of view, and examine the corresponding quasi-score functions under second-moment assumptions. A bias-corrected REML-type estimator for the covariance matrix is derived, and the Fisher, Godambe and empirical sandwich information matrices are compared. We investigate the two algorithms numerically and compare them with a hybrid algorithm in which Fisher scoring is used for the mean vector and the EM algorithm for the covariance matrix.

6.
The three-parameter asymmetric Laplace distribution (ALD) has received increasing attention in quantile regression because of an important link between its location and asymmetry parameters. On the basis of the representation of the ALD as a normal variance–mean mixture with an exponential mixing distribution, this article develops EM and generalized EM algorithms for computing regression quantiles of linear and nonlinear regression models, respectively. Interestingly, the proposed EM algorithm and the MM (majorization–minimization) algorithm for quantile regression are computationally identical, since they share the same updating formula; this provides a good example connecting the EM and MM algorithms. Simulation studies show that the EM algorithm successfully recovers the true parameters in quantile regression.
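Because the EM and MM updates coincide, each iteration reduces to a weighted least-squares problem. The sketch below is an illustrative implementation of that shared update for the τ-th linear regression quantile, not the article's code: the weights 1/|r_i| arise from the E-step over the exponential mixing variables, and `eps` guards against zero residuals.

```python
import numpy as np

def ald_em_quantile(X, y, tau, iters=200, eps=1e-8):
    """EM/MM iterations for the tau-th linear regression quantile."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS starting values
    for _ in range(iters):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)        # E-step weights
        XtW = X.T * w                               # X'W via broadcasting
        rhs = XtW @ y + (2 * tau - 1) * X.sum(axis=0)
        beta = np.linalg.solve(XtW @ X, rhs)        # M-step: weighted LS
    return beta
```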

7.
This article applies the ECME algorithm to derive an easily implemented iterative feasible generalized least squares procedure for calculating maximum likelihood estimates of the parameters of the unbalanced two-way random-effects model. The algorithm increases the log-likelihood monotonically, and the fitted variance components are guaranteed to be nonnegative. The algorithm is illustrated with an example.

8.
In most applications, the parameters of a mixture of linear regression models are estimated by maximum likelihood using the expectation maximization (EM) algorithm. In this article, we compare three algorithms for computing the maximum likelihood estimates of these models: the EM algorithm, the classification EM algorithm and the stochastic EM algorithm. The three procedures are compared in a simulation study assessing computational effort, the statistical properties of the estimators, and goodness of fit. The results show that the choice of approach depends essentially on the configuration of the true regression lines and on the initialization of the algorithms.
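The three variants share the E-step and M-step and differ only in how the posterior component probabilities are used. A hedged sketch for a K-component mixture of linear regressions (all names hypothetical): EM keeps the soft probabilities, classification EM hard-assigns each observation to its most probable component, and stochastic EM samples an assignment instead.

```python
import numpy as np
from scipy.stats import norm

def mixture_reg_step(X, y, betas, sigmas, pis, variant="EM", rng=None):
    """One iteration of EM / classification EM / stochastic EM."""
    rng = rng or np.random.default_rng()
    K = len(pis)
    # E-step: posterior component probabilities for each observation
    dens = np.column_stack([pis[k] * norm.pdf(y, X @ betas[k], sigmas[k])
                            for k in range(K)])
    post = dens / dens.sum(axis=1, keepdims=True)

    if variant == "CEM":     # classification EM: hard assignments
        post = np.eye(K)[post.argmax(axis=1)]
    elif variant == "SEM":   # stochastic EM: sampled assignments
        z = np.array([rng.choice(K, p=row) for row in post])
        post = np.eye(K)[z]

    # M-step: weighted least squares within each component
    for k in range(K):
        w = post[:, k]
        XtW = X.T * w
        betas[k] = np.linalg.solve(XtW @ X, XtW @ y)
        r = y - X @ betas[k]
        sigmas[k] = np.sqrt((w * r**2).sum() / max(w.sum(), 1e-12))
        pis[k] = w.mean()
    return betas, sigmas, pis
```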

9.
The development of models and methods for cure rate estimation has recently burgeoned into an important subfield of survival analysis. Much of the literature focuses on the standard mixture model, though process-based models have recently been suggested. We focus on several models based on first passage times of Wiener processes; Whitmore and others have studied such models in a variety of contexts, and Lee and Whitmore (Stat Sci 21(4):501–513, 2006) give a comprehensive review of first hitting time models and briefly discuss their potential as cure rate models. In this paper, we study the Wiener process with negative drift as a possible cure rate model, but the resulting defective inverse Gaussian model is found to provide a poor fit in some cases. Several modifications that improve on the defective inverse Gaussian model are then suggested: the inverse Gaussian cure rate mixture model; a mixture of two inverse Gaussian models; incorporation of heterogeneity in the drift parameter; and the addition of a second absorbing barrier to the Wiener process, representing an immunity threshold. This class of process-based models is a useful alternative to the standard model and provides an improved fit on many of the datasets that we have studied. Implementation is facilitated by expectation-maximization (EM) algorithms and variants thereof, including the gradient EM algorithm. Parameter estimates for each of these EM algorithms are given, and the proposed models are applied to both real and simulated data, where they perform well.

10.
We compare the EM, SEM, and MCMC algorithms for estimating the parameters of the Gaussian mixture model, focusing on estimation problems that arise when the likelihood function has a sharp ridge or saddle points. We use both synthetic and empirical data with these features. The comparison includes Bayesian approaches with different prior specifications and various procedures for dealing with label switching. Although the solutions provided by the stochastic algorithms are more often degenerate, we conclude that SEM and MCMC may converge faster and improve the ability to locate the global maximum of the likelihood function.

11.
Parametric incomplete-data models defined by ordinary differential equations (ODEs) are widely used in biostatistics to describe biological processes accurately. Their parameters are estimated on approximate models whose regression functions are evaluated by a numerical integration method, so accurate and efficient estimation of these parameters is a critical issue. This paper proposes parameter estimation methods based either on a stochastic approximation EM algorithm (SAEM) for maximum likelihood estimation or on a Gibbs sampler for the Bayesian approach. Both algorithms involve simulating the non-observed data from their conditional distributions using Hastings–Metropolis (H–M) algorithms. A modified H–M algorithm, including an original local linearization scheme for solving the ODEs, is proposed to reduce the computational time significantly. The convergence of all these algorithms on the approximate model is proved, and the errors induced by the numerical solver on the conditional distribution, the likelihood and the posterior distribution are bounded. The Bayesian and maximum likelihood estimation methods are illustrated on a simulated pharmacokinetic nonlinear mixed-effects model defined by an ODE; simulation results illustrate the ability of these algorithms to provide accurate estimates.

12.
An automated (Markov chain) Monte Carlo EM algorithm
We present an automated Monte Carlo EM (MCEM) algorithm which efficiently assesses Monte Carlo error in the presence of dependent E-step samples, particularly those produced by Markov chain Monte Carlo, and chooses an appropriate Monte Carlo sample size to minimize this error with respect to the progressive EM-step estimates. Monte Carlo error is gauged through an application of the central limit theorem during renewal periods of the MCMC sampler used in the E-step. The resulting normal approximation allows us to construct a rigorous and adaptive rule for updating the Monte Carlo sample size at each iteration of the MCEM algorithm. We illustrate the automated routine and compare its performance with competing MCEM algorithms in an analysis of a dataset fit by a generalized linear mixed model.
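The outer loop of such a scheme might look like the sketch below. This is a loose illustration of the adaptive idea, not the authors' CLT-based rule: `mcem_update` is a hypothetical user-supplied function that runs one Monte Carlo E-step plus M-step with sample size `m` and returns the new estimate together with a Monte Carlo standard error of the update.

```python
import numpy as np

def automated_mcem(theta, mcem_update, m=100, max_iter=200, growth=0.25):
    """Adaptive MCEM driver: grow m when the update is within MC noise."""
    for _ in range(max_iter):
        theta_new, mc_se = mcem_update(theta, m)   # one MCEM iteration
        step = np.linalg.norm(theta_new - theta)
        if step < 3 * mc_se:                       # swamped by MC error:
            m = int(np.ceil(m * (1 + growth)))     # increase the MC sample
        theta = theta_new
    return theta, m
```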

13.
In this article, we examine the performance of maximum likelihood estimation of the Burr XII parameters for constant-stress partially accelerated life tests under multiple censored data. Two maximum likelihood estimation methods are considered. One is based on the observed-data likelihood function, with estimates obtained by the quasi-Newton algorithm; the other is based on the complete-data likelihood function, with estimates derived via the expectation-maximization (EM) algorithm. The variance–covariance matrices are derived to construct confidence intervals for the parameters. The two algorithms are compared in a simulation study, which shows that maximum likelihood estimation via the EM algorithm outperforms the quasi-Newton algorithm in terms of absolute relative bias, bias, root mean square error and coverage rate. Finally, a numerical example illustrates the performance of the proposed methods.

14.
Celebrating the 20th anniversary of the presentation of the paper by Dempster, Laird and Rubin which popularized the EM algorithm, we investigate, after a brief historical account, strategies that aim to make the EM algorithm converge faster while maintaining its simplicity and stability (e.g. automatic monotone convergence in likelihood). First we introduce the idea of a 'working parameter' to facilitate the search for efficient data augmentation schemes and thus fast EM implementations. Second, summarizing various recent extensions of the EM algorithm, we formulate a general alternating expectation–conditional maximization algorithm, AECM, that couples flexible data augmentation schemes with model reduction schemes to achieve efficient computation. We illustrate these methods using multivariate t-models with known or unknown degrees of freedom and Poisson models for image reconstruction. We show, through both empirical and theoretical evidence, the potential for a dramatic reduction in computational time with little increase in human effort. We also discuss the intrinsic connection between EM-type algorithms and the Gibbs sampler, and the possibility of using the techniques presented here to speed up the latter. The main conclusion of the paper is that, with the help of statistical considerations, it is possible to construct algorithms that are simple, stable and fast.

15.
The expectation–maximization (EM) algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and high-dimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Carlo methods to estimate the relevant integrals. Typically, a very large Monte Carlo sample size is required to estimate these integrals within an acceptable tolerance when the algorithm is near convergence; even if this sample size were known at the onset of implementation, using it throughout all iterations is wasteful, especially when accurate starting values are not available. We propose a data-driven strategy for controlling Monte Carlo resources in MCEM. The proposed algorithm improves on similar existing methods by recovering EM's ascent (i.e. likelihood-increasing) property with high probability, by being more robust to user-defined inputs, and by handling classical Monte Carlo and Markov chain Monte Carlo methods within a common framework. Because of the first of these properties we refer to the algorithm as 'ascent-based MCEM'. We apply ascent-based MCEM to a variety of examples, including one where it dramatically accelerates the convergence of deterministic EM.

16.
We compare the performance of simulated annealing and the EM algorithm for likelihood-based decomposition of normal mixtures. In this setting the likelihood function has multiple maxima and singularities, and we consider a suitable reformulation of the problem that yields an optimization problem with a global solution and at least a reduced number of spurious maxima. The results are compared using distance measures between the estimated distributions and the true ones. No overwhelming superiority of either method emerged, though in one of our cases simulated annealing achieved better results.

17.
The established general results on convergence properties of the EM algorithm require the sequence of EM parameter estimates to fall in the interior of the parameter space over which the likelihood is being maximized. This paper presents convergence properties of the EM sequence of likelihood values and parameter estimates in constrained parameter spaces for which the EM parameter estimates may converge to the boundary of the constrained parameter space contained in the interior of the unconstrained parameter space. Examples of the behavior of the EM algorithm applied to such parameter spaces are presented.

18.
Traditional hierarchical models assume that groups are independent and do not account for between-group correlation. Data grouped by geographic units, however, often exhibit spatial dependence: individuals are influenced not only by their own region but potentially by neighboring regions as well. In that case, the traditional hierarchical model's assumption on the level-2 residual distribution no longer holds. To handle spatially hierarchical data, this paper introduces ideas from spatial statistics and spatial econometric models into the hierarchical model, accommodating both the hierarchical structure and spatial correlation, and proposes a spatial hierarchical linear model. Maximum likelihood estimators of its fixed effects, variance–covariance components and spatial regression parameters are given, with the Fisher scoring algorithm incorporated into the EM algorithm.

19.
This paper focuses on recent developments in nonparametric mixture distributions. It discusses nonparametric maximum likelihood estimation (NPMLE) of the mixing distribution and emphasizes gradient-type results, especially global results and the global convergence of algorithms such as the vertex direction and vertex exchange methods. However, the NPMLE (and the algorithms that construct it) also provides an estimate of the number of components of the mixing distribution, which may be undesirable on theoretical grounds or disallowed by the physical interpretation of the mixture model. When the number of components is fixed in advance, the aforementioned algorithms cannot be used, and no globally convergent algorithm yet exists. Instead, the EM algorithm is often used to find maximum likelihood estimates, but in this case multiple maxima often occur. An example from a meta-analysis of vitamin A and childhood mortality is used to illustrate the considerable inferential importance of identifying the correct global likelihood. To improve the behavior of the EM algorithm, we suggest combining gradient function steps with EM steps to achieve global convergence, leading to the EM algorithm with gradient function update (EMGFU). This algorithm keeps the number of components fixed at exactly k and typically converges to the global maximum. The behavior of the algorithm is illustrated with several examples.
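For reference, the gradient function that drives the vertex direction and vertex exchange methods is D(λ, G) = Σ_i f(x_i | λ)/f_G(x_i) − n; at the NPMLE it is ≤ 0 everywhere, with equality on the support of the estimate, so a positive value flags a direction of ascent. A minimal sketch, assuming a component density `dens` that broadcasts over NumPy arrays (all names illustrative):

```python
import numpy as np
from scipy.stats import poisson

def mixture_density(x, support, weights, dens):
    # f_G(x_i) = sum_j w_j f(x_i | lam_j), via an (n, J) density matrix
    return dens(x[:, None], support[None, :]) @ weights

def gradient_function(lam, x, support, weights, dens):
    # D(lam, G): positive values indicate a direction of likelihood ascent
    f_G = mixture_density(x, support, weights, dens)
    return np.sum(dens(x, lam) / f_G) - len(x)

# Usage with a two-component Poisson mixture as the current estimate G
x = np.array([0, 1, 1, 2, 5, 7])
D = gradient_function(3.0, x, np.array([1.0, 6.0]), np.array([0.5, 0.5]),
                      lambda x, lam: poisson.pmf(x, lam))
```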

20.
The complete-data model that underlies an Expectation-Maximization (EM) algorithm must have a parameter space that coincides with the parameter space of the observed-data model. Otherwise, maximization of the observed-data log-likelihood will be carried out over a space that does not coincide with the desired parameter space. In some contexts, however, a natural complete-data model may be defined only for parameter values within a subset of the observed-data parameter space. In this paper we show that such a model can still be useful if it can be viewed as a member of a finite family of complete-data models whose parameter spaces collectively cover the observed-data parameter space. Such a family of complete-data models defines a family of EM algorithms which together lead to a finite collection of constrained maxima of the observed-data log-likelihood. Maximization of the log-likelihood function over the full parameter space then involves identifying the constrained maximum that achieves the greatest log-likelihood value. Since optimization over a finite collection of candidates is referred to as combinatorial optimization, we refer to such a family of EM algorithms as a combinatorial EM (CEM) algorithm. As well as discussing the theoretical concepts behind CEM algorithms, we discuss strategies for improving the computational efficiency when the number of complete-data models is large. Various applications of CEM algorithms are also discussed, ranging from simple examples that illustrate the concepts, to more substantive examples that demonstrate the usefulness of CEM algorithms in practice.
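At its core the scheme is a finite maximization over constrained EM runs. A schematic sketch, with hypothetical names: `em_fits` holds one constrained EM routine per complete-data model, and `loglik` evaluates the observed-data log-likelihood.

```python
def combinatorial_em(em_fits, theta0, loglik):
    """Combinatorial EM: run one constrained EM per complete-data model,
    then keep the constrained maximum with the highest observed-data
    log-likelihood."""
    candidates = [fit(theta0) for fit in em_fits]   # one EM run per model
    return max(candidates, key=loglik)              # best constrained max
```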
