Similar Articles
20 similar articles found (search time: 31 ms)
1.
The iteratively reweighting algorithm is one of the most widely used algorithms for computing M-estimates of the location and scatter parameters of a multivariate dataset. If the M estimating equations are the maximum likelihood estimating equations from some scale mixture of normal distributions (e.g. from a multivariate t-distribution), the iteratively reweighting algorithm can be identified as an EM algorithm, and its convergence behavior is well established. However, as Tyler (J. Roy. Statist. Soc. Ser. B 59 (1997) 550) pointed out, little is known about the theoretical convergence properties of iteratively reweighting algorithms that cannot be identified as EM algorithms. In this paper, we consider the convergence behavior of the iteratively reweighting algorithm induced from M estimating equations that cannot be identified as an EM algorithm. We give some general results on the convergence properties and show that the convergence behavior of a general iteratively reweighting algorithm induced from M estimating equations is similar to that of an EM algorithm, even when it cannot be identified as one.
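The fixed-point iteration the abstract refers to can be sketched in a few lines. Below is a minimal NumPy implementation of the iteratively reweighting (IRLS) update for the location and scatter of a multivariate t with known degrees of freedom; the function name, tolerance, and iteration cap are illustrative assumptions, not the paper's.

```python
import numpy as np

def t_mle_irls(X, nu=4.0, iters=100, tol=1e-8):
    """Iteratively reweighting M-estimation of location mu and scatter
    Sigma for a multivariate t with known df nu.  Each observation gets
    weight (p + nu) / (nu + d_i^2), where d_i^2 is its squared
    Mahalanobis distance under the current estimates."""
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    for _ in range(iters):
        diff = X - mu
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
        w = (p + nu) / (nu + d2)
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - mu_new
        Sigma_new = (w[:, None] * diff).T @ diff / n
        converged = np.abs(mu_new - mu).max() < tol
        mu, Sigma = mu_new, Sigma_new
        if converged:
            break
    return mu, Sigma
```

On data from a scale mixture of normals this coincides with the EM update; the point of the paper is that the same iteration is used, and behaves similarly, even when no EM interpretation exists.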

2.
For data from multivariate t distributions, it is difficult to carry out an influence analysis based directly on the probability density function, since its expression is intractable. In this paper, we present a technique for influence analysis based on the mixture representation and the EM algorithm. In fact, the multivariate t distribution can be considered a particular Gaussian mixture obtained by introducing weights from the Gamma distribution. We treat the weights as missing data and develop influence analysis for data from multivariate t distributions based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm. Several case-deletion measures are proposed for detecting influential observations from multivariate t distributions. Two numerical examples are given to illustrate our methodology.
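The Gaussian-mixture representation with Gamma weights mentioned above can be made concrete with a short sampler; the function name and parameterisation below are illustrative assumptions (w ~ Gamma(nu/2, rate nu/2), then x | w ~ N(mu, Sigma / w)).

```python
import numpy as np

rng = np.random.default_rng(0)

def rmvt(n, mu, Sigma, nu):
    """Sample a multivariate t via its Gaussian scale-mixture form:
    draw w ~ Gamma(nu/2, rate=nu/2), then x | w ~ N(mu, Sigma / w).
    The weights w are exactly the 'missing data' treated by the EM
    formulation described in the abstract."""
    p = len(mu)
    w = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)  # rate nu/2
    z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    return mu + z / np.sqrt(w)[:, None]
```

Small w inflates the conditional covariance Sigma / w, which is how the heavy tails of the t arise from the mixture.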

3.

We propose a semiparametric version of the EM algorithm under the semiparametric mixture model introduced by Anderson (1979, Biometrika, 66, 17-26). It is shown that the sequence of proposed EM iterates, irrespective of the starting value, converges to the maximum semiparametric likelihood estimator of the vector of parameters in the semiparametric mixture model. The proposed EM algorithm preserves the appealing monotone convergence property of the standard EM algorithm and can be implemented by employing the standard logistic regression program. We present one example to demonstrate the performance of the proposed EM algorithm.

4.
Multivariate mixtures of Erlang distributions form a versatile, yet analytically tractable, class of distributions making them suitable for multivariate density estimation. We present a flexible and effective fitting procedure for multivariate mixtures of Erlangs, which iteratively uses the EM algorithm, by introducing a computationally efficient initialization and adjustment strategy for the shape parameter vectors. We furthermore extend the EM algorithm for multivariate mixtures of Erlangs to be able to deal with randomly censored and fixed truncated data. The effectiveness of the proposed algorithm is demonstrated on simulated as well as real data sets.

5.
The maximum likelihood equations for a multivariate normal model with structured mean and structured covariance matrix may not have an explicit solution. In some cases the model's error term may be decomposed as the sum of two independent error terms, each having a patterned covariance matrix, such that if one of the unobservable error terms is artificially treated as "missing data", the EM algorithm can be used to compute the maximum likelihood estimates for the original problem. Some decompositions produce likelihood equations which do not have an explicit solution at each iteration of the EM algorithm, but within-iteration explicit solutions are shown for two general classes of models including covariance component models used for analysis of longitudinal data.

6.
We present an algorithm for multivariate robust Bayesian linear regression with missing data. The iterative algorithm computes an approximate posterior for the model parameters based on the variational Bayes (VB) method. Compared to the EM algorithm, the VB method has the advantage that the variance of the model parameters is also computed directly by the algorithm. We consider three families of Gaussian scale mixture models for the measurements, which include as special cases the multivariate t distribution, the multivariate Laplace distribution, and the contaminated normal model. The observations can contain missing values, assuming that the missing data mechanism can be ignored. A Matlab/Octave implementation of the algorithm is presented and applied to solve three reference examples from the literature.

7.

In this paper the use of the empirical Fisher information matrix as an estimator of the information matrix is considered in the context of response models and incomplete data problems. The introduction of an additional stochastic component into such models is shown to greatly increase the range of situations in which the estimator can be employed. In particular the conditions for its use in incomplete data problems are shown to be the same as those needed to justify the use of the EM algorithm.
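To make the outer-product construction concrete, here is a minimal sketch of the empirical Fisher information for a Poisson sample, where the per-observation score is known in closed form; the example model is ours, chosen for illustration, not the paper's.

```python
import numpy as np

def empirical_fisher_poisson(x, lam):
    """Empirical (outer-product-of-scores) estimate of the Fisher
    information for an i.i.d. Poisson(lam) sample.  The score of one
    observation is s_i = x_i / lam - 1; the empirical information is
    the sum of s_i^2 evaluated at lam (typically the MLE)."""
    scores = x / lam - 1.0
    return np.sum(scores ** 2)
```

At the MLE this should be close to the expected information n / lam, which is the comparison the estimator is usually judged by.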

8.
Finite mixtures of multivariate skew t (MST) distributions have proven to be useful in modelling heterogeneous data with asymmetric and heavy tail behaviour. Recently, they have been exploited as an effective tool for modelling flow cytometric data. A number of algorithms for the computation of the maximum likelihood (ML) estimates for the model parameters of mixtures of MST distributions have been put forward in recent years. These implementations use various characterizations of the MST distribution, which are similar but not identical. While exact implementation of the expectation-maximization (EM) algorithm can be achieved for 'restricted' characterizations of the component skew t-distributions, Monte Carlo (MC) methods have been used to fit the 'unrestricted' models. In this paper, we review several recent fitting algorithms for finite mixtures of multivariate skew t-distributions, at the same time clarifying some of the connections between the various existing proposals. In particular, recent results have shown that the EM algorithm can be implemented exactly for faster computation of ML estimates for mixtures with unrestricted MST components. The gain in computational time is achieved by noting that the semi-infinite integrals on the E-step of the EM algorithm can be put in the form of moments of the truncated multivariate non-central t-distribution, similar to the restricted case, which subsequently can be expressed in terms of the non-truncated form of the central t-distribution function for which fast algorithms are available. We present comparisons to illustrate the relative performance of the restricted and unrestricted models, and demonstrate the usefulness of the recently proposed methodology for the unrestricted MST mixture through applications to three real datasets.

9.
A strategy is proposed to initialize the EM algorithm in the multivariate Gaussian mixture context. It consists of randomly drawing, with a low computational cost in many situations, initial mixture parameters in an appropriate space including all possible EM trajectories. This space is simply defined by two relations between the first two empirical moments and the mixture parameters satisfied by any EM iteration. An experimental study on simulated and real data sets clearly shows that this strategy outperforms classical methods, since it has the nice property of widely exploring local maxima of the likelihood function.
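A minimal 1-D sketch of a moment-constrained random initialization in this spirit: component means are drawn randomly and then adjusted so that the first two mixture moments match the empirical moments. The equal weights, common variance, and shrinkage rule below are simplifying assumptions for illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_init(x, k):
    """Draw random initial GMM parameters whose first two mixture
    moments match the empirical moments of the data (1-D sketch)."""
    xbar, s2 = x.mean(), x.var()
    # draw component means around the empirical mean
    mu = xbar + rng.normal(0.0, np.sqrt(s2), size=k)
    w = np.full(k, 1.0 / k)
    mu = mu - (w @ mu - xbar)            # enforce first-moment relation
    between = w @ (mu - xbar) ** 2       # between-component variance
    if between >= s2:                    # shrink means toward xbar if needed
        mu = xbar + (mu - xbar) * np.sqrt(0.9 * s2 / between)
        between = w @ (mu - xbar) ** 2
    sigma2 = np.full(k, s2 - between)    # enforce second-moment relation
    return w, mu, sigma2
```

By construction any such draw lies in the space of parameters reachable by an EM iteration from some configuration, which is the property the initialization strategy exploits.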

10.
In this article we investigate the relationship between the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler, obtained by Gaussian approximation, is equal to that of the corresponding EM-type algorithm. This helps in implementing either algorithm, as improvement strategies for one can be directly transported to the other. In particular, by running the EM algorithm we know approximately how many iterations are needed for convergence of the Gibbs sampler. We also show that, under certain conditions, the EM algorithm used for finding the maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler for Bayesian inference. We illustrate our results in a number of realistic examples, all based on generalized linear mixed models.

11.
The iterative weighted least squares algorithm is handy for solving generalized estimating equations. In some situations it may be desirable to limit the number of iterations to a fixed finite number, for instance, to keep the breakdown point under control. Such a scheme is called reweighting. Usually reweighting leads to a different large sample theory than full iteration, and the reweighted estimator may inherit deficiencies of the starting value. When might the reweighting scheme work? To answer this question we define a broad class of estimators, namely, approximate GM estimators, and we show that reweighting leads to the same large sample theory as full iteration within this class. As an example, we provide conditions under which one-step Newton-Raphson estimators are approximate GM estimators. We then use the reweighting to construct residual-based graphics for approximate GM estimates, adapting weighted residual plots that have been proposed previously, and developing new plots to provide complementary views of the data.

12.
Recently Kundu and Gupta [2010, Modified Sarhan-Balakrishnan singular bivariate distribution, Journal of Statistical Planning and Inference, 140, 526-538] introduced the modified Sarhan-Balakrishnan bivariate distribution and established several of its properties. In this paper we provide a multivariate extension of the modified Sarhan-Balakrishnan bivariate distribution. It is a distribution with a singular part. Various ageing and dependence properties of the proposed multivariate distribution are established. The moment generating function and the product moments can be obtained in terms of infinite series, and the multivariate hazard rate is obtained. We provide an EM algorithm to compute the maximum likelihood estimators, and an illustrative example is presented to demonstrate the effectiveness of the proposed method.

13.
Various solutions to the parameter estimation problem of a recently introduced multivariate Pareto distribution are developed and exemplified numerically. Namely, a density of the aforementioned multivariate Pareto distribution with respect to a dominating measure, rather than the corresponding Lebesgue measure, is specified and then employed to investigate the maximum likelihood estimation (MLE) approach. Also, in an attempt to fully enjoy the common shock origins of the multivariate model of interest, an adapted variant of the expectation-maximization (EM) algorithm is formulated and studied. The method of moments is discussed as a convenient way to obtain starting values for the numerical optimization procedures associated with the MLE and EM methods.

14.
The EM algorithm is a popular method for parameter estimation in situations where the data can be viewed as being incomplete. As each E-step visits each data point on a given iteration, the EM algorithm requires considerable computation time in its application to large data sets. Two versions, the incremental EM (IEM) algorithm and a sparse version of the EM algorithm, were proposed recently by Neal R.M. and Hinton G.E. in Jordan M.I. (Ed.), Learning in Graphical Models, Kluwer, Dordrecht, 1998, pp. 355–368 to reduce the computational cost of applying the EM algorithm. With the IEM algorithm, the available n observations are divided into B (B ≤ n) blocks and the E-step is implemented for only a block of observations at a time before the next M-step is performed. With the sparse version of the EM algorithm for the fitting of mixture models, only those posterior probabilities of component membership of the mixture that are above a specified threshold are updated; the remaining component-posterior probabilities are held fixed. In this paper, simulations are performed to assess the relative performances of the IEM algorithm with various numbers of blocks and the standard EM algorithm. In particular, we propose a simple rule for choosing the number of blocks with the IEM algorithm. For the IEM algorithm in the extreme case of one observation per block, we provide efficient updating formulas, which avoid the direct calculation of the inverses and determinants of the component-covariance matrices. Moreover, a sparse version of the IEM algorithm (SPIEM) is formulated by combining the sparse E-step of the EM algorithm and the partial E-step of the IEM algorithm. This SPIEM algorithm can further reduce the computation time of the IEM algorithm.
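A minimal sketch of the IEM scheme for a 1-D two-component Gaussian mixture, assuming sufficient statistics are cached per block and the M-step is run after each partial E-step; the initialization and block handling are simplified relative to the paper.

```python
import numpy as np

def npdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def iem_gmm(x, B=4, iters=20):
    """Incremental EM for a two-component 1-D Gaussian mixture: each
    partial E-step updates the sufficient statistics of one block only,
    and the M-step runs on the current totals (Neal & Hinton, 1998)."""
    n = len(x)
    blocks = np.array_split(np.arange(n), B)
    w = np.array([0.5, 0.5])
    mu = np.quantile(x, [0.25, 0.75])            # crude initial means
    var = np.array([x.var(), x.var()])
    S0 = np.zeros((B, 2)); S1 = np.zeros((B, 2)); S2 = np.zeros((B, 2))
    for _ in range(iters):
        for b, idx in enumerate(blocks):
            xb = x[idx]
            # partial E-step: responsibilities for this block only
            dens = w * npdf(xb[:, None], mu, var)
            r = dens / dens.sum(axis=1, keepdims=True)
            S0[b] = r.sum(axis=0)
            S1[b] = (r * xb[:, None]).sum(axis=0)
            S2[b] = (r * xb[:, None] ** 2).sum(axis=0)
            # M-step on the totals accumulated over all blocks
            t0, t1, t2 = S0.sum(0), S1.sum(0), S2.sum(0)
            if t0.min() > 0:
                w = t0 / t0.sum()
                mu = t1 / t0
                var = t2 / t0 - mu ** 2
    return w, mu, var
```

Only one block's responsibilities are recomputed per M-step, which is where the computational saving over the standard EM algorithm comes from.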

15.
Acceleration of the EM Algorithm by using Quasi-Newton Methods
The EM algorithm is a popular method for maximum likelihood estimation. Its simplicity in many applications and desirable convergence properties make it very attractive. Its sometimes slow convergence, however, has prompted researchers to propose methods to accelerate it. We review these methods, classifying them into three groups: pure, hybrid and EM-type accelerators. We propose a new pure and a new hybrid accelerator both based on quasi-Newton methods and numerically compare these and two other quasi-Newton accelerators. For this we use examples in each of three areas: Poisson mixtures, the estimation of covariance from incomplete data and multivariate normal mixtures. In these comparisons, the new hybrid accelerator was fastest on most of the examples and often dramatically so. In some cases it accelerated the EM algorithm by factors of over 100. The new pure accelerator is very simple to implement and competed well with the other accelerators. It accelerated the EM algorithm in some cases by factors of over 50. To obtain standard errors, we propose to approximate the inverse of the observed information matrix by using auxiliary output from the new hybrid accelerator. A numerical evaluation of these approximations indicates that they may be useful at least for exploratory purposes.
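A minimal sketch of the idea behind a pure accelerator: treat the EM update as a fixed-point map M(p) and apply a secant step (a one-dimensional quasi-Newton method) to g(p) = M(p) - p. The toy mixing-weight model and the safeguards below are our assumptions for illustration, not the authors' accelerators.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy data: mixture 0.3 * N(-2, 1) + 0.7 * N(2, 1), components known
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 700)])

def npdf(z, m):
    return np.exp(-0.5 * (z - m) ** 2) / np.sqrt(2 * np.pi)

def em_map(p):
    """One EM update for the mixing weight p of the N(2, 1) component."""
    r = p * npdf(x, 2) / (p * npdf(x, 2) + (1 - p) * npdf(x, -2))
    return r.mean()

def accelerated(p0, tol=1e-10, max_iter=100):
    """Secant (quasi-Newton) search for the fixed point of em_map."""
    p1 = em_map(p0)
    g0 = p1 - p0
    g1 = em_map(p1) - p1
    for _ in range(max_iter):
        if abs(g1) < tol or g1 == g0:
            break
        step = g1 * (p1 - p0) / (g1 - g0)
        p0, g0 = p1, g1
        p1 = min(max(p1 - step, 1e-6), 1 - 1e-6)  # stay inside (0, 1)
        g1 = em_map(p1) - p1
    return p1
```

Plain EM would apply em_map repeatedly; the secant step uses the last two iterates to extrapolate toward the fixed point, which is the mechanism all the quasi-Newton accelerators exploit in higher dimensions.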

16.
Linear mixed models are regularly applied to animal and plant breeding data to evaluate genetic potential. Residual maximum likelihood (REML) is the preferred method for estimating variance parameters associated with this type of model. Typically an iterative algorithm is required for the estimation of variance parameters. Two algorithms which can be used for this purpose are the expectation-maximisation (EM) algorithm and the parameter expanded EM (PX-EM) algorithm. Both, particularly the EM algorithm, can be slow to converge when compared to a Newton-Raphson type scheme such as the average information (AI) algorithm. The EM and PX-EM algorithms require specification of the complete data, including the incomplete and missing data. We consider a new incomplete data specification based on a conditional derivation of REML. We illustrate the use of the resulting new algorithm through two examples: a sire model for lamb weight data and a balanced incomplete block soybean variety trial. In the cases where the AI algorithm failed, a REML PX-EM based on the new incomplete data specification converged in 28% to 30% fewer iterations than the alternative REML PX-EM specification. For the soybean example a REML EM algorithm using the new specification converged in fewer iterations than the current standard specification of a REML PX-EM algorithm. The new specification integrates linear mixed models, Henderson's mixed model equations, REML and the REML EM algorithm into a cohesive framework.

17.
For multivariate normal data with non-monotone (i.e. arbitrary) missing data patterns, lattice conditional independence (LCI) models determined by the observed data patterns can be used to obtain closed-form MLEs (Andersson and Perlman, 1991, 1993). In this paper, three procedures — LCI models, the EM algorithm, and the complete-data method — are compared by means of a Monte Carlo experiment. When the LCI model is accepted by the LR test, the LCI estimate is more efficient than those based on the EM algorithm and the complete-data method. When the LCI model is not accepted, the LCI estimate may lose efficiency but still may be more efficient than the EM estimate if the observed data is sparse. When the LCI model appears too restrictive, it may be possible to obtain a less restrictive LCI model by discarding only a small portion of the incomplete observations. LCI models appear to be especially useful when the observed data is sparse, even in cases where the suitability of the LCI model is uncertain.

18.
We propose penalized-likelihood methods for parameter estimation of the high-dimensional t distribution. First, we show that a general class of commonly used shrinkage covariance matrix estimators for the multivariate normal can be obtained as penalized-likelihood estimators with a penalty that is proportional to the entropy loss between the estimate and an appropriately chosen shrinkage target. Motivated by this fact, we then consider applying this penalty to the multivariate t distribution. The penalized estimate can be computed efficiently using an EM algorithm for given tuning parameters. It can also be viewed as an empirical Bayes estimator. Taking advantage of its Bayesian interpretation, we propose a variant of the method of moments to effectively elicit the tuning parameters. Simulations and real data analysis demonstrate the competitive performance of the new methods.
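For the normal case mentioned first, the entropy-loss penalty has a closed form: minimizing (1 + lam) log|Sigma| + tr((S + lam T) Sigma^-1) gives Sigma_hat = (S + lam T) / (1 + lam), a linear shrinkage of the sample covariance toward the target. A sketch, where the scaled-identity target is an illustrative choice:

```python
import numpy as np

def shrunk_cov(X, lam, target=None):
    """Linear-shrinkage covariance estimate (S + lam * T) / (1 + lam),
    the closed-form minimizer of the entropy-loss-penalized normal
    likelihood for a fixed shrinkage target T.  Default target: the
    scaled identity with the same average eigenvalue as S."""
    S = np.cov(X, rowvar=False, bias=True)
    if target is None:
        target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
    return (S + lam * target) / (1.0 + lam)
```

lam = 0 recovers the sample covariance and lam -> infinity recovers the target; for the t distribution no such closed form exists, which is why the paper resorts to an EM algorithm.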

19.
An EM algorithm for multivariate Poisson distribution and related models
Multivariate extensions of the Poisson distribution are plausible models for multivariate discrete data. The lack of estimation and inferential procedures reduces the applicability of such models. In this paper, an EM algorithm for maximum likelihood estimation of the parameters of the multivariate Poisson distribution is described. The algorithm is based on the multivariate reduction technique that generates the multivariate Poisson distribution. Illustrative examples are also provided. Extension to other models, generated via multivariate reduction, is discussed.
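The multivariate reduction can be illustrated in the bivariate case: X1 = Y1 + Y0 and X2 = Y2 + Y0 with independent Poisson Y's, so the shared term Y0 induces covariance lam0. This is a sampling sketch of the construction, not the paper's estimation algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbivpois(n, lam1, lam2, lam0):
    """Sample from the bivariate Poisson via multivariate reduction:
    X1 = Y1 + Y0, X2 = Y2 + Y0 with independent Poisson Y's, giving
    E[X1] = lam1 + lam0, E[X2] = lam2 + lam0, Cov(X1, X2) = lam0."""
    y0 = rng.poisson(lam0, n)
    x1 = rng.poisson(lam1, n) + y0
    x2 = rng.poisson(lam2, n) + y0
    return np.column_stack([x1, x2])
```

In the EM formulation the unobserved common shock Y0 plays the role of the missing data, which is what makes the M-step tractable.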

20.
We review the Fisher scoring and EM algorithms for incomplete multivariate data from an estimating function point of view, and examine the corresponding quasi-score functions under second-moment assumptions. A bias-corrected REML-type estimator for the covariance matrix is derived, and the Fisher, Godambe and empirical sandwich information matrices are compared. We make a numerical investigation of the two algorithms, and compare with a hybrid algorithm, where Fisher scoring is used for the mean vector and the EM algorithm for the covariance matrix.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号