Similar Articles
20 similar articles found.
1.
This paper presents a unified method for influence analysis that handles the random effects appearing in additive nonlinear regression models for repeated measurement data. The basic idea is to apply the Q-function, the conditional expectation of the complete-data log-likelihood function obtained from the EM algorithm, in place of the observed-data log-likelihood function used in standard influence analysis. Diagnostic measures are derived based on the case-deletion approach and the local influence approach. Two real examples and a simulation study illustrate the methodology.

2.
We derive and investigate a variant of AIC, the Akaike information criterion, for model selection in settings where the observed data is incomplete. Our variant is based on the motivation provided for the PDIO (‘predictive divergence for incomplete observation models’) criterion of Shimodaira (1994, in: Selecting Models from Data: Artificial Intelligence and Statistics IV, Lecture Notes in Statistics, vol. 89, Springer, New York, pp. 21–29). However, our variant differs from PDIO in its ‘goodness-of-fit’ term. Unlike AIC and PDIO, which require the computation of the observed-data empirical log-likelihood, our criterion can be evaluated using only complete-data tools, readily available through the EM algorithm and the SEM (‘supplemented’ EM) algorithm of Meng and Rubin (Journal of the American Statistical Association 86 (1991) 899–909). We compare the performance of our AIC variant to that of both AIC and PDIO in simulations where the data being modeled contains missing values. The results indicate that our criterion is less prone to overfitting than AIC and less prone to underfitting than PDIO.

3.
An incomplete-data Fisher scoring method is proposed for parameter estimation in models with missing data and in latent-variable models that can be formulated as missing data problems. The convergence properties of the proposed method and of an accelerated variant are provided. The main features of the method are its ability to accelerate convergence by adjusting the steplength, to provide the second derivative of the observed-data log-likelihood function using only quantities already computed by the method, and to avoid explicitly solving for the first derivative of the objective function. Four examples demonstrate how the proposed method converges compared with the EM algorithm and its variants. The computing time is also compared.
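The basic Fisher scoring update with an adjustable steplength can be illustrated on a fully observed toy problem. The sketch below is a hypothetical Poisson-regression example, not the paper's incomplete-data variant; the data, seed, and function name are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fully observed Poisson-regression data (for illustration only).
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.5, -0.3, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

def fisher_scoring(X, y, steplength=1.0, tol=1e-8, max_iter=100):
    """Fisher scoring for Poisson regression with a log link:
    beta <- beta + steplength * I(beta)^{-1} s(beta), where
    s(beta) = X'(y - mu) is the score and I(beta) = X' diag(mu) X
    is the expected information. Shrinking the steplength damps
    the update; values above 1 can accelerate it."""
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        mu = np.exp(X @ beta)
        score = X.T @ (y - mu)
        info = X.T @ (mu[:, None] * X)
        step = np.linalg.solve(info, score)
        beta = beta + steplength * step
        if np.max(np.abs(step)) < tol:
            break
    return beta

beta_hat = fisher_scoring(X, y)
```

Because the expected information is already formed at every iteration, a second-derivative estimate of the log-likelihood comes essentially for free, which mirrors the feature highlighted in the abstract.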

4.
In this article, the finite mixture model of Weibull distributions is studied: the identifiability of the model with m components is proven, and the parameter estimates obtained for the two-component case by several algorithms are compared. The maximum likelihood estimates are computed with different algorithms: expectation-maximization (EM), Fisher scoring, backfitting, optimization of a k-nearest neighbor approach, and a random walk algorithm using Monte Carlo simulation. The Akaike information criterion and the log-likelihood value are used to compare models. In general, the proposed random walk algorithm shows better performance in terms of mean square error and bias. Finally, the results are applied to electronic component lifetime data.

5.
This paper proposes a method to assess local influence under minor perturbations of a statistical model with incomplete data. The idea is to apply Cook's approach to the conditional expectation of the complete-data log-likelihood function in the EM algorithm. The proposed method is shown to produce analytic results very similar to those obtained from a classical local influence approach based on the observed-data likelihood function, and it has the potential to assess a variety of complicated models that cannot be handled by existing methods. An application to the generalized linear mixed model is investigated. Some illustrative artificial and real examples are presented.

6.
Variable selection in cluster analysis is important yet challenging. It can be achieved by regularization methods, which realize a trade-off between clustering accuracy and the number of selected variables by using a lasso-type penalty. However, calibrating the penalty term can be problematic. Model selection methods are an efficient alternative, yet they require the difficult optimization of an information criterion that involves combinatorial problems. First, most of these optimization algorithms are based on a suboptimal procedure (e.g. a stepwise method). Second, the algorithms are often computationally expensive because they need multiple calls to EM algorithms. Here we propose a new information criterion based on the integrated complete-data likelihood. It does not require the maximum likelihood estimate, and its maximization turns out to be simple and computationally efficient. The original contribution of our approach is to perform model selection without requiring any parameter estimation; parameter inference is then needed only for the single selected model. This approach is used for variable selection in a Gaussian mixture model under a conditional independence assumption. Numerical experiments on simulated and benchmark datasets show that the proposed method often outperforms two classical approaches to variable selection. The proposed approach is implemented in the R package VarSelLCM, available on CRAN.

7.
In this article, we examine the performance of the maximum likelihood estimates of the Burr XII parameters for constant-stress partially accelerated life tests under multiply censored data. Two maximum likelihood estimation methods are considered. One is based on the observed-data likelihood function, with the maximum likelihood estimates obtained by the quasi-Newton algorithm. The other is based on the complete-data likelihood function, with the maximum likelihood estimates derived by the expectation-maximization (EM) algorithm. The variance–covariance matrices are derived to construct confidence intervals for the parameters. The performance of the two algorithms is compared in a simulation study. The simulation results show that maximum likelihood estimation via the EM algorithm outperforms the quasi-Newton algorithm in terms of absolute relative bias, bias, root mean square error and coverage rate. Finally, a numerical example illustrates the performance of the proposed methods.

8.
In this work, we generalize the controlled calibration model by assuming replication on both variables. Likelihood-based methodology is used to estimate the model parameters and the Fisher information matrix is used to construct confidence intervals for the unknown value of the regressor variable. Further, we study the local influence diagnostic method which is based on the conditional expectation of the complete-data log-likelihood function related to the EM algorithm. Some useful perturbation schemes are discussed. A simulation study is carried out to assess the effect of the measurement error on the estimation of the parameter of interest. This new approach is illustrated with a real data set.

9.
The EM algorithm is a popular method for computing maximum likelihood estimates. One of its drawbacks is that it does not produce standard errors as a by-product. We consider obtaining standard errors by numerical differentiation. Two approaches are considered. The first differentiates the Fisher score vector to yield the Hessian of the log-likelihood. The second differentiates the EM operator and uses an identity that relates its derivative to the Hessian of the log-likelihood. The well-known SEM algorithm uses the second approach. We consider three additional algorithms: one that uses the first approach and two that use the second. We evaluate the complexity and precision of these three algorithms and of the SEM algorithm in seven examples. The first is a single-parameter example used to give insight. The others are three examples in each of two areas of EM application: Poisson mixture models and the estimation of covariance from incomplete data. The examples show that there are algorithms that are much simpler and more accurate than the SEM algorithm. Hopefully their simplicity will increase the availability of standard error estimates in EM applications. It is shown that, as previously conjectured, a symmetry diagnostic can accurately estimate errors arising from numerical differentiation. Some issues related to the speed of the EM algorithm and of algorithms that differentiate the EM operator are identified.
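The first approach (numerically differentiating to obtain the Hessian, then inverting for standard errors) can be checked on a one-parameter toy model where the answer is known in closed form. The exponential model, sample size, and seed below are illustrative assumptions, not any of the paper's seven examples:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=400)        # hypothetical i.i.d. sample

def loglik(rate):
    # Exponential(rate) log-likelihood of the sample
    return len(x) * np.log(rate) - rate * np.sum(x)

rate_hat = 1.0 / np.mean(x)                     # closed-form MLE

# Hessian of the log-likelihood by a central second difference
h = 1e-5
hess = (loglik(rate_hat + h) - 2 * loglik(rate_hat) + loglik(rate_hat - h)) / h**2
se_numeric = np.sqrt(-1.0 / hess)

# Analytic standard error for comparison: l''(rate) = -n / rate**2
se_analytic = rate_hat / np.sqrt(len(x))
```

Here the numerical and analytic standard errors agree to several digits; in the EM setting the same differencing is applied to quantities produced by the E- and M-steps rather than to a closed-form log-likelihood.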

10.
The analysis of human perceptions is often carried out through surveys and questionnaires in which respondents are asked to express ratings about the objects being evaluated. A class of mixture models, called CUB (Combination of Uniform and shifted Binomial), has recently been proposed in this context. This article focuses on one model of this class, the Nonlinear CUB, and investigates some computational issues concerning parameter estimation, which is performed by maximum likelihood. More specifically, we consider two main approaches to optimizing the log-likelihood: classical numerical optimization methods and the EM algorithm. The classical numerical methods comprise the widely used Nelder–Mead, Newton–Raphson, Broyden–Fletcher–Goldfarb–Shanno (BFGS), Berndt–Hall–Hall–Hausman (BHHH), Simulated Annealing and Conjugate Gradients algorithms, which usually have the advantage of fast convergence. On the other hand, the EM algorithm deserves consideration for some optimality properties in the case of mixture models, but it is slower. This article has a twofold aim: first, we show how to obtain explicit formulas for implementing the EM algorithm in Nonlinear CUB models and formally derive the asymptotic variance–covariance matrix of the maximum likelihood estimator; second, we discuss and compare the performance of the two above-mentioned approaches to log-likelihood maximization.
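The "classical numerical methods" route amounts to handing the negative log-likelihood to a general-purpose optimizer. A minimal sketch, using a toy gamma likelihood rather than a CUB model (the data, seed, and log-parametrization are assumptions for illustration), compares a derivative-free method with a quasi-Newton one:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

rng = np.random.default_rng(2)
data = gamma.rvs(a=3.0, scale=1.5, size=300, random_state=rng)  # toy sample

def negloglik(logparams):
    # Log-parametrization keeps the search unconstrained.
    shape, scale = np.exp(logparams)
    return -np.sum(gamma.logpdf(data, a=shape, scale=scale))

x0 = np.zeros(2)
fit_nm = minimize(negloglik, x0, method="Nelder-Mead")   # derivative-free
fit_bfgs = minimize(negloglik, x0, method="BFGS")        # quasi-Newton
shape_hat = np.exp(fit_bfgs.x[0])
```

Both methods land on essentially the same maximum here; the trade-off the abstract discusses is that such optimizers converge quickly but, unlike EM, offer no monotone-ascent guarantee for mixture likelihoods.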

11.
For data from multivariate t distributions, it is hard to carry out influence analysis directly from the probability density function, since its expression is intractable. In this paper, we present a technique for influence analysis based on a mixture representation and the EM algorithm. In fact, the multivariate t distribution can be regarded as a particular Gaussian mixture obtained by introducing weights from the Gamma distribution. We treat the weights as missing data and develop influence analysis for data from multivariate t distributions based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm. Several case-deletion measures are proposed for detecting influential observations from multivariate t distributions. Two numerical examples illustrate the methodology.
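The Gaussian-scale-mixture view makes EM estimation concrete: the missing Gamma weights enter the E-step as expected weights, and the M-step is a weighted mean and scatter. The sketch below fits a bivariate t with the degrees of freedom held fixed (the data, seed, and fixed nu are assumptions; the paper's contribution is the influence analysis built on this Q-function, which is not reproduced here):

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)
p, n, nu = 2, 500, 5.0            # dimension, sample size, fixed dof

# Simulate multivariate t data as a Gaussian scale mixture:
# x = mu + z / sqrt(g), with g ~ Gamma(nu/2, rate = nu/2)
g = rng.gamma(nu / 2.0, 2.0 / nu, size=n)
z = rng.normal(size=(n, p))
X = z / np.sqrt(g)[:, None] + np.array([1.0, -1.0])

def t_loglik(X, mu, Sigma, nu):
    n, p = X.shape
    L = np.linalg.cholesky(Sigma)
    sol = np.linalg.solve(L, (X - mu).T)
    delta = np.sum(sol**2, axis=0)                 # Mahalanobis distances
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    c = (gammaln((nu + p) / 2) - gammaln(nu / 2)
         - 0.5 * p * np.log(nu * np.pi) - 0.5 * logdet)
    return np.sum(c - 0.5 * (nu + p) * np.log1p(delta / nu))

# EM with the Gamma weights treated as missing data (nu held fixed)
mu, Sigma = X.mean(axis=0), np.cov(X.T)
lls = []
for _ in range(50):
    diff = X - mu
    delta = np.sum(diff @ np.linalg.inv(Sigma) * diff, axis=1)
    w = (nu + p) / (nu + delta)                    # E-step: expected weights
    mu = (w[:, None] * X).sum(axis=0) / w.sum()    # M-step: weighted mean
    diff = X - mu
    Sigma = (w[:, None] * diff).T @ diff / n       # M-step: weighted scatter
    lls.append(t_loglik(X, mu, Sigma, nu))
```

The observed-data log-likelihood is nondecreasing across iterations, and the expected weights downweight outlying points, which is exactly what makes them a natural handle for case-deletion diagnostics.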

12.
A popular approach to estimation based on incomplete data is the EM algorithm. For categorical data, this paper presents a simple expression of the observed data log-likelihood and its derivatives in terms of the complete data for a broad class of models and missing data patterns. We show that using the observed data likelihood directly is easy and has some advantages. One can gain considerable computational speed over the EM algorithm and a straightforward variance estimator is obtained for the parameter estimates. The general formulation treats a wide range of missing data problems in a uniform way. Two examples are worked out in full.

13.
The established general results on convergence properties of the EM algorithm require the sequence of EM parameter estimates to fall in the interior of the parameter space over which the likelihood is being maximized. This paper presents convergence properties of the EM sequence of likelihood values and parameter estimates in constrained parameter spaces for which the sequence of EM parameter estimates may converge to the boundary of the constrained parameter space contained in the interior of the unconstrained parameter space. Examples of the behavior of the EM algorithm applied to such parameter spaces are presented.

14.
A new hidden Markov random field model is proposed for the analysis of cylindrical spatial series, i.e. bivariate spatial series of intensities and angles. It allows us to segment cylindrical spatial series according to a finite number of latent classes that represent the conditional distributions of the data under specific environmental conditions. The model parsimoniously accommodates circular–linear correlation, multimodality, skewness and spatial autocorrelation. A numerically tractable expectation–maximization algorithm is provided to compute parameter estimates by exploiting a mean-field approximation of the complete-data log-likelihood function. These methods are illustrated on a case study of marine currents in the Adriatic Sea.

15.
It is well known that the normal mixture with unequal variances has unbounded likelihood, so the corresponding global maximum likelihood estimator (MLE) is undefined. A common solution is to put a constraint on the parameter space so that the likelihood is bounded, and then run the EM algorithm on this constrained parameter space to find the constrained global MLE. However, choosing the constraint parameter is difficult, and in many cases different choices give different constrained global MLEs. In this article, we propose a profile log-likelihood method and a graphical way to find the maximum interior mode. Based on the proposed method, we can also see how the constraint parameter used in the constrained EM algorithm affects the constrained global MLE. Using two simulation examples and a real data application, we demonstrate the success of the new method in handling the unboundedness of the mixture likelihood and locating the maximum interior mode.
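The constrained-EM baseline the abstract refers to can be sketched for a two-component univariate normal mixture: the likelihood is unbounded as any sigma_k tends to 0 around a data point, so a floor c keeps it bounded. This is a minimal sketch under assumed data and a simple projection heuristic for the constrained M-step, not the paper's profile log-likelihood method:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 2, 150)])

def constrained_em(x, c=0.1, n_iter=200):
    """EM for a two-component normal mixture under sigma_k >= c,
    which keeps the likelihood bounded. Projecting the M-step
    standard deviations onto the floor is a simple heuristic for
    the constrained update."""
    pi = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: responsibilities
        dens = pi * norm.pdf(x[:, None], mu, sigma)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step, then enforce the variance constraint
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        sigma = np.maximum(sigma, c)
    return pi, mu, sigma

pi, mu, sigma = constrained_em(x)
```

Rerunning this with different values of c shows the sensitivity the abstract criticizes: when the floor binds, the constrained global MLE moves with c, whereas an interior mode does not.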

16.
Despite their popularity and importance, there is limited work on modelling data from complex survey designs using finite mixture models. In this work, we explore the use of finite mixture regression models when the samples are drawn under a complex survey design; in particular, we consider data collected under a stratified sampling design. We develop a new design-based inference in which sampling weights are integrated into the complete-data log-likelihood function, and the expectation–maximisation algorithm is developed accordingly. A simulation study compares the new methodology with the usual finite mixture regression model, using the bias–variance components of mean square error. An additional simulation study assesses the ability of the Bayesian information criterion to select the optimal number of components under the proposed modelling approach. The methodology is implemented on real data with good results.

17.
For multivariate normal data with non-monotone (i.e. arbitrary) missing data patterns, lattice conditional independence (LCI) models determined by the observed data patterns can be used to obtain closed-form MLEs (Andersson and Perlman, 1991, 1993). In this paper, three procedures (LCI models, the EM algorithm, and the complete-data method) are compared by means of a Monte Carlo experiment. When the LCI model is accepted by the LR test, the LCI estimate is more efficient than those based on the EM algorithm and the complete-data method. When the LCI model is not accepted, the LCI estimate may lose efficiency but may still be more efficient than the EM estimate if the observed data are sparse. When the LCI model appears too restrictive, it may be possible to obtain a less restrictive LCI model by discarding only a small portion of the incomplete observations. LCI models appear to be especially useful when the observed data are sparse, even in cases where the suitability of the LCI model is uncertain.

18.
Parameters of a finite mixture model are often estimated by the expectation–maximization (EM) algorithm, which maximizes the observed-data log-likelihood function. This paper proposes an alternative approach for fitting finite mixture models. Our method, called iterative Monte Carlo classification (IMCC), is also an iterative fitting procedure. Within each iteration, it first estimates the membership probabilities for each data point, namely the conditional probability that the data point belongs to a particular mixing component given its observed value. It then classifies each data point into a component distribution using the estimated conditional probabilities and the Monte Carlo method, and finally updates the parameters of each component distribution based on the classified data. Simulation studies compare IMCC with other algorithms for fitting mixtures of normal, and mixtures of t, densities.
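One possible reading of the three-step iteration described above, sketched for a two-component univariate normal mixture (the data, seed, initialization, and function name are assumptions; the paper's general IMCC procedure may differ in detail):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

def imcc_two_normals(x, n_iter=100, rng=rng):
    """Sketch of a Monte Carlo classification iteration: estimate
    membership probabilities, draw a hard classification from them
    (the Monte Carlo step), then refit each component from its
    classified points."""
    pi = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    sigma = np.array([1.0, 1.0])
    for _ in range(n_iter):
        # Step 1: membership probabilities given current parameters
        dens = pi * norm.pdf(x[:, None], mu, sigma)
        prob = dens / dens.sum(axis=1, keepdims=True)
        # Step 2: Monte Carlo classification, sampling a label per point
        labels = (rng.random(len(x)) > prob[:, 0]).astype(int)
        # Step 3: update each component from its classified data
        for k in (0, 1):
            xk = x[labels == k]
            if len(xk) > 1:
                pi[k] = len(xk) / len(x)
                mu[k] = xk.mean()
                sigma[k] = max(xk.std(), 1e-3)
        pi = pi / pi.sum()
    return pi, mu, sigma

pi, mu, sigma = imcc_two_normals(x)
```

Unlike EM's soft responsibilities, the sampled hard labels make each update a complete-data fit, at the price of Monte Carlo jitter in the parameter sequence.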

19.
In this paper the class of Bilinear GARCH (BL-GARCH) models is proposed. BL-GARCH models capture asymmetries in the conditional variance of financial and economic time series by means of interactions between past shocks and volatilities. The availability of likelihood-based inference is an attractive feature of BL-GARCH models. Under the assumption of conditional normality, the log-likelihood function can be maximized by an EM-type algorithm. The main reason for using the EM algorithm is that it yields parameter estimates which naturally guarantee the positive definiteness of the conditional variance, with no need for additional parameter constraints. We also derive a robust LM test statistic which can be used for model identification. Finally, the effectiveness of BL-GARCH models in capturing asymmetric volatility patterns in financial time series is assessed through an application to a time series of daily returns on the NASDAQ Composite stock market index.

20.
To obtain maximum likelihood (ML) estimates in factor analysis (FA), we propose in this paper a novel and fast conditional maximization (CM) algorithm with quadratic and monotone convergence, consisting of a sequence of CM log-likelihood (CML) steps. The main contribution of this algorithm is that a closed-form expression for the parameter to be updated in each step can be obtained explicitly, without resorting to any numerical optimization methods. In addition, a new ECME algorithm, similar to that of Liu (Biometrika 81, 633–648, 1994), is obtained as a by-product; it turns out to be very close to the simple iteration algorithm proposed by Lawley (Proc. R. Soc. Edinb. 60, 64–82, 1940), but ours is guaranteed to increase the log-likelihood at every iteration and hence to converge. Both algorithms inherit the simplicity and stability of EM, but their convergence behaviors are quite different, as revealed in our extensive simulations: (1) in most situations, ECME and EM perform similarly; (2) CM outperforms EM and ECME substantially in all situations, whether assessed by CPU time or by the number of iterations. In particular, for cases close to the well-known Heywood case, it accelerates EM by factors of around 100 or more. CM is also much less sensitive to the choice of starting values than EM and ECME.
