Similar Documents
20 similar documents retrieved (search time: 420 ms)
1.
This article considers a discrete-time Markov chain for modeling transition probabilities when multiple successive observations are missing at random between two observed outcomes, using three methods: a naïve analog of complete-case analysis using the observed one-step transitions alone, a non-data-augmentation method (NL) that solves nonlinear equations, and a data-augmentation method, the Expectation-Maximization (EM) algorithm. The explicit form of the conditional log-likelihood given the observed information, as required by the E step, is provided, and the iterative formula in the M step is expressed in closed form. An empirical study was performed to examine the accuracy and precision of the estimates obtained by the three methods under the ignorable missing mechanisms of missing completely at random and missing at random. A dataset from the mental health arena was used for illustration. It was found that both the data-augmentation and non-augmentation methods provide accurate and precise point estimation, whereas the naïve method resulted in estimates of the transition probabilities with similar bias but larger MSE. The NL method and the EM algorithm in general provide similar results, but the latter provides conditional expected row margins, leading to smaller standard errors.
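A minimal sketch of the EM structure described above, under a simplifying assumption: each gap contains a single missing observation, so the data reduce to one-step transition counts n1 and two-step counts n2 (the paper's estimator handles longer runs of missing values). All names are illustrative.

```python
import numpy as np

def em_transition_probs(n1, n2, n_iter=200, tol=1e-10):
    """EM estimate of a transition matrix from one-step counts n1 and
    two-step counts n2 (one intermediate observation missing)."""
    S = n1.shape[0]
    # start from the naive complete-case estimate (one-step counts only),
    # with +1 smoothing so every row is a valid distribution
    P = (n1 + 1.0) / (n1 + 1.0).sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: allocate each observed two-step i -> j transition to the
        # possible intermediate states k in proportion to P[i, k] * P[k, j]
        counts = n1.astype(float).copy()
        for i in range(S):
            for j in range(S):
                w = P[i, :] * P[:, j]          # path i -> k -> j
                w = w / w.sum()                # posterior of the hidden k
                counts[i, :] += n2[i, j] * w   # expected i -> k transitions
                counts[:, j] += n2[i, j] * w   # expected k -> j transitions
        # M-step (closed form, as in the article): row-normalize the
        # expected transition counts
        P_new = counts / counts.sum(axis=1, keepdims=True)
        if np.max(np.abs(P_new - P)) < tol:
            return P_new
        P = P_new
    return P
```

The naïve complete-case analog corresponds to stopping at the initialization, which uses the one-step counts alone; the M-step is in closed form, as in the article, since the expected transition counts are simply row-normalized.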

2.
The complete-data model that underlies an Expectation-Maximization (EM) algorithm must have a parameter space that coincides with the parameter space of the observed-data model. Otherwise, maximization of the observed-data log-likelihood will be carried out over a space that does not coincide with the desired parameter space. In some contexts, however, a natural complete-data model may be defined only for parameter values within a subset of the observed-data parameter space. In this paper we show that such a complete-data model can still be useful if it can be viewed as a member of a finite family of complete-data models whose parameter spaces collectively cover the observed-data parameter space. Such a family of complete-data models defines a family of EM algorithms which together lead to a finite collection of constrained maxima of the observed-data log-likelihood. Maximization of the log-likelihood function over the full parameter space then involves identifying the constrained maximum that achieves the greatest log-likelihood value. Since optimization over a finite collection of candidates is referred to as combinatorial optimization, we refer to such a family of EM algorithms as a combinatorial EM (CEM) algorithm. As well as discussing the theoretical concepts behind CEM algorithms, we discuss strategies for improving the computational efficiency when the number of complete-data models is large. Various applications of CEM algorithms are also discussed, ranging from simple examples that illustrate the concepts, to more substantive examples that demonstrate the usefulness of CEM algorithms in practice.
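A minimal sketch of the combinatorial selection step, assuming each complete-data model in the finite family exposes its own EM routine that returns a constrained maximizer together with its observed-data log-likelihood; the interface is hypothetical.

```python
def combinatorial_em(em_routines, data):
    """Run one EM per complete-data model; keep the best constrained maximum."""
    candidates = []
    for run_em in em_routines:             # each routine covers one constrained
        theta_hat, loglik = run_em(data)   # subset of the parameter space
        candidates.append((loglik, theta_hat))
    # the combinatorial step: the constrained maximum with the greatest
    # observed-data log-likelihood is the overall ML estimate
    best_loglik, best_theta = max(candidates, key=lambda c: c[0])
    return best_theta, best_loglik
```

The loop over models is where any strategy for handling a large family, such as those the paper discusses for improving computational efficiency, would plug in.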

3.
A developmental trajectory describes the course of behavior over time. Identifying multiple trajectories within an overall developmental process permits a focus on subgroups of particular interest. We introduce a framework for identifying trajectories by using the Expectation-Maximization (EM) algorithm to fit semiparametric mixtures of logistic distributions to longitudinal binary data. For performance comparison, we consider full maximization algorithms (PROC TRAJ in SAS), standard EM, and two other EM-based algorithms for speeding up convergence. Simulation shows that the EM methods produce more accurate parameter estimates. The EM methodology is illustrated with a longitudinal dataset on adolescents' smoking behavior.
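A minimal sketch of the E-step for such a trajectory mixture, assuming each group's success probability follows a logistic curve in polynomial functions of time; the parameterization and all names are illustrative.

```python
import numpy as np
from scipy.special import expit

def e_step(Y, times, betas, pis):
    """Posterior group memberships for a mixture of logistic trajectories.

    Y     : (n_subjects, n_times) binary outcomes
    times : (n_times,) measurement occasions
    betas : (n_groups, degree + 1) polynomial coefficients per group
    pis   : (n_groups,) mixing proportions
    """
    T = np.vander(times, betas.shape[1], increasing=True)   # 1, t, t^2, ...
    log_post = np.empty((len(pis), Y.shape[0]))
    for g, (beta, pi) in enumerate(zip(betas, pis)):
        p = np.clip(expit(T @ beta), 1e-12, 1 - 1e-12)      # group trajectory
        loglik = Y @ np.log(p) + (1 - Y) @ np.log1p(-p)     # per-subject loglik
        log_post[g] = np.log(pi) + loglik
    post = np.exp(log_post - log_post.max(axis=0))          # stabilized
    return post / post.sum(axis=0)
```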

4.
A hierarchical logit-normal model for the analysis of binary data with extra-binomial variation is examined. A method of approximate maximum likelihood estimation of the parameters is proposed. The method uses the EM algorithm, and approximations that facilitate its implementation are derived. Approximate standard errors of the estimates are provided, and a numerical example is used to illustrate the method.

5.
We discuss here two examples of estimation by numerical maximization of penalized likelihood. We show that, in these examples, it is simpler not to use the EM algorithm for computing the estimates or their standard errors. We also discuss confidence and credibility intervals based on the penalized likelihood and an approximate chi-squared distribution, and compare such intervals with Wald-type intervals.

[Received July 2014. Revised September 2015.]
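A minimal sketch of the direct approach the authors advocate, for a toy penalized likelihood (normal mean with unit variance and a ridge penalty): the penalized log-likelihood is maximized with a general-purpose optimizer rather than EM, and the inverse-Hessian approximation BFGS accumulates is reused for a Wald-type standard error. Model, penalty, and names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_penalized(y, lam=1.0):
    """Maximize a toy penalized log-likelihood numerically (no EM)."""
    def neg_pen_loglik(theta):
        mu = theta[0]
        # normal log-likelihood with unit variance, plus a ridge penalty
        return 0.5 * np.sum((y - mu) ** 2) + lam * mu ** 2
    res = minimize(neg_pen_loglik, np.zeros(1), method="BFGS")
    # Wald-type standard error from the inverse Hessian that BFGS builds up
    se = float(np.sqrt(res.hess_inv[0, 0]))
    return res.x[0], se
```

A Wald-type 95% interval is then mu_hat ± 1.96·se, the kind of interval the article compares with those based on the penalized likelihood and an approximate chi-squared distribution.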

6.
This paper presents an EM algorithm for maximum likelihood estimation in generalized linear models with overdispersion. The algorithm is initially derived as a form of Gaussian quadrature assuming a normal mixing distribution, but with only slight variation it can be used for a completely unknown mixing distribution, giving a straightforward method for the fully non-parametric ML estimation of this distribution. This is of value because the ML estimates of the GLM parameters may be sensitive to the specification of a parametric form for the mixing distribution. A listing of a GLIM4 algorithm for fitting the overdispersed binomial logit model is given in an appendix. A simple method is given for obtaining correct standard errors for parameter estimates when using the EM algorithm. Several examples are discussed.
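The sketch below illustrates the quadrature at the heart of this approach: the marginal log-likelihood of an overdispersed binomial logit model with a normal random effect, approximated by Gauss-Hermite quadrature. It shows the integral being handled, not the paper's GLIM4 listing; beta, sigma, X, y, and n are illustrative.

```python
import numpy as np
from scipy.special import expit, gammaln

def marginal_loglik(beta, sigma, X, y, n, n_points=20):
    """Gauss-Hermite approximation to the overdispersed binomial-logit
    log-likelihood with a N(0, sigma^2) random effect on the logit scale."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_points)
    z = np.sqrt(2.0) * nodes          # change of variables for N(0, 1)
    w = weights / np.sqrt(np.pi)
    eta = X @ beta                    # fixed-effect linear predictor
    ll = 0.0
    for eta_i, y_i, n_i in zip(eta, y, n):
        p = np.clip(expit(eta_i + sigma * z), 1e-12, 1 - 1e-12)
        log_binom = (gammaln(n_i + 1) - gammaln(y_i + 1)
                     - gammaln(n_i - y_i + 1)
                     + y_i * np.log(p) + (n_i - y_i) * np.log1p(-p))
        ll += np.log(np.sum(w * np.exp(log_binom)))   # quadrature sum
    return ll
```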

7.
Latent variable models are widely used for the joint modeling of mixed data, including nominal, ordinal, count, and continuous data. In this paper, we consider a latent variable model for jointly modeling the relationships between mixed binary, count, and continuous variables with some observed covariates. We assume that, given a latent variable, the mixed variables of interest are independent, and that the count and continuous variables have Poisson and normal distributions, respectively. As such data may be extracted from different subpopulations, unobserved heterogeneity has to be taken into account, so a mixture distribution is considered for the latent variable to account for this heterogeneity. A generalized EM algorithm, which uses the Newton–Raphson algorithm inside the EM algorithm, is used to compute the maximum likelihood estimates of the parameters. The standard errors of the maximum likelihood estimates are computed using the supplemented EM algorithm. An analysis of the primary biliary cirrhosis data is presented as an application of the proposed model.

8.
This paper presents a unified method of influence analysis to deal with random effects appearing in additive nonlinear regression models for repeated-measurement data. The basic idea is to apply the Q-function, the conditional expectation of the complete-data log-likelihood function obtained from the EM algorithm, instead of the observed-data log-likelihood function used in standard influence analysis. Diagnostic measures are derived based on the case-deletion approach and the local influence approach. Two real examples and a simulation study are examined to illustrate our methodology.

9.
Recently, the study of the lifetime of systems in reliability and survival analysis in the presence of several causes of failure (competing risks) has attracted attention in the literature. In this paper, series and parallel systems with exponential lifetimes for the items of the system are considered. Several causes of failure independently affect the lifetime distributions, and failure times of the systems are observed under a progressive Type-II censoring scheme. For series systems, the maximum likelihood estimates of the parameters are computed, and confidence intervals for the model parameters are obtained using the Fisher information matrix. For parallel systems, a generalized EM algorithm, which uses the Newton-Raphson algorithm inside the EM algorithm, is used to compute the maximum likelihood estimates of the parameters. Also, the standard errors of the maximum likelihood estimates are computed using the supplemented EM algorithm. A simulation study confirms the good performance of the introduced approach.

10.
The analysis of human perceptions is often carried out by resorting to surveys and questionnaires, where respondents are asked to express ratings about the objects being evaluated. A class of mixture models, called CUB (Combination of Uniform and shifted Binomial), has recently been proposed in this context. This article focuses on a model of this class, the Nonlinear CUB, and investigates some computational issues concerning parameter estimation, which is performed by Maximum Likelihood. More specifically, we consider two main approaches to optimizing the log-likelihood: classical numerical methods of optimization and the EM algorithm. The classical numerical methods comprise the widely used Nelder–Mead, Newton–Raphson, Broyden–Fletcher–Goldfarb–Shanno (BFGS), Berndt–Hall–Hall–Hausman (BHHH), Simulated Annealing, and Conjugate Gradients algorithms, and usually have the advantage of fast convergence. On the other hand, the EM algorithm deserves consideration for some optimality properties in the case of mixture models, but it is slower. This article has a twofold aim: first, we show how to obtain explicit formulas for the implementation of the EM algorithm in Nonlinear CUB models and formally derive the asymptotic variance–covariance matrix of the Maximum Likelihood estimator; second, we discuss and compare the performance of the two above-mentioned approaches to log-likelihood maximization.

11.
In this paper, we consider two well-known parametric long-term survival models, namely, the Bernoulli cure rate model and the promotion time (or Poisson) cure rate model. Assuming the long-term survival probability to depend on a set of risk factors, the main contribution is the development of the stochastic expectation maximization (SEM) algorithm to determine the maximum likelihood estimates of the model parameters. We carry out a detailed simulation study to demonstrate the performance of the proposed SEM algorithm. For this purpose, we assume the lifetimes due to each competing cause to follow a two-parameter generalized exponential distribution. We also compare the results obtained from the SEM algorithm with those obtained from the well-known expectation maximization (EM) algorithm. Furthermore, we investigate a simplified estimation procedure for both the SEM and EM algorithms that allows the objective function being maximized to be split into simpler, lower-dimensional functions of the model parameters. Moreover, we present examples where the EM algorithm fails to converge but the SEM algorithm still works. For illustrative purposes, we analyze a breast cancer survival dataset. Finally, we use a graphical method to assess the goodness of fit of the model with generalized exponential lifetimes.

12.
In most applications, the parameters of a mixture of linear regression models are estimated by maximum likelihood using the expectation maximization (EM) algorithm. In this article, we compare three algorithms for computing the maximum likelihood estimates of the parameters of these models: the EM algorithm, the classification EM algorithm, and the stochastic EM algorithm. The three procedures are compared through a simulation study of their performance (computational effort, statistical properties of the estimators, and goodness of fit) on simulated data sets.

Simulation results show that the choice of the approach depends essentially on the configuration of the true regression lines and the initialization of the algorithms.
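A minimal sketch of where the three algorithms differ, for a two-component mixture of simple linear regressions with a common noise scale (an illustrative simplification): the standard E-step produces soft membership probabilities, which classification EM hardens to the modal component and stochastic EM replaces with a random draw.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def responsibilities(x, y, params):
    """Standard E-step: posterior membership probabilities per point."""
    pis, betas, sigma = params        # mixing weights, (b0, b1) per component
    dens = np.stack([pi * norm.pdf(y, b0 + b1 * x, sigma)
                     for pi, (b0, b1) in zip(pis, betas)])
    return dens / dens.sum(axis=0)

def memberships(tau, variant):
    """EM keeps tau; CEM hard-assigns; SEM draws memberships at random."""
    if variant == "EM":
        return tau
    one_hot = np.zeros_like(tau)
    if variant == "CEM":              # classification step: modal component
        idx = tau.argmax(axis=0)
    else:                             # "SEM": stochastic step
        idx = np.array([rng.choice(tau.shape[0], p=tau[:, i])
                        for i in range(tau.shape[1])])
    one_hot[idx, np.arange(tau.shape[1])] = 1.0
    return one_hot
```

Whichever variant is used, the M-step is a weighted least-squares fit per component with these memberships as weights; under CEM and SEM the weights are 0/1, so it reduces to separate ordinary least-squares fits on the classified points.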

13.
The parameters of a finite mixture model are often estimated by the expectation–maximization (EM) algorithm, which maximizes the observed-data log-likelihood function. This paper proposes an alternative approach for fitting finite mixture models. Our method, called iterative Monte Carlo classification (IMCC), is also an iterative fitting procedure. Within each iteration, it first estimates the membership probabilities for each data point, namely the conditional probability that a data point belongs to a particular mixing component given its observed value; it then classifies each data point into a component distribution using the estimated conditional probabilities and the Monte Carlo method. It finally updates the parameters of each component distribution based on the classified data. Simulation studies were conducted to compare IMCC with some other algorithms for fitting mixtures of normal and mixtures of t densities.

14.
This paper proposes a method for estimating the parameters in a generalized linear model with missing covariates. The missing covariates are assumed to come from a continuous distribution and to be missing at random. In particular, Gaussian quadrature methods are used in the E-step of the EM algorithm, leading to an approximate EM algorithm. The parameters are then estimated using the weighted EM procedure given in Ibrahim (1990). This approximate EM procedure leads to approximate maximum likelihood estimates, whose standard errors and asymptotic properties are given. The proposed procedure is illustrated on a data set.

15.
This paper introduces practical methods of parameter and standard error estimation for adaptive robust regression where errors are assumed to be from a normal/independent family of distributions. In particular, generalized EM algorithms (GEM) are considered for the two cases of the t and slash families of distributions. For the t family, a one-step method is proposed to estimate the degrees-of-freedom parameter. Use of empirical information is suggested for standard error estimation. It is shown that this choice leads to standard errors that can be obtained as a by-product of the GEM algorithm. The proposed methods, as discussed, can be implemented in most available nonlinear regression programs. Details of implementation in SAS NLIN are given using two specific examples.
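For the t family with the degrees of freedom held fixed (the paper additionally proposes a one-step method for estimating that parameter), the GEM iteration reduces to iteratively reweighted least squares. A minimal sketch for the linear-regression case follows; all names are illustrative.

```python
import numpy as np

def t_regression_gem(X, y, nu=4.0, n_iter=100, tol=1e-8):
    """EM for linear regression with t errors = iteratively reweighted LS."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS starting values
    sigma2 = np.mean((y - X @ beta) ** 2)
    for _ in range(n_iter):
        r2 = (y - X @ beta) ** 2 / sigma2              # squared std. residuals
        w = (nu + 1.0) / (nu + r2)                     # E-step: expected precisions
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        sigma2 = np.mean(w * (y - X @ beta_new) ** 2)  # M-step scale update
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new, sigma2
        beta = beta_new
    return beta, sigma2
```

The E-step weights (nu + 1)/(nu + r²) downweight observations with large standardized residuals, which is what makes the fit robust; the slash family yields the same scheme with a different weight formula.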

16.
The development of models and methods for cure rate estimation has recently burgeoned into an important subfield of survival analysis. Much of the literature focuses on the standard mixture model. Recently, process-based models have been suggested. We focus on several models based on first passage times for Wiener processes. Whitmore and others have studied these models in a variety of contexts. Lee and Whitmore (Stat Sci 21(4):501–513, 2006) give a comprehensive review of a variety of first hitting time models and briefly discuss their potential as cure rate models. In this paper, we study the Wiener process with negative drift as a possible cure rate model, but the resulting defective inverse Gaussian model is found to provide a poor fit in some cases. Several possible modifications are then suggested that improve on the defective inverse Gaussian model. These modifications include: the inverse Gaussian cure rate mixture model; a mixture of two inverse Gaussian models; incorporation of heterogeneity in the drift parameter; and the addition of a second absorbing barrier to the Wiener process, representing an immunity threshold. This class of process-based models is a useful alternative to the standard model and provides an improved fit compared to the standard model when applied to many of the datasets that we have studied. Implementation of this class of models is facilitated by expectation-maximization (EM) algorithms and variants thereof, including the gradient EM algorithm. Parameter estimates for each of these EM algorithms are given, and the proposed models are applied to both real and simulated data, where they perform well.

17.
The family of power series cure rate models provides a flexible modeling framework for survival data from populations with a cure fraction. In this work, we present a simplified estimation procedure for the maximum likelihood (ML) approach. ML estimates are obtained via the expectation-maximization (EM) algorithm, where the expectation step involves computing the expected number of concurrent causes for each individual. A major advantage is that the maximization step can be decomposed into separate maximizations of two lower-dimensional functions of the regression and survival distribution parameters, respectively. Two simulation studies are performed: the first investigates the accuracy of the estimation procedure for different numbers of covariates, and the second compares our proposal with direct maximization of the observed log-likelihood function. Finally, we illustrate the technique for parameter estimation on a dataset of survival times for patients with malignant melanoma.

18.
In many studies, the data collected are subject to upper and lower detection limits, so the responses are either left- or right-censored. A complication arises when these continuous measures exhibit heavy tails and asymmetry simultaneously. For such data structures, we propose a robust censored linear model based on the scale mixtures of skew-normal (SMSN) distributions. The SMSN is an attractive class of asymmetrical heavy-tailed densities that includes the skew-normal, skew-t, skew-slash, and skew-contaminated normal distributions and the entire family of scale mixtures of normal (SMN) distributions as special cases. We propose a fast estimation procedure to obtain the maximum likelihood (ML) estimates of the parameters, using a stochastic approximation of the EM (SAEM) algorithm. This approach allows us to estimate the parameters of interest easily and quickly, obtaining as by-products the standard errors, predictions of unobservable values of the response, and the log-likelihood function. The proposed methods are illustrated through real data applications and several simulation studies.

19.
A popular approach to estimation based on incomplete data is the EM algorithm. For categorical data, this paper presents a simple expression for the observed-data log-likelihood and its derivatives in terms of the complete data, for a broad class of models and missing-data patterns. We show that using the observed-data likelihood directly is easy and has some advantages. One can gain considerable computational speed over the EM algorithm, and a straightforward variance estimator is obtained for the parameter estimates. The general formulation treats a wide range of missing-data problems in a uniform way. Two examples are worked out in full.
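A minimal sketch of the direct approach for a simple categorical pattern: a 2×2 table in which some units are observed on only one of the two variables (missing at random assumed). The observed-data log-likelihood is written down directly and maximized numerically; the softmax parameterization and all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_direct(n_full, n_row, n_col):
    """Directly maximize the observed-data log-likelihood of a 2x2 table
    with supplemental row-only and column-only counts (MAR assumed)."""
    def negloglik(theta):
        # softmax keeps the four cell probabilities on the simplex
        p = np.exp(theta - theta.max())
        p = (p / p.sum()).reshape(2, 2)
        ll = np.sum(n_full * np.log(p))               # fully observed units
        ll += np.sum(n_row * np.log(p.sum(axis=1)))   # row variable only
        ll += np.sum(n_col * np.log(p.sum(axis=0)))   # column variable only
        return -ll
    res = minimize(negloglik, np.zeros(4), method="BFGS")
    p = np.exp(res.x - res.x.max())
    return (p / p.sum()).reshape(2, 2)
```

Because the observed-data log-likelihood is available in closed form, a variance estimator for the cell probabilities can be obtained from the observed information, which is the straightforward variance estimator referred to above.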

20.
This article considers inference for the log-normal distribution based on progressive Type-I interval-censored data, using both frequentist and Bayesian methods. First, the maximum likelihood estimates (MLEs) of the unknown model parameters are computed by the expectation-maximization (EM) algorithm. The asymptotic standard errors (ASEs) of the MLEs are obtained by applying the missing information principle. Next, the Bayes estimates of the model parameters are obtained by the Gibbs sampling method under both symmetric and asymmetric loss functions. The Gibbs sampling scheme is facilitated by adopting a data augmentation scheme similar to that in the EM algorithm. The performance of the MLEs and the various Bayesian point estimates is judged via a simulation study. A real dataset is analyzed for the purpose of illustration.
