On the Convergence of the Monte Carlo Maximum Likelihood Method for Latent Variable Models

Authors: Olivier Cappé, Randal Douc, Éric Moulines & Christian Robert

Affiliation: École Nationale Supérieure des Télécommunications; Université de Paris Dauphine

Abstract: While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous works on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when this reference value is fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, it is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.

Keywords: maximum likelihood estimation; Monte Carlo maximum likelihood; simulated likelihood ratio; stochastic approximation; stochastic optimization
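The importance-sampling identity behind MCML described in the abstract can be illustrated on a toy model. The sketch below is my own minimal example, not taken from the paper: a Gaussian random-effects model z ~ N(0, 1), y | z ~ N(theta + z, 1), whose marginal likelihood is tractable (y ~ N(theta, 2), so the exact MLE is the sample mean), which lets us check the MCML approximation against the known answer. The reference parameter is taken to be a consistent estimate of theta, the favourable case discussed in the abstract.

```python
# Sketch of the MCML idea (Geyer & Thompson, 1992) on a toy latent
# variable model: z ~ N(0, 1), y | z ~ N(theta + z, 1), so that
# marginally y ~ N(theta, 2) and the exact MLE is the sample mean.
import numpy as np

rng = np.random.default_rng(0)

theta_true = 1.5
n = 200   # number of independent observations
m = 1000  # number of simulations per observation

# Observed data from the marginal model y ~ N(theta_true, 2).
y = theta_true + rng.normal(size=n) + rng.normal(size=n)
ybar = y.mean()  # exact maximum likelihood estimate

# Reference value theta0: here a consistent estimate (the sample mean),
# the favourable case discussed in the abstract.
theta0 = ybar

# Simulate latent variables from the complete-data model at theta0.
# For this Gaussian model, z | y ~ N((y - theta0) / 2, 1/2).
z = (y[:, None] - theta0) / 2 + rng.normal(size=(n, m)) * np.sqrt(0.5)

def mc_log_lik_ratio(theta):
    """Monte Carlo estimate of log L(theta) - log L(theta0).

    For each observation, the importance ratios
    p_theta(y, z) / p_theta0(y, z) are averaged over the simulated z's.
    """
    # log ratio = -0.5 * [(y - theta - z)^2 - (y - theta0 - z)^2]
    log_r = -0.5 * ((y[:, None] - theta - z) ** 2
                    - (y[:, None] - theta0 - z) ** 2)
    # log of the per-observation Monte Carlo average, summed over the data
    return np.sum(np.log(np.exp(log_r).mean(axis=1)))

# Maximize the simulated likelihood ratio on a grid around theta0.
grid = np.linspace(theta0 - 1.0, theta0 + 1.0, 401)
theta_hat = grid[np.argmax([mc_log_lik_ratio(t) for t in grid])]

print(theta_hat, ybar)  # the MCML estimate should be close to the MLE
```

With a consistent reference value and a moderate number of simulations per observation, the maximizer of the simulated likelihood ratio lands close to the exact MLE; the paper's results concern how fast m must grow with n for this to hold asymptotically.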