
JACIII Vol.13 No.5 pp. 537-541
doi: 10.20965/jaciii.2009.p0537
(2009)

Paper:

Bias of Standard Errors in Latent Class Model Applications Using Newton-Raphson and EM Algorithms

Liberato Camilleri

Department of Statistics and Operations Research, University of Malta
Msida (MSD 06), Malta

Received: September 19, 2008
Accepted: February 21, 2009
Published: September 20, 2009
Keywords: EM algorithm, numerical differentiation, proportional odds model, maximum likelihood estimation, latent class model
Abstract
The EM algorithm is a popular method for computing maximum likelihood estimates. It tends to be numerically stable, reduces execution time relative to other estimation procedures, and is easy to implement in latent class models. However, the EM algorithm fails to provide a consistent estimator of the standard errors of maximum likelihood estimates in incomplete-data applications. Correct standard errors can be obtained by numerical differentiation; the technique requires computation of a complete-data gradient vector and Hessian matrix, but not of their incomplete-data counterparts. Obtaining first and second derivatives numerically is computationally intensive, and execution time can become prohibitive when latent class models are fitted with a Newton-type algorithm. In such cases the EM solution can be used to initialize the Newton-Raphson algorithm. We also investigate the effect on execution time of following the converged EM algorithm with a final Newton-Raphson step. In this paper we compare the standard errors provided by the EM and Newton-Raphson algorithms for two models and analyze how the bias of the EM-based standard errors is affected by the number of parameters in the fitted model.
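
As a concrete illustration of the hybrid strategy the abstract describes, the sketch below fits a deliberately simplified model: a two-class latent class (Bernoulli mixture) model on simulated binary items, not the proportional odds latent class models analysed in the paper. EM estimates come from closed-form E- and M-steps; standard errors are then obtained from a central-difference numerical Hessian of the observed-data log-likelihood, and the same derivatives drive a final Newton-Raphson step. All names, the step size h, and the simulated data are illustrative assumptions, not code from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated data: n respondents answering J binary items, generated
    # from two latent classes (purely illustrative).
    n, J = 500, 4
    true_pi = 0.4                                  # class-1 mixing proportion
    true_p = np.array([[0.8, 0.7, 0.9, 0.6],       # item probabilities, class 1
                       [0.2, 0.3, 0.1, 0.4]])      # item probabilities, class 2
    z = rng.random(n) < true_pi
    Y = (rng.random((n, J)) < true_p[np.where(z, 0, 1)]).astype(float)

    def class_densities(pi, p):
        f1 = np.prod(p[0] ** Y * (1 - p[0]) ** (1 - Y), axis=1)
        f2 = np.prod(p[1] ** Y * (1 - p[1]) ** (1 - Y), axis=1)
        return f1, f2

    def loglik(theta):
        """Observed-data (incomplete-data) log-likelihood, logit scale."""
        pi = 1 / (1 + np.exp(-theta[0]))
        p = 1 / (1 + np.exp(-theta[1:].reshape(2, J)))
        f1, f2 = class_densities(pi, p)
        return np.sum(np.log(pi * f1 + (1 - pi) * f2))

    # EM: closed-form E- and M-steps for the two-class Bernoulli mixture.
    pi, p = 0.5, np.array([[0.6] * J, [0.4] * J])
    for _ in range(500):
        f1, f2 = class_densities(pi, p)
        w = pi * f1 / (pi * f1 + (1 - pi) * f2)    # E-step: posterior class probs
        pi = w.mean()                              # M-step: weighted updates
        p[0] = w @ Y / w.sum()
        p[1] = (1 - w) @ Y / (1 - w).sum()

    # Standard errors from a central-difference Hessian of the observed-data
    # log-likelihood at the EM solution (logit scale; h is a tuning choice).
    theta = np.concatenate(([np.log(pi / (1 - pi))], np.log(p / (1 - p)).ravel()))
    d, h = theta.size, 1e-4
    E = np.eye(d) * h
    H = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            H[i, j] = (loglik(theta + E[i] + E[j]) - loglik(theta + E[i] - E[j])
                       - loglik(theta - E[i] + E[j]) + loglik(theta - E[i] - E[j])) / (4 * h ** 2)
    se = np.sqrt(np.diag(np.linalg.inv(-H)))       # observed information -> SEs

    # Final Newton-Raphson step after EM convergence, reusing the numerical
    # derivatives: theta <- theta - H^{-1} g.
    g = np.array([(loglik(theta + E[i]) - loglik(theta - E[i])) / (2 * h)
                  for i in range(d)])
    theta_nr = theta - np.linalg.solve(H, g)

    print("EM estimates (logit scale):", np.round(theta, 3))
    print("numerical SEs:             ", np.round(se, 3))
    print("after one NR step:         ", np.round(theta_nr, 3))

Even at this toy scale, the d-squared log-likelihood evaluations behind the numerical Hessian dominate the cost, which is the motivation the abstract gives for starting Newton-Raphson from the EM solution rather than from arbitrary values.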
Cite this article as:
L. Camilleri, “Bias of Standard Errors in Latent Class Model Applications Using Newton-Raphson and EM Algorithms,” J. Adv. Comput. Intell. Intell. Inform., Vol.13 No.5, pp. 537-541, 2009.
