
JACIII Vol.16 No.2 pp. 341-348
doi: 10.20965/jaciii.2012.p0341
(2012)

Paper:

Robust Facial Expression Recognition Using Near Infrared Cameras

Laszlo A. Jeni*, Hideki Hashimoto**,
and Takashi Kubota*

*Department of Electrical Engineering, The University of Tokyo, ISAS Campus, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252-5210, Japan

**Department of Electrical, Electronics and Communication Engineering, Chuo University, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan

Received:
September 15, 2011
Accepted:
November 15, 2011
Published:
March 20, 2012
Keywords:
emotion recognition, 3D face tracking, near infrared camera, constrained local models
Abstract:
In human-human communication we use verbal, vocal, and non-verbal signals to communicate with others. Facial expressions are a form of non-verbal communication, and recognizing them helps to improve human-machine interaction. This paper proposes a system for pose- and illumination-invariant recognition of facial expressions using near-infrared camera images and precise 3D shape registration. Precise 3D shape information of the human face can be computed by means of Constrained Local Models (CLM), which fit a dense model to an unseen image in an iterative manner. We used a multi-class SVM to classify the acquired 3D shape into different emotion categories. Recognition accuracy surpassed human performance and was invariant to head pose. Varying lighting conditions can influence the fitting process and reduce recognition precision. We built a near-infrared and visible-light camera array to test the method under different illuminations. Results show that the near-infrared camera configuration is suitable for robust and reliable facial expression recognition under changing lighting conditions.
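As a rough illustration of the classification stage described in the abstract, the sketch below (in Python with scikit-learn, which are assumptions on our part, not the authors' implementation) trains a multi-class SVM on flattened 3D landmark coordinates of the kind a CLM fit produces. The landmark count, emotion labels, and synthetic data are placeholders chosen only to make the example self-contained.

# A rough, self-contained sketch of the classification stage: a multi-class
# SVM over flattened 3D facial shape vectors. Synthetic data stand in for
# the CLM tracker's output; landmark count and labels are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
N_LANDMARKS = 66  # a common CLM landmark count; the paper's exact count may differ

rng = np.random.default_rng(0)
# Each sample: the (x, y, z) coordinates of a fitted 3D shape, flattened.
X = rng.normal(size=(600, N_LANDMARKS * 3))
y = rng.integers(len(EMOTIONS), size=600)  # placeholder emotion labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf")  # scikit-learn's SVC wraps LIBSVM; multi-class by default
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))

In the actual system the feature vectors would come from the CLM tracker rather than a random generator, and the kernel and its parameters would be tuned on the training corpus.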
Cite this article as:
L. A. Jeni, H. Hashimoto, and T. Kubota, “Robust Facial Expression Recognition Using Near Infrared Cameras,” J. Adv. Comput. Intell. Intell. Inform., Vol.16, No.2, pp. 341-348, 2012.
