
JACIII Vol.23 No.3, pp. 444-455 (2019)
doi: 10.20965/jaciii.2019.p0444

Paper:

Combining 2D Gabor and Local Binary Pattern for Facial Expression Recognition Using Extreme Learning Machine

Zhen-Tao Liu*,**, Si-Han Li*,**, Wei-Hua Cao*,**, Dan-Yun Li*,**,†, Man Hao*,**, and Ri Zhang*,**

*School of Automation, China University of Geosciences
No.388 Lumo Road, Hongshan District, Wuhan, Hubei 430074, China

**Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems
No.388 Lumo Road, Hongshan District, Wuhan, Hubei 430074, China

†Corresponding author

Received: June 9, 2018
Accepted: November 12, 2018
Published: May 20, 2019
Keywords: facial expression recognition, 2D Gabor, LBP, ELM, human-robot interaction
Abstract

The efficiency of facial expression recognition (FER) is important for human-robot interaction. Detection of the facial region, extraction of discriminative facial expression features, and identification of facial expression categories all affect recognition accuracy and time efficiency. An FER framework is proposed in which 2D Gabor filters and the local binary pattern (LBP) are combined to extract discriminative features from salient facial expression patches, and an extreme learning machine (ELM) is adopted to identify facial expression categories. The combination of 2D Gabor and LBP not only describes multiscale and multidirectional textural features, but also captures small local details. FER using ELM and the support vector machine (SVM) is performed on the Japanese Female Facial Expression (JAFFE) database and the extended Cohn-Kanade (CK+) database, respectively; both ELM and SVM achieve an accuracy of more than 85%, and the computational efficiency of ELM is higher than that of SVM. The proposed framework has been applied to a multimodal emotional communication based human-robot interaction (MEC-HRI) system, in which completing FER within 2 seconds enables real-time human-robot interaction.
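The pipeline described in the abstract (Gabor filtering for multiscale/multidirectional texture, LBP for local micro-patterns, ELM for fast classification) can be sketched in plain numpy. This is a minimal illustration, not the paper's implementation: the filter parameters, patch handling, hidden-layer size, and the synthetic two-class texture data below are all assumptions chosen only to make the sketch self-contained and runnable.

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real part of a 2D Gabor filter (parameters are illustrative)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def filter2d(img, k):
    """'Valid' cross-correlation via sliding windows."""
    windows = np.lib.stride_tricks.sliding_window_view(img, k.shape)
    return np.einsum('ijkl,kl->ij', windows, k)

def lbp_histogram(img):
    """Basic 8-neighbour LBP code image, returned as a normalized 256-bin histogram."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def gabor_lbp_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Combine the two descriptors: LBP histograms of Gabor responses at 4 orientations."""
    return np.concatenate([lbp_histogram(filter2d(img, gabor_kernel(theta=t))) for t in thetas])

class ELM:
    """Single-hidden-layer ELM: random input weights, output weights by pseudoinverse."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # hidden-layer activations
        T = np.eye(int(y.max()) + 1)[y]                    # one-hot targets
        self.beta = np.linalg.pinv(H) @ T                  # closed-form output weights
        return self

    def predict(self, X):
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
        return (H @ self.beta).argmax(axis=1)

# Toy demo on synthetic textures (stand-ins for facial patches):
# class 0 = noise, class 1 = noise plus horizontal stripes.
rng = np.random.default_rng(1)
def make_img(striped):
    img = rng.random((24, 24))
    if striped:
        img += 0.8 * np.sin(np.arange(24) / 2.0)[:, None]
    return img

X = np.stack([gabor_lbp_features(make_img(i % 2 == 1)) for i in range(12)])
y = np.array([i % 2 for i in range(12)])
acc = (ELM(n_hidden=50, seed=0).fit(X, y).predict(X) == y).mean()
```

The key appeal of ELM here, as the abstract notes, is speed: training is a single pseudoinverse rather than iterative optimization, which is what makes sub-2-second FER plausible in the MEC-HRI setting.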

Real-time FER in the MEC-HRI system


Cite this article as:
Z. Liu, S. Li, W. Cao, D. Li, M. Hao, and R. Zhang, “Combining 2D Gabor and Local Binary Pattern for Facial Expression Recognition Using Extreme Learning Machine,” J. Adv. Comput. Intell. Intell. Inform., Vol.23 No.3, pp. 444-455, 2019.

