
JACIII Vol.24, No.6, pp. 792-801 (2020)
doi: 10.20965/jaciii.2020.p0792

Paper:

Two-Channel Feature Extraction Convolutional Neural Network for Facial Expression Recognition

Chang Liu, Kaoru Hirota, Bo Wang, Yaping Dai, and Zhiyang Jia

School of Automation, Beijing Institute of Technology
No.5 Zhongguancun South Street, Haidian District, Beijing 100081, China

Corresponding author

Received: October 13, 2020
Accepted: October 19, 2020
Published: November 20, 2020

Keywords: facial expression recognition, convolutional neural network, local binary pattern, texture feature
Abstract

An emotion recognition framework based on a two-channel convolutional neural network (CNN) is proposed to detect the affective state of humans from facial expressions. The framework consists of three parts: a frontal face detection module, a feature extraction module, and a classification module. The feature extraction module contains two channels: one for raw face images and the other for texture feature images. Local binary pattern (LBP) images are used for texture feature extraction to enrich the facial features and improve network performance. An attention mechanism is adopted in both CNN feature extraction channels to highlight the features related to facial expressions. Moreover, the ArcFace loss function is integrated into the proposed network to increase the inter-class distance and decrease the intra-class distance of facial features. Experiments conducted on two public databases, FER2013 and CK+, demonstrate that the proposed method outperforms previous methods, with accuracies of 72.56% and 94.24%, respectively. The improvement in emotion recognition accuracy makes our approach applicable to service robots.
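The texture channel described above relies on the basic local binary pattern: each pixel's 3×3 neighborhood is thresholded against the center pixel and the comparison bits are packed into an 8-bit code, producing an LBP image of the same kind the paper feeds into its second CNN channel. The sketch below is an illustrative NumPy implementation of that standard LBP operator, not the authors' code; the function name and the neighbor ordering are our own assumptions.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 local binary pattern (illustrative sketch, not the
    authors' implementation): each pixel's 8 neighbors are thresholded
    against the center and packed into an 8-bit code."""
    g = np.asarray(gray, dtype=np.int32)
    h, w = g.shape
    center = g[1:-1, 1:-1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbor offsets in clockwise order starting at the top-left;
    # the starting point/direction is a convention, not fixed by LBP itself.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = g[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        out |= (neighbor >= center).astype(np.uint8) << bit
    return out
```

The resulting single-channel LBP image encodes local texture and can be stacked or fed in parallel with the raw grayscale face image, as in the two-channel architecture the abstract describes.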

Cite this article as:
Chang Liu, Kaoru Hirota, Bo Wang, Yaping Dai, and Zhiyang Jia, “Two-Channel Feature Extraction Convolutional Neural Network for Facial Expression Recognition,” J. Adv. Comput. Intell. Intell. Inform., Vol.24, No.6, pp. 792-801, 2020.


Last updated on Mar. 05, 2021