
JACIII Vol.25 No.6 pp. 953-962
doi: 10.20965/jaciii.2021.p0953
(2021)

Paper:

A Facial Expressions Recognition Method Using Residual Network Architecture for Online Learning Evaluation

Duong Thang Long

Hanoi Open University
B101 House, Nguyen Hien Street, Hai Ba Trung District, Ha Noi City, Viet Nam

Received: April 9, 2021
Accepted: August 4, 2021
Published: November 20, 2021
Keywords: convolutional neural networks, facial expressions recognition, image augmenting, learning management system
Abstract

Facial expression recognition (FER) has been widely researched in recent years, with successful applications in domains such as driver monitoring and safety warning, surveillance, and recording customer satisfaction. However, FER remains challenging because the same facial expression can appear very differently across people. Researchers currently approach this problem mainly with convolutional neural networks (CNN) combined with architectures such as AlexNet, VGGNet, GoogLeNet, ResNet, and SENet. Although the FER results of these models keep improving as the architectures evolve, there is still room for improvement, especially in practical applications. In this study, we propose a CNN-based model using a residual network architecture for the FER problem. We also augment images to diversify the training data, which improves the model's recognition results and helps avoid overfitting. Building on this model, we propose a system integrated into learning management systems to identify students and evaluate their online learning processes. We run experiments on published research datasets (CK+, Oulu-CASIA, and JAFFE) and on images collected from our students (FERS21). The experimental results indicate that the proposed model performs FER with significantly higher accuracy than existing methods.
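
For readers who want a concrete starting point, the sketch below illustrates the two ideas named in the abstract, a small residual-block CNN classifier and on-the-fly image augmentation, using Python with TensorFlow/Keras. It is a minimal illustration rather than the author's exact architecture: the 48x48 grayscale input, filter counts, number of blocks, augmentation ranges, and seven expression classes are all assumptions made for the example.

# Minimal sketch of a residual CNN for 7-class facial expression recognition
# with image augmentation. All sizes and ranges are illustrative assumptions,
# not the configuration reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def residual_block(x, filters):
    """Two 3x3 convolutions with a shortcut connection (post-activation variant)."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    if shortcut.shape[-1] != filters:
        # 1x1 projection so the channel counts match for the addition
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))

def build_fer_model(input_shape=(48, 48, 1), num_classes=7):
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    for filters in (32, 64, 128):          # stacked residual blocks with downsampling
        x = residual_block(x, filters)
        x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

# Image augmentation to diversify training data and reduce overfitting;
# the specific transformation ranges are illustrative.
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10, width_shift_range=0.1, height_shift_range=0.1,
    zoom_range=0.1, horizontal_flip=True)

model = build_fer_model()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(augmenter.flow(x_train, y_train, batch_size=64), epochs=50)

The 1x1 projection on the shortcut is applied only when a block changes the channel count, so the element-wise addition that defines a residual connection stays valid; otherwise the identity shortcut is used.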

Cite this article as:
D. Long, “A Facial Expressions Recognition Method Using Residual Network Architecture for Online Learning Evaluation,” J. Adv. Comput. Intell. Intell. Inform., Vol.25 No.6, pp. 953-962, 2021.
