JACIII Vol.24 No.7 pp. 891-899
doi: 10.20965/jaciii.2020.p0891
(2020)

Paper:

A Students’ Concentration Evaluation Algorithm Based on Facial Attitude Recognition via Classroom Surveillance Video

Simin Li, Yaping Dai, Kaoru Hirota, and Zhe Zuo

School of Automation, Beijing Institute of Technology
No. 5 Zhongguancun South Street, Haidian District, Beijing 100081, China

Corresponding author

Received:
October 20, 2020
Accepted:
October 27, 2020
Published:
December 20, 2020
Keywords:
concentration evaluation, facial attitude recognition, Dempster–Shafer theory, classroom surveillance video
Abstract
[Figure: the algorithm's on-screen result on a classroom video]

To detect students’ concentration states in the classroom, a Dempster–Shafer (DS) theory based evaluation algorithm is proposed that measures the Euler angles of each student’s facial attitude. The facial attitude angles can be detected from low-resolution surveillance video; therefore, compared with other methods for evaluating student concentration, the proposed algorithm can be applied directly in most classrooms using existing monitoring equipment. By using DS theory to fuse the concentration state of each student, a curve of the students’ overall concentration score over time is obtained to describe the overall classroom concentration state. The feasibility and effectiveness of the algorithm design are verified on a dataset captured by a computer’s front-facing camera, and the overall functionality is tested on a 35-person classroom video dataset. Compared with the average score from a questionnaire completed by 20 reviewers, the accuracy of the proposed algorithm is about 85.3%.
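The fusion step described above relies on Dempster's rule of combination. As a minimal sketch of that rule, the snippet below fuses two students' evidence expressed as mass functions over a two-element frame of discernment {"F" (focused), "D" (distracted)}; the labels, mass values, and two-element frame are illustrative assumptions, since the abstract does not specify the paper's exact frame or mass assignments.

```python
from itertools import product

def combine(m1, m2):
    """Fuse two mass functions (dicts of frozenset -> mass) with Dempster's rule.

    Intersecting focal elements accumulate the product of their masses;
    empty intersections contribute to the conflict term, which is
    normalized away at the end.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be fused")
    # Normalize by the non-conflicting mass (1 - K)
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence from two students over the frame {F, D}
student1 = {frozenset({"F"}): 0.7, frozenset({"D"}): 0.2, frozenset({"F", "D"}): 0.1}
student2 = {frozenset({"F"}): 0.6, frozenset({"D"}): 0.3, frozenset({"F", "D"}): 0.1}

fused = combine(student1, student2)
```

Fusing more than two students follows by applying `combine` iteratively, since Dempster's rule is associative and commutative; the resulting mass on the "focused" hypothesis can then be tracked frame by frame to plot the overall concentration curve.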

Cite this article as:
Simin Li, Yaping Dai, Kaoru Hirota, and Zhe Zuo, “A Students’ Concentration Evaluation Algorithm Based on Facial Attitude Recognition via Classroom Surveillance Video,” J. Adv. Comput. Intell. Intell. Inform., Vol.24, No.7, pp. 891-899, 2020.