
JACIII Vol.30 No.1, pp. 205-221, 2026
doi: 10.20965/jaciii.2026.p0205

Research Paper:

Practical Application of Fuzzy Atmosfield with Machine Learning for Group Learning Assessment in Smart Classrooms

Linyao Yang*, Ting Zhang*, Bemnet Wondimagegnehu Mersha*,†, Yaping Dai**, Kaoru Hirota*, Wei Dai***, and Yumin Lin***

*School of Automation, Beijing Institute of Technology
No.5 Zhongguancun South Street, Haidian District, Beijing 100081, China

†Corresponding author

**Beijing Institute of Technology, Zhuhai
No.6 Jinfeng Road, Tangjiawan, Zhuhai, Guangdong 519088, China

***River Security Technology Co., Ltd.
No.1520 Gumei Road, Xuhui District, Shanghai 200336, China

Received:
February 25, 2025
Accepted:
September 3, 2025
Published:
January 20, 2026
Keywords:
Fuzzy Atmosfield, action recognition, smart classroom, machine learning
Abstract

Traditional classroom group learning state evaluations are labor-intensive, time-consuming, and often biased, which has created the need for an automatic group learning state assessment method. Current research on smart education focuses primarily on identifying individual student behaviors, leaving a gap in the assessment of group learning states. To address this, a method integrating machine learning with a Fuzzy Atmosfield is proposed for group learning state assessment. The Fuzzy Atmosfield was designed to capture the learning state of the group using an improved three-axis vector. The proposed method was first tested on a customized simulated classroom dataset and subsequently applied to a real classroom video dataset. The accuracy of behavior recognition on the real classroom video data reached 83.73%, and the analysis results corresponded to actual classroom situations. The experimental results show that the proposed method can provide automatic, accurate, and real-time group learning state assessments in a smart classroom.
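As a rough illustration of the pipeline the abstract describes (per-student behavior recognition aggregated into a three-axis group-state vector), the sketch below is hypothetical: the behavior categories, axis names, and aggregation rule are placeholders chosen for illustration, not the paper's actual Fuzzy Atmosfield design.

```python
from collections import Counter

# Illustrative behavior categories; the paper's recognizer may use
# different labels entirely.
POSITIVE = {"writing", "reading", "discussing", "hand-raising"}
NEGATIVE = {"sleeping", "using-phone", "distracted"}

def axis(counts, n, plus, minus):
    """Signed fraction in [-1, 1]: share of 'plus' behaviors minus
    share of 'minus' behaviors in the group."""
    return (sum(counts[b] for b in plus) - sum(counts[b] for b in minus)) / n

def group_state_vector(behaviors):
    """Aggregate per-student behavior labels for one time window into a
    hypothetical three-axis group-state vector (engagement, liveliness,
    attention), each component in [-1, 1]."""
    counts = Counter(behaviors)
    n = max(len(behaviors), 1)
    engagement = axis(counts, n, POSITIVE, NEGATIVE)
    liveliness = axis(counts, n, {"discussing", "hand-raising"}, {"sleeping"})
    attention = axis(counts, n, {"writing", "reading"},
                     {"using-phone", "distracted"})
    return engagement, liveliness, attention

# One recognized label per student in the current video window.
labels = ["writing", "writing", "discussing", "sleeping"]
print(group_state_vector(labels))  # → (0.5, 0.0, 0.5)
```

A vector like this could then be visualized or fuzzified per axis; the actual axis semantics and membership functions in the paper differ from this simplified fraction-based aggregation.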

Cite this article as:
L. Yang, T. Zhang, B. W. Mersha, Y. Dai, K. Hirota, W. Dai, and Y. Lin, “Practical Application of Fuzzy Atmosfield with Machine Learning for Group Learning Assessment in Smart Classrooms,” J. Adv. Comput. Intell. Intell. Inform., Vol.30 No.1, pp. 205-221, 2026.
