
JACIII Vol.29 No.2 pp. 349-357 (2025)
doi: 10.20965/jaciii.2025.p0349

Research Paper:

Improved YOLOv8-Based Algorithm for Detecting Helmets of Electric Moped Drivers and Passengers

Si-Yue Fu*, Dong Wei*,**,†, and Liu-Ying Zhou*

*School of Intelligence Science and Technology, Beijing University of Civil Engineering and Architecture
No.15 Yongyuan Road, Huangcun Town, Daxing District, Beijing 102616, China

**Beijing Key Laboratory of Super Intelligent Technology for Urban Architecture
No.15 Yongyuan Road, Huangcun Town, Daxing District, Beijing 102616, China

†Corresponding author

Received: April 14, 2024
Accepted: January 6, 2025
Published: March 20, 2025

Keywords: YOLOv8, object detection, multiscale feature fusion, loss function, deep learning
Abstract

Once trained, an object-detection algorithm can automatically determine whether electric-moped riders are wearing helmets, saving regulatory labor costs. However, complex environmental backgrounds and headwear that resembles helmets easily cause large numbers of false negatives and false positives, making detection more difficult. This paper proposes the YOLOv8n-Improved object-detection algorithm. First, in the neck, the algorithm uses a simplified weighted bi-directional feature pyramid network (BiFPN) structure that removes single-input nodes, adds connection edges, and assigns path weights according to the importance of each feature. This structure strengthens the algorithm's multiscale feature fusion while improving computational efficiency. Second, in the head, the algorithm adopts the scale-sensitive intersection over union (SIoU) loss function, which introduces the vector angle between the predicted and ground-truth boxes and redefines the penalty metric. This change speeds up network convergence and improves model accuracy. In comparative validation on the test set, YOLOv8n-Improved increases average precision (AP) by 1.37% for electric-moped detection and 3.16% for helmet detection, raises the overall mean AP by 2.27%, and reduces both false negatives and false positives for the two categories.
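The weighted fusion described for the neck follows the BiFPN idea of Tan et al. [23]: each input path to a fusion node carries a learnable non-negative weight, and the weighted inputs are normalized by the weight sum. The sketch below illustrates only this "fast normalized fusion" rule with NumPy, under assumed toy inputs; it is not the paper's implementation, and the feature maps, weights, and the `weighted_fusion` helper are hypothetical.

```python
import numpy as np

def weighted_fusion(features, weights, eps=1e-4):
    """BiFPN-style fast normalized fusion: clamp the learnable path
    weights to be non-negative, normalize them by their sum, and
    return the weighted sum of the same-shape input feature maps."""
    w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)  # ReLU keeps weights >= 0
    norm = w / (w.sum() + eps)                                  # normalize path importances
    return sum(wi * f for wi, f in zip(norm, features))

# Toy example: fuse a top-down feature with a lateral input,
# with learned weights favoring the top-down path 3:1.
p_td = np.ones((4, 4))        # hypothetical top-down feature map
p_in = np.full((4, 4), 2.0)   # hypothetical lateral input feature map
fused = weighted_fusion([p_td, p_in], [3.0, 1.0])
```

In a real network the weights are trained parameters and the inputs are resized convolutional feature maps; the normalization is what lets the network express "how much each scale matters" at every fusion node.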
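The SIoU loss's distinctive term is its angle cost [25], which penalizes the orientation of the vector between the predicted and ground-truth box centers so that regression first aligns the prediction with one axis. As a sketch of that idea only (not the full SIoU loss, which also has distance, shape, and IoU terms), the snippet below computes the angle cost Λ = 1 − 2·sin²(arcsin(sin α) − π/4) for boxes given as (cx, cy, w, h); the function name and box format are assumptions for illustration.

```python
import math

def siou_angle_cost(box_pred, box_gt):
    """Angle cost from the SIoU loss (Gevorgyan, 2022).
    α is the elevation angle of the center-to-center vector;
    the cost is maximal (1) at α = 45° and zero at α = 0° or 90°,
    which pushes regression toward axis alignment first."""
    cx_p, cy_p = box_pred[0], box_pred[1]
    cx_g, cy_g = box_gt[0], box_gt[1]
    sigma = math.hypot(cx_g - cx_p, cy_g - cy_p)  # center-to-center distance
    if sigma == 0.0:
        return 0.0                                # centers coincide: no angle penalty
    sin_alpha = abs(cy_g - cy_p) / sigma          # sine of the elevation angle
    return 1.0 - 2.0 * math.sin(math.asin(sin_alpha) - math.pi / 4) ** 2
```

In the full SIoU formulation this Λ modulates the distance cost, so the penalty metric depends on the direction of the center offset rather than on its magnitude alone, which is the convergence-speed benefit the abstract refers to.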

Cite this article as:
S. Fu, D. Wei, and L. Zhou, “Improved YOLOv8-Based Algorithm for Detecting Helmets of Electric Moped Drivers and Passengers,” J. Adv. Comput. Intell. Intell. Inform., Vol.29 No.2, pp. 349-357, 2025.
References
  [1] Z. Zou, K. Chen, Z. Shi, Y. Guo, and J. Ye, “Object detection in 20 years: A survey,” Proc. of the IEEE, Vol.111, No.3, pp. 257-276, 2023. https://doi.org/10.1109/JPROC.2023.3238524
  [2] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” 2014 IEEE Conf. on Computer Vision and Pattern Recognition, pp. 580-587, 2014. https://doi.org/10.1109/CVPR.2014.81
  [3] R. Girshick, “Fast R-CNN,” 2015 IEEE Int. Conf. on Computer Vision, pp. 1440-1448, 2015. https://doi.org/10.1109/ICCV.2015.169
  [4] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” Proc. of the 29th Int. Conf. on Neural Information Processing Systems, Vol.1, pp. 91-99, 2015.
  [5] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” Proc. of the 13th European Conf. on Computer Vision, Part 3, pp. 346-361, 2014. https://doi.org/10.1007/978-3-319-10578-9_23
  [6] T.-Y. Lin et al., “Feature pyramid networks for object detection,” 2017 IEEE Conf. on Computer Vision and Pattern Recognition, pp. 936-944, 2017. https://doi.org/10.1109/CVPR.2017.106
  [7] W. Liu et al., “SSD: Single shot multibox detector,” Proc. of the 14th European Conf. on Computer Vision, Part 1, pp. 21-37, 2016. https://doi.org/10.1007/978-3-319-46448-0_2
  [8] Z. Li and F. Zhou, “FSSD: Feature fusion single shot multibox detector,” arXiv:1712.00960, 2017. https://doi.org/10.48550/arXiv.1712.00960
  [9] C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg, “DSSD: Deconvolutional single shot detector,” arXiv:1701.06659, 2017. https://doi.org/10.48550/arXiv.1701.06659
  [10] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” 2016 IEEE Conf. on Computer Vision and Pattern Recognition, pp. 779-788, 2016. https://doi.org/10.1109/CVPR.2016.91
  [11] J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” 2017 IEEE Conf. on Computer Vision and Pattern Recognition, pp. 6517-6525, 2017. https://doi.org/10.1109/CVPR.2017.690
  [12] J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv:1804.02767, 2018. https://doi.org/10.48550/arXiv.1804.02767
  [13] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal speed and accuracy of object detection,” arXiv:2004.10934, 2020. https://doi.org/10.48550/arXiv.2004.10934
  [14] S. Chen, J. Lan, H. Liu, C. Chen, and X. Wang, “Helmet wearing detection of motorcycle drivers using deep learning network with residual transformer-spatial attention,” Drones, Vol.6, No.12, Article No.415, 2022. https://doi.org/10.3390/drones6120415
  [15] W. Jia et al., “Real-time automatic helmet detection of motorcyclists in urban traffic using improved YOLOv5 detector,” IET Image Processing, Vol.15, No.14, pp. 3623-3637, 2021. https://doi.org/10.1049/ipr2.12295
  [16] J. P. Q. Tomas and B. Doma, “Motorcycle helmet detection and usage classification in the Philippines using YOLOv5 algorithm,” Proc. of the 2022 5th Int. Conf. on Computational Intelligence and Intelligent Systems, pp. 21-25, 2022. https://doi.org/10.1145/3581792.3581796
  [17] Y. Li, Q. Fan, H. Huang, Z. Han, and Q. Gu, “A modified YOLOv8 detection network for UAV aerial image recognition,” Drones, Vol.7, No.5, Article No.304, 2023. https://doi.org/10.3390/drones7050304
  [18] S.-Y. Fu, D. Wei, and L.-Y. Zhou, “Helmet detection algorithm of electric bicycle riders based on YOLOv5 with CBAM attention mechanism integration,” Proc. of the 8th Int. Workshop on Advanced Computational Intelligence and Intelligent Informatics, pp. 43-56, 2023. https://doi.org/10.1007/978-981-99-7593-8_5
  [19] J. Terven, D. M. Córdova-Esparza, and J.-A. Romero-González, “A comprehensive review of YOLO architectures in computer vision: From YOLOv1 to YOLOv8 and YOLO-NAS,” Machine Learning and Knowledge Extraction, Vol.5, No.4, pp. 1680-1716, 2023. https://doi.org/10.3390/make5040083
  [20] T. Wu and Y. Dong, “YOLO-SE: Improved YOLOv8 for remote sensing object detection and recognition,” Applied Sciences, Vol.13, No.24, Article No.12977, 2023. https://doi.org/10.3390/app132412977
  [21] T.-Y. Lin et al., “Feature pyramid networks for object detection,” 2017 IEEE Conf. on Computer Vision and Pattern Recognition, pp. 936-944, 2017. https://doi.org/10.1109/CVPR.2017.106
  [22] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path aggregation network for instance segmentation,” 2018 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, pp. 8759-8768, 2018. https://doi.org/10.1109/CVPR.2018.00913
  [23] M. Tan, R. Pang, and Q. V. Le, “EfficientDet: Scalable and efficient object detection,” 2020 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, pp. 10778-10787, 2020. https://doi.org/10.1109/CVPR42600.2020.01079
  [24] Z. Zheng et al., “Distance-IoU loss: Faster and better learning for bounding box regression,” Proc. of the AAAI Conf. on Artificial Intelligence, Vol.34, No.7, pp. 12993-13000, 2020. https://doi.org/10.1609/aaai.v34i07.6999
  [25] Z. Gevorgyan, “SIoU loss: More powerful learning for bounding box regression,” arXiv:2205.12740, 2022. https://doi.org/10.48550/arXiv.2205.12740

