JACIII Vol.27 No.4 pp. 622-631
doi: 10.20965/jaciii.2023.p0622

Research Paper:

Trash Detection Algorithm Suitable for Mobile Robots Using Improved YOLO

Ryotaro Harada, Tadahiro Oyama, Kenji Fujimoto, Toshihiko Shimizu, Masayoshi Ozawa, Julien Samuel Amar, and Masahiko Sakai

Kobe City College of Technology
8-3 Gakuen-higashimachi, Nishi-ku, Kobe, Hyogo 651-2194, Japan

Received: December 16, 2022
Accepted: March 24, 2023
Published: July 20, 2023
Keywords: autonomous robot, trash detection, deep neural network, edge device, YOLO

The illegal dumping of aluminum and plastic in cities and marine areas harms ecosystems and contributes to increased environmental pollution. Although volunteer trash-pickup activities have increased in recent years, they require significant effort, time, and money. Therefore, we propose an automated trash-pickup robot that combines autonomous movement with trash-pickup arms. Although these functions have been actively developed, relatively little research has focused on trash detection. As such, we developed a trash detection function based on deep learning models to improve accuracy. First, we created a new trash dataset covering four types of trash with high illegal-dumping volumes (cans, plastic bottles, cardboard, and cigarette butts). Next, we developed a new you only look once (YOLO)-based model with fewer parameters and computations. We trained the model on the created dataset and on a marine-trash dataset from our previous research. Consequently, the proposed models achieve the same detection accuracy as existing models on both datasets with fewer parameters and computations. Furthermore, the proposed models increase the frame rate on edge devices.
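The abstract's claim of "fewer parameters and computations" is the kind of saving lightweight YOLO variants typically obtain by swapping standard convolutions for cheaper factored ones. As an illustrative sketch only (not the paper's actual architecture, and all function names below are hypothetical), the following compares the weight and multiply-accumulate (MAC) counts of a standard 3×3 convolution against a depthwise-separable one, a common substitution in edge-oriented detectors:

```python
# Illustrative parameter/computation comparison (hypothetical helpers,
# not the paper's model): a standard k x k convolution vs. a
# depthwise-separable one (depthwise k x k + pointwise 1 x 1).

def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weight count of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k conv over c_in channels, then a 1x1 pointwise conv."""
    return c_in * k * k + c_in * c_out

def macs(params: int, h: int, w: int) -> int:
    """Multiply-accumulates for a conv layer producing an h x w feature map."""
    return params * h * w

std = conv_params(128, 128, 3)          # 147,456 weights
sep = dw_separable_params(128, 128, 3)  # 17,536 weights
print(std, sep, round(std / sep, 1))    # the separable form is ~8.4x smaller
```

The same ratio carries over to MACs, since both layers are applied over the same feature-map area; this is one plausible route to the higher edge-device frame rates the abstract reports.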

Result of trash detection

Cite this article as:
R. Harada, T. Oyama, K. Fujimoto, T. Shimizu, M. Ozawa, J. Amar, and M. Sakai, “Trash Detection Algorithm Suitable for Mobile Robots Using Improved YOLO,” J. Adv. Comput. Intell. Intell. Inform., Vol.27 No.4, pp. 622-631, 2023.

