
JRM Vol.32 No.6 pp. 1164-1172 (2020)
doi: 10.20965/jrm.2020.p1164

Paper:

Toward Autonomous Garbage Collection Robots in Terrains with Different Elevations

Renato Miyagusuku, Yuki Arai, Yasunari Kakigi, Takumi Takebayashi, Akinori Fukushima, and Koichi Ozaki

Graduate School of Engineering, Utsunomiya University
7-1-2 Yoto, Utsunomiya, Tochigi 321-8585, Japan

Received: June 15, 2020
Accepted: September 1, 2020
Published: December 20, 2020
Keywords: field robotics, simultaneous localization and mapping, autonomous navigation, image recognition, Nakanoshima Challenge
Abstract

The practical application of robotic technologies can significantly reduce the burden on human workers, which is particularly important given the declining birthrates and aging populations in Japan and around the world. In this paper, we present our work toward realizing one such application: outdoor autonomous garbage collection robots. We address issues related to outdoor garbage recognition and autonomous navigation (mapping, localization, and re-localization) in crowded outdoor environments and in areas with different terrain elevations. Our approach was experimentally validated in real urban settings during the Nakanoshima Challenge and the Nakanoshima Challenge – Extra Challenge, where our robots completed all tasks.
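For readers unfamiliar with the re-localization problem the abstract mentions, the sketch below illustrates one standard approach, Monte Carlo localization (a particle filter over pose hypotheses). It is a minimal toy example: the unicycle motion model, the single-landmark range likelihood, and every constant are assumptions made for illustration, not the sensor models used by the authors' robots.

import numpy as np

rng = np.random.default_rng(0)
N = 500  # number of particles (assumed value for this sketch)

# Particles are pose hypotheses (x, y, theta), spread uniformly because
# after a kidnap/re-localization event the robot's pose is unknown.
particles = rng.uniform([-10.0, -10.0, -np.pi],
                        [10.0, 10.0, np.pi], size=(N, 3))
weights = np.full(N, 1.0 / N)

def motion_update(particles, v, w, dt):
    """Propagate every hypothesis with a noisy unicycle model."""
    v_n = v + rng.normal(0.0, 0.05, N)   # assumed velocity noise
    w_n = w + rng.normal(0.0, 0.02, N)   # assumed turn-rate noise
    particles[:, 0] += v_n * np.cos(particles[:, 2]) * dt
    particles[:, 1] += v_n * np.sin(particles[:, 2]) * dt
    particles[:, 2] += w_n * dt
    return particles

def measurement_update(weights, particles, z, landmark, sigma=0.5):
    """Reweight hypotheses by how well a range reading z to a known
    landmark matches them (a toy stand-in for lidar scan matching)."""
    expected = np.linalg.norm(particles[:, :2] - landmark, axis=1)
    weights = weights * np.exp(-0.5 * ((z - expected) / sigma) ** 2)
    weights += 1e-300                    # guard against total collapse
    return weights / weights.sum()

def resample(particles, weights):
    """Low-variance (systematic) resampling."""
    u = (np.arange(N) + rng.random()) / N
    idx = np.minimum(np.searchsorted(np.cumsum(weights), u), N - 1)
    return particles[idx].copy(), np.full(N, 1.0 / N)

# One filter iteration with synthetic odometry and one range reading.
particles = motion_update(particles, v=0.5, w=0.1, dt=0.1)
weights = measurement_update(weights, particles, z=4.2,
                             landmark=np.array([3.0, 2.0]))
particles, weights = resample(particles, weights)
# Naive weighted mean; a real system would use a circular mean for theta.
print("pose estimate:", np.average(particles, axis=0, weights=weights))

In a deployed system, the measurement update would compare full lidar scans against a prior map rather than a single landmark range, but the predict-weight-resample cycle is the same.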

Navigation and garbage recognition during the Nakanoshima Challenge 2019

Cite this article as:
R. Miyagusuku, Y. Arai, Y. Kakigi, T. Takebayashi, A. Fukushima, and K. Ozaki, “Toward Autonomous Garbage Collection Robots in Terrains with Different Elevations,” J. Robot. Mechatron., Vol.32 No.6, pp. 1164-1172, 2020.
