JRM Vol.35 No.2 pp. 279-287
doi: 10.20965/jrm.2023.p0279

Development Report:

Development of Autonomous Moving Robot Using Appropriate Technology for Tsukuba Challenge

Yuta Kanuki*, Naoya Ohta**, and Nobuaki Nakazawa***

*REVAST Co., Ltd.
2-68-12 Ikebukuro, Toshima-ku, Tokyo 171-0014, Japan

**Faculty of Informatics, Gunma University
1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515, Japan

***Graduate School of Science and Technology, Gunma University
29-1 Hon-cho, Ota, Gunma 373-0057, Japan

Received: October 9, 2022
Accepted: November 17, 2022
Published: April 20, 2023

Keywords: Tsukuba Challenge, appropriate technology, scan matching, image pyramid, traffic signal recognition
The robot developed for Tsukuba Challenge using appropriate technology

We have been participating in the Tsukuba Challenge, an open experiment for autonomous mobile robots, since 2014. The technology of our robot has stabilized, and it won the Tsukuba Mayor Prize every year from 2018 to 2021 without changes to the basic configuration of the body or navigation software. Here, we report the robot’s structure as the project’s current completed form. Our robot is designed under the policy of selecting the most rational technology for the purpose (appropriate technology), even when it is not the latest. For example, we used image-like two-dimensional data instead of a three-dimensional point cloud in map matching for robot positioning. For pedestrian signal recognition, required for one of the optional tasks, we used conventional color image processing rather than deep learning. These techniques are advantageous for balancing the execution time and accuracy required in the challenge.
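The map matching on image-like two-dimensional data mentioned above can be illustrated with a coarse-to-fine search over an occupancy-grid image pyramid. The sketch below is not the authors' actual implementation: the max-pooling downsampling, the score function, the search window, and the restriction to translation only (no rotation) are all simplifying assumptions made for this example.

```python
import numpy as np

def downsample(grid):
    """Halve grid resolution by max-pooling 2x2 blocks, so an occupied
    fine cell keeps its coarse cell occupied (one pyramid level)."""
    h2, w2 = grid.shape[0] // 2, grid.shape[1] // 2
    return grid[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).max(axis=(1, 3))

def score(grid, pts, tx, ty):
    """Count scan points (x, y) that land on occupied map cells after
    translating the scan by (tx, ty) cells."""
    xs = np.round(pts[:, 0] + tx).astype(int)
    ys = np.round(pts[:, 1] + ty).astype(int)
    ok = (xs >= 0) & (xs < grid.shape[1]) & (ys >= 0) & (ys < grid.shape[0])
    return int(grid[ys[ok], xs[ok]].sum())

def pyramid_match(grid, pts, levels=3, search=4):
    """Coarse-to-fine translation search: estimate the shift on the
    coarsest level, then refine it on each finer level."""
    pyr = [grid]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    tx = ty = 0.0
    for lvl in range(levels - 1, -1, -1):
        g, p = pyr[lvl], pts / (2 ** lvl)
        best = (-1, tx, ty)
        for dx in range(-search, search + 1):
            for dy in range(-search, search + 1):
                s = score(g, p, tx + dx, ty + dy)
                if s > best[0]:
                    best = (s, tx + dx, ty + dy)
        _, tx, ty = best
        if lvl > 0:
            tx, ty = tx * 2, ty * 2  # carry the estimate to the finer level
    return tx, ty
```

The pyramid keeps the exhaustive search window small at full resolution: the coarse levels absorb large displacements cheaply, which is how image-pyramid matching trades a small loss of precision per level for a large reduction in execution time.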

Cite this article as:
Y. Kanuki, N. Ohta, and N. Nakazawa, “Development of Autonomous Moving Robot Using Appropriate Technology for Tsukuba Challenge,” J. Robot. Mechatron., Vol.35 No.2, pp. 279-287, 2023.

