
JRM Vol.33 No.6 pp. 1359-1372
doi: 10.20965/jrm.2021.p1359
(2021)

Paper:

A Novel Method for Goal Recognition from 10 m Distance Using Deep Learning in CanSat

Miho Akiyama* and Takuya Saito**

*Graduate School of Electrical and Information Engineering, Shonan Institute of Technology
1-1-25 Tsujido-nishikaigan, Fujisawa, Kanagawa 251-8511, Japan

**Department of Information Science, Faculty of Engineering, Shonan Institute of Technology
1-1-25 Tsujido-nishikaigan, Fujisawa, Kanagawa 251-8511, Japan

Received: May 31, 2021
Accepted: October 18, 2021
Published: December 20, 2021
Keywords: CanSat, deep learning, image classification, ROI, ARLISS
Abstract

In this study, we propose a method that enables a CanSat to recognize its goal and guide itself toward it using deep-learning image classification, even from 10 m away, and we describe the results of a demonstrative evaluation confirming the method's effectiveness. We first applied deep-learning image classification to goal recognition in a CanSat at ARLISS 2019, where the CanSat was guided almost all the way to the goal in all three races and won first place as the overall winner. However, the conventional method has a drawback: the goal recognition rate drops significantly when the CanSat is more than 6–7 m from the goal, making it difficult to guide the CanSat back once it has moved away from the goal for any of several reasons. To enable goal recognition from a distance of 10 m, we investigated the number of horizontal region-of-interest (ROI) divisions and the method of shifting the ROIs vertically during image recognition, and we experimentally determined the effective number of divisions and the resulting recognition rate. Although object detection is commonly used to locate an object in an image with deep learning, we confirmed that the proposed method achieves a higher recognition rate at long distances and a shorter computation time than SSD MobileNet V1. In addition, we participated in the CanSat contest ACTS 2020 to evaluate the effectiveness of the proposed method; the CanSat achieved a zero-distance goal in all three runs, and the method's effectiveness was demonstrated by winning first place in the comeback category.
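The sketch below illustrates the general idea described in the abstract: divide the camera frame into several horizontal ROIs, optionally shift them vertically, and classify each ROI with a small CNN to estimate which direction contains the goal. This is not the authors' code; the division count, ROI geometry, input size, model file, and decision threshold are illustrative assumptions.

# Minimal sketch (assumptions noted in comments), using a hypothetical
# pre-trained binary goal/no-goal classifier saved as "goal_classifier.h5".
import numpy as np
import tensorflow as tf

NUM_DIVISIONS = 8        # assumed number of horizontal ROI divisions
INPUT_SIZE = (64, 64)    # assumed CNN input size

model = tf.keras.models.load_model("goal_classifier.h5")  # hypothetical model

def find_goal_roi(frame: np.ndarray, vertical_shift: int = 0):
    """Return the index of the ROI most likely to contain the goal, or None."""
    h, w, _ = frame.shape
    roi_w = w // NUM_DIVISIONS
    roi_h = roi_w                       # square ROIs, assumed
    top = max(0, h // 2 - roi_h // 2 + vertical_shift)
    scores = []
    for i in range(NUM_DIVISIONS):
        # Crop one horizontal ROI, resize to the CNN input, and normalize.
        roi = frame[top:top + roi_h, i * roi_w:(i + 1) * roi_w]
        roi = tf.image.resize(roi, INPUT_SIZE) / 255.0
        # The classifier is assumed to output P(goal) for the ROI.
        scores.append(float(model.predict(roi[None, ...], verbose=0)[0, 0]))
    best = int(np.argmax(scores))
    return best if scores[best] > 0.5 else None

The returned ROI index would then be mapped to a steering command (e.g., an index left of center means "turn left"); how the paper maps indices to commands and chooses the threshold is not specified here.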

CanSat recognizing a goal from 10 m away using deep learning in ACTS

Cite this article as:
M. Akiyama and T. Saito, “A Novel Method for Goal Recognition from 10 m Distance Using Deep Learning in CanSat,” J. Robot. Mechatron., Vol.33 No.6, pp. 1359-1372, 2021.
