Paper:
Autonomous Motion Control of a Mobile Robot Using Marker Recognition via Deep Learning in GPS-Denied Environments
Takashi Shimoda, Shoya Koga, and Kazuya Sato
Department of Mechanical Engineering, Faculty of Science and Engineering, Saga University
1 Honjo Saga, Saga 840-8502, Japan
In this study, an autonomous traveling control system for a mobile robot was developed based on the relative positions and angles between the robot and markers, calculated from images captured by a camera mounted on the robot. The robot travels autonomously along the path defined by the markers. However, because the conventional method uses OpenCV to identify marker shapes from the markers' color information, markers may be misrecognized under the influence of lighting. Furthermore, the camera's specifications limit the distance at which a marker placed opposite it can be detected, which in turn limits the straight-line travel distance of the robot in the conventional method. The proposed method improves marker-recognition accuracy by using deep learning and introduces a marker-placement scheme that allows the robot to travel straight over longer distances. It thereby achieves autonomous path-travel control, including long straight-line segments, for a mobile robot in environments where global positioning system (GPS) signals cannot be received. In addition, the system can be operated easily by users with no programming knowledge, because the robot's travel path is set up simply by placing markers. The effectiveness of the proposed system was demonstrated through several experiments.
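The abstract describes computing the relative position and angle between the robot and a marker from the onboard camera image. As a minimal illustrative sketch (not the authors' implementation), assuming an ideal pinhole camera with a known horizontal field of view and a square marker of known physical size, the relative bearing and distance could be recovered from a detector's bounding box as follows; the function name and parameters are hypothetical:

```python
import math

def marker_bearing_and_distance(bbox, image_width_px, hfov_deg, marker_size_m):
    """Estimate bearing and distance to a detected marker.

    bbox: (x_min, y_min, x_max, y_max) in pixels, e.g., from a deep-learning
    detector such as YOLO. Assumes an ideal pinhole camera with horizontal
    field of view hfov_deg and a square marker of side length marker_size_m.
    """
    x_min, _, x_max, _ = bbox
    cx = (x_min + x_max) / 2.0
    # Focal length in pixels, derived from the horizontal field of view.
    f_px = (image_width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    # Bearing: positive when the marker lies right of the optical axis.
    bearing_rad = math.atan((cx - image_width_px / 2.0) / f_px)
    # Distance from the marker's apparent pixel width (similar triangles).
    width_px = x_max - x_min
    distance_m = marker_size_m * f_px / width_px
    return bearing_rad, distance_m
```

For a marker centered in a 640-pixel-wide image, the bearing is zero and the distance shrinks as the bounding box grows, which is the geometric relation an autonomous controller of this kind would steer on.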
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.