Autonomous Mobile Robot Navigation Using Scene Matching with Local Features
Toshiaki Shioya*, Kazushige Kogure*, Tomoyuki Iwata*, and Naoya Ohta**
1-2681 Hirosawa-cho, Kiryu-shi, Gunma 376-8555, Japan
1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515, Japan
Received: April 20, 2016; Accepted: September 27, 2016; Published: December 20, 2016
Keywords:local feature, view sequence, autonomous mobile robot, self-localization
The validity of navigation using an autonomous traveling algorithm based on loose localization (the lax localization algorithm) for an autonomous mobile robot with view sequences was studied, and autonomous travel of the robot in a general town environment with people walking was experimentally demonstrated. The autonomous mobile robot was assumed to be a general cart-type robot with steering wheels and fixed wheels, capable of translational motion in the traveling direction and rotational motion around the midpoint between the fixed wheels (Fig. 1). The lax localization algorithm estimates only the distance in the traveling direction and the orientation of the cart in order to make the robot follow the teaching path; it does not estimate the position in the transverse direction normal to the traveling direction. The cart orientation is estimated from the direction in which a reference image can be viewed, and the position relative to the reference image along the traveling direction is estimated from the change in the matching score. An autonomous travel experiment was conducted on the course of Tsukuba Challenge 2015 [a], and based on the estimation results, the robot traveled successfully in almost all locations along the course.
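The two quantities the abstract says are estimated (cart orientation from where the reference image appears, and progress along the path from the change in matching score) can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the small-angle heading model, and the score-comparison rule for advancing through the view sequence are all illustrative assumptions.

```python
# Hypothetical sketch of the lax localization idea: steering is corrected
# from the horizontal offset of matched local features, and progress along
# the taught path is judged from matching scores against the view sequence.

import math
from dataclasses import dataclass


@dataclass
class Match:
    ref_x: float   # feature x-coordinate in the reference (teaching) image
    cur_x: float   # x-coordinate of the matched feature in the current image


def heading_offset(matches, focal_px):
    """Approximate cart orientation error (radians) from the mean
    horizontal displacement of matched features (small-angle pinhole
    model; focal_px is the camera focal length in pixels)."""
    if not matches:
        return 0.0
    mean_dx = sum(m.cur_x - m.ref_x for m in matches) / len(matches)
    return math.atan2(mean_dx, focal_px)


def advance_reference(score_current, score_next):
    """Switch to the next image in the view sequence once it matches
    better than the current one -- a crude stand-in for estimating
    position in the traveling direction from the matching score."""
    return score_next > score_current


# Example: matched features shifted ~20 px to the right mean the scene
# in the reference image now appears to the robot's right, giving a
# small positive orientation correction.
matches = [Match(100, 121), Match(200, 219), Match(310, 330)]
theta = heading_offset(matches, focal_px=600.0)
```

In a real system the matches would come from a local-feature detector and descriptor matcher (e.g. Harris corners, SIFT, or SURF, as cited in the paper), and the matching score would be aggregated over all correspondences rather than a single scalar.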
Cite this article as: Toshiaki Shioya, Kazushige Kogure, Tomoyuki Iwata, and Naoya Ohta, “Autonomous Mobile Robot Navigation Using Scene Matching with Local Features,” J. Robot. Mechatron., Vol.28, No.6, pp. 887-898, 2016.