
J. Robot. Mechatron., Vol.28 No.6, pp. 887-898, 2016
doi: 10.20965/jrm.2016.p0887

Paper:

Autonomous Mobile Robot Navigation Using Scene Matching with Local Features

Toshiaki Shioya*, Kazushige Kogure*, Tomoyuki Iwata*, and Naoya Ohta**

*Mitsuba Corporation
1-2681 Hirosawa-cho, Kiryu-shi, Gunma 376-8555, Japan

**Gunma University
1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515, Japan

Received: April 20, 2016
Accepted: September 27, 2016
Published: December 20, 2016
Keywords: local feature, view sequence, autonomous mobile robot, self-localization
Abstract
The validity of navigation with an autonomous traveling algorithm based on loose localization (the lax localization algorithm) for an autonomous mobile robot using view sequences [1] was studied, and autonomous travel of the robot in a general town environment with pedestrians was experimentally demonstrated. The autonomous mobile robot was assumed to be a general cart-type robot with steering wheels and fixed wheels, able to make a translational motion in the traveling direction and a rotational motion around the midpoint between the fixed wheels (Fig. 1). The lax localization algorithm estimates only the distance in the traveling direction and the orientation of the cart in order to make the robot follow the teaching path; it does not estimate the position in the transverse direction normal to the traveling direction. The cart orientation is estimated from the direction in which a reference image is viewed, and the position relative to the reference image along the traveling direction is estimated from the change in the matching score. An autonomous travel experiment was conducted on the course of Tsukuba Challenge 2015 [a], and based on the estimation results, the robot traveled successfully at almost all locations of the course.
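
For intuition only, the sketch below shows the kind of local-feature matching such a scheme relies on. It uses OpenCV's ORB detector and a brute-force matcher as a stand-in for the local features discussed in the literature (cf. Harris [5], SIFT [6], SURF [7]); the function name, the hfov_deg parameter, and the particular score and heading definitions are illustrative assumptions, not the authors' implementation.

    import cv2
    import numpy as np

    def match_to_reference(current_img, reference_img, hfov_deg=60.0, ratio=0.75):
        # current_img, reference_img: grayscale uint8 images
        # (e.g., loaded with cv2.imread(path, cv2.IMREAD_GRAYSCALE)).
        # Detect and describe local features in both images.
        orb = cv2.ORB_create(nfeatures=1000)
        kp_cur, des_cur = orb.detectAndCompute(current_img, None)
        kp_ref, des_ref = orb.detectAndCompute(reference_img, None)
        if des_cur is None or des_ref is None or len(kp_cur) == 0:
            return 0.0, None

        # Brute-force Hamming matching with Lowe's ratio test.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        pairs = matcher.knnMatch(des_cur, des_ref, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if not good:
            return 0.0, None

        # Matching score: fraction of current-image features with a good match.
        score = len(good) / float(len(kp_cur))

        # Rough heading offset (degrees) toward the reference view: median
        # horizontal shift of matched keypoints, scaled by the assumed
        # horizontal field of view.
        dx = np.median([kp_ref[m.trainIdx].pt[0] - kp_cur[m.queryIdx].pt[0]
                        for m in good])
        heading_deg = dx / current_img.shape[1] * hfov_deg
        return score, heading_deg

In a view-sequence setting [1, 8], such a score would be tracked as the robot advances: the heading offset steers the cart toward the direction in which the reference image is viewed, while the rise and fall of the score indicates the position along the teaching path relative to the reference image, which is how the lax localization described above estimates the traveling-direction distance.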

Cite this article as:
T. Shioya, K. Kogure, T. Iwata, and N. Ohta, “Autonomous Mobile Robot Navigation Using Scene Matching with Local Features,” J. Robot. Mechatron., Vol.28 No.6, pp. 887-898, 2016.
References
[1] Y. Matsumoto, M. Inaba, and H. Inoue, “View-Based Approach to Robot Navigation,” J. of Robotics Society of Japan, Vol.20, No.5, pp. 506-514, 2002.
[2] J. Eguchi and K. Ozaki, “Development of Autonomous Mobile Robot Based on Accurate Map in the Tsukuba Challenge 2014,” J. of Robotics and Mechatronics, Vol.27, No.4, 2015.
[3] T. Yamada, T. Ishida, M. Sekiguchi, K. Okamura, K. Fukunaga, and A. Ohya, “Mobile Robot Outdoor Navigation with Upper Landmark Localization and Explicit Motion Planning,” J. of Robotics Society of Japan, Vol.30, No.3, pp. 253-261, 2012.
[4] T. Shioya, K. Kogure, and N. Ohta, “Minimal Autonomous Mover – MG-11 for Tsukuba Challenge –,” J. of Robotics and Mechatronics, Vol.26, No.2, pp. 225-235, 2014.
[5] C. Harris and M. Stephens, “A combined corner and edge detector,” Proc. 4th Alvey Vision Conf., pp. 147-151, 1988.
[6] D. G. Lowe, “Object recognition from local scale-invariant features,” Proc. of IEEE Int. Conf. on Computer Vision (ICCV), pp. 1150-1157, 1999.
[7] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded Up Robust Features,” Proc. of European Conf. on Computer Vision (ECCV), pp. 404-417, 2006.
[8] Y. Matsumoto, M. Inaba, and H. Inoue, “Visual Navigation Based on View Sequenced Route Representation,” J. of Robotics Society of Japan, Vol.15, No.2, pp. 74-80, 1997.
