Mobile Robot Playback Navigation Based on Robot Pose Calculation Using Memorized Omnidirectional Images
Lixin Tang and Shin'ichi Yuta
Intelligent Robot Laboratory, University of Tsukuba Tsukuba, Ibaraki, 305-8573 Japan
We propose a method of autonomous navigation for mobile robots in indoor environments based on a teaching and playback scheme. During teaching, an operator guides the robot manually. While moving, the robot memorizes its motion as measured by odometry and, at fixed time intervals, an environmental image taken by an omnidirectional camera; the places where images were taken are treated as target positions. When navigating autonomously, the robot plays back the memorized motion to track each target position and corrects its position by calculating its relative pose from the current and memorized images, thereby following the taught route. In this method, vertical edges in the environment serve as landmarks for calculating the robot's position, and an evaluation function we define is used to find corresponding vertical edges between two images. The robot can thus navigate robustly in real building environments. The system also avoids the problem of the operator occluding part of the environment in the images during the teaching stage.
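The teach-and-playback loop described above can be illustrated with a minimal sketch. All names here (`Snapshot`, `teach`, `correct_pose`) are hypothetical stand-ins, and the pose correction is deliberately simplified to a heading adjustment from mean bearing differences; the paper itself computes a full relative pose from corresponded vertical edges in omnidirectional images.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    # Pose from odometry at the moment an omnidirectional image is taken.
    x: float
    y: float
    theta: float
    # Bearings (radians) of vertical edges extracted from that image.
    edge_bearings: list

def teach(route):
    """Teaching phase (sketch): memorize the route as a list of snapshots,
    each pairing an odometry pose with the vertical-edge bearings seen there."""
    return [Snapshot(x, y, th, bearings) for (x, y, th, bearings) in route]

def correct_pose(odometry_pose, memorized, observed_bearings):
    """Playback phase (toy illustration): shift the odometry heading by the
    mean bearing difference of matched vertical edges. Assumes edges are
    already corresponded in order; the paper uses an evaluation function
    to establish this correspondence and recovers the full relative pose."""
    x, y, theta = odometry_pose
    diffs = [obs - mem for obs, mem in zip(observed_bearings, memorized.edge_bearings)]
    dtheta = sum(diffs) / len(diffs)
    return (x, y, theta - dtheta)
```

For example, if playback observes every memorized edge rotated by the same small angle, the correction removes that angle from the odometry heading estimate.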
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Copyright © 2002 by Fuji Technology Press Ltd. and Japan Society of Mechanical Engineers. All rights reserved.