Lixin Tang and Shin'ichi Yuta
We propose a method for autonomous navigation of mobile robots in indoor environments using a teaching and playback scheme. During teaching, an operator manually guides the robot. While moving, the robot records its motion measured by odometry and, at fixed time intervals, an environmental image taken by an omnidirectional camera; the places where images were taken become target positions. During autonomous navigation, the robot plays back the memorized motion to track each target position and corrects its position by computing its pose relative to each target from the current and memorized images, thereby following the taught route. Vertical edges in the environment serve as landmarks for computing the robot's position, and an evaluation function we define finds corresponding vertical edges between the two images. The robot can thus navigate robustly in real building environments. The system also avoids the problem of the operator occluding part of the environment in the images captured during teaching.
Keywords: navigation, teaching and playback, omnidirectional image, robot motion, robot pose calculation