
JRM Vol.21 No.3 pp. 376-383
doi: 10.20965/jrm.2009.p0376
(2009)

Paper:

View-Based Localization Using Head-Mounted Multi Sensors Information

Hiroaki Yaguchi, Nikolaus Zaoputra, Naotaka Hatao, Kimitoshi Yamazaki, Kei Okada, and Masayuki Inaba

The University of Tokyo, Graduate School of Information Science and Technology
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan

Received:
October 2, 2008
Accepted:
February 28, 2009
Published:
June 20, 2009
Keywords:
view-based navigation, omnidirectional vision, sensor fusion
Abstract

In view-based navigation, view sequences are constructed by considering only the appearance of images. This approach works only in limited situations, because the structure of the environment and camera poses under 3D camera motion are not considered. In this paper, we construct a multi-sensor system using an omnidirectional camera, a motion sensor, and laser range finders. Using this system, we propose a method of constructing view sequences that takes 3D camera poses into account.
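The core idea in the abstract, selecting which views to store in a sequence based on 3D camera pose rather than image appearance alone, can be sketched as a simple keyframe-selection loop. This is an illustrative assumption, not the paper's actual algorithm: the pose format `(x, y, z, yaw)`, the function name `build_view_sequence`, and the thresholds are all hypothetical.

```python
import math

def build_view_sequence(poses, trans_thresh=0.5, rot_thresh=math.radians(15)):
    """Hypothetical sketch of pose-based view-sequence construction.

    poses: list of (x, y, z, yaw) camera poses, e.g. from fused motion-sensor
    and laser-range-finder odometry. A new view is stored only when the camera
    has translated or rotated beyond a threshold since the last stored view.
    Returns the indices of the poses selected as views.
    """
    if not poses:
        return []
    selected = [0]          # always keep the first view
    last = poses[0]
    for i, p in enumerate(poses[1:], start=1):
        dist = math.dist(p[:3], last[:3])
        # wrap yaw difference into [-pi, pi] before taking its magnitude
        dyaw = abs((p[3] - last[3] + math.pi) % (2 * math.pi) - math.pi)
        if dist > trans_thresh or dyaw > rot_thresh:
            selected.append(i)
            last = p
    return selected
```

For example, on a straight path sampled every 0.1 m with no rotation, a new view is kept only after the camera has moved more than 0.5 m from the previously stored view.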

Cite this article as:
Hiroaki Yaguchi, Nikolaus Zaoputra, Naotaka Hatao, Kimitoshi Yamazaki, Kei Okada, and Masayuki Inaba, “View-Based Localization Using Head-Mounted Multi Sensors Information,” J. Robot. Mechatron., Vol.21, No.3, pp. 376-383, 2009.
References
  [1] Y. Matsumoto, M. Inaba, and H. Inoue, “View-based approach to robot navigation,” Journal of the Robotics Society of Japan, Vol.26, No.5, pp. 506-514, 2002.
  [2] H. Morita, M. Hild, J. Miura, and Y. Shirai, “Panoramic view-based navigation in outdoor environments based on support vector learning,” In Int. Conf. on Intelligent Robots and Systems (IROS), pp. 2303-2307, 2006.
  [3] H. Katsura, J. Miura, M. Hild, and Y. Shirai, “A view-based outdoor navigation using object recognition robust to changes of weather and seasons,” In Int. Conf. on Intelligent Robots and Systems (IROS), pp. 2974-2979, 2003.
  [4] O. Stasse, A. J. Davison, R. Sellaouti, and K. Yokoi, “Real-time 3D SLAM for humanoid robot considering pattern generator information,” In Int. Conf. on Intelligent Robots and Systems (IROS), pp. 348-355, 2006.
  [5] A. A. Argyros, D. P. Tsakiris, and C. Groyer, “Biomimetic centering behavior,” IEEE Robotics & Automation Magazine, Vol.11, No.4, pp. 21-30, 2004.
  [6] J. Gaspar, N. Winters, and J. Santos-Victor, “Vision-based navigation and environmental representations with an omni-directional camera,” IEEE Transactions on Robotics and Automation, Vol.16, No.6, pp. 890-898, 2000.
  [7] H. Ishiguro and S. Tsuji, “Image-based memory of environment,” In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 634-639, 1996.
  [8] E. Menegatti, T. Maeda, and H. Ishiguro, “Image-based memory for robot navigation using properties of omnidirectional images,” Robotics and Autonomous Systems, Vol.47, No.4, pp. 251-267, 2004.
