
JRM Vol.25 No.1, pp. 38-52 (2013)
doi: 10.20965/jrm.2013.p0038

Paper:

Robust Global Localization Using Laser Reflectivity

Dongxiang Zhang, Ryo Kurazume, Yumi Iwashita,
and Tsutomu Hasegawa

Graduate School of Information Science and Electrical Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan

Received: October 15, 2011
Accepted: March 21, 2012
Published: February 20, 2013
Keywords: global localization, appearance-based localization, map-based localization, laser range finder, reflectance image
Abstract
Global localization, which determines an accurate global position without prior knowledge, is a fundamental requirement for a mobile robot. Map-based global localization gives a precise position by comparing a provided geometric map with current sensory data. Although 3D range data is preferable for 6D global localization in terms of accuracy and reliability, comparison with large 3D data is quite time-consuming. On the other hand, appearance-based global localization, which determines the global position by comparing a captured image with recorded ones, is simple and suitable for real-time processing. However, this technique does not work in the dark or in an environment in which the lighting conditions change remarkably. We herein propose a two-step strategy that combines map-based and appearance-based global localization. Instead of the camera images used in appearance-based global localization, we use reflectance images, which are captured by a laser range finder as a byproduct of range sensing. The effectiveness of the proposed technique is demonstrated through experiments in real environments.
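
As a rough illustration of the two-step strategy described above, the Python sketch below first performs appearance-based coarse localization by matching a query reflectance image against recorded keyframe images, and then refines the retrieved pose by aligning the current scan to the global point-cloud map. The feature type (ORB), the bare-bones point-to-point ICP, and all function names are illustrative assumptions, not the authors' implementation.

# Minimal two-step global localization sketch (illustrative only).
# Step 1: appearance-based coarse localization -- match a query
#         reflectance image against recorded keyframe images.
# Step 2: map-based refinement -- align the current range scan to the
#         global point cloud with a bare-bones point-to-point ICP,
#         starting from the coarse pose found in step 1.
# ORB features and this toy ICP are stand-ins, not the paper's exact method.

import numpy as np
import cv2
from scipy.spatial import cKDTree


def coarse_localize(query_img, keyframes):
    """Return the index of the stored keyframe whose reflectance image
    best matches the query, by counting cross-checked ORB matches."""
    orb = cv2.ORB_create()
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, q_des = orb.detectAndCompute(query_img, None)
    best_idx, best_score = -1, -1
    for i, (img, _pose) in enumerate(keyframes):
        _, des = orb.detectAndCompute(img, None)
        if des is None or q_des is None:
            continue
        score = len(bf.match(q_des, des))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx


def icp_refine(scan, map_pts, T_init, iters=30):
    """Refine a 4x4 pose by point-to-point ICP of the scan against the map."""
    tree = cKDTree(map_pts)
    T = T_init.copy()
    for _ in range(iters):
        src = (T[:3, :3] @ scan.T).T + T[:3, 3]
        _, idx = tree.query(src)              # nearest map point per scan point
        tgt = map_pts[idx]
        # Closed-form rigid alignment (SVD) of src onto tgt.
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        dT = np.eye(4)
        dT[:3, :3], dT[:3, 3] = R, t
        T = dT @ T
    return T


def global_localize(query_img, scan, keyframes, map_pts):
    """Two-step pipeline: coarse pose from appearance, refined pose from ICP."""
    idx = coarse_localize(query_img, keyframes)
    T_coarse = keyframes[idx][1]              # pose stored with the keyframe
    return icp_refine(scan, map_pts, T_coarse)

In the actual system, SURF-style descriptors on reflectance images and full 6D scan matching against the 3D map would take the place of these stand-ins; the sketch only conveys the overall flow of coarse retrieval followed by metric refinement.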
Cite this article as:
D. Zhang, R. Kurazume, Y. Iwashita, and T. Hasegawa, “Robust Global Localization Using Laser Reflectivity,” J. Robot. Mechatron., Vol.25 No.1, pp. 38-52, 2013.
