
JACIII Vol.19 No.1 pp. 11-22
doi: 10.20965/jaciii.2015.p0011
(2015)

Paper:

A Robust Visual-Feature-Extraction Method for Simultaneous Localization and Mapping in Public Outdoor Environment

Gangchen Hua* and Osamu Hasegawa**

*Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, J3-13, 4259 Nagatsuta, Midori-ku, Yokohama 226-8503, Japan

**Imaging Science and Engineering Laboratory, Tokyo Institute of Technology, J3-13, 4259 Nagatsuta, Midori-ku, Yokohama 226-8503, Japan

Received: February 20, 2014
Accepted: August 20, 2014
Published: January 20, 2015
Keywords: computer vision, visual SLAM
Abstract
We describe a new feature-extraction method based on the geometric structure of matched local feature points; it extracts robust features from an image sequence and performs satisfactorily in highly dynamic environments. The proposed method is more accurate than other such methods in appearance-only simultaneous localization and mapping (SLAM). Compared to position-invariant robust features [1], it is also better suited to low-cost, single-lens cameras with narrow fields of view. We tested our method in an outdoor environment at Shibuya Station, capturing images with a conventional hand-held single-lens video camera. The experimental environments are public spaces without any planned landmarks. The results show that the proposed method accurately obtains matches between two visual-feature sets and that stable, accurate SLAM is achieved in dynamic public environments.
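The abstract summarizes the approach at a high level: local feature points are matched across an image sequence, and only points that remain consistent over time are retained as robust features for appearance-only SLAM. As a rough illustration of that general idea only, and not the authors' actual algorithm (the function name, frame window, and ratio threshold below are assumptions, and the paper's geometric-structure analysis of the matched points is omitted), a minimal Python/OpenCV sketch might look like this:

```python
# Illustrative sketch only (assumed names and parameters, not the authors'
# algorithm): keep local features that match consistently across a short
# window of consecutive frames, in the spirit of position-invariant robust
# features [1]. Requires OpenCV with SIFT support.
import cv2


def stable_keypoints(frames, window=3, ratio=0.7):
    """Return keypoints of the last frame in `frames` (grayscale images)
    that find a ratio-test match in every one of the preceding `window`
    frames -- a simple stability proxy for dynamic scenes."""
    sift = cv2.SIFT_create()
    # FLANN-based approximate nearest-neighbour matching (cf. [5]).
    matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),
                                    dict(checks=50))

    detections = [sift.detectAndCompute(f, None) for f in frames]
    kp_last, des_last = detections[-1]
    if des_last is None:
        return []

    stable = set(range(len(kp_last)))            # candidate keypoint indices
    for _, des_prev in detections[-1 - window:-1]:
        if des_prev is None or len(des_prev) < 2:
            return []
        matched = set()
        for pair in matcher.knnMatch(des_last, des_prev, k=2):
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                matched.add(pair[0].queryIdx)    # Lowe's ratio test passed
        stable &= matched                        # must match in every frame
    return [kp_last[i] for i in stable]
```

In the paper itself, such surviving matches would additionally be filtered by the geometric structure they form before being used for place recognition in the SLAM pipeline; the sketch above only shows the temporal-consistency step.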
Cite this article as:
G. Hua and O. Hasegawa, “A Robust Visual-Feature-Extraction Method for Simultaneous Localization and Mapping in Public Outdoor Environment,” J. Adv. Comput. Intell. Intell. Inform., Vol.19 No.1, pp. 11-22, 2015.
References
[1] A. Kawewong, S. Tangruamsub, and O. Hasegawa, “Position-invariant robust features for long-term recognition of dynamic outdoor scenes,” IEICE Trans. on Information and Systems, Vol.E93-D, No.9, pp. 2587-2601, 2010.
[2] A. Torralba, A. Oliva, M. S. Castelhano, and J. M. Henderson, “Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search,” Psychol. Rev., Vol.113, pp. 766-786, Oct. 2006.
[3] D. Lowe, “Object recognition from local scale-invariant features,” IEEE Int. Conf. on Computer Vision, Vol.2, pp. 1150-1157, 1999.
[4] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” ECCV, Vol.3951, pp. 404-417, 2006.
[5] J. Beis and D. G. Lowe, “Shape indexing using approximate nearest-neighbour search in high-dimensional spaces,” Conf. on Computer Vision and Pattern Recognition, pp. 1000-1006, 1997.
[6] M. Cummins and P. Newman, “Invited Applications Paper FAB-MAP: Appearance-Based Place Recognition and Mapping using a Learned Visual Vocabulary Model,” 27th Int. Conf. on Machine Learning (ICML 2010), 2010.
[7] A. Kawewong, N. Tongprasit, S. Tangruamsub, and O. Hasegawa, “Online incremental appearance-based SLAM in highly dynamic environments,” Int. J. of Robotics Research, Vol.30, No.1, pp. 33-55, 2011.
[8] N. Tongprasit, A. Kawewong, and O. Hasegawa, “PIRF-Nav 2: Speeded-up online and incremental appearance-based SLAM in highly dynamic environment,” IEEE Workshop on Applications of Computer Vision (WACV), 2011.
[9] E. Rosten and T. Drummond, “Fusing points and lines for high performance tracking,” IEEE Int. Conf. on Computer Vision, pp. 1508-1515, 2005.
[10] C. Harris and M. Stephens, “A combined corner and edge detector,” Proc. of the 4th Alvey Vision Conf., pp. 147-151, 1988.
