
JACIII Vol.16 No.7 pp. 793-799
doi: 10.20965/jaciii.2012.p0793
(2012)

Paper:

Multi-Scale Bag-of-Features for Scalable Map Retrieval

Kanji Tanaka and Kensuke Kondo

Graduate School of Engineering, University of Fukui, 3-9-1 Bunkyo, Fukui 910-8507, Japan

Received:
October 17, 2011
Accepted:
September 24, 2012
Published:
November 20, 2012
Keywords:
mobile robots, map retrieval, bag-of-features, multi-scale
Abstract
Retrieving from a large collection of environment maps built by mapper robots is a key problem in mobile robot self-localization. In this paper, the map retrieval problem is studied from the novel perspective of a multi-scale Bag-of-Features (BOF) approach. In general, a multi-scale approach is advantageous in capturing both the global structure and the local details of a given map, while BOF map retrieval offers a compact map representation and efficient retrieval via an inverted file system. The main contribution of this paper is to combine the advantages of both approaches. Our approach builds on multi-cue BOF and packing BOF, and achieves both efficiency and compactness in the map retrieval system. Experiments using a large collection of environment maps evaluate the effectiveness of the presented techniques.
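To illustrate the retrieval side of the abstract, the following is a minimal sketch, not the authors' implementation, of bag-of-features map retrieval with an inverted file. It assumes local descriptors have already been extracted from each map at several scales and quantized against a fixed visual vocabulary, so that each map is reduced to a bag of integer visual-word IDs; the class and variable names (InvertedFile, map_id, and so on) are illustrative only.

    # Minimal sketch of BOF map retrieval via an inverted file (assumed setup:
    # each map is already reduced to a list of integer visual-word IDs obtained
    # by quantizing multi-scale local features; names here are illustrative).
    from collections import defaultdict
    import math

    class InvertedFile:
        """Maps each visual word to the maps (documents) that contain it."""

        def __init__(self):
            self.postings = defaultdict(dict)  # word -> {map_id: term frequency}
            self.doc_len = {}                  # map_id -> number of words in that map

        def add_map(self, map_id, words):
            """Index one map given its (multi-scale) bag of visual words."""
            self.doc_len[map_id] = len(words)
            for w in words:
                self.postings[w][map_id] = self.postings[w].get(map_id, 0) + 1

        def query(self, words, top_k=5):
            """Rank indexed maps by a TF-IDF score against the query bag of words."""
            n_docs = len(self.doc_len)
            scores = defaultdict(float)
            for w in set(words):
                posting = self.postings.get(w)
                if not posting:
                    continue
                idf = math.log(n_docs / len(posting))  # rarer visual words weigh more
                q_tf = words.count(w)
                for map_id, tf in posting.items():
                    scores[map_id] += q_tf * tf * idf * idf / self.doc_len[map_id]
            return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]

    # Usage: visual words are plain integers here; in practice they would come
    # from quantizing local map features extracted at multiple scales.
    index = InvertedFile()
    index.add_map("map_A", [1, 1, 2, 5, 7])
    index.add_map("map_B", [2, 3, 3, 8])
    index.add_map("map_C", [1, 5, 5, 9])
    print(index.query([1, 5, 9]))  # map_C should rank first

Because only the visual words present in the query are touched, retrieval cost grows with the number of maps sharing those words rather than with the total collection size, which is the efficiency property the abstract attributes to the inverted file system.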
Cite this article as:
K. Tanaka and K. Kondo, “Multi-Scale Bag-of-Features for Scalable Map Retrieval,” J. Adv. Comput. Intell. Intell. Inform., Vol.16 No.7, pp. 793-799, 2012.
