JACIII Vol.19 No.4 pp. 523-531
doi: 10.20965/jaciii.2015.p0523
(2015)

Paper:

Unsupervised Part-Based Scene Modeling for Map Matching

Kanji Tanaka and Shogo Hanada

Graduate School of Engineering, University of Fukui
3-9-1 Bunkyo, Fukui, Fukui 910-8507, Japan

Received: October 28, 2014
Accepted: April 30, 2015
Published: July 20, 2015
Keywords: mobile robots, map matching, part-model, common pattern discovery
Abstract
We explore the 1-to-N map matching problem, exploiting a compact description of map data to improve the scalability of map matching in robot vision tasks. We explicitly target fast succinct map matching, which consists solely of map matching subtasks: offline map matching, which finds a compact part-based scene model that effectively explains each map using fewer, larger parts, and online map matching, which efficiently finds correspondences between part-based maps. Our part-based scene modeling approach is unsupervised and relies on common pattern discovery (CPD) between the input map and known reference maps. Experimental results on the publicly available radish dataset confirm the effectiveness of the proposed approach.
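To make the two-stage structure of the abstract concrete, the following is a minimal, illustrative Python sketch, not the authors' implementation: an offline stage that keeps only the parts of an input map that recur as common patterns in reference maps, and an online stage that scores two part-based maps by part correspondences. The grid-patch "part" representation, the Hamming-distance tolerance tol, the MIN_SUPPORT threshold, and all function names are assumptions introduced here for illustration only.

```python
# Hypothetical sketch of offline CPD-based part extraction plus online
# part matching, under the assumptions stated above.
import numpy as np

PATCH = 8          # assumed part size: 8x8 occupancy-grid patches
MIN_SUPPORT = 2    # assumed: a pattern must recur in >= 2 reference maps

def extract_patches(grid):
    """Slice a 2D occupancy grid into non-overlapping PATCH x PATCH patches."""
    h, w = grid.shape
    return [grid[i:i+PATCH, j:j+PATCH]
            for i in range(0, h - PATCH + 1, PATCH)
            for j in range(0, w - PATCH + 1, PATCH)]

def offline_cpd_model(input_map, reference_maps, tol=4):
    """Offline stage: keep only input-map parts that recur (within a Hamming
    tolerance) in enough reference maps, yielding a compact part-based model."""
    parts = []
    for patch in extract_patches(input_map):
        support = sum(
            any(np.sum(patch != ref_patch) <= tol
                for ref_patch in extract_patches(ref))
            for ref in reference_maps)
        if support >= MIN_SUPPORT:
            parts.append(patch)
    return parts

def online_match_score(model_a, model_b, tol=4):
    """Online stage: count parts of A that find a near-duplicate part in B,
    a crude proxy for part-level map-to-map correspondence."""
    return sum(
        any(np.sum(pa != pb) <= tol for pb in model_b)
        for pa in model_a)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(0, 2, size=(32, 32))
    maps = [base.copy() for _ in range(4)]
    for m in maps[1:]:
        m[16:, :] = rng.integers(0, 2, size=(16, 32))  # perturb lower half
    model = offline_cpd_model(maps[0], maps[1:])       # shared upper half survives
    print("parts kept:", len(model))
    print("self-match score:", online_match_score(model, model))
```

The point the sketch tries to capture is that the offline stage summarizes a map by a small set of recurring parts, so the online stage can compare compact part sets instead of raw maps; the paper's actual part representation and matching procedure may differ substantially.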
Cite this article as:
K. Tanaka and S. Hanada, “Unsupervised Part-Based Scene Modeling for Map Matching,” J. Adv. Comput. Intell. Intell. Inform., Vol.19 No.4, pp. 523-531, 2015.
References
  [1] B. Yamauchi and R. Beer, “Spatial learning for navigation in dynamic environments,” IEEE Trans. Systems, Man, and Cybernetics, Part B, Vol.26, No.3, pp. 496-505, 1996.
  [2] S. Huang, Z. Wang, and G. Dissanayake, “Sparse local submap joining filter for building large-scale maps,” IEEE Trans. Robotics (TRO), Vol.24, No.5, pp. 1121-1130, 2008.
  [3] A. Wendel, M. Maurer, G. Graber, T. Pock, and H. Bischof, “Dense reconstruction on-the-fly,” IEEE Int. Conf. Computer Vision and Pattern Recognition (CVPR), pp. 1450-1457, 2012.
  [4] S. Se, D. G. Lowe, and J. J. Little, “Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks,” Int. J. Robotics Research, Vol.21, No.8, pp. 735-760, 2002.
  [5] M. Cummins and P. Newman, “Highly scalable appearance-only SLAM – FAB-MAP 2.0,” Robotics: Science and Systems, 2009.
  [6] D. Scaramuzza, F. Fraundorfer, and M. Pollefeys, “Closing the loop in appearance-guided omnidirectional visual odometry by using vocabulary trees,” Robot. Auton. Syst., Vol.58, No.6, pp. 820-827, 2010.
  [7] M. A. Fischler and R. A. Elschlager, “The representation and matching of pictorial structures,” IEEE Trans. on Computers, Vol.C-22, No.1, pp. 67-92, 1973.
  [8] B. Leibe, A. Leonardis, and B. Schiele, “Combined object categorization and segmentation with an implicit shape model,” European Conf. Computer Vision (ECCV) Workshop on Statistical Learning in Computer Vision, pp. 17-32, 2004.
  [9] P. Arbelaez, B. Hariharan, C. Gu, S. Gupta, L. D. Bourdev, and J. Malik, “Semantic segmentation using regions and parts,” IEEE Int. Conf. Computer Vision and Pattern Recognition (CVPR), pp. 3378-3385, 2012.
  [10] H.-K. Tan and C.-W. Ngo, “Common pattern discovery using earth mover’s distance and local flow maximization,” IEEE Int. Conf. Computer Vision (ICCV), pp. 1222-1229, 2005.
  [11] Y. Jiang, J. Meng, and J. Yuan, “Randomized visual phrases for object search,” IEEE Int. Conf. Computer Vision and Pattern Recognition (CVPR), pp. 3100-3107, 2012.
  [12] Y. Chokushi, K. Tanaka, and M. Ando, “Common landmark discovery in urban scenes,” IAPR Int. Conf. Machine Vision Applications, 2013.
  [13] A. Howard and N. Roy, “The robotics data set repository (radish),” 2003.
  [14] P. F. Felzenszwalb, R. B. Girshick, D. A. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI), Vol.32, No.9, pp. 1627-1645, 2010.
  [15] T. Tuytelaars, C. H. Lampert, M. B. Blaschko, and W. L. Buntine, “Unsupervised object discovery: A comparison,” Int. J. of Computer Vision, Vol.88, No.2, pp. 284-302, 2010.
  [16] J. Sivic and A. Zisserman, “Video Google: A text retrieval approach to object matching in videos,” IEEE Int. Conf. Computer Vision (ICCV), pp. 1470-1477, 2003.
  [17] J. Neira, J. D. Tardos, and J. A. Castellanos, “Linear time vehicle relocation in SLAM,” Proc. IEEE Int. Conf. Robotics and Automation, Vol.1, pp. 427-433, 2003.
  [18] S. Olufs and M. Vincze, “Robust single view room structure segmentation in Manhattan-like environments from stereo vision,” IEEE Int. Conf. Robotics and Automation (ICRA), pp. 5315-5322, 2011.
  [19] A. Eliazar and R. Parr, “DP-SLAM: Fast, robust simultaneous localization and mapping without predetermined landmarks,” Proc. 18th Int. Joint Conf. on Artificial Intelligence (IJCAI-03), pp. 1135-1142, Morgan Kaufmann, 2003.
  [20] T. Nagasaka and K. Tanaka, “An incremental scheme for dictionary-based compressive SLAM,” IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), pp. 872-879, 2011.
    Available at: http://rc.his.u-fukui.ac.jp/ICDCS.pdf. [Accessed October 28, 2014]
  [21] M. Ando, K. Tanaka, and Y. Inagaki, “A bag-of-bounding-boxes approach to object-level view image retrieval,” Proc. SICE Annual Conf., 2013.
    Available at: http://rc.his.u-fukui.ac.jp/BOBB.pdf. [Accessed October 28, 2014]
