
JACIII Vol.21 No.1, pp. 59-66 (2017)
doi: 10.20965/jaciii.2017.p0059

Paper:

Incremental Loop Closure Verification by Guided Sampling

Tanaka Kanji

University of Fukui
3-9-1 Bunkyo, Fukui 910-8507, Japan

Received: March 10, 2016
Accepted: October 28, 2016
Published: January 20, 2017
Keywords: loop closure detection, bag-of-words, post verification, guided sampling
Abstract
Loop closure detection, the task of identifying locations revisited by a robot in a sequence of odometry and perceptual observations, is typically formulated as a combination of two subtasks: (1) bag-of-words image retrieval and (2) post-verification using random sample consensus (RANSAC) geometric verification. The main contribution of this study is a novel post-verification framework that achieves a good precision-recall trade-off in loop closure detection. The study is motivated by the fact that not all loop closure hypotheses are equally plausible (e.g., owing to mutual consistency between loop closure constraints), and that if there is evidence that one hypothesis is more plausible than the others, it should be verified more frequently. We demonstrate that the loop closure detection problem can be viewed as an instance of a multi-model hypothesize-and-verify framework, on which guided sampling strategies can be built: loop closures proposed by image retrieval are verified in a planned order (rather than the conventional uniform order) so that verification operates in constant time. Experimental results using a stereo simultaneous localization and mapping (SLAM) system confirm that the proposed strategy, which uses loop closure constraints and robot trajectory hypotheses as a guide, achieves promising results despite a significant number of false-positive constraints and hypotheses.
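
To make the guided-sampling idea concrete, the following is a minimal Python sketch, not the paper's implementation: the names guided_verification, ransac_check, and plausibility are hypothetical stand-ins for the bag-of-words retrieval output, the RANSAC geometric check, and the consistency-based scoring the abstract describes. Candidates are drawn for verification in proportion to their current plausibility, each accepted constraint re-shapes the scores of the remaining candidates, and a fixed verification budget keeps the per-update cost constant.

    import random

    def guided_verification(candidates, verify, score, budget):
        """Verify loop-closure candidates in a score-guided order.

        candidates : loop closures proposed by bag-of-words image retrieval
        verify     : RANSAC-style geometric check, candidate -> bool
        score      : (candidate, accepted) -> positive plausibility weight,
                     e.g., mutual consistency with accepted constraints
        budget     : fixed number of verifications per update (constant time)
        """
        pending = list(candidates)
        accepted = []
        for _ in range(min(budget, len(pending))):
            weights = [score(c, accepted) for c in pending]
            # Guided sampling: draw the next candidate to verify with
            # probability proportional to its plausibility, instead of
            # scanning the candidate list in uniform order.
            idx = random.choices(range(len(pending)), weights=weights, k=1)[0]
            c = pending.pop(idx)
            if verify(c):
                accepted.append(c)  # new constraints re-shape later scores
        return accepted

    if __name__ == "__main__":
        # Toy stand-ins (assumptions, not the paper's components):
        # candidates are (query frame, match frame) pairs.
        retrieved = [(i, i + 100) for i in range(10)]
        ransac_check = lambda c: c[0] % 2 == 0   # mock geometric check
        plausibility = lambda c, acc: 1.0 + sum( # mock mutual consistency
            1.0 for a in acc if abs(a[0] - c[0]) <= 2)
        print(guided_verification(retrieved, ransac_check,
                                  plausibility, budget=8))

The only structural requirement is that score return positive weights; any notion of plausibility (here, agreement with already-accepted constraints on nearby frames) can be plugged into the sampling loop unchanged.
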
Cite this article as:
T. Kanji, “Incremental Loop Closure Verification by Guided Sampling,” J. Adv. Comput. Intell. Intell. Inform., Vol.21 No.1, pp. 59-66, 2017.
