
JRM Vol.27 No.6 pp. 681-690 (2015)
doi: 10.20965/jrm.2015.p0681

Paper:

FPGA-Based Stereo Vision System Using Gradient Feature Correspondence

Hayato Hagiwara, Yasufumi Touma, Kenichi Asami, and Mochimitsu Komori

Department of Applied Science for Integrated System Engineering, Kyushu Institute of Technology
1-1 Sensui, Tobata, Kitakyushu 804-8550, Japan

Received: June 23, 2015
Accepted: August 24, 2015
Published: December 20, 2015
Keywords: stereo vision, corner detection, feature description, gradient features, FPGA
Abstract
[Figure: mobile robot with the stereo vision system]
This paper describes a stereo vision system for an autonomous mobile robot that performs gradient feature correspondence and local image feature computation on a field-programmable gate array (FPGA). Interest point detectors and descriptors such as the Harris operator and the scale-invariant feature transform (SIFT) have been widely studied for mobile robot navigation, but most require heavy computation that can overburden an onboard computer. Our purpose here is to present an interest point detector and a descriptor suited to FPGA implementation. Results show that a detector based on gradient variance inspection runs faster than SIFT or speeded-up robust features (SURF) and is more robust against illumination changes than any other method compared in this study. A descriptor with a hierarchical gradient structure is algorithmically simpler than the SIFT and SURF descriptors, and its stereo matching results outperform both.
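
The two ideas named in the abstract can be illustrated at sketch level in software. The following Python sketch is an illustration only, not the authors' FPGA implementation: the window size, the 2-sigma threshold rule, the (1, 2, 4) grid levels, and the 8 orientation bins are all assumptions made for demonstration.

import numpy as np
from scipy.ndimage import uniform_filter

def gradient_variance_response(img, win=7):
    """Per-pixel variance of the x/y gradients over a win x win window.

    Corner-like points vary strongly in both gradient directions, so we keep
    the smaller of the two variances; flat regions and straight edges score
    low in at least one direction. This mirrors "gradient variance
    inspection" only at a sketch level.
    """
    gy, gx = np.gradient(img.astype(np.float64))
    def local_var(g):
        return uniform_filter(g * g, win) - uniform_filter(g, win) ** 2
    return np.minimum(local_var(gx), local_var(gy))

def detect_interest_points(img, win=7, thresh=None):
    """Threshold the response map; the 2-sigma rule is a stand-in heuristic."""
    r = gradient_variance_response(img, win)
    if thresh is None:
        thresh = r.mean() + 2.0 * r.std()  # assumed threshold, not the paper's
    ys, xs = np.nonzero(r > thresh)
    return list(zip(xs.tolist(), ys.tolist()))

def hierarchical_gradient_descriptor(patch, levels=(1, 2, 4), bins=8):
    """Concatenate orientation histograms over 1x1, 2x2, and 4x4 grids.

    Each cell's histogram is weighted by gradient magnitude and L2-normalized,
    giving a fixed-length vector ((1 + 4 + 16) * 8 = 168 values here) that can
    be compared between stereo frames with a simple distance measure.
    """
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                              # range (-pi, pi]
    idx = ((ang + np.pi) / (2.0 * np.pi) * bins).astype(int) % bins
    h, w = patch.shape
    desc = []
    for n in levels:                                      # coarse-to-fine grids
        for i in range(n):
            for j in range(n):
                sl = (slice(i * h // n, (i + 1) * h // n),
                      slice(j * w // n, (j + 1) * w // n))
                hist = np.bincount(idx[sl].ravel(),
                                   weights=mag[sl].ravel(),
                                   minlength=bins)
                desc.append(hist / (np.linalg.norm(hist) + 1e-8))
    return np.concatenate(desc)

Candidate points in the left and right images could then be matched by minimal Euclidean distance between their descriptor vectors; such a windowed, fixed-arithmetic pipeline is the kind of computation that maps naturally onto FPGA logic.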
Cite this article as:
H. Hagiwara, Y. Touma, K. Asami, and M. Komori, “FPGA-Based Stereo Vision System Using Gradient Feature Correspondence,” J. Robot. Mechatron., Vol.27 No.6, pp. 681-690, 2015.
References
[1] K. Asami, H. Hagiwara, and M. Komori, “Visual Navigation System Based on Evolutionary Computation on FPGA for Patrol Service Robot,” Proc. of the 1st IEEE Global Conf. on Consumer Electronics, pp. 169-172, Chiba, Japan, October 2-5, 2012.
[2] H. P. Moravec, “Towards Automatic Visual Obstacle Avoidance,” Proc. of the 5th Int. Joint Conf. on Artificial Intelligence, Massachusetts Institute of Technology, p. 584, 1977.
[3] C. Harris and M. Stephens, “A Combined Corner and Edge Detector,” Proc. of the 4th Alvey Vision Conf., Manchester, U.K., pp. 147-151, Aug. 1988.
[4] D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” Int. J. of Computer Vision, Vol.60, No.2, pp. 91-110, January 2004.
[5] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded Up Robust Features,” Proc. of the 9th European Conf. on Computer Vision, May 2006.
[6] M. Tomono, “3D Object Modeling and Segmentation Using Image Edge Points in Cluttered Environments,” J. of Robotics and Mechatronics, Vol.21, No.6, pp. 672-679, 2009.
[7] T. Suzuki, Y. Amano, T. Hashizume, and S. Suzuki, “3D Terrain Reconstruction by Small Unmanned Aerial Vehicle Using SIFT-Based Monocular SLAM,” J. of Robotics and Mechatronics, Vol.23, No.2, pp. 292-301, 2011.
[8] S. Aoyagi, A. Kohama, Y. Inaura, M. Suzuki, and T. Takahashi, “Image-Searching for Office Equipment Using Bag-of-Keypoints and AdaBoost,” J. of Robotics and Mechatronics, Vol.23, No.6, pp. 1080-1090, 2011.
[9] T. Tasaki, S. Tokura, T. Sonoura, F. Ozaki, and N. Matsuhira, “Obstacle Location Classification and Self-Localization by Using a Mobile Omnidirectional Camera Based on Tracked Floor Boundary Points and Tracked Scale-Rotation Invariant Feature Points,” J. of Robotics and Mechatronics, Vol.23, No.6, pp. 1012-1023, 2011.
[10] H.-H. Yu, H.-W. Hsieh, Y.-K. Tasi, Z.-H. Ou, Y.-S. Huang, and T. Fukuda, “Visual Localization for Mobile Robots Based on Composite Map,” J. of Robotics and Mechatronics, Vol.25, No.1, pp. 25-37, 2013.
[11] S. Se, D. G. Lowe, and J. J. Little, “Vision-Based Global Localization and Mapping for Mobile Robots,” IEEE Trans. on Robotics, Vol.21, No.3, pp. 364-375, 2005.
[12] K. Mikolajczyk and C. Schmid, “A Performance Evaluation of Local Descriptors,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.27, No.10, pp. 1615-1630, October 2005.
[13] H. Fujiyoshi, “Gradient-Based Feature Extraction: SIFT and HOG,” Information Processing Society of Japan Research Paper CVIM-160, pp. 211-224, 2007.
[14] T. Inoue, “A Study on Feature-Extraction Methods for Improvement of Image-Recognition Performance,” Pioneer R&D, Vol.22, pp. 57-62, 2013.
[15] H. Fujiyoshi and M. Ambai, “Gradient-Based Image Local Features,” J. of the Japan Society for Precision Engineering, Vol.77, No.12, pp. 1109-1116, 2011.
[16] H. Hagiwara, K. Asami, and M. Komori, “Real-Time Image Processing System by Using FPGA for Service Robots,” Proc. of the 1st IEEE Global Conf. on Consumer Electronics, pp. 178-181, Chiba, Japan, October 2-5, 2012.
[17] S. Simard, R. Beguenane, and J. G. Mailloux, “Performance Evaluation of Rotor Flux-Oriented Control on FPGA for Advanced AC Drives,” J. of Robotics and Mechatronics, Vol.21, No.1, pp. 113-120, 2009.
[18] S. Hadjitheophanous, C. Ttofis, A. S. Georghiades, and T. Theocharides, “Towards Hardware Stereoscopic 3D Reconstruction: A Real-Time FPGA Computation of the Disparity Map,” Proc. of the Conf. on Design, Automation and Test in Europe, 2010.
[19] L. Chen, H. Yang, T. Takaki, and I. Ishii, “Real-Time Optical Flow Estimation Using Multiple Frame-Straddling Intervals,” J. of Robotics and Mechatronics, Vol.24, No.4, pp. 686-698, 2012.
[20] A. Schmidt, M. Kraft, M. Fularz, and Z. Domagala, “The Comparison of Point Feature Detectors and Descriptors in the Context of Robot Navigation,” J. of Automation, Mobile Robotics & Intelligent Systems, Vol.7, No.1, 2013.
[21] S. Hirai, M. Zakoji, A. Masubuchi, and T. Tsuboi, “Realtime FPGA-Based Vision System,” J. of Robotics and Mechatronics, Vol.17, No.4, pp. 401-409, 2005.
[22] V. Bonato, E. Marques, and G. A. Constantinides, “A Parallel Hardware Architecture for Scale and Rotation Invariant Feature Detection,” IEEE Trans. on Circuits and Systems for Video Technology, Vol.18, No.12, pp. 1703-1712, 2008.
[23] S. Jin, J. Cho, X. Pham, K. M. Lee, S.-K. Park, M. Kim, and J. W. Jeon, “FPGA Design and Implementation of a Real-Time Stereo Vision System,” IEEE Trans. on Circuits and Systems for Video Technology, Vol.20, No.1, pp. 15-26, 2010.
[24] C. Ttofis, S. Hadjitheophanous, A. S. Georghiades, and T. Theocharides, “Edge-Directed Hardware Architecture for Real-Time Disparity Map Computation,” IEEE Trans. on Computers, Vol.62, No.4, pp. 690-704, 2013.
[25] Y. Kanazawa and K. Kanatani, “Detection of Feature Points for Computer Vision,” J. of the Institute of Electronics, Information, and Communication Engineers, Vol.87, No.12, pp. 1043-1048, 2004.
[26] E. Rosten and T. Drummond, “Machine Learning for High-Speed Corner Detection,” Proc. of the European Conf. on Computer Vision, pp. 430-443, 2006.
[27] M. Kraft, A. Schmidt, and A. Kasinski, “High-Speed Image Feature Detection Using FPGA Implementation of FAST Algorithm,” Proc. of the Int. Conf. on Computer Vision Theory and Applications (VISAPP), Vol.1, pp. 174-179, 2008.
[28] N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection,” Proc. of the IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR 2005), Vol.1, pp. 886-893, 2005.
[29] A. Gil, O. M. Mozos, M. Ballesta, and O. Reinoso, “A Comparative Evaluation of Interest Point Detectors and Local Descriptors for Visual SLAM,” Machine Vision and Applications, Vol.21, No.6, pp. 905-920, 2010.
[30] Y. Boykov and M.-P. Jolly, “Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images,” Proc. of the 8th IEEE Int. Conf. on Computer Vision (ICCV 2001), Vol.1, pp. 105-112, 2001.
[31] W. Tao, H. Jin, and Y. Zhang, “Color Image Segmentation Based on Mean Shift and Normalized Cuts,” IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol.37, No.5, pp. 1382-1389, 2007.
