
JACIII Vol.19 No.2 pp. 319-329
doi: 10.20965/jaciii.2015.p0319
(2015)

Paper:

Still Corresponding Points Extraction Using a Moving Monocular Camera with a Motion Sensor

Toshihiro Akamatsu, Fangyan Dong, and Kaoru Hirota

Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology
G3-49, 4259 Nagatsuta, Midori-ku, Yokohama 226-8502, Japan

Received: June 15, 2014
Accepted: December 28, 2014
Published: March 20, 2015
Keywords: 3D measurement, corresponding points classification, 6-axis motion sensor, moving monocular camera
Abstract
The still-corresponding-point extraction method proposed in this paper uses a moving monocular camera connected to a 6-axis motion sensor. It classifies corresponding points between two consecutive frames containing still/moving objects and selects the corresponding points appropriate for 3D measurement. Extraction experiments are carried out on two scenes of original computer graphics images. Results for scene 1 show an accuracy of 0.98, a precision of 0.96, and a recall of 1.00, and robustness against sensor noise is confirmed. Extraction experiments on real scenes show an accuracy of 0.86, a precision of 0.88, and a recall of 0.94. We plan to incorporate the proposed method into 3D measurement with real images containing still/moving objects and to apply it to obstacle avoidance for vehicles and to mobile robot vision systems.
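As a rough illustration only (not the authors' implementation), the following Python sketch classifies feature correspondences between two consecutive frames as still or moving by testing their consistency with the epipolar geometry implied by the sensor-measured camera motion (rotation R and translation t). The use of ORB features, the Sampson-distance test, the function names, and the threshold thresh are all assumptions made for this sketch; the paper's actual classification criterion may differ.

import numpy as np
import cv2

def skew(t):
    # Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v).
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def extract_still_matches(img1, img2, K, R, t, thresh=1.0):
    # ORB correspondences between the two consecutive frames.
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    # Fundamental matrix implied by the sensor-measured ego-motion:
    # E = [t]_x R, F = K^{-T} E K^{-1}.
    E = skew(t) @ R
    Kinv = np.linalg.inv(K)
    F = Kinv.T @ E @ Kinv

    still = []
    for m in matches:
        x1 = np.array([*kp1[m.queryIdx].pt, 1.0])
        x2 = np.array([*kp2[m.trainIdx].pt, 1.0])
        # Sampson distance to the epipolar constraint x2^T F x1 = 0.
        Fx1, Ftx2 = F @ x1, F.T @ x2
        err = (x2 @ F @ x1) ** 2 / (Fx1[0]**2 + Fx1[1]**2 + Ftx2[0]**2 + Ftx2[1]**2)
        if err < thresh:  # consistent with camera ego-motion: likely a still point
            still.append(m)
    return still

A small residual against the sensor-derived epipolar line means a point's apparent motion is explained by the camera's own motion, so it likely lies on a still object; large residuals indicate independently moving objects.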
Cite this article as:
T. Akamatsu, F. Dong, and K. Hirota, “Still Corresponding Points Extraction Using a Moving Monocular Camera with a Motion Sensor,” J. Adv. Comput. Intell. Intell. Inform., Vol.19 No.2, pp. 319-329, 2015.
