
JRM Vol.21, No.5, pp. 574-582, 2009
doi: 10.20965/jrm.2009.p0574

Paper:

Three-Dimensional Environment Model Construction from an Omnidirectional Image Sequence

Ryosuke Kawanishi, Atsushi Yamashita, and Toru Kaneko

Department of Mechanical Engineering, Shizuoka University, Shizuoka, Japan

Received:
March 18, 2009
Accepted:
June 1, 2009
Published:
October 20, 2009
Keywords:
omnidirectional image sequence, structure from motion, 3D environment model
Abstract

When mobile robots execute autonomous tasks, map information is important for path planning and self-localization. In unknown environments, mobile robots must generate their own environmental maps. This paper proposes three-dimensional (3D) environment modeling by a mobile robot. The model is generated from the results of 3D measurement and from texture information. To measure environmental objects efficiently, the robot uses an image sequence acquired by an omnidirectional camera with a wide field of view. The measurement method is based on structure from motion. Triangular meshes are constructed from the 3D measurement data, and the 3D model is completed by mapping textures onto the mesh. Experimental results verify the effectiveness of the proposed method.
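The core of the measurement step described above is recovering a 3D point from two corresponding viewing rays in an omnidirectional image sequence, once structure from motion has estimated the camera motion. The sketch below illustrates this with the standard midpoint triangulation technique; it is an illustrative assumption, not necessarily the exact formulation used in the paper, and all function and variable names are hypothetical.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation of a 3D point from two viewing rays.

    c1, c2 : camera centers (3-vectors) from the estimated motion.
    d1, d2 : unit ray directions (bearing vectors), e.g. back-projected
             from omnidirectional image features.

    Solves for ray parameters t1, t2 minimizing the distance between
    the points c1 + t1*d1 and c2 + t2*d2, then returns their midpoint.
    """
    # Closest-approach condition: t1*d1 - t2*d2 = c2 - c1 (least squares).
    A = np.column_stack((d1, -d2))
    b = c2 - c1
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    p1 = c1 + t[0] * d1  # closest point on ray 1
    p2 = c2 + t[1] * d2  # closest point on ray 2
    return 0.5 * (p1 + p2)

if __name__ == "__main__":
    # Synthetic check: two cameras observing a known point.
    point = np.array([1.0, 2.0, 5.0])
    c1 = np.zeros(3)
    c2 = np.array([1.0, 0.0, 0.0])
    d1 = point / np.linalg.norm(point)
    d2 = (point - c2) / np.linalg.norm(point - c2)
    print(triangulate_midpoint(c1, d1, c2, d2))
```

With noise-free rays the two rays intersect exactly and the midpoint coincides with the true point; with tracking noise the midpoint gives a sensible compromise between the two rays. Repeating this for every tracked feature yields the point cloud from which the triangular mesh is built.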

Cite this article as:
R. Kawanishi, A. Yamashita, and T. Kaneko, “Three-Dimensional Environment Model Construction from an Omnidirectional Image Sequence,” J. Robot. Mechatron., Vol.21, No.5, pp. 574-582, 2009.
