
JACIII Vol.24 No.7 pp. 864-871
doi: 10.20965/jaciii.2020.p0864
(2020)

Paper:

Indoor Key Point Reconstruction Based on Laser Illumination and Omnidirectional Vision

Yang Qi and Yuan Li

School of Automation, Beijing Institute of Technology
5 South Zhongguancun Street, Haidian District, Beijing 100081, China

Corresponding author

Received:
October 19, 2020
Accepted:
October 27, 2020
Published:
December 20, 2020
Keywords:
omnidirectional vision, structured light, indoor reconstruction
Abstract

Efficient and precise three-dimensional (3D) measurement is an important issue in the field of machine vision. In this paper, a measurement method for indoor key points is proposed that combines structured light with an omnidirectional vision system, achieving both a wide field of view and accurate results. The process of obtaining indoor key points is as follows. Firstly, through an analysis of the system imaging model, an omnidirectional vision system based on structured light is constructed. Secondly, a fully convolutional neural network is used to estimate the scene layout from the dataset. Then, based on the geometric relationship between a scene point and its reference point on the structured light, a method for obtaining the 3D coordinates of points not illuminated by the structured light is presented. Finally, by combining the fully convolutional network model with the structured-light 3D vision model, a 3D mathematical representation of the key points of the indoor scene frame is completed. The experimental results show that the proposed method can accurately reconstruct indoor scenes, with a measurement error of about 2%.

A measurement method for indoor key points is proposed using structured light and an omnidirectional vision system, achieving a wide field of view and accurate results.
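The geometric core of such a system is the triangulation between a camera viewing ray and the calibrated laser plane. As a rough illustration of that step only (not the authors' implementation), the sketch below back-projects a pixel through a unified-sphere omnidirectional camera model and intersects the resulting ray with a laser plane; the intrinsics (fx, fy, cx, cy, xi) and the plane parameters (plane_n, plane_d) are hypothetical placeholders standing in for the calibrated models described in the paper.

```python
import numpy as np

def backproject_ray(u, v, fx, fy, cx, cy, xi):
    """Back-project a pixel to a unit viewing ray using the unified
    sphere (omnidirectional) camera model with mirror parameter xi.
    All intrinsics here are hypothetical calibration values."""
    # Normalized image coordinates
    x = (u - cx) / fx
    y = (v - cy) / fy
    # Lift the normalized point back onto the unit sphere
    # (standard inversion of the unified projection model)
    r2 = x * x + y * y
    factor = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (1.0 + r2)
    ray = np.array([factor * x, factor * y, factor - xi])
    return ray / np.linalg.norm(ray)

def intersect_laser_plane(ray, plane_n, plane_d):
    """Intersect a viewing ray through the origin with the laser
    plane n.X + d = 0, yielding the 3D point on the laser stripe."""
    denom = plane_n @ ray
    if abs(denom) < 1e-9:
        return None  # ray is parallel to the laser plane
    t = -plane_d / denom
    return t * ray if t > 0 else None  # keep only points in front

# Example with made-up calibration: a laser plane one meter from the camera
point = intersect_laser_plane(
    backproject_ray(400.0, 520.0, fx=300.0, fy=300.0, cx=320.0, cy=240.0, xi=0.9),
    plane_n=np.array([0.0, 1.0, 0.0]), plane_d=-1.0)
```

Points struck by the laser acquire metric depth this way, which is what allows them to serve as reference points when the 3D coordinates of nearby non-illuminated scene points are inferred.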


Cite this article as:
Y. Qi and Y. Li, “Indoor Key Point Reconstruction Based on Laser Illumination and Omnidirectional Vision,” J. Adv. Comput. Intell. Intell. Inform., Vol.24 No.7, pp. 864-871, 2020.
