
IJAT Vol.15 No.3, pp. 258-267 (2021)
doi: 10.20965/ijat.2021.p0258

Paper:

Extraction of Guardrails from MMS Data Using Convolutional Neural Network

Hiroki Matsumoto, Yuma Mori, and Hiroshi Masuda

The University of Electro-Communications
1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan


Received: October 26, 2020
Accepted: February 10, 2021
Published: May 5, 2021

Keywords: point processing, guardrail, mobile mapping system, convolutional neural network, terrestrial laser scanner
Abstract

Mobile mapping systems can capture point clouds and digital images of roadside objects. Such data are useful for maintenance, asset management, and 3D map creation. In this paper, we discuss methods for extracting guardrails that separate roadways from walkways. Since guardrails in Japan come in a wide variety of shape patterns, flexible extraction methods are required. We propose a new extraction method based on point processing and a convolutional neural network (CNN). In our method, point clouds and images are segmented into small fragments, and their features are extracted using CNNs for images and for point clouds. The image and point-cloud features are then combined and used to determine whether each fragment is a guardrail. Our experiments show that the proposed method can extract guardrails from point clouds with a high success rate.
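The pipeline described in the abstract (segment into fragments, extract features with an image CNN and a point-cloud CNN, fuse the features, then classify) can be sketched in outline as follows. This is a minimal illustrative sketch, not the authors' implementation: the pooled-statistics "extractors" below are hypothetical stand-ins for the trained networks, and the function names, feature sizes, and logistic classifier are assumptions for illustration only.

```python
import numpy as np

def image_features(patch):
    # Hypothetical stand-in for the paper's image CNN:
    # pool pixel statistics into a fixed-length feature vector.
    return np.array([patch.mean(), patch.std(), patch.max(), patch.min()])

def point_features(points):
    # Hypothetical stand-in for the point-cloud CNN:
    # center the fragment, then reduce per-point coordinates with a
    # symmetric max-pooling (the order-invariant reduction used in
    # PointNet-style networks). Yields a 3-D vector.
    centered = points - points.mean(axis=0)
    return centered.max(axis=0)

def classify_fragment(patch, points, w, b):
    # Late fusion: concatenate both feature vectors, then score with a
    # logistic classifier. Returns True when the fragment is scored
    # as "guardrail". Weights w and bias b would come from training.
    fused = np.concatenate([image_features(patch), point_features(points)])
    score = 1.0 / (1.0 + np.exp(-(w @ fused + b)))
    return score > 0.5
```

The key design point carried over from the paper is the fusion step: neither modality decides alone; the classifier sees the concatenated image and point-cloud features for each fragment.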

Cite this article as:
H. Matsumoto, Y. Mori, and H. Masuda, “Extraction of Guardrails from MMS Data Using Convolutional Neural Network,” Int. J. Automation Technol., Vol.15 No.3, pp. 258-267, 2021.
