
JACIII Vol.26 No.6 pp. 1004-1012 (2022)
doi: 10.20965/jaciii.2022.p1004

Paper:

Semantic Segmentation of Substation Site Cloud Based on Seg-PointNet

Wei Gao*,† and Lixia Zhang**

*Internet Department, State Grid Shanxi Electric Power Company
3 Xieyuan Road, Changfeng Business District, Taiyuan, Shanxi 030021, China

**Information and Communication Branch, State Grid Shanxi Electric Power Company
3 Xieyuan Road, Changfeng Business District, Taiyuan, Shanxi 030021, China

†Corresponding author

Received: November 30, 2021
Accepted: July 19, 2022
Published: November 20, 2022
Keywords: Seg-PointNet, RES-MLP module, multi-scale feature pyramid
Abstract

3D point cloud semantic segmentation is widely used in industrial scenes and has attracted continuous attention as a critical technology for intelligent robot scene understanding. However, extracting visual semantics in complex environments remains a challenge. We propose the Seg-PointNet model, based on a multi-layer residual structure and a feature pyramid, for the LiDAR point cloud semantic segmentation task in complex substation scenes. The model builds on the PointNet network and introduces a multi-scale residual structure. A residual-structure multilayer perceptron (RES-MLP) module is proposed to fully exploit features at different levels and improve the representation of complex features. Moreover, a 3D point cloud feature pyramid module is proposed to characterize the semantic features of the substation scene. We tested and verified the Seg-PointNet model on a self-built substation cloud point (SCP) dataset. The results show that the proposed Seg-PointNet model effectively improves point cloud segmentation accuracy, achieving an accuracy of 89.23% and a mean intersection over union (mIoU) of 63.57%. This indicates that the model can be applied to substation scenarios and provide technical support for intelligent robots in complex substation environments.
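
The abstract describes two architectural ideas: a residual shared-MLP (RES-MLP) block inside a PointNet-style backbone, and a multi-scale feature pyramid over the resulting per-point features. The paper does not publish code, so the PyTorch snippet below is only a minimal sketch of the first idea; the class name ResSharedMLP, the layer widths, and the projection shortcut are assumptions for illustration, not the authors' implementation.

# Minimal sketch of a residual shared-MLP block for per-point features.
# Layer sizes and the 1x1 projection shortcut are assumed, not taken from the paper.
import torch
import torch.nn as nn


class ResSharedMLP(nn.Module):
    """Shared MLP over per-point features (1x1 conv) with a residual skip."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(in_channels, out_channels, kernel_size=1),
            nn.BatchNorm1d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(out_channels, out_channels, kernel_size=1),
            nn.BatchNorm1d(out_channels),
        )
        # 1x1 projection so the skip connection matches the output width.
        self.shortcut = (
            nn.Identity()
            if in_channels == out_channels
            else nn.Conv1d(in_channels, out_channels, kernel_size=1)
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, num_points), as in PointNet-style shared MLPs.
        return self.act(self.body(x) + self.shortcut(x))


if __name__ == "__main__":
    points = torch.randn(2, 3, 1024)           # batch of 1024 xyz points
    feats = ResSharedMLP(3, 64)(points)        # per-point 64-D features
    global_feat = torch.max(feats, dim=2).values  # (2, 64) global descriptor
    print(feats.shape, global_feat.shape)

The residual sum follows the standard identity-shortcut pattern of residual networks; stacking several such blocks at different widths and pooling their outputs at multiple resolutions would yield the multi-scale features that a pyramid-style module could then fuse for segmentation.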

Cite this article as:
W. Gao and L. Zhang, “Semantic Segmentation of Substation Site Cloud Based on Seg-PointNet,” J. Adv. Comput. Intell. Intell. Inform., Vol.26 No.6, pp. 1004-1012, 2022.
