
IJAT Vol.18 No.5, pp. 603-612, 2024
doi: 10.20965/ijat.2024.p0603

Research Paper:

Ceiling Equipment Extraction from TLS Point Clouds for Reflected Ceiling Plan Creation

Riho Akiyama*, Hiroaki Date*,†, Satoshi Kanai*, and Kazushige Yasutake**

*Hokkaido University
Kita 14, Nishi 9, Kita-ku, Sapporo, Hokkaido 060-0814, Japan

†Corresponding author

**Kyudenko Corporation
Fukuoka, Japan

Received: March 2, 2024
Accepted: May 8, 2024
Published: September 5, 2024
Keywords: point clouds, terrestrial laser scanning, ceiling equipment, reflected ceiling plan, footprint
Abstract

The reflected ceiling plan (RCP) is a two-dimensional drawing of facilities with ceiling equipment, such as lighting, fire alarms, sprinklers, and inspection holes. RCPs are often created from existing facilities for safety standard verification, renovation, and inspection. However, creating RCPs of large-scale facilities requires significant time and effort. In this study, a method for extracting ceiling equipment information from point clouds acquired using a terrestrial laser scanner (TLS) was developed for RCP creation. The proposed method is based on footprint detection for ceiling equipment and involves three steps. First, circular and quadrilateral footprints of the ceiling equipment are detected from the point cloud of each scan. Next, the footprints from multiple scans are merged and clustered using their dimensions and point distributions. Finally, equipment labels are interactively assigned to each cluster. The performance of the proposed method was evaluated on TLS point clouds of four facilities. The experimental results showed that the detection rate of footprints (recall) exceeded 90% within a scan distance of 6 m, and the labeling accuracy was also more than 90%. For 79 scans (point clouds) of a facility, extracting 80% of the equipment information for RCP creation took approximately 25 min, which corresponds to 2% of the manual RCP creation time for the facility. This demonstrates that the proposed method enables efficient RCP creation.
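The circular-footprint detection step described above could, for illustration, be approached with a RANSAC-style circle fit on equipment boundary points projected onto the ceiling plane. The sketch below is not the authors' implementation; the function name, tolerances, and synthetic data are assumptions made purely for demonstration.

```python
import numpy as np

def fit_circle_ransac(points, n_iters=200, tol=0.01, seed=0):
    """RANSAC circle fit on 2D points: repeatedly fit the circumcircle of
    3 randomly sampled points and keep the model with the most inliers."""
    rng = np.random.default_rng(seed)
    best_center, best_radius, best_count = None, None, -1
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        # Circumcenter c satisfies 2(p2-p1).c = |p2|^2-|p1|^2 (and same for p3).
        A = 2.0 * np.array([p2 - p1, p3 - p1])
        b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
        try:
            center = np.linalg.solve(A, b)
        except np.linalg.LinAlgError:
            continue  # collinear sample, no unique circle
        radius = np.linalg.norm(p1 - center)
        inliers = np.abs(np.linalg.norm(points - center, axis=1) - radius) < tol
        if inliers.sum() > best_count:
            best_center, best_radius, best_count = center, radius, inliers.sum()
    return best_center, best_radius

# Synthetic "footprint": noisy boundary points of a circular ceiling fixture
# (center (1.0, 2.0) m, radius 0.3 m) projected onto the ceiling plane.
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 120)
pts = np.c_[1.0 + 0.3 * np.cos(theta), 2.0 + 0.3 * np.sin(theta)]
pts += rng.normal(0.0, 0.002, pts.shape)
center, radius = fit_circle_ransac(pts)
```

Sampling only three points per hypothesis keeps each iteration cheap, and the inlier count makes the fit robust to boundary points from neighboring equipment or scan noise.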

Cite this article as:
R. Akiyama, H. Date, S. Kanai, and K. Yasutake, “Ceiling Equipment Extraction from TLS Point Clouds for Reflected Ceiling Plan Creation,” Int. J. Automation Technol., Vol.18 No.5, pp. 603-612, 2024.
