
IJAT Vol.20 No.2, pp. 129-136 (2026)

Research Paper:

Monocular 3D Measurement in Featureless Elongated Structures Using Light-Section Method and Active Laser-Based SfM

Hiroshi Higuchi, Qi An, and Atsushi Yamashita

The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan


Received: September 24, 2025
Accepted: November 17, 2025
Published: March 5, 2026

Keywords: 3D measurement, light-section method, structure from motion, scale estimation, featureless environment
Abstract

When measuring large, elongated structures such as tunnels, the local three-dimensional (3D) shapes measured at multiple points with tools such as laser scanners must be integrated into a single model. However, because tunnel interiors often have smooth, texture-less surfaces, estimating the relative pose between measurement points is difficult. This paper proposes a lightweight 3D measurement method using a single camera and laser projection. The system measures cross-sectional shapes with the light-section method and estimates camera pose from the projected laser features. By introducing a scale optimization approach that minimizes the nearest-neighbor distances between point clouds, accurate global 3D reconstruction is achieved without relying on external sensors. The proposed method enables efficient and precise measurement even in featureless environments.
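The scale optimization described in the abstract can be illustrated with a minimal sketch: given two overlapping point clouds where one has an unknown scale (the usual ambiguity of monocular SfM), search for the scale factor that minimizes the mean nearest-neighbor distance between them. This is not the authors' implementation; the function names, the brute-force nearest-neighbor search, and the grid search over candidate scales are all illustrative assumptions.

```python
import numpy as np

def mean_nn_distance(src, dst):
    # Mean distance from each point in src to its nearest neighbor in dst
    # (brute-force; a k-d tree would be used for large clouds).
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    return d.min(axis=1).mean()

def estimate_scale(cloud_ref, cloud_scaled, candidates):
    # Pick the scale s that minimizes the mean nearest-neighbor distance
    # between s * cloud_scaled and the metric reference cloud.
    costs = [mean_nn_distance(s * cloud_scaled, cloud_ref) for s in candidates]
    return candidates[int(np.argmin(costs))]

# Toy example: cloud_scaled is the reference cloud shrunk by 0.5,
# so the true scale factor is 2.0.
rng = np.random.default_rng(0)
cloud_ref = rng.uniform(-1.0, 1.0, size=(200, 3))
cloud_scaled = 0.5 * cloud_ref
candidates = np.linspace(0.5, 4.0, 141)
print(estimate_scale(cloud_ref, cloud_scaled, candidates))  # ≈ 2.0
```

In practice the reference cloud would come from a light-section cross-section measurement (metric via laser-camera triangulation) and the scaled cloud from monocular SfM, with the grid search replaced by a continuous optimizer.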

Cite this article as:
H. Higuchi, Q. An, and A. Yamashita, “Monocular 3D Measurement in Featureless Elongated Structures Using Light-Section Method and Active Laser-Based SfM,” Int. J. Automation Technol., Vol.20 No.2, pp. 129-136, 2026.
References
  [1] C. Boje, A. Guerriero, S. Kubicki, and Y. Rezgui, “Towards a semantic construction digital twin: Directions for future research,” Automation in Construction, Vol.114, Article No.103179, 2020. https://doi.org/10.1016/j.autcon.2020.103179
  [2] J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-scale direct monocular SLAM,” European Conf. on Computer Vision, pp. 834-849, 2014. https://doi.org/10.1007/978-3-319-10605-2_54
  [3] B. Zheng, T. Oishi, and K. Ikeuchi, “Rail sensor: A mobile lidar system for 3D archiving the bas-reliefs in Angkor Wat,” IPSJ Trans. on Computer Vision and Applications, Vol.7, pp. 59-63, 2015. https://doi.org/10.2197/ipsjtcva.7.59
  [4] A. Duda, J. Schwendner, and C. Gaudig, “SRSL: Monocular self-referenced line structured light,” Proc. of the 2015 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 717-722, 2015. https://doi.org/10.1109/IROS.2015.7353451
  [5] R. Kaijaluoto and A. Hyyppä, “Precise indoor localization for mobile laser scanner,” The Int. Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol.XL-4/W5, pp. 1-6, 2015. https://doi.org/10.5194/isprsarchives-XL-4-W5-1-2015
  [6] C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. M. Montiel, and J. D. Tardós, “ORB-SLAM3: An accurate open-source library for visual, visual-inertial and multi-map SLAM,” IEEE Trans. on Robotics, Vol.37, No.6, pp. 1874-1890, 2021. https://doi.org/10.1109/TRO.2021.3075644
  [7] C. Godard, O. Mac Aodha, M. Firman, and G. Brostow, “Digging into self-supervised monocular depth estimation,” Proc. of the IEEE/CVF Int. Conf. on Computer Vision, pp. 3827-3837, 2019. https://doi.org/10.1109/ICCV.2019.00393
  [8] R. Ranftl, K. Lasinger, D. Hafner, K. Schindler, and V. Koltun, “Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.44, No.3, pp. 1623-1637, 2022. https://doi.org/10.1109/TPAMI.2020.3019967
  [9] J. Jia and Y. Li, “Deep learning for structural health monitoring: Data, algorithms, applications, challenges, and trends,” Sensors, Vol.23, No.21, Article No.8824, 2023. https://doi.org/10.3390/s23218824
  [10] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, “NeRF: Representing scenes as neural radiance fields for view synthesis,” Communications of the ACM, Vol.65, No.1, pp. 99-106, 2021. https://doi.org/10.1145/3503250
  [11] B. Kerbl, G. Kopanas, T. Leimkühler, and G. Drettakis, “3D Gaussian splatting for real-time radiance field rendering,” ACM Trans. on Graphics, Vol.42, No.4, Article No.139, 2023. https://doi.org/10.1145/3592433
  [12] J. Geng, “Structured-light 3D surface imaging: A tutorial,” Advances in Optics and Photonics, Vol.3, No.2, pp. 128-160, 2011. https://doi.org/10.1364/AOP.3.000128
  [13] D. Zhan, L. Yu, J. Xiao, and T. Chen, “Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels,” Sensors, Vol.15, No.4, pp. 8664-8684, 2015. https://doi.org/10.3390/s150408664
  [14] X. Yang and G. Jiang, “A practical 3D reconstruction method for weak texture scenes,” Remote Sensing, Vol.13, No.16, Article No.3103, 2021. https://doi.org/10.3390/rs13163103
  [15] C. Yao, S. He, H. Chen, X. Zhang, and Z. Wang, “Pose estimation of nonoverlapping FOV cameras for shield tunnel convergence measurement,” Measurement, Vol.242, Article No.116101, 2025. https://doi.org/10.1016/j.measurement.2024.116101
  [16] J. Wang and Z. Zhou, “The 3D reconstruction method of a line-structured light vision sensor based on composite depth images,” Measurement Science and Technology, Vol.32, No.7, Article No.075101, 2021. https://doi.org/10.1088/1361-6501/abcf64
  [17] Y. Xue, S. Zhang, M. Zhou, and H. Zhu, “Novel SfM-DLT method for metro tunnel 3D reconstruction and visualization,” Underground Space, Vol.6, No.2, pp. 134-141, 2021. https://doi.org/10.1016/j.undsp.2020.01.002
  [18] T. Igaue, T. Haymizu, H. Higuchi, M. Ikura, K. Yoshida, S. Yamanaka, T. Yamaguchi, H. Asama, and A. Yamashita, “Cooperative 3D tunnel measurement based on 2D–3D registration of omnidirectional laser light,” J. of Field Robotics, Vol.40, No.8, pp. 2042-2056, 2023. https://doi.org/10.1002/rob.22241
  [19] M. Janiszewski, M. Torkan, L. Uotinen, and M. Rinne, “Rapid photogrammetry with a 360-degree camera for tunnel mapping,” Remote Sensing, Vol.14, No.21, Article No.5494, 2022. https://doi.org/10.3390/rs14215494
  [20] Y. Pan, X. Zhong, L. Wiesmann, T. Posewsky, J. Behley, and C. Stachniss, “PIN-SLAM: LiDAR SLAM using a point-based implicit neural representation for achieving global map consistency,” IEEE Trans. on Robotics, Vol.40, pp. 4045-4064, 2024. https://doi.org/10.1109/TRO.2024.3422055

