
IJAT Vol.19 No.5, pp. 837-850 (2025)
doi: 10.20965/ijat.2025.p0837

Research Paper:

Algorithm and Marker Development for 6D Motion Measurement Using a Single Camera

Haochen Huang and Daisuke Kono

Department of Micro Engineering, Kyoto University
C3, Kyotodaigaku-Katsura, Nishikyo-ku, Kyoto, Kyoto 615-8540, Japan

Corresponding author

Received:
November 29, 2024
Accepted:
March 5, 2025
Published:
September 5, 2025
Keywords:
composite marker, camera extrinsic calibration, 6D motion measurement, single vision
Abstract

This study developed a novel composite marker system and an associated six-dimensional (6D) motion measurement algorithm for complex motions using a single camera. The composite marker, consisting of three spherical markers arranged in an equilateral triangle, provided a stable reference for achieving sub-pixel accuracy in motion measurement. The proposed three-dimensional (3D) measurement algorithm determined the 3D coordinates of the composite marker from a single image, reducing complexity and cost compared with traditional multi-camera systems. This study also introduced an automated intensity-distribution fitting method to precisely determine the visual center of each marker, enabling accurate evaluation of the deviation between the optical and physical centers of each individual marker within the composite marker. These measurements support accurate marker-based camera extrinsic calibration. The proposed method achieved a static resolution between 0.05 and 0.2 mm, depending on the direction of movement, with further improvement possible through low-pass filtering. Motion experiments covered both circular-trajectory tracking and complex 6D motion of the composite marker. The circular trajectory included depth changes relative to the camera, and the deviation from the circular trajectory was less than 0.23 mm. For complex 6D motion, the positional deviation was less than 1.5 mm, and the estimated normal vector deviated from the actual normal vector by 4°. Experimental results confirmed the effectiveness of the method for capturing 6D motion, showing reliable performance in tracking both depth and tangential movements.
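The paper's full algorithm is not reproduced on this page. As a minimal illustrative sketch of two ingredients the abstract mentions — sub-pixel visual-center estimation from a marker's intensity distribution, and recovery of the marker plane's normal vector from the three sphere centers — the following assumes a synthetic Gaussian intensity model and hypothetical marker coordinates; it uses an intensity-weighted centroid as a simple stand-in for the paper's fitting method, not the authors' implementation:

```python
import numpy as np

def subpixel_center(patch):
    """Estimate the visual center of a bright marker blob with
    sub-pixel accuracy via an intensity-weighted centroid
    (a simplified stand-in for intensity-distribution fitting)."""
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    w = patch / patch.sum()
    return float((xs * w).sum()), float((ys * w).sum())

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three marker centers."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

# Synthetic marker image: Gaussian blob centered at (12.3, 8.7) px.
ys, xs = np.mgrid[0:24, 0:24]
img = np.exp(-((xs - 12.3) ** 2 + (ys - 8.7) ** 2) / (2 * 2.0 ** 2))
cx, cy = subpixel_center(img)  # recovers the center to sub-pixel accuracy

# Equilateral-triangle composite marker (hypothetical coordinates, mm).
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([50.0, 0.0, 0.0])
p3 = np.array([25.0, 50.0 * np.sqrt(3) / 2, 0.0])
n = plane_normal(p1, p2, p3)  # points along +z for this planar layout
```

Once the three centers are located in 3D, the marker's orientation follows from the plane normal as above, which is how an orientation offset against a known reference normal (the 4° figure in the abstract) can be evaluated.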

Cite this article as:
H. Huang and D. Kono, “Algorithm and Marker Development for 6D Motion Measurement Using a Single Camera,” Int. J. Automation Technol., Vol.19 No.5, pp. 837-850, 2025.
