
IJAT Vol.19 No.4, pp. 554-565 (2025)
doi: 10.20965/ijat.2025.p0554

Research Paper:

Automatic Viewpoint Selection for Teleoperation Assistance in Unmanned Environments Using Rail-Mounted Observation Robots

Zixuan Liu, Shinsuke Nakashima, Ren Komatsu, Nobuto Matsuhira, Hajime Asama, Qi An, and Atsushi Yamashita

The University of Tokyo
5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8563, Japan

Corresponding author

Received: November 30, 2024
Accepted: February 13, 2025
Published: July 5, 2025
Keywords: teleoperation, robot vision, viewpoint selection, unmanned environment, nuclear decommissioning
Abstract

In irradiated environments that are inaccessible to human workers, operations are often conducted via teleoperation. Consequently, robot operators must maintain continuous situational awareness of a previously unknown working environment, in which visual information on the task targets and robot manipulators is of utmost importance. The proposed method employs rail-mounted observation robots to position easily replaceable cameras capable of long-term deployment in such environments. To reduce the cognitive load on teleoperators, the automatic viewpoint selection system eliminates the need for operators to directly control the observation robots. This research presents a method that uses a single rail-mounted observation robot to gather information on an unknown environment and automatically determine an optimal viewpoint. A key contribution of this study is the viewpoint presentation system, which adapts to occlusions caused by the robots and adjusts the camera position accordingly. The proposed method was validated through computer simulation using a hybrid model consisting of a static environment and a dynamic robot arm that moves within the environment and may obstruct views. Furthermore, the feasibility of the approach was demonstrated in a real-world experiment involving a robot arm performing a teleoperation task.
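To make the selection principle in the abstract concrete, the following is a minimal Python sketch of occlusion-aware viewpoint selection along a rail. It is not the authors' implementation: the candidate poses, the sphere-based occlusion test, and the weights w_dist and w_occ are all illustrative assumptions, whereas the paper's actual method first gathers information on the unknown environment with the observation robot.

    # Hypothetical sketch of occlusion-aware viewpoint selection along a rail.
    # All names, weights, and geometry are illustrative assumptions, not the
    # implementation described in the paper.
    import math
    from dataclasses import dataclass

    @dataclass
    class Pose:
        s: float   # position along the rail [m]
        x: float   # camera position in the world frame
        y: float
        z: float

    def segment_hits_sphere(p, q, center, radius):
        """True if the line segment p-q passes within `radius` of `center`."""
        v = [q[i] - p[i] for i in range(3)]
        w = [center[i] - p[i] for i in range(3)]
        vv = sum(a * a for a in v)
        t = 0.0 if vv == 0 else max(0.0, min(1.0, sum(a * b for a, b in zip(w, v)) / vv))
        d = [w[i] - t * v[i] for i in range(3)]
        return sum(a * a for a in d) <= radius * radius

    def view_quality(pose, target, occluders, w_dist=1.0, w_occ=10.0):
        """Score a candidate pose: prefer close, unoccluded views of the target."""
        cam = (pose.x, pose.y, pose.z)
        dist = math.dist(cam, target)
        # Crude occlusion test: any occluder sphere (e.g., an arm link)
        # intersecting the camera-to-target line of sight incurs a penalty.
        occluded = any(segment_hits_sphere(cam, target, c, r) for c, r in occluders)
        return -w_dist * dist - (w_occ if occluded else 0.0)

    def select_viewpoint(candidates, target, occluders):
        """Return the candidate pose with the best visibility score."""
        return max(candidates, key=lambda p: view_quality(p, target, occluders))

    # Usage: 20 candidate poses along a straight rail, the arm approximated
    # by one bounding sphere, and the task target at a fixed point.
    rail = [Pose(s=0.1 * i, x=0.1 * i, y=0.0, z=2.0) for i in range(20)]
    arm = [((0.8, 0.2, 1.0), 0.15)]          # (center, radius) spheres
    best = select_viewpoint(rail, target=(1.0, 0.5, 0.5), occluders=arm)
    print(f"best viewpoint at s = {best.s:.2f} m")

In this toy setting, re-running select_viewpoint whenever the arm moves and blocks the current line of sight yields an adjusted viewpoint, mirroring the adaptation behavior the abstract describes.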

Cite this article as:
Z. Liu, S. Nakashima, R. Komatsu, N. Matsuhira, H. Asama, Q. An, and A. Yamashita, “Automatic Viewpoint Selection for Teleoperation Assistance in Unmanned Environments Using Rail-Mounted Observation Robots,” Int. J. Automation Technol., Vol.19 No.4, pp. 554-565, 2025.
