
JRM Vol.28 No.2 pp. 173-184 (2016)
doi: 10.20965/jrm.2016.p0173

Paper:

Research on Superimposed Terrain Model for Teleoperation Work Efficiency

Takanobu Tanimoto*, Ryo Fukano**, Kei Shinohara*, Keita Kurashiki*, Daisuke Kondo*, and Hiroshi Yoshinada*

*Graduate School of Engineering, Osaka University
2-8 Yamada-oka, Suita, Osaka 565-0871, Japan

**Komatsu Ltd.
2-3-6 Akasaka, Minato, Tokyo 107-8414, Japan

Received: October 20, 2015
Accepted: December 22, 2015
Published: April 20, 2016
Keywords: teleoperation, augmented reality, digital terrain model, hydraulic excavator, unmanned construction
Abstract
In recent years, unmanned construction based on the teleoperation of construction equipment has increasingly been used at disaster sites and in mines. However, teleoperation relies on 2D camera images, and the resulting lack of depth perception makes it considerably less efficient than on-board operation. Previous studies employed multi-viewpoint images or binocular stereo, but these approaches have their own drawbacks: efficiency drops because the operator must shift his or her line of sight to judge distances, and binocular stereo causes eye fatigue. The present study therefore aims to improve the work efficiency of teleoperation by superimposing a 3D model of the terrain on the on-board operator's view image. The surrounding terrain is measured by a depth image sensor and represented as a digital terrain model that is generated and updated in real time. The terrain model is transformed into the on-board operator's view, on which an artificial shadow of the bucket tip and an evenly spaced grid projected onto the ground surface are superimposed. This allows the operator to visually estimate the bucket tip position from the artificial shadow, and the distance between the excavation point and the bucket tip from the terrain grid. A bucket-tip positioning experiment was conducted by teleoperating a miniature excavator with the superimposed terrain model display. The results showed that the standard deviations of the positioning errors obtained with the superimposed display were at least 30% lower than those obtained without it, and approximately equal to those obtained with binocular stereo. We thus demonstrated the effectiveness of the superimposed display in improving work efficiency in teleoperation.
Superimposed terrain model in operator's view image
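The pipeline described in the abstract can be illustrated with a short sketch. The following Python code is illustrative only and is not the authors' implementation: it assumes a pinhole camera with known intrinsics K and pose (R, t) relative to the world frame, uses a simple 2.5D height map as the digital terrain model, and all names (HeightMap, terrain_grid_pixels, bucket_shadow_pixel) and parameters are hypothetical.

```python
# Minimal sketch (not the authors' code): keep a 2.5D height-map terrain model
# updated from depth-sensor points, then project an evenly spaced terrain grid
# and an "artificial shadow" of the bucket tip into the operator's view image
# with a pinhole camera model. All names and parameters are illustrative.
import numpy as np

CELL = 0.05          # assumed grid resolution [m]
GRID = 200           # height map covers GRID x GRID cells

class HeightMap:
    """2.5D digital terrain model updated in real time from depth points."""
    def __init__(self):
        self.z = np.zeros((GRID, GRID))

    def update(self, points_world):
        # points_world: (N, 3) depth points already transformed to the world frame
        ix = np.clip((points_world[:, 0] / CELL).astype(int) + GRID // 2, 0, GRID - 1)
        iy = np.clip((points_world[:, 1] / CELL).astype(int) + GRID // 2, 0, GRID - 1)
        self.z[iy, ix] = points_world[:, 2]   # keep the latest height per cell

    def height_at(self, x, y):
        ix = int(np.clip(x / CELL + GRID // 2, 0, GRID - 1))
        iy = int(np.clip(y / CELL + GRID // 2, 0, GRID - 1))
        return self.z[iy, ix]

def project(points_world, K, R, t):
    """Pinhole projection of world points into the operator-view image."""
    pc = R @ points_world.T + t.reshape(3, 1)     # world -> camera frame
    uv = K @ pc
    return (uv[:2] / uv[2]).T                     # (N, 2) pixel coordinates

def terrain_grid_pixels(hmap, K, R, t, spacing=0.5, extent=3.0):
    """Evenly spaced grid draped on the terrain surface, as image points."""
    xs = np.arange(-extent, extent + 1e-9, spacing)
    pts = np.array([[x, y, hmap.height_at(x, y)] for x in xs for y in xs])
    return project(pts, K, R, t)

def bucket_shadow_pixel(hmap, bucket_tip, K, R, t):
    """'Artificial shadow': the bucket tip dropped vertically onto the terrain."""
    x, y, _ = bucket_tip
    shadow = np.array([[x, y, hmap.height_at(x, y)]])
    return project(shadow, K, R, t)[0]
```

In a real system, the projected grid points and shadow marker would then be drawn onto each video frame before display to the operator, e.g., with OpenCV drawing primitives (cf. [14]).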

Cite this article as:
T. Tanimoto, R. Fukano, K. Shinohara, K. Kurashiki, D. Kondo, and H. Yoshinada, “Research on Superimposed Terrain Model for Teleoperation Work Efficiency,” J. Robot. Mechatron., Vol.28 No.2, pp. 173-184, 2016.
References
  [1] Y. Hiramatsu, T. Aono, and M. Nishio, "Disaster restoration work for the eruption of Mt. Usuzan using an unmanned construction system," Advanced Robotics, Vol.16, No.6, pp. 505-508, 2002.
  [2] M. Moteki, K. Fujino, and A. Nishiyama, "Research on operator's mastery of unmanned construction," Proc. the 30th Int. Symp. on Automation and Robotics in Construction and Mining (ISARC), pp. 540-547, 2013.
  [3] M. Kamezaki, J. Yang, H. Iwata, and S. Sugano, "A Basic Framework of Virtual Reality Simulator for Advancing Disaster Response Work Using Teleoperated Work Machines," J. of Robotics and Mechatronics, Vol.26, No.4, pp. 486-495, 2014.
  [4] T. Hirabayashi, "Examination of information presentation method for teleoperation excavator," J. of Robotics and Mechatronics, Vol.24, No.6, pp. 967-976, 2012.
  [5] A. Nishiyama, M. Moteki, K. Fujino, and T. Hashimoto, "Research on the comparison of operator viewpoints between manned and remote control operation in unmanned construction systems," Proc. the 30th Int. Symp. on Automation and Robotics in Construction and Mining (ISARC), pp. 772-780, 2013.
  [6] M. Moteki, K. Fujino, T. Ohtsuki, and T. Hashimoto, "Research on Visual Point of Operator in Remote Control of Construction Machinery," Proc. the 28th Int. Symp. on Automation and Robotics in Construction (ISARC), pp. 532-537, 2011.
  [7] H. Furuya, N. Kuriu, and C. Shimizu, "Development of next generation remote-controlled machinery system – Remote operation using the apparatus and experience 3D images –," Proc. the 13th Symp. on Construction Robotics in Japan, pp. 109-116, 2012 (in Japanese).
  [8] S. Yano, M. Emoto, and T. Mitsuhashi, "Two factors in visual fatigue caused by stereoscopic HDTV images," Displays, Vol.25, No.4, pp. 141-150, 2004.
  [9] N. Fujiwara, T. Onda, H. Masuda, and K. Chayama, "Virtual property lines drawing on the monitor for observation of unmanned dam construction site," Proc. IEEE/ACM Int. Symp. on Augmented Reality, pp. 101-104, 2000.
  [10] T. Tanimoto, R. Fukano, K. Shinohara, H. Yoshinada, K. Kurashiki, and D. Kondo, "Superimposed Terrain Model on the Operator's View Image of Teleoperation," Proc. the 15th Symp. on Construction Robotics in Japan, O-22, 2015 (in Japanese).
  [11] K. Shinohara, T. Koike, K. Kurashiki, R. Fukano, and H. Yoshinada, "Miniature hydraulic excavator model for teleoperability evaluation test platform," Proc. the 9th JFPS Int. Symp. on Fluid Power (ISFP), pp. 340-347, 2014.
  [12] S. Oishi, Y. Jeong, R. Kurazume, Y. Iwashita, and T. Hasegawa, "ND voxel localization using large-scale 3D environmental map and RGB-D camera," Proc. 2013 IEEE Int. Conf. on Robotics and Biomimetics (ROBIO), pp. 538-545, 2013.
  [13] S. Izadi, D. Kim, O. Hilliges, D. Molyneaux, R. Newcombe, P. Kohli, J. Shotton, S. Hodges, D. Freeman, A. Davison, and A. Fitzgibbon, "KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera," Proc. the 10th IEEE Int. Symp. on Mixed and Augmented Reality (ISMAR), pp. 127-136, 2011.
  [14] G. Bradski and A. Kaehler, "Learning OpenCV: Computer Vision with the OpenCV Library," O'Reilly Media, 2008.
