
Int. J. Automation Technol. (IJAT), Vol.17 No.3, pp. 284-291, 2023
doi: 10.20965/ijat.2023.p0284

Review:

Digital Twin of Experience for Human–Robot Collaboration Through Virtual Reality

Tetsunari Inamura*,**,†

*National Institute of Informatics (NII)
2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan

**The Graduate University for Advanced Studies (SOKENDAI)
Hayama, Japan

†Corresponding author

Received: November 18, 2022
Accepted: February 7, 2023
Published: May 5, 2023
Keywords: digital twin, virtual reality, human–robot interaction, behavior change
Abstract

The term “human digital twin” has received considerable attention in recent years, and information technology based on it has been developed in healthcare and sports training systems to guide human behavior toward a better state. In contrast, the term “digital twin” originated in the optimization of production and maintenance processes for industrial products, and intelligent robot systems can be regarded as a mainstream application of that original digital twin. In other words, assistive robots that support humans in their daily lives and improve their behavior require the integration of the human digital twin and the conventional object digital twin. However, integrating these two digital twins is not easy from the viewpoint of system integration. In addition, encouraging humans to change their behavior requires providing users with subjective and immersive experiences rather than simply displaying numerical information. This study reviews the current status and limitations of these digital twin technologies and proposes the concept of a virtual reality (VR) digital twin that integrates digital twins and VR toward assistive robotic systems. This concept will expand the experience of both humans and robots and open the way to robots that can better support our daily lives.
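
As a rough illustration of the VR digital twin concept summarized in the abstract, the following Python sketch keeps a human digital twin and a robot digital twin in one shared virtual scene and returns an immersive scene description for a VR display rather than a dashboard of raw numbers. This is only a minimal sketch under assumptions made here; all class, method, and field names (HumanTwin, RobotTwin, VRDigitalTwin, step, suggest_behavior_change) are hypothetical and are not taken from the article or from any specific library.

# Hypothetical sketch: one shared virtual scene integrating a human digital twin
# and a robot digital twin, rendered immersively instead of as numerical output.
from dataclasses import dataclass, field


@dataclass
class HumanTwin:
    """Mirror of the human user (posture, vital signs, behavior history)."""
    posture: dict = field(default_factory=dict)
    vitals: dict = field(default_factory=dict)
    behavior_log: list = field(default_factory=list)


@dataclass
class RobotTwin:
    """Mirror of the assistive robot (joint states, current task plan)."""
    joint_states: dict = field(default_factory=dict)
    task_plan: list = field(default_factory=list)


class VRDigitalTwin:
    """Shared virtual scene that integrates both twins for an immersive VR view."""

    def __init__(self, human: HumanTwin, robot: RobotTwin):
        self.human = human
        self.robot = robot

    def step(self, human_sensors: dict, robot_state: dict) -> dict:
        # 1) Synchronize both twins from real-world measurements.
        self.human.posture.update(human_sensors.get("posture", {}))
        self.human.vitals.update(human_sensors.get("vitals", {}))
        self.robot.joint_states.update(robot_state)

        # 2) Produce a scene description for a VR headset: the user sees an
        #    avatar and the robot in a shared space, plus behavior guidance,
        #    instead of raw numerical indicators.
        return {
            "avatar_pose": self.human.posture,
            "robot_pose": self.robot.joint_states,
            "guidance_overlay": self.suggest_behavior_change(),
        }

    def suggest_behavior_change(self) -> str:
        # Placeholder for behavior-change guidance delivered as a subjective,
        # first-person experience (the role the review assigns to VR).
        return "highlight a safer reaching motion"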

Cite this article as:
T. Inamura, “Digital Twin of Experience for Human–Robot Collaboration Through Virtual Reality,” Int. J. Automation Technol., Vol.17 No.3, pp. 284-291, 2023.
