
J. Robot. Mechatron. Vol.33 No.1, pp. 24-32, 2021
doi: 10.20965/jrm.2021.p0024

Development Report:

Design and Evaluation of Attention Guidance Through Eye Gazing of “NAMIDA” Driving Agent

Shintaro Tamura*, Naoki Ohshima**, Komei Hasegawa*, and Michio Okada*

*Department of Computer Science and Engineering, Toyohashi University of Technology
1-1 Hibarigaoka, Tempaku-cho, Toyohashi, Aichi 441-8580, Japan

**Electronics-Inspired Interdisciplinary Research Institute (EIIRIS), Toyohashi University of Technology
1-1 Hibarigaoka, Tempaku-cho, Toyohashi, Aichi 441-8580, Japan

Received: September 30, 2019
Accepted: September 7, 2020
Published: February 20, 2021

Keywords: human-robot interaction, autonomous car, pre-cueing task, gaze direction of social robot
Abstract

Driving agents studied thus far have aimed to guide the driver’s attention while driving, typically through verbal interactions such as spoken conversation. In this study, we instead examine this role in autonomous driving from the perspective of nonverbal communication grounded in physicality (e.g., head movements and eye gaze). To this end, we constructed a driving agent called NAMIDA, together with its physical properties, as a research platform. We then conducted a cognitive experiment on attention guidance focusing on “gaze direction,” i.e., the movement of NAMIDA’s eyes. The results confirmed that NAMIDA’s eye-gaze movements attract the participants’ attention and serve as a “cue” for exploring the surroundings.
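The “pre-cueing task” named in the keywords follows the Posner paradigm: a spatial cue (here, the agent’s gaze) precedes a target, and faster responses on cue-consistent (“valid”) trials than on cue-inconsistent (“invalid”) trials indicate that the cue guided attention. As a rough illustration only, the Python sketch below simulates such a trial sequence; the trial counts, cue validity, timings, and key mapping are illustrative assumptions and not the authors’ experimental protocol.

import random
import time

# Minimal sketch of a Posner-style pre-cueing trial sequence.
# All parameters here are illustrative assumptions, not the
# protocol used in the NAMIDA experiment.

SIDES = ("left", "right")

def make_trials(n_trials, p_valid=0.8):
    """Build a trial list where the gaze cue points at the target
    side with probability p_valid ('valid' trial) and at the
    opposite side otherwise ('invalid' trial)."""
    trials = []
    for _ in range(n_trials):
        target = random.choice(SIDES)
        valid = random.random() < p_valid
        cue = target if valid else ("left" if target == "right" else "right")
        trials.append({"cue": cue, "target": target, "valid": valid})
    return trials

def run_trial(trial, soa=0.3):
    """One trial: present the agent's gaze cue, wait the cue-target
    stimulus-onset asynchrony (SOA), show the target, and time the
    keypress ('f' = left, anything else = right)."""
    print(f"\nAgent looks {trial['cue'].upper()}")   # gaze cue
    time.sleep(soa)                                  # cue-target SOA
    print(f"TARGET on the {trial['target'].upper()}!")
    t0 = time.monotonic()
    resp = input("Press f (left) or j (right), then Enter: ").strip().lower()
    rt = time.monotonic() - t0
    correct = (resp == "f") == (trial["target"] == "left")
    return rt, correct

if __name__ == "__main__":
    for tr in make_trials(4):
        rt, ok = run_trial(tr)
        kind = "valid" if tr["valid"] else "invalid"
        print(f"{kind} trial: RT = {rt:.3f} s, correct = {ok}")

In an analysis of such data, a mean reaction-time advantage on valid over invalid trials is the standard evidence that the gaze cue shifted attention toward the cued location.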

Dashboard-embedded driving agent NAMIDA

Cite this article as:
S. Tamura, N. Ohshima, K. Hasegawa, and M. Okada, “Design and Evaluation of Attention Guidance Through Eye Gazing of “NAMIDA” Driving Agent,” J. Robot. Mechatron., Vol.33 No.1, pp. 24-32, 2021.
