
JACIII Vol.21 No.4 pp. 660-666 (2017)
doi: 10.20965/jaciii.2017.p0660

Paper:

Where Robot Looks Is Not Where Person Thinks Robot Looks

Yusuke Tamura*, Takafumi Akashi**, and Hisashi Osumi**

*Graduate School of Engineering, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan

**Faculty of Science and Engineering, Chuo University
1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan

Received: November 28, 2016
Accepted: February 16, 2017
Published: July 20, 2017
Keywords: human-robot interaction, attention
Abstract

For a robot to interact smoothly with humans, it must be able to manipulate human attention to some degree. In this study, we start from the hypothesis that humans cannot correctly perceive what a robot is looking at. To examine this hypothesis, we conducted an experiment focusing on the relationship between a robot's geometrical gaze point and the gaze point perceived by a human. The results of the experiment supported the hypothesis. Based on these results, we propose a computational model that calculates where a robot should look in order to guide a person's attention to a desired area. The validity of the proposed model was demonstrated by cross-validation.
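The sketch below is only an illustration of the general idea in the abstract, not the authors' model: it assumes the relationship between the robot's geometrical gaze point and the human-perceived gaze point can be approximated by an affine map fitted to experimental data, and then inverts that map to pick a gaze point that steers perceived attention to a desired target. The data values and function names are hypothetical.

```python
# Illustrative sketch only -- not the paper's computational model.
# Assumption: perceived gaze point ~= A @ (robot gaze point) + b,
# with A, b fitted to (hypothetical) experimental measurements.
import numpy as np

# Hypothetical data: where the robot geometrically looked (x, y) and
# where observers reported it to be looking, in the same plane coordinates.
robot_gaze = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1], [0.3, 0.2]])
perceived  = np.array([[0.02, 0.01], [0.13, 0.02], [0.24, 0.12], [0.35, 0.23]])

# Least-squares fit of the affine map: perceived ~= robot_gaze @ M + b.
X = np.hstack([robot_gaze, np.ones((len(robot_gaze), 1))])  # add bias column
W, *_ = np.linalg.lstsq(X, perceived, rcond=None)            # shape (3, 2)
M, b = W[:2], W[2]

def gaze_for_target(target_xy):
    """Invert the fitted map: return the gaze point the robot should use
    so that the *perceived* gaze point lands on the desired target."""
    return np.linalg.solve(M.T, np.asarray(target_xy) - b)

# Example: gaze point that guides perceived attention toward (0.25, 0.15).
print(gaze_for_target([0.25, 0.15]))
```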

Cite this article as:
Y. Tamura, T. Akashi, and H. Osumi, “Where Robot Looks Is Not Where Person Thinks Robot Looks,” J. Adv. Comput. Intell. Intell. Inform., Vol.21 No.4, pp. 660-666, 2017.
