
JACIII Vol.21 No.4, pp. 686-696 (2017)
doi: 10.20965/jaciii.2017.p0686

Paper:

Generation of Bystander Robot Actions Based on Analysis of Relative Probability of Human Actions

Kazuki Sakai*,**, Fabio Dalla Libera*, Yuichiro Yoshikawa*,**, and Hiroshi Ishiguro*,**

*Osaka University
1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan

**JST ERATO, Ishiguro Symbiotic Human-Robot Interaction Project, Japan

Received: November 20, 2016
Accepted: January 18, 2017
Published: July 20, 2017
Keywords: bystander robot, social robot, human-robot interaction
Abstract

This paper describes a rule-extraction method for generating appropriate robot actions in a multiparty conversation, based on the relative probability of human actions in similar situations. The method was applied to a dataset of multiparty interactions between two robots and one human subject who took on the role of supporting one of the robots. By computing the relative occurrence probabilities of human actions following the robots’ actions, twenty rules describing human behavior in this supporting role were identified. To evaluate the rules, the human role was filled by a new bystander robot, and other subjects reported their impressions of video clips in which the bystander robot either did or did not act in accordance with the rules. The reported impressions and a quantitative analysis of the rules suggest that the subjects’ listening behavior and supporting role can be reproduced by a bystander robot acting in accordance with the rules identified by the proposed method.
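As a rough illustration of the extraction step described above (the paper's exact formulation is not reproduced here), the following Python sketch identifies (robot action, human action) rules by comparing the probability that a human action occurs within a short window after a robot action against that human action's baseline rate. The event format, window length, and threshold are all illustrative assumptions, not values from the paper.

from collections import Counter

def extract_rules(events, window=2.0, threshold=2.0):
    """events: time-sorted list of (timestamp, agent, action) tuples.
    Returns (robot_action, human_action, relative_probability) triples
    whose relative probability exceeds `threshold`. A sketch only; the
    window and threshold are assumed values."""
    robot_events = [(t, a) for t, agent, a in events if agent == "robot"]
    human_events = [(t, a) for t, agent, a in events if agent == "human"]

    # Baseline rate of each human action per window of observation time.
    duration = events[-1][0] - events[0][0]
    n_windows = max(duration / window, 1.0)
    counts = Counter(a for _, a in human_events)
    baseline = {a: c / n_windows for a, c in counts.items()}

    # Conditional probability: fraction of occurrences of a robot action
    # that are followed by a given human action within `window` seconds
    # (each human action counted at most once per robot event).
    occurred, followed = Counter(), Counter()
    for rt, ra in robot_events:
        occurred[ra] += 1
        seen = set()
        for ht, ha in human_events:
            if 0.0 < ht - rt <= window and ha not in seen:
                seen.add(ha)
                followed[(ra, ha)] += 1

    rules = []
    for (ra, ha), n in followed.items():
        relative = (n / occurred[ra]) / baseline[ha]
        if relative >= threshold:
            rules.append((ra, ha, relative))
    return rules

A resulting rule such as ("robot nods", "human nods", 3.1) would be read as: the human action is about three times more likely than its baseline rate immediately after that robot action.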

Cite this article as:
K. Sakai, F. Dalla Libera, Y. Yoshikawa, and H. Ishiguro, “Generation of Bystander Robot Actions Based on Analysis of Relative Probability of Human Actions,” J. Adv. Comput. Intell. Intell. Inform., Vol.21, No.4, pp. 686-696, 2017.
