
JACIII Vol.19 No.1 pp. 91-99
doi: 10.20965/jaciii.2015.p0091
(2015)

Paper:

Deep Level Emotion Understanding Using Customized Knowledge for Human-Robot Communication

Jesus Adrian Garcia Sanchez*, Kazuhiro Ohnishi*, Atsushi Shibata*,
Fangyan Dong**, and Kaoru Hirota*

*Department of Computational Intelligence and Systems Science, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, G3-49, 4259 Nagatsuta, Midori-ku, Yokohama 226-8502, Japan

**Education Academy of Computational Life Sciences (ACLS), Tokyo Institute of Technology, J3-141, 4259 Nagatsuta-cho, Midori-ku, Yokohama 226-8501, Japan

Received:
April 30, 2014
Accepted:
September 2, 2014
Published:
January 20, 2015
Keywords:
emotion understanding, human-robot communication, multi agent, kinect sensor
Abstract
In this study, a method for acquiring deep level emotion understanding is proposed to facilitate better human-robot communication, where customized learning knowledge of an observed agent (human or robot) is used together with the observed input information from a Kinect sensor device. It aims to obtain agent-dependent emotion understanding by utilizing special customized knowledge of the agent, rather than ordinary surface level emotion understanding that uses visual/acoustic/distance information without any customized knowledge. In an experiment employing special demonstration scenarios, where a company employee’s emotion is understood by a secretary eye robot equipped with a Kinect sensor device, it is confirmed that the proposed method provides deep level emotion understanding that differs from ordinary surface level emotion understanding. The proposal is planned to be applied to part of the emotion understanding module in the demonstration experiments of an ongoing robotics research project titled “Multi-Agent Fuzzy Atmosfield.”
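As an illustration of the two-stage idea summarized above, the following minimal Python sketch shows how a surface level estimate fused from visual/acoustic/distance cues might be shifted by agent-specific customized knowledge. It is not the authors’ implementation; the pleasure-arousal representation, cue weights, agent identifiers, and correction table are illustrative assumptions only.

# Minimal sketch (not the authors' implementation): refine a surface-level
# emotion estimate with agent-specific customized knowledge. All names,
# weights, and values below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Emotion:
    pleasure: float  # -1.0 (unpleasant) .. +1.0 (pleasant)
    arousal: float   # -1.0 (calm) .. +1.0 (excited)

def surface_level_estimate(face_score, voice_score, posture_score):
    """Fuse visual/acoustic/distance cues into a surface-level estimate.
    Equal weighting is assumed purely for illustration."""
    pleasure = (face_score + voice_score) / 2.0
    arousal = (voice_score + posture_score) / 2.0
    return Emotion(pleasure, arousal)

# Hypothetical customized knowledge: per-agent, per-context corrections
# learned from past observations (e.g., an employee who masks negative
# feelings in front of superiors).
CUSTOM_KNOWLEDGE = {
    ("employee_A", "meeting_with_boss"): Emotion(pleasure=-0.4, arousal=+0.2),
    ("employee_A", "casual_chat"): Emotion(pleasure=0.0, arousal=0.0),
}

def deep_level_estimate(agent_id, context, surface):
    """Shift the surface estimate by the agent's customized correction,
    clamping the result to the valid range."""
    corr = CUSTOM_KNOWLEDGE.get((agent_id, context), Emotion(0.0, 0.0))
    clamp = lambda v: max(-1.0, min(1.0, v))
    return Emotion(clamp(surface.pleasure + corr.pleasure),
                   clamp(surface.arousal + corr.arousal))

if __name__ == "__main__":
    surface = surface_level_estimate(face_score=0.5, voice_score=0.3, posture_score=0.1)
    deep = deep_level_estimate("employee_A", "meeting_with_boss", surface)
    print("surface:", surface)
    print("deep   :", deep)

The point of the sketch is only the structure: a generic surface estimate is computed first, then an agent- and context-dependent correction drawn from learned knowledge yields the agent-dependent (deep level) reading.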
Cite this article as:
J. A. Garcia Sanchez, K. Ohnishi, A. Shibata, F. Dong, and K. Hirota, “Deep Level Emotion Understanding Using Customized Knowledge for Human-Robot Communication,” J. Adv. Comput. Intell. Intell. Inform., Vol.19 No.1, pp. 91-99, 2015.
References
  1. [1] L. Cañamero, “Emotion understanding from the perspective of autonomous robots research,” Neural Networks, Emotion and Brain, Vol.18, Issue 4, pp. 445-455, 2005.
  2. [2] C. L. Bethel, “Survey of Psychophysiology Measurements Applied to Human-Robot Interaction,” The 16th IEEE Int. Symposium on Robot and Human Interactive Communication, pp. 732-737, 2007.
  3. [3] J. R. Fontaine, K. R. Scherer, E. B. Roesch, and P. C. Ellsworth, “The World of Emotions Is Not Two-Dimensional,” Psychological Science, Vol.18, No.12, pp. 1050-1057, 2007.
  4. [4] M. Ilbeygi and H. Shah-Hosseini, “A novel fuzzy facial expression recognition system based on facial feature extraction from color face images,” Engineering Applications of Artificial Intelligence, Vol.25, Issue 1, pp. 30-146, 2012.
  5. [5] B. I. Ashish and D. S. Chaudhari, “Speech Emotion Recognition,” Int. Journal of Soft Computing and Engineering (IJSCE), Vol.2, Issue 1, pp. 235-238, 2012.
  6. [6] Y. Zhao, X. Wang, M. Goubran, T. Whalen, and E. M. Petriu, “Human emotion and cognition recognition from body language of the head using soft computing techniques,” Journal of Ambient Intelligence and Humanized Computing, Vol.4, Issue 1, pp. 121-140, 2013.
  7. [7] L. Miranda, T. Vieira, D. Martinez, T. Lewiner, A. W. Vieira, and F. M. Campos, “Real-time gesture recognition from depth data through key poses learning and decision forests,” Brazilian Symposium of Computer Graphics and Image Processing, 25th SIBGRAPI: Conf. on Graphics, Patterns and Images, pp. 268-275, 2012.
  8. [8] G. Castellano, L. Kessous, and G. Caridakis, “Emotion Recognition through Multiple Modalities: Face, Body Gesture, Speech,” Affect and Emotion in Human-Computer Interaction, Lecture Notes in Computer Science, Vol.4868, pp. 92-103, Springer Berlin Heidelberg, 2008.
  9. [9] L. Kessous, G. Castellano, and G. Caridakis, “Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis,” J. on Multimodal User Interfaces, Vol.3, Issue 1-2, pp. 33-48, 2010.
  10. [10] M. Kazemifard, N. Ghasem-Aghaee, B. L. Koenig, and T. I. Ören, “An emotion understanding framework for intelligent agents based on episodic and semantic memories,” Autonomous Agents and Multi-Agent Systems, Springer US, 2013.
  11. [11] Y. Yamazaki, Y. Hatakeyama, F. Dong, K. Nomoto, and K. Hirota, “Fuzzy Inference based Mentality Expression for Eye Robot in Affinity Pleasure-Arousal Space,” J. of Advanced Computational Intelligence and Intelligent Informatics (JACIII), Vol.12, No.3, pp. 304-313, 2008.
  12. [12] D. Matsumoto, “Cultural Similarities and Differences in Display Rules,” Motivation and Emotion, Vol.14, No.3, pp. 195-214, 1990.
  13. [13] D. Matsumoto, “Culture and Emotional Expression,” Understanding Culture: Theory, Research, and Application, Psychology Press, pp. 263-279, 2009.
  14. [14] J. Russell, “A Circumplex Model of Affect,” J. of Personality and Social Psychology, Vol.39, No.6, pp. 1161-1178, 1980.
  15. [15] J. Russell, T. Niit, and M. Lewicka, “A Cross-Cultural Study of a Circumplex Model of Affect,” J. of Personality and Social Psychology, Vol.57, No.5, pp. 848-856, 1989.
  16. [16] M. A. Livingston, J. Sebastian, Z. Ai, and J. Decker, “Performance Measurements for the Microsoft Kinect Skeleton,” Conf. Proc., Virtual Reality Short Papers and Posters IEEE, pp. 119-120, 2012.
  17. [17] Kinect libraries used in the coding: Audio library
    http://msdn.microsoft.com/en-us/us-en/library/jj131025.aspx,
    Face Tracking library
    http://msdn.microsoft.com/en-us/library/jj130970.aspx,
    Skeletal Tracking library
    http://msdn.microsoft.com/en-us/us-en/library/jj131025.aspx [Accessed April, 2013].
  18. [18] F. Burkhardt, A. Paeschke, M. Rolfes, W. Sendlmeier, and B. Weiss, “A Database of German Emotional Speech,” Proc. Interspeech, Lisbon, Portugal, pp. 1517-1520, 2005.
  19. [19] Z. Liu, M. Wu, D. Li, L. Chen, F. Dong, Y. Yamazaki, and K. Hirota, “Concept of Fuzzy Atmosfield for Representing Communication Atmosphere and its Application to Humans-Robots Interaction,” J. of Advanced Computational Intelligence and Intelligent Informatics (JACIII), Vol.17, No.1, pp. 3-17, 2013.
