Deep Level Emotion Understanding Using Customized Knowledge for Human-Robot Communication
Jesus Adrian Garcia Sanchez*, Kazuhiro Ohnishi*, Atsushi Shibata*,
Fangyan Dong**, and Kaoru Hirota*
*Department of Computational Intelligence and Systems Science, Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, G3-49, 4259 Nagatsuta, Midori-ku, Yokohama 226-8502, Japan
**Education Academy of Computational Life Sciences (ACLS), Tokyo Institute of Technology, J3-141, 4259 Nagatsuta-cho, Midori-ku, Yokohama 226-8501, Japan
-  L. Cañamero, “Emotion understanding from the perspective of autonomous robots research,” Neural Networks, Emotion and Brain, Vol.18, Issue 4, pp. 445-455, 2005.
-  C. L. Bethel, “Survey of Psychophysiology Measurements Applied to Human-Robot Interaction,” The 16th IEEE Int. Symposium on Robot and Human interactive Communication, pp. 732-737, 2007.
-  J. R. Fontaine, K. R. Scherer, E. B. Roesch, and P. C. Ellsworth, “The World of Emotions is Not Two-Dimensional,” Psychological Science, Vol.18, No.12, pp. 1050-1057, 2007.
-  M. Ilbeygi and H. Shah-Hosseini, “A novel fuzzy facial expression recognition system based on facial feature extraction from color face images,” Engineering Applications of Artificial Intelligence, Vol.25, Issue 1, pp. 130-146, 2012.
-  B. I. Ashish and D. S. Chaudhari, “Speech Emotion Recognition,” Int. Journal of Soft Computing and Engineering (IJSCE), Vol.2, Issue 1, pp. 235-238, 2012.
-  Y. Zhao, X. Wang, M. Goubran, T. Whalen, and E. M. Petriu, “Human emotion and cognition recognition from body language of the head using soft computing techniques,” Journal of Ambient Intelligence and Humanized Computing, Vol.4, Issue 1, pp. 121-140, 2013.
-  L. Miranda, T. Vieira, D. Martinez, T. Lewiner, A. W. Vieira, and F. M. Campos, “Real-time gesture recognition from depth data through key poses learning and decision forests,” Brazilian Symposium of Computer Graphic and Image Processing, 25th SIBGRAPI: Conf. on Graphics, Patterns and Images, pp. 268-275, 2012.
-  G. Castellano, L. Kessous, and G. Caridakis, “Emotion Recognition through Multiple Modalities: Face, Body Gesture, Speech,” Affect and Emotion in Human-Computer Interaction, Lecture Notes in Computer Science, Vol.4868, pp. 92-103, Springer Berlin Heidelberg, 2008.
-  L. Kessous, G. Castellano, and G. Caridakis, “Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis,” J. on Multimodal User Interfaces, Vol.3, Issue 1-2, pp. 33-48, 2010.
-  M. Kazemifard, N. Ghasem-Aghaee, B. L. Koenig, and T. I. Ören, “An emotion understanding framework for intelligent agents based on episodic and semantic memories,” Autonomous Agents and Multi-Agent Systems, Springer US, 2013.
-  Y. Yamazaki, Y. Hatakeyama, F. Dong, K. Nomoto, and K. Hirota, “Fuzzy Inference based Mentality Expression for Eye Robot in Affinity Pleasure-Arousal Space,” J. of Advanced Computational Intelligence and Intelligent Informatics (JACIII), Vol.12, No.3, pp. 304-313, 2008.
-  D. Matsumoto, “Cultural Similarities and Differences in Display Rules,” Motivation and Emotion, Vol.14, No.3, pp. 195-214, 1990.
-  D. Matsumoto, “Culture and Emotional Expression. Understanding Culture: Theory, Research, and Application,” Psychology Press, pp. 263-279, 2009.
-  J. A. Russell, “A Circumplex Model of Affect,” J. of Personality and Social Psychology, Vol.39, No.6, pp. 1161-1178, 1980.
-  J. A. Russell, T. Niit, and M. Lewicka, “A Cross-Cultural Study of a Circumplex Model of Affect,” J. of Personality and Social Psychology, Vol.57, No.5, pp. 848-856, 1989.
-  M. A. Livingston, J. Sebastian, Z. Ai, and J. Decker, “Performance Measurements for the Microsoft Kinect Skeleton,” Conf. Proc., Virtual Reality Short Papers and Posters IEEE, pp. 119-120, 2012.
-  Kinect libraries used in the coding: Audio library, Face Tracking library, and Skeletal Tracking library, http://msdn.microsoft.com/en-us/us-en/library/jj131025.aspx [Accessed April, 2013].
-  F. Burkhardt, A. Paeschke, M. Rolfes, W. Sendlmeier, and B. Weiss, “A Database of German Emotional Speech,” Proc. Interspeech, Lisbon, Portugal, pp. 1517-1520, 2005.
-  Z. Liu, M. Wu, D. Li, L. Chen, F. Dong, Y. Yamazaki, and K. Hirota, “Concept of Fuzzy Atmosfield for Representing Communication Atmosphere and its Application to Humans-Robots Interaction,” J. of Advanced Computational Intelligence and Intelligent Informatics (JACIII), Vol.17, No.1, pp. 3-17, 2013.
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.