
JACIII Vol.17 No.1 pp. 3-17
doi: 10.20965/jaciii.2013.p0003
(2013)

Paper:

Concept of Fuzzy Atmosfield for Representing Communication Atmosphere and its Application to Humans-Robots Interaction

Zhen-Tao Liu*,**, Min Wu**, Dan-Yun Li**, Lue-Feng Chen*, Fang-Yan Dong*, Yoichi Yamazaki***, and Kaoru Hirota*

*Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, G3-49, 4259 Nagatsuta, Midori-ku, Yokohama, Kanagawa 226-8502, Japan

**School of Information Science and Engineering, Central South University, Yuelu Mountain, Changsha, Hunan 410083, China

***Department of Electrical, Electronic, and Information Engineering, Kanto Gakuin University, 1-50-1 Mutsuura-higashi, Kanazawa-ku, Yokohama, Kanagawa 236-8501, Japan

Received: July 26, 2012
Accepted: October 19, 2012
Published: January 20, 2013
Keywords: cognitive science, human-robot interaction, fuzzy logic, visualization
Abstract
The concept of Fuzzy Atmosfield (FA) is proposed to represent the communication atmosphere in a society consisting of multiple individuals, such as humans and/or robots. The FA is characterized by a 3D fuzzy cubic space with “Friendly-Hostile” (FH), “Lively-Calm” (LC), and “Casual-Formal” (CF) axes, and each state of the FA is visualized using shape-color-length graphics. It is intended as a tool for identifying an atmosphere through quantitative analysis and graphical representation. In a humans-robots interaction experiment, in which the FA represents the real-time atmosphere created by four humans and five eye robots in a home-party scenario, Pearson’s correlation coefficient values of 0.92, 0.86, and 0.72 for the FH, LC, and CF axes, respectively, indicate the correspondence between the proposed FA and questionnaire results, and subjective evaluation of the graphical representation of the FA achieves 84% accuracy for shape, 76% for color, and 58% for length. The FA is being extended to represent the complex atmosphere generated by humans, robots, and background music, and partial results are also shown.
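
As a minimal illustration (a Python sketch; the paper publishes no code, and all names and numeric values below are hypothetical), an FA state can be treated as a point in the fuzzy cubic space [-1, 1]^3 spanned by the FH, LC, and CF axes, and a series of FA estimates can be compared against questionnaire averages with Pearson’s correlation coefficient, as in the evaluation described above:

```python
# Hypothetical sketch, not the authors' published code: an FA state as a point
# in the 3D fuzzy cubic space [-1, 1]^3 with Friendly-Hostile (FH),
# Lively-Calm (LC), and Casual-Formal (CF) axes, plus Pearson's correlation
# coefficient for comparing FA estimates with questionnaire averages.
from dataclasses import dataclass
from math import sqrt


@dataclass
class FAState:
    fh: float  # Friendly (+1) ... Hostile (-1)
    lc: float  # Lively   (+1) ... Calm    (-1)
    cf: float  # Casual   (+1) ... Formal  (-1)

    def __post_init__(self):
        # Each coordinate of the fuzzy cubic space lies in [-1, 1].
        for v in (self.fh, self.lc, self.cf):
            if not -1.0 <= v <= 1.0:
                raise ValueError("FA coordinates must lie in [-1, 1]")


def pearson(xs, ys):
    """Pearson's correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Example (hypothetical values): FA estimates at successive moments of an
# interaction versus questionnaire averages on the FH axis.
fa_series = [FAState(0.6, 0.2, 0.4), FAState(0.7, 0.5, 0.3), FAState(0.3, 0.1, 0.5)]
questionnaire_fh = [0.55, 0.75, 0.25]
print(pearson([s.fh for s in fa_series], questionnaire_fh))
```
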
Cite this article as:
Z. Liu, M. Wu, D. Li, L. Chen, F. Dong, Y. Yamazaki, and K. Hirota, “Concept of Fuzzy Atmosfield for Representing Communication Atmosphere and its Application to Humans-Robots Interaction,” J. Adv. Comput. Intell. Intell. Inform., Vol.17 No.1, pp. 3-17, 2013.
References
[1] R. W. Picard, “Affective Computing: Challenges,” Int. J. of Human-Computer Studies, Vol.59, No.1-2, pp. 55-64, 2003.
[2] J. L. Burke, R. R. Murphy et al., “Final Report for the DARPA/NSF Interdisciplinary Study on Human-Robot Interaction,” IEEE Trans. on Systems, Man, and Cybernetics – Part C: Applications and Reviews, Vol.34, No.2, pp. 103-112, 2004.
[3] K. Hirota and F.-Y. Dong, “Development of Mascot Robot System in NEDO project,” IEEE Int. Conf. on Intelligent Systems, Varna, Bulgaria, 2008.
[4] Y. Yamazaki, H. A. Vu et al., “Gesture Recognition Using Combination of Acceleration Sensor and Images for Casual Communication Between Robots and Humans,” IEEE World Congress on Evolutionary Computation, Barcelona, Spain, 2010.
[5] V. Akre, E. Falkum et al., “The Communication Atmosphere Between Physician Colleagues: Competitive Perfectionism or Supportive Dialogue? A Norwegian Study,” Social Science & Medicine, Vol.44, No.4, pp. 519-526, 1997.
[6] T. M. Rutkowski and D. P. Mandic, “Modelling the Communication Atmosphere: A Human Centered Multimedia Approach to Evaluate Communicative Situations,” Artificial Intelligence for Human Computing, Springer-Verlag, Berlin, Vol.4451, pp. 155-169, 2007.
[7] T. M. Rutkowski, K. Kakusho et al., “Evaluation of the Communication Atmosphere,” Knowledge-Based Intelligent Information and Engineering Systems, Springer-Verlag, Berlin, Vol.3213, pp. 364-370, 2004.
[8] B. Anderson, “Affective Atmospheres,” Emotion, Space and Society, Vol.2, No.2, pp. 77-81, 2009.
[9] Y. Yamazaki, Y. Hatakeyama et al., “Fuzzy Inference Based Mentality Expression for Eye Robot in Affinity Pleasure-Arousal Space,” J. of Advanced Computational Intelligence and Intelligent Informatics, Vol.12, No.3, pp. 304-313, 2008.
[10] Z.-T. Liu, M. Wu et al., “Emotional States Based 3-D Fuzzy Atmosfield for Casual Communication Between Humans and Robots,” IEEE Int. Conf. on Fuzzy Systems, Taipei, Taiwan, 2011.
[11] D. J. Stanley and J. P. Meyer, “Two-Dimensional Affective Space: A New Approach to Orienting the Axes,” Emotion, Vol.9, No.2, pp. 214-237, 2009.
[12] C. Breazeal, “Emotion and Sociable Humanoid Robots,” Int. J. of Human-Computer Studies, Vol.59, No.1-2, pp. 119-155, 2003.
[13] P. B. Byl, “A Six-Dimensional Paradigm for Generating Emotions in Virtual Characters,” Int. J. of Intelligent Games & Simulation, Vol.2, No.2, pp. 72-79, 2003.
[14] Z.-T. Liu, F.-Y. Dong et al., “Proposal of Fuzzy Atmosfield for Mood Expression of Human-Robot Communication,” Int. Symp. on Intelligent Systems, Tokyo, Japan, 2010.
[15] S. Hein, “Feeling Words.”
    http://eqi.org/fw.htm
[16] J. M. Lattin, J. D. Carroll et al., “Analyzing Multivariate Data,” Duxbury Press, CA, 2002.
[17] P. Ahlgren, B. Jarneving et al., “Requirements for a Cocitation Similarity Measure, with Special Reference to Pearson’s Correlation Coefficient,” J. of the American Society for Information Science and Technology, Vol.54, No.6, pp. 550-560, 2003.
[18] E. Cox, “Fuzzy Fundamentals,” IEEE Spectrum, Vol.29, No.10, pp. 58-61, 1992.
[19] J. A. Russell, A. Weiss et al., “Affect Grid: A Single-Item Scale of Pleasure and Arousal,” J. of Personality and Social Psychology, Vol.57, No.3, pp. 493-502, 1989.
[20] L. Sussman, “Verbal communication.”
    http://cobweb2.louisville.edu/faculty/regbruce/bruce/mgmtwebs/commun_f98/Verbal.htm
[21] M. Pavlova, A. A. Sokolov et al., “Perceived Dynamics of Static Images Enables Emotional Attribution,” Perception, Vol.34, pp. 1107-1116, 2005.
[22] H. Gunes and M. Piccardi, “Bi-Modal Emotion Recognition from Expressive Face and Body Gestures,” J. of Network and Computer Applications, Vol.30, No.4, pp. 1334-1345, 2007.
[23] N. Kaya and H. H. Epps, “Relationship Between Color and Emotion: A Study of College Students,” College Student Journal, Vol.38, pp. 396-405, 2004.
[24] J. H. Xin, K. M. Cheng et al., “Cross-Regional Comparison of Colour Emotions Part I: Quantitative Analysis,” Color Research & Application, Vol.29, No.6, pp. 451-457, 2004.
[25] P. Ekman, “Are there basic emotions?,” Psychological Review, Vol.99, No.3, pp. 550-553, 1992.
[26] S. Zhang, Y. Xu et al., “Analysis and Modeling of Affective Audio Visual Speech Based on PAD Emotion Space,” The 6th Int. Symp. on Chinese Spoken Language Processing, Kunming, China, 2008.
[27] C. Breazeal, “Affective Interaction Between Humans and Robots,” Advances in Artificial Life, Springer-Verlag, Berlin, Vol.2159, pp. 582-591, 2001.
[28] H. Miwa, T. Umetsu et al., “Robot Personalization Based on the Mental Dynamics,” IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Takamatsu, Japan, 2000.
[29] M. Grimm, E. Mower et al., “Primitives-Based Evaluation and Estimation of Emotions in Speech,” Speech Communication, Vol.49, No.10-11, pp. 787-800, 2007.
