JRM Vol.23 No.3 pp. 451-457
doi: 10.20965/jrm.2011.p0451


Modulation of Musical Sound Clips for Robot’s Dynamic Emotional Expression

Eun-Sook Jee*, Chong Hui Kim**, and Hisato Kobayashi***

*Div. of Mechanical Engineering, KAIST, #2314, Building #N5, 373-1 KAIST, Guseong-dong, Yuseong-gu, Daejeon 305-701, Korea

**Human-Robot Interaction Research Center, KAIST, (Agency for Defense Development, Daejeon, Korea), 373-1 KAIST, Guseong-dong, Yuseong-gu, Daejeon 305-701, Korea

***Graduate School of Art and Technology, Hosei University, 4-7-1 Fujimi, Chiyoda-ku, Tokyo 102-8160, Japan

July 9, 2010
March 1, 2011
June 20, 2011
Keywords: emotional expression, human-robot interaction, sound design, music composition

Sound is an important medium for human-robot interaction. A single sound or music clip is not enough to express delicate emotions; in particular, it is almost impossible to represent emotional change. This paper aims to express different intensity levels of emotional sounds and the transitions between them. Happiness, sadness, anger, and surprise are considered as a basic set of robot emotions. Using previously proposed nominal sound clips for the four emotions, this paper presents a method for reproducing different emotional intensity levels by modulating the musical parameters 'tempo,' 'pitch,' and 'volume.' Basic experiments were carried out to test whether human subjects can discern three different intensity levels for each of the four emotions. The recognition rates show that the proposed modulation works fairly well and at least demonstrates the possibility of letting humans identify three intensity levels of emotion. Since the modulation is achieved by dynamically changing the three musical parameters of a sound clip, the method can be extended to dynamically changing emotional sounds.
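The modulation described in the abstract can be illustrated with a minimal sketch: a clip is treated as a list of note events, and intensity is varied by scaling tempo (compressing onsets and durations), shifting pitch in semitones, and rescaling volume (velocity). The `Note` class, `modulate` function, and the preset values below are illustrative assumptions for exposition, not the actual parameter values or sound clips used in the paper.

```python
from dataclasses import dataclass

@dataclass
class Note:
    onset: float      # seconds from clip start
    duration: float   # seconds
    pitch: int        # MIDI note number
    velocity: int     # MIDI velocity (loudness), 0-127

def modulate(clip, tempo_scale=1.0, pitch_shift=0, volume_scale=1.0):
    """Return a new clip with tempo, pitch, and volume modulated.

    tempo_scale > 1 plays faster (onsets and durations shrink),
    pitch_shift is in semitones, volume_scale rescales velocity.
    Pitch and velocity are clamped to the valid MIDI range.
    """
    return [
        Note(
            onset=n.onset / tempo_scale,
            duration=n.duration / tempo_scale,
            pitch=min(127, max(0, n.pitch + pitch_shift)),
            velocity=min(127, max(1, round(n.velocity * volume_scale))),
        )
        for n in clip
    ]

# Hypothetical intensity presets for one emotion: higher intensity
# maps to faster, higher, and louder playback of the nominal clip.
HAPPINESS_LEVELS = {
    "low":    dict(tempo_scale=0.9, pitch_shift=-2, volume_scale=0.8),
    "medium": dict(tempo_scale=1.0, pitch_shift=0,  volume_scale=1.0),
    "high":   dict(tempo_scale=1.2, pitch_shift=2,  volume_scale=1.2),
}

nominal = [Note(0.0, 0.5, 60, 80), Note(0.5, 0.5, 64, 80)]
intense = modulate(nominal, **HAPPINESS_LEVELS["high"])
```

Because the three parameters are applied as independent per-note transforms, they can also be interpolated over time, which is what makes the gradual emotional transitions mentioned in the abstract possible.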

Cite this article as:
Eun-Sook Jee, Chong Hui Kim, and Hisato Kobayashi, “Modulation of Musical Sound Clips for Robot’s Dynamic Emotional Expression,” J. Robot. Mechatron., Vol.23, No.3, pp. 451-457, 2011.
References:
  1. [1] H. Miwa, K. Itoh, M. Matsumoto, M. Zecca, H. Takanobu, S. Roccella, M. C. Carrozza, P. Dario, and A. Takanishi, “Effective emotional expressions with emotion expression humanoid robot WE-4RII: Integration of humanoid robot hand RCH-1,” IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 2203-2208, 2004.
  2. [2] T. Nakanishi and T. Kitagawa, “Visualization of music impression in facial expression to represent emotion,” Proc. of Asia-Pacific conference on Conceptual Modeling, Vol.53, pp. 55-64, 2006.
  3. [3] E.-S. Jee, C. H. Kim, S.-Y. Park, and K.-W. Lee, “Composition of musical sound expressing an emotion of robot based on musical factors,” Proc. of the IEEE Int. Symposium on Robot and Human Interactive Communication, pp. 637-641, 2007.
  4. [4] M. Hattori, M. Tsuji, S. Tadokoro, T. Takamori, and K. Yamada, “An analysis and generation of bunraku puppet’s emotions based on linear structure of functional factors, emotional factors, and stochastic fluctuations for generation of humanoid robot’s actions with fertile emotions,” J. of Robotics and Mechatronics, Vol.11, No.5, pp. 393-398, 1999.
  5. [5] N. Kubota and S. Wakisaka, “An emotional model based on location-dependent memory for partner robots,” J. of Robotics and Mechatronics, Vol.21, No.3, pp. 317-323, 2009.
  6. [6] A. J. Blood, R. J. Zatorre, P. Bermudez, and A. C. Evans, “Emotional Responses to Pleasant and Unpleasant Music Correlate with Activity in Paralimbic Brain Regions,” Nature Neuroscience, Vol.2, No.4, pp. 382-387, 1999.
  7. [7] T. Baumgartner, K. Lutz, C. F. Schmidt, and L. Jancke, “The Emotional Power of Music: How Music Enhances the Feeling of Affective Picture,” Brain Research, Vol.1075, pp. 151-164, 2006.
  8. [8] P. N. Juslin and D. Västfjäll, “Emotional Responses to Music: The Need to Consider Underlying Mechanisms,” Behavioral and Brain Sciences, Vol.31, pp. 556-621, 2008.
  9. [9] K. Hevner, “Expression in music: A discussion of experimental studies and theories,” Psychological Review, Vol.42, pp. 186-204, 1935.
  10. [10] K. Hevner, “The affective character of the major and minor modes in music,” American J. of Psychology, Vol.47, No.4, pp. 103-118, 1935.
  11. [11] K. Hevner, “Experimental studies of the elements of expression in music,” American J. of Psychology, Vol.48, No.2, pp. 248-268, 1936.
  12. [12] K. Hevner, “The affective value of pitch and tempo in music,” American J. of Psychology, Vol.49, No.4, pp. 621-630, 1937.
  13. [13] P. N. Juslin, “Cue Utilization in Communication of Emotion in Music Performance: Relating Performance to Perception,” J. of Experimental Psychology, Vol.16, No.6, pp. 1797-1813, 2000.
  14. [14] A. Gabrielsson and E. Lindström, “The influence of musical structure on emotional expression,” in P. N. Juslin and J. A. Sloboda (Eds.), Music and Emotion: Theory and Research, Oxford University Press, New York, 2001.
  15. [15] P. N. Juslin and P. Laukka, “Communication of emotions in vocal expression and music performance: Different channels, same code?,” Psychological Bulletin, Vol.129, No.5, pp. 770-814, 2003.
  16. [16] E. Schubert, “Modeling perceived emotion with continuous musical features,” Music Perception, Vol.21, No.4, pp. 561-585, 2004.
  17. [17] J. F. Cohn and G. S. Kats, “Bimodal expression of emotion by face and voice,” Proc. of the Sixth ACM Int. Conf. on Multimedia, pp. 41-44, 1998.
  18. [18] S. R. Livingstone and W. F. Thompson, “The emergence of music from the theory of mind,” Musicae Scientiae, pp. 83-115, 2009.
  19. [19] O. Post and D. Huron, “Western classical music in the minor mode is slower (except in the romantic period),” Empirical Musicology Review, Vol.4, No.1, pp. 2-10, 2009.
  20. [20] E.-S. Jee, Y.-J. Cheong, C. H. Kim, D.-S. Kwon, and H. Kobayashi, “Sound Production for the Emotional Expression,” in A. Lazinica (Ed.), Advances in Human-Robot Interaction, IN-TECH, In-Press.
  21. [21] J. A. Russell, “A circumplex model of affect,” J. of Personality and Social Psychology, Vol.39, pp. 1161-1178, 1980.
