JACIII Vol.17 No.3 pp. 362-370
doi: 10.20965/jaciii.2013.p0362
(2013)

Paper:

Discovering Emotion-Inducing Music Features Using EEG Signals

Rafael Cabredo*,**, Roberto Legaspi*, Paul Salvador Inventado*,**,
and Masayuki Numao*

*The Institute of Scientific and Industrial Research, Osaka University, 8-1 Mihogaoka, Ibaraki, Osaka 567-0047, Japan

**Center for Empathic Human-Computer Interactions, De La Salle University, 2401 Taft Avenue, Manila 1004, Philippines

Received:
October 11, 2012
Accepted:
January 7, 2013
Published:
May 20, 2013
Keywords:
music emotion recognition, machine learning, electroencephalograph
Abstract
Music induces different kinds of emotions in listeners. Previous research on music and emotions has shown that different music features can be used to classify how certain music induces emotions in an individual. We propose a method for collecting electroencephalograph (EEG) data from subjects listening to emotion-inducing music. The EEG data is used to continuously label high-level music features with continuous-valued emotion annotations using the emotion spectrum analysis method. The music features are extracted from MIDI files using a windowing technique. We highlight the results of two emotion models, for stress and relaxation, which were constructed using C4.5. Evaluations of the models using 10-fold cross validation give promising results, with an average relative absolute error of 6.54% for a window length of 38.4 seconds.
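The abstract outlines a pipeline of windowed music features labeled with continuous emotion values and evaluated by 10-fold cross validation. The following is a minimal sketch of that kind of pipeline, not the authors' implementation: scikit-learn's DecisionTreeRegressor stands in for C4.5, random arrays stand in for the MIDI-derived features and EEG-derived annotations, and the non-overlapping window averaging, the rounded 38-second window, and the RAE baseline are all illustrative assumptions.

```python
# Sketch of a windowed music-emotion pipeline (assumptions noted above;
# synthetic data replaces the paper's MIDI features and EEG labels).
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Placeholder data: one row of high-level music features per second of music,
# and one continuous emotion value (e.g., stress) per second.
n_seconds, n_features = 600, 20
features_per_second = rng.normal(size=(n_seconds, n_features))
emotion_per_second = rng.normal(size=n_seconds)

def make_windows(features, labels, window_len):
    """Average features and labels over non-overlapping windows of window_len seconds."""
    n_windows = len(features) // window_len
    X = np.array([features[i * window_len:(i + 1) * window_len].mean(axis=0)
                  for i in range(n_windows)])
    y = np.array([labels[i * window_len:(i + 1) * window_len].mean()
                  for i in range(n_windows)])
    return X, y

X, y = make_windows(features_per_second, emotion_per_second, window_len=38)

# 10-fold cross validation with relative absolute error (RAE):
# RAE = sum(|pred - actual|) / sum(|mean(train actual) - actual|)
kf = KFold(n_splits=10, shuffle=True, random_state=0)
rae_scores = []
for train_idx, test_idx in kf.split(X):
    model = DecisionTreeRegressor().fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    baseline = np.abs(y[test_idx] - y[train_idx].mean()).sum()
    rae_scores.append(np.abs(pred - y[test_idx]).sum() / baseline)

print(f"mean RAE over 10 folds: {np.mean(rae_scores):.2%}")
```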
Cite this article as:
R. Cabredo, R. Legaspi, P. Inventado, and M. Numao, “Discovering Emotion-Inducing Music Features Using EEG Signals,” J. Adv. Comput. Intell. Intell. Inform., Vol.17 No.3, pp. 362-370, 2013.
