Paper:
Discovering Emotion-Inducing Music Features Using EEG Signals
Rafael Cabredo*,**, Roberto Legaspi*, Paul Salvador Inventado*,**,
and Masayuki Numao*
*The Institute of Scientific and Industrial Research, Osaka University, 8-1 Mihogaoka, Ibaraki, Osaka 567-0047, Japan
**Center for Empathic Human-Computer Interactions, De La Salle University, 2401 Taft Avenue, Manila 1004, Philippines
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.