
JACIII Vol.23, No.2, pp. 274-281 (2019)
doi: 10.20965/jaciii.2019.p0274

Paper:

A Common Spatial Pattern and Wavelet Packet Decomposition Combined Method for EEG-Based Emotion Recognition

Jingxia Chen*,**,†, Dongmei Jiang*, and Yanning Zhang*

*School of Computer Science and Engineering, Northwestern Polytechnical University
Xi’an, Shaanxi 710072, China

**Department of Electrical and Information Engineering, Shaanxi University of Science and Technology
Xi’an, Shaanxi 710021, China

†Corresponding author

Received: June 1, 2018
Accepted: July 24, 2018
Published: March 20, 2019

Keywords: EEG, common spatial pattern, wavelet packet decomposition, emotion recognition, SVM
Abstract

To effectively reduce the day-to-day fluctuations and differences in subjects' brain electroencephalogram (EEG) signals and to improve the accuracy and stability of EEG-based emotion classification, a new EEG feature extraction method combining the common spatial pattern (CSP) and wavelet packet decomposition (WPD) is proposed. For five days of emotion-related EEG data from 12 subjects, the CSP algorithm is first used to project the raw EEG data into an optimal subspace and extract discriminative features by maximizing the Kullback-Leibler (KL) divergence between the two categories of EEG data. The WPD algorithm is then used to decompose the spatially filtered EEG signals into time-frequency features. Finally, four state-of-the-art classifiers, namely Bagging tree, SVM, linear discriminant analysis, and Bayesian linear discriminant analysis, are used to perform binary emotion classification. The experimental results show that, with CSP spatial filtering, classification on the WPD features extracted with the bior3.3 wavelet base achieves the best accuracy of 0.862. This is 29.3% higher than that of the power spectral density (PSD) feature without CSP preprocessing, 23% higher than that of the PSD feature with CSP preprocessing, 1.9% higher than that of the WPD feature extracted with the bior3.3 wavelet base without CSP preprocessing, and 3.2% higher than that of the WPD feature extracted with the rbio6.8 wavelet base without CSP preprocessing. The proposed method effectively reduces the variance and non-stationarity of cross-day EEG signals, extracts emotion-related features, and improves the accuracy and stability of cross-day EEG emotion classification. It is valuable for the development of robust emotional brain-computer interface applications.
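As a rough illustration of the pipeline described in the abstract, the following Python sketch chains classical CSP spatial filtering (via a generalized eigendecomposition of the class covariance matrices, used here as a stand-in for the KL-divergence-based formulation in the paper), wavelet packet energy features with the bior3.3 base, and an SVM classifier. The array shapes, number of spatial filter pairs, decomposition level, and random stand-in data are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a CSP + WPD + SVM pipeline (assumptions noted above).
import numpy as np
import pywt                      # wavelet packet decomposition
from scipy.linalg import eigh    # generalized eigenvalue solver
from sklearn.svm import SVC

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Spatial filters from two classes of trials shaped (n_trials, n_channels, n_samples)."""
    def avg_cov(trials):
        covs = []
        for t in trials:
            c = np.cov(t)                   # channel covariance of one trial
            covs.append(c / np.trace(c))    # normalize by total power
        return np.mean(covs, axis=0)
    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigendecomposition of (Ca, Ca + Cb); the extreme eigenvectors
    # give the most discriminative spatial projections.
    w, v = eigh(ca, ca + cb)
    order = np.argsort(w)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return v[:, picks].T                    # (2 * n_pairs, n_channels)

def wpd_energy_features(signal, wavelet="bior3.3", level=4):
    """Relative sub-band energies of a 1-D signal from a wavelet packet tree."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="freq")])
    return energies / (energies.sum() + 1e-12)

def extract_features(trials, filters):
    """CSP-project each trial, then concatenate the WPD energies of every projected channel."""
    feats = []
    for trial in trials:                    # trial: (n_channels, n_samples)
        projected = filters @ trial
        feats.append(np.concatenate([wpd_energy_features(ch) for ch in projected]))
    return np.array(feats)

# Usage with random stand-in data (two classes, 32 channels, 1 s at 128 Hz).
rng = np.random.default_rng(0)
class_a = rng.standard_normal((40, 32, 128))
class_b = rng.standard_normal((40, 32, 128))
W = csp_filters(class_a, class_b)
X = np.vstack([extract_features(class_a, W), extract_features(class_b, W)])
y = np.array([0] * 40 + [1] * 40)
clf = SVC(kernel="rbf").fit(X, y)           # one of the four classifiers compared in the paper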

Cite this article as:
J. Chen, D. Jiang, and Y. Zhang, “A Common Spatial Pattern and Wavelet Packet Decomposition Combined Method for EEG-Based Emotion Recognition,” J. Adv. Comput. Intell. Intell. Inform., Vol.23 No.2, pp. 274-281, 2019.
