Emotional State Classification from MUSIC-Based Features of Multichannel EEG Signals.
Hossain, Sakib Abrar; Rahman, Md Asadur; Chakrabarty, Amitabha; Rashid, Mohd Abdur; Kuwana, Anna; Kobayashi, Haruo.
Affiliation
  • Hossain SA; Department of Computer Science and Engineering, Brac University, Dhaka 1212, Bangladesh; NSU Genome Research Institute, North South University, Dhaka 1229, Bangladesh.
  • Rahman MA; Department of Biomedical Engineering, Military Institute of Science and Technology (MIST), Dhaka 1216, Bangladesh.
  • Chakrabarty A; Department of Computer Science and Engineering, Brac University, Dhaka 1212, Bangladesh.
  • Rashid MA; Department of EEE, Noakhali Science and Technology University, Noakhali 3814, Bangladesh.
  • Kuwana A; Division of Electronics and Informatics, Gunma University, 1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515, Japan.
  • Kobayashi H; Division of Electronics and Informatics, Gunma University, 1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515, Japan.
Bioengineering (Basel); 10(1), 2023 Jan 11.
Article in English | MEDLINE | ID: mdl-36671671
Electroencephalogram (EEG)-based emotion recognition is a computationally demanding problem in medical data science, with promising applications in revealing cognitive states. EEG signals are typically classified from frequency-domain features, which are often extracted with non-parametric estimators such as Welch's power spectral density (PSD). These non-parametric methods are computationally expensive and have long run times. The main purpose of this work is to apply the multiple signal classification (MUSIC) model, a parametric frequency-spectrum-estimation technique, to extract features from multichannel EEG signals for emotional state classification on the SEED dataset. The main challenge in using MUSIC for EEG feature extraction is tuning its parameters so that the extracted features discriminate between classes; this tuning is a significant contribution of this work. Another contribution is identifying, for the first time, flaws in this dataset that contributed to the high classification accuracies reported in previous studies. Using MUSIC features, this work classified three emotional states with 97% average accuracy using an artificial neural network. The proposed MUSIC model reduces feature-extraction run time by 95-96% compared with the conventional non-parametric technique (Welch's PSD).
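As an illustration of the feature-extraction step the abstract describes, below is a minimal sketch (not the authors' implementation) of computing a MUSIC pseudospectrum for a single EEG channel and, for comparison, a Welch PSD estimate. The correlation-matrix order m, the signal-subspace dimension p, and the frequency grid are hypothetical placeholders; tuning such parameters for class-discriminative features is precisely the contribution the paper claims, and the values it uses are not given in this record.

    import numpy as np
    from scipy.signal import welch

    def music_pseudospectrum(x, m, p, freqs, fs):
        """MUSIC pseudospectrum of a 1-D signal.

        m: order of the sample correlation matrix (hypothetical choice).
        p: assumed signal-subspace dimension (hypothetical choice).
        freqs: frequencies (Hz) at which to evaluate the pseudospectrum.
        """
        # Sample correlation matrix from overlapping length-m snapshots.
        snaps = np.lib.stride_tricks.sliding_window_view(x, m)
        R = snaps.T @ snaps / snaps.shape[0]
        # np.linalg.eigh returns eigenvalues in ascending order, so the
        # first m - p eigenvectors span the noise subspace.
        _, vecs = np.linalg.eigh(R)
        En = vecs[:, : m - p]
        k = np.arange(m)
        pseudo = np.empty(len(freqs))
        for i, f in enumerate(freqs):
            a = np.exp(2j * np.pi * f / fs * k)   # steering vector a(f)
            # Pseudospectrum peaks where a(f) is orthogonal to the noise subspace.
            pseudo[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
        return pseudo

    # Hypothetical usage on a synthetic stand-in for one EEG channel
    # (SEED recordings are commonly downsampled to 200 Hz; m, p, and the
    # 1-50 Hz grid below are illustrative, not the paper's settings).
    fs = 200
    t = np.arange(10 * fs) / fs
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

    freqs = np.linspace(1, 50, 200)
    P_music = music_pseudospectrum(x, m=32, p=4, freqs=freqs, fs=fs)
    f_welch, P_welch = welch(x, fs=fs, nperseg=256)

Per-channel band powers summarized from such spectra would then form the feature vectors fed to the artificial neural network. Note that MUSIC evaluates the pseudospectrum only at the frequencies of interest from one small eigendecomposition, which is consistent with the 95-96% run-time reduction over Welch's averaged-periodogram method that the abstract reports.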
Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Bioengineering (Basel) Year: 2023 Document type: Article Country of affiliation: Bangladesh
