A Novel Method for Sleep-Stage Classification Based on Sonification of Sleep Electroencephalogram Signals Using Wavelet Transform and Recurrent Neural Network.
Moradi, Foad; Mohammadi, Hiwa; Rezaei, Mohammad; Sariaslani, Payam; Razazian, Nazanin; Khazaie, Habibolah; Adeli, Hojjat.
Affiliations
  • Moradi F; Sleep Disorders Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran.
  • Mohammadi H; Electrical and Computer Engineering Faculty, Semnan University, Semnan, Iran.
  • Rezaei M; Department of Neurology, School of Medicine, Kermanshah University of Medical Sciences, Kermanshah, Iran.
  • Sariaslani P; Sleep Disorders Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran, hiwa.mohamadi@gmail.com.
  • Razazian N; Department of Neurology, School of Medicine, Kermanshah University of Medical Sciences, Kermanshah, Iran, hiwa.mohamadi@gmail.com.
  • Khazaie H; Clinical Research Development Center, Imam Reza Hospital, Kermanshah University of Medical Sciences, Kermanshah, Iran, hiwa.mohamadi@gmail.com.
  • Adeli H; Sleep Disorders Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran.
Eur Neurol; 83(5): 468-486, 2020.
Article in English | MEDLINE | ID: mdl-33120386
INTRODUCTION: Visual sleep-stage scoring is a time-consuming technique that cannot extract the nonlinear characteristics of the electroencephalogram (EEG). This article presents a novel method for sleep-stage differentiation based on sonification of sleep-EEG signals using wavelet transforms and a recurrent neural network (RNN).

METHODS: Two RNNs, each based on a long short-term memory model, were trained separately on a database of classical guitar pieces and a database of Kurdish tanbur Makams. Discrete wavelet transform and wavelet packet decomposition were used to determine the association between the EEG signals and musical pitches, and continuous wavelet transform was applied to extract musical beat-based features from the EEG. The pretrained RNN was then used to generate music. To test the proposed model, 11 sleep EEGs were mapped onto the guitar and tanbur frequency intervals and presented to the pretrained RNN. The generated music was then presented in random order to 2 neurologists.

RESULTS: The proposed model classified the sleep stages with an accuracy of more than 81% for tanbur and more than 93% for guitar musical pieces. Inter-rater reliability measured by Cohen's kappa coefficient (κ) showed good agreement for both tanbur (κ = 0.64, p < 0.001) and guitar musical pieces (κ = 0.85, p < 0.001).

CONCLUSION: The present EEG sonification method enables valid sleep staging by clinicians. The method could be applied to various EEG databases for classification, differentiation, diagnosis, and treatment purposes. In the future, real-time EEG sonification could be used as a feedback tool for replanning neurophysiological functions in the management of many neurological and psychiatric disorders.
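To make the described pipeline concrete, the sketch below illustrates two of its steps in Python using PyWavelets and scikit-learn: a discrete wavelet decomposition of a 30-s EEG epoch into sub-band energies that are mapped onto a guitar pitch interval, and the Cohen's kappa computation used for inter-rater reliability. The sampling rate, wavelet, pitch-mapping rule, MIDI range, and stage labels are illustrative assumptions; the abstract does not specify the authors' actual settings.

```python
# Minimal sketch of a wavelet-based EEG-to-pitch mapping and an inter-rater
# agreement check. The sampling rate (100 Hz), wavelet ('db4'), guitar range
# (MIDI 40-88), and mapping rule are assumptions made for illustration only.
import numpy as np
import pywt
from sklearn.metrics import cohen_kappa_score

FS = 100          # assumed EEG sampling rate (Hz)
EPOCH_SEC = 30    # standard sleep-scoring epoch length

def band_energies(epoch, wavelet="db4", level=5):
    """Discrete wavelet transform of one 30-s EEG epoch.

    A 5-level db4 decomposition at 100 Hz yields sub-bands that roughly
    follow the clinical EEG rhythms: cD1 ~25-50 Hz (gamma), cD2 ~12.5-25 Hz
    (beta), cD3 ~6.25-12.5 Hz (alpha), cD4 ~3.1-6.25 Hz (theta),
    cD5/cA5 ~0-3.1 Hz (delta).
    """
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])  # energy per sub-band

def energies_to_pitches(energies, low_midi=40, high_midi=88):
    """Map relative sub-band energies onto MIDI pitches inside the guitar
    interval (E2 = MIDI 40 to E6 = MIDI 88). Hypothetical rule: the stronger
    a sub-band, the higher it is placed within the range."""
    rel = energies / energies.sum()
    return np.round(low_midi + rel * (high_midi - low_midi)).astype(int)

# Example: one synthetic epoch mimicking slow-wave-dominated sleep EEG.
t = np.arange(FS * EPOCH_SEC) / FS
epoch = np.sin(2 * np.pi * 1.5 * t) + 0.2 * np.random.randn(t.size)
pitches = energies_to_pitches(band_energies(epoch))
print("MIDI pitches fed to the pretrained RNN:", pitches)

# Inter-rater reliability of the two neurologists' stage labels,
# reported in the paper as Cohen's kappa (labels here are made up).
rater_1 = ["W", "N1", "N2", "N3", "REM", "N2"]
rater_2 = ["W", "N1", "N2", "N2", "REM", "N2"]
print("Cohen's kappa:", cohen_kappa_score(rater_1, rater_2))
```

In the published method, the resulting pitch sequence would then be passed to the pretrained LSTM network to generate the guitar or tanbur piece presented to the raters.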

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Sleep Stages / Neural Networks, Computer / Electroencephalography / Wavelet Analysis / Music Study type: Prognostic studies Limits: Adolescent / Adult / Female / Humans / Male / Middle aged Language: English Journal: Eur Neurol Year: 2020 Document type: Article Country of affiliation: Iran