A Novel Method for Sleep-Stage Classification Based on Sonification of Sleep Electroencephalogram Signals Using Wavelet Transform and Recurrent Neural Network.
Moradi, Foad; Mohammadi, Hiwa; Rezaei, Mohammad; Sariaslani, Payam; Razazian, Nazanin; Khazaie, Habibolah; Adeli, Hojjat.
Affiliations
  • Moradi F; Sleep Disorders Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran.
  • Mohammadi H; Electrical and Computer Engineering Faculty, Semnan University, Semnan, Iran.
  • Rezaei M; Department of Neurology, School of Medicine, Kermanshah University of Medical Sciences, Kermanshah, Iran.
  • Sariaslani P; Sleep Disorders Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran, hiwa.mohamadi@gmail.com.
  • Razazian N; Department of Neurology, School of Medicine, Kermanshah University of Medical Sciences, Kermanshah, Iran, hiwa.mohamadi@gmail.com.
  • Khazaie H; Clinical Research Development Center, Imam Reza Hospital, Kermanshah University of Medical Sciences, Kermanshah, Iran, hiwa.mohamadi@gmail.com.
  • Adeli H; Sleep Disorders Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran.
Eur Neurol; 83(5): 468-486, 2020.
Article in English | MEDLINE | ID: mdl-33120386
ABSTRACT

INTRODUCTION:

Visual sleep-stage scoring is a time-consuming technique that cannot extract the nonlinear characteristics of the electroencephalogram (EEG). This article presents a novel method for sleep-stage differentiation based on sonification of sleep-EEG signals using wavelet transform and a recurrent neural network (RNN).

METHODS:

Two RNNs were designed and trained separately, using a long short-term memory (LSTM) model, on a database of classical guitar pieces and Kurdish tanbur makams. Discrete wavelet transform and wavelet packet decomposition were used to determine the association between the EEG signals and musical pitches, and continuous wavelet transform was applied to extract musical beat-based features from the EEG. The pretrained RNN was then used to generate music. To test the proposed model, 11 sleep EEGs were mapped onto the guitar and tanbur frequency intervals and presented to the pretrained RNN. The generated music was then presented in random order to two neurologists.
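To make the pitch-mapping idea concrete, the sketch below runs a single level of a Haar discrete wavelet transform on a toy EEG epoch and linearly rescales the approximation coefficients onto MIDI pitch numbers in a guitar-like range. This is a minimal illustration, not the authors' pipeline: the study uses deeper DWT/WPD/CWT decompositions and a trained LSTM, and the function names, pitch range, and toy signal here are assumptions.

```python
import math

def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def coeffs_to_pitches(coeffs, low=40, high=88):
    """Linearly rescale coefficients into [low, high] MIDI pitch numbers
    (roughly a guitar's range: E2 = 40 up to E6 = 88)."""
    lo, hi = min(coeffs), max(coeffs)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat signal
    return [round(low + (c - lo) / span * (high - low)) for c in coeffs]

# Toy "EEG epoch": a slow 2-cycle oscillation plus a small fast component.
epoch = [math.sin(2 * math.pi * 2 * t / 64)
         + 0.2 * math.sin(2 * math.pi * 12 * t / 64)
         for t in range(64)]

approx, _ = haar_dwt(epoch)      # 32 approximation coefficients
pitches = coeffs_to_pitches(approx)
```

A real implementation would replace `haar_dwt` with a multilevel decomposition (e.g. PyWavelets' `pywt.wavedec`) and feed the resulting pitch sequence to the pretrained LSTM rather than playing it directly.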

RESULTS:

The proposed model classified the sleep stages with an accuracy of more than 81% for tanbur and more than 93% for guitar musical pieces. Inter-rater reliability measured by Cohen's kappa coefficient (κ) was good for both tanbur (κ = 0.64, p < 0.001) and guitar musical pieces (κ = 0.85, p < 0.001).
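Cohen's kappa, used above to quantify agreement between the two raters beyond chance, can be computed as follows. The labels in the example are illustrative sleep-stage annotations, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same n items:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence of the raters' label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[label] * cb[label] for label in set(ca) | set(cb)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical stage labels from two raters for six epochs.
rater_a = ["W", "N1", "N2", "N3", "REM", "N2"]
rater_b = ["W", "N2", "N2", "N3", "REM", "N2"]
kappa = cohens_kappa(rater_a, rater_b)  # 5/6 observed agreement
```

Equivalent functionality is available as `sklearn.metrics.cohen_kappa_score` when scikit-learn is at hand.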

CONCLUSION:

The present EEG sonification method leads to valid sleep staging by clinicians. The method could be used on various EEG databases for classification, differentiation, diagnosis, and treatment purposes. Real-time EEG sonification can be used as a feedback tool for replanning of neurophysiological functions for the management of many neurological and psychiatric disorders in the future.

Full text: 1 Database: MEDLINE Main subject: Sleep Stages / Neural Networks, Computer / Electroencephalography / Wavelet Analysis / Music Study type: Prognostic_studies Limits: Adolescent / Adult / Female / Humans / Male / Middle aged Language: En Publication year: 2020 Document type: Article
