A novel deep learning model based on transformer and cross modality attention for classification of sleep stages.
Mostafaei, Sahar Hassanzadeh; Tanha, Jafar; Sharafkhaneh, Amir.
Affiliation
  • Mostafaei SH; Faculty of Electrical and Computer Engineering, University of Tabriz, P.O. Box 51666-16471, Tabriz, Iran. Electronic address: S.h.mostafaei@tabrizu.ac.ir.
  • Tanha J; Faculty of Electrical and Computer Engineering, University of Tabriz, P.O. Box 51666-16471, Tabriz, Iran. Electronic address: Tanha@tabrizu.ac.ir.
  • Sharafkhaneh A; Professor of Medicine, Section of Pulmonary, Critical Care and Sleep Medicine, Department of Medicine, Baylor College of Medicine, Houston, TX, USA. Electronic address: Amirs@bcm.edu.
J Biomed Inform ; 157: 104689, 2024 Jul 18.
Article in En | MEDLINE | ID: mdl-39029770
ABSTRACT
The classification of sleep stages is crucial for gaining insight into an individual's sleep patterns and identifying potential health issues. Employing several important physiological channels, each providing a distinct perspective on sleep patterns, can greatly improve the efficiency of classification models. Among neural-network and deep-learning models, transformers are highly effective, especially for time-series data, and have shown remarkable compatibility with sequential data such as physiological channels. Cross-modality attention, in turn, integrates information from multiple views of the data, making it possible to capture relationships among different modalities and allowing models to selectively focus on relevant information from each modality. In this paper, we introduce a novel deep-learning model based on a transformer encoder-decoder and cross-modal attention for sleep stage classification. The proposed model processes information from various physiological channels with different modalities using the Sleep Heart Health Study (SHHS) dataset, leveraging transformer encoders for feature extraction and cross-modal attention to integrate the resulting features before feeding them into the transformer decoder. The combination of these elements raised the model's accuracy to 91.33% in classifying five sleep stages. Empirical evaluations demonstrated the model's superior performance compared to standalone approaches and other state-of-the-art techniques, showcasing the potential of combining transformers and cross-modal attention for improved sleep stage classification.
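The architecture described in the abstract can be illustrated with a minimal sketch: one transformer encoder per physiological modality, a cross-modal attention layer in which one modality's features query another's, and a classifier head over the fused representation. This is a hypothetical PyTorch illustration of the general technique, not the authors' implementation; the channel names (EEG, EOG), dimensions, and classifier head are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class CrossModalSleepSketch(nn.Module):
    """Illustrative sketch (not the paper's model): per-modality
    transformer encoders + cross-modal attention + linear classifier."""

    def __init__(self, d_model=64, n_heads=4, n_classes=5):
        super().__init__()
        # One transformer encoder per modality (EEG and EOG assumed here)
        self.enc_eeg = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2)
        self.enc_eog = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2)
        # Cross-modality attention: EEG features act as queries,
        # EOG features as keys/values
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads,
                                                batch_first=True)
        # Five output classes, matching the five sleep stages
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, eeg, eog):
        h_eeg = self.enc_eeg(eeg)            # (batch, time, d_model)
        h_eog = self.enc_eog(eog)            # (batch, time, d_model)
        fused, _ = self.cross_attn(h_eeg, h_eog, h_eog)
        # Pool over time, then classify each epoch into a sleep stage
        return self.classifier(fused.mean(dim=1))  # (batch, n_classes)

model = CrossModalSleepSketch()
eeg = torch.randn(2, 30, 64)   # 2 epochs, 30 time steps, 64 features
eog = torch.randn(2, 30, 64)
logits = model(eeg, eog)
print(logits.shape)            # torch.Size([2, 5])
```

The key design point mirrored from the abstract is that fusion happens through attention rather than simple concatenation, so the model can weight each modality's features by their relevance to the other.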
Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: J Biomed Inform Journal subject: MEDICAL INFORMATICS Year of publication: 2024 Document type: Article