ABSTRACT
In recent years, Graph Neural Networks (GNNs) based on deep learning techniques have achieved promising results in EEG-based depression detection tasks, but they still have limitations. First, most existing GNN-based methods rely on pre-computed graph adjacency matrices, ignoring the differences in brain networks between individuals. Additionally, methods based on graph-structured data do not consider the temporal dependency information of brain networks. To address these issues, we propose a deep learning algorithm that explores adaptive graph topologies and temporal graph networks for EEG-based depression detection. Specifically, we design an Adaptive Graph Topology Generation (AGTG) module that adaptively models the real-time connectivity of brain networks, revealing differences between individuals. In addition, we design a Graph Convolutional Gated Recurrent Unit (GCGRU) module to capture the dynamic temporal changes of brain networks. To further explore the differential features between depressed and healthy individuals, we adopt a Graph Topology-based Max-Pooling (GTMP) module to extract graph representation vectors accurately. We conduct a comparative analysis against several advanced algorithms on both a public dataset and our own dataset. The results show that our final model achieves the highest Area Under the Receiver Operating Characteristic Curve (AUROC) on both datasets, with values of 83% and 99%, respectively. Furthermore, we perform extensive validation experiments demonstrating our proposed method's effectiveness and advantages. Finally, we present a comprehensive discussion of the differences in brain networks between healthy and depressed individuals based on the outputs of our final model's AGTG and GTMP modules.
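The abstract names a GCGRU module but gives no equations. As a rough illustration only, the sketch below shows one plausible form of a graph-convolutional GRU step, in which each GRU gate's linear map is replaced by a one-hop graph convolution over a normalized adjacency matrix. All function and weight names here are hypothetical, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gcgru_step(A, x_t, h_prev, Wz, Wr, Wh):
    """One step of a (hypothetical) Graph Convolutional GRU.

    Standard GRU gating, but each linear map is a one-hop graph
    convolution A @ [x, h] @ W over the electrode graph.
    A: (N, N) normalized adjacency; x_t: (N, F) node features;
    h_prev: (N, H) hidden state; Wz, Wr, Wh: (F + H, H) gate weights.
    """
    def gconv(inp, W):
        # Aggregate over neighbouring electrodes, then project.
        return A @ inp @ W

    xh = np.concatenate([x_t, h_prev], axis=1)
    z = sigmoid(gconv(xh, Wz))              # update gate
    r = sigmoid(gconv(xh, Wr))              # reset gate
    xh_r = np.concatenate([x_t, r * h_prev], axis=1)
    h_tilde = np.tanh(gconv(xh_r, Wh))      # candidate state
    return (1.0 - z) * h_prev + z * h_tilde
```

Running this step over a sequence of per-window EEG feature matrices would yield a hidden state per electrode that tracks temporal changes in the brain network, which is the role the abstract assigns to GCGRU.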
ABSTRACT
The electroencephalogram (EEG) plays an important role in studying brain function and human cognitive performance, and the recognition of EEG signals is vital for developing an automatic sleep staging system. However, due to complex nonstationary characteristics and individual differences between subjects, extracting effective EEG signal features for practical application remains a challenging task. In this article, we investigate the EEG feature learning problem and propose a novel temporal feature learning method based on amplitude-time dual-view fusion for automatic sleep staging. First, we explore the feature extraction ability of convolutional neural networks for the EEG signal from the perspective of interpretability and construct two new representation signals for the raw EEG from the views of amplitude and time. Then, we extract the amplitude-time signal features that reflect the transitions between different sleep stages from the obtained representation signals by using conventional 1-D CNNs. Furthermore, a hybrid dilation convolution module is used to learn the long-term temporal dependency features of EEG signals, overcoming the limitation that small convolution kernels can capture only local signal variations. Finally, we perform attention-based feature fusion on the learned dual-view signal features to further improve sleep staging performance. To evaluate the proposed method, we test it on 30-s-epoch EEG signal samples from healthy subjects and subjects with mild sleep disorders. The experimental results on the most commonly used datasets show that the proposed method achieves better sleep staging performance and has potential for the development and application of an EEG-based automatic sleep staging system.
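The hybrid dilation convolution module rests on a standard mechanism: stacking dilated 1-D convolutions with increasing rates lets a small kernel cover long-range temporal context. The NumPy sketch below illustrates that mechanism only; the function names and the dilation schedule (1, 2, 4) are assumptions, not details from the paper.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-length 1-D dilated convolution with zero padding.

    Effective receptive field: (len(kernel) - 1) * dilation + 1,
    so the field grows with the dilation rate at no parameter cost.
    """
    k = len(kernel)
    pad = (k - 1) * dilation // 2
    xp = np.pad(x, pad)
    out = np.zeros(len(x), dtype=float)
    for i in range(len(x)):
        for j in range(k):
            # Taps are spaced `dilation` samples apart.
            out[i] += kernel[j] * xp[i + j * dilation]
    return out

def hybrid_dilation_block(x, kernel, dilations=(1, 2, 4)):
    """Stack dilated convolutions with increasing rates so a small
    kernel sees progressively longer EEG context (hypothetical schedule)."""
    for d in dilations:
        x = dilated_conv1d(x, kernel, d)
    return x
```

With a length-3 kernel, the stacked rates (1, 2, 4) expand the combined receptive field well beyond the 3 samples a single small kernel would see, which is the shortcoming the module addresses.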
ABSTRACT
BACKGROUND AND OBJECTIVES: The 12-lead electrocardiogram (ECG) is widely used to diagnose myocardial infarction (MI). Generally, the symptoms of MI are reflected in the heartbeat waveforms, and different ECG leads contribute differently to different types of MI. It is therefore important to exploit both heartbeat waveform features and lead relationship features for multi-category MI diagnosis. Moreover, the challenges of individual differences and lightweight algorithms still need to be resolved in ECG automatic diagnosis systems.
METHODS: This paper presents a lightweight MI diagnosis system named the multi-feature-branch lead attention neural network (MFB-LANN), based on 12-lead ECG signals and designed around the characteristics of the ECG leads. Specifically, 12 independent feature branches correspond to the different leads, and each branch contains different convolutional layers to extract heartbeat features; a novel attention module named the lead attention mechanism (LAM) is then developed to assign different weights to each feature branch. Finally, all the weighted feature branches are fused for classification. Furthermore, to overcome individual differences, a patient-specific scheme and active learning (AL) are used to train and update the model iteratively.
RESULTS: Experimental results on the Physikalisch-Technische Bundesanstalt (PTB) database show that MFB-LANN achieved satisfactory results, with an accuracy of 99.63% using 5-fold cross-validation under the intra-patient scheme. The patient-specific experiment yielded an average accuracy of 96.99%, comparable to the state of the art. The model also achieved acceptable results on the hybrid database (PTB and PTB-XL), reaching 94.19% accuracy after the update. Moreover, the system can complete the update process and real-time diagnosis on the ARM Cortex-A72 platform.
CONCLUSIONS: Experiments show that the proposed MI diagnosis method has clear advantages over other recent methods and has great potential for application in the mobile medical field.
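The abstract does not specify the LAM's exact form. As a minimal sketch of one common attention-weighting pattern, the code below scores each lead's branch feature vector with a shared linear map, normalizes the scores with a softmax over the 12 leads, and fuses the branches by weighted sum; all names and shapes are illustrative assumptions.

```python
import numpy as np

def lead_attention(branch_feats, w, b):
    """Toy lead-attention fusion (hypothetical form of the LAM).

    branch_feats: (12, D) features, one row per ECG lead branch;
    w: (D,) shared scoring weights; b: scalar bias.
    Returns the fused (D,) feature vector and the (12,) lead weights.
    """
    scores = branch_feats @ w + b                 # one score per lead
    weights = np.exp(scores - scores.max())       # stable softmax
    weights /= weights.sum()                      # weights sum to 1
    fused = weights @ branch_feats                # weighted fusion of branches
    return fused, weights
```

In a full model, the weights would be learned end-to-end so that leads more informative for a given MI category receive larger weights before the fused vector reaches the classifier.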