ABSTRACT
The need for high-quality automated seizure detection algorithms based on electroencephalography (EEG) becomes ever more pressing with the increasing use of ambulatory and long-term EEG monitoring. Heterogeneity in validation methods of these algorithms influences the reported results and makes comprehensive evaluation and comparison challenging. This heterogeneity concerns in particular the choice of datasets, evaluation methodologies, and performance metrics. In this paper, we propose a unified framework designed to establish standardization in the validation of EEG-based seizure detection algorithms. Based on existing guidelines and recommendations, the framework introduces a set of recommendations and standards related to datasets, file formats, EEG data input content, seizure annotation input and output, cross-validation strategies, and performance metrics. We also propose the EEG 10-20 seizure detection benchmark, a machine-learning benchmark based on public datasets converted to a standardized format. This benchmark defines the machine-learning task as well as reporting metrics. We illustrate the use of the benchmark by evaluating a set of existing seizure detection algorithms. The SzCORE (Seizure Community Open-Source Research Evaluation) framework and benchmark are made publicly available along with an open-source software library to facilitate research use, while enabling rigorous evaluation of the clinical significance of the algorithms and fostering a collective effort to improve seizure detection and, ultimately, the lives of people with epilepsy.
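For illustration, below is a minimal sketch of event-based scoring of the kind such a benchmark reports, counting a detection on any overlap between reference and hypothesis events. The overlap rule, function names, and numbers are illustrative assumptions, not the SzCORE definitions:

```python
def event_scores(ref_events, hyp_events, record_hours):
    """Event-based sensitivity and false-alarm rate from lists of
    (start_s, end_s) seizure annotations. Any overlap counts as a
    detection; SzCORE defines its own tolerances around onset/offset."""
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    tp = sum(any(overlaps(r, h) for h in hyp_events) for r in ref_events)
    fp = sum(not any(overlaps(h, r) for r in ref_events) for h in hyp_events)
    sensitivity = tp / len(ref_events) if ref_events else float("nan")
    false_alarms_per_day = fp / (record_hours / 24.0)
    return sensitivity, false_alarms_per_day

# One seizure detected, one missed, one false alarm in a 24 h record:
print(event_scores([(100, 160), (5000, 5060)], [(110, 150), (9000, 9030)], 24))
```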
ABSTRACT
OBJECTIVE: Long-term automatic detection of focal seizures remains one of the major challenges in epilepsy due to the unacceptably high number of false alarms produced by state-of-the-art methods. Our aim was to investigate to what extent a new patient-specific approach based on similarly occurring morphological electroencephalographic (EEG) signal patterns could distinguish seizures from nonseizure events, and to estimate its maximum performance. METHODS: We evaluated our approach on >5500 h of long-term EEG recordings using two public datasets: the PhysioNet.org Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) Scalp EEG database and the EPILEPSIAE European epilepsy database. We visually identified a set of similarly occurring morphological patterns (seizure signature) seen simultaneously over two different EEG channels, and within two randomly selected seizures from each individual. The same seizure signature was then searched for in the entire recording from the same patient using dynamic time warping (DTW) as a similarity metric, with a threshold set to reflect the maximum sensitivity our algorithm could achieve without false alarms. RESULTS: At a DTW threshold producing no false alarms over the entire recordings, the mean seizure detection sensitivity across patients was 84%: 96% for the CHB-MIT database and 74% for the European epilepsy database. A sensitivity of 100% was reached in 50% of patients: 79% from the CHB-MIT database and 27% from the European epilepsy database. The median latency from seizure onset to detection was 17 ± 10 s, with 84% of seizures detected within 40 s. SIGNIFICANCE: A personalized EEG signature combined with DTW appears to be a promising method to detect ictal events from a limited number of EEG channels, offering high sensitivity despite a low false alarm rate, a high degree of interpretability, and low computational complexity compatible with future use in wearable devices.
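A minimal sketch of the core matching step, using a simplified DTW to slide a seizure signature over a single-channel recording. The published method's channel pairing, normalization, and per-patient thresholding are not reproduced; all parameters below are illustrative assumptions:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def scan_for_signature(eeg, signature, step, threshold):
    """Slide the seizure signature over the recording and flag windows
    whose DTW distance falls below the (per-patient) threshold."""
    w = len(signature)
    hits = []
    for start in range(0, len(eeg) - w + 1, step):
        if dtw_distance(eeg[start:start + w], signature) < threshold:
            hits.append(start)
    return hits
```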
ABSTRACT
The photoplethysmographic (PPG) signal is an unobtrusive blood pulsewave measure that has recently gained popularity in the context of the Internet of Things. Although it is commonly used for heart rate detection, it has lately been employed in multimodal health and wellness monitoring applications. Unfortunately, this signal is prone to motion artifacts, making it almost useless whenever a person is not entirely at rest. To overcome this issue, we propose SPARE, a spectral peak recovery algorithm for PPG pulsewave reconstruction. Our solution exploits the local semiperiodicity of the pulsewave signal, together with the information about the cardiac rhythm provided by a simultaneously available ECG, to reconstruct the full waveform, even when affected by strong artifacts. The developed algorithm builds on state-of-the-art signal decomposition methods and integrates novel techniques for signal reconstruction. Experimental results are reported both for PPG signals acquired during physical activity and for signals acquired at rest but systematically corrupted by synthetic noise. The full PPG waveform reconstruction enables the identification of several health-related features from the signal, showing an improvement of up to 65% in the detection of different biomarkers from PPG signals affected by noise.
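As a rough illustration of ECG-guided spectral recovery (not the SPARE algorithm itself, which builds on signal decomposition methods), the sketch below keeps only the spectral content around the ECG-derived heart rate and its first harmonics; all parameter values are assumptions:

```python
import numpy as np

def recover_pulsewave(ppg_win, fs, hr_bpm, n_harmonics=3, bw_hz=0.3):
    """Keep only the spectral peaks near the ECG-derived heart rate
    and its harmonics, then invert back to the time domain. A crude
    stand-in for the decomposition-based reconstruction in SPARE."""
    f0 = hr_bpm / 60.0                              # cardiac fundamental (Hz)
    spec = np.fft.rfft(ppg_win)
    freqs = np.fft.rfftfreq(len(ppg_win), d=1.0 / fs)
    mask = np.zeros_like(freqs, dtype=bool)
    for k in range(1, n_harmonics + 1):
        mask |= np.abs(freqs - k * f0) < bw_hz      # narrow band per harmonic
    return np.fft.irfft(spec * mask, n=len(ppg_win))
```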
Subjects
Photoplethysmography, Wearable Electronic Devices, Algorithms, Artifacts, Heart Rate, Humans, Computer-Assisted Signal Processing
ABSTRACT
Reliably detecting focal seizures without secondary generalization during daily life activities, chronically, using convenient portable or wearable devices, would offer patients with active epilepsy a number of potential benefits, such as providing more reliable seizure counts to optimize treatment and enable seizure forecasting, and triggering alarms to promote safeguarding interventions. However, no generic solution is currently available to reach these objectives. A number of biosignals are sensitive to specific forms of focal seizures, in particular heart rate and its variability for seizures affecting the neurovegetative system, and accelerometry for those causing prominent motor activity. However, most studies demonstrate high rates of false detection or poor sensitivity, with only a minority of patients benefiting from acceptable levels of accuracy. To tackle this challenging issue, several lines of technological progress are envisioned, including multimodal biosensing with cross-modal analytics, a combination of embedded and distributed self-aware machine learning, and ultra-low-power design to give such sophisticated portable solutions appropriate autonomy.
Subjects
Ambulatory Monitoring/instrumentation, Ambulatory Monitoring/methods, Seizures/diagnosis, Wearable Electronic Devices, Humans
ABSTRACT
Smart Wireless Body Sensor Nodes (WBSNs) are a novel class of unobtrusive, battery-powered devices allowing the continuous monitoring and real-time interpretation of a subject's bio-signals, such as the electrocardiogram (ECG). These low-power platforms, while able to perform advanced signal processing to extract information on heart conditions, are usually constrained in terms of computational power and transmission bandwidth. It is therefore essential to identify early which parts of an ECG are critical for diagnosis and, only in these cases, to activate more detailed and computationally intensive analysis algorithms on demand. In this work, we present a comprehensive framework for real-time automatic classification of normal and abnormal heartbeats, targeting embedded and resource-constrained WBSNs. In particular, we provide a comparative analysis of different strategies to reduce the dimensionality of the heartbeat representation, and therefore the required computational effort. We then combine these techniques with a neuro-fuzzy classification strategy, which effectively discerns normal and pathological heartbeats with minimal run time and memory overhead. We show that, by performing a detailed analysis only on the heartbeats that our classifier identifies as abnormal, a WBSN system can drastically reduce its overall energy consumption. Finally, we assess the choice of neuro-fuzzy classification by comparing its performance and workload with other state-of-the-art strategies. Experimental results using the MIT-BIH Arrhythmia database show energy savings of as much as 60% in the signal processing stage, and 63% in the subsequent wireless transmission, when a neuro-fuzzy classification structure is employed, coupled with a dimensionality reduction technique based on random projections.
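A minimal sketch of the random-projection idea: a fixed Gaussian matrix compresses each heartbeat window before classification. A random forest stands in for the paper's neuro-fuzzy classifier, and all shapes and data are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stand-in classifier

rng = np.random.default_rng(0)
n_beats, beat_len, k = 1000, 180, 24           # 180-sample beats -> 24 dims
X = rng.standard_normal((n_beats, beat_len))   # placeholder heartbeat windows
y = rng.integers(0, 2, n_beats)                # 0 = normal, 1 = abnormal

# Random projection: one fixed d x k Gaussian matrix compresses every beat,
# so the node only ever computes a single cheap matrix product per beat.
P = rng.standard_normal((beat_len, k)) / np.sqrt(k)
X_small = X @ P

clf = RandomForestClassifier(n_estimators=50).fit(X_small[:800], y[:800])
print(clf.score(X_small[800:], y[800:]))
```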
Subjects
Cardiac Arrhythmias/diagnosis, Computer Communication Networks, Computer-Assisted Diagnosis/methods, Ambulatory Electrocardiography/methods, Automated Pattern Recognition/methods, Wireless Technology, Cardiac Arrhythmias/physiopathology, Early Diagnosis, Fuzzy Logic, Heart Rate, Humans, Neural Networks (Computer), Reproducibility of Results, Sensitivity and Specificity
ABSTRACT
Irregular sampling of time series in electronic health records (EHRs) is one of the main challenges for developing machine learning models. Additionally, the pattern of missing values in certain clinical variables is not at random but depends on the decisions of clinicians and the state of the patient. Point processes provide a mathematical framework for analyzing event-sequence data consistent with irregular sampling patterns. Our model, TEE4EHR, is a transformer event encoder (TEE) with a point process loss that encodes the pattern of laboratory tests in EHRs. The utility of our TEE has been investigated on various benchmark event-sequence datasets. Additionally, we conduct experiments on two real-world EHR databases to provide a more comprehensive evaluation of our model. First, in a self-supervised learning approach, the TEE is jointly learned with an existing attention-based deep neural network, yielding superior performance in negative log-likelihood and future event prediction. We also propose an algorithm for aggregating attention weights to reveal interactions between events. Second, we transfer and freeze the learned TEE for the downstream task of outcome prediction, where it outperforms state-of-the-art models for handling irregularly sampled time series. Our results demonstrate that this approach can improve representation learning in EHRs and be useful for clinical prediction tasks.
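For context, the point-process loss family referred to here scores an intensity function against observed event times. A minimal sketch with a deterministic intensity (TEE4EHR instead learns a history-dependent intensity with a transformer; everything below is illustrative):

```python
import numpy as np

def point_process_nll(event_times, intensity_fn, T, n_grid=1000):
    """Negative log-likelihood of an inhomogeneous Poisson process:
    NLL = -(sum_i log lam(t_i)) + integral_0^T lam(t) dt.
    intensity_fn is any positive function of time, for illustration."""
    lam_events = intensity_fn(np.asarray(event_times, dtype=float))
    grid = np.linspace(0.0, T, n_grid)
    integral = np.trapz(intensity_fn(grid), grid)  # numerical integral term
    return -np.sum(np.log(lam_events)) + integral

# Events cluster early, so a decaying intensity gets a lower NLL
events = [0.1, 0.3, 0.5, 4.0]
print(point_process_nll(events, lambda t: 2.0 * np.exp(-t) + 0.1, T=5.0))
print(point_process_nll(events, lambda t: np.full_like(t, 0.9), T=5.0))
```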
Subjects
Electronic Health Records, Humans, Neural Networks (Computer), Machine Learning, Algorithms, Factual Databases, Deep Learning
ABSTRACT
Accurate extraction of heart rate from photoplethysmography (PPG) signals remains challenging due to motion artifacts and signal degradation. Although deep learning methods trained as a purely data-driven inference problem offer promising solutions, they often underutilize existing knowledge from the medical and signal-processing communities. In this paper, we address three shortcomings of deep learning models: motion artifact removal, degradation assessment, and physiologically plausible analysis of the PPG signal. We propose KID-PPG, a knowledge-informed deep learning model that integrates expert knowledge through adaptive linear filtering, deep probabilistic inference, and data augmentation. We evaluate KID-PPG on the PPGDalia dataset, achieving an average mean absolute error of 2.85 beats per minute, surpassing existing reproducible methods. Our results demonstrate a significant performance improvement in heart rate tracking through the incorporation of prior knowledge into deep learning models. This approach shows promise for enhancing various biomedical applications by incorporating existing expert knowledge into deep learning models.
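A minimal sketch of the adaptive-linear-filtering idea, using an LMS loop with an accelerometer channel as the motion reference; KID-PPG's actual filtering stage may differ, and the tap count and step size below are assumptions:

```python
import numpy as np

def lms_artifact_removal(ppg, acc, n_taps=16, mu=1e-3):
    """Adaptive linear filtering: estimate the motion-correlated part
    of the PPG from an accelerometer reference and subtract it. The
    step size mu must be small relative to the reference power for
    the weight update to stay stable."""
    w = np.zeros(n_taps)
    clean = np.copy(ppg)
    for n in range(n_taps, len(ppg)):
        x = acc[n - n_taps:n][::-1]      # most recent reference samples
        artifact_est = w @ x             # current motion-artifact estimate
        e = ppg[n] - artifact_est        # error = motion-free PPG estimate
        w += 2 * mu * e * x              # LMS weight update
        clean[n] = e
    return clean
```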
ABSTRACT
Epilepsy is a highly prevalent chronic neurological disorder with a great negative impact on patients' daily lives. Despite this, there is still no adequate technological support to enable epilepsy detection and continuous outpatient monitoring in everyday life. Hyperdimensional (HD) computing is a promising method for epilepsy detection via wearable devices, characterized by a simpler learning process and lower memory requirements compared to other methods. In this work, we demonstrate additional ways in which HD computing, and the manner in which its models are built and stored, can be used to better understand, compare, and create more advanced machine learning models for epilepsy detection. These possibilities are not feasible with other state-of-the-art models, such as random forests or neural networks. We compare inter-subject model similarity of different classes (seizure and non-seizure), study the process of creating general models from personal ones, and finally propose a method of combining personal and general models to create hybrid models, which results in improved epilepsy detection performance. We also tested knowledge transfer between models trained on two different datasets. The insights attained are highly interesting not only from an engineering perspective, to create better models for wearables, but also from a neurological perspective, to better understand individual epilepsy patterns.
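A toy sketch of how bipolar hypervector prototypes can be bundled into general and hybrid models; the dimensionality, weighting, and data are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

D = 10000                                   # hypervector dimensionality
rng = np.random.default_rng(7)

def bundle(vectors):
    """Majority vote over bipolar (+1/-1) hypervectors."""
    return np.sign(np.sum(vectors, axis=0) + 1e-9)

# One (illustrative) personal seizure prototype per subject
personal_models = [np.sign(rng.standard_normal(D)) for _ in range(5)]

# General model: bundle all personal prototypes together
general = bundle(personal_models)

# Hybrid model: weight a subject's own prototype against the general one
hybrid = bundle([personal_models[0], personal_models[0], general])

def similarity(a, b):                       # cosine similarity on bipolar vectors
    return float(a @ b) / D

print(similarity(personal_models[0], general), similarity(personal_models[0], hybrid))
```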
Subjects
Epilepsy, Wearable Electronic Devices, Humans, Epilepsy/diagnosis, Seizures/diagnosis, Neural Networks (Computer), Machine Learning, Electroencephalography
ABSTRACT
The study of the functioning and responses of Antarctica to the current climate change scenario is a priority and a challenge for the scientific community aiming to predict and mitigate impacts at regional and global scales. Due to the difficulty of obtaining aerial data in such an extreme, remote, and difficult-to-reach region of the planet, the development of remote sensing techniques with Unmanned Aerial Vehicles (UAVs) has revolutionized polar research. ShetlandsUAVmetry comprises original datasets collected by UAVs during the Spanish Antarctic Campaign 2021-2022 (January to March 2022), along with the photogrammetric products resulting from their processing. It includes data recorded during twenty-eight distinct UAV flights at various study sites on Deception and Livingston islands (South Shetland Islands, Antarctica) and consists of a total of 15,691 high-resolution optical RGB captures. This dataset is also accompanied by additional associated files that facilitate its use and accessibility. It is publicly accessible and can be downloaded from the figshare data repository.
ABSTRACT
BACKGROUND AND OBJECTIVE: Cough audio signal classification is a potentially useful tool in screening for respiratory disorders, such as COVID-19. Since it is dangerous to collect data from patients with contagious diseases, many research teams have turned to crowdsourcing to quickly gather cough sound data. The COUGHVID dataset enlisted expert physicians to annotate and diagnose the underlying diseases present in a limited number of recordings. However, this approach suffers from potential cough mislabeling, as well as disagreement between experts. METHODS: In this work, we use a semi-supervised learning (SSL) approach, based on audio signal processing tools and interpretable machine learning models, to improve the labeling consistency of the COUGHVID dataset for 1) COVID-19 versus healthy cough sound classification, 2) distinguishing wet from dry coughs, and 3) assessing cough severity. First, we leverage SSL expert knowledge aggregation techniques to overcome the labeling inconsistencies and label sparsity in the dataset. Next, our SSL approach is used to identify a subsample of re-labeled COUGHVID audio samples that can be used to train or augment future cough classifiers. RESULTS: The consistency of the re-labeled COVID-19 and healthy data is demonstrated by its high degree of inter-class feature separability: 3x higher than that of the user-labeled data. Similarly, the SSL method increases this separability by 11.3x for cough type and 5.1x for severity classifications. Furthermore, the spectral differences in the user-labeled audio segments are amplified in the re-labeled data, resulting in significantly different power spectral densities between healthy and COVID-19 coughs in the 1-1.5 kHz range (p = 1.2 × 10⁻⁶⁴), which demonstrates both the increased consistency of the new dataset and its explainability from an acoustic perspective. Finally, we demonstrate how the re-labeled dataset can be used to train a COVID-19 classifier, achieving an AUC of 0.797. CONCLUSIONS: We propose an SSL expert knowledge aggregation technique for the field of cough sound classification for the first time, and demonstrate how it can be used to combine the medical knowledge of multiple experts in an explainable fashion, thus providing abundant, consistent data for cough classification tasks.
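One simple way to quantify inter-class feature separability is a per-feature Fisher discriminant ratio; the sketch below is illustrative and may differ from the paper's exact definition, and the feature matrices are placeholders:

```python
import numpy as np

def fisher_ratio(x_a, x_b):
    """Per-feature Fisher discriminant ratio: squared between-class
    mean distance over the summed within-class variances."""
    x_a, x_b = np.asarray(x_a), np.asarray(x_b)
    num = (x_a.mean(axis=0) - x_b.mean(axis=0)) ** 2
    den = x_a.var(axis=0) + x_b.var(axis=0) + 1e-12
    return num / den

rng = np.random.default_rng(1)
covid = rng.normal(1.0, 1.0, size=(100, 8))    # placeholder feature matrices
healthy = rng.normal(0.0, 1.0, size=(120, 8))
print(fisher_ratio(covid, healthy).mean())     # higher = more separable
```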
Subjects
COVID-19, Crowdsourcing, Humans, Cough/diagnosis, COVID-19/diagnosis, Acoustics, Algorithms
ABSTRACT
Epilepsy is a chronic neurological disorder that affects a significant portion of the human population and imposes serious risks in daily life. Despite advances in machine learning and IoT, small, non-stigmatizing wearable devices for continuous monitoring and detection in outpatient environments are not yet widely available. Part of the reason is the complexity of epilepsy itself, including highly imbalanced data, its multimodal nature, and highly subject-specific signatures. Another problem, however, is the heterogeneity of methodological approaches in research, leading to slower progress, difficulty in comparing results, and low reproducibility. Therefore, this article identifies a wide range of methodological decisions that must be made and reported when training and evaluating the performance of epilepsy detection systems. We characterize the influence of individual choices using a typical ensemble random-forest model and the publicly available CHB-MIT database, providing a broader picture of each decision and giving good-practice recommendations, based on our experience, where possible.
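One of the central methodological decisions discussed is the cross-validation split. A minimal sketch of subject-wise (leave-one-subject-out) evaluation with placeholder features, which avoids the optimistic bias of window-level random splits:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 20))        # placeholder EEG window features
y = rng.integers(0, 2, 600)               # seizure / non-seizure labels
subjects = np.repeat(np.arange(6), 100)   # 6 subjects, 100 windows each

# Subject-wise splits: no subject contributes to both train and test.
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = RandomForestClassifier(n_estimators=100).fit(X[train_idx], y[train_idx])
    acc = clf.score(X[test_idx], y[test_idx])
    print(f"held-out subject {subjects[test_idx][0]}: acc={acc:.2f}")
```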
Subjects
Epilepsy, Wearable Electronic Devices, Humans, Reproducibility of Results, Electroencephalography/methods, Seizures/diagnosis, Epilepsy/diagnosis
ABSTRACT
BACKGROUND AND OBJECTIVE: Event-based analog-to-digital converters allow for sparse bio-signal acquisition, enabling local sub-Nyquist sampling frequencies. However, aggressive event selection can cause the loss of important bio-markers that are not recoverable with standard interpolation techniques. In this work, we leverage the self-similarity of the electrocardiogram (ECG) signal to recover missing features in event-based sampled ECG signals, dynamically selecting patient-representative templates together with a novel dynamic time warping algorithm to infer the morphology of event-based sampled heartbeats. METHODS: We acquire a set of uniformly sampled heartbeats and use a graph-based clustering algorithm to define representative templates for the patient. Then, for each event-based sampled heartbeat, we select the morphologically nearest template and reconstruct the heartbeat with piece-wise linear deformations of the selected template, according to a novel dynamic time warping algorithm that matches events to template segments. RESULTS: Synthetic tests on a standard normal sinus rhythm dataset, composed of approximately 1.8 million normal heartbeats, show a substantial performance improvement over standard resampling techniques. In particular, compared to classic linear resampling, we show an improvement in P-wave detection of up to 10 times, an improvement in T-wave detection of up to three times, and a 30% improvement in the dynamic time warping morphological distance. CONCLUSION: In this work, we have developed an event-based processing pipeline that leverages signal self-similarity to reconstruct event-based sampled ECG signals. Synthetic tests show clear advantages over classical resampling techniques.
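A simplified sketch of template-based reconstruction from sparse event samples: choose the nearest template, then deform it so it passes through the events. The published pipeline matches events to template segments with a dedicated DTW variant; the linear correction below is an illustrative stand-in:

```python
import numpy as np

def nearest_template(events, templates):
    """Pick the template whose values at the event positions are
    closest (L2) to the event amplitudes. events = (indices, values),
    with indices sorted ascending within the heartbeat."""
    idx, val = events
    errs = [np.sum((tpl[idx] - val) ** 2) for tpl in templates]
    return templates[int(np.argmin(errs))]

def reconstruct(events, template):
    """Deform the chosen template so it passes through the sparse
    event samples, interpolating the amplitude correction linearly
    between events (a simplification of the segment-matching DTW)."""
    idx, val = events
    t = np.arange(len(template))
    correction = np.interp(t, idx, val - template[idx])
    return template + correction
```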
Subjects
Electrocardiography, Computer-Assisted Signal Processing, Humans, Electrocardiography/methods, Cardiac Arrhythmias, Algorithms, Heart Rate
ABSTRACT
Counting the number of times a patient coughs per day is an essential biomarker in determining treatment efficacy for novel antitussive therapies and personalizing patient care. Automatic cough counting tools must provide accurate information while running on a lightweight, portable device that protects the patient's privacy. Several devices and algorithms have been developed for cough counting, but many use only error-prone audio signals, rely on offline processing that compromises data privacy, or utilize processing- and memory-intensive neural networks that require more hardware resources than can fit on a wearable device. Therefore, there is a need for wearable devices that employ multimodal sensors and run accurate, privacy-preserving, automatic cough counting algorithms directly on the device in an edge Artificial Intelligence (edge-AI) fashion. To advance this research field, we contribute the first publicly accessible cough counting dataset of multimodal biosignals. The database contains nearly 4 hours of biosignal data, with both acoustic and kinematic modalities, covering 4,300 annotated cough events from 15 subjects. Furthermore, a variety of non-cough sounds and motion scenarios mimicking daily life activities are also present, which the research community can use to accelerate machine learning (ML) algorithm development. A technical validation of the dataset reveals that it represents a wide variety of signal-to-noise ratios, as can be expected in a real-life use case, as well as consistency across experimental trials. Finally, to demonstrate the usability of the dataset, we train a simple cough vs. non-cough signal classifier that obtains 91% sensitivity, 92% specificity, and 80% precision on unseen test subject data. Such edge-friendly AI algorithms have the potential to provide continuous ambulatory monitoring for the many patients with chronic cough.
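A minimal sketch of the kind of multimodal (acoustic plus kinematic) classifier the dataset enables, with placeholder features and a random forest; all shapes, names, and numbers are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
audio_feats = rng.standard_normal((500, 13))   # placeholder acoustic features
imu_feats = rng.standard_normal((500, 6))      # placeholder kinematic features
y = rng.integers(0, 2, 500)                    # cough / non-cough labels

X = np.hstack([audio_feats, imu_feats])        # simple early-fusion of modalities
clf = RandomForestClassifier(n_estimators=100).fit(X[:400], y[:400])
tn, fp, fn, tp = confusion_matrix(y[400:], clf.predict(X[400:])).ravel()
print(f"sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f}")
```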
Subjects
Artificial Intelligence, Cough, Humans, Cough/diagnosis, Algorithms, Neural Networks (Computer), Sound
ABSTRACT
OBJECTIVE: Continuous monitoring of biosignals via wearable sensors has quickly expanded in the medical and wellness fields. At rest, automatic detection of vital parameters is generally accurate. However, in conditions such as high-intensity exercise, sudden physiological changes alter the signals, compromising the robustness of standard algorithms. METHODS: Our method, called BayeSlope, is based on unsupervised learning, Bayesian filtering, and non-linear normalization to enhance and correctly detect the R peaks according to their expected positions in the ECG. Furthermore, as BayeSlope is computationally heavy and can drain the device battery quickly, we propose an online design that adapts its robustness to sudden physiological changes and its complexity to the heterogeneous resources of modern embedded platforms. This method combines BayeSlope with a lightweight algorithm, executed on cores with different capabilities, to reduce energy consumption while preserving accuracy. RESULTS: BayeSlope achieves an F1 score of 99.3% in experiments during intense cycling exercise with 20 subjects. Additionally, the online adaptive process achieves an F1 score of 99% across five different exercise intensities, with a total energy consumption of 1.55±0.54 mJ. CONCLUSION: We propose a highly accurate and robust method, and a complete energy-efficient implementation on a modern ultra-low-power embedded platform, to improve R peak detection in challenging conditions such as high-intensity exercise. SIGNIFICANCE: The experiments show that BayeSlope outperforms state-of-the-art QRS detectors by up to 8.4% in F1 score, while our online adaptive method can reach energy savings of up to 38.7% on modern heterogeneous wearable platforms.
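A toy rendering of the core idea of weighting a slope-based peak score by a Bayesian prior on the expected R-peak position; it is not the published BayeSlope algorithm, and all parameters are assumptions:

```python
import numpy as np

def next_r_peak(ecg, last_peak, rr_est, fs, sigma_s=0.1):
    """Score each sample by its slope magnitude weighted by a Gaussian
    prior centred on the expected next R position (last peak + RR
    estimate); return the index of the best-scoring sample."""
    t = np.arange(len(ecg)) / fs
    expected = (last_peak / fs) + rr_est          # expected peak time (s)
    prior = np.exp(-0.5 * ((t - expected) / sigma_s) ** 2)
    slope = np.abs(np.gradient(ecg))              # crude slope enhancement
    return int(np.argmax(slope * prior))
```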
Subjects
Computer-Assisted Signal Processing, Wearable Electronic Devices, Humans, Bayes Theorem, Algorithms, Electrocardiography/methods
ABSTRACT
Recent years have seen growing interest in leveraging deep learning models for monitoring epilepsy patients based on electroencephalographic (EEG) signals. However, these approaches often exhibit poor generalization when applied outside of the setting in which the training data was collected. Furthermore, manual labeling of EEG signals is a time-consuming process requiring expert analysis, making the fine-tuning of patient-specific models to new settings a costly proposition. In this work, we propose the Maximum-Mean-Discrepancy Decoder (M2D2) for automatic temporal localization and labeling of seizures in long EEG recordings to assist medical experts. We show that M2D2 achieves F1 scores of 76.0% and 70.4% for temporal localization when evaluated on EEG data gathered in a different clinical setting than the training data. The results demonstrate that M2D2 yields substantially higher generalization performance than other state-of-the-art deep learning-based approaches.
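The statistic at the heart of M2D2-style domain comparison is the maximum mean discrepancy. A minimal (biased) RBF-kernel estimator, with placeholder feature matrices standing in for the two clinical settings:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel:
    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    Biased estimator (includes diagonal terms), fine for a sketch."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1, (200, 16))   # e.g. training-hospital features
target = rng.normal(0.5, 1, (200, 16))   # e.g. new clinical setting
print(mmd_rbf(source, target))           # larger = bigger domain shift
```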
Subjects
Epilepsy, Humans, Seizures, Electroencephalography/methods, Brain, Algorithms
ABSTRACT
Wearable and unobtrusive monitoring and prediction of epileptic seizures has the potential to significantly increase the quality of life of patients, but remains an unreached goal due to the challenges of real-time detection and wearable device design. Hyperdimensional (HD) computing has evolved in recent years as a new promising machine learning approach, especially for wearable applications. For epilepsy detection, however, standard HD computing does not perform at the level of other state-of-the-art algorithms. This could be due to the inherent complexity of seizures and their signatures in different biosignals, such as the electroencephalogram (EEG), their highly personalized nature, and the imbalance between seizure and non-seizure instances. In the literature, different strategies for improved learning with HD computing have been proposed, such as iterative (multi-pass) learning, multi-centroid learning, and learning with sample weights ("OnlineHD"). Yet, most of them have not been tested on the challenging task of epileptic seizure detection, and it remains unclear whether they can raise HD computing performance to the level of current state-of-the-art algorithms for wearable devices, such as random forests. Thus, in this paper, we implement different learning strategies and assess their performance, individually and in combination, in terms of detection performance and memory and computational requirements. Results show that the best-performing algorithm, a combination of multi-centroid and multi-pass learning, can indeed reach the performance of the random forest model on a highly unbalanced dataset imitating a real-life epileptic seizure detection application.
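A minimal sketch of iterative (multi-pass) HD learning: class prototypes are refined by adding misclassified hypervectors to the correct class and subtracting them from the wrongly predicted one. This is a common formulation; the paper's exact update rules may differ:

```python
import numpy as np

def multi_pass_train(X_hv, y, n_passes=5):
    """Iterative HD learning over encoded hypervectors X_hv (2-D
    numpy array) with labels y: start from summed class prototypes,
    then correct them on every misclassified sample, each pass."""
    classes = np.unique(y)
    proto = {c: X_hv[y == c].sum(axis=0).astype(float) for c in classes}
    for _ in range(n_passes):
        for hv, label in zip(X_hv, y):
            pred = max(classes, key=lambda c: np.dot(proto[c], hv)
                       / (np.linalg.norm(proto[c]) + 1e-9))
            if pred != label:
                proto[label] += hv          # reinforce the correct class
                proto[pred] -= hv           # weaken the confused class
    return proto
```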
Subjects
Epilepsy, Seizures, Algorithms, Electroencephalography/methods, Epilepsy/diagnosis, Humans, Machine Learning, Seizures/diagnosis
ABSTRACT
Previous studies have demonstrated that, up to a certain degree, Convolutional Neural Networks (CNNs) can tolerate arithmetic approximations. Nonetheless, perturbations must be applied judiciously, to constrain their impact on accuracy. This is a challenging task, since the implementation of inexact operators is often decided at design time, when the application and its robustness profile are unknown, posing the risk of over-constraining or over-provisioning the hardware. Bridging this gap, we propose a two-phase strategy. Our framework first optimizes the target CNN model, reducing the bitwidth of weights and activations and enhancing error resiliency, so that inexact operations can be performed as frequently as possible. Then, it selectively assigns CNN layers to exact or inexact hardware based on a sensitivity metric. Our results show that, within a 5% accuracy degradation, our methodology, including a highly inexact multiplier design, can reduce the cost of MAC operations in CNN inference up to 83.6% compared to state-of-the-art optimized exact implementations.
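A toy sketch of the layer-sensitivity step: inexact multiplication is modelled as multiplicative weight noise, and the induced accuracy drop ranks layers as candidates for inexact hardware. The tiny network, noise model, and metric are illustrative assumptions, not the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny fixed two-layer ReLU network on synthetic data; the exact
# network's own predictions serve as the reference labels.
X = rng.standard_normal((500, 10))
W1, W2 = rng.standard_normal((10, 16)), rng.standard_normal((16, 2))
ref = ((X @ W1).clip(0) @ W2).argmax(1)

def accuracy(W1_, W2_):
    return (((X @ W1_).clip(0) @ W2_).argmax(1) == ref).mean()

def layer_sensitivity(layer, rel_err=0.05, trials=20):
    """Mean accuracy drop when one layer's multiplications are made
    inexact (multiplicative noise on its weights). Low-sensitivity
    layers are the candidates for inexact hardware."""
    drops = []
    for _ in range(trials):
        n1 = 1 + rel_err * rng.standard_normal(W1.shape) * (layer == 1)
        n2 = 1 + rel_err * rng.standard_normal(W2.shape) * (layer == 2)
        drops.append(1.0 - accuracy(W1 * n1, W2 * n2))
    return float(np.mean(drops))

print(layer_sensitivity(1), layer_sensitivity(2))
```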
ABSTRACT
Long-term monitoring of patients with epilepsy presents a challenging problem from the engineering perspective of real-time detection and wearable device design. It requires new solutions that allow continuous unobstructed monitoring and reliable detection and prediction of seizures. High variability in electroencephalogram (EEG) patterns exists among people, brain states, and time instances during seizures, but also during non-seizure periods. This makes epileptic seizure detection very challenging, especially if data is grouped under only seizure (ictal) and non-seizure (inter-ictal) labels. Hyperdimensional (HD) computing, a novel machine learning approach, is a promising tool. However, it has certain limitations when the data shows high intra-class variability. Therefore, in this work, we propose a novel semi-supervised learning approach based on multi-centroid HD computing. The multi-centroid approach allows several prototype vectors to represent the seizure and non-seizure states, which leads to significantly improved performance compared to a simple single-centroid HD model. Further, real-life data imbalance poses an additional challenge, and performance reported on balanced subsets of data is likely to be overestimated. Thus, we test our multi-centroid approach with three different dataset balancing scenarios, showing that the performance improvement is higher for the less balanced dataset. More specifically, up to 14% improvement is achieved on an unbalanced test set with 10 times more non-seizure than seizure data. At the same time, the total number of sub-classes is not significantly increased compared to the balanced dataset. Thus, the proposed multi-centroid approach can be an important element in achieving high epilepsy detection performance with real-life data balance or during online learning, where seizures are infrequent.
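A minimal sketch of the multi-centroid idea: a training hypervector joins its best-matching centroid of the correct class, or spawns a new sub-class centroid when even the best match is too dissimilar. The threshold and update rule are illustrative choices, not the paper's:

```python
import numpy as np

def multi_centroid_fit(X_hv, y, sim_thresh=0.15):
    """Grow per-class centroid sets from encoded hypervectors X_hv
    with labels y: assign each sample to its most similar same-class
    centroid, or create a new sub-class centroid if the best cosine
    similarity falls below sim_thresh."""
    centroids, labels = [], []
    for hv, lab in zip(X_hv, y):
        sims = [hv @ c / (np.linalg.norm(c) * np.linalg.norm(hv))
                for c, cl in zip(centroids, labels) if cl == lab] or [-1.0]
        if max(sims) < sim_thresh:                     # too far: new sub-class
            centroids.append(hv.astype(float))
            labels.append(lab)
        else:                                          # accumulate into best match
            same = [j for j, cl in enumerate(labels) if cl == lab]
            centroids[same[int(np.argmax(sims))]] += hv
    return centroids, labels
```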
ABSTRACT
Epilepsy is one of the most prevalent paroxysmal neurological disorders. It is characterized by the occurrence of spontaneous seizures. About 1 in 3 patients has drug-resistant epilepsy, meaning their seizures cannot be controlled by medication. Automatic detection of epileptic seizures can substantially improve the patient's quality of life. To achieve a high-quality model, we have to collect data from various patients in a central server. However, sending the patient's raw data to this central server puts patient privacy at risk and consumes a significant amount of energy. To address these challenges, in this work, we have designed and evaluated a standard federated learning framework in the context of epileptic seizure detection using a deep learning-based approach, which operates across a cluster of machines. We evaluated the accuracy and performance of our proposed approach on the NVIDIA Jetson Nano Developer Kit using the EPILEPSIAE database, one of the largest public epilepsy datasets for seizure detection. Our proposed framework achieved a sensitivity of 81.25%, a specificity of 82.00%, and a geometric mean of 81.62%. It can be implemented on embedded platforms that complete the entire training process in 1.86 hours using 344.34 mAh of energy on a single battery charge. We also studied a personalized variant of federated learning, where each machine is responsible for training a deep neural network (DNN) to learn the discriminative electrocardiography (ECG) features of the epileptic seizures of the specific person monitored, based on its local data. In this context, the DNN benefits from a well-trained model without sharing the patient's raw data with a server or a central cloud repository. We observe in our results that personalized federated learning provides an increase in all performance metrics, with a sensitivity of 90.24%, a specificity of 91.58%, and a geometric mean of 90.90%.
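For illustration, the aggregation step of standard federated learning (FedAvg): clients send model parameters, never raw signals, and the server averages them weighted by local dataset size. Shapes and data below are placeholders:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg round: average client model parameters (lists of
    arrays, one per layer) weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Three devices/hospitals with different amounts of local data
rng = np.random.default_rng(0)
clients = [[rng.standard_normal((4, 4)), rng.standard_normal(4)] for _ in range(3)]
global_model = fed_avg(clients, client_sizes=[100, 250, 50])
print(global_model[0].shape, global_model[1].shape)
```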
Subjects
Epilepsy, Quality of Life, Algorithms, Electroencephalography, Epilepsy/diagnosis, Humans, Neural Networks (Computer), Seizures/diagnosis
ABSTRACT
OBJECTIVE: Cognitive workload monitoring (CWM) can enhance human-machine interaction by supporting task-execution assistance that considers the operator's cognitive state. Therefore, we propose a machine learning design methodology and a data processing strategy to enable CWM on resource-constrained wearable devices. METHODS: Our CWM solution is built upon edge computing on a simple wearable system with only four peripheral channels of electroencephalography (EEG). We assess our solution on experimental data from 24 volunteers. Moreover, to overcome the system's memory constraints, we adopt an optimization strategy for model size reduction and a multi-batch data processing scheme to optimize the RAM footprint. Finally, we implement our data processing strategy on a state-of-the-art wearable platform and assess its execution and the system's battery life. RESULTS: We achieve an accuracy of 74.5% and a 74.0% geometric mean between sensitivity and specificity for CWM classification on unseen data. Moreover, the proposed model optimization strategy generates a 27.5x smaller model compared to the one generated with default parameters, and the multi-batch data processing scheme reduces the RAM footprint by 14x compared to single-batch data processing. Finally, our algorithm uses only 1.28% of the available processing time, allowing our system to achieve 28.5 hours of battery life. CONCLUSION: We provide a reliable and optimized CWM solution using wearable devices, enabling more than a day of operation on a single battery charge. SIGNIFICANCE: The proposed methodology enables real-time data processing on resource-constrained devices and supports real-time wearable monitoring based on EEG for applications such as CWM in human-machine interaction.
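A minimal sketch of the multi-batch processing idea: computing features chunk by chunk bounds peak RAM by one batch rather than the whole analysis window. The batch length and feature function are illustrative assumptions:

```python
import numpy as np

def process_in_batches(eeg_stream, batch_len, feature_fn):
    """Compute features over fixed-size chunks so peak RAM is bounded
    by one batch instead of the entire window."""
    out = []
    for start in range(0, len(eeg_stream) - batch_len + 1, batch_len):
        out.append(feature_fn(eeg_stream[start:start + batch_len]))
    return np.asarray(out)

x = np.random.randn(256 * 60)                  # one minute at 256 Hz (toy signal)
feats = process_in_batches(x, batch_len=256, feature_fn=lambda b: b.std())
print(feats.shape)                             # one feature per 1 s batch
```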