Results 1 - 5 of 5
1.
IEEE J Biomed Health Inform ; 27(2): 968-979, 2023 02.
Article in English | MEDLINE | ID: mdl-36409802

ABSTRACT

Generative Adversarial Networks (GANs) are a revolutionary innovation in machine learning that enables the generation of artificial data. Artificial data synthesis is especially valuable in the medical field, where collecting and annotating real data is difficult due to privacy issues, limited access to experts, and cost. While adversarial training has led to significant breakthroughs in computer vision, biomedical research has not yet fully exploited the capabilities of generative models for data generation, or for more complex tasks such as biosignal modality transfer. We present a broad analysis of adversarial learning on biosignal data. Our study is the first in the machine learning community to focus on synthesizing 1D biosignal data using adversarial models. We consider three types of deep generative adversarial networks: a classical GAN, an adversarial autoencoder (AE), and a modality transfer GAN, individually designed for biosignal synthesis and modality transfer purposes. We evaluate these methods on multiple datasets covering different biosignal modalities, including phonocardiogram (PCG), electrocardiogram (ECG), vectorcardiogram, and 12-lead ECG. We follow subject-independent evaluation protocols, evaluating the proposed models' performance on completely unseen data to demonstrate generalizability. We achieve superior results in generating biosignals, specifically in conditional generation, by synthesizing realistic samples while preserving domain-relevant characteristics. We also demonstrate insightful results in biosignal modality transfer, generating expanded representations from fewer input leads and ultimately making clinical monitoring more convenient for the patient. Furthermore, the longer-duration ECGs we generate maintain clear ECG rhythmic regions, as verified using ad-hoc segmentation models.
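The three models above all build on the standard adversarial objective. Below is a minimal sketch of the classic (non-saturating) GAN losses operating on discriminator logits; this is a generic illustration, not the authors' architecture:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def discriminator_loss(real_logits, fake_logits):
    # Binary cross-entropy: real samples are labeled 1, generated samples 0.
    loss = sum(-math.log(sigmoid(r)) for r in real_logits)
    loss += sum(-math.log(1.0 - sigmoid(f)) for f in fake_logits)
    return loss / (len(real_logits) + len(fake_logits))

def generator_loss(fake_logits):
    # Non-saturating form: the generator is rewarded when the
    # discriminator labels its synthetic biosignals as real.
    return sum(-math.log(sigmoid(f)) for f in fake_logits) / len(fake_logits)
```

In the conditional setting mentioned in the abstract, a class label (e.g. normal vs. abnormal rhythm) is additionally fed to both networks so that sampling can be steered per class.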


Subjects
Biomedical Research, Deep Learning, Humans, Electrocardiography, Machine Learning, Privacy, Computer-Assisted Image Processing
2.
IEEE J Biomed Health Inform ; 26(2): 527-538, 2022 02.
Article in English | MEDLINE | ID: mdl-34314363

ABSTRACT

Recently, researchers in the biomedical community have introduced deep learning-based epileptic seizure prediction models using electroencephalograms (EEGs) that can anticipate an epileptic seizure by differentiating between the pre-ictal and interictal stages of the subject's brain. Despite having the appearance of a typical anomaly detection task, this problem is complicated by subject-specific characteristics in EEG data. Therefore, studies that investigate seizure prediction widely employ subject-specific models. However, this approach is not suitable in situations where a target subject has limited (or no) data for training. Subject-independent models can address this issue by learning to predict seizures from multiple subjects, and are therefore of greater value in practice. In this study, we propose a subject-independent seizure predictor using Geometric Deep Learning (GDL). In the first stage of our GDL-based method, we use graphs derived from physical connections in the EEG grid. We subsequently seek to synthesize subject-specific graphs using deep learning. The models proposed in both stages achieve state-of-the-art performance using a one-hour early seizure prediction window on two benchmark datasets (CHB-MIT-EEG: 95.38% with 23 subjects; Siena-EEG: 96.05% with 15 subjects). To the best of our knowledge, this is the first study to propose synthesizing subject-specific graphs for seizure prediction. Furthermore, through model interpretation we outline how this method can potentially contribute towards scalp-EEG-based seizure localization.
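The first-stage graphs described above can be derived directly from electrode geometry: connect electrode pairs that are physically close on the scalp. A toy sketch follows; the electrode names, 2D coordinates, and radius are illustrative assumptions, not values from the paper:

```python
import math

# Hypothetical flattened 2D scalp coordinates for a few 10-20 electrodes.
ELECTRODES = {
    "Fp1": (-0.3, 1.0), "Fp2": (0.3, 1.0),
    "F3": (-0.5, 0.5),  "F4": (0.5, 0.5),
    "C3": (-0.5, 0.0),  "C4": (0.5, 0.0),
}

def physical_graph(positions, radius=0.8):
    """Connect electrode pairs whose scalp distance is at most `radius`."""
    names = sorted(positions)
    edges = set()
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if math.dist(positions[a], positions[b]) <= radius:
                edges.add((a, b))
    return edges
```

The resulting edge set plays the role of the fixed, physically motivated adjacency on which a graph neural network can operate; the paper's second stage then learns subject-specific graphs instead of this fixed one.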


Subjects
Deep Learning, Algorithms, Electroencephalography/methods, Humans, Scalp, Seizures/diagnosis
3.
IEEE J Biomed Health Inform ; 25(6): 2162-2171, 2021 06.
Article in English | MEDLINE | ID: mdl-32997637

ABSTRACT

Traditionally, abnormal heart sound classification is framed as a three-stage process. The first stage segments the phonocardiogram to detect the fundamental heart sounds; features are then extracted, and classification is performed. Some researchers in the field argue that the segmentation step is an unwanted computational burden, whereas others embrace it as a prior step to feature extraction. When comparing the accuracies achieved by studies that segment heart sounds before analysis with those of studies that overlook that step, the question of whether to segment heart sounds before feature extraction remains open. In this study, we explicitly examine the importance of heart sound segmentation as a prior step for heart sound classification, and then apply the obtained insights to propose a robust classifier for abnormal heart sound detection. Furthermore, recognizing the pressing need for explainable Artificial Intelligence (AI) models in the medical domain, we also unveil hidden representations learned by the classifier using model interpretation techniques. Experimental results demonstrate that segmentation, which can be learned by the model, plays an essential role in abnormal heart sound classification. Our new classifier is also shown to be robust, stable and, most importantly, explainable, with an accuracy of almost 100% on the widely used PhysioNet dataset.
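As a concrete example of what a segmentation front end typically consumes, a short-time energy envelope is a common classical starting point for locating the fundamental heart sounds (S1/S2) in a PCG. This generic sketch is not the learned segmentation discussed in the paper:

```python
def energy_envelope(signal, frame, hop):
    """Short-time energy of a 1D signal: the peaks of this envelope
    roughly mark candidate S1/S2 locations in a phonocardiogram."""
    env = []
    for start in range(0, len(signal) - frame + 1, hop):
        window = signal[start:start + frame]
        env.append(sum(x * x for x in window) / frame)
    return env
```

Peak picking on this envelope (or a learned equivalent, as in the study) yields the segment boundaries that feed the feature-extraction stage.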


Assuntos
Aprendizado Profundo , Processamento de Sinais Assistido por Computador , Algoritmos , Inteligência Artificial , Fonocardiografia
4.
IEEE Trans Biomed Eng ; 68(6): 1978-1989, 2021 06.
Article in English | MEDLINE | ID: mdl-33338009

ABSTRACT

OBJECTIVE: When training machine learning models, we often assume that the training data and evaluation data are sampled from the same distribution. However, this assumption is violated when the model is evaluated on another unseen but similar database, even if that database contains the same classes. This problem is caused by domain shift and can be addressed using two approaches: domain adaptation and domain generalization. Put simply, domain adaptation methods can access data from unseen domains during training, whereas in domain generalization the unseen data is not available during training. Hence, domain generalization concerns models that perform well on inaccessible, domain-shifted data. METHOD: Our proposed domain generalization method represents an unseen domain using a set of known basis domains, after which we classify the unseen domain using classifier fusion. To demonstrate our system, we employ a collection of heart sound databases that contain normal and abnormal sounds (classes). RESULTS: Our proposed classifier fusion method achieves accuracy gains of up to 16% on four completely unseen domains. CONCLUSION: Recognizing the complexity induced by the inherent temporal nature of biosignal data, the two-stage method proposed in this study effectively simplifies the whole process of domain generalization while demonstrating good results on both the unseen domains and the adopted basis domains. SIGNIFICANCE: To the best of our knowledge, this is the first study to investigate domain generalization for biosignal data. Our proposed learning strategy can be used to effectively learn domain-relevant features while remaining aware of the class differences in the data.
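The METHOD stage can be illustrated schematically: each basis domain contributes a trained classifier, and their outputs are fused with weights expressing how closely the unseen domain resembles each basis domain. A minimal sketch follows; how the weights are estimated is the paper's actual contribution and is not reproduced here:

```python
def fuse_predictions(domain_probs, basis_weights):
    """Weighted fusion of per-basis-domain classifier outputs.

    domain_probs[i]  -- classifier i's probability of the 'abnormal' class
    basis_weights[i] -- how strongly the unseen recording resembles basis
                        domain i (assumed given; estimating these is the
                        hard part the study addresses)
    """
    total = sum(basis_weights)
    return sum(p * w for p, w in zip(domain_probs, basis_weights)) / total
```

With uniform weights this reduces to plain averaging; skewing the weights towards the closest basis domain is what lets the fused classifier cope with domain shift.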


Subjects
Heart Sounds, Machine Learning, Factual Databases
5.
Sensors (Basel) ; 19(20)2019 Oct 16.
Article in English | MEDLINE | ID: mdl-31623279

ABSTRACT

Recently, researchers in the area of biosensor-based human emotion recognition have used different types of machine learning models to recognize human emotions. However, most of these models still cannot recognize emotions with high classification accuracy while using only a limited number of biosensors. In the domain of machine learning, ensemble learning methods have been successfully applied to many real-world problems that require improved classification accuracy. Building on this, this research proposes an ensemble learning approach for developing a machine learning model that can recognize four major human emotions, namely anger, sadness, joy, and pleasure, from electrocardiogram (ECG) signals. For feature extraction, this analysis combines four ECG signal-based techniques: heart rate variability, empirical mode decomposition, within-beat analysis, and frequency spectrum analysis. The first three are well-known ECG-based feature extraction techniques from the literature, and the fourth is a novel method proposed in this study. The machine learning procedure of this investigation evaluates the performance of a set of well-known ensemble learners for emotion classification and further improves the classification results by using feature selection as a prior step to ensemble model training. Compared to the best-performing single-biosensor model in the literature, the developed ensemble learner achieves an accuracy gain of 10.77%. Furthermore, the developed model outperforms most multiple-biosensor emotion recognition models with a significantly higher classification accuracy gain.
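The final combination step of an ensemble like the one described can be as simple as majority voting over the base learners' predicted labels. This is a generic sketch, not the specific ensemble learners evaluated in the study:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine base-learner emotion labels by majority vote.

    In CPython, ties resolve to the label encountered first, since
    Counter.most_common preserves insertion order among equal counts.
    """
    return Counter(predictions).most_common(1)[0][0]
```

More elaborate fusions (e.g. weighting each base learner by its validation accuracy) follow the same pattern; the study additionally applies feature selection before training the base learners.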


Subjects
Electrocardiography, Emotions/physiology, Machine Learning, Algorithms, Electroencephalography, Heart Rate/physiology, Humans, Computer-Assisted Signal Processing, Support Vector Machine