Results 1 - 2 of 2
1.
bioRxiv ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38562835

ABSTRACT

Deep learning methods are increasingly being applied to raw electroencephalogram (EEG) data. However, if these models are to be used in clinical or research contexts, methods to explain them must be developed, and for research use in particular, methods for combining explanations across large numbers of models are needed to counteract the inherent randomness of existing training approaches. Model visualization-based explainability methods for EEG structure a model's architecture so that its extracted features can be characterized, and they have the potential to offer highly useful insights into the patterns those features uncover. Nevertheless, such methods remain underexplored for multichannel EEG, and approaches for combining their explanations across folds have not yet been developed. In this study, we present two novel convolutional neural network-based architectures and apply them to automated major depressive disorder (MDD) diagnosis. Our models obtain slightly lower classification performance than a baseline architecture; however, across 50 training folds, their explanations indicate that individuals with MDD exhibit higher β power, potentially higher δ power, and higher brain-wide correlation that is most strongly represented within the right hemisphere. This study provides multiple key insights into MDD and represents a significant step forward for explainable deep learning applied to raw EEG. We hope that it will inspire future efforts that eventually enable explainable EEG deep learning models to contribute both to clinical care and to novel medical research discoveries.
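
The abstract does not spell out the architectures or the cross-fold aggregation procedure, so the Python sketch below only illustrates the general pattern it describes: a small 1D CNN over raw multichannel EEG, a plain gradient-saliency explanation, and an average of saliency maps over independently trained fold models to damp run-to-run randomness. The channel count, window length, layer sizes, and the choice of gradient saliency are all assumptions for illustration, not the paper's method.

    import torch
    import torch.nn as nn

    # Hypothetical dimensions: 19-channel montage, 5 s windows at 200 Hz.
    N_CHANNELS, N_SAMPLES = 19, 1000

    class EEGConvNet(nn.Module):
        """Small 1D CNN over raw multichannel EEG (illustrative only)."""
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(N_CHANNELS, 32, kernel_size=25, stride=2),
                nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=10, stride=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):  # x: (batch, channels, samples)
            return self.classifier(self.features(x).squeeze(-1))

    def saliency(model, x, target=1):
        """Gradient of the target-class logit w.r.t. the raw input."""
        x = x.clone().requires_grad_(True)
        model(x)[:, target].sum().backward()
        return x.grad.abs()

    # Stand-ins for 50 independently trained fold models; averaging their
    # explanations counteracts the randomness of individual training runs.
    models = [EEGConvNet().eval() for _ in range(50)]
    x = torch.randn(8, N_CHANNELS, N_SAMPLES)
    mean_saliency = torch.stack([saliency(m, x) for m in models]).mean(dim=0)

Averaging per-channel saliency like this is one way such a pipeline could localize effects (e.g., toward right-hemisphere channels); the paper's actual visualization method may differ.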

2.
bioRxiv ; 2024 Feb 13.
Article in English | MEDLINE | ID: mdl-38405889

ABSTRACT

The diagnosis of schizophrenia (SZ) can be challenging due to its diverse symptom presentation. As such, many studies have sought to identify diagnostic biomarkers of SZ using explainable machine learning methods. However, the generalizability of the biomarkers identified in many machine learning-based studies is highly questionable, given that most studies analyze explanations from only a small number of models. In this study, we present (1) a novel feature interaction-based explainability approach and (2) several new approaches for summarizing multi-model explanations. We implement our approach within the context of electroencephalogram (EEG) spectral power data. We further analyze both training and test set explanations with the goal of extracting generalizable insights from the models. Importantly, our analyses identify effects of SZ upon the α, β, and θ frequency bands, the left hemisphere of the brain, and interhemispheric interactions across a majority of folds. We hope that our analysis will provide helpful insights into SZ and inspire the development of robust approaches for identifying neuropsychiatric disorder biomarkers from explainable machine learning models.
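
The abstract likewise leaves the interaction method unspecified. One generic way to probe pairwise interactions among spectral-power features is joint permutation: compare the performance drop from permuting two features together against the sum of their individual drops. The synthetic data, the logistic-regression stand-in model, and the scoring rule in this Python sketch are all assumptions for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 200 subjects x 30 (channel, band) power features.
    X = rng.normal(size=(200, 30))
    y = rng.integers(0, 2, size=200)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    base = accuracy_score(y, model.predict(X))

    def perm_drop(cols):
        """Accuracy lost when the given feature columns are shuffled."""
        Xp = X.copy()
        for c in cols:
            Xp[:, c] = rng.permutation(Xp[:, c])
        return base - accuracy_score(y, model.predict(Xp))

    def interaction(i, j):
        """Crude pairwise score: joint effect beyond the two marginal effects."""
        return perm_drop([i, j]) - perm_drop([i]) - perm_drop([j])

Repeating this per fold and, for example, counting the fraction of folds in which a pair's score clears a threshold is one way to produce the majority-of-folds summaries the abstract describes.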
