Results 1 - 3 of 3
1.
Sensors (Basel) ; 21(16)2021 Aug 23.
Article in English | MEDLINE | ID: mdl-34451100

ABSTRACT

PROBLEM: An application of Explainable Artificial Intelligence methods to COVID CT-scan classifiers is presented. MOTIVATION: Classifiers may be exploiting spurious artifacts in dataset images to achieve high performance, and explainability techniques can help identify this issue. AIM: Several approaches were used in tandem to create a complete overview of the classifications. METHODOLOGY: The techniques used included GradCAM, LIME, RISE, Squaregrid, and direct gradient approaches (Vanilla, Smooth, Integrated). MAIN RESULTS: Among the deep neural network architectures evaluated for this image classification task, VGG16 was the most affected by biases towards spurious artifacts, while DenseNet was notably more robust against them. FURTHER IMPACTS: Results further show that, for DenseNet architectures, small differences in validation accuracy can cause drastic changes in explanation heatmaps, suggesting that minor accuracy variations may substantially alter the biases the networks learn. Notably, the strong performance metrics achieved by all these networks (accuracy, F1 score, and AUC all in the 80-90% range) could give users the erroneous impression that there is no bias; the analysis of the explanation heatmaps, however, reveals it.
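The abstract names several saliency methods; as a point of reference, below is a minimal Grad-CAM sketch in PyTorch showing the kind of class-activation heatmap such analyses inspect. The torchvision VGG16, the chosen layer, and the random input are placeholders, not the networks or CT-scan data from the study.

```python
# Minimal Grad-CAM sketch (PyTorch). Illustrative only: the model, layer choice,
# and input below are stand-ins, not the paper's COVID CT-scan setup.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights=None).eval()   # hypothetical classifier stand-in
target_layer = model.features[28]           # last convolutional layer of VGG16

activations, gradients = {}, {}
def fwd_hook(_, __, out): activations["a"] = out.detach()
def bwd_hook(_, grad_in, grad_out): gradients["g"] = grad_out[0].detach()
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)             # placeholder for a CT-scan image
scores = model(x)
scores[0, scores.argmax()].backward()       # gradient of the predicted class score

weights = gradients["g"].mean(dim=(2, 3), keepdim=True)   # global-average-pool the gradients
cam = F.relu((weights * activations["a"]).sum(dim=1))     # weighted sum of activation maps
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize heatmap to [0, 1]
```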


Subject(s)
Artificial Intelligence , COVID-19 , Bias , Humans , SARS-CoV-2 , Tomography, X-Ray Computed
2.
Sensors (Basel) ; 19(13)2019 Jul 05.
Article in English | MEDLINE | ID: mdl-31284419

ABSTRACT

An application of explainable artificial intelligence to medical data is presented. There is increasing demand in the machine learning literature for such explainable models in health-related applications. This work aims to generate explanations of how a Convolutional Neural Network (CNN) detects tumor tissue in patches extracted from histology whole-slide images. This is achieved using the Local Interpretable Model-Agnostic Explanations (LIME) methodology. Two publicly available convolutional neural networks trained on the Patch Camelyon benchmark are analyzed. Three common segmentation algorithms are compared for superpixel generation, and a fourth, simpler, parameter-free segmentation algorithm is proposed. The main characteristics of the explanations are discussed, as well as the key patterns identified in true positive predictions. The results are compared to medical annotations and the literature, and suggest that the CNN predictions follow at least some aspects of human expert knowledge.
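For illustration, the sketch below follows the general LIME-on-superpixels workflow the abstract describes, using the lime and scikit-image packages: segment the patch into superpixels, then fit a local surrogate model over perturbed copies. The classifier function, the patch, and the SLIC parameters are placeholders, not the Patch Camelyon networks or the segmentation settings evaluated in the paper.

```python
# Hedged sketch of LIME over superpixels; the classifier below is a stand-in,
# not one of the publicly available Patch Camelyon CNNs from the paper.
import numpy as np
from lime import lime_image
from skimage.segmentation import slic  # one common superpixel algorithm

def classifier_fn(batch):
    """Placeholder for a tumor/normal CNN: returns per-class probabilities."""
    p = np.random.rand(len(batch), 1)            # hypothetical scores
    return np.hstack([1.0 - p, p])

def segmentation_fn(image):
    # SLIC superpixels; the paper compares several such algorithms against
    # a simpler parameter-free segmentation it proposes.
    return slic(image, n_segments=50, compactness=10, start_label=0)

patch = np.random.rand(96, 96, 3)                # placeholder histology patch
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    patch, classifier_fn, top_labels=1, num_samples=500,
    segmentation_fn=segmentation_fn)
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(label, positive_only=True,
                                            num_features=5, hide_rest=False)
```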


Subject(s)
Image Processing, Computer-Assisted/methods , Lymphatic Metastasis/pathology , Neural Networks, Computer , Algorithms , Deep Learning , Humans , Lymph Nodes/pathology , Models, Biological
3.
Neural Netw ; 148: 1-12, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35045383

ABSTRACT

A novel evolutionary approach for Explainable Artificial Intelligence is presented: the "Evolved Explanations" model (EvEx). This methodology combines Local Interpretable Model-Agnostic Explanations (LIME) with Multi-Objective Genetic Algorithms to allow automated segmentation parameter tuning in image classification tasks. The dataset studied is Patch-Camelyon, composed of patches from pathology whole-slide images. A publicly available Convolutional Neural Network (CNN) was trained on this dataset to provide a binary classification for the presence or absence of lymph node metastatic tissue. In turn, the classifications are explained by means of evolving segmentations, which seek to optimize three evaluation goals simultaneously. The final explanation is computed as the mean of all explanations generated by Pareto-front individuals evolved by the developed genetic algorithm. To enhance reproducibility and traceability of the explanations, each one was generated from several randomly chosen seeds, and the observed results show remarkable agreement between seeds. Despite the stochastic nature of LIME explanations, regions of high explanation weight show good agreement across heat maps, as measured by pixel-wise relative standard deviations. The resulting heat maps coincide with expert medical segmentations, which demonstrates that this methodology can find high-quality explanations (according to the evaluation metrics), with the novel advantage of automated parameter fine-tuning. These results give additional insight into the inner workings of neural network black-box decision making for medical data.
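As a rough illustration of the aggregation and agreement checks described above, the NumPy sketch below averages a set of heat maps and computes their pixel-wise relative standard deviation. The heat maps are random placeholders rather than actual EvEx or LIME outputs, and the array shapes are assumptions.

```python
# Sketch of the aggregation step: average heat maps produced by Pareto-front
# individuals (or by different random seeds) and measure per-pixel agreement
# via the relative standard deviation. Placeholders only, not EvEx outputs.
import numpy as np

heatmaps = np.random.rand(8, 96, 96)           # hypothetical: 8 explanations, 96x96 patch

mean_map = heatmaps.mean(axis=0)               # final explanation: mean over individuals
std_map = heatmaps.std(axis=0)
rel_std = std_map / (np.abs(mean_map) + 1e-8)  # pixel-wise relative standard deviation

# Low rel_std inside high-weight regions corresponds to the seed-to-seed
# agreement reported in the abstract.
high_weight = mean_map > np.percentile(mean_map, 90)
print("median relative std in high-weight regions:", np.median(rel_std[high_weight]))
```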


Subject(s)
Artificial Intelligence , Neural Networks, Computer , Humans , Lymphatic Metastasis , Reproducibility of Results