Results 1 - 4 of 4
1.
Eur J Radiol; 162: 110787, 2023 May.
Article in English | MEDLINE | ID: mdl-37001254

ABSTRACT

Over the last decade, Artificial Intelligence (AI) has achieved significant success and shown promising results across many fields of application, and it has become an essential part of medical research. Improving data availability, coupled with advances in high-performance computing and innovative algorithms, has increased AI's potential in many respects. Because AI is rapidly reshaping research and promoting the development of personalized clinical care, its implementation brings an urgent need for a deep understanding of its inner workings, especially in high-stakes domains. However, such systems can be highly complex and opaque, making an immediate understanding of their decisions difficult. In the medical field these decisions carry particular weight: physicians and patients can only fully trust AI systems that communicate the origin of their results in a comprehensible way, which also enables the identification of errors and biases. Explainable AI (XAI), an increasingly important field of research in recent years, develops explainability methods and provides the rationale that allows users to comprehend the results generated by AI systems. In this paper, we investigate the application of XAI in medical imaging, addressing a broad audience, especially healthcare professionals. The content covers definitions and taxonomies, standard methods and approaches, advantages, limitations, and examples representing the current state of research on XAI in medical imaging. The paper focuses on saliency-based XAI methods, where the explanation is provided directly on the input data (the image) and which are therefore of particular importance in medical imaging.
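
For orientation, the sketch below illustrates the basic idea behind saliency methods with a plain gradient-based saliency map: the gradient of the predicted class score with respect to the input pixels indicates which image regions most influenced the decision. This is a minimal illustration under assumptions of our own (the ResNet-18 backbone and the file name are placeholders), not an implementation from the paper.

```python
# Minimal gradient-based saliency sketch (illustrative only).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1")  # placeholder backbone
model.eval()

# ImageNet normalisation is omitted for brevity; add it for real use.
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
image = preprocess(Image.open("scan.png").convert("RGB")).unsqueeze(0)  # hypothetical image
image.requires_grad_(True)

logits = model(image)
score = logits[0, logits.argmax()]   # score of the predicted class
score.backward()                     # gradients of that score w.r.t. input pixels

# Saliency = maximum absolute gradient across colour channels, per pixel.
saliency = image.grad.abs().max(dim=1)[0].squeeze()
print(saliency.shape)  # (224, 224) heat map aligned with the input image
```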


Subjects
Artificial Intelligence, Physicians, Humans, Algorithms, Health Personnel
2.
Eur J Radiol; 162: 110786, 2023 May.
Article in English | MEDLINE | ID: mdl-36990051

ABSTRACT

Driven by recent advances in Artificial Intelligence (AI) and Computer Vision (CV), the implementation of AI systems in the medical domain has increased correspondingly. This is especially true for medical imaging, in which AI aids several imaging-based tasks such as classification, segmentation, and registration. Moreover, AI is reshaping medical research and contributing to the development of personalized clinical care. Consequently, its widening implementation creates a need for a thorough understanding of AI systems, their inner workings, their potentials, and their limitations, which is what the field of eXplainable AI (XAI) aims to provide. Because medical imaging is mainly associated with visual tasks, most explainability approaches rely on saliency-based XAI methods. In contrast, in this article we investigate the full potential of XAI methods in medical imaging by focusing specifically on XAI techniques that do not rely on saliency, and by providing diversified examples. We address a broad audience, particularly healthcare professionals. This work also aims to establish common ground for understanding and exchange between Deep Learning (DL) builders and healthcare professionals, which is why we opted for a non-technical overview. The presented XAI methods are grouped by their output representation into the following categories: case-based explanations, textual explanations, and auxiliary explanations.
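
As a concrete illustration of the first of these categories, the sketch below implements a minimal case-based explanation: a query image is explained by retrieving the training cases whose feature embeddings are most similar, so a clinician can compare the new case with already-labelled ones. The ResNet-18 feature extractor and the random placeholder data are assumptions for illustration, not the methods surveyed in the article.

```python
# Minimal case-based explanation sketch: nearest neighbours in embedding space.
import numpy as np
import torch
import torchvision.models as models

backbone = models.resnet18(weights="IMAGENET1K_V1")  # placeholder feature extractor
backbone.fc = torch.nn.Identity()                    # keep 512-d features, drop classifier
backbone.eval()

def embed(batch: torch.Tensor) -> np.ndarray:
    """Return L2-normalised feature vectors for a batch of preprocessed images."""
    with torch.no_grad():
        feats = backbone(batch)
    return torch.nn.functional.normalize(feats, dim=1).numpy()

# Placeholder data standing in for a preprocessed training set and one query image.
train_images = torch.rand(100, 3, 224, 224)
query = torch.rand(1, 3, 224, 224)

train_emb = embed(train_images)
query_emb = embed(query)

# Cosine similarity (dot product of normalised vectors); report the 3 closest cases.
similarities = train_emb @ query_emb.T
top3 = np.argsort(-similarities.ravel())[:3]
print("Most similar training cases:", top3)
```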


Subjects
Artificial Intelligence, Health Personnel, Humans
3.
Artif Intell Med; 128: 102281, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35534140

ABSTRACT

Proximal femur fractures represent a major health concern and contribute substantially to morbidity in the elderly. Correct classification and diagnosis of hip fractures have a significant impact on mortality, costs, and length of hospital stay. In this paper, we present a method, with empirical validation, for automatic subclassification of proximal femur fractures and for generation of Dutch radiological reports that does not rely on manually curated data. The fracture classification model was trained on 11,000 X-ray images obtained from 5,000 electronic health records in a general hospital. To generate the Dutch reports, we first trained an embedding model on 20,000 radiological reports of pelvic-region fractures and used its embeddings in the report generation model, which we then trained on the 5,000 radiological reports associated with the fracture cases. Our report generation model is on par with the state of the art in terms of BLEU and ROUGE scores. This is promising because, in contrast to those earlier works, our approach requires no manual preprocessing of either the images or the reports, which boosts the applicability of automatic clinical report generation in practice. A quantitative and qualitative user study among medical students found no significant difference in the perceived provenance of real and generated reports. A qualitative, in-depth clinical relevance study with medical domain experts showed that, from a human perspective, the quality of the generated reports approximates that of the original reports, and it highlighted challenges in creating sufficiently detailed and versatile training data for automatic radiology report generation.
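
For readers unfamiliar with these metrics, the sketch below scores a generated sentence against a reference sentence with BLEU and ROUGE-L using the nltk and rouge-score packages. The example sentences are invented (and in English rather than Dutch), so this is a small illustration of the metrics themselves, not the evaluation pipeline used in the study.

```python
# Minimal BLEU / ROUGE-L scoring sketch for a generated report sentence.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

# Hypothetical reference (radiologist) and generated sentences.
reference = "intertrochanteric fracture of the left proximal femur".split()
generated = "intertrochanteric fracture of the left femur".split()

# Sentence-level BLEU with smoothing, since short sentences rarely match all 4-grams.
bleu = sentence_bleu([reference], generated,
                     smoothing_function=SmoothingFunction().method1)

# ROUGE-L F-measure between the two sentences.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(" ".join(reference), " ".join(generated))["rougeL"].fmeasure

print(f"BLEU: {bleu:.3f}  ROUGE-L: {rouge_l:.3f}")
```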


Subjects
Hip Fractures, Radiology, Aged, Femur, Hip Fractures/diagnostic imaging, Humans, Language, Radiography
4.
Diagnostics (Basel); 12(1), 2021 Dec 24.
Article in English | MEDLINE | ID: mdl-35054207

ABSTRACT

Machine learning models have been successfully applied to the analysis of skin images. However, due to the black-box nature of such deep learning models, it is difficult to understand their underlying reasoning, which prevents a human from validating whether the model is right for the right reasons. Spurious correlations and other biases in the data can cause a model to base its predictions on such artefacts rather than on the truly relevant information. These learned shortcuts can in turn cause incorrect performance estimates and unexpected outcomes when the model is applied in clinical practice. This study presents a method to detect and quantify such shortcut learning in trained classifiers for skin cancer diagnosis, since dermoscopy images are known to contain artefacts. Specifically, we train a standard VGG16-based skin cancer classifier on the public ISIC dataset, in which colour calibration charts (elliptical, coloured patches) occur only in benign images and not in malignant ones. Our methodology artificially inserts those patches into images, and uses inpainting to automatically remove them, to assess the resulting changes in predictions. We find that our standard classifier partly bases its predictions of benign images on the presence of such a coloured patch. More importantly, by artificially inserting coloured patches into malignant images, we show that shortcut learning results in a significant increase in misdiagnoses, making the classifier unreliable for clinical practice. With these results we want to increase awareness of the risks of using black-box machine learning models trained on potentially biased datasets. Finally, we present a model-agnostic method to neutralise shortcut learning: the bias is removed from the training dataset by replacing the coloured patches with benign skin tissue using image inpainting, and the classifier is re-trained on this de-biased dataset.
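
The core intervention can be pictured with the minimal sketch below: an elliptical coloured patch (mimicking a colour calibration chart) is pasted into an image and the classifier's predictions before and after are compared; a large shift indicates reliance on the artefact. The model, image path, and patch geometry here are illustrative assumptions, and the study itself additionally uses image inpainting to remove patches and to de-bias the training data.

```python
# Minimal patch-insertion probe for shortcut learning (illustrative only).
import numpy as np
import torch
import torchvision.models as models
from PIL import Image, ImageDraw

model = models.vgg16(weights="IMAGENET1K_V1")  # placeholder; not the study's trained classifier
model.eval()

def add_colour_patch(img: Image.Image) -> Image.Image:
    """Paste a purple elliptical patch into the top-left corner of the image."""
    out = img.copy()
    ImageDraw.Draw(out).ellipse([10, 10, 60, 45], fill=(150, 60, 160))
    return out

def predict(img: Image.Image) -> torch.Tensor:
    """Resize, scale to [0, 1], and return class probabilities (normalisation omitted for brevity)."""
    x = torch.from_numpy(np.asarray(img.resize((224, 224)), dtype=np.float32) / 255.0)
    x = x.permute(2, 0, 1).unsqueeze(0)  # HWC -> NCHW
    with torch.no_grad():
        return torch.softmax(model(x), dim=1)

lesion = Image.open("lesion.jpg").convert("RGB")  # hypothetical dermoscopy image
p_before = predict(lesion)
p_after = predict(add_colour_patch(lesion))

# A large shift suggests the model keys on the artefact rather than the lesion.
print("max prediction shift:", (p_after - p_before).abs().max().item())
```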
