The role of saliency maps in enhancing ophthalmologists' trust in artificial intelligence models.
Wong, Carolyn Yu Tung; Antaki, Fares; Woodward-Court, Peter; Ong, Ariel Yuhan; Keane, Pearse A.
Affiliation
  • Wong CYT; Institute of Ophthalmology, University College London, London, United Kingdom.
  • Antaki F; Institute of Ophthalmology, University College London, London, United Kingdom.
  • Woodward-Court P; Institute of Ophthalmology, University College London, London, United Kingdom.
  • Ong AY; Institute of Ophthalmology, University College London, London, United Kingdom.
  • Keane PA; Institute of Ophthalmology, University College London, London, United Kingdom. Electronic address: p.keane@ucl.ac.uk.
Asia Pac J Ophthalmol (Phila); 13(4): 100087, 2024.
Article in English | MEDLINE | ID: mdl-39069106
ABSTRACT

PURPOSE:

Saliency maps (SMs) allow clinicians to better understand the opaque decision-making process of artificial intelligence (AI) models by visualising the features that drive predictions, ultimately improving interpretability and confidence. In this work, we review the use cases for SMs and explore their impact on clinicians' understanding of, and trust in, AI models. We use the following ophthalmic conditions as examples: (1) glaucoma, (2) myopia, (3) age-related macular degeneration (AMD), and (4) diabetic retinopathy (DR).
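
As background on how such maps are typically produced, the sketch below computes a basic gradient saliency map with PyTorch. The pretrained ResNet-18, the file name "fundus.jpg", and the preprocessing pipeline are illustrative assumptions only and do not correspond to any model evaluated in this review.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Stand-in classifier; an ophthalmic model would be loaded here instead.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical input image path.
img = Image.open("fundus.jpg").convert("RGB")
x = preprocess(img).unsqueeze(0)
x.requires_grad_(True)

# Gradient of the top predicted class score with respect to the input pixels.
scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency: largest absolute gradient across colour channels, per pixel.
saliency = x.grad.abs().max(dim=1)[0].squeeze(0)  # shape (224, 224)
```

The resulting 2-D array can be overlaid on the original image as a heatmap; this vanilla gradient approach is only one of several SM techniques (e.g., Grad-CAM, integrated gradients) discussed in the literature.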

METHOD:

A multi-field search on MEDLINE, Embase, and Web of Science was conducted using specific keywords. Only studies on the use of SMs in glaucoma, myopia, AMD, or DR were considered for inclusion.

RESULTS:

Findings reveal that SMs are often used to validate AI models and to advocate for their adoption, potentially leading to biased claims. Studies frequently overlooked the technical limitations of SMs and assessed their quality and relevance only superficially. Uncertainties persist regarding the role of SMs in building trust in AI. It is crucial to improve understanding of SMs' technical constraints and to evaluate their quality, impact, and suitability for specific tasks more rigorously. Establishing a standardised framework for selecting and assessing SMs, and exploring their relationship with other sources of reliability (e.g., safety and generalisability), is essential for enhancing clinicians' trust in AI.

CONCLUSION:

We conclude that, in their current forms, SMs are not beneficial for interpretability and trust-building purposes. Instead, SMs may be useful for model debugging, model performance enhancement, and hypothesis testing (e.g., identifying novel biomarkers).

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Artificial Intelligence / Ophthalmologists Limits: Humans Language: English Journal: Asia Pac J Ophthalmol (Phila) Publication year: 2024 Document type: Article