A review of evaluation approaches for explainable AI with applications in cardiology.
Salih, Ahmed M; Galazzo, Ilaria Boscolo; Gkontra, Polyxeni; Rauseo, Elisa; Lee, Aaron Mark; Lekadir, Karim; Radeva, Petia; Petersen, Steffen E; Menegaz, Gloria.
Affiliation
  • Salih AM; William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London, EC1M 6BQ, UK; Department of Population Health Sciences, University of Leicester, University Rd, Leicester, LE1 7RH, UK; Department of Computer Science, University of Zakho, Duhok Road, Zakho, Kurdistan Region, Iraq.
  • Galazzo IB; Department of Engineering for Innovative Medicine, University of Verona, S. Francesco, 22, 37129 Verona, Italy.
  • Gkontra P; Artificial Intelligence in Medicine Lab (BCN-AIM), Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Gran Via de les Corts Catalanes, 585, 08007 Barcelona, Spain.
  • Rauseo E; William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London, EC1M 6BQ, UK.
  • Lee AM; William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London, EC1M 6BQ, UK.
  • Lekadir K; Artificial Intelligence in Medicine Lab (BCN-AIM), Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Gran Via de les Corts Catalanes, 585, 08007 Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig Lluís Companys 23, Barcelona, Spain.
  • Radeva P; Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Gran Via de les Corts Catalanes, 585, 08007 Barcelona, Spain.
  • Petersen SE; William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London, EC1M 6BQ, UK.
  • Menegaz G; Department of Engineering for Innovative Medicine, University of Verona, S. Francesco, 22, 37129 Verona, Italy.
Artif Intell Rev; 57(9): 240, 2024.
Article in En | MEDLINE | ID: mdl-39132011
ABSTRACT
Explainable artificial intelligence (XAI) elucidates the decision-making process of complex AI models and is important for building trust in model predictions. XAI explanations themselves require evaluation, both for accuracy and reasonableness and in the context in which the underlying AI model is used. This review details the evaluation of XAI in cardiac AI applications and found that, of the studies examined, 37% evaluated XAI quality against results from the literature, 11% used clinicians as domain experts, and 11% used proxies or statistical analysis, while the remaining 43% did not assess the XAI they used at all. We aim to inspire additional studies within healthcare, urging researchers not only to apply XAI methods but to systematically assess the resulting explanations, as a step towards developing trustworthy and safe models. Supplementary Information: The online version contains supplementary material available at 10.1007/s10462-024-10852-w.
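To make the "proxies or statistical analysis" evaluation family concrete, the Python sketch below (ours, not from the paper) illustrates one common proxy: a deletion-based faithfulness test, in which features are ablated in decreasing order of attributed importance and a faithful explanation should produce a steep early drop in model performance. The dataset (scikit-learn's breast-cancer data), the random-forest model, and the use of permutation importance as the "explanation" are all illustrative assumptions standing in for the cardiac models and XAI methods the review surveys.

    # Minimal sketch of a proxy-based XAI evaluation: a deletion/faithfulness
    # test. All modelling choices here are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # "Explanation" under test: global feature attributions via permutation
    # importance on the held-out set.
    imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    order = np.argsort(imp.importances_mean)[::-1]  # most important first

    # Deletion curve: replace the top-k attributed features with their
    # training-set mean (a crude ablation) and record the accuracy drop.
    print(f"baseline accuracy = {model.score(X_te, y_te):.3f}")
    means = X_tr.mean(axis=0)
    for k in range(5, X.shape[1] + 1, 5):
        X_abl = X_te.copy()
        X_abl[:, order[:k]] = means[order[:k]]
        print(f"top-{k:2d} features ablated: accuracy = {model.score(X_abl, y_te):.3f}")
    # A steep early drop, relative to ablating randomly chosen features,
    # suggests the attributions are (by this proxy) faithful to the model.

A fuller evaluation would compare this curve against a random-ablation baseline and repeat over seeds; the point of the sketch is only that "statistical" evaluation of explanations can be automated without clinician involvement.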
Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: Artif Intell Rev Publication year: 2024 Document type: Article Country of publication: United Kingdom
