A historical perspective of biomedical explainable AI research.
Malinverno, Luca; Barros, Vesna; Ghisoni, Francesco; Visonà, Giovanni; Kern, Roman; Nickel, Philip J; Ventura, Barbara Elvira; Simic, Ilija; Stryeck, Sarah; Manni, Francesca; Ferri, Cesar; Jean-Quartier, Claire; Genga, Laura; Schweikert, Gabriele; Lovric, Mario; Rosen-Zvi, Michal.
Affiliation
  • Malinverno L; Porini SRL, Via Cavour 2, 22074 Lomazzo, Italy.
  • Barros V; AI for Accelerated Healthcare & Life Sciences Discovery, IBM R&D Laboratories, University of Haifa Campus, Mount Carmel, Haifa 3498825, Israel.
  • Ghisoni F; The Hebrew University of Jerusalem, Ein Kerem Campus, 9112102, Jerusalem, Israel.
  • Visonà G; Porini SRL, Via Cavour 2, 22074 Lomazzo, Italy.
  • Kern R; Empirical Inference, Max-Planck Institute for Intelligent Systems, 72076 Tübingen, Germany.
  • Nickel PJ; Institute of Interactive Systems and Data Science, Graz University of Technology, Sandgasse 36/III, 8010 Graz, Austria.
  • Ventura BE; Know-Center GmbH, Sandgasse 36/4A, 8010 Graz, Austria.
  • Simic I; Eindhoven University of Technology, 513, 5600 MB Eindhoven, The Netherlands.
  • Stryeck S; Porini SRL, Via Cavour 2, 22074 Lomazzo, Italy.
  • Manni F; Know-Center GmbH, Sandgasse 36/4A, 8010 Graz, Austria.
  • Ferri C; Research Center Pharmaceutical Engineering GmbH, Inffeldgasse 13, 8010 Graz, Austria.
  • Jean-Quartier C; Philips Research, HTC 4, 5656 AE Eindhoven, The Netherlands.
  • Genga L; VRAIN, Universitat Politècnica de València, Camino de Vera, s/n 46022 Valencia, Spain.
  • Schweikert G; Research Data Management, Graz University of Technology, Brockmanngasse 84, 8010 Graz, Austria.
  • Lovric M; Eindhoven University of Technology, 513, 5600 MB Eindhoven, The Netherlands.
  • Rosen-Zvi M; School of Life Sciences, University of Dundee, Dow Street, Dundee DD1 5EH, UK.
Patterns (N Y) ; 4(9): 100830, 2023 Sep 08.
Article in English | MEDLINE | ID: mdl-37720333
ABSTRACT
The black-box nature of most artificial intelligence (AI) models encourages the development of explainability methods to engender trust in the AI decision-making process. Such methods can be broadly categorized into two main types: post hoc explanations and inherently interpretable algorithms. We aimed to analyze the possible associations between COVID-19 and the push of explainable AI (XAI) to the forefront of biomedical research. We automatically extracted biomedical XAI studies related to concepts of causality or explainability from the PubMed database and manually labeled 1,603 papers with respect to XAI categories. To compare the trends before and after COVID-19, we fitted a change point detection model and evaluated significant changes in publication rates. We show that the advent of COVID-19 at the beginning of 2020 could be the driving factor behind an increased focus on XAI, playing a crucial role in accelerating an already evolving trend. Finally, we present a discussion of the future societal use and impact of XAI technologies and potential directions for those who pursue fostering clinical trust with interpretable machine learning models.
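The abstract's core analysis, fitting a change point model to publication rates, can be illustrated with a minimal sketch. This is not the paper's actual model or data: it uses hypothetical monthly publication counts and a simple least-squares search for a single break between two constant-mean segments, the basic idea behind many change point detectors.

```python
import numpy as np

def single_change_point(counts):
    """Return the index that best splits the series into two
    constant-mean segments, minimizing total sum of squared errors."""
    counts = np.asarray(counts, dtype=float)
    best_idx, best_sse = None, np.inf
    for k in range(1, len(counts)):  # candidate break positions
        left, right = counts[:k], counts[k:]
        sse = ((left - left.mean()) ** 2).sum() + \
              ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_idx, best_sse = k, sse
    return best_idx

# Hypothetical monthly XAI publication counts: a stable baseline
# followed by a jump in the publication rate.
series = [3, 4, 3, 5, 4, 12, 14, 13, 15, 14]
print(single_change_point(series))  # → 5
```

A study like the one described would additionally test whether the detected rate change is statistically significant rather than an artifact of noise; dedicated libraries such as `ruptures` generalize this search to multiple break points and other cost functions.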
Full text: 1 Database: MEDLINE Study type: Prognostic studies Language: English Journal: Patterns (N Y) Year: 2023 Document type: Article Affiliation country: Italy