Effects of explainable artificial intelligence in neurology decision support.
Gombolay, Grace Y; Silva, Andrew; Schrum, Mariah; Gopalan, Nakul; Hallman-Cooper, Jamika; Dutt, Monideep; Gombolay, Matthew.
Affiliation
  • Gombolay GY; Department of Pediatrics, Division of Neurology, Children's Healthcare of Atlanta, Emory University School of Medicine, Atlanta, GA, USA.
  • Silva A; Georgia Institute of Technology, Atlanta, GA, USA.
  • Schrum M; Georgia Institute of Technology, Atlanta, GA, USA.
  • Gopalan N; Arizona State University, Tempe, AZ, USA.
  • Hallman-Cooper J; Department of Pediatrics, Division of Neurology, Children's Healthcare of Atlanta, Emory University School of Medicine, Atlanta, GA, USA.
  • Dutt M; Department of Pediatrics, Division of Neurology, Children's Healthcare of Atlanta, Emory University School of Medicine, Atlanta, GA, USA.
  • Gombolay M; Department of Pediatrics, Division of Neurology, Children's Healthcare of Atlanta, Emory University School of Medicine, Atlanta, GA, USA.
Ann Clin Transl Neurol ; 11(5): 1224-1235, 2024 May.
Article in En | MEDLINE | ID: mdl-38581138
ABSTRACT

OBJECTIVE:

Artificial intelligence (AI)-based decision support systems (DSS) are used in medicine, but their underlying decision-making processes are usually unknown. Explainable AI (xAI) techniques provide insight into DSS, yet little is known about how to design xAI for clinicians. Here, we investigate the impact of various xAI techniques on clinicians' interaction with an AI-based DSS in decision-making tasks, as compared to a general population.

METHODS:

We conducted a randomized, blinded study in which members of the Child Neurology Society and the American Academy of Neurology were compared to a general population. Participants received recommendations from a DSS and were randomly assigned one of eight xAI interventions (decision tree, crowd-sourced agreement, case-based reasoning, probability scores, counterfactual reasoning, feature importance, templated language, or no explanation). Primary outcomes included test performance and perceived explainability, trust, and social competence of the DSS. Secondary outcomes included compliance, understandability, and agreement per question.

RESULTS:

We enrolled 81 neurology participants and 284 general-population participants. Decision trees were perceived as more explainable by the medical population than by the general population (P < 0.01) and as more explainable than probability scores within the medical population (P < 0.001). Increasing neurology experience and perceived explainability degraded performance (P = 0.0214). Performance was predicted not by xAI method but by perceived explainability.

INTERPRETATION:

xAI methods have different impacts on a medical versus a general population; thus, xAI is not uniformly beneficial, and there is no one-size-fits-all approach. Further user-centered xAI research targeting clinicians, and work to develop personalized DSS for clinicians, is needed.
Subjects

Full text: 1 Database: MEDLINE Main subject: Artificial Intelligence / Clinical Decision Support Systems / Neurology Language: En Publication year: 2024 Document type: Article