Leveraging explainable artificial intelligence to optimize clinical decision support.
Liu, Siru; McCoy, Allison B; Peterson, Josh F; Lasko, Thomas A; Sittig, Dean F; Nelson, Scott D; Andrews, Jennifer; Patterson, Lorraine; Cobb, Cheryl M; Mulherin, David; Morton, Colleen T; Wright, Adam.
Affiliations
  • Liu S; Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States.
  • McCoy AB; Department of Computer Science, Vanderbilt University, Nashville, TN 37212, United States.
  • Peterson JF; Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States.
  • Lasko TA; Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States.
  • Sittig DF; Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37203, United States.
  • Nelson SD; Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States.
  • Andrews J; Department of Computer Science, Vanderbilt University, Nashville, TN 37212, United States.
  • Patterson L; School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX 77030, United States.
  • Cobb CM; Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States.
  • Mulherin D; Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN 37203, United States.
  • Morton CT; Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37203, United States.
  • Wright A; HealthIT, Vanderbilt University Medical Center, Nashville, TN 37203, United States.
J Am Med Inform Assoc ; 31(4): 968-974, 2024 04 03.
Article in English | MEDLINE | ID: mdl-38383050
ABSTRACT

OBJECTIVE:

To develop and evaluate a data-driven process to generate suggestions for improving alert criteria using explainable artificial intelligence (XAI) approaches.

METHODS:

We extracted data on alerts generated from January 1, 2019, to December 31, 2020, at Vanderbilt University Medical Center. We developed machine learning models to predict user responses to alerts and applied XAI techniques to generate global and local explanations. We evaluated the generated suggestions by comparing them with the alerts' historical change logs and through stakeholder interviews. Suggestions that matched (or partially matched) changes already made to an alert, or that were judged clinically correct, were classified as helpful.
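The pipeline above — train a model to predict whether a user will accept an alert, then use explanation techniques to surface which alert criteria drive dismissals — can be sketched as follows. This is a minimal illustration, not the authors' code: it substitutes scikit-learn's GradientBoostingClassifier for LightGBM and permutation importance for the paper's XAI techniques, and the synthetic "alert firing" features are hypothetical.

```python
# Hedged sketch of the study's approach: predict user responses to CDS
# alerts, then rank features by a global-explanation proxy. Stand-ins:
# GradientBoostingClassifier (for LightGBM) and permutation importance
# (for the paper's XAI methods). Features and data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))  # 4 hypothetical alert-firing features

# Simulate acceptance driven mainly by the first two features
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (logits + rng.normal(scale=0.5, size=n)) > 0  # True = alert accepted

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# Global explanation proxy: which features most affect predictions?
# Criteria tied to high-importance, low-acceptance features become
# candidates for suggested alert-logic changes.
imp = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
ranked = np.argsort(imp.importances_mean)[::-1]
print(f"AUROC: {auroc:.3f}")
print("Features ranked by importance:", ranked.tolist())
```

In the study itself, high-importance features identified this way were turned into concrete suggestions for revising alert criteria, then checked against change logs and stakeholder feedback.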

RESULTS:

The final dataset included 2 991 823 alert firings with 2689 features. Among the 5 machine learning models, the LightGBM model achieved the highest area under the ROC curve (AUROC), 0.919 [0.918, 0.920]. We identified 96 helpful suggestions; a total of 278 807 firings (9.3%) could have been eliminated. Some of the suggestions also revealed workflow and education issues.

CONCLUSION:

We developed a data-driven process to generate suggestions for improving alert criteria using XAI techniques. Our approach can identify clinical decision support (CDS) improvements that might be overlooked or delayed in manual reviews. It also reveals a secondary use of XAI for quality improvement: discovering scenarios where CDS alerts are not accepted because of workflow, education, or staffing issues.

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Artificial Intelligence / Clinical Decision Support Systems Limits: Humans Language: English Publication year: 2024 Document type: Article