Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models.
Chen, Feng; Wang, Liqin; Hong, Julie; Jiang, Jiaqi; Zhou, Li.
Affiliation
  • Chen F; Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, United States.
  • Wang L; Department of Biomedical Informatics and Health Education, University of Washington, Seattle, WA 98105, United States.
  • Hong J; Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, United States.
  • Jiang J; Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, MA 02115, United States.
  • Zhou L; Wellesley High School, Wellesley, MA 02481, United States.
J Am Med Inform Assoc; 31(5): 1172-1183, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38520723
ABSTRACT

OBJECTIVES:

Leveraging artificial intelligence (AI) in conjunction with electronic health records (EHRs) holds transformative potential to improve healthcare. However, addressing bias in AI, which risks worsening healthcare disparities, cannot be overlooked. This study reviews methods to handle various biases in AI models developed using EHR data.

MATERIALS AND METHODS:

We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, analyzing articles from PubMed, Web of Science, and IEEE published between January 1, 2010 and December 17, 2023. The review identified key biases, outlined strategies for detecting and mitigating bias throughout AI model development, and analyzed metrics for bias assessment.

RESULTS:

Of the 450 articles retrieved, 20 met our criteria, revealing 6 major bias types: algorithmic, confounding, implicit, measurement, selection, and temporal. The AI models were primarily developed for predictive tasks, yet none have been deployed in real-world healthcare settings. Five studies concentrated on detecting implicit and algorithmic biases, employing fairness metrics such as statistical parity, equal opportunity, and predictive equality. Fifteen studies proposed strategies for mitigating biases, especially targeting implicit and selection biases. These strategies, evaluated through both performance and fairness metrics, predominantly involved data collection and preprocessing techniques such as resampling and reweighting.
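The fairness metrics and preprocessing techniques named above can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not code from any of the reviewed studies: the function names, the binary encoding of the protected attribute, the synthetic cohort, and the Kamiran-Calders-style reweighting formula are assumptions introduced here for clarity. It shows how statistical parity and equal opportunity differences might be computed for a binary classifier, and how per-sample weights could be derived so that the protected attribute and the outcome appear independent during training.

import numpy as np

def statistical_parity_difference(y_pred, group):
    # Difference in positive prediction rates between group 1 and group 0.
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    # Difference in true positive rates (sensitivity) between group 1 and group 0.
    def tpr(g):
        return y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

def reweighting_weights(y_true, group):
    # Kamiran-Calders-style reweighting: weight each (group, label) cell by
    # P(group) * P(label) / P(group, label) so the protected attribute and the
    # outcome look statistically independent to the learner.
    w = np.empty(len(y_true), dtype=float)
    for g in np.unique(group):
        for y in np.unique(y_true):
            mask = (group == g) & (y_true == y)
            expected = (group == g).mean() * (y_true == y).mean()
            w[mask] = expected / mask.mean()
    return w

# Toy synthetic cohort with an arbitrarily biased classifier output (illustration only).
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                          # protected attribute (0/1)
y_true = rng.integers(0, 2, size=1000)                         # observed outcome
y_pred = (rng.random(1000) < 0.4 + 0.1 * group).astype(int)    # predictions favoring group 1

print(statistical_parity_difference(y_pred, group))
print(equal_opportunity_difference(y_true, y_pred, group))
print(reweighting_weights(y_true, group)[:5])

In this sketch, a value near zero for either difference indicates parity between the two groups, while the reweighting upweights under-represented (group, label) cells before model fitting.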

DISCUSSION:

This review highlights evolving strategies to mitigate bias in EHR-based AI models, emphasizing the urgent need both for standardized, detailed reporting of methodologies and for systematic real-world testing and evaluation. Such measures are essential for gauging models' practical impact and fostering ethical AI that ensures fairness and equity in healthcare.

Full text: 1 Database: MEDLINE Main subject: Artificial Intelligence / Bias / Electronic Health Records Limit: Humans Language: En Journal: J Am Med Inform Assoc / J. am. med. inform. assoc / Journal of the American Medical Informatics Association Journal subject: MEDICAL INFORMATICS Publication year: 2024 Document type: Article Country of affiliation: United States
