A survey of recent methods for addressing AI fairness and bias in biomedicine.
Yang, Yifan; Lin, Mingquan; Zhao, Han; Peng, Yifan; Huang, Furong; Lu, Zhiyong.
Affiliation
  • Yang Y; National Center for Biotechnology Information (NCBI), National Library of Medicine (NLM), National Institutes of Health (NIH), Bethesda, MD, USA; Department of Computer Science, University of Maryland, College Park, USA.
  • Lin M; Department of Population Health Sciences, Weill Cornell Medicine, NY, USA.
  • Zhao H; Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, IL, USA.
  • Peng Y; Department of Population Health Sciences, Weill Cornell Medicine, NY, USA.
  • Huang F; Department of Computer Science, University of Maryland, College Park, USA.
  • Lu Z; National Center for Biotechnology Information (NCBI), National Library of Medicine (NLM), National Institutes of Health (NIH), Bethesda, MD, USA. Electronic address: zhiyong.lu@nih.gov.
J Biomed Inform ; 154: 104646, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38677633
ABSTRACT

OBJECTIVES:

Artificial intelligence (AI) systems have the potential to revolutionize clinical practice, including improving diagnostic accuracy and surgical decision-making, while also reducing costs and labor. However, it is important to recognize that these systems may perpetuate social inequities or exhibit biases, such as those based on race or gender. Such biases can be introduced before, during, or after model development, making it critical to understand and address them so that AI models can be applied accurately and reliably in clinical settings. To address bias concerns during model development, we surveyed recent publications on debiasing methods in biomedical natural language processing (NLP) and computer vision (CV). We then discuss the methods, such as data perturbation and adversarial learning, that have been applied in the biomedical domain to address bias.
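As an illustration of one of the debiasing strategies named above, the following is a minimal sketch of counterfactual data perturbation for clinical text. The word pairs and the token-level swap rule are illustrative assumptions, not taken from any specific paper in the survey; real pipelines handle morphology, punctuation, and context far more carefully.

```python
# Minimal sketch of counterfactual data perturbation (illustrative
# word pairs; not from any specific surveyed paper).

# Bidirectional map of demographic terms to swap.
SWAP = {"he": "she", "she": "he", "male": "female", "female": "male",
        "man": "woman", "woman": "man"}

def perturb(text: str) -> str:
    """Return a counterfactual copy of `text` with demographic terms swapped."""
    return " ".join(SWAP.get(tok.lower(), tok) for tok in text.split())

print(perturb("the male patient reported that he felt dizzy"))
# -> the female patient reported that she felt dizzy
```

Training on both the original and the perturbed copy encourages the model to make predictions that are invariant to the swapped attribute.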

METHODS:

We searched PubMed, the ACM Digital Library, and IEEE Xplore for relevant articles published between January 2018 and December 2023 using multiple combinations of keywords. We then automatically filtered the resulting 10,041 articles with loose constraints and manually inspected the abstracts of the remaining 890 articles, identifying the 55 articles included in this review. Additional articles drawn from their references are also included. We discuss each method and compare its strengths and weaknesses. Finally, we review other potential methods from the general domain that could be applied in biomedicine to address bias and improve fairness.

RESULTS:

Bias in biomedical AI can originate from multiple sources, such as insufficient data, sampling bias, and the use of health-irrelevant features or race-adjusted algorithms. Existing debiasing methods that focus on model development can be categorized as distributional or algorithmic. Distributional methods include data augmentation, data perturbation, data reweighting, and federated learning. Algorithmic approaches include unsupervised representation learning, adversarial learning, disentangled representation learning, loss-based methods, and causality-based methods.
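Of the distributional methods listed above, data reweighting is perhaps the simplest to sketch. The following is a minimal, assumption-laden illustration of inverse-frequency sample weighting; the normalization to mean 1 is a common convention, not a requirement drawn from the survey.

```python
# Minimal sketch of inverse-frequency data reweighting (illustrative;
# the mean-1 normalization is an assumed convention).
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each sample a weight inversely proportional to its
    group's frequency, so under-represented groups contribute more
    to the training loss. Weights are normalized to mean 1."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Example: three samples from group "A", one from group "B".
print(inverse_frequency_weights(["A", "A", "A", "B"]))
# -> [0.666..., 0.666..., 0.666..., 2.0]
```

The weights would then be passed to a weighted loss (e.g. the `sample_weight` argument accepted by many training APIs), so each group contributes roughly equally regardless of its size.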

Full text: 1 Database: MEDLINE Main subject: Natural Language Processing / Artificial Intelligence / Bias Limits: Humans Language: English Journal: J Biomed Inform Journal subject: Medical Informatics Publication year: 2024 Document type: Article Affiliation country: United States