Efficient adversarial debiasing with concept activation vector - Medical image case-studies.
Correa, Ramon; Pahwa, Khushbu; Patel, Bhavik; Vachon, Celine M; Gichoya, Judy W; Banerjee, Imon.
Affiliation
  • Correa R; Arizona State University, SCAI, Tempe, AZ, 85281, USA. Electronic address: rlcorrea@asu.edu.
  • Pahwa K; University of California Los Angeles, LA, USA.
  • Patel B; Arizona State University, SCAI, Tempe, AZ, 85281, USA; Mayo Clinic, Department of Radiology, Phoenix, AZ, 85054, USA. Electronic address: https://twitter.com/@bhavik_md.
  • Vachon CM; Mayo Clinic, Department of Quantitative Health Sciences, Rochester, 55905, USA.
  • Gichoya JW; Emory University, Department of Radiology, Atlanta, GA, 44106, USA. Electronic address: https://twitter.com/@judywawira.
  • Banerjee I; Arizona State University, SCAI, Tempe, AZ, 85281, USA; Mayo Clinic, Department of Radiology, Phoenix, AZ, 85054, USA. Electronic address: ibanerj7@asu.edu.
J Biomed Inform; 149: 104548, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38043883
ABSTRACT

BACKGROUND:

A major hurdle for the real-time deployment of AI models is ensuring their trustworthiness on unseen populations. More often than not, these complex models are black boxes that generate promising results; when scrutinized, however, they begin to reveal implicit biases in their decision making, particularly for minority subgroups.

METHOD:

We develop an efficient adversarial debiasing approach with partial learning by incorporating the existing concept activation vectors (CAV) methodology, to reduce racial disparities while preserving the performance of the targeted task. CAV is originally a model-interpretability technique, which we adapt to identify the convolution layers responsible for learning race and to fine-tune only up to that layer rather than the complete network, limiting the drop in performance.
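The abstract does not give implementation details, so the following is a minimal, hypothetical sketch of the layer-selection idea: for each convolutional block, a linear probe (in the spirit of CAVs) is fit on that block's activations to predict the patient's race, and the deepest block whose activations still linearly encode race marks the boundary up to which the network is later fine-tuned. All names (layer_activations, deepest_race_encoding_layer, the 0.7 threshold) are illustrative assumptions; the selection criterion in the released code may differ.

import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

@torch.no_grad()
def layer_activations(model, layer, images):
    # Collect flattened activations of `layer` for a batch of images
    # via a temporary forward hook.
    feats = []
    handle = layer.register_forward_hook(
        lambda mod, inp, out: feats.append(out.flatten(1).cpu()))
    model.eval()
    model(images)
    handle.remove()
    return torch.cat(feats).numpy()

def race_probe_accuracy(model, layer, images, race_labels):
    # Cross-validated accuracy of a linear probe (a CAV-style linear
    # concept classifier) predicting race from the layer's activations.
    acts = layer_activations(model, layer, images)
    probe = LogisticRegression(max_iter=1000)
    return cross_val_score(probe, acts, race_labels, cv=3).mean()

def deepest_race_encoding_layer(model, conv_layers, images, race_labels, threshold=0.7):
    # Index of the deepest convolution layer whose activations still encode
    # race above `threshold`; layers up to this index are candidates for
    # fine-tuning, while deeper layers stay frozen.
    chosen = None
    for idx, layer in enumerate(conv_layers):
        if race_probe_accuracy(model, layer, images, race_labels) > threshold:
            chosen = idx
    return chosen

In practice the probe would be fit on activations gathered over the whole training set rather than a single batch, and layers deeper than the returned index would have their parameters frozen during the subsequent adversarial fine-tuning stage.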

RESULTS:

The methodology was evaluated on two independent medical-image case studies, chest X-ray and mammography, and we also performed external validation on a different racial population. On the external datasets for the chest X-ray use case, the debiased models (averaged AUC 0.87) outperformed both the baseline convolution models (averaged AUC 0.57) and the models trained with the popular fine-tuning strategy (averaged AUC 0.81). Moreover, the mammogram model was debiased using a single dataset (White, Black, and Asian patients) and improved performance on an external dataset (averaged AUC 0.80 to 0.86) with a completely different population (primarily Hispanic patients).

CONCLUSION:

In this study, we demonstrated that adversarial models trained only with internal data performed equally well as, and often outperformed, the standard fine-tuning strategy that uses data from an external setting. The adversarial training approach described can be applied regardless of the predictor's model architecture, as long as the convolution model is trained with a gradient-based method. We release the training code under an academic open-source license: https://github.com/ramon349/JBI2023_TCAV_debiasing.
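As a concrete, though hypothetical, illustration of why only a gradient-based predictor is required, the sketch below shows one common adversarial debiasing step: an auxiliary head tries to recover race from the intermediate features, and the unfrozen feature layers plus the task head are updated to solve the target task while confusing that head. The function and parameter names (features, task_head, race_head, lambda_adv) are assumptions for illustration, not taken from the released repository.

import torch.nn as nn

def adversarial_step(features, task_head, race_head, batch,
                     opt_main, opt_adv, lambda_adv=1.0):
    images, y_task, y_race = batch
    task_loss_fn = nn.BCEWithLogitsLoss()   # binary target task (finding present/absent)
    race_loss_fn = nn.CrossEntropyLoss()    # multi-class race adversary

    # 1) Update the adversary so it predicts race as well as it can
    #    from detached features.
    z = features(images).detach()
    adv_loss = race_loss_fn(race_head(z), y_race)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Update the partially unfrozen feature layers and the task head:
    #    minimise the task loss while maximising the adversary's loss.
    z = features(images)
    main_loss = task_loss_fn(task_head(z), y_task) \
        - lambda_adv * race_loss_fn(race_head(z), y_race)
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()
    return main_loss.item()

Here opt_main would hold only the parameters of the task head and of the layers selected by the CAV probe, so the deeper frozen layers remain untouched; nothing beyond ordinary backpropagation is assumed, which is what keeps the approach architecture-agnostic.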

Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Artificial Intelligence / Diagnostic Imaging / Racial Groups / Clinical Decision-Making Limits: Humans Language: En Journal: J Biomed Inform Journal subject: Medical Informatics Year of publication: 2024 Document type: Article
