Mitigating machine learning bias between high income and low-middle income countries for enhanced model fairness and generalizability.
Yang, Jenny; Clifton, Lei; Dung, Nguyen Thanh; Phong, Nguyen Thanh; Yen, Lam Minh; Thy, Doan Bui Xuan; Soltan, Andrew A S; Thwaites, Louise; Clifton, David A.
Affiliation
  • Yang J; Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, England. jenny.yang@eng.ox.ac.uk.
  • Clifton L; Nuffield Department of Population Health, University of Oxford, Oxford, England.
  • Dung NT; Hospital for Tropical Diseases, Ho Chi Minh City, Vietnam.
  • Phong NT; Hospital for Tropical Diseases, Ho Chi Minh City, Vietnam.
  • Yen LM; Oxford University Clinical Research Unit, Ho Chi Minh City, Vietnam.
  • Thy DBX; Oxford University Clinical Research Unit, Ho Chi Minh City, Vietnam.
  • Soltan AAS; Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, England.
  • Thwaites L; Nuffield Department of Population Health, University of Oxford, Oxford, England.
  • Clifton DA; Oxford Cancer and Haematology Centre, Oxford University Hospitals NHS Foundation Trust, Oxford, England.
Sci Rep; 14(1): 13318, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858466
ABSTRACT
Collaborative efforts in artificial intelligence (AI) are increasingly common between high-income countries (HICs) and low- to middle-income countries (LMICs). Given the resource limitations often faced by LMICs, collaboration is crucial for pooling resources, expertise, and knowledge. Despite the apparent advantages, ensuring the fairness and equity of these collaborative models is essential, especially given the marked differences between LMIC and HIC hospitals. In this study, we show that collaborative AI approaches can lead to divergent performance outcomes across HIC and LMIC settings, particularly in the presence of data imbalances. Through a real-world COVID-19 screening case study, we demonstrate that implementing algorithmic-level bias mitigation methods significantly improves outcome fairness between HIC and LMIC sites while maintaining high diagnostic sensitivity. We compare our results against previous benchmarks, using datasets from four independent United Kingdom hospitals and one Vietnamese hospital, representing HIC and LMIC settings, respectively.
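The abstract refers to algorithmic-level bias mitigation without specifying the technique, so the following is only an illustrative sketch, not the authors' pipeline: it uses fully synthetic data, a hypothetical HIC/LMIC site label, and a simple site-reweighting mitigation to show how one might quantify a sensitivity (recall) gap between cohorts and reduce the influence of site imbalance during training. All feature dimensions, sample sizes, and the choice of reweighting are assumptions for demonstration.

    # Illustrative sketch only: synthetic data and a simple reweighting-style
    # mitigation; not the method used in the paper.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(0)

    # Simulate an imbalanced multi-site cohort: many HIC samples, few LMIC
    # samples, with a mild covariate shift between sites (hypothetical).
    n_hic, n_lmic = 5000, 500
    X_hic = rng.normal(0.0, 1.0, size=(n_hic, 5))
    X_lmic = rng.normal(0.5, 1.2, size=(n_lmic, 5))
    y_hic = (X_hic[:, 0] + 0.5 * X_hic[:, 1] + rng.normal(0, 1, n_hic) > 0).astype(int)
    y_lmic = (X_lmic[:, 0] + 0.5 * X_lmic[:, 1] + rng.normal(0, 1, n_lmic) > 0).astype(int)

    X = np.vstack([X_hic, X_lmic])
    y = np.concatenate([y_hic, y_lmic])
    site = np.array(["HIC"] * n_hic + ["LMIC"] * n_lmic)

    def sensitivity_by_site(model, X, y, site):
        """Per-site sensitivity (recall on the positive class) and the absolute gap."""
        sens = {s: recall_score(y[site == s], model.predict(X[site == s]))
                for s in np.unique(site)}
        return sens, abs(sens["HIC"] - sens["LMIC"])

    # Baseline: pooled training that ignores the site imbalance.
    baseline = LogisticRegression(max_iter=1000).fit(X, y)
    # Evaluated on the same data purely to illustrate the metric, not as a
    # validation protocol.
    print("baseline ", *sensitivity_by_site(baseline, X, y, site))

    # Simple mitigation: reweight samples so each site contributes equal total
    # weight during training (LMIC samples are upweighted by n_hic / n_lmic).
    weights = np.where(site == "LMIC", n_hic / n_lmic, 1.0)
    mitigated = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
    print("mitigated", *sensitivity_by_site(mitigated, X, y, site))

In practice, the per-site sensitivity gap computed above is one of several group-fairness metrics one could track alongside overall diagnostic sensitivity when comparing a pooled model against a bias-mitigated one.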
Full text: 1 Database: MEDLINE Main subject: Developing Countries / Machine Learning / COVID-19 Country/Region as subject: Asia / Europe Language: English Journal: Sci Rep Year: 2024 Document type: Article