Algorithmic fairness in computational medicine.
Xu, Jie; Xiao, Yunyu; Wang, Wendy Hui; Ning, Yue; Shenkman, Elizabeth A; Bian, Jiang; Wang, Fei.
Affiliation
  • Xu J; Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, USA; Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA.
  • Xiao Y; Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA.
  • Wang WH; Department of Computer Science, Stevens Institute of Technology, Hoboken, NJ, USA.
  • Ning Y; Department of Computer Science, Stevens Institute of Technology, Hoboken, NJ, USA.
  • Shenkman EA; Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, USA.
  • Bian J; Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, USA.
  • Wang F; Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA. Electronic address: few2001@med.cornell.edu.
EBioMedicine; 84: 104250, 2022 Oct.
Article in En | MEDLINE | ID: mdl-36084616
ABSTRACT
Machine learning models are increasingly adopted to support clinical decision-making. However, recent research has shown that machine learning techniques can produce biased decisions for people in different subgroups, with detrimental effects on the health and well-being of specific demographic groups such as vulnerable ethnic minorities. This problem, termed algorithmic bias, has recently been studied extensively in theoretical machine learning. However, its impact on medicine and the methods available to mitigate it remain topics of active discussion. This paper presents a comprehensive review of algorithmic fairness in the context of computational medicine, which aims to improve medicine with computational approaches. Specifically, we overview the different types of algorithmic bias, fairness quantification metrics, and bias mitigation methods, and summarize popular software libraries and tools for bias evaluation and mitigation, with the goal of providing reference and insights to researchers and practitioners in computational medicine.
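As a point of reference for the fairness quantification metrics the abstract mentions, below is a minimal illustrative sketch (not taken from the paper) of how two common group-fairness metrics, demographic parity difference and equal opportunity difference, can be computed from model predictions; the data and function names are hypothetical, and libraries such as AIF360 and Fairlearn provide production-grade versions of these metrics.

```python
# Minimal sketch of two group-fairness metrics; example data is hypothetical.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction (selection) rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true-positive rates (recall) across groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Hypothetical predictions for patients from two demographic groups A and B.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, group))         # 0.25: selection-rate gap
print(equal_opportunity_difference(y_true, y_pred, group))   # ~0.33: true-positive-rate gap
```

A nonzero value on either metric indicates that the model treats the two groups differently, which is the kind of disparity the bias mitigation methods surveyed in the paper are designed to reduce.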

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Clinical Decision-Making / Machine Learning Type of study: Prognostic_studies Limits: Humans Language: En Journal: EBioMedicine Year: 2022 Document type: Article Affiliation country: United States