This article is a preprint.
Preprints are preliminary research reports that have not been certified by peer review. They should not be relied on to guide clinical practice or health-related behavior, and should not be reported in the media as established information.
Preprints posted online allow authors to receive feedback quickly, and the scientific community as a whole can independently evaluate the work and respond appropriately. These comments are posted alongside the preprints so that anyone can read them, serving as a form of post-publication review.
Algorithmic Fairness and Bias Mitigation for Clinical Machine Learning: Insights from Rapid COVID-19 Diagnosis by Adversarial Learning
Preprint
in English
| medRxiv
| ID: ppmedrxiv-22268948
ABSTRACT
Machine learning is becoming increasingly prominent in healthcare. Although its benefits are clear, growing attention is being given to how machine learning may exacerbate existing biases and disparities. In this study, we introduce an adversarial training framework capable of mitigating biases that may have been acquired through data collection or magnified during model development. For example, if one class is over-represented, or if errors and inconsistencies in clinical practice are reflected in the training data, a model can learn and perpetuate these biases. To evaluate our adversarial training framework, we used the statistical definition of equalized odds. We evaluated our model on the task of rapidly predicting COVID-19 for patients presenting to hospital emergency departments, aiming to mitigate the regional (hospital) and ethnic biases present. We trained our framework on a large, real-world COVID-19 dataset and demonstrated that adversarial training improves outcome fairness (with respect to equalized odds) while still achieving clinically effective screening performance (NPV > 0.98). We compared our method to the benchmark set by related previous work, and performed prospective and external validation on four independent hospital cohorts. Our method can be generalized to any outcome, model, and definition of fairness.
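The abstract's fairness criterion, equalized odds, can be made concrete with a small metric sketch. The paper's actual adversarial framework is not reproduced here; the function below is a hypothetical helper (not from the paper) that measures the quantity adversarial training aims to shrink: the largest between-group difference in true-positive and false-positive rates, assuming binary labels, binary predictions, and a categorical protected attribute (e.g., hospital site or ethnicity).

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group difference in TPR and FPR.

    Equalized odds requires predictions to be independent of the
    protected attribute conditional on the true label, i.e. equal
    true-positive and false-positive rates across all groups.
    A gap of 0 means the criterion is satisfied exactly.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs, fprs = [], []
    for g in np.unique(group):
        in_group = group == g
        pos = in_group & (y_true == 1)   # positives in group g
        neg = in_group & (y_true == 0)   # negatives in group g
        tprs.append(y_pred[pos].mean())  # TPR within group g
        fprs.append(y_pred[neg].mean())  # FPR within group g
    # np.ptp = max minus min, i.e. the spread across groups
    return max(np.ptp(tprs), np.ptp(fprs))

# Toy example: group 1 has TPR 1.0 vs group 0's TPR 0.5,
# and FPR 0.0 vs group 0's FPR 0.5, so the gap is 0.5.
gap = equalized_odds_gap(
    y_true=[1, 1, 0, 0, 1, 1, 0, 0],
    y_pred=[1, 0, 1, 0, 1, 1, 0, 0],
    group=[0, 0, 0, 0, 1, 1, 1, 1],
)
print(gap)  # → 0.5
```

In the paper's setting, the adversary would instead be a trained model that tries to recover the protected attribute from the predictor's output (conditioned on the label), with the predictor penalized for the adversary's success; this metric is only the evaluation side of that picture.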
License: CC BY
Full text:
Available
Collection:
Preprints
Database:
medRxiv
Study type:
Cohort study / Diagnostic study / Experimental study / Observational study / Prognostic study
Language:
English
Year:
2022
Document type:
Preprint