This article is a Preprint
Preprints are preliminary research reports that have not been certified by peer review. They should not be relied on to guide clinical practice or health-related behaviour, and should not be reported in the media as established information.
Preprints posted online allow authors to receive rapid feedback, and the whole scientific community can evaluate the work independently and respond accordingly. These comments are published alongside the preprint for anyone to read and serve as a post-publication assessment.
Algorithmic Fairness and Bias Mitigation for Clinical Machine Learning: Insights from Rapid COVID-19 Diagnosis by Adversarial Learning
Preprint in English | medRxiv | ID: ppmedrxiv-22268948
ABSTRACT
Machine learning is becoming increasingly prominent in healthcare. Although its benefits are clear, growing attention is being given to how machine learning may exacerbate existing biases and disparities. In this study, we introduce an adversarial training framework that is capable of mitigating biases that may have been acquired through data collection or magnified during model development. For example, if one class is over-represented, or if errors and inconsistencies in clinical practice are reflected in the training data, a model can become biased by them. To evaluate our adversarial training framework, we used the statistical definition of equalized odds. We evaluated our model on the task of rapidly predicting COVID-19 for patients presenting to hospital emergency departments, and aimed to mitigate the regional (hospital) and ethnic biases present. We trained our framework on a large, real-world COVID-19 dataset and showed that adversarial training measurably improves outcome fairness (with respect to equalized odds) while still achieving clinically effective screening performance (NPV > 0.98). We compared our method to the benchmark set by related previous work, and performed prospective and external validation on four independent hospital cohorts. Our method can be generalized to other outcomes, models, and definitions of fairness.
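The abstract describes adversarial training used to discourage a diagnostic model from encoding hospital-site or ethnicity information, with fairness assessed via equalized odds. The paper's actual implementation is not reproduced in this record; the sketch below is a minimal, generic illustration of one common way such a setup is built, assuming tabular features, a binary COVID-19 label, and a categorical protected attribute. A gradient-reversal layer lets a single optimizer train the predictor to make the group adversary fail. All names (e.g. `AdversarialDebiaser`) and the synthetic data are hypothetical and are not taken from the paper.

```python
# Illustrative adversarial-debiasing sketch (not the authors' code).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversarialDebiaser(nn.Module):
    def __init__(self, n_features, n_groups, lam=1.0):
        super().__init__()
        self.lam = lam
        # Predictor: features -> logit for COVID-19 positivity.
        self.predictor = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1)
        )
        # Adversary sees the prediction *and* the true label, so driving its
        # accuracy to chance encourages equalized odds (prediction independent
        # of the group, conditional on the outcome).
        self.adversary = nn.Sequential(
            nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, n_groups)
        )

    def forward(self, x, y):
        logit = self.predictor(x)
        adv_in = GradReverse.apply(torch.cat([logit, y], dim=1), self.lam)
        group_logits = self.adversary(adv_in)
        return logit, group_logits

# Hypothetical training loop on synthetic data, for illustration only.
torch.manual_seed(0)
X = torch.randn(256, 20)
y = (torch.rand(256, 1) < 0.3).float()   # binary COVID-19 outcome
s = torch.randint(0, 4, (256,))          # e.g. 4 hospital sites

model = AdversarialDebiaser(n_features=20, n_groups=4, lam=1.0)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
task_loss_fn = nn.BCEWithLogitsLoss()
adv_loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    opt.zero_grad()
    logit, group_logits = model(X, y)
    # The adversary minimises its loss; via gradient reversal, the predictor
    # simultaneously maximises it while minimising the task loss.
    loss = task_loss_fn(logit, y) + adv_loss_fn(group_logits, s)
    loss.backward()
    opt.step()
```

Feeding the true label `y` to the adversary alongside the prediction is what orients the penalty toward equalized odds rather than plain demographic parity: the adversary can only succeed if the prediction carries group information beyond what the outcome itself explains.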
License: cc_by
Full text: Available
Collections: Preprints
Database: medRxiv
Study type: Cohort study / Diagnostic study / Experimental study / Observational study / Prognostic study
Language: English
Publication year: 2022
Document type: Preprint