Fairness and Accuracy Under Domain Generalization.
Pham, Thai-Hoang; Zhang, Xueru; Zhang, Ping.
Affiliation
  • Pham TH; The Ohio State University, Columbus, OH 43210, USA.
  • Zhang X; The Ohio State University, Columbus, OH 43210, USA.
  • Zhang P; The Ohio State University, Columbus, OH 43210, USA.
ArXiv; 2023 Jan 30.
Article in En | MEDLINE | ID: mdl-37292471
ABSTRACT
As machine learning (ML) algorithms are increasingly used in high-stakes applications, concerns have arisen that they may be biased against certain social groups. Although many approaches have been proposed to make ML models fair, they typically rely on the assumption that data distributions in training and deployment are identical. Unfortunately, this assumption is commonly violated in practice, and a model that is fair during training may produce unexpected outcomes during deployment. While the problem of designing robust ML models under dataset shifts has been widely studied, most existing works focus only on the transfer of accuracy. In this paper, we study the transfer of both fairness and accuracy under domain generalization, where the data at test time may be sampled from never-before-seen domains. We first develop theoretical bounds on the unfairness and expected loss at deployment, and then derive sufficient conditions under which fairness and accuracy can be perfectly transferred via invariant representation learning. Guided by this, we design a learning algorithm such that fair ML models learned with training data still have high fairness and accuracy when deployment environments change. Experiments on real-world data validate the proposed algorithm. Model implementation is available at https://github.com/pth1993/FATDM.
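To make the abstract's recipe concrete, below is a minimal sketch (not the authors' FATDM code) of a training step that combines a task loss with two assumed penalties: a demographic-parity surrogate for fairness and a simple cross-domain invariance term that pulls per-domain mean representations together. All module names, penalty choices, and trade-off weights are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch, NOT the FATDM implementation: task loss
# + fairness penalty + representation-invariance penalty.
import torch
import torch.nn as nn

torch.manual_seed(0)

featurizer = nn.Sequential(nn.Linear(10, 16), nn.ReLU())  # shared representation
classifier = nn.Linear(16, 1)                             # task head
opt = torch.optim.Adam(
    list(featurizer.parameters()) + list(classifier.parameters()), lr=1e-3
)

lam_fair, lam_inv = 1.0, 1.0  # assumed trade-off weights

def fairness_gap(logits, group):
    """Demographic-parity surrogate: absolute gap in mean predicted
    probability between the two protected groups (assumes both appear
    in the batch)."""
    p = torch.sigmoid(logits).squeeze(-1)
    return (p[group == 1].mean() - p[group == 0].mean()).abs()

def train_step(domains):
    """`domains` is a list of (x, y, group) batches, one per training domain."""
    task_loss, fair_loss, feats = 0.0, 0.0, []
    for x, y, g in domains:
        z = featurizer(x)
        logits = classifier(z)
        task_loss += nn.functional.binary_cross_entropy_with_logits(
            logits.squeeze(-1), y
        )
        fair_loss += fairness_gap(logits, g)
        feats.append(z.mean(dim=0))  # per-domain mean representation
    # Invariance penalty: penalize spread of per-domain mean representations.
    mu = torch.stack(feats)
    inv_loss = ((mu - mu.mean(dim=0)) ** 2).sum()
    loss = task_loss + lam_fair * fair_loss + lam_inv * inv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: two training domains, binary labels, binary group attribute.
doms = [
    (torch.randn(32, 10),
     torch.randint(0, 2, (32,)).float(),
     torch.randint(0, 2, (32,)))
    for _ in range(2)
]
print(train_step(doms))
```

The intent of the invariance term here is only to mimic the abstract's idea of invariant representation learning: if the featurizer maps every training domain to a similar representation distribution, a fairness constraint enforced on that representation is more likely to carry over to unseen deployment domains.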

Full text: 1 Collections: 01-international Database: MEDLINE Language: En Year of publication: 2023 Document type: Article