FairALM: Augmented Lagrangian Method for Training Fair Models with Little Regret.
Lokhande, Vishnu Suresh; Akash, Aditya Kumar; Ravi, Sathya N; Singh, Vikas.
Affiliation
  • Lokhande VS; University of Wisconsin-Madison, Madison WI, USA.
  • Akash AK; University of Wisconsin-Madison, Madison WI, USA.
  • Ravi SN; University of Illinois at Chicago, Chicago IL, USA.
  • Singh V; University of Wisconsin-Madison, Madison WI, USA.
Comput Vis ECCV; 12357: 365-381, 2020 Aug.
Article in English | MEDLINE | ID: mdl-33462570
ABSTRACT
Algorithmic decision making based on computer vision and machine learning methods continues to permeate our lives. But issues related to the biases of these models, and the extent to which they treat certain segments of the population unfairly, have led to legitimate concerns. There is agreement that, because of biases in the datasets we present to the models, fairness-oblivious training will lead to unfair models. An interesting topic is the study of mechanisms via which the de novo design or training of the model can be informed by fairness measures. Here, we study strategies to impose fairness concurrently while training the model. While many fairness-based approaches in vision rely on training adversarial modules together with the primary classification/regression task, in an effort to remove the influence of the protected attribute or variable, we show how ideas based on well-known optimization concepts can provide a simpler alternative. In our proposal, imposing fairness just requires specifying the protected attribute and utilizing our routine. We provide a detailed technical analysis and present experiments demonstrating that various fairness measures can be reliably imposed on a number of training tasks in vision in a manner that is interpretable.
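
To make the general idea concrete, the Python sketch below illustrates an augmented Lagrangian penalty on a simple fairness constraint inside an otherwise standard training loop. It is only a hedged illustration of the constrained-training pattern, not the authors' FairALM routine: the names (fairness_gap, train_fair), the demographic-parity-style constraint, the per-epoch dual update, and the hyperparameters are assumptions made for this example.

    # Illustrative sketch only; names, constraint choice, and hyperparameters
    # are assumptions for this example, not the paper's released code.
    import torch

    def fairness_gap(logits, protected):
        # Absolute difference in mean positive prediction rate between the
        # two protected groups (a demographic-parity-style constraint c >= 0).
        probs = torch.sigmoid(logits)
        return (probs[protected == 0].mean() - probs[protected == 1].mean()).abs()

    def train_fair(model, loader, epochs=10, rho=1.0, lr=1e-3):
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        bce = torch.nn.BCEWithLogitsLoss()
        lam = 0.0                      # Lagrange multiplier for the constraint
        for _ in range(epochs):
            for x, y, s in loader:     # s holds the protected attribute (0/1)
                logits = model(x).squeeze(-1)
                c = fairness_gap(logits, s)
                # Augmented Lagrangian: task loss + multiplier term + quadratic penalty
                loss = bce(logits, y.float()) + lam * c + 0.5 * rho * c ** 2
                opt.zero_grad()
                loss.backward()
                opt.step()
            # Dual ascent on the multiplier, using the last batch as a cheap
            # proxy for the current constraint violation.
            lam = lam + rho * float(c.detach())
        return model

In the paper itself, the specific fairness measures, the dual update schedule, and the regret guarantees are as the authors specify; the fragment above only conveys the shape of imposing a fairness constraint concurrently with training.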

Full text: 1 Database: MEDLINE Study type: Prognostic_studies Language: En Year of publication: 2020 Document type: Article
