arXiv Analytics

arXiv:1806.11212 [cs.LG]

Proxy Fairness

Maya Gupta, Andrew Cotter, Mahdi Milani Fard, Serena Wang

Published 2018-06-28 (Version 1)

We consider the problem of improving fairness when one lacks access to a dataset labeled with protected groups, making it difficult to take advantage of strategies that can improve fairness but require protected group labels, either at training or runtime. To address this, we investigate improving fairness metrics for proxy groups, and test whether doing so results in improved fairness for the true sensitive groups. Results on benchmark and real-world datasets demonstrate that such a proxy fairness strategy can work well in practice. However, we caution that the effectiveness likely depends on the choice of fairness metric, as well as how aligned the proxy groups are with the true protected groups in terms of the constrained model parameters.
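
As a rough illustration of the setup, not code from the paper, the sketch below measures a demographic parity gap (one possible choice of fairness metric) on proxy groups versus true protected groups, using synthetic data in which an observable proxy agrees with the unobserved true group 80% of the time. All names, rates, and the metric choice here are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    def demographic_parity_gap(predictions, groups):
        """Max difference in positive-prediction rate across groups."""
        rates = [predictions[groups == g].mean() for g in np.unique(groups)]
        return max(rates) - min(rates)

    # Synthetic example: the true protected group is unobserved at
    # training time; a correlated, observable proxy stands in for it.
    n = 10_000
    true_group = rng.integers(0, 2, size=n)        # unavailable in practice
    proxy_group = np.where(rng.random(n) < 0.8,    # proxy matches the true
                           true_group,             # group 80% of the time
                           1 - true_group)
    # A classifier whose positive rate depends on the true group (biased).
    predictions = rng.random(n) < (0.4 + 0.2 * true_group)

    print("gap on proxy groups:", demographic_parity_gap(predictions, proxy_group))
    print("gap on true groups: ", demographic_parity_gap(predictions, true_group))

In the paper's setting, one would constrain training so that the proxy-group gap is small and then check empirically, as the authors do on benchmark and real-world datasets, whether the true-group gap shrinks as well.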

Related articles:
arXiv:1809.04737 [cs.LG] (Published 2018-09-13)
Fairness-aware Classification: Criterion, Convexity, and Bounds
arXiv:2201.09199 [cs.LG] (Published 2022-01-23)
Deep Learning on Attributed Sequences
arXiv:1506.06318 [cs.LG] (Published 2015-06-21)
Communication Efficient Distributed Agnostic Boosting