arXiv:2209.09592 [cs.LG]

Closing the Gender Wage Gap: Adversarial Fairness in Job Recommendation

Clara Rus, Jeffrey Luppes, Harrie Oosterhuis, Gido H. Schoenmacker

Published: 2022-09-20 (Version 1)

The goal of this work is to help mitigate the existing gender wage gap by supplying unbiased job recommendations based on resumes from job seekers. We employ a generative adversarial network to remove gender bias from word2vec representations of 12M job vacancy texts and 900k resumes. Our results show that representations created from recruitment texts contain algorithmic bias, and that this bias has real-world consequences for recommendation systems: without controlling for it, women in our data are recommended jobs with significantly lower salaries. With adversarially fair representations, this wage gap disappears, meaning that our debiased job recommendations reduce wage discrimination. We conclude that adversarial debiasing of word representations can increase the real-world fairness of systems and may thus be part of the solution for creating fairness-aware recommendation systems.
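The core idea — training a representation so that an adversary can no longer predict gender from it, while a second objective preserves the information needed downstream — can be sketched in a few lines. The sketch below is an illustrative assumption, not the authors' architecture: it uses synthetic data in place of word2vec document embeddings, a linear encoder/decoder in place of their network, and a logistic adversary, with all names and dimensions invented for the example.

```python
import numpy as np

# Minimal sketch of adversarial debiasing (illustrative, not the
# paper's actual model): an encoder maps embeddings x to z, a decoder
# reconstructs x from z, and a logistic adversary tries to predict the
# protected attribute g from z. The encoder is trained to reconstruct
# well while making the adversary fail.

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Synthetic stand-in for document embeddings: dimension 0 leaks gender.
n, d, k = 400, 8, 4
g = rng.integers(0, 2, n).astype(float)   # protected attribute (0/1)
x = rng.normal(size=(n, d))
x[:, 0] += 3.0 * g                        # gender-correlated feature

# How well a naive threshold on the leaky dimension recovers gender.
baseline_acc = ((x[:, 0] > 1.5) == (g > 0.5)).mean()

# Linear encoder z = xW, decoder x_hat = zV, adversary sigmoid(za + b).
W = rng.normal(scale=0.1, size=(d, k))
V = rng.normal(scale=0.1, size=(k, d))
a = np.zeros(k)
b = 0.0
lr, lam = 0.05, 5.0                       # lam weights the adversarial term

for _ in range(500):
    z = x @ W
    # --- adversary step: fit gender from the current representation ---
    p = sigmoid(z @ a + b)
    grad_logit = (p - g) / n              # d(BCE)/d(logit)
    a -= lr * (z.T @ grad_logit)
    b -= lr * grad_logit.sum()
    # --- encoder/decoder step: reconstruct x, fool the adversary ---
    x_hat = z @ V
    grad_xhat = 2.0 * (x_hat - x) / (n * d)
    # Gradient on z: reconstruction term minus lam * adversary term
    # (the minus sign pushes z toward higher adversary loss).
    grad_z = grad_xhat @ V.T - lam * grad_logit[:, None] * a[None, :]
    V -= lr * (z.T @ grad_xhat)
    W -= lr * (x.T @ grad_z)

z = x @ W
adv_acc = ((sigmoid(z @ a + b) > 0.5) == (g > 0.5)).mean()
```

After training, the adversary's accuracy on the debiased representation `z` should drop toward chance, even though gender was easy to read off the raw features; the paper applies this principle at scale so that downstream job recommendations no longer correlate salary with gender.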

Journal: RecSys in HR'22: The 2nd Workshop on Recommender Systems for Human Resources, in conjunction with the 16th ACM Conference on Recommender Systems, September 18-23, 2022, Seattle, USA
Categories: cs.LG