arXiv:2106.12576 [cs.LG]

DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy?

Archit Uniyal, Rakshit Naidu, Sasikanth Kotti, Sahib Singh, Patrik Joslin Kenfack, Fatemehsadat Mireshghallah, Andrew Trask

Published 2021-06-22 (Version 1)

Recent advances in differentially private deep learning have demonstrated that applying differential privacy, specifically the DP-SGD algorithm, has a disparate impact on different sub-groups in the population: it leads to a significantly larger drop in model utility for under-represented sub-populations (minorities) than for well-represented ones. In this work, we compare PATE, another mechanism for training deep learning models with differential privacy, against DP-SGD in terms of fairness. We show that PATE also has a disparate impact, but it is much less severe than that of DP-SGD. We draw insights from this observation about promising directions for achieving better fairness-privacy trade-offs.
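To make the two mechanisms under comparison concrete, below is a minimal sketch of the core private steps of each: DP-SGD clips per-example gradients and adds Gaussian noise to their average, while PATE labels data via a noisy arg-max over teacher votes. A helper for the per-subgroup accuracy gap, the quantity in which disparate impact shows up, follows. All function names, parameter values, and the NumPy-only setting are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1):
    """One DP-SGD update (sketch): clip each per-example gradient to
    `clip_norm`, average, then add Gaussian noise scaled by
    `noise_multiplier * clip_norm` divided by the batch size."""
    batch_size = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

def pate_aggregate(teacher_votes, num_classes, laplace_scale=1.0):
    """PATE noisy-max aggregation (sketch): add Laplace noise to the
    per-class vote counts of the teacher ensemble, return the arg-max."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, laplace_scale, size=num_classes)
    return int(np.argmax(counts))

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Accuracy per subgroup; disparate impact appears as the gap
    between the best- and worst-off groups."""
    accs = {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}
    return accs, max(accs.values()) - min(accs.values())
```

Under this framing, the paper's comparison amounts to training models with each mechanism at comparable privacy budgets and reporting how much larger the subgroup accuracy gap becomes relative to a non-private baseline.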

Related articles:
arXiv:1905.12101 [cs.LG] (Published 2019-05-28)
Differential Privacy Has Disparate Impact on Model Accuracy
arXiv:2010.04327 [cs.LG] (Published 2020-10-09)
Bias and Variance of Post-processing in Differential Privacy
arXiv:2206.07737 [cs.LG] (Published 2022-06-15)
Disparate Impact in Differential Privacy from Gradient Misalignment