arXiv Analytics


arXiv:2003.03699 [cs.LG]

Removing Disparate Impact of Differentially Private Stochastic Gradient Descent on Model Accuracy

Depeng Xu, Wei Du, Xintao Wu

Published 2020-03-08, Version 1

When we enforce differential privacy in machine learning, the utility-privacy trade-off differs across groups. Gradient clipping and random noise addition disproportionately affect underrepresented and complex classes and subgroups, which results in inequality in utility loss. In this work, we analyze the inequality in utility loss caused by differential privacy and propose a modified differentially private stochastic gradient descent (DPSGD), called DPSGD-F, to remove the potential disparate impact of differential privacy on protected groups. DPSGD-F adjusts the contribution of samples in a group depending on the group's clipping bias, such that differential privacy has no disparate impact on group utility. Our experimental evaluation shows how group sample size and group clipping bias affect the impact of differential privacy in DPSGD, and how adaptive clipping for each group in DPSGD-F helps to mitigate the disparate impact caused by differential privacy.
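To make the mechanism concrete, below is a minimal sketch of a single differentially private gradient step with per-group adaptive clipping, using synthetic per-sample gradients. The function name dpsgd_f_step, the "1 + fraction clipped" scaling rule, and the noise calibration are illustrative assumptions standing in for the paper's DPSGD-F update, not the authors' exact algorithm.

```python
import numpy as np

def dpsgd_f_step(grads, groups, base_clip=1.0, noise_multiplier=1.0, rng=None):
    """One illustrative private gradient step with per-group adaptive clipping.

    grads:  (n, d) per-sample gradients
    groups: (n,)   integer group label for each sample

    Each group's clip bound is scaled by how often its gradients exceed the
    base bound (a hypothetical stand-in for the paper's group clipping bias).
    """
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(grads, axis=1)

    # Estimate each group's "clipping bias": the fraction of its per-sample
    # gradients that would be clipped at the base bound.
    clip_bounds = {}
    for g in np.unique(groups):
        frac_clipped = np.mean(norms[groups == g] > base_clip)
        # Groups clipped more often get a larger bound (illustrative rule).
        clip_bounds[g] = base_clip * (1.0 + frac_clipped)

    # Clip each sample's gradient to its group's bound.
    clipped = np.empty_like(grads)
    for i, g in enumerate(groups):
        bound = clip_bounds[g]
        clipped[i] = grads[i] * min(1.0, bound / (norms[i] + 1e-12))

    # Add Gaussian noise calibrated to the largest group bound, then average.
    max_bound = max(clip_bounds.values())
    noise = rng.normal(0.0, noise_multiplier * max_bound, size=grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(grads)

# Toy usage: 8 samples with 3-dimensional gradients and two groups, where the
# second group has much larger gradient norms and so a higher clipping bias.
rng = np.random.default_rng(42)
grads = rng.normal(size=(8, 3)) * np.array([0.5] * 4 + [3.0] * 4)[:, None]
groups = np.array([0] * 4 + [1] * 4)
print(dpsgd_f_step(grads, groups))
```

The intent mirrors the abstract's description: groups whose gradients are clipped more aggressively under a single global bound receive an adjusted contribution, so that clipping does not penalize underrepresented or complex groups disproportionately. The actual privacy accounting and the precise adjustment rule are specified in the paper.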

Related articles:
arXiv:1905.12101 [cs.LG] (Published 2019-05-28)
Differential Privacy Has Disparate Impact on Model Accuracy
arXiv:2206.07737 [cs.LG] (Published 2022-06-15)
Disparate Impact in Differential Privacy from Gradient Misalignment
arXiv:2107.05457 [cs.LG] (Published 2021-07-12)
Improving the Algorithm of Deep Learning with Differential Privacy