arXiv:2003.03699 [cs.LG]
Removing Disparate Impact of Differentially Private Stochastic Gradient Descent on Model Accuracy
Published 2020-03-08 (Version 1)
When we enforce differential privacy in machine learning, the utility-privacy trade-off differs across groups: gradient clipping and random noise addition disproportionately affect underrepresented and complex classes and subgroups, resulting in unequal utility loss. In this work, we analyze the inequality in utility loss caused by differential privacy and propose a modified differentially private stochastic gradient descent (DPSGD), called DPSGD-F, to remove the potential disparate impact of differential privacy on the protected group. DPSGD-F adjusts the contribution of samples in a group depending on the group's clipping bias, so that differential privacy has no disparate impact on group utility. Our experimental evaluation shows how group sample size and group clipping bias affect the impact of differential privacy in DPSGD, and how adaptive clipping for each group helps mitigate the disparate impact caused by differential privacy in DPSGD-F.
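To make the idea of per-group adaptive clipping concrete, the following is a minimal illustrative sketch (not the authors' DPSGD-F algorithm or released code) of a DPSGD-style update in which each group receives its own clipping bound. The logistic-regression loss, the heuristic of tying a group's bound to its median per-sample gradient norm, and all parameter values are assumptions made for illustration only.

```python
# Illustrative sketch of a DPSGD-style step with per-group adaptive clipping.
# NOT the paper's DPSGD-F: the per-group bound heuristic and toy model are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def per_sample_grads(w, X, y):
    """Per-sample gradients of the logistic loss, one row per example."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return (p - y)[:, None] * X            # shape (n, d)

def dpsgd_group_adaptive_step(w, X, y, groups, base_clip, noise_mult, lr):
    grads = per_sample_grads(w, X, y)
    norms = np.linalg.norm(grads, axis=1)
    clipped = np.zeros_like(grads)
    group_bounds = {}
    for g in np.unique(groups):
        idx = groups == g
        # Assumed heuristic: clip each group at its median per-sample gradient
        # norm (but at least base_clip), so groups with larger gradients -- often
        # underrepresented or harder classes -- are clipped less aggressively.
        C_g = max(base_clip, float(np.median(norms[idx])))
        group_bounds[g] = C_g
        scale = np.minimum(1.0, C_g / np.maximum(norms[idx], 1e-12))
        clipped[idx] = grads[idx] * scale[:, None]
    # Calibrate the Gaussian noise to the largest bound used this step, which
    # upper-bounds any single sample's contribution to the summed gradient.
    C_max = max(group_bounds.values())
    noise = rng.normal(0.0, noise_mult * C_max, size=w.shape)
    g_avg = (clipped.sum(axis=0) + noise) / len(y)
    return w - lr * g_avg

# Toy usage: two groups of unequal size trained for a few noisy SGD steps.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)
groups = (rng.random(n) < 0.2).astype(int)   # group 1 is underrepresented

w = np.zeros(d)
for _ in range(50):
    w = dpsgd_group_adaptive_step(w, X, y, groups,
                                  base_clip=1.0, noise_mult=0.5, lr=0.1)
```

The intent of the sketch is only to show where a group-dependent clipping bound enters the standard clip-then-add-noise pipeline; the actual rule DPSGD-F uses to set each group's contribution, and its privacy accounting, are specified in the paper.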