arXiv:1905.12101 [cs.LG]

Differential Privacy Has Disparate Impact on Model Accuracy

Eugene Bagdasaryan, Vitaly Shmatikov

Published 2019-05-28 (Version 1)

Differential privacy (DP) is a popular mechanism for training machine learning models with bounded leakage about the presence of specific points in the training data. The cost of differential privacy is a reduction in the model's accuracy. We demonstrate that this cost is not borne equally: the accuracy of DP models drops much more for the underrepresented classes and subgroups. For example, a DP gender classification model exhibits much lower accuracy for black faces than for white faces. Critically, this gap is bigger in the DP model than in the non-DP model, i.e., if the original model is unfair, the unfairness becomes worse once DP is applied. We demonstrate this effect for a variety of tasks and models, including sentiment analysis of text and image classification. We then explain why DP training mechanisms such as gradient clipping and noise addition have a disproportionate effect on the underrepresented and more complex subgroups, resulting in a disparate reduction of model accuracy.
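The mechanisms the abstract names, per-example gradient clipping and noise addition as in DP-SGD, can be sketched in a few lines. The following is a minimal NumPy sketch, not the paper's implementation; the clipping norm, noise multiplier, and toy gradients are illustrative assumptions.

import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # One DP-SGD-style update: clip each example's gradient to an L2 norm
    # of at most clip_norm, sum, add Gaussian noise scaled to clip_norm,
    # then average. Hyperparameter values are illustrative, not the
    # paper's settings.
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Clipping scales down any gradient whose norm exceeds clip_norm;
    # larger gradients (typical of underrepresented or harder examples)
    # lose proportionally more signal.
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape[1])
    return (clipped.sum(axis=0) + noise) / len(clipped)

# Toy batch: a majority group with small gradients (norm 1.0, untouched by
# clipping) and a minority group with large gradients (norm 6.0, scaled
# down six-fold), so the minority's contribution to the update shrinks.
majority = np.full((90, 4), 0.5)
minority = np.full((10, 4), 3.0)
print(dp_sgd_step(np.vstack([majority, minority])))

One intuition consistent with the abstract's explanation: clipping attenuates the minority's larger gradients the most, while the added noise floor is shared by the whole batch, so the already-weakened minority signal degrades disproportionately.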

Related articles:
arXiv:2106.12576 [cs.LG] (Published 2021-06-22)
DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy?
arXiv:2007.11524 [cs.LG] (Published 2020-07-22)
Improving Deep Learning with Differential Privacy using Gradient Encoding and Denoising
arXiv:2106.00474 [cs.LG] (Published 2021-06-01)
Gaussian Processes with Differential Privacy