arXiv:2107.05457 [cs.LG]

Improving the Algorithm of Deep Learning with Differential Privacy

Mehdi Amian

Published 2021-07-12 (Version 1)

This paper proposes an adjustment to the original differentially private stochastic gradient descent (DPSGD) algorithm for deep learning models. The motivation is that, to date, almost no state-of-the-art machine learning system employs the existing privacy-protecting components, despite the vital need for them, because doing so would seriously compromise utility. The idea in this study is natural and interpretable, and it improves utility with respect to the state of the art. Another property of the proposed technique is its simplicity, which makes it more natural and better suited to real-world, and especially commercial, applications. The intuition is to trim and balance out wild individual discrepancies for the sake of privacy while preserving relative individual differences for the sake of performance. The idea can also be applied to recurrent neural networks (RNNs) to address the exploding-gradient problem. The algorithm is applied to the benchmark datasets MNIST and CIFAR-10 on a classification task, and the utility measure is computed. The results outperform those of the original work.
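For context, the baseline being adjusted is the standard DPSGD recipe: clip each example's gradient to a fixed norm bound, average, and add Gaussian noise before the update. Below is a minimal NumPy sketch of one such step; the function name, parameters, and defaults are illustrative assumptions, not taken from the paper, whose specific modification to this recipe is not reproduced here.

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
               noise_mult=1.1, rng=None):
    """One illustrative DPSGD update (hypothetical helper, not the
    paper's exact method).

    - Clip each per-example gradient to L2 norm <= clip_norm,
      bounding any one individual's influence ("trimming wild
      individual discrepancies").
    - Average the clipped gradients, preserving relative
      differences among examples within the clipping bound.
    - Add Gaussian noise calibrated to noise_mult * clip_norm
      for the privacy guarantee.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only when the norm exceeds the bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0,
                       noise_mult * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

With `noise_mult=0.0` the step reduces to plain SGD with per-example clipping, which makes the clipping behavior easy to inspect in isolation.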
