{ "id": "2107.05457", "version": "v1", "published": "2021-07-12T14:28:12.000Z", "updated": "2021-07-12T14:28:12.000Z", "title": "Improving the Algorithm of Deep Learning with Differential Privacy", "authors": [ "Mehdi Amian" ], "categories": [ "cs.LG", "cs.AI" ], "abstract": "In this paper, an adjustment to the original differentially private stochastic gradient descent (DPSGD) algorithm for deep learning models is proposed. As a matter of motivation, to date, almost no state-of-the-art machine learning algorithm hires the existing privacy protecting components due to otherwise serious compromise in their utility despite the vital necessity. The idea in this study is natural and interpretable, contributing to improve the utility with respect to the state-of-the-art. Another property of the proposed technique is its simplicity which makes it again more natural and also more appropriate for real world and specially commercial applications. The intuition is to trim and balance out wild individual discrepancies for privacy reasons, and at the same time, to preserve relative individual differences for seeking performance. The idea proposed here can also be applied to the recurrent neural networks (RNN) to solve the gradient exploding problem. The algorithm is applied to benchmark datasets MNIST and CIFAR-10 for a classification task and the utility measure is calculated. The results outperformed the original work.", "revisions": [ { "version": "v1", "updated": "2021-07-12T14:28:12.000Z" } ], "analyses": { "keywords": [ "differential privacy", "deep learning", "differentially private stochastic gradient descent", "original differentially private stochastic gradient", "state-of-the-art machine learning algorithm hires" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }