arXiv Analytics

arXiv:1911.01812 [cs.LG]

Enhancing the Privacy of Federated Learning with Sketching

Zaoxing Liu, Tian Li, Virginia Smith, Vyas Sekar

Published 2019-11-05, Version 1

In response to growing concerns about user privacy, federated learning has emerged as a promising tool for training statistical models over networks of devices while keeping data localized. Federated learning methods run training tasks directly on user devices and do not share raw user data with third parties. However, current methods still share model updates during the training process, and these updates may contain private information (e.g., one's weight and height). Existing efforts to improve the privacy of federated learning compromise on one or more of the following key areas: performance (particularly communication cost), accuracy, or privacy. To better optimize these trade-offs, we propose that sketching algorithms have a unique advantage in that they can provide both privacy and performance benefits while maintaining accuracy. We evaluate the feasibility of sketching-based federated learning with a prototype on three representative learning models. Our initial findings show that it is possible to provide strong privacy guarantees for federated learning without sacrificing performance or accuracy. Our work highlights a fundamental connection between privacy and communication in distributed settings, and suggests important open problems surrounding the theoretical understanding, methodology, and system design of practical, private federated learning.
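To make the core idea concrete, the following is a minimal sketch of how a linear sketch such as Count Sketch could compress a client's model update before it is sent to the server. This is a generic Count Sketch illustration, not the authors' exact construction: the class name, parameters, and recovery-by-median step are assumptions for exposition. The compressed table is much smaller than the raw update (reducing communication), and the server only ever sees hashed, sign-flipped aggregates rather than raw gradient coordinates.

```python
import numpy as np

class CountSketch:
    """Hypothetical Count Sketch for compressing a gradient/update vector.

    rows x cols table; each row hashes every coordinate into a bucket
    with a random +/-1 sign. Heavy coordinates can be recovered by
    taking the median estimate across rows.
    """

    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.rows, self.cols, self.dim = rows, cols, dim
        # Per-row hash bucket and random sign for every coordinate.
        self.buckets = rng.integers(0, cols, size=(rows, dim))
        self.signs = rng.choice([-1.0, 1.0], size=(rows, dim))

    def compress(self, vec):
        """Client side: fold the dim-length update into a rows x cols table."""
        table = np.zeros((self.rows, self.cols))
        for r in range(self.rows):
            # Accumulate signed values into hash buckets (handles collisions).
            np.add.at(table[r], self.buckets[r], self.signs[r] * vec)
        return table

    def decompress(self, table):
        """Server side: estimate each coordinate as the median across rows."""
        est = np.empty((self.rows, self.dim))
        for r in range(self.rows):
            est[r] = self.signs[r] * table[r][self.buckets[r]]
        return np.median(est, axis=0)

# Example: a sparse 1000-dim update compressed into a 5 x 64 table
# (320 numbers instead of 1000), with the heavy coordinates recoverable.
sk = CountSketch(rows=5, cols=64, dim=1000, seed=1)
update = np.zeros(1000)
update[10], update[500] = 5.0, -3.0
recovered = sk.decompress(sk.compress(update))
```

Because sketches are linear, tables from many clients can be summed server-side before decompression, so the server aggregates updates without inspecting any individual client's raw vector.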

Related articles:
arXiv:1911.12560 [cs.LG] (Published 2019-11-28)
Free-riders in Federated Learning: Attacks and Defenses
arXiv:1910.07796 [cs.LG] (Published 2019-10-17)
Overcoming Forgetting in Federated Learning on Non-IID Data
arXiv:2009.06005 [cs.LG] (Published 2020-09-13)
FLaPS: Federated Learning and Privately Scaling