arXiv:1910.07796 [cs.LG]

Overcoming Forgetting in Federated Learning on Non-IID Data

Neta Shoham, Tomer Avidor, Aviv Keren, Nadav Israel, Daniel Benditkis, Liron Mor-Yosef, Itai Zeitak

Published 2019-10-17 (Version 1)

We tackle the problem of Federated Learning in the non-i.i.d. case, in which local models drift apart, inhibiting learning. Building on an analogy with Lifelong Learning, we adapt a solution for catastrophic forgetting to Federated Learning. We add a penalty term to the loss function, compelling all local models to converge to a shared optimum. We show that this can be done in a communication-efficient way, without adding further privacy risks, and that it scales with the number of nodes in the distributed setting. Our experiments show that this method outperforms competing ones for image recognition on the MNIST dataset.
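To make the idea concrete, the following is a minimal sketch of a penalized local objective of the kind the abstract describes: each node minimizes its own task loss plus a quadratic term pulling its parameters toward a shared anchor, with per-parameter importance weights playing the role of the EWC-style curvature terms the authors adapt from lifelong learning. The names `anchor_params`, `importance`, and `lam` are hypothetical; this is not the paper's exact algorithm, only an illustration consistent with the abstract.

```python
import torch

def penalized_loss(model, task_loss, anchor_params, importance, lam=1.0):
    """Sketch of a local objective with a quadratic consolidation penalty.

    task_loss: the node's ordinary training loss (a scalar tensor).
    anchor_params: parameters of the shared optimum to stay close to.
    importance: hypothetical per-parameter weights (EWC-style importance).
    lam: penalty strength (assumed hyperparameter).
    """
    penalty = torch.tensor(0.0)
    for p, a, w in zip(model.parameters(), anchor_params, importance):
        # Penalize deviation from the anchor, weighted per parameter.
        penalty = penalty + (w * (p - a) ** 2).sum()
    return task_loss + lam * penalty
```

Under this reading, each node would only need to exchange parameter vectors and importance estimates, so per-round communication stays proportional to the model size rather than to the data, consistent with the abstract's efficiency claim.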

Comments: Accepted to NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality
Categories: cs.LG, cs.CR, stat.ML
Related articles:
arXiv:1911.12560 [cs.LG] (Published 2019-11-28)
Free-riders in Federated Learning: Attacks and Defenses
arXiv:2009.06005 [cs.LG] (Published 2020-09-13)
FLaPS: Federated Learning and Privately Scaling
arXiv:1911.01812 [cs.LG] (Published 2019-11-05)
Enhancing the Privacy of Federated Learning with Sketching