arXiv:2312.13602 [cs.LG]

Peer-to-Peer Learning + Consensus with Non-IID Data

Srinivasa Pranav, José M. F. Moura

Published 2023-12-21 (Version 1)

Peer-to-peer deep learning algorithms enable distributed edge devices to collaboratively train deep neural networks without exchanging raw training data or relying on a central server. Peer-to-Peer Learning (P2PL) and other algorithms based on Distributed Local-Update Stochastic/mini-batch Gradient Descent (local DSGD) interleave epochs of local training with distributed consensus steps. This process leads to model parameter drift/divergence across participating devices in both IID and non-IID settings. We observe that this drift produces significant oscillations in test performance when models are evaluated after the local training and consensus phases. We then identify factors that amplify these oscillations and demonstrate that our novel approach, P2PL with Affinity, dampens test performance oscillations in non-IID settings without incurring any additional communication cost.
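
The local-training/consensus interleaving the abstract describes can be sketched compactly. Below is a minimal illustration in Python/NumPy, assuming a linear model as a stand-in for a deep network and a row-stochastic mixing matrix W supported on the device communication graph; the function names (grad_mse, local_sgd, consensus_step, p2p_train) and the training loop are illustrative assumptions, not the paper's implementation, and the affinity-based damping of P2PL with Affinity is not reproduced because the abstract does not specify its form.

import numpy as np

def grad_mse(theta, x, y):
    # Gradient of the squared loss 0.5 * (x @ theta - y)**2 for a
    # linear model, used as a stand-in for a deep network's gradient.
    return (x @ theta - y) * x

def local_sgd(theta, data, lr=0.01, epochs=1):
    # One device's local training phase: `epochs` passes of SGD over
    # that device's own (possibly non-IID) data.
    for _ in range(epochs):
        for x, y in data:
            theta = theta - lr * grad_mse(theta, x, y)
    return theta

def consensus_step(thetas, W):
    # Distributed averaging: device i updates
    #     theta_i <- sum_j W[i, j] * theta_j,
    # where row i of W is nonzero only on device i's neighbors.
    return list(W @ np.stack(thetas))

def p2p_train(thetas, datasets, W, rounds=100):
    # Interleave local training epochs with consensus steps, as in
    # local DSGD and P2PL.
    for _ in range(rounds):
        thetas = [local_sgd(t, d) for t, d in zip(thetas, datasets)]
        thetas = consensus_step(thetas, W)
    return thetas

In this sketch, heterogeneous datasets pull each theta_i toward its device's local optimum during local_sgd, and consensus_step pulls the models back together; the gap between these two evaluation points is the source of the drift and test-performance oscillations the abstract describes.
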

Comments: Asilomar Conference on Signals, Systems, and Computers 2023 Camera-Ready Version
Categories: cs.LG, cs.DC
Related articles:
arXiv:2103.15947 [cs.LG] (Published 2021-03-29)
Federated Learning with Taskonomy for Non-IID Data
arXiv:1905.07210 [cs.LG] (Published 2019-05-17)
Hybrid-FL: Cooperative Learning Mechanism Using Non-IID Data in Wireless Networks
arXiv:2109.02396 [cs.LG] (Published 2021-09-06)
Byzantine-Robust Federated Learning via Credibility Assessment on Non-IID Data