{ "id": "2312.13602", "version": "v1", "published": "2023-12-21T06:28:02.000Z", "updated": "2023-12-21T06:28:02.000Z", "title": "Peer-to-Peer Learning + Consensus with Non-IID Data", "authors": [ "Srinivasa Pranav", "José M. F. Moura" ], "comment": "Asilomar Conference on Signals, Systems, and Computers 2023 Camera-Ready Version", "categories": [ "cs.LG", "cs.DC" ], "abstract": "Peer-to-peer deep learning algorithms are enabling distributed edge devices to collaboratively train deep neural networks without exchanging raw training data or relying on a central server. Peer-to-Peer Learning (P2PL) and other algorithms based on Distributed Local-Update Stochastic/mini-batch Gradient Descent (local DSGD) rely on interleaving epochs of training with distributed consensus steps. This process leads to model parameter drift/divergence amongst participating devices in both IID and non-IID settings. We observe that model drift results in significant oscillations in test performance evaluated after local training and consensus phases. We then identify factors that amplify performance oscillations and demonstrate that our novel approach, P2PL with Affinity, dampens test performance oscillations in non-IID settings without incurring any additional communication cost.", "revisions": [ { "version": "v1", "updated": "2023-12-21T06:28:02.000Z" } ], "analyses": { "keywords": [ "non-iid data", "peer-to-peer learning", "dampens test performance oscillations", "distributed local-update stochastic/mini-batch gradient descent", "non-iid settings" ], "tags": [ "conference paper" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }