arXiv:2410.00242 [cs.LG]

Quantized and Asynchronous Federated Learning

Tomas Ortega, Hamid Jafarkhani

Published 2024-09-30 (Version 1)

Recent advances in federated learning have shown that asynchronous variants can be faster and more scalable than their synchronous counterparts. However, their design does not include quantization, which is necessary in practice to deal with the communication bottleneck. To bridge this gap, we develop a novel algorithm, Quantized Asynchronous Federated Learning (QAFeL), which introduces a hidden-state quantization scheme to avoid the error propagation caused by direct quantization. QAFeL also includes a buffer to aggregate client updates, ensuring scalability and compatibility with techniques such as secure aggregation. Furthermore, we prove that QAFeL achieves an $\mathcal{O}(1/\sqrt{T})$ ergodic convergence rate for stochastic gradient descent on non-convex objectives, which is the optimal order of complexity, without requiring bounded gradients or uniform client arrivals. We also prove that the cross-term error between staleness and quantization only affects the higher-order error terms. We validate our theoretical findings on standard benchmarks.
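As a rough illustration of how the hidden-state quantization and buffered aggregation described above fit together, the following is a minimal single-process sketch in Python. The names (QAFeLServerSketch, client_step, quantize), the toy uniform quantizer, and all hyperparameters are assumptions made for illustration; this is not the authors' reference implementation.

```python
import numpy as np


def quantize(v, num_levels=16):
    """Toy uniform quantizer standing in for the generic compressor assumed in
    the paper's analysis; this particular scheme is illustrative only."""
    scale = np.max(np.abs(v)) + 1e-12
    return np.round(v / scale * num_levels) / num_levels * scale


class QAFeLServerSketch:
    """Sketch of hidden-state quantization with buffered aggregation."""

    def __init__(self, dim, buffer_size=4, server_lr=1.0):
        self.x = np.zeros(dim)          # server model
        self.h = np.zeros(dim)          # hidden state, mirrored by every client
        self.buffer = []                # aggregates asynchronous client updates
        self.buffer_size = buffer_size
        self.server_lr = server_lr

    def broadcast(self):
        # Send only a quantized correction toward the current model and move the
        # shared hidden state by the same amount, so quantization error is
        # corrected in later rounds instead of propagating.
        q = quantize(self.x - self.h)
        self.h += q
        return q

    def receive(self, quantized_delta):
        # Client updates are buffered and applied only when the buffer is full,
        # which keeps the method scalable and compatible with secure aggregation.
        self.buffer.append(quantized_delta)
        if len(self.buffer) == self.buffer_size:
            self.x -= self.server_lr * np.mean(self.buffer, axis=0)
            self.buffer.clear()


def client_step(h_client, server_msg, grad_fn, local_steps=5, lr=0.1):
    """Illustrative client: refresh the local hidden-state copy, run local SGD
    from it, and return the updated copy plus a quantized model delta."""
    h_client = h_client + server_msg      # stays in sync with the server's h
    y = h_client.copy()
    for _ in range(local_steps):
        y -= lr * grad_fn(y)
    return h_client, quantize(h_client - y)


# Toy sequential usage on a quadratic objective (gradient of 0.5*||y - 1||^2 is
# y - 1); real clients would run concurrently and arrive asynchronously.
server = QAFeLServerSketch(dim=10)
h_shared = np.zeros(10)                   # every client's copy of the hidden state
for _ in range(200):
    msg = server.broadcast()              # quantized correction sent to clients
    h_shared, delta = client_step(h_shared, msg, grad_fn=lambda y: y - 1.0)
    server.receive(delta)                 # an arriving client's buffered update
```

In this sketch the server never transmits the full model: both sides track the same hidden state, so only a quantized correction crosses the channel each round, and one round's quantization error is corrected by the next rather than compounding. For context, an ergodic $\mathcal{O}(1/\sqrt{T})$ rate means the average over $T$ iterations of the expected squared gradient norm decays at that order, which matches the best known rate for SGD on non-convex objectives.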

Related articles:
arXiv:2308.00263 [cs.LG] (Published 2023-08-01)
Asynchronous Federated Learning with Bidirectional Quantized Communications and Buffered Aggregation
arXiv:1506.03662 [cs.LG] (Published 2015-06-11)
Neighborhood Watch: Stochastic Gradient Descent with Neighbors
arXiv:1411.1134 [cs.LG] (Published 2014-11-05)
Global Convergence of Stochastic Gradient Descent for Some Nonconvex Matrix Problems