arXiv:2007.09208 [cs.LG]

Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise

Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen

Published 2020-07-17 (Version 1)

The feasibility of federated learning is highly constrained by the server-client infrastructure in terms of network communication. Most newly launched smartphones and IoT devices are equipped with GPUs or sufficient computing hardware to run powerful AI models. However, in the original synchronous federated learning, client devices suffer waiting times, and regular communication between clients and the server is required. This makes the system more sensitive to local model training times and to irregular or missed updates; hence, scalability to large numbers of clients is limited and convergence rates measured in real time suffer. We propose a new algorithm for asynchronous federated learning which eliminates waiting times and reduces overall network communication; we provide rigorous theoretical analysis for strongly convex objective functions and provide simulation results. By adding Gaussian noise, we show how our algorithm can be made differentially private: new theorems show how the aggregated added Gaussian noise is significantly reduced.
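To make the abstract's ideas concrete, here is a minimal sketch (not the paper's actual algorithm) of asynchronous federated learning with Gaussian-noise differential privacy: each client trains locally at its own pace and pushes a clipped, noised model delta that the server applies immediately, with no synchronization barrier. The squared-error loss, clipping threshold, noise scale sigma, and all helper names below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(update, clip_norm):
    """Clip the update to bound its L2 sensitivity (assumed mechanism)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def dp_noise(shape, clip_norm, sigma):
    """Gaussian-mechanism noise scaled to the clipped sensitivity."""
    return rng.normal(0.0, sigma * clip_norm, size=shape)

class Server:
    def __init__(self, dim):
        self.w = np.zeros(dim)  # global model

    def apply(self, delta):
        # Asynchronous: apply each client's delta as soon as it arrives,
        # never waiting for stragglers.
        self.w += delta

def client_round(server, data_x, data_y, lr=0.1, local_steps=50,
                 clip_norm=1.0, sigma=0.5):
    w = server.w.copy()  # pull the current global model
    for _ in range(local_steps):  # many local SGD steps per communication
        i = rng.integers(len(data_x))
        grad = (data_x[i] @ w - data_y[i]) * data_x[i]  # least-squares grad
        w -= lr * grad
    delta = clip(w - server.w, clip_norm)
    # Noise is added once per (infrequent) communication round, so the
    # total injected noise grows with rounds, not with local steps.
    server.apply(delta + dp_noise(delta.shape, clip_norm, sigma))

# Toy usage: two clients with different speeds updating one server.
dim = 5
server = Server(dim)
X = rng.normal(size=(100, dim))
w_true = rng.normal(size=dim)
y = X @ w_true
for _ in range(20):
    client_round(server, X[:50], y[:50])   # "fast" client, many rounds
for _ in range(5):
    client_round(server, X[50:], y[50:])   # "slow" client, fewer rounds
print("distance to target:", np.linalg.norm(server.w - w_true))
```

The sketch reflects the trade-off the abstract points at: with fewer, larger communication rounds, Gaussian noise is injected less often, so the aggregated noise in the global model is smaller for a comparable privacy budget.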

Related articles:
arXiv:1912.07902 [cs.LG] (Published 2019-12-17)
Asynchronous Federated Learning with Differential Privacy for Edge Intelligence
arXiv:1901.09136 [cs.LG] (Published 2019-01-26)
Graphical-model based estimation and inference for differential privacy
arXiv:1905.12101 [cs.LG] (Published 2019-05-28)
Differential Privacy Has Disparate Impact on Model Accuracy