arXiv:2111.14655 [cs.LG]

FedHM: Efficient Federated Learning for Heterogeneous Models via Low-rank Factorization

Dezhong Yao, Wanning Pan, Michael J. O'Neill, Yutong Dai, Yao Wan, Hai Jin, Lichao Sun

Published 2021-11-29, updated 2022-05-26 (version 2)

A common assumption in recent federated learning (FL) paradigms is that all local models share the same network architecture and size, which is impractical for devices with heterogeneous hardware resources. A scalable FL framework should account for clients that differ in computing capacity and communication capability. To this end, this paper proposes FedHM, a novel heterogeneous federated model compression framework that distributes heterogeneous low-rank models to clients and then aggregates them into a full-rank global model. This enables the training of models with varying computational complexity while still producing a single global model, and the low-rank parameterization substantially reduces communication cost. Extensive experimental results demonstrate that FedHM outperforms state-of-the-art heterogeneous FL methods in both accuracy and robustness across models of different sizes and under various FL settings. Additionally, we provide the first theoretical convergence analysis of FL over heterogeneous devices.
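To make the core idea concrete, the following is a minimal sketch of one round of the factorize-then-aggregate pattern the abstract describes, assuming truncated SVD as the low-rank factorization and plain weighted averaging of the reconstructed matrices at the server. The function names, per-client ranks, and toy dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def factorize(W, rank):
    # Truncated SVD: W (m x n) ~= U @ V with U (m x rank), V (rank x n).
    # A client holding (U, V) stores/sends rank*(m + n) values instead of m*n.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank, :]

def aggregate(client_factors, weights):
    # Server side: reconstruct each client's full-rank matrix from its
    # low-rank factors and take a weighted average to form the global model.
    total = sum(w * (U @ V) for (U, V), w in zip(client_factors, weights))
    return total / sum(weights)

# Toy round: one 64x32 "layer", three clients with capacity-dependent ranks.
rng = np.random.default_rng(0)
W_global = rng.standard_normal((64, 32))
ranks = [4, 8, 16]                       # hypothetical per-client budgets
factors = [factorize(W_global, r) for r in ranks]
# ... each client would locally train its (U, V) factors here ...
W_new = aggregate(factors, weights=[1.0, 1.0, 1.0])
```

The communication saving is visible in the shapes: at rank 4 a client exchanges 4*(64+32) = 384 values instead of the 2048 of the full matrix, and weaker devices simply receive a smaller rank.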

Related articles:
arXiv:1509.00061 [cs.LG] (Published 2015-08-31)
Value function approximation via low-rank models
arXiv:2404.13322 [cs.LG] (Published 2024-04-20)
MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities
arXiv:2103.16055 [cs.LG] (Published 2021-03-30)
1-Bit Compressive Sensing for Efficient Federated Learning Over the Air