arXiv:2306.13264 [cs.LG]

FedSelect: Customized Selection of Parameters for Fine-Tuning during Personalized Federated Learning

Rishub Tamirisa, John Won, Chengjun Lu, Ron Arel, Andy Zhou

Published 2023-06-23, Version 1

Recent advancements in federated learning (FL) seek to increase client-level performance by fine-tuning client parameters on local data or personalizing architectures for the local task. Existing methods for such personalization either prune a global model or fine-tune a global model on a local client distribution. However, these methods either personalize at the expense of retaining important global knowledge, or predetermine which network layers to fine-tune, resulting in suboptimal storage of global knowledge within client models. Inspired by the lottery ticket hypothesis, we first introduce a hypothesis for finding optimal client subnetworks to fine-tune locally while leaving the remaining parameters frozen. We then propose FedSelect, a novel FL framework built on this procedure, which directly personalizes both client subnetwork structure and parameters by simultaneously discovering the parameters to personalize and the parameters to aggregate globally during training. We show that this method achieves promising results on CIFAR-10.
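The selection step described in the abstract can be pictured with a short sketch. The Python code below is not the authors' released implementation; it is a minimal illustration, assuming PyTorch, of one plausible lottery-ticket-style selection rule: parameters that moved most during local fine-tuning form the personalized subnetwork, while the rest are sourced from (and later aggregated into) the global model. The names select_mask, merge, and personal_fraction are hypothetical.

import torch

def select_mask(global_params, local_params, personal_fraction=0.1):
    # Rank parameters by how far local fine-tuning moved them from the
    # global model; the top fraction becomes the personalized subnetwork.
    deltas = torch.cat([(l - g).abs().flatten()
                        for g, l in zip(global_params, local_params)])
    k = max(1, int(personal_fraction * deltas.numel()))
    threshold = torch.topk(deltas, k).values.min()
    return [(l - g).abs() >= threshold
            for g, l in zip(global_params, local_params)]

def merge(global_params, local_params, masks):
    # Client model: masked entries stay local (personalized); the
    # unmasked entries are kept global and eligible for aggregation.
    return [torch.where(m, l, g)
            for g, l, m in zip(global_params, local_params, masks)]

# Toy usage: with personal_fraction=0.25 on a 4-entry tensor, only the
# largest delta (-0.9) is retained from the local model.
g = [torch.zeros(4)]
l = [torch.tensor([0.0, 0.5, -0.9, 0.1])]
client = merge(g, l, select_mask(g, l, personal_fraction=0.25))

Under this reading, each round a client fine-tunes locally, re-selects its mask, and transmits only the unmasked parameters to the server for averaging, so subnetwork structure and parameters are personalized jointly as the abstract describes.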

Comments: We are still expanding this work
Journal: International Workshop on Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities in Conjunction with ICML 2023
Categories: cs.LG, cs.AI
Related articles:
arXiv:2404.02478 [cs.LG] (Published 2024-04-03)
FedSelect: Personalized Federated Learning with Customized Selection of Parameters for Fine-Tuning
arXiv:2209.05148 [cs.LG] (Published 2022-09-12)
Personalized Federated Learning with Communication Compression
arXiv:2111.09360 [cs.LG] (Published 2021-11-17, updated 2022-03-17)
Personalized Federated Learning through Local Memorization