Personalized Federated Learning for Gradient Alignment
arXiv:2605.02143v1
Abstract: Personalized federated learning (pFL) aims to adapt models to client-specific data distributions, yet it often fails to reliably preserve personalized information. Local training is hindered by high-variance gradients induced by limited, heterogeneous client data, while aggregation further distorts client-specific optimization directions. To address these challenges, we propose pFLAlign, a gradient-alignment framework that maintains client-specific information during both local training and aggregation. pFLAlign consists of two complementary mechanisms: one adapts local gradient directions to reduce variance during client-side optimization, and the other mitigates aggregation-induced distortion by realigning the global model with each client's personalized direction. Theoretically, we derive pFLAlign from a PAC-Bayesian analysis, which reveals how personalized gradient alignment preserves client-specific information. Experiments and ablation studies show that pFLAlign consistently improves personalization performance and training stability, achieving state-of-the-art results.
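The two mechanisms described above could be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the abstract does not specify the update rules, so the sketch assumes an exponential-moving-average blend for the local variance-reduction step and a PCGrad-style conflict projection for the aggregation-side realignment. All function names and parameters (`align_local_gradient`, `realign_global_update`, `beta`) are hypothetical.

```python
import numpy as np

def align_local_gradient(grad, ema_dir, beta=0.9):
    """Local-side sketch: blend the raw client gradient with an exponential
    moving average of past gradients to damp the high variance caused by
    small, heterogeneous client datasets (assumed rule, not from the paper)."""
    ema_dir = beta * ema_dir + (1.0 - beta) * grad
    return ema_dir, ema_dir  # aligned gradient, updated running direction

def realign_global_update(global_update, personal_dir):
    """Aggregation-side sketch: if the aggregated (global) update conflicts
    with a client's personalized direction (negative inner product), project
    out the conflicting component. This PCGrad-style projection is only an
    illustrative stand-in for pFLAlign's realignment step."""
    dot = float(global_update @ personal_dir)
    if dot < 0.0:
        global_update = global_update - (dot / float(personal_dir @ personal_dir)) * personal_dir
    return global_update
```

For example, a global update of `[1, -1]` conflicting with a client direction `[0, 1]` is realigned to `[1, 0]`, keeping the non-conflicting component while removing the part that would overwrite the client's personalized progress.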