FedRef: Bayesian Fine-Tuning using a Reference Model to Mitigate Catastrophic Forgetting for Heterogeneous Federated Learning

arXiv:2506.23210v5 (replace-cross)

Abstract: Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy. However, data and system heterogeneity often cause catastrophic forgetting and unbounded drift in model updates, leading to degraded predictive performance and increased client-side computation. To address these challenges, we propose FedRef, a Bayesian fine-tuning method that leverages a reference model constructed from previous global models. FedRef integrates a MAP-based regularization term that calibrates global model updates toward a temporally aggregated reference model, thereby mitigating catastrophic forgetting and improving update stability. Unlike prior approaches, FedRef performs all fine-tuning operations on the server side, reducing client-side computational overhead while maintaining effective global optimization. Experiments on image classification (FEMNIST, CINIC-10) and medical image segmentation (FeTS2022) demonstrate that FedRef achieves superior predictive performance and faster convergence under heterogeneous, non-IID settings, while significantly lowering client-side computation compared with existing methods. These results highlight FedRef as an efficient and robust optimization framework for real-world FL scenarios.
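The abstract describes FedRef only at a high level, so the sketch below is a minimal, hypothetical illustration of the server-side round it outlines: FedAvg aggregation, a temporally aggregated reference model (here an exponential moving average of past global models), and a calibration step pulling the aggregate toward the reference. The function names (`fedavg`, `update_reference`, `map_calibrate`) and hyperparameters (`beta`, `lam`) are assumptions, not the paper's API, and the quadratic proximal term stands in for the MAP-based regularizer whose exact form the abstract does not give.

```python
# Hypothetical sketch of a FedRef-style server round. Assumptions (not from
# the paper): FedAvg aggregation, an EMA of past global models as the
# "reference model", and an L2 proximal pull as a stand-in for the MAP
# regularizer. All fine-tuning happens on the server, as the abstract states.
from collections import OrderedDict
import torch

def fedavg(client_states, weights):
    """Weighted average of client state dicts (standard FedAvg)."""
    total = sum(weights)
    agg = OrderedDict()
    for name in client_states[0]:
        agg[name] = sum(w * s[name] for s, w in zip(client_states, weights)) / total
    return agg

def update_reference(ref_state, global_state, beta=0.9):
    """Temporally aggregate previous global models into a reference model
    via an exponential moving average (one plausible construction)."""
    return OrderedDict(
        (k, beta * ref_state[k] + (1.0 - beta) * global_state[k])
        for k in ref_state
    )

def map_calibrate(agg_state, ref_state, lam=0.1):
    """Server-side calibration toward the reference model.

    Closed-form minimizer of the stand-in objective
        1/2 * ||theta - theta_agg||^2 + lam/2 * ||theta - theta_ref||^2,
    which shrinks the aggregated model toward the reference; larger `lam`
    means a stronger pull (illustrative of the MAP-based regularization).
    """
    return OrderedDict(
        (k, (agg_state[k] + lam * ref_state[k]) / (1.0 + lam))
        for k in agg_state
    )

if __name__ == "__main__":
    # Toy usage: three "clients" with 2-parameter models.
    clients = [OrderedDict(w=torch.randn(2)) for _ in range(3)]
    weights = [100, 50, 50]                      # e.g. local dataset sizes
    ref = OrderedDict(w=torch.zeros(2))          # initial reference model
    for _ in range(5):                           # simulated FL rounds
        agg = fedavg(clients, weights)
        global_state = map_calibrate(agg, ref, lam=0.1)
        ref = update_reference(ref, global_state, beta=0.9)
```

Because the calibration and the reference-model bookkeeping touch only server-held parameters, clients run plain local training, which is consistent with the abstract's claim of reduced client-side computation.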
