DFedReweighting: A Unified Framework for Objective-Oriented Reweighting in Decentralized Federated Learning
arXiv:2512.12022v2 Announce Type: replace
Abstract: Decentralized federated learning (DFL) has emerged as a promising paradigm that enables multiple clients to collaboratively train machine learning models through iterative rounds of local training, communication, and aggregation, without relying on a central server. Nevertheless, DFL systems continue to face a range of challenges, including fairness and Byzantine robustness. To address these challenges, we propose \textbf{DFedReweighting}, a unified aggregation framework that achieves diverse learning objectives in DFL via objective-oriented reweighting at the final step of each learning round. Specifically, for each client, the framework first evaluates a target performance metric (TPM) on a compact auxiliary dataset constructed from local data, yielding preliminary aggregation weights, which are subsequently refined by a customized reweighting strategy (CRS) to produce the final aggregation weights. Theoretically, we prove that an appropriate TPM-CRS combination guarantees linear convergence for general $L$-smooth and strongly convex functions. Empirical results consistently demonstrate that \textbf{DFedReweighting} significantly improves fairness and robustness against Byzantine attacks across diverse settings. Two multi-objective examples, spanning tasks across and within clients, further establish that a broad range of desired learning objectives can be accommodated by appropriately designing the TPM and CRS. Our code is available at https://github.com/KaichuangZhang/DFedReweighting.
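The two-stage reweighting the abstract describes (TPM scores on an auxiliary dataset, then a CRS that maps scores to normalized aggregation weights) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the dummy metric, and the softmax-style CRS are all assumptions chosen for clarity.

```python
# Hypothetical sketch of one client's aggregation round in DFedReweighting:
# 1) score each neighbor's model with a target performance metric (TPM)
#    on a small local auxiliary dataset, giving preliminary weights;
# 2) refine the scores with a customized reweighting strategy (CRS);
# 3) aggregate neighbor models with the final normalized weights.
# The softmax CRS and dummy metric below are illustrative, not from the paper.
import math

def tpm_scores(neighbor_models, auxiliary_data, metric):
    """Preliminary weights: evaluate each neighbor's model on auxiliary data."""
    return [metric(model, auxiliary_data) for model in neighbor_models]

def crs_reweight(scores, temperature=1.0):
    """One possible CRS: temperature-scaled softmax over the TPM scores."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(neighbor_models, weights):
    """Weighted average of (flattened) model parameter vectors."""
    dim = len(neighbor_models[0])
    return [sum(w * m[i] for w, m in zip(weights, neighbor_models))
            for i in range(dim)]

# Toy usage: 3 neighbors with 2-parameter models and a stand-in TPM.
models = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
dummy_metric = lambda model, data: sum(model)  # placeholder for e.g. accuracy
scores = tpm_scores(models, auxiliary_data=None, metric=dummy_metric)
weights = crs_reweight(scores, temperature=0.5)
new_model = aggregate(models, weights)  # equal scores -> uniform averaging
```

With equal TPM scores the softmax CRS reduces to uniform averaging; a Byzantine-robust CRS would instead suppress low-scoring (suspect) neighbors, and a fairness-oriented CRS would boost under-served ones.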