RESFL: An Uncertainty-Aware Framework for Responsible Federated Learning by Balancing Privacy, Fairness and Utility
arXiv:2503.16251v2 Announce Type: replace
Abstract: Federated Learning (FL) has gained prominence in machine learning applications across critical domains by enabling collaborative model training without centralized data aggregation. However, FL frameworks that protect privacy often sacrifice fairness and reliability. Differential privacy can reduce data leakage, but it may also obscure sensitive attributes needed for bias correction, thereby worsening performance gaps across demographic groups. This work studies the privacy-fairness trade-off in FL-based object detection and introduces RESFL, an integrated framework that jointly improves both objectives. RESFL combines adversarial privacy disentanglement with uncertainty-guided fairness-aware aggregation. The adversarial component uses a gradient reversal layer to suppress sensitive-attribute information, reducing privacy risks while preserving fairness-relevant structure. The uncertainty-aware aggregation component uses an evidential neural network to adaptively weight client updates, prioritizing contributions with lower fairness disparities and higher confidence. This yields robust and equitable FL model updates. Experiments in high-stakes autonomous vehicle settings show that RESFL achieves high mAP on FACET and CARLA, lowers membership-inference attack success by 37%, narrows the equality-of-opportunity gap by 17% relative to the FedAvg baseline, and maintains stronger adversarial robustness. Although evaluated in autonomous driving, RESFL is domain-agnostic and can be applied to a broad range of other domains.
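The uncertainty-guided, fairness-aware aggregation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the specific weighting rule (confidence divided by one plus disparity), the function name `aggregate`, and all inputs are assumptions made for the example.

```python
def aggregate(updates, confidences, disparities):
    """Fairness- and confidence-weighted average of client updates.

    updates:      list of per-client parameter vectors (lists of floats)
    confidences:  per-client evidential confidence in [0, 1]
    disparities:  per-client fairness disparity (lower is better)

    Illustrative rule: clients with higher confidence and lower fairness
    disparity receive larger aggregation weights (an assumption, not the
    paper's exact scheme).
    """
    raw = [c / (1.0 + d) for c, d in zip(confidences, disparities)]
    total = sum(raw)
    weights = [r / total for r in raw]  # normalize to sum to 1
    dim = len(updates[0])
    # Weighted coordinate-wise average of the client updates.
    return [sum(w * u[i] for w, u in zip(weights, updates)) for i in range(dim)]

# Example: the more confident, fairer client dominates the aggregate.
agg = aggregate(
    updates=[[1.0, 0.0], [0.0, 1.0]],
    confidences=[0.9, 0.3],
    disparities=[0.1, 0.5],
)
```

With one-hot updates as above, each coordinate of `agg` equals the corresponding client's weight, so the first client's influence exceeds the second's.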