Adversarial Update-Based Federated Unlearning for Poisoned Model Recovery
arXiv:2605.02110v1 Announce Type: new
Abstract: Federated learning (FL) is vulnerable to poisoning attacks, where malicious clients upload manipulated updates to degrade the performance of the global model. Although detection methods can identify and …
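The poisoning setting the abstract describes — malicious clients uploading manipulated updates that degrade the aggregated global model — can be sketched with a toy example. This is an illustrative assumption, not the paper's method: plain FedAvg (mean aggregation) with a hypothetical sign-flipping, scaled malicious update.

```python
import numpy as np

def fed_avg(updates):
    """Plain FedAvg-style aggregation: unweighted mean of client updates."""
    return np.mean(updates, axis=0)

# Four honest clients push the global model in the same direction.
honest = [np.array([1.0, 1.0]) for _ in range(4)]

# Hypothetical poisoning attack (illustrative, not from the paper):
# one malicious client uploads a scaled, sign-flipped update.
poisoned = [-10.0 * np.array([1.0, 1.0])]

clean_agg = fed_avg(honest)            # -> array([1., 1.])
attacked_agg = fed_avg(honest + poisoned)  # -> array([-1.2, -1.2])
```

A single such client is enough to reverse the sign of the aggregate, which is why detection alone is not sufficient: the damage already folded into the global model still has to be removed, motivating unlearning-based recovery.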