A Mimetic Detector for Adversarial Image Perturbations
arXiv:2605.11492v2 Announce Type: replace
Abstract: Adversarial attacks fool deep image classifiers by adding tiny, almost invisible noise patterns to a clean image. The standard $\ell^\infty$-bounded attacks (FGSM, PGD, and the $\ell^\infty$ variant …
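The $\ell^\infty$-bounded attacks named above all perturb each pixel by at most a budget $\epsilon$. As an illustration only (not the paper's detector), here is a minimal NumPy sketch of one-step FGSM, $x_{\mathrm{adv}} = \mathrm{clip}(x + \epsilon\,\mathrm{sign}(\nabla_x L))$, using a toy logistic "classifier" whose gradient is written by hand; the variable names and the linear model are assumptions for the sketch:

```python
import numpy as np

def fgsm(x, grad, eps):
    """One-step FGSM: move each pixel by eps in the sign of the loss gradient,
    then clip back to the valid pixel range [0, 1]."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy setup: a linear logit z = w.x with logistic loss -log(sigmoid(z))
# for a true label of 1. The gradient of this loss w.r.t. x is
# (sigmoid(z) - 1) * w, written here in a numerically direct form.
rng = np.random.default_rng(0)
x = rng.uniform(0.3, 0.7, size=16)   # a "clean image" as a flat vector
w = rng.normal(size=16)

z = w @ x
grad = -1.0 / (1.0 + np.exp(z)) * w  # = (sigmoid(z) - 1) * w

eps = 8 / 255                        # a common ell_inf budget for 8-bit images
x_adv = fgsm(x, grad, eps)

# The perturbation respects the ell_inf bound and increases the loss.
loss = lambda v: -np.log(1.0 / (1.0 + np.exp(-(w @ v))))
print(np.max(np.abs(x_adv - x)) <= eps + 1e-9, loss(x_adv) > loss(x))
```

PGD is the iterated version of the same step: repeat the signed-gradient update with a smaller step size and re-project onto the $\epsilon$-ball around the clean image after each iteration.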