Laundering AI Authority with Adversarial Examples
arXiv:2605.04261v1 Announce Type: cross
Abstract: Vision-language models (VLMs) are increasingly deployed as trusted authorities — fact-checking images on social media, comparing products, and moderating content. Users implicitly trust that these sys…
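The attack surface the abstract describes rests on a standard adversarial-example mechanism: a small, targeted perturbation shifts a model's score without visibly changing the input. A minimal sketch of that mechanism, using a toy logistic classifier as a stand-in for a VLM's image scorer (the model, weights, and step size are all illustrative, not from the paper):

```python
import numpy as np

# Toy stand-in for a VLM's image scorer: a logistic "authentic vs. not"
# classifier with fixed, known weights (white-box setting).
rng = np.random.default_rng(0)
w = rng.normal(size=64)
b = 0.0

def score(x):
    """Probability the model assigns to the 'authentic' label."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# An input the model initially scores as inauthentic.
x = -0.1 * w / np.linalg.norm(w) + rng.normal(scale=0.01, size=64)

# FGSM-style perturbation: one signed-gradient step of size eps,
# pushing the score toward the 'authentic' label while keeping the
# per-feature change bounded by eps.
eps = 0.05
grad = w * score(x) * (1.0 - score(x))  # d(score)/dx for the logistic model
x_adv = x + eps * np.sign(grad)

print(score(x), score(x_adv))  # the perturbed input scores as more authentic
```

The point of the sketch is only the asymmetry the abstract exploits: the perturbation is bounded by `eps` per feature, yet the model's verdict flips, which is what lets an attacker launder a false claim through a trusted model's output.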