cs.CV

Adversarial Attacks Against MLLMs via Progressive Resolution Processing and Adaptive Feature Alignment

arXiv:2605.09902v1 Announce Type: new
Abstract: Adversarial perturbations can mislead Multimodal Large Language Models (MLLMs) into recognizing a benign image as a specific target object, posing serious risks in safety-critical scenarios such as autonomous d…