cs.AI, cs.CV

Adversarial Prompt Injection Attack on Multimodal Large Language Models

arXiv:2603.29418v1 Announce Type: new
Abstract: Although multimodal large language models (MLLMs) are increasingly deployed in real-world applications, their instruction-following behavior leaves them vulnerable to prompt injection attacks. Existing p…