cs.AI, cs.CR

Bypassing Prompt Injection Detectors through Evasive Injections

arXiv:2602.00750v2 Announce Type: replace-cross
Abstract: Large language models (LLMs) are increasingly used in interactive and retrieval-augmented systems, but they remain vulnerable to prompt injection attacks, where injected secondary prompts force…