CALYREX: Cross-Attention LaYeR EXtended Transformers for System Prompt Anchoring
arXiv:2605.09737v1 Announce Type: new
Abstract: Modern large language models (LLMs) rely on system prompts to establish behavioral constraints and safety rules. Standard causal self-attention treats privileged instructions and untrusted user content w…
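The abstract is cut off, but the title and the surviving fragment suggest the core idea: augment a decoder with dedicated cross-attention layers whose keys and values come only from the system-prompt tokens, so privileged instructions stay anchored no matter what untrusted user content follows. Below is a minimal sketch of one such layer under that reading; all names (`SystemPromptCrossAttention`, `sys_hidden`) are hypothetical illustrations, not the paper's actual CALYREX architecture.

```python
# Hypothetical sketch based only on the title and truncated abstract;
# the real CALYREX design may differ.
import torch
import torch.nn as nn

class SystemPromptCrossAttention(nn.Module):
    """Cross-attention block: sequence hidden states (queries) attend to
    a cached encoding of the system prompt (keys/values)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, hidden: torch.Tensor, sys_hidden: torch.Tensor) -> torch.Tensor:
        # hidden:     (batch, seq_len, d_model) -- user/assistant token states
        # sys_hidden: (batch, sys_len, d_model) -- cached system-prompt states
        attn_out, _ = self.attn(query=hidden, key=sys_hidden, value=sys_hidden)
        # Residual connection keeps the base self-attention path intact;
        # this layer only *adds* a direct route to privileged instructions.
        return self.norm(hidden + attn_out)

if __name__ == "__main__":
    layer = SystemPromptCrossAttention(d_model=64, n_heads=4)
    sys_states = torch.randn(2, 10, 64)   # encoded system prompt
    user_states = torch.randn(2, 32, 64)  # current sequence
    print(layer(user_states, sys_states).shape)  # torch.Size([2, 32, 64])
```

The design point such a layer would address: in standard causal self-attention, system and user tokens compete in the same attention distribution, so a separate cross-attention path gives the model an instruction channel that user tokens cannot crowd out.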