cs.AI, cs.LG

LLM-VA: Resolving the Jailbreak-Overrefusal Trade-off via Vector Alignment

arXiv:2601.19487v2 Announce Type: replace-cross
Abstract: Safety-aligned LLMs suffer from two failure modes: jailbreak (answering harmful inputs) and over-refusal (declining benign queries). Existing vector steering methods adjust the magnitude of ans…
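The magnitude trade-off the abstract describes can be made concrete with a toy activation-steering sketch. This is not the paper's LLM-VA method; the refusal direction `r`, the coefficient `alpha`, and the function `steer` are hypothetical names for illustration only: a hidden state is shifted along a fixed unit direction, and magnitude-only steering amounts to tuning the single scalar `alpha` for all inputs at once.

```python
import numpy as np

# Toy illustration of vector (activation) steering, NOT the paper's method:
# a hidden state h is shifted along a unit "refusal direction" r by a
# scalar coefficient alpha. Magnitude-only steering tunes alpha globally,
# so one value must serve both harmful and benign inputs -- the source of
# the jailbreak/over-refusal trade-off.

rng = np.random.default_rng(0)
d = 8                                   # toy hidden-state dimension
r = rng.normal(size=d)
r /= np.linalg.norm(r)                  # unit-norm refusal direction

def steer(h: np.ndarray, alpha: float) -> np.ndarray:
    """Shift a hidden state along the refusal direction by alpha."""
    return h + alpha * r

h = rng.normal(size=d)
h_steered = steer(h, alpha=2.0)

# The component of h along r grows by exactly alpha; everything
# orthogonal to r is untouched.
assert np.isclose(float(h_steered @ r) - float(h @ r), 2.0)
```

Because `alpha` is a single scalar shared across inputs, raising it to block harmful prompts also pushes benign prompts toward refusal, which is the tension the abstract says existing magnitude-based steering methods face.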