Beyond Context: Large Language Models’ Failure to Grasp Users’ Intent
arXiv:2512.21110v3 Announce Type: replace
Abstract: Current Large Language Model (LLM) safety approaches focus on explicitly harmful content while overlooking a critical vulnerability: the inability to understand context and recognize user intent. T…