Steering the Verifiability of Multimodal AI Hallucinations
arXiv:2604.06714v1 Announce Type: new
Abstract: AI applications driven by multimodal large language models (MLLMs) are prone to hallucinations, posing considerable risks to human users. Crucially, such hallucinations are not equally problematic: some…