Detecting Hallucinations in SpeechLLMs at Inference Time Using Attention Maps
arXiv:2604.19565v1 Announce Type: cross
Abstract: Hallucinations in Speech Large Language Models (SpeechLLMs) pose significant risks, yet existing detection methods typically rely on gold-standard outputs that are costly or impractical to obtain. …
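The abstract is truncated, so the paper's concrete detector is not shown here. As a minimal sketch of one plausible attention-map signal, the snippet below scores each generated token by the Shannon entropy of its attention distribution over the source (audio) positions: diffuse, high-entropy attention is sometimes used in the literature as a proxy for hallucinated output. The function names, the entropy criterion, and the threshold are illustrative assumptions, not the method proposed in this paper.

```python
import numpy as np

def attention_entropy_scores(attn, eps=1e-12):
    """Per-token Shannon entropy of attention distributions.

    attn: array of shape (num_tokens, source_len); each row is one
    decoder token's attention over source positions and should sum
    to 1 (rows are renormalized defensively). Higher entropy means
    more diffuse attention -- used here, as an assumption, as a
    proxy signal for hallucination.
    """
    p = np.asarray(attn, dtype=float)
    p = p / p.sum(axis=-1, keepdims=True)
    return -(p * np.log(p + eps)).sum(axis=-1)

def flag_hallucinated_tokens(attn, threshold=1.0):
    """Flag tokens whose attention entropy exceeds a chosen threshold.

    The threshold is a hypothetical tuning parameter; a real system
    would calibrate it on held-out data.
    """
    return attention_entropy_scores(attn) > threshold
```

For example, a sharply peaked attention row scores near zero entropy, while a uniform row over `n` positions scores `log(n)`, so only the diffuse row would be flagged at a moderate threshold.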