cs.AI, cs.CV, cs.LG

Attention-space Contrastive Guidance for Efficient Hallucination Mitigation in LVLMs

arXiv:2601.13707v2 Announce Type: replace-cross
Abstract: Hallucinations in large vision–language models (LVLMs) often arise when language priors dominate visual evidence, leading to object misidentification and visually inconsistent description…
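The core intuition (stronger reliance on visual evidence relative to language priors) can be illustrated with the generic contrastive-decoding formulation, where logits conditioned on the image are amplified against logits from a text-only pass. This is a minimal sketch of that general idea only, not the paper's attention-space method; all names, values, and the `alpha` parameter here are illustrative assumptions.

```python
import numpy as np

def contrastive_guidance(logits_visual, logits_prior, alpha=0.5):
    # Generic contrastive decoding: push the distribution toward tokens
    # supported by the image and away from language-prior-only tokens.
    # alpha controls guidance strength (alpha=0 recovers standard decoding).
    return (1.0 + alpha) * logits_visual - alpha * logits_prior

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy next-token logits over a 3-token vocabulary (illustrative numbers):
logits_visual = np.array([2.0, 1.0, 0.5])  # conditioned on the image
logits_prior  = np.array([0.5, 2.5, 0.5])  # text-only (language prior)

guided = contrastive_guidance(logits_visual, logits_prior, alpha=0.5)
probs = softmax(guided)
# Token 1 is favored by the language prior alone; contrastive guidance
# suppresses it relative to the visually grounded token 0.
```

In this toy setup the prior-favored token is down-weighted while the image-supported token keeps the highest guided logit, matching the abstract's goal of curbing prior-driven hallucinations.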