Aligning with Your Own Voice: Self-Corrected Preference Learning for Hallucination Mitigation in LVLMs
arXiv:2604.24395v1 Announce Type: new
Abstract: Large Vision-Language Models (LVLMs) frequently suffer from hallucinations. Existing preference-learning-based approaches largely rely on proprietary models to construct preference datasets. We identify …