cs.AI, cs.CL, cs.CV

Decoding by Perturbation: Mitigating MLLM Hallucinations via Dynamic Textual Perturbation

arXiv:2604.12424v1 Announce Type: cross
Abstract: Multimodal Large Language Models (MLLMs) frequently suffer from hallucinations at inference time, partly stemming from language priors dominating visual evidence. Existing training-free mitigation methods either pe…
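The abstract is cut off before the method details, so the following is only a rough, unofficial sketch of what a contrastive decoding step with a perturbed textual prompt might look like, patterned on standard contrastive-decoding practice. The function name, the mixing weight `alpha`, and the contrast formula are my assumptions, not taken from the paper:

```python
import numpy as np

def contrastive_decode(logits_orig, logits_perturbed, alpha=1.0):
    """Illustrative contrastive step (not the paper's method).

    logits_orig: next-token logits from the model on the original prompt.
    logits_perturbed: logits on a textually perturbed prompt, intended to
        expose the language prior with weakened grounding in the image.

    Tokens whose score drops under perturbation (i.e., tokens supported by
    the visual evidence rather than the prior) are boosted; tokens the
    language prior alone favors are suppressed.
    """
    contrast = (1 + alpha) * np.asarray(logits_orig) - alpha * np.asarray(logits_perturbed)
    # softmax over the contrasted logits
    probs = np.exp(contrast - contrast.max())
    return probs / probs.sum()
```

For example, a token that scores highly both with and without the perturbation (prior-driven) loses mass relative to a token whose score depends on the original, grounded prompt.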