cs.AI, cs.CL, cs.CR, cs.LG

LLM Ghostbusters: Surgical Hallucination Suppression via Adaptive Unlearning

arXiv:2605.01047v1 Announce Type: cross
Abstract: Hallucinations, i.e., outputs that sound plausible but are factually incorrect, remain an open challenge for deployed LLMs. In code generation, models frequently hallucinate non-existent software packages, r…