cs.CL

MARCH: Multi-Agent Reinforced Self-Check for LLM Hallucination

arXiv:2603.24579v1 Announce Type: new
Abstract: Hallucination remains a critical bottleneck for large language models (LLMs), undermining their reliability in real-world applications, especially in Retrieval-Augmented Generation (RAG) systems. While e…
