cs.CL

Understanding New-Knowledge-Induced Factual Hallucinations in LLMs: Analysis and Interpretation

arXiv:2511.02626v3 Announce Type: replace
Abstract: Prior work has shown that fine-tuning on new knowledge can induce factual hallucinations in large language models (LLMs), leading to incorrect outputs when the models are evaluated on previously known information…