Low-Cost Black-Box Detection of LLM Hallucinations via Dynamical System Prediction
arXiv:2605.05134v1
Abstract: Large Language Models (LLMs) frequently generate plausible but non-factual content, a phenomenon known as hallucination. Existing detection methods typically rely on computationally expensive sampl…