Learning Uncertainty from Sequential Internal Dispersion in Large Language Models
arXiv:2604.15741v1 Announce Type: cross
Abstract: Uncertainty estimation is a promising approach for detecting hallucinations in large language models (LLMs). Recent approaches commonly rely on a model's internal states to estimate uncertainty. However, the…