Anchored Confabulation: Partial Evidence Non-Monotonically Amplifies Confident Hallucination in LLMs
arXiv:2604.25931v1 Announce Type: new
Abstract: We identify a previously unknown calibration property of large language models: providing one confirmed intermediate fact toward a multi-step reasoning chain increases the model’s confident-wrong-answer …
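The feed truncates the abstract, but the metric it names, a confident-wrong-answer rate measured with versus without one confirmed intermediate fact, can be sketched. Below is a minimal, hypothetical Python harness illustrating how such a comparison might be run; the dataset shape, the query_model stub, and the 0.8 confidence threshold are my assumptions for illustration, not the paper's actual protocol.

# Hypothetical sketch: confident-wrong-answer rate with zero vs. one
# confirmed intermediate fact. Not the paper's method; all names and
# the 0.8 threshold are illustrative assumptions.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for "confident"

@dataclass
class Item:
    question: str           # multi-step question
    intermediate_fact: str  # one verified step of the reasoning chain
    gold_answer: str        # reference answer

def query_model(prompt: str) -> tuple[str, float]:
    """Stand-in for an LLM call; returns (answer, self-reported
    confidence). Replace with a real model call in practice."""
    return "stub answer", 0.9  # placeholder output

def confident_wrong_rate(items: list[Item], give_fact: bool) -> float:
    """Fraction of items answered both incorrectly and at or above the
    confidence threshold, with or without the anchoring fact prepended."""
    hits = 0
    for item in items:
        prompt = item.question
        if give_fact:
            prompt = f"Known fact: {item.intermediate_fact}\n{prompt}"
        answer, confidence = query_model(prompt)
        wrong = answer.strip().lower() != item.gold_answer.strip().lower()
        if wrong and confidence >= CONFIDENCE_THRESHOLD:
            hits += 1
    return hits / len(items) if items else 0.0

if __name__ == "__main__":
    data = [Item("Q: ...", "fact ...", "42")]  # toy placeholder item
    base = confident_wrong_rate(data, give_fact=False)
    anchored = confident_wrong_rate(data, give_fact=True)
    print(f"baseline: {base:.2f}  anchored: {anchored:.2f}")

Under the abstract's claim, the anchored rate would exceed the baseline rate; a real study would additionally vary how many intermediate facts are supplied to expose the non-monotonic effect the title describes.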