When Agents Say One Thing and Do Another: Validating Elicited Beliefs from LLMs

arXiv:2602.06286v2 Announce Type: replace

Abstract: Large language models (LLMs) are increasingly deployed in high-stakes settings where good decisions require forming beliefs over the probability of unknown outcomes. However, it is unclear whether LLMs act as if they hold coherent beliefs when making decisions, or, if so, how we could validate models' reports of such beliefs. We propose a decision-theoretic framework that elicits both probability judgments and decisions from an agent and tests their mutual consistency. Formally, our methods characterize whether the actions could have been produced by a "near-rational" decision maker who holds the elicited probability as their true belief. We show that, perhaps surprisingly, this formalization implies empirically testable conditions even without any assumption about the agent's utility function. Applying our framework to stylized clinical diagnosis tasks, we find that models' reported beliefs are demonstrably imperfect summaries of the information revealed in their decisions, but that the discrepancies are small for the strongest models.
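To make the idea concrete, here is a minimal illustrative sketch (not the paper's actual method) of one way a consistency test like this could look. It assumes a family of hypothetical threshold-bet decision problems, where a rational agent acts if and only if its true belief exceeds a known threshold; the function names and the interval construction are our own, not from the paper.

```python
# Illustrative sketch, NOT the paper's implementation.
# Assumption: each decision problem asks the agent to "act" iff its
# belief exceeds a known threshold t. Under (near-)rationality, every
# accepted threshold must lie below every rejected one, which pins the
# agent's belief to an interval; we then measure how far the agent's
# *reported* probability sits from that interval.

def implied_belief_interval(decisions):
    """decisions: list of (threshold, acted) pairs.

    Returns (lo, hi): the set of beliefs consistent with all decisions,
    assuming the agent acts iff belief > threshold. If lo > hi, the
    decisions themselves are internally inconsistent.
    """
    lo = max([t for t, acted in decisions if acted], default=0.0)
    hi = min([t for t, acted in decisions if not acted], default=1.0)
    return lo, hi

def belief_discrepancy(reported_p, decisions):
    """Smallest shift of the reported probability needed to
    rationalize the observed decisions (0.0 = fully consistent)."""
    lo, hi = implied_belief_interval(decisions)
    if lo <= reported_p <= hi:
        return 0.0
    return min(abs(reported_p - lo), abs(reported_p - hi))
```

For example, an agent that acts at thresholds 0.2 and 0.4 but declines at 0.7 implies a belief in (0.4, 0.7); a reported probability of 0.5 is then perfectly consistent (discrepancy 0.0), while a report of 0.9 carries a discrepancy of 0.2. The paper's framework is more general (it avoids utility assumptions entirely), but this captures the flavor of testing whether stated probabilities and revealed decisions can coexist in one near-rational agent.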
