Benchmarking World-Model Learning with Environment-Level Queries
arXiv:2510.19788v4 Announce Type: replace
Abstract: World models are central to building AI agents capable of flexible reasoning and planning. Yet current evaluations (i) test only properties measurable from observed interactions, such as next-frame prediction or task return, and (ii) do not test whether a learned model supports diverse queries about the environment. In contrast, humans build $\textit{general-purpose}$ models that can answer many different questions about an environment, including questions that require understanding global structure and counterfactual consequences.
We propose $\textit{WorldTest}$, a protocol for evaluating whether agents learn models that support multiple $\textit{environment-level queries}$: questions whose answers depend on properties of the full environment, not just observed trajectories. Individually, these queries can target properties (e.g., reachability or the effects of interventions) that no single rollout distribution determines. Collectively, they assess model generality across query types. We instantiate WorldTest as $\textit{AutumnBench}$, a benchmark of 43 interactive grid-world environments and 129 tasks across three query families, administered to both humans and learning agents. Experiments with 517 human participants and five frontier models show that humans substantially outperform these models, a gap we attribute to differences in exploration and belief updating. AutumnBench provides a framework for evaluating world-model learning in grid-world environments with environment-level queries, and WorldTest provides a template for extending such evaluations to richer domains.
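To make the protocol concrete, the following is a minimal sketch, assuming a two-phase structure in which an agent first explores an environment freely and then answers held-out environment-level queries (e.g., about reachability or intervention effects, the query types named above). The names `Query`, `WorldModelAgent`, and `run_worldtest` are illustrative assumptions, not the authors' AutumnBench API.

```python
# Hypothetical sketch of an environment-level query protocol; not the
# authors' actual AutumnBench interface.
from dataclasses import dataclass
from typing import Any, Protocol


@dataclass
class Query:
    """A question about the environment itself, not a single trajectory."""
    kind: str      # e.g., "reachability" or "intervention-effect" (assumed labels)
    payload: dict  # query-specific arguments (target state, intervention, ...)


class WorldModelAgent(Protocol):
    def explore(self, env: Any, budget: int) -> None:
        """Interact with the environment to build an internal model."""
        ...

    def answer(self, query: Query) -> Any:
        """Answer an environment-level query using the learned model."""
        ...


def run_worldtest(agent: WorldModelAgent, env: Any, queries: list[Query],
                  budget: int = 1000) -> float:
    """Two-phase protocol: free exploration, then held-out queries."""
    agent.explore(env, budget)                    # phase 1: build a model
    answers = [agent.answer(q) for q in queries]  # phase 2: answer queries
    # Scoring is benchmark-specific; for illustration we assume each query
    # carries a ground-truth answer in its payload.
    correct = sum(a == q.payload.get("answer") for a, q in zip(answers, queries))
    return correct / len(queries)
```

The key design point the sketch highlights is that the agent is scored on queries about the environment's global structure, not on reproducing the trajectories it happened to observe during exploration.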