Language Models Learn Universal Representations of Numbers and Here’s Why You Should Care

arXiv:2510.26285v2 Announce Type: replace-cross Abstract: Prior work has shown that large language models (LLMs) often converge to accurate input embeddings for numbers, based on sinusoidal representations. In this work, we quantify how systematic these representations are, finding them to be almost perfectly universal: different LLM families develop equivalent sinusoidal structures, and number representations are broadly interchangeable across a large swathe of experimental setups. We show that properly accounting for this characteristic is crucial when assessing how accurately LLMs encode numeric and other ordinal information, and that mechanistically enhancing this sinusoidality can also reduce LLMs' arithmetic errors.
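As an illustration of the kind of structure the abstract describes, the sketch below probes synthetic number embeddings for sinusoidal components with a Fourier analysis. This is a hypothetical toy setup, not the paper's actual methodology: the embeddings are simulated with NumPy rather than extracted from an LLM, and the frequencies are chosen for the example.

```python
import numpy as np

# Hypothetical illustration (not the paper's method): if number embeddings
# contain sinusoidal components as a function of the number's value, an FFT
# taken along the number axis reveals a dominant frequency per dimension.

rng = np.random.default_rng(0)
numbers = np.arange(256)  # token values 0..255
# Toy frequencies (radians per unit number); periods 4, 8, ..., 128.
freqs = 2 * np.pi / np.array([4, 8, 16, 32, 64, 128])

# Simulate a (256, 6) embedding matrix: each dimension is a noisy sinusoid
# of the number value, mimicking the structure the abstract describes.
emb = np.stack([np.sin(f * numbers) for f in freqs], axis=1)
emb += 0.05 * rng.standard_normal(emb.shape)

# For each embedding dimension, locate the dominant FFT bin (skipping DC).
spectrum = np.abs(np.fft.rfft(emb, axis=0))
dominant_bins = spectrum[1:].argmax(axis=0) + 1
recovered = dominant_bins / len(numbers)  # cycles per unit number

expected = freqs / (2 * np.pi)
print(np.allclose(recovered, expected))
```

Under these assumptions, the FFT recovers the planted frequencies despite the noise; a probe in this spirit, applied to real embedding matrices, is one way to test whether two models share interchangeable sinusoidal structure.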
