The Truth Lies Somewhere in the Middle (of the Generated Tokens)

arXiv:2605.09969v1 (cross-listing)

Abstract: How should hidden states generated autoregressively be collapsed into a representation that reflects a language model's internal state? Despite tokens being generated under causal masking, we find that mean pooling across their hidden states yields more semantic representations than any individual token alone. We quantify this through kernel alignment to reference spaces in language, vision, and protein domains. The improvement through mean pooling is consistent with information being distributed across generated tokens rather than localized to a single position. Furthermore, representations derived from generated tokens outperform those from prompt tokens, and alignment across generation reveals interpretable dynamics in model behavior.
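The two ingredients described in the abstract, mean pooling over generated-token hidden states and kernel alignment to a reference space, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses linear CKA (centered kernel alignment) as the alignment measure, which is one common choice of kernel alignment, and NumPy arrays standing in for hidden states; the function names are mine.

```python
import numpy as np

def mean_pool(hidden_states, mask=None):
    """Collapse per-token hidden states (seq_len, d) into one vector.

    If a binary mask is given (e.g. to select only generated tokens),
    only masked positions contribute to the mean.
    """
    if mask is None:
        return hidden_states.mean(axis=0)
    m = np.asarray(mask, dtype=float)[:, None]
    return (hidden_states * m).sum(axis=0) / m.sum()

def linear_cka(X, Y):
    """Linear CKA between two representation matrices of the same
    n samples: X is (n, d1), Y is (n, d2). Returns a value in [0, 1];
    1 means the representations induce identical (centered) kernels.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (
        np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    )

# Example: compare mean-pooled vs. last-token representations
# against a (here random, hypothetical) reference space.
rng = np.random.default_rng(0)
n_samples, seq_len, d = 100, 20, 64
hidden = rng.normal(size=(n_samples, seq_len, d))   # per-sample hidden states
reference = rng.normal(size=(n_samples, 32))        # reference embeddings

pooled = np.stack([mean_pool(h) for h in hidden])   # (n_samples, d)
last_token = hidden[:, -1, :]                       # (n_samples, d)

score_pooled = linear_cka(pooled, reference)
score_last = linear_cka(last_token, reference)
```

Linear CKA is invariant to orthogonal transformations and isotropic scaling of either representation, which is why it is a standard tool for comparing embedding spaces of different dimensionality, as in the language/vision/protein comparisons the abstract describes.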
