Embedding of Low-Dimensional Sensory Dynamics in Recurrent Networks: Implications for the Geometry of Neural Representation
arXiv:2601.19019v2 Announce Type: replace-cross
Abstract: Neural population activity in sensory cortex is organized on low-dimensional manifolds, but why such manifolds arise and what determines their geometry remain unclear. We model cortical populations as recurrent circuits driven by low-dimensional regular sensory dynamics (circles, tori). Combining generalized synchronization and delay-embedding theory, we show that contracting recurrent networks generically develop smooth internal manifolds embedding the sensory dynamics. The dimensional requirement is modest: N > 2d suffices, where N is the network size and d is the intrinsic sensory dimension (compatible with the Whitney and Takens bounds). We prove a prediction-separation result linking representational geometry to predictive performance without assuming contraction: accurate prediction forces state separation up to a resolution set by the prediction error, yielding categorical boundaries, metameric equivalence, and discrimination thresholds. Numerical experiments with trained tanh RNNs recover ring- and torus-shaped hidden manifolds; state separation improves sharply at the 2d + 1 threshold. Training pushes networks beyond strict contraction, yet the embedding persists, indicating that contraction is sufficient but not necessary. These results provide a mechanistic account of why sensory manifolds emerge in recurrent circuits and how prediction constrains their resolution.
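The contraction mechanism the abstract describes can be sketched in a few lines of numpy. This is not the paper's trained networks but a minimal illustration under stated assumptions: a random tanh RNN whose recurrent weights are rescaled to spectral norm 0.5 (an illustrative choice that guarantees contraction), driven by a circle signal with d = 1 and hidden size N = 8 > 2d. Two different initial states converge onto the same input-driven trajectory (generalized synchronization), and the post-transient hidden states become a function of the circle phase alone, tracing a ring: equal phases give equal states, distant phases give separated states.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8  # hidden units; N > 2d with intrinsic sensory dimension d = 1 (a circle)

# Random recurrent weights rescaled so the spectral norm is 0.5 < 1, which
# (with |tanh'| <= 1) makes each update a contraction in the hidden state.
W = rng.standard_normal((N, N))
W *= 0.5 / np.linalg.norm(W, 2)
B = rng.standard_normal((N, 2))  # input weights for the (cos, sin) drive

def run(h0, thetas):
    """Drive the tanh RNN with the circle signal; return the hidden trajectory."""
    h, traj = h0, []
    for th in thetas:
        u = np.array([np.cos(th), np.sin(th)])
        h = np.tanh(W @ h + B @ u)
        traj.append(h)
    return np.array(traj)

dt = 2 * np.pi / 100           # 100 steps per revolution of the circle
thetas = dt * np.arange(4000)
traj_a = run(rng.standard_normal(N), thetas)
traj_b = run(rng.standard_normal(N), thetas)

# Generalized synchronization: two different initial conditions converge
# onto the same input-driven trajectory (the initial state is forgotten).
sync_gap = np.linalg.norm(traj_a[-1] - traj_b[-1])

# Post-transient states form a closed loop: states one full revolution
# apart (same phase) coincide, while states half a revolution apart
# (opposite phase) stay separated -- a ring parameterized by the phase.
H = traj_a[2000:]
same_phase_gap = np.linalg.norm(H[0] - H[100])  # one revolution apart
far_phase_gap = np.linalg.norm(H[0] - H[50])    # half a revolution apart
print(sync_gap, same_phase_gap, far_phase_gap)
```

Because the per-step contraction factor is at most 0.5, the influence of the initial condition and of early inputs decays geometrically, so both gaps that compare equal-phase states shrink to numerical zero, while opposite-phase states remain well separated.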