When Does Content-Based Routing Work? Representation Requirements for Selective Attention in Hybrid Sequence Models

arXiv:2603.20997v2 Announce Type: replace

Abstract: We identify a routing paradox in hybrid sequence models: content-based routing (deciding which tokens deserve expensive attention) requires pairwise computation, and this requirement is inescapable. Through 20+ controlled experiments across three tasks, multiple scales (200K to 1.4B parameters), and 15+ routing mechanisms, we map the routing landscape exhaustively. Every system that achieves high routing precision does so through pairwise token comparison. Every mechanism that avoids pairwise computation fails: recurrent models (Mamba-1.4B: 29%), memory banks (12%), bandits (0.7-3.6%), contrastive pretraining (1.6%), and 12 other approaches all cluster at 1-29%. Routing needs two ingredients: (1) per-token representations with bidirectional context and (2) pairwise token comparison. Bidirectional Mamba (O(n)) + pairwise comparison achieves 99.5%; replacing the full pairwise router with a rank-1 projection improves this to 99.7%. Adding one bidirectional layer to frozen Pythia-1B recovers 99.4% routing. Six different O(n) preprocessing mechanisms (bidirectional Mamba, Perceiver inducing points, causal attention with E2E training, sparse attention, bidirectional attention, rank-1 projection) all succeed; global mean pooling (1.9%) and Fourier mixing (0.9%) fail. The routing signal occupies a ~34-dimensional latent subspace, invisible to cosine similarity. Non-learned indices (Bloom filter: 90.9%; BM25: 82.7%) bypass the bottleneck for exact/keyword matching. Combining O(n) bidirectional Mamba with a rank-1 pairwise projection yields 99.7% routing at linear inference cost.
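As a rough illustration of the recipe the abstract describes (an O(n) bidirectional preprocessor feeding a rank-1 pairwise router), here is a minimal sketch. It is not the authors' code: the projection vectors `u`, `v`, the outer-product scoring rule, and the top-k selection are all illustrative assumptions about how "rank-1 pairwise projection" could be realized.

```python
# Sketch: rank-1 pairwise routing over per-token representations H that are
# assumed to come from an O(n) bidirectional preprocessor (e.g. bidirectional
# Mamba). All names and choices here are hypothetical, not the paper's method.
import numpy as np

def rank1_routing_scores(H: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Pairwise routing scores S[i, j] = (h_i . u) * (h_j . v).

    H : (n, d) per-token representations with bidirectional context.
    u, v : (d,) learned projection vectors (assumed).
    The two projections cost O(n d); the resulting score matrix is rank-1.
    """
    q = H @ u                      # (n,) query-side projection
    k = H @ v                      # (n,) key-side projection
    return np.outer(q, k)          # (n, n) rank-1 score matrix

def select_tokens_for_attention(H, u, v, budget):
    """Route the `budget` tokens with the highest received score to the expensive attention path."""
    scores = rank1_routing_scores(H, u, v).max(axis=0)   # best score each token receives
    return np.argsort(-scores)[:budget]

# Toy usage: 16 tokens, 8-dim representations, route 4 tokens to attention.
rng = np.random.default_rng(0)
H = rng.standard_normal((16, 8))
u, v = rng.standard_normal(8), rng.standard_normal(8)
print(select_tokens_for_attention(H, u, v, budget=4))
```

Because the scores factor through two 1-D projections, the routing decision can be made at linear cost in sequence length while still being a pairwise comparison, which is consistent with the abstract's claim of 99.7% routing at linear inference cost.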
