Short window attention enables long-term memorization
arXiv:2509.24552v3 Announce Type: replace-cross
Abstract: Recent works show that hybrid architectures combining local sliding-window attention layers and global attention layers outperform either of these architectures taken separately. However, the impact of the window length and the interplay between local and global layers remain under-studied. In this work, we first analyze the interaction between short- and long-term memory by considering SWAX: a hybrid architecture consisting of sliding-window attention and xLSTM linear RNN layers.
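As a rough illustration of such a hybrid, the PyTorch sketch below pairs causal sliding-window attention with a recurrent layer providing long-term memory. The class names, the pre-norm residual layout, and the use of nn.LSTM as a stand-in for the xLSTM layer are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask where True marks key positions a query may NOT attend to
    (non-causal or outside the sliding window of size `window`)."""
    idx = torch.arange(seq_len)
    rel = idx[None, :] - idx[:, None]        # key_pos - query_pos
    allowed = (rel <= 0) & (rel > -window)   # causal and within the window
    return ~allowed


class SlidingWindowAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, window: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.window = window

    def forward(self, x: torch.Tensor, window: int = None) -> torch.Tensor:
        w = window if window is not None else self.window
        mask = sliding_window_mask(x.size(1), w).to(x.device)
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out


class SWAXBlock(nn.Module):
    """Hypothetical hybrid block: local sliding-window attention followed by a
    recurrent layer standing in for the xLSTM long-term memory."""

    def __init__(self, d_model: int, n_heads: int, window: int):
        super().__init__()
        self.local = SlidingWindowAttention(d_model, n_heads, window)
        self.rnn = nn.LSTM(d_model, d_model, batch_first=True)  # stand-in for xLSTM
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, window: int = None) -> torch.Tensor:
        x = x + self.local(self.norm1(x), window=window)
        x = x + self.rnn(self.norm2(x))[0]
        return x


if __name__ == "__main__":
    block = SWAXBlock(d_model=64, n_heads=4, window=8)
    x = torch.randn(2, 32, 64)          # (batch, seq_len, d_model)
    print(block(x).shape)               # default window
    print(block(x, window=4).shape)     # window overridden at call time
```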
A counter-intuitive finding is that larger sliding windows hurt long-context performance. In fact, short window attention encourages the model to better train the long-term memory of the xLSTM, as it cannot rely on the local softmax attention mechanism for long-context retrieval. We also validate our findings on local-global architectures alternating short-window and full attention layers: the windows of the local layers should be kept small so as not to hinder the usefulness of the global layers.
However, overly small sliding windows are detrimental even on short-context tasks, which could otherwise be solved with information available within moderately larger windows. Therefore, we train hybrid architectures by stochastically changing the sliding window size, forcing the model to leverage both the short-term window and the long-term memory. Training with stochastic window sizes significantly outperforms regular window attention on both short- and long-context problems.
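A minimal sketch of the stochastic window-size training idea, assuming a language model that accepts a per-step `window` argument (such as the block sketched above); the candidate window sizes and the uniform sampling over them are illustrative assumptions, not the paper's actual sampling scheme.

```python
import random

import torch.nn.functional as F


def train_step(model, batch, optimizer, window_choices=(128, 512, 2048)):
    """One optimization step with a stochastically sampled sliding-window size.

    `window_choices` and uniform sampling are illustrative; the paper may use a
    different distribution over window sizes.
    """
    window = random.choice(window_choices)       # resampled at every step
    input_ids, labels = batch["input_ids"], batch["labels"]
    logits = model(input_ids, window=window)     # model must accept `window`
    loss = F.cross_entropy(logits.flatten(0, 1), labels.flatten())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Resampling the window at every step prevents the model from committing to the local attention path alone: on steps with a small window it must route long-range information through the recurrent memory, while larger windows keep short-context retrieval sharp.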