RPNT: Robust Pre-trained Neural Transformer — A Pathway for Generalized Motor Decoding
arXiv:2601.17641v2 Announce Type: replace
Abstract: Brain motor decoding aims to interpret and translate neural activity into behaviors. The ability of decoding models to generalize across variations, such as recordings from different brain sites, experimental sessions, behavior types, and subjects, will be critical for real-world applications. Current decoding models only partially address these challenges. In this work, we develop a pretrained neural transformer model, RPNT (Robust Pretrained Neural Transformer), designed to achieve robust generalization through pretraining, which in turn enables effective finetuning for downstream motor decoding tasks. We arrived at the RPNT architecture by systematically investigating which transformer building blocks are suitable for modeling neural spike activity, since components developed for other modalities, such as text and images, do not transfer directly to neural data. The final RPNT architecture incorporates three enabling components: 1) a multidimensional rotary positional embedding that aggregates experimental metadata such as site coordinates, session IDs, and behavior types; 2) a context-based attention mechanism that applies convolution kernels to global attention, learning local temporal structure to handle the non-stationarity of neural population activity; 3) a robust self-supervised learning objective that combines stochastic causal masking strategies with contrastive representations. We pretrained two versions of RPNT on distinct datasets that present significant generalization challenges: a) a multi-session, multi-task, and multi-subject microelectrode benchmark; b) multi-site recordings from many cortical locations using high-density Neuropixels 1.0 probes. After pretraining, we evaluated RPNT generalization on cross-session, cross-type, cross-subject, and cross-site downstream behavior decoding tasks, where RPNT consistently outperforms existing decoding models.
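The abstract does not specify how the multidimensional rotary positional embedding aggregates metadata, but one plausible reading is that the feature dimension is split into groups, with each group rotated by a different metadata axis (time step, site coordinates, session ID). The sketch below illustrates that idea; the grouping scheme, function names, and the choice of four axes are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def rope_rotate(x, positions, base=10000.0):
    """Standard rotary embedding: rotate consecutive feature pairs of x
    by angles derived from `positions` (one frequency per pair)."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)      # (half,)
    angles = positions[..., None] * freqs          # (..., half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def multidim_rope(x, time_idx, site_xy, session_id):
    """Hypothetical multidimensional RoPE: split the feature dimension
    into four groups, each rotated by a different metadata axis."""
    g = x.shape[-1] // 4
    parts = [
        rope_rotate(x[..., 0 * g:1 * g], time_idx),        # temporal position
        rope_rotate(x[..., 1 * g:2 * g], site_xy[..., 0]), # site x-coordinate
        rope_rotate(x[..., 2 * g:3 * g], site_xy[..., 1]), # site y-coordinate
        rope_rotate(x[..., 3 * g:4 * g], session_id),      # session identifier
    ]
    return np.concatenate(parts, axis=-1)

# Example: 8 time steps, 16-dim features
x = np.random.randn(8, 16)
out = multidim_rope(x, np.arange(8.0),
                    np.tile([1.5, -2.0], (8, 1)), np.full(8, 3.0))
```

Because each rotation is norm-preserving, metadata is injected into queries and keys without changing token magnitudes, which is one reason rotary embeddings are attractive for folding heterogeneous context into attention.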