ChronoSpike: An Adaptive Spiking Graph Neural Network for Dynamic Graphs
arXiv:2602.01124v3 Announce Type: replace
Abstract: Dynamic graph representation learning requires capturing both structural relations and temporal evolution, yet existing approaches face a core trade-off: attention-based methods offer expressiveness at $O(T^2)$ complexity, while recurrent architectures suffer from gradient pathologies and dense state storage. Spiking neural networks provide event-driven efficiency but are constrained by sequential propagation, binary information loss, and local aggregation that lacks global context. We propose ChronoSpike, an adaptive spiking graph neural network that integrates learnable LIF neurons with per-channel membrane dynamics, multi-head spatially-attentive aggregation over continuous features, and a lightweight Transformer temporal encoder. This design enables fine-grained local modeling and long-range dependency capture with $O(T \cdot d)$ activation/state memory and an additional $O(T^2)$ per-node attention term that remains small for the horizons evaluated here. ChronoSpike outperforms twelve state-of-the-art baselines on three large benchmarks by $2.0$% Macro-F1 and $2.4$% Micro-F1 on average while achieving $3-10\times$ faster training than recurrent methods with a constant 105K-parameter budget independent of graph size. We provide theoretical guarantees for membrane potential boundedness, gradient flow stability under contraction factor $\rho<1$, and BIBO stability; interpretability analyses reveal heterogeneous temporal receptive fields and a learned primacy effect with $83-88$% sparsity.
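The per-channel leaky integrate-and-fire (LIF) dynamics at the core of the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `lif_forward`, the soft-reset-by-subtraction scheme, and the plain threshold (no surrogate gradient, no learnable threshold) are assumptions; the paper's learnable LIF presumably trains the per-channel decay `beta` and uses a surrogate gradient for the spike nonlinearity.

```python
import numpy as np

def lif_forward(inputs, beta, v_th=1.0):
    """Run an LIF layer with per-channel membrane decay over a sequence.

    inputs : (T, d) array of input currents per time step
    beta   : (d,) per-channel membrane decay in (0, 1); in a learnable LIF
             this would be a trained parameter (illustrative here)
    v_th   : firing threshold
    Returns (spikes, membrane), each of shape (T, d).
    """
    T, d = inputs.shape
    v = np.zeros(d)                      # membrane potential
    spikes = np.zeros((T, d))
    membrane = np.zeros((T, d))
    for t in range(T):
        v = beta * v + inputs[t]         # leaky integration, channel-wise decay
        s = (v >= v_th).astype(float)    # binary spike when threshold is crossed
        v = v - s * v_th                 # soft reset: subtract threshold on spike
        spikes[t], membrane[t] = s, v
    return spikes, membrane
```

With decay factors below one and bounded inputs, the membrane potential stays bounded (a geometric series argument), which is the intuition behind the boundedness and $\rho<1$ gradient-stability guarantees the abstract mentions; a larger `beta` gives a channel a longer effective memory, matching the "heterogeneous temporal receptive fields" observation.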