S2O: Early Stopping for Sparse Attention via Online Permutation
arXiv:2602.22575v2 Announce Type: replace
Abstract: Attention scales quadratically with sequence length, fundamentally limiting long-context inference. Existing block-granularity sparsification can reduce latency, but coarse blocks impose an intrinsic…
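As context for the quadratic cost the abstract refers to, here is a minimal NumPy sketch of naive (dense) attention, not the paper's S2O method: the full n-by-n score matrix is what makes compute and memory grow quadratically with sequence length n, and what block-sparse methods avoid materializing in full.

```python
import numpy as np

def naive_attention(Q, K, V):
    """Dense softmax attention over a sequence of length n.

    The scores matrix has shape (n, n), so both compute and memory
    scale quadratically with sequence length.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)              # (n, n) pairwise scores
    # Numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                          # (n, d) output

n, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = naive_attention(Q, K, V)
print(out.shape)  # → (8, 4)
```

Block-granularity sparsification, as described in the abstract, skips entire tiles of this (n, n) matrix; the coarseness of those tiles is the limitation the paper targets.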