cs.AI, cs.LG

Dispatch-Aware Ragged Attention for Pruned Vision Transformers

arXiv:2604.15408v2 Announce Type: replace
Abstract: Token pruning methods for Vision Transformers (ViTs) promise quadratic reductions in attention FLOPs by dropping uninformative patches. Yet standard variable-length attention APIs — including FlashA…
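As a rough illustration of the abstract's quadratic claim (the notation here is ours, not the paper's): with $N$ tokens of head dimension $d$, self-attention cost is dominated by the $QK^{\top}$ and attention-times-$V$ products,

    $C(N) \propto N^{2} d$.

Pruning so that only a fraction $\rho$ of the patches remains gives

    $C(\rho N) \propto (\rho N)^{2} d = \rho^{2}\, C(N)$,

so keeping, say, half the tokens ($\rho = 0.5$) leaves roughly a quarter of the original attention FLOPs, assuming the attention kernel can actually exploit the shorter, ragged sequences.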