cs.AI, cs.LG

VFA: Relieving Vector Operations in Flash Attention with Global Maximum Pre-computation

arXiv:2604.12798v1 Announce Type: cross
Abstract: FlashAttention-style online softmax enables exact attention computation with linear memory by streaming score tiles through on-chip memory and maintaining a running maximum and normalizer. However, as …
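The online-softmax scheme the abstract refers to can be sketched as follows. This is a minimal NumPy illustration of the general technique (streaming score tiles while maintaining a running maximum and normalizer), not the paper's VFA method; the function name and tiling are illustrative assumptions.

```python
import numpy as np

def online_softmax_weighted_sum(score_tiles, value_tiles):
    """Streaming (online) softmax, FlashAttention-style: process score
    tiles one at a time, keeping only a running maximum m, a running
    normalizer l, and a running weighted accumulator acc."""
    m = -np.inf   # running maximum over all scores seen so far
    l = 0.0       # running sum of exp(score - m)
    acc = None    # running sum of exp(score - m) @ values

    for s, v in zip(score_tiles, value_tiles):
        m_new = max(m, float(s.max()))
        scale = np.exp(m - m_new)   # rescale earlier partials to new max
        p = np.exp(s - m_new)       # unnormalized tile probabilities
        l = l * scale + p.sum()
        tile_acc = p @ v            # weighted partial sum for this tile
        acc = tile_acc if acc is None else acc * scale + tile_acc
        m = m_new
    return acc / l

# Sanity check against a one-shot softmax over the full score vector
rng = np.random.default_rng(0)
scores = rng.normal(size=8)
values = rng.normal(size=(8, 4))
w = np.exp(scores - scores.max())
ref = (w / w.sum()) @ values
out = online_softmax_weighted_sum(np.split(scores, 4), np.split(values, 4))
assert np.allclose(out, ref)
```

The per-tile rescaling by `exp(m - m_new)` is exactly the vector work that, per the title, VFA aims to relieve by pre-computing a global maximum.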