QFlash: Bridging Quantization and Memory Efficiency in Vision Transformer Attention
arXiv:2604.25306v1 Announce Type: new
Abstract: FlashAttention improves efficiency through tiling, but its online softmax still relies on floating-point arithmetic for numerical stability, making full quantization difficult. We identify three main obs…
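The abstract points at the running-max/rescaling step of FlashAttention's online softmax as the part that resists quantization. As a point of reference only, here is a minimal NumPy sketch of that online softmax idea (running maximum and denominator kept in float32, with rescaling when a new maximum appears); it is an illustrative reconstruction, not the paper's kernel or proposed quantized variant, and the function name is hypothetical.

```python
import numpy as np

def online_softmax_rowwise(score_tiles):
    """Streaming softmax over one row of attention scores split into tiles.

    Keeps a running maximum `m` and running denominator `l` in float32 and
    rescales the accumulated denominator whenever a larger maximum shows up,
    which is the floating-point stabilization step the abstract refers to.
    """
    m = -np.inf          # running row maximum
    l = 0.0              # running softmax denominator
    seen = []            # tiles kept only so this sketch can normalize at the end
    for tile in score_tiles:
        tile = tile.astype(np.float32)
        m_new = max(m, float(tile.max()))
        # rescale the old denominator to the new maximum, then add this tile
        l = l * np.exp(m - m_new) + float(np.exp(tile - m_new).sum())
        m = m_new
        seen.append(tile)
    # normalize every tile against the final (global) max and denominator
    return np.concatenate([np.exp(t - m) / l for t in seen])

# Usage: the streamed result matches a one-shot softmax over the full row
row = np.random.randn(128).astype(np.float32)
ref = np.exp(row - row.max()) / np.exp(row - row.max()).sum()
assert np.allclose(online_softmax_rowwise(np.split(row, 4)), ref, atol=1e-5)
```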