VFA: Relieving Vector Operations in Flash Attention with Global Maximum Pre-computation

📰 arXiv cs.AI

arXiv:2604.12798v1 Announce Type: cross

Abstract: FlashAttention-style online softmax enables exact attention computation with linear memory by streaming score tiles through on-chip memory and maintaining a running maximum and normalizer. However, as attention kernels approach peak tensor-core/cube-core throughput on modern accelerators, non-matmul components of online softmax -- especially the per-tile rowmax and rowsum reductions and the rescale chains -- can become vector- or SIMD-limited and dominate …
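
The bookkeeping the abstract refers to is easier to see in code. Below is a minimal NumPy sketch of a FlashAttention-style streaming pass that keeps a running row maximum `m` and normalizer `l` per query row, followed by a variant that assumes the global row maximum is already known, as the title's "Global Maximum Pre-computation" suggests, so the per-tile max update and rescale chain drop out. The function names, tiling loop, and the pre-computed-max variant are illustrative assumptions, not the paper's actual kernel.

```python
# Illustrative sketch only. Names and structure are assumptions inspired by
# the abstract and title, not the paper's implementation.
import numpy as np

def online_softmax_attention(q, k, v, tile=128):
    """Exact attention for one query block, streaming key/value tiles.

    Maintains a running row maximum `m` and normalizer `l`, so the full
    score matrix is never materialized (linear memory in sequence length).
    """
    d = q.shape[-1]
    m = np.full(q.shape[0], -np.inf)           # running row max
    l = np.zeros(q.shape[0])                   # running normalizer (rowsum)
    o = np.zeros((q.shape[0], v.shape[-1]))    # unnormalized output accumulator
    for start in range(0, k.shape[0], tile):
        s = q @ k[start:start + tile].T / np.sqrt(d)   # score tile (matmul)
        m_new = np.maximum(m, s.max(axis=-1))          # per-tile rowmax (vector op)
        alpha = np.exp(m - m_new)                      # rescale factor for old state
        p = np.exp(s - m_new[:, None])                 # per-tile exponentials
        l = alpha * l + p.sum(axis=-1)                 # per-tile rowsum (vector op)
        o = alpha[:, None] * o + p @ v[start:start + tile]
        m = m_new
    return o / l[:, None]

def precomputed_max_attention(q, k, v, m_global, tile=128):
    """Hypothetical variant: the row maximum `m_global` is known up front,
    so the per-tile max update and the rescale chain disappear."""
    d = q.shape[-1]
    l = np.zeros(q.shape[0])
    o = np.zeros((q.shape[0], v.shape[-1]))
    for start in range(0, k.shape[0], tile):
        s = q @ k[start:start + tile].T / np.sqrt(d)
        p = np.exp(s - m_global[:, None])      # exponentiate against the fixed max
        l += p.sum(axis=-1)                    # accumulate normalizer only
        o += p @ v[start:start + tile]
    return o / l[:, None]
```

In the streaming version, the `m_new`, `alpha`, and rowsum updates are exactly the non-matmul, vector-unit work the abstract identifies as a potential bottleneck; the second loop keeps only the exponentials, the rowsum accumulation, and the matmuls.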

Published 15 Apr 2026