QFlash: Bridging Quantization and Memory Efficiency in Vision Transformer Attention
arXiv:2604.25306v1 Announce Type: cross
Abstract: FlashAttention improves efficiency through tiling, but its online softmax still relies on floating-point arithmetic for numerical stability, making full quantization difficult. We identify three main obstacles to integer-only FlashAttention: (1) scale explosion during tile-wise accumulation, (2) inefficient shift-based exponential operations on GPUs, and (3) quantization granularity constraints requiring uniform scales for integer comparison. To
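To make the floating-point dependence concrete, here is a minimal NumPy sketch of the standard FlashAttention-style tile-wise online softmax, not the paper's QFlash method: the `exp` calls and the running-max rescaling are the floating-point steps the abstract refers to. Function and variable names and the tile size are illustrative assumptions.

```python
import numpy as np

def online_softmax_attention(q, k, v, tile=64):
    """Compute softmax(q @ k.T) @ v one key/value tile at a time
    using a running max, denominator, and output accumulator."""
    n = k.shape[0]
    m = np.full(q.shape[0], -np.inf)          # running row-wise max of logits
    l = np.zeros(q.shape[0])                  # running softmax denominator
    o = np.zeros((q.shape[0], v.shape[1]))    # running output accumulator

    for start in range(0, n, tile):
        s = q @ k[start:start + tile].T       # logits for this key tile
        m_new = np.maximum(m, s.max(axis=1))  # updated running max
        # Rescaling previous accumulators: exp() and the correction factor
        # are the floating-point operations that resist integer-only execution.
        correction = np.exp(m - m_new)
        p = np.exp(s - m_new[:, None])        # tile-local unnormalized probabilities
        l = l * correction + p.sum(axis=1)
        o = o * correction[:, None] + p @ v[start:start + tile]
        m = m_new
    return o / l[:, None]

# Illustrative check against the naive full-matrix softmax:
q = np.random.randn(8, 16)
k = np.random.randn(256, 16)
v = np.random.randn(256, 32)
logits = q @ k.T
ref = (np.exp(logits - logits.max(axis=1, keepdims=True)) /
       np.exp(logits - logits.max(axis=1, keepdims=True)).sum(axis=1, keepdims=True)) @ v
assert np.allclose(online_softmax_attention(q, k, v), ref)
```

In this sketch, each tile's contribution is folded in by rescaling the previous accumulators with `exp(m - m_new)`; obstacle (1) in the abstract concerns how the scales of these accumulators would grow across tiles under integer arithmetic, and obstacle (2) concerns replacing the `exp` calls with shift-based approximations on GPUs.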