KV Packet: Recomputation-Free Context-Independent KV Caching for LLMs
📰 ArXiv cs.AI
arXiv:2604.13226v1 Announce Type: cross

Abstract: Large Language Models (LLMs) rely heavily on Key-Value (KV) caching to minimize inference latency. However, standard KV caches are context-dependent: reusing a cached document in a new context requires recomputing KV states to account for shifts in the attention distribution. Existing solutions such as CacheBlend, EPIC, and SAM-KV mitigate this issue by selectively recomputing a subset of tokens; however, they still incur non-negligible computational overhead.
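The selective-recomputation idea referenced above (recomputing KV states for only a subset of cached tokens) can be illustrated with a toy NumPy sketch. All names here (`selective_recompute`, the deviation heuristic, the shapes) are hypothetical illustrations, not the actual mechanism of CacheBlend, EPIC, SAM-KV, or KV Packet; the sketch only shows the general pattern of patching a stale cache under a fixed recompute budget.

```python
import numpy as np

def attention(q, K, V):
    # Scaled dot-product attention for a single query vector.
    scores = q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

def selective_recompute(K_cached, V_cached, K_true, V_true, budget):
    # Hypothetical heuristic: recompute only the `budget` cached tokens
    # whose keys deviate most from their context-correct values.
    dev = np.linalg.norm(K_cached - K_true, axis=1)
    idx = np.argsort(dev)[::-1][:budget]   # top-`budget` most stale tokens
    K, V = K_cached.copy(), V_cached.copy()
    K[idx], V[idx] = K_true[idx], V_true[idx]
    return K, V

rng = np.random.default_rng(0)
n, d = 16, 8
K_true = rng.normal(size=(n, d))
V_true = rng.normal(size=(n, d))
# Stale cache: KV states computed in a different context drift away.
K_cached = K_true + 0.5 * rng.normal(size=(n, d))
V_cached = V_true + 0.5 * rng.normal(size=(n, d))
q = rng.normal(size=d)

ref = attention(q, K_true, V_true)          # full recomputation (exact)
def err(budget):
    K, V = selective_recompute(K_cached, V_cached, K_true, V_true, budget)
    return np.linalg.norm(attention(q, K, V) - ref)
```

Recomputing all `n` tokens reproduces the reference output exactly, while a zero budget leaves the full staleness error; partial budgets trade accuracy against the recomputation cost the abstract describes as non-negligible.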