CASK: Core-Aware Selective KV Compression for Reasoning Traces

📰 ArXiv cs.AI

arXiv:2604.10900v1 Announce Type: new Abstract: In large language models performing long-form reasoning, the KV cache grows rapidly with decode length, creating bottlenecks in memory and inference stability. Existing reasoning-oriented KV compression has mostly followed an eviction-centered view: estimate token importance more accurately, then discard lower-ranked entries. Our analysis suggests that scorer refinement alone often fails to substantially reorganize the actual keep-set and may there…
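The eviction-centered baseline the abstract critiques can be sketched in a few lines: rank cached tokens by an importance score and keep only the top entries under a fixed budget. This is a minimal illustration of that general scheme, not the paper's CASK method; the function name, scoring input, and shapes are assumptions.

```python
import numpy as np

def evict_kv(keys, values, importance, budget):
    """Eviction-centered KV compression sketch: rank tokens by an
    importance score and retain only the `budget` highest-ranked
    entries, preserving their original order."""
    # Indices of the `budget` highest-importance tokens, sorted back
    # into sequence order so positional structure is preserved.
    keep = np.sort(np.argsort(importance)[-budget:])
    return keys[keep], values[keep], keep

# Toy example: 6 cached tokens, head dim 4, budget of 3.
rng = np.random.default_rng(0)
K = rng.standard_normal((6, 4))
V = rng.standard_normal((6, 4))
# Hypothetical per-token scores, e.g. accumulated attention mass.
score = np.array([0.9, 0.1, 0.5, 0.05, 0.8, 0.2])
K2, V2, kept = evict_kv(K, V, score, budget=3)
print(kept)  # → [0 2 4]
```

The paper's point is that refining `score` alone tends to leave `kept` largely unchanged, which motivates looking beyond scorer accuracy.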

Published 14 Apr 2026