Haiku to Opus in Just 10 bits: LLMs Unlock Massive Compression Gains

📰 ArXiv cs.AI

arXiv:2604.02343v1 Announce Type: cross Abstract: We study the compression of LLM-generated text across lossless and lossy regimes, characterizing a compression-compute frontier where more compression is possible at the cost of more compute. For lossless compression, domain-adapted LoRA adapters can improve LLM-based arithmetic coding by 2x over compression with the base LLM alone. For lossy compression, prompting a model for a succinct rewrite then applying arithmetic coding can achieve compres…
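The lossless half of the pipeline relies on arithmetic coding driven by a model's next-symbol probabilities: the better the model predicts the text, the narrower the final interval and the fewer bits needed. A minimal sketch of that idea, with a fixed three-symbol distribution standing in for an LLM's context-dependent next-token probabilities (the symbols and probabilities here are purely illustrative, not from the paper):

```python
import math
from fractions import Fraction

# Toy "model": a fixed next-symbol distribution. A real LLM-based coder
# would query the model for context-dependent probabilities at each step.
PROBS = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
SYMBOLS = sorted(PROBS)

def interval(text):
    """Narrow [low, high) once per symbol, as in arithmetic coding."""
    low, high = Fraction(0), Fraction(1)
    for ch in text:
        width = high - low
        cum = Fraction(0)
        for s in SYMBOLS:
            if s == ch:
                low, high = low + cum * width, low + (cum + PROBS[s]) * width
                break
            cum += PROBS[s]
    return low, high

def encode(text):
    """Return the shortest binary fraction lying inside the final interval."""
    low, high = interval(text)
    n = 0
    while True:
        n += 1
        k = math.ceil(low * 2 ** n)        # smallest multiple of 2^-n >= low
        if Fraction(k, 2 ** n) < high:     # still inside [low, high)?
            return format(k, f"0{n}b")

def decode(bits, length):
    """Replay the same interval narrowing to recover `length` symbols."""
    x = Fraction(int(bits, 2), 2 ** len(bits))
    low, high = Fraction(0), Fraction(1)
    out = []
    for _ in range(length):
        width = high - low
        cum = Fraction(0)
        for s in SYMBOLS:
            lo = low + cum * width
            hi = lo + PROBS[s] * width
            if lo <= x < hi:
                out.append(s)
                low, high = lo, hi
                break
            cum += PROBS[s]
    return "".join(out)
```

A message the model assigns high probability, e.g. `"aabac"` with probability 1/128, compresses to at most 7 bits; the paper's LoRA-adaptation result amounts to making the model assign higher probability to in-domain text, which shrinks these intervals further.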

Published 6 Apr 2026