Frequency Matters: Fast Model-Agnostic Data Curation for Pruning and Quantization

📰 ArXiv cs.AI

arXiv:2603.16105v2 Announce Type: replace-cross

Abstract: Post-training model compression is essential for enhancing the portability of Large Language Models (LLMs) while preserving their performance. While several compression approaches have been proposed, less emphasis has been placed on selecting the most suitable set of data (the so-called *calibration data*) for finding the compressed model configuration. The choice of calibration data is a critical step in preserving model capabilities.
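The announcement does not describe the paper's method itself, but the role calibration data plays in post-training quantization can be illustrated with a minimal sketch. The snippet below is a generic, hypothetical example (not the paper's algorithm): calibration activations are used to pick a symmetric int8 scale, which then determines how well unseen values survive the quantize/dequantize round trip. All function names and the random data are illustrative assumptions.

```python
import numpy as np

def calibrate_scale(calib_activations, num_bits=8):
    # Symmetric range calibration: the scale is set from the largest
    # absolute activation observed in the calibration data.
    max_abs = max(float(np.max(np.abs(a))) for a in calib_activations)
    qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
    return max_abs / qmax

def quantize(x, scale, num_bits=8):
    # Round to the nearest integer grid point and clip to the int8 range.
    qmax = 2 ** (num_bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Hypothetical calibration batches standing in for real model activations.
rng = np.random.default_rng(0)
calib = [rng.normal(size=256).astype(np.float32) for _ in range(4)]

scale = calibrate_scale(calib)
x = calib[0]
err = float(np.abs(dequantize(quantize(x, scale), scale) - x).max())
```

Because the scale is derived entirely from the calibration set, a poorly chosen set (e.g. one whose value range does not match deployment inputs) yields a scale that clips or wastes precision, which is the sensitivity the abstract highlights.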

Published 8 Apr 2026