Provable Post-Training Quantization: Theoretical Analysis of OPTQ and Qronos

📰 ArXiv cs.AI

arXiv:2508.04853v2 Abstract: Post-training quantization (PTQ) has become a crucial tool for reducing the memory and compute costs of modern deep neural networks, including large language models (LLMs). Among PTQ algorithms, the OPTQ framework (also known as GPTQ) has emerged as a leading method due to its computational efficiency and strong empirical performance. Despite its widespread adoption, however, OPTQ lacks rigorous quantitative theoretical guarantees. This paper develops a theoretical analysis of OPTQ and of the related Qronos algorithm.
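For context, the OPTQ/GPTQ procedure the abstract refers to quantizes a weight matrix one column at a time, feeding each column's quantization error back into the not-yet-quantized columns through the inverse Hessian computed from calibration data. The NumPy sketch below illustrates that error-feedback step; the `quantize_scalar` helper, the `scale` parameter, and the damping constant are illustrative assumptions, not the paper's exact formulation (the production algorithm works with a Cholesky factor of the inverse Hessian and per-group quantization grids).

```python
import numpy as np

def quantize_scalar(w, scale):
    """Round-to-nearest uniform quantization (illustrative helper)."""
    return scale * np.round(w / scale)

def optq_quantize(W, H, scale=0.05, damp=1e-2):
    """Sketch of OPTQ/GPTQ: greedy column-wise quantization with
    error feedback through the inverse Hessian.

    W : (rows, cols) weight matrix; rows are quantized in parallel.
    H : (cols, cols) Hessian proxy, typically X @ X.T over calibration inputs.
    """
    W = np.array(W, dtype=np.float64)
    cols = W.shape[1]
    # Damped inverse Hessian; the real algorithm uses a Cholesky
    # factorization of H^{-1} for speed and numerical stability.
    Hinv = np.linalg.inv(H + damp * np.mean(np.diag(H)) * np.eye(cols))
    Q = np.zeros_like(W)
    for j in range(cols):
        Q[:, j] = quantize_scalar(W[:, j], scale)       # quantize column j
        err = (W[:, j] - Q[:, j]) / Hinv[j, j]          # scaled residual
        W[:, j + 1:] -= np.outer(err, Hinv[j, j + 1:])  # push error onto remaining columns
    return Q
```

Run on a small random layer, e.g. `optq_quantize(np.random.randn(4, 8), X @ X.T)` with calibration activations `X` of shape `(8, n_samples)`, this returns a quantized matrix whose output on the calibration data stays close to the original layer's output, the kind of layerwise error that quantitative guarantees for OPTQ would need to bound.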

Published 13 Apr 2026