Incompressible Knowledge Probes: Estimating Black-Box LLM Parameter Counts via Factual Capacity
ArXiv cs.AI
arXiv:2604.24827v1 Announce Type: cross

Abstract: Closed-source frontier labs do not disclose parameter counts, and the standard alternative -- inference economics -- carries $2\times$ or greater uncertainty from hardware, batching, and serving-stack assumptions external to the model. We exploit a tighter intrinsic bound: storing $F$ facts requires at least $F/$(bits per parameter) weights, so measuring how much a model \emph{knows} lower-bounds how many parameters it \emph{has}. We introduce \textbf{Incompressible Knowledge Probes}.
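The bound stated in the abstract can be illustrated with a few lines of arithmetic. This is a minimal sketch, not the paper's method: the fact count and bits-per-parameter figure below are hypothetical placeholders chosen for illustration only.

```python
def min_parameter_count(num_facts: float, bits_per_param: float) -> float:
    """Lower-bound a model's parameter count from its measured factual capacity.

    Per the abstract's bound, storing F facts requires at least
    F / (bits per parameter) weights, so the measured fact count divided
    by the per-parameter knowledge capacity is a parameter-count floor.
    """
    return num_facts / bits_per_param


# Hypothetical example: a probe estimates the model stores 10 billion facts,
# and we assume each parameter can encode at most 2 bits of factual knowledge.
lower_bound = min_parameter_count(10e9, 2.0)
print(f"parameter lower bound: {lower_bound:.3e}")
```

Under these placeholder numbers the floor is 5 billion parameters; the model could be larger, but not smaller, since a smaller model could not have stored that many facts.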