The Randomness Floor: Measuring Intrinsic Non-Randomness in Language Model Token Distributions
📰 ArXiv cs.AI
arXiv:2604.22771v1 Announce Type: cross

Abstract: Language models cannot be random. This paper introduces Entropic Deviation (ED), the normalised KL divergence between a model's token distribution and the uniform distribution, and measures it systematically across 31,200 generations spanning seven models, two architectures (transformer and state space), nine prompt categories, three temperatures, and five languages. Under semantically neutral prompts (empty strings, random characters, nonsense s
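The metric named in the abstract can be sketched directly. The snippet below is a minimal illustration, not the paper's implementation: it assumes the normalisation divides the KL divergence by its maximum possible value, log(V), so that ED lies in [0, 1], with 0 for a perfectly uniform distribution and 1 for a one-hot distribution.

```python
import math

def entropic_deviation(probs):
    """Hypothetical sketch of Entropic Deviation (ED): the KL divergence
    between a token distribution `probs` and the uniform distribution over
    the same vocabulary, normalised (here, by assumption) by log(V), the
    largest KL divergence from uniform that any distribution can attain."""
    V = len(probs)
    uniform = 1.0 / V
    # KL(p || U) = sum_i p_i * log(p_i / (1/V)); terms with p_i = 0 vanish.
    kl = sum(p * math.log(p / uniform) for p in probs if p > 0.0)
    return kl / math.log(V)

# A uniform distribution deviates not at all; a one-hot distribution maximally.
print(entropic_deviation([0.25, 0.25, 0.25, 0.25]))  # → 0.0
print(entropic_deviation([1.0, 0.0, 0.0, 0.0]))      # → 1.0
```

Since KL(p ‖ U) = log V − H(p), this normalised form is equivalent to 1 − H(p)/log V, i.e. one minus the distribution's normalised entropy.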