Model Diversity, Not Method, Determines Reasoning Strategy
📰 ArXiv cs.AI
arXiv:2604.10827v1 Announce Type: new

Abstract: Compute scaling for LLM reasoning requires allocating budget between exploring solution approaches ($breadth$) and refining promising solutions ($depth$). Most methods implicitly trade one off for the other, yet why a given trade-off works remains unclear, and validation on a single model obscures the role of the model itself. We argue that $\textbf{the optimal strategy depends on the model's diversity profile}$, the spread of probability mass across