Chain-in-Tree: Back to Sequential Reasoning in LLM Tree Search

arXiv cs.AI

arXiv:2509.25835v4 (replacement)

Abstract: Test-time scaling improves large language models (LLMs) on long-horizon reasoning tasks by allocating more compute at inference. LLM inference via tree search (LITS) achieves strong performance but is highly inefficient. We propose Chain-in-Tree (CiT), a plug-in framework that decides when to branch during search instead of expanding at every step. CiT introduces lightweight Branching Necessity (BN) evaluations, including BN-DP (direct prompting), …
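The core idea, branching only when a step actually needs exploration, can be illustrated with a minimal sketch. This is not the paper's implementation: `bn_score` and `propose_steps` are hypothetical stubs standing in for the LLM-based Branching Necessity evaluation (e.g. BN-DP) and candidate-step generation; only the control flow (expand wide when BN is high, continue as a chain otherwise) reflects the described technique.

```python
def bn_score(state: str) -> float:
    """Hypothetical Branching Necessity score. In CiT this would come
    from the LLM itself (e.g. BN-DP, direct prompting); here a stub
    treats states containing '?' as uncertain and worth branching on."""
    return 0.9 if "?" in state else 0.1

def propose_steps(state: str, k: int) -> list[str]:
    """Stub generator for k candidate next reasoning steps
    (an LLM call in a real system)."""
    return [f"{state}->s{i}" for i in range(k)]

def cit_search(root: str, depth: int = 3, width: int = 3,
               bn_threshold: float = 0.5) -> tuple[list[str], int]:
    """Chain-in-Tree-style search sketch: at each step, branch into
    `width` children only if the BN score crosses the threshold;
    otherwise extend a single chain, saving expansions."""
    frontier = [root]
    expansions = 0
    for _ in range(depth):
        next_frontier = []
        for state in frontier:
            k = width if bn_score(state) >= bn_threshold else 1
            next_frontier.extend(propose_steps(state, k))
            expansions += k
        frontier = next_frontier
    return frontier, expansions
```

On an "easy" root the sketch degenerates to a single chain (one expansion per step), while an "uncertain" root triggers full tree expansion, which is the efficiency lever the abstract describes.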

Published 13 Apr 2026