Decocted Experience Improves Test-Time Inference in LLM Agents

📰 ArXiv cs.AI

arXiv:2604.04373v1 Announce Type: new Abstract: There is growing interest in improving LLMs without updating model parameters. One well-established direction is test-time scaling, where increased inference-time computation (e.g., longer reasoning, sampling, or search) is used to improve performance. However, for complex reasoning and agentic tasks, naively scaling test-time compute can substantially increase cost and still lead to wasted budget on suboptimal exploration. In this paper, we explore …

Published 7 Apr 2026