When Thoughts Meet Facts: Reusable Reasoning for Long-Context LMs

📰 ArXiv cs.AI

arXiv:2510.07499v2 Announce Type: replace-cross

Abstract: Recent Long-Context Language Models (LCLMs) can process hundreds of thousands of tokens in a single prompt, enabling new opportunities for knowledge-intensive multi-hop reasoning by integrating large sets of retrieved documents or, in some cases, all the necessary information directly. However, simply feeding more documents into the context window fails to capture how evidence should be connected. We address this gap with thought templates […]
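The abstract is cut off before the method is described, but the core idea it sets up, pairing retrieved documents with a reusable reasoning scaffold rather than raw concatenation, can be sketched. The snippet below is a hypothetical illustration, not the paper's actual format: the template wording, the field names, and the `answer_multihop` helper are all assumptions.

```python
# Hypothetical sketch: prompting a long-context LM with a reusable
# "thought template" instead of raw document concatenation.
# Template text and names are illustrative assumptions, not the
# format defined in arXiv:2510.07499.

from typing import Callable

THOUGHT_TEMPLATE = """\
Question: {question}

Step 1 - Identify the bridge entity the question hinges on.
Step 2 - Find the document(s) that introduce that entity.
Step 3 - Find the document(s) connecting it to the final answer.
Step 4 - Combine the hops and state the answer.

Documents:
{documents}

Work through the steps, citing document numbers, then answer."""


def build_prompt(question: str, docs: list[str]) -> str:
    """Fill the reusable template with the question and retrieved docs."""
    numbered = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(docs))
    return THOUGHT_TEMPLATE.format(question=question, documents=numbered)


def answer_multihop(question: str, docs: list[str],
                    generate: Callable[[str], str]) -> str:
    """`generate` stands in for any long-context LM completion call."""
    return generate(build_prompt(question, docs))
```

Because the template is independent of any particular question or document set, it can be reused across queries, which is the reusability the title alludes to; how the paper actually constructs or learns such templates is not recoverable from the truncated abstract.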

Published 29 Apr 2026