RecaLLM: Addressing the Lost-in-Thought Phenomenon with Explicit In-Context Retrieval
arXiv cs.AI
arXiv:2604.09494v1 Announce Type: cross

Abstract: We propose RecaLLM, a set of reasoning language models post-trained to make effective use of long-context information. In-context retrieval, which identifies relevant evidence from context, and reasoning are deeply intertwined: retrieval supports reasoning, while reasoning often determines what must be retrieved. However, their interaction remains largely underexplored. In preliminary experiments on several open-source LLMs, we observe that in-co
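To make the notion of explicit in-context retrieval concrete, the following is a minimal, hypothetical sketch: each context passage is scored by lexical overlap with the question, and the top-scoring passages are surfaced as evidence before reasoning. The function names and the overlap-based scoring rule are illustrative assumptions, not the method proposed in the paper.

```python
def score(passage: str, question: str) -> int:
    """Count tokens shared between a passage and the question (illustrative scoring)."""
    return len(set(passage.lower().split()) & set(question.lower().split()))

def retrieve(context: list[str], question: str, k: int = 2) -> list[str]:
    """Return the k context passages most relevant to the question."""
    return sorted(context, key=lambda p: score(p, question), reverse=True)[:k]

def build_prompt(context: list[str], question: str) -> str:
    """Prepend retrieved evidence to the question, making retrieval explicit."""
    evidence = retrieve(context, question)
    return "Evidence:\n" + "\n".join(evidence) + f"\nQuestion: {question}"
```

In a real reasoning model, retrieval and reasoning would interleave rather than run once up front, which is precisely the interaction the abstract highlights.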