When Do Hallucinations Arise? A Graph Perspective on the Evolution of Path Reuse and Path Compression

📰 ArXiv cs.AI

arXiv:2604.03557v1 Announce Type: new Abstract: Reasoning hallucinations in large language models (LLMs) often appear as fluent yet unsupported conclusions that violate either the given context or underlying factual knowledge. Although such failures are widely observed, the mechanisms by which decoder-only Transformers produce them remain poorly understood. We model next-token prediction as a graph search process over an underlying graph, where entities correspond to nodes and learned transition
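The abstract frames next-token prediction as a search over an entity graph with learned transition weights. As a rough illustration only (the paper's actual construction is not shown in this snippet), here is a minimal Python sketch with made-up nodes and weights, suggesting how greedy reuse of a high-weight edge can yield a fluent continuation that contradicts the given context:

```python
# Toy sketch of the abstract's framing: next-token prediction as a walk over
# an entity graph with learned transition weights. All node names and weights
# below are illustrative assumptions, not values from the paper.

GRAPH = {
    # node -> {neighbor: learned transition weight}
    "Paris":  {"France": 0.7, "Texas": 0.3},
    "France": {"Europe": 0.9, "EU": 0.1},
    "Texas":  {"USA": 1.0},
}

def greedy_walk(start: str, steps: int) -> list:
    """Greedy-decoding analogue: always follow the highest-weight edge."""
    path, node = [start], start
    for _ in range(steps):
        neighbors = GRAPH.get(node)
        if not neighbors:
            break
        node = max(neighbors, key=neighbors.get)
        path.append(node)
    return path

if __name__ == "__main__":
    # Even if the surrounding context referred to Paris, Texas, the
    # high-weight "Paris -> France" edge wins under greedy path reuse,
    # giving a fluent but context-violating (hallucinated) continuation.
    print(greedy_walk("Paris", 2))  # ['Paris', 'France', 'Europe']
```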

Published 7 Apr 2026