Self-Awareness before Action: Mitigating Logical Inertia via Proactive Cognitive Awareness

📰 ArXiv cs.AI

arXiv:2604.20413v1 | Announce Type: new

Abstract: Large language models perform well on many reasoning tasks, yet they often lack awareness of whether their current knowledge or reasoning state is complete. In non-interactive puzzle settings, the narrative is fixed and the underlying structure is hidden; once a model forms an early hypothesis under incomplete premises, it can propagate that error through the rest of the reasoning process, leading to unstable conclusions. To address this issue, we propose S

Published 23 Apr 2026