Human-like Working Memory Interference in Large Language Models

📰 ArXiv cs.AI

arXiv:2604.09670v1 Announce Type: cross

Abstract: Intelligent systems must maintain and manipulate task-relevant information online to adapt to dynamic environments and changing goals. This capacity, known as working memory, is fundamental to human reasoning and intelligence. Despite having on the order of 100 billion neurons, both biological and artificial systems exhibit limitations in working memory. This raises a key question: why do large language models (LLMs) show such limitations, given …

Published 14 Apr 2026