If an LLM Were a Character, Would It Know Its Own Story? Evaluating Lifelong Learning in LLMs

📰 ArXiv cs.AI

arXiv:2503.23514v2 Announce Type: replace-cross Abstract: Large language models (LLMs) can carry out human-like dialogue, but unlike humans, they are stateless, retaining no memory across sessions. However, during multi-turn, multi-agent interactions, LLMs begin to exhibit consistent, character-like behaviors, hinting at a form of emergent lifelong learning. Despite this, existing benchmarks often fail to capture these dynamics, focusing primarily on static, open-ended evaluations. To address this…

Published 14 Apr 2025
Read full paper →