LPC-SM: Local Predictive Coding and Sparse Memory for Long-Context Language Modeling

📰 ArXiv cs.AI

arXiv:2604.03263v1 Announce Type: cross

Abstract: Most current long-context language models still rely on attention to handle both local interaction and long-range state, which leaves relatively little room to test alternative decompositions of sequence modeling. We propose LPC-SM, a hybrid autoregressive architecture that separates local attention, persistent memory, predictive correction, and run-time control within the same block, and we use Orthogonal Novelty Transport (ONT) to govern slow-m
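
The truncated abstract only names the four components of a block. As a rough illustration of how such a decomposition could be wired together, here is a minimal PyTorch sketch. Everything below — module names, the sliding-window mask, the memory bank, the one-step predictor, and the gating formula — is an assumption for illustration, not the paper's actual LPC-SM design or its ONT mechanism.

```python
import torch
import torch.nn as nn


class LPCSMBlock(nn.Module):
    """Illustrative sketch of one hybrid block combining local attention,
    a persistent memory bank, a predictive-correction term, and a run-time
    gate. Component names and formulas are assumptions, not the paper's."""

    def __init__(self, d_model: int, n_heads: int, mem_slots: int, window: int):
        super().__init__()
        self.window = window
        # Local interaction: attention restricted to a sliding window.
        self.local_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Persistent memory: a small bank of slowly learned slots read via attention.
        self.memory = nn.Parameter(torch.randn(mem_slots, d_model) * 0.02)
        self.mem_read = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Predictive correction: predict each state from its predecessor; the
        # residual error carries the "novel" information forward.
        self.predictor = nn.Sequential(
            nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, d_model)
        )
        # Run-time control: per-token gate on how much memory to mix in.
        self.gate = nn.Linear(d_model, 1)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, _ = x.shape  # x: (batch, seq_len, d_model)
        # Causal band mask: each position attends only to the previous `window` tokens.
        idx = torch.arange(T, device=x.device)
        mask = (idx[None, :] > idx[:, None]) | (idx[:, None] - idx[None, :] >= self.window)
        local, _ = self.local_attn(x, x, x, attn_mask=mask)
        # Read from the shared persistent memory bank.
        mem = self.memory.unsqueeze(0).expand(B, -1, -1)
        mem_out, _ = self.mem_read(x, mem, mem)
        g = torch.sigmoid(self.gate(x))  # run-time gate in [0, 1]
        h = x + local + g * mem_out
        # Prediction error of a one-step-back predictor, fed back as a correction.
        prev = torch.cat([torch.zeros_like(h[:, :1]), h[:, :-1]], dim=1)
        h = h + (h - self.predictor(prev))
        return self.norm(h)


if __name__ == "__main__":
    block = LPCSMBlock(d_model=256, n_heads=4, mem_slots=32, window=128)
    y = block(torch.randn(2, 512, 256))
    print(y.shape)  # torch.Size([2, 512, 256])
```

The point of the sketch is the separation of roles: the windowed attention handles local interaction, the memory read plus gate handles long-range state at run time, and the predictor supplies a correction signal, which is the division of labor the abstract describes.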

Published 7 Apr 2026