LLM as Attention-Informed NTM and Topic Modeling as Long-Input Generation: Interpretability and Long-Context Capability

📰 ArXiv cs.AI

arXiv:2510.03174v2 Announce Type: replace-cross Abstract: Topic modeling aims to produce interpretable topic representations and topic–document correspondences from corpora, but classical neural topic models (NTMs) remain constrained by limited representation assumptions and semantic abstraction ability. We study LLM-based topic modeling from both white-box and black-box perspectives. For white-box LLMs, we propose an attention-informed framework that recovers interpretable structures analogous …
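The truncated abstract does not describe the paper's actual framework, but the "attention-informed" idea can be illustrated with a minimal sketch: read attention maps from a white-box model, score how much attention each token receives, and factor the resulting non-negative term-document matrix into topic loadings. Everything below is an assumption for illustration, not the authors' method: the model choice (`gpt2` via Hugging Face `transformers`), the layer/head/query pooling, and the NMF factorization step.

```python
# Hypothetical sketch only; the paper's framework is not specified in the
# abstract. Model choice, pooling, and the NMF step are all assumptions.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.decomposition import NMF

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)
model.eval()

docs = [
    "Neural topic models learn latent themes from text corpora.",
    "Attention weights in transformers highlight salient tokens.",
    "Topic models assign each document a mixture over topics.",
]

@torch.no_grad()
def attention_salience(text: str) -> dict:
    """Attention each token receives, averaged over layers, heads, queries."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    out = model(**enc)
    # out.attentions: one (batch, heads, seq, seq) tensor per layer
    attn = torch.stack(out.attentions).mean(dim=(0, 2))  # -> (batch, seq, seq)
    received = attn.mean(dim=1).squeeze(0)  # average over query positions
    salience: dict[str, float] = {}
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    for tok, w in zip(tokens, received.tolist()):
        salience[tok] = salience.get(tok, 0.0) + w
    return salience

# Salience-weighted term-document matrix (non-negative, so NMF applies).
saliences = [attention_salience(d) for d in docs]
vocab = sorted({t for s in saliences for t in s})
col = {t: i for i, t in enumerate(vocab)}
X = torch.zeros(len(docs), len(vocab))
for row, s in enumerate(saliences):
    for tok, w in s.items():
        X[row, col[tok]] = w

nmf = NMF(n_components=2, init="nndsvda", max_iter=500)
doc_topic = nmf.fit_transform(X.numpy())  # topic-document correspondences
topic_term = nmf.components_              # interpretable topic-term weights
for k, row in enumerate(topic_term):
    top = row.argsort()[::-1][:5]
    print(f"topic {k}:", [vocab[i] for i in top])
```

The NMF step here plays the role a classical NTM's decoder would: the non-negative factorization yields topic-term weights that can be read off directly, which is the sense in which attention-derived structure stays interpretable.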

Published 15 Apr 2026