Introspective Diffusion Language Models

📰 arXiv cs.AI

arXiv:2604.11035v1 Announce Type: new Abstract: Diffusion language models promise parallel generation, yet still lag behind autoregressive (AR) models in quality. We trace this gap to a failure of introspective consistency: AR models agree with their own generations, while DLMs often do not. We define the introspective acceptance rate, which measures whether a model accepts its previously generated tokens. This reveals why AR training has a structural advantage: causal masking and logit shifting …
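The abstract defines the introspective acceptance rate only informally, as whether a model accepts its previously generated tokens. Below is a minimal sketch of one plausible reading, assuming "accepts" means the token is the model's own argmax prediction at that position; the paper's actual acceptance criterion may differ (e.g., top-k membership or a probability threshold), and the function name is hypothetical.

```python
import numpy as np

def introspective_acceptance_rate(logits: np.ndarray, tokens: np.ndarray) -> float:
    """Fraction of previously generated tokens the model re-endorses.

    Assumption (not from the paper): a token counts as 'accepted' when it
    is the model's argmax prediction at its own position, scoring the
    model's earlier output with the rest of the sequence as context.

    logits: (seq_len, vocab_size) scores the model assigns per position
    tokens: (seq_len,) tokens the model itself generated earlier
    """
    predictions = logits.argmax(axis=-1)  # model's current top choice at each position
    accepted = predictions == tokens      # does it agree with its own earlier token?
    return float(accepted.mean())

# Toy example: vocab of 4, sequence of 3 generated tokens.
logits = np.array([[0.1, 2.0, 0.3, 0.0],   # argmax -> token 1
                   [1.5, 0.2, 0.1, 0.0],   # argmax -> token 0
                   [0.0, 0.1, 0.2, 3.0]])  # argmax -> token 3
tokens = np.array([1, 2, 3])               # positions 0 and 2 are accepted
print(introspective_acceptance_rate(logits, tokens))  # 0.666...
```

Under this reading, an AR model trained with teacher forcing scores highly almost by construction, which is consistent with the abstract's claim that causal masking gives AR training a structural advantage.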

Published 14 Apr 2026
Read full paper →