Rethinking On-Policy Distillation of Large Language Models: Phenomenology, Mechanism, and Recipe
ArXiv cs.AI
arXiv:2604.13016v1 Announce Type: cross Abstract: On-policy distillation (OPD) has become a core technique in the post-training of large language models, yet its training dynamics remain poorly understood. This paper provides a systematic investigation of OPD dynamics and mechanisms. We first identify that two conditions govern whether OPD succeeds or fails: (i) the student and teacher should share compatible thinking patterns; and (ii) even with consistent thinking patterns and higher scores, t
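As a point of reference for the technique the abstract studies, the core of on-policy distillation is that the student samples tokens from its own distribution and the teacher scores those same samples, with the loss typically being the reverse KL divergence KL(student || teacher) on the student-generated data. The sketch below illustrates this on a toy single-step vocabulary; the setup, names, and sampling scheme are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy sketch of the on-policy distillation (OPD) objective.
# The student samples tokens from its OWN distribution (on-policy);
# the teacher scores those same tokens; the Monte-Carlo loss is the
# reverse KL, KL(student || teacher), estimated on student samples.
# All names and the toy vocabulary here are illustrative assumptions.

rng = np.random.default_rng(0)
VOCAB = 8  # toy vocabulary size


def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def on_policy_kl(student_logits, teacher_logits, n_samples=100):
    """Monte-Carlo estimate of KL(student || teacher) from on-policy samples."""
    p_s = softmax(student_logits)
    p_t = softmax(teacher_logits)
    # Student generates the data it is trained on (the "on-policy" part).
    tokens = rng.choice(VOCAB, size=n_samples, p=p_s)
    # Average log-ratio on the student's own samples.
    return float(np.mean(np.log(p_s[tokens]) - np.log(p_t[tokens])))


student = rng.normal(size=VOCAB)
teacher = rng.normal(size=VOCAB)

print(on_policy_kl(student, teacher, n_samples=1000))  # positive in general
# When student and teacher coincide, every per-sample log-ratio is 0:
print(on_policy_kl(student, student, n_samples=10))  # → 0.0
```

In full-scale OPD this estimate is computed per token over sampled continuations and backpropagated through the student's logits only; the teacher is frozen.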