Tracing GRPO's Biased Objective Back to DeepSeek Math

Deep Learning with Yacine · Intermediate · 🛡️ AI Safety & Ethics · 2mo ago
Zichen Liu, author of Dr. GRPO, traces the length-normalization term in the standard GRPO formulation back to its origin: the DeepSeek Math paper's objective and the common implementation choice of averaging the loss over the token axis instead of summing it. This biased formulation propagated through follow-up papers and major open-source libraries such as TRL, OpenRLHF, and verl.
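The bias the video discusses can be illustrated with a toy aggregation sketch. The function names, shapes, and the use of the generation budget as the Dr. GRPO normalizer are assumptions for illustration, not the actual code from TRL, OpenRLHF, or verl:

```python
import numpy as np

def grpo_token_mean(per_token_loss, lengths):
    """GRPO-style aggregation: average over each sequence's own tokens,
    then average over sequences. Dividing by |o_i| means each token in a
    long response contributes less than a token in a short one."""
    return float(np.mean(
        [per_token_loss[i][:lengths[i]].mean() for i in range(len(lengths))]
    ))

def dr_grpo_token_sum(per_token_loss, lengths, max_len):
    """Dr. GRPO-style aggregation (sketch): sum tokens per sequence and
    divide by a length-independent constant (here the generation budget
    max_len), so every token carries the same weight."""
    return float(np.mean(
        [per_token_loss[i][:lengths[i]].sum() / max_len for i in range(len(lengths))]
    ))

# Two responses with identical per-token loss but different lengths.
losses = [np.full(4, 0.5), np.full(16, 0.5)]
lengths = [4, 16]

# Under token-mean aggregation, each token of the 4-token response gets
# weight 1/4 while each token of the 16-token response gets 1/16 — a 4x
# imbalance that systematically down-weights long responses.
```

Under the token-sum variant, per-token weights are identical across sequences, which is the core of the Dr. GRPO correction described in the video.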
