Not All Rollouts are Useful: Down-Sampling Rollouts in LLM Reinforcement Learning
arXiv cs.AI
arXiv:2504.13818v4 (announce type: replace-cross)

Abstract: Reinforcement learning with verifiable rewards (RLVR) has emerged as the leading approach for enhancing reasoning capabilities in large language models. However, it faces a fundamental compute and memory asymmetry: rollout generation is embarrassingly parallel and memory-light, whereas policy updates are communication-heavy and memory-intensive. To address this, we introduce PODS (Policy Optimization with Down-Sampling), which decouples r
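The abstract is cut off before describing the method in full, but the core idea it names is to generate many rollouts cheaply and then update the policy on only a subset of them. A minimal sketch of one plausible down-sampling rule, keeping the highest- and lowest-reward rollouts to preserve reward contrast, might look like the following; `downsample_rollouts` and the selection rule are illustrative assumptions, not the paper's exact algorithm:

```python
def downsample_rollouts(rollouts, rewards, m):
    """Keep m of n rollouts: the bottom m//2 and top m - m//2 by reward.

    This is an illustrative selection rule: retaining reward extremes keeps
    the strongest learning signal while shrinking the expensive update batch.
    """
    # Indices sorted by reward, ascending.
    order = sorted(range(len(rollouts)), key=lambda i: rewards[i])
    k_low = m // 2
    k_high = m - k_low
    keep = order[:k_low] + order[-k_high:]
    # Return the kept rollouts in their original order.
    return [rollouts[i] for i in sorted(keep)]


# Generate many rollouts cheaply (parallel, memory-light), then run the
# communication-heavy policy update on only m of them.
rollouts = [f"rollout_{i}" for i in range(8)]
rewards = [0.1, 0.9, 0.4, 0.0, 0.7, 0.2, 1.0, 0.5]
subset = downsample_rollouts(rollouts, rewards, m=4)
```

Here the update batch shrinks from 8 rollouts to 4 while retaining both the best- and worst-scoring trajectories.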