RL for Agents Workshop - Deep Dive on Training Agents with RL and Open Source
Reinforcement learning is becoming central to agentic systems, but moving from RL for LLMs to RL for agents introduces a new set of challenges: environments, rollouts, tool use, inference bottlenecks, reward design, and evaluating multi-step behavior in the real world.
In this live Hugging Face workshop, we bring together researchers and builders working on the frontier of RL for agents. The session will feature short talks followed by a discussion on what is working today, where open methods still fall short, and what comes next.
Speakers include:
- Lewis Tunstall, Hugging Face
- Will Brown, Prime Intellect
- Ofir Press, Princeton University
- Alex Zhang, MIT CSAIL
- Additional guests TBA
Topics include:
- training agents with open source tools
- scaling RL for language agents
- multi-step verification and reward design
- benchmarking agent capability beyond static tasks
- recursive reasoning and new agent architectures
Watch on YouTube