Everything I Learned Training Frontier Small Models — Maxime Labonne, Liquid AI

AI Engineer · Beginner · 🤖 AI Agents & Automation · 2w ago
A new class of small models is emerging that can reliably follow instructions and call tools while running on-device in under 1 GB of memory. In this talk, we'll break down how to post-train frontier small models using the LFM2.5 recipe: on-policy preference alignment, agentic reinforcement learning, and curriculum training with iterative model merging. We'll cover training challenges unique to the 1B scale, like doom loops and capability interference, and how to fix them. The goal is to give you a concrete playbook to fine-tune and deploy small models for your own use cases, from structured data extraction to multi-turn tool use.

Speaker info:
- https://x.com/maximelabonne
- https://www.linkedin.com/in/maxime-labonne/
- https://github.com/mlabonne

Timestamps:
0:00:00 - Start
0:00:14 - Introduction to frontier small models at Liquid AI
0:01:02 - Characteristics: memory-bound, task-specific, latency-sensitive
0:02:20 - Architecture: why large embedding layers are inefficient
0:04:01 - LFM2 architecture: using gated short convolutions for speed
0:06:09 - LFM2.5 recipe: 28T tokens and post-training stages
0:08:34 - Post-training: SFT, preference alignment, and RL best practices
0:10:43 - Identifying "doom loops" in reasoning models
0:11:34 - Solutions: mitigating loops via preference alignment and RL
0:15:29 - Future focus: using agentic tools to overcome memory limits
0:17:58 - Q&A: real-world applications for small vs. large models
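The "doom loops" mentioned above refer to a model getting stuck repeating the same output fragment indefinitely. As a rough illustration of what detecting one might look like, here is a minimal sketch using a generic repeated-n-gram heuristic on token IDs; the function name and thresholds are assumptions for this example, not the detection method used in the talk.

```python
# Illustrative sketch: flag a "doom loop" when the tail of a generation
# repeats the same n-gram several times in a row. This is a generic
# repetition heuristic, not Liquid AI's specific method.

def has_doom_loop(token_ids, ngram_size=4, min_repeats=3):
    """Return True if the last `ngram_size * min_repeats` tokens consist of
    the same `ngram_size`-token sequence repeated `min_repeats` times."""
    window = ngram_size * min_repeats
    if len(token_ids) < window:
        return False
    tail = token_ids[-window:]
    ngram = tail[:ngram_size]
    # Check that every consecutive ngram-sized chunk of the tail matches.
    return all(
        tail[i * ngram_size:(i + 1) * ngram_size] == ngram
        for i in range(min_repeats)
    )
```

In practice a check like this can gate preference-pair construction (looping completions become rejected samples) or trigger a negative reward during RL.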
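The recipe also mentions curriculum training with iterative model merging. The talk does not spell out the merge method, but the simplest form of model merging is linear interpolation of two checkpoints with identical architectures, which can be sketched as follows (function name and the use of plain dicts of weights are assumptions for illustration):

```python
# Illustrative sketch: linear weight merging of two checkpoints.
# Real merges operate on framework state dicts (e.g. PyTorch tensors);
# plain numbers are used here so the sketch is self-contained.

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Return alpha * sd_a + (1 - alpha) * sd_b, key by key.
    Both inputs must share the same keys (same architecture)."""
    if sd_a.keys() != sd_b.keys():
        raise ValueError("checkpoints have mismatched parameter names")
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}
```

Iterating this — merging stage checkpoints back together between curriculum stages — is one way to combine capabilities trained separately while limiting interference between them.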

