Why Mixture-of-Experts Took 30 Years to Take Off
Mixture-of-Experts (MoE) models weren’t invented yesterday — they were proposed in 1991 by Jacobs, Jordan, Nowlan, and Hinton.
So why did they sit on the sidelines for 30 years… and why are they suddenly powering today’s largest AI models?
In this conversation, Daria Soboleva, Head Research Scientist at Cerebras, walks through the history of MoEs.
You’ll learn:
Why early MoEs were theoretically brilliant but impossible to run
How hardware limitations (not ideas) stalled progress for decades
Why dense models have now hit a scaling wall
How MoEs introduce sparsity in the most compute-efficient way
Why GPU-era routing constraints created redundant experts
How load balancing trades expert specialization for hardware utilization
What next-generation MoEs look like when hardware stops fighting the model
Most modern MoEs are still shaped by old GPU constraints, forcing researchers to compromise on expert specialization. Daria explains why this leads to redundancy, inflated deployment costs, and the rise of expert merging and pruning techniques — and how removing those constraints unlocks the MoEs researchers actually wanted to build.
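To make the routing and load-balancing trade-off concrete, here is a minimal PyTorch sketch of a top-k MoE feed-forward layer with a Switch-Transformer-style load-balancing auxiliary loss. It is an illustrative toy, not the implementation discussed in the talk or used at Cerebras: the class name, hyperparameters, and the per-expert Python loop are assumptions made for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy MoE feed-forward layer: each token is routed to its top-k experts."""
    def __init__(self, d_model, d_ff, num_experts=8, top_k=2):
        super().__init__()
        self.num_experts = num_experts
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)   # learned gating network
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):
        # x: (num_tokens, d_model) -- batch and sequence dims flattened together
        probs = F.softmax(self.router(x), dim=-1)                    # (tokens, experts)
        topk_probs, topk_idx = probs.topk(self.top_k, dim=-1)
        topk_probs = topk_probs / topk_probs.sum(-1, keepdim=True)   # renormalize gates

        out = torch.zeros_like(x)
        for e in range(self.num_experts):
            # find which tokens (and which of their top-k slots) picked expert e
            token_idx, slot_idx = (topk_idx == e).nonzero(as_tuple=True)
            if token_idx.numel() == 0:
                continue
            gate = topk_probs[token_idx, slot_idx].unsqueeze(-1)
            out[token_idx] += gate * self.experts[e](x[token_idx])   # sparse: only k experts run per token

        # Load-balancing auxiliary loss (Switch-Transformer style): pushes the
        # router toward uniform expert usage. This keeps hardware utilization high,
        # but it also penalizes the "unbalanced" routing that strong specialization produces.
        importance = probs.mean(dim=0)                               # mean gate prob per expert
        load = F.one_hot(topk_idx, self.num_experts).float().sum(1).mean(0)  # fraction of tokens per expert
        aux_loss = self.num_experts * torch.sum(importance * load)
        return out, aux_loss
```

The per-expert loop keeps all experts on one device, which is only practical here because the example is tiny; production GPU systems instead shard experts across devices (expert parallelism) with fixed per-expert capacities, which is exactly the hardware-driven constraint the conversation argues shaped modern MoE design.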
At Cerebras, MoE models are trained without expert parallelism, allowing experts to specialize naturally and efficiently on a single wafer-scale device.
If you want to see these ideas in practice, check out MoE 101 by Cerebras — where the team publishes real training configurations and production-level results.
Watch on YouTube ↗