Scaling Test Time Compute to Multi-Agent Civilizations — Noam Brown, OpenAI

Latent Space · Advanced · 🤖 AI Agents & Automation · 10mo ago
Solving Poker and Diplomacy, Debating RL+Reasoning with Ilya, what's *wrong* with the System 1/2 analogy, and where Test-Time Compute hits a wall

Timestamps

00:00 Intro – Diplomacy, Cicero & World Championship
02:00 Reverse Centaur: How AI Improved Noam’s Human Play
05:00 Turing Test Failures in Chat: Hallucinations & Steerability
07:30 Reasoning Models & Fast vs. Slow Thinking Paradigm
11:00 System 1 vs. System 2 in Visual Tasks (GeoGuessr, Tic-Tac-Toe)
14:00 The Deep Research Existence Proof for Unverifiable Domains
17:30 Harnesses, Tool Use, and Fragility in AI Agents
21:00 The Case Against Over-Reliance on Scaffolds and Routers
24:00 Reinforcement Fine-Tuning and Long-Term Model Adaptability
28:00 Ilya’s Bet on Reasoning and the O-Series Breakthrough
34:00 Noam’s Dev Stack: Codex, Windsurf & AGI Moments
38:00 Building Better AI Developers: Memory, Reuse, and PR Reviews
41:00 Multi-Agent Intelligence and the “AI Civilization” Hypothesis
44:30 Implicit World Models and Theory of Mind Through Scaling
48:00 Why Self-Play Breaks Down Beyond Go and Chess
54:00 Designing Better Benchmarks for Fuzzy Tasks
57:30 The Real Limits of Test-Time Compute: Cost vs. Time
1:00:30 Data Efficiency Gaps Between Humans and LLMs
1:03:00 Training Pipeline: Pretraining, Midtraining, Posttraining
1:05:00 Games as Research Proving Grounds: Poker, MTG, Stratego
1:10:00 Closing Thoughts – Five-Year View and Open Research Directions

Chapters

00:00:00 Intro & Guest Welcome
00:00:33 Diplomacy AI & Cicero Insights
00:03:49 AI Safety, Language Models, and Steerability
00:05:23 O Series Models: Progress and Benchmarks
00:08:53 Reasoning Paradigm: Thinking Fast and Slow in AI
00:14:02 Design Questions: Harnesses, Tools, and Test Time Compute
00:20:32 Reinforcement Fine-tuning & Model Specialization
00:21:52 The Rise of Reasoning Models at OpenAI
00:29:33 Data Efficiency in Machine Learning
00:33:21 Coding & AI: Codex, Workflows, and Developer Experience
00:41:38 Multi-Age
