Everything Gets Rebuilt: The New AI Agent Stack | Harrison Chase, LangChain

The MAD Podcast with Matt Turck · Beginner · 🤖 AI Agents & Automation · 1 month ago
Harrison Chase, co-founder and CEO of LangChain, joins the MAD Podcast to explain why everything in AI is getting rebuilt. As agents evolve from simple prompt-based systems into software that can plan, use tools, write code, manage files, and remember things over time, the real frontier is shifting from the model itself to the stack around the model. In this conversation, we go deep on harnesses, subagents, filesystems, sandboxes, observability, memory, and the new infrastructure required to make AI agents actually work in the real world.

Harrison Chase
LinkedIn - https://www.linkedin.com/in/harrison-chase-961287118
X/Twitter - https://x.com/hwchase17

LangChain
Website - https://www.langchain.com
X/Twitter - https://x.com/LangChain

Matt Turck (Managing Director, FirstMark)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

FirstMark
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

Listen on:
Spotify - https://open.spotify.com/show/7yLATDSaFvgJG80ACcRJtq
Apple - https://podcasts.apple.com/us/podcast/the-mad-podcast-with-matt-turck/id1686238724


Chapters (22)

0:00 Intro - meet Harrison Chase
1:32 What changed in agents over the last year
3:57 Why coding agents are ahead
6:26 Do models commoditize the framework layer?
8:27 Harnesses, in plain English
10:11 Why system prompts matter so much
13:11 The upside — and downside — of subagents
15:31 Why a useful agent needs a filesystem
18:13 The core primitives of modern agents
19:12 Skills: the new primitive
20:19 What context compaction actually means
23:02 How memory works in agents
25:16 One mega-agent or many specialized agents?
27:46 Has MCP won?
29:38 Why agents need sandboxes
32:35 How sandboxes help with security
33:32 How Harrison Chase started LangChain
37:24 LangChain vs LangGraph vs Deep Agents
40:17 Why observability matters more for agents
41:48 Evals, no-code, and continuous improvement
44:41 What LangChain is building next
45:29 Wh