How Clay manages 300M agent runs a month with LangSmith
Clay's Head of AI, Jeff Barg, sat down with LangChain Co-Founder & CEO Harrison Chase to discuss how Clay uses LangSmith as mission-critical infrastructure for observability, evals, and the agent development lifecycle.
Watch the full video to learn:
• What putting agents in production really looks like as you scale up to hundreds of thousands or millions of runs.
• How to think about agent quality at scale, and why Clay focuses on quality, throughput, and cost.
• How LangSmith helped Clay go from no visibility on inference spend to 99.5% cost reconciliation across providers.
• What's next for agents, and advice for teams scaling from zero to one.
0:00 How Clay thinks about AI: find, close, and grow
1:09 From chat completions wrapper to Claygent
2:02 The three agent categories powering Clay today
2:34 Running 300 million agent runs a month
3:20 How agent complexity changed Clay's dev process
4:06 How Clay measures quality: evals, deterministic checks, and LLM-as-a-judge
4:52 Staying model-agnostic with a metaprompter tool
6:01 How LangSmith fits into the agent development workflow
7:09 Why you can't catch everything before production
8:00 Tracing from day zero: the iteration process
8:35 Why Clay chose LangSmith over building in-house
9:27 Connecting a custom agent harness to LangSmith
9:44 The LangSmith features that matter most at scale
10:44 Who at Clay uses LangSmith (and how support uses it too)
11:12 Quantifying LangSmith's impact: cost reconciliation at 99.5%
12:18 How agents in production are changing — and what LangSmith needs next
13:15 Subagents, traces, and the future of self-healing workflows
15:06 Advice for teams scaling agents from zero to one
15:29 Agent memory: what's worked, what hasn't, and what's next
17:02 Closing thoughts
Extra resources:
- Learn about LangSmith: https://www.langchain.com/langsmith-platform
- Customer stories: https://www.langchain.com/customers
- Subscribe for more: https://www.youtube.com/@LangChain