AI Agents in the Enterprise - Trust, Governance & MCP Security | Dev in the Details
Why Enterprise AI Agents Fail in Production: Trust, Risk, Observability + MCP
Enterprises are moving beyond “LLMs for search” and into agentic workflows—LLMs stitched to tools that can take action. But once you move past the demo, reality hits: quality, security, governance, observability, and remediation become the real blockers.
In this episode of Dev in the Details, Dev Rishi sits down with Travis Addair (Predibase co-founder & former Uber Deep Learning Infra lead) to unpack what’s actually stopping Global 2000 companies from scaling AI agents—and what the market needs to deliver next.
Watch on YouTube ↗
Chapters (11)
- From LLMs → agents → “AI employees” (2:05)
- What changed in enterprise adoption (100+ customer convos) (5:10)
- Why agents break after the demo (the 80/20 trap) (8:20)
- Trust problems: quality, safety, control, “blast radius” (12:10)
- The enterprise playbook: observability → policy → validation → remediation (16:05)
- Why governance committees slow adoption (and why they have to) (19:40)
- MCP explained (22:25)
- MCP security risks: tokens, scopes, over-permissioning (26:10)
- Remote MCP + OAuth + registries: what will become standard (30:05)
- Agent identity: delegated identity + just-in-time access (33:40)
- The future of “trusted AI”: a Workday for agents (36:15)
- What enterprise leaders should do next