Stop AI Agents From SQL Injecting Your Database

MLOps.community · Intermediate · 🔐 Cybersecurity · 6d ago
Skills: Defensive AI
Averi Kitsch, Staff Software Engineer at Google and tech lead for MCP Toolbox for Databases (13,500+ GitHub stars, 100+ contributors, 40+ data sources), breaks down what her team has learned from over 20 million MCP tool calls per month against Google Cloud databases, and why most agent setups are one prompt away from leaking your customers' data.

This is a deeply practical talk on AI agent security. Averi explains Simon Willison's "lethal trifecta" (private data + untrusted content + the ability to communicate back to the user), shows a real confused deputy attack against a ticketing-system agent, and then walks through the 4-step evolution every database tool should go through to reach a zero-trust posture where the agent never sees credentials, never writes raw SQL, and never touches PII.

Topics covered:

- Three patterns of database tools observed at 20M+ requests/month: control-plane (admin) tools, natural language to SQL, and structured SQL tools
- Build-time vs runtime tools: why developer-assistance MCP servers dominate today, but runtime production tools have 10x the volume coming
- The lethal trifecta in plain English, and why "your data is only as secure as your agent"
- A real-world confused deputy attack: how a ticket comment can exfiltrate every employee's salary
- Application-controlled vs model-controlled architecture for agent data access
- The three identities every agent system needs to separate: user, application workload identity, and agent
- Agent parameters (untrusted prompt-derived inputs) vs application parameters (factual constraints)
- The 4-step evolution of a secure database tool: from fully model-controlled, to configurable sources, to custom semantic tools, to bound and authenticated parameters
- Why prepared statements with strict typing kill SQL injection at the tool layer
- How to attach OpenID tokens to tool calls so the agent never sees user identity
- Q&A: parameterized secure views as a path to letting agents answer the ha…
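The "prepared statements with strict typing" point can be sketched in a few lines. This is a minimal illustration using Python's stdlib `sqlite3`, not the talk's actual tooling; the table and function names are hypothetical:

```python
import sqlite3

def lookup_ticket(conn: sqlite3.Connection, ticket_id: int) -> list:
    # Strict typing: reject anything that is not already an int, so a
    # string like "1; DROP TABLE tickets" never reaches the driver.
    if not isinstance(ticket_id, int) or isinstance(ticket_id, bool):
        raise TypeError("ticket_id must be an int")
    # Parameterized statement: the value is bound by the driver,
    # never spliced into the SQL text, so it cannot change the query shape.
    return conn.execute(
        "SELECT id, title FROM tickets WHERE id = ?", (ticket_id,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO tickets VALUES (1, 'Printer on fire')")

print(lookup_ticket(conn, 1))  # [(1, 'Printer on fire')]
try:
    lookup_ticket(conn, "1 OR 1=1")  # injection attempt fails the type check
except TypeError as exc:
    print("rejected:", exc)
```

The same two-layer defense (type whitelist, then driver-side binding) applies with any DB-API driver; the agent can only ever influence the bound value, not the SQL.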
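The split between agent parameters (untrusted, prompt-derived) and application parameters (bound by the workload from the authenticated session) can also be sketched. Everything here, including the `my_open_tickets` tool and its wiring, is a hypothetical illustration under that assumption, not the MCP Toolbox API:

```python
import sqlite3

def my_open_tickets(conn: sqlite3.Connection, *, status: str, user_id: str) -> list:
    """Semantic tool: the agent supplies only `status`; `user_id` is an
    application parameter the workload binds from the session, so the
    model can never ask for another user's rows."""
    allowed = {"open", "closed"}
    if status not in allowed:  # whitelist the agent-controlled input
        raise ValueError(f"status must be one of {allowed}")
    return conn.execute(
        "SELECT id, title FROM tickets WHERE status = ? AND owner = ?",
        (status, user_id),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER, title TEXT, status TEXT, owner TEXT)")
conn.executemany("INSERT INTO tickets VALUES (?, ?, ?, ?)", [
    (1, "VPN broken", "open", "alice"),
    (2, "Salary export", "open", "mallory"),
])

session_user = "alice"           # derived from the verified OpenID token, never from the model
agent_args = {"status": "open"}  # the only thing the model controls
print(my_open_tickets(conn, **agent_args, user_id=session_user))
# Alice sees only her own ticket, whatever the prompt says.
```

The design choice is that identity is a fact the application asserts, not an argument the agent passes, which is what defeats the confused deputy pattern described above.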
Watch on YouTube ↗
