OpenAI Board Member Zico Kolter on the Real Risks of Frontier AI

The MAD Podcast with Matt Turck · Beginner · 🛡️ AI Safety & Ethics · 6h ago
What actually happens before a frontier AI model gets released, and who decides whether it is safe enough? In this episode of The MAD Podcast, Matt Turck sits down with Zico Kolter (OpenAI board member, Head of the Machine Learning Department at Carnegie Mellon, and co-founder of Gray Swan) for a deep conversation on the real risks of frontier AI. They discuss how OpenAI's safety oversight works before major model releases, why more powerful models do not automatically become safer, how jailbreaks and prompt injection expose real weaknesses in AI systems, why AI agents dramatically expand the attack surface, and where frontier AI is headed next. A clear, practical discussion of OpenAI, AI safety, AI security, AI agents, frontier models, red teaming, reinforcement learning, and the future of AI governance.

Zico Kolter
Website - https://zicokolter.com
LinkedIn - https://www.linkedin.com/in/zico-kolter-560382a4
X/Twitter - https://x.com/zicokolter

The Machine Learning Department at Carnegie Mellon University
Website - https://www.ml.cmu.edu/
X/Twitter - https://x.com/mldcmu

Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://x.com/mattturck

FirstMark
Website - https://firstmark.com
X/Twitter - https://x.com/FirstMarkCap

Listen on:
Spotify - https://open.spotify.com/show/7yLATDSaFvgJG80ACcRJtq
Apple - https://podcasts.apple.com/us/podcast/the-mad-podcast-with-matt-turck/id1686238724

Related AI Lessons

Behind the Scenes Hardening Firefox with Claude Mythos Preview
Learn how Mozilla used Claude Mythos to identify and fix hundreds of vulnerabilities in Firefox, improving browser security
Simon Willison's Blog
AI Alignment Might Be Optimizing the Wrong Objective
AI alignment might be optimizing the wrong objective, highlighting the need to redefine what alignment means and how it's achieved
Medium · AI
Cognitive Surrender: how much thinking should leaders outsource to AI?
Learn how leaders can effectively balance AI-driven insights with human judgment to avoid cognitive surrender
Medium · Data Science

Chapters (13)

00:00 Intro
1:32 OpenAI board role and Safety & Security Committee
3:53 How OpenAI reviews major model releases
5:33 OpenAI’s preparedness framework explained
9:46 Are frontier AI models getting safer?
12:33 Why AI safety does not come from scale
15:23 The four categories of AI risk
19:38 Doomerism vs accelerationism in AI
24:11 The six-month AI pause debate
26:20 AI safety as a global effort
28:04 How Zico Kolter got into machine learning
31:05 OpenAI in the early days
34:14 Why Carnegie Mellon became an