OpenAI Board Member Zico Kolter on the Real Risks of Frontier AI
What actually happens before a frontier AI model gets released — and who decides whether it is safe enough? In this episode of The MAD Podcast, Matt Turck sits down with Zico Kolter — OpenAI board member, Head of the Machine Learning Department at Carnegie Mellon, and co-founder of Gray Swan — for a deep conversation on the real risks of frontier AI. They discuss how OpenAI’s safety oversight works before major model releases, why more powerful models do not automatically become safer, how jailbreaks and prompt injection expose real weaknesses in AI systems, why AI agents dramatically expand the attack surface, and where frontier AI is headed next. A clear, practical discussion on OpenAI, AI safety, AI security, AI agents, frontier models, red teaming, reinforcement learning, and the future of AI governance.
Zico Kolter
Website - https://zicokolter.com
LinkedIn - https://www.linkedin.com/in/zico-kolter-560382a4
X/Twitter - https://x.com/zicokolter
The Machine Learning Department at Carnegie Mellon University
Website - https://www.ml.cmu.edu/
X/Twitter - https://x.com/mldcmu
Matt Turck (Managing Director, FirstMark)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://x.com/mattturck
FirstMark
Website - https://firstmark.com
X/Twitter - https://x.com/FirstMarkCap
Listen on:
Spotify - https://open.spotify.com/show/7yLATDSaFvgJG80ACcRJtq
Apple - https://podcasts.apple.com/us/podcast/the-mad-podcast-with-matt-turck/id1686238724
00:00 Intro
01:32 OpenAI board role and Safety & Security Committee
03:53 How OpenAI reviews major model releases
05:33 OpenAI’s preparedness framework explained
09:46 Are frontier AI models getting safer?
12:33 Why AI safety does not come from scale
15:23 The four categories of AI risk
19:38 Doomerism vs accelerationism in AI
24:11 The six-month AI pause debate
26:20 AI safety as a global effort
28:04 How Zico Kolter got into machine learning
31:05 OpenAI in the early days
34:14 Why Carnegie Mellon became an