Are We Building Superintelligence Backwards?
Sara Saab, VP of Product at Prolific, challenges our assumptions about AI alignment by comparing it to human moral development. Just as we don't expect humans to be born with perfect predetermined morality, why should we expect it from AI? She explores building backwards from AGI and the emerging ecosystem of human-machine oversight.
Watch on YouTube ↗