AI Safety Gridworlds
Got an AI safety idea? Now you can test it out! A recent paper from DeepMind sets out some environments for evaluating the safety of AI systems, and the code is on GitHub.
The Computerphile video: https://www.youtube.com/watch?v=eElfR_BnL5k
The EXTRA BITS video, with more detail: https://www.youtube.com/watch?v=py5VRagG6t8
The paper: https://arxiv.org/pdf/1711.09883.pdf
The GitHub repo: https://github.com/deepmind/ai-safety-gridworlds
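The paper's central idea is that each gridworld exposes an ordinary reward signal to the agent while the environment separately tracks a hidden "safety performance" score that the agent never observes. Below is a minimal, self-contained sketch of that setup — a toy corridor with a breakable vase, loosely in the spirit of the side-effects environments. All class and variable names here are illustrative; this is not the actual ai-safety-gridworlds API (which builds on pycolab), just a demonstration of the reward-vs-hidden-performance split.

```python
# Sketch of the gridworlds evaluation idea: visible reward vs. a hidden
# safety performance function. Names are illustrative, not from the
# deepmind/ai-safety-gridworlds codebase.

class TinyGridworld:
    """A 4-cell corridor: agent starts at 0, goal at 3, a vase at cell 2."""

    def __init__(self):
        self.pos = 0
        self.vase_intact = True
        self.hidden_performance = 0  # safety score the agent never sees

    def step(self, action):
        """action: -1 (left) or +1 (right). Returns (reward, done)."""
        self.pos = max(0, min(3, self.pos + action))
        reward = -1  # per-step penalty, encourages short paths
        if self.pos == 2 and self.vase_intact:
            self.vase_intact = False
            self.hidden_performance -= 10  # side effect: broke the vase
        done = self.pos == 3
        if done:
            reward += 50  # goal bonus
        # Performance = same reward, plus hidden safety terms.
        self.hidden_performance += reward
        return reward, done


env = TinyGridworld()
total_reward, done = 0, False
while not done:
    # A greedy agent always moves right, straight through the vase.
    r, done = env.step(+1)
    total_reward += r

print(total_reward)            # 47 — looks fine by reward alone
print(env.hidden_performance)  # 37 — the side-effect penalty shows up here
```

The point of the split is exactly what the paper evaluates: an agent can score well on the reward it optimizes while scoring poorly on the safety measure the designers actually care about.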
https://www.patreon.com/robertskmiles
With thanks to my wonderful Patreon supporters:
- Jason Hise
- Steef
- Cooper Lawton
- Jason Strack
- Chad Jones
- Stefan Skiles
- Jordan Medina
- Manuel Weichselbaum
- Scott Worley
- JJ Hepboin
- Alex Flint
- Justin Courtright
- James McCuen
- Richárd Nagyfi
- Ville Ahlgren
- Alec Johnson
- Simon Strandgaard
- Joshua Richardson
- Jonatan R
- Michael Greve
- The Guru Of Vision
- Fabrizio Pisani
- Alexander Hartvig Nielsen
- Volodymyr
- David Tjäder
- Paul Mason
- Ben Scanlon
- Julius Brash
- Mike Bird
- Tom O'Connor
- Gunnar Guðvarðarson
- Shevis Johnson
- Erik de Bruijn
- Robin Green
- Alexei Vasilkov
- Maksym Taran
- Laura Olds
- Jon Halliday
- Robert Werner
- Paul Hobbs
- Jeroen De Dauw
- Enrico Ros
- Tim Neilson
- Eric Scammell
- christopher dasenbrock
- Igor Keller
- William Hendley
- DGJono
- robertvanduursen
- Scott Stevens
- Michael Ore
- Dmitri Afanasjev
- Brian Sandberg
- Einar Ueland
- Marcel Ward
- Andrew Weir
- Taylor Smith
- Ben Archer
- Scott McCarthy
- Kabs Kabs
- Phil
- Tendayi Mawushe
- Gabriel Behm
- Anne Kohlbrenner
- Jake Fish
- Bjorn Nyblad
- Jussi Männistö
- Mr Fantastic
- Matanya Loewenthal
- Wr4thon
- Dave Tapley
- Archy de Berker
- Kevin
- Marc Pauly
- Joshua Pratt
- Andy Kobre
- Brian Gillespie
- Martin Wind
- Peggy Youell
- Poker Chen
- pmilian
- Kees
- Darko Sperac
- Paul Moffat
- Jelle Langen
- Lars Scholz
- Anders Öhrt
- Lupuleasa Ionuț
- Marco Tiraboschi
- Peter Kjeld Andersen
- Michael Kuhinica
- Fraser Cain
- Robin Scharf
- Oren Milman