Safe Exploration: Concrete Problems in AI Safety Part 6
To learn, you need to try new things, but that can be risky. How do we make AI systems that can explore safely?
Playlist of the series so far: https://www.youtube.com/playlist?list=PLqL14ZxTTA4fEp5ltiNinNHdkPuLK4778
The paper, 'Concrete Problems in AI Safety': https://arxiv.org/pdf/1606.06565.pdf
AI Safety Gridworlds: https://youtu.be/CGTkoUidQ8I
Why Would AI Want to do Bad Things? Instrumental Convergence: https://youtu.be/ZeecOKBus3Q
Scalable Supervision: Concrete Problems in AI Safety Part 5: https://youtu.be/nr1lHuFeq5w
The Evolved Radio and its Implications for Modelling the Evolution of Novel Sensors: https://people.duke.edu/~ng46/topics/evolved-radio.pdf
With thanks to my excellent Patreon supporters:
https://www.patreon.com/robertskmiles
Jason Hise
Steef
Jason Strack
Stefan Skiles
Jordan Medina
Scott Worley
JJ Hepboin
Alex Flint
Pedro A Ortega
James McCuen
Richárd Nagyfi
Alec Johnson
Clemens Arbesser
Simon Strandgaard
Jonatan R
Michael Greve
The Guru Of Vision
Alexander Hartvig Nielsen
Volodymyr
David Tjäder
Julius Brash
Tom O'Connor
Ville Ahlgren
Erik de Bruijn
Robin Green
Maksym Taran
Laura Olds
Jon Halliday
Bobby Cold
Paul Hobbs
Jeroen De Dauw
Tim Neilson
Eric Scammell
christopher dasenbrock
Igor Keller
Ben Glanton
Robert Sokolowski
Vlad D
Jérôme Frossard
Lupuleasa Ionuț
Sylvain Chevalier
DGJono
robertvanduursen
Scott Stevens
Dmitri Afanasjev
Brian Sandberg
Einar Ueland
Marcel Ward
Andrew Weir
Taylor Smith
Ben Archer
Scott McCarthy
Kabs
Phil Moyer
Tendayi Mawushe
Anne Kohlbrenner
Bjorn Nyblad
Jussi Männistö
Mr Fantastic
Matanya Loewenthal
Wr4thon
Dave Tapley
Archy de Berker
Pablo Eder
Kevin
Marc Pauly
Joshua Pratt
Gunnar Guðvarðarson
Shevis Johnson
Andy Kobre
Manuel Weichselbaum
Brian Gillespie
Martin Wind
Peggy Youell
Poker Chen
Kees
Darko Sperac
Paul Moffat
Jelle Langen
Lars Scholz
Anders Öhrt
Marco Tiraboschi
Michael Kuhinica
Fraser Cain
Robin Scharf
Oren Milman
John Rees
Gladamas
Shawn Hartsock
Seth Brothwell
Brian Goodrich
Michael S McReynolds
h