Tech is Good, AI Will Be Different
Why is technology good? Are there any exceptions to the rule?
Check out AISafety.info!
http://aisafety.info
Sources:
Large Language Models can Strategically Deceive their Users when Put Under Pressure: https://arxiv.org/abs/2311.07590
https://www.apolloresearch.ai/research/our-research-on-strategic-deception-presented-at-the-uks-ai-safety-summit
Large Language Models Often Know When They Are Being Evaluated: https://arxiv.org/abs/2505.23836
Frontier Models are Capable of In-context Scheming: https://arxiv.org/abs/2412.04984
Claude 4 System Card: https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf
Links:
Why misaligned AI is different from other technologies: https://aisafety.info/questions/MNAK
Is the AI safety movement about stopping all technology? (No.): https://aisafety.info/questions/AHS5

Thanks to my wonderful patrons:
https://www.patreon.com/robertskmiles
Steef
John Brewer
Timothy Lillicrap
Juan Benet
Kieryn
Maxim
AlliedToasters
Scott Worley
Manuel Weichselbaum
Clemens Arbesser
Tor Barstad
Francisco Tolmasky
David Reid
Cam MacFarlane
Olivier Coutu
CaptObvious
Andy Southgate
Raf Jakubanis
Isaac
Elriel
Nathan Rogowski
Jamie Kawabata
Matt Fallshaw
Boris Mezhibovskiy
Steven Lee
armedtoe
Nicolas Pouillard
Erik de Bruijn
Jeroen De Dauw
Ludwig Schubert
Eric James
Owen Campbell-Moore
Studio Esagames
Nathan Metzger
Kan Kireon
Leo Cymbalista
Mark Jocas
Laura Olds
Paul Hobbs
Bastiaan Cnossen
Eric Scammell
Alexare
Will Glynn
Reslav Hollós
Jérôme Beaulieu
Nathan Fish
Taras Bobrovytsky
Jeremy
Vaskó Richárd
Andrew Harcourt
Tegaki
Andrew Blackledge
Forodriac Origamius
Chris Beacham
Zachary Gidwitz
Art Code Outdoors
Abigail Novick
Edmund Fokschaner
DragonSheep
Richard Newcombe
Mutual Information
Joshua Michel
Richard
Scott Fenton
Sophia Michelle Andren
Alan J. Etchings
James Vera
Stumbleboots
Peter Lillian
Grimrukh
noggieB
DN
Dr Cats
Robert Paul Schwin
Roland G. McIntosh
Sarah Howell
ikke89
Joanny Raby
Tom Miller
Eran Glicksman
Stanley Sisson
CheeseB