Translating Claude’s thoughts into language
AI models like Claude talk in words but think in numbers. These numbers, called activations, encode Claude’s thoughts, but not in a language we can read.
We are introducing Natural Language Autoencoders, or NLAs, which translate AI models’ activations into readable text. NLAs have already helped us improve how we test our models for safety and better understand why they do what they do.
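To make the idea concrete: an autoencoder compresses an input through a bottleneck and is trained to reconstruct the input from that bottleneck. In an NLA the bottleneck is readable text; the sketch below is only a toy numeric stand-in (a linear autoencoder over synthetic "activation" vectors, with made-up dimensions), not Anthropic's actual method.

```python
import numpy as np

# Toy linear autoencoder over synthetic "activation" vectors.
# Encode: activations -> low-dimensional bottleneck.
# Decode: bottleneck -> reconstructed activations.
# In an NLA, the bottleneck would be readable text instead of a vector.

rng = np.random.default_rng(0)
d_act, d_bottleneck, n = 16, 4, 256  # hypothetical sizes

# Synthetic activations that lie in a low-dimensional subspace,
# so a narrow bottleneck can in principle reconstruct them.
basis = rng.normal(size=(d_bottleneck, d_act))
acts = rng.normal(size=(n, d_bottleneck)) @ basis

W_enc = rng.normal(scale=0.1, size=(d_act, d_bottleneck))
W_dec = rng.normal(scale=0.1, size=(d_bottleneck, d_act))

def recon_loss():
    recon = (acts @ W_enc) @ W_dec
    return float(np.mean((recon - acts) ** 2))

lr = 0.01
initial = recon_loss()
for _ in range(2000):
    z = acts @ W_enc              # encode
    err = z @ W_dec - acts        # reconstruction error
    # Gradients of the mean squared reconstruction error.
    g_dec = z.T @ err * (2 / (n * d_act))
    g_enc = acts.T @ (err @ W_dec.T) * (2 / (n * d_act))
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
final = recon_loss()

print(f"reconstruction error: {initial:.3f} -> {final:.3f}")
```

Training drives the reconstruction error down, which forces the bottleneck to carry the information in the activations; swapping the numeric bottleneck for generated text is what would make that information human-readable.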
Read more about this research on our blog: https://www.anthropic.com/research/natural-language-autoencoders