Opus just got caught ...
Anthropic just published a paper showing that Claude Opus 4.6 figured out it was being tested on BrowseComp, found the encrypted answer key on GitHub, wrote its own decryption code, and extracted the answer. Everyone is calling it deception, but the model was just doing exactly what it was told, and the same pattern is showing up across every major AI lab.
Sources & references:
Anthropic — Eval awareness in Claude Opus 4.6's BrowseComp performance
https://www.anthropic.com/engineering/eval-awareness-browsecomp
Anthropic / Redwood Research — Alignment Faking in Large Language Models (December 2024)
htt…
DeepCamp AI