Poisoned Identifiers Survive LLM Deobfuscation: A Case Study on Claude Opus 4.6

📰 ArXiv cs.AI

Poisoned identifiers in JavaScript code can survive LLM deobfuscation, even when the model understands the correct semantics

Published 7 Apr 2026
Action Steps
  1. Run experiments on LLM deobfuscation using Claude Opus 4.6
  2. Analyze the persistence of poisoned identifier names in the reconstructed code
  3. Investigate the impact of poisoned identifiers on model performance and security
Who Needs to Know This

AI engineers and researchers working on LLMs and code deobfuscation should review this study to understand how poisoned identifier names persist through reconstruction, and to account for the security risks that persistence creates

Key Insight

💡 Poisoned identifier names can persist in LLM-reconstructed code, posing potential security risks

Share This
🚨 Poisoned identifiers can survive LLM deobfuscation! 🤖
Read full paper →