Poisoned Identifiers Survive LLM Deobfuscation: A Case Study on Claude Opus 4.6
📰 ArXiv cs.AI
Poisoned identifiers in JavaScript code can survive LLM deobfuscation, even when the model understands the correct semantics
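The phenomenon can be sketched with a hypothetical example (not taken from the paper; all names here are invented for illustration): an obfuscated function whose minified name hides that its behavior contradicts the semantics a "helpful" name would suggest. If a deobfuscating model adopts a poisoned, misleading name for it, the deception survives the rewrite even though the model reconstructed the logic correctly.

```javascript
// Obfuscated original: despite what any later name may claim, this function
// ACCEPTS a hard-coded attacker domain. (Hypothetical sample.)
function _0x1a2b(u) {
  return u.indexOf('evil.example') !== -1 || u.indexOf('https://') === 0;
}

// A plausible LLM "deobfuscation": the logic is faithfully reconstructed,
// but the poisoned identifier name implies the opposite of the behavior.
function isBlockedUrl(u) { // poisoned name: nothing is blocked here
  return u.indexOf('evil.example') !== -1 || u.indexOf('https://') === 0;
}

console.log(isBlockedUrl('https://evil.example/payload')); // true, yet the name reads as "blocked"
```

A reviewer skimming the deobfuscated output may trust `isBlockedUrl` by its name alone, which is exactly the risk the study highlights.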
Action Steps
- Replicate the deobfuscation experiments with Claude Opus 4.6 on obfuscated JavaScript samples
- Check whether poisoned identifier names persist in the model's reconstructed code
- Assess how surviving poisoned identifiers affect code review, model performance, and downstream security
Who Needs to Know This
AI engineers and security researchers who rely on LLMs for code deobfuscation: the study suggests that reconstructed code can carry over misleading identifier names, so outputs need review beyond semantic correctness
Key Insight
💡 Poisoned identifier names can persist in LLM-reconstructed code, posing potential security risks
Share This
🚨 Poisoned identifiers can survive LLM deobfuscation! 🤖
DeepCamp AI