Quality and Safety for LLM Applications
Addressing and monitoring safety and quality concerns is crucial for any application, and building LLM applications poses special challenges of its own.
In this course, you’ll explore new metrics and best practices to monitor your LLM systems and ensure safety and quality. You’ll learn how to:
1. Identify hallucinations with methods like SelfCheckGPT.
2. Detect jailbreaks (prompts that attempt to manipulate LLM responses) using sentiment analysis and implicit toxicity detection models.
3. Identify data leakage using entity recognition and vector similarity analysis.
4. Build your own monitoring …
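As a minimal illustration of the idea behind item 1, the sketch below scores an answer by its lexical consistency with resampled answers from the same model: if the model's resamples disagree with the original answer, the answer is more likely hallucinated. The Jaccard-overlap scoring and the example strings here are simplified stand-ins, not the actual SelfCheckGPT implementation, which uses stronger consistency measures (e.g. NLI-, QA-, or BERTScore-based variants).

```python
import string

def tokens(text):
    # Lowercase and strip surrounding punctuation for a crude lexical comparison.
    return {w.strip(string.punctuation).lower() for w in text.split()} - {""}

def jaccard(a, b):
    # Word-overlap similarity between two strings, in [0, 1].
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def selfcheck_score(answer, samples):
    """1 minus the mean consistency between the answer and resampled answers.

    Higher scores mean the answer disagrees with the model's own resamples,
    which is a signal of possible hallucination.
    """
    return 1.0 - sum(jaccard(answer, s) for s in samples) / len(samples)

# In practice, `samples` would come from re-querying the LLM at nonzero
# temperature; here they are hard-coded for illustration.
answer = "Paris is the capital of France."
samples = [
    "The capital of France is Paris.",
    "Paris is France's capital city.",
]
print(selfcheck_score(answer, samples))
```

A real pipeline would compare at the sentence level and replace the lexical overlap with a semantic consistency model, but the flagging logic — sample, compare, threshold — stays the same.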
Watch on Coursera ↗