Quality and Safety for LLM Applications


Free to audit on Coursera


Coursera · Intermediate · Large Language Models
Safety and quality concerns must be addressed and monitored in any application, and building LLM applications poses special challenges. In this course, you’ll explore new metrics and best practices to monitor your LLM systems and ensure safety and quality. You’ll learn how to:

1. Identify hallucinations with methods like SelfCheckGPT.
2. Detect jailbreaks (prompts that attempt to manipulate LLM responses) using sentiment analysis and implicit toxicity detection models.
3. Identify data leakage using entity recognition and vector similarity analysis.
4. Build your own monitoring system to evaluate app safety and security over time.

Upon completing the course, you’ll be able to identify common security concerns in LLM-based applications and customize your safety and security evaluation tools to the LLM your application uses.
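To make the hallucination-detection idea concrete, here is a minimal sketch of the SelfCheckGPT intuition: if a claim in the model's answer cannot be reproduced across independently resampled answers, it is a hallucination candidate. Real SelfCheckGPT variants score consistency with NLI models or BERTScore; this sketch substitutes a crude word-overlap score, and all function names and the toy data are illustrative, not from the course.

```python
def consistency_score(sentence, samples):
    """Fraction of the sentence's words supported, on average, by each
    resampled answer. A crude stand-in for NLI/BERTScore consistency."""
    words = set(sentence.lower().split())
    if not words or not samples:
        return 0.0
    overlaps = [len(words & set(s.lower().split())) / len(words) for s in samples]
    return sum(overlaps) / len(overlaps)

def flag_hallucinations(answer_sentences, resampled_answers, threshold=0.5):
    """Return sentences whose content is poorly supported by the
    resampled answers (low cross-sample consistency)."""
    return [s for s in answer_sentences
            if consistency_score(s, resampled_answers) < threshold]

# Toy example: the second sentence appears in no resampled answer.
answer = ["Paris is the capital of France.",
          "It was founded in the year 3000 BC."]
samples = ["Paris is the capital of France.",
           "France's capital city is Paris."]
print(flag_hallucinations(answer, samples))
# → ['It was founded in the year 3000 BC.']
```

The same sample-and-compare pattern generalizes: swap the overlap score for an embedding cosine similarity and you are close to the vector-similarity analysis used for data-leakage detection in item 3.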
