LLM Tracing with Langfuse: Debug and Observe Complex AI Pipelines Locally

Ready Tensor · Intermediate · 🧠 Large Language Models · 2mo ago
In this video, we dive into LLM tracing and observability using Langfuse, one of the most popular open-source tools for understanding what happens inside your LLM-powered applications. You'll learn how to run Langfuse locally using Docker and use it to trace simple LLM calls, post-processing logic, and multi-step pipelines involving multiple LLM invocations.

We cover how tracing works for:

- Single LLM API calls
- LLM calls followed by custom Python logic
- Multi-step pipelines with multiple LLM calls and intermediate outputs

You'll also explore the Langfuse UI to inspect traces, token usage, latency, and cost.
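The video runs Langfuse locally with Docker (the self-hosting guide uses a docker compose setup that serves the UI at http://localhost:3000), then creates a project and copies its API keys into the environment. As a rough sketch of what wiring a script to that local instance can look like, here is a minimal traced call using the Python SDK's OpenAI drop-in wrapper; the key values, model name, and prompt are placeholders, not from the video:

```python
import os

# Point the SDK at the local Langfuse instance. The keys come from the
# project you create in the Langfuse UI; these values are placeholders.
os.environ["LANGFUSE_HOST"] = "http://localhost:3000"
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."

# Langfuse's drop-in OpenAI wrapper traces each call automatically:
# model, prompt, completion, token usage, and latency are captured.
from langfuse.openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize LLM tracing in one line."}],
)
print(response.choices[0].message.content)
```

After running this, the call shows up as a trace in the dashboard with its input, output, and usage stats attached.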
Watch on YouTube ↗

Chapters (8)

0:00 What is LLM tracing and why it matters
0:45 Running Langfuse locally with Docker
1:50 Creating a project and API keys
3:15 Tracing a simple LLM call
4:12 Tracing LLM output with custom post-processing
4:58 Tracing a multi-step LLM pipeline
5:32 Exploring traces in the Langfuse dashboard
8:02 Understanding inputs, outputs, and pipeline results
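For the "LLM call followed by custom post-processing" chapter, the pattern looks roughly like the sketch below, assuming the v2 Python SDK's @observe decorator (function names, prompts, and parsing logic are illustrative): the decorated outer function becomes the trace, and the plain-Python cleanup step appears as a nested observation next to the LLM generation.

```python
from langfuse.decorators import observe  # v2-style import
from langfuse.openai import OpenAI

client = OpenAI()

@observe()  # nested call: the pure-Python step shows up inside the trace
def extract_bullets(text: str) -> list[str]:
    return [
        line.strip("- ").strip()
        for line in text.splitlines()
        if line.strip().startswith("-")
    ]

@observe()  # top-level decorated function becomes the trace
def summarize_and_clean(topic: str) -> list[str]:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Give 3 bullet points about {topic}."}],
    )
    return extract_bullets(completion.choices[0].message.content)

print(summarize_and_clean("LLM observability"))
```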
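The multi-step pipeline chapter extends the same idea to several LLM calls in one trace. A sketch under the same assumptions, using a hypothetical two-step draft-then-refine pipeline whose intermediate output is visible in the dashboard:

```python
from langfuse.decorators import observe, langfuse_context  # v2-style imports
from langfuse.openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    # Each call through the drop-in wrapper is recorded as a generation.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

@observe()
def draft(topic: str) -> str:
    return ask(f"Write a rough two-sentence explanation of {topic}.")

@observe()
def refine(text: str) -> str:
    return ask(f"Rewrite this more clearly:\n\n{text}")

@observe()  # one trace spanning both steps and the intermediate draft
def pipeline(topic: str) -> str:
    rough = draft(topic)
    return refine(rough)

print(pipeline("LLM tracing"))

# Events are sent asynchronously; in short-lived scripts, flush before
# exit so the trace reliably reaches the local Langfuse instance.
langfuse_context.flush()
```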