LangSmith Tracing Tutorial: Monitor and Debug Your LLM Calls Step by Step

Ready Tensor · Beginner · 🧠 Large Language Models · 2mo ago
In this video, we walk through how to trace and monitor your LLM application using LangSmith, a powerful observability tool built by the creators of LangChain. You'll see how LangSmith captures every LLM call, groups them into traces, and gives you deep visibility into inputs, outputs, latency, token usage, and cost, even for multi-step workflows.

You'll learn how to:
- Set up LangSmith and generate an API key
- Configure environment variables to enable tracing
- Use LangSmith with any LLM framework (not just LangChain)
- Trace simple single-call LLM interactions
- Track complex multi-step workflows
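The environment-variable setup described above can be sketched as a few exports. These variable names follow the current LangSmith documentation (older LangChain-based setups use `LANGCHAIN_TRACING_V2` and `LANGCHAIN_API_KEY` instead); the project name here is just an illustrative placeholder.

```shell
# Enable LangSmith tracing for any process started from this shell.
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="<your-api-key>"   # generated in the LangSmith UI
export LANGSMITH_PROJECT="my-first-traces"  # optional: groups runs under one project
```

Because tracing is driven entirely by environment variables, no application code needs to change to turn it on or off.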
Watch on YouTube ↗

Chapters (6)

0:00 What is LangSmith and when to use it
0:37 LangSmith setup and environment variables
1:28 Overview of traced LLM examples
2:15 Viewing traces in the LangSmith dashboard
3:19 Inspecting multi-step traces and outputs
4:05 Token usage, latency, and cost analysis
Next Up
5 Levels of AI Agents - From Simple LLM Calls to Multi-Agent Systems
Dave Ebbelaar (LLM Eng)