Local AI Agent with LangGraph + Ollama (Full Tutorial, Qwen3)

Shane | LLM Implementation · Intermediate · 🤖 AI Agents & Automation · 6mo ago
🚀 Code: https://github.com/langchain-ai/langchain-academy/

Tired of API bills and vendor lock-in? In this tutorial, you'll learn how to adapt a LangGraph agent to run entirely locally using the free, open-source Qwen3 model with Ollama. We'll take a standard LangChain Academy example built for GPT-4o and show you how to swap it out for a powerful local alternative, giving you full control over your data and costs.

// WHAT YOU'LL LEARN
Setting up a Python development environment with UV, a blazingly fast package manager.
Installing and running powerful open-source models like Qwen3 locally with Ollama.
Building a conversational agent with memory using LangGraph's StateGraph.
Defining and binding tools (like a calculator) to your local LLM.
Adapting existing LangChain code from proprietary to open-source models.
Launching and interacting with your agent using LangGraph Studio.
Observing and debugging your agent's thought process with LangSmith.
Understanding the core components of an agentic graph: nodes, edges, and state.

// RESOURCES
LangChain Academy Course: https://academy.langchain.com/
LangGraph Docs: https://docs.langchain.com/oss/python/langgraph/overview
Ollama: https://ollama.com/
Qwen3 Model: https://ollama.com/library/qwen3
UV Package Manager: https://github.com/astral-sh/uv

// CHAPTERS
00:00 - Intro: Moving Beyond Proprietary Models
01:03 - The Stack: LangGraph, Ollama, and UV
01:14 - Environment Setup with UV Package Manager
01:50 - Installing Ollama & Pulling the Qwen3 Model
04:01 - Creating a Virtual Environment & Project Dependencies
05:35 - Code Deep Dive: From OpenAI to Ollama
06:21 - Building the Agent Graph (Nodes & Edges)
11:00 - Configuring LangSmith for Tracing
12:12 - Running the Agent in LangGraph Studio
12:46 - Live Demo: Testing the Agent's Memory
13:56 - Recap & Next Steps

// NEXT STEPS
If this helped, give the video a like and subscribe! What should we build next with this local stack? Let me know in the comments below.
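A rough sketch of the setup steps covered in the first chapters (UV environment, pulling Qwen3). This assumes `uv` and Ollama are already installed via the resource links above; the exact dependency list comes from the repository's own project files, so the packages below are an illustrative guess, not the tutorial's verbatim commands.

```shell
# Create and activate a virtual environment with UV (fast pip/venv replacement)
uv venv
source .venv/bin/activate

# Install the agent stack (assumed package set; check the repo for the real list)
uv pip install langgraph langchain-ollama "langgraph-cli[inmem]"

# Download the Qwen3 weights locally, then smoke-test that the model responds
ollama pull qwen3
ollama run qwen3 "Say hello"
```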
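The "Code Deep Dive" chapter swaps the model one line at a time (roughly `ChatOpenAI(model="gpt-4o")` for `ChatOllama(model="qwen3")`) and then wires tools into the graph. As a minimal stand-in for what LangGraph's `StateGraph`, tool node, and conditional edge do, here is the same routing decision in plain Python; the function and message shapes are hypothetical simplifications, not the library's API.

```python
def multiply(a: int, b: int) -> int:
    """A calculator tool like the one bound to the model in the video."""
    return a * b

TOOLS = {"multiply": multiply}

def route(message: dict) -> str:
    """Conditional edge: go to 'tools' when the reply carries tool calls, else end."""
    return "tools" if message.get("tool_calls") else "end"

def run_turn(model_reply: dict) -> dict:
    """One pass through the graph: model node -> conditional edge -> tool node."""
    if route(model_reply) == "tools":
        call = model_reply["tool_calls"][0]
        result = TOOLS[call["name"]](**call["args"])
        return {"role": "tool", "content": result}
    return model_reply

# A reply in which the model decided to use the calculator:
reply = {"role": "ai", "tool_calls": [{"name": "multiply", "args": {"a": 6, "b": 7}}]}
print(run_turn(reply))  # {'role': 'tool', 'content': 42}
```

In the real graph, this branch is what LangGraph's prebuilt `tools_condition` encodes: tool-calling replies loop back through the tool node, plain replies terminate the run.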
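The live demo tests the agent's memory: LangGraph persists graph state per `thread_id` through a checkpointer, so a second turn on the same thread sees the first. This dict-based sketch shows the idea in plain Python; `checkpoints` and `invoke` are hypothetical names, and the placeholder reply stands in for a real Qwen3 call.

```python
from collections import defaultdict

# Per-thread saved state, playing the role of LangGraph's checkpointer
checkpoints: dict[str, list[dict]] = defaultdict(list)

def invoke(thread_id: str, user_message: str) -> list[dict]:
    """Append the new turn to the thread's saved history, as a checkpointer would."""
    history = checkpoints[thread_id]
    history.append({"role": "user", "content": user_message})
    # A real run would call the local Qwen3 model here; we store a placeholder.
    history.append({"role": "ai", "content": f"echo: {user_message}"})
    return history

invoke("thread-1", "My name is Shane.")
history = invoke("thread-1", "What is my name?")
# The second turn sees the first: 4 messages accumulated under the same thread_id.
print(len(history))  # 4
```

Switching to a fresh `thread_id` starts with empty state, which is exactly how Studio lets you reset a conversation without restarting the agent.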

