Run State-of-the-art LLMs on RTX | NVIDIA NIM x AnythingLLM

Tim Carambat · Beginner · 📰 AI News & Updates · 1y ago
Hey everyone! In this video, I'm thrilled to announce our partnership with NVIDIA, showcasing their NIM technology paired with AnythingLLM. NVIDIA NIM provides a supercharged way to run state-of-the-art models on your RTX AI PC, effortlessly enhancing your AI capabilities. With AnythingLLM, you can harness this power without any complicated setup – just bring your GPU and go! Join us as we explore NVIDIA NIM and the advantages of integrating it with AnythingLLM. Discover how you can run powerful LLMs entirely on your local h…
Watch on YouTube ↗

Chapters (13)

Introduction
1:00 What is NVIDIA NIM?
2:11 CES 2025 Announcement
3:14 What NIMs are available today?
4:18 Let's run a NIM on a 4090
5:00 NVIDIA NIM set up in AnythingLLM
6:40 Choosing your preferred LLM
8:00 Pulling in a model
8:30 Starting a NIM for inference
9:50 Let's use the model for chats, agents, and more!
13:00 Running Deepseek-R1
15:55 Summary of NVIDIA NIM w/ AnythingLLM
16:30 Thanking everyone for supporting AnythingLLM