Unleash the power of Local LLMs with Ollama x AnythingLLM
Running local LLMs for inference, character building, private chats, or chatting with your own documents has been all the rage, but it isn't easy for the layperson.
Today, with only a single laptop, no GPU, and two free applications, you can get a fully private local LLM RAG chatbot running in less than 5 minutes!
This is no joke - Ollama and AnythingLLM are now fully compatible, meaning that the sky is the limit. Run models like Llama 2, Mistral, CodeLlama, and more to make your dreams a reality with only a CPU.
Ollama: https://ollama.com/
AnythingLLM: https://useanything.com/download
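Once Ollama is installed and serving a model, you can also talk to it from code. Below is a minimal sketch, assuming the Ollama server is running on its default local port (11434) and that you have already pulled a model (for example, llama2); the prompt text is just a placeholder.

```python
# Minimal sketch: send one prompt to a locally running Ollama server.
# Assumes Ollama is installed and running, and the "llama2" model is pulled.
import json
import urllib.request

payload = {
    "model": "llama2",  # any model tag you have pulled locally
    "prompt": "In one sentence, what is retrieval-augmented generation?",
    "stream": False,    # return the full completion as a single JSON object
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])  # the model's completion text
```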
…
Chapters (18)
0:00 Introduction to Ollama x AnythingLLM on a laptop
0:36 Introduction to Ollama
1:11 Technical limitations
1:48 Ollama Windows is coming soon!
2:11 Let's get started already!
2:17 Install Ollama
2:25 Ollama model selection
2:41 Running your first model
3:33 Running the Llama-2 Model by Meta
3:57 Sending our first Local LLM chat!
4:53 Giving Ollama superpowers with AnythingLLM
5:31 Connecting Ollama to AnythingLLM
6:45 AnythingLLM express setup details
7:28 Create your AnythingLLM workspace
7:45 Embedding custom documents for RAG for Ollama
8:22 Advanced settings for AnythingLLM
8:53 Sending a chat to Ollama with full RAG capabilities
9:30 Closing thoughts and considerations
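For the "Connecting Ollama to AnythingLLM" step above, AnythingLLM only needs the base URL of your local Ollama server (http://localhost:11434 by default) and the name of a pulled model. A quick way to sanity-check both before opening AnythingLLM's LLM preference settings is to list the models Ollama is serving; this sketch assumes the server is running on the default port.

```python
# Minimal sketch: list the models your local Ollama server can serve.
# The base URL here is the same one you paste into AnythingLLM's Ollama settings.
import json
import urllib.request

BASE_URL = "http://localhost:11434"  # Ollama's default; change if you moved it

with urllib.request.urlopen(f"{BASE_URL}/api/tags") as resp:
    models = json.load(resp)["models"]

for m in models:
    # Each entry is a model tag you can select in AnythingLLM, e.g. "llama2:latest"
    print(m["name"])
```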