Deep Dive into LLMs like ChatGPT

Andrej Karpathy · Intermediate · 🧠 Large Language Models · 1y ago
This is a general-audience deep dive into the Large Language Model (LLM) technology that powers ChatGPT and related products. It covers the full training stack of how these models are developed, along with mental models for thinking about their "psychology" and how to get the best use out of them in practical applications. I have one "Intro to LLMs" video from about a year ago, but that was just a re-recording of a random talk, so I wanted to loop around and do a much more comprehensive version.

Instructor Andrej was a founding member at OpenAI (2015) and then Sr. Director of AI at Tesla (2017-2022), and is now a founder at Eureka Labs, which is building an AI-native school. His goal in this video is to raise knowledge and understanding of the state of the art in AI, and to empower people to effectively use the latest and greatest in their work. Find more at https://karpathy.ai/ and https://x.com/karpathy

Links
- ChatGPT: https://chatgpt.com/
- FineWeb (pretraining dataset): https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1
- Tiktokenizer: https://tiktokenizer.vercel.app/
- Transformer Neural Net 3D visualizer: https:/
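The tokenization chapter covers how text becomes token ids before it ever reaches the neural net. GPT-2-style tokenizers are built with byte-pair encoding (BPE), which repeatedly merges the most frequent adjacent pair of tokens into a new token. As a minimal illustrative sketch (the sample string and token ids below are made up for the example, not taken from the video):

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent token pairs and return the most common one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge(tokens, pair, new_token):
    """Replace every occurrence of `pair` with `new_token`."""
    out, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            out.append(new_token)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# Start from raw UTF-8 bytes, then iteratively merge the most frequent pair,
# assigning new token ids just above the byte range (256, 257, ...).
text = "aaabdaaabac"
tokens = list(text.encode("utf-8"))
for step in range(3):
    pair = most_frequent_pair(tokens)
    tokens = merge(tokens, pair, 256 + step)
print(tokens)  # → [258, 100, 258, 97, 99]
```

Real tokenizers (as explored interactively in Tiktokenizer, linked above) learn tens of thousands of such merges from large text corpora; the loop above just shows the core mechanic.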
Watch on YouTube ↗


Chapters (24)

0:00 introduction
1:00 pretraining data (internet)
7:47 tokenization
14:27 neural network I/O
20:11 neural network internals
26:01 inference
31:09 GPT-2: training and inference
42:52 Llama 3.1 base model inference
59:23 pretraining to post-training
1:01:06 post-training data (conversations)
1:20:32 hallucinations, tool use, knowledge/working memory
1:41:46 knowledge of self
1:46:56 models need tokens to think
2:01:11 tokenization revisited: models struggle with spelling
2:04:53 jagged intelligence
2:07:28 supervised finetuning to reinforcement learning
2:14:42 reinforcement learning
2:27:47 DeepSeek-R1
2:42:07 AlphaGo
2:48:26 reinforcement learning from human feedback (RLHF)
3:09:39 preview of things to come
3:15:15 keeping track of LLMs
3:18:34 where to find LLMs
3:21:46 grand summary
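The inference chapters walk through how a trained model generates text one token at a time: it outputs a score (logit) per vocabulary token, converts the scores to probabilities with a softmax, and samples the next token, with a temperature knob trading off greediness against diversity. As an illustrative sketch (the logit values here are invented for the example):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Softmax over temperature-scaled logits, then sample an index.

    Lower temperature sharpens the distribution (greedier); higher
    temperature flattens it (more diverse samples).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs)[0]

random.seed(0)
# Pretend the model scored a 3-token vocabulary; token 0 is most likely.
print(sample_next_token([2.0, 1.0, 0.1], temperature=0.7))
```

Generation then loops: append the sampled token to the context, run the network again, and sample the next one, until a stop token appears.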