Deep Dive into LLMs like ChatGPT
This is a general-audience deep dive into the Large Language Model (LLM) AI technology that powers ChatGPT and related products. It covers the full training stack of how the models are developed, along with mental models for thinking about their "psychology", and how to get the best use out of them in practical applications. I have one "Intro to LLMs" video already from about a year ago, but that was just a re-recording of a random talk, so I wanted to loop around and do a much more comprehensive version.
Instructor
Andrej was a founding member of OpenAI (2015), then Sr. Director of AI at Tesla (2017-2022), and is now the founder of Eureka Labs, which is building an AI-native school. His goal in this video is to raise knowledge and understanding of the state of the art in AI, and to empower people to effectively use the latest and greatest in their work.
Find more at https://karpathy.ai/ and https://x.com/karpathy
Chapters
00:00:00 introduction
00:01:00 pretraining data (internet)
00:07:47 tokenization
00:14:27 neural network I/O
00:20:11 neural network internals
00:26:01 inference
00:31:09 GPT-2: training and inference
00:42:52 Llama 3.1 base model inference
00:59:23 pretraining to post-training
01:01:06 post-training data (conversations)
01:20:32 hallucinations, tool use, knowledge/working memory
01:41:46 knowledge of self
01:46:56 models need tokens to think
02:01:11 tokenization revisited: models struggle with spelling
02:04:53 jagged intelligence
02:07:28 supervised finetuning to reinforcement learning
02:14:42 reinforcement learning
02:27:47 DeepSeek-R1
02:42:07 AlphaGo
02:48:26 reinforcement learning from human feedback (RLHF)
03:09:39 preview of things to come
03:15:15 keeping track of LLMs
03:18:34 where to find LLMs
03:21:46 grand summary
Links
- ChatGPT: https://chatgpt.com/
- FineWeb (pretraining dataset): https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1
- Tiktokenizer: https://tiktokenizer.vercel.app/ (see the code sketch after this list)
- Transformer Neural Net 3D visualizer: https:/
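For a quick, hands-on taste of the tokenization chapter, the short sketch below reproduces in code what the Tiktokenizer page demonstrates interactively: text is converted into integer token ids before the model ever sees it. This is a minimal illustration, not material from the video itself; it assumes OpenAI's open-source tiktoken package is installed (pip install tiktoken), and the cl100k_base encoding is an illustrative choice rather than the exact tokenizer shown on screen.

```python
# Minimal sketch: turn text into the integer tokens an LLM actually consumes.
# Assumes `pip install tiktoken`; "cl100k_base" (a GPT-4-era encoding) is an
# illustrative choice, not necessarily the tokenizer used in the video.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Deep Dive into LLMs like ChatGPT"
token_ids = enc.encode(text)                    # a short list of integer ids
chunks = [enc.decode([t]) for t in token_ids]   # the text chunk behind each id

print(token_ids)  # the ids depend on the chosen encoding
print(chunks)     # how the sentence was split into sub-word pieces
```

Round-tripping with enc.decode(token_ids) returns the original string, which is the basic invariant the video's tokenization chapter builds on.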