Let's reproduce GPT-2 (124M)

Andrej Karpathy · Advanced · 🧠 Large Language Models · 1y ago
We reproduce GPT-2 (124M) from scratch. This video covers the whole process: first we build the GPT-2 network, then we optimize its training to be really fast, then we set up the training run following the GPT-2 and GPT-3 papers and their hyperparameters, then we hit run, and come back the next morning to see our results and enjoy some amusing model generations. Keep in mind that in some places this video builds on knowledge from earlier videos in the Zero to Hero Playlist (see my channel). You could also see this video as building my nanoGPT repo, which by the end is about 90% similar.

Links:
- build-nanogpt GitHub repo, with all the changes in this video as individual commits: https://github.com/karpathy/build-nanogpt
- nanoGPT repo: https://github.com/karpathy/nanoGPT
- llm.c repo: https://github.com/karpathy/llm.c
- my website: https://karpathy.ai
- my twitter: https://twitter.com/karpathy
- our Discord channel: https://discord.gg/3zy8kqD9Cp

Supplementary links:
- Attention Is All You Need paper: https://arxiv.org/abs/1706.03762
- OpenAI GPT-3 paper: https://arxiv.org/abs/2005.14165
- OpenAI GPT-2 paper: https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
- The GPU I'm training the model on is from Lambda GPU Cloud, I think the best and easiest way to spin up an on-demand GPU instance in the cloud that you can ssh to: https://lambdalabs.com
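As a preview of what Section 1 ("implementing the GPT-2 nn.Module") builds, here is a minimal sketch of the model skeleton. This is not the video's exact code: it mirrors the Hugging Face GPT-2 parameter layout (transformer.wte / wpe / h / ln_f plus lm_head) so pretrained weights can later be copied in, and it substitutes PyTorch's built-in nn.MultiheadAttention for the causal self-attention module written by hand in the video.

```python
from dataclasses import dataclass
import torch
import torch.nn as nn

@dataclass
class GPTConfig:
    block_size: int = 1024   # maximum sequence length
    vocab_size: int = 50257  # GPT-2 BPE vocabulary size
    n_layer: int = 12        # the 124M configuration
    n_head: int = 12
    n_embd: int = 768

class Block(nn.Module):
    """One pre-norm transformer block. nn.MultiheadAttention stands in
    for the hand-rolled causal self-attention built in the video."""
    def __init__(self, config):
        super().__init__()
        self.ln_1 = nn.LayerNorm(config.n_embd)
        self.attn = nn.MultiheadAttention(config.n_embd, config.n_head, batch_first=True)
        self.ln_2 = nn.LayerNorm(config.n_embd)
        self.mlp = nn.Sequential(
            nn.Linear(config.n_embd, 4 * config.n_embd),
            nn.GELU(approximate="tanh"),  # GPT-2 uses the tanh-approximate GELU
            nn.Linear(4 * config.n_embd, config.n_embd),
        )

    def forward(self, x):
        T = x.size(1)
        # boolean causal mask: True above the diagonal = "may not attend"
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), 1)
        h = self.ln_1(x)
        a, _ = self.attn(h, h, h, attn_mask=causal, need_weights=False)
        x = x + a                        # residual around attention
        x = x + self.mlp(self.ln_2(x))   # residual around MLP
        return x

class GPT(nn.Module):
    """Skeleton mirroring the Hugging Face GPT-2 parameter names so a
    state dict from the pretrained checkpoint can be mapped over."""
    def __init__(self, config):
        super().__init__()
        self.transformer = nn.ModuleDict(dict(
            wte=nn.Embedding(config.vocab_size, config.n_embd),  # token embeddings
            wpe=nn.Embedding(config.block_size, config.n_embd),  # learned positions
            h=nn.ModuleList(Block(config) for _ in range(config.n_layer)),
            ln_f=nn.LayerNorm(config.n_embd),                    # final layernorm
        ))
        self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)

    def forward(self, idx):                       # idx: (B, T) token ids
        pos = torch.arange(idx.size(1), device=idx.device)
        x = self.transformer.wte(idx) + self.transformer.wpe(pos)
        for block in self.transformer.h:
            x = block(x)
        return self.lm_head(self.transformer.ln_f(x))  # (B, T, vocab_size) logits
```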

Chapters (13)

0:00 intro: Let’s reproduce GPT-2 (124M)
3:39 exploring the GPT-2 (124M) OpenAI checkpoint
13:47 SECTION 1: implementing the GPT-2 nn.Module
28:08 loading the huggingface/GPT-2 parameters
31:00 implementing the forward pass to get logits
33:31 sampling init, prefix tokens, tokenization
37:02 sampling loop
41:47 sample, auto-detect the device
45:50 let’s train: data batches (B,T) → logits (B,T,C)
52:53 cross entropy loss
56:42 optimization loop: overfit a single batch
1:02:00 data loader lite
1:06:14 paramet
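The training chapters above (45:50 through 1:02:00) boil down to a short loop: tokenize some text, carve out a (B, T) batch of inputs and their one-position-shifted targets, get (B, T, C) logits from the model, flatten both for cross-entropy, and step AdamW until a single batch is memorized. A hedged sketch, reusing the GPT/GPTConfig skeleton above; "input.txt" is a placeholder path, and tiktoken is the tokenizer the video uses:

```python
import torch
import torch.nn.functional as F
import tiktoken

# auto-detect the device, as in the 41:47 chapter
device = "cuda" if torch.cuda.is_available() else "cpu"

# a tiny batch from any text file ("input.txt" is a hypothetical path)
enc = tiktoken.get_encoding("gpt2")
tokens = torch.tensor(enc.encode(open("input.txt").read()))
B, T = 4, 32
buf = tokens[: B * T + 1]            # one extra token for the shifted targets
x = buf[:-1].view(B, T).to(device)   # inputs:  (B, T)
y = buf[1:].view(B, T).to(device)    # targets: (B, T), inputs shifted by one

model = GPT(GPTConfig()).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

for step in range(50):               # a correct model should drive this loss toward 0
    optimizer.zero_grad()
    logits = model(x)                # (B, T, C) with C = vocab_size
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1))
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```

Watching the loss fall to near zero on one repeated batch is the sanity check the 56:42 chapter uses before moving to a real data loader.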
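Likewise, the sampling chapters (33:31 through 41:47) reduce to an autoregressive loop: feed the prefix tokens through the model, take the logits at the last position, softmax, sample from the top-k (the video keeps the top 50), and append. A minimal sketch continuing from the code above:

```python
# encode the prefix used in the video; shape (1, T)
x = torch.tensor(enc.encode("Hello, I'm a language model,"), device=device).unsqueeze(0)

model.eval()
torch.manual_seed(42)
with torch.no_grad():
    while x.size(1) < 30:                                 # generate up to 30 tokens total
        logits = model(x)                                 # (1, T, vocab_size)
        probs = F.softmax(logits[:, -1, :], dim=-1)       # distribution over next token
        topk_probs, topk_ids = torch.topk(probs, 50, dim=-1)  # clamp to top-50
        ix = torch.multinomial(topk_probs, 1)             # sample within the top-k
        xcol = torch.gather(topk_ids, -1, ix)             # map back to vocabulary ids
        x = torch.cat((x, xcol), dim=1)                   # append and continue

print(enc.decode(x[0].tolist()))
```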