The spelled-out intro to language modeling: building makemore

Andrej Karpathy · Beginner · 🧠 Large Language Models · 3y ago
We implement a bigram character-level language model, which we will further complexify in follow-up videos into a modern Transformer language model, like GPT. In this video the focus is on (1) introducing torch.Tensor and its subtleties and use in efficiently evaluating neural networks, and (2) the overall framework of language modeling, which includes model training, sampling, and the evaluation of a loss (e.g. the negative log likelihood for classification).

Links:
- makemore on GitHub: https://github.com/karpathy/makemore
- Jupyter notebook I built in this video: https://github.com/karpathy/nn-zero-to-hero/blob/master/lectures/makemore/makemore_part1_bigrams.ipynb
- my website: https://karpathy.ai
- my twitter: https://twitter.com/karpathy
- (new) Neural Networks: Zero to Hero series Discord channel: https://discord.gg/3zy8kqD9Cp, for people who'd like to chat more and go beyond YouTube comments

Useful links for practice:
- Python + NumPy tutorial from CS231n: https://cs231n.github.io/python-numpy-tutorial/. We use torch.tensor instead of numpy.array in this video. Their design (e.g. broadcasting, data types) is so similar that practicing one is basically practicing the other; just be careful with some of the APIs, since details such as how functions are named and what arguments they take can vary.
- PyTorch tutorial on Tensor: https://pytorch.org/tutorials/beginner/basics/tensorqs_tutorial.html
- Another PyTorch intro to Tensor: https://pytorch.org/tutorials/beginner/nlp/pytorch_tutorial.html

Exercises:
- E01: train a trigram language model, i.e. take two characters as input to predict the third one. Feel free to use either counting or a neural net. Evaluate the loss; did it improve over the bigram model?
- E02: split the dataset randomly into an 80% train set, 10% dev set, and 10% test set. Train the bigram and trigram models only on the training set, then evaluate them on the dev and test splits. What do you see?
- E03: use the dev set to tune the strength of smoothing.
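To make the framework concrete before watching, here is a minimal sketch of the counting-based bigram model in the spirit of (but not identical to) the notebook above. The three-word list stands in for the names.txt dataset from the repo, and the add-one smoothing and random seed are illustrative choices rather than the video's exact settings:

```python
# A minimal sketch of the counting-based bigram model: count character
# pairs, normalize with broadcasting, sample a name, and evaluate the
# average negative log likelihood.
import torch

words = ["emma", "olivia", "ava"]  # stand-in for the names.txt dataset

# Build the vocabulary: the letters seen in the data plus '.' as a
# combined start/end token at index 0.
chars = sorted(set("".join(words)))
stoi = {s: i + 1 for i, s in enumerate(chars)}
stoi["."] = 0
itos = {i: s for s, i in stoi.items()}
V = len(stoi)

# Count all bigrams into a V x V tensor of integers.
N = torch.zeros((V, V), dtype=torch.int32)
for w in words:
    cs = ["."] + list(w) + ["."]
    for c1, c2 in zip(cs, cs[1:]):
        N[stoi[c1], stoi[c2]] += 1

# Normalize rows into probabilities. keepdim=True keeps the sum as a
# (V, 1) tensor, so broadcasting divides each row by its own total.
P = (N + 1).float()              # +1 is add-one smoothing, avoids log(0)
P = P / P.sum(1, keepdim=True)

# Sample a name: start at '.', repeatedly draw the next character from
# the current character's row until '.' is drawn again.
g = torch.Generator().manual_seed(2147483647)
ix = 0
out = []
while True:
    ix = torch.multinomial(P[ix], num_samples=1, generator=g).item()
    if ix == 0:
        break
    out.append(itos[ix])
print("".join(out))

# Evaluate the model: average negative log likelihood over all bigrams.
log_likelihood, n = 0.0, 0
for w in words:
    cs = ["."] + list(w) + ["."]
    for c1, c2 in zip(cs, cs[1:]):
        log_likelihood += torch.log(P[stoi[c1], stoi[c2]])
        n += 1
print(f"nll = {-log_likelihood.item() / n:.4f}")
```

The keepdim=True in the row normalization is exactly the kind of torch.Tensor subtlety the video dwells on: summing without keepdim yields a (V,) tensor that broadcasts against the (V, V) counts column-wise, silently normalizing along the wrong dimension.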
