Building makemore Part 2: MLP

Andrej Karpathy · Beginner · 📐 ML Fundamentals · 3y ago
We implement a multilayer perceptron (MLP) character-level language model. In this video we also introduce many basics of machine learning (e.g. model training, learning rate tuning, hyperparameters, evaluation, train/dev/test splits, under/overfitting, etc.).

Links:
- makemore on GitHub: https://github.com/karpathy/makemore
- Jupyter notebook built in this video: https://github.com/karpathy/nn-zero-to-hero/blob/master/lectures/makemore/makemore_part2_mlp.ipynb
- Colab notebook (new!): https://colab.research.google.com/drive/1YIfmkftLrz6MPTOO9Vwqrop2Q5llHIGK?usp=sharing
- Bengio et al. 2003 MLP language model paper (PDF): https://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf
- my website: https://karpathy.ai
- my twitter: https://twitter.com/karpathy
- (new) Neural Networks: Zero to Hero series Discord channel: https://discord.gg/3zy8kqD9Cp, for people who'd like to chat more and go beyond YouTube comments

Useful links:
- PyTorch internals ref: http://blog.ezyang.com/2019/05/pytorch-internals/

Exercises:
- E01: Tune the hyperparameters of the training to beat my best validation loss of 2.2.
- E02: I was not careful with the initialization of the network in this video. (1) What is the loss you'd get if the predicted probabilities at initialization were perfectly uniform? What loss do we achieve? (2) Can you tune the initialization to get a starting loss that is much closer to (1)?
- E03: Read the Bengio et al. 2003 paper (link above), implement and try any idea from the paper. Did it work?
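A useful anchor for E02: with perfectly uniform predictions over the 27-character vocabulary, every correct character gets probability 1/27, so the expected cross-entropy loss is -ln(1/27) = ln 27 ≈ 3.30; a starting loss far above that means the initial logits are far from uniform.

Below is a minimal sketch of the network assembled in the video, assuming the starting hyperparameters it introduces (context of 3 characters, 2-dimensional embeddings, 100 hidden units, 27-character vocabulary); the `train_step` helper and the random smoke-test data are illustrative additions, not from the notebook:

```python
import torch
import torch.nn.functional as F

# Hyperparameters: the starting values used in the video (it tunes them later).
block_size = 3    # context length: how many previous characters predict the next one
n_embd = 2        # dimensionality of each character embedding
n_hidden = 100    # neurons in the hidden layer
vocab_size = 27   # 26 letters + '.' as the start/end token

g = torch.Generator().manual_seed(2147483647)
C  = torch.randn((vocab_size, n_embd),            generator=g)  # embedding lookup table
W1 = torch.randn((block_size * n_embd, n_hidden), generator=g)  # hidden layer weights
b1 = torch.randn(n_hidden,                        generator=g)
W2 = torch.randn((n_hidden, vocab_size),          generator=g)  # output layer weights
b2 = torch.randn(vocab_size,                      generator=g)
parameters = [C, W1, b1, W2, b2]
for p in parameters:
    p.requires_grad = True

def train_step(X, Y, lr=0.1):
    """One forward/backward/update step on integer contexts X (batch, block_size)
    and integer targets Y (batch,)."""
    emb = C[X]                                                   # (batch, block_size, n_embd)
    h = torch.tanh(emb.view(-1, block_size * n_embd) @ W1 + b1)  # hidden activations
    logits = h @ W2 + b2                                         # scores over next characters
    loss = F.cross_entropy(logits, Y)                            # fused, numerically stable NLL
    for p in parameters:
        p.grad = None
    loss.backward()
    for p in parameters:
        p.data += -lr * p.grad
    return loss.item()

# Smoke test on random data; real training builds (X, Y) from the names dataset.
X = torch.randint(0, vocab_size, (32, block_size))
Y = torch.randint(0, vocab_size, (32,))
print(train_step(X, Y))
```

The `emb.view(...)` line is the storage/view mechanism covered in the 18:35 chapter; see the illustration after the chapter list below.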


Chapters (10)

0:00 intro
1:48 Bengio et al. 2003 (MLP language model) paper walkthrough
9:03 (re-)building our training dataset
12:19 implementing the embedding lookup table
18:35 implementing the hidden layer + internals of torch.Tensor: storage, views
29:15 implementing the output layer
29:53 implementing the negative log likelihood loss
32:17 summary of the full network
32:49 introducing F.cross_entropy and why
37:56 implementing th…
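Related to the 18:35 chapter on torch.Tensor internals, here is a small illustration (not from the notebook) of why `.view()` is cheap: it reinterprets the tensor's flat underlying storage under a new shape rather than copying data:

```python
import torch

# A tensor's elements live in a flat 1-D storage; .view() reinterprets
# that same storage under a new shape without copying any memory.
a = torch.arange(6)      # storage holds [0, 1, 2, 3, 4, 5]
b = a.view(2, 3)         # same storage, presented as a 2x3 matrix

b[0, 0] = 99             # writing through the view...
print(a)                 # ...changes a too: tensor([99, 1, 2, 3, 4, 5])
print(a.data_ptr() == b.data_ptr())  # True: both point at the same memory
```

This is the same mechanism that lets `emb.view(-1, block_size * n_embd)` in the sketch above flatten a batch of embeddings for the hidden layer without any copy.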