Building makemore Part 3: Activations & Gradients, BatchNorm

Andrej Karpathy · Advanced · 📄 Research Papers Explained · 3y ago
We dive into some of the internals of MLPs with multiple layers and scrutinize the statistics of the forward-pass activations and backward-pass gradients, along with some of the pitfalls that arise when they are improperly scaled. We also look at the typical diagnostic tools and visualizations you'd want to use to understand the health of your deep network. We learn why training deep neural nets can be fragile and introduce the first modern innovation that made doing so much easier: Batch Normalization. Residual connections and the Adam optimizer remain notable to-dos for a later video. Links: - makemore on github: …
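As a rough illustration of the Batch Normalization idea mentioned above, here is a minimal sketch of the forward pass in NumPy. The function name and shapes are illustrative, not from the video; a real training-time BatchNorm layer would additionally keep running mean/variance estimates for use at inference.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch dimension, then
    scale and shift with the learnable parameters gamma/beta."""
    mean = x.mean(axis=0, keepdims=True)        # per-feature batch mean
    var = x.var(axis=0, keepdims=True)          # per-feature batch variance
    xhat = (x - mean) / np.sqrt(var + eps)      # standardized pre-activations
    return gamma * xhat + beta                  # restore expressivity

# Example: a batch of 32 pre-activations with 100 features (illustrative sizes)
rng = np.random.default_rng(0)
x = 5.0 * rng.normal(size=(32, 100)) + 3.0      # badly scaled pre-activations
gamma = np.ones(100)
beta = np.zeros(100)
out = batchnorm_forward(x, gamma, beta)
# each column of `out` now has roughly zero mean and unit variance
```

With `gamma=1` and `beta=0` this simply standardizes each feature, which keeps activations in a healthy range regardless of how the preceding layer's weights are scaled; the learnable `gamma`/`beta` let the network undo the normalization if that is what the loss prefers.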
Watch on YouTube ↗