Gradient Descent Explained | How Do ML and DL Models Learn? | Simple Explanation!
🔥 In this video we cover gradient descent - an optimization algorithm used to train most ML and DL models. During training, the algorithm computes the gradient of the error function, which gives the magnitude and direction of the weight update that reduces the error. Calculus is at the core of gradient descent. Keep in mind that in complex scenarios we can't be sure whether we've found the globally best solution or just a good one.
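The training loop described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the video: a one-parameter linear model y = w * x trained with mean squared error, where all names (train, w, lr, epochs) are made up for the example.

```python
# Minimal gradient descent sketch for the steps in the video:
# 1) initialize weights, 2) get predictions, 3) compute the error,
# 4) update the weights using the gradient scaled by the learning rate.

def train(xs, ys, lr=0.1, epochs=100):
    w = 0.5  # step 1: weight initialization (a fixed value here for simplicity)
    for _ in range(epochs):
        preds = [w * x for x in xs]  # step 2: get predictions
        # step 3: mean squared error (computed for monitoring; the update uses its gradient)
        error = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)
        # step 4: gradient of the MSE with respect to w
        grad = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
        w -= lr * grad  # move against the gradient; lr controls the step size
    return w

# The data follows y = 3x, so w should converge close to 3.
w = train([1.0, 2.0, 3.0], [3.0, 6.0, 9.0])
print(w)
```

Note that the learning rate lr scales every update: too small and training crawls, too large and the weight can overshoot and diverge.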
In the next video we will cover the main types of gradient descent and their pros and cons. Subscribe for more!
📝 Key points covered:
0:00 - Introduct…
Chapters (11)
0:09 - Introduction
0:20 - Training step 1 - Random weights initialization
0:30 - Training step 2 - Get predictions
0:45 - Training step 3 - Error computation
0:56 - Training step 4 - Gradient Descent formula
1:25 - Gradient Descent Explained
1:32 - Learning Rate Explained
1:54 - Do you always find the best solution?
2:27 - The purpose of the learning rate
2:40 - The main types of gradient descent
Subscribe to our channel!
DeepCamp AI