Gradient Descent

Gradient Descent is an Optimization technique that uses the concept of the Gradient (Calculus) to tune the coefficients and bias of a linear equation, i.e. to reach a local or global minimum of a cost function. The gradient itself can be used to study how the output changes when the input is changed.
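As a concrete illustration (not part of the original note), here is a minimal sketch of gradient descent fitting a line y = w·x + b by minimizing the mean squared error; the synthetic data, learning rate, and iteration count are assumptions chosen purely for demonstration:

```python
import numpy as np

# Synthetic data roughly following y = 3x + 2 (assumed for illustration)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3 * x + 2 + rng.normal(0, 1, size=100)

w, b = 0.0, 0.0          # coefficient and bias to be tuned
learning_rate = 0.01
n_iterations = 1000

for _ in range(n_iterations):
    y_pred = w * x + b
    error = y_pred - y
    # Partial derivatives of the mean squared error with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # Step in the direction opposite to the gradient
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"w ~ {w:.2f}, b ~ {b:.2f}")  # should approach 3 and 2
```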


Notes:

  • Gradient descent iteratively adjusts parameters to minimize a given function towards a local minimum.
  • In Linear Regression it is used to find the weights and biases, and in Deep Learning it is the method applied during backward propagation.
  • The algorithm's objective is to identify model parameters, such as weights and bias, that reduce the model's error on the training data, i.e. to minimize an error function.
  • In order for our model to fit the data as well as possible, we would have to find the global minimum of the cost function. However, finding that global minimum by searching over all the parameters directly is usually very costly and time-consuming. That is why we use iterative optimization techniques like gradient descent.
  • The gradient groups all the partial derivatives: the Gradient (Calculus) is simply the Vector containing all the partial derivatives of a function. In essence, it generalizes the derivative to scalar functions of several variables, and it drives the update rule written out below.
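To make the last two notes concrete, the gradient of an error function J(w_1, …, w_n) and the basic descent step can be written as follows (a minimal sketch; the symbol α for the learning rate is assumed notation, not from the original note):

```latex
\nabla J(\mathbf{w}) =
  \left( \frac{\partial J}{\partial w_1},\;
         \frac{\partial J}{\partial w_2},\; \dots,\;
         \frac{\partial J}{\partial w_n} \right),
\qquad
\mathbf{w} \leftarrow \mathbf{w} - \alpha \, \nabla J(\mathbf{w})
```

Each step moves the parameters a small amount in the direction opposite to the gradient, which is the direction of steepest decrease of the error.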

Types of Gradient Descent:

GRADIENT_DESCENT.png
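The image above presumably covers the standard variants (batch, stochastic, and mini-batch gradient descent), which differ only in how many training samples are used to estimate the gradient at each update. A minimal sketch of that difference, with function and parameter names assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient_step(w, b, x, y, learning_rate=0.01, batch_size=None):
    """One update of w and b; batch_size selects the variant:
    None -> batch GD (all samples), 1 -> stochastic GD, otherwise mini-batch GD."""
    if batch_size is not None:
        idx = rng.choice(len(x), size=batch_size, replace=False)
        x, y = x[idx], y[idx]
    error = w * x + b - y                 # prediction error on the chosen samples
    grad_w = 2 * np.mean(error * x)       # partial derivative of MSE w.r.t. w
    grad_b = 2 * np.mean(error)           # partial derivative of MSE w.r.t. b
    return w - learning_rate * grad_w, b - learning_rate * grad_b
```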

