What is the Learning Rate?
The learning rate controls how much we adjust the weights with respect to the loss gradient. It is a hyperparameter that is not learned during training; it is usually chosen by the practitioner and tuned by trial and error.
The lower the learning rate, the slower the convergence toward the minimum of the loss.
If the learning rate is too high, gradient descent may overshoot the minimum and fail to converge, or even diverge.
Since our goal is to minimize the cost function to find optimal values for the weights, we run multiple iterations, updating the weights each time and recalculating the cost, until we arrive at a minimum.
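A minimal sketch of this effect, assuming a toy one-dimensional loss f(w) = (w - 3)² whose minimum is at w = 3 (the function and its settings are illustrative, not from the original text):

```python
def gradient_descent(lr, steps=50, w=0.0):
    """Minimize f(w) = (w - 3)^2 with plain gradient descent.

    The gradient of f is 2 * (w - 3); each step moves w against
    the gradient, scaled by the learning rate lr.
    """
    for _ in range(steps):
        w -= lr * 2 * (w - 3)
    return w

# A moderate learning rate converges close to the minimum at w = 3.
print(gradient_descent(lr=0.1))

# A learning rate that is too large overshoots on every step,
# and w moves farther from the minimum instead of closer.
print(gradient_descent(lr=1.1, steps=20))
```

With lr=0.1 the iterate lands very near 3, while with lr=1.1 each update multiplies the distance from the minimum by 1.2, so the iterates diverge, illustrating why the learning rate must be small enough for convergence but large enough for reasonable speed.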