In Artificial Neural Networks (ANNs), a batch is a fixed number of training examples processed together in one iteration of training. That is, the training data is divided into smaller subsets called batches.


  • Batch size is the number of training examples the model processes in a single iteration.
  • The model’s parameters (weights and biases) are updated using the average gradient of the loss function computed over the batch.
  • Larger batch sizes can speed up training (more examples are processed per update) but require more memory.
  • Smaller batch sizes produce more frequent, noisier updates, which can improve generalization, but each epoch involves more update steps and training may be slower overall.
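The idea above can be sketched with a minimal mini-batch gradient descent loop in NumPy. This is an illustrative toy (a single weight fit to y = 2x); names like `batch_size` and `lr` are just local variables, not part of any framework API.

```python
import numpy as np

# Toy data: learn y = 2x with mini-batch gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 2.0 * X[:, 0]

w = 0.0          # single weight to learn
lr = 0.1         # learning rate
batch_size = 16  # number of examples per parameter update

for epoch in range(50):
    perm = rng.permutation(len(X))  # shuffle the data each epoch
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]   # one batch of indices
        xb, yb = X[idx, 0], y[idx]
        pred = w * xb
        # Average gradient of the squared-error loss over this batch
        grad = np.mean(2.0 * (pred - yb) * xb)
        w -= lr * grad                         # one update per batch

print(round(w, 3))  # converges toward 2.0
```

Changing `batch_size` trades off how many updates happen per epoch (smaller batches mean more, noisier updates) against how much data each gradient estimate averages over.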