Cross-Validation

Cross-validation is an evaluation technique that measures the performance of a machine learning model by splitting the data into training and validation sets, training the model on the training set, and evaluating it on the validation set. This process is repeated over multiple different splits of the data, and the performance is averaged across all of them.
Resampling methods are used to shuffle and partition the data samples into these splits.

Benefits:

  • Measures the level of overfitting.
  • Measures how well the model generalizes to new, unseen data.
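The split-train-evaluate-average loop described above can be sketched in plain Python. This is a minimal k-fold illustration, not a production implementation: `train_fn` and `score_fn` are hypothetical placeholders for whatever model-fitting and scoring functions you use, and the data is shuffled up front as the resampling step.

```python
import random
import statistics

def k_fold_splits(n_samples, k):
    """Yield (train_indices, val_indices) pairs covering the data in k folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]          # held-out validation fold
        train = indices[:start] + indices[start + size:]  # remaining k-1 folds
        yield train, val
        start += size

def cross_validate(train_fn, score_fn, data, k=5, seed=0):
    """Train on k-1 folds, score on the held-out fold, average the scores."""
    data = list(data)
    random.Random(seed).shuffle(data)  # resampling: randomize before splitting
    scores = []
    for train_idx, val_idx in k_fold_splits(len(data), k):
        model = train_fn([data[i] for i in train_idx])
        scores.append(score_fn(model, [data[i] for i in val_idx]))
    return statistics.mean(scores)
```

As a toy usage, a "model" that just predicts the training mean can be cross-validated with `cross_validate(lambda d: statistics.mean(d), lambda m, d: -abs(m - statistics.mean(d)), data, k=5)`; a large gap between training and cross-validated scores on a real model signals overfitting.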