A crucial aspect of defining and optimizing any Machine Learning model is the right choice of the parameters that describe the network and the methodology. These parameters are collectively called hyperparameters to distinguish them from the weights of the network.
To a first approximation, the hyperparameters of any Machine Learning application can be chosen manually; however, such a choice does not guarantee the best model: perhaps training is stopped too soon, the learning rate is too low, or the network is not flexible enough.
To overcome the shortcomings of a manual choice we introduce an automatic hyperparameter scan. To that end we define a figure of merit tailored to avoid overlearning while obtaining the best possible model, and then run thousands of (simplified) fits to evaluate said figure of merit.
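The scan described above can be sketched, in very simplified form, as a random search over a discrete hyperparameter space: sample a configuration, run a (simplified) fit, evaluate the figure of merit, and keep the best candidate. The search space, the `figure_of_merit` placeholder, and all parameter names below are illustrative assumptions, not the actual implementation.

```python
import random

# Hypothetical search space; the real hyperparameters and ranges
# are assumptions made for illustration only.
SEARCH_SPACE = {
    "learning_rate": [1e-3, 1e-2, 1e-1],
    "hidden_units": [10, 25, 50],
    "epochs": [500, 1000, 2000],
}


def figure_of_merit(params):
    """Stand-in for a simplified fit.

    A real implementation would train the network with ``params`` and
    combine training and validation losses so that overlearning
    (validation loss drifting above training loss) is penalized.
    Here we use a toy analytic expression instead of an actual fit.
    """
    train_loss = 1.0 / (params["hidden_units"] * params["learning_rate"])
    # Toy overlearning penalty: long trainings with large learning
    # rates inflate the validation part of the figure of merit.
    val_loss = train_loss + 0.01 * params["epochs"] * params["learning_rate"]
    return val_loss


def random_scan(n_trials=1000, seed=0):
    """Evaluate many sampled configurations and keep the best one."""
    rng = random.Random(seed)
    best_params, best_fom = None, float("inf")
    for _ in range(n_trials):
        params = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        fom = figure_of_merit(params)
        if fom < best_fom:
            best_params, best_fom = params, fom
    return best_params, best_fom


best_params, best_fom = random_scan()
```

In practice such scans are usually driven by a dedicated library (e.g. a Bayesian or tree-structured optimizer rather than pure random sampling), but the control flow, sample, fit, score, select, is the same.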
The exact details of the hyperparameter optimization are further described in the code documentation.