In the ABLR algorithm (and DNGO as well, for that matter), we could create a validation set to prevent overfitting.
NOTE: I don't have any evidence of overfitting with ABLR (or DNGO for that matter).
It would be used for early stopping, so that we don't overfit on the training set. After stopping, we could either retrain from scratch on train + val for the number of epochs that early stopping selected, or simply add the validation set to the training set and keep training for a few more epochs (a sketch of both options is below).
Training is super fast, so either option would add very little overhead.
Originally posted by @lebrice in #10 (comment)
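Not from the repo, just a minimal sketch of what this could look like, assuming a plain full-batch PyTorch regression loop over the surrogate network. The `make_model` factory, the MSE loss, and all hyperparameters (`patience`, `lr`, `max_epochs`, `extra_epochs`) are placeholders, not anything from the actual ABLR/DNGO implementations.

```python
import copy

import torch
from torch import nn


def train_with_early_stopping(model, train_x, train_y, val_x, val_y,
                              max_epochs=1000, patience=20, lr=1e-3):
    """Train `model` on the train split, stopping once the validation loss
    hasn't improved for `patience` epochs. Restores the best weights and
    returns the epoch count at which the best validation loss occurred."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    best_val, best_epoch, best_state = float("inf"), 0, None
    for epoch in range(max_epochs):
        model.train()
        optimizer.zero_grad()
        loss = loss_fn(model(train_x), train_y)
        loss.backward()
        optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = loss_fn(model(val_x), val_y).item()
        if val_loss < best_val:
            best_val, best_epoch = val_loss, epoch
            best_state = copy.deepcopy(model.state_dict())
        elif epoch - best_epoch >= patience:
            break  # early stop: no improvement for `patience` epochs
    model.load_state_dict(best_state)
    return best_epoch + 1


# Option 1: retrain from scratch on train + val, for the epoch count
# that early stopping selected on the held-out split.
def retrain_on_all_data(make_model, train_x, train_y, val_x, val_y, lr=1e-3):
    n_epochs = train_with_early_stopping(
        make_model(), train_x, train_y, val_x, val_y, lr=lr)
    full_x = torch.cat([train_x, val_x])
    full_y = torch.cat([train_y, val_y])
    fresh = make_model()
    optimizer = torch.optim.Adam(fresh.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(n_epochs):
        optimizer.zero_grad()
        loss_fn(fresh(full_x), full_y).backward()
        optimizer.step()
    return fresh


# Option 2: keep the early-stopped weights, fold the validation set into
# the training set, and fine-tune for a handful of extra epochs.
def finetune_on_all_data(model, train_x, train_y, val_x, val_y,
                         extra_epochs=10, lr=1e-3):
    train_with_early_stopping(model, train_x, train_y, val_x, val_y, lr=lr)
    full_x = torch.cat([train_x, val_x])
    full_y = torch.cat([train_y, val_y])
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(extra_epochs):
        optimizer.zero_grad()
        loss_fn(model(full_x), full_y).backward()
        optimizer.step()
    return model
```

Option 1 redoes the whole optimization on all the data; option 2 is cheaper but introduces `extra_epochs` as a new knob. Since training is fast anyway, option 1 seems like the simpler default.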