How to keep the best performance model #23
Hi @JunqiZhao! In Keras you can use the callback named `ModelCheckpoint` to checkpoint the model and save its weights while monitoring a quantity, e.g. `val_loss` or `val_acc`. Later you can load the weights saved in `.hdf5` format using the `load_weights` function, passing the path of the weights file as its argument.
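As a minimal sketch of the approach described above, assuming `tf.keras` (the model, data, and filename here are illustrative, not from the original thread):

```python
# Sketch: save the best weights during training with ModelCheckpoint,
# then restore them afterwards. Model, data, and filename are illustrative.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.callbacks import ModelCheckpoint

model = Sequential([Input(shape=(4,)),
                    Dense(8, activation="relu"),
                    Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Write weights only when the monitored quantity (val_loss) improves.
# Newer Keras versions expect a ".weights.h5" suffix when
# save_weights_only=True; the thread's ".hdf5" also works on older tf.keras.
checkpoint = ModelCheckpoint("best.weights.h5",
                             monitor="val_loss",
                             save_best_only=True,
                             save_weights_only=True)

x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 2, size=(64, 1)).astype("float32")
model.fit(x, y, validation_split=0.25, epochs=3,
          callbacks=[checkpoint], verbose=0)

# Later: load the best checkpointed weights instead of the final ones.
model.load_weights("best.weights.h5")
```

With `save_best_only=True`, the file on disk always holds the weights from the best epoch so far, so loading it at the end recovers the best model even if later epochs got worse.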
Hi @deadskull7,
I usually have a habit of evaluating model performance by first plotting the learning curves, and each time I plot them I try to see which of the following patterns the plot matches.
The 4th one is quite different: it says that your model is better at the test data, which it hasn't even seen, which is quite suspicious. That should send you back to re-evaluate your earlier data-splitting method. I hope I answered you.
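The learning-curve comparison described above can be sketched as follows (the `history` values here are made up for illustration; in practice you would use the `History` object returned by `model.fit`):

```python
# Sketch: plot training vs. validation loss to compare against the usual
# learning-curve patterns. The history values below are illustrative stubs.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# In practice: history = model.fit(...).history
history = {"loss":     [0.90, 0.60, 0.45, 0.40],
           "val_loss": [0.95, 0.70, 0.60, 0.62]}

plt.plot(history["loss"], label="train loss")
plt.plot(history["val_loss"], label="val loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("learning_curve.png")
```

If the validation curve sits well below the training curve (the suspicious 4th case), it is worth re-checking how the data was split.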
Hi Guillaume,
Thanks for your great post; it has helped me a lot.
When training the RNN model, it reaches a very high performance in the middle of the training process, but the final performance after all the iterations is not the best. I am not sure whether this is normal. Is there any way to keep the best-performing model from the training process, instead of the final model after all the iterations?
Thanks!
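Besides checkpointing to a file, one related option (not specifically discussed in this thread) is Keras's `EarlyStopping` callback with `restore_best_weights=True`, which rolls the model back to its best epoch when training ends. A minimal sketch, with an illustrative model and data:

```python
# Sketch: EarlyStopping with restore_best_weights=True keeps the best
# model in memory. Model and data below are illustrative.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([Input(shape=(4,)),
                    Dense(8, activation="relu"),
                    Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stop when val_loss has not improved for 2 epochs, and restore the
# weights from the epoch with the best val_loss.
early_stop = EarlyStopping(monitor="val_loss", patience=2,
                           restore_best_weights=True)

x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 2, size=(64, 1)).astype("float32")
history = model.fit(x, y, validation_split=0.25, epochs=10,
                    callbacks=[early_stop], verbose=0)
```

Unlike `ModelCheckpoint`, this does not write anything to disk, so it only helps within a single training run.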