Evaluating checkpoints #290
Hi,
I'm training a model on a new English corpus. My question is: during training, how can I tell which checkpoint is best according to the evaluation set? I need to know the best model for decoding.
Comments
In my opinion, the validation loss is not a reliable indicator of training status, since teacher-forced inputs are used. You can hold out some data, run inference on the text, and compute the mel spectrum distance between the synthesized mel spectrum and the ground truth. This indicator is used in "Pre-Alignment Guided Attention for Improving Training Efficiency and Model Stability in End-to-End Speech Synthesis" and "Location-Relative Attention Mechanisms for Robust Long-Form Speech Synthesis".
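A minimal sketch of that metric, assuming librosa for feature extraction; the mel parameters (sr, n_fft, hop_length, n_mels) are illustrative placeholders and should be replaced with the values from your training config:

```python
import librosa
import numpy as np

def mel_db(path, sr=22050, n_fft=1024, hop_length=256, n_mels=80):
    # Log-mel spectrogram of a wav file, shape (n_mels, frames).
    # These parameters are assumed defaults, not this repo's settings.
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    return librosa.power_to_db(mel)

def mel_distance(ref_mel, syn_mel):
    # Mean per-frame Euclidean distance after DTW alignment, so the
    # reference and synthesized spectrograms may differ in length.
    _, wp = librosa.sequence.dtw(X=ref_mel, Y=syn_mel, metric='euclidean')
    return float(np.mean([np.linalg.norm(ref_mel[:, i] - syn_mel[:, j])
                          for i, j in wp]))
```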
Thanks @bfs18! Do you know of an implementation of it in Python?
You can use this Python package: https://github.com/MattShannon/mcd
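For reference, the quantity that package is built around is the standard mel cepstral distortion (MCD) in dB. A self-contained sketch of the formula, assuming two already-aligned cepstral sequences with the 0th (energy) coefficient removed:

```python
import numpy as np

# 10 / ln(10) * sqrt(2): the constant in the standard MCD-in-dB formula.
LOG_SPEC_DB_CONST = 10.0 / np.log(10.0) * np.sqrt(2.0)

def mcd(ref_cep, syn_cep):
    # Mean frame-wise mel cepstral distortion in dB for two (T, D)
    # cepstral sequences of equal length; DTW-align them first if the
    # synthesized utterance has a different number of frames.
    assert ref_cep.shape == syn_cep.shape
    diffs = ref_cep - syn_cep
    return float(np.mean(LOG_SPEC_DB_CONST *
                         np.sqrt(np.sum(diffs ** 2, axis=1))))
```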
Thanks, and how do I get reference and synthesized mel spectrograms from an audio and text pair? I think this should be included in the repository. I wonder how people get the best checkpoint with the current code...
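One way to obtain the pair, sketched under the same assumptions as above; `load_model`, `text_to_sequence`, and `model.inference` are hypothetical names standing in for this repository's actual inference entry points:

```python
import torch

# Reference mel: extract it from the ground-truth audio with the same
# feature parameters used during training (mel_db from the sketch above).
ref_mel = mel_db("wavs/val_001.wav")

# Synthesized mel: run the model in free-running inference mode, i.e.
# without teacher forcing. All names below are hypothetical placeholders.
model = load_model("checkpoints/checkpoint_50000.pt")
model.eval()
with torch.no_grad():
    sequence = text_to_sequence("The corresponding transcript.")
    # Transpose to (n_mels, frames) if your model emits (frames, n_mels).
    syn_mel = model.inference(sequence).cpu().numpy()

print(mel_distance(ref_mel, syn_mel))
```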
Use my branch. |
I've already finished the training phase and would rather not retrain. Is there a way to just evaluate held-out data?
Skip the training phase, and evaluate your model on validation set or test set. |
Sorry, but I don't know how |
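A minimal sketch of such a held-out evaluation loop, reusing the helpers above; `load_model` and `synthesize_mel` remain hypothetical placeholders for this repository's checkpoint-loading and inference code:

```python
import glob
import numpy as np

# Held-out (audio, transcript) pairs that were never seen in training.
held_out = [("wavs/val_001.wav", "First held-out sentence."),
            ("wavs/val_002.wav", "Second held-out sentence.")]

scores = {}
for ckpt in sorted(glob.glob("checkpoints/*.pt")):
    model = load_model(ckpt)                   # hypothetical loader
    dists = []
    for wav_path, text in held_out:
        ref_mel = mel_db(wav_path)
        syn_mel = synthesize_mel(model, text)  # hypothetical inference helper
        dists.append(mel_distance(ref_mel, syn_mel))
    scores[ckpt] = float(np.mean(dists))

# The checkpoint with the lowest mean mel distance on held-out data.
best = min(scores, key=scores.get)
print("Best checkpoint:", best, "->", scores[best])
```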
Closing due to inactivity. |