Issue with addition of new speakers #303
Hi, I'm trying to fine-tune a custom FastPitch model provided by AI4 Bharat on the OpenSLR dataset. It looks like they trained the model with 4 speakers; however, I would like to use a new speaker instead of one of the available speakers. So I formatted the custom dataset with the new speaker labels, added an ID for the new speaker in the .json file, and changed the number of speakers to 5 in the config file. I fine-tuned the weights with this setup for 100 epochs. Later, when I tried to run inference with the new checkpoint, I got the following error:

size mismatch for emb_g.weight: copying a param with shape torch.Size([4, 512]) from checkpoint, the shape in current model is torch.Size([5, 512]).

From this, it looks like the fine-tuned model is still tied to 4 speakers rather than 5. Is my understanding correct? Please let me know how to resolve this issue.
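For anyone hitting the same mismatch when starting fine-tuning: a minimal sketch, assuming a PyTorch workflow, of how a 4-speaker checkpoint can be loaded into a 5-speaker model by copying the pretrained rows of the speaker embedding. `SpeakerModel` is a hypothetical stand-in, not AI4 Bharat's actual model class; only `emb_g` and the 512-dim size come from the error above.

```python
import torch.nn as nn

class SpeakerModel(nn.Module):
    # Hypothetical stand-in for the real synthesis model; only the
    # speaker-embedding layer from the error message is modelled.
    def __init__(self, n_speakers, gin_channels=512):
        super().__init__()
        self.emb_g = nn.Embedding(n_speakers, gin_channels)

# Pretend this state_dict came from the pretrained 4-speaker release.
old_state = SpeakerModel(n_speakers=4).state_dict()

model = SpeakerModel(n_speakers=5)      # config now says 5 speakers
new_state = model.state_dict()

# Copy the 4 pretrained speaker rows; the randomly initialised 5th row
# is kept for the new speaker, so load_state_dict sees matching shapes.
old_emb = old_state["emb_g.weight"]     # torch.Size([4, 512])
new_state["emb_g.weight"][: old_emb.size(0)] = old_emb
model.load_state_dict(new_state)
print(model.emb_g.weight.shape)         # torch.Size([5, 512])
```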
Replies: 1 comment
-
I loaded the wrong checkpoint. The issue was resolved after loading the right checkpoint.
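A quick sanity check for this failure mode: inspect the speaker-embedding shape stored in a checkpoint to confirm it is the fine-tuned 5-speaker one and not the original 4-speaker release. The file name below is hypothetical; substitute your own checkpoint path.

```python
import torch

state = torch.load("finetuned_fastpitch.pth", map_location="cpu")
state = state.get("state_dict", state)  # some trainers nest weights under "state_dict"
# Fine-tuned checkpoint: torch.Size([5, 512]); original release: torch.Size([4, 512])
print(state["emb_g.weight"].shape)
```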