Hey, I was curious whether you have tried any methods for making multi-speaker VITS models with your encoder. Standard VITS seems to support multi-speaker training through an extra embedding layer that encodes the speaker ID and provides it to various downstream parts (all the parts that take g).
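For concreteness, here is a minimal sketch of what I mean (assumed names, not the actual VITS implementation): the speaker ID goes through an embedding layer to produce g, and g is passed to every sub-module that accepts it. `gin_channels` follows the VITS convention, and the conditioned block is just a stand-in for the flow / decoder / duration predictor.

```python
import torch
import torch.nn as nn


class ConditionedBlock(nn.Module):
    """Stand-in for a VITS sub-module (flow / decoder / duration predictor) that takes g."""

    def __init__(self, channels, gin_channels):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=1)
        # Project the speaker embedding onto the module's feature channels.
        self.cond = nn.Conv1d(gin_channels, channels, kernel_size=1)

    def forward(self, x, g=None):
        h = self.conv(x)
        if g is not None:
            h = h + self.cond(g)  # speaker conditioning broadcast over time
        return h


class MultiSpeakerSketch(nn.Module):
    def __init__(self, n_speakers, channels=192, gin_channels=256):
        super().__init__()
        # The extra embedding layer for the speaker ID.
        self.emb_g = nn.Embedding(n_speakers, gin_channels)
        self.block = ConditionedBlock(channels, gin_channels)

    def forward(self, x, sid):
        g = self.emb_g(sid).unsqueeze(-1)  # [batch, gin_channels, 1]
        return self.block(x, g=g)


if __name__ == "__main__":
    model = MultiSpeakerSketch(n_speakers=901)
    x = torch.randn(2, 192, 50)     # [batch, channels, frames]
    sid = torch.tensor([3, 7])      # integer speaker IDs
    print(model(x, sid).shape)      # torch.Size([2, 192, 50])
```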
Update: I tried this approach with the LibriTTS multi-speaker dataset (901 speakers) and it did NOT produce comprehensible speech. Let me know if you want to know more about the experiment. Here is a sample after 500k steps. Maybe this is what it usually sounds like around 500k? (batch size 64)
(GitHub doesn't support wav files, so I zipped them) Samples.zip
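(For context, VITS' multi-speaker loader expects filelists in the `path|speaker_id|text` format. The sketch below is a hypothetical helper for a LibriTTS-style directory layout, not the exact script I used; the paths and the speaker-to-ID mapping are assumptions for illustration only.)

```python
# Hypothetical helper: build a "path|speaker_id|text" filelist from a
# LibriTTS-style layout, i.e. <root>/<subset>/<speaker>/<chapter>/<utt>.wav
# with a matching <utt>.normalized.txt transcript next to each wav.
from pathlib import Path


def build_filelist(libritts_root, subset, out_path):
    root = Path(libritts_root) / subset
    speakers = sorted(p.name for p in root.iterdir() if p.is_dir())
    sid = {spk: i for i, spk in enumerate(speakers)}  # contiguous integer IDs

    lines = []
    for wav in sorted(root.rglob("*.wav")):
        txt = wav.parent / (wav.stem + ".normalized.txt")
        if not txt.exists():
            continue  # skip utterances without a transcript
        text = txt.read_text(encoding="utf-8").strip()
        speaker = wav.relative_to(root).parts[0]  # first directory level = speaker
        lines.append(f"{wav}|{sid[speaker]}|{text}")

    Path(out_path).write_text("\n".join(lines) + "\n", encoding="utf-8")
    return len(sid)  # should match the speaker count used by the model


if __name__ == "__main__":
    n = build_filelist("LibriTTS", "train-clean-360", "libritts_sid_text_train.txt")
    print(n, "speakers")
```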
Sorry, I do not have enough time to test the multi-speaker experiments. But I believe it could still work well. You can double-check with the experiment from this pull request.
Would using your XPhoneBERT encoder have much of an effect on this?