Replies: 11 comments
>>> byuns9334
[January 16, 2020, 9:58am]
Hi, I added a GravesAttention class to our custom Tacotron-GST code. When we train it, the attention weight vector (alpha_t) decreases over iterations and eventually becomes an all-zero vector. Can anyone help with this issue, or has anyone experienced it?
[This is an archived TTS discussion thread from discourse.mozilla.org/t/attention-weights-of-graves-attention-get-decreased-and-reach-zero-value-as-iteration-goes]
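For context, Graves-style attention builds the alignment alpha_t as a mixture of Gaussians over encoder positions, so if the predicted scale parameters blow up (tiny variance) or the mixture weights shrink, every position's weight can underflow to zero. A minimal NumPy sketch of this failure mode (an illustration, not the actual Tacotron-GST/Mozilla TTS code; the parameter names are assumptions):

```python
import numpy as np

def graves_attention_weights(w, beta, mu, num_encoder_steps):
    """Sketch of Graves GMM attention:
    alpha_t[j] = sum_k w[k] * exp(-beta[k] * (mu[k] - j)**2)
    where w are mixture weights, beta inverse variances, mu means.
    """
    j = np.arange(num_encoder_steps)  # encoder timestep positions
    alpha = np.zeros(num_encoder_steps)
    # each mixture component adds a Gaussian bump centred at mu[k]
    for wk, bk, mk in zip(w, beta, mu):
        alpha += wk * np.exp(-bk * (mk - j) ** 2)
    return alpha

# healthy parameters: clear attention mass near mu
alpha_ok = graves_attention_weights(w=[1.0], beta=[0.1], mu=[5.0],
                                    num_encoder_steps=20)
print(alpha_ok.max())  # close to 1.0

# degenerate parameters: a huge beta (near-zero variance) with mu
# between grid positions drives alpha toward an all-zero vector,
# which is the symptom described in the question above
alpha_bad = graves_attention_weights(w=[1.0], beta=[1e4], mu=[5.5],
                                     num_encoder_steps=20)
print(alpha_bad.max())  # effectively 0.0
```

In the exponential form these networks usually use (beta = exp(beta_hat), w = softmax(w_hat)), an unbounded beta_hat or collapsing w_hat during training produces exactly this all-zero alpha_t, so inspecting those intermediate values is a reasonable first debugging step.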