Maybe I'm not interpreting the model(...) function correctly, but here is what I see:
During training, both the correct and the wrong ROCStories endings are fed into the decoder. Both go through the embedding + decoder and then into the sparse_softmax_cross_entropy function.
Doesn't that mean the model also learns to generate the wrong sentences, or am I missing something?
My intuition would be to set the LM-loss masks to 0 for the wrong sentences.
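To make the suggestion concrete, here is a minimal sketch of what I mean by masking, written in plain numpy rather than TensorFlow (the function name `masked_lm_loss` and the shapes are my own illustration, not the repo's actual code): the per-token cross-entropy is multiplied by a 0/1 mask, so tokens belonging to the wrong candidate ending contribute nothing to the LM loss.

```python
import numpy as np

def masked_lm_loss(logits, targets, mask):
    """Per-token cross-entropy, zeroed where mask == 0.

    logits: (T, V) unnormalized scores, targets: (T,) token ids,
    mask: (T,) with 1 for tokens that should contribute to the LM loss
    (the correct story) and 0 for tokens that should not (the wrong ending).
    """
    # numerically stable log-softmax over the vocabulary axis
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    per_token = -log_probs[np.arange(len(targets)), targets]
    per_token = per_token * mask  # wrong-ending tokens contribute nothing
    return per_token.sum() / max(mask.sum(), 1.0)

rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 10))
targets = rng.integers(0, 10, size=6)
mask = np.array([1, 1, 1, 0, 0, 0], dtype=float)  # last 3 tokens = wrong ending
print(masked_lm_loss(logits, targets, mask))
```

With this masking, perturbing the logits at the wrong-ending positions leaves the loss unchanged, which is exactly the behavior I would have expected during training.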
Thanks and regards