Hi. The TF Hub ELMo module (https://tfhub.dev/google/elmo/2) provides an output like the one you provide:
elmo: the weighted sum of the 3 layers, where the weights are trainable. This tensor has shape [batch_size, max_length, 1024]
I believe this is the equivalent of your
-1 for an average of 3 layers. (default)
I want to take the output you provide (elmo) and turn it into the module's default sentence embedding:
default: a fixed mean-pooling of all contextualized word representations with shape [batch_size, 1024].
How do I do this fixed mean pooling?
How do I get a sentence embedding from your output?
What I've tried produces different outputs, but none of them has the shape (2, 1024) that I want (embeddings for 2 sentences).
How can I do max pooling to reach an output of shape (2, 1024)?
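For what it's worth, here is a minimal sketch of both poolings in NumPy, assuming the elmo output tensor has shape [batch_size, max_length, 1024] and that padded positions should be excluded. Random data stands in for the actual module output, and `seq_lengths` is a hypothetical per-sentence token count you would get from your tokenizer:

```python
import numpy as np

# Stand-in for the `elmo` output tensor: [batch_size, max_length, 1024].
# In practice this comes from the TF Hub module, not random data.
batch_size, max_length, dim = 2, 6, 1024
rng = np.random.default_rng(0)
elmo_output = rng.standard_normal((batch_size, max_length, dim))

# Actual token counts per sentence (shorter sentences are zero-padded).
seq_lengths = np.array([4, 6])

# Boolean mask of valid (non-padding) positions: [batch_size, max_length].
mask = np.arange(max_length)[None, :] < seq_lengths[:, None]

# Fixed mean pooling: zero out padding, sum over tokens,
# divide by each sentence's true length -> [batch_size, 1024].
masked = elmo_output * mask[:, :, None]
sentence_mean = masked.sum(axis=1) / seq_lengths[:, None]

# Max pooling variant: replace padding with -inf so it never wins the max.
neg_inf_padded = np.where(mask[:, :, None], elmo_output, -np.inf)
sentence_max = neg_inf_padded.max(axis=1)  # [batch_size, 1024]

print(sentence_mean.shape, sentence_max.shape)  # (2, 1024) (2, 1024)
```

In TensorFlow the same idea is usually written with `tf.sequence_mask` and `tf.reduce_sum` / `tf.reduce_max`; the masking step matters because a plain mean over `max_length` would dilute short sentences with padding vectors.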
Thanks!