Hello,
I found that Layer Normalization is placed after the self-attention and the residual connection in `TransformerEncoderLayer2D` (vedastr/vedastr/models/bodies/sequences/transformer/unit/encoder.py, lines 60 to 63 at commit d617764).
I think this is the common ordering in basic Transformers (post-norm); however, the official implementation of SATRN places Layer Normalization before the self-attention block (pre-norm). Do you have any plans to align your code with the original implementation, or is there a rationale behind the current choice?
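For reference, here is a minimal sketch of the two orderings being discussed. This is not the vedastr code: the class names, hyperparameters, and the use of 1D `nn.MultiheadAttention` are my own simplifications (the actual `TransformerEncoderLayer2D` operates on 2D feature maps and also includes a feed-forward sub-layer, which is omitted here for brevity).

```python
import torch
import torch.nn as nn


class PostNormSelfAttention(nn.Module):
    """Post-norm ordering (original Transformer style, as currently in vedastr):
    x -> self-attention -> residual add -> LayerNorm."""

    def __init__(self, d_model=512, nhead=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)
        return self.norm(x + attn_out)  # normalize after the residual connection


class PreNormSelfAttention(nn.Module):
    """Pre-norm ordering (as in the official SATRN implementation):
    x -> LayerNorm -> self-attention -> residual add."""

    def __init__(self, d_model=512, nhead=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        y = self.norm(x)  # normalize before the attention sub-layer
        attn_out, _ = self.attn(y, y, y)
        return x + attn_out  # residual connection around the normalized sub-layer


if __name__ == "__main__":
    x = torch.randn(2, 25, 512)  # (batch, sequence, d_model)
    print(PostNormSelfAttention()(x).shape)
    print(PreNormSelfAttention()(x).shape)
```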