is KLD calculation correct? #3
Hi, yes, it looks like a mistake. Thanks for spotting it. I will also change … Best,
I will fix it in several days, when I have time to make sure everything still works.
No problem! I think the reconstruction losses in the paper are similarly given as norms, while in the code they are means. And the latent-space loss is said to be L2 but appears to actually be cosine.
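For unit-norm vectors, squared L2 distance and cosine similarity are monotonically related, which may be why the two readings coincide; a quick check (illustrative tensors only):

```python
import torch

# Two batches of unit-norm vectors, as the encoder produces.
x = torch.nn.functional.normalize(torch.randn(4, 8), dim=1)
y = torch.nn.functional.normalize(torch.randn(4, 8), dim=1)

sq_l2 = (x - y).pow(2).sum(dim=1)   # squared L2 distance
cos = (x * y).sum(dim=1)            # cosine similarity (norms are 1)

# For unit-norm vectors: ||x - y||^2 = 2 - 2 * cos(x, y)
assert torch.allclose(sq_l2, 2 - 2 * cos, atol=1e-6)
```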
The encoder transforms all output vectors to have a norm of 1 (i.e., it maps them to the unit sphere). But if that is the case, a batch of such vectors cannot match the unit Gaussian demanded by the KL loss, even when perfectly distributed around the sphere. Have I missed something, or shouldn't the loss be computed with the standard deviation of the transformed vectors, rather than 1?
The KL divergence will in fact not be zero in the perfect case, but when the KL is minimal, Q is approximately uniform on the sphere, which is what we want.
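As a quick numeric illustration of both comments above (a sketch, assuming latents normalized to unit norm): points uniform on the unit sphere in R^d have per-dimension mean ≈ 0 and variance ≈ 1/d, so the batch statistics can never match N(0, I) and the KL has a strictly positive floor:

```python
import torch

d, n = 64, 100_000
# Normalized Gaussians are exactly uniform on the unit sphere in R^d.
z = torch.nn.functional.normalize(torch.randn(n, d), dim=1)

mu = z.mean(dim=0)                  # ~0 in every dimension
var = z.var(dim=0, unbiased=False)  # ~1/d in every dimension, not 1

# KL( N(mu, var) || N(0, 1) ) per dimension, summed over dimensions;
# it stays well above zero because var ~= 1/d rather than 1.
kl = (-0.5 * var.log() + 0.5 * (var + mu.pow(2) - 1)).sum()
print(var.mean().item(), kl.item())
```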
AGE/src/losses.py, lines 38 to 41 at commit 0915760
KLD appears to use the variance in place of the standard deviation: utils.var() computes the variance as the squared distance from the mean, and then it is squared again in the KLN01Loss module. Should it be (in the default 'qp' direction) something like the sketch below?
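A minimal sketch of the corrected 'qp' computation under that reading, using the closed form KL(N(μ, σ²) ‖ N(0, 1)) = −log σ + (σ² + μ² − 1)/2 and keeping the variance as a variance so it is never squared a second time (variable names are illustrative, not the repository's exact code):

```python
import torch

def kl_qp(samples: torch.Tensor) -> torch.Tensor:
    """Sketch: KL( N(mu, sigma^2) || N(0, I) ) from batch statistics.

    Closed form per dimension: -log(sigma) + (sigma^2 + mu^2 - 1) / 2,
    written directly in terms of the variance sigma^2 so that the
    variance is never squared a second time.
    """
    samples = samples.view(samples.size(0), -1)
    mu = samples.mean(dim=0)
    var = samples.var(dim=0, unbiased=False)  # sigma^2, like utils.var()
    kl_per_dim = -0.5 * var.log() + 0.5 * (var + mu.pow(2) - 1)
    return kl_per_dim.sum()  # the paper's sum; the repo takes a mean
```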
(Additionally, the paper gives the KLD as a sum, but here it is a mean, which changes the meaning of the hyperparameters weighting the reconstruction losses.)
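Concretely, for a d-dimensional latent the sum and mean conventions differ by exactly a factor of d, silently rescaling the KL weight relative to the reconstruction terms (a sketch with an illustrative dimensionality):

```python
import torch

d = 64                        # hypothetical latent dimensionality
kld_per_dim = torch.rand(d)   # stand-in for per-dimension KL terms

# The sum is exactly d times the mean ...
assert torch.allclose(kld_per_dim.sum(), d * kld_per_dim.mean())

# ... so  w * kld.mean() + recon  behaves like  (w / d) * kld.sum() + recon,
# i.e. the effective KL weight relative to the paper is divided by d.
```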