
vtsne theoretical basis #2

Open
bhomass opened this issue Jul 24, 2018 · 1 comment

Comments


bhomass commented Jul 24, 2018

It seems vtsne is just t-SNE with an additional loss derived from reparameterization, something reminiscent of a variational autoencoder. Is there any paper explaining the theoretical basis?


bhomass commented Jul 25, 2018

I think I get the point. You'd like to apply a 2D isotropic Gaussian as the prior over the embedded points. So, in addition to minimizing the KL divergence between the similarity-based distributions of the original and embedded vectors, you'd like the final t-SNE embedding to form k Gaussian-like clusters. Intuitively, that would be nice. It's always easier to focus on clustered points when trying to find their latent characteristics. We all know that the original t-SNE can sometimes give misleading formations. It would be nice to apply this technique to some of the tough cases to verify that it leads to more accurate interpretations.
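
To make that reading concrete, here is a minimal sketch (not the actual vtsne code) of the combined objective as I understand it: the usual t-SNE KL(P || Q) term, plus a VAE-style KL term that pulls each reparameterized embedded point toward an isotropic 2D Gaussian prior. The function name and tensor shapes (`mu`, `logvar`, `pij`) are illustrative assumptions, not names from the repo.

```python
import torch

def tsne_vae_loss(mu, logvar, pij, eps=1e-12):
    """mu, logvar: (n, 2) variational parameters of each embedded point.
    pij: (n, n) symmetrized high-dimensional similarities P (zero diagonal)."""
    # Reparameterization trick: sample y ~ N(mu, sigma^2) differentiably.
    std = torch.exp(0.5 * logvar)
    y = mu + std * torch.randn_like(std)

    # Low-dimensional similarities Q with the Student-t kernel used by t-SNE.
    dist2 = torch.cdist(y, y) ** 2
    num = 1.0 / (1.0 + dist2)
    num.fill_diagonal_(0.0)
    qij = num / num.sum()

    # Standard t-SNE objective: KL(P || Q).
    kl_pq = (pij * (torch.log(pij + eps) - torch.log(qij + eps))).sum()

    # VAE-style regularizer: KL(N(mu, sigma^2) || N(0, I)) per point,
    # encouraging the embedding to look like an isotropic 2D Gaussian.
    kl_prior = -0.5 * (1.0 + logvar - mu ** 2 - logvar.exp()).sum()

    return kl_pq + kl_prior
```

Training would then just be backprop through this loss, with one (mu, logvar) pair per data point (e.g. stored as embedding tables), which is where it departs from plain t-SNE.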
