Inquiry about how to obtain visualizations of samples from spatial-VAEs #1
Hello! Thank you for taking the time to read this!

Your work on spatial-VAE is very impressive. I really appreciate that you released your code, and I've managed to run it (train_mnist.py) and got reasonable values for ELBO, Error, and KL. Besides, I think the animations (GIFs) of the learned motions of different bio-particles in the README.md are very helpful for novices (like me) to understand the main idea of your paper. I wonder whether I could also generate similar images (like each frame in your GIFs) using my own dataset? If so, would you please give me some suggestions on how to achieve that? Thank you in advance!

Comments
Of course, I'm glad to hear the code release is helpful. To generate images from the model, you need to select some point in the latent space and decode it with the generative network. I've been meaning to add a Jupyter notebook to this repo with a code sample. Hopefully, I'll be able to get to it sometime this week.
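Until that notebook exists, here is a rough sketch of the idea in Python (untested; the checkpoint path is hypothetical, and z_dim, the image size, and the exact SpatialGenerator call signature are assumptions, so check spatial_vae/models.py and the training script):

```python
import numpy as np
import torch

# Hypothetical checkpoint path; the training scripts save the generator
# module with torch.save, so torch.load restores it directly.
p_net = torch.load('pretrained_generator.sav')
p_net.eval()

n = m = 28  # output image size (MNIST); other resolutions work too,
            # since the generator maps coordinates to pixel values
z_dim = 2   # must match the z dimension the model was trained with

# Coordinate grid over [-1, 1] x [-1, 1], one (x, y) pair per pixel
xgrid, ygrid = np.meshgrid(np.linspace(-1, 1, m), np.linspace(-1, 1, n))
x_coord = np.stack([xgrid.ravel(), ygrid.ravel()], axis=1)
x_coord = torch.from_numpy(x_coord).float().unsqueeze(0)  # (1, n*m, 2)

# Sample the unstructured latent variables from their standard normal prior
z = torch.randn(1, z_dim)

with torch.no_grad():
    y = p_net(x_coord, z)  # per-pixel values, roughly shape (1, n*m, 1)
    # if the model was trained with a logits-based loss, map to intensities:
    # y = torch.sigmoid(y)
image = y.view(n, m).cpu().numpy()  # reshape the pixel values into an image
```

Sweeping z along a line, or interpolating between two sampled points, should give frames like the ones in the README GIFs.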
Thank you very much for your prompt reply! I appreciate your plan to add a code example explaining image generation. That said, when it comes to selecting points in the latent space, I am a little confused about how to do that. Looking forward to your kind reply!
I was referring to the unstructured latent variables with z. These are set to have a standard normal prior by the training procedure. If you want to perform inference on z for some specific image, you can use the inference network, but this is not required to generate images with the generative model. The rotation and translation parameters are separate, structured latent variables. Their structure is imposed through the generative network, not the inference network, by transforming the coordinates, which are then decoded by the spatial generator network. This is described in the paper, but see the "if rotate" and "if translate" sections of the eval_minibatch function to see how this is done in the code.
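For illustration, here is how those transformed coordinates could look at generation time, continuing from the earlier sketch (same p_net, x_coord, z, n, and m; again untested, and the rotation sign convention may be mirrored relative to eval_minibatch):

```python
import numpy as np
import torch

theta = torch.tensor([np.pi / 4])  # rotation angle in radians
dx = torch.tensor([[0.1, -0.2]])   # translation in [-1, 1] grid units

# 2x2 rotation matrix applied to every (x, y) coordinate
rot = torch.stack([torch.cos(theta), -torch.sin(theta),
                   torch.sin(theta),  torch.cos(theta)], dim=1).view(1, 2, 2)
x_transformed = torch.bmm(x_coord, rot) + dx.unsqueeze(1)  # rotate, then shift

with torch.no_grad():
    y = p_net(x_transformed, z)  # same z, rotated and translated rendering
image = y.view(n, m).cpu().numpy()
```

Varying theta over a sequence of frames while holding z fixed is exactly the kind of sweep that produces rotation animations.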
I see. Thank you very much for your clarification!