
how to convert sketch into fixed_z (.pth file) when running generate.py #27

Open
jingnian-yxq opened this issue Mar 13, 2023 · 4 comments

Comments

@jingnian-yxq

I tried pix2latent to get fixed_z (a .pth file), but ran into several problems:

  1. After running invert_stylegan2_*.py, a .npy file is saved for the variables, but how can I get a .pth file?
  2. The stylegan2 repo is set up for 384x512 images, while my sketch is 256x256, so it reports many dimension-mismatch errors. I don't know how to fix this.

Thanks a lot, and looking forward to your response.
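For context, the .npy-to-.pth conversion asked about in item 1 only takes a few lines; the file names and the (1, 512) latent shape below are assumptions for illustration, not what invert_stylegan2_*.py necessarily produces:

```python
import numpy as np
import torch

# Stand-in for the .npy file that the inversion script writes; the file name
# "variables.npy" and the (1, 512) latent shape are assumptions.
np.save("variables.npy", np.random.randn(1, 512).astype(np.float32))

# Conversion: load the NumPy array, wrap it in a torch tensor, save as .pth.
z = torch.from_numpy(np.load("variables.npy"))
torch.save(z, "fixed_z.pth")

print(torch.load("fixed_z.pth").shape)  # torch.Size([1, 512])
```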
@PeterWang512
Owner

I think pix2latent is more suitable when your input is an RGB image. The idea is to get a latent z that captures the texture, color, and background of the image, and then edit the shape by changing the model weights. Hope this helps!
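A minimal sketch of the latent-projection idea described above: optimize z so the generator's output reconstructs a target RGB image. The generator here is a toy linear stand-in, not the repo's StyleGAN2, and this omits pix2latent's actual optimization machinery:

```python
import torch

def invert(generator, target, steps=200, lr=0.05):
    """Optimize a latent z so generator(z) reconstructs the target image."""
    z = torch.randn(1, 512, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(z), target)
        loss.backward()
        opt.step()
    return z.detach(), loss.item()

# Toy stand-in generator and target, just to show the loop converging.
g = torch.nn.Linear(512, 3 * 16 * 16)
target = g(torch.randn(1, 512)).detach()
z_hat, final_loss = invert(g, target)
```

The recovered z_hat then plays the role of fixed_z: it captures the appearance of the target, while shape edits come from the model weights.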

@jingnian-yxq
Author

Thanks for your reply! I am wondering how I can generate an image from a given sketch input by running generate.py, since the samples are generated without any user sketch input. I looked through some issues here and found pix2latent mentioned as a way to get fixed_z for the input of generate.py, but I still don't know how to obtain it, in terms of both the file type and the image size :(

@PeterWang512
Owner

Our work focuses on creating a model based on a sketch, so you'll need to train a model with the sketch. Instructions are in this section: https://github.com/PeterWang512/GANSketching#model-training

After that, you will have a model that can generate an unlimited number of images with the same shape and pose as the sketch. Hope this helps!
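For reference, once the model is trained, new samples come from drawing fresh z vectors. A batch of them saved as a .pth file can serve as the fixed_z input mentioned in this thread; that generate.py accepts such a file, and the 512-dim latent size, are assumptions here:

```python
import torch

# Assumption: generate.py can take a .pth file of pre-sampled latent codes
# (the "fixed_z" discussed in this thread). Save a batch of 8 random z
# vectors using StyleGAN2's usual 512-dim style space.
fixed_z = torch.randn(8, 512)
torch.save(fixed_z, "fixed_z.pth")

# Each row yields one image; new random z values give new samples that
# share the shape and pose learned from the training sketch.
print(torch.load("fixed_z.pth").shape)  # torch.Size([8, 512])
```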

@jingnian-yxq
Author

I'm confused. I have completed model training with my sketch. Once training is done, how can I generate images from another sketch (not one of the training sketches) by applying the trained model? That is what I actually want to do.
