aa1234241/vqgan

A bug-fixed version of Dominic Rampas's VQGAN implementation. Many thanks to the original repo.

Update

The original implementation has several bugs:

  • Visualization error, as shown in the example above. This has been corrected.
  • Perceptual loss error: the NetLinLayer modules did not have the right names, so the pre-trained perceptual model failed to load its weights into them. This is fixed by replacing the module with the official VQGAN code.
  • The decoder in the VQGAN model was also incorrect; both the encoder and decoder are replaced with the official VQGAN code. For the GAN loss, the discriminator should not start too early: the previous disc_start value of 10000 is replaced with 100000.
  • The disc_factor was also too large; the previous value of 1 is changed to 0.2. I also find that the GAN loss becomes smaller as training progresses, and the adaptive weight (lambda) converges to a very small value, around 0.02, after 200 epochs.
  • The original codebook code has no normalization operation, which causes many codebook entries to go unused during training. Adding embedding normalization makes the codebook usage rate grow significantly; a minimal sketch is shown after this list.
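The sketch below illustrates the normalization fix in isolation. It is a minimal PyTorch example, not this repo's exact API: the class name NormalizedCodebook, the argument names, and the adopt_disc_factor helper are all illustrative assumptions. Both the encoder outputs and the codebook entries are L2-normalized before the nearest-neighbour lookup, which tends to spread usage across more codebook entries; the small helper at the end shows the disc_start/disc_factor gating mentioned above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NormalizedCodebook(nn.Module):
        # Illustrative sketch only; names and defaults are assumptions, not this repo's API.
        def __init__(self, num_codebook_vectors=1024, latent_dim=256, beta=0.25):
            super().__init__()
            self.embedding = nn.Embedding(num_codebook_vectors, latent_dim)
            self.embedding.weight.data.uniform_(-1.0 / num_codebook_vectors, 1.0 / num_codebook_vectors)
            self.beta = beta

        def forward(self, z):
            # z: (B, C, H, W) encoder output
            z = z.permute(0, 2, 3, 1).contiguous()
            z_norm = F.normalize(z, dim=-1)                        # normalize encoder outputs
            emb_norm = F.normalize(self.embedding.weight, dim=1)   # normalize codebook entries

            flat = z_norm.view(-1, z_norm.shape[-1])
            distances = torch.cdist(flat, emb_norm)                # distances on the unit sphere
            indices = torch.argmin(distances, dim=1)
            z_q = emb_norm[indices].view(z_norm.shape)

            # Standard VQ losses (codebook + commitment), computed in normalized space.
            loss = torch.mean((z_q - z_norm.detach()) ** 2) + \
                   self.beta * torch.mean((z_q.detach() - z_norm) ** 2)

            # Straight-through estimator so gradients flow back to the encoder.
            z_q = z_norm + (z_q - z_norm).detach()
            z_q = z_q.permute(0, 3, 1, 2).contiguous()
            return z_q, indices, loss

    def adopt_disc_factor(disc_factor, step, threshold):
        # Keep the discriminator loss switched off until `threshold` steps have passed,
        # matching the values suggested above (disc_start = 100000, disc_factor = 0.2).
        return disc_factor if step >= threshold else 0.0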

My results are shown below. The top row shows the original images, and the bottom row shows the reconstructed images.

miniGPT-generated images.

Note:

Code Tutorial + Implementation Tutorial


VQGAN

Vector Quantized Generative Adversarial Networks (VQGAN) is a generative model for image modeling. It was introduced in Taming Transformers for High-Resolution Image Synthesis. The concept is built on two stages. The first stage learns in an autoencoder-like fashion: images are encoded into a low-dimensional latent space, which is then vector-quantized using a learned codebook. The quantized latent vectors are projected back to the original image space by a decoder. Encoder and decoder are fully convolutional. The second stage learns a transformer over the latent space. Over the course of training it learns which codebook vectors go together and which do not. This can then be used autoregressively to generate previously unseen images from the data distribution.
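As a rough sketch of how the two stages fit together, the snippet below walks through the pipeline in PyTorch-style pseudocode. The names vqgan, transformer, encode, decode, and codebook_lookup are illustrative placeholders for the trained first- and second-stage models, not this repo's actual interface; only the sampling loop is spelled out.

    import torch

    # Stage 1 (assumed interface): encode an image batch to quantized latents
    # and reconstruct it with the decoder.
    #   z_q, indices, _ = vqgan.encode(images)      # indices: codebook ids per spatial position
    #   reconstruction = vqgan.decode(z_q)

    # Stage 2: sample codebook indices autoregressively with the transformer,
    # then decode the sampled indices back to pixels.
    def sample_tokens(transformer, sos_tokens, seq_len, temperature=1.0):
        tokens = sos_tokens                            # (B, 1) start-of-sequence tokens
        for _ in range(seq_len):
            logits = transformer(tokens)               # (B, T, vocab) next-token logits
            probs = torch.softmax(logits[:, -1] / temperature, dim=-1)
            next_token = torch.multinomial(probs, num_samples=1)
            tokens = torch.cat([tokens, next_token], dim=1)
        return tokens[:, 1:]                           # drop the start token

    #   new_indices = sample_tokens(transformer, sos_tokens, seq_len=256)
    #   new_images = vqgan.decode(vqgan.codebook_lookup(new_indices))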

Results for First Stage (Reconstruction):

Epoch 1:

Epoch 50:

Results for Second Stage (Generating new Images):

Left: Original | Middle Left: Reconstruction | Middle Right: Completion | Right: New Image

Epoch 1:

Epoch 100:

Note: Let the model train for even longer to get better results.


Train VQGAN on your own data:

Training First Stage

  1. (optional) Configure Hyperparameters in training_vqgan.py
  2. Set the path to your dataset in training_vqgan.py (see the example below)
  3. python training_vqgan.py
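For step 2, the snippet below is a hypothetical illustration of the kind of line to adjust, assuming the script exposes its hyperparameters through argparse as the original implementation does; the actual argument name and default may differ, so check training_vqgan.py itself.

    import argparse

    parser = argparse.ArgumentParser(description="VQGAN first-stage training")
    # Hypothetical example: point the dataset argument at your own image folder.
    parser.add_argument('--dataset-path', type=str, default='/path/to/your/images',
                        help='folder containing the training images')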

Training Second Stage

  1. (optional) Configure Hyperparameters in training_transformer.py
  2. Set path to dataset in training_transformer.py
  3. python training_transformer.py

Citation

@misc{esser2021taming,
      title={Taming Transformers for High-Resolution Image Synthesis}, 
      author={Patrick Esser and Robin Rombach and Björn Ommer},
      year={2021},
      eprint={2012.09841},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
