Implementation of the DCGAN paper with a Bernoulli distribution used as the input noise vector, plus slight modifications to train it better and avoid mode collapse. The results were obtained on an anime dataset scraped from the danbooru.donmai.us website, using 21k images as test samples and training for 50 epochs with learning rate = 0.002. Many small modifications have been made to improve GAN training.
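The repo itself presumably builds on a deep-learning framework, but the noise-sampling idea is framework-agnostic. A minimal NumPy sketch of drawing a Bernoulli noise batch versus the usual Gaussian noise (the function name, `p=0.5`, and the 100-dim latent size are my assumptions, not taken from this code base):

```python
import numpy as np

def sample_noise(batch_size, dim=100, kind="bernoulli", p=0.5, rng=None):
    """Sample an input noise batch z for the Generator.

    kind="bernoulli": each entry is 0 or 1 with P(1) = p.
    kind="gaussian":  standard normal entries (the common DCGAN default).
    """
    rng = rng or np.random.default_rng()
    if kind == "bernoulli":
        # Draw uniforms and threshold at p to get 0/1 entries.
        return (rng.random((batch_size, dim)) < p).astype(np.float32)
    return rng.standard_normal((batch_size, dim)).astype(np.float32)

z = sample_noise(4, kind="bernoulli")   # shape (4, 100), entries in {0, 1}
```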
- GANs are really hard to train; try at your own risk.
- DCGAN generally works well. Simply adding fully-connected layers causes problems, and more layers in the Generator yields better images, in the sense that the Generator should be more powerful than the Discriminator.
- Adding noise to the Discriminator's inputs and labels helps stabilize training.
- With different input and output resolutions (64x64 vs 96x96), there is no obvious difference during training, and the generated images are also very similar.
- Binary noise as G's input amazingly works, but the images are not as good as those generated with Gaussian noise.
- For additional GAN training tips, see @soumith's ganhacks.
- Credits to jayleicn and carpedm20
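The tip about noising the Discriminator's inputs and labels can be sketched as follows. This is not the repo's actual code; the function names, noise scale `sigma=0.1`, smoothing ranges, and flip probability are illustrative assumptions:

```python
import numpy as np

def noisy_inputs(images, sigma=0.1, rng=None):
    """Instance noise: add small Gaussian noise to real/fake image batches
    before feeding them to the Discriminator."""
    rng = rng or np.random.default_rng()
    return images + sigma * rng.standard_normal(images.shape)

def noisy_labels(batch_size, real=True, flip_p=0.05, rng=None):
    """Soft labels (0.8-1.0 for real, 0.0-0.2 for fake) with occasional
    label flips, instead of hard 1/0 targets."""
    rng = rng or np.random.default_rng()
    if real:
        labels = rng.uniform(0.8, 1.0, batch_size)
    else:
        labels = rng.uniform(0.0, 0.2, batch_size)
    flip = rng.random(batch_size) < flip_p   # randomly flip a few labels
    labels[flip] = 1.0 - labels[flip]
    return labels
```

Both tricks blur the boundary between the real and fake distributions, which keeps the Discriminator from overpowering the Generator early in training.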