Commit
commit 602d95b (1 parent: a357ea4)
Showing 1 changed file with 4 additions and 4 deletions.
@@ -1,11 +1,11 @@
# DCGAN
- Implementation of the DCGAN paper with a Bernoulli distribution used as the input noise vector and slight modifications to train it better and avoid mode collapse. The results were obtained on an anime dataset scraped from the doonami.ru website, using 21k images as test samples and training for 50 epochs with a learning rate of 0.002. Many small modifications have been made to improve GAN training.
+ Implementation of the DCGAN paper with a Bernoulli distribution used as the input noise vector and slight modifications to train it better and avoid mode collapse. The results were obtained on an anime dataset scraped from the danbooru.donmai.us website, using 21k images as test samples and training for 50 epochs with a learning rate of 0.002. Many small modifications have been made to improve GAN training.
### Things I've learned
1. GANs are really hard to train; try at your own risk.
- 2. DCGAN generally works well; simply adding fully-connected layers causes problems, and more layers for G yield better images, in the sense that G should be more powerful than D.
- 4. Adding noise to D's inputs and labels helps stabilize training.
- 5. Using different input and generated resolutions (64x64 vs 96x96) makes no obvious difference during training; the generated images are also very similar.
+ 2. DCGAN generally works well; simply adding fully-connected layers causes problems, and more layers for the Generator yield better images, in the sense that the Generator should be more powerful than the Discriminator.
+ 4. Adding noise to the Discriminator's inputs and labels helps stabilize training.
+ 5. Using different input and output resolutions (64x64 vs 96x96) makes no obvious difference during training; the generated images are also very similar.
6. Binary noise as G's input amazingly works, but the images are not as good as those with Gaussian noise.
7. For some additional GAN tips, see @soumith's [ganhacks](https://github.com/soumith/ganhacks).
8. Credits to [jayleicn](https://github.com/jayleicn) and [carpedm20](https://github.com/carpedm20/DCGAN-tensorflow).
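
A few sketches to illustrate the points above. The repository's actual code is not shown in this diff, so these are hedged, PyTorch-style examples; every name, dimension, and hyperparameter in them is an assumption rather than the project's real implementation.

First, the Bernoulli (binary) input noise mentioned in the description and in point 6. A minimal sketch, assuming a 100-dimensional latent vector shaped for a convolutional Generator and a rescaling of the binary samples to {-1, +1}:

```python
import torch

def sample_noise(batch_size: int, z_dim: int = 100, kind: str = "bernoulli") -> torch.Tensor:
    """Sample an input noise batch for the Generator.

    kind="gaussian"  -> standard DCGAN N(0, 1) noise
    kind="bernoulli" -> binary noise as described in the README,
                        rescaled to {-1, +1} (an assumption here)
    """
    shape = (batch_size, z_dim, 1, 1)  # 4-D so it can feed a conv generator
    if kind == "gaussian":
        return torch.randn(shape)
    # Each entry is 1 with probability 0.5, else 0.
    z = torch.bernoulli(torch.full(shape, 0.5))
    return z * 2.0 - 1.0  # map {0, 1} -> {-1, +1}
```

A hypothetical call would be `fake = G(sample_noise(64, kind="bernoulli"))`; switching to `kind="gaussian"` reproduces the standard DCGAN setup for comparison.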
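
Point 2 suggests making the Generator more powerful than the Discriminator. Purely as an illustration (this is not the repository's architecture), the sketch below gives G six convolutional layers for a 3x64x64 output while D keeps the usual five:

```python
import torch.nn as nn

def up_block(in_c, out_c, kernel=4, stride=2, padding=1):
    """Transposed-conv block: upsample, then BatchNorm + ReLU."""
    return nn.Sequential(
        nn.ConvTranspose2d(in_c, out_c, kernel, stride, padding, bias=False),
        nn.BatchNorm2d(out_c),
        nn.ReLU(inplace=True),
    )

def down_block(in_c, out_c):
    """Strided-conv block: downsample, then BatchNorm + LeakyReLU."""
    return nn.Sequential(
        nn.Conv2d(in_c, out_c, 4, 2, 1, bias=False),
        nn.BatchNorm2d(out_c),
        nn.LeakyReLU(0.2, inplace=True),
    )

# Generator: 100x1x1 noise -> 3x64x64 image (6 conv layers).
G = nn.Sequential(
    up_block(100, 512, stride=1, padding=0),  # -> 512 x 4 x 4
    up_block(512, 256),                       # -> 256 x 8 x 8
    up_block(256, 128),                       # -> 128 x 16 x 16
    up_block(128, 64),                        # -> 64 x 32 x 32
    up_block(64, 32),                         # -> 32 x 64 x 64
    nn.Conv2d(32, 3, 3, 1, 1),                # refinement -> 3 x 64 x 64
    nn.Tanh(),
)

# Discriminator: 3x64x64 image -> scalar probability (5 conv layers).
D = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1, bias=False),    # -> 64 x 32 x 32
    nn.LeakyReLU(0.2, inplace=True),
    down_block(64, 128),                      # -> 128 x 16 x 16
    down_block(128, 256),                     # -> 256 x 8 x 8
    down_block(256, 512),                     # -> 512 x 4 x 4
    nn.Conv2d(512, 1, 4, 1, 0, bias=False),   # -> 1 x 1 x 1
    nn.Sigmoid(),
)
```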
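
Point 4 recommends adding noise to the Discriminator's inputs and labels. One common way to do this is instance noise plus smoothed, occasionally flipped labels; the function below is a sketch under those assumptions, where `D`, `criterion` (for example `nn.BCELoss()` with a sigmoid-output D), `sigma`, and `flip_p` are illustrative choices rather than values from this repo:

```python
import torch

def d_loss_with_noise(D, criterion, real_images, fake_images,
                      sigma=0.1, flip_p=0.05):
    """Discriminator loss with instance noise and noisy labels (illustrative)."""
    batch = real_images.size(0)
    device = real_images.device

    # Instance noise: perturb both real and generated inputs slightly.
    real_in = real_images + sigma * torch.randn_like(real_images)
    fake_in = fake_images.detach() + sigma * torch.randn_like(fake_images)

    # Noisy labels: smooth "real" toward ~0.9 and "fake" toward ~0.1 ...
    real_labels = torch.empty(batch, device=device).uniform_(0.8, 1.0)
    fake_labels = torch.empty(batch, device=device).uniform_(0.0, 0.2)
    # ... and occasionally flip a small fraction of them.
    flip = torch.rand(batch, device=device) < flip_p
    real_labels[flip], fake_labels[flip] = fake_labels[flip], real_labels[flip]

    loss_real = criterion(D(real_in).view(-1), real_labels)
    loss_fake = criterion(D(fake_in).view(-1), fake_labels)
    return loss_real + loss_fake
```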