
A TensorFlow implementation of GAN, WGAN, and WGAN with gradient penalty.


dhirajpatnaik16297/unified-gan-tensorflow


Original, Wasserstein, and Wasserstein-Gradient-Penalty DCGAN

(*) This repo is a modification of carpedm20/DCGAN-tensorflow.

(*) Full credit for the model structure design goes to carpedm20/DCGAN-tensorflow.

I started with carpedm20/DCGAN-tensorflow because its DCGAN implementation is not tied to a single dataset, which is uncommon: most WGAN and WGAN-GP implementations work only on 'mnist' or one fixed dataset.

Modifications

Here are a few modifications I've made that could be helpful to people implementing a GAN on their own for the first time:

  1. Added model_type, which can be one of 'GAN' (original), 'WGAN' (Wasserstein distance as the loss), or 'WGAN_GP' (Wasserstein distance as the loss with a gradient penalty), each corresponding to one variant of the GAN model.
  2. UnifiedDCGAN builds and trains the graph differently according to model_type.
  3. Some model methods were refactored so that the code is easier to read through.
  4. Many comments were added for important or potentially confusing functions, such as the conv and deconv operations in ops.py.
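As a rough sketch of what the model_type switch changes, here is the discriminator/critic loss each mode corresponds to, written in plain NumPy with hypothetical names (the actual implementation builds these as TensorFlow ops inside UnifiedDCGAN):

```python
import numpy as np

def d_loss(model_type, d_real, d_fake, grad_norm=None, lam=10.0):
    """Discriminator/critic loss for one batch (illustrative sketch only).

    d_real, d_fake: discriminator outputs on real and generated samples
                    (logits for 'GAN', raw scores for the WGAN variants).
    grad_norm: gradient norms of the critic at interpolated samples
               (only used for 'WGAN_GP').
    """
    if model_type == "GAN":
        # Original GAN: binary cross-entropy on sigmoid outputs.
        sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
        return -np.mean(np.log(sigmoid(d_real)) + np.log(1.0 - sigmoid(d_fake)))
    if model_type == "WGAN":
        # Wasserstein critic loss; Lipschitz constraint enforced elsewhere
        # by clipping the critic's weights after each update.
        return np.mean(d_fake) - np.mean(d_real)
    if model_type == "WGAN_GP":
        # WGAN loss plus a gradient penalty pushing gradient norms toward 1.
        gp = lam * np.mean((grad_norm - 1.0) ** 2)
        return np.mean(d_fake) - np.mean(d_real) + gp
    raise ValueError("unknown model_type: %s" % model_type)
```

Note that for 'WGAN' the critic loss itself is simple; the subtlety is in the weight clipping after each update, which 'WGAN_GP' replaces with the penalty term.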

The download.py file stays the same as in carpedm20/DCGAN-tensorflow. I keep this file in the repo to make fetching datasets for testing easy.

Reading

If you are interested in the math behind the loss functions of GAN and WGAN, read here.
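For quick reference, these are the objectives the three model_type options correspond to, as given in the original GAN, WGAN, and WGAN-GP papers:

```latex
% Original GAN (minimax cross-entropy):
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)]
             + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

% WGAN (Wasserstein distance; D must be 1-Lipschitz, enforced by weight clipping):
\min_G \max_{\|D\|_L \le 1} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[D(x)]
                            - \mathbb{E}_{z \sim p_z}[D(G(z))]

% WGAN-GP (weight clipping replaced by a gradient penalty on interpolates \hat{x}):
L = \mathbb{E}_{z \sim p_z}[D(G(z))] - \mathbb{E}_{x \sim p_{\mathrm{data}}}[D(x)]
  + \lambda \, \mathbb{E}_{\hat{x}}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big]
```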

Related Papers

Test Runs:

(left) python main.py --dataset=mnist --model_type=GAN --batch_size=64 --input_height=28 --output_height=28 --max_iter=10000 --learning_rate=0.0002 --train
(middle) python main.py --dataset=mnist --model_type=WGAN --batch_size=64 --input_height=28 --output_height=28 --d_iter=5 --max_iter=10000 --learning_rate=0.00005 --train
(right) python main.py --dataset=mnist --model_type=WGAN_GP --batch_size=64 --input_height=28 --output_height=28 --d_iter=5 --max_iter=10000 --learning_rate=0.0001 --train
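The --d_iter flag in the WGAN and WGAN_GP runs controls how many critic updates are run per generator update (5 above, the default suggested in the WGAN paper). A minimal sketch of that schedule, with d_step and g_step standing in for the real TensorFlow session calls:

```python
def train(d_step, g_step, max_iter, d_iter=1):
    """Run max_iter generator updates; before each one, run the
    discriminator/critic update d_iter times (d_iter > 1 for WGAN/WGAN-GP)."""
    for _ in range(max_iter):
        for _ in range(d_iter):
            d_step()  # critic update (clip weights or add penalty elsewhere)
        g_step()      # one generator update

# Example: count how many times each step runs.
calls = {"d": 0, "g": 0}
train(lambda: calls.__setitem__("d", calls["d"] + 1),
      lambda: calls.__setitem__("g", calls["g"] + 1),
      max_iter=10, d_iter=5)
# calls == {"d": 50, "g": 10}
```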

  
