Interpreting Generator and Critic loss #77

Open
KhrystynaFaryna opened this issue Jun 19, 2020 · 1 comment

@KhrystynaFaryna

Dear @martinarjovsky,
I am currently working on a project with MRI data.
I was using the WGAN-GP loss in a 2D implementation, with the hyperparameters proposed in the WGAN-GP paper, and everything worked smoothly.
Now I have switched to a 3D implementation and started facing issues.
The G loss explodes to extremely high values (~10^7), while the D loss goes extremely low (~-10^6).
I understand that for WGAN to work the critic needs to be near optimal. However, when trained that way, the critic keeps producing high outputs for fake images, which makes the G loss skyrocket. My patch size is (176, 144, 16); in 2D it was (176, 144).
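For reference, this is the critic objective I mean, from the WGAN-GP paper: E[D(fake)] - E[D(real)] + λ·(||∇D|| - 1)². A minimal NumPy sketch with a toy linear critic (everything here is illustrative — a linear D(x) = w·x has a closed-form gradient, unlike my actual 3D network):

```python
import numpy as np

rng = np.random.default_rng(0)
LAMBDA = 10.0  # gradient-penalty weight proposed in the WGAN-GP paper

def critic_loss(w, real, fake):
    """WGAN-GP critic loss for a toy linear critic D(x) = x @ w.
    For a linear critic, grad_x D(x) = w everywhere, so the gradient
    penalty is the same at every interpolated point."""
    d_real = real @ w
    d_fake = fake @ w
    grad_norm = np.linalg.norm(w)            # ||grad_x D|| for any input
    gp = LAMBDA * (grad_norm - 1.0) ** 2     # penalty toward unit norm
    return d_fake.mean() - d_real.mean() + gp

real = rng.normal(2.0, 1.0, size=(256, 8))   # stand-in "real" samples
fake = rng.normal(0.0, 1.0, size=(256, 8))   # stand-in "generated" samples

w_small = np.full(8, 0.1)     # roughly unit-Lipschitz critic
w_big   = np.full(8, 100.0)   # critic with huge outputs

print(critic_loss(w_small, real, fake))
print(critic_loss(w_big, real, fake))  # penalty term dominates for large w
```

The point of the toy example: when the critic's outputs (and hence its gradients) blow up, the penalty term dominates the loss, which matches the exploding magnitudes I see.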
1) I tried adding layer normalization to the critic; the loss values no longer explode, but the GAN fails to converge.
2) I tried tinkering with the learning rate.
2.1) High learning rates obviously make it even worse.
2.2) With low learning rates the explosion still happens, just later in training.
3) I tried changing the number of critic iterations.
3.1) The more critic iterations I do, the faster it skyrockets.
3.2) If I use the same number of critic and generator iterations (1:1), the loss stays within normal margins, but the net does not converge to anything reasonable.
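To be clear about what I mean by the critic/generator iteration schedule in point 3, here is a toy sketch of the loop (the update functions are placeholders, not my training code; n_critic = 5 is the WGAN-GP default I started from):

```python
# Toy sketch of the critic:generator update schedule. The two step
# functions below are hypothetical placeholders that only log what
# a real implementation would do.
N_CRITIC = 5          # critic updates per generator update (WGAN-GP default)
TOTAL_GEN_STEPS = 3   # shortened for illustration

def critic_step(step):      # placeholder for a real critic update
    return f"critic update {step}"

def generator_step(step):   # placeholder for a real generator update
    return f"generator update {step}"

log = []
for g_step in range(TOTAL_GEN_STEPS):
    for c_step in range(N_CRITIC):        # train critic toward optimality
        log.append(critic_step((g_step, c_step)))
    log.append(generator_step(g_step))    # then one generator update

print(len(log))  # TOTAL_GEN_STEPS * (N_CRITIC + 1) updates in total
```

In point 3.2 I set N_CRITIC = 1 in this schedule; that keeps the losses in a normal range but the samples never become reasonable.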
Any idea what could be the cause?
Thank you!

[Attached image: "wganlooo"]

@tony10101105

@KhrystynaFaryna I'm facing the same problem. Have you solved it now?
