dropout behavior? #8
@Yuliang-Zou I also have an implementation in PyTorch. However, I still can't get it to train: the reconstructed images are pure noise. Please take a look; maybe you'll find something. https://github.com/edgarriba/ali-pytorch ping @vdumoulin
@Yuliang-Zou @edgarriba Sorry about the delay! The way dropout is added to the network is indeed a bit hard to parse. That block of code first retrieves the symbolic variables for the inputs of the layers identified in the list (lines 126-129) and then applies dropout to them via graph replacement (line 131). The layers in the list correspond to [...]
To express it more compactly, dropout is applied to the input of every convolution in the discriminator. Another thing which may be confusing is that the 0.2 value in the call to [...] I hope this clears things up! Please don't hesitate to reply with further questions if you're still having trouble.
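To make "dropout on the input of every convolution" concrete, here is a minimal PyTorch sketch. This is hypothetical illustration, not the authors' Theano/Blocks code or either linked repo; the channel counts, kernel sizes, and 32×32 input are made-up assumptions. The key point is the placement: a `Dropout2d(0.2)` immediately before each `Conv2d`, rather than after every layer.

```python
# Hypothetical sketch (assumed layer sizes): dropout sits on the *input*
# of each convolution in the discriminator, per the explanation above.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout2d(0.2),                          # dropout on the conv's input
            nn.Conv2d(3, 64, 4, stride=2, padding=1),   # 32x32 -> 16x16
            nn.LeakyReLU(0.2),
            nn.Dropout2d(0.2),                          # again, right before the next conv
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # 16x16 -> 8x8
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2),
            nn.Dropout2d(0.2),
            nn.Conv2d(128, 1, 8),                       # 8x8 -> 1x1 decision map
        )

    def forward(self, x):
        return self.net(x)

d = Discriminator()
out = d(torch.randn(2, 3, 32, 32))
```

Note that in PyTorch, the `0.2` passed to `Dropout2d` is the probability of *dropping* a unit, which matters for the convention discussed in the next reply.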
@vdumoulin yep, PyTorch uses the same convention
Hi, I am replicating the code in PyTorch.
But I'm not sure about the dropout behavior here. It seems that you apply dropout after every layer of the discriminator (i.e. conv -> dropout -> bn -> dropout -> leaky relu -> dropout, etc.); is that correct?
Also, do you apply any preprocessing to the input data? It seems that you rescale it to [0, 1]?
Thanks!
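For reference, the [0, 1] rescaling asked about is typically just a division by 255 on 8-bit pixel values. A minimal sketch (an assumption about the preprocessing, not confirmed by the repo):

```python
# Assumed preprocessing: map 8-bit pixel values [0, 255] to floats [0.0, 1.0].
def to_unit_range(pixels):
    """Rescale a flat list of 8-bit pixel values into [0.0, 1.0]."""
    return [p / 255.0 for p in pixels]

scaled = to_unit_range([0, 128, 255])
```

(In PyTorch pipelines, `torchvision.transforms.ToTensor` performs this same [0, 1] rescaling on PIL images.)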