Torch implementation of the fast gradient method #1211
Comments
Hi Amir, thanks for pointing this out. Indeed, it would be much better if the loss function were included in the function signature, as is done in the TensorFlow implementation. Would you like to send a PR addressing this?
Sure :)
Hey @tejuafonja, is this issue still open? I would like to work on it if that's the case.
Hi @Kkuntal990, yes it is. Please feel free to send a PR anytime :)
Hello, I've implemented a fix for this here: #1222
Hi!
Thanks for this great library!
It seems like the torch implementation assumes that the given model was trained with cross-entropy loss (line 80 of torch/attacks/fast_gradient_method.py).
In the TF version (as in the paper), one can specify the loss function in the function signature (line 46 of tf2/attacks/fast_gradient_method.py).
Thanks!
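
For context, the change discussed above amounts to threading a loss callable through the attack's signature instead of hardcoding cross-entropy. Below is a minimal sketch of the idea, not the CleverHans implementation: the function name `fgm_with_custom_loss`, the `loss_fn` parameter, and the restriction to the L-infinity case are assumptions for illustration, and clipping, other norms, and input checks are omitted.

```python
# Minimal sketch (assumed API, not CleverHans code): an FGM step whose loss
# function is supplied through the signature, mirroring the TF2 version.
import torch
import torch.nn.functional as F


def fgm_with_custom_loss(model_fn, x, eps, loss_fn=None, y=None, targeted=False):
    if loss_fn is None:
        loss_fn = F.cross_entropy  # keep cross-entropy as the default behaviour

    x = x.clone().detach().requires_grad_(True)
    logits = model_fn(x)
    if y is None:
        # use the model's own predictions to avoid label leaking
        y = logits.argmax(dim=1)

    loss = loss_fn(logits, y)
    if targeted:
        loss = -loss  # move towards the target class instead of away from y

    grad = torch.autograd.grad(loss, x)[0]
    # L-infinity FGM step: perturb by the sign of the gradient
    return (x + eps * grad.sign()).detach()
```

Under these assumptions, a caller could pass, for example, `loss_fn=torch.nn.functional.nll_loss` for a model that already outputs log-probabilities, while the default would preserve the current cross-entropy behaviour.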