Is your feature request related to a problem? Please describe.
I would like to use the randomized version of FGSM described in https://arxiv.org/pdf/1611.01236.pdf (see the final paragraph of section 3: "We observed that if we fix epsilon during training then networks become robust only to that specific value of epsilon. We therefore recommend choosing epsilon randomly, independently for each training example....").
Specifically, this is identical to FGSM except that epsilon is randomly drawn from some distribution (e.g. a truncated normal). This can reduce the "brittleness" of models trained via FGSM, which are only robust within some neighborhood of the epsilon used during training. It would also be useful for reproducing the benchmarks reported in that paper.
Describe the solution you'd like
It would be nice to have a boolean flag on cleverhans.attacks.FastGradientMethod that specifies whether a fixed or a randomized epsilon should be used. Even better would be the ability to pass a TensorFlow operation used to draw epsilon, so I can specify the distribution and its parameters. A rough sketch of what such usage could look like is below.
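As a minimal sketch of the proposed usage, assuming `generate` were extended to accept a tensor-valued `eps` (that tensor support is exactly the feature being requested, not current behaviour), and with `model`, `sess`, `x`, `batch_size`, and `eps_max` as illustrative placeholders:

```python
import tensorflow as tf
from cleverhans.attacks import FastGradientMethod

# Sketch only: draw a per-example epsilon from a truncated normal, as
# recommended by Kurakin et al., and pass the resulting tensor to the attack
# instead of a fixed float. `model`, `sess`, and the input placeholder `x`
# are assumed to be defined elsewhere; the shapes and values are illustrative.

batch_size = 128
eps_max = 16.0 / 255.0  # hypothetical upper bound on the perturbation size

# Truncated normal samples fall within two standard deviations of the mean,
# so taking the absolute value keeps epsilon in [0, eps_max].
random_eps = tf.abs(
    tf.truncated_normal([batch_size, 1, 1, 1], mean=0.0, stddev=eps_max / 2.0))

fgsm = FastGradientMethod(model, sess=sess)
adv_x = fgsm.generate(x, eps=random_eps, clip_min=0.0, clip_max=1.0)
```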
Describe alternatives you've considered
I don't see a clear alternative. This seems to be a fairly straightforward feature to implement and I would be open to contributing it.