Facing issues (loss.backward() not computable if model is NOT in training mode) while generating adversarial examples #1243
Unanswered
SubrangshuDas asked this question in Q&A
I am getting the following error when calling fast_gradient_method. model_fn is invoked with the model in eval mode, which seems like the right state for generating adversarial examples at inference time.
/tmp/ipykernel_104065/3543041581.py in fast_gradient_method(model_fn, x, eps, norm, clip_min, clip_max, y, targeted, sanity_checks)
79 # Define gradient of loss wrt input
80
---> 81 loss.backward()
82 optimal_perturbation = optimize_linear(x.grad, eps, norm)
83
~/anaconda3/lib/python3.9/site-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
486 inputs=inputs,
487 )
--> 488 torch.autograd.backward(
489 self, gradient, retain_graph, create_graph, inputs=inputs
490 )
~/anaconda3/lib/python3.9/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
195 # some Python versions print out the first line of a multi-line function
196 # calls in the traceback and some print out the last line
--> 197 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
198 tensors, grad_tensors, retain_graph, create_graph, inputs,
199 allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass
RuntimeError: cudnn RNN backward can only be called in training mode
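The error does not seem specific to fast_gradient_method; it reproduces with a bare cuDNN-backed RNN in eval mode, because the fused cuDNN RNN kernels do not save the buffers that backward needs outside of training mode. A minimal sketch, assuming a CUDA device and an nn.LSTM with hypothetical sizes:

import torch
import torch.nn as nn

# Minimal repro sketch (assumptions: CUDA available, cuDNN enabled,
# model contains a cuDNN-backed RNN such as nn.LSTM or nn.GRU).
if torch.cuda.is_available():
    rnn = nn.LSTM(input_size=8, hidden_size=16, batch_first=True).cuda()
    rnn.eval()  # eval mode is what triggers the failure
    x = torch.randn(2, 5, 8, device="cuda", requires_grad=True)
    out, _ = rnn(x)
    # Raises: RuntimeError: cudnn RNN backward can only be called in training mode
    out.sum().backward()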
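A workaround that appears to work is to disable cuDNN around the attack, so autograd falls back to PyTorch's native RNN implementation, whose backward also runs in eval mode. A sketch, where model_fn, x, eps, and norm stand for the same objects passed to fast_gradient_method above:

import torch
from cleverhans.torch.attacks.fast_gradient_method import fast_gradient_method

# Sketch of a possible workaround: torch.backends.cudnn.flags is a context
# manager that temporarily disables cuDNN, forcing the non-fused native RNN
# kernels. Slower, but eval-mode semantics (dropout off, frozen batch-norm
# statistics) are preserved while the backward pass succeeds.
with torch.backends.cudnn.flags(enabled=False):
    x_adv = fast_gradient_method(model_fn, x, eps, norm)

Calling model.train() before the attack also silences the error, but it re-enables dropout and batch-norm statistics updates, which changes the gradients the attack computes.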