I didn't realize until now that the `track_higher_grads` flag existed. But now I realize I might have a nonstandard version of MAML going on in my code, and I want to make sure it is correct.
What I did was make the gradients raw tensors by detaching them from the computation graph, e.g.:
```python
if self.fo:  # first-order
    g = g.detach()  # disallows flow of higher-order grads while still letting params track gradients
```
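For context, here is a minimal reconstruction of where that detach sits in my inner loop (the function and variable names are simplified stand-ins, not my exact code):

```python
import torch

def inner_update(params, loss, inner_lr, fo):
    # Always build the graph for the inner gradients; `fo` decides whether
    # higher-order terms are allowed to flow through them afterwards.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    new_params = []
    for p, g in zip(params, grads):
        if fo:  # first-order
            g = g.detach()  # cut second-order grad flow through g
        # p - inner_lr * g is still differentiable w.r.t. p itself,
        # so the initial parameters keep tracking gradients.
        new_params.append(p - inner_lr * g)
    return new_params
```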
I was wondering if this is equivalent to `track_higher_grads=False`. In particular, I keep the detach but leave `track_higher_grads=True`, which is the point that confuses me.
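To make the comparison concrete, here is my understanding of the `track_higher_grads=False` route as an untested, self-contained sketch (the model, optimizer, and data names are placeholders I made up). My understanding is that with `track_higher_grads=False` the adapted fast weights are detached from the initial ones, so the query-loss gradient has to be copied onto the meta-parameters manually, which is exactly the FO-MAML approximation:

```python
import torch
import torch.nn as nn
import higher

# Placeholder setup for illustration only.
model = nn.Linear(10, 2)
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_opt = torch.optim.SGD(model.parameters(), lr=1e-1)
loss_fn = nn.CrossEntropyLoss()

x_spt, y_spt = torch.randn(5, 10), torch.randint(0, 2, (5,))
x_qry, y_qry = torch.randn(5, 10), torch.randint(0, 2, (5,))

meta_opt.zero_grad()
with higher.innerloop_ctx(model, inner_opt, track_higher_grads=False) as (fmodel, diffopt):
    # Inner adaptation: with track_higher_grads=False, no second-order
    # graph is kept through the update.
    diffopt.step(loss_fn(fmodel(x_spt), y_spt))

    # Query-loss gradient at the adapted weights.
    qry_loss = loss_fn(fmodel(x_qry), y_qry)
    grads = torch.autograd.grad(qry_loss, list(fmodel.parameters()))

# FO-MAML: apply the gradient taken at the adapted weights to the
# initial (meta) weights.
for p, g in zip(model.parameters(), grads):
    p.grad = g if p.grad is None else p.grad + g
meta_opt.step()
```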
Related:
- official FO-MAML: #63
- docs: https://higher.readthedocs.io/en/latest/optim.html
- cross: #128, https://stackoverflow.com/questions/70947042/how-does-one-run-first-order-maml-with-pytorchs-higher-library