Add manual optimization to core task #1796
base: develop
Conversation
In the same way as is done for the speaker diarization task.
"clip_val": 5.0, | ||
"clip_algorithm": "norm", | ||
"accumulate_batches": 1, |
Shouldn't we instead try to grab these values from trainer options directly? Maybe trainer is exposed as an attribute of model?
Yep, looks like you could use something like:
self.model.trainer.{accumulate_grad_batches, gradient_clip_val, gradient_clip_algorithm}
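A minimal sketch of what that lookup could look like, assuming the task keeps a reference to the attached model and reading the standard PyTorch Lightning Trainer attributes; the helper name gradient_options is hypothetical:

def gradient_options(self):
    # Read gradient settings from the attached Trainer instead of
    # duplicating them as task-level defaults (sketch; assumes self.model
    # is the LightningModule this task is attached to).
    trainer = self.model.trainer
    return {
        "clip_val": trainer.gradient_clip_val,
        "clip_algorithm": trainer.gradient_clip_algorithm,
        "accumulate_batches": trainer.accumulate_grad_batches,
    }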
Passing accumulate_grad_batches, gradient_clip_val, or gradient_clip_algorithm to the Trainer will raise a MisconfigurationException if we set automatic_optimization=False:
lightning_fabric.utilities.exceptions.MisconfigurationException: Automatic gradient clipping is not supported for manual optimization. Remove `Trainer(gradient_clip_val=5.0)` or switch to automatic optimization.
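For reference, a minimal, self-contained reproduction of that constraint with plain PyTorch Lightning (the toy model and data are illustrative and not part of this PR):

import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class ToyManualModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # manual optimization
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        (x,) = batch
        loss = self.layer(x).mean()
        opt = self.optimizers()
        self.manual_backward(loss)
        opt.step()
        opt.zero_grad()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)

loader = DataLoader(TensorDataset(torch.randn(8, 4)), batch_size=4)
trainer = pl.Trainer(gradient_clip_val=5.0, max_epochs=1, logger=False, enable_checkpointing=False)
trainer.fit(ToyManualModel(), loader)  # raises the MisconfigurationException above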
This PR adds:
- a manual_optimization option to pyannote.audio.core.Task
- a gradient attribute to pyannote's tasks, which is a dict of gradient arguments: gradient clipping (value and algorithm) and gradient batch accumulation (needed by manual optimization)
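A hedged sketch of how such a gradient dict could be consumed in a manual-optimization training step, written here as LightningModule methods (a pyannote task would reach the same calls through its attached model). The key names follow the defaults shown in the diff above, compute_loss is a hypothetical placeholder, and the actual wiring in the PR may differ:

import pytorch_lightning as pl

class SketchModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False
        # defaults mirroring the diff above
        self.gradient = {
            "clip_val": 5.0,
            "clip_algorithm": "norm",
            "accumulate_batches": 1,
        }

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        # scale the loss so accumulated micro-batches average rather than sum
        loss = self.compute_loss(batch) / self.gradient["accumulate_batches"]
        self.manual_backward(loss)
        if (batch_idx + 1) % self.gradient["accumulate_batches"] == 0:
            # clip_gradients is the standard LightningModule helper for
            # gradient clipping under manual optimization
            self.clip_gradients(
                opt,
                gradient_clip_val=self.gradient["clip_val"],
                gradient_clip_algorithm=self.gradient["clip_algorithm"],
            )
            opt.step()
            opt.zero_grad()
        return loss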