I wanted to continue training the model with the saved optimizer, but it crashed. The traceback is as follows:
Traceback (most recent call last):
File "lgesql/text2sql.py", line 105, in
optimizer.step()
File "lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
return wrapped(*args, **kwargs)
File "lib/python3.6/site-packages/torch/optim/optimizer.py", line 88, in wrapper
return func(*args, **kwargs)
File "lgesql/utils/optimization.py", line 220, in step
exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)
RuntimeError: The size of tensor a (768) must match the size of tensor b (2) at non-singleton dimension 0
Have you encountered this problem? How can I fix it?
More information:
When I comment out the line "optimizer.load_state_dict(check_point['optim'])", the program no longer crashes, but the training loss is much larger than the loss in the last epoch of the saved model.
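For context, a minimal sketch of the resume pattern being discussed (the 'model' checkpoint key, the file name, and the model/optimizer construction below are assumptions for illustration, not the repository's exact code):

```python
import torch
import torch.nn as nn

# Sketch only: a tiny stand-in model and optimizer.
model = nn.Linear(768, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

# Saving a checkpoint during the first run:
torch.save({'model': model.state_dict(), 'optim': optimizer.state_dict()},
           'checkpoint.bin')

# Resuming in a later run:
check_point = torch.load('checkpoint.bin', map_location='cpu')
model.load_state_dict(check_point['model'])
optimizer.load_state_dict(check_point['optim'])  # the call that crashes above
```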
We also encountered this problem when loading from checkpoints. Honestly, we never used this interface to resume training from checkpoints in our experiments, so we overlooked this bug. The crash is caused by mismatched key-value pairs in the optimizer's self.state: the set() operations over the parameters in the function set_optimizer yield a different parameter order in different runs, so the self.state mapping performed in the optimizer's load_state_dict attaches saved states to the wrong parameters. (See load_state_dict in the PyTorch Optimizer for more details.)
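To make the failure mode concrete, here is a standalone sketch (using torch.optim.AdamW rather than the custom optimizer in utils/optimization.py) that reproduces the same kind of size-mismatch error. PyTorch's Optimizer.load_state_dict maps saved per-parameter state to the current parameters purely by position, so if the parameter order differs between runs, the exp_avg buffers end up on the wrong tensors:

```python
import torch

w_big = torch.nn.Parameter(torch.randn(768))   # e.g. an encoder-sized weight
w_small = torch.nn.Parameter(torch.randn(2))   # e.g. a tiny output bias

# Run 1: parameters happen to come out in this order.
opt1 = torch.optim.AdamW([w_big, w_small])
w_big.grad = torch.zeros_like(w_big)
w_small.grad = torch.zeros_like(w_small)
opt1.step()                      # creates exp_avg buffers of sizes 768 and 2
saved = opt1.state_dict()

# Run 2: a set() yields a different iteration order, so the parameters swap.
opt2 = torch.optim.AdamW([w_small, w_big])
opt2.load_state_dict(saved)      # state is re-attached by position, silently wrong
opt2.step()                      # raises a size-mismatch RuntimeError like the one above
```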
We have fixed this bug by removing all set() operations from the function set_optimizer in utils/optimization.py. Everything now appears to work correctly if you train from scratch and then resume from checkpoints.
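For illustration, a rough sketch of what a deterministic set_optimizer could look like (the grouping criteria, learning rate, and weight-decay values here are assumptions, not the repository's actual implementation). The idea is to keep a set only for membership tests and build the parameter lists from named_parameters(), whose iteration order is stable across runs, so the indices in the saved self.state line up with the same tensors when load_state_dict is called later:

```python
import torch

def set_optimizer(model, lr=5e-4, weight_decay=0.01):
    # Deduplicate with an id() set used purely for membership tests;
    # never iterate over a set of parameters when building param groups.
    seen = set()
    decay, no_decay = [], []
    for name, param in model.named_parameters():  # stable order every run
        if not param.requires_grad or id(param) in seen:
            continue
        seen.add(id(param))
        (no_decay if 'bias' in name else decay).append(param)
    grouped_params = [
        {'params': decay, 'weight_decay': weight_decay},
        {'params': no_decay, 'weight_decay': 0.0},
    ]
    return torch.optim.AdamW(grouped_params, lr=lr)
```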