
How do I load up from a previously logged checkpoint? #9

Open
Pinak1392 opened this issue Sep 27, 2024 · 0 comments
After a couple of epochs of training, the model has saved a number of checkpoints, and I want to resume training from the latest one. But when I pass the saved ckpt path via the `actual_resume` parameter, I get an error saying the key `state_dict` doesn't exist. When I inspected the output of `torch.load` in `load_model_from_config`, I found it to be an empty dictionary. Is there a different parameter I need to use to resume training?
