
Issue with batch size #131

Open
kkannan8291 opened this issue Aug 24, 2022 · 1 comment
Comments

@kkannan8291
Hello @KaimingHe, I am running the MoCo pre-training with the following configurations:

Num of GPUs = 8
GPU type: NVIDIA Quadro RTX 8000

But I am unable to run it with a batch size greater than 8. When I set the batch size higher (e.g., 16 or 32), it throws a PyTorch spawn error. I confirmed that all GPUs run at full utilization with the batch size mentioned above. Do you have any suggestions on how I might proceed? Also, with a batch size this small, would the pre-training produce meaningful results?

@avinabsaha

I think you might want to reduce the number of workers!
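
For context, a minimal sketch of what that could look like, based on the launch command from the MoCo README. The ImageNet path is a placeholder, and the flag defaults should be checked against your copy of main_moco.py (upstream, --workers defaults to 32 and is divided among the per-GPU processes spawned by --multiprocessing-distributed):

```
# Hedged sketch: lowering --workers means each spawned GPU process starts
# fewer DataLoader worker processes, a common fix for spawn errors.
python main_moco.py \
  -a resnet50 \
  --lr 0.03 \
  --batch-size 32 \
  --workers 8 \
  --dist-url 'tcp://localhost:10001' \
  --multiprocessing-distributed --world-size 1 --rank 0 \
  /path/to/imagenet
```

If the error persists, checking available shared memory (/dev/shm) may also help, since PyTorch DataLoader workers pass batches between processes through shared memory.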
