Hello @KaimingHe, I am running the MoCo pre-training with the following configuration:
Num. of GPUs: 8
GPU type: NVIDIA Quadro RTX 8000
However, I am unable to run it with a batch size greater than 8: PyTorch throws a spawn error whenever I set the batch size above 8 (e.g., 16 or 32). I confirmed that all GPUs run at full utilization when pre-training with a batch size of 8. Do you have any suggestions or thoughts on how I might proceed? Also, with a batch size this small, would pre-training still produce meaningful results?
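For reference, here is the kind of launch command I am using, adapted from the repository README; the ImageNet path and port are placeholders. My understanding is that `--batch-size` in `main_moco.py` is the total batch size, split evenly across GPUs under `--multiprocessing-distributed`, so 8 here would mean only 1 image per GPU:

```bash
# Adapted from the MoCo README; ImageNet path and port are placeholders.
# --batch-size is the TOTAL batch size: main_moco.py divides it across the
# GPUs when --multiprocessing-distributed is set, so 8 on 8 GPUs = 1/GPU.
# Lowering --workers is one thing I could try, since spawn errors are
# sometimes caused by DataLoader worker processes rather than training.
python main_moco.py \
  -a resnet50 \
  --lr 0.03 \
  --batch-size 8 \
  --workers 8 \
  --dist-url 'tcp://localhost:10001' \
  --multiprocessing-distributed --world-size 1 --rank 0 \
  /path/to/imagenet
```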