I have submitted an issue in PyTorch (pytorch/pytorch#138074) that describes the problem, hoping they will add a new interface for setting a custom stream for communication.
This problem hasn't occurred so far because NCCL's send kernel completes without waiting for the matching recv kernel when the data size is less than 64 MB.
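For context, here is a minimal sketch of the kind of ring send/recv pattern that can hang (this is not the actual xDiT code; the tensor and function names are made up). Each rank queues its isend before its irecv on the same communicator, so once the payload is large enough that the send has to wait for the peer's recv, every rank is stuck behind its own send:

```python
# Hypothetical minimal sketch, not the real xDiT code path.
import torch
import torch.distributed as dist

def naive_ring_exchange(patch_latent: torch.Tensor) -> torch.Tensor:
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    next_rank = (rank + 1) % world_size
    prev_rank = (rank - 1) % world_size

    recv_buffer = torch.empty_like(patch_latent)

    # With a large payload, this send cannot finish until the next rank posts
    # its matching recv...
    send_req = dist.isend(patch_latent, dst=next_rank)
    # ...but every rank is stuck in its own send first, so the recv that the
    # previous rank is waiting for is queued behind the blocked send.
    recv_req = dist.irecv(recv_buffer, src=prev_rank)

    send_req.wait()
    recv_req.wait()
    return recv_buffer
```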
Do you guys know of any other solutions?
Your code snippet in the issue is very helpful. But can you also give us a run script to reproduce the error in xDiT? Also, what kind of GPU cluster are you using?
@feifeibear Sorry, I have been busy recently. It's hard to reproduce the error on my GPUs: the only way I can make patch_latent large enough is to increase the output picture size, and making the picture big enough to trigger the error causes an OOM.
I came up with an idea: we can pair up the ranks for send and recv and create a process group for each pair, so the recv will not wait for the send of the same rank. Here is a demo picture:
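And here is a rough code sketch of that pairing idea, assuming a PyTorch ring pipeline (helper names like `init_pair_groups` are mine, not xDiT APIs): every rank creates a two-rank process group per adjacent (sender, receiver) pair, so the send and the recv on a given rank go through different NCCL communicators and the recv is no longer queued behind that rank's own send.

```python
# Hypothetical sketch of the per-pair process-group workaround.
import torch
import torch.distributed as dist

def init_pair_groups(world_size: int):
    # One process group per adjacent (sender, receiver) pair in the ring, so
    # each P2P pair gets its own NCCL communicator. dist.new_group must be
    # called by every rank, in the same order.
    pair_groups = {}
    for src in range(world_size):
        dst = (src + 1) % world_size
        pair_groups[(src, dst)] = dist.new_group(ranks=[src, dst])
    return pair_groups

def paired_ring_exchange(patch_latent: torch.Tensor, pair_groups) -> torch.Tensor:
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    next_rank = (rank + 1) % world_size
    prev_rank = (rank - 1) % world_size

    recv_buffer = torch.empty_like(patch_latent)

    # The send and the recv now use different communicators, so the recv is
    # not serialized behind this rank's own (possibly blocking) send.
    send_req = dist.isend(patch_latent, dst=next_rank,
                          group=pair_groups[(rank, next_rank)])
    recv_req = dist.irecv(recv_buffer, src=prev_rank,
                          group=pair_groups[(prev_rank, rank)])

    send_req.wait()
    recv_req.wait()
    return recv_buffer
```

One caveat: `dist.new_group` has to be called collectively by all ranks, and every extra group costs an extra NCCL communicator, so this probably only makes sense for a small, fixed pipeline ring.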
num_pipeline_patch cannot be set too large; for example, I sometimes hit a hang when it is set to 16.
I did not delve into this problem. I guess it may be caused by too many async P2P operations.