AssertionError: input should be in float32 type, got torch.float16 #9
Hi, I think there is a bug in the mixed-precision training. A work-around is to change the forward method in the VFELayer from
to
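The code blocks above did not survive in this copy of the thread, but the work-around being described can be sketched as follows. This is a simplified, hypothetical stand-in for the real SST `VFELayer`, not the actual repository code: the idea is to cast the features back to float32 before the norm call so its dtype assertion passes under mixed-precision training.

```python
import torch
import torch.nn as nn

class VFELayer(nn.Module):
    """Simplified stand-in for the SST VFELayer (assumed structure)."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.linear = nn.Linear(in_channels, out_channels)
        self.norm = nn.BatchNorm1d(out_channels)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        x = self.linear(inputs)
        # Under torch.autocast, `x` may arrive here as float16; casting back
        # to float32 avoids "input should be in float32 type, got torch.float16".
        x = self.norm(x.float())
        return torch.relu(x)

layer = VFELayer(4, 8).eval()
out = layer(torch.randn(16, 4))
print(out.dtype)  # torch.float32
```

The only change relative to a plain forward pass is the `x.float()` cast before the norm, which is the shape of the fix discussed in this thread.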
Hi @georghess, I reproduced the same issue, and the proposed solution did not resolve the problem. I would appreciate any other ideas.
Hi @gorkemguzeler, I'm pretty sure the solution above should work; that's what we've done on our dev branch at least, where we've switched to our own fork of SST with the above changes. To help you further, I'd need some more info. Could you send the entire error trace?
Hi @georghess, thanks a lot for your quick reply and the information! I switched to the dev branch and used your fork of SST, and I did not run into the above issue this time. One thing I noticed is:
I just tried the dev branch with the forked SST; there are no such warnings during training, but it takes 11 hours per epoch.
I also met this error when using mmdet3d. The situation I encountered is: the Decorator
Hi, this problem is caused by the assertions in the norm.py file: some parts require the input to be torch.float32, but the actual input is torch.float16. What is causing this? Is the interpreter not working?
Looking forward to your reply, thanks!
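To illustrate why the assertion fires: under mixed precision, autocast can hand a layer float16 activations, while a float32-only check rejects them. The sketch below uses a hypothetical `strict_norm` as a stand-in for the checks in norm.py, not the actual mmdet3d code.

```python
import torch

def strict_norm(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for a norm whose implementation only supports float32,
    # mirroring the assertion message seen in this issue.
    assert x.dtype == torch.float32, (
        f"input should be in float32 type, got {x.dtype}")
    return (x - x.mean()) / (x.std() + 1e-5)

half = torch.randn(8, dtype=torch.float16)  # what autocast can produce
try:
    strict_norm(half)
except AssertionError as e:
    print(e)  # input should be in float32 type, got torch.float16

strict_norm(half.float())  # casting first satisfies the check
```

This is why the work-around above casts features back to float32 right before the norm call, rather than disabling mixed precision entirely.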