
cpu inference colab #1

Open

AK391 opened this issue Dec 8, 2021 · 8 comments

AK391 commented Dec 8, 2021

Hi, is cpu inference possible in colab?

AK391 closed this as completed Dec 9, 2021

@kormoczi

Hi, may I ask whether you managed to run the inference on cpu, or did you just decide to drop the issue?
I am also interested in cpu inference, and so far I have not been able to get it working, so any help would be appreciated!
Thanks

Sxela (Owner) commented Feb 17, 2022

Hi, you closed the issue, so I thought you had figured it out :D Which notebook are you talking about?

Sxela reopened this Feb 17, 2022
@kormoczi

I am asking about the image inference colab at this link:
https://colab.research.google.com/drive/1r1hhciakk5wHaUn1eJk7TP58fV9mjy_W

Sxela (Owner) commented Feb 17, 2022

Just replace all the .cuda() calls with .cpu() in the code.
I guess I should add dynamic device selection based on the environment.
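For reference, a minimal sketch of what such dynamic device selection could look like (the linear model and tensor below are illustrative placeholders, not code from the notebook):

```python
import torch
import torch.nn as nn

# Pick CUDA when a GPU is available, otherwise fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Illustrative model and input; the real notebook loads a TorchScript model instead.
model = nn.Linear(4, 2).to(device)
inputs = torch.randn(1, 4, device=device)

with torch.no_grad():
    outputs = model(inputs)
```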

@kormoczi

Sure, that is the method I tried (replacing all .cuda() with .cpu()), but unfortunately it did not work.
I can't remember the error message off the top of my head, but I will check again later and send some logs.

Sxela (Owner) commented Feb 17, 2022

Ah, half-precision probably isn't supported on CPU, so try replacing .half() with .float() as well.
This might still not work because of datatypes hardcoded inside the TorchScript (.jit) model, but it's worth a try.
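A rough sketch of that dtype fallback, again with a placeholder model rather than the notebook's TorchScript model:

```python
import torch
import torch.nn as nn

# Illustrative model; the real notebook uses a TorchScript model instead.
model = nn.Linear(4, 2)

if torch.cuda.is_available():
    # Half precision is fine on recent GPUs.
    model = model.cuda().half()
else:
    # CPU kernels for half precision are limited, so stay in float32.
    model = model.cpu().float()
```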

@kormoczi

I have replaced all the .cuda() with .cpu() and all the .half() with .float(), but I still get this error:
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

The line that causes this error is the following:
model = torch.jit.load(model_path).eval().cpu().float()

@kormoczi

I think I have found some kind of solution (maybe not the best); a combined sketch follows below:

  1. replace all the .cuda() with .cpu()
  2. replace all the .half() with .half().float()
  3. add the map_location='cpu' parameter to torch.jit.load
  4. use torch==1.8.1
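
A minimal sketch of the resulting load, assuming the notebook's model_path variable; the input tensor name is a placeholder, not from the notebook:

```python
import torch

# Load the TorchScript model directly onto the CPU; map_location='cpu' avoids the
# "Found no NVIDIA driver" error when the checkpoint was saved on a GPU.
model = torch.jit.load(model_path, map_location='cpu').eval().cpu().float()

# Inputs must also live on the CPU and use float32 rather than half precision.
# `image_tensor` is an illustrative placeholder for the notebook's input tensor.
with torch.no_grad():
    output = model(image_tensor.cpu().float())
```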
