Any chance we can load the ONNX model and run it in TensorRT? #2
Comments
Yes, I made it work. I am getting a 70 ms runtime on an RTX 3060 for the 512 upscale model (GFPGAN 1.4) using TensorRT, without pre- and post-processing.
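For reference, the usual way to turn an ONNX export into a TensorRT engine is the `trtexec` tool that ships with TensorRT. The file names below are assumptions, not the ones used in this thread; treat this as a sketch of the standard workflow rather than the exact command the author ran.

```shell
# Build a serialized TensorRT engine from an ONNX export of GFPGAN.
# "gfpgan.onnx" / "gfpgan.plan" are placeholder names (assumption).
trtexec --onnx=gfpgan.onnx \
        --saveEngine=gfpgan.plan \
        --fp16            # FP16 typically gives a large speedup on RTX cards

# trtexec also reports per-inference latency, which is how numbers like
# the 70 ms figure above are usually measured (engine only, no pre/post).
```

The resulting `.plan` file can then be deserialized in C++ or Python for inference; note that engines are specific to the GPU and TensorRT version they were built with.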
Thanks for sharing. I followed your fork of Face-Restoration-TensorRT and tried to build the same GFPGAN TensorRT engine, but I encountered the following error. Am I missing something in the CMakeLists.txt?
@nelsontseng0704 the fork has a different purpose: it is intended to provide a Python wrapper around the C++ inference code. For GFPGAN I made it work with the official repository. It is also heavily WIP, so no wonder you are getting errors when trying to build it :)
Got it working with model.onnx (including inference). Do you know what the fix is for this?
You would need to debug the error within the C++ code. CUDA error 700 is a generic error that does not tell much on its own.
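For context, CUDA error 700 is `cudaErrorIllegalAddress` (an illegal memory access), which often surfaces on a later API call rather than at the faulting kernel. A common way to narrow it down is to wrap every CUDA runtime call in a check macro; this is a generic sketch, not code from the repository, and it requires the CUDA toolkit to compile:

```cpp
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Report the failing call, file, and line instead of a bare error code.
#define CUDA_CHECK(call)                                                   \
  do {                                                                     \
    cudaError_t err = (call);                                              \
    if (err != cudaSuccess) {                                              \
      std::fprintf(stderr, "CUDA error %d (%s) at %s:%d\n",                \
                   static_cast<int>(err), cudaGetErrorString(err),         \
                   __FILE__, __LINE__);                                    \
      std::exit(EXIT_FAILURE);                                             \
    }                                                                      \
  } while (0)

// Usage: wrap each runtime call, e.g.
//   CUDA_CHECK(cudaMemcpy(dst, src, bytes, cudaMemcpyHostToDevice));
// and call CUDA_CHECK(cudaDeviceSynchronize()) after kernel launches so
// asynchronous faults are reported at the launch site, not later.
```

Running the binary under `compute-sanitizer` (shipped with the CUDA toolkit) will usually pinpoint the exact out-of-bounds access behind a 700.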
Such great work! I am looking for faster inference for GFPGAN. Is there any chance we can load GFPGAN.onnx and run it on TensorRT? Looking forward to your suggestions.
Thanks.
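Besides a native TensorRT engine, one lower-effort route is onnxruntime's TensorRT execution provider, which runs an ONNX file through TensorRT where possible and falls back to CUDA or CPU otherwise. The model path, input shape, and value range below are assumptions about the GFPGAN 1.4 export, and the inference part needs `onnxruntime-gpu` built with TensorRT support:

```python
def pick_providers(available):
    """Prefer TensorRT, then CUDA, then CPU, keeping only installed providers."""
    order = ["TensorrtExecutionProvider",
             "CUDAExecutionProvider",
             "CPUExecutionProvider"]
    return [p for p in order if p in available]


def run_gfpgan(model_path="gfpgan.onnx"):  # hypothetical file name
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        model_path,
        providers=pick_providers(ort.get_available_providers()),
    )
    # Assumption: the 512 model takes a 1x3x512x512 float32 input in [-1, 1].
    x = (np.random.rand(1, 3, 512, 512).astype(np.float32) * 2.0) - 1.0
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: x})
```

The first TensorRT-provider run is slow because the engine is built on the fly; subsequent runs use the cached engine.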