Nice work!
I ran it on a 4090 and the inference time is 0.6 s, which seems a little long. Is this the expected performance?
On a 3060 the inference time is 10 s.
How did you get 0.6s?
I am trying with CUDAExecutionProvider, but it still gives a 10 s inference time on an RTX A6000 ADA.
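For what it's worth, a common cause of unexpectedly slow times with ONNX Runtime is a silent fallback to CPU, or timing the first call (which includes CUDA/session initialization). Below is a minimal sketch to check which provider is actually used and to time a warmed-up run; the model path `model.onnx` and the dummy input are placeholders, not from this repo:

```python
# Minimal sketch: verify CUDAExecutionProvider is active and time inference.
# "model.onnx" is a placeholder path; input shape/dtype are assumptions.
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
# If the CUDA provider failed to load, ONNX Runtime falls back to CPU silently.
print(sess.get_providers())

inp = sess.get_inputs()[0]
# Replace dynamic dimensions with 1 just to build a dummy tensor.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)

sess.run(None, {inp.name: dummy})  # warm-up; first call includes initialization
start = time.perf_counter()
sess.run(None, {inp.name: dummy})
print(f"inference time: {time.perf_counter() - start:.3f}s")
```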