
How to run the YOLO model on the GPU #224

Closed
1091492188 opened this issue Dec 13, 2024 · 7 comments

Comments

@1091492188 commented Dec 13, 2024

Sincerely asking: testing some papers takes ten-plus minutes, which is a bit slow.

@Byaidu (Owner) commented Dec 13, 2024

The bottleneck is the network, not the YOLO model.

@Byaidu closed this as completed Dec 13, 2024
@1091492188 (Author)

By "network", do you mean using an overseas translation API? The thing is, I'm using a domestic (Chinese) API.

@1091492188 (Author)

[image]
As you can see, the speed is still fairly slow.

@hellofinch (Contributor)

APIs, domestic or overseas, all limit the number of requests per unit of time. Another possibility is what is mentioned in this issue: #216

@awwaawwa (Contributor)

I took a quick look. CUDA and Apple CoreML will definitely run, and onnxruntime has DirectML support, but I haven't checked whether the build on PyPI includes it. I'll profile this when I have time. My guess is that once the pipeline steps run in parallel, this bottleneck won't be visible anymore. If I find time later, I'll probably make the changes and open a PR.

@imClumsyPanda (Contributor)

https://onnxruntime.ai/docs/execution-providers/CoreML-ExecutionProvider.html#configuration-options-new-api

Using CoreML apparently works by following the code above, but I'm not sure how to verify whether CoreML is actually in use or how much it improves performance, so I've only tried it locally and haven't opened a PR yet.

@awwaawwa (Contributor)

> https://onnxruntime.ai/docs/execution-providers/CoreML-ExecutionProvider.html#configuration-options-new-api
>
> Using CoreML apparently works by following the code above, but I'm not sure how to verify whether CoreML is actually in use or how much it improves performance, so I've only tried it locally and haven't opened a PR yet.

I've tried it locally as well: after saturating 20 QPS in #330, CoreML significantly reduces CPU usage. I'll merge the related code into that PR later. (And I'll also try DirectML.)


5 participants