
Lower resolution for faster inference? #1

Open
antithing opened this issue Dec 26, 2024 · 3 comments

Comments

@antithing

Hi, and thanks for making this code available!

Is it possible to run at a lower resolution, e.g. 512x512, to reach faster inference speeds?

I am aiming for 20ms per image.

@leon0514
Owner

I don't know how to do it; I tried to export a 512x512 ONNX model, but it failed. It is possible to export ONNX models with dynamic width and height, but previously I ran into problems when running models with dynamic width and height in onnxruntime. @antithing

@leon0514
Owner

You can take a look at this PR: apple/ml-depth-pro#45. It contains a script, convert_to_coreml.py. I partially succeeded with it, but I changed too much, which left the ONNX structure a mess. I don't have the time to work on it anymore. @antithing

@leon0514
Owner

I successfully changed it to 768 x 768. Inference takes about 60 ms on a 3090, but the results are completely wrong.
