
Possibility for PTH to ONNX conversion #1

Open
Zarxrax opened this issue Nov 20, 2023 · 2 comments

@Zarxrax

Zarxrax commented Nov 20, 2023

I was doing some thinking about the possibility of AnimeJaNaiConverterGUI being able to do PTH to ONNX conversion itself.
A problem is that PyTorch is a huge dependency, and it would be kind of weird to require it just for converting models.
However, the CPU-only version of PyTorch is significantly smaller than the CUDA version, and should be able to do the ONNX conversion just fine.
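For reference, a minimal sketch of what a CPU-only fp32 export could look like with the classic torch.onnx.export API. The input shape is arbitrary, and it assumes the .pth holds a full pickled module; most upscaling models actually ship as bare state_dicts, which need the architecture code to reconstruct (spandrel can handle that part):

```python
# Minimal sketch: .pth -> fp32 ONNX on CPU with the classic exporter.
# Assumes model.pth is a full pickled nn.Module (an assumption; bare
# state_dicts need the architecture class to rebuild the module first).
import torch

model = torch.load("model.pth", map_location="cpu")
model.eval()

dummy = torch.rand(1, 3, 64, 64)  # NCHW fp32 dummy input; shape is a placeholder
torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    opset_version=17,
    input_names=["input"],
    output_names=["output"],
    # mark spatial dims dynamic so one ONNX works for any resolution
    dynamic_axes={
        "input": {0: "batch", 2: "height", 3: "width"},
        "output": {0: "batch", 2: "height", 3: "width"},
    },
)
```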

As a test, I ran the conversion in chaiNNer with it set to CPU mode. A conversion in fp32 mode happened at essentially the same speed as it would on the GPU. However, chaiNNer would not allow an fp16 conversion while in CPU mode. I'm not sure if this is a limitation of chaiNNer or a limitation of PyTorch.

PyTorch 2 also has a new ONNX exporter, so even if the old one doesn't allow an fp16 conversion, I wonder if this new one might?
https://pytorch.org/docs/stable/onnx.html
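A hedged sketch of that newer exporter (torch.onnx.dynamo_export, a beta API in PyTorch 2.x at the time of writing, so the signature may change):

```python
# Sketch of the PyTorch 2 dynamo-based exporter (beta API).
import torch

model = torch.load("model.pth", map_location="cpu").eval()  # same full-module assumption as above
dummy = torch.rand(1, 3, 64, 64)

onnx_program = torch.onnx.dynamo_export(model, dummy)
onnx_program.save("model.onnx")

# An fp16 attempt would cast the model and input first. Whether the
# CPU trace succeeds depends on half-precision kernel coverage, which
# may be the same limitation the old exporter hits:
# onnx_program = torch.onnx.dynamo_export(model.half(), dummy.half())
```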

@the-database
Owner

the-database commented Nov 20, 2023

Good idea, this is worth exploring. If we can convert pth to fp32 ONNX on the fly with PyTorch CPU, and vs-mlrt can further convert fp32 to fp16 on the fly, that would be ideal - end users wouldn't need to be concerned about fp32 vs fp16 for ONNX, and pth support would make supporting OpenModelDB models much simpler.
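For illustration, the on-the-fly fp16 path through vs-mlrt's Python wrapper looks roughly like this (a sketch from memory; the fp16 flag on each backend is what performs the fp32-to-fp16 conversion, and the surrounding VapourSynth script with an existing `clip` is assumed):

```python
# Sketch: vs-mlrt inference from a fp32 ONNX model with on-the-fly fp16.
# "clip" is an existing VapourSynth clip; the backend choice is illustrative.
from vsmlrt import Backend, inference

upscaled = inference(
    clip,
    network_path="model.onnx",       # fp32 ONNX from the export above
    backend=Backend.TRT(fp16=True),  # or e.g. Backend.ORT_DML(fp16=True)
)
```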

For DirectML and NCNN, vs-mlrt is already doing this on-the-fly fp16 conversion for us. It seems vs-mlrt should be able to do the same for TensorRT, and I believe mpv_lazy already configures vs-mlrt to do it. I tried borrowing some of their engine generation code but ran into errors, though I haven't spent much time on it yet.

@Zarxrax
Author

Zarxrax commented Feb 2, 2024

A couple more quick thoughts on this.
I believe spandrel makes it easy to load PyTorch models and read various parameters from them. https://github.com/chaiNNer-org/spandrel
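Something like this, going by spandrel's README (a sketch; the descriptor attribute names are from memory and worth double-checking):

```python
# Sketch: load an arbitrary .pth with spandrel, read its metadata, and
# hand the underlying nn.Module to the ONNX export shown earlier.
from spandrel import ModelLoader

descriptor = ModelLoader().load_from_file("model.pth")
print(descriptor.scale, descriptor.input_channels, descriptor.output_channels)

net = descriptor.model.eval()  # plain torch.nn.Module, ready for torch.onnx.export
```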

neosr recently added a pth-to-ONNX conversion script which might serve as a good example of how the conversion can work. https://github.com/muslll/neosr/blob/master/convert.py
