How to deploy the quantized model? #15

Open
jiinhui opened this issue Oct 8, 2021 · 1 comment
Labels
question Further information is requested

Comments

jiinhui commented Oct 8, 2021

Once we have trained the quantized model, how do we deploy it on a CPU backend?

zhutmost (Owner) commented Oct 8, 2021

For research purposes, you can run it on the CPU backend by simply modifying the config YAML, e.g. as sketched below.
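A minimal config sketch. The key names (`device`, `type`) are assumptions for illustration; check this repo's actual config.yaml for the exact schema.

```yaml
# Hypothetical fragment: switch the run device from GPU to CPU.
device:
  type: cpu   # was: cuda
```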
I am not familiar with deployment in production environments. Maybe you can use the saved checkpoints to re-construct your model (or export it to ONNX)? A rough export sketch follows.
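A hedged sketch of the checkpoint-to-ONNX route. The `resnet18` stand-in, the checkpoint filename, and the `"state_dict"` key are assumptions, not this repo's actual API; substitute however the quantized network is really constructed and saved.

```python
# Sketch: rebuild a model from a saved checkpoint and export it to ONNX.
import torch
import torchvision

model = torchvision.models.resnet18()                    # stand-in for the trained quantized model
ckpt = torch.load("checkpoint.pth", map_location="cpu")  # load weights onto the CPU
model.load_state_dict(ckpt["state_dict"], strict=False)  # checkpoint key layout is an assumption
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)                # example input; match your dataset's shape
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=13)
```

The exported model.onnx can then be run with a CPU inference engine such as ONNX Runtime.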

zhutmost added the question label Oct 8, 2021