
Using an NPU for model inference, how to set up a single process #12519

Open
Bofuser opened this issue Dec 10, 2024 · 2 comments


Bofuser commented Dec 10, 2024

When using the NPU to run inference with the MiniCPM-V-2_6 model, I found that Python was split into four parallel processes for inference, which consumed a lot of memory.
[screenshot: four parallel Python processes during NPU inference]

GPU inference, by contrast, runs as a single process:
[screenshot: a single Python process during GPU inference]

I could not find a setting for single-process inference in the demo code. How can this be configured?
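For context, the NPU demo presumably loads the model through ipex-llm's `npu_model` API along these lines. This is a minimal sketch based on ipex-llm's public NPU examples, not the reporter's actual script; the model id and parameter values are assumptions, and note that this loading path exposes no switch for single-process mode:

```python
# Hypothetical sketch of the NPU demo's loading path (model id and
# parameter values are assumptions, not taken from the reporter's script).
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers.npu_model import AutoModel

model_path = "openbmb/MiniCPM-V-2_6"  # assumed model id

# Load the model with ipex-llm's NPU backend. The runtime decides the
# process layout internally; there is no flag here for single-process mode.
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    attn_implementation="eager",
    load_in_low_bit="sym_int4",  # low-bit weight quantization for the NPU
    optimize_model=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
```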

Oscilloscope98 (Contributor) commented Dec 11, 2024

Hi @Bofuser,

For MiniCPM-V-2_6, we currently only support running on NPU with multiple processes. We will post an update here if we add support for running this model in a single process :)

Bofuser (Author) commented Dec 11, 2024

Got it, thanks for your reply!
