Priority
P2-High
OS type
Ubuntu
Hardware type
GPU-AMD
Running nodes
Single Node
Description
Based on a discussion with Zhiwei at Intel, I understand there are plans to create OIM for selecting the inference backend based on the model.
I'm opening this feature request to add the requirement that OIM should allow the user to configure and override the backend it selects. Not every inference backend will be available on every hardware and software platform, so the user needs a way to manually set the actual inference backend used for the OPEA workload.
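To illustrate the requested behavior, here is a minimal sketch of the override logic. The `OPEA_INFERENCE_BACKEND` variable and the backend names are hypothetical, not an existing OIM API: the idea is simply that OIM's model-based choice acts as the default, an explicit user setting takes precedence, and an unsupported override fails fast rather than silently falling back.

```python
import os

# Hypothetical names for illustration only; OIM's actual configuration
# surface is not yet defined in this issue.
SUPPORTED_BACKENDS = {"vllm", "tgi", "ovms"}


def select_backend(model_default: str) -> str:
    """Return the inference backend to use.

    A user-set environment variable overrides the model-based default
    that OIM would otherwise pick; an unknown override is rejected
    explicitly so misconfiguration is visible at startup.
    """
    override = os.getenv("OPEA_INFERENCE_BACKEND")
    if override:
        if override not in SUPPORTED_BACKENDS:
            raise ValueError(
                f"Unsupported backend override {override!r}; "
                f"expected one of {sorted(SUPPORTED_BACKENDS)}"
            )
        return override
    return model_default


# Example: OIM picks "tgi" for this model, but the user forces vLLM with
#   export OPEA_INFERENCE_BACKEND=vllm
print(select_backend("tgi"))
```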