I reviewed the Discussions, and have a new bug or useful enhancement to share.
Expected Behavior
Disabling LLAVA for GPU builds should produce a version that uses the Intel GPU.
Current Behavior
This is a follow up from #1709, which describes the build steps for using an Intel iGPU with oneAPI.
We noted that, when you use `-DLLAVA_BUILD=OFF`, the resulting build doesn't have iGPU support.
So building this project with that flag, and then running the following sample code:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="C:/Users/dnoliver/Downloads/llama-2-7b.Q4_0.gguf",
    n_gpu_layers=-1,
    seed=1337,
    n_ctx=2048,
)
output = llm(
    "Name the planets in the solar system.",
    max_tokens=256,
    echo=True,
)
print(output)
```
Works, but it only uses the CPU.
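A quick way to confirm this from Python (a diagnostic sketch, not part of the original report; `llama_supports_gpu_offload` comes from the upstream llama.cpp C API exposed by the bindings):

```python
# Diagnostic sketch: check whether the installed llama-cpp-python wheel
# was built with a GPU backend. The import is guarded so the snippet
# also runs on machines without the package installed.
try:
    import llama_cpp
except ImportError:
    llama_cpp = None

def gpu_offload_supported():
    """True/False if llama-cpp-python is installed, None otherwise."""
    if llama_cpp is None:
        return None
    return bool(llama_cpp.llama_supports_gpu_offload())

print(gpu_offload_supported())
```

A CPU-only wheel reports `False` here even when `n_gpu_layers=-1` is passed to `Llama`.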
Environment and Context
- 12th Gen Intel Core i7-1270P
- Intel Iris Xe Graphics
- Windows 11
- Python 3.11.10
- Visual Studio 2022
- Intel oneAPI Toolkit 2025.0
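As a sanity check on this environment (a suggestion, not part of the original report), oneAPI ships a `sycl-ls` utility that lists the devices visible to the SYCL runtime; the Iris Xe iGPU should appear as a `[level_zero:gpu]` entry when the runtime can see it. Shown here in POSIX shell form; on Windows, run `sycl-ls` from an oneAPI command prompt:

```shell
# List SYCL-visible devices; guard for machines where the oneAPI
# environment is not active so the snippet still runs cleanly.
if command -v sycl-ls >/dev/null 2>&1; then
    sycl-ls
else
    echo "sycl-ls not found (oneAPI environment not active)"
fi
```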
Failure Information (for bugs)
There is no GPU usage, as shown in the failure log below.
Steps to Reproduce
1. Build llama-cpp-python for the Intel iGPU following the oneAPI steps in #1709, adding `-DLLAVA_BUILD=OFF`.
2. Run the sample code above.
3. Observe that inference runs on the CPU only.
Failure Logs
It works, but it doesn't use the GPU backend.