WSL / Docker ipex-llm-inference-cpp-xpu:latest SIGSEGV on model load #12592
Hello,
Below is my Alder Lake A770 WSL / Docker setup configuration (2 GPUs):
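For context, WSL2 exposes the GPUs to the Linux side through the /dev/dxg paravirtualization device (rather than the native /dev/dri render nodes you'd see on bare-metal Linux), so a quick sanity check on the host is:

```bash
# On the WSL Ubuntu host: WSL2 routes GPU access through /dev/dxg,
# not the bare-metal /dev/dri render nodes.
ls -l /dev/dxg
```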
From the IPEX-LLM container:
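For reference, a quick way to confirm that both A770s are visible from inside the container (assuming the image ships the oneAPI runtime at its standard path) is:

```bash
# Inside the IPEX-LLM container; paths are the standard oneAPI defaults.
source /opt/intel/oneapi/setvars.sh
sycl-ls   # with 2 GPUs you'd expect two [level_zero:gpu] entries (indices 0 and 1)
```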
This is the Docker command used for IPEX-LLM:
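Roughly along these lines (reconstructed from the ipex-llm Windows/WSL docker quickstart rather than copied verbatim, so treat the exact flags and sizes as approximate):

```bash
# Sketch of a typical run command for this image on WSL2.
# /dev/dxg is the WSL2 GPU device; /usr/lib/wsl holds the WSL-side
# GPU driver libraries the container needs to reach the Arc GPUs.
docker run -itd \
  --net=host \
  --privileged \
  --device=/dev/dxg \
  -v /usr/lib/wsl:/usr/lib/wsl \
  --shm-size=16g \
  --name=ipex-llm \
  intelanalytics/ipex-llm-inference-cpp-xpu:latest
```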
Switching the device selection between level_zero:0 / 1 / * doesn't change the behaviour observed below.
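(For anyone reproducing: those values go through the oneAPI device selector environment variable.)

```bash
# Pin the SYCL runtime to a single Level Zero GPU, or expose all of them.
export ONEAPI_DEVICE_SELECTOR=level_zero:0     # first GPU only
# export ONEAPI_DEVICE_SELECTOR=level_zero:1   # second GPU only
# export ONEAPI_DEVICE_SELECTOR='level_zero:*' # every Level Zero device
```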
Pulling a model with ollama works just fine, but trying to run it results in the following:
docker_logs.txt
Full log attached. Any hints / ideas on what I might be doing wrong are welcome, as it's my 3rd day battling this (rookie numbers, I know, but still).
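For the record, the repro is just the standard ollama flow (the model name below is a placeholder, not the specific one from the log):

```bash
ollama pull llama3.2   # pull completes fine
ollama run llama3.2    # crashes with SIGSEGV while loading the model
```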
Update:
Confirmed with WSL kernels 6.6.36.6-microsoft-standard-WSL2+ and 5.15.167.4.
Comments

Just noticed the following in the WSL Ubuntu host dmesg:

I'm not sure what to make of this. A Windows <> WSL GPU driver incompatibility? A possible kernel issue?

@vladislavdonchev You can follow this guide to start the docker container on Windows WSL and run it again. Maybe some environment setting in your script crashes the ollama program, like …
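For anyone digging into the dmesg angle above, the relevant records can be pulled on the WSL host with something like this (the grep pattern is a guess at which subsystems matter here):

```bash
# -T prints human-readable timestamps; the pattern covers the segfault
# record plus the WSL graphics (dxg) and Intel GPU (i915) drivers.
sudo dmesg -T | grep -i -E 'segfault|ollama|dxg|i915'
```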