Follow the build process described in the Expected Behavior section
Run the build with set CMAKE_ARGS=-DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release and pip install -e .
Failure Logs
This is a follow-up to #1709, which describes the build steps for using an Intel iGPU with oneAPI.
(poc) C:\Users\dnoliver\GitHub\dnoliver\llama-cpp-python>set CMAKE_ARGS=-DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release
(poc) C:\Users\dnoliver\GitHub\dnoliver\llama-cpp-python>pip install -e .
Obtaining file:///C:/Users/dnoliver/GitHub/dnoliver/llama-cpp-python
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... done
Preparing editable metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in c:\users\dnoliver\appdata\local\miniconda3\envs\poc\lib\site-packages (from llama_cpp_python==0.3.2) (4.12.2)
Requirement already satisfied: numpy>=1.20.0 in c:\users\dnoliver\appdata\local\miniconda3\envs\poc\lib\site-packages (from llama_cpp_python==0.3.2) (1.26.4)
Requirement already satisfied: diskcache>=5.6.1 in c:\users\dnoliver\appdata\local\miniconda3\envs\poc\lib\site-packages (from llama_cpp_python==0.3.2) (5.6.3)
Requirement already satisfied: jinja2>=2.11.3 in c:\users\dnoliver\appdata\local\miniconda3\envs\poc\lib\site-packages (from llama_cpp_python==0.3.2) (3.1.4)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\dnoliver\appdata\local\miniconda3\envs\poc\lib\site-packages (from jinja2>=2.11.3->llama_cpp_python==0.3.2) (3.0.2)
Building wheels for collected packages: llama_cpp_python
Building editable for llama_cpp_python (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building editable for llama_cpp_python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [73 lines of output]
*** scikit-build-core 0.10.7 using CMake 3.31.0 (editable)
*** Configuring CMake...
2024-12-02 10:52:34,863 - scikit_build_core - WARNING - Unsupported CMAKE_ARGS ignored: -DCMAKE_BUILD_TYPE=Release
2024-12-02 10:52:35,007 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None
2024-12-02 10:52:35,012 - scikit_build_core - WARNING - Unsupported CMAKE_ARGS ignored: -DCMAKE_BUILD_TYPE=Release
loading initial cache file C:\Users\dnoliver\AppData\Local\Temp\tmpd8q_q4t6\build\CMakeInit.txt
-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.22631.
-- The C compiler identification is MSVC 19.42.34433.0
-- The CXX compiler identification is MSVC 19.42.34433.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.47.0.windows.2")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- CMAKE_GENERATOR_PLATFORM: x64
-- Found OpenMP_C: -openmp (found version "2.0")
-- Found OpenMP_CXX: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- OpenMP found
-- Using llamafile
-- x86 detected
-- Performing Test HAS_AVX_1
-- Performing Test HAS_AVX_1 - Success
-- Performing Test HAS_AVX2_1
-- Performing Test HAS_AVX2_1 - Success
-- Performing Test HAS_FMA_1
-- Performing Test HAS_FMA_1 - Success
-- Performing Test HAS_AVX512_1
-- Performing Test HAS_AVX512_1 - Failed
-- Performing Test HAS_AVX512_2
-- Performing Test HAS_AVX512_2 - Failed
-- Using runtime weight conversion of Q4_0 to Q4_0_x_x to enable optimized GEMM/GEMV kernels
-- Including CPU backend
-- Using AMX
-- Including AMX backend
-- Performing Test SUPPORTS_SYCL
-- Performing Test SUPPORTS_SYCL - Failed
-- Using oneAPI Release SYCL compiler (icpx).
-- SYCL found
-- DNNL found:1
CMake Error at vendor/llama.cpp/ggml/src/ggml-sycl/CMakeLists.txt:63 (find_package):
Found package configuration file:
C:/Program Files (x86)/Intel/oneAPI/compiler/latest/lib/cmake/IntelSYCL/IntelSYCLConfig.cmake
but it set IntelSYCL_FOUND to FALSE so package "IntelSYCL" is considered to
be NOT FOUND. Reason given by package:
Unsupported compiler family MSVC and compiler C:/Program Files/Microsoft
Visual
Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe!!
-- Configuring incomplete, errors occurred!
*** CMake configuration failed
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building editable for llama_cpp_python
Failed to build llama_cpp_python
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama_cpp_python)
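The log above shows CMake identifying MSVC (cl.exe) for both C and CXX even though CMAKE_CXX_COMPILER=icx was passed; the Visual Studio generator is known to select its own toolchain, which appears to be why the IntelSYCL package check rejects the compiler. One way to confirm which compiler a configure run actually recorded is to read the generated CMakeCache.txt. A minimal sketch (the helper below is illustrative, not part of llama-cpp-python):

```python
import re
from pathlib import Path

def cached_compiler(build_dir, lang="CXX"):
    """Return the CMAKE_<lang>_COMPILER path recorded in CMakeCache.txt, or None."""
    cache = Path(build_dir) / "CMakeCache.txt"
    if not cache.exists():
        return None
    pattern = re.compile(rf"^CMAKE_{lang}_COMPILER:(?:FILEPATH|STRING)=(.+)$")
    for line in cache.read_text().splitlines():
        m = pattern.match(line)
        if m:
            return m.group(1)
    return None

# Usage: cached_compiler("build") -- a cache left behind by the failing
# Visual Studio configure run would report cl.exe here, not icx.
```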
The workaround is to select Ninja as the CMake generator with set CMAKE_GENERATOR=Ninja.
My first guess was that LLaVA was causing the problem, but if I disable it I get no GPU support (see #1851).
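For anyone scripting the workaround, a small Python sketch (the helper name is made up for illustration) that assembles the same environment the two set commands produce before invoking pip:

```python
import os
import subprocess

def sycl_build_env(generator="Ninja"):
    """Environment for the SYCL build, mirroring the workaround's set commands."""
    env = dict(os.environ)
    env["CMAKE_GENERATOR"] = generator
    env["CMAKE_ARGS"] = (
        "-DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl "
        "-DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release"
    )
    return env

# Usage (would run the editable install with Ninja selected):
# subprocess.run(["pip", "install", "-e", "."], env=sycl_build_env(), check=True)
```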
dnoliver changed the title from "Windows with Intel iGPU fails to build if Ninja is not the selected backend" to "Windows with Intel GPU fails to build if Ninja is not the selected backend" on Dec 2, 2024.
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
A GPU-enabled build should work when using the parameters listed under Steps to Reproduce.
Current Behavior
The build fails during CMake configuration, at IntelSYCLConfig.cmake.
Environment and Context
Failure Information (for bugs)
CMake fails to configure the project; see the Failure Logs section for details.
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
set CMAKE_ARGS=-DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release
and pip install -e .