
Windows with Intel GPU fails to build if Ninja is not the selected backend #1852

dnoliver opened this issue Dec 2, 2024 · 0 comments

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

A GPU-enabled build should succeed with the following parameters:

set CMAKE_ARGS=-DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release
pip install -e .

Current Behavior

The build fails during CMake configuration, at IntelSYCLConfig.cmake.

Environment and Context

12th Gen Intel Core i7-1270P
Intel Iris Xe Graphics
Windows 11
Python 3.11.10
Visual Studio 2022
Intel oneAPI Toolkit 2025.0

Failure Information (for bugs)

CMake fails to configure the project, see logs for details.

Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

  1. Follow the steps at https://github.com/ggerganov/llama.cpp/blob/master/docs/backend/SYCL.md#windows to get the SYCL build environment ready
  2. Set the build flags with set CMAKE_ARGS=-DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release
  3. Run the build with pip install -e .
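Taken together, the reproduction amounts to the following sketch (Windows cmd). The default oneAPI setvars.bat path and the recursive clone are my assumptions for a fresh setup, not part of the report:

```shell
:: Sketch of the reproduction on Windows cmd (not verbatim from the report).
:: Assumes the default oneAPI install path and a fresh recursive clone.
git clone --recursive https://github.com/abetlen/llama-cpp-python
cd llama-cpp-python

:: Initialize the oneAPI environment so icx is on PATH
call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64

:: Same flags as in Expected Behavior; no generator is selected here,
:: so CMake falls back to Visual Studio 17 2022 and configuration fails
set CMAKE_ARGS=-DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release
pip install -e .
```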

Failure Logs

This is a follow-up to #1709, which describes the build steps for using an Intel iGPU with oneAPI.

(poc) C:\Users\dnoliver\GitHub\dnoliver\llama-cpp-python>set CMAKE_ARGS=-DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release

(poc) C:\Users\dnoliver\GitHub\dnoliver\llama-cpp-python>pip install -e .
Obtaining file:///C:/Users/dnoliver/GitHub/dnoliver/llama-cpp-python
  Installing build dependencies ... done
  Checking if build backend supports build_editable ... done
  Getting requirements to build editable ... done
  Preparing editable metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in c:\users\dnoliver\appdata\local\miniconda3\envs\poc\lib\site-packages (from llama_cpp_python==0.3.2) (4.12.2)
Requirement already satisfied: numpy>=1.20.0 in c:\users\dnoliver\appdata\local\miniconda3\envs\poc\lib\site-packages (from llama_cpp_python==0.3.2) (1.26.4)
Requirement already satisfied: diskcache>=5.6.1 in c:\users\dnoliver\appdata\local\miniconda3\envs\poc\lib\site-packages (from llama_cpp_python==0.3.2) (5.6.3)
Requirement already satisfied: jinja2>=2.11.3 in c:\users\dnoliver\appdata\local\miniconda3\envs\poc\lib\site-packages (from llama_cpp_python==0.3.2) (3.1.4)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\dnoliver\appdata\local\miniconda3\envs\poc\lib\site-packages (from jinja2>=2.11.3->llama_cpp_python==0.3.2) (3.0.2)
Building wheels for collected packages: llama_cpp_python
  Building editable for llama_cpp_python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building editable for llama_cpp_python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [73 lines of output]
      *** scikit-build-core 0.10.7 using CMake 3.31.0 (editable)
      *** Configuring CMake...
      2024-12-02 10:52:34,863 - scikit_build_core - WARNING - Unsupported CMAKE_ARGS ignored: -DCMAKE_BUILD_TYPE=Release
      2024-12-02 10:52:35,007 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None
      2024-12-02 10:52:35,012 - scikit_build_core - WARNING - Unsupported CMAKE_ARGS ignored: -DCMAKE_BUILD_TYPE=Release
      loading initial cache file C:\Users\dnoliver\AppData\Local\Temp\tmpd8q_q4t6\build\CMakeInit.txt
      -- Building for: Visual Studio 17 2022
      -- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.22631.
      -- The C compiler identification is MSVC 19.42.34433.0
      -- The CXX compiler identification is MSVC 19.42.34433.0
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.47.0.windows.2")
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
      -- Looking for pthread_create in pthreads
      -- Looking for pthread_create in pthreads - not found
      -- Looking for pthread_create in pthread
      -- Looking for pthread_create in pthread - not found
      -- Found Threads: TRUE
      -- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
      -- CMAKE_SYSTEM_PROCESSOR: AMD64
      -- CMAKE_GENERATOR_PLATFORM: x64
      -- Found OpenMP_C: -openmp (found version "2.0")
      -- Found OpenMP_CXX: -openmp (found version "2.0")
      -- Found OpenMP: TRUE (found version "2.0")
      -- OpenMP found
      -- Using llamafile
      -- x86 detected
      -- Performing Test HAS_AVX_1
      -- Performing Test HAS_AVX_1 - Success
      -- Performing Test HAS_AVX2_1
      -- Performing Test HAS_AVX2_1 - Success
      -- Performing Test HAS_FMA_1
      -- Performing Test HAS_FMA_1 - Success
      -- Performing Test HAS_AVX512_1
      -- Performing Test HAS_AVX512_1 - Failed
      -- Performing Test HAS_AVX512_2
      -- Performing Test HAS_AVX512_2 - Failed
      -- Using runtime weight conversion of Q4_0 to Q4_0_x_x to enable optimized GEMM/GEMV kernels
      -- Including CPU backend
      -- Using AMX
      -- Including AMX backend
      -- Performing Test SUPPORTS_SYCL
      -- Performing Test SUPPORTS_SYCL - Failed
      -- Using oneAPI Release SYCL compiler (icpx).
      -- SYCL found
      -- DNNL found:1
      CMake Error at vendor/llama.cpp/ggml/src/ggml-sycl/CMakeLists.txt:63 (find_package):
        Found package configuration file:

          C:/Program Files (x86)/Intel/oneAPI/compiler/latest/lib/cmake/IntelSYCL/IntelSYCLConfig.cmake

        but it set IntelSYCL_FOUND to FALSE so package "IntelSYCL" is considered to
        be NOT FOUND.  Reason given by package:

        Unsupported compiler family MSVC and compiler C:/Program Files/Microsoft
        Visual
        Studio/2022/Community/VC/Tools/MSVC/14.42.34433/bin/Hostx64/x64/cl.exe!!



      -- Configuring incomplete, errors occurred!

      *** CMake configuration failed
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building editable for llama_cpp_python
Failed to build llama_cpp_python
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama_cpp_python)

The workaround is to select Ninja as the CMake generator with set CMAKE_GENERATOR=Ninja.
My first guess was that LLAVA was causing the problem, but disabling it produces a build without GPU support (see #1851).
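Spelled out, the workaround is one extra environment variable before the same build command (a sketch, assuming a shell where the oneAPI environment has already been initialized):

```shell
:: Select Ninja so CMake drives icx directly instead of generating a
:: Visual Studio solution, which forces MSVC's cl.exe as the CXX compiler
set CMAKE_GENERATOR=Ninja
set CMAKE_ARGS=-DGGML_SYCL=ON -DCMAKE_C_COMPILER=cl -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release
pip install -e .
```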

@dnoliver dnoliver changed the title Windows with Intel iGPU fails to build if Ninja is not the selected backend Windows with Intel GPU fails to build if Ninja is not the selected backend Dec 2, 2024
@dnoliver dnoliver mentioned this issue Dec 2, 2024