Parse https://ollama.com/library/ syntax (#11480) #3892
build.yml
on: push
Jobs

- Matrix: windows-2019-cmake-cuda
- Matrix: windows-latest-cmake-hip-release
- Matrix: windows-latest-cmake
- macOS-latest-cmake-arm64 (13m 5s)
- macOS-latest-cmake-x64 (5m 31s)
- ubuntu-cpu-cmake (3m 0s)
- ubuntu-latest-cmake-rpc (3m 1s)
- ubuntu-22-cmake-vulkan (18m 59s)
- ubuntu-22-cmake-hip (20m 22s)
- ubuntu-22-cmake-musa (12m 37s)
- ubuntu-22-cmake-sycl (5m 12s)
- ubuntu-22-cmake-sycl-fp16 (5m 21s)
- macOS-latest-cmake-ios (2m 58s)
- macOS-latest-cmake-tvos (1m 55s)
- ubuntu-latest-cmake-cuda (12m 3s)
- windows-latest-cmake-sycl (10m 16s)
- windows-latest-cmake-hip (21m 35s)
- ios-xcode-build (2m 45s)
- android-build (6m 42s)
- Matrix: macOS-latest-swift
- Matrix: openEuler-latest-cmake-cann
- Matrix: ubuntu-latest-cmake-sanitizer
- Matrix: windows-msys2
- release (1m 59s)
Annotations
1 error
windows-latest-cmake (avx512-x64, -DGGML_NATIVE=OFF -DLLAMA_BUILD_SERVER=ON -DGGML_RPC=ON -DGGML_...
Process completed with exit code 1.
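The failing avx512-x64 configuration can be reproduced locally to investigate the exit-code-1 failure. A minimal sketch, assuming a llama.cpp checkout and using only the CMake flags visible in the annotation (the full flag list is truncated in the log, so this is not the complete CI invocation):

```shell
# Configure with the flags visible in the failing job's annotation.
# Note: the annotation truncates after -DGGML_RPC=ON, so further flags
# used by CI (e.g. the avx512 instruction-set options) are omitted here.
cmake -B build \
  -DGGML_NATIVE=OFF \
  -DLLAMA_BUILD_SERVER=ON \
  -DGGML_RPC=ON

# Build in Release mode, as the Windows CI jobs do.
cmake --build build --config Release
```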
Artifacts
Produced during runtime
| Name | Size |
|---|---|
| cudart-llama-bin-win-cu11.7-x64.zip | 303 MB |
| cudart-llama-bin-win-cu12.4-x64.zip | 372 MB |
| llama-bin-macos-arm64.zip | 20.9 MB |
| llama-bin-macos-x64.zip | 22.4 MB |
| llama-bin-ubuntu-x64.zip | 24.2 MB |
| llama-bin-win-avx-x64.zip | 13.8 MB |
| llama-bin-win-avx2-x64.zip | 13.8 MB |
| llama-bin-win-avx512-x64.zip | 13.8 MB |
| llama-bin-win-cu11.7-x64.zip | 150 MB |
| llama-bin-win-cu12.4-x64.zip | 150 MB |
| llama-bin-win-hip-x64-gfx1030.zip | 236 MB |
| llama-bin-win-hip-x64-gfx1100.zip | 238 MB |
| llama-bin-win-hip-x64-gfx1101.zip | 238 MB |
| llama-bin-win-kompute-x64.zip | 14.1 MB |
| llama-bin-win-llvm-arm64-opencl-adreno.zip | 17.5 MB |
| llama-bin-win-llvm-arm64.zip | 17.5 MB |
| llama-bin-win-msvc-arm64.zip | 56.4 MB |
| llama-bin-win-noavx-x64.zip | 13.8 MB |
| llama-bin-win-openblas-x64.zip | 24.8 MB |
| llama-bin-win-sycl-x64.zip | 95.3 MB |
| llama-bin-win-vulkan-x64.zip | 15.9 MB |
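For a quick sense of the total release payload, the artifact sizes can be tallied with a short script. A minimal sketch, with the sizes transcribed by hand from the table above rather than fetched from the GitHub API:

```python
# Artifact sizes in MB, transcribed from the Artifacts table above.
artifact_sizes_mb = {
    "cudart-llama-bin-win-cu11.7-x64.zip": 303.0,
    "cudart-llama-bin-win-cu12.4-x64.zip": 372.0,
    "llama-bin-macos-arm64.zip": 20.9,
    "llama-bin-macos-x64.zip": 22.4,
    "llama-bin-ubuntu-x64.zip": 24.2,
    "llama-bin-win-avx-x64.zip": 13.8,
    "llama-bin-win-avx2-x64.zip": 13.8,
    "llama-bin-win-avx512-x64.zip": 13.8,
    "llama-bin-win-cu11.7-x64.zip": 150.0,
    "llama-bin-win-cu12.4-x64.zip": 150.0,
    "llama-bin-win-hip-x64-gfx1030.zip": 236.0,
    "llama-bin-win-hip-x64-gfx1100.zip": 238.0,
    "llama-bin-win-hip-x64-gfx1101.zip": 238.0,
    "llama-bin-win-kompute-x64.zip": 14.1,
    "llama-bin-win-llvm-arm64-opencl-adreno.zip": 17.5,
    "llama-bin-win-llvm-arm64.zip": 17.5,
    "llama-bin-win-msvc-arm64.zip": 56.4,
    "llama-bin-win-noavx-x64.zip": 13.8,
    "llama-bin-win-openblas-x64.zip": 24.8,
    "llama-bin-win-sycl-x64.zip": 95.3,
    "llama-bin-win-vulkan-x64.zip": 15.9,
}

# Total payload and the single largest artifact.
total_mb = sum(artifact_sizes_mb.values())
largest = max(artifact_sizes_mb, key=artifact_sizes_mb.get)
print(f"{len(artifact_sizes_mb)} artifacts, {total_mb:.1f} MB total")
print(f"largest: {largest} ({artifact_sizes_mb[largest]} MB)")
```

The CUDA runtime redistributables (`cudart-*`) dominate the total; the CPU-only Windows builds are each under 15 MB.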