Actions: ggerganov/llama.cpp

CI

8,254 workflow run results

common : Changed tuple to struct (TODO fix)
CI #14284: Pull request #8823 synchronize by Septa2112
August 5, 2024 08:36 2h 36m 18s Septa2112:pr/2
llama : refactor sampling
CI #14280: Pull request #8643 synchronize by ggerganov
August 5, 2024 07:08 2h 51m 32s gg/llama-refactor-sampling
cmake: fix paths for vulkan shaders compilation on Windows (#8573)
CI #14279: Commit e31a4f6 pushed by 0cc4m
August 5, 2024 06:18 4h 45m 54s master
llama : better replace_all (#8852)
CI #14278: Commit f1ea514 pushed by ggerganov
August 5, 2024 05:53 4h 51m 14s master
vulkan : fix Qantized Mat-Vec Mul on AMD GPUs for ncols < 64 (#8855)
CI #14277: Commit 064cdc2 pushed by ggerganov
August 5, 2024 05:52 4h 52m 37s master
sync : ggml
CI #14276: Commit 5587e57 pushed by ggerganov
August 5, 2024 05:50 4h 4m 37s master
Add support for getting cpu info on Windows for llama_bench
CI #14275: Pull request #8824 synchronize by kylo5aby
August 5, 2024 04:23 1h 35m 30s kylo5aby:cpu-info
cann: support q4_0 model (#8822)
CI #14274: Commit c02b0a8 pushed by hipudding
August 5, 2024 04:22 1h 4m 17s master
common : Changed tuple to struct (TODO fix)
CI #14273: Pull request #8823 synchronize by Septa2112
August 5, 2024 03:29 3h 9m 4s Septa2112:pr/2
[CANN] Support Q4_0 for Ascend NPU
CI #14272: Pull request #8822 synchronize by wangshuai09
August 5, 2024 03:23 56m 46s wangshuai09:q8_0
[CANN] Support Q4_0 for Ascend NPU
CI #14270: Pull request #8822 synchronize by wangshuai09
August 5, 2024 01:15 59m 8s wangshuai09:q8_0
Server: Don't ignore llama.cpp params (#8754)
CI #14267: Commit 978ba3d pushed by ngxson
August 4, 2024 18:16 1h 23m 43s master
server : add lora hotswap endpoint
CI #14266: Pull request #8857 synchronize by ngxson
August 4, 2024 18:00 48m 4s ngxson:xsn/lora_server_hotswap
sync : ggml
CI #14264: Pull request #8854 synchronize by ggerganov
August 4, 2024 16:16 50m 56s sync
sync : ggml
CI #14262: Pull request #8854 opened by ggerganov
August 4, 2024 15:30 45m 45s sync
llama : refactor sampling
CI #14261: Pull request #8643 synchronize by ggerganov
August 4, 2024 14:40 49m 2s gg/llama-refactor-sampling
batched-bench : handle empty -npl (#8839)
CI #14260: Commit ecf6b7f pushed by ggerganov
August 4, 2024 10:55 1h 59m 27s master
[example] batched-bench "segmentation fault"
CI #14259: Pull request #8839 synchronize by ggerganov
August 4, 2024 10:54 1h 45m 45s cunnie:batched-bench-no-segfault
llama : better replace_all
CI #14258: Pull request #8852 opened by ggerganov
August 4, 2024 10:45 1h 21m 47s gg/replace-all
baby-llama : remove duplicate vector include
CI #14257: Commit 01aae2b pushed by ggerganov
August 4, 2024 10:25 2h 11m 27s master