Actions: ggerganov/llama.cpp

Workflow: CI

8,096 workflow run results

llama.swiftui: Fix a small bug
CI #13974: Pull request #8268 synchronize by ggerganov
July 20, 2024 13:09 · 40m 17s · ho2103:master

llama.swiftui: Fix a small bug
CI #13973: Pull request #8268 synchronize by ggerganov
July 20, 2024 13:07 · 2m 4s · ho2103:master

llama.swiftui: Fix a small bug
CI #13971: Pull request #8268 synchronize by ho2103
July 20, 2024 03:50 · 54m 28s · ho2103:master

llama : Added support for Tekken pre-tokenizer (#8577)
CI #13969: Pull request #8579 synchronize by m18coppola
July 19, 2024 23:10 · 29m 56s · m18coppola:master

Tokenizer fixes
CI #13968: Pull request #8379 synchronize by jaime-m-p
July 19, 2024 15:24 · 1h 39m 18s · jaime-m-p:tokenizer-fixes

ggml : fix quant dot product with odd number of blocks (#8549)
CI #13967: Commit 87e397d pushed by slaren
July 19, 2024 15:17 · 1h 24m 8s · master

ggml : fix iq4_nl dot product with odd number of blocks
CI #13965: Pull request #8549 synchronize by ggerganov
July 19, 2024 14:13 · 1h 27m 30s · sl/fix-iqnl-odd-blocks

ggml : fix odd blocks for ARM_NEON
CI #13964: Pull request #8556 synchronize by ggerganov
July 19, 2024 14:01 · 1h 5m 22s · gg/fix-odd-blocks-arm

llama : bump max layers from 256 to 512 (#8530)
CI #13963: Commit d197545 pushed by ggerganov
July 19, 2024 13:50 · 1h 18m 25s · master

llama : Added support for Tekken pre-tokenizer (#8577)
CI #13962: Pull request #8579 synchronize by m18coppola
July 19, 2024 13:34 · 43m 42s · m18coppola:master

llama : move vocab, grammar and sampling into separate files
CI #13959: Pull request #8508 synchronize by ggerganov
July 19, 2024 11:22 · 1h 9m 47s · gg/llama-reorganize

ggml : add friendlier error message to fopen errors (#8575)
CI #13958: Commit b57eb9c pushed by ggerganov
July 19, 2024 11:05 · 1h 3m 14s · master

gguf : handle null name during init
CI #13957: Pull request #8587 opened by ggerganov
July 19, 2024 10:46 · 50m 50s · gg/gguf-fix-null-defer

llama : Added support for Tekken pre-tokenizer (#8577)
CI #13956: Pull request #8579 synchronize by ggerganov
July 19, 2024 10:21 · 47m 30s · m18coppola:master

fix: typo of chatglm4 chat tmpl (#8586)
CI #13955: Commit f299aa9 pushed by ngxson
July 19, 2024 09:44 · 55m 39s · master

fix: typo of chatglm4 chat tmpl
CI #13954: Pull request #8586 opened by thxCode
July 19, 2024 09:11 · 50m 20s · thxCode:chatmpl

llama : Added support for Tekken pre-tokenizer (#8577)
CI #13952: Pull request #8579 synchronize by m18coppola
July 19, 2024 02:08 · 41m 59s · m18coppola:master

llama : Added support for Tekken pre-tokenizer (#8577)
CI #13951: Pull request #8579 synchronize by m18coppola
July 19, 2024 02:05 · 2m 41s · m18coppola:master