Actions: ggml-org/llama.cpp

Workflow: CI

6,135 workflow run results

Add support for encoder-only T5 models (#8900)
CI #14420: Commit 7c3f55c pushed by fairydreaming
August 10, 2024 09:43 · 49m 14s · master

llama : default n_swa for phi-3
CI #14419: Pull request #8931 synchronize by ngxson
August 10, 2024 09:25 · 49m 22s · ngxson:xsn/phi-3-default-swa

Retrieval: Fix Memory Leak in Retrieval Query Handling
CI #14418: Pull request #8955 synchronize by gtygo
August 10, 2024 09:03 · 49m 22s · gtygo:master

Add support for encoder-only T5 models
CI #14417: Pull request #8960 synchronize by fairydreaming
August 10, 2024 08:43 · 51m 42s · fairydreaming:t5-encoder

llama : refactor sampling
CI #14416: Pull request #8643 synchronize by ggerganov
August 10, 2024 08:10 · 50m 48s · gg/llama-refactor-sampling

ggml : move rope type enum to ggml.h
CI #14415: Pull request #8949 synchronize by danbev
August 10, 2024 05:05 · 51m 55s · danbev:ggml-rope-type-refactor

Add support for encoder-only T5 models
CI #14413: Pull request #8960 opened by fairydreaming
August 9, 2024 20:59 · 1h 21m 4s · fairydreaming:t5-encoder

Changes for the existing quant strategies / FTYPEs and new ones
CI #14412: Pull request #8836 synchronize by Nexesenex
August 9, 2024 20:49 · 1h 21m 1s · Nexesenex:patch-1

Vulkan Optimizations and Fixes
CI #14411: Pull request #8959 opened by 0cc4m
August 9, 2024 20:30 · 1h 5m 41s · 0cc4m/vulkan-optimization

Merge commit from fork
CI #14410: Commit b72942f pushed by ggerganov
August 9, 2024 20:03 · 1h 41m 39s · master

Retrieval: Fix Memory Leak in Retrieval Query Handling
CI #14406: Pull request #8955 synchronize by gtygo
August 9, 2024 17:44 · 1h 13m 19s · gtygo:master

Threadpool: take 2
CI #14404: Pull request #8672 synchronize by max-krasnyansky
August 9, 2024 16:59 · 1h 45m 13s · CodeLinaro:threadpool

llama : add support for lora adapters in T5 model (#8938)
CI #14403: Commit 6afd1a9 pushed by fairydreaming
August 9, 2024 16:53 · 1h 31m 26s · master

Threadpool: take 2
CI #14402: Pull request #8672 synchronize by fmz
August 9, 2024 15:45 · 1h 14m 24s · CodeLinaro:threadpool

make : fix llava obj file race (#8946)
CI #14401: Commit 272e3bd pushed by ggerganov
August 9, 2024 15:24 · 2h 45m 15s · master

llama : better replace_all (cont) (#8926)
CI #14400: Commit 45a55b9 pushed by ggerganov
August 9, 2024 15:23 · 1h 21m 40s · master

Threadpool: take 2
CI #14398: Pull request #8672 synchronize by fmz
August 9, 2024 15:07 · 37m 24s · CodeLinaro:threadpool

Add support for lora adapters in T5 model
CI #14396: Pull request #8951 opened by fairydreaming
August 9, 2024 14:14 · 57m 24s · fairydreaming:t5-lora

ggml : move rope type enum to ggml.h
CI #14395: Pull request #8949 opened by danbev
August 9, 2024 13:38 · 52m 49s · danbev:ggml-rope-type-refactor

make : fix llava obj file race
CI #14394: Pull request #8946 opened by ggerganov
August 9, 2024 11:58 · 1h 25m 48s · gg/llava-fix-obj-race

llama : simplify Mamba with advanced batch splits
CI #14393: Pull request #8526 synchronize by ggerganov
August 9, 2024 11:38 · 56m 0s · compilade/batch-splits

llava : support MiniCPM-V-2.5 (#7599)
CI #14392: Commit 3071c0a pushed by ggerganov
August 9, 2024 10:33 · 58m 49s · master