Actions: ggerganov/llama.cpp

Python check requirements.txt

1,811 workflow run results

convert : refactor rope_freqs generation
Python check requirements.txt #2028: Pull request #9396 opened by compilade
September 10, 2024 00:59 2m 32s compilade:compilade/convert-separate-extra-tensors
RWKV v6: Add time_mix_decay_w1/w2 in quant exclusion list
Python check requirements.txt #2027: Pull request #9387 opened by MollySophia
September 9, 2024 13:39 29m 34s MollySophia:rwkv-quant-exclusion
feat: add Phi-1.5/Phi-2 tokenizer
Python check requirements.txt #2026: Pull request #9361 synchronize by daminho
September 9, 2024 07:20 2m 29s daminho:master
Add Phi-2/Phi-1.5 Tokenizer
Python check requirements.txt #2023: Pull request #9351 opened by daminho
September 7, 2024 16:04 2m 28s daminho:master
style : rearrange code + add comments and TODOs
Python check requirements.txt #2022: Commit 4b27235 pushed by ggerganov
September 7, 2024 09:30 2m 39s gg/llama-refactor-sampling-v2
Support MiniCPM3.
Python check requirements.txt #2021: Pull request #9322 synchronize by CarryFun
September 6, 2024 03:10 8m 29s OpenBMB:minicpm3
ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151)
Python check requirements.txt #2020: Commit 9bc6db2 pushed by compilade
September 6, 2024 01:48 2m 31s master
ggml-quants : ternary packing for TriLMs and BitNet b1.58
Python check requirements.txt #2018: Pull request #8151 synchronize by compilade
September 4, 2024 19:02 2m 38s compilade/bitnet-ternary
ggml-quants : ternary packing for TriLMs and BitNet b1.58
Python check requirements.txt #2017: Pull request #8151 synchronize by compilade
September 4, 2024 18:53 2m 55s compilade/bitnet-ternary
test-backend-ops : add TQ1_0 and TQ2_0 to all_types
Python check requirements.txt #2016: Commit e4dc48a pushed by compilade
September 4, 2024 18:53 2m 34s compilade/bitnet-ternary
Support video understanding
Python check requirements.txt #2015: Pull request #9165 synchronize by tc-mb
September 3, 2024 08:03 18m 39s OpenBMB:support-video-understanding
llama : add llama_sampling API + move grammar in libllama
Python check requirements.txt #2014: Commit f648ca2 pushed by ggerganov
September 3, 2024 07:33 16m 32s gg/llama-refactor-sampling
llama : support Jamba hybrid Transformer-Mamba models
Python check requirements.txt #2013: Pull request #7531 synchronize by compilade
September 2, 2024 02:00 2m 35s compilade/refactor-kv-cache
llama : support Jamba hybrid Transformer-Mamba models
Python check requirements.txt #2012: Pull request #7531 synchronize by compilade
September 2, 2024 01:50 3m 0s compilade/refactor-kv-cache
llama : support Jamba hybrid Transformer-Mamba models
Python check requirements.txt #2011: Pull request #7531 synchronize by compilade
September 2, 2024 01:47 2m 31s compilade/refactor-kv-cache
convert_hf : fix Jamba conversion
Python check requirements.txt #2010: Commit 9d3f44d pushed by compilade
September 2, 2024 01:47 2m 38s compilade/refactor-kv-cache
Merge branch 'master' into compilade/refactor-kv-cache
Python check requirements.txt #2009: Commit a03e32a pushed by compilade
September 2, 2024 01:33 16m 22s compilade/refactor-kv-cache
llama : support Jamba hybrid Transformer-Mamba models
Python check requirements.txt #2008: Pull request #7531 synchronize by compilade
September 2, 2024 01:33 11m 49s compilade/refactor-kv-cache
llama : support RWKV v6 models (#8980)
Python check requirements.txt #2007: Commit 8f1d81a pushed by ggerganov
September 1, 2024 14:38 2m 33s master
llama : support RWKV v6 models
Python check requirements.txt #2006: Pull request #8980 synchronize by MollySophia
August 31, 2024 04:18 2m 30s MollySophia:for-upstream
llama : support RWKV v6 models
Python check requirements.txt #2005: Pull request #8980 synchronize by MollySophia
August 31, 2024 03:59 2m 53s MollySophia:for-upstream
llama : support RWKV v6 models
Python check requirements.txt #2004: Pull request #8980 synchronize by ggerganov
August 30, 2024 10:31 2m 40s MollySophia:for-upstream
llama : support RWKV v6 models
Python check requirements.txt #2003: Pull request #8980 synchronize by ggerganov
August 30, 2024 10:19 2m 26s MollySophia:for-upstream
llama : support RWKV v6 models
Python check requirements.txt #2002: Pull request #8980 synchronize by MollySophia
August 30, 2024 04:13 2m 34s MollySophia:for-upstream
lora : raise error if lm_head is ignored
Python check requirements.txt #2001: Pull request #9103 synchronize by ngxson
August 28, 2024 09:24 2h 57m 48s xsn/lora_convert_ignore_lm_head