Pull requests: abetlen/llama-cpp-python
Open pull requests:
#1871 fix: replace anyio.Lock with asyncio.Lock to resolve lock handling issues (opened Dec 16, 2024 by sergey21000)
#1864 Add: Include option to normalize & truncate embeddings in create_embe… (opened Dec 15, 2024 by KanishkNavale)
#1842 Add musa_simple Dockerfile for supporting Moore Threads GPU (opened Nov 25, 2024 by yeahdongcn)
#1834 use n_threads param to call _embed_image_bytes fun (opened Nov 16, 2024 by KenForever1)
#1817 Support LoRA hotswapping and multiple LoRAs at a time (opened Oct 30, 2024 by richdougherty)
#1786 server types: Move 'model' parameter to clarify it is used (opened Oct 5, 2024 by domdomegg)
#1721 Resync llama_grammar with llama.cpp implementation and use curly braces quantities instead of repetitions (opened Aug 31, 2024 by gbloisi-openaire)
#1716 feat: adding support for external chat format contribution (opened Aug 29, 2024 by axel7083)
#1677 Allow server to accept openai's new structured output "json_schema" format (opened Aug 13, 2024 by cerealbox)
#1605 Updated README.md, llama_cpp/llama.py and pyproject.toml to add support for cross-encoders (opened Jul 17, 2024 by perpendicularai)
#1583 Support images from local storage for Llava models (opened Jul 9, 2024 by GokulMuraliRajasekar)
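For context on #1871 above: asyncio.Lock is the standard-library async mutual-exclusion primitive that the PR switches to. A minimal sketch of guarding shared state with it (the Counter class and names here are illustrative, not taken from the repository):

```python
import asyncio


class Counter:
    """Hypothetical shared resource guarded by asyncio.Lock."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()
        self.value = 0

    async def increment(self) -> None:
        # Only one coroutine may hold the lock at a time, so the
        # read-modify-write below cannot interleave with another task's.
        async with self._lock:
            current = self.value
            await asyncio.sleep(0)  # yield control mid-update
            self.value = current + 1


async def main() -> int:
    counter = Counter()
    # 100 concurrent increments; the lock prevents lost updates
    # despite the deliberate yield inside the critical section.
    await asyncio.gather(*(counter.increment() for _ in range(100)))
    return counter.value


if __name__ == "__main__":
    print(asyncio.run(main()))
```

Unlike anyio.Lock, asyncio.Lock is tied directly to the asyncio event loop, which is the behavior difference the PR title alludes to.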