Issues: ggml-org/llama.cpp
llama binary release misses libraries (Ubuntu, but perhaps others too)
#11893 · opened Feb 15, 2025 by 0wwafa
Compile bug: C++ One Definition Rule [-Wodr] violations in common/json.hpp
[bug-unconfirmed] #11876 · opened Feb 14, 2025 by srcshelton
Feature Request: API key
[enhancement] #11874 · opened Feb 14, 2025 by gsm1258
Feature Request: Use all (or a configurable number of) threads for model loading, not constrained by the --threads value specified for inference
[enhancement] #11873 · opened Feb 14, 2025 by VanceVagell
Misc. bug: Problems with official Jinja templates (Gemma 2, Llama 3.2, Qwen 2.5)
[bug] #11866 · opened Feb 14, 2025 by MoonRide303
Misc. bug: Missing <think> tag in response (DeepSeek R1)
[bug-unconfirmed] #11861 · opened Feb 14, 2025 by 9chu
Compile bug: llama.cpp latest build fails on OmniOS with undefined symbol error
[bug-unconfirmed] #11857 · opened Feb 14, 2025 by nmartin0
Eval bug: image encode time slow on mobile device
[bug-unconfirmed] #11856 · opened Feb 14, 2025 by perp
Misc. bug: "response_format" on the OpenAI-compatible "v1/chat/completions" endpoint
[bug-unconfirmed] #11847 · opened Feb 13, 2025 by tulang3587
Misc. bug: CUDA error: CUDA-capable device(s) is/are busy or unavailable from cudaSetDevice(device)
[bug-unconfirmed] #11841 · opened Feb 13, 2025 by wjkim00
[BENCHMARKS] DeepScaleR-1.5B-Preview F16 ollama GGUF vs llama.cpp
#11828 · opened Feb 12, 2025 by loretoparisi
Misc. bug: Quantization process 100 times slower on Windows (dockerized)
[bug-unconfirmed] #11825 · opened Feb 12, 2025 by dclipca
Misc. bug: llama-cli crash on Ubuntu with GGML_VULKAN=ON
[bug-unconfirmed] #11823 · opened Feb 12, 2025 by gaykawadpk
Misc. bug: llama-server does not print model loading errors by default (log level misconfigured?)
[bug-unconfirmed] #11819 · opened Feb 12, 2025 by akx
Misc. bug: webui: extremely sluggish performance when typing into the textarea in long-context conversations
[bug-unconfirmed] #11813 · opened Feb 12, 2025 by woof-dog
Misc. bug: Native API failed. Native API returns: 20 (UR_RESULT_ERROR_DEVICE_LOST)
[bug-unconfirmed] #11812 · opened Feb 12, 2025 by simonchen
Misc. bug: server does not exit after "missing result_output tensor" error
[server] #11808 · opened Feb 11, 2025 by ngxson