Bug: Can't quantize a Mamba-architecture Codestral model to GGUF q5_k_m #8690
Labels
bug-unconfirmed
low severity
stale
What happened?
Error converting to fp16:

```
INFO:hf-to-gguf:Loading model: mamba-codestral-7B-v0.1
Traceback (most recent call last):
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 3673, in <module>
    main()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 3645, in main
    model_architecture = hparams["architectures"][0]
KeyError: 'architectures'
```
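For context, the traceback shows the converter reading `hparams["architectures"][0]` from the model's `config.json`, which fails when that file has no `architectures` entry. A minimal sketch of the failing lookup with a clearer error message (the helper name and the sample `config.json` contents are hypothetical, not the actual llama.cpp code or the real model config):

```python
import json

def get_model_architecture(hparams: dict) -> str:
    # convert_hf_to_gguf.py does the equivalent of hparams["architectures"][0];
    # a config.json without that key raises KeyError, as in the traceback above.
    try:
        return hparams["architectures"][0]
    except KeyError:
        raise SystemExit(
            "config.json has no 'architectures' entry; add one, or use a "
            "converter version that can infer the architecture."
        )

# Hypothetical config.json lacking "architectures", reproducing the failure mode.
hparams = json.loads('{"model_type": "mamba2", "hidden_size": 4096}')
```

Calling `get_model_architecture(hparams)` on such a config exits with the explanatory message instead of an unhandled `KeyError`.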
Name and Version
latest (hf space)
What operating system are you seeing the problem on?
Other? (Please let us know in description)
Relevant log output