
Bug: Can't quantize a Mamba-architecture Codestral to GGUF Q5_K_M #8690

Closed
Volko61 opened this issue Jul 25, 2024 · 2 comments
Labels: bug-unconfirmed, low severity (used to report low severity bugs in llama.cpp, e.g. cosmetic issues, non-critical UI glitches), stale

Comments


Volko61 commented Jul 25, 2024

What happened?

Converting mamba-codestral-7B-v0.1 to fp16 with convert_hf_to_gguf.py fails before quantization: main() reads hparams["architectures"][0] and raises KeyError: 'architectures' (full traceback under "Relevant log output" below).


Name and Version

latest (hf space)

What operating system are you seeing the problem on?

Other? (Please let us know in description)

Relevant log output

Error: Error converting to fp16:

INFO:hf-to-gguf:Loading model: mamba-codestral-7B-v0.1
Traceback (most recent call last):
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 3673, in <module>
    main()
  File "/home/user/app/llama.cpp/convert_hf_to_gguf.py", line 3645, in main
    model_architecture = hparams["architectures"][0]
KeyError: 'architectures'

![image](https://github.com/user-attachments/assets/6f1ab039-754d-4c6c-8bb8-274c0c99ca13)
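The KeyError indicates that the model's config.json has no "architectures" field for the converter to read. A minimal sketch of the failing lookup and one possible defensive fallback (the config values and the fallback to "model_type" here are illustrative assumptions, not the actual file contents or llama.cpp's fix):

```python
import json

# Hypothetical minimal config, standing in for a config.json that
# lacks the "architectures" key (as mamba-codestral's appears to).
config_json = '{"model_type": "mamba2", "hidden_size": 4096}'
hparams = json.loads(config_json)

# Direct indexing, as in the failing line of convert_hf_to_gguf.py,
# raises KeyError when the key is absent.
try:
    model_architecture = hparams["architectures"][0]
except KeyError as exc:
    print(f"KeyError: {exc}")

# A defensive lookup could fall back to "model_type" instead of
# crashing (an assumption for illustration only).
arch = hparams.get("architectures", [hparams.get("model_type", "unknown")])[0]
print(arch)  # "mamba2" with the sample config above
```

In other words, the crash is a missing-metadata problem in the uploaded checkpoint (or an architecture the converter does not yet recognize), not a quantization bug per se.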
@Volko61 Volko61 added bug-unconfirmed low severity Used to report low severity bugs in llama.cpp (e.g. cosmetic issues, non critical UI glitches) labels Jul 25, 2024
@NextGenOP

#8519 is related to this issue.

@github-actions github-actions bot added the stale label Aug 28, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

Projects: none yet
Development: no branches or pull requests

2 participants