Command-R GGUF conversion no longer working #7030
Comments
I had the same issue and the same error! I had to roll back to before the Llama 3 changes to get it to work.
This patch appears to get it working again.
There seems to be an issue using
This is after adding this to
I downloaded the model manually earlier today, and the tokenizer.json is definitely a real tokenizer. The file size mentioned in the placeholder you got matches the actual size of the file I got from Cohere's HF repo (~12.1 MB).
This is caused by Git LFS (the link has more details). To fix this, download the
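When Git LFS is the culprit, the downloaded tokenizer.json is a tiny text pointer instead of the real multi-megabyte file. A minimal sketch for checking this (the helper name and 1 KB threshold are my own choices, not part of the converter):

```python
import os

def is_lfs_pointer(path: str) -> bool:
    """Return True if the file looks like a Git LFS pointer rather than real content."""
    # LFS pointer files are tiny text files whose first line names the LFS spec;
    # a real tokenizer.json is megabytes, so skip anything over an arbitrary 1 KB.
    if os.path.getsize(path) > 1024:
        return False
    with open(path, "rb") as f:
        first_line = f.readline()
    return first_line.startswith(b"version https://git-lfs.github.com/spec/v1")
```

If this returns True, re-fetch the file with `git lfs pull` (or download it directly from the Hugging Face web UI) before converting.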
As @candre23 noted, this also affects
Got this working myself about the same time I saw your PR, @dranger003. It does appear this is fixed by #7033, so let's get that merged ASAP. :)
IMHO, this is all unreasonably complicated, yes. Plus the classic lack of clear documentation. I'm trying to wrap my head around all this and I get to this comment: "# - Copy-paste the generated get_vocab_base_pre() function into convert-hf-to-gguf.py"
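For anyone else untangling that comment: the update script tokenizes a fixed check string with each model's tokenizer, hashes the resulting token IDs, and generates a lookup that maps the hash to a pre-tokenizer name. A simplified, hypothetical sketch of the idea — the function names, token IDs, and registry entry here are illustrative, not the converter's real values:

```python
import hashlib

def tokenizer_fingerprint(token_ids: list[int]) -> str:
    """Hash a token-ID sequence so the pre-tokenizer can be identified later."""
    # The real update script hashes the tokenization of a fixed check string;
    # here the caller supplies the IDs directly.
    return hashlib.sha256(str(token_ids).encode()).hexdigest()

# Hypothetical registry; the real entries are generated by the update script.
KNOWN_PRE_TOKENIZERS = {
    tokenizer_fingerprint([101, 7592, 102]): "llama-bpe",  # illustrative IDs
}

def lookup_pre_tokenizer(token_ids: list[int]) -> str:
    """Return the pre-tokenizer name for a tokenization, or fail loudly."""
    chkhsh = tokenizer_fingerprint(token_ids)
    res = KNOWN_PRE_TOKENIZERS.get(chkhsh)
    if res is None:
        # Mirrors the converter's behavior: an unrecognized tokenizer aborts
        # conversion until a generated branch for it is pasted in.
        raise NotImplementedError(f"unrecognized pre-tokenizer (chkhsh={chkhsh})")
    return res
```

This is why a tokenizer that was never fingerprinted (like Command-R's, before the fix) makes conversion fail outright rather than silently producing a bad GGUF.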
As recently as a few days ago, Command-R (and presumably R+) could be converted with convert-hf-to-gguf.py. I double-checked, and conversion completes successfully in b2751. However, the recent changes to accommodate Llama 3 have broken Command-R compatibility. Trying to convert today with b2777, I get
I know that L3 required a new tokenizer provided by Meta to facilitate proper conversion. Do we need something new from Cohere, or is this something that can be fixed internally?
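On the question of whether anything new is needed from Cohere: the converter only needs to recognize the tokenizer the repo already ships. A quick way to see what a local checkout declares is to read its tokenizer_config.json; this helper is my own sketch, and the exact class name in Cohere's repo may differ:

```python
import json
import os

def tokenizer_class(model_dir: str) -> str:
    """Report which tokenizer class a Hugging Face model directory declares."""
    with open(os.path.join(model_dir, "tokenizer_config.json")) as f:
        cfg = json.load(f)
    # The converter keys off the tokenizer's actual behavior (via the hash
    # fingerprint), not this declared name, but the name is useful for triage.
    return cfg.get("tokenizer_class", "<unspecified>")
```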