Greetings
I am attempting to load a fine-tuned model into llama.cpp. Since the error originates from my SFT ChatMusician model, I am posting it here. I went back and tried this on a saved copy of the model.
Running this:
python3 convert.py /Users/petergreis/Dropbox/Leeds/Project/chatmusician_model_tokenizer
Yields this:
And in the model directory itself I see:
That explains why the token count is off by one. Any idea how I can get the two to agree?
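For reference, this is roughly how I would expect to compare the two counts with transformers and force them to match before running convert.py again. This is only a sketch, assuming the directory above is a standard transformers checkpoint and that resizing the embeddings to the tokenizer length is the right direction for the fix; I have not verified it against this exact model.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Same directory passed to convert.py above
model_dir = "/Users/petergreis/Dropbox/Leeds/Project/chatmusician_model_tokenizer"

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir)

# Number of rows in the input embedding matrix vs. tokens the tokenizer knows about
embedding_rows = model.get_input_embeddings().weight.shape[0]
print(f"tokenizer: {len(tokenizer)} tokens, model embeddings: {embedding_rows} rows")

if len(tokenizer) != embedding_rows:
    # Resize the embedding matrix to match the tokenizer, then re-save the
    # checkpoint so convert.py sees consistent counts.
    model.resize_token_embeddings(len(tokenizer))
    model.save_pretrained(model_dir)
    tokenizer.save_pretrained(model_dir)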