Vocab mismatch when I convert original Llama 2 Model on Macbook Pro M1 Pro #4045
Comments
I have the exact same problem.
I checked out a month-old version and everything works fine, so it's definitely a recent bug: 1e0e873
@PeterWrighten, @glemiron, if you go into your Llama 2 model directory and edit "vocab_size" in params.json to be 32000 rather than -1, does it work for you?
Thanks! That works well!
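The fix suggested above is a one-field edit to params.json and can be scripted. A minimal sketch (the helper name `patch_vocab_size` is hypothetical, and 32000 is the Llama 2 tokenizer size mentioned in this thread):

```python
import json
from pathlib import Path

def patch_vocab_size(model_dir: str, vocab_size: int = 32000) -> None:
    """Rewrite "vocab_size" in params.json when it is set to -1.

    Llama 2 checkpoints from Meta ship with "vocab_size": -1, which the
    affected versions of convert.py no longer resolve automatically.
    """
    params_path = Path(model_dir) / "params.json"
    params = json.loads(params_path.read_text())
    if params.get("vocab_size", -1) == -1:
        params["vocab_size"] = vocab_size
        params_path.write_text(json.dumps(params, indent=2))

# Example: patch_vocab_size("/llama/llama-2-7b-chat")
```

This leaves a params.json that already declares a positive vocab_size untouched.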
I get the same bug, but in Docker on WSL. |
Changing vocab_size worked for me too. Thanks @TortoiseHam! 👌🏻
@TortoiseHam hello, I got the same error when I tried to quantize the DeepSeek-coder model with llama.cpp. I then edited the vocab size in the config, but it still fails.
@hyperbolic-c I have also hit the same problem. Have you solved it yet?
No, I am waiting for DeepSeek model support (#5981).
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Current Behavior
Environment and Context
I tried converting the original Llama 2 model into GGML format with:
python3 convert.py /llama/llama-2-7b-chat
Apple M1 Pro
Darwin PeterWrightMacBook-Pro14.local 23.1.0 Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:24 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6000 arm64
This program built for i386-apple-darwin11.3.0
Failure Information (for bugs)
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
Step 1
Download the original Llama 2 model from Meta AI.
Step 2
git clone this repo, then run python3 convert.py /llama/llama-2-7b-chat
I have tried adding 'added_tokens.json', but it still doesn't work.
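For context, the mismatch the converter reports boils down to a sanity check along the lines of the sketch below (illustrative only, not convert.py's actual code): the vocab size declared in params.json must match the tokenizer's token count plus any entries from added_tokens.json, and the shipped value of -1 can never match.

```python
def check_vocab(declared_size: int, tokenizer_vocab: int, added_tokens: int = 0) -> None:
    """Illustrative version of the converter's vocab sanity check.

    declared_size:   "vocab_size" from params.json (Meta ships -1)
    tokenizer_vocab: number of pieces in tokenizer.model (32000 for Llama 2)
    added_tokens:    extra entries from added_tokens.json, if present
    """
    total = tokenizer_vocab + added_tokens
    if declared_size != total:
        raise ValueError(
            f"Vocab size mismatch (model has {declared_size}, "
            f"but tokenizer has {total} tokens)"
        )
```

This is why adding an empty added_tokens.json does not help when declared_size is -1: the declared value itself has to be corrected.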
Failure Logs