Name and Version
./llama-cli --version
version: 4310 (5555c0c)
built with Apple clang version 16.0.0 (clang-1600.0.26.4) for arm64-apple-darwin24.0.0
Operating systems
Mac
GGML backends
CPU
Hardware
Mac M2
Models
Meta-Llama-3.1-8B-Instruct
Problem description & steps to reproduce
Step 1
```
huggingface-cli login
huggingface-cli download meta-llama/Meta-Llama-3.1-8B-Instruct --local-dir Meta-Llama-3.1-8B-Instruct
```
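Since the failing GGUF turns out to contain only a single tensor (see the log below), it may be worth verifying that the download actually fetched all weight shards. A minimal sketch using the huggingface_hub Python API; the repo id matches Step 1, everything else is an assumption:

```python
# Minimal sketch (assumes huggingface_hub is installed, which the
# huggingface-cli used in Step 1 already requires).
from pathlib import Path

from huggingface_hub import snapshot_download

# snapshot_download is idempotent: it resumes and fills in missing files.
local_dir = snapshot_download(
    repo_id="meta-llama/Meta-Llama-3.1-8B-Instruct",
    local_dir="Meta-Llama-3.1-8B-Instruct",
)

# The HF repo ships the weights as sharded safetensors files; if they are
# missing or truncated, convert_hf_to_gguf.py has almost nothing to export.
shards = sorted(Path(local_dir).glob("model-*.safetensors"))
print(f"found {len(shards)} safetensors shard(s)")
for shard in shards:
    print(f"  {shard.name}: {shard.stat().st_size:,} bytes")
```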
Step 2
```
git clone https://github.com/ggerganov/llama.cpp.git
python3 -m pip install -r llama.cpp/requirements.txt
cd llama.cpp
cmake -B build && cmake --build build --config Release
```
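A quick way to confirm the build produced the tools used below; a minimal sketch, with the binary locations assumed from the build tree that appears later in this report (llama.cpp/build/bin):

```python
# Minimal sketch: check that the CMake build produced the expected binaries.
# The bin directory is an assumption based on the paths in the error log.
import subprocess
from pathlib import Path

bin_dir = Path("llama.cpp/build/bin")
for tool in ("llama-quantize", "llama-cli"):
    path = bin_dir / tool
    print(f"{path}: {'ok' if path.exists() else 'MISSING'}")

# Should print the same build info as under "Name and Version" above.
subprocess.run([str(bin_dir / "llama-cli"), "--version"], check=False)
```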
Step 3
```
python3 llama.cpp/convert_hf_to_gguf.py Meta-Llama-3.1-8B-Instruct
./llama.cpp/build/bin/llama-quantize Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-8B-Instruct-F16.gguf Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-8B-Instruct--q4_0.bin q4_0
```
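Before quantizing, it is worth checking that the conversion actually wrote the weights: in the log below, llama_model_loader reports only 1 tensor and a size label of 0.06K, while llama.block_count = 32 implies one attn_v.weight per block, which is the count the failing GGML_ASSERT (n_attention_wv) compares against. A minimal sketch using the gguf Python package that llama.cpp/requirements.txt installs; the file name is the one Step 3 expects:

```python
# Minimal sketch: inspect the converted GGUF before running llama-quantize.
# `gguf` is the Python package installed via llama.cpp/requirements.txt.
from gguf import GGUFReader

reader = GGUFReader(
    "Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-8B-Instruct-F16.gguf"
)

# A complete Llama-3.1-8B F16 export contains a few hundred tensors,
# including one attn_v.weight per transformer block (32 here).
print(f"tensor count: {len(reader.tensors)}")
attn_v = [t.name for t in reader.tensors if "attn_v" in t.name]
print(f"attn_v tensors: {len(attn_v)} (expected 32)")
```

If the tensor count comes back as 1, as the loader output below suggests, the problem is upstream of quantization: the conversion input (or the conversion itself) was incomplete.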
While running Step 3 above, I get the following error:
```
(1-ai-env) ajitw@ajit-mac tools % ./llama.cpp/build/bin/llama-quantize ../local_models/Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-0.06K-8b-Instruct-F16.gguf ../local_models/Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-8B-Instruct--q4_0.bin q4_0
main: build = 4310 (5555c0c)
main: built with Apple clang version 16.0.0 (clang-1600.0.26.4) for arm64-apple-darwin24.0.0
main: quantizing '../local_models/Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-0.06K-8b-Instruct-F16.gguf' to '../local_models/Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-8B-Instruct--q4_0.bin' as Q4_0
llama_model_loader: loaded meta data with 33 key-value pairs and 1 tensors from ../local_models/Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-0.06K-8b-Instruct-F16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 8B Instruct
llama_model_loader: - kv 3: general.finetune str = 8b-Instruct
llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1
llama_model_loader: - kv 5: general.size_label str = 0.06K
llama_model_loader: - kv 6: general.license str = llama3.1
llama_model_loader: - kv 7: general.base_model.count u32 = 1
llama_model_loader: - kv 8: general.base_model.0.name str = Meta Llama 3.1 8B
llama_model_loader: - kv 9: general.base_model.0.organization str = Meta Llama
llama_model_loader: - kv 10: general.base_model.0.repo_url str = https://huggingface.co/meta-llama/Met...
llama_model_loader: - kv 11: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 12: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 13: llama.block_count u32 = 32
llama_model_loader: - kv 14: llama.context_length u32 = 131072
llama_model_loader: - kv 15: llama.embedding_length u32 = 4096
llama_model_loader: - kv 16: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 17: llama.attention.head_count u32 = 32
llama_model_loader: - kv 18: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 19: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 20: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 21: general.file_type u32 = 1
llama_model_loader: - kv 22: llama.vocab_size u32 = 128256
llama_model_loader: - kv 23: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 24: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 25: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 26: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 27: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 28: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 29: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 30: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 31: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 32: general.quantization_version u32 = 2
llama_model_loader: - type f32: 1 tensors
/Users/ajitw/Ajit-Data/VIT/Projects/code/gen-ai-api/tools/llama.cpp/src/llama.cpp:18812: GGML_ASSERT((qs.n_attention_wv == n_attn_layer) && "n_attention_wv is unexpected") failed
zsh: abort ./llama.cpp/build/bin/llama-quantize q4_0
```
First Bad Commit
No response
Relevant log output
(See the quantize output in the problem description above.)