NVIDIA uses the LLaMAForCausalLM string in their config.json, e.g. nvidia/Llama3-ChatQA-2-8B
Authored and committed by Csaba Kecskemeti on Sep 14, 2024
1 parent 822b632 commit aaf7f53
1 changed file: convert_hf_to_gguf.py (1 addition, 1 deletion)
@@ -1487,7 +1487,7 @@ def prepare_tensors(self):
raise ValueError(f"Unprocessed norms: {norms}")


-@Model.register("LlamaForCausalLM", "MistralForCausalLM", "MixtralForCausalLM")
+@Model.register("LLaMAForCausalLM", "LlamaForCausalLM", "MistralForCausalLM", "MixtralForCausalLM")
class LlamaModel(Model):
model_arch = gguf.MODEL_ARCH.LLAMA
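The fix works because `@Model.register` maps each architecture string found in a Hugging Face model's config.json `architectures` field to a converter class, so adding the `LLaMAForCausalLM` spelling makes both capitalizations resolve to the same `LlamaModel`. A minimal sketch of this registry-decorator pattern (the names `_model_classes` and `from_model_architecture` here are illustrative, not necessarily the exact convert_hf_to_gguf.py internals):

```python
class Model:
    # Registry: architecture string (from config.json) -> converter class.
    _model_classes: dict[str, type] = {}

    @classmethod
    def register(cls, *names: str):
        # Decorator that records the decorated class under every given
        # architecture name, then returns the class unchanged.
        def wrapper(model_cls: type) -> type:
            for name in names:
                cls._model_classes[name] = model_cls
            return model_cls
        return wrapper

    @classmethod
    def from_model_architecture(cls, arch: str) -> type:
        # Look up the converter class for an architecture string.
        try:
            return cls._model_classes[arch]
        except KeyError:
            raise NotImplementedError(f"Architecture {arch!r} not supported") from None


# Registering both spellings means NVIDIA's "LLaMAForCausalLM" and the
# common "LlamaForCausalLM" select the same converter.
@Model.register("LLaMAForCausalLM", "LlamaForCausalLM")
class LlamaModel(Model):
    pass
```

With this in place, `Model.from_model_architecture("LLaMAForCausalLM")` and `Model.from_model_architecture("LlamaForCausalLM")` both return `LlamaModel`, which is exactly the behavior this one-line commit adds.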

