Add Deepseek MoE v1 & GigaChat models #10827
Conversation
@ggerganov Hi! I think this PR is ready, could you take a look?
Minor suggestion: move the new DeepSeek code so that it is located before the DeepSeek2 code.
convert_hf_to_gguf.py
Outdated
```diff
@@ -3506,6 +3509,97 @@ def prepare_tensors(self):
             raise ValueError(f"Unprocessed experts: {experts}")


+@Model.register("DeepseekForCausalLM")
+class DeepseekModel(Model):
```
Move before `DeepseekV2Model` above.
src/llama.cpp
Outdated
```diff
                 }
             }
         } break;
+    case LLM_ARCH_DEEPSEEK:
```
Move before `case LLM_ARCH_DEEPSEEK2` above.
Thank you for your suggestions! I hadn't noticed that.
* Add deepseek v1 arch & gigachat template
* improve template code
* add readme
* delete comments
* remove comment
* fix format
* lint llama.cpp
* fix order of deepseek and deepseek2, move gigachat template to the end of func
* fix order of deepseek and deepseek2 in constants; mark shared exp as deepseek arch need
* remove comments
* move deepseek above deepseek2
* change placement of gigachat chat template
Self-reported review complexity:
The PR adds support for DeepSeek MoE v1 models (Base and Instruct) and for the new GigaChat models (Base and Instruct). Since GigaChat is based on the DeepSeek MoE v1 architecture, the changes for that model are limited to the tokenizer.
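As a rough illustration of the conversion-side change discussed above: new architectures are wired into the converter by registering a model class under the Hugging Face architecture name. The snippet below is a minimal, standalone re-implementation of that registry pattern, not the actual `convert_hf_to_gguf.py` code; the class names mirror the PR (`Model`, `DeepseekModel`), but the method `from_architecture` is a hypothetical stand-in for the converter's lookup logic.

```python
# Minimal sketch (assumption: illustrative re-implementation, not the
# real llama.cpp converter) of the @Model.register decorator pattern
# used to map HF architecture strings to converter classes.

class Model:
    # Registry mapping HF architecture names to handler classes.
    _registry: dict = {}

    @classmethod
    def register(cls, *names):
        """Decorator: register a subclass under one or more arch names."""
        def wrapper(model_cls):
            for name in names:
                cls._registry[name] = model_cls
            return model_cls
        return wrapper

    @classmethod
    def from_architecture(cls, arch):
        """Hypothetical lookup helper: resolve an arch name to a class."""
        try:
            return cls._registry[arch]
        except KeyError:
            raise NotImplementedError(f"Architecture {arch!r} is not supported")


@Model.register("DeepseekForCausalLM")
class DeepseekModel(Model):
    """Handles DeepSeek MoE v1; GigaChat shares this architecture."""


print(Model.from_architecture("DeepseekForCausalLM").__name__)
```

Because GigaChat reuses the DeepSeek MoE v1 architecture string, a single registered class can serve both model families, which is why the PR's GigaChat-specific changes stay confined to the tokenizer and chat template.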