configs in the paper #37
It's this: https://huggingface.co/Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-300K-v0.1. You can go to the end of that page to see the detailed Axolotl configs. Note that since Axolotl has had some major updates recently, you may need to slightly modify the configurations to something like this:
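(The config snippet originally attached here is not preserved in this copy of the thread. As a rough illustration only — not the maintainers' actual settings — a minimal Axolotl SFT config of the kind referenced might look like the sketch below; all keys and values, including the dataset path and hyperparameters, are assumptions based on common Axolotl YAML conventions.)

```yaml
# Illustrative sketch only — not the exact config from the model card.
# Dataset id and all hyperparameters are assumptions.
base_model: meta-llama/Meta-Llama-3-8B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

datasets:
  - path: Magpie-Align/Magpie-Pro-300K-Filtered  # assumed dataset id
    type: sharegpt                               # conversation-style SFT data

sequence_len: 8192
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 2
learning_rate: 2e-5
optimizer: paged_adamw_8bit
lr_scheduler: cosine
```

For the authoritative values, check the Axolotl config posted at the end of the model card linked above.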
|
Thank you for your detailed explanation! I will follow the config you mentioned.
Hello @zhangchen-xu, could you provide the config for evaluation with lm-evaluation-harness? I want to reproduce the performance of Llama-3-8B on the MMLU task.
I am using Lighteval: https://github.com/huggingface/lighteval with its default settings.
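(For the asker's original lm-evaluation-harness route, a typical 5-shot MMLU invocation would look something like the sketch below; the model id and batch size are assumptions, not settings confirmed anywhere in this thread.)

```shell
# Hedged sketch: standard lm-evaluation-harness CLI for 5-shot MMLU.
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=meta-llama/Meta-Llama-3-8B \
  --tasks mmlu \
  --num_fewshot 5 \
  --batch_size 8
```

Since the maintainer reports using Lighteval with default settings, scores from lm-evaluation-harness may differ slightly from the paper's numbers due to prompt-formatting differences between the two harnesses.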
Hello! Thank you for this wonderful work.
It will help my recent work on training a private LLM :)
I have a question regarding the configurations on the recipe page.
I can find various recipes across languages (English/Chinese) and versions.
Which is the exact configuration for the reported model in the original paper, especially 'MAGPIE-Pro-300K-Filtered' in Table 1?
Thanks!