Question about merging the 7B model with a LoRA model #543

Hello author, I'd like to ask: when using merge_peft_adapter.py to merge the pretrained 7B model with a trained LoRA model, how should the arguments --model_type, --tokenizer_path, and --resize_emb be set? Thank you!

Comments
Use --model_type auto, and leave --tokenizer_path and --resize_emb unset.
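For context, here is a minimal sketch of how a LoRA adapter is typically merged into a base model with the Hugging Face peft library. The paths are placeholders, and this is not necessarily the exact logic of merge_peft_adapter.py, just the standard merge pattern it is built around:

```python
# Minimal LoRA-merge sketch using transformers + peft.
# All paths below are placeholders for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_path = "path/to/base-7b-model"  # placeholder
lora_path = "path/to/lora_out"             # placeholder
output_path = "path/to/merged-7b"          # placeholder

# Load the pretrained base model, then attach the LoRA adapter to it.
base = AutoModelForCausalLM.from_pretrained(base_model_path, trust_remote_code=True)
model = PeftModel.from_pretrained(base, lora_path)

# Fold the LoRA weight deltas into the base weights and drop the adapter.
merged = model.merge_and_unload()
merged.save_pretrained(output_path)

# Save the tokenizer alongside so the merged checkpoint is self-contained.
tokenizer = AutoTokenizer.from_pretrained(base_model_path, trust_remote_code=True)
tokenizer.save_pretrained(output_path)
```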
Thank you, author!
Hello author, one more question: after merging the trained LoRA model with the pretrained 7B model, we tested it and found that the merged model performs worse than loading the LoRA adapter on its own (on the same test data); accuracy dropped. What could cause this? Thank you!
The two should be exactly identical, unless some of your evaluation parameters changed.
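To see why merging should not change the results: a LoRA layer computes y = x(W + s·AB), and merging simply adds s·AB into W once, so the forward pass is mathematically unchanged (up to floating-point rounding). A toy NumPy check with made-up shapes:

```python
# Toy demonstration that a merged LoRA layer matches base + adapter.
# Shapes and scale are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 16))    # batch of 2 inputs, hidden dim 16
W = rng.normal(size=(16, 16))   # frozen base weight
A = rng.normal(size=(16, 4))    # LoRA down-projection (rank r = 4)
B = rng.normal(size=(4, 16))    # LoRA up-projection
scale = 2.0                     # lora_alpha / r

y_adapter = x @ W + scale * (x @ A @ B)  # base model + adapter at runtime
y_merged = x @ (W + scale * (A @ B))     # adapter folded into W once

print(np.allclose(y_adapter, y_merged))  # True: identical up to rounding
```

So if accuracy drops after merging, the first things to check are the evaluation settings (prompt template, generation parameters, precision), as the author suggests above.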
OK, thank you, author!